Category

Concentration File Conversion
Concentration File Utilities
Ensembles
Graphics Creation
Graphics Utilities
HYSPLIT Configuration
Meteorological Data to ARL Format
Meteorological Data Editing
Meteorological Data Examination
Particle Utilities
Shapefile Manipulation
Trajectory Analysis

Listing

Program unix mac win gui
accudiv X X X X
add_data X X X
add_grid X X X
add_miss X X
add_time X X
add_velv X X X
aer2arl X X X
api2arl X X
apidump X X
arl2grad X X X
arl2meds X X X
arw2arl X X X X
asc2par X X X
ascii2shp X X X X
boxplots X X X X
c2array X X X X
c2datem X X X X
cat2svg X X X X
catps2ps X X X X
chk_data X X X
chk_file X X X X
chk_index X X X
chk_rec X X X
chk_times X X X
clusend X X X X
cluslist X X X X
clusmem X X X X
clusplot X X X X
cluster X X X X
con2arcv X X X
con2asc X X X X
con2cdf4 X X
con2ctbt X X X
con2dose X X X
con2grad X X X
con2inv X X X X
con2rem X X X X
con2srs X X X
con2stn X X X X
conappend X X X
conavg X X X
conavgpd X X X
concacc X X X
concadd X X X X
concmbn X X X
concplot X X X X
concrop X X X X
concsum X X X
condecay X X X
conedit X X X
confreq X X X
conhavrg X X X X
coninfo X X X X
conlight X X X X
conmask X X X
conmaxpd X X X
conmaxv X X X
conmerge X X X
conprob X X X X
conpuff X X X
conread X X X X
constats X X X
constnlst X X X
content X X X
contour X X X X
coversheet X X
dat2cntl X X X X
data_avrg X X
data_del X X X
data_year X X X
datecol X X X
datesmry X X X
dbf2txt X X X
display X X
drn2arl X X X
dustbdy X X X X
dustedit X X X
edit_flux X X X
edit_head X X X
edit_index X X X
edit_miss X X
edit_null X X X
ensperc X X X
ensplots X X X X
enstala X X X
file_copy X X X X
file_merge X X X
filedates X X X
findgrib X X X
fires X X X
firew X X X
gelabel X X X X
gen2xml X X X
goes2ems X X X
grib2arl X X
gridplot X X X X
grid2xyll X X X
hycm_ens X
hycm_std X
hycs_cb4 X X
hycs_ens X X X X
hycs_gem X X X X
hycs_grs X X X
hycs_ier X X X
hycs_so2 X X X
hycs_std X X X X
hycs_var X X X X
hysptest X X X X
hyts_ens X X X X
hyts_std X X X X
inventory X X X
isochron X X X X
latlon X X X X
lbfgsb X X X X
macc2date X X X
matrix X X X X
mergextr X X X
merglist X X X X
metdates X X X
metlatlon X X X
metpoint X X X
narr2arl X X X
nuctree X X X X
par2asc X X X
par2conc X X X
parhplot X X X X
parmerge X X X
paro2n X X X
parshift X X X X
parsplot X X X X
parvplot X X X X
parxplot X X X X
pole2merc X X X
poleplot X X X X
prntbdy X X X
profile X X X X
rec_copy X X X
rec_insert X X X
rec_merge X X X
scatter X X X X
sfc2arl X X X
showgrid X X X X
snd2arl X X X
splitsvg X X X X
stabplot X X X
stat2grid X X X X
statmain X X X X
stn2arl X X X X
stn2ge X X X X
stn2par X X X
tcmsum X X X X
tcsolve X X X X
testnuc X X X
timeplot X X X X
timeplus X X X
trajfind X X X
trajfreq X X X X
trajfrmt X X X
trajgrad X X X
trajmean X X X X
trajmerg X X X
trajplot X X X X
txt2dbf X X X X
unpacker X X X
var2datem X X X
velvar X X X
vmixing X X X
vmsmerge X X X
vmsread X X X
volcplot X X X
xtrct_grid X X X
xtrct_stn X X X
xtrct_time X X X
zcoord X X X



Concentration File Conversion

c2array

USAGE: c2array -[arguments]
   -c[input to output concentration multiplier (1.0)]
   -d[write to diagnostic file (c2array.txt)]
   -h[half-life in days (0) to back decay measurements]
   -i[file name of file with names of input files (INFILE)]
   -m[measurement data file name (datem.txt)]
   -o[output concentration file name (c2array.csv)]
   -p[pollutant index select for multiple species (1)]
   -s[source position for backward mode (lat:lon)]
   -x[nearest neighbor or interpolation {(n)|i}]
   -z[level index select for multiple levels (1)]

Program to read multiple HYSPLIT concentration files and a DATEM formatted measured data file to create a merged CSV formatted coefficient matrix that can be used as input to a linear equation solver. The HYSPLIT files can be forward calculations from a source point, one file per release time, or backward calculations from multiple receptor points. The output matrix consists of one column for each release time and one row for each measurement, with the measured values placed in the last column. The HYSPLIT concentration binary input file names are identified in an input file of file names. The program determines the direction of the calculation (forward/backward) from the time difference between successive concentration outputs. A backward calculation is assumed to correspond to a measured value and requires the command line entry of a source position (-s) where the model calculations will be extracted. The half-life entry is used to decay correct the measurement data back to the simulation start time.

GUI: srm_solve.tcl        Tutorial: src_coef.sh
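The matrix layout described above can be illustrated with a short Python sketch (numpy assumed; the CSV contents and column labels are hypothetical stand-ins for a real c2array.csv):

```python
import io
import numpy as np

# Hypothetical c2array.csv contents: one column per release time,
# measured values in the last column (layout as described above).
csv_text = (
    "2014010100,2014010106,2014010112,measured\n"
    "0.5,0.0,0.1,1.2\n"
    "0.2,0.3,0.0,0.9\n"
    "0.0,0.4,0.6,1.5\n"
)
data = np.loadtxt(io.StringIO(csv_text), delimiter=",", skiprows=1)
coeff = data[:, :-1]   # transfer coefficient matrix (rows = measurements)
meas = data[:, -1]     # measured values from the DATEM file

# A first look at the source term: ordinary least squares on coeff * q = meas
q, *_ = np.linalg.lstsq(coeff, meas, rcond=None)
```

In practice the matrix would be passed to a solver such as tcsolve or lbfgsb rather than a plain least-squares fit.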

c2datem

USAGE: c2datem -[arguments]
   -c[input to output concentration multiplier]
   -d[write to diagnostic file]
   -e[output concentration datem format: (0)=F10.1 1=ES10.4]
   -h[header info: 2=input text, (1)=file name, 0=skip]
   -i[input concentration file name]
   -m[measurement data file name]
   -o[output concentration file name]
   -p[pollutant index select for multiple species]
   -r[rotation angle:latitude:longitude]
   -s[supplemental lat-lon information to output]
   -t[model output period can be longer than measurement period;
      and ending times do not have to be aligned:(0)=no, 1=yes]
   -x[n(neighbor) or i(interpolation)]
   -z[level select for multiple levels, if z=-1 read height from DATEM file]

Program to read a HYSPLIT concentration file and match the results with measurements in the DATEM file format to create an output file of model calculations in the DATEM file format that correspond in location and times with the measured data.

GUI: datem.tcl        Tutorial: conc_stat.sh

con2arcv

Usage: con2arcv [6 character file ID]
Input: cdump
Output: YYMMDDHH_KP_KL_{6charID}.flt and .hdr

Converts binary concentration file to ESRI's Arcview binary raster file format as one output file per sampling time period for each pollutant (KP) and level (KL).

con2asc

USAGE: con2asc -[options (default)]
   -c[Convert all records to one diagnostic file]
   -d[Delimit output by comma flag]
   -f[File flag for a file for each pollutant-level]
   -i[Input file name (cdump)]
   -m[Minimum output format flag]
   -o[Output file name base (cdump)]
   -s[Single output file flag]
   -t[Time expanded (minutes) file name flag]
   -u[Units conversion multiplier concentration (1.0)]
   -U[Units conversion multiplier deposition (1.0)]
   -v[Vary output by +lon+lat (default +lat+lon)]
   -x[Extended precision flag]
   -z[Zero points output flag]

Converts a binary concentration file to a latitude-longitude based ASCII file, one record per grid point, for any grid point with any level or pollutant greater than zero. One output file is created for each sampling time period.

GUI: conc_asc.tcl        Tutorial: conc_util.sh

con2cdf4

Usage: con2cdf4 [options] inpfile outfile
   Options
     -d : NetCDF deflate level (0-9, default 1)
     -h : Show this help message and exit
   Arguments
     inpfile : Input HYSPLIT cdump file name
     outfile : Output NetCDF file name

Converts HYSPLIT binary cdump concentration output to NetCDF

con2ctbt

USAGE: con2ctbt -[options (default)]
   -i[Input file name (cdump)]
   -o[Output file name base (cdump.srm)]
   -e[Emission amount (1.3E+15)]
   -c[Concentration number to output (1)]
   -s[Start site label (XXX00)]

Converts a binary concentration file to a latitude-longitude based ASCII file, one record per grid point, for any grid point not zero, in the agreed upon Comprehensive Test Ban Treaty Organization exchange format. See con2srs for a newer variation of this program compatible with output fields generated by FLEXPART.

con2dose

Usage: con2dose [input file] [output file]

Temporally averages the input binary concentration file (output from HYSPLIT), converts to dose units and outputs a new binary dose file. The calculation applies only for long-term doses and requires the con2dose.dat file which contains various dose conversion factors (EDE, bone, lung, thyroid, TEDE, etc.) for each radionuclide. The con2dose.dat file is only provided with the source code distribution. Requires each pollutant ID in the HYSPLIT output file to have a matching radiological species ID in the con2dose.dat file. This approach has been replaced by the more general Transfer Coefficient Matrix (TCM) where surrogate species are used in the HYSPLIT calculation and where decay is also applied in the post-processing step.

con2grad

Usage: con2grad [HYSPLIT filename]
  Output:
    concen.grd - Grads binary concentration data
    concen.ctl - Grads control script
    species.dep - Grads display script
    species.con - Grads display script

Converts a binary HYSPLIT concentration file to Grads binary format

con2rem

USAGE: con2rem -[options(default)]
   -a[activity input file name (activity.txt) or {create|fdnpp}]
   -b[Breathing rate for inhalation dose in m3/hr (0.925)]
   -c[Output type: (0)-dose, 1-air conc/dep]
   -d[Type of dose: (0)=rate (R/hr) 1=dose over averaging period (R)]
   -e[include inhalation dose (0)=No 1=Yes]
   -f[fuel (1)=U235 2=P239]
   -g[normal decay=0 (used only with c=1), time averaged decay (1)]
   -h[help with extended comments]
   -i[input file name (cdump)]
   -n[noble gas 4-char ID (NGAS)]
   -o[output file name (rdump)]
   -p[process (1)=high-energy 2=thermal]
   -q[when -d1 convert dose from rem=(0) to sieverts=1]
   -s[sum species (0), 1=match to input, 2=output each species]
   -t[no decay=0 input decayed, or decay by species half-life (1)]
   -u[units conversion Bq->pCi, missing: assume input Bq]
   -w[fission activity in thermal mega-watt-hours, replaces -y option]
   -x[extended integration time in hours beyond calculation (0)]
   -y[yield (1.0)=kT]
   -z[fixed integration time in hours (0)]

Converts a HYSPLIT binary concentration/deposition file to dose in rem/hr. The HYSPLIT calculation should be done using a unit emission so that the concentration units are m-3 and the deposition units are m-2. This program reads the file activity.txt which contains the activity (Bq) at time=0 for all the isotope products. The -acreate switch will create a sample activity.txt file for a 1KT device, while the -afdnpp switch creates a sample file where the activity equals the maximum 3h emissions from the FDNPP accident. During the processing step, the cumulative product of the activity and dose factor is computed for each decay weighted concentration averaging period independently for noble gases and particles. For general applications, HYSPLIT should be configured for two species: NGAS and RNUC. For most radiological dose applications, this method is preferred over the original approach using con2dose because decay is treated in the post-processing step and multiple radionuclides can be assigned to a single computational species. This approach works best when emissions are constant. For time-varying emissions, use the program condecay before running con2rem.

GUI: con2rem.tcl        Tutorial: dose_cemit.sh
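The decay weighting described above follows the standard half-life relation; a minimal sketch (function and variable names are illustrative, not taken from con2rem):

```python
# Half-life decay of an activity, as applied when decay correcting
# concentration averaging periods (illustrative sketch only).

def decayed_activity(a0_bq, hours, half_life_days):
    """Activity remaining after `hours`, given a half-life in days."""
    if half_life_days <= 0:
        return a0_bq  # stable species or no-decay case
    return a0_bq * 0.5 ** (hours / (half_life_days * 24.0))

# I-131 (half-life about 8.02 days): half the activity remains after 8.02 days
remaining = decayed_activity(1.0e15, 8.02 * 24.0, 8.02)
```

The dose is then the cumulative product of such decayed activities and the per-nuclide dose factors from activity.txt.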

con2srs

USAGE: con2srs -[options (default)]
   -i[Input file name (cdump)]
   -o[Output file name base (cdump.srm)]
   -e[Emission amount (1.3E+15)]
   -c[Concentration number to output (1)]
   -l[Level to output (1)]
   -s[Start site label (XXX00)]
   -r (specifies regional grid. Global by default)
   -d (process the deposition grid; off by default; overrides -l)

Converts a binary concentration file to a latitude-longitude based ASCII file, one record per grid point, for any grid point that is not zero. Adheres to the informal source-receptor-sensitivity (SRS) output format in use within the FLEXPART community for storing concentrations, backward sensitivities, and depositions. This program forces a simple output: a single species at a single level. It is up to other programs to combine SRS files into something more complex, if and when desired. See con2ctbt for the original version of this program.

con2stn

USAGE: con2stn [-options]
   -a[rotation angle:latitude:longitude]
   -c[input to output concentration multiplier]
   -d[mm/dd/yyyy format: (0)=no 1=w-Label 2=wo-Label]
   -e[output concentration datem format: (0)=F10.1 1=ES10.4]
   -h[half-life-days (one entry for each pollutant)]
   -i[input concentration file name]
   -m[maximum number of stations (200)]
   -o[output concentration file name]
   -p[pollutant index (1) or 0 for all pollutants]
   -r[record format 0=record (1)=column 2=datem]
   -s[station list file name (contains: id lat lon)]
   -t[transfer coefficient processing]
   -x[(n)=neighbor or i for interpolation]
   -z[level index (1) or 0 for all levels]

Program to read a hysplit concentration file and print the contents at one or more locations to an output file.

GUI: conc_stn.tcl        Tutorial: conc_util.sh

condecay

USAGE: condecay -[One Entry per Conversion] +[options(default)]
   -{Cnumb:Index:HalfL:Radio}
       Cnumb=column number in emission file
       Index=index number in binary input file
       HalfL=half life in days
       Radio=radionuclide character id
   +p[Process ID number]
   +d[Directory for TG_{YYYYMMDDHH} files]
   +e[Emissions base file name (cfactors).txt]
   +i[Input file base name (TG_)]
   +o[Output file base name (DG_)]
   +t[Time decay start: YYYYMMDDHH
       | 0000000000 each file
       | default from file #1]

Processes a series of binary unit-source concentration files, each with a file name starting with TG_{YYYYMMDDHH} that identifies the associated release time. The program command line contains one entry for each species to be multiplied by a source term and decayed by its half-life to the end of the sample collection period. Not all species in the input file need to be converted. The resulting output file is named DG_{YYYYMMDDHH}. Emission values are defined in the input file named cfactors.txt. All concentration input files must be identical in terms of species and grid resolution. This program would be used in conjunction with TCM processing applications. In the situation where the emission rate is constant with time, the program con2rem can be used to apply emissions, decay, and convert to dose. However, when emissions are time-varying, the concentration file for every release period (TG files) would be processed by condecay for the emissions and decay, and then by con2rem for the final conversion to dose.
Tutorial: dose_temit.sh

conprob

Usage: conprob [-options]
   -b[base input file name (cdump)]
   -c[(conc_high:conc_mid:conc_low) set values]
   -d[diagnostic output turned on]
   -p[pollutant index number (when input contains more than 1)]
   -t[temporal aggregation period (1)]
   -v[value below which concentration assumed to equal zero (0.0)]
   -x[Concentration multiplier: (1.0)]
   -y[Deposition multiplier: (1.0)]
   -z[level index number (when input contains more than 1)]

Reads multiple HYSPLIT concentration files and computes various probability levels independently at each grid point, then creates a new output file for each probability level (files=probXX). Also computed are the probabilities to exceed predefined concentration levels (files=cmax{01|10|00}) from minimum to maximum. Other output files include the mean, variance, coefficient of variation, and number of members.

GUI: prob_files.tcl        Tutorial: ens_data.sh
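The per-grid-point exceedance probability can be sketched with numpy, treating each ensemble member as one 2-D concentration array (array shapes and the threshold are illustrative):

```python
import numpy as np

# Stack of ensemble members at one time period: shape (members, ny, nx)
members = np.array([
    [[0.0, 2.0], [5.0, 0.0]],
    [[1.0, 0.0], [4.0, 0.0]],
    [[0.0, 3.0], [6.0, 1.0]],
])

threshold = 1.5
# Fraction of members exceeding the threshold at each grid point
prob_exceed = (members > threshold).mean(axis=0)

# Ensemble mean and coefficient of variation (also reported by conprob)
mean = members.mean(axis=0)
with np.errstate(divide="ignore", invalid="ignore"):
    cov = np.where(mean > 0, members.std(axis=0) / mean, 0.0)
```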

constnlst

USAGE: constnlst [-options]
   -i[Input file name of concentration file names]
   -o[output concentration file name]
   -a[start time (YYMMDDHHMM)]
   -b[stop time (YYMMDDHHMM)]
   -e[list concentrations (0) or 1 sum between start and stop times]
   -h[write header to output (0) or 1 for no headers]
   -w[output sample start time (0) or 1 simulation start time]
   -c[input to output concentration multiplier]
   -s[station list file name (contains: id lat lon)]
   -x[(n)=neighbor or i for interpolation]
   -z[level index (1) or 0 for all levels]
   -p[pollutant index (1) or 0 for all pollutants]
   -r[record format 0=record (1)=column 2=datem]
   -m[maximum number of stations (200)]
   -d[mm/dd/yyyy format: (0)=no 1=w-Label 2=wo-Label]

Program to read a HYSPLIT concentration file and print the contents at selected locations for a given time (if -a and -b are the same) or sums concentrations for a range of times. This program is similar to con2stn but with some different options required for certain web applications.

lbfgsb

USAGE: lbfgsb
Input: PARAMETER_IN_000
Output: SOURCE_OUT_000

The code will solve for values of the source emission rate vector to satisfy the measured values using a cost function approach to minimize the difference between the observations and model predictions by varying the source term from a first-guess estimate. The model predictions file should be in CSV format with the last column corresponding to the measurements. The first row is the time associated with each unknown source. The CSV file could be created using the program C2ARRAY. All solution configuration values are set in the PARAMETER_IN file. See the program's README file for more information.

GUI: srm_lbfgsb.tcl        Tutorial: src_cost.sh
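The cost-function approach can be sketched with SciPy's L-BFGS-B solver (the same algorithm family this program is named after); the coefficient matrix, bounds, and first guess below are synthetic illustrations, not the PARAMETER_IN configuration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.random((8, 3))            # coefficient matrix (receptors x sources)
q_true = np.array([2.0, 0.0, 5.0])
b = A @ q_true                    # synthetic "measurements"

def cost(q):
    """Least-squares misfit between model predictions and measurements."""
    r = A @ q - b
    return 0.5 * r @ r

def grad(q):
    return A.T @ (A @ q - b)

# First-guess emissions, with non-negativity enforced via bounds
res = minimize(cost, x0=np.ones(3), jac=grad, method="L-BFGS-B",
               bounds=[(0.0, None)] * 3)
```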

macc2date

Usage: macc2date [filein] [fileout]

Program reads the output file of the cost function (lbfgsb) analysis and converts the time field from EXCEL fractional days format to year, month, day, hour, value, where value is the emission rate. The output is in the free-form cfactors format for time-varying emissions, a much simpler format than EMITIMES. This file is normally referred to as cfactors.txt and is used by the programs tcmsum and condecay.

GUI: srm_sum.tcl for use of cfactors.txt
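The Excel fractional-day conversion can be sketched in Python (the 1899-12-30 epoch is the conventional choice that absorbs Excel's fictitious 1900 leap day; function names are illustrative):

```python
from datetime import datetime, timedelta

EXCEL_EPOCH = datetime(1899, 12, 30)  # Excel day 0 for the 1900 date system

def excel_to_datetime(serial_days):
    """Convert an Excel fractional-day serial number to a datetime."""
    return EXCEL_EPOCH + timedelta(days=serial_days)

# Excel serial 45292.5 corresponds to 2024-01-01 12:00
stamp = excel_to_datetime(45292.5)
```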

matrix

Usage: matrix [-options]
   -i[input file name]
   -o[output file name]
   -y[latitude]
   -x[longitude]
   -z[level vertical index]
   -m[source (s) or receptor (r) mode]
   -f[force release date: yymmddhh]
   -d[date sample select: mmddhhnn]
   -n[normalization on]

The matrix program is used to extract source or receptor information from the HYSPLIT generated source-receptor matrix. Two output modes are available. In receptor mode a receptor location is specified and the program computes the contribution of each source to the selected receptor point. In source mode a source location is specified and the program outputs the concentrations at all the receptor locations. All output files are in standard HYSPLIT binary.

GUI: srm_disp.tcl        Tutorial: src_recp.sh

stat2grid

USAGE: stat2grid [-options]
   -i{in file}
   -o{out file}
   -v{variable # (3 to 8)}

Program reads the statmain statistics output file (from -s) and converts the values by position to the HYSPLIT concentration grid format permitting plots of model performance statistics with respect to location. The -v selects one of the following metrics: corr(3), nmse(4), fb(5), fms(6), ksp(7), and rank(8). The index value represents the column number in the statmain output file.

GUI: conc_rank.tcl        Tutorial: src_stats.sh

statmain

USAGE: statmain [arguments]
  -a[averaging: space (s), time (t), low-pass filter (#)]
  -b[bootstrap resampling to compute correlation]
  -c[concentration normalization factor]
  -d[measurement directory or file name (when -t0)]
  -e[enhanced output in supplemental file]
  -f[station selection file suffix]
  -g[generate random initial seed for bootstrap resampling]
  -l[contingency level for spatial statistics (1)]
  -m[model variation string (when -t<>0)]
  -o[write (1) merged data or read (2) merged file]
  -p[plume statistics (both meas & calc gt zero)]
  -r[calculation results directory or file name (when -t0)]
  -s[supplemental appended output file name]
  -t[tracer character number: (0),1,2,3, etc]
  -x[exclude 0,0 pairs]
  -y[set calculated to zero when below measured zero]
  -z[percentile level of non-zero for zero measured]

Program reads DATEM formatted measured and calculated air concentration files and performs some elementary statistical analyses for model comparison. Procedures are based upon the original ETEX workshop metrics. Input and output file names can be generated automatically when the tracer character ID is set to a non-zero value.

GUI: datem.tcl        Tutorial: src_stats.sh
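Two of the standard ETEX-style metrics can be sketched with numpy (the formulas are the usual fractional bias and normalized mean square error; the sample data are illustrative):

```python
import numpy as np

def fractional_bias(meas, calc):
    """FB = 2*(Mc - Mm)/(Mc + Mm); 0 is unbiased, range (-2, 2)."""
    mm, mc = np.mean(meas), np.mean(calc)
    return 2.0 * (mc - mm) / (mc + mm)

def nmse(meas, calc):
    """Normalized mean square error: mean((c - m)^2) / (Mm * Mc)."""
    meas, calc = np.asarray(meas), np.asarray(calc)
    return np.mean((calc - meas) ** 2) / (np.mean(meas) * np.mean(calc))

meas = np.array([1.0, 2.0, 3.0, 4.0])
calc = np.array([2.0, 4.0, 6.0, 8.0])  # calculated = 2x measured
```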

tcsolve

Usage: tcsolve -[options]
   -i[input CSV matrix (c2array.csv)]
   -o[output file (tcsolve.txt)]
   -p[percent delete (0)]
   -u[units conversion(1)]
   -z[zero value(0)]

The code will compute the inverse of the coefficient matrix (CM) and solve for values of the source emission rate vector to satisfy the measured values. The default approach is to use Singular Value Decomposition on the CM, which is defined by M equations (receptors) and N unknowns (sources). The input file should be in CSV format with the last column corresponding to the measurements. The first row is the time associated with each unknown source. The CSV file could be created using the program C2ARRAY which processes the HYSPLIT binary coefficients and DATEM formatted measurement data files. The percent delete is the percentage of the lowest TCM values to be deleted.

GUI: srm_solve.tcl        Tutorial: src_coef.sh
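The SVD solution can be sketched with numpy; near-zero singular values are truncated to stabilize the inversion (the cutoff and matrix values are illustrative, not tcsolve defaults):

```python
import numpy as np

def svd_solve(cm, meas, rel_cutoff=1e-8):
    """Solve cm @ q = meas via a truncated-SVD pseudo-inverse."""
    u, s, vt = np.linalg.svd(cm, full_matrices=False)
    keep = s > rel_cutoff * s[0]            # drop near-zero singular values
    inv_s = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return vt.T @ (inv_s * (u.T @ meas))

cm = np.array([[1.0, 0.0],
               [0.0, 2.0],
               [1.0, 1.0]])                 # 3 receptors, 2 sources
q_true = np.array([3.0, 4.0])
q = svd_solve(cm, cm @ q_true)              # recovers the source vector
```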



Concentration File Utilities

con2inv

USAGE: con2inv -[options(default)]
   -i[input file name (cdump)]
   -o[output file name with base format (concinv.bin)]
   -d[process id (bin)]

Program to output the inverse of the concentration for source attribution calculations. The pollutant 4-char ID remains the same.

GUI: conc_invr.tcl

conappend

USAGE: conappend -[options]
   -i[Name of file containing input file names]
   -o[Output summation file]
   -c[Concentration conversion factor (1.0)]

Program to read multiple HYSPLIT concentration files and append the values into a single file. The files all need to be identical in terms of grid size but each file would represent a different time period.

conavg

Usage: conavg [-options]
   -b[base input file name (cdump)]
   -d[diagnostic output turned on]
Output: cmean

Reads multiple concentration files that are identical in terms of the grid and computes the mean value at each grid point, which is then written to the output file cmean. Note that individual time headers are not checked and the output time fields are the same as those of the last input file. This program provides a quick way to generate an ensemble mean rather than using conprob, which generates all the probability files including the mean.

conavgpd

USAGE: conavgpd -[options(default)]
   -i[input file name (cdump)]
   -o[output file name (xdump)]
   -m[concentration multiplier (1.0)]
   -h[averaging period in hours]
   -a[start averaging period (YYMMDDHHMN)]
   -b[stop averaging period (YYMMDDHHMN)]
   -r[average (0) or sum=1]

Combines multiple sequential time periods from a binary concentration file and computes the average or sum based upon the period (-h) specified by the user and writes out a new concentration file.
Tutorial: dose_cemit.sh

concacc

USAGE: concacc -[options(default)]
   -i[input file name (cdump)]
   -o[output file name with base format (concacc.bin)]

Program to accumulate concentrations from one time period to the next and output the results to another file. For example, the program can be used to sum doses from individual time periods to get the total dose.
Tutorial: ind_test.sh

concadd

USAGE: concadd -[options(default)]
   -i[input file name (cdump)]
   -b[base file name to which input file is added (gdump)]
   -o[output file name with base file format (concadd.bin)]
   -g[radius (grid points around center) to exclude; default=0]
   -c[concentration conversion factor (1.0)]
   -p[process (0)=add | 1=max value | 2=zero mask | 3=intersect]
         | 4=replace | 5=divide c1/c2]
   -t[forces the sampling time start stop times as minimum and maximum]
   -z[zero mask value (base set to zero when input>mask; default=0.0)]
       if zero mask value < 0 :
       base set to zero when input> zero mask value * base

Program to add together two gridded HYSPLIT concentration files, where the input file is added into the base file and written as a new file. The input and base file need not be identical in time, but they must have the same number of pollutants and levels. The file contents are matched by index number and not height or species. The horizontal grids need to be identical or even multiples of each other and they also need to intersect at the corner point. Summations will occur regardless of any grid mismatches. Options are also available to select the maximum value or define the input file as a zero-mask such that grid points with non-zero values become zero at those locations in the base file when written to the output file. The intersect option only adds the input file to the base file when both have values greater than zero at the intersection point, otherwise the value is set to zero. The replace option will replace the value from the base file with the value from the input file. This option is normally used in conjunction with a non-zero radius. The -t flag forces the time labels for each sampling period to represent the minimum starting time and maximum stop time between the two input files.

GUI: conc_add.tcl        Tutorial: ind_test.sh
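The main grid-combination modes (add, maximum, zero mask) can be sketched with numpy arrays standing in for one time period of each concentration grid (values are illustrative):

```python
import numpy as np

base = np.array([[1.0, 2.0], [3.0, 0.0]])   # base file grid (gdump)
inp = np.array([[0.0, 5.0], [1.0, 4.0]])    # input file grid (cdump)

added = base + inp                          # -p0: add input into base
maxval = np.maximum(base, inp)              # -p1: maximum value
masked = np.where(inp > 0.0, 0.0, base)     # -p2: zero mask (mask value 0)
```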

concmbn

USAGE: concmbn -[options(default)]
   -b[crop the final grid: yes:(1), no:0]
   -c[coarse grid file name (cdump)]
   -f[fine grid file name (fdump)]
   -m[Multiplier used with -v option: (1.0)-one (default)]
   -o[output file name (concmbn.bin)]
   -p[percent of white space added around plume (10)]
   -s[number of surrounding coarse grid points to average, (0)=none, x]
   -t[number of perimeter fine grid points to average, (0)=none, x]
   -v[Minimum value to extract to: (0.0)-zero (default) ]

Program to combine two gridded HYSPLIT concentration files, one a fine grid and one a coarse grid. The coarse grid is recalculated on a grid covering the same area but with the resolution of the fine grid provided. The file contents are matched by index number and not height or species. You must make sure that the fine grid spacing and span are even multiples of the coarse grid spacing and span. The coarse grid points must also be points on the fine grid. Summations will occur regardless of any grid mismatches. Prior to writing the coarse grid values into the final large fine grid, the new fine grid values are given a 1/r^2 weighting using the surrounding coarse grid values.

concrop

USAGE: concrop -[options (default)]
   -b[latmin:lonmin:latmax:lonmax (all zeroes will produce full grid extract)]
   -d[process id (txt)]
   -f[FCSTHR.TXT: (0)-none 1-output]
   -i[Input file name: (cdump)]
   -g[Grid limits rather than cropped file: (0)-no 1-center-radius -1-lat-lons]
   -o[Output file name: (ccrop)]
   -m[Multiplier used with -v option: (1.0)-one (default) ]
   -p[Time extract MMDDHH:MMDDHH]
   -v[Minimum value to extract to: (0.0)-zero (default) ]
   -w[Percent of white space added around plume (10)]
   -x[override white space: (0)-no (default) 1-yes]

Removes the whitespace around a HYSPLIT binary concentration grid file by resizing the grid about the non-zero concentration domain or by specifying a sub-grid on the command line. Using the -x option will force the extract of the selected domain regardless of white space.

GUI: conc_xtrct.tcl

concsum

USAGE: concsum -[options(default)]
   -i[input file name (cdump)]
   -o[output file name with base format (concsum.bin)]
   -d[process id (bin)]
   -l[level sum and pollutant sum]
   -p[pollutant sum label (SUM)]

Program to add multi-pollutant concentrations from one HYSPLIT concentration file and output the sum (as one pollutant) to another file (e.g. to get the total concentration when pollutants are different particle sizes). The level sum flag sums the pollutants and levels to one output field.
Tutorial: dose_cemit.sh

conedit

USAGE: conedit [options]
   -i[input file name (cdump)]
   -o[output file name (cedit)]
   -m[meteorology string (skip)]
   -p[pollutant string (skip)]

Edits the HYSPLIT binary concentration file by replacing the meteorology and pollutant type four character identification strings. Only files with one pollutant are supported. A string is left unchanged if not defined.

confreq

Usage: confreq [-arguments]
   -h{elp}
   -f{ile name of file names}
   -z{eros included}

Computes the concentration frequency distribution for one or more HYSPLIT binary concentration files specified in the file name of input files. All levels and species are included. Mismatches in time and grid are ignored. The current compilation maximum sort dimension is one million.

conhavrg

USAGE: conhavrg [options]
   -i[input file name (cdump)]
   -o[output file name (cavrg)]
   -s[surrounding grid points to average (1)]

Horizontally averages the HYSPLIT concentration file according to a grid point scan distance (s), where the averaged area equals 2s*2s grid points. The scan distance is not adjusted for differences in longitude distance with latitude. Mass is adjusted to ensure that the average concentration remains the same.

GUI: conc_havg.tcl

coninfo

USAGE: coninfo -[options(default)]
   -i[input file name (cdump)]
   -t[list start/stop times (0=no), 1=yes]
   -j[output only first/last times in file (0=no), 1=Julian, 2=date]

Program prints summary information about the contents of a HYSPLIT concentration file such as the grid information, the start and stop times of each time period, or just the first and last time period.

GUI: conc_xtrct.tcl

conlight

USAGE: conlight -[options(default)]
   -i[input file name (cdump)]
   -o[output file name (xdump)]
   -l[level number to extract (0 = all)]
   -m[concentration multiplier (1.0)]
   -p[period MMDDHH:MMDDHH]
   -s[species: 0-sum, (1)-select]
   -t[time periods to extract (1)]
   -z[concentration minimum (0.0)]
   -y[concentration minimum for sum pollutants (0.0)]

Extracts individual records from the binary concentration file where every -t{count}th record is output and if the file contains multiple species, the -s{pecies} number is selected for output. Multiplier and minimum values may be applied.

GUI: conc_add.tcl        Tutorial: ens_stats.sh

conmask

Usage: conmask [input] [mask] [output] [cmin]
   [input] - name of input file of concentrations
   [mask] - name of mask file where input may = 0
   [output] - name output file from input*mask
   [cmin] - input gt cmin mask then set input = 0

The program reads two HYSPLIT concentration files and applies the second file as a mask to the first: any grid cell with a non-zero value in file #2 is set to zero in file #1. For example, in source attribution calculations, the second file can be used to eliminate grid cells that do not contain the source because the backward calculation was associated with a measurement of zero.

conmaxpd

USAGE: conmaxpd -[options(default)]
   -i[input file name (cdump)]
   -o[output file name (xdump)]
   -m[concentration multiplier (1.0)]
   -h[window period for maximum in hours]
   -a[start time for window (YYMMDDHH)]
   -b[stop time for window (YYMMDDHH)]

Computes the maximum concentration at each grid point per time window for all time windows that fall between the window start and stop times.

conmaxv

USAGE: conmaxv [options]
   -i[input file name (cdump)]
   -o[output file name (cavrg)]
   -s[sliding time window in minutes (-1, defaults to data interval)]

Computes the maximum value at each grid cell using a sliding time window. The sliding time interval must be evenly divisible by the averaging (data interval) time. The output represents a single time period containing the maximum value over the entire simulation. For example, a one hour simulation with output every 5 minutes produces 12 time periods in the HYSPLIT concentration output file. With the sliding time window set to 15 minutes, the program computes at each grid point the average concentration for every consecutive three-period window (periods 1,2,3, then 2,3,4, and so on), and the single maximum value of those sliding window averages is written to the output file at that grid point. Maximum concentrations are used primarily in hazardous chemical exposure calculations.
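The sliding-window computation can be sketched with numpy for the example above (12 five-minute periods, a 15-minute window, hence a window length of 3 periods; the concentration values are illustrative):

```python
import numpy as np

# Twelve 5-minute average concentrations at one grid point
conc = np.array([1, 3, 2, 8, 4, 1, 0, 2, 9, 3, 1, 2], dtype=float)

window = 3  # 15-minute sliding window / 5-minute data interval
# Running mean of every consecutive 3-period window (10 windows for 12 periods)
means = np.convolve(conc, np.ones(window) / window, mode="valid")
peak = means.max()  # single maximum written to the output grid point
```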

conmerge

USAGE: conmerge -[options]
   -d[Date to stop process: YYMMDDHH {00000000}]
   -i[Input file name of file names]
   -o[Output file name]
   -t[Time summation flag]

The program reads multiple HYSPLIT concentration files and sums the values into a single file. Options are to sum only matching time periods or to sum all time periods into one.
Tutorial: dose_temit.sh

conpuff

Usage: conpuff
   prompted standard input:
   line 1 - Concentration file name
Output: standard

Reads the gridded HYSPLIT concentration file and prints the maximum concentration and puff mass centroid location for each concentration averaging period.

conread

Usage: conread
   prompted standard input:
   line 1 - Concentration file name
   line 2 - Extended diagnostics (0/1)
Output: standard

Program to read the gridded HYSPLIT concentration file and dump selected statistics for each time period to standard output.

GUI: conc_file.tcl        Tutorial: conc_util.sh

constats

USAGE: constats {arguments}
   -f#[concentration file name (#<=2)]
   -o[output file name; undef stdout]
   -t[temporal match skip hour]
   -v[verbose output]

Compares two HYSPLIT concentration files [-f1{name} and -f2{name}] by computing the FMS overlap statistic, assuming both grids are identical in terms of grid size, levels, pollutants, and number of time periods. When the -t flag is set, time mismatches are ignored.
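A common definition of the Figure of Merit in Space (FMS) is the percent overlap between the two plume areas. The sketch below uses that definition with a simple threshold; the exact thresholding used by constats is an assumption here:

```python
def fms(grid_a, grid_b, thresh=0.0):
    """Figure of Merit in Space: percent overlap of two plumes.

    A cell belongs to a plume when its value exceeds thresh; FMS is
    100 * intersection / union of the two plume areas.
    """
    a = {i for i, v in enumerate(grid_a) if v > thresh}
    b = {i for i, v in enumerate(grid_b) if v > thresh}
    if not a | b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)

# one shared plume cell out of three plume cells total
print(fms([0, 1, 1, 0], [0, 1, 0, 1]))
```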

mergextr

USAGE: mergextr [-arguments (default)]
   -a[date to start YYMMDDHH]
   -b[date to stop YYMMDDHH]
   -i[input file name of file names (mergelist.txt)]
   -o[Output file name: (xtrct.txt)]

This program will read a file of filenames (format: DG_YYYYMMDDHH) and create a new file of filenames with only the dates between and including the dates entered. The DG_ file name convention is usually associated with the concdecay program.
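The date-window selection can be sketched as follows (assuming fixed-width DG_YYYYMMDDHH stamps, so plain string comparison orders the dates correctly):

```python
def extract_dates(names, start, stop):
    """Keep DG_YYYYMMDDHH file names whose date stamp lies in [start, stop].

    start/stop are YYYYMMDDHH strings; string comparison works because
    the stamp is fixed width with the most significant digits first.
    """
    return [n for n in names if start <= n.split("_", 1)[1] <= stop]

names = ["DG_2014010100", "DG_2014010112", "DG_2014010200"]
print(extract_dates(names, "2014010106", "2014010200"))
```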

tcmsum

USAGE: tcmsum -[options(default)]
   -c[column number in source term file (1)]
   -d[output file name suffix (bin)]
   -i[input file name (cdump)]
   -o[output file base name (tcmsum)]
   -h[half-life-days (0.0 = none)]
   -p[pollutant sum label (SUM)]
   -s[source term input file name (cfactors.txt)]
   -t[time decay start: MMDDHH (run_start)]

Program to add together the pollutant concentrations from one HYSPLIT concentration file, where the pollutants represent different starting times as configured from a HYSPLIT Transfer Coefficient Matrix (TCM) simulation with ICHEM=10. The concentrations for each starting time are multiplied by a source term defined in an auxiliary input file.
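The TCM summation amounts to a weighted sum over release times at every grid cell, as in this sketch (variable names are illustrative, not the program's):

```python
def tcm_sum(conc_by_release, source_term):
    """Collapse a Transfer Coefficient Matrix into one concentration field.

    conc_by_release : per grid cell, one dilution factor per release time
                      (the TCM "pollutants" from an ICHEM=10 run)
    source_term     : emission rate for each release time
    """
    return [sum(c * q for c, q in zip(cell, source_term))
            for cell in conc_by_release]

# two grid cells, three release times
tcm = [[1.0e-9, 2.0e-9, 0.0], [0.0, 1.0e-9, 3.0e-9]]
print(tcm_sum(tcm, [100.0, 50.0, 10.0]))
```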

GUI: srm_sum.tcl



Ensembles

accudiv

USAGE: accudiv [-options]
Reads DATEM formatted measurement and modeled data and calculates the accuracy/diversity among all the possible combinations of pairs, trios, etc.
   -b[base model output file name]
   -m[measurement input file name]
   -o[output file name]

Applies a reduction technique to an ensemble. In this technique all the possible model combinations are tested and the chosen subensemble is the one that shows the minimum square error. The measurements and model results should be defined by DATEM format files with an identical number of records.

GUI: ens_reduc.tcl        Tutorial: ens_reduc.sh

ensperc

USAGE: ensperc [-options]
   -b[base model output file name]
   -m[measurement input file name]
   -o[output file name]

Reads a DATEM formatted measurement file and model data files and calculates concentration percentiles for ensemble runs and measured values. The program requires DATEM formatted model outputs and measurements. Sequential numbers (.000) are automatically appended to the base name to generate the ensemble member file name. The cumulative concentration distribution is computed independently for measured and calculated pairs when both are non-zero.

enstala

USAGE: enstala [-options]
   -b[base model output file name]
   -m[measurement input file name]
   -o[output file name]

Reads a DATEM formatted measurement file and model data files and calculates values to generate a Talagrand diagram. Sequential numbers (.000.txt) are automatically appended to the base name to generate the ensemble member file name.

var2datem

USAGE: var2datem -[arguments]
   -c[input to output concentration multiplier]
   -e[output concentration datem format: (0)=F10.1 1=ES10.4]
   -r[random number generator seed number (-1)]
   -p[percent standard deviation (10%) ]
   -n[minimum concentration (7.0) ]
   -d[write to diagnostic file]
   -h[header info: 2=input text, (1)=file name, 0=skip]
   -m[measurement data file name]
   -s[supplemental information]
   -o[output concentration file name]

Program to read a DATEM file and output a randomly perturbed DATEM file given a standard deviation and minimum concentration. It provides a way to evaluate measurement uncertainty when computing model performance statistics.




Graphics Creation

boxplots

USAGE: boxplots [-arguments]
   -a[ascii output file]
   -c[concentration conversion factor]
   -d[datem formatted measurement file]
   +g[graphics: (0)-Postscript, 1-SVG]
   -l[level index number (1)]
   -m[minimum scale for plot]
   -M[maximum scale for plot]
   -n[number of divisions (10)]
   -p[pollutant index number (1)]
   -s[start time period (1)]
   -t[title string]
   -u[units string for ordinate]
   -x[longitude]
   -y[latitude]
Output: boxplots.{ps|html}

Program to read the probability files output from the program conprob and create up to 12 (time periods) box plots per page. The probability files consist of binary concentration fields for each probability level. Box plots are created for a specific location.

GUI: disp_boxp.tcl        Tutorial: ens_data.sh

concplot

USAGE: concplot -[options (default)]
   -i[Input file name: (cdump)]
   -o[Output file name: (concplot.{ps|html})]
   +g[graphics: (0)-Postscript, 1-SVG]
   and 38 additional options (see command line for details)

Primary graphical display program for HYSPLIT binary concentration files. The data are contoured and color-filled against a map background. At a minimum, only the name of the input file is required. Colors, contour intervals, map background, and label details may be adjusted through the approximately 40 command line options.

GUI: conc_rank.tcl        Tutorial: conc_disp.sh

contour

Usage: contour [-options]
   -d[Input metdata directory name with ending /]
   -f[input metdata file name]
   +g[graphics: (0)-Postscript, 1-SVG]
   -y[Map center latitude]
   -x[Map center longitude]
   -r[Map radius (deg lat)]
   -v[Variable name to contour (e.g. TEMP)]
   -l[Level of variable (sfc=1)]
   -o[Output time offset (hrs)]
   -t[Output time interval (hrs)]
   -c[Color (1/3) or B&W (0/2); 0/1=lines 2/3=nolines]
   -g[Graphics map file (arlmap) or shapefiles.txt]
   -m[Maximum contour value (Auto=-1.0)]
   -i[Increment between contours (Auto=-1.0)]
   -a[Arcview text output]
Output: contour.{ps|html}

Contour fields from a meteorological data file in ARL format

GUI: disp_map.tcl        Tutorial: traj_flow.sh

display

Usage: display
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - meteorological_file_name
   line 3 - day, hour
   line 4 - center Latitude, Longitude
   line 5 - radius in deg Latitude
   line 6 - enter 0 (B&W) or 1 (Color)
   line 7 - enter 0 (None) or 1 (GIS output)
   line 8 - Level Number
   line 9 - four-character Variable ID
   line 10 - number of contours (max=8)
   line 11 - contour interval (-1=auto)
   line 12 - maximum contour value (-1=auto)
Output: display.{ps|html}

Interactive standard-input version of the contour program.

ensplots

USAGE: ensplots [-arguments]
   -b[base name for concentration files]
   -c[concentration conversion factor]
   +g[graphics: (0)-Postscript, 1-SVG]
   -m[minimum scale for plot]
   -M[maximum scale for plot]
   -n[number of divisions (10)]
   -x[longitude]
   -y[latitude]
Output: ensplots.{ps|html}

Program to read the probability files output from the program conprob and create up to 12 (time periods) member plots per page in a format similar to boxplots. The graphic shows the member number distribution by concentration for a single location.

GUI: disp_box.tcl        Tutorial: ens_data.sh

gridplot

USAGE: gridplot -[options(default)]
   -a[scale: 0-linear, (1)-logarithmic]
   -b[Science on a Sphere output: (0)-No, 1-Yes]
   -c[concentration/deposition multiplier (1.0)]
   -d[delta interval value (1000.0)]
   -f[(0), 1-ascii text output files for mapped values]
   -g[GIS: 0-none 1-GEN(log10) 2-GEN(value) 3-KML 4-partial KML]
   +g[graphics: (0)-Postscript, 1-SVG]
   -h[height of level to display (m) (integer): (0 = dep)]
   -i[input file name (cdump.bin)]
   -j[graphics map background file name: (arlmap)]
   -k[KML options: 0-none 1-KML with no extra overlays]
   -l[lowest interval value (1.0E-36), -1 = use data minimum]
   -m[multi frames one file (0)]
   -n[number of time periods: (0)-all, numb, min:max, -incr]
   -o[output name (plot.ps)]
   -p[process output file name suffix]
   -r[deposition: (1)-each time, 2-sum]
   -s[species number to display: (1); 0-sum]
   -u[mass units (ie, kg, mg, etc)]
   -x[longitude offset (0), e.g., -90.0: U.S. center; 90.0: China center]
   -y[latitude center (0), e.g., 40.0: U.S. center]
   -z[zoom: (0=no zoom, 99=most zoom)]

Creates a postscript or html file showing the concentration field evolution using color fill of the concentration grid cells. It is designed especially for global grids, such as Science on a Sphere output, which assumes the concentration file is a global lat/lon grid.

GUI: conc_grid.tcl        Tutorial: conc_disp.sh

isochron

USAGE: isochron -[options(default)]
   -d[time interval in hours, (-1)=automatic selection]
   -f[Frames: (0)-all frames one file, 1-one frame per file]
   -g[GIS: 0-none 1-GEN(log10) 2-GEN(value) 3-KML 4-partial KML]
   +g[graphics: (0)-Postscript, 1-SVG]
   -h[level index (1); 0-all]
   -i[input file name (cdump.bin)]
   -j[Graphics map background file name: (arlmap)]
   -k[KML options: 0-none 1-KML with no extra overlays]
   -m[concentration multiplier (1.0)]
   -n[number of contours (12)]
   -o[output name (toa.ps or toa.xxx where xxx is defined by -p)]
   -p[Process output file name suffix]
   -s[species number to display: (1); 0-all]
   -S[species output table created: (0)-no, 1-yes]
   -t[lowest threshold value (1.0E-36)]
   -u[mass units (ie, kg, mg, etc)]
   -x[longitude offset (0), e.g., -90.0: U.S. center; 90.0: China center]
   -y[latitude center (0), e.g., 40.0: U.S. center]
   -z[zoom: (0=no zoom, 99=most zoom)]

Creates a postscript or html file to show the time it takes for a concentration grid cell to become non-zero after the start of the simulation. Times are designated by using color fill of the concentration grid cells.

GUI: conc_time.tcl        Tutorial: conc_disp.sh

parhplot

USAGE: parhplot -[options(default)]
   -a[GIS output: (0)-none, 1-GENERATE, 3-kml]
   +g[graphics: (0)-Postscript, 1-SVG]
   -i[input file name (PARDUMP)]
   -k[Kolor: (0)-B&W 1-Color]
   -m[scale output by mass 1-yes (0)-no]
   -s[select output species (1)-first specie, 0-sum all, or species id number]
   -n[plot every Nth particle (1)]
   -o[output file name (parhplot.ps)]
   -j[Map background file: (arlmap) or shapefiles.txt]
   -p[Process file name suffix: (ps) or process ID]
   -t[age interval plot in hours (0)]
   -z[Zoom factor: 0-least zoom, (50), 100-most zoom]

Plot the horizontal mass distribution from a PARDUMP file.

GUI: disp_part.tcl        Tutorial: disp_part.sh

parsplot

USAGE: parsplot -[options(default)]
   -i[input file name (PARDUMP)]
   +g[graphics: (0)-Postscript, 1-SVG]
   -k[Kolor: (0)-B&W 1-Color]
   -j[map background file or {none} to turn off]
   -n[plot every Nth particle (1)]
   -o[output file base name (otherwise PYYMMDDHH.{ps|html})]
   -p[particle max value for color scaling]
   -s[size scaling fraction from default (1.0)]
   -t[particle age plot option not available]

Global particle plot using a cylindrical equidistant projection. If the output file name is defined on the command line, multiple time periods are written to the same file; otherwise output is one file per time period.

GUI: disp_part.tcl

parvplot

USAGE: parvplot -[options(default)]
   -i[input file name (PARDUMP)]
   +g[graphics: (0)-Postscript, 1-SVG]
   -k[Kolor: (0)-B&W 1-Color]
   -m[scale output by mass 1-yes (0)-no]
   -s[select output species (1) or species id number]
   -n[plot every Nth particle (1)]
   -o[output file name (parvplot.ps)]
   -p[Process file name suffix: (ps) or process ID]
   -t[age interval plot in hours (0)]
   -z[Zoom factor: 0-least zoom, (50), 100-most zoom]

Plot the vertical mass cross-section from a PARDUMP file.

GUI: disp_part.tcl        Tutorial: disp_part.sh

parxplot

USAGE: parxplot -[options(default)]
   -a[GIS output: (0)-none, 1-GENERATE, 3-kml]
   -i[input file name (PARDUMP)]
   +g[graphics: (0)-Postscript, 1-SVG]
   -k[Kolor: (0)-B&W 1-Color]
   -m[scale output by mass 1-yes (0)-no]
   -s[select output species (1)-first specie, 0-sum all, or species id number]
   -n[plot every Nth particle (1)]
   -o[output file name (parxplot.ps)]
   -j[Map background file: (arlmap) or shapefiles.<(txt)|process suffix>]
   -p[Process file name suffix: (ps) or process ID]
   -t[age interval plot in hours (0)]
   -x[Force cross section: lat1,lon1,lat2,lon2]
   -z[Zoom factor: 0-least zoom, (50), 100-most zoom]

Plot the vertical cross-section through the plume center from a PARDUMP file.

GUI: disp_part.tcl        Tutorial: disp_part.sh

poleplot

Usage: poleplot [-options]
   -b[background map file name (arlmap)]
   -c[concentration data file name (cdump)]
   -g[graphics (0)=fill, 1=lines, 2=both]
   +g[graphics: (0)-Postscript, 1-SVG]
   -l[lat/lon label interval in degrees (0.5)]
   -m[maximum concentration value set]
   -o[output file name (poleplot.ps)]
   -v[vector output (0)=no 1=yes]

Plots HYSPLIT concentrations that were created on a polar grid (cpack=3) rather than the default Cartesian grid. Polar grids are defined in terms of distance and angle, where sector 1 starts from north and sector 2 is adjacent clockwise. Sectors are defined by arc-degrees and distance in increments of kilometers.

GUI: pole_disp.tcl        Tutorial: conc_pole.sh

scatter

USAGE: scatter -[options]
   -i{input file}
   +g{graphics: (0)=postscript 1=svg}
   -o{output file (scatter.{ps|html})}
   -p{plot minimum (-999=auto)}
   -s{symbol plot: (0)=station 1=record 2=plus 3=day 4=hour}
   -x{plot maximum (-999=auto)}

Plots a scatter diagram from the file dataA.txt of merged measured and calculated values created by the statmain program. The default option plots the sampling station ID number at the data point.

GUI: datem.tcl        Tutorial: conc_stat.sh

showgrid

Usage: showgrid [-filename options]
   -D[input metdata directory name with ending /]
   -F[input metdata file name]
   +G[graphics: (0)-Postscript 1-SVG]
   -P[process ID number for output file]
   -I[grid point plotting increment (0=none)]
   -L[lat/lon label interval in degrees]
   -A[location of arlmap or shapefiles.<(txt)|processid>]
   -X[Read from standard input]
   -Q[subgrid lower left latitude (xxx.xx)]
   -R[subgrid lower left longitude (xxxx.xx)]
   -S[subgrid upper right latitude (xxx.xx)]
   -T[subgrid upper right longitude (xxxx.xx)]
   -B[plot symbol at each lat/lon in file SYMBPLT.CFG]
Output: showgrid.{ps|html}

The program creates a map of the full domain of the meteorological data file, with + marks at each grid intersection matching the specified interval (-I). A smaller map domain may be chosen. If desired, a four-character symbol may be plotted at the latitude-longitude points specified in the input file (-B), which is in free format (latitude longitude 4-characters).

GUI: disp_grid.tcl

stabplot

USAGE: stabplot -i -n -y
   -i[process ID number]
   +g[graphics: (0)-Postscript, 1-SVG]
   -n[sequential station number (1+2+...+n)]
   -y[auto y axis log scaling]
Input: STABILITY.{processID}.txt
Output: stabplot.{processID}.{ps|html}

Plots a time series of Pasquill-Gifford stability categories at one or more locations. Requires the output file STABILITY.TXT from the vmixing program.

timeplot

USAGE: timeplot [-arguments]
   -i[input concentration file name]
   -s[secondary file name for display]
   -e[process ID number appended to output file]
   +g[graphics type: (0)=Postscript, 1=SVG]
   -c[concentration multiplier (1.0)]
   -m[minimum ordinate value (0.0)]
   -n[sequential station number (1+2+...+n)]
   -p[0=draw only points, 1=no connecting line, 2=both]
   -x[maximum ordinate value (data)]
   -y[y axis log scaling (default=linear)]
   -z[three char time label (UTC)]
Output: timeplot.{ps|html}

Program to plot a time series of concentrations at one or more locations by reading the output file from program con2stn. Supplemental (secondary) data in DATEM format may also be included on the plot.

GUI: conc_stn.tcl        Tutorial: conc_util.sh

trajplot

USAGE: trajplot -[options (default)]
   -i[Input files: name1+name2+... or +listfile or (tdump)]
   -o[Output file name: (trajplot.{ps|html})]
   +g[graphics: (0)-Postscript, 1-SVG]
   and 14 additional options (see command line for details)

Basic trajectory plotting program for HYSPLIT formatted ASCII endpoint files.

GUI: traj_disp.tcl        Tutorial: traj_basic.sh

volcplot

USAGE: volcplot -[options (default)]
   -i[Input concentration file name: (cdump)]
   -o[Output file name: (volcplot.{ps|html})]
   +g[graphics: (0)-Postscript, 1-SVG]
   and 20 additional options (see command line for details)

Uses the binary gridded concentration output file from HYSPLIT to generate a visual ash cloud against a map background using the VAFTAD display format.




Graphical Utilities

cat2svg

USAGE: cat2svg -[options (default)]
   -i[Input concatenated file (catsvg.html)]
   -o[Output file name: (svg.html)]
   -t[Truncate blanks at end of output text: 0-no (1)-yes]

When html files containing SVG graphics are concatenated into a single file, the resulting file will have multiple <head> and <body> tags. This program normalizes it by removing spurious <head> tags and merging <body> tags.

catps2ps

USAGE: catps2ps -[options (default)]
   -i[Input concatenated file (catps.ps)]
   -o[Output file name: (postscript.ps)]
   -t[Truncate blanks at end of output text: 0-no (1)-yes]
   -m[Mac version delete save and restore]

Removes extraneous header information and updates the page count for multiple frame Postscript files that have been created by appending individual single-frame files.

GUI: hysplit.tcl

coversheet

Usage: coversheet [-options]
   -b[backtracking (0)=no, 1=yes]
   -i[input text file (RSMC.TXT)]
   -j[input date stamp file (COVER.TXT)]
   -o[output file (cover.ps)]

Creates a graphical coversheet from a text file that summarizes various HYSPLIT run parameters. In a subsequent step, the simulation graphics are appended to the coversheet. This particular version has been customized for RSMC applications.

gelabel

Usage: gelabel [-options]
   -p[Process file name suffix (ps|html) or process ID]
   -4[Plot below threshold minimum for chemical output: (0)-no, 1-yes]
   +g[graphics: (0)-Postscript, 1-SVG]
Input: label input file is GELABEL_{ps|html}.{ps|html}

Creates a small table-like graphic to associate contour colors with concentration values. Results can be merged into Google Earth KML/KMZ files.

GUI: conc_disp.tcl and conc_grid.tcl        Tutorial: disp_ge.sh

gen2xml

USAGE: gen2xml -[options (default)]
   -i[Input generate filename: (hysgen.txt)]
   -a[Input generate attributes filename: (hysgen.att)]
   -o[Output file name: (hysgen.xml)]

Converts ESRI generate format GIS files to XML files for use in Google Maps.

splitsvg

USAGE: splitsvg -[options (default)]
   -i[Input file (svg.html)]
   -o[Output file name: (split.svg)]
   -f[extract the First frame only: (0)-no 1-yes]

Creates one SVG file per <svg> tag from an HTML file containing SVG graphics. Note that the name of each output file is preceded by the frame number.

Tutorial: conc_disp.sh

stn2ge

USAGE: stn2ge [-options]
   -i[input file name output from con2stn]
   -s[selection station list file name]
   -o[output google earth file name (less .kml)]

Converts the concentration output from the con2stn program for the stations in the list file to Google Earth (kml) format. The station list file contains one or more records of station [ID latitude longitude].

GUI: conc_stn.tcl



HYSPLIT Configuration

dat2cntl

USAGE: dat2cntl [options]
   -c[numerator units conversion factor (1.0)]
   -d[emission rate set to the ...
     (0) = measured data value,
     1 = 1/measured data value,
     2 = value in the conversion factor field (-c)]
   -n[number variables on location line (3),4,5]
   -i[input measured data file name in datem format]
   -s[output station number as pollutant ID: no=0, yes=(1)]
   -t[index for trajectory start: 1,2,3,4,5,6,(7)]
     1 = ending of sample
     2 = middle of sample; 3 = end and middle
     4 = start of sample; 5 = end and start
    (7)= end, middle, start; 6 = middle and start
   -z[zero value (1.0E-12)]

Converts a DATEM format measured data file into CONTROL.{variation} by reading the base CONTROL file and the DATEM observational data file. The CONTROL file should represent a forward calculation from the source location and time to encompass the sampling data that is expected to be impacted by the release. The reconfigured CONTROL files are numbered sequentially and correspond to the suffix name of the binary output file. The station ID is written in the pollutant ID field for single pollutant concentration simulations and it is incorporated into the output file name for trajectory calculations. Trajectory starts are at the beginning, middle, and end of the sampling period. Concentration runs release particles over the entire sampling period.

GUI: conc_geol.tcl and traj_geol.tcl        Tutorial: src_geol.sh

dustbdy

USAGE: dustbdy
Input: CONTROL and LANDUSE.ASC
Output: CONTROL

Generates multiple latitude-longitude source points from a CONTROL file in which the first two latitude-longitude points are considered to be the corner points and the third point is the delta latitude-longitude increment. Each source point represents a PM10 dust emission point based upon the land use category of desert.

GUI: hymenu.tcl        Tutorial: cust_dust.sh

dustedit

USAGE: dustedit -[options (default)]
   -b[bowen ratio selection value (2.5)]
   -i[input control file name (dustconus_controlXX)]
   -m[meteorology file name (hourly_data.txt)]
   -o[output control file name (dustconus_control.txt)]

Selects locations from /nwprod/fix/dustconus_controlXX that meet certain selection criteria based upon current meteorological conditions at each potential dust emission location. There are three steps:
  1. Find the maximum sensible heat flux from 1500-2100 UTC
  2. Determine if the latent flux >= 5 and sensible flux >= 25 watts/m2
  3. Use as an emission point if the Bowen ratio (S/L) >= 2.5
A new CONTROL file is written that is used in the dust script instead of dustconus_controlXX. The Bowen ratio is used as a surrogate for wet or dry conditions to turn off dust emissions where it has recently rained. Dust emissions will restart once the ground has dried out, as indicated by a Bowen ratio above the threshold. This program is used as a pre-processor in the operational dust forecast.
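The three selection steps can be sketched for a single candidate location (threshold values taken from the description above; applying this per location is a simplification of the program):

```python
def keep_emission_point(sensible_flux_series, latent_flux, bowen_threshold=2.5):
    """Apply the three dustedit selection steps to one candidate location.

    sensible_flux_series : hourly sensible heat flux, 1500-2100 UTC (W/m2)
    latent_flux          : latent heat flux (W/m2)
    Steps: take the maximum sensible flux, require latent >= 5 and
    sensible >= 25 W/m2, then require Bowen ratio (S/L) >= threshold.
    """
    sensible = max(sensible_flux_series)
    if latent_flux < 5.0 or sensible < 25.0:
        return False
    return sensible / latent_flux >= bowen_threshold

print(keep_emission_point([10, 40, 80, 60], latent_flux=20.0))  # ratio 80/20 = 4.0
```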

fires

Usage: fires
Usage: fires {bluesky directory}
Usage: fires -d{bluesky directory} -g{aggregation grid size}
Defaults: directory=./ grid_size=0.20

Generates multiple lat-lon source points from a control file in which the first lat-lon points are considered to be the corner points. Each source point represents a forest fire smoke emission point based upon satellite analysis.

File         Purpose
DOMAIN       CONTROL file that specifies the fire domain
CONTROL      CONTROL file modified with each emission location
BLUESKY.TXT  Names of BlueSky output files, lat, lon, area
BLUESKY.CSV  Aggregated fire locations for BlueSky input
FIRES.ARC    Yesterday's emissions on the aggregation grid
FIRES.NEW    Emissions for today plus any decayed archive emissions
FIRES.TXT    NESDIS HMS file of fire locations for today

The program is intended to be run in two modes. If no BlueSky emission data are available (BLUESKY.TXT does not exist), then a default emission scenario is assumed for each pixel location in FIRES.TXT. This pass creates the BlueSky burns input file (BLUESKY.CSV). This file must then be processed in the BlueSky framework to produce the specified emission file for that location. On the second pass, the BLUESKY.TXT file is read to create a modified CONTROL file with emission locations, rates, and burn areas. The script calling this program should delete all BLUESKY.??? files prior to the first-pass execution.

firew

Usage: firew
Usage: firew {bluesky directory}

Simpler version of fires for web applications.

goes2ems

Usage: goes2ems [start] [duration] [infile1] [infile2]
   [start] - initial HH hour or force YYYYMMDDHH
   [duration] - emission duration in hours
   [infile1] - first day GOES emission file
   [infile2] - second day GOES emission file

The program reads the GOES PM2.5 emissions file and converts it to the HYSPLIT time-varying EMITIMES file format.

hysplit executables

USAGE: hy{c|t}{m|s}_{xxx} [optional suffix]
   {c|t} where c=concentration and t=trajectory
   {m|s} where m=multi-processor and s=single-processor
   {xxx} where xxx=compilation-variation
     std=standard version
     ens=integrated grid ensemble
     var=random initialization
     gem=global eulerian model
     ier=ozone steady state
     grs=ozone seven equation
     so2=sulfur dioxide wet chemistry
     cb4=carbon bond four chemistry
  [.suffix] - added to all standard model input and output files if found

Various HYSPLIT executables are created using different compilation options from the same source code: hymodelc.F for air concentrations and hymodelt.F for trajectories.

A more detailed discussion about the input and output file requirements for the variations of the primary HYSPLIT executables can be found in the model tutorial. A detailed description of the additional file requirements for the various HYSPLIT chemistry versions (ier,grs,so2) can also be found on-line.

hysptest

USAGE: hysptest [optional suffix]
  [.suffix] - added to all standard model input and output files if found

Test version of HYSPLIT that will evaluate the CONTROL and SETUP.CFG files for consistency of options by actually running a single-particle dummy calculation using the meteorological data. This program is discussed in more detail in the model tutorial.

GUI: test_cfg.tcl        Tutorial: test_inputs.sh

latlon

USAGE: latlon [-p{process ID suffix}]
Input: CONTROL
Output: CONTROL

Generates multiple latitude-longitude source points from a CONTROL file in which the first two points are the corner points and the third point defines the grid increment. An input CONTROL file requires exactly three starting locations:
  1: lower left corner of matrix grid (1,1)
  2: upper right corner of matrix grid (imax,jmax)
  3: location of grid point (2,2) adjacent to (1,1)
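Expanding the three starting locations into the full source grid can be sketched as follows (a minimal illustration; rounding and bounds handling in the real program may differ):

```python
def source_points(lower_left, upper_right, adjacent):
    """Expand the three latlon CONTROL locations into a full source grid.

    lower_left  : (lat, lon) of matrix point (1,1)
    upper_right : (lat, lon) of matrix point (imax, jmax)
    adjacent    : (lat, lon) of point (2,2), which sets the grid increment
    """
    dlat = adjacent[0] - lower_left[0]
    dlon = adjacent[1] - lower_left[1]
    nlat = round((upper_right[0] - lower_left[0]) / dlat) + 1
    nlon = round((upper_right[1] - lower_left[1]) / dlon) + 1
    return [(lower_left[0] + j * dlat, lower_left[1] + i * dlon)
            for j in range(nlat) for i in range(nlon)]

pts = source_points((40.0, -90.0), (41.0, -89.0), (40.5, -89.5))
print(len(pts))  # a 3 x 3 matrix of source points
```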

GUI: hymenu.tcl        Tutorial: src_recp.sh

nuctree

Usage: nuctree -n [-options]
   -o[output activity file name {DAUGHTER.TXT}]
   -n[Parent nuclide name (ie, Cs-137, I-131)]
   -p[Process ID number (CONTROL.pid, CONTROL_DAUGHTER.pid)]
   -r[generate random initial seed]

This program will display the daughter nuclides produced by a parent nuclide along with the half-life and branching fractions of each nuclide. It will also read a CONTROL file and use the pollutant ID to determine the daughter products. An additional CONTROL.DAUGHTER file will be created with the daughter products information.

GUI: conc_daug.tcl

prntbdy

USAGE: prntbdy
   prompted standard input:
   1 - landuse.asc
   2 - rouglen.asc
   3 - terrain.asc
Input: Select Input File Number
Output: standard

Prints the contents of the various boundary files in the bdyfiles directory.

testnuc

Usage: testnuc
Input: none
Output: standard

This is a hardwired program to test subroutine nucdcy in the HYSPLIT subroutine library which is used to calculate radioactive decay and the resulting daughter products defined on the same particle. Documentation is not available nor are there any known applications.

timeplus

Usage: timeplus year(I2) month(I2) day(I2) hour(I2) fhour(I4)

Prints a new date when given a date and forecast hour.
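The date arithmetic can be reproduced with the standard library datetime module (the century pivot chosen for the two-digit year is an assumption of this sketch):

```python
from datetime import datetime, timedelta

def timeplus(year2, month, day, hour, fhour):
    """Add a forecast hour to a two-digit-year date, as timeplus does.

    Returns (yy, mm, dd, hh); the 2000 pivot for the two-digit year
    is an assumption for illustration.
    """
    base = datetime(2000 + year2, month, day, hour) + timedelta(hours=fhour)
    return base.year % 100, base.month, base.day, base.hour

print(timeplus(7, 9, 4, 11, 14))  # 11 UTC + 14 h rolls into the next day
```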

vmsmerge

USAGE: vmsmerge -[options(default)]
   -i[input base file name (VMSDIST).000]
   -o[output file name (VMSDIST)]

Merges multiple VMSDIST.XXX files into a single file by looping sequentially from 001 to 999 until the first missing file is encountered. These files are normally created by the MPI version.
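The merge loop can be sketched as follows (the output file name here is illustrative, not the program's):

```python
import os

def merge_vmsdist(base="VMSDIST", out="VMSDIST.ALL"):
    """Concatenate VMSDIST.001, VMSDIST.002, ... until the first gap.

    The stop-at-first-missing-file rule follows the vmsmerge
    description; the output name is an assumption for this sketch.
    """
    with open(out, "wb") as merged:
        for n in range(1, 1000):
            name = "%s.%3.3d" % (base, n)
            if not os.path.exists(name):
                break
            with open(name, "rb") as part:
                merged.write(part.read())
    return out
```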

vmsread

Usage: vmsread [-filename options]
   -i[input file name (VMSDIST)]
Output: vmsdist.txt

Program to read the HYSPLIT vertical mass distribution file on native levels and output results to a file at the same interval as in the MESSAGE file.

zcoord

Usage: zcoord
   prompted standard input:
   30.0 -25.0 5.0

Program to show the HYSPLIT vertical coordinates based upon the values of the three polynomial coefficients: AA BB CC. New values can be entered after the program stops. There is no input prompt. The values used in each HYSPLIT simulation can be found in the MESSAGE file line: Internal grid parameters (nlvl,aa,bb,cc)
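With the example coefficients above (AA=30, BB=-25, CC=5) the polynomial can be evaluated directly; treating the level index k as the polynomial argument is an assumption of this sketch:

```python
def level_height(k, aa=30.0, bb=-25.0, cc=5.0):
    """Height (m) of internal model level k from Z = aa*k^2 + bb*k + cc."""
    return aa * k * k + bb * k + cc

# heights of the first few internal levels
for k in (1, 2, 3):
    print(k, level_height(k))
```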




Meteorological Data to ARL Format

api2arl

Usage: api2arl [-options]
   -h[help information with extended discussion]
   -e[encoding configuration file {name | create arldata.cfg}]
   -d[decoding configuration file {name | create api2arl.cfg}]
   -i[input grib data file name {DATA.GRIB2}]
   -o[output data file name {DATA.ARL}]
   -g[model grid name (4 char) default = {center ID}]
   -s[sigma compute=1 or read=2 or ignore=(0)]
   -t[top pressure (hPa) or number {50} of levels from bottom]
   -a[average vertical velocity no=(0) or yes=numpts radius]
   -z[zero fields (no=0 yes=1)initialization flag {1}]

The program converts model output data in GRIB2 format to the ARL packed format required for input to HYSPLIT, using the ECMWF ecCodes library (formerly the ECMWF grib_api library). The data must be on a global latitude-longitude or regional Lambert grid defined on pressure surfaces. The program will only convert one time period in any one input file. Multiple time period output files can be created by appending each processed time period using the cat command.
v1 : Pressure level data from the NCEP NOMADS server
v2 : Hybrid sigma coordinate; index increases with Z
v3 : NOAA pressure and ECMWF hybrid ½° data; index decreases with Z
v4 : GSD HRRR data on pressure level surfaces

apidump

Usage: apidump
Input: data.grib2
Output: standard

Decodes a GRIB2 message, providing a simple inventory of the contents. There are no command line options, and the GRIB message must be in a file named data.grib2.

arw2arl

USAGE-1: arw2arl [netcdf data file name]
USAGE-2: arw2arl -[options (default)]
   -b[beginning time period index (1)]
   -e[ending time period index (9999)]
   -t[time interval in minutes between outputs (60.0)]
   -s[create WRF variable namelist file for instantaneous winds]
   -a[create WRF variable namelist file for average fluxes]
   -k[create WRF variable namelist file for tke fields]
   -c[create and run with namelist file (1)=inst, 2=avrg flux, 3=tke]
   -d[diagnostic output turned on]
   -i[input netcdf data file name (WRFOUT.NC)]
   -o[output ARL packed file name (ARLDATA.BIN)]
   -p[packing configuration file name (ARLDATA.CFG)]
   -v[variable namelist file name (WRFDATA.CFG)]
   -n[number of levels to extract from sfc (50)]
   -z[zero initialization each time 0=no (1)=yes]

Advanced Research WRF (ARW) to ARL format: converts ARW NetCDF files to a HYSPLIT-compatible format. The WRFDATA.CFG namelist file can be manually edited to select other variables for output. X variables are not found in the WRF output but are created by this program. All variables require that a units conversion factor be defined in the namelist file. When the input file contains data for a single time period, the ARL format output file from each execution should be appended (cat >>) to the output file from the previous time period's execution.

GUI: arch_arw.tcl
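Because each single-time-period output must be appended in order to build a multi-period file, the byte-level equivalent of cat >> can be sketched in Python. A minimal sketch; the file names are illustrative, not fixed by the program:

```python
def append_arl(src_name, dst_name):
    """Append a single-time-period ARL file onto a growing multi-period file.

    ARL packed files are plain concatenations of fixed-length records,
    so appending whole files byte-for-byte is equivalent to `cat src >> dst`.
    """
    with open(src_name, "rb") as src, open(dst_name, "ab") as dst:
        dst.write(src.read())
```

Running this once per converted time period, in chronological order, yields the same result as the cat command described above.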

content

Usage: content
Input: GRIB1 file name
Output: standard

Dumps the contents, by section, of a GRIB1 file.

drn2arl

USAGE: drn2arl [setup file]
Input: DRONE_SETUP.CFG

Program to convert meteorological drone data for ONE vertical profile to standard packed form for input into other ARL programs. See the associated README file for more detailed instructions.

grib2arl

Usage: grib2arl [-options]
   -[14 options, see source code]
This program is an orphaned application: it has been moved to the ~/data2arl/legacy directory and pre-compiled versions are no longer available.

GUI: arch_{era|ecm}.tcl

narr2arl

Usage: narr2arl [file_1] [file_2] [...]

Converts a North American Regional Reanalysis (NARR) model pressure GRIB1 file to the ARL packed format. Processes only one time period per execution. Multiple input files may be specified on the command line, each containing different variables required for the specific time period. Input data are assumed to be already on a conformal map projection on model levels. However, wind vectors need to be rotated from true to the grid projection.

sfc2arl

Usage: sfc2arl [filein] [fileout] [clat] [clon] [optional process ID number]

Creates a gridded meteorological file for multiple time periods in ARL packed format from an ASCII input file of surface observations. For example:

YY MM DD HH MM MSLP SFCP T02M RH2M MIXD WDIR WSPD PASQ SHGT
07 09 04 11 00 1007 0937 0021 0079 552.0 270 5.0 4 00175
07 09 04 12 00 1008 0938 0022 0065 745.0 300 6.0 4 00175

The output grid is 25 by 25 points at 10 km resolution, centered on the command-line lat-lon position. This program is very similar to stn2arl, except that the input file contains the additional variables SFCP, T02M, RH2M, and SHGT, and the vertical coordinate of the output grid is sigma rather than absolute pressure units.
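The free-format record shown above can be read with a short parser. A sketch, assuming the column meanings given in the header line; the second MM (minutes) is renamed MN here only to keep the keys unique:

```python
# Column names follow the sfc2arl header line shown above; the second
# MM (minutes) is renamed MN so dictionary keys stay unique.
FIELDS = ["YY", "MM", "DD", "HH", "MN", "MSLP", "SFCP", "T02M", "RH2M",
          "MIXD", "WDIR", "WSPD", "PASQ", "SHGT"]

def parse_sfc_obs(line):
    """Split one free-format surface observation record into a dict.

    Tokens containing a decimal point become float, the rest int.
    """
    out = {}
    for name, tok in zip(FIELDS, line.split()):
        out[name] = float(tok) if "." in tok else int(tok)
    return out
```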

snd2arl

Usage: snd2arl [filein] [fileout] [clat] [clon] [mixing] [yymmddhh] ...
    [optional process ID number]

Creates a one-time-period ARL packed format meteorological file from a single rawinsonde over a 50 x 50 (10 km) grid, where clat-clon is the grid center and mixing is an integer from 1 to 7 defining the turbulent mixing, from convective (1) through neutral (4) to very stable (7). The input file should be free format with the following information (without the label record):

Pressure Temp DewPt Height Direction Speed
1007.00 27.40 18.40 93.00 150.00 4.63
1000.00 26.60 17.60 146.00 151.60 4.55
925.00 20.20 16.10 828.00 169.54 3.63
891.00 17.40 15.30 1150.44 178.16 3.18
883.89 17.20 15.25 1219.00 180.00 3.09

stn2arl

Usage: stn2arl [filein] [fileout] [clat] [clon] [optional process ID number]

Creates a gridded meteorological file for multiple time periods in ARL packed format from an ASCII input file of surface observations. For example:

YY MM DD HH MM WDIR WSPD MIXD PASQ
07 09 04 11 00 270 5.0 1500.0 4
07 09 04 12 00 280 5.0 1500.0 4

The output grid is 25 by 25 points at 10 km resolution, centered on the command-line lat-lon position. This program is very similar to sfc2arl, except that the input file contains only the minimum number of variables.

GUI: arch_user.tcl        Tutorial: meteo_enter.sh



Meteorological Data Editing

add_data

Usage: add_data [station data file name] [ARL gridded file name] ...
   followed by the optional parameters {none= (default)}:
   -d to turn on diagnostic output
   -g{#}, where # is the grid point scan radius (9999)
   -t{#}, where # is (1) for blending temperature or 0 for no blend
   -z{#}, vertical interpolation: (0)=PBL, 1=all, 2=above PBL

Edits the packed meteorological data file based upon selected observations in the station data input file. The station file contains wind direction and speed at various times, locations, and heights (in any order). Observations are matched with the gridded data interpolated to the same location. Gridded winds are then adjusted in direction and speed to match the observation, and the adjusted winds are interpolated (using 1/r^2 weighting) back into the gridded data domain within the mixed layer. The program is not intended to be a replacement for data assimilation, but a quick way to adjust the initial transport direction to match local observations near the computational starting point. The optional -g# parameter limits the radius of influence of the 1/r^2 weighting to # grid points in a radial direction from each observation location; the default (9999) is to use the whole grid. Vertical interpolation defaults to the mixed layer only, but can be set to process the entire profile or only the levels above the mixed layer.

Sample ASCII station observation input file (missing = -1.0)
-------- required ----------------------- ---- optional ---
YY MM DD HH MM LAT LON HGT DIR SPD TEMP Up2 Vp2 Wp2
95 10 16 00 00 39.0 -77.0 10.0 120.0 15.0 270.0 1.0 1.0 0.5
95 10 16 02 00 39.0 -77.0 10.0 120.0 15.0 270.0 1.0 1.0 0.5
95 10 16 04 00 39.0 -77.0 10.0 120.0 15.0 270.0 1.0 1.0 0.5
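The 1/r^2 weighting described above can be illustrated for a single grid point receiving adjustments from several observations. A minimal sketch, not the program's actual interpolation code:

```python
def blended_adjustment(obs_adjustments, distances):
    """1/r^2 weighted mean of several observation adjustments at one grid point.

    `distances` are the grid-point distances to each observation; a small
    epsilon guards against division by zero when the point coincides
    with an observation.
    """
    eps = 1e-6
    weights = [1.0 / (d * d + eps) for d in distances]
    return sum(a * w for a, w in zip(obs_adjustments, weights)) / sum(weights)
```

An observation three times farther away carries one ninth of the weight, so the nearer observation dominates the blend.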

add_grid

Usage: add_grid
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: addgrid.bin

Program to interpolate an existing meteorological data file to a finer spatial resolution grid at integer intervals of the original grid spacing. The program should be run before add_data to ensure that the resolution of the meteorological data file is similar to that of the observations. This combination of programs was used to merge WRF and DCNet data for dispersion model calculations.

add_miss

Usage: add_miss
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: addmiss.bin

Examines a file for whole missing time periods (records missing) and creates a new output file with interpolated data inserted into the proper locations. The first two time periods in the file cannot be missing. For files where the records exist but contain missing values, use program edit_miss instead. Files with DIFF records will not be interpolated correctly.

add_time

Usage: add_time
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - The output frequency in minutes
Output: addtime.bin

Program to interpolate additional time periods into a meteorological data file. The new data output interval should be an even time multiple of the existing data file. Options are set through standard input. Linear interpolation is used to create the data for the output time periods that fall between the time periods of the input data file. Files with DIFF records will not be interpolated correctly.

add_velv

Usage: add_velv
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Lower left corner of output grid (lat,lon)
   line 4 - Upper right corner of output grid (lat,lon)
   line 5 - Number of vertical levels including surface
Output: addvelv.bin

Extracts a subgrid from a larger domain file and adds additional 3D records for the velocity variances u^2, v^2, and w^2, which are computed from the TKE field. The program requires the input meteorological file to contain the turbulent kinetic energy field. The output file may then be used as an input file for other programs that will add observational data. The program can be used to create a meteorological data file with velocity variances over a smaller domain where observational variance data (i.e. DCNet) can be assimilated using add_data.

aer2arl

Usage: aer2arl
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Meteorological model top pressure (hPa)
Output: outfile.arl

Reads in an AER WRF formatted ARL file and writes out an ARL file with the new AWRF header and vertically staggered meteorological variables repositioned accordingly.

arl2meds

Usage: arl2meds
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: meds.fix

Converts one ARL SHGT (terrain height) data record to MEDS format. This file may be required by the meds2arl converter if the surface terrain (ST) variable is not included within the MEDS data file. Note that the data may need to be realigned with either the prime meridian or dateline to match the alignment of the MEDS data. The MEDS data alignment is written to the meds.txt diagnostic message output file by the meds2arl program. Therefore it may take one or two iterations of arl2meds and meds2arl to get the proper command line input parameters.

This program is an orphaned application as the primary converter program meds2arl has been moved to the ~/data2arl/legacy directory.

data_avrg

Usage: data_avrg
   prompted standard input:
   line 1 - Grid point filter (delta: i,j)
   line 2 - /meteorological/data/directory/
   line 3 - ARLformat_datafile_name
Output: average.bin

Averages ARL format meteorological data according to the input grid point filter options, such that each value is replaced by the average value of all grid points within the rectangular area ±i and ±j. Files with DIFF records will not be averaged correctly.
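The grid point filter can be illustrated as a simple moving-window mean. A sketch that ignores the packing details of the ARL format; at the grid edges only the part of the box inside the grid is used:

```python
def window_average(field, di, dj):
    """Replace each grid value with the mean over the box +/-di, +/-dj.

    `field` is a list of rows; at the edges the box is clipped to the grid.
    """
    ni, nj = len(field), len(field[0])
    out = [[0.0] * nj for _ in range(ni)]
    for i in range(ni):
        for j in range(nj):
            box = [field[ii][jj]
                   for ii in range(max(0, i - di), min(ni, i + di + 1))
                   for jj in range(max(0, j - dj), min(nj, j + dj + 1))]
            out[i][j] = sum(box) / len(box)
    return out
```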

data_del

Usage: data_del
   prompted standard input:
   line 1 - Variable to remove (4-char ID)
   line 2 - /meteorological/data/directory/
   line 3 - ARLformat_datafile_name
Output: clean.bin

This program deletes a variable from the ARL formatted meteorological file.

edit_flux

Usage: edit_flux
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - the new roughness length (m)
Output: editflux.bin

Edits all flux fields based upon a pre-determined roughness length. Using U = U* k / ln(Z/Zo) with the original Zo, and a new equation with a modified Zo' that represents the new larger roughness length, take the ratio of the two equations such that U*'/U* = ln(Z/Zo)/ln(Z/Zo'). For computational purposes, assume that Z is always one meter greater than Zo'. The momentum flux fields are then multiplied by this ratio, while T* is divided by it.
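The adjustment ratio follows directly from the equation above. A sketch, assuming Z is one meter above the new roughness length as stated:

```python
import math

def flux_ratio(z0_old, z0_new):
    """Ratio u*'/u* = ln(Z/z0_old) / ln(Z/z0_new) used by edit_flux.

    Per the description, Z is taken as one meter greater than the new
    roughness length z0_new.
    """
    z = z0_new + 1.0
    return math.log(z / z0_old) / math.log(z / z0_new)
```

Momentum fluxes would be multiplied by this ratio and T* divided by it; a larger new roughness length gives a ratio greater than one.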

edit_head

Usage: edit_head
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Incorrect time (YY MM DD HH FH)
   line 4 - Correct time (YY MM DD HH FH)

Edits the 50 byte ASCII header of each data record of a pre-existing meteorological data file. The program could be recompiled to perform other edits besides changing incorrect time labels.

edit_index

Usage: edit_index
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name

Edit the extended header (to byte 108) for each index record of an existing meteorological data file. The program needs to be modified and then recompiled to customize the edit for each problem encountered.

edit_miss

Usage: edit_miss
   prompted standard input:
   line 1 - maximum number of missing periods permitted
   line 2 - just list missing (0) or write into data file (1)
   line 3 - /meteorological/data/directory/
   line 4 - ARLformat_datafile_name
   line 5 - processing start day,hr

Program edit_miss is used to interpolate missing variables in an existing meteorological data file from adjacent time periods. The missing data must already exist in the file as valid records with either a blank or missing code in the field. For files with missing records, use program add_miss. Files with DIFF records will not be interpolated correctly.

edit_null

Usage: edit_null
   prompted standard input:
   line 1 - Data grid size (nxp, nyp) for all files
   line 2 - Data set name (with NULL field ID)
   line 3 - Output data set name with merged data
   line 4 - Data set name with the one-field to replace NULL

This program replaces one record per time period in a file where the missing record is identified by NULL. The new field is read from another file that contains only the one variable. The records are matched by time. For example, the program can be used to insert precipitation records into a file. Some editing and recompilation may be required.

file_copy

Usage: file_copy [file1] [file2]

Appends file1 to the end of file2. This program can be used in the event that type file1>>file2 (DOS) or cat file1>>file2 (UNIX) cannot be used.

GUI: arch_ecm.tcl

file_merge

Usage: file_merge
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_file#1_name
   line 3 - /meteorological/data/directory/
   line 4 - ARLformat_file#2_name

Merges the data records from file #1 into file #2 by replacing the records in file #2 with the same time stamp as those in file #1. If data times in file #1 go beyond the end of file #2, those records are appended to file #2.

pole2merc

Usage: pole2merc
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_northern_hemisphere_name
   line 3 - /meteorological/data/directory/
   line 4 - ARLformat_southern_hemisphere_name
   line 5 - Lower left corner (lat,lon) of Mercator grid
   line 6 - Upper right corner (lat,lon) of Mercator grid
Output: mercator.bin

Merges two meteorology files, a northern hemisphere and a southern hemisphere polar stereographic projection, into a single Mercator projection output file. The two polar grids must be identical in grid size.

rec_copy

Usage: rec_copy
   prompted standard input:
   line 1 - /meteorological/FROMdata/directory/
   line 2 - ARLformat_copyFROM_datafile_name
   line 3 - /meteorological/TOdata/directory/
   line 4 - ARLformat_copyTO_datafile_name
   line 5 - Relative to start time record numbers copied (start,stop)
   line 6 - Start copy at (day, hour)

Copies a range of records from file #1 to file #2 starting at the time specified. Both meteorological grids need to be identical. The records to be copied are specified by the range of record numbers relative to the record number of the starting time. Records in the destination file are replaced regardless of whether they match in terms of content.

rec_insert

Usage: rec_insert
   prompted standard input:
   line 1 - the data grid size (nxp, nyp)
   line 2 - the OLD data set name
   line 3 - the NEW data set name
   line 4 - the special MERGE data set name

The program is designed to be customized and recompiled for each application. The current version copies the records from the OLD meteorological data file into the NEW file. During the copy it tests for the NULL variable ID in the OLD meteorological data file and replaces that record with the equivalent record number in the special MERGE data file.

rec_merge

Usage: rec_merge
   prompted standard input:
   line 1 - the data definition file (METDATA.CFG)
   line 2 - the OLD data set name
   line 3 - the NEW data set name
   line 4 - the special MERGE data set name

The program merges in an additional data file (MERGE) with one record per time period, reading an OLD archive format data file (one without the index record) to the NEW style format as defined by data definition file. Information about the original purpose of this program is no longer available.

xtrct_grid

Usage: xtrct_grid [optional arguments] and prompted input
   -h [shows this help display]
   -p [add process ID to the file names]
   -g [use grid points to define the subgrid]
   -n [number of time periods to average (0)]
   Prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Lower left corner (lat,lon or i,j if -g set)
   line 4 - Upper right corner (lat,lon or i,j if -g set)
   line 5 - Number of data levels (including sfc)
Output: extract.bin

Extracts a subgrid from a larger domain meteorological file in ARL format. The subgrid is selected through the lower left and upper right corners either by latitude-longitude or grid point value if the -g command line option is invoked. Note that subgrids cannot be smaller than the minimum record length required to write an INDX record. The length of the index record is determined by the number of variables and levels.

xtrct_time

Usage: xtrct_time -p[process_id] -s[skip DDHH] -d[delete] -h[help]
   Prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - starting day, hour, min for extract
   line 4 - ending day, hour, min for the extract
   line 5 - skip time period interval (0=none 1=every other)
   line 6 - start and stop record numbers (over-ride or accept default)
Output: extract.bin

Program to extract a selected number of time periods from an ARL format meteorological data file. The (-d) delete option can be used to change the program to delete rather than extract the selected time period. The skip (-s) option only skips duplicate time periods matching the day hour (DDHH) specified.




Meteorological Data Examination

arl2grad

Usage: arl2grad {/meteo_dir_name/} {arl_filename} {output file name}
If the output filename is missing, then it is set to MODEL_ID.grd

Converts ARL packed meteorological data to Grads format

chk_data

Usage: chk_data
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: standard

A simple program to dump the first and last four elements of the data array for each record (i.e. variable) of an ARL packed meteorological file. It is used primarily for diagnostic testing.

chk_file

Usage: chk_file [-i{nteractive} -s{hort} -f{file} -t{ime}]
   -i{nteractive} prompted standard input:
     line 1 - /meteorological/data/directory/
     line 2 - ARLformat_datafile_name
     line + - Enter to continue ...
   -s{hort} only index record information output
   -f{file} output to CHKFILE{1|2|3} otherwise to standard output
   -t{ime} only writes the first and last record of each time period

Program to dump the 50 byte ASCII header on each record, in addition to a summary of the information contained in the index record.

chk_index

Usage: chk_index
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line + - Enter to continue when changes noted
Output: standard

Checks the extended header (bytes 1:108) for each meteorological index record to ensure that they are consistent for all time periods in the data file. There is one index record before all of the data records for each time period.

chk_rec

Usage: chk_rec
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: standard

Elementary program to dump the 50 byte ASCII header of each record.

chk_times

Usage: chk_times
   prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
Output: standard

Program to list all the time periods in a file and check that the number of records per time period is consistent. The maximum forecast hour is also returned.

data_year

Usage: data_year
Input: none (all meteorological files to be opened are hardcoded)
Output: AT{latitude}{longitude}

The program will create annual averages of a variable (temperature) at several pre-designated latitude-longitude positions using the 2.5° global reanalysis data. Directory, file names, and all other options are hardcoded.

datecol

Usage: datecol {meteorology file#1} {meteorology file#2} {...#n}
Output: standard

Prints the dates from an ARL format meteorology file to standard output as one line each for years, months, days, and hours. This program is used in several web applications.

datesmry

Usage: datesmry {output from program datecol}
Output: standard

Tabulates a set of dates from program datecol into bins for web use.

filedates

Usage: filedates [input file]

Lists the [YY MM DD HH FF] as a series of records to standard output showing all of the time periods in an ARL formatted meteorological data file. Used in several web applications.

findgrib

Usage: findgrib
   prompted standard input:
   line 1 - /GRIB1/directory/file_name
Output: standard

Finds the starting GRIB string at the beginning of each GRIB1 record and determines the record length between GRIB1 records from the byte count and the actual record length encoded in the binary data.

gridxy2ll

Usage: gridxy2ll [-filename options (in CAPS only)]
   -D[input metdata directory name with ending /]
   -F[input metdata file name]
   -P[process ID number for output file]
   -X[x point]
   -Y[y point]
   -W[image width]
   -H[image height]

Returns the latitude-longitude position of the upper-right corner point in meteorological grid units given the lower-left position and delta-width and -height in grid units.

inventory

Usage: inventory
Input: GRIB1 file name
Output: standard

Produces an inventory listing of all the records in a GRIB1 file.

metdates

Usage: metdates [input file]


Prints the dates from an ARL formatted meteorology file. The program is similar in function to filedates, and one or both of these are used for web applications.

metlatlon

Usage: metlatlon [directory filename]

Prints a text listing of the latitude-longitude associated with each grid point in an ARL formatted meteorological data file.

metpoint

Usage: metpoint [directory filename latitude longitude]

Determines if a location is within the domain of an ARL formatted meteorological data file. The command line contains the directory, file name, latitude, and longitude, and the program returns the i,j of the position. Negative values indicate the point is outside the grid.

profile

Usage: profile [-options]
   -d[Input metdata directory name with ending /]
   -f[input metdata file name]
   -y[Latitude]
   -x[Longitude]
   -o[Output time offset (hrs)]
   -t[Output time interval (hrs)]
   -n[Hours after start time to stop output (hrs)]
   -p[process ID number for output text file]
   -e[extra digit in output values (0)-no,1-yes]

Extracts the meteorological profile at the selected location with the values always written to a file called profile.txt, which may be appended by a process ID. Without the -t option set, only the first time period will be extracted. Use -o to start at a time after the first time period.

GUI: disp_prof.tcl        Tutorial: meteo_prof.sh

unpacker

Usage: unpacker
Input prompted on the command line: Line 1 - GRIB1 file name
Output: standard

Decodes each section and all variables in a GRIB1 meteorological file.

velvar

Usage: velvar
Input: CONTROL
Output: velvar.txt

Creates a time series of velocity variance and diagnostic values using the meteorological data and location specified in a standard HYSPLIT CONTROL file.

vmixing

USAGE: vmixing (optional arguments)
   -p[process ID]
   -s[KBLS - stability method (1=default)]
   -t[KBLT - PBL mixing scheme (2=default)]
   -d[KMIXD - Mixing height scheme (0=default)]
   -l[KMIX0 - Min Mixing height (50=default)]
   -a[CAMEO variables (0[default]=No, 1=Yes, 2=Yes + Wind Direction]
   -m[TKEMIN - minimum TKE limit for KBLT=3 (0.001=default)]
   -w[an extra file for turbulent velocity variance (0[default]=No,1=Yes)]
Output: STABILITY{processID}.TXT

Creates a time series of meteorological stability parameters.

xtrct_stn

Usage: xtrct_stn [-i{nteractive} -f{ile of lat-lons} -r{rotate winds to true}]
   Prompted standard input:
   line 1 - /meteorological/data/directory/
   line 2 - ARLformat_datafile_name
   line 3 - Number of variables to extract, for each variable ...
   line 4 -     CHAR_ID(a4) level#(i2) Units_multiplier(f10)
   line 5 - Position of data extraction (lat,lon) when NOT -f
   line 6 - Interpolation as Nearest Neighbor (0) or Linear (1)
   line 7 - Output file name
   line 8 - Initial output record number (to append to an existing file)
Output: ASCII; records grouped by time period, then station, with all variables on one line

Creates a time series of meteorological variables interpolated to a specific latitude-longitude point. Multiple variables may be defined, each by their 4-character ID. To enhance the interactive mode, prompting for each input line can be turned on (-i). If the file of latitude-longitude points is defined (-f) then the position of the extraction point is not required. The first record of the lat-lon file is the number of lines to follow.
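The Linear option above corresponds to bilinear interpolation from the four surrounding grid points, while Nearest Neighbor picks the closest corner. A minimal sketch of the two choices, not the program's actual code:

```python
def bilinear(f00, f10, f01, f11, dx, dy):
    """Bilinear interpolation inside one grid cell.

    f00..f11 are the values at the four cell corners; dx, dy are the
    fractional positions (0..1) of the extraction point within the cell.
    """
    return (f00 * (1 - dx) * (1 - dy) + f10 * dx * (1 - dy)
            + f01 * (1 - dx) * dy + f11 * dx * dy)

def nearest(f00, f10, f01, f11, dx, dy):
    """Nearest-neighbor choice: pick the closest of the four corners."""
    return (f00, f10, f01, f11)[(dx >= 0.5) + 2 * (dy >= 0.5)]
```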




Particle Utilities

asc2par

USAGE: asc2par -[options(default)]
   -i[input file name (PARDUMP.txt)]
   -o[output file name (PARDUMP.bin)]

Converts the ASCII formatted particle dump file, created by the program par2asc, to a big-endian binary file. Each output time is composed of a header record indicating the number of particles to follow for that time period:
Header Record: NUMPAR,NUMPOL,IYR,IMO,IDA,IHR
   Each particle output is composed of three data records:
   Record 1: MASS(1:NUMPOL)
   Record 2: TLAT,TLON,ZLVL,SIGH,SIGW,SIGV
   Record 3: PAGE,HDWP,PTYP,PGRD,NSORT

par2asc

USAGE: par2asc -[options(default)]
   -i[input file name (PARDUMP)]
   -o[output file name (PARDUMP.txt)]
   -v[inventory date/times (PARINV)]
   -a[optional one-line-per-particle output that
     can be imported into GIS:(0)-none ; 1-PAR_GIS.txt created]

Converts the particle dump output file (big-endian binary) to an ASCII-formatted particle file and, optionally, outputs an inventory file listing the date/time(s) of the output. Each output time of the ASCII-formatted particle file is composed of a header record indicating the number of particles to follow for that time period. With the optional -a1 text file option, PAR_GIS.txt is created, containing the header record plus one line per particle.

par2conc

USAGE: par2conc -[options(default)]
   -i[input file name (PARDUMP)]

Converts the particle dump output file (big-endian binary) to a HYSPLIT-formatted concentration file using a CONTROL file to determine the grid and averaging times.

parmerge

USAGE: parmerge -[options(default)]
   -i[input base file name (PARDUMP).000]
   -o[output file name (PARDUMP)]

Merges multiple PARDUMP.XXX files into a single file. These files are normally created by the MPI version. The program loops sequentially 001 through 999 for any existing PARDUMP files and merges the contents into one file. The program stops at the first missing file.

paro2n

USAGE: paro2n -[options(default)]
   -i[input file name (PARDUMP)]
   -o[output file name (PARDUMP.NEW)]

Converts the {o}ld particle dump output file (big-endian binary) to the {n}ew format that includes the minutes field.

parshift

USAGE: parshift -[options (default)]
   -b[blend shift outside of the window]
   -d[delete particles instead of shift/rotate (0)-all #-specie]
   -i[input base file name (PARDUMP)]
   -o[output base file name (PARINIT)]
   -r[rotation degrees:kilometers:latitude:longitude]
   -s[search for multiple files with .000 suffix]
   -t[time MMDDHHMN (missing process first time only)]
   -w[window corner lat1:lon1:lat2:lon2 (-90.0:-180.0:90.0:180.0)]
   -x[shift delta longitude (0.0)]
   -y[shift delta latitude (0.0)]

The program provides for the spatial adjustment of the particle positions in the HYSPLIT binary particle position output file as specified on the command line. One or more files may be processed in a single execution, with the adjusted output always written to a new file name. If the input file contains multiple time periods, the position adjustment is applied to only one of them. The shift is specified in delta latitude-longitude units. Unless a latitude-longitude window is specified, all particle positions in the file are adjusted. Adjustments outside of the window may be linearly blended to zero adjustment at a distance of two windows. Adjustments may also be specified as a rotation and distance from a point. Particles in a window may also be deleted.

GUI: par_shift.tcl        Tutorial: cust_toms.sh

stn2par

USAGE: stn2par -[options(default)]
   -i[input file name (meas-t1.txt)]
   -n[number of grid pts (50)]
   -g[grid interpolation (0.5) deg (0=stn)]
   -o[output file name (PARINIT)]
   -p[particle=(0) or puff=1 distribution]
   -r[turns on random initial seed number]
   -s[split factor (10) per station]
   -t[temporal output interval (24) hrs]
   -z[height distribution (3000) meters]

Converts measured data in DATEM format to a multi-time period PARDUMP file which can be used for HYSPLIT initialization or display applications.




Shapefile Manipulation

ascii2shp

Usage: ascii2shp [options] outfile type < infile
   reads stdin and creates outfile.shp, outfile.shx and outfile.dbf
   type must be one of these: points lines polygons
   infile must be in 'generate' format
   Options:
      -i Place integer value id in .dbf file (default)
      -d Place double precision id in .dbf file

Converts trajectory endpoints and concentration contours in GENERATE format to ESRI formatted shape files.
ascii2shp version 0.3.0
Copyright (C) 1999 by Jan-Oliver Wagner
The GNU GENERAL PUBLIC LICENSE applies. Absolutly No Warranty!
ARL ascii2shp version 1.1.10

GUI: asc2shp.tcl        Tutorial: disp_shp.sh

dbf2txt

USAGE: dbf2txt [-d delimiter] [-v] dbf-file

Extracts the text information from a database file (.dbf).

txt2dbf

Usage: txt2dbf [{-Cn | -In | -Rn.d}] [-d delimiter] [-v] txt-file dbf-file

Converts tab-delimited ASCII tables to dBASE III format. The table structure is defined on the command line:
  -Cn [text, n characters]
  -In [integer, with n digits]
  -Rn.d [real, with n digits (including '.') and d decimals]

GUI: asc2shp.tcl        Tutorial: disp_shp.sh



Trajectory Analysis

clusend

USAGE: clusend - [ options (default)]
   -a[max # of clusters: #, (10)]
   -i[input file (DELPCT)]
   -n[min # of clusters: #, (3)]
   -o[output file (CLUSEND)]
   -t[min # of trajectories: #, (30)]
   -p[min % change in TSV difference from one step to next: %, (30)]

Scans the DELPCT output file from the cluster program, locating the first "break" in the data. "Break" is defined as an ICRIT% increase in TSV.

GUI: trajclus_run.tcl       Tutorial: traj_clus.sh
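The break test can be sketched as a scan for the first step whose percent increase in TSV exceeds ICRIT. An illustration under that reading of the description, not the program's actual code:

```python
def first_break(tsv, icrit=30.0):
    """Return the index of the first step where TSV grows by more than
    icrit percent over the previous step, or None if no break occurs.

    `tsv` is the sequence of total spatial variance values, one per
    cluster-merge step (as in the DELPCT file).
    """
    for k in range(1, len(tsv)):
        if (tsv[k] - tsv[k - 1]) / tsv[k - 1] * 100.0 > icrit:
            return k
    return None
```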

cluslist

USAGE: cluslist - [ options (default)]
   -i[input file (CLUSTER)]
   -n[number of clusters: #, (-9-missing)]
   -o[output file (CLUSLIST)]

Lists the trajectories in each cluster for a given number of clusters.

GUI: trajclus_run.tcl       Tutorial: traj_clus.sh

clusmem

USAGE: clusmem - [ options (default)]
   -i[input file (CLUSLIST)]
   -o[output file (TRAJ.INP.Cxx)]
Input: CLUSLIST
Output: TRAJ.INP.C{i}, where i=cluster_number

Creates a file listing the trajectories in each cluster for input to programs trajplot or trajmean.

GUI: trajclus_run.tcl       Tutorial: traj_clus.sh

clusplot

USAGE: clusplot -[options (default)]
   -i[Input files: name (DELPCT)]
   -l[Label with no spaces (none)]
   -o[Output file name: (clusplot.ps)]
   -p[Process file name suffix: (ps) or label]
Input: DELPCT

Plots TSV data from the DELPCT file created by the cluster program. One can infer the final number(s) of clusters from the plot.

GUI: trajclus_run.tcl       Tutorial: traj_clus.sh

cluster

Usage: cluster
   prompted standard input:
   line 1 - Hours to do clustering
   line 2 - Endpoint interval to use
   line 3 - Skip trajectory interval
   line 4 - Cluster output directory
   line 5 - Map (0)-Auto 1-Polar 2-Lambert 3-Merc 4-CylEqu
Input: INFILE
Output: TCLUS, TNOCLUS, DELPCT, CLUSTER, CLUSTERno

Program starts with N trajectories (clusters) and ends with one cluster; MC is the current number of clusters, and program clusend identifies the stopping point. Cluster pairs are chosen based on total spatial variance (TSV, the sum of the within-group sum of squares). The two clusters to be paired are those resulting in the minimum increase in TSV.

GUI: trajclus_run.tcl       Tutorial: traj_clus.sh
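The merge criterion can be sketched as follows: compute the within-group sum of squares for each candidate pair and merge the pair whose combination least increases total TSV. A minimal illustration with endpoints flattened to (x, y) tuples, not the cluster program's actual code:

```python
def wss(cluster):
    """Within-cluster sum of squared deviations from the cluster mean.

    `cluster` is a list of members, each a tuple of coordinates.
    """
    n = len(cluster)
    means = [sum(m[i] for m in cluster) / n for i in range(len(cluster[0]))]
    return sum((m[i] - mu) ** 2 for m in cluster for i, mu in enumerate(means))

def best_merge(clusters):
    """Return (i, j, cost): the pair whose merge least increases total TSV."""
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            cost = (wss(clusters[i] + clusters[j])
                    - wss(clusters[i]) - wss(clusters[j]))
            if best is None or cost < best[2]:
                best = (i, j, cost)
    return best
```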

merglist

USAGE: merglist -[options (default)]
   -i[Input files: name1+name2+... or +listfile or (tdump)]
   -o[Output file base name: (mdump)]
   -p[Process file name suffix: (tdump) or process ID]

Program to merge HYSPLIT trajectory endpoint (tdump) files, where each input file contains one trajectory; these are merged into a single output file. Although this program is usually associated with clustering, it can be used in any trajectory application.

GUI: trajclus_run.tcl       Tutorial: traj_clus.sh

trajfind

Usage: trajfind [in_file] [out_file] [lat] [lon]

Processes multiple trajectories resulting from the time-height split option contained in [in_file] to extract a single trajectory to [out_file] that passes nearest to the selected latitude-longitude given on the command line. This program is primarily designed to determine optimal balloon trajectories showing a time sequence of balloon heights required to reach the final position.
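The nearest-trajectory selection can be illustrated as below. This is a hedged sketch, not the trajfind source: it picks the trajectory whose closest endpoint minimizes the great-circle distance to the target point.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers (haversine formula)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def closest_trajectory(trajectories, lat, lon):
    """Index of the trajectory whose closest approach to (lat, lon) is smallest."""
    def closest_approach(traj):
        return min(haversine_km(p[0], p[1], lat, lon) for p in traj)
    return min(range(len(trajectories)), key=lambda i: closest_approach(trajectories[i]))

# Two candidate trajectories from a hypothetical time-height split:
trajs = [
    [(40.0, -90.0), (41.0, -89.0), (42.0, -88.0)],
    [(40.0, -90.0), (39.0, -91.0), (38.0, -92.0)],
]
best = closest_trajectory(trajs, 42.0, -88.0)
```

For the balloon application, the selected trajectory's height column then gives the required time sequence of balloon heights.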

trajfreq

USAGE: trajfreq -[options (default)]
   -f[frequency file name (tfreq.bin)]
   -g[grid size in degrees (1.0)]
   -i[input file of file names (INFILE)]
   -h[number of hours to include in analysis (9999)]
   -r[residence time (0),1,2,3]:
     (0) = no
     1 = yes; divide endpoint counts by number of trajectories
     2 = yes; divide endpoint counts by number of endpoints
     3 = yes; divide endpoint counts by max count number for any grid cell
   -c[include only files with same length as first endpoint file]
     (0) = no
     1 = yes
   -s[select bottom:top (0:99999) m AGL]
   -b[YYMMDDHHNNFF - force begin date label (first file)]
   -e[YYMMDDHHNNFF - force end date label (last file)]
   -a[ascii2shp shapefile input file(0) or 1]
   -k[min longitude for ascii2shp shapefile input file]
   -l[max longitude for ascii2shp shapefile input file]
   -m[min latitude for ascii2shp shapefile input file]
   -n[max latitude for ascii2shp shapefile input file]

Converts multiple trajectory input files into a concentration file that represents trajectory frequencies.

GUI: traj_freq.tcl       Tutorial: traj_freq.sh
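The endpoint gridding and the -r normalization options can be sketched as follows. This is an illustrative Python sketch under simplified assumptions (a flat dictionary grid rather than a concentration file), not the trajfreq source.

```python
def trajectory_frequency(trajectories, grid=1.0, residence=0):
    """Bin trajectory endpoints onto a lat/lon grid of the given cell size (degrees).

    residence: 0 = raw endpoint counts, 1 = divide by number of trajectories,
               2 = divide by number of endpoints, 3 = divide by the maximum cell count.
    """
    counts = {}
    n_points = 0
    for traj in trajectories:
        for lat, lon in traj:
            cell = (int(lat // grid), int(lon // grid))   # grid-cell index
            counts[cell] = counts.get(cell, 0) + 1
            n_points += 1
    if residence == 1:
        return {c: v / len(trajectories) for c, v in counts.items()}
    if residence == 2:
        return {c: v / n_points for c, v in counts.items()}
    if residence == 3:
        peak = max(counts.values())
        return {c: v / peak for c, v in counts.items()}
    return counts

trajs = [[(40.2, -90.5), (40.7, -90.4)], [(40.3, -90.6), (41.5, -89.2)]]
freq = trajectory_frequency(trajs, grid=1.0, residence=1)
```

With residence = 1, a cell holding three endpoints from two trajectories gets the value 1.5.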

trajfrmt

Usage: trajfrmt [file1] [file2]

Reads a trajectory endpoints file1, reformats it, and writes file2. The program is designed to be a template for editing trajectory files; the code should be customized and recompiled for the intended problem. The current data record format is: (8I6,F8.1,2F9.3,11(1X,F8.1))
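A record in that Fortran layout can be read with a fixed-width parser; a Python sketch follows. The field widths come from the format string above, while the sample values and the interpretation of the trailing fields (height plus optional diagnostics) are illustrative assumptions.

```python
def parse_record(line):
    """Split a record laid out as (8I6,F8.1,2F9.3,11(1X,F8.1))."""
    ints = [int(line[6 * i: 6 * (i + 1)]) for i in range(8)]   # 8I6
    age = float(line[48:56])                                   # F8.1
    lat = float(line[56:65])                                   # F9.3
    lon = float(line[65:74])                                   # F9.3
    diag = []                                                  # up to 11(1X,F8.1)
    pos = 74
    while pos + 9 <= len(line) and line[pos + 1:pos + 9].strip():
        diag.append(float(line[pos + 1:pos + 9]))
        pos += 9
    return ints, age, lat, lon, diag

# Build a sample record with the same layout and round-trip it:
sample = (("%6d" * 8) % (1, 1, 95, 10, 16, 0, 0, 0)
          + "%8.1f" % 0.0
          + "%9.3f%9.3f" % (40.0, -90.0)
          + " %8.1f" % 850.0)
ints, age, lat, lon, diag = parse_record(sample)
```

A reformatting template in the spirit of trajfrmt would apply such a parser to each data record, edit the fields, and write them back with the same format.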

trajgrad

Usage: trajgrad [file name]
Output: grads.bin

Converts trajectory model output data from ASCII format to GrADS format.

trajmean

USAGE: trajmean -[options (default)]
   -i[Input files: name1+name2+... or +listfile or (tdump)]
   -m[Map projection: (0)-Auto 1-Polar 2-Lamb 3-Merc 4-CylEqu]
   -o[Output file name: (tmean)]
   -p[Process file name suffix: (ps) or process ID]
   -v[Vertical: 0-pressure (1)-agl]

Calculates the mean trajectory given a set of trajectories. The input file options follow those of trajplot; the output is a standard tdump-format file.

GUI: trajclus_run.tcl       Tutorial: traj_clus.sh
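The averaging step can be sketched as below. This is a simple per-time-step mean over equal-length trajectories, an illustration rather than HYSPLIT's code; averaging longitudes near the date line would need extra care that is omitted here.

```python
def mean_trajectory(trajectories):
    """Average each coordinate across trajectories at every output time step.

    Assumes all input trajectories have the same number of endpoints and
    the same fields per endpoint (here: lat, lon, height)."""
    n = len(trajectories)
    steps = len(trajectories[0])
    dims = len(trajectories[0][0])
    return [
        tuple(sum(t[k][d] for t in trajectories) / n for d in range(dims))
        for k in range(steps)
    ]

trajs = [
    [(40.0, -90.0, 500.0), (41.0, -89.0, 600.0)],
    [(42.0, -92.0, 700.0), (43.0, -91.0, 800.0)],
]
mean = mean_trajectory(trajs)
```

The resulting list of mean endpoints corresponds to the single trajectory written to the tmean output file.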

trajmerg

Usage: trajmerg [file1] [file2] [file3]

Merges trajectories in file1 and file2 into file3.