This menu can be used to solve the Transfer Coefficient Matrix (TCM) equation for the source term vector, given a measured data vector, where the matrix values are the dilution factors for each source-receptor pair. Measured data at multiple receptor locations and/or times are required and must be provided in the DATEM format. The format of this file is discussed in more detail in the GeoLocation menu. If a time-varying source solution is required, the Run Daily menu can be used to generate the required dispersion simulations.
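Conceptually, the solver finds the source vector that best reproduces the measurements through the TCM. A minimal sketch of that linear system with made-up numbers (not from any actual simulation), using an unconstrained least-squares fit in place of the bounded solver described below:

```python
import numpy as np

# Illustrative only: a 3-receptor x 2-source dilution-factor matrix
# (concentration per unit emission) and a measured concentration vector.
tcm = np.array([[0.8, 0.1],
                [0.3, 0.5],
                [0.1, 0.9]])
measured = np.array([1.9, 2.1, 2.9])

# Least-squares solution of TCM * source = measured for the source vector.
source, residuals, rank, _ = np.linalg.lstsq(tcm, measured, rcond=None)
```

Here the recovered source vector is (2, 1) per emission period... sorry, (2, 3); in practice the system is much larger and the bounded, regularized solver below is used instead of a bare least-squares fit.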

Technical details regarding the computational approach used to solve the TCM can be
found in: T. Chai, R. Draxler, and A. Stein, "Source term estimation using air
concentration measurements and a Lagrangian dispersion model–Experiments with pseudo
and real cesium-137 observations from the Fukushima nuclear accident",
*Atmospheric Environment*, 106, 241-251.

Input files for the inverse modeling executable **lbfgsb**:

- Parameters_in.dat : control parameters
- CSV_IN : TCM file
- APRIORI : file name of first guess source terms if available

Output files for the inverse modeling executable **lbfgsb**:

- SOURCE_OUT_000 : release results
- CONC_OUT_000 : concentrations generated with 'out.dat' release
- Iterate_000 : minimization progress
- RPT_OUT_000 : additional run time output

**Step 1: defines the binary input files**, which are the output files from the dispersion
simulations configured to produce output at an interval that can be matched to the measured
sampling data. Ideally the model simulation emission rate should be set to a unit value. Each
simulation should represent a different discrete emission period. For example, a four day
event might be divided into 16 distinct 6-hour duration emission periods. Therefore the matrix
would consist of 16 sources (columns) and as many rows as there are sampling data. The
entry field in step 1 represents the wild card string *entry* that will be matched to the files
in the *working* directory. The file names will be written into a file called *INFILE*.
This file should be edited to remove any unwanted entries.
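The wild-card matching in this step can be pictured with a short script; the pattern `cdump.*` is only a hypothetical example of the entry string:

```python
import glob

# Hypothetical wild card pattern; in the GUI this is the "entry" field
# matched against the files in the working directory.
pattern = "cdump.*"

# Collect the matching dispersion output file names, one per emission
# period, and write them to INFILE (edit it afterwards to drop any
# unwanted entries).
names = sorted(glob.glob(pattern))
with open("INFILE", "w") as f:
    for name in names:
        f.write(name + "\n")
```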

**Step 2: defines the measured data input file** which is an ASCII text file in the
DATEM format. The first record provides some general information,
the second record identifies all the variables that then follow on all subsequent records. There
is one data record for each sample collected. All times are given in UTC. This file defines the
receptor data vector for the matrix solution. It may be necessary to edit this file to remove
sampling data that are not required, or to edit the simulation that produces the coefficient
matrix to ensure that each receptor is within the modeling domain.
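Reading such a file can be sketched as follows, assuming only the layout described above: two header records followed by whitespace-delimited sample records with the measured value in the last column (an assumption; the actual column order is given by the variable names in the second record):

```python
def read_datem(path):
    """Read a DATEM-like file: skip the two header records, then treat
    each remaining line as one sample record whose last column is the
    measured value (an assumption; check the names in record 2)."""
    values = []
    with open(path) as f:
        lines = f.readlines()[2:]   # skip the two header records
    for line in lines:
        fields = line.split()
        if fields:                  # ignore blank lines
            values.append(float(fields[-1]))
    return values
```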

**Step 3: defines the unit conversion details** from the sampling units to the
emission units. For instance, the default value of 10^{+12} converts the emission
rate units pg/hr to g/hr if the sampling data are measured in pg/m^{3}
(pico = 10^{-12}). The exponent is +12 rather than -12 because it is applied to the
model results in the denominator (Emission=Measured/Model_TCM). The height and species fields
are the index numbers to extract from the concentration file if multiple species and levels
were defined in the simulation. The half life (in days) is only required when dealing with
radioactive pollutants and the measured data need to be decay corrected back to the
simulation start time.
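The decay correction back to the simulation start time amounts to a simple half-life correction; a generic sketch (not the program's actual code) using cesium-137's half life of roughly 30 years:

```python
import math

def decay_correct(measured, t_sample_days, t_start_days, half_life_days):
    """Correct a measured concentration of a radioactive species back to
    the simulation start time, assuming simple exponential decay."""
    elapsed = t_sample_days - t_start_days
    return measured * math.exp(math.log(2.0) * elapsed / half_life_days)

# Example: a sample taken 8 days after the simulation start; for a
# long-lived species (cesium-137, ~10980 days) the correction is tiny.
corrected = decay_correct(1.0, 8.0, 0.0, 10980.0)
```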

**Step 4: creates the comma delimited input file** called *c2array.csv* with the
dilution factors in a column for each source and where each row represents a specific
receptor location. The last column is the measured value for that receptor. The column
title represents the start time of the emission in days from the year 1900. This step calls
the program *c2array* which reads each of the measured data values and matches them to
the input files to extract the dilution factors from each source to that measured value.
This step also creates an output file called *c2array.txt* which contains the
number of rows and columns in the matrix. This information is needed when creating the
*Parameters_in.dat* input file created by Step 5.
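The column titles (emission start in days from the year 1900) can be reproduced with the standard library; a minimal sketch, noting that the exact convention could differ by one day depending on whether 1900-01-01 counts as day 0 or day 1:

```python
from datetime import datetime

def days_from_1900(year, month, day, hour=0):
    """Fractional days since 1900-01-01 00:00 UTC, in the style of the
    c2array.csv column titles (day-numbering origin is an assumption)."""
    delta = datetime(year, month, day, hour) - datetime(1900, 1, 1)
    return delta.days + delta.seconds / 86400.0
```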

**Step 5: creates the PARAMETER_IN_000 input file** used by the inverse modeling executable.

- The first guess value represents an estimate of the source term. If time-varying information is known, it can be specified by a negative value in this field and a file name defined in the APRIORI variable.
- The scaling factor is used to reduce the numeric range of both the TCM values and the emission solution. A smaller range improves the solution convergence.
- The first-guess uncertainty needs to be defined as the sum of a fraction and constant value.
- The uncertainty should also be defined for the measurements by defining a fraction and fixed value.
- The solution may be bounded or unbounded.
- A logarithmic transformation can be applied to the solution or TCM results prior to computing a solution.

**Step 6: runs the inverse modeling executable** with the options defined in Step 5.
Different solutions can be tested by sequentially repeating steps 5 and 6. The solution
results from *SOURCE_OUT_000* are copied to the output file *source.txt* defined in
this step and also displayed by the GUI. The *Parameters_in.dat* file can also be edited
manually to set parameters not defined in the GUI; in that case only repeating step 6 is
required. A solution may contain negative values as well as extreme positive emission
results. Such values are not realistic and result from model errors or other
uncertainties.
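Negative emissions typically arise from an unconstrained fit to imperfect data; bounding the solution (Step 5) is the usual remedy. As an illustration with a toy two-source system (values made up), compare an unconstrained least-squares fit with a crudely clipped one:

```python
import numpy as np

# A toy TCM whose observations drive the unconstrained solution
# negative (values are illustrative only).
tcm = np.array([[1.0, 0.9],
                [0.9, 1.0]])
measured = np.array([1.0, 0.8])

# Unconstrained least squares: the second source comes out negative.
free, *_ = np.linalg.lstsq(tcm, measured, rcond=None)

# Crude non-negativity by clipping; a real bounded solver (lbfgsb with
# lbfgs_nbd=1, lower bound only) enforces this within the minimization.
nonneg = np.clip(free, 0.0, None)
```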

**Fukushima Example Inputs** assuming emissions output units *mBq*

- bckg_const=1e17
- LN_X=.false.
- LN_Y=.true.
- lbfgs_nbd=1
- X_Scaling=1d12
- Unc_o_f=1d-1
- Unc_o_a=3d-0
- Unc_b_f=1d3
- Unc_b_a=1d14
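Plugging these example values into the variance formulas from the UNCERTANTY section below gives, for instance:

```python
# Fukushima example settings (from the list above).
Unc_o_f, Unc_o_a = 1e-1, 3.0   # observation fractional / additive parts
Unc_b_f, Unc_b_a = 1e3, 1e14   # first-guess fractional / additive parts

def obs_variance(obs):
    """Observation variance = (obs*Unc_o_f + Unc_o_a)**2."""
    return (obs * Unc_o_f + Unc_o_a) ** 2

def apriori_variance(xoptim):
    """A priori source variance = (xoptim*Unc_b_f + Unc_b_a)**2."""
    return (xoptim * Unc_b_f + Unc_b_a) ** 2
```

For example, an observation of 10 has variance (10*0.1 + 3)^2 = 16 in the measurement units.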

**PARAMETER_IN_000** detailed description

- ================ DIMENSIONS ================
- N_ctrl: Number of unknown source terms. If the source terms are 2-dimensional, Nx_ctrl and Ny_ctrl are their ranges. Note that N_ctrl = Nx_ctrl * Ny_ctrl

- ================ TCM_INPUTS ================
- CSV_IN: TCM file name (in csv format)
- N_obs: Number of observations
- CSV_IN file has (N_obs + 1) lines. The first line contains the times for all source terms (N_ctrl columns). From line 2 onward, each line has N_ctrl + 1 columns, with the observation listed in the last column.

- ================ RUN_CONTRL ================
- bckg_const: constant first guess; a negative value prompts the code to read the first guess from the APRIORI file
- APRIORI: File name of the a priori source terms
- LN_X/LN_Y: Switches for using log or original variables for control/metric variables

- ================ SMOOTH_PNT ================
- Smoother: Switch for smoothness penalty term
- c_smooth: constant to control the source(t) smoothness, trial and error is needed to decide on the magnitude

- ================ SMOOTH_P2D ================
- Switch and parameters to control smoothness of 2-dimensional sources

- ================ MODEL_UNC ================
- UNC_Model: Under development. Should be turned off at the moment.
- T_model_unc: Time scale of the model uncertainty growth. Still under development. Code section is not available.
- Floor_x: Lower bounds of control
- Ceiling_x: Upper bounds of control
- Floor_y: When LN_Y is turned on, Floor_y is enforced to avoid infinity

- ================ LBFGS_CTRL ================
- Max_iterations: Maximum number of function evaluations before termination. Refer to the *lbfgsb* source code for a description of the other parameters.

- ================ BOUNDS_ARR ================
- lbfgs_nbd is an array to indicate the type of bounds imposed on the control variables, and must be specified as follows:
- 0 : if x(i) is unbounded
- 1 : if x(i) has only a lower bound
- 2 : if x(i) has both lower and upper bounds
- 3 : if x(i) has only an upper bound
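The effect of these bound types can be illustrated with a tiny projected-gradient sketch; this is only an illustration of how bounds clip the solution, not the actual L-BFGS-B algorithm:

```python
def projected_gradient_descent(grad, x0, lower=None, upper=None,
                               lr=0.1, steps=500):
    """Minimize a 1-D function by gradient steps, projecting onto the
    bounds after each step: lower only (nbd=1), both (nbd=2), upper
    only (nbd=3), or none (nbd=0)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
        if lower is not None:
            x = max(x, lower)   # enforce lower bound
        if upper is not None:
            x = min(x, upper)   # enforce upper bound
    return x

# Minimize (x+2)^2 with a lower bound of 0 (the nbd=1 case): the
# unconstrained minimum at x = -2 is clipped to the bound.
x_bounded = projected_gradient_descent(lambda x: 2.0 * (x + 2.0), 5.0, lower=0.0)
```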

- ================ UNCERTANTY ================
- Uncertainties of the observations and of the a priori (first guess) are each specified in two parts: one proportional to the variable itself (the fractional part) and one independent of the variable (the additive part). In addition to the measurement uncertainties, the observational uncertainties should include the representativeness uncertainties.
- X_Scaling is useful when either the source or the observations are too small or too large in the current units.
- xoptim=emission rate /X_Scaling
- [emission rate variance = xoptim variance * X_Scaling**2]
- Observation variance = ( obs*Unc_o_f + Unc_o_a)**2
- a_priori source variance = (xoptim*Unc_b_f + Unc_b_a)**2
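These variances weight the two parts of the usual variational cost function, which can be sketched generically (this is a standard 2-norm cost of this form, not the executable's exact code):

```python
import numpy as np

def cost_function(xoptim, tcm, obs, x_apriori,
                  unc_o_f, unc_o_a, unc_b_f, unc_b_a):
    """Generic cost: misfit to the observations plus departure from the
    a priori, each term weighted by the variances defined above."""
    model = tcm @ xoptim
    obs_var = (obs * unc_o_f + unc_o_a) ** 2
    bkg_var = (xoptim * unc_b_f + unc_b_a) ** 2
    j_obs = np.sum((model - obs) ** 2 / obs_var)
    j_bkg = np.sum((xoptim - x_apriori) ** 2 / bkg_var)
    return 0.5 * (j_obs + j_bkg)
```

When the scaled solution reproduces the observations exactly and equals the a priori, the cost is zero; the minimizer trades the two terms off according to the uncertainty settings.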