metrics exist for episodic simulations, but a comparable set of metrics for long-
term simulations remains elusive. This report outlines key aspects of model
performance evaluation using a 14-month California simulation as an example.
Traditional evaluation techniques are discussed, along with less frequently used
approaches that may be more appropriate for long-term simulations.
2. Model Application
A 14-month simulation (December 1999-January 2001) was conducted with
CMAQv4.6 for a California domain (Fig. 1) with 12- × 12-km horizontal grid cells
and 15 vertical layers. Meteorological inputs were generated from simulations
with the mesoscale meteorological model (MM5), and emissions were prepared
internally at the California Air Resources Board. Gas-phase chemistry followed
the SAPRC99 mechanism; aerosol processes were modeled with the AE4 module.
Fig. 1. Simulation domain with observation sites and San Joaquin Valley transects
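For concreteness, the run settings listed above can be collected in a single record. The sketch below is purely illustrative: it is a hypothetical Python summary of the configuration described in the text, not an actual CMAQ input file or run script, and the field names are this sketch's own.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RunConfig:
    """Illustrative summary of the 14-month CMAQ application described above."""
    model: str = "CMAQ v4.6"
    start: date = date(1999, 12, 1)   # 14-month simulation period
    end: date = date(2001, 1, 31)
    dx_km: float = 12.0               # horizontal grid spacing (12 x 12 km cells)
    dy_km: float = 12.0
    n_layers: int = 15                # vertical layers
    gas_mechanism: str = "SAPRC99"    # gas-phase chemistry
    aerosol_module: str = "AE4"       # aerosol processes
    met_model: str = "MM5"            # source of meteorological inputs

config = RunConfig()
print(f"{config.model}: {config.dx_km} x {config.dy_km} km grid, "
      f"{config.n_layers} layers, {config.start} to {config.end}")
```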
3. Approaches for Model Performance Evaluation
Traditional techniques. Model performance has traditionally been evaluated using
statistical measures of bias and error based on grid-cell average predictions matched
in space and time with observations from sites in the domain. These statistical
measures concisely summarize aspects of general model performance, and goals
for acceptable performance can be defined in terms of statistical quantities. Visually,
model performance is often examined using scatter and time-series plots of pollutant
concentration. An example of such a plot for our simulation is given in Fig. 2.
Despite their usefulness, statistical performance measures provide limited information
on the accuracy of individual parameterizations of atmospheric processes.
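The text does not reproduce the statistical measures themselves; the sketch below is a minimal example of how such bias and error metrics are commonly computed from predictions and observations matched in space and time, assuming the standard definitions (mean bias, mean gross error, normalized mean bias and error, RMSE, correlation). The function name and the example values are hypothetical.

```python
import numpy as np

def performance_stats(model, obs):
    """Traditional bias/error statistics for paired grid-cell predictions and
    observations (matched in space and time). Standard formulas; illustrative only."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    diff = model - obs
    return {
        "mean_bias": diff.mean(),                                 # MB
        "mean_error": np.abs(diff).mean(),                        # ME (gross error)
        "normalized_mean_bias": diff.sum() / obs.sum(),           # NMB
        "normalized_mean_error": np.abs(diff).sum() / obs.sum(),  # NME
        "rmse": np.sqrt((diff ** 2).mean()),                      # root-mean-square error
        "r": np.corrcoef(model, obs)[0, 1],                       # correlation coefficient
    }

# Hypothetical daily ozone pairs (ppb) purely for illustration
stats = performance_stats(model=[48.0, 55.0, 61.0, 40.0],
                          obs=[45.0, 60.0, 58.0, 42.0])
print(stats)
```

Performance goals can then be stated as thresholds on quantities such as NMB or NME, which is how acceptable performance is typically defined in practice.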
 