taken as they are, and the meteorological input is not evaluated separately. Recently,
more attention has been given to the impact of the meteorological input data on the
results of the CTMs, as in the European initiative COST-728 (see also Stern
et al., 2008; de Meij et al., 2009).
The common practice in model evaluation studies is to evaluate off-line models,
in which meteorological information or meteorological model output is used as
input to the CTMs. Gradually, more and more on-line models are being developed,
in which the chemical modules are an integrated part of the meteorological model,
which enables the calculation of feedback mechanisms (Baklanov and Korsholm, 2007).
The approach to the evaluation of such on-line models, for example WRF-Chem,
is not yet fully clear (Grell et al., 2005).
Concerning observations, their quality assurance and especially their spatial
representativeness should be determined.
When comparing model results with observations, the first step should be a visual
inspection of various plots of observations against model results. Subsequently,
appropriate statistical methods should be applied (see for example Hanna et al., 1996;
Boylan and Russell, 2006). Ideally, a threshold should be set before the model
evaluation starts, stating that when the model performance falls below the
threshold, the model results are considered inadequate. As an
example, Boylan and Russell (2006) and Sartelet et al. (2007) have defined a
performance goal for aerosol modelling. The performance goal is met when the model
reaches the highest accuracy that can be expected; the performance criterion marks
an acceptable accuracy. They propose that for PM10 and its components the
performance goal for the mean fractional bias is <30% and the performance criterion
is <60%; for the mean fractional error these values are <50% and <75%, respectively.
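The mean fractional bias (MFB) and mean fractional error (MFE) used for these goals can be computed directly from paired model and observed concentrations. A minimal sketch in Python, following the definitions of Boylan and Russell (2006); the PM10 values are hypothetical, chosen only for illustration:

```python
def mfb(model, obs):
    """Mean fractional bias in percent: (2/N) * sum((M - O) / (M + O))."""
    n = len(model)
    return 100.0 * (2.0 / n) * sum((m - o) / (m + o) for m, o in zip(model, obs))

def mfe(model, obs):
    """Mean fractional error in percent: (2/N) * sum(|M - O| / (M + O))."""
    n = len(model)
    return 100.0 * (2.0 / n) * sum(abs(m - o) / (m + o) for m, o in zip(model, obs))

# Hypothetical PM10 concentrations (ug/m3) for four station-days.
obs = [22.0, 35.0, 18.0, 40.0]
model = [20.0, 30.0, 21.0, 33.0]

bias = mfb(model, obs)   # negative values indicate model underestimation
error = mfe(model, obs)

# Performance goal per Boylan and Russell (2006): |MFB| <= 30%, MFE <= 50%.
goal_met = abs(bias) <= 30.0 and error <= 50.0
print(f"MFB = {bias:+.1f}%, MFE = {error:.1f}%, goal met: {goal_met}")
```

Because the denominator is the mean of model and observation rather than the observation alone, both metrics are symmetric and bounded, which is why they are preferred over the normalised bias for aerosol evaluation.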
In a model evaluation study, sensitivity runs should also be defined and carried out.
During a model evaluation, it should be kept in mind that the total model
uncertainty is the sum of the input data uncertainty, the model uncertainty and
the variability. The variability, caused by atmospheric turbulence and meandering,
is the part of the uncertainty contained in the observations that cannot be
reduced. Moreover, the emission data in particular carry an inherent uncertainty
that cannot be reduced: country-mean, yearly averaged VOC emissions can never be
more accurate than ±30% (Borrego, 2009).
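The decomposition above can be made concrete with a small worked example. In this sketch all component magnitudes except the ±30% VOC emission figure are assumed for illustration; only the variability term is irreducible, so it bounds the best achievable total from below:

```python
# Hypothetical relative uncertainty contributions (fractions of the
# predicted concentration). Only the 0.30 emission figure comes from
# the text (Borrego, 2009); the other values are assumed.
input_uncertainty = 0.30   # e.g. inherent uncertainty of VOC emission data
model_uncertainty = 0.15   # model formulation/parameterisation (assumed)
variability = 0.10         # turbulence and meandering; irreducible (assumed)

# Simple additive combination, as formulated in the text.
total_uncertainty = input_uncertainty + model_uncertainty + variability
print(f"total uncertainty ~ {total_uncertainty:.0%}")  # prints "total uncertainty ~ 55%"

# Even a hypothetical perfect model with perfect input cannot go below
# the variability floor.
best_achievable = variability
```

The practical consequence is that perfect agreement with observations should never be the evaluation target: the irreducible terms set a floor on the achievable mismatch.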
2. Model Intercomparison and Model Evaluation
One of the first European regional-scale model intercomparison and evaluation
studies was carried out by Hass et al. (1997). Four photochemical models, i.e.
EMEP, EURAD, REM-3 and LOTOS, were evaluated for an 8-day ozone episode
in August 1990. The results showed that the models were capable of simulating the
daily pattern of O3 concentrations to within 10-30% of the observations. The results
of this study were later used in one of the first papers on an ensemble approach,
by Delle Monache and Stull (2003).