The excessive reliance on computer-generated model runs (at least on the large
scale) without critical analysis has led to what Len Snellman of the National
Weather Service has called "meteorological cancer". Forecasters must not simply
accept computer-generated forecasts as the truth and act merely as communicators,
but must critically assess how accurate those forecasts might be and recognize
their uncertainty.
Recently, owing to the relatively extreme uncertainty of explicit convection
forecasts resulting from nonlinear advection and other nonlinear processes,
"ensemble forecasts" are produced by running a model many times, each time with
slightly different initial conditions and model microphysics, to give the forecaster a
feeling for the range of possible realizations in the atmosphere. The best forecast
averaged over the long run may be the ensemble mean, but a look at all the
forecasts gives one a feeling for which low-probability, but possibly very
high-impact, events are nevertheless possible. Exactly how to implement ensembles,
how many members should be used, etc. are part science and part art.
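The idea can be sketched with a toy chaotic model. In this sketch the Lorenz-63 system stands in for a full numerical weather prediction model, and all parameter values (perturbation size, member count, step size) are illustrative assumptions, not operational settings:

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a standard
    toy model for chaotic (nonlinear) atmospheric dynamics."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run_ensemble(initial_state, n_members=20, n_steps=500, perturb=1e-3, seed=0):
    """Integrate many runs from slightly perturbed initial conditions
    and return the final state of every ensemble member."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(n_members):
        # Each member starts from a slightly perturbed initial condition.
        state = initial_state + perturb * rng.standard_normal(3)
        for _ in range(n_steps):
            state = lorenz63_step(state)
        finals.append(state)
    return np.array(finals)

members = run_ensemble(np.array([1.0, 1.0, 1.0]))
ensemble_mean = members.mean(axis=0)   # best forecast averaged over the long run
ensemble_spread = members.std(axis=0)  # a simple measure of forecast uncertainty
```

Because the dynamics are nonlinear, tiny initial perturbations grow into a visible spread among members; that spread, not the mean alone, is what conveys the forecast's uncertainty.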
Warnings for severe weather events are issued by the National Weather
Service in the U.S. mainly on the basis of detection of severe weather by Doppler
radar observations and public reports. There is, however, a movement now to
attempt to issue warnings at least in part based on the output from ensembles of
short-range, high-resolution, numerical model runs. This procedure is called "warn
on forecast" and, if successful, has the potential to extend warning lead times,
allowing people to get out of harm's way and to protect some structures in
advance, thereby lessening the effects of severe weather.
7.1.3 Evaluations of forecast skill
It is believed that a combination of ingredients-based and model-based forecasting
is necessary for the most accurate severe convection forecasts. The human fore-
caster plays a vital role in assessing the diagnostic quantities to determine whether
the proper ingredients are in place and in interpreting the model output and quan-
tifying uncertainty. He/she can make decisions in which erroneous observations
are discovered to have made it into the forecasting process and how much to
weight the impact of model forecasts vs. observations. Variables such as precipita-
tion totals, maximum wind speed, etc. may be verified at gridpoints and measures
of error used. It is beyond the scope of this text to explore the different measures
currently used. I recall once that Doug Lilly, in response to Bruce Morton, a fluid
dynamicist, who noted that while a certain phenomenon might seem to be
theoretically impossible, Lilly noted that it is in fact sometimes observed. When a
model produces a forecast, which is refuted by simple observations, we must go
with the observations.
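As a minimal illustration of gridpoint verification, a common error measure such as the root-mean-square error can be computed over a grid of forecast and observed values; the numbers below are hypothetical:

```python
import numpy as np

# Hypothetical forecast and observed precipitation totals (mm) on a small grid.
forecast = np.array([[10.0, 12.0], [8.0, 5.0]])
observed = np.array([[9.0, 15.0], [8.0, 4.0]])

# Root-mean-square error over all gridpoints: one common (but not the only)
# error measure for continuous variables.
rmse = np.sqrt(np.mean((forecast - observed) ** 2))

# Mean error (bias): a systematic tendency to over- or underforecast.
bias = np.mean(forecast - observed)
```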
Forecast verification for events in convective storms (e.g., of tornadoes, large
hail, strong winds) may be accomplished via various ad hoc quantities such
as the probability of detection (POD), which is the percentage of events forecast
(# of forecast events/sum of # of forecast and # of unforecast events), the false
alarm ratio (FAR), which is a measure of false alarms (# of unforecast events/
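Scores of this kind come from a 2x2 contingency table of forecast vs. observed events (hits = observed and forecast, misses = observed but not forecast, false alarms = forecast but not observed). A minimal sketch, using hypothetical counts and the standard definitions POD = hits/(hits + misses) and FAR = false alarms/(hits + false alarms):

```python
def verification_scores(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio from a 2x2
    contingency table of forecast vs. observed events."""
    pod = hits / (hits + misses)                # fraction of observed events forecast
    far = false_alarms / (hits + false_alarms)  # fraction of forecasts that failed
    return pod, far

# Hypothetical counts for a season of severe weather warnings.
pod, far = verification_scores(hits=30, misses=10, false_alarms=20)
```

A perfect forecaster has POD = 1 and FAR = 0; the two scores trade off against each other, since warning more often raises POD but also tends to raise FAR.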