1.2.2 Numerical model simulation experiments
Numerical modeling of severe convection and convective systems began in the mid-1970s, when computing power became adequate for simulating simple, two-dimensional features. The foundation for the modeling of convective storms had been laid in the 1960s and early 1970s by Yoshi Ogura at the University of Illinois at Urbana-Champaign and by Norm Phillips and Jule Charney at MIT; Carl Hane, at Florida State University and later at NSSL, did some early non-hydrostatic, two-dimensional squall-line work. Robert Schlesinger at the University of Wisconsin-Madison did pioneering work with a simplified model that excluded sound waves while retaining the compressibility of air and, in the late 1970s, produced successful three-dimensional simulations of supercells. The
presence of sound waves, a consequence of the compressibility of air, created problems: sound waves are of such high frequency that numerically integrating through each sound-wave oscillation would require very short time steps and would therefore be computationally prohibitive. Bob Wilhelmson at the University of Illinois at Urbana-Champaign and Joe Klemp at NCAR devised a method of numerical integration that retained the full compressibility of air while remaining computationally feasible. Their model became known
as the Klemp-Wilhelmson model and was used for almost two decades at NCAR,
before being superseded by the WRF (Weather Research and Forecasting Model)
at NCAR, which today is probably the most widely used community model in the U.S. Interestingly, Joe Klemp began his research after training in chemical engineering rather than in atmospheric science. His ability to conduct research with a technical background other than meteorology should be an inspiration to students who wish to undertake research on severe convective storms and tornadoes but have backgrounds in physics, applied mathematics, or engineering disciplines.
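The time-step constraint that motivated this work can be made concrete with a back-of-the-envelope CFL calculation. The sketch below is illustrative only: the grid spacing, sound speed, and storm-relative wind speed are assumed values, not figures from the text.

```python
# Illustrative sketch (assumed values): why resolving sound waves is
# expensive.  A CFL-type stability condition limits the time step to
# dt <= dx / c_max, where c_max is the fastest signal speed on the grid.

def max_stable_dt(dx_m: float, speed_ms: float) -> float:
    """Largest time step (s) allowed by a 1-D CFL condition."""
    return dx_m / speed_ms

DX = 250.0        # horizontal grid spacing (m); a common choice
C_SOUND = 340.0   # approximate speed of sound near the surface (m/s)
U_ADVECT = 50.0   # a strong storm-relative wind (m/s); assumed value

dt_sound = max_stable_dt(DX, C_SOUND)   # limited by sound waves
dt_advect = max_stable_dt(DX, U_ADVECT)  # limited by advection alone

print(f"dt limited by sound waves: {dt_sound:.2f} s")
print(f"dt limited by advection:   {dt_advect:.2f} s")
print(f"roughly {dt_advect / dt_sound:.0f}x more steps if sound waves "
      "set the time step")
```

With these numbers, the sound-wave limit is well under a second while the advective limit is several seconds, which is the gap that schemes retaining full compressibility, such as the Klemp-Wilhelmson model, were designed to bridge efficiently.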
Other models that have been widely used include the ARPS (Advanced
Regional Prediction System) at CAPS (Center for Analysis and Prediction of
Storms) at OU under the direction of Ming Xue; RAMS (Regional Atmospheric
Modeling System) at CSU (Colorado State University) under the direction of Bill
Cotton; and other models at the University of Wisconsin-Madison and Pennsylvania State University. Results of significant studies from other, less widely used models devised by Lou Wicker at NSSL and George Bryan at NCAR, among others,
also appear in the literature. Horizontal grid spacings down to 250 m, and sometimes down to 100 m, are commonly employed. It is possible to use coarser resolution to model the parent storm and, with a nested grid, finer resolution to model sub-storm-scale features such as tornadoes; when doing so, one must match the boundary conditions between grids carefully. Computer time, storage, and speed are the current limiting factors in using models with very fine grid spacing. When complex cloud and precipitation microphysics schemes are employed, the speed and storage requirements increase significantly.
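A rough scaling argument (assumed here, not stated in the text) shows why fine grid spacing is so costly: refining the spacing uniformly in three dimensions by a ratio r multiplies the number of grid points by r³, and the CFL condition shrinks the allowable time step by another factor of r, so total cost grows roughly as r⁴.

```python
# Illustrative sketch (assumed r^4 scaling): approximate cost multiplier
# when a 3-D model's grid spacing is refined uniformly in x, y, and z
# with a CFL-limited time step.

def relative_cost(dx_coarse_m: float, dx_fine_m: float) -> float:
    """Approximate cost multiplier: r^3 more grid points times r more
    time steps, where r is the refinement ratio."""
    r = dx_coarse_m / dx_fine_m  # refinement ratio
    return r ** 4

# Refining from 250 m to 100 m (the spacings mentioned above):
print(f"roughly {relative_cost(250.0, 100.0):.0f}x the cost")
```

Under this scaling, nested grids are attractive precisely because they confine the refinement to the subdomain containing the feature of interest, which is also why the boundary conditions between grids must be matched carefully.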
Some LES (large-eddy simulation) models² have been used to study tornadoes as isolated vortices making contact with the ground. With a horizontal resolution

² The shortest scales of motion are filtered out and parameterized.