The reliability of a reconstruction could be tested through a process known as “validation” or “verification.” In common scientific parlance, these terms describe the process of independent confirmation of a previous finding. In the present context, unlike, say, performing a new experiment that backs up previously reported results,
one is instead seeking to demonstrate that a statistical model for a phenomenon can successfully
predict independent data that were not used in establishing the model in the first place. In other
words, it is a way of demonstrating that the statistical relationship is real, and not just a fluke of
statistics. Rather than using the full available instrumental record for the calibration of the proxy data,
one leaves some subinterval of the instrumental record aside for testing purposes. The proxy data are
then calibrated over the shortened calibration interval, and the resulting statistical model is used to
predict variations in climate over the remaining subinterval that was not used in the calibration
process. Since no information from that part of the instrumental record was used to calibrate the
proxy data, the extent to which the climate variations predicted by the statistical model match the
climate changes that were actually observed over that time interval provides a true test of the
reliability of the climate reconstruction.
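To make the procedure concrete, here is a minimal sketch in Python of the split-sample idea described above, using an invented proxy series and a simple one-variable linear calibration. Everything in it (the data, the model, the dates, the numbers) is illustrative only; the actual reconstruction relied on a far more elaborate multivariate method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented annual series over the instrumental era (purely illustrative).
    years = np.arange(1856, 1981)
    temperature = 0.005 * (years - years.mean()) + rng.normal(0.0, 0.2, years.size)
    proxy = 2.0 * temperature + rng.normal(0.0, 0.3, years.size)

    # Calibrate on the data-rich later interval; hold out the earlier,
    # sparser years as independent validation data.
    calib = years >= 1902
    valid = ~calib

    # Calibration: fit proxy -> temperature over the calibration interval only.
    slope, intercept = np.polyfit(proxy[calib], temperature[calib], deg=1)

    # Validation: predict climate over the subinterval the model never saw.
    predicted = slope * proxy[valid] + intercept
    observed = temperature[valid]

    # One common verification score, the reduction of error (RE): RE > 0
    # means the model beats simply guessing the calibration-period mean.
    re = 1.0 - np.sum((observed - predicted) ** 2) / np.sum(
        (observed - temperature[calib].mean()) ** 2
    )
    print(f"validation RE = {re:.2f}")

A model that had merely fit noise in the calibration interval would score poorly here, which is precisely the fluke-detection role the validation step plays.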
This process of validation or verification is essential in establishing the credibility of a proxy-
based climate reconstruction. There is, however, an important compromise that must be struck: The
shorter the calibration period, the less robust the resulting statistical model, but the shorter the
validation period, the less reliable the independent validation test.14 The information from the
calibration and validation process can also be used to estimate statistical uncertainties in the
reconstructions, yielding the important margins of error (variously referred to as “error bars” or
“confidence intervals”) that characterize the envelope of uncertainty surrounding the climate
reconstructions.
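Continuing the sketch above (again only as an illustration; real uncertainty estimates are considerably more involved), the validation residuals supply such a margin of error: their spread over the held-out interval translates directly into an uncertainty envelope around the reconstructed values.

    # Continuing the sketch: the spread of the validation-period errors
    # gives a crude margin of error for the reconstruction.
    residuals = observed - predicted
    sigma = residuals.std(ddof=1)

    # A roughly 95% (two-sigma) envelope around each reconstructed value.
    lower = predicted - 2.0 * sigma
    upper = predicted + 2.0 * sigma
    print(f"margin of error: +/- {2.0 * sigma:.2f} degrees")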
The instrumental data become increasingly sparse as one goes back in time before the twentieth
century; indeed, only a handful of records are available as far back as the early nineteenth century. On
the other hand, proxy data are available only up through the time they were collected. Many of the key
proxy records were obtained during the 1970s or early 1980s, and proxy data become increasingly
sparse after that period. We consequently chose the data-rich interval of 1902 to 1980 for our
calibration period, leaving aside the earlier nearly half-century of sparser instrumental data for the
validation tests.15 These tests established that the statistical reconstructions were skillful—that is, meaningful in a statistically rigorous sense16—as far back as A.D. 1400, but not earlier than that.
Onto the World Stage
When we initially wrote up our results for publication, we focused on what we felt was most
scientifically interesting, for example, that we recovered an unusual pattern for the 1816 “year
without a summer” that indicated a very cold Eurasia and lower-than-average temperatures in North America (observations that are independently confirmed by historical accounts), but a warmer-than-usual Middle East and Labrador (who knew?). Or that we had independently affirmed anecdotal
accounts that there was a whopper of an El Niño event in 1791—a year that, according to our
reconstruction, also happened to be a comparative scorcher for Europe and a large part of North
America. Then we did the least scientifically interesting thing one could possibly do with these rich
spatial patterns: We averaged them to obtain a single number for each year, the Northern Hemisphere average temperature.