“not understand how the story got past the editors.” Stated Allen, “it had this bizarre undertone of
being investigative but it didn't investigate … it purported to be authoritative, and it's just full of
holes.”
Of Tribes and Trees
One of the most serious allegations in the WSJ article was the claim by McIntyre that the hockey stick
was an artifact of the statistical conventions we had used. McIntyre and McKitrick had quietly
dropped their erroneous original assertion (in their 2003 paper discussed in chapter 8) that the
hockey stick was an artifact of bad data. Their new, albeit equally erroneous, assertion was that the
hockey stick was an artifact of the conventions used in applying principal component analysis (PCA)
to certain tree ring networks, which, they argued, “manufactured Hockey Sticks” even from pure
noise. They had initially submitted this argument to Nature as a comment on the original MBH98
article. Nature rejected their submission for lacking merit, 13 but the journal Geophysical Research
Letters (GRL) published the comment as an article two days before Regalado's February 12, 2005,
WSJ piece ran. 14
To investigate this newest claim and why it is false, we need to revisit the
statistical concept of PCA introduced in chapter 4. As deniers have fastened onto McIntyre's claim in
attacks against the hockey stick that persist to this day, it is important to get to the bottom of it.
Interestingly, it wouldn't be the first time that it was necessary to delve into this seemingly arcane
construct (or, at least, its close relative) to understand the origins of a societally relevant scientific
controversy.
In The Mismeasure of Man, Stephen Jay Gould explained how early-twentieth-century psychology
researchers had misused statistics to argue for the existence of a unique and unitary measure of human
intelligence—what they believed was a truly robust measure of IQ that could be applied across
cultures to rank the relative intelligence of members of various races, cultures, and even individual
tribes. 15 The researchers in question were Charles Spearman, who introduced the tool known as
factor analysis to the field, 16 and his disciple, Cyril Burt. With factor analysis, as with principal
component analysis (PCA), researchers can take a large two-dimensional dataset and break it up into
a small number of leading patterns found in the data (typically, just a handful of the most important
patterns can be used to characterize the lion's share of variation in the data). The distinction between
PCA and factor analysis is minor, and for our purposes we can consider them the same thing.
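To make this concrete, here is a minimal sketch in Python (using synthetic data and the scikit-learn library, neither of which figures in the studies discussed here) of how PCA reduces a two-dimensional data table to a few leading patterns and reports how much of the total variation each one explains:

    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic stand-in for a "large two-dimensional dataset":
    # rows are observations (e.g., individuals or years), columns are
    # measured variables (e.g., test scores or tree ring series).
    rng = np.random.default_rng(0)
    n_obs, n_vars = 500, 20

    # Two underlying patterns plus noise: a strong signal shared by all
    # variables, and a weaker but still real signal confined to a few.
    signal1 = rng.normal(size=(n_obs, 1)) @ np.ones((1, n_vars))
    signal2 = np.zeros((n_obs, n_vars))
    signal2[:, :5] = rng.normal(size=(n_obs, 1)) @ np.ones((1, 5)) * 0.5
    data = signal1 + signal2 + rng.normal(scale=0.5, size=(n_obs, n_vars))

    # PCA breaks the table into leading patterns (principal components),
    # ordered by how much of the total variation each one accounts for.
    pca = PCA()
    pca.fit(data)
    print("variance explained by PC#1:", round(pca.explained_variance_ratio_[0], 2))
    print("variance explained by PC#2:", round(pca.explained_variance_ratio_[1], 2))

Running a sketch like this typically shows PC#1 accounting for well over half of the variance while PC#2, though far smaller, still reflects a genuine pattern built into the data rather than noise, which is precisely the distinction at issue in what follows.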
Spearman (and Burt subsequent to him) had applied the tool to a large set of different potential
measures of purported intelligence across various cultures. They were interested in seeing whether a
single dominant pattern emerged in the data. If such were the case, the first pattern derived from PCA
(the leading principal component, or PC#1) would explain the vast majority of variation in the data
and thus, they thought, signal the existence of a single underlying quality that could be defined as
“intelligence.” What they didn't realize, however, was that the converse is not necessarily true; just
because PC#1 explains a large amount of variation in the dataset does not mean that it has captured
all of the significant variation in the data. This, in essence, was the fatal error Spearman (and then
Burt) committed. Gould refers to the fallacy as “reification,” which he defines as “our tendency to
convert abstract concepts like 'intelligence'” into entities. 17 For our purposes, it amounts to the