It is well known that Blaise Pascal, around 1650, was adept at solving problems
related to the rolling of one or more dice. Hacking (2006) credits Christiaan
Huygens with introducing statistical inference in the first probability textbook,
published in 1657 (also see Kendall 1970, p. 29). Statistical reasoning, however,
was accepted only slowly by scientists and much later by the public. This is
evident from the history of the arithmetic mean. Some early astronomical
calculations that resemble the process of estimating the sample mean are
reviewed by Plackett (1970). The first average on record was taken by William
Borough in 1581 for a set of compass readings (Eisenhart 1963). The procedure of
averaging numbers was regarded with suspicion for a long period of time. Thomas
Simpson (1755) advocated the approach in a paper entitled “On the advantage
of taking the mean of a number of observations in practical astronomy”, stating:
“It is well-known that the method practiced by astronomers to diminish the errors
arising from the imperfections of instrument and of the organs of sense by taking
the mean of several observations has not so generally been received but that some
persons of note have publicly maintained that one single observation, taken with
due care, was as much to be relied on, as the mean of a great number.”
Originally, the normal distribution was derived from the binomial distribution
by Abraham de Moivre in 1718. It became more widely known after its use
by Carl Friedrich Gauss in 1809 and the subsequent derivation of the central-limit
theorem, which helped to popularize the idea that many different random errors
combine to produce errors that are normally distributed. The normal distribution
became another cornerstone of mathematical statistics with the development of
Student's t-test, analysis of variance and the chi-square test for goodness of fit.
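In modern notation (a standard present-day statement rather than the historical formulation), the central-limit theorem behind this idea says that if X_1, ..., X_n are independent errors with common mean \mu and finite variance \sigma^2, then
\[
\frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma\sqrt{n}} \;\longrightarrow\; N(0,1)
\quad \text{in distribution as } n \to \infty,
\]
so that the sum of many small independent errors is approximately normally distributed, whatever the distribution of the individual errors.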
During the first half of the twentieth century, many methods of mathematical
statistics were developed for statistical populations of independent (uncorrelated)
and identically distributed (iid) objects from which random samples can be drawn
to estimate parameters such as the mean, variance and covariance. The theory of
random sampling became well established, together with rules for determining the
exact number of degrees of freedom to be used in statistical inference.
Generalization to multivariate analysis followed naturally. Krumbein and Graybill (1965)
introduced the “general linear model” as a basic tool of mathematical geology.
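In the matrix notation that is now standard (the symbols here are the usual textbook ones, not necessarily those of Krumbein and Graybill), the general linear model can be written as
\[
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},
\qquad \mathrm{E}(\boldsymbol{\varepsilon}) = \mathbf{0},
\quad \mathrm{Var}(\boldsymbol{\varepsilon}) = \sigma^{2}\mathbf{I},
\]
where \mathbf{y} is the vector of n observations, \mathbf{X} the n \times p design matrix, \boldsymbol{\beta} the vector of unknown coefficients and \boldsymbol{\varepsilon} the vector of uncorrelated random errors; the familiar least-squares estimate is \hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}.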
2.1.1 Emergence of Mathematical Statistics
Karl Pearson (1857-1936) greatly helped to establish the theory of mathematical
statistics and to make it more widely known. Many people today are familiar with
the correlation coefficient and the chi-square test for goodness of fit, which are two
of the tools invented by Pearson. R.A. Fisher (1890-1962) was a better mathematician
than Pearson (cf. Stigler 2008). His earliest accomplishments included
finding the mathematical formula for the frequency distribution of the correlation
coefficient and the correct use of degrees of freedom in statistical significance tests,
including the chi-square test (Fisher Box 1978). Fisher (1960) developed statistical
design of experiments using significance tests, including the F-tests in analysis of variance.