Rather than a single quantity $Q$, the measurements yield a collection of quantities $Q_1, Q_2, \ldots, Q_N$.
Such a collection is often called an ensemble and the challenge is to establish the best
representation of the ensemble of measurements. Simpson, of “Simpson's rule” fame in
the calculus, was the first scientist to recommend in print [30] that all the measurements
taken in an experiment ought to be utilized in the determination of a quantity, not just
those considered to be the most reliable, as was the custom in the seventeenth century.
He was the first to recognize that the observed discrepancies between successively measured events follow a pattern that is characteristic of the ensemble of measurements. His
observations were the forerunner to the law of frequency of errors, which asserts that
there exists a relationship between the magnitude of an error and how many times it
occurs in an ensemble of experimental results. Of course, the notion of an error implies
that there is an exact value that the measurement is attempting to discern and that the
variability in the data is a consequence of mistakes being made, resulting in deviations
from the exact value, that is, in errors.
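The law of frequency of errors can be made concrete with a short simulation. The sketch below is ours, not Simpson's: it assumes additive Gaussian noise around an exact value, an illustrative choice, since the text does not commit to any particular error distribution.

```python
import collections
import random

# Minimal sketch: an ensemble of N measurements of a single exact
# quantity Q, each corrupted by additive noise. Gaussian noise with
# unit standard deviation is an illustrative assumption only.
random.seed(1)
Q_EXACT = 10.0
N = 10_000

measurements = [Q_EXACT + random.gauss(0.0, 1.0) for _ in range(N)]
errors = [m - Q_EXACT for m in measurements]

# Tally how often errors of each (rounded) magnitude occur; the law
# of frequency of errors says frequency is tied to magnitude.
counts = collections.Counter(round(abs(e)) for e in errors)
for magnitude in sorted(counts):
    print(f"|error| near {magnitude}: {counts[magnitude]:6d} occurrences")
```

Under this assumption the output shows the expected pattern: small errors dominate the ensemble, and the counts fall off rapidly as the magnitude grows.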
This notion of a correct value is an intriguing one in that it makes an implicit assump-
tion about the nature of the world. Judges do not allow questions of the form “Have you
stopped beating your wife?” because implicit in the question is the idea that the person
had beaten his wife in the past. Either answer, yes or no, therefore confirms that the defendant has beaten his wife, which is, presumably, the very question to be determined. Such leading questions are disallowed in the courtroom but are the bread and
butter of science. Scientists are clever people and consequently they have raised the
leading question to the level of hypothesis and turned the tables on their critics by asking
“Have you measured the best value of this experimentally observed phenomenon?”
Of course, either answer reinforces the idea of a best value. So what is this mysterious
best value?
To answer this question we need to distinguish between statistics and probability;
statistics has to do with measurements and data, whereas probability has to do with the
mathematical theory of those measurements. Statistics arise because, on the one hand, individual results of experiments change in unpredictable ways while, on the other hand, the average values of long data sequences show remarkable stability. It is this statistical regularity that suggests the existence of a best value and hints at a mathematical model of the body of empirical data [8]. We point this out because it is not difficult to become
confused over meaning in a discussion on the probability associated with a statistical
process. The probability is a mathematical construct intended to represent the manner
in which the fluctuating data are distributed over the range of possible values. Statistics
represent the real world; probability represents one possible abstraction of that world
that attempts to make quantitative deductions from the statistics. The novice should take
note that the definition of probability is not universally accepted by the mathematical
community. One camp interprets probability theory as a theory of degrees of reasonable belief, one completely dissociated from statistics in that a probability can be associated with any proposition, even one that is not reproducible. The second camp, with various degrees of subtlety, interprets probability theory in terms of the relative frequency of the occurrence of an event out of the universe of possible events. This second definition of probability is the one used throughout science and is adopted below.
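Since the relative-frequency definition is the one adopted here, a brief simulation may help fix the idea; the fair die below is our illustrative assumption. Individual outcomes are unpredictable, yet the fraction of trials in which the event occurs settles toward a fixed value as the number of trials grows, which is the statistical regularity noted above.

```python
import random

# Sketch of the relative-frequency view of probability: the probability
# of an event is approximated by (occurrences / trials) for large n.
# A fair six-sided die is assumed purely for illustration.
random.seed(2)
for n in (10, 100, 1_000, 10_000, 100_000):
    sixes = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(f"n = {n:>6}: relative frequency of a six = {sixes / n:.4f}")

print(f"value suggested in the limit: 1/6 = {1/6:.4f}")
```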