Prediction can be defined as "to declare in advance: foretell on the basis of observation, experience, or scientific reason" (Webster’s 7th New Collegiate Dictionary). Predictions can be short-term, long-term, qualitative, quantitative, negative, positive, optimistic, or pessimistic. They can be based on hunches, guesses, folklore, astrology, palm reading, extrapolations from data, or scientific laws and theories or alleged laws of historical change. The term prediction overlaps with prophecy but has somewhat different connotations.
Not until the seventeenth and eighteenth centuries did prediction take on its modern connotations. Pierre-Simon Laplace (1749-1827), arguably its first major theorist, presented a systematic answer to the major theoretical and practical problem prediction raises: Why are we so successful in some areas of research (e.g., astronomy, physics, and chemistry) but conspicuously less successful in others, particularly human behavior? The most frequently given answers are that some phenomena are intrinsically unpredictable, that we are not using correct scientific methods, that the complexity of the phenomena rules prediction out, or that the failure is due to the limits of human ignorance, fallibility, and superstition. The twentieth century saw significant developments in the theory and practice of prediction, with simultaneous advances in predictive accuracy and a greater awareness of the limits of that accuracy.
EMPIRICAL AND PRAGMATIC SIGNIFICANCE
There are four major theories concerning the possibility of predictive success in the areas where, so far, they have been rather limited. The first is Laplacian determinism, named for Laplace, who asserted that an intelligence endowed with omniscience (who therefore would know the current position and velocity of all particles, as well as the laws that control their behavior) could predict every future event.
The second view, the covering law model, is based on the two principles involved in the Laplacian view. As developed by Karl Popper (1934), it requires that an adequate explanation be deduced from universal laws and initial conditions (a generalization of Laplace’s positions and velocities). The more general and more precise the prediction, the better it is. This is because it is more testable, which in Popper’s view means "falsifiable."
The third position, probabilistic prediction, is more modest but still in accordance with Laplace’s view that our ignorance and fallibility force us to rely on probability, not certainty, in our predictive endeavors. We can predict a 60 percent chance of rain this weekend, but not with the virtual certainty of astronomical predictions (e.g., that Venus will transit the Sun in 2117) or those based on (Newtonian) laws of gravity and motion. The quantum physics uncertainty principle is often seen as the paradigm case of this type, although weather predictions and many social phenomena may be better examples, given that quantum electrodynamics has the most numerically precise, accurate predictions in the history of science.
The final theory, pattern prediction, originated with Warren Weaver (1894-1978), a natural scientist, but was made widely known to economists and social scientists by Friedrich von Hayek (1899-1992). This position seems furthest from Laplacian determinism. Pattern prediction downplays precise quantitative predictions, and has a weakened type of testability, due to the theory of complex phenomena adapted from Weaver. It appears to harmonize well with modern chaos theory and with probabilistic reasoning about phenomena. However, the most recent developments in the theory and practice of prediction may be making it more susceptible to quantitative precision.
Any adequate theory of prediction should allow retrodiction as a special type. If one predicts that devaluing the currency will lead to inflation, then a study of such policies in ancient Rome, Egypt, or China should find these results. This can be defined as prediction, although it concerns not the society studied but what the scholars of such societies should find.
CRITICISM OF METHODOLOGY
B. F. Skinner’s behaviorism emphasized prediction and control; his position was extreme Laplacian determinism. He attributed our inability to predict human behavior as successfully as we predict phenomena in physics and astronomy to our failure to apply proper scientific method, primarily because of a lingering antiempirical belief in an inner man not subject to the laws of nature. But the history of science since Newton has postulated several unobservable entities (e.g., atoms, gravity, natural selection) with spectacular practical and empirical success.
Logical positivism (including Popper’s weakened version) associated predictions with testability, meaning both verifiability and falsifiability. But it has been argued on both logical and historical grounds that a false prediction does not always disprove a theory. Nor does verification of predictions prove the theory true for purely logical reasons (the fallacy of affirming the consequent).
Popper argued that, in contrast to unconditional historical prophecies, scientific predictions are conditional (1963). This claim is based on his model described above and on his argument that laws are conditional, asserting nothing about the existence of the initial conditions. But the model works only for systems that are well isolated, stationary, and recurrent (Popper 1963). This is why predictions of eclipses and of the regularity of the seasons can be made accurately: the solar system is such a system. Such systems are not typical; they are special cases where scientific prediction becomes particularly impressive.
In addition, there are arguments based on the nondeterministic character of classical physics. The "three-body problem" undermines Laplacian determinism. It arises from the inability of physics (so far) to solve Newton’s equations of motion exactly for more than two bodies.
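The point can be made concrete with a short numerical sketch. Because Newton’s equations for three gravitating bodies have no general closed-form solution, orbits must in practice be advanced step by step. The sketch below, a minimal illustration rather than a production integrator, uses the well-known equal-mass "figure-eight" initial conditions (with G = 1) and a simple semi-implicit Euler scheme; both choices are made purely for demonstration.

```python
# No closed-form solution exists for three gravitating bodies, so the
# orbit is advanced step by step with a numerical integrator.  The
# initial data are the equal-mass "figure-eight" configuration (G = 1);
# any other choice works the same way.
G = 1.0
masses = [1.0, 1.0, 1.0]
pos = [[-0.97000436, 0.24308753], [0.97000436, -0.24308753], [0.0, 0.0]]
vel = [[0.46620369, 0.43236573], [0.46620369, 0.43236573],
       [-0.93240737, -0.86473146]]

def accelerations(pos):
    # Pairwise Newtonian gravitational accelerations in the plane.
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

dt = 0.001
for _ in range(2000):               # advance 2 time units
    acc = accelerations(pos)
    for i in range(3):              # semi-implicit Euler step
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

print(pos)  # positions after 2 time units, obtainable only numerically
```

Each printed position is the product of thousands of small steps; no formula delivers it directly, which is the practical content of the three-body problem.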
The twentieth-century scientist Edward Lorenz distinguished two types of determinism: "one with clear rules that apply always and everywhere," so that repetition of the same conditions makes prediction possible, and another where "small variations aggregate and amplify" (Kaplan and Kaplan, 2006, p. 221), so that repetition prevents prediction. In the former, long-run variation cancels out and a clear pattern emerges. The latter leads to what are now called "chaotic systems," which are extremely sensitive to initial conditions.
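This sensitivity is easy to exhibit with the logistic map, a standard toy example of a chaotic system; the parameter and starting values below are illustrative only.

```python
# Two trajectories of the logistic map in its chaotic regime (r = 4),
# started a hair apart: the gap between them grows until one trajectory
# tells you nothing about the other.
r = 4.0
x, y = 0.3, 0.3 + 1e-10       # nearly identical initial conditions

gaps = []
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    gaps.append(abs(x - y))   # record how far apart the two runs are

print(gaps[0], max(gaps))     # tiny at first, order-one within ~40 steps
```

An error of one part in ten billion in the initial condition is thus amplified to the full size of the system within a few dozen iterations, which is why long-range point forecasts of such systems fail.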
Despite these impediments, progress has been made in improving meteorological and other types of prediction through "ensemble prediction" as well as "threshold" and "pattern" effects. The former involves treating predictability itself as a forecast variable, on a par with temperature or rainfall. The latter combines Weaver’s pattern prediction with the important idea of threshold effects, as in the second type of determinism described in the preceding paragraph. Threshold effects are best illustrated by the straw that broke the camel’s back.
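The logic of ensemble prediction can be sketched in a few lines: run the same model many times from slightly perturbed initial conditions, read the spread of outcomes as a measure of predictability, and read the fraction of runs crossing a threshold as a probabilistic forecast. The "model" below is a chaotic logistic map standing in for a real forecast model, and every number in it is hypothetical.

```python
import random

random.seed(0)                      # reproducible sketch

def toy_model(x, r=3.9, steps=30):
    # Stand-in for a forecast model: a chaotic logistic map (illustrative only).
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

best_guess = 0.52                   # best estimate of the initial state
ensemble = [toy_model(best_guess + random.gauss(0.0, 0.001))
            for _ in range(500)]    # 500 runs from slightly perturbed starts

threshold = 0.8                     # a hypothetical "heavy rain" event
prob = sum(out > threshold for out in ensemble) / len(ensemble)
spread = max(ensemble) - min(ensemble)

print(f"P(event) = {prob:.2f}, ensemble spread = {spread:.2f}")
```

A tight ensemble signals a predictable situation; a wide spread signals low predictability. Predictability itself has thereby become a forecast output.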
Developments in the last two decades are improving efforts to predict possible disasters such as earthquakes, tornadoes, epidemics, and perhaps climate change, as well as the interactions of large groups of people, their societal effects, and their impact on the environment (Cribb, 2006). Great use has been made of modern computers, often to simulate effects over centuries. Such simulations leave a great margin of error, owing to unreliable data, possibly unjustified assumptions built into the models, and human fallibility, yet there are good indications that computers are improving predictive ability.
Nonetheless, we are nowhere near Laplace’s omniscient intelligence, and for theoretical and practical reasons we are unlikely ever to be. The major reason is that the mathematics and data used are both so complicated (millions upon millions of lines of computer code) that no one person can master them. In addition, prediction of this kind relies on Bayesian probability theory. Its mathematics is not very complex, but it requires initial assignments of probability, which can then be modified by new evidence, and there are numerous problems in deciding how to assign these priors so that they are not too arbitrary.
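A minimal sketch of the Bayesian step shows both its simplicity and the difficulty just described; all the probabilities used here are hypothetical.

```python
# Bayes' rule itself is mathematically simple; the contested step is the
# initial ("prior") probability.  Every number below is hypothetical.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    # Posterior probability of a hypothesis after one piece of evidence.
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Evidence: a falling barometer, seen on 90% of rainy days, 20% of dry days.
posterior = bayes_update(prior=0.30,
                         likelihood_if_true=0.90,
                         likelihood_if_false=0.20)

# Same evidence, different prior: the answer shifts with the starting guess,
# which is exactly the arbitrariness problem noted above.
posterior2 = bayes_update(prior=0.05,
                          likelihood_if_true=0.90,
                          likelihood_if_false=0.20)

print(round(posterior, 3), round(posterior2, 3))  # 0.659 vs 0.191
```

The update rule is uncontroversial; the output, however, depends on the prior fed into it, and nothing in the rule itself says which prior is right.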