The chance of a head on the first toss is 0.5 since either side of the coin is equally likely to be face up. The chance of a tail followed by a head is 0.5 × 0.5 = 0.25; because the two events are independent, the probability of both occurring is the product of the separate probabilities. The chance of a tail occurring on the first n − 1 tosses and then a head on the nth toss is 0.5^n. Here again the independence of the events requires the formation of the product of the n independent probabilities. The prize when the first head occurs on the nth toss is $2^n, so the expected winnings after n tosses are given by

    (1/2)($2) + (1/2^2)($2^2) + (1/2^3)($2^3) + ··· = $1 + $1 + $1 + ··· = $n
which increases linearly with increasing n. Thus, the expected winnings of the player will be $n for n tosses of the coin. But n is arbitrary and can always become larger; in particular, it can be made arbitrarily large. This is the type of process depicted in Figure 2.16, where the greater the number of tosses, the greater the average value. In fact, there is no well-defined expectation value, since there is no limiting value for the mean. An excellent account of such games can be found in Weaver [89].
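The term-by-term cancellation in the expected-winnings sum can be checked exactly with rational arithmetic; a minimal sketch in Python (the function name is our own choice):

```python
from fractions import Fraction

def truncated_expectation(n):
    """Expected winnings when play is limited to n tosses: the first
    head on toss k has probability 1/2**k and pays $2**k, so every
    term contributes exactly $1 and the sum is $n."""
    return sum(Fraction(1, 2 ** k) * 2 ** k for k in range(1, n + 1))

print(truncated_expectation(10))   # 10: the sum grows without bound as n does
```

Because each term is exactly $1, the truncated expectation has no finite limit, which is the source of the missing expectation value noted above.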
Many more natural data sets manifest this lack of convergence to the mean than most
scientists were previously willing to believe. Nicolaus Bernoulli and his cousin Daniel
pointed out that this game of chance led to a paradox, which arises because the banker
argues that the expected winnings of the player are infinite so the ante should be very
large. On the other hand, the player argues that if we consider a large number of plays
then half of them will result in a player winning only $1, so the ante should be small.
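The player's side of the argument is easy to verify by simulation. In the $2^n payoff convention of the expected-winnings sum, a head on the very first toss pays the minimum prize of $2, and this happens in about half of all plays; a sketch (trial count, seed, and function name are our arbitrary choices):

```python
import random
from collections import Counter

def play(rng):
    """One game: toss until the first head on toss k; collect $2**k."""
    k = 1
    while rng.random() < 0.5:  # tail with probability 1/2: keep tossing
        k += 1
    return 2 ** k

rng = random.Random(7)
trials = 100_000
counts = Counter(play(rng) for _ in range(trials))

# Roughly half of all plays end with a head on the very first toss
# and pay only the minimum prize, which is why a large ante looks unfair.
print(counts[2] / trials)
```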
This failure to agree on an ante is the paradox, the St. Petersburg paradox, named after
the journal in which Daniel Bernoulli published his discussion of the game. We empha-
size that there is no characteristic scale associated with the St. Petersburg game, but
there is a kind of scaling; let's call it the St. Petersburg scaling. Here two parameters are
adjusted to define the game of chance: the frequency of occurrence decreases by a factor
of two with each additional head and the size of the winnings increases by a factor of
two with each additional head. The increase and decrease therefore compensate for one
another. This is the general structure for scaling; the scale of a given event increases in
[Figure 2.16: log-log plot of the average winnings (roughly 0.1 to 100) against the number of trials N (10^0 to 10^7), with one curve labeled "St. Petersburg game" and one labeled "ordinary coin toss".]

Figure 2.16. Playing the St. Petersburg game millions of times on a computer indicates that its local average diverges to infinity [7]. Reproduced with permission.
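The behavior shown in Figure 2.16 can be reproduced in a few lines; a minimal Monte Carlo sketch (seed, trial counts, and function name are our own choices, not taken from the reference):

```python
import random

def play(rng):
    """One St. Petersburg game: toss until the first head on toss k;
    collect $2**k."""
    k = 1
    while rng.random() < 0.5:  # tail with probability 1/2
        k += 1
    return 2 ** k

rng = random.Random(1)
total = 0
for n in range(1, 10 ** 6 + 1):
    total += play(rng)
    if n in (10 ** 2, 10 ** 4, 10 ** 6):
        # The running average keeps drifting upward instead of settling
        # down to a limiting value, as in Figure 2.16.
        print(f"{n:>9} trials: average winnings ${total / n:.1f}")
```

Unlike an ordinary coin toss, whose running average converges by the law of large numbers, here occasional enormous payouts keep pushing the local average upward no matter how many games are played.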