The second property means that when p is integrated over all of S, the result is 1, whether S is something one-dimensional like the interval [a, b], in which case the normality condition would be written ∫_a^b p(s) ds = 1, or something two-dimensional, like the unit square, in which case we'd write

∫_0^1 ∫_0^1 p(x, y) dy dx = 1.        (30.17)
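A normality condition like Equation 30.17 is easy to check numerically. Here is a minimal sketch with a midpoint Riemann sum; the density p(x, y) = 4xy is my own example, not from the text, chosen because it is nonnegative and its integral over the unit square is exactly 1.

```python
# Hypothetical density on the unit square (not from the text):
# p(x, y) = 4xy, which is nonnegative and integrates to 1.
def p(x, y):
    return 4.0 * x * y

def integrate_unit_square(f, n=400):
    """Midpoint Riemann sum of f over [0, 1] x [0, 1] with n x n cells."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f((i + 0.5) * h, (j + 0.5) * h)
    return total * h * h

print(integrate_unit_square(p))  # close to 1.0
```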
For the most part our probability spaces will be things like the interval or the sphere, which have the property that when we integrate the constant function 1 over them, the result is some finite value, which we'll call the size of S, and denote size(S). In these cases, the associated density will usually be the constant function with value 1/size(S), called a uniform density. Note that there is no uniform density for the real line, however.
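In code this amounts to nothing more than dividing by the size. A sketch (the helper name uniform_density is my own, not from the text):

```python
import math

def uniform_density(size_of_s):
    """Constant value of the uniform density on a space of finite size."""
    if math.isinf(size_of_s):
        # The real line has infinite size, so no uniform density exists.
        raise ValueError("no uniform density on an infinite-size space")
    return 1.0 / size_of_s

print(uniform_density(1.0))            # unit interval [0, 1]: density 1
print(uniform_density(4.0 * math.pi))  # unit sphere (area 4*pi)
```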
In the discrete case, the normality condition involved a sum; in the continuum
case, it involves an integral. You might be tempted to think of the probability
density as just like the probability mass in the discrete case, but they're quite
different, as the next inline exercise shows. The proper interpretation is that the
density represents probability per unit size . Thus, in cases where we have units,
probability and probability density differ by the units of size (i.e., length, area, or
volume).
Inline Exercise 30.5: Let S = [0, 1] and p(x) = 2x. Show that (S, p) is a probability space by checking the two conditions on p. Observe that p(1) = 2, so a probability density may have values greater than 1, even though probability masses are never greater than 1.
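A quick numerical check of the exercise (a sketch; the integrate helper is my own):

```python
def p(x):
    """Density from Inline Exercise 30.5: p(x) = 2x on S = [0, 1]."""
    return 2.0 * x

def integrate(f, a, b, n=10_000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Condition 1: p is nonnegative on S = [0, 1].
assert all(p(i / 100.0) >= 0.0 for i in range(101))

# Condition 2 (normality): p integrates to 1 over S.
print(integrate(p, 0.0, 1.0))  # close to 1.0

print(p(1.0))  # 2.0 -- a density can exceed 1; a mass never can
```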
To return to our program, the set of all possible executions of the program is infinite:² in fact, there's one execution for every real number between 0 and 1. So we can say that our probability space is S = [0, 1], the unit interval. And since we regard each possible value of uniform(0,1) as equally likely, we associate to S the uniform density defined by p(x) = 1 for all x ∈ [0, 1].
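As a sanity check, we can simulate this (a sketch, assuming Python's random.random() stands in for uniform(0,1)): under the uniform density p(x) = 1, the fraction of samples landing in an interval inside [0, 1] should approach that interval's length.

```python
import random

# Model executions of the program as draws from uniform(0,1).
random.seed(0)  # fixed seed so the run is repeatable
samples = [random.random() for _ in range(100_000)]

# Under the uniform density p(x) = 1, P(0.25 <= X <= 0.5) = 0.25.
frac = sum(1 for s in samples if 0.25 <= s <= 0.5) / len(samples)
print(frac)  # close to 0.25
```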
Just as in the discrete case, a random variable is a function X from S to R, and an event is a subset of S, but we'll mostly restrict our attention to events of the form a ≤ X ≤ b, where X is some random variable.
To be honest, not every subset of S is an event; only the “measurable”
ones. But it's essentially impossible to write down a non-measurable set, and
certainly not possible to encounter one while performing computations on an
ordinary computer, so we'll ignore this subtlety. If you like, you may consider
events to be restricted to things like intervals or rectangles, or other similarly
nice sets over which you know how to integrate.
The probability of an event E in a probability space (S, p) is the integral of p over E, just as, in the discrete case, the probability of an event is the sum of the probability masses of the points in the event.
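For instance, reusing p(x) = 2x from Inline Exercise 30.5, the probability of the event E = [a, b] is the integral of p over [a, b]. A minimal sketch (the function names are my own):

```python
def p(x):
    """Density from Inline Exercise 30.5: p(x) = 2x on S = [0, 1]."""
    return 2.0 * x

def prob(a, b, n=10_000):
    """P(a <= X <= b): midpoint Riemann sum of p over the event [a, b]."""
    h = (b - a) / n
    return sum(p(a + (i + 0.5) * h) for i in range(n)) * h

print(prob(0.5, 1.0))  # 1.0**2 - 0.5**2 = 0.75
```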
2. We're pretending that our random number generator returns real numbers, rather than
floating-point representations of them.
 
 