A.6 Stochastic Process
We are often interested in experiments whose outcomes are a function of time. For example, we
might be interested in designing a system that encodes speech. The outcomes are particular
patterns of speech that will be encountered by the speech coder. We can mathematically
describe this situation by extending our definition of a random variable. Instead of the random
variable mapping an outcome of an experiment to a number, we map it to a function of time.
Let $S$ be a sample space with outcomes $\{\omega_i\}$. Then the random or stochastic process $X$ is a mapping

$X : S \to \mathcal{F}$   (A.19)
where $\mathcal{F}$ denotes the set of functions on the real number line. In other words,

$X(\omega) = x(t), \quad \omega \in S, \; x \in \mathcal{F}, \; -\infty < t < \infty$   (A.20)
The functions $x_\omega(t)$ are called the realizations of the random process, and the collection of functions $\{x_\omega(t)\}$ indexed by the outcomes $\omega$ is called the ensemble of the stochastic process. We can define the mean and variance of the ensemble as

$\mu(t) = E[X(t)]$   (A.21)

$\sigma^2(t) = E[(X(t) - \mu(t))^2]$   (A.22)
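As an illustration (a minimal sketch, not part of the original text), the following Python snippet builds a hypothetical ensemble of realizations, cosines whose phase is drawn at random once per outcome $\omega$, and estimates $\mu(t)$ and $\sigma^2(t)$ by averaging across the ensemble at each fixed time:

    import numpy as np

    rng = np.random.default_rng(0)

    # Time axis and ensemble size (illustrative values, not from the text).
    t = np.linspace(0.0, 1.0, 200)
    num_outcomes = 10_000

    # One realization per outcome w: x_w(t) = cos(2*pi*t + phase(w)),
    # a hypothetical random process chosen only for this sketch.
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(num_outcomes, 1))
    ensemble = np.cos(2.0 * np.pi * t + phases)   # shape (outcomes, time)

    # Ensemble mean and variance, Equations (A.21) and (A.22), estimated
    # by averaging over the outcomes at each fixed t.
    mu_t = ensemble.mean(axis=0)    # estimate of E[X(t)]
    var_t = ensemble.var(axis=0)    # estimate of E[(X(t) - mu(t))^2]

    print(mu_t[:3])   # close to 0 for every t
    print(var_t[:3])  # close to 0.5 for every t

For this particular process the ensemble mean is 0 and the variance is 1/2 at every $t$, which the empirical averages should reproduce.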
If we sample the ensemble at some time $t_0$, we get a set of numbers $\{x_\omega(t_0)\}$ indexed by the outcomes $\omega$, which by definition is a random variable. By sampling the ensemble at different times $t_i$, we get different random variables $\{x_\omega(t_i)\}$. For simplicity we often drop the $\omega$ and $t$ and simply refer to these random variables as $\{x_i\}$.
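Continuing the sketch above, sampling the ensemble at a single instant $t_0$ collapses it to the set of numbers $\{x_\omega(t_0)\}$, one per outcome, which we can treat as an ordinary random variable:

    # Continue from the ensemble built in the previous sketch.
    i0 = 50                        # index of the sampling time t_0
    samples_t0 = ensemble[:, i0]   # the random variable {x_w(t_0)}

    # Its distribution function evaluated empirically at one point x:
    x = 0.25
    print(np.mean(samples_t0 < x))   # estimate of P(X(t_0) < x)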
We will have a distribution function associated with each of these random variables.
We can also define a joint distribution function for two or more of these random variables.
Given a set of random variables $\{x_1, x_2, \ldots, x_N\}$, the joint cumulative distribution function is defined as
$F_{X_1 X_2 \cdots X_N}(x_1, x_2, \ldots, x_N) = P(X_1 < x_1, X_2 < x_2, \ldots, X_N < x_N)$   (A.23)
Unless it is clear from the context what we are talking about, we will refer to the cdf of the individual random variables $X_i$ as the marginal cdf of $X_i$.
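To make (A.23) concrete, a short sketch (still using the hypothetical ensemble from above) estimates the joint cdf of the two random variables obtained by sampling at times $t_1$ and $t_2$, by counting how often both samples fall below a chosen point:

    # Continue from the ensemble built earlier; sample at two instants.
    x1 = ensemble[:, 20]   # the random variable X_1 = X(t_1)
    x2 = ensemble[:, 80]   # the random variable X_2 = X(t_2)

    # Empirical joint cdf F_{X1 X2}(a, b) = P(X_1 < a, X_2 < b), as in (A.23).
    a, b = 0.5, 0.0
    print(np.mean((x1 < a) & (x2 < b)))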
We can also define the joint probability density function $f_{X_1 X_2 \cdots X_N}(x_1, x_2, \ldots, x_N)$ for these random variables in the same manner as we defined the pdf in the case of the single random variable. We can classify the relationships between these random variables in a number of different ways. In the following we define some relationships between two random variables; the concepts are easily extended to more than two random variables.

Two random variables $X_1$ and $X_2$ are said to be independent if their joint distribution function can be written as the product of the marginal distribution functions of each random variable; that is,
$F_{X_1 X_2}(x_1, x_2) = F_{X_1}(x_1) F_{X_2}(x_2)$   (A.24)
This also implies that
$f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2)$   (A.25)
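As a numerical check of (A.24), the self-contained sketch below draws two random variables that are independent by construction and verifies that the empirical joint cdf factors into the product of the marginal cdfs. (For the random-phase cosine ensemble used earlier, samples at two different times are generally not independent, so independent Gaussian draws are used here instead.)

    import numpy as np

    rng = np.random.default_rng(1)

    # Two independent standard Gaussian random variables (by construction).
    g1 = rng.standard_normal(100_000)
    g2 = rng.standard_normal(100_000)

    a, b = 0.3, -0.7
    joint = np.mean((g1 < a) & (g2 < b))           # F_{X1 X2}(a, b)
    product = np.mean(g1 < a) * np.mean(g2 < b)    # F_{X1}(a) F_{X2}(b)
    print(joint, product)   # approximately equal, per Equation (A.24)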