The relative uncertainties of the parameter estimates can be conveniently expressed by the standard error as a fraction of the mean. In this case, the relative uncertainties of the mean, standard deviation, and skewness are 25, 25, and 40%, respectively. These results support the assertion that the relative accuracy of the skewness is generally less than that of the mean and standard deviation.
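The relative standard errors quoted above can be checked numerically for any given sample. The following is a minimal sketch, not the book's calculation: it assumes NumPy and SciPy are available, uses a synthetic sample (not the data behind the 25, 25, and 40% figures), and bootstraps the standard error of each estimated moment expressed as a fraction of the estimate itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic, positively skewed sample; purely illustrative.
sample = rng.gamma(shape=2.0, scale=50.0, size=20)

def sample_moments(x):
    """Return the sample mean, standard deviation, and skewness."""
    return np.array([np.mean(x), np.std(x, ddof=1), stats.skew(x, bias=False)])

# Bootstrap the standard error of each moment estimate.
n_boot = 5000
boot = np.array([sample_moments(rng.choice(sample, size=sample.size, replace=True))
                 for _ in range(n_boot)])
standard_errors = boot.std(axis=0, ddof=1)

# Relative uncertainty = standard error as a fraction of the estimate.
relative = standard_errors / np.abs(sample_moments(sample))
for name, r in zip(["mean", "standard deviation", "skewness"], relative):
    print(f"relative uncertainty of {name}: {100 * r:.0f}%")
```

For small samples the skewness typically shows the largest relative uncertainty, consistent with the assertion above.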
10.6.2 Maximum Likelihood Method
The maximum likelihood method selects the population parameters that maximize the likelihood of the observed outcomes. Consider the case of n independent outcomes x_1, x_2, ..., x_n, where the probability of any outcome, x_i, is given by p_X(x_i | θ_1, θ_2, ..., θ_m), where θ_1, θ_2, ..., θ_m are the population parameters. The probability of the n observed (independent) outcomes is then given by the product of the probabilities of each of the outcomes. This product is called the likelihood function, L(θ_1, θ_2, ..., θ_m), where
L(\theta_1, \theta_2, \ldots, \theta_m) = \prod_{i=1}^{n} p_X(x_i \mid \theta_1, \theta_2, \ldots, \theta_m)        (10.64)
The values of the parameters that maximize the value of L are called the maximum likelihood estimates of the parameters. Since the form of the probability function, p_X(x | θ_1, θ_2, ..., θ_m), is assumed to be known, the maximum likelihood estimates can be derived from Equation (10.64) by equating the partial derivatives of L with respect to each of the parameters, θ_i, to zero. This leads to the following m equations:
\frac{\partial L}{\partial \theta_i} = 0, \qquad i = 1, 2, \ldots, m        (10.65)
This set of m equations can then be solved simultaneously to yield the m maximum likelihood parameters θ̂_1, θ̂_2, ..., θ̂_m. In some cases, it is more convenient to maximize the natural logarithm of the likelihood function than the likelihood function itself. This approach is particularly convenient when the probability distribution function involves an exponential term. It should be noted that since the logarithmic function is monotonic, values of the estimated parameters that maximize the logarithm of the likelihood function also maximize the likelihood function.
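In practice the likelihood equations often have no closed-form solution, and the log-likelihood is maximized numerically. The sketch below is an illustration of this and is not part of the original text: it assumes SciPy is available, uses a synthetic sample, and fits a Gumbel (Extreme Value Type I) distribution, a common model for annual maxima, by minimizing the negative log-likelihood.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
# Synthetic annual-maximum series (illustrative values only).
x = rng.gumbel(loc=100.0, scale=25.0, size=50)

def neg_log_likelihood(params, data):
    """Negative log-likelihood of a Gumbel model; minimizing this maximizes ln L."""
    loc, scale = params
    if scale <= 0.0:
        return np.inf
    return -np.sum(stats.gumbel_r.logpdf(data, loc=loc, scale=scale))

# Numerical counterpart of setting the partial derivatives in Equation (10.65) to zero.
result = optimize.minimize(neg_log_likelihood, x0=[x.mean(), x.std()],
                           args=(x,), method="Nelder-Mead")
print("maximum likelihood estimates (location, scale):", result.x)

# SciPy's built-in estimator maximizes the same likelihood:
print("scipy.stats.gumbel_r.fit:", stats.gumbel_r.fit(x))
```

Working with the negative log-likelihood turns the product in Equation (10.64) into a sum, which avoids numerical underflow for large samples.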
The method of moments and the maximum likelihood method do not always yield the same estimates of the population parameters. The maximum likelihood method is generally preferred over the method of moments, particularly for large samples (Haan, 1977). The method of moments is severely affected if the data contain errors in the tails of the distribution, where the moment arms are long (Chow, 1954), and this effect is particularly severe in highly skewed distributions (Haan, 1977). In contrast, the relative asymptotic bias of upper quantiles (i.e., high-return-period events) is smallest for the method of moments and largest for the maximum likelihood method when the true distribution is the log-normal distribution and another distribution is fitted to it (Strupczewski et al., 2002).

EXAMPLE 10.11

Determine the maximum likelihood estimates of the mean and standard deviation of samples that are assumed to be drawn from a normal distribution.

Solution

For a normal distribution, the probability distribution can be expressed as

p_X(x \mid \mu_x, \sigma_x) = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\left[ -\frac{1}{2} \left( \frac{x - \mu_x}{\sigma_x} \right)^2 \right]

and hence the likelihood function for N measurements is given by Equation (10.64) as

L(\mu_x, \sigma_x) = \prod_{i=1}^{N} p_X(x_i \mid \mu_x, \sigma_x) = \left( \frac{1}{\sqrt{2\pi}\,\sigma_x} \right)^{N} \exp\left[ -\frac{1}{2\sigma_x^2} \sum_{i=1}^{N} (x_i - \mu_x)^2 \right]

It is more convenient to work with the log-likelihood function, which is given by

\ln[L(\mu_x, \sigma_x)] = -\frac{N}{2} \ln(2\pi) - N \ln \sigma_x - \frac{1}{2\sigma_x^2} \sum_{i=1}^{N} (x_i - \mu_x)^2
Taking the partial derivatives of ln L and setting them
equal to zero yields
\frac{\partial \ln L}{\partial \mu_x} = \frac{1}{\hat{\sigma}_x^2} \sum_{i=1}^{N} (x_i - \hat{\mu}_x) = 0        (10.66)

\frac{\partial \ln L}{\partial \sigma_x} = -\frac{N}{\hat{\sigma}_x} + \frac{1}{\hat{\sigma}_x^3} \sum_{i=1}^{N} (x_i - \hat{\mu}_x)^2 = 0        (10.67)
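Solving Equations (10.66) and (10.67) gives μ̂_x = (1/N) Σ x_i and σ̂_x² = (1/N) Σ (x_i − μ̂_x)², that is, the sample mean and the divide-by-N (biased) sample variance. The following sketch is not part of the original example; it assumes NumPy and SciPy are available and checks this closed-form result against a direct numerical maximization of the log-likelihood for a synthetic normal sample.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
x = rng.normal(loc=10.0, scale=2.0, size=40)     # synthetic sample (illustrative)

# Closed-form solution of Equations (10.66) and (10.67).
mu_hat = x.mean()                                # (1/N) * sum(x_i)
sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))  # divide-by-N estimator

# Direct numerical maximization of ln L(mu, sigma) for comparison.
def neg_log_likelihood(params, data):
    mu, sigma = params
    if sigma <= 0.0:
        return np.inf
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

result = optimize.minimize(neg_log_likelihood, x0=[x.mean() + 1.0, 1.0],
                           args=(x,), method="Nelder-Mead")
print("closed form  :", mu_hat, sigma_hat)
print("numerical fit:", result.x)  # should agree to within the optimizer tolerance
```

Note that σ̂_x uses a divisor of N rather than N − 1, so the maximum likelihood estimate of the standard deviation is slightly smaller than the usual unbiased sample estimate.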