$$
\sigma_{JK} = \sqrt{\frac{n-1}{n}\sum_{i=1}^{n}\left(\bar{x}_{(i)}-\bar{x}_{JK}\right)^{2}},
$$

where $\bar{x}_{(i)}$ denotes the mean of the $(n-1)$ values remaining after deletion of $x_i$, and $\bar{x}_{JK}$ is the average of these leave-one-out means. It is easy to show that $\bar{x}_{JK}=\bar{x}$ and $\sigma_{JK}=\hat{\sigma}(\bar{x})$, the usual estimate of the standard deviation of the mean. The advantage of the jackknife estimate of the standard deviation is that it can be generalized to other types of estimators such as the median.

The bootstrap generalizes the mean and its standard deviation in a different way. Suppose that $n$ samples $X_i$, all of size $n$, are drawn with replacement from an empirical probability distribution. The average value of the sample means $\bar{X}_{i}$ can be written as $\bar{X}_{BS}$. Then the bootstrap estimate of the standard deviation of $\bar{X}_{BS}$ is

$$
\sigma_{BS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\bar{X}_{i}-\bar{X}_{BS}\right)^{2}}
$$

(cf. Efron 1982, p. 2).
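As a sketch of the two estimates for the mean (in Python, with illustrative data; the helper names `jackknife_sd_of_mean` and `bootstrap_sd_of_mean` are our own), the jackknife version reproduces the usual standard error of the mean exactly, as stated above:

```python
import random
import statistics

def jackknife_sd_of_mean(x):
    """sigma_JK = sqrt((n-1)/n * sum((xbar_(i) - xbar_JK)^2)),
    where xbar_(i) is the mean with x[i] deleted."""
    n = len(x)
    loo_means = [(sum(x) - xi) / (n - 1) for xi in x]
    xbar_jk = sum(loo_means) / n
    return ((n - 1) / n * sum((m - xbar_jk) ** 2 for m in loo_means)) ** 0.5

def bootstrap_sd_of_mean(x, seed=0):
    """sigma_BS: draw n samples of size n with replacement, take the
    spread of their means around the overall bootstrap mean."""
    rng = random.Random(seed)
    n = len(x)
    means = [statistics.mean(rng.choices(x, k=n)) for _ in range(n)]
    xbar_bs = sum(means) / n
    return (sum((m - xbar_bs) ** 2 for m in means) / n) ** 0.5

x = [2.1, 3.4, 1.8, 4.0, 2.9, 3.3, 2.5, 3.8]
# sigma_JK coincides with the usual standard error s / sqrt(n):
print(jackknife_sd_of_mean(x), statistics.stdev(x) / len(x) ** 0.5)
print(bootstrap_sd_of_mean(x))
```

With only $n$ bootstrap samples the estimate is noisy; in practice many more resamples would be drawn, but the small count mirrors the definition given above.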
The rationale of bias reduction by using the jackknife is as follows. The jackknife was originally invented by Quenouille (1949) under another name and with the purpose of obtaining a nonparametric estimate of the bias associated with some types of estimators. Bias can be formally defined as

$$
\mathrm{BIAS} = E_{F}\,\hat{\theta} - \theta(F),
$$

where $E_F$ denotes expectation under the assumption that $n$ iid quantities were drawn from an unknown probability distribution $F$, and $\hat{\theta}=\theta(\hat{F})$ is the estimate of the parameter of interest with $\hat{F}$ representing the empirical probability distribution. Quenouille's bias estimate (cf. Efron 1982, p. 5) is based on sequentially deleting values $x_i$ from a sample of $n$ values to generate different empirical probability distributions $\hat{F}_{(i)}$ based on $(n-1)$ values and obtaining the estimates $\hat{\theta}_{(i)}=\theta(\hat{F}_{(i)})$. Suppose

$$
\hat{\theta}_{(\cdot)} = \frac{1}{n}\sum_{i=1}^{n}\hat{\theta}_{(i)},
$$

then Quenouille's bias estimate becomes

$$
\widehat{\mathrm{BIAS}} = (n-1)\left(\hat{\theta}_{(\cdot)}-\hat{\theta}\right),
$$

and the bias-corrected "jackknifed estimate" of $\theta$ becomes

$$
\tilde{\theta} = n\hat{\theta} - (n-1)\hat{\theta}_{(\cdot)}.
$$

This estimate is either unbiased or less biased than $\hat{\theta}$.
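The construction above can be sketched as a small Python function (the name `jackknife_estimate` is our own; any estimator that maps a sample to a number can be passed in):

```python
def jackknife_estimate(x, estimator):
    """Quenouille/jackknife bias correction:
    theta_tilde = theta_hat - BIAS_hat = n*theta_hat - (n-1)*theta_dot,
    where theta_dot averages the n leave-one-out estimates."""
    n = len(x)
    theta_hat = estimator(x)
    loo = [estimator(x[:i] + x[i + 1:]) for i in range(n)]
    theta_dot = sum(loo) / n
    bias_hat = (n - 1) * (theta_dot - theta_hat)
    return theta_hat - bias_hat

x = [2.0, 5.0, 1.0, 4.0, 3.0]
# The mean is already unbiased, so the correction leaves it unchanged:
print(jackknife_estimate(x, lambda s: sum(s) / len(s)))  # xbar = 3.0
```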
Box 12.1: Expectation $E_n$ as a Function of $1/n$

Suppose $E_n$ denotes the expectation of $\hat{\theta}$ for a sample of size $n$. Then

$$
E_{n} = \theta + \frac{a_{1}(F)}{n} + \frac{a_{2}(F)}{n^{2}} + \frac{a_{3}(F)}{n^{3}} + \cdots
$$

for most statistics including all maximum likelihood estimates. The functions $a_{1}(F), a_{2}(F), \ldots$ are independent of $n$. Also,

$$
E_{n-1} = \theta + \frac{a_{1}(F)}{n-1} + \frac{a_{2}(F)}{(n-1)^{2}} + \frac{a_{3}(F)}{(n-1)^{3}} + \cdots
$$

(Schucany et al. 1971). Consequently,

$$
E_{F}\,\tilde{\theta} = nE_{n} - (n-1)E_{n-1} = \theta - \frac{a_{2}(F)}{n(n-1)} - \frac{(2n-1)\,a_{3}(F)}{n^{2}(n-1)^{2}} - \cdots,
$$

so the $a_{1}(F)/n$ term cancels exactly.
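The cancellation of the $a_{1}(F)/n$ term can be checked with exact rational arithmetic. As a sketch, take the variance MLE of Box 12.1's running context, for which $E_n = (n-1)\sigma^{2}/n$ exactly, i.e. $a_{1}(F) = -\sigma^{2}$ and all higher $a_k$ vanish:

```python
from fractions import Fraction

sigma2 = Fraction(7, 3)  # an arbitrary "true" variance, kept exact
for n in range(2, 10):
    E_n = Fraction(n - 1, n) * sigma2        # E_n = theta + a1/n, a1 = -sigma2
    E_n1 = Fraction(n - 2, n - 1) * sigma2   # E_{n-1}
    # n*E_n - (n-1)*E_{n-1} recovers theta with no O(1/n) bias left:
    assert n * E_n - (n - 1) * E_n1 == sigma2
```

Because the higher-order coefficients are zero here, the jackknifed expectation is not merely less biased but exactly $\theta$, which matches the variance example discussed below the box.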
Contrary to $\hat{\theta}$, which is biased in $O(1/n)$, $\tilde{\theta}$ is biased in $O(1/n^{2})$. For example, if the maximum likelihood estimate of the variance,

$$
\hat{\theta} = \frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_{i},
$$

is used, then

$$
\tilde{\theta} = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2},
$$

the unbiased estimator. Suppose that $E_n$ is plotted against $1/n$ (see Fig. 12.1). Then $\theta = E_{\infty}$ and $\mathrm{BIAS} = E_{n} - E_{\infty} \approx (n-1)(E_{n-1}-E_{n})$, the latter being the expectation of Quenouille's bias estimate. Consequently, the jackknife simply replaces $E_n$ by

$$
nE_{n} - (n-1)E_{n-1},
$$

the straight-line extrapolation through the points $(1/n, E_{n})$ and $(1/(n-1), E_{n-1})$ to $1/n = 0$ (cf. Efron 1982, p. 8).
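A numerical sketch of this example (illustrative data; the helper names `var_mle` and `jackknifed` are our own): jackknifing the biased maximum likelihood variance reproduces the $(n-1)$-denominator estimator exactly, not just approximately.

```python
def var_mle(x):
    """Biased MLE of the variance: (1/n) * sum((x_i - xbar)^2)."""
    n = len(x)
    xbar = sum(x) / n
    return sum((xi - xbar) ** 2 for xi in x) / n

def jackknifed(x, estimator):
    """theta_tilde = n*theta_hat - (n-1)*theta_dot (Quenouille)."""
    n = len(x)
    loo = [estimator(x[:i] + x[i + 1:]) for i in range(n)]
    return n * estimator(x) - (n - 1) * sum(loo) / n

x = [1.2, 0.7, 3.1, 2.4, 1.9, 2.8]
n = len(x)
xbar = sum(x) / n
unbiased = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
print(jackknifed(x, var_mle), unbiased)  # the two values agree
```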