If we relax the requirement that the subsets must be independent, we can use other approaches to generate a set of subsamples. If we partition the observed sample into $m$ random groups of size $b = n/m$, a subsample of size $n - b$ can be obtained by dropping the $a$-th random group. An estimate ($\hat{\theta}_a$) of the parameter of interest can be evaluated for each replicate, using the same functional form as the sample estimator but based only on the data that remain after omitting the $a$-th group.
We can define

$$\hat{\theta}_{J,a} = m\,\hat{\theta} - (m - 1)\,\hat{\theta}_a \,. \qquad (10.35)$$
The jackknife estimator of $\theta$ is

$$\hat{\theta}_J = \sum_{a=1}^{m} \frac{\hat{\theta}_{J,a}}{m} \,, \qquad (10.36)$$
and the jackknife variance estimator is defined as

$$V_{JK} = \frac{1}{m(m-1)} \sum_{a=1}^{m} \left( \hat{\theta}_{J,a} - \hat{\theta}_J \right)^2 . \qquad (10.37)$$
In general, this estimator becomes more stable as $m$ increases. The maximum precision of the estimator occurs with non-random groups of size 1, where we obtain $n$ replicates by omitting the units of the sample one at a time.
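The delete-one case of Eqs. (10.35)–(10.37) can be sketched as follows. This is a minimal illustration, not code from the text: the function name is our own, and `np.mean` stands in for an arbitrary sample estimator.

```python
import numpy as np

def jackknife_variance(sample, estimator):
    """Delete-one jackknife (Eqs. 10.35-10.37 with group size b = 1, m = n).

    `estimator` maps a 1-D array to a scalar; np.mean is used below
    purely as an illustrative choice.
    """
    n = len(sample)
    theta_hat = estimator(sample)
    # Replicate estimates: drop the a-th unit and re-evaluate the estimator
    theta_a = np.array([estimator(np.delete(sample, a)) for a in range(n)])
    # Pseudovalues theta_J_a = m*theta_hat - (m - 1)*theta_a  (Eq. 10.35, m = n)
    pseudo = n * theta_hat - (n - 1) * theta_a
    theta_J = pseudo.mean()                                   # Eq. 10.36
    v_jk = np.sum((pseudo - theta_J) ** 2) / (n * (n - 1))    # Eq. 10.37
    return theta_J, v_jk

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=50)
theta_J, v_jk = jackknife_variance(x, np.mean)
```

For a linear estimator such as the mean, the pseudovalues reduce to the observations themselves, so $\hat{\theta}_J$ equals the sample mean and $V_{JK}$ equals the usual estimate $s^2/n$.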
One possible alternative to the jackknife is represented by the bootstrap technique. In the context of SRS with replacement (independent and identically distributed observations) from a given sample of size $n$, we can construct the so-called *bootstrap* universe of subsamples selected from the $n^n$ possible *replicates*. If $\hat{\theta}$ is the estimator of the parameter $\theta$ for the observed sample, $\hat{\theta}_{b,a}$ is the *bootstrap estimator*, having the same functional form as $\hat{\theta}$ but evaluated on a *replicate*. The number of possible samples, $n^n$, is very large. Therefore, the procedure generally stops after the random selection of a predefined number $m$ of subsamples, which are considered enough for making inferences on the variance of the estimator. The bootstrap estimate of the parameter is the average over the subsamples of the $m$ estimates
$$\hat{\theta}_b = \sum_{a=1}^{m} \frac{\hat{\theta}_{b,a}}{m} \,, \qquad (10.38)$$
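The Monte Carlo bootstrap of Eq. (10.38) can be sketched as below. This is an illustration under stated assumptions: the function name is our own, `np.mean` is an arbitrary example estimator, and the variance line uses the common divisor $m - 1$, which need not match the book's own definition.

```python
import numpy as np

def bootstrap_estimate(sample, estimator, m=2000, seed=1):
    """Monte Carlo bootstrap: draw m of the n**n possible with-replacement
    replicates and average the replicate estimates (Eq. 10.38).
    """
    rng = np.random.default_rng(seed)
    n = len(sample)
    # Each replicate is an SRS with replacement of size n from the sample
    theta_b_a = np.array(
        [estimator(rng.choice(sample, size=n, replace=True)) for _ in range(m)]
    )
    theta_b = theta_b_a.mean()                        # Eq. 10.38
    # A common (assumed, not from the text) bootstrap variance convention
    v_boot = np.sum((theta_b_a - theta_b) ** 2) / (m - 1)
    return theta_b, v_boot

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=50)
theta_b, v_boot = bootstrap_estimate(x, np.mean)
```

With $m = 2000$ replicates the Monte Carlo error is small, so for the mean the bootstrap estimate lands close to the sample mean and the variance close to $s^2/n$.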
and the bootstrap variance estimator is defined as