$$
R^{*} \;=\; R(\hat F^{*}, \hat F) \;=\; q(\hat F^{*}, \hat F) - q(\hat F^{*}, \hat F^{*})
\;=\; \frac{1}{n}\sum_{i=1}^{n} Q\!\left(y_i,\, \eta_{\hat F^{*}}(x_i)\right)
\;-\; \frac{1}{n}\sum_{i=1}^{n} Q\!\left(y_i^{*},\, \eta_{\hat F^{*}}(x_i^{*})\right).
\tag{2.1}
$$
4. Repeat steps 1-3 a large number $B$ of times to get $R^{*}_{1}, \dots, R^{*}_{B}$. The bootstrap estimate of expected excess error is

$$
\hat r_{\text{boot}} = \frac{1}{B}\sum_{b=1}^{B} R^{*}_{b}.
$$
See Efron (1982) for more details.
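To make the recipe concrete, here is a minimal sketch in Python. The prediction rule $\eta$ and loss $Q$ are not specified in this excerpt, so the sketch assumes a toy rule that predicts the majority label of its training sample and 0-1 loss; `bootstrap_excess_error`, `eta`, and `q` are illustrative names, not from the text.

```python
import random

# Toy stand-ins (assumptions, not from the text): eta predicts the
# majority label of its training sample; Q is 0-1 loss.
def eta(train_labels):
    # smallest label wins ties, so the rule is deterministic
    return max(sorted(set(train_labels)), key=train_labels.count)

def Q(y, y_pred):
    return int(y != y_pred)

def q(train, test):
    """q(F_train, F_test): average loss of the rule built on `train`,
    evaluated over `test`."""
    pred = eta(train)
    return sum(Q(y, pred) for y in test) / len(test)

def bootstrap_excess_error(y, B=200, seed=0):
    """Average of R*_b = q(F*, F) - q(F*, F*) over B bootstrap samples."""
    rng = random.Random(seed)
    n = len(y)
    total = 0.0
    for _ in range(B):
        star = [y[rng.randrange(n)] for _ in range(n)]  # draw n points from F-hat
        total += q(star, y) - q(star, star)             # excess error R*_b, eq. (2.1)
    return total / B
```

Since each $R^{*}_{b}$ lies between $-1$ and $1$ under 0-1 loss, so does the average; with real data, `eta` would be a genuine prediction rule refit to each bootstrap sample.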
The jackknife estimate of expected excess error is
$$
\hat r_{\text{jack}} = (n-1)\left(R_{(\cdot)} - R\right),
$$

where $\hat F_{(i)}$ is the empirical distribution function of $(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n)$, and

$$
R_{(\cdot)} = \frac{1}{n}\sum_{i=1}^{n} R_{(i)}, \qquad
R_{(i)} = R(\hat F_{(i)}, \hat F), \qquad
R = R(\hat F, \hat F).
$$
Efron (1982) showed that the jackknife estimate can be reexpressed as
$$
\hat r_{\text{jack}} = \frac{1}{n}\sum_{i=1}^{n} Q\!\left(y_i,\, \eta_{\hat F_{(i)}}(x_i)\right)
\;-\; \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n} Q\!\left(y_j,\, \eta_{\hat F_{(i)}}(x_j)\right).
$$
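The re-expressed form translates directly into code: one pass over the deleted-point rules, accumulating both sums. A minimal sketch, where the majority-label rule `eta` and 0-1 loss `Q` below are toy assumptions not specified in this excerpt:

```python
def eta(train_labels):
    # Toy rule (assumption): predict the majority label; smallest wins ties.
    return max(sorted(set(train_labels)), key=train_labels.count)

def Q(y, y_pred):
    return int(y != y_pred)  # 0-1 loss

def jackknife_excess_error(y):
    n = len(y)
    first = 0.0   # accumulates sum_i Q(y_i, eta_(i)(x_i))
    second = 0.0  # accumulates sum_i sum_j Q(y_j, eta_(i)(x_j))
    for i in range(n):
        pred_i = eta(y[:i] + y[i + 1:])  # rule built with point i deleted
        first += Q(y[i], pred_i)
        second += sum(Q(y[j], pred_i) for j in range(n))
    return first / n - second / n ** 2
```

For example, with the toy rule and `y = [0, 0, 1, 1, 1]`, deleting any point flips the majority, giving `first / n = 1.0` and `second / n**2 = 13/25`, so the estimate is `0.48`.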
The cross-validation estimate of expected excess error is
$$
\hat r_{\text{cross}} = \frac{1}{n}\sum_{i=1}^{n} Q\!\left(y_i,\, \eta_{\hat F_{(i)}}(x_i)\right)
\;-\; \frac{1}{n}\sum_{i=1}^{n} Q\!\left(y_i,\, \eta_{\hat F}(x_i)\right).
$$
Omit the patients from the training sample one at a time. For each omission,
apply the prediction rule to the remaining sample and count the number
(0 or 1) of errors that the realized prediction rule makes when it predicts
the omitted patient. In total, we apply the prediction rule n times and
predict the outcome of n patients. The proportion of errors made in these
n predictions is the cross-validation estimate of the error rate and is the
first term on the right-hand side. [Stone (1974) is a key reference on
cross-validation and has a good historical account. Also see Geisser
(1975).]
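The two terms map directly onto the description above: a leave-one-out error rate minus the apparent error rate of the rule fit to the full sample. A minimal sketch, again assuming a toy majority-label rule `eta` and 0-1 loss `Q` (neither specified in this excerpt):

```python
def eta(train_labels):
    # Toy rule (assumption): predict the majority label; smallest wins ties.
    return max(sorted(set(train_labels)), key=train_labels.count)

def Q(y, y_pred):
    return int(y != y_pred)  # 0-1 loss

def cross_validation_excess_error(y):
    n = len(y)
    # First term: each patient is predicted by the rule trained without them.
    loo = sum(Q(y[i], eta(y[:i] + y[i + 1:])) for i in range(n)) / n
    # Second term: apparent error of the rule trained on the full sample.
    apparent = sum(Q(yi, eta(y)) for yi in y) / n
    return loo - apparent
```

With `y = [0, 0, 1, 1, 1]` the toy rule gives a leave-one-out error of `1.0` and an apparent error of `0.4`, so the cross-validation estimate of excess error is `0.6`.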