conditions. If $F$ is twice differentiable around $t_0$, it is not unreasonable to expect that appropriate estimators of $F(t_0)$ will converge faster than $n^{1/3}$. This is suggested, firstly, by results in classical nonparametric kernel estimation of densities and regression functions, where kernel estimates of the functions of interest exhibit the $n^{2/5}$ convergence rate under a (local) twice-differentiability assumption on the functions, and, secondly, by the work of Mammen (1991) on kernel-based estimation of a smooth monotone function while respecting the monotonicity constraint. In a recent paper, Groeneboom et al. (2010) provide a detailed analysis of smoothed kernel estimates of $F$ in the current status model.
Two competing estimators are proposed by Groeneboom et al. (2010): the MSLE (maximum smoothed likelihood estimator), originally introduced by Eggermont and LaRiccia (2001) in the context of density estimation, which is a general likelihood-based M estimator and turns out to be automatically smooth, and the SMLE (smoothed maximum likelihood estimator), which is obtained by convolving the usual MLE with a smooth kernel. If $P_n$ denotes the empirical measure of the $\{\Delta_i, U_i\}$'s, the log-likelihood function can be written as
\[
l_n(F) = \int \{\delta \log F(u) + (1 - \delta) \log(1 - F(u))\}\, dP_n(\delta, u) .
\]
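Since $P_n$ is the empirical measure, the integral in $l_n(F)$ reduces to an average over the observed pairs $(\Delta_j, U_j)$. A minimal sketch in Python; the simulated exponential/uniform setup and all names are illustrative assumptions, not from the text:

```python
import numpy as np

def current_status_loglik(F, delta, U):
    """Current status log-likelihood l_n(F).

    F     : candidate distribution function (vectorized callable)
    delta : 0/1 current status indicators (delta_j = 1 iff T_j <= U_j)
    U     : observation times

    Integrating against the empirical measure P_n amounts to
    averaging over the observed pairs (delta_j, U_j).
    """
    Fu = np.clip(F(U), 1e-12, 1 - 1e-12)  # guard against log(0)
    return np.mean(delta * np.log(Fu) + (1 - delta) * np.log(1 - Fu))

# toy illustration (hypothetical data-generating mechanism)
rng = np.random.default_rng(0)
T = rng.exponential(1.0, size=500)    # latent event times, never observed directly
U = rng.uniform(0.0, 3.0, size=500)   # observation times
delta = (T <= U).astype(float)        # current status at time U
F_true = lambda t: 1 - np.exp(-t)     # true Exp(1) distribution function
print(current_status_loglik(F_true, delta, U))
```

Note that only $(\Delta_j, U_j)$ enters the likelihood; the event times $T_j$ themselves are latent in the current status model.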
For $i \in \{0, 1\}$, define the empirical sub-distribution functions
\[
G_{n,i}(u) = \frac{1}{n} \sum_{j=1}^{n} 1_{[0,u] \times \{i\}}(U_j, \Delta_j) .
\]
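The indicator simply counts observations with $U_j \le u$ and $\Delta_j = i$, so each $G_{n,i}$ is a step function taking jumps of size $1/n$. A small sketch (data and names are hypothetical):

```python
import numpy as np

def G_ni(u, U, delta, i):
    """Empirical sub-distribution G_{n,i}(u) = (1/n) #{j : U_j <= u, delta_j = i}."""
    U = np.asarray(U)
    delta = np.asarray(delta)
    return np.mean((U <= u) & (delta == i))

# hypothetical current status sample
U = np.array([0.5, 1.2, 2.0, 0.8, 1.5])
delta = np.array([1, 0, 1, 1, 0])
print(G_ni(1.3, U, delta, 1))  # fraction with U_j <= 1.3 and delta_j = 1
```

By construction $G_{n,0}(u) + G_{n,1}(u)$ is the ordinary empirical distribution function of the $U_j$'s, so the two sub-distributions sum to 1 for $u$ beyond the largest observation time.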
Note that $dP_n(\delta, u) = \delta\, dG_{n,1}(u) + (1 - \delta)\, dG_{n,0}(u)$. Now, consider a probability density $k$ that has support $[-1, 1]$, is symmetric and twice continuously differentiable on $\mathbb{R}$; let $K$ denote the corresponding distribution function; and let $K_h(u) = K(u/h)$ and $k_h(u) = (1/h)\, k(u/h)$, where $h > 0$. Consider now kernel-smoothed versions of the $G_{n,i}$'s given by
\[
\tilde{G}_{n,i}(t) = \int_{[0,t]} \tilde{g}_{n,i}(u)\, du \quad \text{for } i = 0, 1 ,
\]
where
\[
\tilde{g}_{n,i}(t) = \int k_h(t - u)\, dG_{n,i}(u) .
\]
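Because $G_{n,i}$ places mass $1/n$ at each $U_j$ with $\Delta_j = i$, the smoothed sub-density reduces to the finite sum $n^{-1} \sum_{j : \Delta_j = i} k_h(t - U_j)$. A sketch using the triweight kernel $k(u) = \tfrac{35}{32}(1 - u^2)^3$ on $[-1, 1]$, one density meeting the stated support, symmetry, and smoothness requirements; the kernel choice and data here are illustrative assumptions:

```python
import numpy as np

def k(u):
    """Triweight kernel: symmetric, supported on [-1, 1], and twice
    continuously differentiable on all of R (its first and second
    derivatives vanish at the endpoints +-1)."""
    return np.where(np.abs(u) <= 1, (35.0 / 32.0) * (1 - u**2)**3, 0.0)

def g_smooth(t, U, delta, i, h):
    """Kernel-smoothed sub-density: int k_h(t - u) dG_{n,i}(u),
    i.e., the average of k_h(t - U_j) over observations with delta_j = i."""
    U = np.asarray(U)
    delta = np.asarray(delta)
    return np.mean(np.where(delta == i, k((t - U) / h) / h, 0.0))

# hypothetical sample and bandwidth
U = np.array([0.5, 1.2, 2.0, 0.8, 1.5])
delta = np.array([1, 0, 1, 1, 0])
print(g_smooth(1.0, U, delta, 1, h=0.7))
```

Since each $k_h(\cdot - U_j)$ integrates to one, the two smoothed sub-densities together integrate to one, mirroring the decomposition of $dP_n$ above.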