piecewise constant functions defined as
\[ m(t; b) = \sum_{j=1}^{m} b_j \, I[t_{j-1} < t \le t_j], \]
and Huang and Rossini (1997) chose the continuous piecewise linear functions
with the form
\[ m(t; b) = \sum_{j=1}^{m} \frac{(b_j - b_{j-1})\, t - (b_j t_{j-1} - b_{j-1} t_j)}{t_j - t_{j-1}} \, I[t_{j-1} < t \le t_j], \]
where $m_0 \le b_0 \le b_1 \le \cdots \le b_m \le M_0$ for $-\infty < m_0 < M_0 < \infty$. The MLE
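As a concrete illustration not taken from the text, the two sieve bases above can be evaluated with NumPy. The knot grid `knots` and the coefficient vectors are hypothetical; note that the constant basis uses coefficients $b_1, \ldots, b_m$ (one per interval) while the linear basis uses $b_0, \ldots, b_m$ (one per knot):

```python
import numpy as np

def sieve_constant(t, knots, b):
    """Piecewise-constant sieve: m(t; b) = b_j on (t_{j-1}, t_j].

    knots has length m + 1 (t_0, ..., t_m); b has length m (b_1, ..., b_m).
    """
    t = np.asarray(t, dtype=float)
    # side="left" assigns t = t_j to interval j, matching the half-open (t_{j-1}, t_j]
    j = np.clip(np.searchsorted(knots, t, side="left"), 1, len(knots) - 1)
    return b[j - 1]

def sieve_linear(t, knots, b):
    """Continuous piecewise-linear sieve interpolating the points (t_j, b_j).

    knots and b both have length m + 1 (b_0, ..., b_m).
    """
    t = np.asarray(t, dtype=float)
    j = np.clip(np.searchsorted(knots, t, side="left"), 1, len(knots) - 1)
    t0, t1 = knots[j - 1], knots[j]
    b0, b1 = b[j - 1], b[j]
    # ((b_j - b_{j-1}) t - (b_j t_{j-1} - b_{j-1} t_j)) / (t_j - t_{j-1})
    return ((b1 - b0) * t - (b1 * t0 - b0 * t1)) / (t1 - t0)
```

By construction `sieve_linear` equals $b_{j-1}$ at $t_{j-1}$ and $b_j$ at $t_j$, so it is continuous across knots, whereas `sieve_constant` jumps there.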
based on the sieve functions is then given by the solution of the score
equations
\[ U(\beta, b) = \begin{pmatrix} U_\beta(\beta, b) \\ U_b(\beta, b) \end{pmatrix} = \begin{pmatrix} \partial l(\beta, b)/\partial \beta \\ \partial l(\beta, b)/\partial b \end{pmatrix} = 0. \]
Denote the sieve MLEs as $\hat\beta_s$ and $\hat b_s$. The variance-covariance
matrix of $\hat\beta_s$ and $\hat b_s$ can then be estimated by the inverse of
the observed Fisher information matrix, which requires the partial derivatives
$\partial U(\beta, b)/\partial \beta$ and $\partial U(\beta, b)/\partial b$
evaluated at $\beta = \hat\beta_s$ and $b = \hat b_s$. Because sieve estimation
depends only on the chosen partition of the support of the $C_i$'s rather than
on every distinct observation time, its biggest advantage is that it works well
when the number of distinct observed time points is large. In addition, the
convergence rate of the sieve MLE is faster than that of the usual MLE.
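The variance step above, inverting the observed Fisher information at the root of the score equations, can be sketched numerically. The one-parameter exponential model and the helper below are hypothetical illustrations of that step, not the model or code of the text:

```python
import numpy as np

def observed_information(score, theta_hat, eps=1e-5):
    """Central-difference Jacobian of the score at theta_hat, negated.

    The observed information is minus the derivative of the score
    (i.e., minus the Hessian of the log-likelihood).
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    p = theta_hat.size
    jac = np.zeros((p, p))
    for k in range(p):
        step = np.zeros(p)
        step[k] = eps
        jac[:, k] = (score(theta_hat + step) - score(theta_hat - step)) / (2.0 * eps)
    return -jac

# Toy model (hypothetical): i.i.d. Exponential observations with unknown rate.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0 / 2.0, size=500)  # true rate = 2.0
n, s = x.size, x.sum()

def score(theta):
    # dl/d(rate) for l(rate) = n * log(rate) - rate * sum(x)
    return np.array([n / theta[0] - s])

rate_hat = n / s                      # closed-form root of the score equation
info = observed_information(score, np.array([rate_hat]))
var_hat = np.linalg.inv(info)[0, 0]   # estimated variance of rate_hat
```

For this model the inverse observed information reduces to the familiar $\hat\lambda^2 / n$, which the numerical routine recovers up to finite-difference error.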
Problems arise, however, in the choice of the partition (the number of knots
and the bandwidth) and in the choice of the sieve functions. Generally, the
number of partition intervals $m$ should increase with the sample size $n$.
Huang and Rossini (1997) suggested taking $m$ to be an integer of order
$O(n^\nu)$ for $0 < \nu < 1$, with $\max_{1 \le j \le m} (t_j - t_{j-1}) \le
C n^{-\nu}$ for some constant $C$. Rossini and Tsiatis (1996) proved that when
$1/4 < \nu < 1$, $\hat\beta_s$ and $\hat S_s$ are consistent. Furthermore,
$\sqrt{n}(\hat\beta_s - \beta_0)$ converges to a normal distribution with mean
0, and its variance-covariance matrix achieves the information lower bound,
which establishes the efficiency of the sieve MLE of $\beta$.
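The partition rule just described ($m$ of order $n^\nu$ with bounded maximal spacing) can be sketched as follows. The specific choice $\nu = 1/3$ and the placement of knots at empirical quantiles of the observation times are assumptions for illustration, not prescriptions from the text:

```python
import numpy as np

def sieve_knots(c, nu=1.0 / 3.0):
    """Choose m ~ n**nu partition intervals over the support of the C_i's.

    Knots are placed at empirical quantiles (an assumed, not prescribed,
    placement), so intervals hold roughly equal numbers of observations.
    """
    c = np.asarray(c, dtype=float)
    n = c.size
    m = max(1, int(np.ceil(n ** nu)))   # number of partition intervals
    qs = np.linspace(0.0, 1.0, m + 1)   # quantile levels for t_0 < ... < t_m
    return np.unique(np.quantile(c, qs))

rng = np.random.default_rng(1)
c = rng.uniform(0.0, 5.0, size=1000)    # hypothetical examination times C_i
knots = sieve_knots(c)                  # about n**(1/3) ~ 10 intervals
```

Quantile placement keeps the partition data-adaptive; for roughly uniform observation times it also keeps the maximal spacing $\max_j (t_j - t_{j-1})$ shrinking at the $n^{-\nu}$ rate the text requires.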