as
$$\alpha(t) = \sum_{j=1}^{k} x_j\, I(t_{j-1} < t \le t_j)$$
with $t_0 = 0$ and $x_1 > x_2 > \cdots > x_k$ because $\alpha(t) = \mathrm{logit}[S_0(t)]$ is a non-increasing function. We can reparameterize it as $\alpha(t) = \gamma_0 - \sum_{j:\,t_j < t} \exp(\gamma_j)$.
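To see the reparameterization at work, here is a minimal Python sketch (the function name alpha_step and its argument layout are illustrative, not from the text) that evaluates $\alpha(t)$ from unconstrained parameters; because $\exp(\gamma_j) > 0$ for every real $\gamma_j$, the ordering $x_1 > x_2 > \cdots > x_k$ holds automatically, with no explicit constraint needed in the maximization.

import numpy as np

def alpha_step(t, gamma0, gamma_rest, jump_times):
    """Evaluate alpha(t) = gamma0 - sum_{j: t_j < t} exp(gamma_j).

    jump_times holds t_1 < ... < t_{k-1} (sorted) and gamma_rest holds
    gamma_1, ..., gamma_{k-1}; each exp(gamma_j) is a positive decrement,
    so alpha is non-increasing in t by construction.
    """
    drops = np.exp(np.asarray(gamma_rest))                   # positive step decrements
    n_active = np.searchsorted(jump_times, t, side="left")   # #{j: t_j < t}
    return gamma0 - drops[:n_active].sum()

For example, alpha_step(0.5, 1.0, np.log([0.3, 0.2]), np.array([0.4, 0.8])) returns 1.0 - 0.3 = 0.7, since only t_1 = 0.4 lies strictly below t = 0.5.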
The log-likelihood thus can be rewritten as
$$l(\gamma,\theta) = \sum_{i=1}^{n} \left\{ (1-\delta_i)\Bigl(\gamma_0 - \sum_{j:\,t_j < C_i} \exp(\gamma_j) + Z_i'\theta\Bigr) - \log\Bigl[1 + \exp\Bigl(\gamma_0 - \sum_{j:\,t_j < C_i} \exp(\gamma_j) + Z_i'\theta\Bigr)\Bigr] \right\}.$$
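A minimal sketch of computing this log-likelihood from current status data $(\delta_i, C_i, Z_i)$ follows; the function name log_lik and the data layout are assumptions for illustration, and $\log(1 + e^{\eta})$ is evaluated through numpy's logaddexp for numerical stability.

import numpy as np

def log_lik(gamma0, gamma_rest, theta, delta, C, Z, jump_times):
    """l(gamma, theta) with eta_i = gamma0 - sum_{j: t_j < C_i} exp(gamma_j) + Z_i' theta;
    each observation contributes (1 - delta_i) * eta_i - log(1 + exp(eta_i))."""
    drops = np.exp(gamma_rest)
    cum = np.concatenate(([0.0], np.cumsum(drops)))          # cum[m] = first m decrements
    n_active = np.searchsorted(jump_times, C, side="left")   # #{j: t_j < C_i} per subject
    eta = gamma0 - cum[n_active] + Z @ theta
    return np.sum((1.0 - delta) * eta - np.logaddexp(0.0, eta))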
The MLE can be computed by solving the equations $\partial l(\gamma,\theta)/\partial \gamma = 0$ and $\partial l(\gamma,\theta)/\partial \theta = 0$ simultaneously or by applying the Newton-Raphson algorithm,
which requires the first and second derivatives of the log-likelihood with respect to the parameters. Similar to the results in Section 4.2, under some regularity conditions, the MLEs $\hat{\theta}_n$ and $\hat{S}_n(t)$ are consistent estimators of $\theta_0$ and $S_0(t)$, with properties similar to those of the estimator given in Section 4.2; that is, the overall convergence rate is $n^{1/3}$, but the estimator of the regression parameter $\hat{\theta}_n$ can achieve the $\sqrt{n}$ convergence rate. The variance-covariance matrix of $\hat{\theta}_n$ achieves the information bound, indicating that $\hat{\theta}_n$ is an efficient estimator of $\theta_0$. Details of the proof can be found in Huang (1995).
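In practice the two sets of derivatives are seldom coded by hand; a quasi-Newton optimizer applied to the negative log-likelihood reaches the same maximizer. Below is a hypothetical sketch assuming the log_lik function from the previous snippet and data arrays delta, C, Z, jump_times already in memory; BFGS builds up curvature information from successive gradients, standing in for the explicit Hessian a Newton-Raphson iteration would require.

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, delta, C, Z, jump_times):
    # unpack: gamma0, then the k-1 remaining gamma_j's, then theta
    k = len(jump_times) + 1
    gamma0, gamma_rest, theta = params[0], params[1:k], params[k:]
    return -log_lik(gamma0, gamma_rest, theta, delta, C, Z, jump_times)

x0 = np.zeros(len(jump_times) + 1 + Z.shape[1])   # crude starting values
fit = minimize(neg_log_lik, x0, args=(delta, C, Z, jump_times), method="BFGS")
theta_hat = fit.x[len(jump_times) + 1:]           # estimated regression parameters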
Here we consider another approach, the sieve MLE. The idea of sieve estimation is the same as the method discussed above, that is, approximating the infinite-dimensional parameter space of the baseline $\alpha(t)$ by a sequence of finite-dimensional parameters $\gamma$, where $\gamma \in \Gamma$ with $\Gamma$ the finite-dimensional and bounded parameter space. Again, to maximize the log-likelihood $l(\gamma,\theta)$, we need only to maximize $l(\gamma,\theta)$ over $\gamma \in \Gamma$ and $\theta$ in the space $R^p$. Instead of using the step functions with jumps at $t_j, j = 1,\ldots,k$, sieve estimation defines a partition $0 = t_0 < t_1 < \cdots < t_{m-1} < t_m = \tau$ of $[0,\tau]$, with $[0,\tau]$ the bounded support of the $C_i$'s. There are many possible choices for this function; Rossini and Tsiatis (1996) used