9.3 Observed Information Matrix in Sieve MLE
We now apply the method described in Section 9.2 to a class of sieve MLEs.
We show that if the parameter space and the space H can be approximated
by a common approximation space, then the least-squares calculation in Sec-
tion 9.2 yields the observed information matrix. In other words, computation
of the observed information matrix is equivalent to solving the least-squares
problem of Section 9.2. So there is no need to actually carry out the least-
squares computation when the observed information can be computed as in
the ordinary setting of parametric estimation. This is computationally convenient because the observed information matrix is based on either the first or the second derivatives of the log-likelihood function, and these derivatives are often already available in the numerical algorithm used to compute the MLE of the unknown regression parameters.
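For instance, in the ordinary parametric setting the observed information is just the negative Hessian of the log-likelihood evaluated at the MLE. The sketch below, in Python, illustrates one standard numerical route via central finite differences; the function loglik and the step size eps are illustrative assumptions, not objects defined in this chapter.

    import numpy as np

    def observed_information(loglik, theta_hat, eps=1e-5):
        # Observed information: the negative Hessian of the log-likelihood,
        # approximated by central finite differences at the MLE theta_hat.
        # `loglik` is a hypothetical user-supplied function of the parameter
        # vector; eps is a step size chosen purely for illustration.
        d = len(theta_hat)
        hessian = np.zeros((d, d))
        for j in range(d):
            for k in range(d):
                ej = eps * np.eye(d)[j]
                ek = eps * np.eye(d)[k]
                # central second difference for d^2 loglik / dtheta_j dtheta_k
                hessian[j, k] = (loglik(theta_hat + ej + ek)
                                 - loglik(theta_hat + ej - ek)
                                 - loglik(theta_hat - ej + ek)
                                 + loglik(theta_hat - ej - ek)) / (4 * eps ** 2)
        return -hessian  # observed information matrix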
On the other hand, for problems in which direct computation of the ob-
served information matrix is difficult, one can instead solve the least-squares
nonparametric regression problem to obtain the observed information matrix.
These nonparametric regression problems can be solved using standard least-
squares fitting programs.
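To make the least-squares route concrete, the following sketch assumes the per-observation scores for the parametric component, and along K basis directions approximating the space H, have already been evaluated at the sieve MLE. The array names and the residual-based assembly are our assumptions, standing in for the formulas of Section 9.2.

    import numpy as np

    def ls_observed_information(score_theta, score_basis):
        # score_theta: (n, d) per-observation scores for the parametric part
        #              at the sieve MLE.
        # score_basis: (n, K) per-observation scores along K directions
        #              spanning the approximation space for H.
        # Regress each component of the theta-score on the basis scores and
        # assemble the information estimate from the least-squares residuals.
        coef, *_ = np.linalg.lstsq(score_basis, score_theta, rcond=None)
        residuals = score_theta - score_basis @ coef
        return residuals.T @ residuals  # d x d observed-information estimate

Any standard least-squares fitting program can stand in for np.linalg.lstsq here; the point is only that the residuals of the fit carry the information estimate.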
As in finite-dimensional parametric models, some regularity conditions are required for the MLE $\hat{\theta}_n$ to be root-n consistent and asymptotically normal.
These regularity conditions usually include certain smoothness assumptions on the infinite-dimensional parameter and the underlying probability model. Consequently, the least favorable direction will be a smooth function, such as a bounded Lipschitz function. Then we can take H to be the class of such smooth functions. Many spaces designed for efficient computation can be used to approximate an element of H under an appropriately defined distance. For example, we may use the space of polynomial spline functions (Schumaker, 1981). This
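As one hedged illustration of such an approximation space, the sketch below builds a B-spline design matrix with scipy; the knot placement, spline degree, and function name are illustrative choices rather than prescriptions from the text.

    import numpy as np
    from scipy.interpolate import BSpline

    def bspline_basis(x, knots, degree=3):
        # Columns of the returned (len(x), K) matrix span a polynomial spline
        # space on [knots[0], knots[-1]]; boundary knots are repeated so the
        # basis is well defined at the endpoints.
        t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
        K = len(t) - degree - 1
        design = np.empty((len(x), K))
        for j in range(K):
            coef = np.zeros(K)
            coef[j] = 1.0                 # isolate the j-th basis function
            design[:, j] = BSpline(t, coef, degree)(x)
        return design

    # Example: a cubic-spline design matrix on [0, 1] with six knots, usable
    # as score_basis-style input to a least-squares routine like the one above.
    B = bspline_basis(np.linspace(0.0, 1.0, 200), np.linspace(0.0, 1.0, 6))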