Section 9.6 concludes with some discussions. Some technical details are included
in the appendices.
9.2 Consistent Information Estimation
Let $X_1, \ldots, X_n$ be independent random variables with a common probability
measure $P_{\theta,\eta}$, where $(\theta,\eta) \in \Theta \times \mathcal{H}$. Here, $\Theta$ is an open subset of $\mathbb{R}^d$ and
$\mathcal{H}$ is a general space equipped with a norm $\|\cdot\|_{\mathcal{H}}$. Assume that $P_{\theta,\eta}$ has a
density $p(\cdot\,; \theta, \eta)$ with respect to a $\sigma$-finite measure. Denote $\tau = (\theta,\eta)$ and let
$\tau_0 = (\theta_0, \eta_0) \in \Theta \times \mathcal{H}$ be the true parameter value under which the data are
generated. Suppose there exists a sequence of finite-dimensional spaces $\{\mathcal{H}_n\}$
that converges to $\mathcal{H}$, in the sense that for any $\eta \in \mathcal{H}$ we can find $\eta_n \in \mathcal{H}_n$
such that $\|\eta_n - \eta\|_{\mathcal{H}} \to 0$ as $n \to \infty$. The (sieve) MLE of $\tau_0$ is the value
$\hat{\tau}_n \equiv (\hat{\theta}_n, \hat{\eta}_n)$ that maximizes the log-likelihood
$$\ell_n(\tau) = \sum_{i=1}^{n} \log p(X_i; \theta, \eta)$$
over the parameter space $\mathcal{T}_n \equiv \Theta \times \mathcal{H}_n$. Here we assume that the MLE exists.
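To make the sieve construction concrete, here is a minimal numerical sketch, not taken from the text: a partially linear Gaussian model $Y = \theta X + \eta(Z) + \varepsilon$ in which the unknown function $\eta$ is approximated in the sieve $\mathcal{H}_n$ of polynomials of degree $d_n$, so that maximizing the Gaussian log-likelihood $\ell_n(\tau)$ over $\Theta \times \mathcal{H}_n$ reduces to least squares. The specific model, the true $\eta_0$, and the choice of $d_n$ are all illustrative assumptions.

```python
# Hypothetical sieve MLE sketch: partially linear model
#   Y = theta * X + eta(Z) + N(0,1) noise,
# with eta approximated in H_n = {polynomials in Z of degree <= d_n}.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
theta0 = 1.5
eta0 = lambda z: np.sin(2 * np.pi * z)   # assumed true nuisance function
Z = rng.uniform(0, 1, size=n)
X = rng.normal(size=n)                   # X independent of Z (identifiability)
Y = theta0 * X + eta0(Z) + rng.normal(size=n)

d_n = 6                                  # sieve dimension, growing slowly with n
# Columns: [X | 1, Z, Z^2, ..., Z^d_n].  For a Gaussian likelihood,
# maximizing l_n over (theta, sieve coefficients) is exactly least squares.
B = np.vander(Z, d_n + 1, increasing=True)
D = np.column_stack([X, B])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
theta_hat = coef[0]
print(theta_hat)                         # should be close to theta0 = 1.5
```

Increasing $d_n$ with $n$ at a suitable rate is what makes $\{\mathcal{H}_n\}$ converge to $\mathcal{H}$ in the sense above.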
Denote the Euclidean norm on $\mathbb{R}^d$ by $\|\cdot\|$. Suppose it has been shown that
$$\|\hat{\tau}_n - \tau_0\|_{\mathcal{T}_n} \equiv \left\{ \|\hat{\theta}_n - \theta_0\|^2 + \|\hat{\eta}_n - \eta_0\|_{\mathcal{H}}^2 \right\}^{1/2} = O_p(r_n^{-1}), \qquad (9.1)$$
where $r_n$ is a sequence of numbers converging to infinity. Consistency and
rate of convergence in nonparametric and semiparametric models have been
addressed by several authors; see, for example, van de Geer (1993), Shen and
Wong (1994), and van der Vaart and Wellner (1996). The results and methods
developed by these authors can often be used to verify Equation (9.1).
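In simple cases a rate statement like Equation (9.1) can be checked by simulation. The toy sketch below, an illustrative assumption rather than an example from the text, uses the sample mean of $N(0,1)$ data, for which $r_n = \sqrt{n}$: the scaled error $r_n |\hat{\theta}_n - \theta_0|$ remains bounded in probability, its empirical 0.95-quantile hovering near the 0.95-quantile of $|N(0,1)|$, about 1.96.

```python
# Hypothetical Monte Carlo check of a rate statement like (9.1):
# theta_hat = sample mean, theta_0 = 0, r_n = sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
q95 = {}
for n in [100, 1000, 10000]:
    # 300 replications of the scaled estimation error sqrt(n) * |mean - 0|
    errs = np.array([abs(rng.normal(size=n).mean()) for _ in range(300)])
    q95[n] = np.quantile(np.sqrt(n) * errs, 0.95)
    print(n, q95[n])   # stays bounded as n grows (near the |N(0,1)| 0.95-quantile)
```

The same device, with $\hat{\theta}_n$ replaced by the sieve MLE, is a quick sanity check before attempting a formal verification of Equation (9.1).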
The motivation to study consistent information estimation is the following.
In many semiparametric models, in addition to that in Equation (9.1), it can