In practice, we may replace $\sigma^2$ by its MLE under the full model, denoted by $\hat{\sigma}^2_F$. To select the best subset of random effect covariates, we then minimize
\[
\sum_{i=1}^{n} \bigl\| y_i - x_{is}\hat{\beta}_s - z_{is}\hat{b}_{is} \bigr\|^2 + 2\hat{\sigma}^2_F\,\hat{\rho}
\]
over all possible subsets, where $\hat{\rho}$ is an estimate of $\rho$, the effective degrees of freedom of the fitted model; this approach is similar to the $C_p$ criterion for the linear regression model [31].
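As a concrete illustration, here is a minimal sketch of how one might evaluate this $C_p$-type criterion for a single candidate subset; the function name `cp_criterion` and its inputs are our own conventions, and the fitted quantities $\hat{\beta}_s$, $\hat{b}_{is}$, $\hat{\sigma}^2_F$, and $\hat{\rho}$ are assumed to come from an external mixed-model fit.

```python
import numpy as np

def cp_criterion(y, X, Z, beta_hat, b_hat, sigma2_full, rho_hat):
    """Cp-type criterion for one candidate subset of covariates.

    y, X, Z : per-subject lists of arrays, shapes (n_i,), (n_i, p_s), (n_i, q_s)
    beta_hat: estimated fixed effects for the subset, shape (p_s,)
    b_hat   : per-subject predicted random effects, each of shape (q_s,)
    sigma2_full: MLE of sigma^2 under the full model
    rho_hat : estimate of the effective degrees of freedom
    """
    # Residual sum of squares around the subject-specific (conditional) means.
    rss = sum(np.sum((yi - Xi @ beta_hat - Zi @ bi) ** 2)
              for yi, Xi, Zi, bi in zip(y, X, Z, b_hat))
    # Complexity penalty 2 * sigma^2_F * rho_hat, as in Cp.
    return rss + 2.0 * sigma2_full * rho_hat
```

In practice one would refit each candidate subset, evaluate the criterion with the same $\hat{\sigma}^2_F$ throughout, and retain the minimizing subset.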
Alternatively, we may replace $\sigma^2$ by its conditional MLE, the maximizer of the conditional likelihood (2.8), i.e.,
\[
\hat{\sigma}^2_{cs} = \frac{1}{N} \sum_{i=1}^{n} \bigl\| y_i - x_{is}\hat{\beta}_s - z_{is}\hat{b}_{is} \bigr\|^2,
\]
where $N = \sum_{i=1}^{n} n_i$. Then we find a subset of $x$ and $z$ which minimizes
\[
N \log \hat{\sigma}^2_{cs} + 2\hat{\rho},
\]
and this can be viewed as an extension of AIC for linear regression models.
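Relative to the $C_p$-type sketch above, only the way the residual sum of squares enters the criterion changes; again, the function name and inputs below are illustrative assumptions rather than part of the original reference.

```python
import numpy as np

def caic_criterion(y, X, Z, beta_hat, b_hat, rho_hat):
    """AIC-type criterion N*log(sigma2_cs) + 2*rho_hat for one candidate subset."""
    rss = sum(np.sum((yi - Xi @ beta_hat - Zi @ bi) ** 2)
              for yi, Xi, Zi, bi in zip(y, X, Z, b_hat))
    N = sum(len(yi) for yi in y)      # N = sum_i n_i, the total sample size
    sigma2_cs = rss / N               # conditional MLE of sigma^2
    return N * np.log(sigma2_cs) + 2.0 * rho_hat
```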
Vaida and Blanchard [46] also proposed a finite sample correction for cAIC; here we omit the details.
Chen and Dunson [10] propose a hierarchical Bayesian model to identify random effects having zero variance. A key step in their approach is to apply a modified Cholesky decomposition to the covariance matrix $A$ of the random effects:
\[
A = D \Gamma \Gamma^T D, \tag{2.9}
\]
where $D = \mathrm{diag}\{d_1, \ldots, d_q\}$ is a diagonal matrix and $\Gamma$ is a lower triangular matrix with ones on its diagonal. Represent $b_i = D\Gamma v_i$, where $v_i = (v_{i1}, \ldots, v_{iq})^T$ is a vector of independent standard normal latent variables. Thus, model (2.1) can be rewritten as
\[
y_i = X_i \beta + Z_i D \Gamma v_i + \varepsilon_i.
\]
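A small numerical sketch (with illustrative values of $d$ and $\Gamma$ chosen by us) confirms that $b_i = D\Gamma v_i$ reproduces the covariance in (2.9) and shows why a zero diagonal element of $D$ switches the corresponding random effect off:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 3

# Illustrative parameters: d_2 = 0 should remove the second random effect.
d = np.array([1.5, 0.0, 0.8])
Gamma = np.array([[ 1.0, 0.0, 0.0],
                  [ 0.4, 1.0, 0.0],
                  [-0.2, 0.7, 1.0]])           # unit lower triangular
D = np.diag(d)

A = D @ Gamma @ Gamma.T @ D                    # covariance as in (2.9)

# Simulate b_i = D Gamma v_i with v_i ~ N(0, I_q); Cov(b_i) should match A.
V = rng.standard_normal((100_000, q))
B = V @ (D @ Gamma).T
print(np.allclose(np.cov(B.T), A, atol=0.05))  # True, up to Monte Carlo error
print(A[1])                                    # all zeros: effect 2 has zero variance
```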
Hence, we can select significant random effect variables by identifying the nonzero diagonal elements of $D$. This can be done by choosing mixture priors with positive point mass at zero for the $d_j$ under the Bayesian variable selection framework. Following standard convention, Chen and Dunson [10] choose conjugate priors for $\beta$ and $\sigma^2$. The modified Cholesky decomposition allows us to choose the prior for the nonzero off-diagonal elements of $\Gamma$, given $d_1, \ldots, d_q$, to be a normal distribution. With these priors, we are ready to run MCMC to obtain the posterior distribution of the parameters, including posterior probabilities for models. Since the priors for the $d_j$'s place positive mass at zero, the posterior probability that $d_j = 0$ is also positive, which gives a direct basis for excluding the $j$th random effect.
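The details of the sampler are beyond this section, but a prior-predictive sketch (ours, with an arbitrary mixing weight and slab scale rather than the hyperparameters of [10]) illustrates the point-mass mixture prior on a single $d_j$:

```python
import numpy as np

rng = np.random.default_rng(1)

pi0, slab_scale, n_draws = 0.5, 1.0, 10_000   # assumed hyperparameters

# d_j ~ pi0 * delta_0 + (1 - pi0) * half-normal(slab_scale):
# a draw is exactly zero with probability pi0, otherwise positive.
at_zero = rng.random(n_draws) < pi0
d_j = np.where(at_zero, 0.0, np.abs(rng.normal(0.0, slab_scale, n_draws)))

# The exact zeros are what allow the posterior to put positive probability
# on "the j-th random effect is absent".
print((d_j == 0).mean())                      # close to pi0
```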