Then the PRESS statistic can be calculated without fitting the model n
times. Although the parameter θ is unknown in practice, we replace θ in
PRESS by its MLE or residual maximum likelihood estimate (REMLE).
Then one selects the best subset by minimizing the PRESS statistic over
all 2^d possible subsets. Liu et al. [30] also studied the theoretical properties
of PRESS. For linear regression models, leave-one-out cross-validation is
asymptotically equivalent to the C_p criterion and the AIC criterion (see
[43]), and intuitively such a relationship should still hold for model (2.2).
Thus, the PRESS variable selection criterion will be asymptotically incon-
sistent, i.e., the probability of selecting the smallest correct model does not
converge to 1 as either n or N goes to ∞.
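To make the first claim concrete in the linear-regression case the text refers to, here is a minimal sketch showing that PRESS can be computed from a single fit via the leverage shortcut PRESS = Σ_i (e_i / (1 − h_ii))², rather than by n refits. The data and model here are illustrative stand-ins, not model (2.2) itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, d))])
beta = np.array([1.0, 2.0, 0.0, -1.5])
y = X @ beta + rng.normal(size=n)

# Hat matrix H = X (X'X)^{-1} X'; its diagonal gives the leverages h_ii.
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)
resid = y - H @ y

# PRESS via the leave-one-out shortcut: one fit, no n refits.
press_fast = np.sum((resid / (1.0 - h)) ** 2)

# Brute force for comparison: refit n times, predict each held-out point.
press_slow = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    press_slow += (y[i] - X[i] @ b) ** 2

assert np.isclose(press_fast, press_slow)
```

The two computations agree exactly; for mixed effects models the analogous shortcut is what lets PRESS avoid refitting, as stated above.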
We then introduce penalized likelihood approaches, such as AIC and
BIC, for model (2.2). Pauler [38] derived the BIC and its modifications for
linear mixed effects models, and Vaida and Blanchard [46] proposed conditional
AIC for mixed effects models. Let ℓ_i(β, θ) be the logarithm of the condi-
tional likelihood function of y_i given x_i and z_i. Then define a penalized
conditional log-likelihood function as

$$\frac{1}{n}\sum_{i=1}^{n} \ell_i(\beta, \theta) \;-\; \sum_{j=1}^{d} p_{\lambda_j}(|\beta_j|), \qquad (2.5)$$
where p_{λ_j}(·) is a penalty function with a regularization parameter λ_j. Max-
imizing (2.5) yields a penalized likelihood estimate. λ_j controls model com-
plexity, and can be set to a fixed value (as in AIC or BIC) or chosen adap-
tively by a data-driven method such as generalized cross-validation
(GCV) [11]. In fact, the tuning parameters λ_j need not be the same for all j;
this allows us to incorporate prior information for the unknown coefficients
by using different values for each predictor. For instance, we may wish to
be sure of keeping certain theoretically important predictors in the model,
so we might choose not to penalize their coefficients.
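The per-coefficient penalties λ_j, including setting λ_j = 0 for predictors we want to keep, can be sketched with a simple penalized least-squares objective in place of the conditional likelihood in (2.5). This is an illustrative coordinate-descent implementation with the ℓ₁ penalty p_λ(|β|) = λ|β|, not the estimator studied in the chapter:

```python
import numpy as np

def penalized_ls(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + sum_j lam[j] * |b_j|.

    lam is a vector of per-coefficient penalties; lam[j] = 0 leaves
    predictor j unpenalized (it is always kept in the model).
    Gaussian least squares stands in here for the conditional likelihood.
    """
    n, d = X.shape
    b = np.zeros(d)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ b + X[:, j] * b[j]      # partial residual for j
            rho = X[:, j] @ r / n
            # Soft-threshold update: coordinates with small signal and
            # positive lam[j] are set exactly to zero.
            b[j] = np.sign(rho) * max(abs(rho) - lam[j], 0.0) / col_ss[j]
    return b

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

lam = np.array([0.0, 0.3, 0.3, 0.3, 0.3])  # predictor 0 kept unpenalized
b_hat = penalized_ls(X, y, lam)
```

With this choice of λ the three null coefficients are estimated as exactly zero, while the unpenalized first coefficient is essentially its least-squares estimate, illustrating how unequal tuning parameters encode prior information.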
Residual (restricted) maximum likelihood (REML) is often used to con-
struct an unbiased estimate for θ in mixed effects models. Thus, we might
consider penalized residual likelihood instead of penalized conditional like-
lihood; see [20] for a discussion of penalized REML. As yet another alter-
native, we may consider penalized profile likelihood by replacing the con-
ditional likelihood Σ_i ℓ_i(β, θ) by the profile likelihood Σ_{i=1}^n ℓ_i(β, θ̂(β)),
where θ̂(β) is the MLE of θ given β. Throughout this paper, we focus on
the penalized likelihood (2.5).
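The profiling step can be illustrated with the simplest possible case: a Gaussian sample where the variance plays the role of θ and is profiled out in closed form given the mean. This toy example, not the mixed-model likelihood of the chapter, just shows that maximizing the profile log-likelihood recovers the same estimate as joint maximization:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=4.0, scale=2.0, size=500)
n = len(y)

def profile_loglik(mu):
    # Profile out the variance: given mu, its MLE is mean((y - mu)^2),
    # which plugged back in gives the profile log-likelihood in mu alone.
    sigma2_hat = np.mean((y - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)

# Maximize the profile log-likelihood over a fine grid of mu values.
grid = np.linspace(2.0, 6.0, 4001)
mu_hat = grid[np.argmax([profile_loglik(m) for m in grid])]

# The profile maximizer matches the joint MLE, the sample mean.
assert abs(mu_hat - y.mean()) < 1e-3
```

In the penalized profile likelihood above, the same idea is applied with θ̂(β) substituted before the penalty on β is imposed.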