et al. [22] imposed a normality assumption on the population, and implemented covariance selection by minimizing the following negative penalized likelihood function with $L_q$ penalty:
$$\sum_{t=1}^{m}\left\{ n\log\sigma_t^2 + \frac{1}{\sigma_t^2}\sum_{i=1}^{n}\Big(e_{it}-\sum_{j=1}^{t-1}\phi_{tj}e_{ij}\Big)^2\right\} + \lambda\sum_{t=2}^{m}\sum_{j=1}^{t-1}|\phi_{tj}|^{q}.$$
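For intuition, the decomposition behind this likelihood can be checked numerically: writing $\Sigma = L^{-1}D(L^{-1})^{T}$ with $L$ unit lower triangular and $D$ diagonal, the below-diagonal entries of $L$ are the negated autoregressive coefficients $\phi_{tj}$, and the diagonal of $D$ holds the innovation variances $\sigma_t^2$. A minimal numpy sketch, using a made-up AR(1)-type covariance (not data from the text):

```python
import numpy as np

# Made-up AR(1)-type covariance matrix (illustrative only).
m = 4
Sigma = 0.6 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

# Ordinary Cholesky Sigma = C C^T, then split C into a unit lower
# triangular factor M and the diagonal scales.
C = np.linalg.cholesky(Sigma)
M = C / np.diag(C)             # divide column j by C[j, j] -> unit diagonal
D = np.diag(np.diag(C) ** 2)   # D holds the innovation variances sigma_t^2
L = np.linalg.inv(M)           # so that L e = u has Cov(u) = D

# Sanity check: Sigma = L^{-1} D (L^{-1})^T, and L is unit lower
# triangular with below-diagonal entries -phi_tj.
Linv = np.linalg.inv(L)
assert np.allclose(Sigma, Linv @ D @ Linv.T)
assert np.allclose(np.diag(L), 1.0)
```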
Note that since $D$ is diagonal, $u_{i1},\ldots,u_{id}$ are uncorrelated. The AR representation for elements of $L$ and $D$ allows us to use penalized least squares for covariance selection (see [28]). Thus, without the normality assumption, we are still able to parsimoniously estimate the covariance matrix.
We first estimate $\sigma_t^2$ using the mean squared errors from model (3.1). For $t = 2,\ldots,m$, the covariance matrix structure can be selected by minimizing the following penalized least squares functions:
$$\frac{1}{2n}\sum_{i=1}^{n}\Big(e_{it}-\sum_{j=1}^{t-1}\phi_{tj}e_{ij}\Big)^2 + \sum_{j=1}^{t-1} p_{\lambda_{t,j}}(|\phi_{tj}|), \qquad (3.2)$$
where the $p_{\lambda_{t,j}}(\cdot)$'s are penalty functions with tuning parameters $\lambda_{t,j}$. This reduces the non-sparse elements in the lower triangular matrix $L$. With estimated $L$ and $D$, the covariance matrix $\Sigma$ can be easily estimated by $\widehat{L}^{-1}\widehat{D}(\widehat{L}^{-1})^{T}$.
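A compact numerical sketch of this row-by-row scheme, using an $L_1$ penalty $p_{\lambda}(|\phi|)=\lambda|\phi|$ solved by plain coordinate descent; the simulated residuals, tuning parameter, and sweep count are hypothetical choices for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 200, 5, 0.05

# Simulated residuals e_i from an AR(1)-type covariance (hypothetical data).
true_Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
e = rng.multivariate_normal(np.zeros(m), true_Sigma, size=n)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Row by row: for each t, regress e_it on e_i1, ..., e_i,t-1 minimizing
# (1/2n) * RSS + lam * sum_j |phi_tj|, as in (3.2) with an L1 penalty.
L = np.eye(m)                 # unit lower triangular, off-diagonals = -phi_tj
sigma2 = np.empty(m)
sigma2[0] = np.mean(e[:, 0] ** 2)
for t in range(1, m):
    X, y = e[:, :t], e[:, t]
    phi = np.zeros(t)
    for _ in range(200):      # coordinate descent sweeps
        for j in range(t):
            r = y - X @ phi + X[:, j] * phi[j]     # partial residual
            z = X[:, j] @ r / n
            phi[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    L[t, :t] = -phi
    sigma2[t] = np.mean((y - X @ phi) ** 2)        # innovation variance

D = np.diag(sigma2)
Linv = np.linalg.inv(L)
Sigma_hat = Linv @ D @ Linv.T  # estimated covariance matrix
```

Sparsity enters through the soft threshold: coefficients $\phi_{tj}$ whose partial correlation falls below $\lambda$ are set exactly to zero, which is what makes the resulting $\widehat{\Sigma}$ parsimonious.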
4. Variable Selection for GEE Model Fitting
The generalized estimating equations (GEE) approach of Liang and Zeger [27] provides a unified way to fit regression models with clustered/longitudinal data for discrete or continuous $y$. It can be viewed as an extension of the quasi-likelihood approach for generalized linear models (GLIM; see [1], [32]) to allow longitudinally correlated clusters. Let $\mu_{ij} = E(y_{ij}\mid x_{ij}) = g(x_{ij}^{T}\beta)$ for known link function $g(\cdot)$, and $\mathrm{Var}(y_{ij}\mid x_{ij}) = \phi V(\mu_{ij})$ for a scale parameter $\phi$ and variance function $V(\cdot)$. Let $\mu_i = (\mu_{i1},\ldots,\mu_{in_i})^{T}$, $x_{ij} = [x_{ij1},\ldots,x_{ijd}]^{T}$, and $D_i$ be a matrix with $(j,k)$-element $\partial\mu_{ij}/\partial\beta_k$. Liang and Zeger [27] proposed estimating $\beta$ by solving the following generalized estimating equations
$$G(\beta) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{n} D_i^{T} A_i^{-1/2} R_i^{-1} A_i^{-1/2}\,(y_i - \mu_i) = 0, \qquad (4.1)$$
where $A_i$ is an $n_i \times n_i$ diagonal matrix with elements $V(\mu_{ij})$, and $R_i$ is the working correlation matrix.
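As an illustration, (4.1) can be solved by Fisher scoring. The sketch below makes simplifying assumptions not in the text: identity link $g(\mu)=\mu$, $V(\mu)=1$, and working independence $R_i = I$, in which case $G(\beta)$ reduces to $\sum_i X_i^{T}(y_i - \mu_i)$. The data and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, ni, d = 100, 4, 3   # clusters, observations per cluster, covariates

# Hypothetical longitudinal data with a linear mean model.
beta_true = np.array([1.0, -0.5, 0.25])
X = rng.normal(size=(n, ni, d))
y = X @ beta_true + rng.normal(size=(n, ni))

beta = np.zeros(d)
for _ in range(25):                       # Fisher scoring iterations
    mu = X @ beta                         # mu_ij = x_ij^T beta (identity link)
    # With D_i = X_i, A_i = I, R_i = I:  G(beta) = sum_i X_i^T (y_i - mu_i)
    G = np.einsum('ijk,ij->k', X, y - mu)
    H = np.einsum('ijk,ijl->kl', X, X)    # expected negative Jacobian of G
    step = np.linalg.solve(H, G)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:      # stop once G(beta) is solved
        break
```

With a non-identity link or a non-trivial working correlation, $D_i$, $A_i^{-1/2}$, and $R_i^{-1}$ would enter the scoring step exactly as they appear in (4.1); the structure of the iteration is unchanged.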