Residuals
$r_1$ and $\tilde{r}_1$ have the same direction. This is an important step to simplify our analysis.
As a sanity check, the following calculations are performed:
(1) For $i$, $\langle x_i, \tilde{r}_1 \rangle = 1/\sqrt{A}$.
(2) For $j$,
$$\langle x_j, \tilde{r}_1 \rangle = \Big\langle a_j s + b_j e_j,\; s - \frac{a_1 - a_2}{b_1}\, e_1 \Big\rangle = a_j - \frac{b_j (a_1 - a_2)}{b_1}\, \langle e_j, e_1 \rangle.$$
As special cases: $\langle x_1, \tilde{r}_1 \rangle = a_2$, $\langle x_2, \tilde{r}_1 \rangle = a_2$, and for $j \ge 3$, $\langle x_j, \tilde{r}_1 \rangle = a_j$.
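If, as the construction suggests, $s$ and the $e_j$'s are mutually orthonormal, the algebra in calculation (2) can be checked numerically. The coefficient values below are our own illustrative choices, not from the text:

```python
import numpy as np

# Orthonormal "signal" s and noise directions e_1..e_4: standard basis vectors.
n = 5
s = np.eye(n)[0]
e = [np.eye(n)[k] for k in range(1, n)]           # e[0] plays the role of e_1, etc.

a = [0.9, 0.7, 0.5, 0.3]                          # illustrative a_j values
b = [0.4, 0.6, 0.8, 0.2]                          # illustrative b_j values

# Covariates x_j = a_j s + b_j e_j  (j = 1..4, stored 0-based)
x = [a[j] * s + b[j] * e[j] for j in range(4)]

# Residual direction r~_1 = s - ((a_1 - a_2)/b_1) e_1
r1 = s - ((a[0] - a[1]) / b[0]) * e[0]

# <x_1, r~_1> = <x_2, r~_1> = a_2, and <x_j, r~_1> = a_j for j >= 3
print(round(float(x[0] @ r1), 6))   # a_2 = 0.7
print(round(float(x[1] @ r1), 6))   # a_2 = 0.7
print(round(float(x[2] @ r1), 6))   # a_3 = 0.5
print(round(float(x[3] @ r1), 6))   # a_4 = 0.3
```

Since $\langle e_j, e_1 \rangle = 0$ for $j \ne 1$, the correction term vanishes except for $j = 1$, which is exactly why the first two inner products coincide at $a_2$.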
The above analysis demonstrates some basic techniques that will be used in the subsequent LARS steps. We can now use induction to show the following.
Theorem 3: In the example described at the beginning of this section, LARS chooses covariates $1, 2, \ldots, m_A$ one by one, sequentially, in the first $m_A$ steps.
Verifying the above theorem takes some effort, and we skip it here; readers can find the proof in Huo and Ni [28]. This example shows that LARS can choose all the covariates outside an intuitively optimal subset before it reaches any covariate inside that subset.
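The selection mechanism at work in the theorem is that, at each step, LARS brings in the covariate with the largest absolute inner product with the current residual. The following is a schematic of that selection rule only, not the full LARS update, and all names and data are our own:

```python
import numpy as np

def next_covariate(X, r):
    """Index of the covariate (column of X) with the largest
    absolute inner product with the current residual r."""
    return int(np.argmax(np.abs(X.T @ r)))

# Deterministic toy data: column 2 is exactly the residual direction,
# while the other columns are constant vectors, so column 2 is selected.
r = np.arange(1.0, 11.0)
X = np.ones((10, 4))
X[:, 2] = r
print(next_covariate(X, r))   # 2
```

A covariate built to dominate these inner products, step after step, is precisely how the example forces LARS to sweep through the "wrong" covariates first.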
4.1.1. Standardized Covariates
Readers may notice that LARS should proceed along a direction that depends on the correlations between the $x_i$'s and the residual. In our previous case study, however, the direction of progression is determined by inner products. The inner product is not proportional to the correlation, since the response $s$ and the covariate vectors $x_i$ are not standardized to have mean 0. This discrepancy can easily be remedied, as follows. The key observation is that LARS depends only on geometric information. More specifically, the result depends only on $\langle x_i, s \rangle$, $i = 1, 2, \ldots, m$, and $\langle x_i, x_j \rangle$, $1 \le i, j \le m$. For example, an orthogonal transform of both $s$ and the $x_i$'s retains the results of LARS. We state this without proof.
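The gap between inner products and correlations for non-centered data, and the fact that standardization closes it, can be seen in a few lines; all numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=50) + 3.0          # response with nonzero mean
x = rng.normal(size=50) + 5.0          # covariate with nonzero mean

inner = float(x @ s)                   # dominated by the means, not the correlation
corr = float(np.corrcoef(x, s)[0, 1])  # Pearson correlation

# After centering to mean 0 and scaling to unit norm, the inner
# product equals the Pearson correlation exactly.
xc = x - x.mean(); xc /= np.linalg.norm(xc)
sc = s - s.mean(); sc /= np.linalg.norm(sc)
print(abs(float(xc @ sc) - corr) < 1e-12)   # True
```

This is why standardizing the covariates (and response) lets the inner-product-driven case study above be read as a statement about correlations.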
Lemma 4: After a simultaneous orthogonal transform of both the response and the covariates, the LARS results from the transformed data are the same orthogonal transform of the LARS results from the original data.
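Lemma 4 rests on the fact that an orthogonal transform $Q$ preserves every inner product LARS consults. A minimal numerical check of that invariance (the data and names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 4
X = rng.normal(size=(n, m))            # covariate vectors as columns
s = rng.normal(size=n)                 # response

# Random orthogonal matrix Q via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

# <Q x_i, Q s> = <x_i, s> and <Q x_i, Q x_j> = <x_i, x_j> for all i, j,
# so LARS, which depends only on these inner products, is unchanged.
print(np.allclose((Q @ X).T @ (Q @ s), X.T @ s))    # True
print(np.allclose((Q @ X).T @ (Q @ X), X.T @ X))    # True
```

Since the transformed problem has identical inner products, every LARS step picks the same covariate and moves the same distance, which is the content of the lemma.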