The risk inflation criterion (RIC) is suggested in George and Foster [19] from a minimax estimation vantage point. RIC recommends the model that minimizes

RIC = RSS(x)/σ² + 2 log(p) · ‖x‖₀,

where p is the number of available predictors. This is derived from selecting the model with minimum risk inflation. Readers can see that RIC is another special case of (P0), by taking λ₀² = 2σ² log p.
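As a concrete illustration, the criterion above can be evaluated for any candidate subset of predictors in a few lines of Python. This is a sketch, not the chapter's code: the function name `ric` is ours, and it assumes the noise variance σ² is known.

```python
import numpy as np

def ric(y, X, support, sigma2):
    """Risk inflation criterion for the submodel using columns in `support`.

    RIC = RSS / sigma2 + 2 * log(p) * |support|, where p is the total
    number of available predictors and sigma2 is the (assumed known)
    noise variance.
    """
    p = X.shape[1]
    if len(support) == 0:
        rss = float(y @ y)  # null model: RSS is just ||y||^2
    else:
        Xs = X[:, list(support)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
    return rss / sigma2 + 2.0 * np.log(p) * len(support)
```

Comparing `ric(y, X, support_a, sigma2)` across candidate supports and keeping the smallest value implements the model choice described above.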
Although (P0) is a fine compendium of many subset selection criteria, solving (P0) generally requires an exhaustive search over all possible subsets. As ‖x‖₀ (i.e., the number of selected covariates) increases, methods based on exhaustive search rapidly become impractical. In fact, solving (P0) in general is an NP-hard problem. The following theorem can be considered an extension of a result originally presented by Natarajan [37].
Theorem 1: Solving the problem (P0) with a fixed λ₀ is an NP-hard problem.
Using the idea of a Lagrange multiplier, we can see that solving (P0) with λ₀ is equivalent to solving the sparse approximate solution (SAS) problem in Natarajan [37] with tolerance ε, which is proven in Natarajan [37] to be NP-hard. Hence, in general, solving (P0) is NP-hard.
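To make the cost of exhaustive search concrete, here is a brute-force sketch of (P0) in Python. It uses the penalized form RSS(x) + λ₀²‖x‖₀ suggested by the discussion above (the function name and interface are our own); since it fits a least-squares model for every one of the 2^p supports, it is usable only for very small p, which is exactly the point of the NP-hardness result.

```python
import numpy as np
from itertools import combinations

def solve_p0_exhaustive(y, X, lam0):
    """Brute-force (P0): minimize RSS(x) + lam0**2 * ||x||_0 by trying
    all 2**p supports. Illustrative only: the search space doubles with
    every added predictor."""
    n, p = X.shape
    best_obj, best_support = float(y @ y), ()  # start from the empty model
    for k in range(1, p + 1):
        for support in combinations(range(p), k):
            Xs = X[:, support]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            obj = float(np.sum((y - Xs @ beta) ** 2)) + lam0 ** 2 * k
            if obj < best_obj:
                best_obj, best_support = obj, support
    return best_support, best_obj
```

With p = 30 predictors this loop would already require over a billion least-squares fits, which is why the relaxation discussed next is attractive.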
3.2. Greedy Algorithms and (P1)
Due to the hardness of solving (P0), a relaxation idea has been proposed. The relaxation replaces the ℓ₀ norm with the ℓ₁ norm in the objective, which leads to (P1). The idea of relaxation started in sparse signal representation [6]. Theoretical properties were derived later in [10, 11]. A partial list of representative results includes Tropp [48, 49], Gribonval and Nielsen [25], and Chen and Huo [4, 5]. Compared with the problem in this chapter, the problem of sparse signal representation has a different objective. In sparse signal representation, researchers consider a redundant dictionary [23, 33] and the conditions under which the sparsest representation can be recovered via linear programming. Their formulations of (P0) and (P1) are slightly different from ours. However, a group of results in this chapter are certainly motivated by recent results in sparse representation. More connections are discussed in Huo and Ni [28].
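For intuition on the linear-programming connection, the equality-constrained version of (P1), min ‖x‖₁ subject to Ax = b (basis pursuit), becomes an LP after splitting x into nonnegative positive and negative parts, x = u − v. The sketch below, using SciPy's `linprog`, is our own illustration under that standard reformulation; the chapter's exact formulation of (P1) may differ.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b as a linear program.

    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v) and
    the equality constraint becomes A u - A v = b.
    """
    n, p = A.shape
    c = np.ones(2 * p)            # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])     # enforces A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * p), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:p], res.x[p:]
    return u - v
```

Under the coherence conditions cited above, the minimizer of this LP coincides with the sparsest representation, which is what makes the ℓ₁ relaxation computationally attractive.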