By computing the minimum on the right-hand side, we finally have
$$
\hat{\mathcal{F}}(\boldsymbol{x}) = \sum_{j=1}^{N} \varphi(x_j),
\tag{4.67}
$$
where
$$
\varphi(x) = \frac{2|x|}{|x| + \sqrt{x^2 + 4\beta^{-1}}}
+ \log\!\left( \beta^{-1} + \frac{x^2}{2} + \frac{|x|}{2}\sqrt{x^2 + 4\beta^{-1}} \right).
\tag{4.68}
$$
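As a numerical sanity check, the sketch below (plain NumPy; the test point $x_0 = 1.7$ and the grid bounds are illustrative choices, not from the text) evaluates $\varphi(x)$ as written in Eq. (4.68) and compares it against a direct grid minimization of $x^2/\gamma + \log(\beta^{-1} + \gamma)$ over the variance hyperparameter $\gamma$, assuming that this is the "minimum on the right-hand side" referred to above.

```python
import numpy as np

def phi(x, beta_inv):
    """Penalty of Eq. (4.68); beta_inv plays the role of the noise variance."""
    r = np.sqrt(x**2 + 4.0 * beta_inv)
    return (2.0 * np.abs(x) / (np.abs(x) + r)
            + np.log(beta_inv + 0.5 * x**2 + 0.5 * np.abs(x) * r))

# Cross-check: Eq. (4.68) should coincide with the minimum over the
# variance hyperparameter gamma of  x^2/gamma + log(beta_inv + gamma),
# evaluated here on a dense grid (an assumption about the derivation
# that precedes Eq. (4.67) in the text).
x0, beta_inv = 1.7, 1.0
gammas = np.linspace(1e-4, 50.0, 500001)
grid_min = np.min(x0**2 / gammas + np.log(beta_inv + gammas))
print(phi(x0, beta_inv), grid_min)  # the two values agree closely
```

Note also that $\varphi(0) = \log \beta^{-1}$, so for $\beta^{-1} = 1$ the constraint vanishes at the origin, consistent with the solid curve in Fig. 4.1a.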
The constraint $\varphi(x)$ in Eq. (4.68) is plotted in Fig. 4.1. In Fig. 4.1a, the plot of $\varphi(x)$ of the Champagne algorithm is shown by the solid line. For comparison, the constraint for the $L_1$-norm solution, $|x|$, is also shown by the broken line. The plots in Fig. 4.1a show that the constraint $\varphi(x)$ is very similar to (but sharper than) the $L_1$-norm constraint $|x|$, suggesting that the Champagne algorithm produces sparse solutions.
In Fig. 4.1b, the plots of $\varphi(x)$ when $\beta^{-1} = 0.1$, $\beta^{-1} = 1$, and $\beta^{-1} = 10$ are shown by the dot-and-dash, broken, and solid lines, respectively. The vertical broken line at $x = 0$ shows the $L_0$-norm constraint, $\|x\|_0$, for comparison. It is shown here that the shape of the constraint $\varphi(x)$ depends on $\beta^{-1}$, namely the noise variance. When the noise variance is small (i.e., a high SNR), $\varphi(x)$ is much sharper than $|x|$ and becomes closer to the $L_0$-norm constraint. That is, the Champagne algorithm uses an adaptive constraint: when the SNR of the sensor data is high, it gives solutions with enhanced sparsity. In contrast, when the sensor data is noisy, the shape of $\varphi(x)$ becomes similar to that of the $L_1$-norm constraint and the algorithm gives solutions with mild sparsity.
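The adaptivity described above can be checked numerically. The sketch below (an illustrative construction, not from the text; the finite-difference step `eps` is an arbitrary choice) compares the initial steepness of $\varphi(x)$ near $x = 0$ for the three noise variances used in Fig. 4.1b; a larger initial slope means a sharper, more $L_0$-like constraint.

```python
import numpy as np

def phi(x, beta_inv):
    """Penalty of Eq. (4.68); beta_inv is the noise variance."""
    r = np.sqrt(x**2 + 4.0 * beta_inv)
    return (2.0 * np.abs(x) / (np.abs(x) + r)
            + np.log(beta_inv + 0.5 * x**2 + 0.5 * np.abs(x) * r))

# One-sided finite-difference slope of phi at the origin for the three
# noise levels of Fig. 4.1b.  The slope grows as beta_inv shrinks,
# i.e., the constraint sharpens as the SNR increases.
eps = 1e-3
slopes = {b: (phi(eps, b) - phi(0.0, b)) / eps for b in (0.1, 1.0, 10.0)}
print(slopes)
```

For small $|x|$ the slope behaves roughly like $2/\sqrt{\beta^{-1}}$, which makes the high-SNR sharpening explicit.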
Fig. 4.1 Plots of the cost function $\varphi(x)$ shown in Eq. (4.68). a The solid line shows the plot of $\varphi(x)$ with $\beta^{-1} = 1$, and the broken line the plot of $|x|$, which is the constraint for the minimum $L_1$-norm solution. b Plots of $\varphi(x)$ when $\beta^{-1} = 0.1$, $\beta^{-1} = 1$, and $\beta^{-1} = 10$, shown by the dot-and-dash, broken, and solid lines, respectively. In this figure, the vertical broken line at $x = 0$ shows the constraint for the $L_0$-norm solution.
 