where, according to Eq. (B.30), $\Sigma_y$ is expressed as

$$
\Sigma_y = \beta^{-1} I + \alpha^{-1} F F^T . \tag{2.65}
$$
If the increase of the likelihood in Eq. (2.64) from one iteration to the next becomes very small, the iteration may be stopped.
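As a concrete illustration of this stopping rule, here is a minimal Python sketch (not from the book; the function names and the `update` callable are hypothetical, since the excerpt does not show the hyperparameter update equations behind Eq. (2.64)). It evaluates the Gaussian marginal likelihood with the covariance of Eq. (2.65) and stops once the per-iteration increase falls below a tolerance.

```python
import numpy as np

def log_likelihood(y, F, alpha, beta):
    """log N(y | 0, Sigma_y) with Sigma_y = beta^{-1} I + alpha^{-1} F F^T,
    as in Eq. (2.65)."""
    m = F.shape[0]
    sigma_y = np.eye(m) / beta + (F @ F.T) / alpha
    _, logdet = np.linalg.slogdet(sigma_y)
    return -0.5 * (m * np.log(2 * np.pi) + logdet
                   + y @ np.linalg.solve(sigma_y, y))

def fit(y, F, update, alpha=1.0, beta=1.0, tol=1e-6, max_iter=200):
    """Iterate a hyperparameter update rule (`update` is a placeholder
    for the actual update equations) until the likelihood increase
    per iteration falls below `tol`."""
    ll = log_likelihood(y, F, alpha, beta)
    for _ in range(max_iter):
        alpha, beta = update(y, F, alpha, beta)
        ll_new = log_likelihood(y, F, alpha, beta)
        if ll_new - ll < tol:
            break
        ll = ll_new
    return alpha, beta
```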
2.10.3 $L_1$-Regularized Method
The method of $L_1$-norm regularization can also be derived based on the Bayesian formulation. To derive the $L_1$ regularization, we use the Laplace distribution as the prior distribution:
$$
p(x) = \prod_{j=1}^{N} \frac{1}{2b} \exp\!\left( -\frac{1}{b} \, |x_j| \right). \tag{2.66}
$$
Then, using Eq. (2.57) (and replacing $F$ with $H$), the cost function is derived as
$$
F(x) = \beta \, \| y - Hx \|^2 + \frac{2}{b} \sum_{j=1}^{N} |x_j| , \tag{2.67}
$$

which is exactly equal to Eq. (2.53) if we set $\lambda = 2/(b\beta)$.
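To spell out the step from the Laplace prior to Eq. (2.67): the cost function is the negative log-posterior up to additive constants. A sketch, assuming the Gaussian likelihood $p(y \mid x) = N(y \mid Hx, \beta^{-1} I)$ (the usual reading of Eq. (2.57), which is not shown in this excerpt):

$$
-2 \log \bigl[ p(y \mid x) \, p(x) \bigr]
= \beta \, \| y - Hx \|^2 + \frac{2}{b} \sum_{j=1}^{N} |x_j| + \text{const.},
$$

so minimizing Eq. (2.67) is exactly MAP estimation under the Laplace prior.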
Another formulation for deriving the $L_1$-regularized method is known. It uses the framework of sparse Bayesian learning described in Chap. 4. There, assuming the Gaussian prior
$$
p(x \mid \alpha) = \prod_{j=1}^{N} N\!\left( x_j \mid 0, \alpha_j^{-1} \right)
= \prod_{j=1}^{N} \left( \frac{\alpha_j}{2\pi} \right)^{1/2} \exp\!\left( -\frac{\alpha_j}{2} \, x_j^2 \right), \tag{2.68}
$$
we derive the marginal likelihood for the hyperparameter $\alpha = [\alpha_1, \ldots, \alpha_N]^T$, $p(y \mid \alpha)$, using
$$
p(y \mid \alpha) = \int p(y \mid x) \, p(x \mid \alpha) \, dx , \tag{2.69}
$$
and eventually derive the Champagne algorithm.
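Because both factors in the integrand of Eq. (2.69) are Gaussian, the integral can be evaluated in closed form. As a sketch (again assuming $p(y \mid x) = N(y \mid Hx, \beta^{-1} I)$, and writing $A = \mathrm{diag}(\alpha_1, \ldots, \alpha_N)$ for the prior precision matrix):

$$
p(y \mid \alpha) = N\!\left( y \mid 0, \; \beta^{-1} I + H A^{-1} H^T \right),
$$

a covariance of the same form as Eq. (2.65); the Champagne algorithm maximizes this marginal likelihood with respect to $\alpha$ (see Chap. 4 for the update rules).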
However, instead of implementing Eq. (2.69), there is another option in which we compute the posterior distribution $p(x \mid y)$ using
$$
p(x \mid y) \propto p(y \mid x) \int p(x \mid \alpha) \, p(\alpha) \, d\alpha = p(y \mid x) \, p(x) . \tag{2.70}
$$
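The connection between Eq. (2.70) and the $L_1$ method rests on a known scale-mixture representation: the Laplace density of Eq. (2.66) arises by mixing zero-mean Gaussians with an exponential density on the variance $\tau_j = \alpha_j^{-1}$,

$$
\frac{1}{2b} \exp\!\left( -\frac{|x_j|}{b} \right)
= \int_0^\infty N(x_j \mid 0, \tau_j) \, \frac{1}{2b^2} \exp\!\left( -\frac{\tau_j}{2b^2} \right) d\tau_j ,
$$

so choosing the hyperprior $p(\alpha)$ accordingly makes the marginalized prior $p(x)$ in Eq. (2.70) equal to Eq. (2.66), and the resulting MAP estimate again minimizes the $L_1$-regularized cost of Eq. (2.67).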