$$\bar{\alpha}^{\mathrm{new}}_{3n+m} = \frac{1}{3}\sum_{i=1}^{3} \alpha^{\mathrm{new}}_{3n+i} \qquad (m = 1, 2, 3),$$

where $\alpha^{\mathrm{new}}_{3n+i}$ (for $i = 1, 2, 3$) is the update value from Eq. (4.41) and $\bar{\alpha}^{\mathrm{new}}_{3n+m}$ (for $m = 1, 2, 3$) is a new update value for these hyperparameters.
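As a minimal sketch of the tying step, the three per-voxel update values can be replaced by a single common value; here we assume the common value is their average, matching the tied update above (the array layout with three consecutive components per voxel is also an assumption for illustration):

```python
import numpy as np

# Minimal sketch of hyperparameter tying. Assumption: the common (tied)
# update value for the three hyperparameters of voxel n (components
# 3n+1, 3n+2, 3n+3) is the average of their individual update values.

def tie_hyperparameters(alpha_new):
    """alpha_new: array of shape (3*N,) holding per-component updates.

    Returns an array of the same shape in which each voxel's three
    values are replaced by their mean."""
    alpha_new = np.asarray(alpha_new, dtype=float)
    per_voxel_mean = alpha_new.reshape(-1, 3).mean(axis=1)  # one value per voxel
    return np.repeat(per_voxel_mean, 3)                     # broadcast back
```

After this step the three hyperparameters of each voxel share one value, which is exactly the situation analyzed by the constraint-function argument below.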
The rationale for this hyperparameter tying can be explained using the cost function analysis described in Sect. 4.7. Let us compute the constraint function in Eq. (4.66) for a two-dimensional case in which the unknown parameters are denoted $x_1$ and $x_2$. The constraint is rewritten in this case as
$$\hat{x}(x_1, x_2) = \min_{\alpha_1, \alpha_2} \sum_{j=1}^{2} \left( \frac{x_j^2}{\alpha_j} + \log\left(\beta^{-1} + \alpha_j\right) \right). \qquad (4.92)$$
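For intuition, the untied constraint of Eq. (4.92) can be evaluated numerically: the minimization decouples over $j$, so each $\alpha_j$ is optimized independently. The sketch below uses a simple grid search and an assumed noise variance $\beta^{-1} = 1$:

```python
import numpy as np

# Numerical sketch of the untied constraint function of Eq. (4.92):
#   xhat(x1, x2) = min_{a1, a2} sum_j [ x_j^2 / a_j + log(beta^{-1} + a_j) ]
# The objective is separable, so each alpha_j is minimized on its own.
# beta^{-1} = 1 is an assumed value used only for illustration.

BETA_INV = 1.0
ALPHA_GRID = np.linspace(1e-6, 50.0, 200_000)

def penalty_1d(x):
    """min over alpha of x^2/alpha + log(beta^{-1} + alpha), via grid search."""
    return (x**2 / ALPHA_GRID + np.log(BETA_INV + ALPHA_GRID)).min()

def xhat_untied(x1, x2):
    """Untied constraint of Eq. (4.92) as a sum of independent 1-D minimizations."""
    return penalty_1d(x1) + penalty_1d(x2)
```

For large $|x_j|$ the optimal $\alpha_j$ grows like $x_j^2$, so each term grows only logarithmically in $|x_j|$; this concave growth is what makes the untied constraint sparsity-promoting.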
When the hyperparameters $\alpha_1$ and $\alpha_2$ are tied together, i.e., when we set these hyperparameters at the same value $\alpha$, the constraint function is changed to
$$\hat{x}(x_1, x_2) = \min_{\alpha} \left( \frac{x_1^2 + x_2^2}{\alpha} + 2\log\left(\beta^{-1} + \alpha\right) \right). \qquad (4.93)$$
By implementing this minimization, the value of $\alpha$ that minimizes the right-hand side of the above equation, $\bar{\alpha}$, is derived as
$$\bar{\alpha} = \frac{a + \sqrt{a^2 + 8\beta^{-1}a}}{4},$$

where $a = x_1^2 + x_2^2$. Substituting this $\bar{\alpha}$ into Eq. (4.93), we derive the constraint function,
$$\hat{x}(x_1, x_2) = \frac{4a}{a + \sqrt{a^2 + 8\beta^{-1}a}} + 2\log\left(\beta^{-1} + \frac{a + \sqrt{a^2 + 8\beta^{-1}a}}{4}\right). \qquad (4.94)$$
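As a quick numerical check (a sketch with an assumed $\beta^{-1} = 1$), the closed-form $\bar{\alpha}$ can be verified to attain the minimum of the tied objective in Eq. (4.93), so that evaluating the objective at $\bar{\alpha}$ reproduces Eq. (4.94):

```python
import numpy as np

BETA_INV = 1.0  # assumed value of beta^{-1}, for illustration only

def tied_objective(alpha, x1, x2):
    """The function minimized over alpha in Eq. (4.93)."""
    return (x1**2 + x2**2) / alpha + 2.0 * np.log(BETA_INV + alpha)

def alpha_bar(x1, x2):
    """Closed-form minimizer: (a + sqrt(a^2 + 8*beta^{-1}*a)) / 4."""
    a = x1**2 + x2**2
    return (a + np.sqrt(a**2 + 8.0 * BETA_INV * a)) / 4.0

def xhat_tied(x1, x2):
    """Constraint function of Eq. (4.94): the objective evaluated at alpha_bar."""
    return tied_objective(alpha_bar(x1, x2), x1, x2)

# A dense grid search over alpha agrees with the closed-form minimum.
grid = np.linspace(1e-6, 100.0, 400_000)
x1, x2 = 1.3, -0.7
numeric_min = tied_objective(grid, x1, x2).min()
```

The closed form follows from setting the derivative of the objective to zero, which gives the quadratic $2\alpha^2 - a\alpha - a\beta^{-1} = 0$ and hence the positive root above.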
The plot of the constraint function in Eq. (4.94) is shown in Fig. 4.2a. For comparison, the Champagne constraint function when untying $\alpha_1$ and $\alpha_2$ (Eq. 4.92) is shown in Fig. 4.2b. The constraint functions for the $L_2$- and $L_1$-norm regularizations are also shown in Fig. 4.2c, d for comparison. These plots show that the Champagne constraint function when tying $\alpha_1$ and $\alpha_2$ has a shape very similar to the constraint function of the $L_2$-norm regularization, and this type of constraint does not generate a sparse solution. Thus, when tying the hyperparameter update values, the sparsity is lost among the solutions of $x_{3n+1}$, $x_{3n+2}$, and $x_{3n+3}$, and there is no shrinkage over the source vector components. However, since the sparsity is maintained across voxels, a sparse source distribution can still be reconstructed.
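The loss of sparsity under tying can also be seen numerically (a sketch, again assuming $\beta^{-1} = 1$). The tied constraint of Eq. (4.94) depends on $x_1, x_2$ only through $a = x_1^2 + x_2^2$, so, like the $L_2$ norm, it cannot distinguish a sparse point $(r, 0)$ from a dense point $(r/\sqrt{2}, r/\sqrt{2})$ of the same norm, whereas the untied constraint of Eq. (4.92) is strictly smaller at the sparse point:

```python
import numpy as np

BETA_INV = 1.0  # assumed beta^{-1}
GRID = np.linspace(1e-6, 50.0, 200_000)

def xhat_tied(x1, x2):
    """Tied constraint, Eq. (4.94): depends only on a = x1^2 + x2^2."""
    a = x1**2 + x2**2
    a_bar = (a + np.sqrt(a**2 + 8.0 * BETA_INV * a)) / 4.0
    return a / a_bar + 2.0 * np.log(BETA_INV + a_bar)

def xhat_untied(x1, x2):
    """Untied constraint, Eq. (4.92), via separable 1-D grid minimizations."""
    return sum((x**2 / GRID + np.log(BETA_INV + GRID)).min() for x in (x1, x2))

r = 2.0
sparse_pt = (r, 0.0)                             # all energy on one component
dense_pt = (r / np.sqrt(2.0), r / np.sqrt(2.0))  # same norm, spread out
```

The tied constraint takes the same value at both points, mirroring the rotation-invariant, $L_2$-like shape in Fig. 4.2a, while the untied constraint prefers the sparse point, mirroring the sparsity-promoting shape in Fig. 4.2b.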