function equal to zero. Therefore, we cannot obtain an optimum solution of x based
only on the least-squares method.
A general strategy for overcoming this problem is to integrate a "desired property"
of the unknown parameter x into the estimation problem. That is, we choose x so as
to maximize this "desired property," and also satisfy y = Fx. Quite often, a small
norm of the solution vector is used as this "desired property," and in this case, the
optimum estimate \hat{x} is obtained using

\hat{x} = \arg\min_{x} \|x\|^2 \quad \text{subject to} \quad y = Fx.    (2.22)
In the optimization above, the notation "subject to" indicates a constraint
(i.e., the above optimization requires that the estimate \hat{x} be chosen such that it
minimizes \|x\|^2 as well as satisfies y = Fx). To solve the constrained optimization
problem in Eq. (2.22), we use the method of Lagrange multipliers, which converts a
constrained optimization problem into an unconstrained optimization problem. In this
method, using an M \times 1 column vector c as the Lagrange multipliers, we define a
function called the Lagrangian, L(x, c), such that

L(x, c) = \|x\|^2 + c^T (y - Fx).    (2.23)
The solution \hat{x} is obtained by minimizing L(x, c) above with respect to x and
c; this solution is equal to the \hat{x} obtained by solving the constrained optimization
in Eq. (2.22).
To derive the x that minimizes Eq. (2.23), we compute the derivatives of L(x, c)
with respect to x and c, and set them to zero, giving

\frac{\partial L(x, c)}{\partial x} = 2x - F^T c = 0,    (2.24)

\frac{\partial L(x, c)}{\partial c} = y - Fx = 0.    (2.25)
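The two stationarity conditions above are linear in x and c, so they can be stacked into a single linear system and solved numerically. The sketch below, with a randomly generated F and y used purely for illustration, checks that the x part of that system's solution matches the closed-form minimum-norm solution F^T (FF^T)^{-1} y derived next:

```python
import numpy as np

# Stationarity conditions (2.24)-(2.25) stacked as one linear (KKT) system:
#   [ 2I  -F^T ] [x]   [0]
#   [ F    0   ] [c] = [y]
# F and y are randomly generated here purely for illustration.
rng = np.random.default_rng(0)
M, N = 3, 5                          # underdetermined: fewer equations than unknowns
F = rng.standard_normal((M, N))
y = rng.standard_normal(M)

K = np.block([[2.0 * np.eye(N), -F.T],
              [F, np.zeros((M, M))]])
rhs = np.concatenate([np.zeros(N), y])
sol = np.linalg.solve(K, rhs)
x_hat, c = sol[:N], sol[N:]

# The x part matches the closed form F^T (F F^T)^{-1} y,
# and satisfies the constraint y = F x_hat.
assert np.allclose(x_hat, F.T @ np.linalg.solve(F @ F.T, y))
assert np.allclose(F @ x_hat, y)
```

When F has full row rank, the stacked matrix is nonsingular, so the stationary point is unique.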
Using the equations above, we can derive the solution: Eq. (2.24) gives
x = \frac{1}{2} F^T c, and substituting this into Eq. (2.25) gives c = 2(FF^T)^{-1} y,
resulting in

\hat{x} = F^T (FF^T)^{-1} y.    (2.26)
The solution in Eq. (2.26) is called the minimum-norm solution, which is well known
as a solution to the ill-posed linear inverse problem.
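A minimal numerical sketch of Eq. (2.26), with F and a ground-truth x generated randomly purely for illustration, verifies the two defining properties of this estimate: it satisfies the constraint y = Fx exactly, and it has the smallest norm among all solutions:

```python
import numpy as np

# Underdetermined problem: M equations, N unknowns (M < N), so many x satisfy y = F x.
# F and x_true are randomly generated here purely for illustration.
rng = np.random.default_rng(0)
M, N = 3, 5
F = rng.standard_normal((M, N))
x_true = rng.standard_normal(N)
y = F @ x_true

# Minimum-norm solution of Eq. (2.26): x_hat = F^T (F F^T)^{-1} y
x_hat = F.T @ np.linalg.solve(F @ F.T, y)

# It satisfies the constraint exactly ...
assert np.allclose(F @ x_hat, y)

# ... and coincides with the Moore-Penrose pseudoinverse solution,
# which NumPy computes via np.linalg.pinv.
assert np.allclose(x_hat, np.linalg.pinv(F) @ y)

# Any other solution x_hat + v, with v in the null space of F, has a
# larger norm, because x_hat lies in the row space of F (orthogonal to null(F)).
v = np.linalg.svd(F)[2][-1]          # last row of V^T spans part of null(F)
assert np.allclose(F @ v, 0)
assert np.linalg.norm(x_hat + v) > np.linalg.norm(x_hat)
```

In practice one computes this via `np.linalg.pinv` or `np.linalg.lstsq` rather than forming (FF^T)^{-1} explicitly, which is numerically safer when FF^T is poorly conditioned.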
2.7 Properties of the Minimum-Norm Solution
The minimum-norm solution is expressed as

\hat{x} = F^T (FF^T)^{-1} (Fx + \varepsilon)
        = F^T (FF^T)^{-1} F x + F^T (FF^T)^{-1} \varepsilon.    (2.27)
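The decomposition in Eq. (2.27) can be illustrated numerically. The sketch below, with randomly generated F, x, and noise \varepsilon (assumptions for illustration only), checks the term-by-term identity and shows that the operator F^T (FF^T)^{-1} F multiplying x has rank M < N, so the first term is a "flattened" version of x rather than x itself, even in the noiseless case:

```python
import numpy as np

# Illustration of Eq. (2.27):
#   x_hat = F^T (F F^T)^{-1} F x  +  F^T (F F^T)^{-1} eps
# F, x, and eps are randomly generated here purely for illustration.
rng = np.random.default_rng(1)
M, N = 3, 5
F = rng.standard_normal((M, N))
x = rng.standard_normal(N)
eps = 0.01 * rng.standard_normal(M)
y = F @ x + eps

pinv = F.T @ np.linalg.inv(F @ F.T)    # F^T (F F^T)^{-1}
x_hat = pinv @ y

# The decomposition holds term by term.
assert np.allclose(x_hat, pinv @ F @ x + pinv @ eps)

# R = F^T (F F^T)^{-1} F has rank M < N, so R x generally differs from x:
# information lost in the null space of F cannot be recovered.
R = pinv @ F
assert np.linalg.matrix_rank(R) == M
```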