To find the p = 2 matrix norm, we must solve the following optimization problem:

\[
\underset{x}{\text{maximize}} \quad \|Ax\|_2^2 = x^H A^H A x
\qquad \text{subject to: } x^H x = 1
\tag{3.146}
\]
Using the Lagrange multiplier technique, this is equivalent to the following unconstrained optimization problem:

\[
\underset{x}{\text{maximize}} \quad J = x^H A^H A x - \lambda \left( x^H x - 1 \right)
\tag{3.147}
\]
Setting the gradient of J with respect to x equal to zero, we obtain the equation

\[
\frac{\partial J}{\partial x} = 2 A^H A x - 2 \lambda x = 0
\;\Rightarrow\; A^H A x = \lambda x
\tag{3.148}
\]
Therefore, the solution vector x must be an eigenvector of the square matrix A^H A corresponding to the eigenvalue \(\lambda\), and the resulting norm is

\[
\|Ax\|_2^2 = x^H A^H A x = \lambda \, x^H x = \lambda
\tag{3.149}
\]
Since we are maximizing the norm, \(\lambda\) must be chosen to be the maximum eigenvalue of the positive semidefinite matrix A^H A; therefore,

\[
\|A\|_2^2 = \lambda_{\max}\left( A^H A \right)
\tag{3.150}
\]
The p norm has the property that for any matrix A and any vector x of compatible dimension, the following inequality holds:

\[
\|Ax\|_p \le \|A\|_p \, \|x\|_p
\tag{3.151}
\]

and, for any two matrices A and B,

\[
\|AB\|_p \le \|A\|_p \, \|B\|_p
\tag{3.152}
\]
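These inequalities can be checked numerically for p = 1, 2, and infinity, for which NumPy's matrix norms are the corresponding induced norms. The sketch below is an illustrative check on random data, not a proof; all names in it are my own.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

for p in (1, 2, np.inf):
    # Eq. (3.151): the induced norm is consistent with the vector p norm.
    assert np.linalg.norm(A @ x, p) <= np.linalg.norm(A, p) * np.linalg.norm(x, p)
    # Eq. (3.152): induced norms are submultiplicative.
    assert np.linalg.norm(A @ B, p) <= np.linalg.norm(A, p) * np.linalg.norm(B, p)

print("inequalities (3.151) and (3.152) hold for p = 1, 2, inf")
```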
Frobenius Norm
The Frobenius norm is another matrix norm that is not a p norm. It is defined as

\[
\|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2}
\tag{3.153}
\]
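The definition in Eq. (3.153) translates directly into a few lines of NumPy; the sketch below computes the norm from the entries and compares it against the built-in "fro" norm (the example matrix is an arbitrary choice).

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# ( sum_i sum_j |a_ij|^2 )^(1/2), straight from Eq. (3.153)
fro_by_definition = np.sqrt((np.abs(A) ** 2).sum())

print(fro_by_definition, np.linalg.norm(A, "fro"))  # both print sqrt(30) ~ 5.477
```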
The Frobenius norm is also called the Euclidean norm. As a simple example, the Frobenius norm of an n × n identity matrix is \(\sqrt{n}\). The Frobenius norm can also be expressed as

\[
\|A\|_F = \sqrt{\operatorname{Trace}\left( A^H A \right)}
\tag{3.154}
\]
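A short check of Eq. (3.154) and the identity-matrix example, assuming NumPy: it confirms that the Frobenius norm of I_n is sqrt(n), and that the trace expression matches the built-in norm on a random complex matrix.

```python
import numpy as np

n = 5
print(np.linalg.norm(np.eye(n), "fro"), np.sqrt(n))  # both are sqrt(5)

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# ||A||_F = sqrt(Trace(A^H A)); the trace of A^H A is real for any A.
print(np.linalg.norm(A, "fro"), np.sqrt(np.trace(A.conj().T @ A).real))
```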