Define the mutual coherence of the matrix $B$ as follows.

Definition 5.1. [17] The mutual coherence of a given matrix $B$ is the largest absolute normalized inner product between distinct columns of $B$. Denoting the $k$th column in $B$ by $b_k$, the mutual coherence is given by

$$\mu(B) = \max_{1 \le k, j \le L,\; k \ne j} \frac{|b_k^T b_j|}{\|b_k\|_2 \, \|b_j\|_2}.$$
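As a concrete illustration (not part of the text), the mutual coherence can be computed directly from the normalized Gram matrix; the helper name and the toy matrix below are hypothetical:

```python
import numpy as np

def mutual_coherence(B):
    """Largest absolute normalized inner product between distinct columns of B."""
    # Normalize every column to unit l2 norm.
    Bn = B / np.linalg.norm(B, axis=0, keepdims=True)
    G = np.abs(Bn.T @ Bn)      # |b_k^T b_j| / (||b_k|| ||b_j||) for all pairs
    np.fill_diagonal(G, 0.0)   # exclude the k = j terms
    return G.max()

# A 2x3 dictionary whose first and third columns point the same way
# has coherence 1; an orthonormal basis has coherence 0.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0]])
print(mutual_coherence(B))     # → 1.0
```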
With this definition, one can prove the following theorem.

Theorem 5.1. [50], [69] For the system of linear equations $x = B\alpha$ ($B \in \mathbb{R}^{N \times L}$ full rank with $L \ge N$), if a solution $\alpha$ exists obeying

$$\|\alpha\|_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(B)}\right),$$

that solution is both the unique solution of $(P_1)$ and the unique solution of $(P_0)$.

In the rest of the chapter we show how variants of (5.1) can be used to develop robust algorithms for object classification.
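The sparsity threshold in Theorem 5.1 is easy to evaluate for a given dictionary. A small numerical sketch (the matrix is hypothetical, chosen only to make the arithmetic transparent):

```python
import numpy as np

# Illustrative dictionary: two orthonormal columns plus their sum direction.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Mutual coherence: max off-diagonal entry of the normalized Gram matrix.
Bn = B / np.linalg.norm(B, axis=0)
G = np.abs(Bn.T @ Bn)
np.fill_diagonal(G, 0.0)
mu = G.max()                      # here 1/sqrt(2) ≈ 0.7071

# Theorem 5.1: any solution with ||alpha||_0 below this bound is the
# unique solution of both (P0) and (P1).
bound = 0.5 * (1.0 + 1.0 / mu)    # ≈ 1.2071, so 1-sparse solutions are unique
print(mu, bound)
```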
5.2
Sparse Representation-based Classification
In object recognition, given a set of labeled training samples, the task is to identify the class to which a test sample belongs. Following [156] and [112], in this section we briefly describe the use of sparse representations for biometric recognition; however, this framework can be applied to general object recognition problems.
Suppose that we are given $L$ distinct classes and a set of $n$ training images per class. One can extract an $N$-dimensional vector of features from each of these images. Let $B_k = [x_{k1}, \ldots, x_{kj}, \ldots, x_{kn}]$ be an $N \times n$ matrix of features from the $k$th class, where $x_{kj}$ denotes the feature from the $j$th training image of the $k$th class. Define a new matrix, or dictionary, $B$ as the concatenation of training samples from all the classes:

$$B = [B_1, \ldots, B_L] \in \mathbb{R}^{N \times (n \cdot L)} = [x_{11}, \ldots, x_{1n} \,|\, x_{21}, \ldots, x_{2n} \,|\, \cdots \,|\, x_{L1}, \ldots, x_{Ln}].$$
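A minimal sketch of assembling the dictionary $B$ from per-class feature matrices; the dimensions and random features here are hypothetical stand-ins for real extracted features:

```python
import numpy as np

N, n, L = 4, 3, 2                      # feature dim, samples per class, classes
rng = np.random.default_rng(0)

# B_k: an N x n matrix of feature vectors for class k (random placeholders).
class_mats = [rng.standard_normal((N, n)) for _ in range(L)]

# B = [B_1, ..., B_L]: stack the class matrices side by side, column-wise.
B = np.concatenate(class_mats, axis=1)
print(B.shape)                         # → (4, 6), i.e. N x (n*L)
```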
We consider an observation vector $y \in \mathbb{R}^N$ of unknown class as a linear combination of the training vectors:

$$y = \sum_{i=1}^{L} \sum_{j=1}^{n} \alpha_{ij} x_{ij}. \qquad (5.4)$$
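The double sum in (5.4) is just the matrix-vector product $y = B\alpha$, where $\alpha$ stacks the coefficients $\alpha_{ij}$ in the same class-by-class column order as $B$. A small synthetic check (all data hypothetical):

```python
import numpy as np

N, n, L = 4, 3, 2
rng = np.random.default_rng(1)
B = rng.standard_normal((N, n * L))    # dictionary [B_1 | B_2]

# Coefficients alpha_ij, laid out class-by-class to match B's columns.
alpha = np.zeros(n * L)
alpha[0] = 2.0                         # weight on x_11 (class 1, sample 1)
alpha[n + 1] = -1.0                    # weight on x_22 (class 2, sample 2)

# y = sum_i sum_j alpha_ij x_ij  is exactly  B @ alpha.
y = B @ alpha
assert np.allclose(y, 2.0 * B[:, 0] - B[:, n + 1])
```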