deliberately chosen because of their high correlation in order to explore the efficiency
of the algorithms implemented. As will be shown, the feature selection algorithm has
been found to be robust enough to find a suitable solution for this problem.
5.3 Description of the Classifier
In order to evaluate the classification error associated with a subset of features, it is
necessary to specify the classifier. In this chapter, the Mean Square Error (MSE) linear
classifier has been chosen as one of the most appropriate options because of its
simplicity and good results.
In a linear classifier, the decision rule depends on C different linear combinations
of the input features:
\[
g_c = f\!\left( w_{0c} + \sum_{n=1}^{M} x_n w_{nc} \right) \tag{13}
\]
where $x_n$ represents the value of the $n$-th feature, $M$ represents the number of input
features, $w_{nc}$ represents the weights of the linear combination, and $g_c$ is the linear
combination obtained for the $c$-th class ($c = 1, \ldots, C$). The final decision corresponds to
the linear combination with the highest result.
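The decision rule of Eq. (13) can be sketched as follows; all sizes and weight values are illustrative, and $f$ is taken as the identity:

```python
import numpy as np

# Illustrative sizes: M input features, C classes.
M, C = 4, 3
rng = np.random.default_rng(0)
w0 = rng.normal(size=C)      # independent terms w_{0c}
w = rng.normal(size=(M, C))  # weights w_{nc}
x = rng.normal(size=M)       # one input pattern (x_1, ..., x_M)

# g_c = w_{0c} + sum_n x_n * w_{nc}; f is the identity in this sketch.
g = w0 + x @ w
predicted_class = int(np.argmax(g))  # class with the highest linear combination
```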
This process can be alternatively described by using matrix notation, defining the
input patterns matrix as:
\[
Q = \begin{pmatrix}
1 & 1 & \cdots & 1 \\
x_{11} & x_{12} & \cdots & x_{1N} \\
\vdots & \vdots & \ddots & \vdots \\
x_{M1} & x_{M2} & \cdots & x_{MN}
\end{pmatrix} \tag{14}
\]
where N represents the number of input patterns. Note that the first row equals 1 in
order to implement the independent terms of the linear combinations. The weights of
the classifier can be defined as:
\[
V = \begin{pmatrix}
w_{01} & w_{11} & \cdots & w_{M1} \\
w_{02} & w_{12} & \cdots & w_{M2} \\
\vdots & \vdots & \ddots & \vdots \\
w_{0C} & w_{1C} & \cdots & w_{MC}
\end{pmatrix} \tag{15}
\]
where C represents the number of classes to classify (C = 3: speech, music, and
noise). The output of the linear combinations for the input patterns is:
\[
Y = VQ \tag{16}
\]
with Y being a matrix with C rows and N columns.
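The matrix form of Eqs. (14)-(16) can be sketched as below; sizes and values are illustrative. The point is that stacking a row of ones on top of the feature matrix lets the independent terms $w_{0c}$ be absorbed into $V$:

```python
import numpy as np

# Illustrative sizes: M features, N patterns, C classes.
M, N, C = 4, 5, 3
rng = np.random.default_rng(1)
X = rng.normal(size=(M, N))          # raw feature values x_{mn}

# Q stacks a first row of ones over the features (Eq. 14), so the
# independent terms are carried by the first column of V (Eq. 15).
Q = np.vstack([np.ones((1, N)), X])  # shape (M+1, N)
V = rng.normal(size=(C, M + 1))      # one row of weights per class
Y = V @ Q                            # Eq. (16): C x N matrix of outputs
```

Each column of Y holds the C linear combinations for one input pattern, matching the per-pattern rule of Eq. (13).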
The error (E) thus results in being:
\[
E = Y - T = VQ - T \tag{17}
\]
where T is the matrix of target outputs for the input patterns.
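Minimizing the squared norm of the error in Eq. (17) yields the usual least-squares weights via the pseudoinverse of Q. A minimal sketch, assuming (as is standard for the MSE classifier, though not stated in this excerpt) that T is a one-of-C target matrix with a 1 in the row of the true class:

```python
import numpy as np

# Illustrative training set: M features, N patterns, C classes.
M, N, C = 4, 30, 3
rng = np.random.default_rng(2)
X = rng.normal(size=(M, N))
labels = rng.integers(0, C, size=N)   # assumed class labels 0..C-1
T = np.eye(C)[labels].T               # assumed one-of-C targets, C x N

Q = np.vstack([np.ones((1, N)), X])   # patterns with a leading row of ones
# Minimize ||VQ - T||^2: V = T Q^+ (Moore-Penrose pseudoinverse).
V = T @ np.linalg.pinv(Q)
E = V @ Q - T                         # residual error of Eq. (17)
train_pred = (V @ Q).argmax(axis=0)   # decision: highest linear combination
```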