[Figure: four panels, (a)-(d), each plotting an error PMF P_E(e) against e over e ∈ {−1, 0, 1}, with P_E(e) on a 0-to-1 scale.]
Fig. 4.14 Error PMFs for p = 0.5 with: a) α = 0.49, β = 0.01; b) α = 0.8, β = 0.8; c) α = 0.6, β = 1; d) α = 0.7, β = 0.89.
4.2 The Discrete-Output Perceptron
The perceptron with threshold activation function (also called step function), z = θ(y), implements the classifier function (see Sect. 3.3)

z = \theta(y) =
\begin{cases}
1, & y = \mathbf{w}^T \mathbf{x} + w_0 > 0 \\
0, & y = \mathbf{w}^T \mathbf{x} + w_0 \leq 0
\end{cases}
\qquad (4.52)

which is known as the McCulloch-Pitts model [156].
Rosenblatt [190] proposed a training rule for this classifier that provably converges in a finite number of iterations provided the classes are linearly separable (the so-called perceptron convergence theorem); a sketch of this rule is given below.
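The following is a minimal sketch of a Rosenblatt-style training loop under the {0, 1} target convention of (4.52); the function name, the learning rate eta, and the max_epochs cap are illustrative choices of ours, not the book's formulation.

```python
import numpy as np

def train_perceptron(X, t, eta=1.0, max_epochs=100):
    """Rosenblatt-style perceptron training: for each misclassified
    pattern, move the weights by eta * (t - z) * x."""
    n, d = X.shape
    w = np.zeros(d)
    w0 = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, t):
            z = 1 if np.dot(w, x) + w0 > 0 else 0  # Eq. (4.52)
            if z != target:
                w += eta * (target - z) * x
                w0 += eta * (target - z)
                errors += 1
        if errors == 0:
            break  # a full pass with no updates: all patterns classified
    return w, w0
```

When the classes are linearly separable, the loop exits after an epoch with no updates, which is exactly the situation covered by the convergence theorem; otherwise it stops at the epoch cap.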
Here, we study how the SEE risk copes with the classification problem (not restricted to linearly separable classes). We still use formula (4.2), but now the class error probabilities are given by