Fig. 6.10. Partial cost corresponding to Hebb's rule
We now turn to differentiable costs. The following partial cost, represented on Fig. 6.10,

$$V(z) = -z,$$

is the simplest monotonically decreasing function.
After introducing its derivative into $\Delta w(t)$, we find

$$\Delta w = \mu \, \frac{1}{M} \sum_{k=1}^{M} y^k x^k,$$
which is nothing but Hebb's rule. As was already discussed, since the partial cost is a monotonically decreasing function, it is necessary to introduce the normalization constraint on the weights to guarantee that the algorithm will stop. A single iteration then suffices to find the cost minimum. We will take advantage of that result to initialize the Minimerror algorithm.
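As an illustration, here is a minimal sketch of this one-iteration Hebbian update with the normalization constraint. The names X (one example per row), y (targets in {−1, +1}), and mu, as well as the choice of unit-norm normalization, are assumptions made for the example, not notation taken from the text.

```python
import numpy as np

def hebb_update(X, y, mu=1.0):
    """One batch Hebbian step: Delta w = (mu / M) * sum_k y^k x^k,
    followed by projection onto the unit sphere (normalization constraint,
    assumed here to be ||w|| = 1)."""
    M = X.shape[0]
    w = (mu / M) * (y[:, None] * X).sum(axis=0)  # Hebb's rule in one pass
    return w / np.linalg.norm(w)                 # normalization keeps the cost bounded

# Hypothetical toy data: two linearly separable classes
X = np.array([[1.0, 2.0], [2.0, 1.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = hebb_update(X, y)
```

Because the cost is linear in the weights, a single such step followed by normalization already lands on the minimum, which is what makes it a convenient initialization.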
The perceptron algorithm may be derived from the following partial cost:
$$V(z) = -z \,\Theta(-z),$$
shown on Fig. 6.11. The weight updates at each iteration are

$$\Delta w = \mu \, \frac{1}{M} \sum_{k=1}^{M} \Theta(-z^k) \, y^k x^k.$$
This is equivalent to a nonadaptive ("batch") version of the perceptron algorithm. Here the weights are updated using all the incorrectly classified examples at each iteration (thanks to the Θ function in Δw, which eliminates the contribution of the correctly classified examples), as in the sketch below.
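The following is a minimal sketch of this batch perceptron under the same assumptions on X, y, and mu as above; treating z = 0 as misclassified and capping the number of iterations are choices made here for the example, not prescribed by the text.

```python
import numpy as np

def batch_perceptron(X, y, mu=0.1, max_iter=1000):
    """Batch perceptron: at each iteration, sum the terms y^k x^k over the
    misclassified examples only, i.e. those selected by Theta(-z^k)."""
    M, n = X.shape
    w = np.zeros(n)
    for _ in range(max_iter):
        z = y * (X @ w)          # stabilities z^k = y^k w . x^k
        wrong = z <= 0           # Theta(-z^k): keep only the errors
        if not wrong.any():      # every example correctly classified: stop
            break
        w += (mu / M) * (y[wrong][:, None] * X[wrong]).sum(axis=0)
    return w
```

If the examples are linearly separable the loop exits as soon as no misclassified example remains; otherwise the iteration cap bounds the run.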