In the interests of parallelism, we defer that change to w until we can accumulate many changes
in the Reduce phase.
The Reduce Function: For each key i, the Reduce task that handles key i adds all the associated increments and then adds that sum to the ith component of w.
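As a concrete illustration, here is a minimal sketch of such a Reduce function in Python; the name reduce_increments and the representation of w as a list of numbers are assumptions of the sketch, not code from the text.

    # Sketch of the Reduce step (illustrative names): the task handling key i
    # receives every increment emitted for component i, sums them, and adds
    # the sum to the ith component of w.
    def reduce_increments(i, increments, w):
        w[i] += sum(increments)
        return w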
Probably, these changes will not be enough to train the perceptron. If any changes to w
occur, then we need to start a new MapReduce job that does the same thing, perhaps with
different chunks from the training set. However, even if the entire training set was used on
the first round, it can be used again, since its effect on w will be different if w has changed.
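The outer loop can be pictured as repeatedly launching MapReduce jobs until a round produces no changes. The Python sketch below is only an outline of that control flow under assumed names; run_mapreduce_round stands for one complete Map-and-Reduce pass over the chosen chunks and is not part of the text.

    # Keep starting new MapReduce jobs while the previous round changed w.
    # run_mapreduce_round(chunks, w) is assumed to perform one round and
    # return the updated weight vector plus a flag saying whether any
    # increments were emitted (i.e., whether any example was misclassified).
    def train(run_mapreduce_round, chunks, w, max_rounds=100):
        for _ in range(max_rounds):
            w, any_changes = run_mapreduce_round(chunks, w)
            if not any_changes:      # w now classifies every example correctly
                return w
        return w                     # give up after max_rounds if not converged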
12.2.9 Exercises for Section 12.2
EXERCISE 12.2.1 Modify the training set of Fig. 12.6 so that example b also includes the
word “nigeria” (yet remains a negative example - perhaps someone telling about their trip
to Nigeria). Find a weight vector that separates the positive and negative examples, using:
(a) The basic training method of Section 12.2.1 (a sketch of this update rule appears below).
(b) The Winnow method of Section 12.2.3.
(c) The basic method with a variable threshold, as suggested in Section 12.2.4.
(d) The Winnow method with a variable threshold, as suggested in Section 12.2.4.
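For part (a), the following Python sketch shows the zero-threshold update rule of Section 12.2.1: whenever an example (x, y) is misclassified, add ηyx to w. The value of the learning rate, the initialization of w to zeros, and the pass limit are assumptions of the sketch, not the book's code.

    # Basic perceptron training with threshold 0 (illustrative names): repeat
    # passes over the training set, updating w := w + eta*y*x for each
    # misclassified example, until a full pass makes no changes.
    def train_perceptron(examples, dim, eta=0.5, max_passes=1000):
        w = [0.0] * dim
        for _ in range(max_passes):
            changed = False
            for x, y in examples:
                if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                    w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                    changed = True
            if not changed:          # every example classified correctly
                return w
        return w                     # data may not be separable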
! EXERCISE 12.2.2 For the following training set:
([1, 2], +1) ([2, 1], +1)
([2, 3], −1) ([3, 2], −1)
describe all the vectors w and thresholds θ such that the hyperplane (really a line) defined
by w.x − θ = 0 separates the points correctly.
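A candidate answer can be checked mechanically: (w, θ) separates the points exactly when y(w.x − θ) > 0 for every example (x, y). The helper below is a small Python sketch of that test; the names are illustrative only.

    # Return True if the hyperplane w.x - theta = 0 classifies all four
    # training points correctly (positive points strictly on one side,
    # negative points strictly on the other).
    TRAINING_SET = [([1, 2], +1), ([2, 1], +1), ([2, 3], -1), ([3, 2], -1)]

    def separates(w, theta, examples=TRAINING_SET):
        def dot(u, v):
            return sum(a * b for a, b in zip(u, v))
        return all(y * (dot(w, x) - theta) > 0 for x, y in examples)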
! EXERCISE 12.2.3 Suppose the following four examples constitute a training set:
([1, 2], −1) ([2, 3], +1)
([2, 1], +1) ([3, 2], −1)
(a) What happens when you attempt to train a perceptron to classify these points using
0 as the threshold?
!! (b) Is it possible to change the threshold and obtain a perceptron that correctly classifies
these points?