Digital Signal Processing Reference
w_ij(n) = w_ij(n − 1) + η δ_j^h p_i    (6.54)

by including a momentum term as described in [61]:

Δw_ij(n) = α Δw_ij(n − 1) + η δ_j^h p_i    (6.55)
where α is a positive constant called the momentum constant.
Describe how this affects the weights and also explain how a
normalized weight updating can be used for speeding the MLP
backpropagation training.
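The momentum update of equation (6.55) can be sketched in a few lines. This is an illustrative sketch, not the book's code; the names (eta, alpha, delta_h, p) and the placeholder values are assumptions mirroring the symbols in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

eta = 0.1                          # learning rate (eta in the text)
alpha = 0.9                        # momentum constant, 0 <= alpha < 1
w = rng.normal(size=(3, 4))        # weights w_ij
dw_prev = np.zeros_like(w)         # previous update, Delta w_ij(n - 1)

delta_h = rng.normal(size=3)       # local gradients delta_j^h (placeholder values)
p = rng.normal(size=4)             # inputs p_i (placeholder values)

# Delta w_ij(n) = alpha * Delta w_ij(n - 1) + eta * delta_j^h * p_i
dw = alpha * dw_prev + eta * np.outer(delta_h, p)
w = w + dw
dw_prev = dw
```

The momentum term lets the update carry over part of the previous step, which smooths oscillations and can speed convergence along consistent gradient directions.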
6. The momentum constant is in most cases a small number with 0 ≤ α < 1. Discuss the effect of choosing a small negative constant with −1 < α ≤ 0 for the modified weight updating rule from equation (6.55).
7. Create two data sets, one for training an MLP and the other for testing it. Use an MLP with a single hidden layer and train it with the training data set. Use two possible nonlinearities: f(x) = x/(1 + x²) and f(x) = π tan⁻¹(x). Determine for each of the given nonlinearities
a) The computational accuracy of the network by using the test
data.
b) The effect on the network performance by varying the size of
the hidden layer.
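A minimal sketch of this exercise setup is given below, assuming the two nonlinearities are f(x) = x/(1 + x²) and f(x) = π tan⁻¹(x) (as read from the text): a single-hidden-layer MLP trained by backpropagation with a pluggable activation so both can be compared. The toy data sets and all parameter values are placeholders, not the book's.

```python
import numpy as np

def f1(x):  return x / (1.0 + x**2)
def df1(x): return (1.0 - x**2) / (1.0 + x**2)**2    # derivative of f1
def f2(x):  return np.pi * np.arctan(x)
def df2(x): return np.pi / (1.0 + x**2)              # derivative of f2

def train_mlp(X, y, f, df, hidden=8, eta=0.02, epochs=2000, seed=0):
    """Backpropagation for one hidden layer (activation f) and a linear output."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        a = X @ W1                 # hidden pre-activations
        h = f(a)                   # hidden outputs
        err = h @ W2 - y           # output error
        # gradient steps on the mean squared error
        W2 -= eta * h.T @ err / len(X)
        W1 -= eta * X.T @ ((err @ W2.T) * df(a)) / len(X)
    return W1, W2

def mse(X, y, W1, W2, f):
    return float(np.mean((f(X @ W1) @ W2 - y) ** 2))

# Placeholder training and test sets; target is the product x1 * x2.
rng = np.random.default_rng(1)
Xtr = rng.uniform(-1, 1, size=(200, 2)); ytr = Xtr[:, :1] * Xtr[:, 1:]
Xte = rng.uniform(-1, 1, size=(50, 2));  yte = Xte[:, :1] * Xte[:, 1:]

for name, (f, df) in {"x/(1+x^2)": (f1, df1), "pi*atan(x)": (f2, df2)}.items():
    W1, W2 = train_mlp(Xtr, ytr, f, df)
    print(name, "test MSE:", mse(Xte, yte, W1, W2, f))
```

Varying the `hidden` argument addresses part (b); comparing the printed test errors addresses part (a).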
8. Comment on the differences and similarities between the Kohonen
map and the LVQ.
9. Which unsupervised learning neural networks are “topology-
preserving” and which are “neighborhood-preserving”?
10. Consider a Kohonen map performing a mapping from a 3-D input
onto a 1-D neural lattice of 100 neurons. The input data are random
points uniformly distributed inside a sphere of radius 1 centered
at the origin. Compute the map produced by the neural network
after 100, 1000, and 10,000 iterations.
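One way to run this experiment is sketched below (an assumed implementation, not from the book): a 1-D Kohonen lattice of 100 neurons trained on points drawn uniformly from the interior of the unit sphere, with an exponentially shrinking learning rate and neighborhood width.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n):
    # rejection sampling: points uniformly distributed inside the unit sphere
    pts = []
    while len(pts) < n:
        p = rng.uniform(-1, 1, size=3)
        if p @ p <= 1.0:
            pts.append(p)
    return np.array(pts)

def train_som(n_iter, n_neurons=100):
    W = rng.uniform(-0.1, 0.1, size=(n_neurons, 3))   # 1-D lattice weights
    idx = np.arange(n_neurons)
    for t, x in enumerate(sample_ball(n_iter)):
        eta = 0.5 * (0.01 / 0.5) ** (t / n_iter)               # decaying learning rate
        sigma = (n_neurons / 2) * (2.0 / n_neurons) ** (t / n_iter)  # shrinking width
        winner = np.argmin(((W - x) ** 2).sum(axis=1))         # best-matching neuron
        h = np.exp(-((idx - winner) ** 2) / (2 * sigma**2))    # lattice neighborhood
        W += eta * h[:, None] * (x - W)
    return W

for n_iter in (100, 1000, 10000):
    W = train_som(n_iter)
    print(n_iter, "iterations; mean weight norm:", float(np.linalg.norm(W, axis=1).mean()))
```

With more iterations the 1-D chain of neurons unfolds and fills the sphere's volume, a space-filling-curve-like behavior typical of dimension-reducing Kohonen maps.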
11. Write a program to show how the Kohonen map can be used for
image compression. Choose blocks of 4 × 4 representing gray values
from the image as input vectors for the feature map.
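A sketch of this program (an assumed implementation) is given below: the Kohonen map acts as a vector quantizer, 4 × 4 blocks of gray values form 16-dimensional input vectors, and each block is compressed to the index of its winning neuron. The synthetic random "image" and all training parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

img = rng.integers(0, 256, size=(64, 64)).astype(float)   # placeholder image

# split into non-overlapping 4x4 blocks -> (256, 16) input vectors
blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)

n_neurons = 32                      # codebook: 32 codewords of 16 gray values
W = blocks[rng.choice(len(blocks), n_neurons, replace=False)].copy()
idx = np.arange(n_neurons)

for t in range(4000):
    x = blocks[rng.integers(len(blocks))]
    eta = 0.5 * (0.01 / 0.5) ** (t / 4000)                 # decaying learning rate
    sigma = (n_neurons / 2) * (1.0 / n_neurons) ** (t / 4000)  # shrinking width
    winner = np.argmin(((W - x) ** 2).sum(axis=1))         # best-matching neuron
    h = np.exp(-((idx - winner) ** 2) / (2 * sigma**2))    # 1-D lattice neighborhood
    W += eta * h[:, None] * (x - W)

# compress: store one codebook index per block instead of 16 gray values
codes = np.argmin(((blocks[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
recon = W[codes].reshape(16, 16, 4, 4).swapaxes(1, 2).reshape(64, 64)
print("distortion (MSE):", float(np.mean((img - recon) ** 2)))
```

Storing one index per block instead of 16 gray values gives the compression; the learned codebook plays the same role as in classical vector quantization, with the lattice neighborhood ordering the codewords.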
12. When does the radial-basis neural network become a "fuzzy" neural network?