The output vector of the mixture of experts is the weighted output of the experts, and becomes

$$\mu = \sum_{i} g_i \mu_i \qquad (6.11)$$
Both $g_i$ and $\mu_i$ depend on the input $x$; thus, the output is a nonlinear
function of the input.
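As a minimal illustration of equation (6.11), the following NumPy sketch computes the weighted output for a small set of experts. All names and values here are illustrative assumptions, not from the text; in a full model the gating coefficients $g_i$ would themselves be computed from the input $x$, for example by a softmax gating network.

```python
import numpy as np

def mixture_output(g, mu):
    """Weighted output of a mixture of experts, eq. (6.11): mu = sum_i g_i * mu_i.

    g  : (n_experts,) gating coefficients (nonnegative, summing to 1)
    mu : (n_experts, output_dim) individual expert outputs
    """
    return g @ mu  # weighted sum over the experts

# Example: three experts with 2-D outputs
g = np.array([0.5, 0.3, 0.2])
mu = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.5, 0.5]])
print(mixture_output(g, mu))  # -> [0.6 0.4]
```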
6.3 Self-organizing Neural Networks
Self-organizing maps implement competition-based learning paradigms.
They represent a nonlinear mapping from a higher-dimensional feature
space onto a lattice of neurons, usually 1-D or 2-D. This neural network
has the closest resemblance to biological cortical maps. The training
mechanism is based on competitive learning: a similarity (or dissimilarity)
measure is selected, and the winning neuron is the one with
the largest activation. A neighborhood constraint is imposed on the
output units such that similarity relations between input vectors are
reflected in the weights of neighboring output neurons. If both the input
space and the neuron lattice have the same dimension, then this self-organizing
feature map [141] also becomes topology-preserving.
Self-organizing feature map
Mathematically, the self-organizing map (SOM) determines a transfor-
mation from a high-dimensional input space onto a one-dimensional
or two-dimensional discrete map. The transformation takes place as an
adaptive learning process such that when it converges, the lattice rep-
resents a topographic map of the input patterns. The training of the
SOM is based on the random presentation of input vectors, one at
a time. Typically, each input vector causes the firing of a localized
group of neighboring neurons whose weight vectors are close to the input vector.
The most important features of such a network are the following:
1. A 1-D or 2-D lattice of neurons on which input patterns of arbitrary
dimension are mapped, as visualized in figure 6.6a.
2. A measure that determines the winner neuron based on the similarity
between the weight vector and the input vector, as illustrated in the sketch below.
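As a concrete illustration of the training mechanism just described, here is a minimal NumPy sketch, not the book's algorithm: the winner neuron is the one whose weight vector has the smallest Euclidean distance to the presented input, and a Gaussian neighborhood function on the 2-D lattice pulls the winner's neighbors toward that input. The parameter names, decay schedules, and lattice size are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_iter=1000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Sketch of SOM training: random input presentation, winner selection
    by smallest Euclidean distance, and a Gaussian neighborhood update."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))   # lattice of weight vectors
    # lattice coordinates of every neuron, used by the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]         # present one random input
        # winner: neuron whose weight vector is closest to x
        d = np.linalg.norm(weights - x, axis=-1)
        winner = np.unravel_index(np.argmin(d), d.shape)
        # learning rate and neighborhood radius shrink as training proceeds
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Gaussian neighborhood around the winner on the lattice
        lattice_d2 = np.sum((coords - np.array(winner)) ** 2, axis=-1)
        h = np.exp(-lattice_d2 / (2.0 * sigma ** 2))
        # move neighboring weight vectors toward the input
        weights += lr * h[..., None] * (x - weights)
    return weights

# Usage: map 3-D input vectors onto a 10 x 10 lattice
som_weights = train_som(np.random.default_rng(1).random((500, 3)))
```

Shrinking the neighborhood radius over time is what lets the converged lattice act as a topographic map: early updates order the map globally, while late, narrow updates fine-tune individual weight vectors.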