3.3.4 Counterpropagation Networks
A counterpropagation network, as proposed by Hecht-Nielsen (1987a, 1988), is a
combination of a Kohonen self-organizing map and Grossberg's learning. The
combination of two neuro-concepts provides the new network with properties that
are not available in either one of them. For instance, for a given set
of input-output vector pairs (x1, y1), (x2, y2), ..., (xn, yn), the network can
learn the functional relationship y = f(x) between the input vector
x = (x1, x2, ..., xn) and the output vector y = (y1, y2, ..., yn).
If the inverse of the function f(x) exists, then the network can also
generate the inverse functional relationship x = f⁻¹(y).
When adequately trained, the counterpropagation network can serve as a
bidirectional associative memory
, useful for pattern mapping and classification,
analysis of statistical data, data compression and, above all, for function
approximation.
[Figure: counterpropagation network with input layer x1 ... xn, Kohonen (hidden) layer, and output layer y1 ... ym]
Figure 3.9.
Configuration of a counterpropagation network
The overall configuration of a counterpropagation network is presented in
Figure 3.9. It is a three-layer network configuration that includes the input layer,
the Kohonen
competitive layer
as hidden layer, and the
Grossberg output layer
.
The hidden layer performs the key mapping operations in a competitive
winner-
takes-all fashion
. As a consequence, each given input vector
x = (x1, x2, ..., xn) activates only a single neuron in the Kohonen layer, leaving all
other neurons of the layer inactive (see Figure 3.10). Once the competition process
is terminated, the set of weights (w1p, w2p, ..., wnp) connecting the activated
neuron (say p) with the neurons of the output layer defines the output of the
activated neuron as the sum of products
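The winner-takes-all mapping described above can be sketched in NumPy. This is a minimal illustration, not the book's implementation; the layer sizes, learning rates, Euclidean winner selection, and the example target function are assumptions introduced here for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 3, 10, 2                    # input dim, Kohonen neurons, output dim (assumed sizes)
W_kohonen = rng.normal(size=(k, n))   # input -> Kohonen weights
W_grossberg = np.zeros((m, k))        # Kohonen -> output (Grossberg) weights

def winner(x):
    # Winner-takes-all: the Kohonen neuron whose weight vector is
    # closest to x is activated; all other neurons stay inactive.
    return int(np.argmin(np.linalg.norm(W_kohonen - x, axis=1)))

def forward(x):
    p = winner(x)
    # Only the winning neuron p is active, so the sum of products over the
    # Kohonen layer collapses to the winner's column of Grossberg weights.
    return W_grossberg[:, p]

# Training sketch: competitive (Kohonen) update, then outstar (Grossberg) update.
alpha, beta = 0.3, 0.1                # assumed learning rates
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, size=n)
    y = np.array([x.sum(), x.prod()])           # example target function y = f(x)
    p = winner(x)
    W_kohonen[p] += alpha * (x - W_kohonen[p])          # move winner toward x
    W_grossberg[:, p] += beta * (y - W_grossberg[:, p]) # learn winner's output
```

After training, `forward` returns the stored output vector associated with whichever Kohonen neuron wins the competition for the given input.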