FIGURE 2.1: Hopfield neural network representation.
M = \sum_{n=1}^{P} w_n i_n \qquad (2.1)
where w_n is the correlated weight and i_n is the input value for the nth stored pattern, and P is the number of stored patterns. This type of memory representation occurs in different neural network schemes, including the feed-forward neural network, Hopfield network, radial basis function (RBF) neural network, morphological associative memory (MAM) [28], and Hamming associative memory [29]. The recall accuracy of the Hopfield network deteriorates significantly once the number of stored patterns exceeds 0.138N, where N is the number of nodes in the network.
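As a concrete illustration of this superposed storage, the following sketch builds a Hopfield weight matrix by Hebbian summation over P bipolar patterns, mirroring Eq. (2.1), and then recalls one pattern from a noisy probe. The 100-node size, the function names, and the NumPy-based formulation are illustrative assumptions, not the text's own implementation.

import numpy as np

def hopfield_weights(patterns):
    # Hebbian storage: superpose the outer products of the stored
    # bipolar (+1/-1) patterns, mirroring the summation in Eq. (2.1).
    P, N = patterns.shape
    W = np.zeros((N, N))
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / N

def recall(W, probe, max_steps=10):
    # Synchronous sign updates until the state settles.
    s = probe.copy()
    for _ in range(max_steps):
        s_next = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

N = 100                              # nodes in the network (assumed size)
P = int(0.138 * N)                   # store right at the 0.138N capacity limit
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(P, N))
W = hopfield_weights(patterns)

noisy = patterns[0].copy()
noisy[:5] *= -1                      # corrupt five components of the probe
print(np.mean(recall(W, noisy) == patterns[0]))   # fraction recalled correctly

Storing more than 0.138N patterns and rerunning the recall step shows the deterioration the text describes.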
Not all neural network approaches use this type of memory representation. For instance, the memory representation of a Kohonen SOM [30] differs from that of other neural networks (see Figure 2.2). Each node in the SOM lattice represents a pattern using a weight vector: the node stores one weight per component of the pattern vector. Thus, for a d-dimensional pattern vector, each node holds an equal number of weights, w = d.
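A minimal sketch of this per-node vector-weight storage follows, assuming a 10 x 10 lattice and d = 3 (both illustrative choices): each lattice node holds exactly d weights, and a pattern is matched by locating the node whose weight vector lies closest to it.

import numpy as np

d = 3                     # pattern-vector dimensionality (assumed)
rows, cols = 10, 10       # lattice size (assumed)
rng = np.random.default_rng(0)

# One weight vector per lattice node: each node stores exactly d weights (w = d).
lattice = rng.random((rows, cols, d))

def best_matching_unit(lattice, pattern):
    # The node whose stored weight vector lies closest to the input pattern.
    distances = np.linalg.norm(lattice - pattern, axis=2)
    return np.unravel_index(np.argmin(distances), distances.shape)

print(best_matching_unit(lattice, np.array([0.2, 0.7, 0.1])))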
2.1.2 Inter-Neuron Communication Frequency
The communication frequency of a neural network implementation is the number of communications, i.e., messages or signals, sent by a single node (or neuron) toward the other nodes in the network. In actual implementations, a high communication frequency leads to network congestion, which limits the scalability of the recognition implementation. For a network to be scalable, it is therefore important that the communication frequency be kept to a minimum.
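One way to see why communication frequency dominates scalability is to count messages per update step. The sketch below contrasts an all-to-all topology, where traffic grows quadratically, with a hypothetical k-neighbour topology, where it grows linearly; both topologies and the value of k are assumptions for illustration only.

def messages_fully_connected(n):
    # Every node signals every other node: n * (n - 1) messages per step.
    return n * (n - 1)

def messages_k_neighbours(n, k=8):
    # Each node signals only k local neighbours: linear in n.
    return n * k

for n in (100, 1_000, 10_000):
    print(n, messages_fully_connected(n), messages_k_neighbours(n))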