of entries. For instance, in a three-dimensional GN network, the bias entry of
each neuron is (left, right, top, bottom), which is equivalent to n_row.
3.4 Graph Neuron Limitations
The GN pattern recognition approach exchanges subpattern information
between two or more adjacent neurons. For instance, a GN network will mem-
orize the pattern “abcdef” in the form of subpatterns: “ab,” “abc,” “bcd,”
“cde,” “def,” and “ef.” Note that the number of subpatterns is equivalent to
the number of active neurons.
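The decomposition above can be sketched as follows. This is a minimal illustration, not the authors' implementation: each neuron stores its own element together with the elements of its immediate neighbors, so interior neurons hold three-element subpatterns and the two edge neurons hold two-element ones.

```python
def gn_subpatterns(pattern):
    """Decompose a pattern into adjacent-neuron subpatterns.

    Neuron i sees its own element plus its left and right neighbors;
    edge neurons have only one neighbor, so their subpattern is shorter.
    """
    n = len(pattern)
    return [pattern[max(0, i - 1):min(n, i + 2)] for i in range(n)]

print(gn_subpatterns("abcdef"))
# ['ab', 'abc', 'bcd', 'cde', 'def', 'ef']
```

Note that the list has exactly one entry per neuron, matching the text's observation that the number of subpatterns equals the number of active neurons.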
The GN's limited awareness of the overall pattern affects the accuracy of
its recognition scheme. As the size of the pattern increases, it is more diffi-
cult for a GN network to obtain an overview of the pattern's composition.
Different patterns that have a similar subpattern structure lead to false recall
and incomplete results. Let us suppose that a GN network can allocate six
possible element values, e.g., “u,” “v,” “w,” “x,” “y,” and “z,” for a
five-element pattern. The pattern “uvwxz” is introduced, followed by “zvwxy.” These two
patterns are stored by the GN array. Next, we introduce the pattern “uvwxy”;
this will produce a recall. Because the last pattern does not match the pre-
viously stored patterns, the recall is false. The reason for this false recall is
that a GN only knows of its own value and the values of its adjacent neurons.
The last pattern decomposes into the segments “uv,” “uvw,” “vwx,” “wxy,”
and “xy,” each of which was already stored for one of the two earlier pat-
terns. Though different from both, it is therefore indistinguishable from
them at the subpattern level. Figure 3.7
uses a graphical representation to simplify this example.
FIGURE 3.7: An illustration of the crosstalk phenomenon for patterns input
to a GN network.
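The crosstalk scenario can be reproduced with the same decomposition. This is a hypothetical sketch of the store-and-recall logic, assuming stored subpatterns are pooled into a single set and a recall fires when every segment of the query is found there:

```python
def gn_subpatterns(pattern):
    # Each neuron stores its element plus its immediate neighbors' elements.
    n = len(pattern)
    return [pattern[max(0, i - 1):min(n, i + 2)] for i in range(n)]

# Store the two training patterns as a pool of subpatterns.
store = set()
for p in ("uvwxz", "zvwxy"):
    store.update(gn_subpatterns(p))

# Introduce the third pattern, which was never stored.
query = "uvwxy"
segments = gn_subpatterns(query)
recalled = all(s in store for s in segments)

print(segments)  # ['uv', 'uvw', 'vwx', 'wxy', 'xy']
print(recalled)  # True: a false recall, since "uvwxy" itself was never stored
```

Every segment of “uvwxy” is found in the pool, “uv” and “uvw” coming from “uvwxz” and “wxy” and “xy” from “zvwxy,” so the array recalls a pattern it never saw: exactly the crosstalk the text describes.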