FIGURE 4.3: HGN composition for crosstalk example (see Figure 3.8).
its value with the adjacent GNs. The resulting bias array structure is shown
in Table 4.1. All active, non-edge GNs (V2, W3, and X4) will send their
bias index to their corresponding GN in the higher layer (in this case, V2 →
1V1, W3 → 1W2, and X4 → 1X3). Once a layer-1 GN receives a bias index,
it is activated. The recognition process at this level compares the base-level
bias indices received by adjacent layer-1 GNs. The bias array contents of each
GN in layer 1 are also shown in Table 4.1. The active, non-edge GN, 1W2,
sends its index to the top-layer GN, TW. At this stage, TW checks its bias
array for an appearance of the index received from 1W2. If the index appears
in the bias array, TW recalls it and propagates it back to all GNs in the
network; otherwise, a new index is generated and propagated to the network.
Tables 4.2 and 4.3 show the bias arrays obtained when the patterns
“zvwxy” and “uvwxy” are introduced into the network.
In the HGN implementation, pattern “uvwxy” was found to be different from
patterns “uvwxz” and “zvwxy.” Therefore, the crosstalk problem is solved by
this hierarchical scheme.
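The propagation described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation) of a three-layer HGN for five-element patterns: each GN keeps a bias array of previously seen input combinations and performs one-shot learning by appending any unseen entry. The class names, the restriction to the single middle layer-1 GN, and the tuple encoding of neighbor exchanges are all simplifying assumptions made for this sketch.

```python
class GN:
    """Graph neuron holding a bias array of previously seen input combinations."""
    def __init__(self):
        self.bias = []  # bias array: ordered list of observed entries

    def index_of(self, entry):
        # One-shot learning: an unseen entry is stored on first sight.
        # Bias indices are 1-based, matching the book's numbering style.
        if entry not in self.bias:
            self.bias.append(entry)
        return self.bias.index(entry) + 1


class HGN:
    """Simplified three-layer HGN for 5-element patterns (illustrative only)."""
    def __init__(self):
        self.base = {c: GN() for c in "VWX"}  # non-edge base-layer GNs
        self.mid = GN()                       # 1W2: the non-edge layer-1 GN
        self.top = GN()                       # TW: the top-layer GN

    def recall(self, p):
        # Base layer: V, W, X each combine their own value with both neighbors.
        b = [self.base[c].index_of(p[i - 1:i + 2])
             for i, c in ((1, "V"), (2, "W"), (3, "X"))]
        # Layer 1: the middle GN compares the adjacent base-layer bias indices.
        m = self.mid.index_of(tuple(b))
        # Top layer: TW recalls a stored index or generates a new one.
        return self.top.index_of(m)


net = HGN()
a = net.recall("uvwxz")
b = net.recall("zvwxy")
c = net.recall("uvwxy")  # distinct index, despite sharing "vwx" with both
```

Running the three patterns through the same network yields three distinct top-layer indices: because the layer-1 GN sees the *combination* of base-layer indices, “uvwxy” is not confused with the overlapping segments of “uvwxz” and “zvwxy,” which is how the hierarchy avoids crosstalk.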
4.1.2 Computational Design for a Hierarchical One-Shot
Learning DPR Scheme
The hierarchical composition of a GN network is built up by layers of neurons.
The size of the HGN network is important in constructing an efficient
composition that is based on the availability and capacity of the processing