2.4 Pattern Distribution Techniques
Implementations of existing neural network/machine learning approaches
for pattern recognition have shown some limitations. These include the gener-
alization problem and complex learning mechanisms. These limitations affect
the scalability of the approaches for real-time and large-scale recognition de-
ployments. Furthermore, existing approaches are CPU-centric, i.e., they have
been developed with the single-processing mechanism in mind. According to
Ikeda et al. [40], it is difficult for current neural network approaches to implement actual associative memory principles, in which simple low-cost devices are equipped with these algorithms for pattern recognition purposes.
In solving the scalability issue within pattern recognition applications, we
intend to shift the recognition paradigm from a sequential-based CPU-centric
approach toward a parallel in-network approach. The in-network processing
paradigm concentrates on the delegation and distribution of processes over the
body of a network rather than utilizing a single-processing device or node. The
ability of a system to distribute data across a number of processors or nodes
in the network is an important aspect in the distributed approach for pattern
recognition. It is essential that pattern distribution techniques be applied. In
this section, two different pattern distribution techniques are described:
1. Subpattern Distribution: Each pattern is partitioned into subpatterns
for recognition over the entire network. Each node within the network
receives a subpattern for processing.
2. Set Distribution: A pattern set containing a number of patterns is dis-
tributed for recognition. Each pattern subset will be executed by a spe-
cific processing node within the network.
Figure 2.7 shows a comparison of the techniques. These techniques will be
discussed in the following subsections.
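The contrast between the two techniques can be sketched in a few lines of Python. This is only an illustration, not an implementation from the text: the function names and the node count are assumptions, and each returned partition stands in for the work assigned to one processing node.

```python
def split_into_subpatterns(pattern, num_nodes):
    """Subpattern distribution: one pattern is partitioned into
    contiguous subpatterns, one per processing node."""
    size = -(-len(pattern) // num_nodes)  # ceiling division
    return [pattern[i:i + size] for i in range(0, len(pattern), size)]

def split_pattern_set(patterns, num_nodes):
    """Set distribution: a set of whole patterns is partitioned into
    subsets; each node recognizes its own subset of full patterns."""
    return [patterns[i::num_nodes] for i in range(num_nodes)]

# One 8-element pattern split across 4 nodes:
pattern = [0, 1, 1, 0, 1, 0, 0, 1]
print(split_into_subpatterns(pattern, 4))
# A set of 5 whole patterns split across 2 nodes:
patterns = ["A", "B", "C", "D", "E"]
print(split_pattern_set(patterns, 2))
```

Note that under subpattern distribution every node sees a fragment of the same pattern, whereas under set distribution every node sees complete patterns but only part of the overall set.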
2.4.1 Subpattern Distribution
The subpattern distribution technique in the DPR approach involves di-
viding a pattern into small-scale subpatterns. These subpatterns will be dis-
tributed across several processing nodes for the recognition process. The work
of Garai and Chaudhuri [25] on the Distributed Hierarchical Genetic Algorithm (DHGA) for efficient optimization and pattern matching is an example of this distribution technique. In this work, the entire search space is divided into subspaces, and the search process is conducted at this level. Parallel genetic algorithms are implemented on each subspace.
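The search-space division underlying this scheme can be illustrated with a minimal sketch. This is not the DHGA algorithm itself, only the partitioning step it rests on; the one-dimensional interval, its bounds, and the subspace count are assumptions made for the example, with each resulting subinterval standing in for the region a parallel genetic algorithm would search.

```python
def divide_search_space(lower, upper, num_subspaces):
    """Partition a one-dimensional search interval into equal-width
    subspaces, each to be searched by an independent genetic algorithm."""
    width = (upper - lower) / num_subspaces
    return [(lower + i * width, lower + (i + 1) * width)
            for i in range(num_subspaces)]

# Divide the interval [0, 8] into 4 subspaces for 4 parallel searches:
print(divide_search_space(0.0, 8.0, 4))
# [(0.0, 2.0), (2.0, 4.0), (4.0, 6.0), (6.0, 8.0)]
```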
Ikeda et al. [40] proposed a distributed approach for Hamming associative
memory using the decoupled Hamming AM approach. The input vector is