FIGURE 2.4: Comparison of the estimated processing speedup between recognition processes with different parallel fractions (P) as a function of the number of parallel processors used.
as in the Hebbian and incremental learning approaches. Because the training in a GN is conducted within a single cycle, the recognition process is faster.
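
The speedup estimates shown in Figure 2.4 are presumably obtained from Amdahl's law, S(N) = 1 / ((1 - P) + P/N), where P is the parallel fraction of the recognition process and N is the number of parallel processors. The short Python sketch below (not from the original text; the function name is a placeholder) shows how such curves can be tabulated for several values of P.

# Amdahl's law: estimated speedup S(N) = 1 / ((1 - P) + P / N),
# where P is the parallel fraction of the recognition process and
# N is the number of parallel processors.
def amdahl_speedup(parallel_fraction, num_processors):
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / num_processors)

if __name__ == "__main__":
    # Tabulate speedup curves for several parallel fractions, as in Figure 2.4.
    for p in (0.50, 0.75, 0.90, 0.95):
        speedups = [amdahl_speedup(p, n) for n in (1, 2, 4, 8, 16, 32)]
        print("P = %.2f: " % p + ", ".join("%.2f" % s for s in speedups))

Note how the curves flatten as N grows: the serial fraction (1 - P) bounds the achievable speedup regardless of how many processors are added.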
2.3 System Approaches
Existing distributed pattern recognition schemes have been designed and deployed using a top-down approach: relatively CPU-centric (i.e., sequential) algorithms were modified and enhanced to perform in a distributed manner. Furthermore, existing schemes tend to apply the distribution mechanism only partially, i.e., only in the context of training and validation. Examples include feed-forward neural networks and self-organizing maps. Several types of distribution approaches have been considered [39]:
1. Process Farming: In this approach, the recognition process is distributed across a number of parallel processors. Each processor uses a copy of the algorithm to carry out a training process, as shown in Figure 2.5. In this configuration, each processing network consists of a master node and several worker nodes. Each worker node performs training or recognition processes independently. However, for the purposes of evaluation/adjustment, updates (in terms of a bias weight and errors) must be communicated back to the master node, as sketched below.
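
A minimal sketch of this master-worker arrangement follows, assuming Python's standard multiprocessing module; the function name worker_train and its placeholder arithmetic are illustrative assumptions, not part of the original scheme.

import multiprocessing as mp

def worker_train(partition):
    # Hypothetical worker-node routine: run a local training pass on the
    # assigned data partition and return the updates (bias weight, error)
    # that the master needs for evaluation/adjustment.  The arithmetic
    # below is a placeholder, not an actual learning rule.
    bias_update = sum(x * 0.01 for x in partition)
    error = sum(abs(x) for x in partition) / max(len(partition), 1)
    return bias_update, error

if __name__ == "__main__":
    # Master node: partition the training data across the worker processes.
    training_data = list(range(100))
    num_workers = 4
    partitions = [training_data[i::num_workers] for i in range(num_workers)]

    # Each worker independently runs its copy of the training routine.
    with mp.Pool(processes=num_workers) as pool:
        updates = pool.map(worker_train, partitions)

    # Master collects the bias-weight and error updates for adjustment.
    total_bias = sum(b for b, _ in updates)
    mean_error = sum(e for _, e in updates) / len(updates)
    print("aggregated bias update: %.2f, mean error: %.2f"
          % (total_bias, mean_error))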