FIGURE 2.6: Distributed pattern recognition based on the process pipelining approach.
training data that is difficult to classify due to dissimilar feature values,
are significantly different from the weight changes produced on loosely
cohesive data, i.e., data that is easily classified and clustered.
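The contrast above can be sketched with a single-weight perceptron; this is an illustrative stand-in chosen for brevity, not one of the algorithms discussed here. On easily separable data the weight updates die out after convergence, while on overlapping, hard-to-classify data they persist epoch after epoch:

```python
def train_epochs(xs, ys, epochs=20, lr=0.1):
    """Single-input perceptron; returns total |weight change| per epoch."""
    w, b = 0.0, 0.0
    changes = []
    for _ in range(epochs):
        total = 0.0
        for x, y in zip(xs, ys):
            pred = 1 if w * x + b > 0 else 0
            err = y - pred                      # 0 when correctly classified
            w += lr * err * x
            b += lr * err
            total += abs(lr * err * x) + abs(lr * err)
        changes.append(total)
    return changes

# Easily classified: labels split cleanly around zero.
easy = train_epochs([-3, -2, -1, 1, 2, 3], [0, 0, 0, 1, 1, 1])
# Hard to classify: no single threshold separates the labels,
# so some weight change occurs in every epoch.
hard = train_epochs([-2, -1, 1, 2], [1, 0, 0, 1])
```

With the easy data the per-epoch change drops to zero once the weights settle; with the non-separable data it never does, mirroring the dissimilar weight-change behavior described above.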
2. Highly Congested Network: Algorithms such as the feed-forward neural
network and the Hopfield network are highly iterative in nature, in terms
of the number of training cycles required to reach an optimal output.
This large number of iterations in the training/recognition process leads
to massive communication exchanges within any distributed environment,
and thus creates a highly congested network.
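A rough back-of-the-envelope count makes the congestion concrete. The sketch below assumes an all-to-all weight exchange on every training iteration; real topologies and exchange schemes vary, so the numbers are illustrative only:

```python
def message_count(iterations, nodes):
    """Messages exchanged if every node sends its weights to every
    other node once per training iteration (all-to-all exchange)."""
    return iterations * nodes * (nodes - 1)

# One iteration on 16 nodes: 16 * 15 = 240 messages.
single = message_count(1, 16)
# An iterative algorithm needing 10,000 training cycles multiplies
# that linearly: 2.4 million messages over the same 16-node network.
iterative = message_count(10_000, 16)
```

The message volume grows linearly with the iteration count and quadratically with the node count, which is why highly iterative algorithms are a poor fit for naive distribution.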
3. Unchanged Level of Complexity: In existing distributed pattern recog-
nition schemes, the actual pattern recognition processes are applied at a
smaller scale, i.e., the same algorithms are used with a smaller training
space. The complexity of the algorithm is therefore unchanged. By re-
ducing the amount of training data used, executing recognition processes
at a smaller scale may improve the algorithm's running time. However,
the processing time also depends on the number of learning cycles
required by each recognition process. Because the complexity of the
algorithm remains unchanged, its resource requirements are hard to es-
timate. This approach may therefore not be applicable to resource-
constrained networks, such as wireless sensor networks (WSNs).
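A simple cost model (with illustrative numbers only) shows why shrinking the training partition does not change the complexity class, and why the unknown number of learning cycles makes resource estimation difficult:

```python
def training_cost(samples, features, epochs):
    """Work units for an O(samples * features) pass repeated over epochs.
    Splitting the data only rescales `samples`; the asymptotic form stays
    the same."""
    return epochs * samples * features

# Splitting 10,000 samples across 8 nodes shrinks each node's share
# linearly, assuming the number of learning cycles stays fixed...
full = training_cost(10_000, 32, epochs=50)
part = training_cost(1_250, 32, epochs=50)
# ...but if convergence on the smaller partition takes 8x as many
# learning cycles, the per-node cost is back where it started.
slow = training_cost(1_250, 32, epochs=400)
```

Since the epoch count is data-dependent and unknown in advance, the per-node resource budget cannot be bounded ahead of time, which is exactly the problem for resource-constrained deployments such as WSNs.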