Distributed pattern recognition (DPR) remains a relatively unexplored area. The complexity of existing pattern recognition algorithms limits the degree to which they can be distributed. Several initiatives have attempted to parallelize and distribute pattern recognition algorithms across distributed systems; however, the parallelization process itself poses a significant hurdle for this type of implementation.
The neural network approach is a promising tool for Internet-scale pattern recognition, since its interconnected neurons naturally perform computations in parallel. However, it raises several implementation issues, including convergence problems, complex iterative learning procedures, and the large volume of training data required for optimal recognition, which leads to poor scalability.
In this chapter, we will further discuss the important characteristics and
aspects of DPR.
2.1 Scalability of Neural Network Approaches
In general, scalability can be achieved using a distributed approach. The scalability factors for pattern recognition schemes can therefore be derived from the scalability requirements of any distributed system. For neural network implementations, two factors are closely related to the scalability of recognition schemes: pattern storage capacity and inter-neuron communication frequency. As Srinivas and Janakiram [26] explained, both factors follow from the scalability requirements for distributed systems. The following subsections discuss these two factors in the context of common neural network approaches.
2.1.1 Pattern Storage Capacity
A baseline evaluation of storage capacity examines how an increase in the number of stored patterns affects a given network, by analyzing the memory that each processing node requires for pattern storage. In recognition approaches, the importance of memory capacity lies in its ability to provide a scalable storage medium for large-scale patterns; the evaluation therefore measures the effect that the quantity of stored patterns has on the size of the memory required per node.
Existing neural networks, such as Hopfield networks [27] (see Figure 2.1) and feed-forward neural networks, rely largely on weight calculations in their recognition processes. In this context, each processing node holds a collection of weight-input values in its memory. For P patterns, the simplest approximation for the size of the memory, M, is given by the following equation:
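The equation itself did not survive extraction from the source. A plausible reconstruction, consistent with the preceding description of per-node weight-input storage, is the sketch below; the symbols s_w and s_i are assumptions introduced here, denoting the storage required for a single weight value and a single input value, respectively:

\[
M \approx P \,( s_w + s_i )
\]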
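The source includes no code; the following Python sketch (an illustration, not the authors' implementation) shows the storage behavior described above for a Hopfield network trained with the standard Hebbian outer-product rule. The weight storage is dominated by an N x N matrix, and recall quality degrades once the number of stored patterns P approaches the well-known capacity limit of roughly 0.138N, which is one reason large pattern sets scale poorly. All function names and parameters here are illustrative assumptions.

```python
import numpy as np


def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Build a Hopfield weight matrix from bipolar (+1/-1) patterns
    using the Hebbian outer-product rule."""
    num_patterns, num_neurons = patterns.shape
    weights = np.zeros((num_neurons, num_neurons))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0.0)  # no self-connections
    return weights / num_neurons


def recall(weights: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
    """Synchronously update the network state a fixed number of times."""
    state = probe.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1  # break ties toward +1
    return state


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    num_neurons = 64
    for num_patterns in (4, 8, 16):
        patterns = rng.choice([-1, 1], size=(num_patterns, num_neurons))
        weights = train_hopfield(patterns)

        # Probe with a noisy copy of the first stored pattern.
        noisy = patterns[0].copy()
        flipped = rng.choice(num_neurons, size=4, replace=False)
        noisy[flipped] *= -1
        ok = np.array_equal(recall(weights, noisy), patterns[0])

        # Weight storage is N*N values regardless of P; the stored
        # patterns themselves add only P*N input values on top of that.
        print(f"P={num_patterns:2d}: weight bytes={weights.nbytes}, "
              f"pattern bytes={patterns.nbytes}, recalled={ok}")
```

Running the sketch shows that weight storage stays fixed at N^2 values while recall reliability drops as P grows; scaling to more patterns therefore ultimately forces a larger N, and hence quadratic growth in per-node weight storage.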