2.2 Key Components of DPR
There are three main components of a scalable distributed pattern recognition scheme: the learning algorithms, the processing approach, and the training procedure.
2.2.1 Learning Mechanism
In pattern recognition, the learning approach plays an important role in determining the efficiency and accuracy of pattern store and recall operations. Prominent approaches include Hebbian learning [10], incremental learning [11], and one-shot learning. Hebbian learning is a classical technique based on the concept of synaptic plasticity: when the output of one neuron repeatedly contributes to the input of another, the connection between them is strengthened. Hebbian learning is a well-known technique for spatio-temporal pattern recognition in auto-associative neural networks. However, its potential for saturation and "catastrophic forgetting" makes it less scalable. Many existing neural network algorithms implement Hebbian learning, including Hopfield and feed-forward neural networks. A simple form of Hebbian learning follows the rule:
$$w_{ab} = x_a x_b \qquad (2.3)$$

where $w_{ab}$ represents the weight connecting neuron $b$ to neuron $a$. The input of neuron $a$ and the postsynaptic response of neuron $b$ are represented by $x_a$ and $x_b$, respectively.
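As a concrete illustration, the sketch below applies the rule in Equation (2.3) in the Hopfield style mentioned above, summing outer products of stored patterns to build a weight matrix and recalling a pattern from a noisy probe. The bipolar (+1/-1) encoding, the function names, and the synchronous recall loop are illustrative assumptions rather than details from the text.

```python
import numpy as np

def hebbian_weights(patterns):
    """Build a weight matrix with the Hebbian rule w_ab = x_a * x_b.

    `patterns` is an (n_patterns, n_neurons) array of bipolar (+1/-1)
    values; contributions from all patterns are summed, and the
    diagonal (self-connections) is zeroed, as in a Hopfield network.
    """
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for x in patterns:
        w += np.outer(x, x)      # w_ab = x_a * x_b for every pair a, b
    np.fill_diagonal(w, 0.0)     # no self-connections
    return w

def recall(w, probe, steps=10):
    """Synchronous recall: repeatedly threshold the weighted input."""
    x = probe.astype(float)
    for _ in range(steps):
        x = np.sign(w @ x)
    return x

# Store two 8-neuron patterns and recall the first from a noisy probe.
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
w = hebbian_weights(np.stack([p1, p2]))
noisy = p1.copy()
noisy[0] = -noisy[0]             # flip one bit
print(recall(w, noisy))          # recovers p1
```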
Incremental learning was developed to address the scalability issue in pattern recognition [35]. It simplifies the problem of large training sets, particularly in machine learning algorithms such as the Support Vector Machine (SVM) [36]. In incremental learning, the training data are divided into several subsets, and each subset individually undergoes a training phase. The results from each training session are then combined to form the final result. When there are a large number of training patterns, this approach increases the scalability of the algorithm. However, problems arise when the method is used to process large-scale patterns: more computational resources are required for larger patterns, and the approach tends to be tightly coupled and requires costly kernel-function computations.
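A minimal sketch of this subset-by-subset training scheme follows, using scikit-learn's SGDClassifier with a hinge loss as a stand-in for an incrementally trained linear SVM. The synthetic data, the choice of ten subsets, and the use of partial_fit to fold each subset's results into one model are assumptions for illustration, not the specific method of [35, 36].

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Illustrative stand-in data: 10,000 samples, 20 features, 2 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Divide the training data into several subsets and train on each
# subset in turn; partial_fit folds every subset's result into the
# same model instead of holding the full set in memory at once.
model = SGDClassifier(loss="hinge")      # hinge loss: a linear SVM
classes = np.unique(y)
for X_subset, y_subset in zip(np.array_split(X, 10),
                              np.array_split(y, 10)):
    model.partial_fit(X_subset, y_subset, classes=classes)

print("training accuracy:", model.score(X, y))
```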
In one-shot learning, a minimal amount of initial data is required for a system to learn. Past implementations of this learning mechanism used probabilistic approaches, such as the Bayesian classifier [37, 38]. Categories of objects can be learned from a small data set, and one-shot learning then learns from the information obtained from these categories. In the sense
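To make the idea concrete, here is a minimal one-shot classifier sketch: it stores a single example per category and scores new inputs by Gaussian likelihood around each stored example. This crude prototype-based scheme merely stands in for the Bayesian classifiers cited above and is not the method of [37, 38]; the class name, the shared spread parameter sigma, and the toy categories are all assumed.

```python
import numpy as np

class OneShotGaussian:
    """A crude one-shot classifier: store a single example per
    category and score new inputs by Gaussian likelihood around it.

    Illustrative stand-in for the Bayesian approaches cited in the
    text; `sigma` (a shared spread, e.g. carried over from earlier
    categories) is an assumed parameter.
    """

    def __init__(self, sigma=1.0):
        self.sigma = sigma
        self.prototypes = {}         # label -> single stored example

    def learn(self, label, example):
        self.prototypes[label] = np.asarray(example, dtype=float)

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        # Log-likelihood of an isotropic Gaussian centred on each
        # stored example; the most likely category wins.
        scores = {label: -np.sum((x - p) ** 2) / (2 * self.sigma ** 2)
                  for label, p in self.prototypes.items()}
        return max(scores, key=scores.get)

# One training example per category is enough to start classifying.
clf = OneShotGaussian(sigma=0.5)
clf.learn("circle", [0.9, 0.1])
clf.learn("square", [0.1, 0.9])
print(clf.classify([0.8, 0.2]))      # -> "circle"
```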