Chapter 3
One-Shot Learning Considerations
An interesting area of current research focuses on developing the capabilities of
smart objects, such as sensors, to perform complex processing beyond simple data
collection, including mechanisms for energy conservation in lightweight devices.
Sensors that are able to perform recognition or clustering of events in situ
minimize the communication between the sensors and controlling devices, such as
the base station, and thus improve the performance of the entire network at
large scale. Such capabilities are limited by the heavy computational
requirements of existing recognition or clustering algorithms, such as highly
iterative training, frequent weight adjustments, and an inability to distribute
data for large-scale processing.
One-shot learning is a type of learning mechanism that was inspired by the
ability of biological systems, such as human beings, to recognize objects at
a single glance [37]. It is estimated that a child has learned almost all of the
10,000 to 30,000 object categories by the age of six. Data can be recognized or
clustered quickly and efficiently if objects can be recognized without having
to iteratively memorize their characteristics or features.
One-shot learning was developed as a mechanism for systems to learn
information from a minimal amount of initial data. In the artificial,
computational world, the key motivation and intuition behind one-shot learning
is that systems, like humans, can use prior knowledge of object categories to
learn and classify new objects.
An important characteristic that differentiates one-shot learning from other
styles of learning is its emphasis on the principle of knowledge transfer, which
encapsulates prior knowledge of learned categories and allows learning from a
minimal number of training examples [58]. The question that remains is how
this might be achieved. According to Lake et al. [59], one hypothesis is that the
sharing of partial knowledge is core to one-shot learning. This type of learning
through inference is also used in the Graph Neuron (GN) implementation by
Khan and Mihailescu [2]. In GN, patterns are stored based on the similarities
between adjacent pattern elements within a particular pattern; these stored
adjacencies then serve as the basis of comparison for incoming patterns. In work
by Bart and Ullman [60], a one-shot learning scheme was carried out using
selected features derived from classification tasks performed in prior learning.
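To make the adjacency idea concrete, the following is a minimal Python sketch of a
single-pass pattern store in the spirit of the description above. It is not the
Graph Neuron algorithm of Khan and Mihailescu [2]; the class and method names
(AdjacencyPatternStore, store, recall) and the overlap-based matching score are
illustrative assumptions. What it demonstrates is that a pattern can be memorized
from a single exposure, with no iterative training or weight adjustment, and later
recognized by comparing the adjacencies of an incoming pattern against those
already stored.

# Illustrative sketch only; not the GN algorithm of Khan and Mihailescu.
# Names and the scoring rule are hypothetical assumptions for demonstration.

class AdjacencyPatternStore:
    """Stores patterns in one pass as sets of (position, element, next element) triples."""

    def __init__(self):
        self._memory = {}  # maps a pattern label to its set of adjacency triples

    @staticmethod
    def _adjacencies(pattern):
        # Decompose a pattern into triples of position, element, and right-hand neighbour.
        return {(i, pattern[i], pattern[i + 1]) for i in range(len(pattern) - 1)}

    def store(self, label, pattern):
        # A single exposure is enough: no iterative training or weight updates.
        self._memory[label] = self._adjacencies(pattern)

    def recall(self, pattern):
        # Return the stored label whose adjacency structure best overlaps the input.
        probe = self._adjacencies(pattern)
        best_label, best_score = None, 0.0
        for label, stored in self._memory.items():
            score = len(probe & stored) / max(len(stored), 1)
            if score > best_score:
                best_label, best_score = label, score
        return best_label, best_score

if __name__ == "__main__":
    store = AdjacencyPatternStore()
    store.store("event_A", "XXOXO")   # learned from a single example
    store.store("event_B", "OOXXO")
    print(store.recall("XXOXX"))      # -> ('event_A', 0.75)

In the usage example, two reference patterns are each stored from one exposure,
and a slightly perturbed input is matched to the closer of the two by its shared
adjacencies; the absence of any training loop is what makes such a scheme
attractive for resource-constrained sensors.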