FIGURE 8.12: Total recognition times for each DHGN subnet in binary pattern recognition using different numbers of subpatterns derived from 16 KB binary images. ©IEEE. Reprinted, with permission, from Amin, A.H.M.; Khan, A.I.; "A divide-and-distribute approach to single-cycle learning HGN network for pattern recognition," Control Automation Robotics & Vision (ICARCV), 2010 11th International Conference on, 7-10 Dec. 2010, pp. 2118-2123, doi: 10.1109/ICARCV.2010.5707852.
Note that the red-colored neurons are the query-activated neurons. The DHGN subnet extracts from the database the information that pertains to the value specified in the query. In this example, the neuron that handles data on the employee's status selects all of the tuples containing the "exec" status (see Algorithm 5).
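As a rough illustration (not the authors' implementation, and all names here are hypothetical), the selection step can be sketched as a "neuron" per attribute that indexes tuples by value during a single storage pass; a query then activates the matching entry and returns its tuples directly:

```python
# Hypothetical sketch of query-activated tuple selection: one neuron
# per attribute; a query activates the entry for the queried value,
# which returns the tuples indexed during a single storage pass.
from collections import defaultdict

class ValueNeuron:
    """Indexes tuples by one attribute's value (illustrative only)."""
    def __init__(self, attribute):
        self.attribute = attribute
        self.tuples_by_value = defaultdict(list)

    def store(self, row):
        # Single-pass memorization: each tuple is indexed exactly once.
        self.tuples_by_value[row[self.attribute]].append(row)

    def recall(self, value):
        # Query activation: return all tuples holding the queried value.
        return self.tuples_by_value.get(value, [])

status_neuron = ValueNeuron("status")
for row in [{"name": "Ann", "status": "exec"},
            {"name": "Bob", "status": "staff"},
            {"name": "Cho", "status": "exec"}]:
    status_neuron.store(row)

execs = status_neuron.recall("exec")  # tuples whose status is "exec"
```

The point of the sketch is that both storage and recall are fixed-cost lookups, with no iterative search over the stored tuples.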
The DHGN's pattern-matching capability and short response time remain insensitive to increases in the number of stored patterns, which makes this approach well suited to cloud computing (see Figure 8.12). Moreover, the DHGN requires no predefined rules, no manual intervention by an operator to set thresholds, and no heuristics entailing iterative operations for the memorization and recall of patterns [56]. In addition, the approach allows induction of new patterns in a fixed number of steps and maintains a high level of scalability, i.e., performance and accuracy do not degrade as the number of stored patterns increases over time. Its pattern recognition capability is comparable with that of contemporary approaches. Furthermore, all computations complete in a predefined number of steps, and thus the approach implements one-shot, i.e., single-cycle, learning.
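The divide-and-distribute, single-cycle idea can be sketched roughly as follows (a simplified stand-in, not the DHGN algorithm itself; subnet count, subpattern splitting, and the voting rule are all assumptions made for illustration): a binary pattern is split into fixed-size subpatterns, each independent "subnet" memorizes its subpattern in one pass, and recall tallies subnet-level matches.

```python
# Hypothetical sketch of divide-and-distribute, single-cycle storage:
# each subnet memorizes its subpattern with one dictionary insert
# (no iterative training); recall counts subnet votes per stored index.

def split(pattern, n_subnets):
    size = len(pattern) // n_subnets
    return [pattern[i * size:(i + 1) * size] for i in range(n_subnets)]

class Subnet:
    def __init__(self):
        self.memory = {}          # subpattern -> index of stored pattern

    def learn(self, subpattern, index):
        # Single-cycle: one insert; earlier entries are never retrained.
        self.memory.setdefault(subpattern, index)

    def recall(self, subpattern):
        return self.memory.get(subpattern)

N_SUBNETS = 4
subnets = [Subnet() for _ in range(N_SUBNETS)]
stored = ["10110100", "01101001"]
for idx, pat in enumerate(stored):
    for subnet, sub in zip(subnets, split(pat, N_SUBNETS)):
        subnet.learn(sub, idx)

def recognize(pattern):
    # Each subnet votes; the stored index with the most matches wins.
    votes = {}
    for subnet, sub in zip(subnets, split(pattern, N_SUBNETS)):
        idx = subnet.recall(sub)
        if idx is not None:
            votes[idx] = votes.get(idx, 0) + 1
    return max(votes, key=votes.get) if votes else None

print(recognize("10110101"))  # one bit off stored[0]; prints 0
```

Both `learn` and `recognize` visit each subnet exactly once, which is the fixed-step, single-cycle property the paragraph describes; the real DHGN distributes these subnets across processing nodes.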