FIGURE 2.8: A labeled graph with vertex set V = {1, 2, 3, 4, 5, 6, 7} and edge set E = {{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {6, 7}, {2, 6}, {5, 7}, {1, 7}}.
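The graph of Figure 2.8 can be written down directly as an adjacency map. The short Python sketch below does so, and also illustrates why brute-force matching scales so poorly: two n-vertex graphs admit n! candidate vertex correspondences.

```python
import math

# Vertex and edge sets of the labeled graph in Figure 2.8.
V = {1, 2, 3, 4, 5, 6, 7}
E = [{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {6, 7}, {2, 6}, {5, 7}, {1, 7}]

# Build an undirected adjacency map from the edge set.
adjacency = {v: set() for v in V}
for edge in E:
    u, w = tuple(edge)
    adjacency[u].add(w)
    adjacency[w].add(u)

# Brute-force matching must consider every vertex correspondence:
# for two 7-vertex graphs that is 7! = 5040 candidate mappings.
candidates = math.factorial(len(V))
```

For instance, `adjacency[2]` is `{1, 3, 6}`, matching the three edges incident to vertex 2 in the figure.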
As reported in [50], the number of possible matches between two graphs grows factorially with graph size. Scalability is therefore an important issue to resolve.
The in-network processing capability of the Graph Neuron (GN) eliminates the scalability issue experienced by other graph-based pattern recognition algorithms. GN scales appropriately with increases in both pattern size and database size, because recognition is distributed across a set of processing nodes and carried out in parallel. In addition, GN can perform exact and inexact pattern matching based on different sets of attributes. The GN architecture and some implementations will be discussed further in Chapter 3.
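As a rough illustration of this distributed idea (a simplified sketch; the class and function names below are assumptions, not the book's actual API), each processing node can own a single (position, value) pair, so that storing and recalling a pattern becomes one pass over independent nodes:

```python
# A simplified sketch of GN-style one-shot storage and recall. The names and
# structure here are illustrative assumptions, not the book's implementation.
class GNNode:
    """A processing node responsible for a single (position, value) pair."""
    def __init__(self, position, value):
        self.position = position
        self.value = value
        self.activated = False  # has this (position, value) pair been seen?

def build_network(positions, alphabet):
    # One node per (position, value) combination: storage is fully distributed,
    # and each node's check is independent, so the pass below parallelizes.
    return {(p, v): GNNode(p, v) for p in positions for v in alphabet}

def present(network, pattern):
    """Store or recall a pattern in a single pass over the responsible nodes."""
    nodes = [network[(p, v)] for p, v in enumerate(pattern)]
    recalled = all(n.activated for n in nodes)  # exact match against memory
    for n in nodes:
        n.activated = True  # one-shot memorization as a side effect
    return "recall" if recalled else "store"
```

Presenting "ABA" twice yields "store" then "recall". Note that after storing "ABA" and "BAB", this flat scheme also reports the unseen pattern "AAB" as a recall; that false positive is the crosstalk problem between closely matched patterns that the hierarchical variant of Section 2.5.2 is designed to resolve.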
2.5.2 Hierarchical Graph Neuron
The Hierarchical Graph Neuron (HGN) [3] implements a one-shot memo-
rization and recall operation using a novel distributed algorithm. The HGN is
an enhancement to the Graph Neuron (GN) algorithm, improved to recognize incomplete or noisy patterns. By linking multiple GN networks, the HGN filters noise and crosstalk out of the pattern data input, resolving the crosstalk problem (see Section 3.4) encountered in closely matched patterns. The HGN scheme is a lightweight, in-network processing algorithm that
does not require expensive floating point computations. It is very suitable for
real-time applications and tiny devices, such as wireless sensor networks. The
HGN can perform direct pattern matching procedures, and its short response time is insensitive to increases in the number of stored patterns.
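One way to picture the hierarchy (again a hypothetical sketch, not the book's implementation) is as stacked layers in which each level combines adjacent outputs of the level below, so that an apex node sees a composition of the whole pattern and closely matched patterns no longer collide. Only symbol comparisons are involved, consistent with the claim that no floating point computation is needed:

```python
# A hypothetical sketch of the HGN layering idea (not the book's code):
# each higher layer pairs up adjacent outputs of the layer below, so the
# apex node sees a composition of the entire pattern. Only symbol
# comparisons are used -- no floating point computation.
def hgn_layers(pattern):
    """Build layer inputs: each level pairs adjacent items of the level below."""
    layers = [tuple(pattern)]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append(tuple((prev[i], prev[i + 1]) for i in range(len(prev) - 1)))
    return layers

class HGN:
    def __init__(self):
        self.memory = set()  # signatures seen by the apex node

    def present(self, pattern):
        apex = hgn_layers(pattern)[-1][0]  # composition of the whole pattern
        recalled = apex in self.memory
        self.memory.add(apex)              # one-shot memorization
        return "recall" if recalled else "store"
```

Because the apex signature composes every lower-level pairing, patterns that collide in a flat per-position scheme, such as "AAB" against stored "ABA" and "BAB", remain distinct here.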