the HGN does not require that the operator define rules or set thresholds to
achieve the desired results, nor does it require heuristics, which entail iterative
operations for memorization and pattern recall.
2.5.3 Distributed Hierarchical Graph Neuron
The Distributed Hierarchical Graph Neuron (DHGN) [46] is a parallel as-
sociative memory-based pattern recognition algorithm that extends the func-
tionalities and capabilities of the GN algorithm. It is a single-cycle learning
algorithm that has an in-network processing capability. By efficiently
disseminating recognition processes across the network, the algorithm is able to reduce
computational loads [54]. Therefore, it is suitable for deployment in wireless
sensor networks and other fine-grained computational networks. In addition,
DHGN can be deployed as a recognition engine for large-scale data-processing
on coarse-grained networks, such as computational grids and clouds [55, 56].
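The distributed, single-cycle character of DHGN can be illustrated with a simplified sketch. The following is a hypothetical toy model, not the published algorithm: a pattern is divided into fixed-size subpatterns, each handled by an independent node that memorizes its chunk in a single pass and later votes on whether an input chunk has been seen. All class and function names here are illustrative assumptions.

```python
# Toy sketch of subpattern-based distributed recognition (illustrative only;
# the actual DHGN node structure and messaging are more elaborate).

class SubpatternNode:
    """One processing node: remembers the subpatterns it has seen."""
    def __init__(self):
        self.memory = set()

    def learn(self, chunk):
        # Single-cycle learning: the chunk is stored in one step, no iteration.
        self.memory.add(chunk)

    def recall(self, chunk):
        # True if this exact subpattern was stored before.
        return chunk in self.memory


def split(pattern, size):
    """Divide a string pattern into equal-size chunks, one per node."""
    return [pattern[i:i + size] for i in range(0, len(pattern), size)]


def recognize(nodes, pattern, size):
    """Each node checks only its own chunk; votes are aggregated as a ratio."""
    chunks = split(pattern, size)
    votes = [node.recall(chunk) for node, chunk in zip(nodes, chunks)]
    return sum(votes) / len(votes)  # fraction of matching subpatterns


# Store one 8-bit pattern across 4 nodes, then test recall.
nodes = [SubpatternNode() for _ in range(4)]
for node, chunk in zip(nodes, split("10110010", 2)):
    node.learn(chunk)

print(recognize(nodes, "10110010", 2))  # stored pattern: 1.0
print(recognize(nodes, "10110001", 2))  # one subpattern differs: 0.75
```

Because each node touches only its own chunk, both learning and recall parallelize naturally, which is what makes this style of scheme attractive for sensor networks and other fine-grained platforms.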
2.6 Resource Considerations for DPR Implementations
Neural networks designed as processing schemes for pattern recognition
applications have inspired many to attempt their deployment in physical
computational networks, such as grids and local networks. The fundamental
principles of in-network distributed processing for complex computations have
been established by the methods used to communicate inputs and outputs
between processing nodes. Nevertheless, there are several issues that need to
be addressed when an in-network approach is used to deploy complex algo-
rithms. These issues include resource considerations and incurred communi-
cation costs.
Current approaches for implementing pattern recognition algorithms in dis-
tributed environments have focused on improving the performance time and
providing scalability in response to increasing data size and dimension. Nev-
ertheless, these approaches are overburdened by their highly complex compu-
tations and require significant resources to perform in a distributed manner.
For example, the computational complexity of a recognition process using
a Hopfield network with n neurons on a single processor is equivalent to
O(n log n). For the algorithm to exhibit peak performance, it is important
that the network acquire sufficient computational resources. However, not all
types of computational networks available in the current technological climate
can acquire sufficient resources.
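To make the resource argument concrete, consider a minimal Hopfield associative memory, sketched below under standard textbook assumptions (Hebbian learning of bipolar patterns, synchronous sign updates); this is an illustration of why cost grows with the number of neurons n, not the specific formulation referenced in the text. The n-by-n weight matrix alone makes storage quadratic in n, and every update step multiplies that matrix by the state vector.

```python
# Minimal Hopfield associative-memory sketch (illustrative assumptions:
# Hebbian outer-product training, synchronous updates, bipolar +/-1 states).
import numpy as np

def train(patterns):
    """Hebbian rule: W is the sum of outer products with a zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns          # n x n matrix: storage grows as n^2
    np.fill_diagonal(W, 0)             # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Synchronous updates until the state stops changing."""
    for _ in range(steps):
        new = np.sign(W @ state)       # one step costs a full matrix-vector product
        new[new == 0] = 1              # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train(stored)
noisy = np.array([1, -1, 1, -1, -1, -1])   # stored pattern with one bit flipped
print(recall(W, noisy))                     # converges back to the stored pattern
```

On a single processor the matrix-vector product dominates each recall step, so a network that cannot supply memory and compute proportional to the pattern dimension quickly becomes the bottleneck, which is precisely the concern for fine-grained platforms such as sensor nodes.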
Resource-awareness is an important aspect absent from existing DPR
schemes. Because the granularities of networks differ, it is essential that the
computational and storage costs incurred by a distributed scheme be considered.