3. Result communication phase: After the recognition process in each
DHGN subnet is completed, the results (in terms of recall or store) are
communicated back to the SI module for further analysis. In this
communication, messages comprising the subnet ID (sn_{id}), status
(sn_{st}), and stored/recalled index (sn_{idx}), in the form
(sn_{id}, sn_{st}, sn_{idx}), are sent to the SI module by all of the
top-layer neurons in each subnet. The total number of messages
communicated from the subnets to the SI module, n_{msg}(sn → SI), is
equivalent to the number of subnets available, n_{sn}. Therefore,
n_{msg}(sn → SI) = n_{sn}.
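As a rough illustration of this phase, a minimal Python sketch is given below. The message fields mirror the triple (sn_{id}, sn_{st}, sn_{idx}) described above; the class, function, and field names themselves are hypothetical, not part of the DHGN specification.

```python
from dataclasses import dataclass

@dataclass
class SubnetResult:
    """One result message from a subnet's top-layer neuron to the SI module."""
    sn_id: int   # subnet identifier
    sn_st: str   # status of the operation, e.g. "store" or "recall"
    sn_idx: int  # index of the stored/recalled pattern

def messages_to_si(n_sn: int) -> int:
    # Each subnet sends exactly one result message,
    # so n_msg(sn -> SI) = n_sn.
    return n_sn
```

For example, a network decomposed into 8 subnets delivers 8 result messages to the SI module per recognition request.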
The following relations describe the micro-communications between neurons
in each DHGN subnet.
5.1.2.2.1 Base layer For each neuron in the base layer, the number of
message communications can be derived from the number of messages
communicated between adjacent neurons for each input subpattern. For
the two neurons at the edges of the base layer, the number of
communication exchanges is equivalent to the number of possible element
values, v. Each non-edge neuron communicates with adjacent neurons in
both the preceding and the succeeding columns and communicates a bias
index to the neuron at the next higher layer, for v^2 + 1 message
exchanges per neuron. The cumulative communication cost of a single
input recognition process over all neurons in the base layer of a
single DHGN subnet is given by the following equation:
n_{msg}^{l} = \left(v^{2} + 1\right)\left(s_{sp} - 2\right) + 2v \qquad (5.5)

where s_{sp} is the subpattern size (the number of base-layer neurons).
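Equation 5.5 can be checked with a small sketch; the function and parameter names are illustrative, and s_{sp} is assumed to be at least 2 (the two edge neurons):

```python
def base_layer_messages(v: int, s_sp: int) -> int:
    """Cumulative base-layer communication cost (Eq. 5.5) for one subnet.

    v    -- number of possible element values
    s_sp -- subpattern size (number of base-layer neurons), >= 2
    """
    # s_sp - 2 non-edge neurons exchange v**2 + 1 messages each;
    # the two edge neurons contribute v messages each (2v in total).
    return (v**2 + 1) * (s_sp - 2) + 2 * v
```

For instance, binary patterns (v = 2) with subpatterns of size 5 give (4 + 1) · 3 + 4 = 19 messages.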
5.1.2.2.2 Middle layers The communication costs for neurons in the
middle layers are similar to those for the base layer. However, the
number of neurons available differs for each layer. For each middle
layer i, where 1 ≤ i ≤ l_t − 1, the number of message exchanges for a
single input subpattern recognition is derived as follows:
n_{msg}^{l} = \left(v^{2} + 1\right)\left(s_{sp} - (2i + 2)\right) + 2v \qquad (5.6)
Equation 5.7 presents the cumulative communication costs for all neurons
in the middle layers:
n_{msg}^{l(i,total)} = \sum_{i=1}^{l_{t} - 1} \left[ \left(v^{2} + 1\right)\left(s_{sp} - (2i + 2)\right) + 2v \right] \qquad (5.7)
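Equations 5.6 and 5.7 can be sketched in the same way. The names are illustrative, and the sketch assumes middle layer i has s_{sp} − 2i neurons, of which s_{sp} − (2i + 2) are non-edge:

```python
def middle_layer_messages(v: int, s_sp: int, i: int) -> int:
    # Per-layer cost for middle layer i (Eq. 5.6): non-edge neurons
    # exchange v**2 + 1 messages each; the two edge neurons add 2v.
    return (v**2 + 1) * (s_sp - (2 * i + 2)) + 2 * v

def middle_layers_total(v: int, s_sp: int, l_t: int) -> int:
    # Cumulative cost over all middle layers i = 1 .. l_t - 1 (Eq. 5.7).
    return sum(middle_layer_messages(v, s_sp, i) for i in range(1, l_t))
```

For v = 2, s_{sp} = 9, and l_t = 3, the two middle layers cost 5 · 5 + 4 = 29 and 5 · 3 + 4 = 19 messages, for a total of 48.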