15.3.5 SOM-Based Knowledge Recommendation Layer
Prior availability of recommendations about the knowledge base could potentially alleviate the accessibility and usability issues associated with large data sets and minimize the overall application costs. Unsupervised machine learning-based visual knowledge recommendation was the main aspect of this knowledge recommendation layer. The guided self-organizing map (g-SOM) has the property of effectively creating spatially organized internal representations of various features of the input signals and their abstractions. One aspect of the SOM is that the self-organization process can discover semantic relationships in complex environmental attributes. The SOM model was developed by T. Kohonen by applying concepts from neurobiology, in which adjacent neurons react to similar stimuli [44]. The SOM is a rough simplification and abstraction of the corresponding processes in the brain. The grid structure of the topology of the competition layer can be hypercuboid, triangular, circular, or hexagonal [22,42,52,53]. The purpose of this architectural layer was to analyze the dynamic data set using SOM clustering and to provide visual knowledge recommendations about the less correlated attributes that could be selected for an application, based on quick visual inspection or on some form of automated reasoning. The projection matrix from the feature representation layer was used to design and initialize the SOM, whereas the dynamically established list of least correlated attributes was used to guide the g-SOM clustering by initializing the weights. A SOM of [n × 1] network size was created to capture the natural grouping among the data points, where the selected list had n attributes.
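The chapter does not spell out how the projection matrix and the least correlated attribute list seed the g-SOM, so the following Python sketch illustrates only one plausible reading: an [n × 1] grid with one neuron per selected attribute, whose weight vectors start from the corresponding projection-matrix rows. The names projection_matrix and least_correlated_idx, and the noise-based seeding, are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def init_guided_som(projection_matrix, least_correlated_idx, noise=0.01, seed=0):
    """Build the weight grid of an [n x 1] SOM, one neuron per selected attribute.

    Hypothetical guiding strategy: each neuron's weight vector is seeded from
    the projection-matrix row of one least correlated attribute, with a little
    noise added so the map can still adapt during training.
    """
    rng = np.random.default_rng(seed)
    weights = projection_matrix[least_correlated_idx].astype(float)  # shape (n, dim)
    weights += noise * rng.standard_normal(weights.shape)
    return weights
```

A conventional SOM would draw these initial weights at random; guiding the initialization from the attribute list is what the text describes as the g-SOM variant.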
A set of data can be separated into several classes using unsupervised competitive learning; each neuron of the competition layer is responsible for the recognition of a certain class of data, and the similarity measure on the data induces a similarity measure on the classes. Because a plain competition layer consists of a set of neurons without any architecture, the neighborhood relation among individual clusters cannot be represented; only SOM feature maps can do this. A SOM modifies the competitive network by defining a topological structure on the competition layer and training the network in such a way that adjacent clusters are represented by adjacent neurons of that layer. Whereas in purely competitive learning only the weight vector of the winner neuron is shifted toward the current input vector, a SOM changes all weight vectors of neurons close to the winner. Neurons of the competition layer are organized in the shape of a straight line, a rectangle, a cuboid, and so on. Self-organizing maps no longer use an angle measure for the degree of similarity but the Euclidean distance between the input vector and the weight vector; thus, the network input of neuron u in the competition layer is
\mathrm{net}_u = d\big(W^{(u)}, v\big)^2 = \sum_{j=1}^{n} \big(w_j^{(u)} - v_j\big)^2 \qquad (15.4)
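Read this way, Eq. (15.4) is a squared Euclidean distance computed for every competition-layer neuron; a minimal NumPy sketch with illustrative variable names:

```python
import numpy as np

def net_inputs(weights, v):
    """Eq. (15.4): net_u = sum_j (w_j^(u) - v_j)**2 for every neuron u.

    weights : (num_neurons, n) array of weight vectors W^(u)
    v       : (n,) input vector
    """
    return np.sum((weights - v) ** 2, axis=1)
```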
The neuron u_s with the smallest network input \mathrm{net}_{u_s} is declared the winner, and the change in the weight vector of neuron u in the competition layer is then determined by
W^{(u)}_{\mathrm{new}} = W^{(u)}_{\mathrm{old}} + v(u, u_s)\,\sigma(t)\,\big(i - W^{(u)}_{\mathrm{old}}\big) \qquad (15.5)

where \sigma(t) is the time-dependent learning rate, v(u, u_s) is the neighborhood function centered on the winner neuron u_s, and i is the current input vector.
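Combining Eqs. (15.4) and (15.5), one training step on the [n × 1] grid could look like the sketch below. The Gaussian form of the neighborhood v(u, u_s), the linear decay of \sigma(t), and the parameter names are assumptions made for illustration; the chapter does not specify them.

```python
import numpy as np

def som_train_step(weights, i_vec, t, sigma0=0.5, nb_width0=2.0, t_max=1000):
    """One SOM update on a 1-D ([n x 1]) grid of neurons.

    weights : (num_neurons, dim) weight vectors W^(u), modified in place
    i_vec   : (dim,) current input vector i
    t       : current iteration, used to decay sigma(t) and the neighborhood
    """
    # Eq. (15.4): network input = squared Euclidean distance to the input.
    net = np.sum((weights - i_vec) ** 2, axis=1)
    u_s = int(np.argmin(net))                       # winner neuron u_s

    # Assumed linear decay of the learning rate sigma(t) and neighborhood width.
    frac = t / t_max
    sigma_t = sigma0 * (1.0 - frac)
    width_t = nb_width0 * (1.0 - frac) + 1e-3

    # Assumed Gaussian neighborhood v(u, u_s) on the 1-D grid.
    grid = np.arange(weights.shape[0])
    v_nb = np.exp(-((grid - u_s) ** 2) / (2.0 * width_t ** 2))

    # Eq. (15.5): shift every neuron toward the input, scaled by v(u, u_s) * sigma(t).
    weights += sigma_t * v_nb[:, None] * (i_vec - weights)
    return weights, u_s
```

Repeating this step over shuffled input vectors while \sigma(t) and the neighborhood width shrink produces the topological ordering in which adjacent clusters are represented by adjacent neurons.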