[Figure 4.4 image: positively correlated line elements in the input at successive times t, t+1, t+2, feeding a hidden unit.]
Figure 4.4: Positive correlations exist between elements of a feature such as a line that reliably exists in the environment (i.e., it repeats at different times, intermixed among other such correlated features). Model learning should represent such correlations.
[Figure 4.5 image: a single hidden unit receiving from a 5x5 Input layer.]
Figure 4.5: Network for the demonstration of Hebbian correlational model learning of the environmental regularity of a line.
out and tries to attack you (but you narrowly escape) as "stay away from tigers in this part of the woods," then you'll probably be in trouble when you go to another part of the woods, or encounter a lion.

Our approach toward model learning is based on correlations in the environment. These correlations are important because, in general, it seems that the world is inhabited by things with relatively stable features (e.g., a tree with branches, mammals with legs, an individual's face with eyes, nose, and mouth, and so on), and these features will be manifest as reliable correlations in the patterns of activity in our sensory inputs.

Figure 4.4 shows a simple example of the correlations between the individual pixels (picture elements) that make up the image of a line. These pixels will all be active together when the line is present in the input, producing a positive correlation in their activities. This correlation will be reliable (present across many different input images) to the extent that there is something reliable in the world that tends to produce such lines (e.g., edges of objects). Further, the parsimony of our model can be enhanced if only the strongest (most reliable) features or components of the correlational structure are extracted. We will see in the next section how Hebbian learning will cause units to represent the strongest correlations in the environment.

Before delving into a more detailed analysis of Hebbian learning, we will first explore a simplified example of the case shown in figure 4.4 in a simulation. In this exploration, we will see how a single unit (using a Hebbian learning mechanism that will be explained in more detail below) learns to represent the correlations present between the pixels in a line.

4.3.1 Simple Exploration of Correlational Model Learning

Open the project hebb_correl.proj.gz in chapter_4 to begin.

You will see a network with a 5x5 input layer and a single receiving hidden unit (figure 4.5), in addition to the usual other windows. To make things as simple as possible, we will present just a single rightward-leaning diagonal line and see what effect the Hebbian learning has on this hidden unit's weights. Thus, the environment will have these input units 100 percent correlated with each other and with the firing of the hidden unit, and this extremely strong correlation should be encoded by the effects of the Hebbian learning mechanism on the weights.

First, let's look at the initial weights of this hidden unit.
Select r.wt in the network window, and then click
on the hidden unit.
You should see a uniform set of .5 weight values, which provide a "blank page" starting point for observing the effects of subsequent learning.
Then, click back on act, and then do Run in the hebb_correl_ctrl control panel.
You will just see the activation of the right-leaning
diagonal line.
Then, click back on r.wt.
You will see that the unit's weights have learned to
represent this line in the environment.
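The effect you just observed can be sketched in a few lines of code. This is a minimal illustration, not the simulator's actual implementation: it assumes a CPCA-style Hebbian rule of the form dw = epsilon*y*(x - w) (a variant discussed later in the chapter), with an assumed learning rate and a hidden unit that simply fires whenever the line is presented.

```python
# Minimal sketch of Hebbian correlational learning on the line pattern.
# Assumed rule (CPCA-style): dw_i = epsilon * y * (x_i - w_i).
# With one perfectly reliable input pattern, the weights move from their
# uniform 0.5 starting values toward the pattern itself: near 1 for the
# line pixels, near 0 for all the others.

def run_hebb(pattern, epochs=100, epsilon=0.1):
    """Train one receiving unit's weights on a repeatedly presented pattern."""
    w = [0.5] * len(pattern)   # uniform initial weights, as in the project
    for _ in range(epochs):
        y = 1.0                # assume the hidden unit fires for the line
        for i, x in enumerate(pattern):
            w[i] += epsilon * y * (x - w[i])
    return w

# 5x5 input (row-major) with a diagonal line of active pixels.
line = [1.0 if i % 5 == i // 5 else 0.0 for i in range(25)]
weights = run_hebb(line)
```

After training, `weights` mirrors the input pattern, which is exactly what the r.wt display shows: the unit's weights have come to represent the one reliable correlation in its environment.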
Click on Run again.