figure 1.4, a neuron could convey that the first object pictured is almost definitely a cup, whereas the second one is maybe or sort of a cup and the last one is not very likely to be a cup. Similarly, people tend to classify things (e.g., cup and bowl) in a graded manner according to how close the item is to a prototypical example from a category (Rosch, 1975).

Gradedness is critical for all kinds of perceptual and motor phenomena, which deal with continuous underlying values like position, angle, force, and color (wavelength). The brain tends to deal with these continua in much the same way as the continuum between a cup and a bowl. Different neurons represent different "prototypical" values along the continuum (in many cases, these are essentially arbitrarily placed points), and respond with graded signals reflecting how close the current exemplar is to their preferred value (see figure 1.5). This type of representation, also known as coarse coding or a basis function representation, can give a precise indication of a particular location along a continuum by forming a weighted estimate based on the graded signal associated with each of the "prototypical" or basis values.

Figure 1.5: Graded activation values are important for representing continuous dimensions (e.g., position, angle, force, color) by coarse coding or basis-function representations, as shown here. Each of the four units shown gives a graded activation signal roughly proportional to how close a point along the continuous dimension is to the unit's preferred point, which is defined as the point where it gives its maximal response.

Another important aspect of gradedness has to do with the fact that each neuron in the brain receives inputs from many thousands of other neurons. Thus, no individual neuron is critical to the functioning of any other; instead, neurons contribute as part of a graded overall signal that reflects the number of other neurons contributing (as well as the strength of their individual contributions). This fact gives rise to the phenomenon of graceful degradation, where function degrades "gracefully" with increasing amounts of damage to neural tissue. Simplistically, we can explain this by saying that removing more neurons reduces the strength of the signals but does not eliminate performance entirely. In contrast, the CPU in a standard computer will tend to fail catastrophically when even one logic gate malfunctions.

A less obvious but equally important aspect of gradedness has to do with the way that processing happens in the brain. Phenomenologically, all of us are probably familiar with trying to remember something that does not come to mind immediately: there is a fuzzy sloshing around and trying out of different ideas until you either hit upon the right thing or give up in frustration. Psychologists speak of this in terms of the "tip-of-the-tongue" phenomenon, as in, "it's just at the tip of my tongue, but I can't quite spit it out!" Gradedness is critical here because it allows your brain to float a number of relatively weak ideas around and see which ones get stronger (i.e., resonate with each other and with other things), and which ones get weaker and fade away. Intuition has a similar flavor: a number of relatively weak factors add up to support one idea over another, but there is no single clear, discrete reason behind it.
Computationally, these phenomena are all examples of bootstrapping and multiple constraint satisfaction. Bootstrapping is the ability of a system to "pull itself up by its bootstraps" by taking some weak, incomplete information and eventually producing a solid result. Multiple constraint satisfaction refers to the ability of parallel, graded systems to find good solutions to problems that involve a number of constraints. The basic idea is that each factor or constraint pushes on the solution in rough proportion to its (graded) strength or importance. The resulting solution thus represents a compromise that capitalizes on the convergence of constraints that all push in roughly the same direction, while minimizing the number of constraints that remain unsatisfied. If this sounds too vague and fuzzy, don't worry: we will write equations that express how it all works, and run simulations showing it in action.
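To give a feel for multiple constraint satisfaction before those equations arrive, here is a minimal sketch (not from the text; the weights, biases, and sigmoid gain are arbitrary values chosen for illustration). Symmetric weights encode soft constraints among four hypothetical units (positive weights mean "these support each other," negative weights mean "these conflict"), and the graded activations settle iteratively, with each constraint pushing on the solution in proportion to its strength:

```python
import math

# Pairwise soft constraints between four hypotheses (symmetric, toy values).
W = [
    [0.0,  0.8, -0.6,  0.3],
    [0.8,  0.0, -0.6,  0.2],
    [-0.6, -0.6, 0.0, -0.4],
    [0.3,  0.2, -0.4,  0.0],
]
bias = [0.1, 0.1, 0.3, 0.0]  # external evidence favoring each hypothesis

def settle(W, bias, steps=200, rate=0.2):
    """Relax graded activations: each unit drifts toward a sigmoid of its
    net input, so every constraint pushes in proportion to its weight."""
    act = [0.5] * len(bias)
    for _ in range(steps):
        for i in range(len(act)):
            net = bias[i] + sum(W[i][j] * act[j] for j in range(len(act)))
            act[i] += rate * (1 / (1 + math.exp(-4 * net)) - act[i])
    return act

acts = settle(W, bias)
print([round(a, 2) for a in acts])
```

In the settled state, units 0 and 1 support each other and jointly suppress unit 2, even though unit 2 started with the strongest external evidence: the outcome is a compromise that favors the constraints pushing in the same direction while leaving the conflicting one unsatisfied.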
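The coarse coding and graceful degradation described earlier can likewise be illustrated directly. The sketch below is not from the text: the Gaussian tuning curves, the number of units, and the lesion sizes are all arbitrary choices for illustration. It encodes a point on a continuous dimension across a population of units with graded, distance-dependent responses, decodes it as a weighted estimate over the units' preferred values, and then shows that silencing units blurs the estimate gradually rather than abolishing it:

```python
import math
import random

def tuning_response(preferred, stimulus, width=0.15):
    """Graded response of one unit: maximal at its preferred value,
    falling off smoothly with distance along the continuous dimension."""
    return math.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))

def decode(preferred_values, responses):
    """Weighted estimate of the encoded value from the graded signals."""
    total = sum(responses)
    return sum(p * r for p, r in zip(preferred_values, responses)) / total

# A population of 20 units with "prototypical" values spread over [0, 1].
preferred = [i / 19 for i in range(20)]
stimulus = 0.437

responses = [tuning_response(p, stimulus) for p in preferred]
print(decode(preferred, responses))  # close to the encoded value, 0.437

# Graceful degradation: lesioning more and more units weakens the signal
# and blurs the estimate, but performance does not fail catastrophically.
random.seed(0)
for n_lesioned in (0, 5, 10, 15):
    alive = random.sample(range(20), 20 - n_lesioned)
    est = decode([preferred[i] for i in alive], [responses[i] for i in alive])
    print(n_lesioned, round(est, 3))
```

Note that no single unit carries the answer: each contributes a graded piece of an overall signal, which is exactly why removing a few of them degrades the estimate only gradually.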