it vanishes in all cases when certainty of the outcome is approached
[p(n) → 1].
Although the sketch on probabilities dealt exclusively with urns, balls,
and draws, students of statistical learning theory will have recognized in
Eqs. (39), (41), and (42) the basic axioms of this theory [Estes, 1959; Eqs.
(5), (6), and (9)], and there is today no doubt that under the given experi-
mental conditions animals will indeed trace out the learning curves derived
for these conditions.
Since the formalism that applies to the behavior of these experimental
animals applies as well to our urn, the question now arises: can we say an
urn learns? If the answer is “yes,” then apparently there is no need for
memory in learning, for there is no trace of black balls left in our urn when
it finally “responds” correctly with white balls when “stimulated” by each
draw; if the answer is “no,” then by analogy we must conclude it is not learn-
ing that is observed in these animal experiments.
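The urn's "learning" can be made concrete with a small simulation. The replacement scheme assumed here is hypothetical (each black ball drawn is swapped for a white one); it is one way to produce a curve with p(n) → 1 while leaving no trace of black balls in the urn:

```python
import random

def urn_learning(n_black=10, trials=500, seed=0):
    """Sketch of a replacement urn: every black ball drawn is
    replaced by a white one, so the urn eventually 'responds'
    only with white balls -- and no black balls remain."""
    rng = random.Random(seed)
    urn = ["black"] * n_black
    history = []
    for _ in range(trials):
        i = rng.randrange(len(urn))
        history.append(urn[i])
        if urn[i] == "black":
            urn[i] = "white"   # the only 'memory trace' is erased
    return urn, history

urn, history = urn_learning()
# with many trials, the urn ends up all white: no residue of learning
```

The point of the sketch is the dilemma in the text: the urn traces out a learning curve, yet once it responds correctly nothing in it records that black balls were ever there.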
To escape this dilemma it is only necessary to recall that an urn is just
an urn, and it is animals that learn. Indeed, in these experiments learning
takes place on two levels. First, the experimental animals learned to behave
“urnlike,” or better, to behave in a way which allows the experimenter to
apply urnlike criteria. Second, the experimenter learned something about
the animals by turning them from nontrivial (probabilistic) machines into
trivial (deterministic) machines. Hence, it is from studying the experimenter
that we get the clues for memory and learning.
C. Finite Function Machines
1. Deterministic Machines
With this observation the question of where to look for memory and learn-
ing is turned in the opposite direction. Instead of searching for mecha-
nisms in the environment that turn organisms into trivial machines, we have
to find the mechanisms within the organisms that enable them to turn their
environment into a trivial machine.
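The contrast between the two kinds of machine can be sketched in code. In this hypothetical illustration, a trivial machine maps each stimulus to a fixed response, while a nontrivial machine's response depends on an internal state that changes with every input, so it looks unpredictable to an observer who cannot see that state:

```python
def trivial_machine(x):
    """Trivial machine: output depends only on the input;
    the same stimulus always yields the same response."""
    return x.upper()

class NontrivialMachine:
    """Nontrivial machine: output depends on input AND a hidden
    internal state, and each input also changes that state."""
    def __init__(self):
        self.state = 0

    def respond(self, x):
        out = (x + self.state) % 2  # response depends on hidden state
        self.state = x              # the state shifts with every input
        return out

m = NontrivialMachine()
first = m.respond(1)    # same stimulus ...
second = m.respond(1)   # ... different response
```

The experimenter who conditions an animal to respond identically to identical stimuli is, in these terms, reducing a state-dependent machine to a pure input-output function.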
In this formulation of the problem it seems to be clear that in order to
manipulate its environment an organism has to construct—somehow—an
internal representation of whatever environmental regularities it can get
hold of. Neurophysiologists have long since been aware of these abstract-
ing computations performed by neural nets from right at the receptor level
up to higher nuclei (Lettvin et al., 1959; Maturana et al., 1968; Eccles et al.,
1967). In other words, the question here is how to compute functions rather
than states, or how to build a machine that computes programs rather than
numerical results. This means that we have to look for a formalism that
handles “finite function machines.” Such a formalism is, of course, one level
higher up than the one discussed before, but by maintaining some pertinent
analogies its essential features may become apparent.
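A machine that "computes programs rather than numerical results" can be hinted at with a higher-order function. This is only an illustrative sketch, not the formalism the text calls for: the machine below returns, as its output, a function abstracted from a regularity in its input (here, the assumed regularity is a constant offset between paired values):

```python
def learn_rule(examples):
    """Sketch of a 'finite function machine': given input/output
    pairs, it returns a function (a program), not a value (a state).
    Assumes the regularity is y = x + c for some constant c."""
    x0, y0 = examples[0]
    c = y0 - x0            # abstract the offset from the first pair

    def rule(x):
        return x + c       # the computed 'program'

    return rule

f = learn_rule([(1, 3), (5, 7)])  # f is itself a function, not a number
```

The output of `learn_rule` lives one level higher than the output of the urn: it is an internal representation of an environmental regularity that can then be applied to new inputs.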