that humans know to be true but that cannot be proved within a formal system
based on a set of axioms. Penrose claims that this finding shows that comput-
ers, which can only operate by following algorithms, are therefore necessar-
ily more limited than humans. This argument has been the subject of much
debate by many people, including Turing. He observed that such results from
mathematical logic could have implications for the Turing Test:
There are certain things that [any digital computer] cannot do. If it is rigged
up to give answers to questions as in the imitation game, there will be some
questions to which it will either give a wrong answer, or fail to give an answer
at all however much time is allowed for a reply. 15
Fig. 16.9. A four-node Hopfield network with feedback loops.
In the context of the Turing Test, the existence of such nonalgorithmic truths
implies the existence of a class of “unanswerable” questions. However, Turing
asserted that these questions are only a concern for the Turing Test if humans
are able to answer the same questions.
Neural networks revisited
Rather than delve deeper into these hotly contested, largely philosophical
issues, we shall look again at what the brain might tell us about intelligence
and consciousness. We start with another look at neural networks. In the body,
a neural network consists of interconnected nerve cells working together, as
in the brain. In computer science, a neural network is a network of
electronic components loosely modeled on the operation of the brain. As we
have seen, the artificial neural networks (ANNs) described in Chapter 13 have
successfully performed many pattern recognition tasks.
These ANNs, however, are very far from functioning like a realistic neural
network in a living organism. Besides the huge difference in the numbers of
neurons and connections, the primary element lacking is that of feedback, in
which information is sent back into the system to adjust its behavior. The
favored method of training an ANN is back propagation, in which the initial
output is compared to the desired output and the weights are adjusted until
the difference between the two is minimized. However, the ANNs we have
described were purely feed-forward networks, producing a specific output for
each given set of inputs. In real brains, nerve cells not only feed information
forward but also send it back to other neurons.
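As a concrete illustration, here is a minimal sketch of this training loop in Python. The network shape (two inputs, three hidden neurons, one output), the XOR training task, the learning rate, and all variable names are assumptions made for the example, not details from the text; the point is only the three steps described above: feed the inputs forward, compare the output with the desired output, and adjust the weights to shrink the difference.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative training set: XOR inputs and their desired outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting weights and biases: 2 inputs -> 3 hidden -> 1 output.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
rate = 0.5  # learning rate: the size of each adjustment

for step in range(20000):
    # Feed forward: the inputs flow through the network to an output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Compare the initial output with the desired output...
    error = output - target

    # ...then propagate the error backward, nudging every weight so
    # that the difference shrinks on the next pass.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= rate * hidden.T @ d_out
    b2 -= rate * d_out.sum(axis=0)
    W1 -= rate * X.T @ d_hid
    b1 -= rate * d_hid.sum(axis=0)

print(output.round(2).ravel())  # typically settles near [0, 1, 1, 0]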
An example of an ANN that allows feedback is the Hopfield network (Fig. 16.9),
named after the multidisciplinary scientist John Hopfield (B.16.3). This
network introduces bidirectional connections between the artificial neurons
and assumes that the weights for each connection are the same in each
direction. Such neural networks are able to function as auto-associative
memories: that is, when a pattern of activity is presented to the network,
the neurons and connections form a memory of this pattern. Even if you input
only a part of the original pattern, the auto-associative memory can retrieve
the entire original pattern (a short sketch in code follows the biographical
note below). It is also possible to design these networks to store temporal
sequences of patterns, capturing the order in time in which they occur.
Feeding in only a part of this sequence generates the whole sequence, just as
hearing the first few notes of a song brings back the whole song. The
computer architect Jeff
B.16.3. John Hopfield was originally
trained as a physicist but is most
widely known for his research on
ANNs. He was responsible for setting
up the Computation and Neural
Systems PhD program at Caltech and
is now the Howard A. Prior Professor
of Molecular Biology at Princeton.
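A minimal sketch of such an auto-associative memory follows, assuming the classic Hopfield recipe: neurons with +1/-1 activity levels, symmetric weights built by the Hebbian outer-product rule, and repeated threshold updates. The eight-neuron pattern, the function names, and the damaged input are all illustrative choices, not details from the text.

import numpy as np

def store(patterns):
    # Hebbian rule: each stored pattern adds its outer product to the
    # weights, so the weight between any two neurons is the same in
    # each direction (a symmetric matrix), as described above.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no neuron connects to itself
    return W

def recall(W, state, sweeps=5):
    # Update neurons one at a time until the network settles on the
    # stored pattern closest to the starting state.
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one eight-neuron pattern of +1/-1 activity...
memory = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = store(memory)

# ...then present a damaged copy with its last three neurons flipped.
partial = np.array([1, -1, 1, -1, 1, 1, -1, 1])
print(recall(W, partial))  # recovers the full stored pattern

Because the weights record the correlations between every pair of neurons, a sufficiently large fragment of the pattern is enough to pull the whole network back to the stored memory.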
 