Chapter 5
Error-Driven Task Learning
Contents

5.1 Overview . . . 147
5.2 Exploration of Hebbian Task Learning . . . 148
5.3 Using Error to Learn: The Delta Rule . . . 150
5.3.1 Deriving the Delta Rule . . . 152
5.3.2 Learning Bias Weights . . . 152
5.4 Error Functions, Weight Bounding, and Activation Phases . . . 154
5.4.1 Cross Entropy Error . . . 154
5.4.2 Soft Weight Bounding . . . 155
5.4.3 Activation Phases in Learning . . . 156
5.5 Exploration of Delta Rule Task Learning . . . 156
5.6 The Generalized Delta Rule: Backpropagation . . . 158
5.6.1 Derivation of Backpropagation . . . 160
5.6.2 Generic Recursive Formulation . . . 161
5.6.3 The Biological Implausibility of Backpropagation . . . 162
5.7 The Generalized Recirculation Algorithm . . . 162
5.7.1 Derivation of GeneRec . . . 163
5.7.2 Symmetry, Midpoint, and CHL . . . 165
5.8 Biological Considerations for GeneRec . . . 166
5.8.1 Weight Symmetry in the Cortex . . . 166
5.8.2 Phase-Based Activations in the Cortex . . . 167
5.8.3 Synaptic Modification Mechanisms . . . 168
5.9 Exploration of GeneRec-Based Task Learning . . . 170
5.10 Summary . . . 171
5.11 Further Reading . . . 172

5.1 Overview
An important component of human learning is focused
on solving specific tasks (e.g., using a given tool or
piece of software, reading, game playing). The objec-
tive of learning to solve specific tasks is complemen-
tary to the model learning objective from the previous
chapter, where the goal was to represent the general sta-
tistical structure of the environment apart from specific
tasks. In this chapter, we focus on task learning in neu-
ral networks. A simple but quite general conception of
what it means to solve a task is to produce a specific
output pattern for a given input pattern. The input spec-
ifies the context, contingencies, or demands of the task,
and the output is the appropriate response. Reading text
aloud or giving the correct answer for an addition prob-
lem are two straightforward examples of input-output
mappings that are learned in school. We will see that
there are many other more subtle ways in which tasks
can be learned.
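This notion of a task as an input-output mapping can be made concrete with a small sketch. The example below is hypothetical (the names `task` and `solves` are not from the text): a toy addition task where each 2-bit input pattern is paired with a one-hot target output pattern over the possible sums.

```python
# A task represented as an input-output mapping: each input pattern
# (the task context) is paired with the output pattern (the correct
# response). Toy example: two binary inputs mapped to their sum,
# coded as a one-hot pattern over the possible sums 0, 1, and 2.
task = {
    (0, 0): (1, 0, 0),   # 0 + 0 = 0
    (0, 1): (0, 1, 0),   # 0 + 1 = 1
    (1, 0): (0, 1, 0),   # 1 + 0 = 1
    (1, 1): (0, 0, 1),   # 1 + 1 = 2
}

def solves(network, task):
    """A network solves the task if it produces the target output
    pattern for every input pattern in the mapping."""
    return all(network(inp) == target for inp, target in task.items())
```

Learning the task then amounts to adjusting a network's weights until `solves` returns true, which is what the error-driven mechanisms in this chapter aim to achieve.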
It would be ideal if the CPCA Hebbian learning rule
developed for model learning was also good at learn-
ing to solve tasks, because we would then only need
one learning algorithm to perform both of these impor-
tant kinds of learning. Thus, we begin the chapter by
seeing how well it does on some simple input-output
mappings. Unfortunately, it does not perform well.
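The core of the difficulty can be previewed with a sketch of the two kinds of weight update (standard textbook formulations, not the book's exact CPCA equations): a correlational Hebbian update has no access to what the output *should* have been, whereas an error-driven update is computed from the discrepancy between target and actual output.

```python
# Correlational vs. error-driven weight updates for a single unit
# with input pattern x, output y, and (for the error-driven case)
# a target value t. Standard formulations, shown for contrast.

def hebbian_update(w, x, y, lrate=0.1):
    # dw_i = lrate * x_i * y  -- no reference to the desired output,
    # so the update cannot correct a wrong response.
    return [wi + lrate * xi * y for wi, xi in zip(w, x)]

def delta_update(w, x, y, target, lrate=0.1):
    # dw_i = lrate * (t - y) * x_i  -- driven by the task error,
    # so the update vanishes once the output matches the target.
    return [wi + lrate * (target - y) * xi for wi, xi in zip(w, x)]
```

Note that when `y == target`, `delta_update` leaves the weights unchanged, while `hebbian_update` keeps strengthening active correlations regardless of task performance.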
To develop a learning mechanism that will perform
well on task learning, we derive an error-driven learn-
ing algorithm called the delta rule that makes direct use
of discrepancies or errors in task performance to ad-