Behaviorism and Developments in Instructional Design and Technology (Distance Learning)

Introduction: The Basics of Behaviorism

The theory of behaviorism concentrates on the study of overt behaviors that can be observed and measured (Good & Brophy, 1990). In general, behavior theorists view the mind as a “black box” in the sense that responses to stimuli can be observed quantitatively, ignoring the possibility of thought processes occurring in the mind. Behaviorists believe that learning takes place as the result of a response that follows a specific stimulus. By repeating the S-R (stimulus-response) cycle, the organism (be it an animal or a human) is conditioned to repeat the response whenever the same stimulus is present. The behavioral emphasis on breaking down complex tasks, such as learning to read, into subskills that are taught separately has had a powerful influence on instructional design. Behaviors can be modified, and learning is measured by observable changes in behavior. The behavior theorists emphasize the need for objectivity, which leads to a strong emphasis on statistical and mathematical analysis. The design principles introduced by the behavior theorists continue to guide the development of today’s computer-based learning. Distance-education courseware and instructional software apply key behavior-modification principles. For example, a typical course Web site usually states the objectives of the software; uses text, visuals, or audio to apply appropriate reinforcers; provides repetition and immediate feedback; applies principles of shaping, chaining, modeling, punishment, and reward to the learners; incorporates a scoring system; and reports the learner’s progress. Major learning theorists associated with behaviorism are the following:

• Pavlov

• Thorndike

• Skinner

• Watson

• Gagne

The major educational technology developments in America that can be attributed to behaviorism are the following:

• The behavioral objectives movement

• The teaching machine phase

• The programmed instruction movement

• The individualized instructional approaches

• Computer-assisted learning

• The systems approach to instruction

Major instructional design theorists associated with behaviorism are as follows:

• Glaser

• Gagne and Briggs

• Dick and Carey

• Mager

Background: Behaviorism and Learning Theories

The advent of behavioral theories can be traced back to the elder Sophists of ancient Greece, Cicero, Herbart, and Spencer (Saettler, 1990). Behaviorism, as a learning theory, can be traced back to Aristotle, whose essay “Memory” focused on associations being made between events such as lightning and thunder. Philosophers who followed Aristotle’s line of thought include Hobbes (1650), Hume (1740), Brown (1820), Bain (1855), and Ebbinghaus (1885). Franklin Bobbitt developed the modern concept of behavioral objectives in the early 1900s. More recently, the names associated with the development of behaviorist theory include Pavlov, Thorndike, Watson, and B. F. Skinner.

The Russian physiologist Ivan Petrovich Pavlov was a precursor of behavioral science. He is best known for his work in classical conditioning, or stimulus substitution. Pavlov’s experiment involved food, a dog, and a bell. His work inaugurated the era of S-R psychology.

Pavlov placed meat powder (an unconditioned stimulus) on a dog’s tongue, which caused the dog to automatically salivate (the unconditioned response). The unconditioned responses are natural and not learned. On a series of subsequent trials, Pavlov sounded a bell at the same time he gave the meat powder to the dog. When the food was accompanied by the bell many times, Pavlov found that he could withhold the food, and the bell’s sound itself would cause the dog to salivate. The bell became the conditioned stimulus that caused the conditioned response of salivating (Thomas, 1992). In 1904, he was awarded the Nobel Prize for his research on digestive processes.

The stimulus and response items of Pavlov’s experiment can be summarized as follows:

• Food: Unconditioned Stimulus

• Salivation: Unconditioned Response

• Bell: Conditioned Stimulus

• Salivation: Conditioned Response

Pavlov also made the following observations (Mergel, 1998):

• Stimulus generalization: Once the dog has learned to salivate at the sound of the bell, it will salivate at other similar sounds.

• Extinction: If you stop pairing the bell with the food, salivation will eventually cease in response to the bell.

• Spontaneous recovery: Extinguished responses can be “recovered” after an elapsed time, but will soon extinguish again if the dog is not presented with food.

• Discrimination: The dog could learn to discriminate between similar bells (stimuli), and discern which bell would result in the presentation of food and which would not.

• Higher order conditioning: Once the dog has been conditioned to associate the bell with food, another neutral stimulus, such as a light, may be flashed at the same time that the bell is rung. Eventually the dog will salivate at the flash of the light without the sound of the bell.

Thorndike (1874-1949)

Another influential contributor to establishing education as a science was Edward L. Thorndike. Thorndike’s laws were built upon the stimulus-response hypothesis of Pavlov. He was also a strong advocate of educational measurement.

Around the turn of the century, Thorndike conducted research on animal behavior before becoming interested in human development. He was interested in discovering whether animals, such as cats and dogs, could learn tasks through imitation or observation.

Thorndike’s laws of learning for humans, based on connectionism, stated that learning was the formation of a connection between stimulus and response. His behavioral learning theory studied increasing a behavior with the use of rewards, punishment, and practice. The three major laws are the law of effect, which held that the strength of a connection depends on what follows the response; the law of exercise, which held that practice strengthens a connection while disuse weakens it; and the law of readiness, which held that when the organism is physically ready, acting on the connection is satisfying.

Close temporal sequence is not the only means of ensuring the connection of the satisfaction with the response producing it. Other, equally important factors are the frequency, energy, and duration of the connection, and the closeness with which the satisfaction is associated with the response. The effect is most clearly seen when the interval between the response and the satisfaction or discomfort is increased: such an increase diminishes the rate of learning, so minimal delay in reinforcement is crucial to the learning process. Attention to the response, or knowledge of the results, also counts. A slightly satisfying or indifferent response made often may win a closer connection than a more satisfying response made only rarely.

Thorndike believed that when the response was positive, a neural bond would be established between the stimulus and the response, and that learning takes place when these bonds are formed into patterns of behavior (Saettler, 1990). This trial-and-error method of laboratory research is the origin of the linear style of inquiry-based learning.

Watson (1878-1958)

John B. Watson is credited with coining the term behaviorism. Like Thorndike, he was originally involved in animal research but later became involved in the study of human development. Watson believed that humans are born with a few reflexes and the emotional reactions of fear, love, and rage, and that all other behaviors are established through stimulus-response associations formed by conditioning.

Watson demonstrated classical conditioning in an experiment involving a young child named Albert and a white rat. Originally, Albert was not afraid of the rat. Watson made a sudden loud noise whenever Albert touched the rat. Frightened by the loud noise, Albert became conditioned to fear and to avoid the rat. The fear was generalized to other small animals. Watson then extinguished the fear by presenting the rat without the loud noise. Later reviews suggest that accounts of the study have portrayed the conditioned fear as more powerful and permanent than it really was (Good & Brophy, 1990; Harris, 1979; Samelson, 1980).

Watson’s work demonstrated the role of conditioning in the development of emotional responses to certain stimuli. This helps to explain certain fears, phobias, and prejudices that people develop.

Skinner (1904-1990)

Like Pavlov, Thorndike, and Watson, Burrhus Frederic Skinner believed in the stimulus-response pattern of conditioned behaviors. His theory ignored the possibility of any processes occurring in the mind and dealt directly with changes in observable behaviors. In other words, actual behaviors, rather than emotions, thoughts, or other hypothetical constructs, are the focus of concern in his theory (Maddux, Johnson, & Willis, 1992).

Most of Skinner’s research centered on the Skinner box, an experimental space that contains one or more operanda, such as a lever that may be pressed by a rat, along with various sources of stimuli. Skinner contributed much to the study of operant conditioning, a change in the probability of a response due to an event that followed the initial response (Skinner, 1968). Skinner’s theory is based on the idea that learning is a function of change in behavior. When a particular S-R pattern is reinforced (rewarded), the individual is conditioned to respond. Changes in behavior are the result of an individual’s response to events (stimuli) that occur in the environment. In his early career, Skinner experimented with animals such as pigeons and rats. He later turned his research interests from animals to humans, especially his own daughters.

Principles and Mechanisms of Skinner’s Operant Conditioning

• Positive reinforcement or reward: Behavior that is positively reinforced will reoccur; intermittent reinforcement is particularly effective. A reinforcer is anything that strengthens the desired response. A reinforcer could be verbal praise, a good grade, or a feeling of satisfaction.

• Negative reinforcement: Responses that allow escape from painful or undesirable situations are likely to be repeated. A negative reinforcer is any stimulus that results in the increased frequency of a response when it is withdrawn. It is different from aversive stimuli, or punishment, which results in reduced responses (Good & Brophy, 1990).

• Punishment: Responses that bring painful or undesirable consequences will be suppressed, but may reappear if reinforcement contingencies change.

• Extinction or nonreinforcement: Responses that are not reinforced are not likely to be repeated.

• The schedules of reinforcement: The schedules of reinforcement can govern the contingency between responses and reinforcement and their effects on establishing and maintaining behavior. Schedules that depend on the number of responses made are called ratio schedules. The ratio of the schedule is the number of responses required per reinforcement. If the contingency between responses and reinforcement depends on time, the schedule is called an interval schedule.

For an animal to learn a behavior, such as pressing a lever to produce food, successive approximations of the behavior are rewarded until the animal learns the association between the lever and the food reward. To begin the shaping process, the animal may be rewarded for simply turning in the direction of the lever, then be rewarded for moving toward the lever, be rewarded for brushing against the lever, and finally be rewarded for pawing the lever. If placed in a cage, an animal may take an extended period of time to figure out that pressing a lever will produce food. Behavioral chaining occurs when a succession of steps needs to be learned. Information should be presented in small amounts so that responses can be reinforced or shaped. The animal would master each step in sequence until the entire sequence is learned.

Due to stimulus generalization, reinforcements will generalize across similar stimuli producing secondary conditioning. Once the desired behavioral response is accomplished, reinforcement does not have to be present every single time. In fact, it can be maintained more successfully through partial reinforcement schedules. Skinner explained partial reinforcement schedules including interval schedules, ratio schedules, fixed-interval schedules, variable-interval schedules, fixed-ratio schedules, and variable-ratio schedules. He found that variable-interval and variable-ratio schedules produce more persistent rates of response because the learners cannot predict when the reinforcement will occur (Milhollan & Forisha, 1972).
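The contrast between fixed- and variable-ratio schedules can be illustrated with a small simulation. The function below is a hypothetical sketch (its name and numbers are invented for illustration, not drawn from Skinner's experiments): on a fixed-ratio schedule every nth response is reinforced, while on a variable-ratio schedule the required count varies randomly around the same average, making reinforcement unpredictable.

```python
import random

def reinforcements(schedule, responses, ratio=5, seed=0):
    """Count reinforcements delivered over a run of responses.

    "fixed" rewards every `ratio`-th response; "variable" rewards after a
    randomly varying number of responses that averages `ratio`.
    """
    rng = random.Random(seed)
    rewards = 0
    count = 0
    # Number of responses required before the next reinforcement.
    next_at = ratio if schedule == "fixed" else rng.randint(1, 2 * ratio - 1)
    for _ in range(responses):
        count += 1
        if count >= next_at:
            rewards += 1
            count = 0
            next_at = ratio if schedule == "fixed" else rng.randint(1, 2 * ratio - 1)
    return rewards

# Fixed-ratio 5: exactly every 5th response is reinforced.
print(reinforcements("fixed", 100))     # 20
# Variable-ratio: a similar average rate, but unpredictable timing,
# which is what sustains persistent responding.
print(reinforcements("variable", 100))
```

The unpredictability of the variable schedule, visible in the random `next_at`, is precisely why Skinner found such schedules produce the most persistent response rates.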

Difference between Classical and Operant Conditioning

Skinner’s work differs from that of his predecessors (classical conditioning) in that he studied operant behaviors, the voluntary behaviors used in operating on the environment. The organism can emit responses instead of only eliciting a response due to an external stimulus.

He also emphasized the use of positive reinforcement in a repetitive manner. Another distinctive aspect of Skinner’s theory is that it attempted to provide behavioral explanations for a broad range of cognitive phenomena. For example, Skinner explained motivation in terms of deprivation and reinforcement schedules.

Implications for Educational Technology

Skinner’s operant conditioning has been widely applied in behavior modification as well as in teaching and instructional development, particularly in areas such as classroom management and programmed instruction. His influential book, The Technology of Teaching (1968), explained how classroom instruction should reflect the behaviorist principles of operant conditioning.

Many of Skinner’s instructional techniques are still widely used today (Roblyer, 2006). Consider the implications of Skinner’s theory as applied to the development of programmed instruction (Markle, 1969; Skinner, 1968):

1. Practice should take the form of question-answer (stimulus-response) frames that expose the student to the subject in steps.

2. Require that the learner make a response for every frame and receive immediate feedback.

Table 1. Difference between classical and operant conditioning


Classical Conditioning                     | Operant Conditioning
Uses the term response                     | Uses the term behavior
Main components: stimulus and its response | Main components: behavior and its consequence
Cannot be used to shape behavior           | Can be used to shape behavior
The stimulus causes the response           | The consequence influences the behavior
Association between stimuli and responses  | Reinforcement
Based on involuntary reflexive behavior    | Based on voluntary behavior

3. Try to arrange the difficulty of the questions so the response is always correct, resulting in a positive reinforcement.

4. Ensure that good performance in the lesson is paired with secondary reinforcers such as verbal praise, prizes, and good grades.
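The four prescriptions above can be sketched as a minimal linear drill. This is an illustrative toy, not an actual program from Skinner or Markle; the frames and the function name are invented:

```python
# Two invented question-answer frames (principle 1: expose the subject
# in small steps).
frames = [
    {"prompt": "Pavlov's bell is a ________ stimulus.",
     "answer": "conditioned"},
    {"prompt": "A stimulus whose removal strengthens a response is a ________ reinforcer.",
     "answer": "negative"},
]

def run_frames(frames, responses):
    """Present each frame, require a response for every frame, and give
    immediate feedback (principle 2); return the number correct."""
    score = 0
    for frame, response in zip(frames, responses):
        correct = response.strip().lower() == frame["answer"]
        # Immediate feedback; a correct answer serves as positive
        # reinforcement (principles 3 and 4).
        feedback = "Correct!" if correct else f"The answer is: {frame['answer']}"
        print(f"{frame['prompt']}\n> {response}\n{feedback}")
        score += correct
    return score

print(run_frames(frames, ["conditioned", "positive"]))  # 1 of 2 correct
```

In a real programmed-instruction sequence the questions would be graded so finely (principle 3) that nearly every response is correct, keeping reinforcement constant.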

The general criticism of Skinner is that he denies the existence of free will or human freedom. Skinner redefined these values as self-control, which he believed humans do not possess. Many critics labeled him one-sided because he failed to consider any options other than what is “easily observed and manipulated” (Schellenberg, 1978).

Robert Gagne (1916-2002)

Like Skinner, Robert Gagne emphasized the use of positive reinforcement in a repetitive manner. Between 1949 and 1958, when Gagne was the director of the perceptual and motor skills laboratory of the U.S. Air Force, he began to develop the ideas that comprise his learning theory, called the conditions of learning or cumulative learning theory. Although Gagne’s earlier work is grounded in the behaviorist tradition, his later work was influenced by the information-processing view of learning and memory. He published with David Merrill, Leslie Briggs, Walter Wager, and several other authors.

Gagne is best known for three of his contributions: the events of instruction, the types of learning, and learning hierarchies (Roblyer, 2003).

The Events of Instruction

Gagne identified the following nine events of instruction as elements of a good lesson (Gagne, Briggs, & Wager, 1992):

1. Gaining attention

2. Informing the learner of the objective

3. Stimulating recall of prerequisite learning

4. Presenting new materials

5. Providing learning guidelines

6. Eliciting performance

7. Providing feedback about performance correctness

8. Assessing performance

9. Enhancing retention and recall

The Types of Learning

The early writings by Gagne, Briggs, and Wager (1974) identified three categories of human factors that affect the learning event (see Table 2).

According to Gagne, each new skill learned should build on previously acquired skills. When designing instruction, the lower-level skills and knowledge prerequisite to an instructional objective have to be identified.

In the 1990s, Gagne et al. (1992) identified five types of learned capabilities that students demonstrate after acquiring knowledge:

• Verbal information

• Intellectual skill

• Cognitive strategy

• Attitude

• Motor skill

Learning Hierarchies

Robert Gagne developed his taxonomy of learning in 1972 to classify observable learning behaviors. Gagne distinguished the following eight classes of intellectual skills by which human beings learn. These intellectual skills can be categorized on a dimension of complexity, ranging from simple recognition to abstract processes (Gagne et al., 1974):

Table 2. Human factors that affect the learning event

External Stimulus Factors

• Contiguity: time relationship between stimulus and response

• Repetition: frequency of exposure to a stimulus

• Reinforcement: follow-up to the stimulus

Internal Cognitive Factors

• Factual information: from memory

• Intellectual skills: ability to manipulate information

• Cognitive strategies: ability to process meaningful information

Internal Affective Factors

• Inhibition: reluctance to react to a stimulus

• Anxiety: tension

1. Signal learning: The individual learns to make a general, diffuse response to a signal. Such was the classically conditioned response of Pavlov’s dogs.

2. Stimulus-response learning: The learner acquires a precise response to a discriminated stimulus.

3. Chaining: A chain of two or more stimulus-response connections is acquired.

4. Verbal association: The learning of chains that are verbal.

5. Discrimination learning: The individual learns to make different identifying responses to many different stimuli that may resemble each other in physical appearance.

6. Concept learning: The learner acquires a capability of making a common response to a class of stimuli.

7. Rule learning: A rule is a chain of two or more concepts.

8. Problem solving: This kind of learning requires the internal events usually called thinking.

The more complex kinds of intellectual processing are based upon these simpler varieties. To teach a skill, a teacher has to identify its prerequisite skills and ensure that the student possesses them. Gagne et al. (1992) called this building process “a learning hierarchy.”

Today, Gagne is considered an experimental psychologist who is concerned with learning and instruction. The model he proposed is a widely accepted model used to inform the design process. He has impacted instructional design for K-12 schools as well as for business, industry, and the military.

Main Focus: Behaviorism and the Developments in Instructional Design and Technology

The following major educational technology developments in America can be attributed to behaviorism (Saettler, 1990):

• The behavioral objectives movement

• The teaching machine phase

• The programmed instruction movement

• The individualized instructional approaches

• Computer-assisted learning

• The systems approach to instruction

Behavioral objectives Movement

The behaviorist theory is sometimes referred to as objectivist because behaviorists emphasize the need for objectivity, which leads to a strong emphasis on statistical and mathematical analysis. They believe behaviors can be modified, and learning is measured by observable changes in behavior. Today, learning objectives written by teachers are still widely recognized and very useful. Here is an example of a learning objective:

After completing the unit, the student will be able to answer 85% of the questions correctly.

The behavioral objectives movement can be traced back to Benjamin Bloom in the 1950s and 1960s. At a time when the primary learning theory was behaviorism, an approach that viewed students as passive recipients of learning provided by their teachers and parents, Bloom presented his taxonomy in Taxonomy of Educational Objectives: Book 1, Cognitive Domain (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956). The view was that learning involved pupils’ accumulation and remembering of varied pieces of information. Bloom and his colleagues began developing taxonomies in the cognitive, affective, and psychomotor domains: cognitive for mental skills, affective for growth in feelings or emotional areas, and psychomotor for manual or physical skills.

Bloom’s cognitive taxonomy is organized into six levels:

• Knowledge

• Comprehension

• Application

• Analysis

• Synthesis

• Evaluation

Bloom’s “learning for mastery” defines mastery in terms of specific educational objectives, and mastery of each unit is essential for students before they advance to the next one.

Each teacher begins a new term or course with the expectation that about a third of his students will adequately learn what he has to teach. He expects about a third to fail or just “get by.” Finally, he expects another third to learn a good deal of what he has to teach, but not enough to regard them as a “good student.” (Bloom, Hastings, & Madaus, 1971, p. 43)

The affective domain includes the manner in which we deal with things emotionally, such as attitudes, motivations, feelings, values, and appreciation. The major categories, listed in order, are as follows (Bloom, Mesia, & Krathwohl, 1964):

• Receiving phenomena: Awareness, willingness to hear, selected attention

• Responding to phenomena: Active participation on the part of the learners

• Valuing: The worth or value a person attaches to a particular object, phenomenon, or behavior.

• Organization: Organizes values into priorities.

• Internalizing values (characterization): Has a value system that controls his or her behavior.

The psychomotor domain includes physical movement, the use of the motor-skill areas, and coordination. Most of the time, the development of these skills requires practice, and is measured in terms of speed, precision, distance, procedures, or techniques in execution. The major categories listed in order are as follows (Bloom et al., 1964):

• Perception: The ability to use sensory cues to guide motor activity.

• Set: Readiness to act. It includes mental, physical, and emotional sets.

• Guided response: The early stages in learning a complex skill that includes imitation and trial and error.

• Mechanism: This is the intermediate stage in learning a complex skill.

• Complex overt response: The skillful performance of motor acts that involve complex movement patterns.

• Adaptation: Skills are well developed and the individual can modify movement patterns to fit special requirements.

• Origination: Creating new movement patterns to fit a particular situation or specific problem.

According to Bloom et al. (1971), nearly all students can achieve mastery of material in a course when given the time and quality of instruction that they need. To reach mastery, the student needs to get 80% to 90% of the answers right. The basic instructional task was to divide the course into educational units and find methods and materials to help the students reach the set level. The student would be tested with a formative test that would either indicate mastery or emphasize what still needed to be learned in order to reach the next level.

By the late 1960s, most teachers were writing behavioral objectives (Mergel, 1998). Other names for objectives are “learning targets,” “educational objectives,” and “pupil outcomes.” Virtually all the tests pupils take in school are intended to measure one or more of the cognitive processes, and instruction is expected to focus on helping students attain mastery of some subject area. Learning success may be measured by tests developed to measure each objective. To develop objectives, a learning task must be broken down through analysis into specific measurable tasks. Teachers began to write behavioral objectives for their lessons: descriptions of specific terminal behaviors expressed in observable, measurable terms. Cognitive objectives focus on memorizing, interpreting, and other intellectual activities.

A good objective states the intended learning in specific, quantifiable, terminal behaviors.

Today, Bloom’s taxonomy is still widely recognized and very useful. In a popular textbook used by teacher-training programs in the United States, Peter W. Airasian wrote, “Although teachers’ objectives may be explicit or implicit or clear or fuzzy, it is best that objectives be explicit, clear, and measurable. Regardless of how they are stated and what they are called, objectives are present in all teaching” (Airasian, 2001, p. 74).

Similarly, Robert Mager wrote Preparing Instructional Objectives in 1962, which prompted interest in and use of behavioral objectives among educators. In the book, Mager described an objective as “a description of a performance you want learners to be able to exhibit before you consider them competent. An objective describes an intended result of instruction rather than the process of instruction itself” (1984, p. 21). Objectives are important because teachers need an objective to find out whether learning has been accomplished, and students need an objective as a means to organize their own efforts toward accomplishment. There is no sound basis for the selection of instructional materials when clearly defined objectives are lacking.

According to Mager, an objective must include three major components:

1. Performance: What the learner should be able to do

2. Conditions: Under what circumstances

3. Criterion: How well it must be done
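Mager's three components map naturally onto a simple data structure. The sketch below is hypothetical (the class and field names are ours, not Mager's) and shows how the components assemble into a complete objective statement:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    performance: str  # what the learner should be able to do
    conditions: str   # under what circumstances
    criterion: str    # how well it must be done

    def statement(self) -> str:
        """Assemble the three components into one objective sentence."""
        return f"{self.conditions}, the learner will {self.performance} {self.criterion}."

obj = Objective(
    performance="answer the unit's review questions",
    conditions="Given the completed unit and no reference materials",
    criterion="with at least 85% accuracy",
)
print(obj.statement())
# Given the completed unit and no reference materials, the learner will
# answer the unit's review questions with at least 85% accuracy.
```

Omitting any one field leaves the objective unmeasurable, which is exactly Mager's argument for requiring all three.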

Later, Gagne and Briggs developed a set of instructions for writing objectives that is based on Mager’s work.

Teaching Machines and the Programmed Instruction Movement

Although the elder Sophists, Comenius, Herbart, and Montessori used the concept of programmed instruction in their repertoires, B. F. Skinner is probably the best-known advocate of teaching machines and programmed learning. Other contributors to this movement include Pressey and Crowder.

Edward Thorndike described the premise of computer-based instruction half a century before such a system became feasible. Thorndike (1912, p. 165) wrote, “If, by a miracle of mechanical ingenuity, a book could be so arranged that only to him who had done what was directed on page one would page two become visible, and so on, much that now requires personal instruction could be managed by print.”

Sidney Pressey sought to incorporate Thorndike’s vision in a machine. Noticing that objective tests were becoming common in schools, Pressey began experimenting in the 1920s with a machine for testing and scoring in his introductory psychology courses. Soon he recognized its potential for teaching and learning. Pressey (1926, p. 374) stated that “the procedure in mastery of drill and informational material were in many instances simple and definite enough to permit handling of much routine teaching by mechanical means.” Pressey maintained that the teacher is “burdened by such routine of drill and information-fixing.”

Pressey’s teaching machine resembled a typewriter carriage with a window that revealed a question with four answers. The user pressed one of the four keys that corresponded to different answers. When the user pressed a key, the machine recorded the answer on a counter and then displayed the next question. Once finished, the person scoring the test slipped the test sheet back into the device and noted the score on the counter. Pressey demonstrated his multiple-choice machine at the 1925 American Psychological Association meeting (Travers, 1967).

Despite his confidence that the machine he developed would lead to an “industrial revolution in education” (Pressey, 1932, p. 672), this type of machine was never widely used. In the same year that Pressey predicted the revolution, the unemployment rate exceeded 20% due to the Great Depression, and new developments in educational technology were delayed until after World War II.

More than 30 years later, Skinner was among a group of far-sighted researchers with a vision of machines that could teach. He envisioned the following (1954, 1958):

• Machines are able to arrange appropriate contingencies of reinforcement by which specific forms of behavior could be set up by the use of specific stimuli.

• The learner should be able to put together his or her own response rather than select from alternatives.

• The learner must pass through a carefully designed sequence of steps. Each step must be small enough that it can always be taken, yet in taking it, the student moves closer to fully competent behavior.

• The student must not be able to proceed to the next step until the current one has been accomplished.

• The machine stimulates constant interaction between the program and the user.

The teaching machine kept good records of students’ progress. According to Maddux et al. (1992, p. 105), “Skinner’s machine presented the questions in 30 radial frames on a 12-inch disk.” They continued, “He called the material in the frames the program. Programs that led each user through the same materials in the same sequence were referred to as linear programs.” Skinner thought the success of such a machine depends on the material used in it. The learner’s concentration is improved because these packages address the environmental factors that should be conducive to learning. Its other features also free the educator from rote work.

Skinner’s teaching machine required the learner to compose an answer rather than simply choose from a list of options. The machine also required the learner to proceed through a series of steps in a prescribed sequence, or linear program. Each step had to be small enough that everyone would be successful, yet each step had to lead closer to the target behavior (a process referred to as task analysis; Maddux et al., 1992). Skinner’s machine was demonstrated in 1954. Used effectively, these machines would take the role of a private tutor, bringing one programmer (educator) into contact with a large number of students.

Skinner’s work on teaching machines stimulated a large body of research. Today, his criteria for teaching machines are still important components in developing modern computer-based learning programs (Maddux et al., 1992). Even though Skinner’s teaching machine stimulated a large body of interest, the device was not widely adopted by educators. However, his idea of a teaching machine led to programmed instruction.

Later, Norman Crowder (1959) disagreed with Skinner’s view that every learner should progress through the same sequence. His programs varied the sequence for some learners or omitted certain frames for others, depending on learner responses (referred to as branching programs). The descendants of these programs come in various forms in the current educational software market: computerized drill and practice, simulations, and tutorials.
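The difference between Skinner's linear sequencing and Crowder's branching can be sketched as a small frame graph in which the learner's response selects the next frame. The frames, ids, and response labels below are invented for illustration; they are not from Crowder's actual programs:

```python
# A branching program as a dictionary of frames: each frame maps possible
# responses to the id of the next frame. A linear program would be the
# special case where every frame has exactly one successor.
branching = {
    "q1": {"prompt": "Is salivation to the bell a learned response?",
           "next": {"yes": "q2", "no": "remedial"}},
    "remedial": {"prompt": "Review: the bell is a conditioned stimulus, so the response is learned.",
                 "next": {"ok": "q2"}},
    "q2": {"prompt": "End of sequence.", "next": {}},
}

def path(program, start, responses):
    """Return the sequence of frame ids a learner visits; each response
    selects the next frame, so the path depends on the learner."""
    visited = [start]
    frame = start
    for response in responses:
        nxt = program[frame]["next"].get(response)
        if nxt is None:
            break
        visited.append(nxt)
        frame = nxt
    return visited

# A correct answer skips the remedial frame (Crowder-style branching)...
print(path(branching, "q1", ["yes"]))       # ['q1', 'q2']
# ...while an incorrect answer routes through it.
print(path(branching, "q1", ["no", "ok"]))  # ['q1', 'remedial', 'q2']
```

The same structure underlies modern tutorial software: drill-and-practice programs are mostly linear, while adaptive tutorials branch on learner responses.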

Today, the Education Thesaurus of UNESCO’s International Bureau of Education (2002) defines teaching machines as, “Devices that mechanically, electrically and/or electronically present instructional programs at a rate controlled by the learners’ responses.”

Early Use of Programmed Instruction

Sometimes called programmed learning, programmed instruction presents material in a book or workbook that employs the principles proposed by Skinner in his design of the teaching machine, with a special emphasis on task analysis and reinforcement for correct responses (Maddux et al., 1992). Skinner was also a proponent of programmed instruction, and much of the system is based on his theory of the nature of learning. It is an innovation that was more widely accepted in education than the teaching machine.

Believing that students’ behaviors could be shaped to achieve learning by tightly structuring the environment, Skinner envisioned lessons that use carefully planned steps of stimulus-response pairing and reinforcement to reach a goal. The lessons are to be administered in small, incremental steps.

Skinner and J. G. Holland first used programmed instruction experimentally in behavioral psychology courses at Harvard in the late 1950s. Early use of programmed instruction tended to concentrate on the development of hardware rather than course content. The first practical implementation of programmed instruction was achieved in 1960 by Basic Systems Inc. (Mechner, 1977). In the early 1960s, the proponents led by Skinner defined programmed instruction as using (a) an active response by the learner, (b) immediate reinforcement of correct responses, and (c) successive approximations toward the knowledge to be learned, in a sequence of steps so small that the learner can take each one without difficulty (Mechner, 1977).

The use of programmed instruction appeared in American elementary and secondary schools at around the same time (Saettler, 1990). Programs were devised for the teaching of arithmetic, foreign languages, physics, spelling, reading, psychology, and several other subjects. Industry and the military used programmed instruction to train personnel. Osguthorpe and Zhou (1989) discussed the popularity of this approach in the 1950s and 1960s.

Although many educators agree that programmed instruction can contribute to more efficient classroom procedure and supplement conventional teaching methods, there has been considerable controversy regarding the merits of programmed instruction as the sole method of teaching. Programmed learning died out in the latter part of the 1960s (Reiser, 1987). Researchers agreed that programmed instruction did not appear to live up to its original claims (Criswell, 1989; Reiser, 1987; Tillman & Glynn, 1987). Concerned developers moved away from hardware development to programs based on analysis of learning.

By the early 1960s, there was a strong backlash against the use of both teaching machines and programmed instruction. Fitzgerald (1970) listed the rigidity of these devices, which prevented learners from skimming, as one of the overriding disadvantages. He also suggested that both teaching machines and programmed instruction would lead to dehumanization due to overreliance on machines.

Decades later, Skinner (1986) explained why programmed instruction and teaching machines were never popular:

The machines were crude, the programs were untested, and there were no ready standards of comparison. Teaching machines would have cost money that was not budgeted. Teachers misunderstood the role of the machines and were fearful of losing their jobs. (p. 105)

Today, the Education Thesaurus of UNESCO’s International Bureau of Education (2002) defines programmed instruction as, “Learning in which the students progress at their own rate using workbooks, textbooks or electromagnetic resources that provide information in discrete steps, test learning at each step and provide immediate feedback about achievement.”

Individualized Approaches to Instruction

Similar to teaching machines and programmed instruction, individualized instruction began in the early 1900s and was revived in the 1960s. The Keller Plan (sometimes called Keller Method, personalized system of instruction or PSI), individually prescribed instruction (IPI), program for learning in accordance with needs (PLAN), and individually guided education are all examples of individualized instruction in the United States (Saettler, 1990). Also similar to the previously mentioned behavioral objectives movement, teaching machine phase, and programmed instruction movement, the movement toward individualized approaches to instruction represents an achievement of the neo-behaviorist systems approach to instruction. All applied behavioral theories and psychological principles to the technology of education and were shown to generate significant educational results.

The Keller Plan

The Keller Plan was developed by F. S. Keller, his colleague J. Gilmore Sherman, and two psychologists at the University of Brazilia. The Keller Plan is derived from the behaviorists’ reinforcement psychology, with influence from teaching machines and programmed instruction. The group had in mind that students would perform better if they found satisfaction in their work (Buskist, Cush, & DeGrandpre, 1991). They argued that positive consequences (praise, good grades, feeling of achievement) were more important than the negative ones (boredom, failure, or punishment). Briefly, “those features which seem to distinguish (PSI) from conventional teaching procedures” include the following (Reboy & Semb, 1991, p. 213):

1. Mastery criteria

2. Self-pace

3. Stress upon the written word

4. The use of proctors

5. Lectures used for motivation rather than sources of information

The Keller Plan was developed for higher education, whereas Bloom’s mastery learning was aimed at K-12. A PSI course is divided into units, and students must demonstrate mastery of each unit before moving ahead. Students are allowed to pace their own learning individually. The mastery level is usually set at 85% to 100% (Buskist et al., 1991). The course is based upon a standard textbook, a study guide, journal articles, and other readings. Another common characteristic of PSI is the use of proctors. Proctors are undergraduate students who have successfully finished the course and are aware of the problems of new students. The proctors assist the students, score their quizzes, and provide feedback to the instructor about the course in general. The instructor’s lectures and demonstrations in the PSI plan are not for instructional purposes but for enrichment and motivation.
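The unit-mastery rule at the heart of PSI can be expressed as a short routine: a learner advances only after scoring at or above the mastery criterion on the current unit’s quiz, retaking it otherwise. The 85% threshold follows the range reported by Buskist et al. (1991); the function names and quiz history are illustrative, not from any actual PSI implementation.

```python
# Illustrative sketch of PSI's unit-mastery rule (names are hypothetical).
MASTERY_CRITERION = 0.85  # mastery level commonly set between 85% and 100%

def may_advance(quiz_score, criterion=MASTERY_CRITERION):
    """A learner proceeds to the next unit only after demonstrating mastery."""
    return quiz_score >= criterion

def units_completed(attempt_scores, criterion=MASTERY_CRITERION):
    """Self-paced progress: each unit may be retaken until mastered, so we
    count the units whose best attempt meets the criterion."""
    return sum(1 for scores in attempt_scores if max(scores) >= criterion)

# One learner's quiz history: unit 1 mastered on the second attempt,
# unit 2 mastered immediately, unit 3 not yet mastered.
history = [[0.70, 0.90], [0.95], [0.60]]
assert units_completed(history) == 2
assert not may_advance(0.60)
```

The point of the sketch is that PSI gates progress on performance rather than on elapsed class time, which is what makes the approach self-paced.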

Individually Prescribed Instruction (1964)

In 1962, Robert Glaser synthesized the work of previous researchers and introduced the concept of IPI. IPI is an approach in which the results of a learner’s placement test are used to plan learner-specific instruction. The main features of IPI include prepared units, behavioral objectives, planned instructional sequences for various subjects, a pretest and posttest for each unit, and materials used to continually evaluate whether the learner meets the behavioral objectives (Saettler, 1990). The use of IPI dwindled in the 1970s when it lost funding.

Program for Learning in Accordance with Needs (1967)

Headed by John C. Flanagan, PLAN was developed under the sponsorship of the American Institutes for Research (AIR), Westinghouse Learning Corporation, and several U.S. school districts. The main features of PLAN include items selected from about 6,000 behavioral objectives; instructional modules that each took about two weeks of instruction and were made up of approximately five objectives; mastery learning; and remedial learning plus retesting (Saettler, 1990). PLAN was abandoned in the late 1970s because of upgrading costs.

Computer-Assisted Instruction (CAI)

CAI was first used in education and training during the 1950s, with early work done by IBM. The mediation of instruction entered the computer age in the 1960s when Patrick Suppes and Richard Atkinson conducted their initial investigations into CAI in mathematics and reading. Developed through a systematic analysis of curriculum, Suppes’ (1979) CAI provided learner feedback, branching, and response tracking.

CAI grew rapidly in the 1960s when federal funding for research and development in education and industrial laboratories was implemented. To determine the possible effectiveness of CAI, the U.S. government funded two competing companies, Control Data Corporation and Mitre Corporation, which came up with the PLATO (Programmed Logic for Automatic Teaching Operations) and TICCIT (Time-Shared, Interactive, Computer-Controlled Information Television) projects, respectively. Another significant development in the instructional applications of computers during the 1960s and 1970s was the development of the IBM 1500 computer. Kinzer, Sherwood, and Bransford (1986, p. 25) stated that the IBM 1500 was “the only computer ever developed specifically for computer-assisted instruction, rather than as a general-purpose computer for widespread applications.”

PLATO was the first large-scale project for the use of computers in education. It was developed through a partnership between Control Data Corporation, the University of Illinois’ Computer Education Research Laboratory (CERL), and the National Science Foundation. Designed as a mainframe-based system, PLATO offered a sizeable library of programs for students, a sophisticated record-management system to keep track of individual students’ progress, and support for a large number of simultaneous users (Pagliaro, 1983).

The PLATO IV system, a large time-shared instructional system introduced during the early 1970s, enabled up to 600 students to simultaneously access educational software (Alessi & Trollip, 1985). Each terminal had its own display and keyboard, while all data and programs were stored on a central computer. The several thousand terminals served undergraduate education as well as elementary school reading, a community college in Urbana, and several campuses in Chicago (Office of Technology Assessment, 1982).

The original PLATO system continued to grow throughout the 1970s and early 1980s to over 1,000 terminals throughout the country (Alessi & Trollip, 1985). Control Data Corporation started setting up PLATO systems around 1975 and already had over 100 PLATO systems operating by 1985.

The design principles introduced by Suppes continue to guide the development of today’s instructional software. In CAI packages, key behavior-modification principles are used. A typical CAI package usually states the objectives of the software; uses text, visual, or audio to apply appropriate reinforcers; provides repetition and immediate feedback; uses principles to shape, chain, model, punish, and award the learners; incorporates a scoring system as a part of the system; and provides status of the progress of the learner (Mergel, 1998).
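Those behavior-modification principles map directly onto the structure of a drill-and-practice program, which the following toy loop sketches: a stated objective, immediate feedback and reinforcement, repetition of missed items, a score, and a progress report. The items and wording are hypothetical, not from any actual CAI package.

```python
# A toy drill-and-practice loop illustrating the behavioral principles listed
# above. items: list of (prompt, correct_answer); answer_fn simulates a learner.

def run_drill(items, answer_fn):
    print("Objective: answer every item correctly.")        # state the objective
    score, queue = 0, list(items)
    while queue:
        prompt, correct = queue.pop(0)
        if answer_fn(prompt) == correct:
            score += 1
            print(f"{prompt} -> Correct! Score: {score}")   # immediate reinforcement
        else:
            print(f"{prompt} -> Try again later.")          # immediate feedback
            queue.append((prompt, correct))                 # repetition until mastered
        print(f"Progress: {score}/{len(items)} mastered")   # progress status
    return score

facts = [("3 x 4", "12"), ("6 x 7", "42")]
perfect_learner = {"3 x 4": "12", "6 x 7": "42"}
assert run_drill(facts, perfect_learner.get) == len(facts)
```

Missed items are simply requeued, so the drill repeats each stimulus until the desired response is produced, which is the S-R conditioning cycle in miniature.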

By using the CAI packages, individual learners can master the subject matter on their own time and at their own pace. As the student continually is kept on track of his or her performance, motivation is also enhanced. In contrast to being a mere receiver of information, the learner now more actively participates.

Despite money and research, by the mid-1970s, it was apparent that CAI was not going to be the success that people had expected due to the following reasons (Mergel, 1998):

• CAI had been oversold and could not deliver.

• It lacked support from certain sectors.

• There were technical problems in implementation.

• Quality software was lacking.

• Costs were high.

Some researchers also argue that CAI was very much drill and practice, controlled by the program developer rather than the learner. Little branching of instruction was implemented in the programs (Saettler, 1990).

Systems Approach to Instruction or Instructional Systems Design (ISD)

The systems approach involves setting goals and objectives, analyzing resources, devising a plan of action, and continuous evaluation and modification of the program (Saettler, 1990). This approach is rooted in the military and business world, was developed in the 1950s and 1960s, and has dominated educational technology and educational development since the 1970s. The systems approach to curriculum design is an attempt to use a process of logical development and ongoing monitoring and evaluation to allow continuous improvement of the curriculum.

The onset of World War II introduced the huge instructional problem of training thousands of military personnel quickly and effectively. The answer at the time was an enormous influx of mediated learning material: films, slides, photographs, audiotapes, and print materials. In the 1960s, the military was rapidly infusing instructional systems development into their standard training procedures. This period was distinguished by the articulation of components of instructional systems and the recognition of their system properties.

The systems approach to instructional design is often credited to James Finn. Seels (1989) described Finn as the father of the instructional design movement because he linked the theory of systems design to educational technology and thus encouraged the integrated growth of these related fields of study. Finn also made educational technologists aware that technology is as much a process as a piece of hardware (Seels, 1989).

The systems approach views a system as a set of interrelated parts, all working toward a defined goal. Examples of systems include the human body and a community. Parts of a system depend on each other for input and output. The entire system uses feedback to determine if the goal is achieved. In 1962, Robert Glaser employed the term instructional system and named, elaborated, and diagrammed its components.

Robert Gagne’s The Conditions of Learning (1965) is a milestone that elaborated the analysis of learning objectives and related different classes of learning objectives to appropriate instructional designs. Gagne introduced behaviorist literature into the systems approach. His work contributed greatly to the field of instructional technology in the aspect of instructional design. A systems-approach model of designing instruction is utilized to help learners understand the process of instructional design. Gagne also introduced the idea of task analysis to instructional design. Through task analysis, an instructional task can be broken down into sequential steps: a hierarchical relationship of tasks and subtasks. Gagne built on the principles of the systems approach that Skinner explored in programmed instruction.
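The hierarchical relationship of tasks and subtasks can be pictured as a tree that is taught from the leaves upward: prerequisite subskills are sequenced before the tasks that depend on them. The skills in this sketch are hypothetical examples, not taken from Gagne’s own analyses.

```python
# A hypothetical task hierarchy for "long division", illustrating Gagne-style
# task analysis: each task lists the subtasks that must be mastered first.
hierarchy = {
    "long division": ["multi-digit multiplication", "subtraction with borrowing"],
    "multi-digit multiplication": ["single-digit multiplication"],
    "subtraction with borrowing": ["single-digit subtraction"],
    "single-digit multiplication": [],
    "single-digit subtraction": [],
}

def teaching_sequence(task, tree):
    """Post-order walk: every subtask is sequenced before the task it supports."""
    order = []
    for sub in tree[task]:
        for t in teaching_sequence(sub, tree):
            if t not in order:
                order.append(t)
    order.append(task)
    return order

seq = teaching_sequence("long division", hierarchy)
# Prerequisites always precede the tasks that depend on them.
assert seq.index("single-digit multiplication") < seq.index("multi-digit multiplication")
assert seq[-1] == "long division"
```

Ordering instruction this way is exactly what the behavioral emphasis on breaking complex tasks into separately taught subskills amounts to in practice.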

The current version of the systems approach is a process comprised of a series of phases. Sometimes referred to as the ADDIE model, the systems approach of instructional design contains the following major phases: analysis, design, development, implementation, and evaluation.

• Analysis

o Determine the instructional goal.

o Analyze the instructional goal.

o Analyze the learners and context of learning.

• Design

o Write performance objectives.

• Development

o Develop instructional strategies.

o Develop and select instruction.

o Develop assessment instruments.

• Implementation

o Implement the system.

o Revise the instruction if necessary.

• Evaluation

o Design and conduct the formative evaluation of instruction.

o Conduct summative evaluation.

Each step receives input from the previous step and provides output for the next step. A system is modified if the goal is not achieved. Each component is carefully linked.
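The phase-by-phase flow described above, where each phase consumes the previous phase’s output and the system is revised when the goal is unmet, can be sketched as a simple loop. The phase names mirror the ADDIE list; the phase and evaluation functions are placeholders, not a real authoring tool.

```python
# A schematic of the ADDIE cycle: phases run in order, each consuming the
# previous phase's output, and the cycle repeats until evaluation passes.

PHASES = ["analysis", "design", "development", "implementation", "evaluation"]

def run_phase(name, inputs):
    """Stand-in: each phase transforms its inputs into outputs for the next."""
    return inputs + [name]

def addie(goal, goal_met, max_iterations=10):
    """Iterate the full cycle, revising until the evaluation says the goal is met."""
    artifacts = [goal]
    for _ in range(max_iterations):
        for phase in PHASES:
            artifacts = run_phase(phase, artifacts)
        if goal_met(artifacts):      # stop once the goal is achieved
            return artifacts
    return artifacts                 # revised repeatedly but still unmet

# Hypothetical criterion: this run needs two full cycles before it passes.
result = addie("teach fractions", goal_met=lambda a: a.count("evaluation") >= 2)
assert result.count("evaluation") == 2
```

The loop makes the key property of the model explicit: evaluation is not a terminal step but a gate that can send the whole system back through revision.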

In the field of education, the systems-approach model first focused on language laboratories. The instruction can be viewed as a systematic process in which every component is crucial to achieve the goal of successful learning. These components include the learner, instructor, instructional materials, and the learning environment. The many components of the system interact to achieve learning. The focus is on what the learner will be able to know when the instruction is concluded. The systems approach does not prescribe or promote any particular teaching methodology. No one method will be appropriate for all objectives or for all students. Rather, it is a vehicle that helps teachers to think more systematically and logically about the objectives relevant to their students, and the means of achieving and assessing these. These early efforts of ISD in education led to several ISD models that were developed in the late 1960s at Florida State University.

Design models can be defined as the visualized representations of an instructional design process, displaying the main phases and their relationships. Each phase has an outcome that feeds the subsequent phase.

Currently, there are more than 100 different ISD models, but almost all are based on the generic ADDIE. The more commonly known models are the Dick and Carey Model, the Kemp Model, the ICARE Model, and the ASSURE Model. While a number of versions of the ISD model exist, the Dick and Carey model is very popular in current instructional design programs. The ADDIE model has been in use for training development for several decades. Today, Walter Dick and Lou Carey are widely viewed as the torchbearers of the approach with their authoritative book The Systematic Design of Instruction (1978).

Dick and Carey’s model, the systems-approach model for designing instruction, is based on the assumption that there is a predictable link between a stimulus and the response that it produces in a learner. It describes the phases of an iterative process that starts by identifying instructional goals and ends with evaluation. This model includes analysis, design, development, formative evaluation, plus needs assessment in a nonlinear relationship (Dick & Carey, 1978).

In a classroom setting, the instructional material is linked to the response that it produces in a learner through the learning of the materials. Instruction is specifically targeted on the skills and knowledge to be taught, and supplies the appropriate conditions for the learning of these outcomes.

The Dick and Carey model prescribes a methodology for designing instruction based on breaking instruction down into smaller components. The designer needs to identify the subskills the student must master that, in aggregate, permit the intended behavior to be learned, and then select the stimulus and strategy for its presentation that builds each subskill.

The instructional implication of the model is that learning is based on mastering a set of behaviors that are predictable and therefore reliable. This model assumes that correct instructional analysis and instruction will lead to demonstrable skills.

The following is a list of the elements of Dick and Carey’s model explained in The Systematic Design of Instruction:

1. Determine the instructional goal.

2. Analyze the instructional goal.

3. Analyze the learners and contexts.

4. Write performance objectives.

5. Develop assessment instruments.

6. Develop instructional strategy.

7. Develop and select instruction.

8. Design and conduct formative evaluation.

9. Revise instruction.

10. Use summative evaluation.

Establishing an instructional goal or goals is typically preceded by needs assessment. The needs assessment is a formal process of identifying discrepancies between current outcomes and desired outcomes for an organization. Dick and Carey described the performance objectives as a statement of what the learners would be expected to do when they have completed a specified course of instruction, stated in terms of observable performances. Subordinate objectives are objectives that must be attained in order to accomplish a terminal objective; terminal objectives are objectives the learner will be expected to accomplish when they have completed a course of instruction. Through the learner and context analysis, key learner characteristics and the context in which the learning will occur are identified. The information provides the basis for developing accurately targeted instruction.

The designer conducts instructional analysis for an instructional goal in order to identify the relevant skills, their subordinate skills, and information required for a student to achieve the goal. The technique of hierarchical analysis is applied for goals in the intellectual skills domain to identify the critical subordinate skills needed to achieve the goal and their interrelationships. Formative evaluation is used to collect data and information that is used to improve a program, conducted while the program is still being developed. And finally, summative evaluation is conducted after an instructional program has been implemented and formative evaluation completed to present conclusions (http://www.ic.arizona.edu/~teachorg/nlii03/isd.htm).

Future Trends and Conclusion: Behavioral Teaching and Learning

Behavioral approaches to teaching generally involve the following:

1. The skills and information to be learned are broken down into small units.

2. Students’ work is checked regularly and feedback is provided as well as encouragement (reinforcement).

3. Teaching is “out of context.” Behaviorists generally believe that students can be taught best when the focus is directly on the content to be taught. Behavioral instruction often takes the material out of the context in which it will be used.

4. Instruction is direct or “teacher centered.” Teachers must direct the learning process.

5. Learning is passive.

6. Students must learn the correct response.

7. Learning requires an external reward.

8. Knowledge is a matter of remembering information.

9. Understanding is a matter of seeing existing patterns.

10. Applications require “transfer of training,” which requires “common elements” among problems.

Educational Implications

The behavioral emphasis on breaking down complex tasks into subskills that are taught separately is very common in American schools today.

The behavioral approaches to instruction, such as programmed instruction, are outcome based and emphasize small step size, overt responses, and frequent reinforcement of responses. From the behavioral viewpoint, the learner responds to a stimulus during instruction. Through reinforcement, successive approximations of the response are transformed into desired behaviors. Only the overt response is accepted, while the learner’s thought is virtually ignored. Learning is understood to be the result of a causal link between instructional stimuli and student responses, which are strengthened or weakened through reinforcement.

Behavioral teaching and learning tend to focus on skills that will be used later. You learn facts about American history, for example, because it is assumed that knowing those facts will make you a better citizen when you are an adult. You learn basic mathematics and computational skills because you may need them when you get a job. Behavioral learning does not, however, generally ask you to actually put the skills or knowledge you learn into use in a “real” or “authentic” situation. That will come later when you graduate and get a job.

These behavioral principles are critical to the effectiveness of a computer program because they influence the learning events of a lesson. The behavior theorists give a great deal of attention to individual responses during interactions with computers. Behaviorists favor software designed for drill and practice and tutorial instruction. Drill-and-practice and tutorial applications have their roots in the early works on teaching machines and programmed instruction.

During computer-assisted learning, the effectiveness of a program depends on the internal responses to a stimulus, the senses used, and the ease of use of the computer so as to minimize distractions. Table 3 compares the basic behavioral roots, the general implications of the roots to general instructional design, and the implications of the behavioral roots to instructional technology.

What are the strengths and weaknesses of using the behavior approach to instructional design? Mergel (1998) pointed out the weakness that the learners may find themselves in a situation where the stimulus for the correct response does not occur; therefore, he or she cannot respond. For example, a worker who has been conditioned to respond to a certain cue at work stops production when an anomaly occurs because of lack of understanding of the system. The strength of using the behavior approach to instructional design, pointed out by Mergel, is that the learner is focused on a clear goal and can respond automatically to the cues of that goal. For example, World War II pilots were conditioned to react to silhouettes of enemy planes, a response that, one would hope, became automatic (1998). There were also researchers who questioned the breaking down of subject material into small parts, believing that it would lead away from an understanding of the “whole” (Saettler, 1990).

Table 3. A comparison of basic behavioral roots, the general implications of the roots to general instructional design, and the implications of the behavioral roots to instructional technology

Behavioral Roots

Implications to General Instructional Design

Implications to Instructional Technology

1) Skills and information to be learned are broken down into small units.

There must be definite goals to accomplish.

The computer program can provide these attributes.

• Practice should take the form of question-answer (stimulus-response) frames that expose the student to the subject in gradual steps.

• Information should be presented in small amounts so that responses can be reinforced (“shaping”).

• The first level of a program must be mastered before the learner can continue on to the next level.

• The difficulty of the questions is arranged so the response is always correct and hence results in a positive reinforcement.

• The content is divided into small modules or units.

• Prerequisites are textually displayed.

• Prerequisites are graphically displayed.

2) Students’ work is checked regularly and feedback is provided as well as encouragement (reinforcement).

Correct responses must be followed with immediate feedback or reinforcement. Stimuli can be positive or negative.

The sensory perception in learning is very important.

The computer program can provide these attributes.

• Depending on the function of the program, there can be positive or negative feedback.

• The stimuli can be changed in a program if the response is not satisfactory.

• It ensures that good performance in the lesson is paired with secondary reinforcers such as praise, prizes, and good scores.

• The sound, animation, audio, and color in a program are important to give immediate feedback and have to do with the sensory perception of the learner.

• Positive feedback can be given regularly. The learner can stop the program immediately if he or she does not like it.

3) Teaching is “out of context.” Behavioral instruction often takes the material out of the context in which it will be used.

Students can be taught best when the focus is directly on the content to be taught.

The computer program can provide these attributes.

• It divides content into small modules or units.

• The computer program plays the role of a teacher in posting the problems to be solved.

• The content is textually displayed and reviewed.

• The content is graphically displayed and reviewed.

4) Instruction is direct or “teacher centered.” Teachers must direct the learning process.

The learning environment must be controlled.

The computer program can provide these attributes.

• Lectures, tutorials, drills, demonstrations, and other forms of controlled teaching guide the design of the software.

• The teacher or program controls the presentation sequence and display rate.

• The content is textually displayed.

• The content is graphically displayed.

• Attention-focusing devices such as animation, sound, pointers, and so forth are used.

• The computer program plays the role of a teacher in posting the problems to be solved.

5) Learning is passive.

Learning is a change in behavior in which a stimulus in the environment has an influence on the learning and behavior of the learner.

The computer program can provide these attributes.

• The purpose of the software is stated clearly.

• A computer program can be very linear.

• The teacher or program controls the presentation sequence and display rate.

• Attention-focusing devices, such as animation, sound, pointers, and so forth, are used.

6) Students must learn the correct response.

The exercise can be checked once it is completed.

The computer program can provide these attributes.

• The exercise can be checked by the computer once it is completed.

• Problems can be randomized, and the same problem can be repeated until the learner answers correctly.

• It repeats content not mastered.

• It displays the score or correct answers.

• It helps screen for incorrect answers.

• It provides outcome guides coordinated with performance tasks (e.g., activity check sheets).

• It provides answer keys.

• It requires that the learner make a response for every frame and receive immediate feedback.


7) Learning requires an external reward.

Extrinsic motivation plays a critical role in behaviorism.

The computer program can provide these attributes.

• Extrinsically motivated, the learner wishes to get a better score.

8) Knowledge is a matter of remembering information.

Learning is taking place through memorizing.

The computer program can provide these attributes.

• A drill program with repetition of problems is emphasized.

• It reviews prerequisite content and vocabulary.

9) Understanding is a matter of seeing existing patterns.

A learner has learned something when a change in his or her behavior can be observed.

The computer program can provide these attributes.

• The scoring and evaluation functions of a program are two ways to decide if the learner has learned.

• The progress of the learner can be monitored by the computer program.

• It conducts performance tests.

• It provides questions on new content.

• It allows a limited response time for memory-level questions.

• It uses computers for context-rich testing.

10) Applications require “transfer of training,” which requires “common elements” among problems.

A transfer of knowledge is taking place.

The computer program can provide these attributes.

• It cross-references content to similar examples.

• The computer program can provide additional information or examples.

Neo-Behavioral Theories

Classic behaviorism suggests that human nature is neither inherently positive nor negative but, rather, is shaped by influences from the person’s environment (Ormrod, 1999). In this view, learning emphasizes the attainment of measurable objectives that are achieved through a systematic instructional design process. Penland (1981) suggested a variation on the theme of behaviorism that is called a neo-behaviorist perspective. Neo-behaviorism departs from classic behaviorism in that, while the latter is concerned exclusively with observable behaviors, the former acknowledges the importance of self-direction that is internal to the individual. Thus, whereas classical behaviorism is only concerned with the environment as a determinant of behavior, neo-behaviorism stresses the interaction of the individual and environment.

Key Terms

Classical Conditioning: The Russian physiologist Ivan Petrovich Pavlov is the precursor to behavioral science. He is best known for his work in classical conditioning or stimulus substitution. Pavlov’s experiment involved food, a dog, and a bell. His work inaugurated the era of S-R psychology. Pavlov placed meat powder (an unconditioned stimulus) on a dog’s tongue, which caused the dog to automatically salivate (the unconditioned response). The unconditioned responses are natural and not learned. On a series of subsequent trials, Pavlov sounded a bell at the same time he gave the meat powder to the dog. When the food was accompanied by the bell many times, Pavlov found that he could withhold the food, and the bell’s sound itself would cause the dog to salivate.

Computer-Assisted Instruction (CAI): CAI was first used in education and training during the 1950s, with early work done by IBM. The mediation of instruction entered the computer age in the 1960s, when Patrick Suppes and Richard Atkinson conducted their initial investigations into CAI in mathematics and reading. Developed through a systematic analysis of the curriculum, Suppes’ (1979) CAI provided learner feedback, branching, and response tracking. CAI grew rapidly in the 1960s, when federal funding for research and development in education and in industrial laboratories became available.
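The three CAI attributes just mentioned can be illustrated with a minimal sketch: a drill that judges each answer at once (immediate feedback), re-presents a missed item in remedial form (branching), and logs every response (response tracking). The items, the remedial-item rule, and the scripted learner below are all invented for illustration, not taken from Suppes’ systems.

```python
# Minimal sketch of three CAI attributes: immediate feedback,
# branching, and response tracking. All items and rules here are
# hypothetical.

def run_drill(items, respond):
    """Present each (prompt, answer) item, judge the response at once,
    branch to a remedial item on an error, and log every response."""
    log = []  # response tracking: (prompt, response, was_correct)
    for prompt, answer in items:
        response = respond(prompt)
        log.append((prompt, response, response == answer))
        if response != answer:
            # Branching: re-present the item in remedial form.
            remedial = "(remedial) " + prompt
            retry = respond(remedial)
            log.append((remedial, retry, retry == answer))
    return log

# A scripted "learner" standing in for keyboard input.
scripted = {"3 + 4": "7", "6 x 7": "40", "(remedial) 6 x 7": "42"}
history = run_drill([("3 + 4", "7"), ("6 x 7", "42")],
                    lambda prompt: scripted.get(prompt, ""))
```

Here the learner misses “6 x 7” once, so the log records three responses: the correct first item, the error, and the corrected remedial attempt.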

Instructional Objectives: A description of a performance you want learners to be able to exhibit before you consider them competent. An objective describes an intended result of instruction rather than the process of instruction itself.

Keller Plan: The Keller Plan (sometimes called the Keller Method or the personalized system of instruction, PSI), individually prescribed instruction (IPI), the program for learning in accordance with needs (PLAN), and individually guided education are all examples of individualized instruction. The Keller Plan was developed by F. S. Keller with his colleague J. Gilmour Sherman and two psychologists at the University of Brasília. It is derived from behaviorist reinforcement psychology, with influences from teaching machines and programmed instruction.

Operant Conditioning: Skinner contributed much to the study of operant conditioning, which is a change in the probability of a response due to an event that followed the initial response. Skinner’s theory is based on the idea that learning is a function of change in behavior. When a particular S-R pattern is reinforced (rewarded), the individual is conditioned to respond; changes in behavior are the result of an individual’s response to events (stimuli) that occur in the environment. The principles and mechanisms of Skinner’s operant conditioning include positive reinforcement (reward), negative reinforcement, punishment, and extinction (nonreinforcement).
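The core mechanism (response probability rising under reinforcement and decaying under punishment or extinction) can be sketched with a toy update rule. The learning rate and the numbers below are illustrative assumptions, not values from the conditioning literature.

```python
# Toy model of operant conditioning: the probability of emitting a
# response moves toward 1 when the response is reinforced, and toward
# 0 when it is punished or no longer reinforced (extinction).
# The rate of 0.2 is an illustrative assumption.

def update(p, event, rate=0.2):
    """Return the new response probability after one trial."""
    if event == "reinforce":          # positive or negative reinforcement
        return p + rate * (1 - p)     # probability climbs toward 1
    return p - rate * p               # punishment or extinction: decays toward 0

p = 0.1                               # initially rare response
for _ in range(10):                   # conditioning phase
    p = update(p, "reinforce")
conditioned = p
for _ in range(10):                   # extinction phase
    p = update(p, "extinguish")
extinguished = p
```

After ten reinforced trials the response probability is high; after ten unreinforced trials it falls back near its starting level.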

Programmed Instruction: Sometimes called programmed learning, programmed instruction is typically delivered through a book or workbook that employs the principles Skinner proposed in his design of the teaching machine, with special emphasis on task analysis and reinforcement for correct responses.

Schedules of Reinforcement: Schedules of reinforcement govern the contingency between responses and reinforcement, and thus their effects on establishing and maintaining behavior. Schedules that depend on the number of responses made are called ratio schedules; the ratio of the schedule is the number of responses required per reinforcement. If the contingency between responses and reinforcement depends on time instead, the schedule is called an interval schedule.
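The two schedule families described above can be sketched as follows: a fixed-ratio schedule reinforces every n-th response, while a fixed-interval schedule reinforces the first response after a set time has elapsed. Time is simulated in discrete ticks, and the specific parameters are illustrative assumptions.

```python
# Sketch of ratio vs. interval reinforcement schedules.
# Parameters (5 responses, 10 ticks) are illustrative.

class FixedRatio:
    """Reinforce every n-th response, regardless of timing."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self, tick):          # tick is ignored by ratio schedules
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True               # reinforcement delivered
        return False

class FixedInterval:
    """Reinforce the first response once the interval has elapsed."""
    def __init__(self, interval):
        self.interval, self.last = interval, 0
    def respond(self, tick):
        if tick - self.last >= self.interval:
            self.last = tick
            return True               # reinforcement delivered
        return False

fr = FixedRatio(5)
fi = FixedInterval(10)
# One response per tick for 50 ticks:
fr_rewards = sum(fr.respond(t) for t in range(1, 51))
fi_rewards = sum(fi.respond(t) for t in range(1, 51))
```

With one response per tick, the ratio schedule pays off on response count (10 reinforcements in 50 responses) while the interval schedule pays off on elapsed time (5 reinforcements in 50 ticks), which is why the two produce different response patterns.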

Skinner Box: Most of Skinner’s research centered on the Skinner box, an experimental space containing one or more operanda, such as a lever that may be pressed by a rat, along with various sources of stimuli. Within it, Skinner studied operant conditioning: the change in the probability of a response due to an event that followed the initial response, with changes in behavior resulting from the individual’s responses to events (stimuli) in the environment. Early in his career, Skinner experimented with animals such as pigeons and rats; he later turned his research interest from animals to humans.

Teaching Machines: B. F. Skinner is the most recent, and probably the best-known, advocate of teaching machines; other contributors to this movement include Pressey and Crowder. In the 1920s, noticing that objective tests were becoming common in schools, Pressey began experimenting with a machine for testing and scoring in his introductory psychology courses, and he soon recognized its potential for teaching and learning. Despite his confidence that the machine he developed would lead to an “industrial revolution in education,” this type of machine was never widely used.
