LEARNING OBJECTIVES

After reading this chapter, you should be able to

■ describe the four simple reinforcement schedules and the types of behavior they produce during reinforcement and extinction

■ give examples of reinforcement schedules from everyday life

■ explain the difference between contingency-shaped and rule-governed behavior

■ describe different theories about why there is a postreinforcement pause on fixed-ratio schedules, and explain which theory is best

■ discuss explanations of why responding is faster on variable-ratio schedules than on variable-interval schedules

■ give examples of how the principles of operant conditioning have been used in behavior modification with children and adults

Among B. F. Skinner’s many achievements, one of the most noteworthy was his experimental analysis of reinforcement schedules. A reinforcement schedule is simply a rule that states under what conditions a reinforcer will be delivered. To this point, we have mainly considered cases in which every occurrence of the operant response is followed by a reinforcer. This schedule is called continuous reinforcement (CRF), but it is only one of an infinite number of possible rules for delivering a reinforcer. In the real world, responses are sometimes, but not always, followed by reinforcers. A salesman may make many phone calls in vain for every time he succeeds in selling a magazine subscription. A typist may type dozens of pages, comprising thousands of individual keystrokes, before finally receiving payment for a completed job. A lion may make several unsuccessful attempts to catch prey before it finally obtains a meal. Recognizing that most behaviors outside the laboratory receive only intermittent reinforcement, Skinner devoted considerable effort to investigating how different schedules of reinforcement have different effects on behavior (Ferster & Skinner, 1957).
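The idea that a schedule is simply a rule for delivering reinforcers can be made concrete with a small sketch. The following is illustrative only (the function names and the ratio values are hypothetical, not laboratory procedures): each function answers the question "does this response earn a reinforcer?"

```python
import random

# A reinforcement schedule is a rule deciding whether a given
# response is followed by a reinforcer. Illustrative sketch only.

def continuous_reinforcement(response_count):
    """CRF: every occurrence of the response is reinforced."""
    return True

def fixed_ratio(response_count, ratio=10):
    """FR: reinforce every Nth response (here, every 10th)."""
    return response_count % ratio == 0

def variable_ratio(response_count, mean_ratio=10):
    """VR: reinforce unpredictably, on average every Nth response,
    like the salesman whose calls only sometimes produce a sale."""
    return random.random() < 1 / mean_ratio

# Under a fixed-ratio 10 schedule, 100 responses earn 10 reinforcers.
earned = sum(fixed_ratio(n) for n in range(1, 101))
print(earned)  # prints 10
```

Note that under the variable-ratio rule the number of responses required for each reinforcer fluctuates around the mean, whereas the fixed-ratio rule requires exactly the same count every time; this difference in predictability is central to the response patterns discussed later in the chapter.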