
Classical Conditioning

by Saul McLeod, published 2008, updated 2014

Behaviorism as a movement in psychology appeared in 1913, when John Broadus Watson published the classic article "Psychology as the behaviorist views it".
John Watson proposed that the process of classical conditioning (based on Pavlov's observations) was able to explain all aspects of human psychology. Everything from speech to emotional responses was simply a pattern of stimulus and response. Watson denied completely the existence of the mind or consciousness.
Watson believed that all individual differences in behavior were due to different experiences of learning. He famously said:
"Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select - doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors" (Watson, 1924, p. 104).

Classical Conditioning Examples


Classical conditioning theory involves learning a new behavior via the process of
association. In simple terms two stimuli are linked together to produce a new learned
response in a person or animal. There are three stages of classical conditioning. At
each stage the stimuli and responses are given special scientific terms:

Stage 1: Before Conditioning:


In this stage, the unconditioned stimulus (UCS) produces an unconditioned response (UCR) in an organism. In basic terms, this means that a stimulus in the environment has produced a behavior / response which is unlearned (i.e. unconditioned) and therefore is a natural response which has not been taught. In this respect no new behavior has been learned yet.
For example, a stomach virus (UCS) would produce a response of nausea (UCR). In
another example a perfume (UCS) could create a response of happiness or desire
(UCR).
This stage also involves another stimulus which has no effect on a person and is called
the neutral stimulus (NS). The NS could be a person, object, place, etc. The neutral
stimulus in classical conditioning does not produce a response until it is paired with
the unconditioned stimulus.

Stage 2: During Conditioning:


During this stage a stimulus which produces no response (i.e. neutral) is associated
with the unconditioned stimulus at which point it now becomes known as
the conditioned stimulus (CS).
For example, a stomach virus (UCS) might be associated with eating a certain food such as chocolate (CS). Also, perfume (UCS) might be associated with a specific person (CS).
Often during this stage, the UCS must be associated with the CS on a number of occasions, or trials, for learning to take place. However, one-trial learning can happen on certain occasions when it is not necessary for an association to be strengthened over time (such as being sick after food poisoning or drinking too much alcohol).

Stage 3: After Conditioning:


Now the conditioned stimulus (CS) has been associated with the unconditioned
stimulus (UCS) to create a new conditioned response (CR).
For example, a person (CS) who has been associated with nice perfume (UCS) is now found attractive (CR). Also, chocolate (CS) which was eaten before a person was sick with a virus (UCS) now produces a response of nausea (CR).
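The three stages above can be expressed as a toy simulation. This is a minimal sketch, not a psychological model: the stimulus names, the `Organism` class, and the assumed three-trial learning threshold are all invented for illustration.

```python
# Toy model of the three stages of classical conditioning.
# The class, stimulus names, and 3-trial threshold are illustrative
# assumptions, not part of the theory itself.

class Organism:
    def __init__(self, reflexes):
        # Stage 1: unlearned UCS -> UCR mappings (natural reflexes).
        self.reflexes = dict(reflexes)
        self.associations = {}   # learned CS -> UCS links
        self.pairings = {}       # how often each NS was paired with a UCS
        self.trials_needed = 3   # assumed number of trials for learning

    def pair(self, neutral, ucs):
        # Stage 2: present the neutral stimulus together with the UCS.
        self.pairings[neutral] = self.pairings.get(neutral, 0) + 1
        if self.pairings[neutral] >= self.trials_needed:
            self.associations[neutral] = ucs  # the NS has become a CS

    def respond(self, stimulus):
        # Stage 3: a CS now evokes the response of its paired UCS (the CR).
        if stimulus in self.reflexes:
            return self.reflexes[stimulus]                      # UCR
        if stimulus in self.associations:
            return self.reflexes[self.associations[stimulus]]   # CR
        return None  # still neutral: no response

albert = Organism({"loud noise": "fear"})
print(albert.respond("white rat"))   # None: the rat is still neutral
for _ in range(3):
    albert.pair("white rat", "loud noise")
print(albert.respond("white rat"))   # fear: the rat is now a CS
```

Loosely mirroring the Little Albert study described below, the neutral stimulus only elicits the response after repeated pairing with the unconditioned stimulus.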

Little Albert Experiment (Phobias)

Ivan Pavlov showed that classical conditioning applied to animals. Did it also apply
to humans? In a famous (though ethically dubious) experiment, Watson and Rayner
(1920) showed that it did.
Little Albert was a 9-month-old infant who was tested on his reactions to various stimuli. He was shown a white rat, a rabbit, a monkey and various masks. Albert, described as "on the whole stolid and unemotional", showed no fear of any of these stimuli. However, what did startle him and cause him to be afraid was a hammer being struck against a steel bar behind his head. The sudden loud noise would cause little Albert to burst into tears.
When Little Albert was just over 11 months old the white rat was presented and
seconds later the hammer was struck against the steel bar. This was done 7 times over
the next 7 weeks and each time Little Albert burst into tears. By now little Albert only
had to see the rat and he immediately showed every sign of fear. He would cry
(whether or not the hammer was hit against the steel bar) and he would attempt to
crawl away.
In addition, Watson and Rayner found that Albert developed phobias of objects which shared characteristics with the rat, including the family dog, a fur coat, some cotton wool and a Father Christmas mask! This process is known as generalization.

Watson and Rayner had shown that classical conditioning could be used to create a
phobia. A phobia is an irrational fear, i.e. a fear that is out of proportion to the danger.
Over the next few weeks and months Little Albert was observed and 10 days after
conditioning his fear of the rat was much less marked. This dying out of a learned
response is called extinction. However, even after a full month it was still evident, and
the association could be renewed by repeating the original procedure a few times.

Classical Conditioning in the Classroom


The implications of classical conditioning in the classroom are less important than those of operant conditioning, but there is still a need for teachers to try to make sure that students associate positive emotional experiences with learning.
If a student associates negative emotional experiences with school, then this can obviously have bad results, such as creating a school phobia.
For example, if a student is bullied at school they may learn to associate the school with fear. It could also explain why some students show a particular dislike of certain subjects that continues throughout their academic career. This could happen if a student is humiliated or punished in class by a teacher.

Critical Evaluation
Classical conditioning emphasizes the importance of learning from the environment,
and supports nurture over nature. However, it is limiting to describe behavior solely in
terms of either nature or nurture, and attempts to do this underestimate the complexity
of human behavior. It is more likely that behavior is due to an interaction between
nature (biology) and nurture (environment).
A strength of classical conditioning theory is that it is scientific. This is because it is based on empirical evidence from controlled experiments. For

example, Pavlov (1902) showed how classical conditioning can be used to make a dog
salivate to the sound of a bell.
Classical conditioning is also a reductionist explanation of behavior. This is because
complex behavior is broken down into smaller stimulus - response units of behavior.
Supporters of a reductionist approach say that it is scientific. Breaking complicated
behaviors down to small parts means that they can be scientifically tested. However,
some would argue that the reductionist view lacks validity. Thus, whilst reductionism
is useful, it can lead to incomplete explanations.
A final criticism of classical conditioning theory is that it is deterministic. This means that it does not allow for any degree of free will in the individual. Accordingly, a person has no control over the reactions they have learned from classical conditioning, such as a phobia.
The deterministic approach also has important implications for psychology as a
science. Scientists are interested in discovering laws which can then be used to predict
events. However, by creating general laws of behavior, deterministic psychology
underestimates the uniqueness of human beings and their freedom to choose their own
destiny.

References
Pavlov, I. P. (1897/1902). The work of the digestive glands. London: Griffin.
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158-177.
Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1-14.
Watson, J. B. (1924). Behaviorism. New York: People's Institute Publishing Company.

To summarize, classical conditioning (later developed by John Watson) involves learning to associate an unconditioned stimulus that already brings about a particular response (i.e. a reflex) with a new (conditioned) stimulus, so that the new stimulus brings about the same response.

Pavlov developed some rather unfriendly technical terms to describe this process. The
unconditioned stimulus (or UCS) is the object or event that originally produces the
reflexive / natural response.
The response to this is called the unconditioned response (or UCR). The neutral
stimulus (NS) is a new stimulus that does not produce a response.

Once the neutral stimulus has become associated with the unconditioned stimulus, it
becomes a conditioned stimulus (CS). The conditioned response (CR) is the response
to the conditioned stimulus.

References
Pavlov, I. P. (1897/1902). The work of the digestive glands. London: Griffin.
Pavlov, I. P. (1928). Lectures on conditioned reflexes. (Translated by W.H. Gantt)
London: Allen and Unwin.
Pavlov, I. P. (1955). Selected works. Moscow: Foreign Languages Publishing House.

Skinner - Operant Conditioning


by Saul McLeod published 2007, updated 2015

By the 1920s, John B. Watson had left academic psychology and other behaviorists were becoming influential, proposing new forms of learning other than classical conditioning. Perhaps the most important of these was Burrhus Frederic Skinner, although, for obvious reasons, he is more commonly known as B.F. Skinner.
Skinner's views were slightly less extreme than those of Watson (1913). Skinner believed that we do have such a thing as a mind, but that it is simply more productive to study observable behavior rather than internal mental events.
The work of Skinner was rooted in a view that classical conditioning was far too
simplistic to be a complete explanation of complex human behavior. He believed that
the best way to understand behavior is to look at the causes of an action and its
consequences. He called this approach operant conditioning.

Operant Conditioning deals with operants - intentional actions that have an effect on
the surrounding environment. Skinner set out to identify the processes which made
certain operant behaviours more or less likely to occur.

BF Skinner: Operant Conditioning


Skinner is regarded as the father of Operant Conditioning, but his work was based on Thorndike's (1905) law of effect. Skinner introduced a new term into the Law of Effect: reinforcement. Behavior which is reinforced tends to be repeated (i.e. strengthened); behavior which is not reinforced tends to die out or be extinguished (i.e. weakened).
Skinner (1948) studied operant conditioning by conducting experiments using animals which he placed in a 'Skinner Box', which was similar to Thorndike's puzzle box.

B.F. Skinner (1938) coined the term operant conditioning; it means, roughly, the changing of behavior by the use of reinforcement given after the desired response. Skinner identified three types of responses, or operants, that can follow behavior:

Neutral operants: responses from the environment that neither increase nor
decrease the probability of a behavior being repeated.
Reinforcers: Responses from the environment that increase the probability of a
behavior being repeated. Reinforcers can be either positive or negative.
Punishers: Responses from the environment that decrease the likelihood of a
behavior being repeated. Punishment weakens behavior.
We can all think of examples of how our own behavior has been affected by
reinforcers and punishers. As a child you probably tried out a number of behaviors
and learned from their consequences.
For example, if when you were younger you tried smoking at school, and the chief
consequence was that you got in with the crowd you always wanted to hang out with,
you would have been positively reinforced (i.e. rewarded) and would be likely to
repeat the behavior.
If, however, the main consequence was that you were caught, caned, suspended from
school and your parents became involved you would most certainly have been
punished, and you would consequently be much less likely to smoke now.

Positive Reinforcement
Skinner showed how positive reinforcement worked by placing a hungry rat in his
Skinner box. The box contained a lever on the side and as the rat moved about the box
it would accidentally knock the lever. Immediately it did so a food pellet would drop
into a container next to the lever.
The rats quickly learned to go straight to the lever after a few times of being put in the
box. The consequence of receiving food if they pressed the lever ensured that they
would repeat the action again and again.

Positive reinforcement strengthens a behavior by providing a consequence an individual finds rewarding. For example, if your teacher gives you £5 each time you complete your homework (i.e. a reward) you will be more likely to repeat this behavior in the future, thus strengthening the behavior of completing your homework.

Negative Reinforcement
The removal of an unpleasant reinforcer can also strengthen behavior. This is known
as negative reinforcement because it is the removal of an adverse stimulus which is
rewarding to the animal or person. Negative reinforcement strengthens behavior
because it stops or removes an unpleasant experience.
For example, if you do not complete your homework, you give your teacher £5. You will complete your homework to avoid paying £5, thus strengthening the behavior of completing your homework.
Skinner showed how negative reinforcement worked by placing a rat in his Skinner
box and then subjecting it to an unpleasant electric current which caused it some
discomfort. As the rat moved about the box it would accidentally knock the lever.
Immediately it did so the electric current would be switched off. The rats quickly
learned to go straight to the lever after a few times of being put in the box. The
consequence of escaping the electric current ensured that they would repeat the action
again and again.
In fact Skinner even taught the rats to avoid the electric current by turning on a light
just before the electric current came on. The rats soon learned to press the lever when
the light came on because they knew that this would stop the electric current being
switched on.
These two learned responses are known as Escape Learning and Avoidance Learning.

Punishment (weakens behavior)

Punishment is defined as the opposite of reinforcement since it is designed to weaken or eliminate a response rather than increase it. It is an aversive event that decreases the behavior that it follows.
Like reinforcement, punishment can work either by directly applying an unpleasant stimulus, like a shock, after a response, or by removing a potentially rewarding stimulus, for instance, deducting someone's pocket money to punish undesirable behavior.
Note: It is not always easy to distinguish between punishment and negative
reinforcement.
There are many problems with using punishment, such as:
Punished behavior is not forgotten, it's suppressed - behavior returns when
punishment is no longer present.
Causes increased aggression - shows that aggression is a way to cope with
problems.
Creates fear that can generalize to undesirable behaviors, e.g., fear of school.
Does not necessarily guide toward desired behavior - reinforcement tells you
what to do, punishment only tells you what not to do.

Schedules of Reinforcement
Imagine a rat in a Skinner box. In operant conditioning if no food pellet is delivered
immediately after the lever is pressed then after several attempts the rat stops pressing
the lever (how long would someone continue to go to work if their employer stopped
paying them?). The behavior has been extinguished.
Behaviorists discovered that different patterns (or schedules) of reinforcement had different effects on the speed of learning and on extinction. Ferster and Skinner (1957) devised different ways of delivering reinforcement, and found that this had effects on:

1. The Response Rate - The rate at which the rat pressed the lever (i.e. how
hard the rat worked).
2. The Extinction Rate - The rate at which lever pressing dies out (i.e. how
soon the rat gave up).

Skinner found that the type of reinforcement which produces the slowest rate of
extinction (i.e. people will go on repeating the behavior for the longest time without
reinforcement) is variable-ratio reinforcement. The type of reinforcement which has
the quickest rate of extinction is continuous reinforcement.

(A) Continuous Reinforcement


An animal/human is positively reinforced every time a specific behaviour occurs, e.g.
every time a lever is pressed a pellet is delivered and then food delivery is shut off.
Response rate is SLOW

Extinction rate is FAST

(B) Fixed Ratio Reinforcement


Behavior is reinforced only after the behavior occurs a specified number of times, e.g. one reinforcement is given after every fifth correct response. For example, a child receives a star for every five words spelt correctly.
Response rate is FAST
Extinction rate is MEDIUM

(C) Fixed Interval Reinforcement


One reinforcement is given after a fixed time interval providing at least one correct
response has been made. An example is being paid by the hour. Another example
would be every 15 minutes (half hour, hour, etc.) a pellet is delivered (providing at
least one lever press has been made) then food delivery is shut off.
Response rate is MEDIUM
Extinction rate is MEDIUM

(D) Variable Ratio Reinforcement


Behavior is reinforced after an unpredictable number of times. Examples include gambling and fishing.
Response rate is FAST
Extinction rate is SLOW (very hard to extinguish because of unpredictability)

(E) Variable Interval Reinforcement


Providing one correct response has been made, reinforcement is given after an
unpredictable amount of time has passed, e.g. on average every 5 minutes. An
example is a self-employed person being paid at unpredictable times.
Response rate is FAST
Extinction rate is SLOW
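The five schedules above can be contrasted with a small sketch. This is a hedged illustration only: the particular ratio (5), interval (15 minutes), and random seeds are arbitrary example values, not parameters from Skinner's experiments.

```python
import random

# Each function decides whether the nth response, made at time t
# (minutes), earns reinforcement under one schedule. The ratio and
# interval values are arbitrary examples.

def continuous(n, t, state):
    return True  # every single response is reinforced

def fixed_ratio(n, t, state, ratio=5):
    return n % ratio == 0  # every 5th response

def fixed_interval(n, t, state, interval=15):
    # first response after each 15-minute interval has elapsed
    if t - state.get("last", 0) >= interval:
        state["last"] = t
        return True
    return False

def variable_ratio(n, t, state, mean_ratio=5, rng=random.Random(0)):
    # reinforced after an unpredictable number of responses,
    # averaging one reward per 5 responses
    return rng.random() < 1 / mean_ratio

def variable_interval(n, t, state, mean_interval=5, rng=random.Random(0)):
    # reinforced after an unpredictable time, averaging 5 minutes
    due = state.setdefault("due", rng.expovariate(1 / mean_interval))
    if t >= due:
        state["due"] = t + rng.expovariate(1 / mean_interval)
        return True
    return False

# 20 lever presses, one per minute, under a fixed-ratio-5 schedule:
state = {}
rewards = [fixed_ratio(n, n, state) for n in range(1, 21)]
print(sum(rewards))  # 4 pellets: after presses 5, 10, 15, 20
```

Swapping in `variable_ratio` makes the pellet count unpredictable from press to press, which is exactly why (as noted above) behavior on that schedule is so hard to extinguish.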

Behavior Shaping
A further important contribution made by Skinner (1951) is the notion of behaviour shaping through successive approximation. Skinner argues that the principles of operant conditioning can be used to produce extremely complex behaviour if rewards and punishments are delivered in such a way as to move an organism closer and closer to the desired behaviour each time.
In order to do this, the conditions (or contingencies) required to receive the reward
should shift each time the organism moves a step closer to the desired behaviour.
According to Skinner, most animal and human behaviour (including language) can be
explained as a product of this type of successive approximation.
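The shifting-contingency idea can be sketched as a toy loop. The target value, step size, and the random "organism" below are invented for the example; the only point carried over from the text is that the criterion for earning the reward tightens each time the organism moves a step closer to the target.

```python
import random

# Shaping by successive approximation: reinforce behavior that meets
# the current criterion, then shift the criterion toward the target.
# Target, step size, and the behavior model are illustrative only.

def shape(target=100, step=10, trials=200, rng=random.Random(1)):
    criterion = 0   # contingency required to earn the reward
    best = 0.0      # closest approximation produced so far
    for _ in range(trials):
        # behavior varies around the best response learned so far
        behavior = best + rng.uniform(-5, 15)
        if behavior >= criterion:          # meets current contingency
            best = max(best, behavior)     # this response is reinforced
            criterion = min(target, criterion + step)  # shift criterion
    return best, criterion

best, criterion = shape()
print(criterion)  # the contingency has climbed all the way to 100
```

Rewarding only the target behavior from the start would almost never reinforce anything; moving the criterion stepwise is what lets a complex behavior be built out of small, reachable improvements.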

Behavior Modification
Behavior modification is a set of therapies / techniques based on operant conditioning
(Skinner, 1938, 1953). The main principle comprises changing environmental events
that are related to a person's behavior. For example, the reinforcement of desired
behaviors and ignoring or punishing undesired ones.
This is not as simple as it sounds: always reinforcing desired behavior, for example, is basically bribery.
There are different types of positive reinforcement. Primary reinforcement is when a reward strengthens a behavior by itself. Secondary reinforcement is when something strengthens a behavior because it leads to a primary reinforcer.
Examples of behavior modification therapy include token economy and behavior shaping.

Token Economy

Token economy is a system in which targeted behaviors are reinforced with tokens
(secondary reinforcers) and later exchanged for rewards (primary reinforcers).
Tokens can be in the form of fake money, buttons, poker chips, stickers, etc. While the
rewards can range anywhere from snacks to privileges or activities.
Token economy has been found to be very effective in managing psychiatric patients. However, the patients can become over-reliant on the tokens, making it difficult for them to adjust to society once they leave the prison, hospital, etc.
Teachers also use token economy at primary school by giving young children stickers
to reward good behavior.
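The token-economy mechanism, secondary reinforcers exchanged later for primary ones, can be sketched in a few lines. The behaviors, token values, and reward prices below are invented for illustration.

```python
# Toy token economy: target behaviors earn tokens (secondary
# reinforcers), which are later exchanged for rewards (primary
# reinforcers). All names, values, and prices are invented.

EARNINGS = {"made bed": 1, "attended therapy": 2, "helped staff": 3}
PRICES = {"snack": 2, "tv time": 4, "day trip": 10}

class Patient:
    def __init__(self):
        self.tokens = 0

    def perform(self, behavior):
        # Reinforce a target behavior immediately with tokens.
        self.tokens += EARNINGS.get(behavior, 0)

    def exchange(self, reward):
        # Trade tokens for a primary reinforcer, if affordable.
        if self.tokens >= PRICES[reward]:
            self.tokens -= PRICES[reward]
            return reward
        return None

p = Patient()
for b in ["made bed", "attended therapy", "helped staff"]:
    p.perform(b)
print(p.tokens)               # 6 tokens earned
print(p.exchange("tv time"))  # tv time (2 tokens remain)
```

The token's value is purely learned: it reinforces behavior only because of what it can be exchanged for, which is also why the behavior can collapse once the exchange system is withdrawn.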

Educational Applications
In the conventional learning situation operant conditioning applies largely to issues of
class and student management, rather than to learning content. It is very relevant to
shaping skill performance.
A simple way to shape behavior is to provide feedback on learner performance, e.g.
compliments, approval, encouragement, and affirmation. A variable-ratio schedule produces the highest response rate for students learning a new task, whereby initially reinforcement (e.g. praise) occurs at frequent intervals, and as the performance improves reinforcement occurs less frequently, until eventually only exceptional outcomes are reinforced.
For example, if a teacher wanted to encourage students to answer questions in class
they should praise them for every attempt (regardless of whether their answer is
correct). Gradually the teacher will only praise the students when their answer is
correct, and over time only exceptional answers will be praised.
Unwanted behaviors, such as tardiness and dominating class discussion can be
extinguished through being ignored by the teacher (rather than being reinforced by
having attention drawn to them).

Knowledge of success is also important as it motivates future learning. However, it is important to vary the type of reinforcement given, so that the behavior is maintained. This is not an easy task, as the teacher may appear insincere if he/she thinks too much about the way to behave.

Operant Conditioning Summary


Looking at Skinner's classic studies of pigeon and rat behavior, we can identify some of the major assumptions of the behaviorist approach.
Psychology should be seen as a science, to be studied in a scientific manner. Skinner's
study of behavior in rats was conducted under carefully controlled laboratory conditions.
Behaviorism is primarily concerned with observable behavior, as opposed to internal
events like thinking and emotion. Note that Skinner did not say that the rats learned to
press a lever because they wanted food. He instead concentrated on describing the easily
observed behavior that the rats acquired.
The major influence on human behavior is learning from our environment. In the
Skinner study, because food followed a particular behavior the rats learned to repeat that
behavior, e.g. operant conditioning.
There is little difference between the learning that takes place in humans and that in
other animals. Therefore research (e.g. operant conditioning) can be carried out on
animals (Rats / Pigeons) as well as on humans. Skinner proposed that the way humans
learn behavior is much the same as the way the rats learned to press a lever.

So, if your layperson's idea of psychology has always been of people in laboratories
wearing white coats and watching hapless rats try to negotiate mazes in order to get to
their dinner, then you are probably thinking of behavioral psychology.
Behaviorism and its offshoots tend to be among the most scientific of the psychological perspectives. The emphasis of behavioral psychology is on how we
learn to behave in certain ways. We are all constantly learning new behaviors and how
to modify our existing behavior. Behavioral psychology is the psychological approach
that focuses on how this learning takes place.

Critical Evaluation
Operant conditioning can be used to explain a wide variety of behaviors, from the
process of learning, to addiction and language acquisition. It also has practical
application (such as token economy) which can be applied in classrooms, prisons and
psychiatric hospitals.
However, operant conditioning fails to take into account the role of inherited
and cognitive factors in learning, and thus is an incomplete explanation of the learning
process in humans and animals.
For example, Kohler (1924) found that primates often seem to solve problems in a flash of insight rather than by trial and error learning. Also, social learning theory (Bandura, 1977) suggests that humans can learn automatically through observation rather than through personal experience.
The use of animal research in operant conditioning studies also raises the issue of
extrapolation. Some psychologists argue we cannot generalize from studies on
animals to humans as their anatomy and physiology is different from humans, and
they cannot think about their experiences and invoke reason, patience, memory or
self-comfort.

References
Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.
Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.
Kohler, W. (1924). The mentality of apes. London: Routledge & Kegan Paul.
Skinner, B. F. (1938). The Behavior of organisms: An experimental analysis. New
York: Appleton-Century.
Skinner, B. F. (1948). 'Superstition' in the pigeon. Journal of Experimental Psychology, 38, 168-172.

Skinner, B. F. (1951). How to teach animals. Freeman.


Skinner, B. F. (1953). Science and human behavior. New York: Simon and Schuster.
Thorndike, E. L. (1905). The elements of psychology. New York: A. G. Seiler.
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158-177.
