
Connectionism (Edward Thorndike)

Overview:
The learning theory of Thorndike represents the original S-R framework of behavioral
psychology: Learning is the result of associations forming between stimuli and responses.
Such associations or "habits" become strengthened or weakened by the nature and frequency
of the S-R pairings. The paradigm for S-R theory was trial and error learning in which certain
responses come to dominate others due to rewards. The hallmark of connectionism (like all
behavioral theory) was that learning could be adequately explained without referring to any
unobservable internal states.

Thorndike's theory consists of three primary laws: (1) law of effect - responses to a situation
which are followed by a rewarding state of affairs will be strengthened and become habitual
responses to that situation, (2) law of readiness - a series of responses can be chained together
to satisfy some goal which will result in annoyance if blocked, and (3) law of exercise -
connections become strengthened with practice and weakened when practice is discontinued.
A corollary of the law of effect was that responses that reduce the likelihood of achieving a
rewarding state (i.e., punishments, failures) will decrease in strength.

The theory suggests that transfer of learning depends upon the presence of identical elements
in the original and new learning situations; i.e., transfer is always specific, never general. In
later versions of the theory, the concept of "belongingness" was introduced; connections are
more readily established if the person perceives that stimuli or responses go together (cf.
Gestalt principles). Another concept introduced was "polarity" which specifies that
connections occur more easily in the direction in which they were originally formed than the
opposite. Thorndike also introduced the "spread of effect" idea, i.e., rewards affect not only
the connection that produced them but temporally adjacent connections as well.

Scope/Application:

Connectionism was meant to be a general theory of learning for animals and humans.
Thorndike was especially interested in the application of his theory to education including
mathematics (Thorndike, 1922), spelling and reading (Thorndike, 1921), measurement of
intelligence (Thorndike et al., 1927) and adult learning (Thorndike et al., 1928).

Example:

The classic example of Thorndike's S-R theory was a cat learning to escape from a "puzzle
box" by pressing a lever inside the box. After much trial and error behavior, the cat learns to
associate pressing the lever (S) with opening the door (R). This S-R connection is established
because it results in a satisfying state of affairs (escape from the box). The connection was
also strengthened because the S-R pairing occurred many times (the law of exercise), was
rewarded (the law of effect), and formed a single action sequence (the law of readiness).
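To make the trial-and-error account concrete, the short sketch below simulates the law of effect. It is an illustration only: the update rule, the response names, and the rates are assumptions made for this example, not values from Thorndike.

```python
import random

# Illustrative sketch of the law of effect: competing responses in one
# situation, where the rewarded response is gradually "stamped in".
strengths = {"press_lever": 1.0, "scratch_door": 1.0, "meow": 1.0}
REWARDED = "press_lever"        # assumption: only this response opens the box
STRENGTHEN, WEAKEN = 0.3, 0.05  # assumed strengthening/weakening rates

def choose(strengths):
    """Pick a response with probability proportional to its habit strength."""
    total = sum(strengths.values())
    r = random.uniform(0, total)
    for response, s in strengths.items():
        r -= s
        if r <= 0:
            return response
    return response  # floating-point edge case: fall back to the last response

for trial in range(50):
    response = choose(strengths)
    if response == REWARDED:
        strengths[response] += STRENGTHEN  # law of effect: reward strengthens
    else:
        strengths[response] = max(0.1, strengths[response] - WEAKEN)  # failures weaken

print(strengths)  # the rewarded response now dominates selection
```

Running the loop shows the rewarded connection growing in strength while the others decay, which is the "stamping in" and "stamping out" described above.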
Principles:

1. Learning requires both practice and rewards (laws of effect/exercise).

2. A series of S-R connections can be chained together if they belong to the same action sequence (law of readiness).

3. Transfer of learning occurs because of previously encountered situations.

4. Intelligence is a function of the number of connections learned.

Classical Conditioning (Ivan Pavlov)

Classical conditioning (also Pavlovian conditioning or respondent conditioning) is a form of
associative learning that was first demonstrated by Ivan Pavlov. The typical procedure for
inducing classical conditioning involves presentations of a neutral stimulus along with a
stimulus of some significance. The neutral stimulus could be any event that does not result in
an overt behavioral response from the organism under investigation; Pavlov referred to this
as a conditioned stimulus (CS). Conversely, presentation of the significant stimulus
necessarily evokes an innate, often reflexive, response; Pavlov called these the
unconditioned stimulus (US) and unconditioned response (UR), respectively. If the CS and
the US are repeatedly paired, eventually the two stimuli become associated and the organism
begins to produce a behavioral response to the CS. Pavlov called this the conditioned
response (CR).

Pavlov's experiment

Classical conditioning has been demonstrated in many species, from simple organisms such
as the sea slug Hermissenda crassicornis to humans. The original and most famous example
involved the salivary conditioning of Pavlov's dogs. During his research on the physiology of
digestion in dogs, Pavlov noticed that, rather than simply salivating in the presence of meat
powder (an innate response to food that he called the unconditioned response), the dogs
began to salivate in the presence of the lab technician who normally fed them. Pavlov called
these psychic secretions. From this observation he predicted that, if a particular stimulus in
the dog's surroundings were present when the dog was given meat powder, then that stimulus
would become associated with food and cause salivation on its own. In his initial
experiments, Pavlov used a bell to call the dogs to their food and, after a few repetitions, the
dogs started to salivate in response to the bell.

Types of classical conditioning

Forward conditioning: Learning is fastest in forward conditioning, in which the onset of the
CS precedes the onset of the US. Two common forms of forward conditioning are delay and
trace conditioning.

Delay conditioning: In delay conditioning the CS is presented and is overlapped by the
presentation of the US.

Trace conditioning: During trace conditioning the CS and US do not overlap. Instead, the CS
is presented, a stimulus-free period elapses, and then the US is presented. The stimulus-free
period is called the trace interval; it may also be called the conditioning interval.

Simultaneous conditioning: During simultaneous conditioning, the CS and US are presented
and terminated at the same time.

Backward conditioning: Backward conditioning occurs when the US is presented before the
CS. Learning under this arrangement is slow and unreliable, and the CS tends to become
inhibitory rather than excitatory.

Temporal conditioning: In temporal conditioning the US is presented at regularly timed
intervals, for instance every ten minutes. Conditioning is said to have occurred when the CR
tends to occur shortly before each US, which suggests that the organism's internal sense of
time can serve as a CS.

Unpaired conditioning: The CS and US are both presented, but never together in time; this
serves as a control procedure, since no CS-US association should form.

CS-alone extinction: After conditioning, the CS is presented repeatedly in the absence of the
US; the CR weakens and eventually disappears.
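The procedures above differ mainly in how the CS and US are arranged in time. The helper below makes those timing relations explicit; the function name, default durations, and times are hypothetical values chosen purely for illustration.

```python
def trial_timing(procedure, cs_duration=5.0, us_duration=2.0, trace_interval=3.0):
    """Return ((cs_on, cs_off), (us_on, us_off)) in seconds for one trial.

    Illustrative sketch only; real experiments vary these parameters widely.
    """
    cs = (0.0, cs_duration)
    if procedure == "delay":            # US overlaps and co-terminates with the CS
        us_on = cs_duration - us_duration
    elif procedure == "trace":          # stimulus-free gap separates CS and US
        us_on = cs_duration + trace_interval
    elif procedure == "simultaneous":   # CS and US start and end together
        return (cs, cs)
    elif procedure == "backward":       # US comes first, then the CS
        return ((us_duration, us_duration + cs_duration), (0.0, us_duration))
    else:
        raise ValueError(procedure)
    return (cs, (us_on, us_on + us_duration))

print(trial_timing("trace"))  # ((0.0, 5.0), (8.0, 10.0)): a 3 s trace interval
```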
Procedural variations

In addition to the simple procedures described above, some classical conditioning studies are
designed to tap into more complex learning processes. Some common variations are
discussed below.

Classical discrimination/reversal conditioning

In this procedure, two CSs and one US are typically used. The CSs may be the same modality
(such as lights of different intensity), or they may be different modalities (such as auditory
CS and visual CS). In this procedure, one of the CSs is designated CS+ and its presentation is
always followed by the US. The other CS is designated CSí and its presentation is never
followed by the US. After a number of trials, the organism learns to i$%&i'i()t* CS+ trials
and CSí trials such that CRs are only observed on CS+ trials.

During reversal training, the CS+ and CS− are reversed and subjects learn to suppress
responding to the previous CS+ and show CRs to the previous CS−.

Classical ISI discrimination conditioning

This is a discrimination procedure in which two different CSs are used to signal two different
interstimulus intervals. For example, a dim light may be presented 30 seconds before a US,
while a very bright light is presented 2 minutes before the US. Using this technique,
organisms can learn to perform CRs that are appropriately timed for the two distinct CSs.

Latent inhibition conditioning

In this procedure, a CS is presented several times before paired CS-US training commences.
The pre-exposure of the subject to the CS before paired training slows the rate of CR
acquisition relative to organisms that are not CS pre-exposed. Also see Latent inhibition for
applications.

Conditioned inhibition conditioning

Three phases of conditioning are typically used:

Phase 1: A CS (the CS+) is repeatedly paired with the US until CRs to the CS+ are reliably
observed.

Phase 2: Compound CS+/CS− trials are added. A second stimulus, the CS−, is presented
together with the CS+, and these compound presentations are never followed by the US,
while CS+-alone trials continue to be reinforced. The organism learns that the CS− signals
the absence of the US, and responding on compound trials declines.

Phase 3: The CS− is tested for conditioned inhibition. In a summation test, the CS− is
presented together with another excitatory CS; if the CS− has become a conditioned
inhibitor, responding to this compound is weaker than responding to the excitatory CS
presented alone.

Blocking

Main article: Blocking effect

This form of classical conditioning involves two phases.

Phase 1: A first conditioned stimulus (CS1) is paired with the US until CRs are reliably
produced.

Phase 2: A compound of the first stimulus and a new one (CS1 + CS2) is paired with the US.

Test: When CS1 and CS2 are then presented separately, the organism shows strong CRs to
CS1 but weak or absent CRs to CS2. Because CS1 already predicts the US, the added CS2
carries no new information, and the prior learning about CS1 is said to "block" learning
about CS2.
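Blocking follows naturally from the error-correction view sketched earlier: after phase 1, CS1 already predicts the US, so the compound phase generates almost no prediction error to drive learning about CS2. The run below is again a toy Rescorla-Wagner illustration with assumed parameters, not a description of any particular experiment.

```python
# Toy Rescorla-Wagner demonstration of blocking (assumed parameters).
LAMBDA, RATE = 1.0, 0.3
V1 = V2 = 0.0                    # associative strengths of CS1 and CS2

for _ in range(20):              # phase 1: CS1 alone is paired with the US
    V1 += RATE * (LAMBDA - V1)

for _ in range(20):              # phase 2: CS1 + CS2 compound paired with the US
    error = LAMBDA - (V1 + V2)   # the compound shares a single prediction error
    V1 += RATE * error
    V2 += RATE * error

print(f"V1 = {V1:.2f}, V2 = {V2:.2f}")  # V2 stays near 0: CS1 "blocks" CS2
```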

Applications

Little Albert


Main article: Little Albert experiment

John B. Watson, founder of behaviorism, demonstrated classical conditioning empirically
through experimentation using the Little Albert experiment, in which a child ("Albert") was
presented with a white rat (CS). After a control period in which the child reacted normally to
the presence of the rat, the experimenters paired the presence of the rat with a loud, jarring
noise caused by clanging two pipes together behind the child's head (US). As the trials
progressed, the child began showing signs of distress at the sight of the rat, even when
unaccompanied by the frightening noise. Furthermore, the child demonstrated generalization
of stimulus associations, and showed distress when presented with any white, furry object,
even such things as a rabbit, a dog, a fur coat, and a Santa Claus mask with hair.

Behavioral therapies


In human psychology, implications for therapies and treatments using classical conditioning
differ from operant conditioning. Therapies associated with classical conditioning are
aversion therapy, flooding and systematic desensitization.

Classical conditioning is short-term, usually requiring less time with therapists and less effort
from patients, unlike humanistic therapies. The therapies mentioned are designed
to cause either aversive feelings toward something, or to reduce unwanted fear and aversion.

Theories of classical conditioning

There are two competing theories of how classical conditioning works. The first, stimulus-
response theory, suggests that an association to the unconditioned stimulus is made with the
conditioned stimulus within the brain, but without involving conscious thought. The second,
stimulus-stimulus theory involves cognitive activity, in which the conditioned stimulus is
associated to the concept of the unconditioned stimulus, a subtle but important distinction.

Stimulus-response theory, referred to as S-R theory, is a theoretical model of behavioral
psychology that suggests humans and other animals can learn to associate a new stimulus, the
conditioned stimulus (CS), with a pre-existing stimulus, the unconditioned stimulus (US),
and can think, feel or respond to the CS as if it were actually the US.

The opposing theory, put forward by cognitive behaviorists, is stimulus-stimulus theory (S-S
theory). S-S theory is a theoretical model of classical conditioning that suggests a cognitive
component is required to understand classical conditioning and that S-R theory is an
inadequate model. S-R theory suggests that
an animal can learn to associate a conditioned stimulus (CS) such as a bell, with the
impending arrival of food termed the unconditioned stimulus, resulting in an observable
behavior such as salivation. S-S theory suggests that instead the animal salivates to the bell
because it is associated with the concept of food, which is a very fine but important
distinction.

To test this theory, psychologist Robert Rescorla undertook the following experiment.[2] Rats
learned to associate a loud noise as the unconditioned stimulus, and a light as the conditioned
stimulus. The response of the rats was to freeze and cease movement. What would happen
then if the rats were habituated to the US? S-R theory would suggest that the rats would
continue to respond to the CS, but if S-S theory is correct, they would be habituated to the
concept of a loud sound (danger), and so would not freeze to the CS. The experimental results
suggest that S-S was correct, as the rats no longer froze when exposed to the signal light.[3]
Rescorla's S-S account remains influential and is applied in everyday life.[1]

In popular culture

One of the earliest literary references to classical conditioning can be found in the comic
novel The Life and Opinions of Tristram Shandy, Gentleman (1759) by Laurence Sterne. The
narrator Tristram Shandy explains how his mother was conditioned by his father's habit of
winding up a clock before having sex with his wife:

My father ... had made it a rule for many years of his life, on the first Sunday-night of every
month throughout the whole year, to wind up a large house-clock, which we had standing on
the back-stairs head, with his own hands; ... he had likewise gradually brought some other
little family concernments to the same period. ... From an unhappy association of ideas,
which have no connection in nature, it so fell out at length, that my poor mother could never
hear the said clock wound up, but the thoughts of some other things unavoidably popped into
her head, and vice versa.
A further example is the dystopian novel A Clockwork Orange, in which the protagonist and
anti-hero, Alex, undergoes a procedure of aversion therapy: he is given a nausea-inducing
solution while being forced to watch violent acts, and is afterwards unable to commit violent
acts without bringing on similar nausea.

Operant conditioning

Operant conditioning is the use of a behavior's antecedent and/or its consequence to
influence the occurrence and form of that behavior. Operant conditioning is distinguished
from classical conditioning (also called
respondent conditioning) in that operant conditioning deals with the modification of
"voluntary behavior" or operant behavior. uperant behavior "operates" on the environment
and is maintained by its consequences, while classical conditioning deals with the
conditioning of respondent behaviors which are elicited by antecedent conditions. Behaviors
conditioned via a classical conditioning procedure are not maintained by consequences.[1]


Reinforcement, punishment, and extinction

Reinforcement and punishment, the core tools of operant conditioning, are either positive
(delivered following a response), or negative (withdrawn following a response). This creates
a total of four basic consequences, with the addition of a fifth procedure known as extinction
(i.e. no change in consequences following a response).

It's important to note that actors are not spoken of as being reinforced, punished, or
extinguished; it is the actions that are reinforced, punished, or extinguished. Additionally,
reinforcement, punishment, and extinction are not terms whose use is restricted to the
laboratory. Naturally occurring consequences can also be said to reinforce, punish, or
extinguish behavior and are not always delivered by people.

Reinforcement is a consequence that causes a behavior to occur with greater frequency.

Punishment is a consequence that causes a behavior to occur with less frequency.

Extinction is the lack of any consequence following a behavior. When a behavior is
inconsequential (producing neither favorable nor unfavorable consequences), it will occur
less frequently. When a previously reinforced behavior is no longer reinforced with either
positive or negative reinforcement, the behavior declines.
Four contexts of operant conditioning

Here the terms positive and negative are not used in their popular sense: positive refers to
addition, and negative refers to subtraction.

What is added or subtracted may be either reinforcement or punishment. Hence positive
punishment is sometimes a confusing term, as it denotes the "addition" of a stimulus or an
increase in the intensity of a stimulus that is aversive (such as spanking or an electric shock).
The four procedures are:

1. Positive reinforcement (reinforcement) occurs when a behavior (response) is followed by
a stimulus that is appetitive or rewarding, increasing the frequency of that behavior. In the
Skinner box experiment, a stimulus such as food or a sugar solution can be delivered when
the rat engages in a target behavior, such as pressing a lever.

2. Negative reinforcement (escape) occurs when a behavior (response) is followed by the
removal of an aversive stimulus, thereby increasing that behavior's frequency. In the Skinner
box experiment, negative reinforcement can be a loud noise continuously sounding inside the
rat's cage until it engages in the target behavior, such as pressing a lever, upon which the
noise is removed.

3. Positive punishment (punishment, also called punishment by contingent stimulation)
occurs when a behavior (response) is followed by an aversive stimulus, such as a shock or
loud noise, resulting in a decrease in that behavior.

4. Negative punishment (penalty, also called punishment by contingent withdrawal) occurs
when a behavior (response) is followed by the removal of a stimulus, such as taking away a
child's toy following an undesired behavior, resulting in a decrease in that behavior.
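The grid behind these four procedures can be stated mechanically: the label depends only on whether a stimulus is added or removed, and on whether the behavior becomes more or less frequent. The helper below is a hypothetical illustration of that naming scheme, not an API from any behavioral library.

```python
def classify(stimulus_change: str, behavior_effect: str) -> str:
    """Name the operant procedure from its two defining features.

    stimulus_change: "added" (positive) or "removed" (negative)
    behavior_effect: "increases" (reinforcement) or "decreases" (punishment)
    """
    sign = {"added": "positive", "removed": "negative"}[stimulus_change]
    kind = {"increases": "reinforcement", "decreases": "punishment"}[behavior_effect]
    return f"{sign} {kind}"

assert classify("added", "increases") == "positive reinforcement"    # e.g. food pellet
assert classify("removed", "increases") == "negative reinforcement"  # e.g. noise stops
assert classify("added", "decreases") == "positive punishment"       # e.g. shock
assert classify("removed", "decreases") == "negative punishment"     # e.g. toy taken away
```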

Also:

Avoidance learning is a type of learning in which a certain behavior results in the cessation
of an aversive stimulus. For example, shielding one's eyes from sunlight, or going indoors,
avoids the aversive stimulation of bright light in the eyes.

Extinction occurs when a behavior (response) that had previously been reinforced is no
longer effective. A rat that had been rewarded with food for pressing a lever stops receiving
pellets; it presses less and less often and finally ceases altogether.

Noncontingent reinforcement refers to delivery of reinforcing stimuli regardless of the
organism's behavior. It may be used in an attempt to reduce an undesired target behavior by
reinforcing multiple alternative responses while extinguishing the target response. Because
no measured behavior is identified as being strengthened, there is controversy surrounding
the use of the term noncontingent "reinforcement".

Shaping is a form of operant conditioning in which increasingly accurate approximations of
a desired response are reinforced.

Chaining links simple responses together in sequence to form a more complex response.

Thorndike's law of effect


Main article: Law of effect
Operant conditioning, sometimes called instrumental conditioning or instrumental learning,
was first extensively studied by Edward L. Thorndike (1874-1949), who observed the
behavior of cats trying to escape from home-made puzzle boxes.[5] When first constrained in
the boxes, the cats took a long time to escape. With experience, ineffective responses
occurred less frequently and successful responses occurred more frequently, enabling the cats
to escape in less time over successive trials. In his law of effect, Thorndike theorized that
successful responses, those producing satisfying consequences, were "stamped in" by the
experience and thus occurred more frequently. Unsuccessful responses, those producing
annoying consequences, were stamped out and subsequently occurred less frequently. In
short, some consequences strengthen behavior and some consequences weaken behavior.
Thorndike produced the first known learning curves through this procedure. B.F. Skinner
(1904-1990) formulated a more detailed analysis of operant conditioning based on
reinforcement, punishment, and extinction. Following the ideas of Ernst Mach, Skinner
rejected Thorndike's mediating structures required by "satisfaction" and constructed a new
conceptualization of behavior without any such references. So, while experimenting with
some homemade feeding mechanisms, Skinner invented the operant conditioning chamber,
which allowed him to measure rate of response as a key dependent variable using a
cumulative record of lever presses or key pecks.[6]

Biological correlates of operant conditioning

The first scientific studies identifying neurons that responded in ways that suggested they
encode for conditioned stimuli came from work by Mahlon deLong[7][8] and by R.T. "Rusty"
Richardson and deLong.[8] They showed that nucleus basalis neurons, which release
acetylcholine broadly throughout the cerebral cortex, are activated shortly after a conditioned
stimulus, or after a primary reward if no conditioned stimulus exists. These neurons are
equally active for positive and negative reinforcers, and have been demonstrated to cause
plasticity in many cortical regions.[9] Evidence also exists that dopamine is activated at
similar times. There is considerable evidence that dopamine participates in both
reinforcement and aversive learning.[10] Dopamine pathways project much more densely onto
frontal cortex regions. Cholinergic projections, in contrast, are dense even in the posterior
cortical regions like the primary visual cortex. A study of patients with Parkinson's disease, a
condition attributed to the insufficient action of dopamine, further illustrates the role of
dopamine in positive reinforcement.[11] It showed that while off their medication, patients
learned more readily with aversive consequences than with positive reinforcement. Patients
who were on their medication showed the opposite to be the case, positive reinforcement
proving to be the more effective form of learning when the action of dopamine is high.

Factors that alter the effectiveness of consequences

When using consequences to modify a response, the effectiveness of a consequence can be
increased or decreased by various factors. These factors can apply to either reinforcing or
punishing consequences.


1. Satiation/Deprivation: The effectiveness of a consequence is reduced if the individual's
"appetite" for that source of stimulation has been satisfied. Inversely, the effectiveness of a
consequence increases as the individual becomes deprived of that stimulus. If someone is not
hungry, food will not be an effective reinforcer for behavior. Satiation is generally only a
potential problem with primary reinforcers, those that do not need to be learned, such as food
and water.

2. Immediacy: How immediately a consequence follows a response determines its
effectiveness; more immediate feedback is more effective than delayed feedback. If a car's
license plate is caught by a speed camera and the driver receives a ticket in the mail a week
later, the consequence will not be very effective against speeding. But if someone speeding
is caught in the act by an officer who pulls them over, the behavior is far more likely to be
affected.

3. Contingency: If a consequence does not contingently (reliably, consistently) follow the
target response, its effectiveness upon the response is reduced. If a consequence follows the
response consistently after successive instances, its ability to modify the response is
increased. A consistent schedule of reinforcement leads to faster learning, while a variable
schedule leads to slower learning; however, behavior learned under intermittent
reinforcement is more difficult to extinguish than behavior learned under a highly consistent
schedule.

4. Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If
the size, or amount, of the consequence is large enough to be worth the effort, the
consequence will be more effective upon the behavior. An unusually large lottery jackpot,
for example, might be enough to get someone to buy a one-dollar ticket (or even to buy
multiple tickets), whereas a small jackpot might not seem worth the effort of driving out and
finding a place to purchase a ticket. In this example, it is also useful to note that "effort" is
itself a punishing consequence; how these opposing expected consequences (reinforcing and
punishing) balance out determines whether the behavior is performed.

Most of these factors exist for biological reasons. The biological purpose of the Principle of
Satiation is to maintain the organism's homeostasis. When an organism has been deprived of
sugar, for example, the effectiveness of the taste of sugar as a reinforcer is high. However, as
the organism reaches or exceeds their optimum blood-sugar levels, the taste of sugar becomes
less effective, perhaps even aversive.

The Principles of Immediacy and Contingency exist for neurochemical reasons. When an
organism experiences a reinforcing stimulus, dopamine pathways in the brain are activated.
This network of pathways "releases a short pulse of dopamine onto many dendrites, thus
broadcasting a rather global reinforcement signal to postsynaptic neurons."[12] This results in
the plasticity of these synapses allowing recently activated synapses to increase their
sensitivity to efferent signals, hence increasing the probability of occurrence for the recent
responses preceding the reinforcement. These responses are, statistically, the most likely to
have been the behavior responsible for successfully achieving reinforcement. But when the
application of reinforcement is either less immediate or less contingent (less consistent), the
ability of dopamine to act upon the appropriate synapses is reduced.
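One rough way to see how the four factors combine is to score a consequence's expected effectiveness as a product of terms, one per principle. The model below is a didactic invention for this summary, not an equation from the conditioning literature; every functional form and constant in it is an assumption.

```python
def effectiveness(size, contingency, delay_s, satiation, half_life_s=5.0):
    """Toy score for a consequence's effectiveness (all terms assumed).

    size:        magnitude of the consequence (>= 0)
    contingency: probability the consequence follows the response (0..1)
    delay_s:     seconds between response and consequence (immediacy)
    satiation:   0 (fully deprived) .. 1 (fully satiated)
    """
    immediacy = 0.5 ** (delay_s / half_life_s)  # delayed consequences count less
    return size * contingency * immediacy * (1.0 - satiation)

# A small, immediate, reliable reward can outweigh a large, delayed,
# unreliable one, matching the speeding-ticket example above:
print(effectiveness(size=1, contingency=1.0, delay_s=0, satiation=0.0))    # 1.0
print(effectiveness(size=10, contingency=0.3, delay_s=30, satiation=0.0))  # ~0.05
```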

Operant variability

Operant variability is what allows a response to adapt to new situations. Operant behavior is
distinguished from reflexes in that its response topography (the form of the response) is
subject to slight variations from one performance to another. These slight variations can
include small differences in the specific motions involved, differences in the amount of force
applied, and small changes in the timing of the response. If a subject's history of
reinforcement is consistent, such variations will remain stable because the same successful
variations are more likely to be reinforced than less successful variations. However,
behavioral variability can also be altered when subjected to certain controlling variables.[13]
Avoidance learning

Avoidance learning belongs to negative reinforcement schedules. The subject learns that a
certain response will result in the termination or prevention of an aversive stimulus. There are
two kinds of commonly used experimental settings: discriminated and free-operant avoidance
learning.

Discriminated avoidance learning

In discriminated avoidance learning, a novel stimulus such as a light or a tone is followed by
an aversive stimulus such as a shock (CS-US, similar to classical conditioning). During the
first trials (called escape-trials) the animal usually experiences both the CS (Conditioned
Stimulus) and the US (Unconditioned Stimulus), showing the operant response to terminate
the aversive US. During later trials, the animal will learn to perform the response already
during the presentation of the CS thus preventing the aversive US from occurring. Such trials
are called "avoidance trials."

Free-operant avoidance learning

In this experimental session, no discrete stimulus is used to signal the occurrence of the
aversive stimulus. Rather, aversive stimuli (usually shocks) are presented without
explicit warning stimuli. There are two crucial time intervals determining the rate of
avoidance learning. The first is called the S-S-interval (shock-shock interval). This is the
amount of time which passes during successive presentations of the shock (unless the operant
response is performed). The other one is called the R-S-interval (response-shock-interval)
which specifies the length of the time interval following an operant response during which no
shocks will be delivered. Note that each time the organism performs the operant response, the
R-S-interval without shocks begins anew.
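The interplay of the two intervals can be captured in a short event-loop sketch. The interval values and the fixed per-second response probability below are assumptions for illustration; only the scheduling logic follows the description above.

```python
import random

# Free-operant avoidance sketch: shocks recur on the S-S interval unless
# a response occurs; each response buys an R-S interval without shocks.
SS_INTERVAL = 5     # seconds between shocks when no response occurs
RS_INTERVAL = 20    # shock-free seconds guaranteed after each response
SESSION = 120       # session length in seconds

random.seed(1)
next_shock = SS_INTERVAL
shocks = 0
for t in range(SESSION):              # one-second time steps
    if random.random() < 0.1:         # assumed chance of an operant response
        next_shock = t + RS_INTERVAL  # each response restarts the R-S interval
    if t >= next_shock:
        shocks += 1                   # aversive stimulus delivered
        next_shock = t + SS_INTERVAL  # shocks then recur on the S-S schedule

print(f"{shocks} shocks in {SESSION} s")  # more responding yields fewer shocks
```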

Two-process theory of avoidance

This theory was originally established to explain learning in discriminated avoidance
learning. It assumes two processes take place:

a) Classical conditioning of fear. During the first trials of training, the organism experiences
the pairing of the CS with the aversive US. Through classical conditioning, an association
develops between the CS and the US, and because of the aversive nature of the US, the CS
comes to elicit a conditioned emotional reaction: fear.

b) Reinforcement of the operant response by fear reduction. As a result of the first process,
the CS now signals fear. This unpleasant emotional reaction serves to motivate operant
responses, and responses that terminate the CS are reinforced by the termination of fear. The
avoidance response is thus maintained not because it prevents the US directly, but because it
removes the fear-eliciting CS.

Verbal Behavior



Main article: Verbal Behavior (book)

In 1957, Skinner published Verbal Behavior, a theoretical extension of the work he had
pioneered since 1938. This work extended the theory of operant conditioning to human
behavior previously assigned to the areas of language and linguistics. Verbal
Behavior is the logical extension of Skinner's ideas, in which he introduced new functional
relationship categories such as intraverbals, autoclitics, mands, tacts and the controlling
relationship of the audience. All of these relationships were based on operant conditioning
and relied on no new mechanisms despite the introduction of new functional categories.

Four term contingency

Applied behavior analysis, which is the name of the discipline directly descended from
Skinner's work, holds that behavior is explained in four terms: a conditional stimulus (SC), a
discriminative stimulus (Sd), a response (R), and a reinforcing stimulus (Srein or Sr for
reinforcers, sometimes Save for aversive stimuli).[14]

Operant hoarding

Operant hoarding refers to the choice made by a rat, on a compound schedule called
a multiple schedule, that maximizes its rate of reinforcement in an operant conditioning
context. More specifically, rats were shown to have allowed food pellets to accumulate in a
food tray by continuing to press a lever on a continuous reinforcement schedule instead of
retrieving those pellets. Retrieval of the pellets always instituted a one-minute period of
extinction during which no additional food pellets were available but those that had been
accumulated earlier could be consumed. This finding appears to contradict the usual finding
that rats behave impulsively in situations in which there is a choice between a smaller food
object right away and a larger food object after some delay. See schedules of
reinforcement.[15]

An alternative to the law of effect

An alternative perspective has been proposed by R. Allen and Beatrix
Gardner.[16][17] Under this idea, which they called "feedforward," animals learn during operant
conditioning by simple pairing of stimuli, rather than by the consequences of their actions.
Skinner asserted that a rat or pigeon would only manipulate a lever if rewarded for the action,
a process he called "shaping" (reward for approaching then manipulating a lever).[18]
However, in order to prove the necessity of reward (reinforcement) in lever pressing, a
control condition where food is delivered without regard to behavior must also be conducted.
Skinner never published this control group. Only much later was it found that rats and
pigeons do indeed learn to manipulate a lever when food comes irrespective of behavior. This
phenomenon is known as autoshaping.[19] Autoshaping demonstrates that consequence of
action is not necessary in an operant conditioning chamber, and it contradicts the law of
effect. Further experimentation has shown that rats naturally handle small objects, such as a
lever, when food is present.[20] Rats seem to insist on handling the lever when free food is
available (contra-freeloading)[21][22] and even when pressing the lever leads to less food
(omission training).[23][24] Whenever food is presented, rats handle the lever, regardless of whether
lever pressing leads to more food. Therefore, handling a lever is a natural behavior that rats
do as preparatory feeding activity, and in turn, lever pressing cannot logically be used as
evidence for reward or reinforcement to occur. In the absence of evidence for reinforcement
during operant conditioning, learning which occurs during operant experiments is actually
only Pavlovian (classical) conditioning. The dichotomy between Pavlovian and operant
conditioning is therefore an inappropriate separation.

Contiguity Theory (Edwin Guthrie)

Overview:

Guthrie's contiguity theory specifies that "a combination of stimuli which has accompanied a
movement will on its recurrence tend to be followed by that movement". According to
Guthrie, all learning was a consequence of association between a particular stimulus and
response. Furthermore, Guthrie argued that stimuli and responses affect specific sensory-
motor patterns; what is learned are movements, not behaviors.

In contiguity theory, rewards or punishment play no significant role in learning since they
occur after the association between stimulus and response has been made. Learning takes
place in a single trial (all or none). However, since each stimulus pattern is slightly different,
many trials may be necessary to produce a general response. One interesting principle that
arises from this position is called "postremity" which specifies that we always learn the last
thing we do in response to a specific stimulus situation.

Contiguity theory suggests that forgetting is due to interference rather than the passage of
time; stimuli become associated with new responses. Previous conditioning can also be
changed by being associated with inhibiting responses such as fear or fatigue. The role of
motivation is to create a state of arousal and activity which produces responses that can be
conditioned.

Scope/Application:

Contiguity theory is intended to be a general theory of learning, although most of the research
supporting the theory was done with animals. Guthrie did apply his framework to personality
disorders (e.g. Guthrie, 1938).

Example:

The classic experimental paradigm for Contiguity theory is cats learning to escape from a
puzzle box (Guthrie & Horton, 1946). Guthrie used a glass paneled box that allowed him to
photograph the exact movements of cats. These photographs showed that cats learned to
repeat the same sequence of movements associated with the preceding escape from the box.
Improvement comes about because irrelevant movements are unlearned or not included in
successive associations.

Principles:

1. In order for conditioning to occur, the organism must actively respond (i.e., do things).

2. Since learning involves the conditioning of specific movements, instruction must present
very specific tasks.

3. Exposure to many variations in stimulus patterns is desirable in order to produce a
generalized response.

4. The last response in a learning situation should be correct since it is the one that will be
associated.

Summary: Piaget's Stage Theory of Cognitive Development is a description of cognitive
development as four distinct stages in children: sensorimotor, preoperational, concrete, and
formal.

Originator: Jean Piaget (1896-1980)

Key Terms: Sensorimotor, preoperational, concrete, formal, accommodation, assimilation.

Piaget's Stage Theory of Cognitive Development

Swiss biologist and psychologist Jean Piaget (1896-1980) observed his children (and their
process of making sense of the world around them) and eventually developed a four-stage
model of how the mind processes new information encountered. He posited that children
progress through stages and that they all do so in the same order. These four stages are:

Sensorimotor stage (birth to 2 years old). The infant builds an understanding of himself or
herself and reality (and how things work) through interactions with the environment. It is
able to differentiate between itself and other objects. Learning takes place via assimilation
(the organization of information and absorbing it into existing schema) and accommodation
(when an object cannot be assimilated and the schemata have to be modified to include the
object).

Preoperational stage (ages 2 to 7). The child is not yet able to conceptualize abstractly and
needs concrete physical situations. Objects are classified in simple ways, especially by
important features.

Concrete operations (ages 7 to 11). As physical experience accumulates, accommodation is
increased. The child begins to think abstractly and conceptualize, creating logical structures
that explain his or her physical experiences.

Formal operations (beginning at ages 11 to 15). Cognition reaches its final form. By this
stage, the person no longer requires concrete objects to make rational judgments and is
capable of deductive and hypothetical reasoning.