MEMORY
According to Passer and Smith, memory refers to the processes that allow us to
record, store, and later retrieve information. Certain parts of the brain, such as
the amygdala (for emotional memories) and the hippocampus, are involved in the
construction of memories.
Three processes of memory:
Encoding stage: This is the stage where the stimulus input is translated into a
code. For example, the stimulus input may be encoded in the form of a visual
image, a phonological/auditory code (or sound), or a semantic code (i.e.
meaning).
Storage stage: The memory is then retained for some time before retrieval.
Retrieval stage: The stored information is later accessed and brought back into
awareness.
MEASURES OF MEMORY
How do psychologists measure the memory of past events? There are two primary
ways to test memory. Explicit methods involve overt (observable) measures of
memory. Implicit methods assess memory indirectly. There are two explicit measures
of memory: recall and recognition. Recall measures require a subject to retrieve a
memory, either without any hints (free recall) or with the aid of cues (cued recall).
Under most circumstances, recall is higher with cued recall than free recall.
There are some occasions when information is presented, and the subject's
task is to judge whether the information accurately reflects a previous
experience. When the memory measure is a judgment of the accuracy of a
memory, it is a recognition measure of memory.
Implicit measures:
There are also several implicit measures of memory. Ebbinghaus (1885) used
the savings score to measure memory. To obtain a savings score, one subtracts
the number of trials it takes to relearn a task from the number of trials the
original learning required. If one needs fewer relearning trials, the memory
was probably retained from original learning. Reaction time is another implicit
measure of memory. A subject is presented with a stimulus, and the time it
takes to react to that stimulus is recorded. If the subject reacts more readily on
the second encounter than on the first, the faster reaction time is thought to be
due to the memory of the prior experience.
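The savings-score arithmetic described above can be sketched in a few lines; the trial counts below are made-up numbers for illustration, not data from Ebbinghaus:

```python
def savings_score(original_trials, relearning_trials):
    # Trials saved on relearning; a positive score suggests some
    # memory of the original learning was retained.
    return original_trials - relearning_trials

def percent_savings(original_trials, relearning_trials):
    # Savings expressed as a percentage of the original learning effort
    return 100.0 * savings_score(original_trials, relearning_trials) / original_trials

# Hypothetical example: a list first learned in 20 trials needs
# only 12 trials to relearn a week later.
print(savings_score(20, 12))    # 8 trials saved
print(percent_savings(20, 12))  # 40.0 percent savings
```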
Atkinson-Shiffrin Model / Information Processing Model (IPM) / Three-Stage
Model (1968)
This model of memory (Atkinson and Shiffrin, 1968) assumes that the processing of
information for memory storage is similar to the way a computer processes memory,
in a series of three stages. It consists of three basic stores: sensory memory, short-term
memory (STM) and long-term memory (LTM). Information can also be lost at each
point. However, some modifications have been made to the above model, with STM
being replaced by working memory.
1) Encoding: Input to the sensory store can occur regardless of whether or not
the subject is attending to the information; that is, sensory stores are
preattentive. No recognition of the information has taken place. A sensory
memory exists for each sensory channel: Neisser (1967) proposed that the
representation for a brief visual sensory store should be called an ‘icon’ (‘icon’
means an ‘image’). Sensory register for aural/sound stimuli is called echoic
memory. Haptic memory is the sensory register for touch
Duration: Information is held in the sensory register for a very
brief period, i.e. about 0.5 to 4.0 seconds depending upon the
sensory store. Classic experiments by George Sperling (1960)
found that the icon has a duration of less than 1 second.
Darwin, Turvey, and Crowder (1972) found that the sensory
register for echoic memory has a duration of 1 to 4 seconds.
Capacity: unlimited
Short-term memory (STM)
1) Encoding: Once information leaves sensory memory and enters STM it needs to be
converted into a 'code'. A 'code' is a mental representation of some type of
information or stimulus. It can be a visual code (based on images), a
phonological/acoustic code (based on sound), or a semantic code (based on meaning).
Research has shown that the most important type of coding in WM is
phonological or auditory coding. Though encoding in WM is usually
phonological, it may at times be visual as well, as when some people remember
information in terms of images. This ability occurs in children who are
known to have 'eidetic imagery'. One of the earliest experiments was done by
Conrad (1964), who found that “F” was most often misidentified as “S” or “X”,
two letters that sound similar to “F”. Thus, even though the participants saw the
letters, the mistakes they made were based on the letters’ sounds. From these
results Conrad concluded that the code for STM is phonological (based on the
sound of the stimulus) rather than visual (based on the appearance of the stimulus).
Working memory
Note that working memory holds information that is derived from sensory inputs and
information that has been retrieved from LTM. New inputs (such as the amount of
butter you have just weighed for your cake) and old stored information (such as the
recipe stored in LTM) come together in working memory.
Baddeley's Model
Baddeley (2002) suggests that there are four components of working memory:
Phonological loop, Visuospatial sketchpad, Episodic Buffer and central executive
1. Phonological Loop: This has two parts: 1) the phonological store (‘inner ear’),
which stores sounds or phonological information; written words must be
converted to spoken words to enter the phonological loop, and the stored
phonological information decays in 20 seconds unless rehearsed; 2) the articulatory
loop (‘inner voice’), which continuously repeats or articulates the contents of
the phonological store to prevent them from decaying.
2. Visuo-spatial sketch pad: The visuo-spatial sketch pad, stores visual and
spatial information. The sketch pad can be further broken down into a visual
subsystem (dealing with, for instance, shape, colour, and texture), and a spatial
subsystem (dealing with location).
3. Episodic buffer: The episodic buffer has a temporary storage space where
information from long-term memory, the phonological loop and/or the visuospatial
sketchpad can be integrated, manipulated and made available for conscious
awareness.
For over four decades the Wisconsin Card Sorting Test (WCST) has been one of the
best tests that assesses working memory. Dual tasks are also used to assess WM.
The third store is long-term memory (LTM). Evidence that LTM is distinct from
STM comes from studies of the ‘serial position curve’. When a list of words or
nonsense syllables (NSS) is presented and the subject is required to recall the list in
any order, it is observed that words at the beginning (‘primacy effect’) and the end of
the list (‘recency effect’) are easiest to recall. This results in a U-shaped curve called
the serial position curve. This was first observed by McCrary and Hunter and is
therefore known as the McCrary-Hunter phenomenon.
This is because the first words are in long-term memory, because they have been
rehearsed (primacy effect), and the last words are still in short-term memory
(recency effect). The words in the middle are less well recalled because they go into
neither the STM nor the LTM. However, when a distraction task is used, recall of the
last words is just as poor as the middle words. This phenomenon is known as the
Von Restorff phenomenon.
There are different types of memory which store different types of information:
Declarative memory (‘knowing that’) involves conscious, intentional
remembering. It depends on the hippocampus and is so called because to
demonstrate this knowledge we need to declare it: we tell other
people. Declarative memory is of two types: semantic and episodic.
Procedural memory (‘knowing how’) and perceptual-motor skills (e.g. riding a
cycle, driving a car): these tasks do not require explicit recollection of a
specific previous episode. Bicycle riders remember how to balance a cycle
even though they cannot explicitly retrieve that information.
felt immediately after the accident. Because of classical conditioning, the
previously neutral cars have taken on new properties.
a) Encoding:
The levels-of-processing theory also emphasized rehearsal – especially elaborative
rehearsal – which leads to better recall.
Organization of memory: This view emphasizes the importance of
memory organization during encoding. Bower and Clark (1969) found that
a list organized hierarchically was recalled two to three times better than a
list arranged randomly.
Mnemonics are devices that can be used to improve encoding in LTM.
b) Storage:
Introduction
This approach to understanding human cognition uses computer software to model the
functioning of actual neural networks in human brains. First, a collection of software
“neurons” is created and connected together, allowing them to send messages to
each other. Next, the network is asked to solve a problem, which it attempts to do over
and over, each time strengthening the connections that lead to success and
diminishing those that lead to failure. These models typically consist of
interconnected networks of simple units exhibiting learning. Within such networks,
each item of knowledge is represented by a pattern of activation spread over
numerous units rather than by a single location.
When a neuron receives excitatory input that is sufficiently large
compared with its inhibitory input, it sends a spike of electrical activity
down its axon.
Memory and learning occur through the strengthening of synapses.
An Artificial Neuron: the ‘Perceptron’
PDP models consist of artificial neurons (or ‘nodes’ or ‘units’) based on the essential
features of the biological neuron, and their interconnections. One of the first artificial
neurons to be developed, the ‘perceptron’, was created in the 1950s and 1960s by the
scientist Frank Rosenblatt, inspired by earlier work by Warren McCulloch and Walter
Pitts. Today, it is more common to use other models of artificial neurons, such as the
sigmoid neuron.
A neural network is thus a technique for building a computer program that learns
from data.
(PDP models also known as Neural Networks (NNs), Artificial Neural Networks (ANNs), and Connectionist
Models)
2. These nodes have an activation level depending upon synaptic strength, which
in turn decides their weight (W). In the model below, stronger synaptic
strengths (larger weights) are shown by thicker lines.
3. An input node sends its activation value to each of the hidden units to which it
is connected. Each of these hidden units calculates its own activation value
depending on the activation values it receives from the input units. This signal is
then passed on to output units or to another layer of hidden units. Those hidden
units compute their activation values in the same way, and send them along to
their neighbors. Eventually the signal at the input units propagates all the way
through the net to determine the activation values at all the output units. The
inputs are summed at the neuron to produce a ‘weighted sum’ (S).
In the diagram below, node C receives inputs from node A (lower weight, as the
line is thinner) and node B (higher weight, as the line is thicker). It then computes
the weighted sum of the two inputs. If this weighted sum exceeds some threshold
value then a single output is produced, which can then excite or inhibit the next
node.
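The summing-and-threshold behaviour just described can be sketched as follows; the activation values, weights, and threshold here are illustrative assumptions, not values taken from the diagram:

```python
def node_output(activations, weights, threshold):
    # Weighted sum S of the incoming signals
    s = sum(a * w for a, w in zip(activations, weights))
    # A single output is produced only if S exceeds the threshold
    return 1 if s > threshold else 0

# Node C receives input from node A (thin line, small weight) and
# node B (thick line, large weight).
print(node_output([1.0, 1.0], [0.3, 0.9], threshold=1.0))  # 1, since 1.2 > 1.0
print(node_output([1.0, 0.0], [0.3, 0.9], threshold=1.0))  # 0, since 0.3 < 1.0
```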
4. Finding the right set of weights to accomplish a given task is the central goal
in connectionist research. This is accomplished through a process called
‘backpropagation’, by which the nodes (neurons) learn by themselves:
a) For a given set of inputs we decide on the desired outputs.
b) Random initial weights are assigned to the connections.
c) Then, using the random weights, we let the network calculate the
outputs. We then compare these calculated outputs with the desired outputs
and find the error.
d) Now that we have the errors, we need to adjust the connection weights
so that smaller errors are obtained. This is done through ‘backpropagation’:
the output nodes tell the hidden nodes about the errors, and together they
decide how to adjust the connection weights (based on a mathematical
equation).
e) Then these nodes, with the newly calculated errors, push the errors back
through the hidden nodes and adjust the weights behind them.
f) This goes on till all the weights have been adjusted. The idea is to find
out which nodes are to blame for the errors and to adjust their weights the
most.
g) Once all the weights have been adjusted, the system is given the same
inputs again and the outputs are observed. The calculated outputs will now
be closer to the desired outputs, but there will still be some error, so the
whole process is repeated again and again till the error is minimal.
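The cycle of steps above can be sketched with a single output unit trained by repeated error correction. Real backpropagation extends this error-passing to hidden layers; the inputs, targets, and learning rate below are made-up values for illustration:

```python
import random

random.seed(0)
inputs  = [(0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
desired = [1.0, 0.5, 1.5]                  # targets we decide on (step a)
w = [random.random(), random.random()]     # random starting weights (step b)
rate = 0.1                                 # learning rate: small adjustment steps

for epoch in range(200):                   # repeat until error is small (step g)
    for (x1, x2), target in zip(inputs, desired):
        out = w[0] * x1 + w[1] * x2        # calculate the output (step c)
        error = target - out               # compare with the desired output (step c)
        w[0] += rate * error * x1          # adjust each weight to shrink
        w[1] += rate * error * x2          # the error (steps d-f)

print([round(v, 2) for v in w])  # weights converge near [0.5, 1.0]
```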
Neural networks are very slow learners (the size of each weight adjustment is
governed by the ‘learning rate’), as they have to do this for each input. Even for a
simple problem it may take millions of attempts, but eventually the neural network
learns. After a neural net is trained, we say that the network has learned and that the
acquired knowledge has been stored (as values) in the weights of its connections.
Critical evaluation
Parallel distributed processing models differ from real neural networks, including the
human brain, in numerous ways: Even the biggest PDP networks are tiny compared to
the brain; PDP models have just one kind of unit, compared to a variety of types of
neurons; and just one kind of activation (which can act excitatory or inhibitory), rather
than a multitude of different neurotransmitters; and so on. Yet these differences are
not necessarily cause to reject the PDP approach.
https://www.youtube.com/watch?v=DG5-UyRBQD4
https://www.youtube.com/watch?v=bxe2T-V8XRs
Biological Basis:
Hippocampus: The hippocampus is proposed to be involved in the formation
and retrieval of memories. In fact, this structure is seen as the control centre of
our memory, collecting and cross-referencing inputs from all our sensory
modalities (see Figure 2.6 below). It is also involved in explicit and
declarative memories.
Basal ganglia: Procedural memories
Amygdala: Fear memories
Broca’s area: Language scratch pad
c) Retrieval in LTM:
Retrieval Cues: Retrieval involves being able to access the memory so that
it can be recalled. Retrieval from LTM is aided by a ‘retrieval cue’. A
retrieval cue is a stimulus, whether internal or external that activates
information stored in LTM. The ‘encoding specificity principle’ or ESP
(Wiseman & Tulving, 1976) essentially means that recall depends directly
on the similarity between the cues available at the time of encoding and
the cues available at retrieval. Imagine that Aman reads two sentences (1)
The man lifted the piano and (2) The man tuned the piano. Much later
when given the retrieval cue "something heavy" he is able to recall the first
sentence and not the second. Similarly, the cue "makes nice sounds" would
probably help you recall the second sentence but not the first (Barclay et
al., 1974).
The greater the overlap between the cues at the time of encoding and
retrieval, the better the recall. This means that performance will be
optimized when the same context that was available during encoding is
available at the time that retrieval is attempted.
Research indicates that having multiple self-generated retrieval cues (e.g. the
word ‘banana’ can be associated with many cues such as fruit, peel, good,
ice cream etc.) maximizes recall, as this leads to deeper processing.
Recognition is usually easier than free recall due to the additional retrieval
cues.
Context and mood Effects -- Context-dependent memory refers to
improved recall when the context present at encoding and retrieval are the
same. The ‘encoding specificity principle’ or ESP (Wiseman & Tulving,
1976) implies that the performance will be optimized when the same context
that was available during encoding is available at the time that retrieval is
attempted.
Godden and Baddeley (1975) demonstrated context dependency with the
learning of lists of unrelated words by deep-sea divers. Items learned on land
were difficult to recall in the underwater environment. Similarly, words heard
underwater may be forgotten once on dry land. Schab (1990) Chocolate Study:
the purpose of the study was to see whether a smell could serve as a memory
cue. Having the smell of chocolate present at both encoding and retrieval
increased memory scores, as opposed to having the smell only at encoding.
Neisser felt that the icon must play only a small part in everyday memory and
that Sperling’s task was highly artificial.
Some brain-damaged patients have a normal LTM and an impaired STM. If
information must pass through STM to reach LTM, as the model proposes, then
this should not be possible.
Milner (1966) reported on a young man, referred to as HM, who had a normal
STM but an impaired LTM. When told of the death of his favourite uncle, he
reacted with considerable distress (normal STM). Later, he frequently asked about
his uncle and, on each occasion, reacted again with the level of grief appropriate
to hearing the news for the first time (impaired LTM). KF, a motorcycle accident
victim investigated by Shallice and Warrington (1970), had no difficulty in
transferring new items into LTM but had a grossly impaired STM.
It has been suggested that coding in STM is basically acoustic, but there is
evidence that semantic coding may also be involved in STM, which blurs the
distinction between LTM and STM.
Decay in STM and interference in LTM – have been used as a basis for
claiming that the two stores are separate. However, it has been suggested that
some kind of interference may also be involved in STM, so the same mechanisms
of forgetting may operate in both systems.
It has been found that chunking in STM can at times increase its capacity to
large dimensions, making it similar to LTM. The case of SF (Ericsson et al.,
1980), who had a normal memory, is often cited in this regard: his memory
span was increased from a typical 7 chunks to a phenomenal level (about 80
digits) as a result of training in using chunking.
LEVELS OF PROCESSING
THEORY
Instead of concentrating on the stores/structures involved (i.e. sensory register, short
term memory & long term memory), this theory concentrates on the processes
involved in memory. Levels of Processing (Craik and Lockhart, 1972) proposes that
the strength of the memory depends on two factors 1) the depth of processing and 2)
kind of rehearsal involved.
In another experiment Hyde and Jenkins (1973) instructed one group to look at a list
of words and rate how pleasant or unpleasant each one was on a five-point scale.
Another group was simply asked to count the number of times the letter E appeared in
each word they were shown. The letter-counting group (which engaged in shallow
processing) was able to recall far fewer words than the other group.
Criticisms
• It is often difficult to find out what the level of processing actually is. For example,
some researchers argue that the task of deciding the part of speech to which a word
belongs is a shallow processing task - but other researchers claim that the task
involves deep or semantic processing. So, a major problem is the lack of any
independent measure of processing depth.
• Levels of processing theory does not really explain why deeper levels of processing
are more effective.
Theories of forgetting
Course of forgetting:
Hermann Ebbinghaus was the first to experimentally investigate the properties of
human memory. To observe this process, he devised a set of items that would have no
previous associations, the so-called nonsense syllables (NSS). These consist of a
sequence of consonant, vowel, and consonant (CVC) that do not spell anything in
one's language -- in English, CAJ would be an example. To test retention,
Ebbinghaus learnt a list of NSS and then waited varying lengths of time before testing
himself again. He found that forgetting occurs in a systematic manner, beginning
rapidly and then leveling off.
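The "rapid then leveling off" shape can be illustrated with a simple retention function; the exponential form and the stability constant are assumptions for illustration, not Ebbinghaus's own fitted curve:

```python
import math

def retention(hours, stability=24.0):
    # Fraction of material retained after a delay: falls steeply at
    # first, then levels off (an assumed exponential form)
    return math.exp(-hours / stability)

# Retention drops fast over the first day, then flattens out.
for t in [0, 1, 24, 48, 144]:
    print(t, round(retention(t), 2))
```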
Rubin and Wenzel (1996) found evidence to support Ebbinghaus from group data, but
suggest that autobiographical memory does not fit the model. Bahrick et al. (1975)
studied retention of names and faces of high school classmates and did not find
support for the classic forgetting curve. Baddeley (1997) found that the forgetting rate
was unusually slow for continuous motor skills e.g., riding a bike. Most studies look
at explicit memory and findings with implicit memory have been inconsistent.
Theories of forgetting:
It is useful to think of forgetting as a problem of either ‘availability’ (it was never
properly stored and therefore is not available) or ‘accessibility’ (it was stored but the
information cannot presently be accessed). The first three theories suggest availability
problems while the others indicate accessibility problems.
Decay Theory (Thorndike, 1914): It is believed that learning leaves a memory trace
in the brain. These traces are structural units of the brain, and are called neurograms
or engrams. The engrams are considered to be impressions established in the form of
neural structures in the brain. Engrams are not considered to be permanent structures
because, through measurement of the extent of retention and forgetting, we know that
the learnt matter often undergoes changes and gets lost.
Several attempts were made to locate this engram. Karl Lashley (1929), one of the
world's foremost brain researchers, tried to locate the area in the brain
where engrams or memory traces were stored. He sliced or removed sections of the
cortex after teaching the rats to run mazes. None of the brain injuries abolished the
"maze-running habit," although Lashley tried removing tissue in almost every area of
the cortex that allowed the rat to remain alive. Lashley concluded that memories had
to be spread all over the cerebral cortex, throughout the tissue. Lashley concluded that
all parts of the cortex make an equal contribution to learning and memory, a concept
he referred to as equipotentiality. In other words, Lashley believed that the engram is
distributed evenly across the cortex, such that no single area is more responsible for
learning and memory than any other. He also believed that the parts of the cortex are
basically interchangeable. As a result, the more cortex you have, the better your
memory will be, a concept Lashley referred to as mass action. However, later
researchers were very critical of his work as the task he chose i.e. maze learning was a
complex task requiring many parts of the brain.
Many researchers believed that the engram or the trace may gradually fade away
unless activated by further rehearsal. Thus, decay theory states that forgetting occurs
when the memory trace disintegrates because of disuse. However, not much support
was found for this theory.
(i.e. awake or asleep) was important, rather than simply the length of the
period.
3. Interference theories: This was the dominant approach to forgetting through the
20th century. Interference theory assumes that the ability to remember can be
disrupted by what we have previously learned or by future learning. Interference by
previous memories is proactive interference. Interference by later memories is
retroactive interference.
Retroactive Interference:
  Interference group:  Study A -> Study B -> Test A
  Control group:       Study A -> interpolated task -> Test A
Proactive Interference:
  Interference group:  Study A -> Study B -> Test B
  Control group:       interpolated task -> Study B -> Test B
Evaluation
2. Solso (1995) has pointed out that studies of interference have largely involved
episodic memory; while this demonstrates that episodic memory may be subject to
interference, semantic memory is likely to be more resistant.
4. Encoding failure: The way information is encoded affects the ability to remember it.
Processing information at a deeper level makes it harder to forget. If a student thinks
about the meaning of the concepts in her textbook rather than just reading them, she’ll
remember them better when the final exam comes around. If the information is not
encoded properly—such as if the student simply skims over the textbook while paying
more attention to the TV—it is more likely to be forgotten.
5. Retrieval failure: Tulving (1974) proposed that a large amount of forgetting may
also result from failure to retrieve information in memory, such as if the wrong sort
of retrieval cue is used. The ‘encoding specificity principle’ (Wiseman & Tulving,
1976) essentially means that recall depends directly on the similarity between the cues
available at the time of encoding and the cues available at retrieval. The greater the
overlap between the cues at the time of encoding and retrieval, the better the recall.
This means that performance will be optimized when the same context that was
available during encoding is available at the time that retrieval is attempted.
Godden and Baddeley (1975) demonstrated context dependency with the learning of
lists of unrelated words by deep-sea divers. Items learned on land were
difficult to recall in the underwater environment. Similarly, words heard
underwater may be forgotten once on dry land. Schab (1990) Chocolate Study:
the purpose of the study was to see whether a smell could serve as a memory
cue. Having the smell of chocolate present at both encoding and retrieval
increased memory scores, as opposed to having the smell only at encoding.
Evaluation: Often studies have been done under extreme conditions (deep-sea divers)
or where the states are very different (e.g. alcohol and drug use), whereas in real life we
often have to recall things under similar conditions. For example, during examinations
we recall in a quiet environment and presumably that is the type of environment
people study in. Some of the studies in this area, then, could be said to lack ecological
validity.
Evaluation: Loftus and others question the accuracy of these recovered memories.
The creation of an inaccurate record of childhood sexual abuse is now called the false
memory syndrome. Loftus and other researchers have experimentally demonstrated
that it is relatively easy to induce someone to believe an entirely false event actually
happened some time in their past simply through suggestion.
Forgetting in everyday life may not be the same as experienced in the lab. More
recent approaches focus on such forgetting. One such area is eyewitness memory.
After witnessing an event, the eyewitness is repeatedly questioned by friends, police
authorities, lawyers, etc. and may be exposed to intentional or unintentional
misinformation. Does information that the witness acquires after the crime (i.e.
post-event misinformation), perhaps in the course of interviews with police
officials, bring about changes in the witness’s recollection of the crime or the
suspect?
In a classic series of experiments by Loftus (e.g. Loftus, Miller and Burns, 1978) it
was demonstrated that participants could be led to report suggested events that were
never witnessed. This was referred to as the ‘suggestibility’ effect. The paradigm that
Loftus and her colleagues used to study such suggestibility effects is often referred to
as the Standard Loftus Paradigm.
Amnesia: Anterograde amnesia is the inability to remember events that occur
after an injury or traumatic event. Retrograde amnesia is the inability to
remember events that occurred before an injury or traumatic event. Dementia
refers to impaired memory and other cognitive deficits that accompany brain
degeneration and interfere with normal functioning. One common cause of
dementia is Alzheimer’s disease, which often occurs in people above 65
years and involves symptoms such as forgetting, poor judgment, confusion
and disorientation; often memory for recent and new information is
impaired. Infantile amnesia is experienced by everyone and involves an
inability to remember personal experiences from the first few years of life
(especially before age 3). One hypothesis as to why this occurs is that the
brain is still too immature to encode long-term memories.
Memory distortion and Schemas: Sir Frederic Bartlett demonstrated that people’s
recollections of past events are neither accurate nor exact reproductions of
the information encoded or stored. Rather people actively try and make
sense of it in terms of what they already know - a process he called ‘effort
after meaning’. He stated that remembering is guided by ‘schemas’, or
general organizing structures that hold past expectations and experiences.
Bartlett studied the way British undergraduates remembered stories whose
themes and wording were taken from another culture. His most famous
story was “The War of the Ghosts”, an American Indian tale. Bartlett (1932)
found that his readers' reproductions of the story were often greatly altered
from the original. The distortions Bartlett found involved three kinds of
reconstructive processes:
• Assimilation: changing the details to fit the participant's own background or
knowledge.
Thus, readers reproduced the story with words familiar in their culture taking the
place of those unfamiliar: e.g. Boat might replace canoe. Such
reconstruction often leads to memory distortions.
Participants were shown a series of slides, one of which featured a car stopping in
front of a yield sign (the eyewitness event). After viewing the slides, participants read
a description of what they saw (postevent information). Some of the participants were
given descriptions that contained misinformation, which stated that the car stopped at
a stop sign. Following the slides and the reading of the description, participants were
tested on what they saw. The results revealed that participants who were exposed to
such misinformation were more likely to report seeing a stop sign than participants
who were not misinformed. This was referred to as the ‘suggestibility’ or the
‘misinformation’ effect which involves the distortion of memory by misleading
postevent information. Researchers overwhelmingly agree that misinformation can
distort eyewitness reports. This has raised concerns about the reliability of eyewitness
testimony not only from adults but also from children in cases of alleged physical and
sexual abuse.
The child as eyewitness: In cases of alleged child sexual abuse, there is often no
conclusive corroborating medical evidence and the child is usually the only witness
(Bruck et al., 1998). If the charges are true, failing to convict the abuser and returning
the child to an abusive environment is unthinkable. Conversely, if the charges are
false, the consequences of convicting an innocent person are equally distressing. A
single instance of suggestive questioning can distort some children's memory, but
suggestive questioning most often leads to false memories when it is repeated. Young
children are typically more susceptible to misleading suggestions than older children
(Ceci et al., 2000).
More recent research reveals that we often misremember the past in ways that flatter
our egos, confirm our self theories, and serve our currently active needs and motives.
In perhaps the most dramatic example of these sorts of distortions, Newman and
Baumeister (1996) argued that a need to escape from self- awareness underlies the
fabrication of UFO abduction memories.
The message from science is not that all claims of recovered traumatic memories
should be dismissed. Rather, it is to urge caution in unconditionally accepting those
memories, particularly when suggestive techniques are used to recover the memories.
Researchers have begun to examine whether some types of true versus false memories
are associated with different patterns of brain activity. But at present, these findings
cannot be used to determine reliably whether any individual memory is true or false
(Pickrell et al., 2003).
In one study, Qi Wang (2001) found that the Americans were more likely than their
Chinese counterparts to recall events that focused on individual experiences and self-
determination (e.g., “I was sorting baseball cards when I dropped them. As I reached
down to get them, I knocked over a jug of iced tea.”). In contrast, Chinese students
were more likely than American students to recall memories that involved family or
neighborhood activities (e.g., “Dad taught me ancient poems. It was always when he
was washing vegetables that he would explain a poem to me.”).
Mnemonics/Improving memory
CASES
The title mnemonist (derived from the term mnemonic) refers to an individual with
the ability to remember and recall unusually long lists of data, for example:
unfamiliar names, lists of numbers, entries in books, etc. Such individuals have also
been described as possessing an eidetic memory, although whether such abilities are
innate or somehow learned remains contentious. Nonetheless,
individuals famed as mnemonists have become part of popular myth, lore, fiction,
contemporary media, and, indeed, subjects of scientific enquiry.
The Talmud, the transcribed collection of Jewish oral law and its commentaries,
constitutes 5,422 pages. In 1917, an article appeared in the journal
Psychological Review about an incredible group of Polish Talmud scholars known as
the Shass Pollak:
A Shass Pollak would be asked to give the word at a particular place in a particular
line on a particular page of the Talmud. He would name the word, and it was
invariably found to be correct. He had visualized the whole Talmud in his mind; in
other words, the pages of the Talmud were photographed on his brain. It was one of
the most stupendous feats of memory ever witnessed.
S, the mnemonist studied by the Russian psychologist Alexander Luria, became
famous after an anecdotal incident in the mid-1920s in which he was reprimanded for
not taking notes while attending a speech. To the astonishment of everyone present
(and to his own, since he believed everybody could recall in this way), he reproduced
the speech perfectly, word for word.
Kim Peek (born November 11, 1951) is a savant with a photographic or eidetic
memory and developmental disabilities, possibly resulting from congenital brain
abnormalities. He was the inspiration for the character of Raymond Babbitt, played by
Dustin Hoffman, in the movie Rain Man. Rain Man is a film which tells the story of
Charlie Babbitt, who discovers that his father has left his multi-million-dollar
estate to a brother, Raymond, who has autism. Kim Peek himself, however, did not have autism.
Kim Peek was born with macrocephaly and without a corpus callosum, the bundle of
nerve fibres that connects the two hemispheres of the brain. There is speculation that
his neurons formed other connections in the absence of a corpus callosum, which
resulted in an increased memory capacity.
According to Peek's father, Fran, Peek was able to memorize things from the age of
16-20 months. He read books, memorized them, and then placed them upside down on
the shelf to show that he had finished reading them, a practice he still maintains. He
reads a book in about an hour and remembers approximately 98% of everything he
has read, retaining vast amounts of information on subjects ranging from history,
literature, and geography to sports, music, and dates. He can recall the contents of
some 12,000 books from memory.
A lot of research in the area of memory is focused on how to improve memory.
1. Mnemonics
1) Acronyms: combine one or more letters (usually the first letter of each word) to
make a new word, e.g. VIBGYOR (Violet, Indigo…).
2) Acrostics: combine letters (usually the first letter of each word), but instead of
making a new word, the letters begin the words of a sentence.
E.g. My Very Easy Method Just Speeds Up Naming Planets (Planets:
Mercury,Venus,----)
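Both devices reduce to working with first letters. A minimal Python sketch (the lists and sentence below are illustrative choices, not from the text):

```python
# First-letter mnemonics: an acronym joins the initials into a word;
# an acrostic maps each initial onto a word of a memorable sentence.
colors = ["Violet", "Indigo", "Blue", "Green", "Yellow", "Orange", "Red"]
acronym = "".join(word[0] for word in colors)
print(acronym)  # VIBGYOR

planets = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]
acrostic = "My Very Easy Method Just Speeds Up Naming"
# each acrostic word shares its initial with the planet in that position
assert all(a[0] == p[0] for a, p in zip(acrostic.split(), planets))
```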
3) Method of loci (‘loci’ means ‘places’ and is pronounced 'LOW-sigh'): This
method associates the items (to be remembered or TBR) with already memorized
places. The person remembers a list of items by imagining walking through a familiar
place, like her house, and visualizing herself putting one of the items in each room.
Later, to remember the items, she merely imagines walking back through the house,
recalling what item she “left” in each room.
For e.g. if one wants to remember the parts of the brain then one can mentally place
them in different parts of the house. When one wants to recall the parts of the brain
then one can mentally go for a walk in the house.
This method uses imagery to improve memory. The Dual Coding Theory of memory
was proposed by Paivio (1971) to explain the powerful mnemonic effects of imagery.
This theory states that encoding information in both visual and verbal codes improves
memory. ‘Visual imagery’ evokes both visual and verbal codes and this leads to a
better recall.
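As a rough illustration, the loci procedure behaves like a lookup table keyed by locations (the rooms and items below are invented for the sketch):

```python
# Method of loci sketched as a mapping from familiar places to items.
loci = ["front door", "hallway", "kitchen", "bedroom"]
to_remember = ["amygdala", "hippocampus", "thalamus", "cerebellum"]

# Encoding: mentally place one item at each locus, in a fixed order.
placements = dict(zip(loci, to_remember))

# Retrieval: "walk" the loci in order and read off the items.
recalled = [placements[place] for place in loci]
print(recalled)  # ['amygdala', 'hippocampus', 'thalamus', 'cerebellum']
```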
4) Numeric peg-words
Here numbers serve as ‘pegs’ on which memories of items (TBR) can be hung. First
create ‘numeric peg words’ which consists of the number and a rhyming word: E.g.
one is a bun, two is a shoe, three is a tree etc. Now associate each item in the list
(TBR) with the ‘peg word’. Suppose the first few names on the TBR list are Wundt,
James and Watson. You may create associations as follows: Wundt is on a bun, James
with two left shoes and Watson stuck in a tree.
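The peg-word pairing above can be sketched as follows (the rhymes and names come from the text; the data structure itself is just illustrative):

```python
# Numeric peg-words: fixed rhyming pegs, each new item imagined
# interacting with its peg ("Wundt is on a bun", etc.).
pegs = {1: "bun", 2: "shoe", 3: "tree"}
names = ["Wundt", "James", "Watson"]

associations = {n: (pegs[n], name) for n, name in enumerate(names, start=1)}

# Recall by position: which name was hung on peg 2?
print(associations[2])  # ('shoe', 'James')
```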
5) Narrative chaining: Bower and Clark (1969) found that when learners wove the
words to be remembered into a story, their recall increased by at least 50%.
Example of narrative chaining (words to be remembered are in capitals):
There was a BREAK in the storm and the LIGHT came back. A MOUSE came out of
its hole to take a LOOK at the CAKE she was eating. Its SPINE tingled so it ran away
FAST to the haySTACK.
6) The PQRS method (Preview, Question, Read, Summarize):
(i) Preview - examine what you are reading, looking at headings, words in bold,
etc., so that you can grasp what topics are covered and get a general idea of
what it is all about.
(ii) Question -formulate questions so you know what information you are
aiming to extract from whatever it is you are reading.
(iii) Read - read the material, actively seeking the answers to your questions.
(iv) Summarize -summarize what you have read, preferably in your own
words.
7) Rhyme: A rhyme has similar distinctive sounds at the end of each line.
Studies have shown that rhyming makes things easier to remember because rhymes
can be stored with acoustic encoding. Example: “In fourteen hundred and ninety-two,
Columbus sailed the ocean blue.”
8) The keyword strategy is based on linking a new word to keywords that are already
encoded in LTM. A keyword is a word that sounds like the new word and is easily
pictured. The keyword method has been promoted as an especially effective strategy
for learning foreign vocabulary.
For example, to help remember that ‘barrister’ is another word for ‘lawyer’:
First create a keyword (e.g. bear).
Create a picture of the keyword and the new word doing something together: a
bear who is acting as a lawyer in a courtroom, pleading his client's innocence.
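A toy representation of such keyword links (the entry is taken from the barrister example above; the dictionary structure is an assumption made for illustration):

```python
# Keyword method: new word -> similar-sounding keyword + interactive image.
keyword_links = {
    "barrister": {
        "keyword": "bear",
        "image": "a bear pleading a client's innocence in a courtroom",
        "meaning": "lawyer",
    },
}

# Retrieval: the keyword cues the stored image, which cues the meaning.
entry = keyword_links["barrister"]
print(entry["keyword"], "->", entry["meaning"])  # bear -> lawyer
```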
2. Depth of processing
The levels-of-processing theory (Craik & Lockhart, 1972) suggests that the deeper the
processing, the better the memory. Shallow processing involves analyzing the material in terms of
structure or how the material appears (e.g. is it written in capital or small letters).
“Deep” processing involves analyzing the stimulus more abstractly in terms of its
meaning. So to improve memory one should process the material more deeply by
understanding its meaning.
3. Rehearsal
The levels-of-processing theory suggests that elaborative rehearsal (which involves
rehearsing the material by understanding its meaning) is better than maintenance
rehearsal (or rote learning).
4. Chunking
For most people the capacity of working memory is 7 ± 2 chunks. However, we can
enlarge the size of a chunk and thereby increase our memory capacity. Ericsson et al.
(1980) managed to improve the digit span of a learner, S.F., from 7 to almost 80
digits. The method used was chunking and hierarchical organization: S.F., a long-distance
runner, devised the strategy of recoding digits into running times, e.g. 3429
recoded as 3:42.9, a world-class time for the mile.
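S.F.'s recoding trick can be mimicked in a few lines (the fixed four-digit grouping rule and the sample digits below are assumptions for illustration, not taken from the study):

```python
# Chunking: recode groups of four digits as running times, turning
# twelve unrelated digits into three meaningful chunks.
def recode(group):
    """'3429' -> '3:42.9' (minutes:seconds.tenths)."""
    return f"{group[0]}:{group[1:3]}.{group[3]}"

span = "342951234581"
chunks = [recode(span[i:i + 4]) for i in range(0, len(span), 4)]
print(chunks)  # ['3:42.9', '5:12.3', '4:58.1']
```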
5. Organization of memory
Bower et al. (1969) found that a list organized hierarchically was recalled two to
three times better than the same list arranged randomly. Earlier, Bousfield (1953)
had shown that people spontaneously cluster related items together when recalling them.
7. Distinctiveness
Distinctive material leads to better recall, so highlighting the study material may
improve memory.