
Expanding The View Of Auditory Processing Disorders In Children:
Considering The Whole Child

Larry Medwetsky, PhD, CCC-A
larry.medwetsky@gallaudet.edu


Disclosures: Larry Medwetsky
Financial: invited speaker (registration waived); Professor/Chair, Department of Hearing, Speech & Language Sciences, Gallaudet University
Nonfinancial: ASHA member; AAA member; HLAA professional advisory board
Course Objectives
Introduction and background to topic
Issues guiding development of approach
Processing mechanisms involved in spoken-language processing (Spoken-Language Processing Approach)
Deficits subsequent to breakdowns in specific processes


Introduction
My approach is broader than central auditory processing (CAP) and is not a CAP model.
Auditory processing is a subset of my spoken-language processing approach.
My approach was first developed in the 1990s but continues to evolve over time as we acquire more information.
I will provide a brief historical background to this topic, including a review of the current CAP consensus definition, and elucidate why I have adopted the approach discussed in this presentation.

Background to Topic
As many of you know, although auditory perception deficits were first postulated by Myklebust in 1954, and central sites of lesion were first identified by Bocca and Calearo at about the same time, we still have ongoing disagreements as to:
Does CAP exist?
If so, what does CAP really encompass?


(Central) Auditory Processing: What is it?
Because of the lack of a clear definition as to what constitutes CAPD, ASHA convened two consensus conferences (1993, 2005).
These task forces defined central auditory processes as the auditory system mechanisms responsible for the following phenomena:

Behavioral Phenomenon - Definition
Sound localization/lateralization - where sounds occur in space/in the head
Auditory discrimination - distinguishing one sound from another (same/different)
Auditory pattern recognition - recognizing similarities/differences in sound patterns (e.g., hi/lo, short/long)

Behavioral Phenomenon - Definition
Temporal aspects of audition - the ability to process acoustic stimuli over time, including:
Temporal resolution - perception of short-duration/fast-changing sounds
Temporal patterning - ordering stimuli in a sequence/pattern (lo/lo/hi)
Temporal masking - the potential of a masker to mask stimuli presented before/after it
Auditory performance with competing acoustic signals (noise, talkers)
Auditory performance with degraded acoustic signals (filtered, compressed)

The 2005 group stated that a central auditory processing (i.e., primary) deficit could coexist with, but not be the result of, dysfunction in other modalities (e.g., subsequent to a deficit such as ADHD).

Even though the consensus statements were intended to resolve the questions concerning CAP, issues still appear to remain unresolved:
Many language-processing proponents feel that most language processing - even for auditorily presented signals - involves little information gleaned from the auditory signal.
These individuals feel spoken-language processing mostly consists of higher-level cognitive/language mechanisms applied to the incoming acoustic signal.
Even within the audiology field, there is much disagreement as to what constitutes a central auditory processing disorder.
Many of these individuals feel that central auditory processing involves more than just the central auditory system.
For example, Jack Katz describes central auditory processing in functional terms: "What we do with what we hear."

Sifting Through the Key Issues
In everyday listening settings (such as classrooms), the ability to process spoken language is determined by a number of intertwined processes operating on an ongoing basis.
To understand why a child is experiencing difficulties in the auditory processing of spoken language, it is important to have a fundamental understanding of the various processes engaged and how they are intertwined.
And, most importantly, whatever we do should ultimately guide the teacher, parents, and SLP as to how best to help the student in the classroom.

Pure Central Auditory Processing?
Recent research has shed more light on the auditory efferent system.
It has long been known that outer hair cells (OHCs) are innervated by efferent nerve fibers from the cortico-fugal system (CFS).
These OHCs play a role in amplifying sounds near threshold, and do so in a selective fashion; that is, attentional influences initially directed from the central executive system exert effects on the OHCs via the CFS.
Thus, attentional effects are exerted as early as the OHCs of the cochlea.

Some Recent Research Findings: Questioning Pure Auditory Processing
The next set of slides presents research on the following topics:
Influence of auditory processing mechanisms on spoken-language processing (bottom-up)
Cognitive/linguistic influences on auditory processing (top-down)
The importance of this research is that it demonstrates that auditory processing does not exist as a sole entity but influences, and is influenced by, other mechanisms.

Influence of Auditory Processing Mechanisms on Spoken-Language Processing
Kraus & colleagues (Northwestern University) have developed a technique known as the complex ABR (cABR).
The cABR uses synthetic speech stimuli to examine the brainstem's response to speech (in quiet and in noise).
The stimuli (a release consonant, such as /d/ or /g/, followed by a vowel) allow the examiner to determine the ABR's response to:
the rapidly changing formant transition (the acoustic domain usually most prone to CANS disorder)
the steady-state portion of the vowel
This technique has been used to examine the ABR's response to speech stimuli in various populations.

Impaired Speech Stimuli Transmission and Subsequent Phonemic Disorders
Banai et al. (2009) found a direct relationship between subcortical auditory processing of speech stimuli and phonological/reading skills.
Children with a poor representation of the signal's harmonic structure in the cABR response (i.e., the formant transitions) exhibited poor phonological awareness skills.
Good readers were characterized by more temporally precise encoding of the speech harmonics.

Impaired Speech Stimuli Transmission and Subsequent Phonemic Disorders - cont'd
A recent study by White-Schwoch et al. (2015) found a direct relationship between the neural coding of consonants in noise and multiple emergent literacy skills.
The fidelity of neural coding strongly predicted phonological processing in pre-readers, over and above demographic factors.
Because these researchers followed the children over time, they were able to establish brain-behavior links in pre-readers that carried through to school age.
Consequently, the findings suggest a causal, and not simply a correlative, role for auditory processing in learning to read.

Impaired Speech Stimuli Transmission and Other Disorders - cont'd
Banai et al. (2005) and Johnson et al. (2007), using the complex ABR, found that one third of LD children exhibit an abnormal speech ABR response.
These findings show that CAPD is not just a reflection of a language disorder manifested when stimuli are presented auditorily; there are actual central auditory nervous system deficits.
This also means that two thirds of LD children do not have an auditory brainstem processing deficit, reflecting the heterogeneity of the LD population.

Cognitive/Linguistic Influences on Auditory Processing
There is also research that shows the influence of top-down mechanisms on sub-cortical auditory processing. The following are some examples:
1. Brainstem responses to speech are greatest for phonemes of the listener's own language (Naatanen, 1999).
2. The brainstem's response to pitch contours within words (e.g., the use of F0 to alter meaning within words) is sharper for individuals who have learned a tonal language, for example, Mandarin versus English (Krishnan et al., 2009).

For Your Information
Relative to speech stimuli:
Beyond 150 to 175 msec post-stimulus offset, the brain no longer processes acoustic parameters but phonetic features and eventually phonemic/lexical representations.
Therefore, all behavioral testing is automatically influenced by cognitive/language mechanisms.
We have always known this, as reflected by:
the +/- 5 dB acceptable variance in puretone/SRT thresholds that can exist due to factors such as alertness
the impact of word-frequency effects on word recognition scoring
the impact of a client's native language on word recognition scores

For Your Information - cont'd
In reality, if one wants to assess only auditory mechanisms, one can administer only objective test measures, that is:
OAEs
Electrophysiologic measures up to and including the Middle Latency Response
Tympanometry and acoustic reflexes


Spoken-Language Processing (S-LP)
Unlike some who seem to feel that audiologists should limit ourselves to examining primarily auditory processing, I strongly feel we can take a more expansive view and examine spoken-language processing (and actually return to our origins in the 1930s).
In addition to what other professionals might find, audiologists' results can further delineate why a child may be breaking down at school or why an adult may be struggling at work.
Thus, an audiologist can help bring clarity to a student's/adult's issues and to what may underlie other professionals' findings.

Overview of the Spoken-Language Processing (S-LP) Approach
I have developed an approach that examines the underlying processes, breakdowns, and interventions in the context of what the listener faces in everyday situations.
The basic premise of this approach is that auditory processing of spoken language involves dynamic, interactive processes that include:
Auditory processes
Cognitive mechanisms (attention, memory, sequencing)
Language (including world knowledge and the ability to use social context)
Central Executive System
Arousal level (Reticular System)
Emotional state (Limbic System)
Speechreading
All of these systems will be discussed in this presentation, as well as how they interface.

One of the goals of this presentation will also be to explain why deficits in some of these processes appear to overlap:
In reality, it will be seen that this is primarily due to how one defines a processing disorder.
If one is defining a CAPD, then one is looking at coexisting conditions.
If one is looking at an S-LPD, then one is looking at an underlying cause.

By combining a knowledge of spoken-language processing mechanisms with the strategic deployment of various tests, one can:
Understand how breakdowns in specific processes are manifested
Design a test battery that examines these processes and, in turn, determine the specific deficit(s) present and their severity
Design a management approach that addresses individual needs
I also believe my conceptualization not only helps in understanding issues affecting learning-disabled children, but also the processing-related issues of individuals with hearing loss, ADHD, autism, non-verbal learning disorders, anxiety disorders, etc.

Caveat Before Proceeding: Importance of High-Frequency Testing
This refers to hearing loss above 8 kHz.
From my initial research, most individuals with significant hearing loss in this region exhibit difficulty in background noise:
The end result is normal hearing on a typical audiogram (250-8000 Hz), but difficulty hearing in noise.
In my clinic, we will be doing research and eventually may incorporate routine testing to at least 16 kHz.

Spoken-Language Processing
The following pertains to a multiple-talker setting:
Even before a person makes an utterance, a listener decides to whom they will attend.
This decision is made via the Central Executive System (CES).
The CES relays information via neural transmission to the auditory cortical structures and, in turn, via efferent fibers in the cortico-fugal system to sub-cortical and brainstem structures, and ultimately to the outer hair cells in the cochlea.
This pathway is thought to selectively amplify/filter various frequencies; this early filtering mechanism is an example of selective attention.

Spoken-Language Processing: Initial Processing (Transduction)
Acoustic stimuli undergo many conversions from the initial sound waves while being transmitted through the auditory system:
Pinna → Middle Ear (vibratory) → Oval Window of the Cochlea (traveling waves and hydro-mechanical energy) → Inner Hair Cells (cellular mechanics/release of glutamate) → Spiral Ganglion synapse (neural firing)
Once a specific auditory nerve (AN) fiber's activation threshold is exceeded, neuro-electrical transmission occurs across the Auditory Nerve and brainstem.

A noise blast results in damaged Type 2 nerve fibers, leaving the ability to hear at normal thresholds intact but impairing the ability to hear at louder levels in the presence of background noise.

Neural Transmission
Neuronal regions consist of different types of neurons that respond to acoustic stimuli (fire/transmit) in different ways, such as:
Acoustic onset
Sustained portion
Acoustic offset
Coincidence detectors (binaural neurons that examine the relationship of firing in time/intensity), such as those in the Superior Olivary Nucleus involved in localization
Ultimately, the different functional attributes in the auditory nervous system extract different features (intensity, frequency, temporal, intonation/amplitude contours, phase).

Impairment in 8th Nerve Transduction
Clear input to the brain requires nerve fibers to fire in synchrony.
Any deficit impacting the integrity of neural firing results in an inability to deliver a clean signal (Auditory Neural Dyssynchrony).
Impaired neural synchrony leads to the development of poor LTM phonemic representations and, in turn, poor underpinnings for:
speech recognition ability, especially in noise
phonemic awareness and, in turn, spelling, reading, and writing

Transmission to the Brainstem Level
Up to the level of the Cochlear Nucleus (the first auditory brainstem region), neural fibers are ipsilateral, but they subsequently decussate (i.e., cross over).
At the level of the Superior Olivary Nucleus, the system is able to represent and process information from both sides.
Primary Outcomes
1. Localization
2. Acoustic stream segregation: acoustic streams are separated on the basis of various acoustic attributes, allowing them to be segregated by their origins rather than heard as one perceptual jumble.

Transduction Outcomes - cont'd
Acoustic stream segregation - cont'd
In noisy settings, this allows individuals to focus on one or more target acoustic streams and ignore all else.

Disorders at the Brainstem Level
Manifestations
1. Impaired localization ability
2. Impaired acoustic stream segregation
Increased difficulty separating various talkers' voices from each other in a group setting
In turn, this makes it hard to selectively attend to the target talker while ignoring the competing voices

Decoding
Neuroelectric patterns are relayed via various pathways to:
- Language centers of the brain (in ~95% of the population, the left-hemisphere auditory association cortex)
Rapid, short-duration auditory information is processed/analyzed by neurons specialized for rapid temporal sampling, found only in the left hemisphere
- Suprasegmental areas of the brain (in ~95% of the population, the right-hemisphere auditory association cortex)
Neurons in the right hemisphere are unable to sample at a rapid rate but do so for slower, longer-duration temporal changes

Decoding Speed
The speed with which incoming neural patterns are relayed, compared to, and activate the corresponding neuronal representations stored in long-term memory (LTM) is referred to as decoding speed.
LTM refers to the neuronal representations (words, concepts, prosodic patterns, etc.) residing in a resting state (i.e., inaccessible to the individual).
Only when a neuronal representation is activated from LTM is one aware of what has been said.

Lexical Decoding Speed
The speed and accuracy with which incoming neural patterns are identified depend on:
i. Accurate conversion of the stimuli/transmission of the neural patterns (affected by cochlear synaptopathy/AND)
ii. The amount of attention allocated to the stimulus:
- Attention lowers the neural activation (firing) thresholds, making it easier to trigger the neurons
iii. The organization/representation of the lexicon in LTM:
- phonologic, semantic concepts/relations, grammatical category, physical attributes, etc.

Lexical Decoding Speed - cont'd
The more accurate and better organized the LTM representation, the:
(a) faster the trace is activated for subsequent processing or retrieval
(b) stronger the memory trace when activated; thus, the trace lasts longer in short-term memory (STM) and is more resistant to the effects of noise:
- that is, the stronger the firing strength versus the background noise present, the more positive the perceptual S/N ratio

Lexical Decoding Speed - cont'd
iv. The activation threshold of the stored percept:
- Commonly occurring percepts (numbers, food) or those with high emotional content (such as an individual's name) have lower neural thresholds and stronger representations in LTM; they are easier to activate at lower input levels and to process at lower S/N ratios than novel/emotionally neutral stimuli
v. The linguistic/social context preceding/following the stimulus; world knowledge; and the subject's expectations:
- The above allows for rapidity in following what is being said, and underlies auditory closure
- Example: "My mother baked a /k/..."
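A simple computational sketch can make these factors concrete. The following toy model is my illustration, not part of the presentation; the function name and all numbers are hypothetical, chosen only to show how attention (factor ii), the stored percept's activation threshold (factor iv), and context (factor v) might combine:

```python
def activates(input_strength, base_threshold, attention=0.0, context_boost=0.0):
    """Return True if the stored percept fires (a toy model, not physiology).

    attention lowers the effective firing threshold (factor ii);
    context_boost adds pre-activation from linguistic/social context (factor v).
    """
    effective_threshold = base_threshold - attention
    return input_strength + context_boost >= effective_threshold

# A common or emotionally salient percept (e.g., one's own name) has a low
# base threshold (factor iv), so even a weak input triggers it:
print(activates(input_strength=0.4, base_threshold=0.5, attention=0.2))  # True

# A novel, emotionally neutral percept has a high threshold; the same weak
# input fails to activate it:
print(activates(input_strength=0.4, base_threshold=0.9))  # False

# Supportive context ("My mother baked a /k/...") supplies the missing
# activation - the basis of auditory closure:
print(activates(input_strength=0.4, base_threshold=0.9, context_boost=0.6))  # True
```

The same weak input succeeds or fails depending entirely on the top-down contributions, which is the point of factors ii-v above.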


Decoding: Primary Outcomes
Primary outcomes: pattern matching and the activation of representations in long-term memory result in the activation of:
Left hemisphere: phonemic and lexical representations
Right hemisphere: suprasegmental patterns
Transduction → Decoding

Decoding Difficulties - Possible Underlying Reasons/Manifestations
Phonemic
Poor temporal resolution leading to poor phonemic representations
Poor phonological/phonemic awareness → poor reading/spelling
Lexical
Poor underlying phonemic representations
Poorly organized semantic/schema relations/syntactic aspects
Increased processing time and missed later information, inevitably leading to increased mental load/fatigue
The flip side is increased word-retrieval/response time

Decoding Difficulties/Manifestations
Suprasegmental (Prosodic)
Difficulty processing the slower-changing acoustic information in right auditory association cortical neurons manifests as a prosodic deficit, displayed in characteristics such as:
A flat voicing pattern
Poor ability to replicate melodies
Increased difficulty with chunking - unable to use prosodic information effectively to facilitate the perception of grammatical clause junctures

Auditory-Linguistic Integration
Unlike the definition of integration used by many audiologists, I am not using this term to refer to an ability to simultaneously process two or more stimuli.
I am referring to the ability to combine sensory evidence into one entity.
That is, segmental information processed in the left hemisphere is integrated on the fly, across the corpus callosum, with suprasegmental information (and body language/facial cues) in the right hemisphere.
This interaction allows for the coordination/integration of linguistic and prosodic features during auditory speech perception.

Auditory-Linguistic Integration: Primary Outcomes
The rhythm/amplitude and intonation contours/pauses (right hemisphere), linked to the linguistic information (left hemisphere), allow for some of the following:
Differentiating yes/no sentences (rising/falling F0 contours)
Prosody patterns linked to grammatical clauses
Helping determine the mood of the talker (angry, happy, sarcastic)
Transduction → Decoding → Integration

A-L Integration Difficulties: Cause and Its Manifestations
Underlying cause:
Under-myelinated corpus callosal fibers result in slower, possibly more fragmented, neural transmission between the hemispheres.
Manifestations (extent depends on the impairment):
Difficulty integrating prosodic information (as well as body language/facial cues) with linguistic information
Monotonic voice patterns
Difficulty speechreading while listening → sensory overload
** In more severe cases, manifested in spectrum disorders
In addition to impacting the ability to integrate information, a corpus callosal deficit interacts with one's ability to allocate attention effectively to the right or left side.
This will be discussed later, in the section on Selective Auditory Attention.

Short-Term Memory
Once language patterns are activated from LTM (i.e., the neurons are in an active firing state) and integrated with the corresponding suprasegmental pattern, they reside momentarily in short-term memory.
This is the only time the information is consciously accessible.
Research shows that neuronal regions fire for ~2-4 seconds before decaying completely.
There is a limit to how many neuronal regions can remain active at any one point in time: a maximum of about four.
Retention depends on how quickly an individual can shift attention from one region to another to maintain the neuronal firing.
Thus, STM span is primarily about time.
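The time-limited nature of STM described above can be illustrated with a toy simulation. This is my sketch, not the author's model; the one-second refresh interval is an assumption, and the trace lifetime is taken from the upper end of the ~2-4 second range cited:

```python
DECAY_SECONDS = 4.0      # trace lifetime without refresh (upper end of ~2-4 s)
REFRESH_INTERVAL = 1.0   # assumed time for attention to revisit one item

def surviving_items(n_items):
    """Count items still active as attention cycles through all of them.

    Attention visits each item in turn; an item is lost if the gap between
    its refreshes exceeds the trace lifetime. This caps the span at about
    four regions - STM span as a limit in time, not in slots.
    """
    gap = n_items * REFRESH_INTERVAL  # time between refreshes of a given item
    if gap <= DECAY_SECONDS:
        return n_items                # every trace is refreshed before decaying
    return int(DECAY_SECONDS / REFRESH_INTERVAL)  # the rest decay completely

for n in (2, 3, 4, 6):
    print(n, "items ->", surviving_items(n), "retained")
```

Under these assumed numbers, anything up to four items survives the sweep, while a six-item list loses items faster than attention can return to them, matching the "maximum of about four regions" limit above.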


Short-Term Memory - cont'd
The neural firing pattern can be extended through various forms of attention (rehearsal, visual imagery), directed via the central executive system.
The latter is referred to as maintenance attention.
Although there is a maximum of about four neuronal regions, one can combine regions (units) into a larger firing region:
example: /d/ /a/ /g/ → "dog"
The number of units one can recall (referred to as STM span) differs for digits (7), random letters (6), familiar words (5), and unfamiliar words (3-4).
Transduction → Decoding → Integration → STM

Short-Term Memory Span
STM span is affected by:
1. Attentional allocation strategies
Rote-memory span tasks (i.e., digits/unrelated words):
Sub-auditorization/periodically going back to repeat earlier items
Visualizing, chunking, mnemonics
2. The articulatory difficulty of the items (affects rehearsal)
3. The representation of information in LTM (phonologic, semantic, and syntactic organization), which affects the speed of processing/retrieval of information (i.e., the decoding-speed aspect)
4. The familiarity of the input (affecting how easily/quickly and how strongly the prototype is activated/retrieved from LTM)
Deficiencies Affecting Short-Term Memory (STM) Span
STM span deficiencies limit the amount retained at one time.
STM span is affected by:
A. Deficiencies in rehearsal/active processing strategies
B. The articulatory difficulty of the items
Both affect the ability to consolidate earlier-presented information
C. Stimulus familiarity (i.e., novel information takes longer to process)
D. The organization of information/efficiency of retrieval from LTM
Both affect the ability to quickly access LTM information
Problems in A-B likely result in earlier items being forgotten, while problems in C-D likely result in later items being forgotten.

Attention
Humans are limited in the amount of information they can process at any one point in time.
One purpose of attention is to allow the listener to:
focus on a limited amount of information at a specific point in time, and, in turn,
maximize the extent to which the target information will be processed and stored.
Therefore, attentional processes play a role in the initial activation of neuronal units from long-term memory.
And, as mentioned earlier, attention allows the neuronal firing of target percepts to continue, so that they are maintained in STM.

Deficits in Attentional Allocation
If one does not hold attention long enough to process the initial stimulus and activate the corresponding LTM representation, one is not even cognizant that a stimulus has occurred.
If one shifts attention away from maintaining information in STM, the signal fades rapidly from STM (for example, not remembering where one has placed one's keys).
Although the following is not a true deficit in attention, attention is impacted by it:
As long as a stimulus is being processed, attention remains focused on that stimulus.
If processing takes too long, then by the time the individual is ready to process the following stimulus, its neurophysiologic representation may have faded and the trace may be too weak for subsequent processing (as mentioned earlier, this is known as a decoding-speed deficit).

Selective Attention
Selective attention involves one of two different processes, depending on the nature of the competing stimuli and the task.
One mechanism occurs when the competing stimulus consists of non-linguistic, shower-type noise (e.g., fan noise, an air conditioner, car noise, etc.) and speech is embedded in the noise.
In this case, the sound characteristics of the noise are very different from speech, and the brain can separate the two streams and filter the speech from the noise (also known as figure-ground).

A second mechanism occurs when there are competing talkers, such as at a cocktail party.
This mechanism is referred to as Selective Auditory Attention (Binaural Separation).
In the presence of competing talkers, the neuronal excitation representing the competing speech stimuli is essentially processed in the contralateral auditory cortex:
contralateral pathway dominance in the presence of competing speech stimuli (i.e., the ipsilateral stimuli are overcome/blocked by the stronger, incoming contralateral neural transmissions)
The listener makes a conscious decision as to whom/what to listen to.
The CES (pre-frontal cortex) directs neural stimulation to the targeted auditory cortical processing region, and ignores or inhibits firing in the irrelevant region.

[Diagram: the pre-frontal cortex (CES) directs attention to right- and left-ear inputs arriving at Heschl's gyri; the left-ear signal crosses the corpus callosum (CC) to reach the language region.]

Difficulties with Selective Attention
Selectively attending to a target may be difficult because of:
(a) an inadequate ability to filter speech from shower-like noise
(b) an inability to perceptually separate stimuli into different streams; thus, one is unable to allocate attention exclusively to the target while ignoring the competing stimuli
(c) the impact of corpus callosal insufficiency on binaural separation (impacting left-ear processing, resulting in right-ear dominance)

Right-ear dominance in group settings is due to the dominance of the contralateral pathways:
Right ear → Left Auditory Cortex → Language Region (direct pathway)
Left ear → Right Auditory Cortex → Corpus Callosum → Language Region
The corpus callosum (CC) does not mature until age five; thus, even in young children we expect to see right-ear dominance.
However, when the CC is impaired, we see an even greater right-ear advantage over left-ear performance.

Divided Attention (Binaural Integration)
Refers to attending to two or more stimuli.
Because of processing inefficiencies, an individual may not have adequate resources to carry out such a task in real life; note-taking is an example of divided attention.
In test taking, the degree of difficulty is impacted by the familiarity of the information and the linguistic load. In terms of auditory processing ease:
digits < spondees < words < sentences
Easiest when presented under separate earphones (essentially irrespective of gender); in the sound field, increasingly difficult as the talkers get closer to each other, especially if they are of the same gender.
If any spoken-language processing issue is present, the listener is likely to break down on this task, as it is the most difficult of the processing tasks.

Sustained Attention
Individuals may differ in the amount of time they can sustain attention to target stimuli.
An inability to sustain attention may be due to different underlying difficulties:
Behavioral Inhibition Disorder (otherwise known as ADHD)
Hearing loss - even minimal hearing loss (16-25 dB HL, loss above 4 kHz, unilateral hearing loss)
Any processing issue that adds to mental load over time (decoding speed, fading memory, etc.)
High IQ (if you can, please see the movie "Gifted"; one of the best movies I have ever seen)

Central Executive System
Refers primarily to the pre-frontal cortex but also includes a number of sub-cortical structures, such as the basal ganglia.
The CES is often referred to as the internal gatekeeper and is believed to be key to:
self-regulation
planning
sequencing
organization (including lexicon, schema)
working memory

Sequencing
One important function subsumed under the Central Executive System (CES) is sequencing.
There are different aspects to sequencing, including:
Processing and output of speech sounds/syllables
Syntactical organization
Formulating/carrying out sequences of events and actions (including spelling - the order of letters)
Transduction → Decoding → Integration → STM (receptive) → Sequencing (expressive)

Attention Deficit (Hyperactivity) Disorder
One well-known CES deficit is AD(H)D, with its various subtypes.
AD(H)D is an executive system disorder; therefore, it exerts its effects on the output side.
Possible CES impacts on speech processing include:
Insufficient initial attentional allocation, leading to possible inattentiveness
Difficulty maintaining attention sufficiently long to retain information in STM; thus, information fades rapidly
Difficulty with sustained attention/vigilance
Decreased short-term/working memory span
Difficulty with sequencing/organization


ADHD and S-LPD - cont'd
It is possible that other processing issues could also be present, such as lexical decoding speed, integration, or phonological awareness signs.
However, one would not expect a person with AD(H)D to solely have phonological awareness, lexical decoding speed, and/or integration signs, as these are input/receptive indicators.
Note: In my broader conceptualization, ADHD is not a co-morbid deficit but is itself impacting spoken-language processing.

Arousal Level: Reticular Activating System
The arousal level system involves the Reticular Activating System (RAS), which traverses the brainstem with connections to various regions of the cortex.
The RAS alerts the cortex to attend to important incoming sensory information; it is important for filtering out unnecessary data so that people can focus on targeted activities more intently.
When important sensory data are detected, the RAS alerts the cortex, arouses the body, and prepares it for activity.
The RAS exerts its influence on the spoken-language processing system by interacting with the Central Executive System.
It also appears that the RAS controls one's general arousal level state.

Arousal Level System - cont'd
If the CES is under- or over-aroused, it cannot direct attention effectively; performance is best at moderate arousal levels.
If attention cannot be directed effectively, then it is harder to:
(a) initiate the activation of percepts from LTM; and
(b) maintain information in STM.
Consequently, arousal level deficits can impact spoken-language processing (even central auditory processing) via their impact on attentional allocation.

Limbic (Emotional) System
The Limbic System modulates mood, which, in turn, modulates one's arousal level through its interactions with the Reticular Activating System.
If one's emotional status is:
agitated, it can cause over-arousal
depressed, it can cause under-arousal
In either case, this affects the CES's ability to direct attentional allocation and effectively regulate the various processing regions.

Speechreading and Spoken-Language Processing
In noisy settings, one's ability to do better is a result of integrating the auditory (phonetic) and visual (visemic) features of speech.
A separate benefit of speechreading may be an enhanced ability to inhibit competing stimuli while focusing on (attending to) the target message; this needs to be researched.
Question: Do children with similar S-LP deficits function differently in the classroom setting because of differing speechreading abilities?
This is an important research question but not an easy one to assess.

Capacity and Load
The term capacity refers to the total resources an individual has available to expend on a particular task.
Load refers to the amount of energy that must be expended.
As long as the load is less than the capacity, an individual will be able to complete the task successfully; if the load exceeds one's capacity, difficulty/failure is likely - unless compensatory strategies are utilized.
Also, the greater the load, the fewer residual resources (i.e., ability) the individual has for concurrent tasks.

Relationship of Load to Capacity
Capacity is determined by:
Genetic/developmental factors
Environmental influences (exposure, training)
Arousal level
Load is determined by:
How much effort needs to be exerted

Relationship of Load to Capacity (Weightlifting Example)
Load:
A 100-pound weight
Background:
Genetic/developmental factors (a 180 lb male vs. a 125 lb female)
Environmental influences (the male is a couch potato; the female lifts weights 4x/week)
Arousal level (at the beginning of a workout versus three hours after the same body parts were exercised)

The Relationship of Mental Load to Capacity (Spoken-Language Processing)
Capacity is determined by:
Genetic/developmental factors (degree/type of underlying deficits)
Environmental influences (amount of speech & language intervention)
Arousal level (degree of alertness)

Mental load is determined by:
The quality of the stimuli processed (distortions present, sensation level, etc.)
The complexity of the task (automaticity, number of simultaneous tasks)
The familiarity of the stimuli
The amount of information (chunks/time)
Thus, this is another way selective attention may falter:
The mental load needed to attend to the target and block out competing stimuli may exceed a person's capacity. That is, the individual may be able to process auditory information in quiet, but the mental load needed to process the target and block out the noise exceeds one's ability, and performance falls apart in noise.
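This capacity-versus-load breakdown can be captured in a minimal sketch. The code below is my illustration, not from the presentation; the capacity value and load figures are arbitrary, and noise is modeled simply as extra mental load:

```python
CAPACITY = 1.0  # assumed total resources available to this listener

def task_outcome(base_load, noise_load=0.0):
    """Return (result, spare resources): succeed only if total load fits capacity."""
    total_load = base_load + noise_load
    if total_load > CAPACITY:
        return "fails", 0.0  # load exceeds capacity: performance falls apart
    # Residual resources are what remain for concurrent tasks (e.g., note-taking)
    return "succeeds", CAPACITY - total_load

# In quiet, the listener copes and has resources left over:
print(task_outcome(base_load=0.6))
# The same target in noise pushes the total load past capacity:
print(task_outcome(base_load=0.6, noise_load=0.5))
```

The same listener, with the same capacity, succeeds in quiet and fails in noise - the pattern described above - and the greater the load, the smaller the residual resources for concurrent tasks.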
SUMMARY
Limbic System ↔ Arousal Level System
↓
Central Executive System (attention): Initial Attention → Selective Attention → Maintenance Attention
Transduction → Decoding → Integration → STM → Sequencing

Thoughts Concerning Assessment
The underlying goal of the assessment will drive the test-battery approach.
For example, if the goal is primarily to isolate auditory processing mechanisms, then one either:
needs to use objective measures that provide such information (such as cABR, OAE/ABR), or
conducts tests of localization, temporal resolution, etc. (though the latter do involve conscious behaviors - attention, decision-making)

Thoughts Concerning Assessment - cont'd
If the goal is to examine the processing of spoken language, then a broader, multi-disciplinary assessment approach examining the mechanisms discussed here is useful (with the audiologist playing a key role).
In addition, this approach allows one to understand how deficits in non-auditory mechanisms can also impact the processing of spoken language.
