
Abstract

This essay explores how new literacies studies might inform Health Literacy (HL)

assessments emerging from clinical and public health disciplines. First, we review definitions of

HL used in clinical health disciplines, such as that of the Institute of Medicine. We contend that

there are gaps between clinical and public health understandings of HL, and suggest the need to

integrate a revised health literacies framework that can be used to inform and evaluate current

HL assessments. By examining Rapid Estimate of Adult Literacy in Medicine (REALM), Test of

Functional Health Literacy in Adults (TOFHLA), and Newest Vital Sign (NVS) HL assessments,

we found that they are driven by narrow definitions of HL that reflect basic reading,

comprehension, and numeracy skills. Thus, we argue that there should be a more complex

understanding of what constitutes HL, one that engages HL as a socially situated practice.

Health Literacy

The term “health literacy” (HL) evokes conflicting definitions that reflect varying

standpoints within clinical and public health disciplines. Such conflicting definitions are

problematic when designing health literacy assessments meant to inform clinical practice.

“Literacy” is often defined as “the realm where attention is paid not just to content or to

knowledge but to the symbolic means by which it is represented and used.” Definitions of

literacy, then, necessarily reflect foundational epistemologies and world views that serve the

purpose of particular disciplines. Mackert and Poag’s essay in the Spring 2011 issue of

the Community Literacy Journal notes that research on literacy programs’ efforts to improve HL

among students “highlights an opportunity to increase collaboration among literacy programs

and medical education programs to help students of all types—adult basic education, doctors,

nurses, and pharmacists—learn together” (p.70). We believe the tension between multiple

definitions of HL to be an opportunity to integrate clinical and public health approaches to HL

assessment through insights from new literacies studies.

This essay explores how new literacies studies might inform HL assessments emerging

from clinical and public health disciplines. In what follows, we first consider accepted

definitions of HL at use in clinical health disciplines. We then consider definitions of HL

particular to public health perspectives. We note the gaps between clinical and public health

understandings of HL and suggest integrating a revised health literacies framework that can be

used to inform and evaluate current HL assessments. In particular, we examine T.C. Davis et

al.’s Rapid Estimate of Adult Literacy in Medicine (REALM), Ruth M. Parker et al.’s Test of

Functional Health Literacy in Adults (TOFHLA), and Barry Weiss et al.’s Newest Vital Sign

(NVS) HL assessments through the lens of a more complex understanding of what constitutes

HL.

Health Literacy Defined through Clinical Perspectives

HL as a concept has gained prominence in public health forums and policy discussions

regarding the nature of health disparities and rising healthcare costs (Mackert and Poag; DeWalt

et al.; Weiss and Palmer). It is generally understood that low literacy and poor health outcomes are correlated (Nielsen-Bohlman, Panzer, and Kindig), yet a clear understanding

of how HL functions to affect these variations in healthcare outcomes and expenses remains

elusive, in part due to inconsistencies between complex understandings of HL and HL

assessments currently in use.

Frequently cited, the Institute of Medicine’s (IOM) 2004 “Health Literacy: A Prescription

to End Confusion” provides a definition of HL. The IOM adopts the National Library of

Medicine’s 2000 definition of HL (Selden et al.) as “the degree to which individuals have the capacity to obtain, process, and understand the basic health information and services needed to make appropriate health decisions” (4). The IOM thus structures its discussion of HL

by building on the concept of literacy in general, specifically the understanding that “literacy is

context specific” (37). The prevailing understanding of literacy as “represent[ing] a constellation

of skills” engaged in basic print literacy, basic mathematical communications, and speech and

speech comprehension skills (oral literacy) is thus transferred to the understanding of HL overall

(IOM 37). The IOM further differentiates between basic print literacy ability, literacy for

different types of text, and functional literacy (37). These distinctions are, to some extent, an

acknowledgment of the complexities of literacy, with each category accounting for specific

textual environments in which HL skills function.



Yet, while the multiple contexts of literacy may be acknowledged in the IOM definition

of HL, the notion of what literacy entails remains unchanged “as a set of purely technical coding

and decoding skills” (Chinn 61). For example, in basic print literacy, language is coded and

decoded in “the ability to read, write, and understand written language that is familiar and for

which one has the requisite amount of background knowledge” (Selden et al. 37). In the IOM

definition, language continues to be coded and decoded in “literacy for different types of texts,”

which is distinguished from basic print literacy in its emphases on a person’s ability to read “the

structure of the text” (39). A prescription label is is cited as an example of a text with a “unique Commented [AH1]: MLA prefers verbs in present tense
even if happened in the past.
structure” that requires the reader to “

be able to use that structure to understand the directions that follow. The reader may be helped or

hindered by various textual features such as font size, layout and design, syntax, or use of

graphs” (IOM 39).

In the case of functional literacy, language is decoded as the individual “use[s] literacy in order

to perform a particular task” (IOM 39). HL, understood through the lens of basic reading and

comprehension skills just described, reflects an autonomous understanding of literacy

developed earlier by Brian Street and defined by David Barton as the idea that literacy can stand alone, “separate from any context” (118). HL is thus relegated to an individual’s

capacity to decipher and use complex health-related information. Clinicians and researchers have

tested users’ capacity for print-based reading and comprehension skills in order to understand the

communication needs of users within their own practice (Nutbeam; Chinn). Yet insights from

New Literacies Studies suggest that current HL assessments fail to account for a more robust

understanding of literacy as a situated and social practice (Chinn; Papen; Nutbeam). In addition,

a reading skills-based approach to HL assessment falls short in determining how HL functions to

affect variations in healthcare outcomes and expenses.

J. Haun et al.’s 2014 Journal of Health Communication article, “Health Literacy Measurement: An Inventory and Descriptive Summary of 51 Instruments,” exemplifies the difficulty of measuring functional literacy skills and indexes the various understandings of HL and practices in use across clinical and community settings. Haun et al. review the

“psychometric properties, test parameters, and conceptual dimensions of published HL

measurement tools” in order to create “an inventory for researchers, decision makers, and

practitioners who seek to identify validated measurement tools that are fitting for their research

and practice” (303). Specifically, Haun et al.’s search for HL measurement tools in “peer-

reviewed publications from 1999 to the end of 2013 […] yielded 51 unique HL measurement

tools” composed of “26 general HL tools, 15 disease or content specific tools (e.g., diabetes, asthma, HIV, nutrition), and 10 tools for specific populations” (304, 305).

Haun et al. assess “the specific skills and competencies measured by the different tools” by using

“the taxonomy of skills identified by Sorensen and colleagues in their content analysis of HL

definitions” and a “consensus process to determine the characteristics, dimensions, validation,

strengths, and limitations of each tool” (304, 305). Haun et al.’s research confirms that, in order to advance, tools used to evaluate HL must “assess all of the defined measurements of HL,” specifically addressing such “significant gaps in HL measurement” as the “dearth of assessment of navigation… confidence… [and] responsibility” (326, emphasis in original). Despite their call to assess all the defined measurements of HL, however, Haun et al. also acknowledge

“HL is a broad concept without a single definition, thus, it is a challenge to place distinct

parameters on the definition of what should be accepted as a HL measure” (326). Existing HL

measurements must therefore be understood as assessing only part of the greater whole that

composes HL. Such a recognition of the limitations of HL measurements in turn clarifies HL

assessment as reflecting the healthcare user’s skills (or lack thereof) in a particular textual

environment, thereby preventing HL assessment from being used as a gauge for understanding

how healthcare users approach, access, understand, and use health-related information as a

whole.

Towards A More Complex Understanding of Health Literacy

The complexity of HL is reflected in the definition articulated in Christina Zarcadoolas, Andrew Pleasant, and David Greer’s (2005) transdisciplinary research, “Understanding Health Literacy: An Expanded Model.” Zarcadoolas et al. contend that HL

constitutes a “range of skills and competencies that people develop to seek out, comprehend […],

evaluate, and use health information and concepts to make informed choices” and “reduces

health risks and increase[s] the quality of life” (196-197). Papen’s (2009) “Literacy, Learning,

and Health—A Social Practices View of Health Literacy” reflects Zarcadoolas et al.’s complex

definition of HL, arguing that HL is an always situated event or series of events that occur over

time. Papen’s research illuminates HL as a social practice, embedded in social relationships.

Such practices are informed by a range of personal and social resources users can bring to make

sense of health information. Moving beyond a cognitive basis for literacy, Papen argues that in diagnoses and other healthcare interactions, healthcare users’ emotions must be

considered as a variable that affects HL.



Andrew Pleasant and Shyama Kuruvilla’s (2008) “A Tale of Two Health Literacies:

Public Health and Clinical Approaches to Health Literacy” observes that, across most definitions of HL, the conception exists that HL is “a skill-based process” used to “identify and transform information into knowledge” (154). By contrasting public health and clinical approaches to HL,

Pleasant and Kuruvilla note that public health practices value the “acquisition of health

knowledge as an integral part of HL rather than its outcome” while clinical approaches to HL

focus on reading and numeracy skills (154). Pleasant and Kuruvilla’s research clarifies that

clinical approaches to HL reflect a narrow understanding of HL as an individual’s capacity to

process print-based health information. Pleasant and Kuruvilla are not the only scholars to frame

clinical approaches to HL this way. David Baker’s (2006) “The Meaning and the Measure of

Health Literacy” characterizes clinical approaches as believing health knowledge “facilitates HL

but does not in itself constitute it” (879). Thus, clinical approaches are described as viewing

health knowledge as supplementary to HL. In contrast, public health approaches value the

“acquisition of health knowledge as an integral part of HL rather than a separate outcome”

(Pleasant and Kuruvilla 154). Public health research, therefore, tends to instantiate more complex understandings of literacy that extend beyond reading fluency and functionality to include health users’ histories, prior knowledges, and social resources while taking users’ perceptions, needs, and goals for literacy into account (Papen; Barton; Nutbeam). Pleasant and

Kuruvilla call for a more comprehensive approach to HL and its assessment, one that integrates

insights from clinical and public health spheres.

Clinical psychologist and mental health nursing practitioner Deborah Chinn, in her 2011

“Critical Health Literacy: A Review and Critical Analysis,” outlines the current “second wave of

HL research” as recognizing the complexities of literacy through “increasingly sophisticated



understandings of pedagogical theories relating to multiple ‘literacies’ (reading and writing, e-

literacy, political literacy) and their links to individual autonomy, choice, and empowerment”

(61). These sophisticated understandings of multiple literacies and their ramifications, in turn,

necessarily demand the examination of literacy “as a set of social practices embedded in broader

social goals and cultural imperatives” (Chinn 61). Understanding the ways in which an

individual’s HL skills are necessarily influenced by social and personal resources, Chinn

explains, shifts HL’s focus from “absolute differences in literacy, as an individual attribute that can be identified as present or absent” to a detailed examination of “how people with a range of

personal and social resources engage with written material in socially situated ‘literacy events’”

(61). This view is consistent with NLS scholars such as David Barton who conceptualize literacy

as a social practice across various domains such as work, school, church, and home (39). Health

literacies are thus understood as technologies active in making and reproducing shared knowledge, technologies that both shape and are shaped by social and cultural values as well as individual and social histories (Barton 44-50). Further, HL should engage information appraisal in order to support appropriate decisions that promote health and well-being (Chinn). In addition to

aiding individuals in their navigation of healthcare systems, Chinn argues that complex

understandings of HL also engage issues of power at play in the dissemination, uptake, and use

of health information and can be used in collective action to address health inequalities.

This shift towards a “socially contextualized view of users of literacy as active, purposive

agents” consequently alters the function of HL assessments as tools for “encouraging people to

adopt healthy behaviors and avoid unhealthy ones,” where healthcare professionals are ascribed the role of experts providing health-related information to a “passive target audience” (Chinn

61). Instead, a healthcare user is understood as an active participant in their own health rather than a recipient charged with passive acceptance and tasked with behavioral change. As a result,

there is room for an individual’s personal knowledge, composed in part of social and cultural

ways of knowing and being, to help inform and supplement the more commonly authorized ways

of knowing typically embodied in biomedical approaches to healthcare. In alignment with NLS,

Chinn identifies different HL domains, including functional, communicative, and critical

literacies, and notes that while there may “not be a necessary correspondence between […] scores on measures of functional and critical health literacies,” she cautions that these domains should not be understood as “hierarchical or mutually exclusive” (65). For example, both

functional and critical HL domains utilize skills related to managing and interpreting health

information. Critical HL extends this skill towards assessing “personal relevance” of the

information with a focus on interaction. Calling for HL assessment to include “distributive

competencies,” Chinn supports supplementing standardized tools with qualitative measures such as interviews, observations, and ethnography to assess how “people actually interact critically with health-related information in real-life situations” (65). Critical

health literacies can therefore bring a clinical focus on reading fluency into social contexts while

acknowledging the importance of knowledge production in achieving greater HL.

Towards A Stronger Health Literacy Assessment

Edward H. Behrman’s (2002) “Community-based Learning” describes such learning as

recognizing “literacy as situated activity” and “proposing three separate orientations to the

concept of community as it relates to the literacy curriculum” to “present an analytical

framework to assist curriculum developers and researchers in designing, implementing, and

evaluating community-based literacy programs” (26). Behrman’s analytical framework



encourages developers to “more explicitly describe the range of new situations to which they

believe the learning will transfer and the assumptions inherent in the curriculum regarding the

portability of performance across situations” (32). Similarly, HL assessments might best be

improved by incorporating more social situations that require HL skills, as opposed to testing an

individual’s HL skills in individualized reading and numeracy tasks. HL assessments thus

configured can not only anticipate the range of contexts in which HL skills are needed, but also

increase the variety of responses given for a situation. In this way, a greater awareness of the

range of understandings possible in any given situation avoids a deficit view of HL, and in turn,

encourages an understanding of literacy that recognizes reading and writing as social as well as

personal activities.

Health assessments that are informed and shaped by insights from new literacies studies

should account for the situated and complex nature of literacy. While it is important to

acknowledge that literacy tests in clinical contexts “have been shown to predict knowledge,

behaviors, and outcomes,” it should also be acknowledged that identifying individuals with marginal health literacy has not been shown to improve communication or outcomes (Baker

880-881). Therefore, while there is a correlation between low to marginal literacy and negative

health outcomes, the relationship is poorly understood. Consequently, researchers and clinicians

must account for the fact that HL is not isolable to reading and numeracy skills. It follows, then,

that HL assessments should reflect the understanding of literacy as a socially-situated practice.

Such an orientation acknowledges the multiple social and personal resources and knowledges

healthcare users bring to literacy events. In addition, HL assessments should account for

users’ social and cultural ways of knowing and being and treat these knowledges as assets rather

than barriers to HL.



In what follows, we review the most widely used HL assessment tools: the REALM,

TOFHLA, and NVS. We then introduce a new Health Literacies framework that integrates

insights from NLS. We conclude by employing this framework in our evaluation of the HL tools

outlined below.

REALM

The Rapid Estimate of Adult Literacy in Medicine (REALM) was developed by Davis et

al. in 1993 as a “rapid screening instrument designed to identify patients who have difficulty

reading common medical and lay terms that are routinely used in primary care patient education

materials” (Davis et al. 391). The original instrument required “approximately five minutes to administer and score”; in response to physician feedback, Davis et al. revised the REALM into a shortened version that requires “only one to two minutes for

completion.” Davis et al. describe the REALM as “a reading recognition test that measures a

patient’s ability to pronounce words in ascending order of difficulty,” citing the test as “unique”

in that its content consists solely of “commonly used lay medical terms,” thus “making the

REALM particularly useful for estimating literacy skills in medical settings” (392). The medical

terms included in the REALM were deemed “commonly used” in part according to “item

analyses determin[ing] which words best identified patients with limited reading skills” and

“the frequency” of such words “in written material given to patients” (392). Davis et al. explain

that, to administer the test, “Patients are asked to read aloud as many words as they can, beginning with the first word in column one. There is no time limit. […] A patient’s reading raw score is the total number of correctly pronounced words. […] Dictionary pronunciation is the scoring standard” (392).

In a nominal attempt to verify the accuracy of such an exam across cultural differences, Davis et

al. note that, “A dictionary is the recognized guide for people seeking help in pronouncing

unfamiliar words, regardless of their culture or the region of the country in which they reside”

(392). In framing the REALM as a “reading recognition test,” Davis et al. cite previous research declaring such tests “useful predictors of general reading ability,” though they add that “the results of the test do not imply comprehension or interpretation but only agreement on the sound of the word”

(393). Davis et al. reason that, “if patients have trouble reading and pronouncing words, one is

then alerted to the possibility that reading comprehension is likely to be a problem” (393). Davis

et al. suggest that a patient’s REALM results be used by clinicians and researchers “to identify

patients who may have difficulty reading materials given to them in medical settings, provide a

numerical estimate of how severe their reading difficulty is, and select or create materials written

at the appropriate level” (393). Davis et al. add that “if for any reason medical professionals

need a more complete assessment of reading, including a specific grade equivalent reading level,

the REALM would not be an appropriate test” (393).

In their recounting of trial runs of the shortened REALM, Davis et al. report that, of 215 potential subjects, all but 12 (5%) were tested: “Nine could not see well enough to participate, one was in pain, and two refused when they were asked to read,” leading the research assistants to speculate that they could not read (394). Such speculations are problematic as they construe an individual’s non-participation as an indication of a lack of ability, while also overlooking other

reasons for not taking the test, including the possibility that users did not share an interest in the

test or perhaps felt insulted at having their health literacy assessed. Writing in 2007, Paasche-Orlow and Wolf argue that literacy assessments in healthcare contexts risk alienating and shaming healthcare users. Citing Wolf’s previous research, they confirm

that while 90% of users in their study found it helpful for practitioners to know they needed

some help with medical terminology, “almost half (48%) of patients reading at third-grade level

reported they would feel ashamed if medical staff knew this information” (Paasche-Orlow and

Wolf 101). Integrating insights from basic literacy research, Kelly Bradbury argues that within

academic professions, there exists a “hierarchy of knowledge” and culture that values

intellectualism over everyday knowledge. Thus, in the clinical context, practitioners feel justified in insisting on a basic level of literacy to support their own technologies of communication. Bradbury argues for a broader understanding of intellectualism that could, if applied to

HL, obviate the need for literacy testing in clinical spaces, if we reassess what Baker calls the

“literacy demand” healthcare systems place on the public (880). Paasche-Orlow and Wolf agree

that this demand complicates communication and that healthcare users across the literacy

spectrum would benefit from an emphasis on clear and jargon-free oral communication and

print-based healthcare information (101).

TOFHLA

Two years after REALM’s creation, the Test of Functional Health Literacy in Adults

(TOFHLA) was developed in 1995 by Parker et al. to serve as a “valid, reliable instrument to

measure the functional health literacy of patients” (537). To accomplish these ends, the

TOFHLA “consists of a 50-item reading comprehension and 17-item numerical ability test,” composed of “actual hospital materials” and “taking up to 22 minutes to

administer” (537). The development of the TOFHLA was significant in its then-unique ability to

assess Spanish-speaking as well as English-speaking persons and to test a person’s capacity to

read and understand numbers, or quantitative literacy (Parker et al. 538). For the reading

comprehension portion of the TOFHLA, users are given passages with every 5th to 7th word

omitted, tasked with selecting from “four possible choices, one of which is correct and three of

which are similar but grammatically or contextually incorrect” (538). The passages are excerpts from “instructions for preparation for an upper gastrointestinal series, the patient

rights and responsibilities section of a Medicaid application form, and a standard hospital

informed consent form” (538). The TOFHLA’s 17-item numerical ability section consists of

“actual hospital forms and labeled prescription vials,” designed to test the user’s ability to

“comprehend directions for taking medicines, monitoring blood glucose, keeping clinic

appointments, and obtaining financial assistance” (538). Parker et al. explain that for the

numeracy portion of the TOFHLA, “patients are presented with cue cards or labeled prescription

bottles and asked to respond to oral questions regarding information about the cards or bottles”

(538). To calculate an individual’s overall TOFHLA score, “the sum of the reading

comprehension and the weighted numeracy scores” is calculated, with scores “rang[ing] from 0

to 100 and [with] equal distributions” from both the reading comprehension and numerical

portions (538).
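As a rough illustration, the scoring arithmetic just described can be sketched in a few lines of Python. The per-item weights here (one point per reading item, 50/17 points per numeracy item) are our assumption about how a 50-item and a 17-item section would contribute “equal distributions” to a 0-100 scale; Parker et al. do not specify the exact weighting.

```python
def tofhla_score(reading_correct: int, numeracy_correct: int) -> float:
    """Illustrative TOFHLA composite: reading and numeracy each contribute
    up to 50 points, so the total ranges from 0 to 100."""
    if not (0 <= reading_correct <= 50 and 0 <= numeracy_correct <= 17):
        raise ValueError("reading is scored out of 50 items, numeracy out of 17")
    reading_points = float(reading_correct)          # 1 point per reading item
    numeracy_points = numeracy_correct * (50 / 17)   # 17 items weighted to span 50 points
    return round(reading_points + numeracy_points, 1)
```

Under this assumed weighting, a user with 25 reading items and all 17 numeracy items correct would score 75.0, illustrating how a strong numeracy section can offset weak reading comprehension in the composite figure.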

When discussing the implications and future uses of the TOFHLA, its creators described

their assessment as “an appropriate tool for measuring functional health literacy [that] should

provide better insight into the problems that low-literacy patients face in the healthcare setting,”

calling for “further investigation […] to assess not only the overall prevalence of low literacy,

but also how it actually affects patients’ abilities to understand their medical conditions and

adhere to treatment recommendations” (541). Thus, the developers of TOFHLA themselves

recognized the need for a clearer understanding of how HL functions to affect variations in

healthcare outcomes and expenses. Described generally, such a need can be understood as the

absence of a clear connection between an individual’s ability to transfer skills between contexts

(functional HL) and the consequences of the quality of that transference as measured by an

individual’s experience with those contexts (healthcare outcomes) and the costs for involved

parties (healthcare expenses).

NVS

While the REALM assesses reading fluency by testing a person’s capacity to read and

enunciate words correctly, the TOFHLA is designed to assess functional literacy by testing a person’s comprehension of various types of health-related documents. However, though the TOFHLA has the advantage of being designed to predict low HL in both English and Spanish speakers, it is lengthy and time-consuming. To address this limitation, Weiss et al.

developed and tested the Newest Vital Sign (NVS) in 2005. Like the TOFHLA, the NVS tests

for functional HL and numeracy. Though formed from a series of scenarios, the NVS presents users with health-related information in a single scenario (an ice cream nutrition label) and then asks users to answer six questions relating to that information. The NVS compares favorably to

the long version of TOFHLA in predicting lower levels of HL in both English and Spanish

speaking users, and is particularly sensitive to identifying “marginal” HL (Weiss et al. 520). In

contrast with the TOFHLA, which was developed “to help identify problems that low-literacy

patients face in the healthcare setting” (Parker et al. 541), the NVS was designed specifically to

identify users with low levels of HL to “alert physicians to patients who may need more attention

and help physicians focus on physician-patient communication using recommended techniques”

(Weiss et al. 520). In the context of the NVS, measuring an individual’s capacity to process and

use health-related information serves to improve user-practitioner communication through

raising practitioners’ awareness of the potential presence of low HL in their patient populations. When the NVS instrument identifies a patient as having low literacy, practitioners can engage appropriate educational support or more in-depth communication with that patient. The

developers of the NVS concede that literacy is a “complex construct that encompasses many

aspects of how individuals use health information and the healthcare system” (Weiss et al. 521).

Like the TOFHLA and the REALM, the NVS only “measures reading and interpretation skills

[…] rather than all aspects of HL” (Weiss et al. 521).

Health Literacies Framework

We conclude by suggesting a health literacies framework useful for guiding HL

assessment in clinical and community contexts. Because it draws from new literacies studies

outlined above, this framework presents a more complex understanding of literacy across

functional, communicative, and critical domains. Therefore, we utilize the plural literacies to

emphasize the multiple forms of HL. As new literacies studies point towards a framework that

understands literacy as a social practice, we address the functional, interactive and reflexive

qualities of literacy. Our health literacies framework acknowledges HL as an asset and resource,

rather than an indicator of deficiency. In developing our health literacies framework, we ask:

1. Does the HL assessment only measure a healthcare user’s performance of a

skill, or does the assessment provide a range of situations where HL might be

assessed?

2. Does the HL assessment measure or otherwise account for the personal, emotional,

social, and cultural resources the healthcare user brings to the literacy event?

3. Does the HL assessment identify and evaluate critical literacy skills that may help

inform appropriate health-related decisions and build upon prior knowledge?

The REALM was developed to test for the singular presence or absence of reading skills—

specifically the decoding of the proper use and pronunciation of particular words in the medical

world. Within the context of the REALM, the dictionary pronunciation is held as a criterion for

assessment, which is arguably an arbitrary and culturally-biased choice that obscures knowledge

held by those outside English-speaking contexts. In addition, the tool’s scope is limited to

identifying healthcare users who have low levels of print-based literacy.

The singular focus on the individual's reading skills, or lack thereof, is problematic on two fronts. As Papen notes in her research on healthcare users' experience of HL practices, skills assessment often supports a deficit view of the person assessed. A closer look at the REALM's scoring rubric reveals a deficit view of health literacy that is prescriptive in nature. For example, a user who pronounces 0-18 words correctly is labeled "third grade or below," while users scoring 19-44 words are designated at a "fourth to sixth grade" level; the rubric addresses these users directly: "you will not be able to read prescription labels." Users achieving a score of 45-60 correctly pronounced words are placed at a "seventh to eighth grade" level and "will struggle with most patient education materials and will not be offended by low literacy materials." A score of 61-66 places a user at a high school level and judges these users as generally independent with patient education materials ("Health Literacy Measurement Tools (Revised) Agency for Healthcare Research & Quality"). Clearly, the REALM tests only for an individual's capacity to process medical information, thus failing on all counts to engage HL as a socially situated practice.
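The rubric's reductiveness is easy to see when its banding is written out as code. The sketch below is our own illustration, not part of the REALM instrument; the function name is invented, and only the score ranges and grade-level labels come from the published rubric.

```python
def realm_grade_band(correct_words: int) -> str:
    """Map a REALM raw score (number of words pronounced
    correctly, 0-66) to the rubric's grade-level band."""
    if not 0 <= correct_words <= 66:
        raise ValueError("REALM raw scores range from 0 to 66")
    if correct_words <= 18:
        return "third grade or below"
    if correct_words <= 44:
        return "fourth to sixth grade"
    if correct_words <= 60:
        return "seventh to eighth grade"
    return "high school"
```

That the whole assessment collapses to a single word count, with no input for context, resources, or prior knowledge, is precisely the deficit orientation criticized above.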



As a measure of functional HL, the TOFHLA assesses an individual's comprehension of print-based texts commonly found in a clinical encounter. Although the TOFHLA provides a variety of texts in different but related contexts, it measures a small range of skills (reading and comprehension) and neglects to assess the ability to critically reflect on and use the information. The TOFHLA also ignores the social situatedness of literacy, as the examples it provides are static and not tailored to the individual user's needs, and its choice of texts for assessment is arbitrary. While the REALM's goal is to aid communication between health practitioners and users, the TOFHLA was specifically designed to enable users to "understand their medical conditions and adhere to treatment recommendations" (Parker et al. 541). The TOFHLA thus moves from enhancing practitioner-user communication to improving adherence and self-care. While the TOFHLA does recognize that HL is a practice, the tool places stress on the proper use of health-related information rather than on an ability to interpret that information critically and reflexively to make appropriate healthcare decisions (Chinn). The TOFHLA scoring sheet is, however, less prescriptive and punitive, as individual scores indicate "inadequate," "marginal," and "adequate" levels of HL (STOFHLA Directions for Administration, Scoring and Technical Data 1-17).
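The S-TOFHLA's three-category scheme can be sketched the same way. The function itself is our illustration, not part of the instrument, and the cutoff values are assumed from commonly published S-TOFHLA scoring conventions; they should be checked against the official directions.

```python
def stofhla_category(score: int) -> str:
    """Classify an S-TOFHLA raw score (0-36) into the three
    descriptive levels named on the scoring sheet. Cutoffs
    are assumed from published scoring conventions."""
    if not 0 <= score <= 36:
        raise ValueError("S-TOFHLA scores range from 0 to 36")
    if score <= 16:
        return "inadequate"
    if score <= 22:
        return "marginal"
    return "adequate"
```

Even in this gentler scheme, the healthcare user is still reduced to a single number; none of the personal, social, or cultural resources named in our framework enter the calculation.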

The latest tool for assessing HL in clinical settings, the NVS seems to move towards a more situated understanding of literacy, assessing HL through healthcare-related scenarios outside of strictly medical contexts: its questions test reading skills, comprehension, and numeracy against a nutrition label. By asking, for example, whether a child with a peanut allergy can safely eat the ice cream the label describes, the NVS requires close and applied reading skills. Yet the NVS also fails to gauge the personal, social, and cultural resources available to the healthcare user.



The REALM, TOFHLA, and NVS were developed for use in clinical contexts, and over time they seem to reflect an evolving conception of literacy as a situated practice. These assessment tools, however, are driven by narrow definitions of HL that reflect basic reading, comprehension, and numeracy skills. Narrow definitions of HL fulfill the needs of health professionals who, in a busy clinical environment, need to know whether users understand the complex terminology that constitutes the health profession. Assessments such as the REALM, TOFHLA, and NVS provide a quick assessment and quantifiable data useful in literacy research. However, when assessed within the health literacies framework, the REALM, TOFHLA, and NVS privilege the values, knowledge, learning sites, and educational experiences of medical and health practitioners. With a more complex understanding of HL, the stigma assigned to lower levels of functional literacy can be removed. At the same time, we can acknowledge other ways of knowing that are assets to health. Finally, it may be worth considering how complex understandings of health literacies place the onus on health practitioners to communicate with all users, without the need to add another diagnosis, Health Literacy Deficiency, to those persons seeking their care.

Works Cited

Baker, David W. “The Meaning and the Measure of Health Literacy.” Journal of General

Internal Medicine 21.8 (2006): 878-883. Web.

Barton, David. Literacy: An Introduction to the Ecology of Written Language. 2nd ed. Malden,

MA: Wiley-Blackwell, 2007. Print.

Barton, David, et al. Literacy, Lives, and Learning. London: Routledge, 2007. Print.

Behrman, Edward H. “Community-Based Literacy Learning.” Literacy 36.1 (2002): 26-32. Web.

Berkman, Nancy D., Terry C. Davis, and Lauren McCormack. “Health Literacy: What Is It?”

Journal of Health Communication 15 Sup 2 (2010): 9-19. Web.

Chinn, Deborah. “Critical Health Literacy: A Review and Critical Analysis.” Social Science &

Medicine 73.1 (2011): 60-67. Web.

Davis, Terry C., et al. "Rapid Estimate of Adult Literacy in Medicine: A Shortened Screening

Instrument.” Family Medicine 25.6 (1993): 391-395. Print.

DeWalt, Darren A., et al. “Literacy and Health Outcomes.” Journal of General Internal Medicine

19.12 (2004): 1228-1239. Web.

Grabill, Jeffrey T. Community Literacy Programs and Change. Albany: SUNY Press, 2001. Print.

"Health Literacy Measurement Tools (Revised)." Agency for Healthcare Research & Quality.

Ahrq.gov. N.p., 2017. Web. 11 Mar. 2017.

Haun, Jolie N., et al. “Health Literacy Measurement: An Inventory and Descriptive Summary of

51 Instruments.” Journal of Health Communication 19 Sup 2 (2014): 302-333. Web.

Mackert, Michael, and Meg Poag. "Adult Basic Education and Health Literacy: Program Efforts and

Perceived Student Needs.” Community Literacy Journal 5.2 (2010): 68-73. Print.

Nielsen-Bohlman, Lynn, Allison M. Panzer, and David A. Kindig, eds. Health Literacy: A

Prescription to End Confusion. Washington, DC: National Academies Press: Institute of

Medicine, n.d. E-book.

Nutbeam, Don. “The Evolving Concept of Health Literacy.” Social Science & Medicine 67.12

(2008): 2072-2078. Web.

Paasche-Orlow, Michael K., and Michael S. Wolf. "Evidence Does Not Support Clinical Screening

of Literacy.” Journal of General Internal Medicine 23.1 (2007): 100-102. Web.

Papen, Uta. “Literacy, Learning, and Health—A Social Practices View of Health Literacy.”

Literacy and Numeracy Studies 16.2 and 17.1 (2009): 19-34. Web.

Parker, Ruth M. et al. “The Test of Functional Health Literacy in Adults.” Journal of General

Internal Medicine 10.10 (1995): 537-541. Web.

Pleasant, Andrew, and Shyama Kuruvilla. "A Tale of Two Health Literacies: Public Health and

Clinical Approaches to Health Literacy." Health Promotion International 23.2 (2008):

152-159. Web.

Selden, Catherine, et al., eds. Health Literacy. N.p.: National Library of Medicine, 2000.

STOFHLA Directions for Administration, Scoring and Technical Data. Literacy in Health Care,

2017. PDF.

Street, Brian V. Literacy in Theory and Practice. New York: Cambridge University Press, 1985.

Print.

Weiss, Barry et al. “Quick Assessment of Literacy in Primary Care: The Newest Vital Sign.” The

Annals of Family Medicine 3.6 (2005): 514-522. Web.

Wolf, Michael S., et al. "Patients' Shame and Attitudes toward Discussing the Results of Literacy

Screening.” Journal of Health Communication 12.8 (2007): 721-732. Web.



Zarcadoolas, Christina, Andrew Pleasant, and David S. Greer. “Understanding Health Literacy:

An Expanded Model.” Health Promotion International 20.2 (2005): 195-203. Web.
