
Theory & Psychology

http://tap.sagepub.com/

Numbers and interpretations: What is at stake in our ways of knowing?


Jeanne Marecek
Theory Psychology 2011 21: 220
DOI: 10.1177/0959354310391353

The online version of this article can be found at:


http://tap.sagepub.com/content/21/2/220

Published by:

http://www.sagepublications.com


Downloaded from tap.sagepub.com at CLARK UNIV on April 19, 2011


Article

Theory & Psychology
21(2) 220–240
© The Author(s) 2011
Reprints and permission: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0959354310391353
tap.sagepub.com

Numbers and interpretations: What is at stake in our ways of knowing?

Jeanne Marecek
Swarthmore College

Abstract
This article reflects on a set of target articles concerned with the use of quantitative procedures
in interpretive research. The authors of those articles (Osatuke & Stiles; Westerman; and Yanchar)
discuss ways that numerical procedures can be brought into interpretive studies, using illustrations
from research programs on psychotherapy process, schools, law courts, and work life. Instead
of the usual quantitative–qualitative distinction, I use Geertz’s distinction between experimental
science and interpretive science and Kidder and Fine’s distinction between Big-Q and small-q
research to reflect on several procedural and epistemological differences among target papers.
The diversity of approaches under the umbrella of qualitative methods is described, along with
some recent developments. Even though US psychology continues to mount stiff resistance against
incorporating interpretive approaches into its knowledge-producing practices, such approaches
are flowering in other parts of the world.

Keywords
Big-Q, discursive psychology, interpretation, interpretive science, qualitative methods

Years ago, Clifford Geertz (1973) formulated what he regarded as a crucial distinction in
the human sciences:

Believing, with Max Weber, that man is an animal suspended in webs of significance he himself
has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental
science in search of laws but an interpretive one in search of meaning. (p. 5)

In US psychology, meaning-centered analyses have been advocated mainly by those,
such as Jerome Bruner and Richard Shweder, who work at the intersection of psychology
and anthropology. The mainstream has been resolute in its commitment to experimental
science. Few orthodox US psychologists see a place in the discipline for studies of
subjectivity and few believe that psychology stands to gain from introducing interpretive
strategies into its knowledge-producing practices. Yet, though meaning-centered analyses
and sociocultural models of psychic life are marginal to the central interests and
concerns of the discipline in the US, they flourish on the periphery (cf. Kirschner &
Martin, 2010).

Corresponding author:
Jeanne Marecek, Department of Psychology, Swarthmore College, Swarthmore, PA 19081, USA.
Email: jmarece1@swarthmore.edu
The three target papers (Osatuke & Stiles, 2011; Westerman, 2011; Yanchar, 2011)
revisit Geertz’s divide between experimental science and interpretive science. In particu-
lar, they examine whether and how an interpretive quantitative paradigm, to use
Westerman’s term, is possible and useful for projects concerned with subjectivity and
meaning. Given the hegemony of numbers in US psychology, I applaud the authors’
energetic efforts to carve a place for interpretation and meaning. In what follows, I first
offer brief overviews of the papers. I then try to place them within the broad context of
qualitative inquiry and to reflect on some differences among them. All the authors say
their work rests on interpretation, but what they mean by interpretation differs. Moreover,
all three argue for using numbers in interpretive research, but they use numbers differ-
ently. Finally, I take a step back to consider some additional dimensions of interpretive
studies. I begin with brief summaries of the three target papers.
Osatuke and Stiles (2011) summarize and reflect on an extensive program of research
on assimilation theory. The assimilation model is “a developmental theory of psycho-
logical change” in psychotherapy (p. 200). It offers an account of the psychological
processes that clients experience as they resolve problematic experiences in therapy. The
model holds that selves are constituted by sets of cognitive-affective positions; these
positions are referred to as “voices,” a term that references Bakhtinian ideas about dia-
logicality and also denotes that “experiential traces within the person” have a kind of
agency: that is, they are “actors and movers who speak for themselves” (pp. 201–202). A
problematic experience constitutes a voice that is incompatible with the individual’s
dominant community of voices: that is, his or her usual, familiar set of (inner) voices,
which are interconnected and mutually accessible. Problematic experiences produce
negative states—anxiety, pain, conflict, and confusion—and so people are motivated to
ward them off.
Assimilation analysis, which the authors describe as an intensive, interpretive, itera-
tive process, is based on transcripts or recordings of therapy sessions. Focusing on cli-
ents’ utterances, expert judges identify and describe each of the client’s inner voices and
the passages that are related to them. The judges then evaluate “how the selected voices
were or were not assimilated in treatment” (Osatuke & Stiles, 2011, p. 204). They make
ratings using the APES (Assimilation of Problematic Experiences Scale), an eight-point
ordinal scale that indexes the sequence of stages required to assimilate problematic
experiences.
Osatuke and Stiles (2011) make several strong claims about the assimilation model.
They say that the resolution of problematic experiences in therapy requires that clients
move through every stage represented on the APES. They also assert that the sequence
of psychological changes represented on the APES characterizes improvement in psy-
chotherapy “independent of problematic content, diagnostic population, or therapeutic
approach” and “type of client” (pps. 209 and 213). Further, they assert that assimilation is
the engine of improvement in therapy; the assimilation model “explain[s] … how, in successful
therapy, the problematic experiences become resolved” (p. 201).
Michael Westerman’s paper (2011) is framed around his efforts to devise what he calls
an interpretive quantitative paradigm. This paradigm recasts important elements of quan-
titative methodology as well as important elements in qualitative research—more spe-
cifically, in conversation analysis. Like Osatuke and Stiles, Westerman is interested in
psychotherapy process research. However, his work draws from psychoanalytic theories
of conflict and defense. Moreover, whereas Osatuke and Stiles propose the assimilation
model as a grand theory of psychotherapeutic change, Westerman deliberately steps back
from proposing grand theory to more modest goals. For Westerman, measurements and
meanings are inextricably bound to their contexts; observations and explanations rest
inextricably on our prior involvements in the world. It follows then that psychological
inquiry will necessarily yield findings that are contingent. Moreover, the questions that
are brought forward for research are contingent on the situation of the researcher and on
a host of pre-reflective understandings.
Westerman describes a research program that conjoins two lines of thought: psycho-
analytic theories of unconscious conflict and defense and conversation analytic studies
of the organization of dyadic interactions. Using the techniques of conversation analysis,
Westerman examined the micro-devices in conversation sequences by which meanings
are jointly talked into being. But Westerman added person-level considerations (specifi-
cally, individual differences in interpersonal interaction patterns) to these investigations,
an addition that would be anathema to ethnomethodologically inclined conversation ana-
lysts. When people are motivated to avoid conflictual material, what conversational
devices come into play? In adding considerations of personality and psychopathology to
the study of talk, Westerman offers one way to address concerns that have been voiced
by some psychodynamic critics (e.g., Chodorow, 1999; Hollway & Jefferson, 2000)
about language-focused theories of post-structural and discursive psychologists. In the
eyes of these critics, language-focused theorists have evacuated the person from studies
of identity and subjectivity, envisioning people as moving freely among subject posi-
tions, unencumbered by personal history or desire.
Westerman describes a number of studies aimed at demonstrating that psychic
defenses operate not only to ward off unpleasant inner states and remembrances, but also
to shape conversation practices so that unpleasant interpersonal events do not occur. To
study interaction sequences in therapist–patient dialogues, he used videotaped segments
of therapy sessions. To study interpersonal defense, he used procedures that resemble
those of experimental social psychologists. For example, he used controlled laboratory
conditions in which college students responded to scripted scenarios. In other studies, he
had college students read hypothetical dialogues and then make paper-and-pencil ratings
or write possible replies to a hypothetical interlocutor. In all these studies, trained raters
evaluated speech samples or conversation sequences using a manual.
Informed by the hermeneutic perspectives of Merleau-Ponty and Wittgenstein,
Westerman (2011) argues for the necessity of situating people (research participants as
well as experimenters) in medias res and of considering the knowledge produced by
psychologists’ research as contingent on the methods used to produce it. On the surface,
these well-argued philosophical commitments seem at odds with Westerman’s research
practices (e.g., situating his studies in the laboratory, generalizing from college student
samples, employing closely scripted experimental scenarios).
Stephen Yanchar’s (2011) paper presents a sharp contrast to the previous two. Osatuke
and Stiles and Westerman emphasize that psychological phenomena can be (indeed,
should be) studied using forms and procedures that are amenable to standard quantitative
technologies (e.g., techniques for assessing inter-rater reliability and statistical methods
for testing hypotheses). In contrast, Yanchar’s concerns lie with the practices that
researchers use to capture and describe lived experience and everyday practical activi-
ties. He casts a critical eye on the conventional data-gathering procedures used by psy-
chologists, decrying their reliance on “thin numerical descriptions” and “theoretical
abstractions that often make little meaningful connection with practical involvement in
ordinary human life” (pp. 179 and 181). In place of customary data-gathering proce-
dures, Yanchar proposes what he calls a practical discourse framework.
As an exemplar of the practical discourse framework, Yanchar (2011) describes the
research program of Yrjö Engeström, a psychologist at the University of Helsinki who has
been closely associated with the Laboratory of Comparative Human Cognition at the
University of California, San Diego. Engeström’s approach draws on cultural-historical
activity theory, a theoretical orientation with origins in the ideas of Vygotsky, Luria, and
Leontiev. As Yanchar describes them, Engeström’s methods constitute an eclectic and
fluid mix, tailored to specific investigative situations. They include micro-level analyses
of discourse, observation of social interactions, and historical analysis. Regarding organi-
zations as “activity systems” that change by continually working through contradictions,
Engeström has studied work settings such as banks, clinics, schools, and factories.
The concept of an activity system is key to Engeström’s research practice. An activity
system involves an assemblage of actors, tools, materials, problems, and outcomes; it
“integrates the subject, the object, and the instruments (material tools as well as sign and
symbols) into a unified whole” (Engeström, 1993, p. 67). Activity systems are in con-
tinual flux, confronting and working through the difficulties that arise in everyday life.
Engeström’s ideas about the integration (or, perhaps, indissoluble connection) between
persons, tasks, and arenas of activity resonate with Michael Cole’s (1996) plea to psy-
chologists to re-think what they mean when they speak about “context.” For Cole, a
context should not be construed as a container that surrounds a freestanding individual.
Pointing to the root meaning of context as “weaving together,” Cole sees persons, objects,
and events as dynamically interconnecting with one another (p. 135).
As Yanchar (2011) describes it, Engeström’s work seems to have several distinctive
features. Engeström, one guesses, does not aspire to produce or test grand theories of
working life or individual psychology. Nor does he seem interested in locating causes of
individual worker behavior. Moreover, Engeström seems to take workers’ intentions,
reasons, observations, and self-interpretations as worthy of study for themselves. Related
to this stance, he engages his research participants as co-researchers who help to plan, carry
out, and interpret his studies. Numbers have a place in Engeström’s research only insofar
as numerical procedures and measures are part of the lived experience and everyday prac-
tice of the people he studies. In other words, as Yanchar (2011, p. 186) says, investigation
and analysis should focus on “practical numeracy” (e.g., the use of pay scales, time charts,
and job evaluations that were part of the work setting prior to the entry of the researcher).

Putting myself in place


Before I proceed, I need to describe the standpoint from which I speak. My formal train-
ing as a psychologist centered on canonical psychological methods—laboratory-based
experiments, the construction and validation of quantitative scales and inventories, and
statistical hypothesis-testing. However, early in my professional life, my knowledge
interests—the production of gender in everyday life—led to a methodological impasse.
Along with many other feminist scholars of the time, I became disenchanted with the
context-stripping practices involved in laboratory experiments: What was stripped away
was exactly the cultural and social content that was my focus of interest (Marecek, 2001).
Similarly, “canned” measurement scales—which I had been taught to prize for their stan-
dardization and presumed universal utility—seemed to preclude rather than facilitate
efforts to study the complex intertwining of gender with other markers of social location
(Hare-Mustin & Marecek, 1994). More and more, I came to believe that societal context
and social location exploded psychologists’ claims of universality. The more that psy-
chologists endeavored to pin down “true” differences (or similarities) between men and
women, the more apparent it became that constructs that indexed those differences, such
as aggression, self-esteem, depression, sexual interest, emotionality, and autonomy, were
always already gendered (and often raced and classed as well). That is, their meanings
and social consequences shift in accord with the particularities of those we study
(Marecek, 1995). Jerome Kagan’s (1998) characterization of psychology’s lexicon as a
corpus of unconstrained words seemed all too apt. Efforts to measure or quantify these
slippery qualities seemed misplaced and futile.
My disappointment with conventional methods deepened further once I began to live
in South Asia and to work as a researcher there. As I came to share in life worlds and
cultural models profoundly different from those of North Americans, I could see in stark
relief that much of what US psychology holds as universal—self, emotion, vocabularies
of psychological distress, milestones of children’s development, and stages of the
lifecycle—was culturally contingent. So too were the conceptions of normality, maturity,
mental health, and optimal development that our America-centric textbooks promulgate.
My goal of understanding the concepts, categories, moral visions, and cultural models
that organize everyday life for South Asians could not be accomplished with the America-
centric theories, categories, and measurement technologies that I knew.
My interests in gender and culture led me into intellectual partnerships with anthro-
pologists, sociologists, and feminist theorists. Many of them looked askance at psycholo-
gists as gauche bumblers, but they welcomed me nonetheless. This was an unsettling
time in my intellectual life and I will always be grateful for their acceptance, gentle tutor-
ing, and support. Stretches of time spent as a visiting scholar in Sweden and in Norway
over the past dozen years served further to make it evident that, even in comparison to
other western high-income countries, US psychology—with, inter alia, its wholehearted
embrace of numbers—had a culture all its own.
All these experiences led me to look at, rather than through, the lenses of US psychol-
ogy. I began to ask questions: What lies behind our taken-for-granted categories and
what holds them in place, even when their adequacy has been called into question over
and over again for decades? How have notions of methodological adequacy become so
totalizing? What price do we pay for privileging statistical significance as the royal road
to truth? What are the acculturation processes (and the ongoing patrolling mechanisms)
of the discipline that seal off the boundary between psychology and the other social sci-
ences? To whose benefit? At what peril?

Re-thinking dichotomies, distinctions, and differences in interpretive research

Conventional psychologists typically have drawn a sharp distinction between quantita-
tive and qualitative methods. According to this received distinction, which is constructed
from the vantage point of quantitative researchers, qualitative research is defined in
terms of what it lacks: it is research “without numbers.” The authors of the target papers
find the distinction between numbers and without-numbers an unhelpful simplification.
I agree. The dichotomy imposes a false homogeneity on both poles.
Psychological researchers use numbers in many different ways. Psychological research-
ers use many different qualitative procedures for data collection, as well as a multitude
of interpretive strategies; perhaps most important, they operate with differing episte-
mologies. Viewing qualitative and quantitative approaches as a dichotomy also conceals
many areas of overlap between them, several of which have been discussed in the target
papers. As all the target papers note, all research projects, whether quantitative or quali-
tative, involve interpretation and subjectivity at many points throughout the inquiry pro-
cess. As Osatuke and Stiles (2011) say, “any understanding we reach is somebody’s
understanding and is subjective in this sense” (p. 215).

Big-Q and small-q research


Leaving aside the unhelpful distinction between quantitative and qualitative methods,
there remain distinctions among qualitative approaches that seem important to me.
I draw on these distinctions to map differences among the target papers. One such dis-
tinction was formulated by Louise Kidder and Michelle Fine (1987). They coined the
terms small-q and Big-Q to denote two contrasting modes of qualitative research. Small-q
methods, they said, involve “the insertion of open-ended questions into a survey or
experiment that has a [conventional] structure or a design” (p. 59). In small-q research,
the goal is hypothesis-testing and the model is the hypothetico-deductive method taught
in psychology methods classes. The hypotheses, constructs, and procedures are fixed at
the onset of the study and do not change as the research progresses. Researchers take
considerable care to make sure that their procedures are standardized. For example, all
participants in the study undergo the same treatment and they are assessed in the same
manner. In interviews, participants are asked identical questions in identical order, with
the interviewer attempting to vanish into the woodwork as much as possible. The data are
coded according to categories that are pre-established at the outset of the study. There is
little or no possibility that small-q research can uncover surprising or novel phenomena,
and such surprises would be unwelcome. Indeed, a surprising response would probably
be deemed an “outlier” or discarded as “unscorable.”

Big-Q research, by contrast, proceeds without specific hypotheses, predetermined
variables, or regimented data collection procedures. Whether via fieldwork or via
relatively unstructured conversations, Big-Q researchers attempt to work from the ground
up, entering people’s lives and social settings “with all pores open” (Kidder, 1994). They
hope to glimpse the categories, stories, and meanings that people use to frame their lives.
In Big-Q research, the procedures and interview questions may change as the research
progresses, making use of what the researcher is learning from the participants. More
fundamentally, the research questions are refined as the study proceeds, with new ques-
tions coming to light through the investigator’s immersion in participants’ lived experi-
ences. Howard Becker (1998), one of sociology’s most venerated ethnographers, has
referred to this as “think[ing] about your research while you are doing it” (p. 1). As
Kidder and Fine say, small-q researchers look for answers, but Big-Q researchers look
for questions. Surprising data, far from being unscorable, are embraced. Data analysis
proceeds inductively, with new concepts and theory generated from the ground of par-
ticipants’ lives and stories.1

Big-Q research comes in many flavors


Big-Q research encompasses an assortment of data collection procedures, analytic strate-
gies, and epistemological commitments. I briefly describe four axes of variation among
Big-Q projects. One axis concerns the degree to which data gathering is structured. At
one end are the fieldwork approaches practiced largely by anthropologists and some
sociologists, which Clifford Geertz (1998) has called “deep hanging out” (p. 69). As
Erving Goffman (1989) put it,

It’s . . . getting data, it seems to me, by subjecting yourself, your own body and your own
personality, and your own social situation, to the set of contingencies that play upon a set of
individuals, . . . so that you are close to them while they are responding to what life does to
them. (p. 125)

Geertz (1983) has characterized fieldwork as taking an active part in the ongoing stream
of activities to see “from the natives’ point of view” in order to figure out “what they
think they are up to” (p. 58). The other end of this axis includes modes of data collection
that elicit verbal (or, occasionally, visual) material from respondents during formalized
encounters like semi-structured interviews. Psychologists, psychological anthropolo-
gists, and person-centered anthropologists tend to cluster at this end of the axis, although
some psychologists (such as Sunil Bhatia, 2007; Gary Gregg, 2007; and Louise Kidder,
2000) have pursued fieldwork.
Another axis of difference concerns epistemology. Many psychologists who have
criticized qualitative psychology have erroneously conflated it with postmodern thought.
This conflation not only conceals important differences among researchers, but also
erases the long history of qualitative inquiry in psychology. Across the span of history,
most psychologists who have carried out qualitative studies have not been “postmod-
ern.” (Consider, for example, John Dollard’s Caste and Class in a Southern Town, 1937.)
On the other hand, there are Big-Q investigators who have framed projects within
constructionist, poststructuralist, or postmodern meta-theories. They have taken up, for
example, questions of representations of reality, ideology, and power, and the relational
and institutional processes through which dominant discourses are circulated, natural-
ized, and challenged. An example is Helen Gremillion’s (2003) study of the uninten-
tional deployment of cultural ideologies of femininity and the body by caregivers on an
eating disorders treatment unit.
A third axis concerns analytical strategies. Some Big-Q researchers favor highly
structured procedures for inducing categories and meanings. Proponents of grounded
theory, for example, have laid out detailed procedures for identifying units of analysis in
verbal material and for extracting repeating ideas, categories, and themes from unstruc-
tured talk (Glaser & Strauss, 1967; Strauss & Corbin, 1990). In psychology, Carl
Auerbach and Louise Silverstein (2003) have composed a step-by-step blueprint, which
is loosely based on principles of grounded theory, for analyzing interview material. Other
researchers, however, have resisted such codification. Jonathan Potter and Margaret
Wetherell (1987), for example, have objected to the idea that research methods are neu-
tral, all-purpose tools; they disavow the goal of “perfecting a cast iron methodology” (p.
179). In Potter’s (1996) view,

Indeed, it is not clear that there is anything that would correspond to what psychologists
traditionally think of as a “method.” . . . The lack of a “method” in the sense of some formally
specified set of procedures and calculations, does not imply any lack of argument or rigour; nor
does it imply that the theoretical system is not guiding analyses in various ways. (pp.
128–129)

For Potter, interpretive strategies should be tailored to specific projects; they should be
formed around the investigator’s questions and the research materials, not vice versa.
What is important is that an interpretive strategy be systematic and transparent and that
it produce knowledge that is useful and generative.
A fourth axis concerns the relationship between the researcher and the research par-
ticipants. At one end of the axis are projects in which participants are positioned much
like participants in conventional psychology experiments. At the other end are approaches
in which participants are co-producers of knowledge. In participatory action research
(PAR), for example, the participants define the questions to be investigated, carry out the
investigations, collaborate in interpreting the data, and decide how to use the research
outcomes to promote change (Maguire, 2000). For example, Michelle Fine and her stu-
dents enrolled urban minority youth in “research camps,” where they learned to design
surveys and to conduct interviews and focus groups. The youth used these methods to
solicit other students’ experiences of racial and class-based injustice in their schools and
communities (Fine, Torre, Burns, & Paine, 2006).

Big-Q and small-q: Turning to the target papers


The authors of all the target papers (Osatuke & Stiles, 2011; Westerman, 2011; Yanchar,
2011) see their work as explicitly interpretive, but a close look shows that they mean
different things by this. The Big-Q/small-q distinction offers one way to map these
differences. Before I draw this map, I must note that I do not intend the term “small-q”
to denigrate certain approaches or to suggest that these approaches offer only “small”
interpretations.
Assimilation research seems closely akin to small-q inquiry. Although it addresses
itself to “phenomena that are deeply subjective” (Osatuke & Stiles, 2011, p. 201), it
seems firmly anchored in the paradigm that Geertz (1973) called an “experimental sci-
ence in search of laws” (p. 5). The studies that Osatuke and Stiles have described inves-
tigated specific hypotheses derived a priori from assimilation theory. These hypotheses
did not change as the investigations progressed. The data (typically, passages excerpted
from therapy sessions) were rated using the pre-specified coding categories set out in the
APES; the APES itself was not open to revision. Make no mistake: I do not mean to
imply that the interpretive demands on raters using the APES are trivial; indeed, the
APES requires high levels of both specific expertise and general clinical acumen.
Westerman’s (2011) methods of data gathering and analysis also resemble in impor-
tant respects what Kidder and Fine call small-q approaches. In the studies he has
described, he set out to test hypotheses that draw upon psychoanalytic theory. His studies
of interpersonal defense relied on structured procedures that remained in place from the
start of the study to the end and did not vary from participant to participant. Research
participants were asked to make brief unstructured verbal responses, which raters then
sorted into categories that were set when the study began. The raters operated according
to a pre-defined (and theoretically derived) set of codes. To paraphrase Kidder and Fine,
Westerman sought to answer questions, not uncover them.
Yanchar’s (2011) portrayal of Engeström’s research depicts a number of Big-Q fea-
tures. Engeström did not seem to feel a compunction to treat every participant alike, to
“run subjects” through rote procedures, or even to ask everyone the same series of inter-
view questions. His procedures were designed to attend to local meanings, as well as to
the particularities of participants’ identities and to the social, historical, and material
context of the research. Although Yanchar does not detail Engeström’s analytic proce-
dures, it seems plausible to assume that the variegated array of data produced by his way
of working could only be approached inductively.

How does the interpretive process proceed?


The cognitive activities involved in interpreting data are different for small-q projects
and Big-Q projects. In small-q projects, coders fit data into predetermined categories. In
Westerman’s work, for example, raters categorized brief sequences of therapist–client
interactions or brief responses to a scripted scenario. They used a manual that delineated
the set of categories. Osatuke and Stiles described a rating process in which judges used
the eight-point Assimilation of Problematic Experiences Scale to rate passages from
therapy transcripts. In such small-q research, coding is a task akin to birdwatching: that is,
matching to a prototype. Coders are, in effect, tasked with judging whether or not a spe-
cific instance is a member of a pre-determined category. In small-q research, coders can
find only what they are looking for.
The interpretative process in Big-Q research involves different cognitive activities.
There are no a priori categories and the data are not evaluated against a set of prototypes.
The interpreters begin with general research concerns, but these are not hypotheses or
even specific questions. The interpreters work inductively, seeking emergent patterns in
participants’ meanings, concerns, language practices, or cultural models and then formu-
lating constructs that capture those patterns. Interpreters read and re-read passages, com-
paring them to one another until a pattern appears. This interpretive process is like
playing chess: initially, many tentative possibilities come into view, to be set aside,
reconceived, or perhaps later resurrected. It is not uncommon to analyze a body of data
in different ways, producing multiple findings, a practice that some conventional psy-
chologists find disconcerting and unscientific (cf. Pratto & Walker, 2004). Potter and
Wetherell (1987) offer this description of the interpretive process:

[A]nalysis involves a lot of careful reading and rereading. Often it is only after long hours of
struggling with the data and many false starts that a systematic patterning emerges. False starts
may occur as patterns appear, excitement grows, only to find that the pattern postulated leaves
too much unaccounted, or results in an equally large file of exceptions. (p. 168)

“Coding” and “rating” are misnomers for this process. Small-q investigators can employ
(and may prefer) raters who have no knowledge of the research questions. In contrast,
Big-Q investigators do the interpretive work themselves. Indeed, formulating the
research questions is the heart of the interpretive task.

Causes or reasons: What kind of accounts does the interpretive process yield?

In the passage that opens this paper, Geertz (1973) contrasts what he calls an “interpre-
tive science in search of meaning” with an “experimental science in search of laws”
(p. 5). What happens if we map the target papers onto Geertz’s distinction? Interpretive
science, it seems to me, figures people as sense-making, purposive actors and thus turns
attention to their reasons, intentions, reflections, moral judgments, and stories (all forged
in relation with others). By this definition, Yanchar and Engeström clearly fit within
Geertz’s interpretive science. Engeström seems to be interested in capturing what Bruner
(2008, p. 35) has called “shared ordinariness” and Geertz (1973, p. 5) has called the
“webs of significance” that people spin. It is hardly surprising that Engeström asked
participants (in one study, cleaners) to produce a shared understanding of their work by
engaging in discussions of videotaped segments of their everyday work activities.
The other two target papers (Osatuke & Stiles, 2011; Westerman, 2011), though their
authors deem their work interpretive, do not sit so easily inside Geertz’s paradigm of
interpretive science. Let us consider them in turn. Assimilation research is not concerned
with seeing “from the natives’ point of view” (Geertz, 1974, p. 26). Instead it offers a
re-description of therapy clients (the “natives”) that is a view “from above.” Neither
clients nor therapists are asked for their reflections on their therapy experiences or to
contribute to the interpretive process. In other words, assimilation theory is not con-
cerned with clients’ or therapists’ sense-making. When raters use the APES to code ther-
apy passages, they are not interested in the details of clients’ talk for itself; rather they
endeavor to see through clients’ talk to a domain of mental activity that lies behind it.
Assimilation is not the reason for clients’ or therapists’ activities. That is, it is not a con-
scious goal that clients or therapists pursue or a task they deliberately engage in during
the therapy hour.2 (Indeed, it is unlikely that either clients or therapists have ever heard
the term.) Furthermore, assimilation researchers seem to seek law-like generalizations.
Recall Osatuke and Stiles’s (2011) claim that greater assimilation characterizes improve-
ment in therapy independent of presenting problem, diagnosis, type of therapy, and client
characteristics. This, in my view, constitutes a law-like generalization. Thus, assimila-
tion research seems to be an instance of Geertz’s experimental science.
Westerman’s research program is framed within psychoanalysis and his practice of
interpretation has much in common with meanings of the term in psychodynamic theo-
ries. Although interpretation is central to psychoanalysis, it does not have a single agreed
meaning. (Even Freud offered a number of different meanings.) The psychodynamic
psychotherapists Kaner and Prelinger (2005) offer a broad definition: interpretation
involves a person who draws on theoretical expertise to give another person “a sense of
possible meaning” of his or her experiences (p. 268). As in assimilation research, talk (as
well as bodily and paralinguistic behavior) points toward a domain of mental experience
that the speaker is not privy to. In this view, self-interpretation seems close to an outright
oxymoron. Westerman is careful to stand aside from what Geertz calls the “search for
laws”: that is, generalized knowledge that is not contingent on historical and cultural
circumstances. Nonetheless, I read his project as a search for causes: that is, psychologi-
cal processes that operate outside individual awareness and that shape patterns of thought
and action, and not an interrogation of reasons. Whether or not Westerman sees these
processes—and indeed psychoanalysis itself—as products of specific historical and
cultural circumstances (as some critics—including friendly critics—of psychoanalysis
assert) is hard to say.
In these latter two research programs, research participants are necessarily granted
little interpretive authority. As far as Osatuke and Stiles (2011) tell us, neither clients nor
therapists are asked to contribute to or even to comment on APES ratings. Nor does
Westerman (2011) suggest that his research participants are consulted during the
interpretive process.
In summary, key terms—notably, interpretation, subjectivity, and practical activities—
have different meanings across the three target papers. Moreover, the investigators seek
to produce different kinds of accounts of human action. These differences bear on their
research procedures, the social relations of the research project (i.e., among investiga-
tors, interpreters, and participants), and the uses to which quantification might be put.

On the uses and misuses of numbers in interpretive research


The target papers make strong cases for various forms of quantification (e.g., statistical
hypothesis-testing, assessments of inter-rater agreement, and practical numeracy) in
interpretive research. They show us concrete and workable examples of what Westerman
termed explicitly interpretive quantitative paradigms. But what are the limits of quanti-
fication? Yanchar (2011) and Westerman (2011) have voiced reservations about certain
uses of quantification, particularly the standard measurement practices that prevail in
conventional psychology. I share their reservations about these practices and I have
reservations about certain other practices as well. Therefore, I offer some additional
observations and cautions about the limits of quantification.
I worry particularly about slippery equations between numerical indices and preci-
sion, particularly when numbers are used to index personal experience. For example,
Osatuke and Stiles (2011) say, “Numbers can help us express the products of our subjec-
tivity (concepts, theories, interpretations) in more precise language” (p. 216). Numbers,
they point out, offer ease of manipulation, aggregation, and reduction, as well as efficien-
cies of presentation in scientific communications. All true, but the “us” to whom they
refer is the research community. If we shift our focus from researchers to research par-
ticipants, and from data analysis to data collection, there may be less reason for extolling
the precision of numbers.
Research participants frequently say that numbers express their subjective experience
less precisely than words. Indeed, research participants often resist the demand to com-
press their stories into numerical responses on scales and close-ended questionnaires.
Capps and Ochs (1995), for example, describe the reactions of two participants in a study
of anxiety disorders:

[Meg] expressed frustration with the enterprise. . . . she felt that the interviews and questionnaires
did not capture her experience. Beth echoed these feelings in a two-page note attached to her
packet of completed questionnaires, in which she wrote, “I need to explain my answers in order
for them to make sense.” (pp. 7–8)

Glenda Russell (2000) studied the reactions of gay, lesbian, and bisexual citizens of
Colorado during the run-up to the referendum on Amendment 2 (an anti-gay constitu-
tional amendment). She reported a similar reluctance on the part of her participants to
reduce their stories to a sheaf of questionnaires and psychiatric symptom checklists. Of
the 663 individuals who completed the eight-page survey instrument, 496 chose also to
respond to the final item, “Tell me anything else about your response to any aspect of
Amendment 2.” The poignant stories and feelings in these responses led Russell to com-
pose a full-length monograph. They also served as source material for an oratorio.
Close-ended questionnaires are a favorite means of producing data that can effort-
lessly be converted into numbers amenable to aggregation and reduction. However,
close-ended questions severely constrain what respondents can say. Such questionnaire
items do not permit respondents to express ambivalence, give inconsistent or contradic-
tory responses, reframe the issues under consideration, or say what is really on their
minds. These constraints lead to serious distortions. I offer a brief example from my own
work in Sri Lanka (Marecek & Senadheera, in press), which concerns individuals who
have engaged in deliberate self-harm. An enduring problem for suicide specialists is
distinguishing between individuals whose self-harm reflects an intention to die and those
who have other motives. In previous studies in Sri Lanka, researchers have asked patients
hospitalized after a self-harm episode a single direct question (“Did you intend to die?”)
with close-ended response options (yes and no). Based on patients’ answers, they have
reported that up to 75 percent of such patients intended to die at the point when they
engaged in self-harm. My colleagues and I gathered open-ended narratives from persons
hospitalized for self-harm; after they had finished those stories, we asked a direct
question about intentions. When we compared the narrative accounts to the responses to
the direct question, striking discrepancies emerged. Fully half of those who said they
intended to die when asked directly had recounted at least one and sometimes two or
three different motives in their narrative accounts (e.g., “I wanted to get my father to stop
hitting my mother,” “I wanted to keep my son from enlisting in the army,” “I knew that
my girlfriend’s family would force her to return to me if I did this”). Concrete aspects of
the self-harm episodes also belied respondents’ claim that they intended to die. For
example, many carried out the self-harm in others’ presence. The close-ended question,
with its crisp, pointed answers, provided data handily pre-packaged for quantitative anal-
ysis. Unfortunately, the numerical estimates generated from these data, though precise,
bore little resemblance to reality.
Trenchant critiques of conventional psychological modes of measurement have been
advanced by several theoreticians, such as Stam (2006), Michell (2000), Danziger
(1990), and others, as Yanchar (2011) and Westerman (2011) have noted. They have
argued that numerical indices of psychological phenomena convey an illusion of preci-
sion, while actually producing “a host of unknown, uninterpretable, or unhelpful mean-
ings” (Yanchar, 2011, p. 185). Discussions of arbitrary metrics in psychological scaling
compound these concerns (Blanton & Jaccard, 2006). None of the target papers relies on
such indices. However, claims about the precision afforded by numbers might be read as
endorsements of such indices.
When a premium is placed on producing numerical grist for statistical mills, research-
ers may not pay sufficient attention to meanings. Psychological scales run rife with items
that are ambiguous, unintelligible, or (if taken literally) unanswerable (cf. Hacking,
1995). Moreover, scale items are often laden with implicit, taken-for-granted meanings
that demand considerable cultural competence to decipher. Consider a few examples
(taken from published studies):

•  How much do you feel that you identify with Japanese culture?
   [Answered on a 10-point scale, ranging from “Not At All” to “Completely”]
•  How often do you believe . . . that women would fare better in their mental health
   without men in their lives?
   [Answered on a 5-point scale ranging from “Always Believe” to “Never Believe”]
•  Circle the percentage of the time this happens to you: How often do you have the
   experience, when taking a trip . . ., of suddenly realizing that you don’t remember
   all or part of the trip?
   [Answered on scale from 0% to 100%]

An anonymous reviewer has noted that any researcher can write “poor items.” Yes. My
intent is not to upbraid researchers who write “poor items.” My point is that when
researchers are driven to produce numbers to feed into statistical mills, they may brush
aside considerations of meaning (i.e., item content). Furthermore, the conventions for
reporting research entice readers of psychology journals to take numerical scores at face
value; often journal articles provide little or no information about the items that compose
scales. Thus, “poor items” are not merely anomalies produced by an occasional poorly

Downloaded from tap.sagepub.com at CLARK UNIV on April 19, 2011


Marecek 233

trained researcher. To buttress this point, I note that one of the examples given above is
from a widely used scale of clinical dissociation and another is from a paper that was
awarded an APA research prize.
In sum, what counts as precision depends in part on one’s perspective. For instance,
Osatuke and Stiles (2011) assert that “Using the numbers on the scale . . . helps raters
express more precisely their perception of the relationship between parts of clients’ expe-
rience” (p. 200). For them, for example, anchoring the initial point on the APES with the
number “1” affords more precision than anchoring it with the verbal description
“unwanted thoughts/active avoidance.” They may value this form of precision because
their sights are set on producing data that permit statistical manipulation. From a Big-Q
researcher’s perspective, precision might mean staying as close as possible to the mean-
ings that participants give to their experiences. Some Big-Q researchers, for example,
advocate using the words of the research participants as labels for themes and coding
categories rather than reverting to professional jargon. To return to Osatuke and Stiles’s
contention that the numbers on the APES helped raters express their perceptions more
precisely: a Big-Q researcher probably would find the label “unwanted thoughts/active
avoidance” a more meaningful (and thus more precise) description of a therapy client’s
internal experiences than “1.”

Expanding the concept of “practical numeracy”


Yanchar (2011) uses the term “practical numeracy” to refer to the ways that people
engage numbers in everyday practical activities. Practical numeracy is akin to practical
discourse; therefore we can investigate everyday numerical activities in ways similar to
those we use to investigate everyday language practices. For instance, just as discursive
psychologists examine language as action, as John Austin (1962) put it, we can examine
numbers as action.
In arguing for the utility of numbers, Osatuke and Stiles (2011) have declared “num-
bers mean the same thing to everybody” and “retain their meaning across time and cul-
ture” (p. 200). Everybody? Once we step outside the scientist’s laboratory, numbers do
not seem so pristine. And when we move from counting pellets or measuring rainfall, the
meaning of a number may not be so solid. Like words, numbers are re-presentations of
reality. They carry excess meanings, do ideological work in conversations, and serve as
rhetorical resources. Moreover, in everyday life, numbers do not necessarily retain their
meaning “across time and culture.” To offer some everyday examples: In the 1950s,
Marilyn Monroe was considered a sex goddess; her body shape was extolled as a “per-
fect size 12.” Today, young women consider Marilyn Monroe unacceptably fat, even
cow-like; the “perfect size,” according to my female students, is a size 2. For patriotic US
citizens, “One if by land, two if by sea” has a special emotional resonance that it does not
have for others. And the number 9/11 assumed an enduring symbolic significance in
September 2001.
Many examples can be given of instances in which numbers do ideological work. An
important, albeit unsettled, question among historians studying the lingering effect of
British imperialism on its colonies concerns census-taking and other enumerating prac-
tices of colonial administrators in South Asia. Did the practice of enumerating colonial
subjects by caste create new meanings of caste, shifting it from a system of labor
exchange to a mode of personal and group identity (cf. Dirks, 2001)? Similarly, histori-
ans of Sri Lanka argue whether the British administrators’ penchant for enumerating
their colonial subjects by their ethnicity hardened what had been fluid boundaries among
Tamil, Sinhala, and Muslim peoples, contributing to the post-independence emergence
of a virulent ethno-political nationalism and a devastating 27-year civil war (Bandarage,
2008; Peebles, 2006).
Numbers can serve as poetic resources as well as precise measurements in people’s
everyday accounts of their experiences. Interpretive researchers (as well as everyday
listeners) can misunderstand numerical expressions if they limit themselves to literal
meanings. I offer another brief example from my research to illustrate this. In Sri Lanka,
it is widely assumed that suicide and self-harm are nearly always impulsive acts that take
place with little or no premeditation. This consensus is supported by many studies that
have asked patients hospitalized for self-harm how much time they spent contemplating
self-harm before acting. Patients usually report that they spent very little time (e.g., “less
than half an hour,” “immediately,” “all at once”) contemplating self-harm. We too found
that in response to a direct question, many self-harm patients reported only moments of
contemplation before taking action. However, when they provided unstructured narra-
tives, many of the same individuals described purchasing pesticide or large quantities of
tablets several hours before swallowing them. How could we make sense of the frank
discrepancy between two recollections (collected back-to-back) of an incident that had
taken place only a day before? Our analysis interpreted the discrepancy as a matter of
accountability management. We suggested that the response to the direct inquiry (“How
much time passed . . . ?”) was not a literal report of time elapsed, but one narrative strat-
egy (of several) for disavowing responsibility for a morally tainted action.
Numbers serve as ideological and poetic resources in professional communications as
well as in everyday talk. For example, epidemiological data (or pseudo-data) serve as
potent claims-making devices. Statistical scene-setting, as Reekie (1994) termed it, rei-
fies what it purports merely to count. Consider this statement: “According to researchers,
about 17 million Americans are compulsive spenders—an addiction that can lead to
financial, psychological and social problems, such as bankruptcy, depression, anxiety
and divorce” (Jay, 2010).
Statements of this sort—which are legion in both popular and professional psychology—
transform everyday activities into pseudo-diagnoses and set up (pseudo-scientific)
boundaries between what is ordinary and what is abnormal. Although body counts of
putative sufferers do not create categories like “compulsive shopping disorder,” they
lend a bogus solidity to them. When something can be counted, we assume that its exis-
tence and definition are settled.
We can see other examples of the ideological use of numbers in the discipline of psy-
chology itself. Since Titchener’s day, quantification has been assumed by many psy-
chologists in the US to be the sine qua non of psychology. The presence or absence of
numbers separates scientific from nonscientific projects. (Consider how many psychol-
ogy students—and, regrettably, their teachers—believe that qualitative research is “not
empirical.”) Moreover, the discipline’s gatekeepers use numbers to weed out the non-
psychological from the psychological: “If you expect to get tenure, you need to show that
you can do the real stuff [statistics].” “I can’t assign that book to a psychology class
because it doesn’t have numbers in it.” Some editors of mainstream US journals reject
out of hand manuscripts without numbers as “not research.” Some insist that numbers be
inserted into research reports, even when it is illogical to do so. In short, in the culture of
US psychology, numbers serve as the talisman of scientific credibility.

Fixing what isn’t broken?


In putting forward the advantages of bringing numbers to interpretive research, the target
papers (Osatuke & Stiles, 2011; Westerman, 2011; Yanchar, 2011) may imply that quali-
tative research is lacking in various domains: for example, rigorous specification of its
procedures, forward momentum, and innovation. In this section, I briefly address these
criticisms, suggesting that they are more apparent than real.
Westerman (2011) seems to imply that interpretive researchers’ procedures are lack-
ing in specificity in comparison to those of quantitative researchers. For example, he
describes coding categories as “the concrete specification of phenomena that is part-
and-parcel of employing quantitative methods” (p. 157). Describing the work of Horowitz
and his colleagues, Westerman says further:

The quantitative procedures . . . contribute in a second way as well. As I noted at the outset of
this paper, quantitative research requires the concrete specification of phenomena of interest.
. . . Among other things, the procedures employed in these studies included parsing the stream
of interaction into thematic units and then coding each unit for a number of distinct subtypes of
elaboration, with each described by concrete characterizations. (p. 164)

I share Westerman’s view of the importance of systematic, transparent, and precise speci-
fication of units of analysis, coding categories, and coding criteria. But I am puzzled by
the (perhaps unintended) implication that such concrete and careful specification is
uniquely demanded by quantitative research and otherwise absent. Many interpretive
researchers engage in the procedures he has outlined, even when they do not proceed to
statistical analysis. Moreover, Big-Q researchers—who deliberately do not specify a pri-
ori the content of codes, categories, and themes—nonetheless specify the steps of the
inductive process. Grounded theory analysis offers a prime example of such detailed
specification of a process for finding categories in talk (e.g., Charmaz, 2006). Other
examples of detailed methodological explications can be found in the edited collection
Finding Culture in Talk (Quinn, 2005), in Five Ways of Doing Qualitative Analysis (Wertz
et al., 2011), in Discourse as Data (Wetherell, Taylor, & Yates, 2001), and in Working
with Spoken Discourse (Cameron, 2001).
Indeed, interpretive researchers have devised additional methods to improve the pre-
cision of their data. One example concerns transcription. Studies of talk nearly always
rely on transcripts. But a transcript, as many have noted, does not (and cannot) neutrally
reproduce the talk it sets out to record. What might seem to be merely technical decisions
about how to represent speech on paper smuggle in covert theoretical assumptions and
pre-reflective understandings, as a classic paper by Ochs (1979/1999) pointed out. This
has led to considerable discussion about various methods of displaying speech on a writ-
ten page and about forms of notation (Schiffrin, 1994; Wetherell et al., 2001). Another
effort to improve precision, which I have mentioned before, is member checking (i.e.,
seeking feedback from research participants or members of their social group) to verify
categories, category definitions, and coding definitions. Some researchers have taken
member checking a step further to seek participants’ feedback about the researcher’s
interpretations (e.g., Stacey, 1988, 1991).
Yanchar (2011) expresses a worry that qualitative inquiry is drifting toward a “one-
dimensional research tradition” that relies solely on worn-out “staples” such as inter-
views and focus groups (p. 180). Yanchar’s worry seems to echo Sigmund Koch’s (1981)
caustic criticism of the “assembly line” methods of conventional psychology. In my
view, Yanchar need not worry that qualitative inquiry is stalling out; I offer a few exam-
ples of innovation to allay such concerns. Rather than settling into a “type of qualitative
orthodoxy” (Yanchar, 2011, p. 180), interpretive researchers have been enlarging the
repertoire of data-gathering methods. One example is participatory visual methods (i.e.,
photo-voice or photo-elicitation; Luttrell, 2009; Wang & Burris, 1997). Another is the
use of video technology to capture bodily, postural, and spatial dimensions of social
interaction. A third is the use of web materials (blogs, chat, etc.) as data sources.
Moreover, although the semi-structured interview may be a staple, a rich array of formats
and styles of interviewing has developed, as well as an array of means of eliciting more
naturalistic forms of talk. The latter include conversations between dyads and among
groups, sometimes with no interviewer present.
If we shift the focus to data analysis rather than data collection, concerns about stasis
and orthodoxy can be laid to rest equally quickly. The past couple of decades have wit-
nessed a flood of interpretive theories and strategies, especially if we cast our net outside
psychology proper (e.g., Camic, Rhodes, & Yardley, 2004; Denzin & Lincoln, 1994;
Quinn, 2005). If we expand our focus beyond the US to locales where the culture of
psychology is more open (such as the UK, Australia, and New Zealand), a stunning array
of language-focused methods comes into view (e.g., Billig, 1996; Edwards & Potter,
1993; Gavey, 2005; Smith, 2008; Wetherell et al., 2001).
Qualitative psychology in the US may appear to be stalled out because institutional
barriers in the discipline continue to render it largely invisible. Such work is disallowed
by many core psychology journals and so it is published elsewhere. Moreover, US-based
psychology departments are so firmly identified with quantitative methods and experi-
mentalism that psychologists who engage in qualitative work are often mistaken as
members of other disciplines; furthermore, they often are located in departments other
than psychology, such as education, human development, family studies, communica-
tion, or gender studies. Elsewhere, communities of qualitative psychologists are bur-
geoning. In the UK, for example, the Qualitative Methods in Psychology Section, with
over 1000 members, is the largest Section of the British Psychological Society (British
Psychological Society, 2010).

Conclusion: Interpretive science, experimental science, and psychology

The authors of the target papers (Osatuke & Stiles, 2011; Westerman, 2011; Yanchar,
2011) have described novel ways of coupling interpretive procedures with numerical
analyses. They have also shown us strikingly different approaches to the study of subjec-
tivity and meaning. To crystallize some of these differences, I have drawn upon Geertz’s
(1973) distinction between experimental science and interpretive science, and on Kidder
and Fine’s (1987) distinction between Big-Q and small-q research. As Yanchar and
Westerman suggest, the differences in ways of gathering and interpreting data go far
beyond the merely procedural. The procedural differences are connected to radically
divergent stances toward knowledge production, the relations between researcher and
researched, and the relation between culture and person. Erica Burman (2001) casts these
stances as differing “ethical-political investments,” contrasting researchers who have “an
ethos of manipulation and instrumentality” to those with an ethos of “mutuality and
co-authorship” (pp. 261–262).
When Sigmund Koch (1981) reflected on psychology’s first hundred years, he argued
that psychology was “misconceived, whether as a science or as any kind of coherent
discipline.” Instead, he argued, it should be regarded as “a collectivity of studies of var-
ied cast” (p. 268). Thirty years have passed since then. If we view psychology in global
perspective, it is evident that its knowledge interests, practices, epistemological commit-
ments, and national disciplinary cultures are far more varied now than in Koch’s day. Yet
the hegemony of numbers stubbornly persists in US psychology. The family of approaches
that fit within such rubrics as qualitative research and interpretive inquiry has been fruit-
ful and multiplied, yet it remains on (or even beyond) the margin of the discipline.
Colleagues continue to recite a litany of pejoratives about such approaches: “not psy-
chology,” “not research,” “not science,” “not academic,” and even “evil.” It is rare for
students to be assigned to read qualitative studies, to be trained in qualitative methods in
methods courses, or to carry out qualitative investigations. Institutions of US psychology
(e.g., the APA) still deny such approaches legitimacy or even visibility. For example, in
February 2008, the American Psychological Association’s Council of Representatives
voted against a request to add a Division of Qualitative Inquiry to its 56 divisions, even
though over 800 APA members had signed the petition supporting such a division. Also,
the APA’s official publication manual prescribes elements of style narrowly tailored to
reporting quantitative, experimental research; its official code of research ethics does not
address issues specific to qualitative, field-based, or language-focused research. There
are few conversations about epistemology in the corridors and classrooms of psychology
departments: “What might I do in order to know X?” “What is at stake for my knowl-
edge of X if I adopt research procedure Y?” The editors of this section and the authors of
the target papers have done us a service by inviting us into such a conversation about
their work.

Funding
This research received no specific grant from any funding agency in the public, commercial, or
not-for-profit sectors.

Notes
1. This is not purely an inductive process, nor are categories derived solely from the “ground”
of the data. Interpreters bring to the task pre-reflective understandings as well as knowledge
interests. They also have preconceptions about what sorts of patterns are relevant to the broad
purposes for which the study was conducted.
2. This is not to take a position on whether the assimilation model is a “true” or useful description of therapy.

References
Auerbach, C.F., & Silverstein, L.B. (2003). Qualitative data: An introduction to coding and analy-
sis. New York, NY: New York University Press.
Austin, J. (1962). How to do things with words. Oxford, UK: Clarendon.
Bandarage, A. (2008). The separatist conflict in Sri Lanka: Terrorism, ethnicity, and political
economy. New York, NY: Routledge.
Becker, H.S. (1998). Tricks of the trade. Chicago, IL: University of Chicago Press.
Bhatia, S. (2007). American karma. New York, NY: New York University Press.
Billig, M. (1996). Arguing and thinking. Cambridge, UK: Cambridge University Press.
Blanton, H., & Jaccard, J. (2006). Arbitrary metrics in psychology. American Psychologist,
61, 27–41.
British Psychological Society. (2010, February 2). Qualitative methods in psychology section
[Homepage of website]. Retrieved from http://www.bps.org.uk/qmip/qmip_home.cfm
Bruner, J. (2008). Culture and mind: Their fruitful incommensurability. Ethos, 36, 29–45.
Burman, E. (2001). Minding the gap: Positivism, psychology, and the politics of qualitative meth-
ods. In D.L. Tolman & M. Brydon-Miller (Eds.), From subjects to subjectivities: A handbook
of interpretive and participatory methods (pp. 259–275). New York, NY: New York University
Press.
Cameron, D. (2001). Working with spoken discourse. London, UK: Sage.
Camic, P.M., Rhodes, J.E., & Yardley, L. (Eds.). (2004). Qualitative research in psychology:
Expanding perspectives in methodology and design. Washington, DC: American Psychological
Association.
Capps, L., & Ochs, E. (1995). Constructing panic: The discourse of agoraphobia. Cambridge,
MA: Harvard University Press.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analy-
sis. Thousand Oaks, CA: Sage.
Chodorow, N.J. (1999). The power of feelings. New Haven, CT: Yale University Press.
Cole, M. (1996). Cultural psychology: A once and future discipline. Cambridge, MA: The Belknap
Press of Harvard University Press.
Danziger, K. (1990). Constructing the subject: Historical origins of psychological research.
Cambridge, UK: Cambridge University Press.
Denzin, N., & Lincoln, Y.S. (Eds.). (1994). Handbook of qualitative research. Thousand Oaks,
CA: Sage.
Dirks, N. (2001). Castes of mind: Colonialism and the making of modern India. Princeton, NJ:
Princeton University Press.
Dollard, J. (1937). Caste and class in a southern town. New York, NY: Doubleday/Anchor.
Edwards, D., & Potter, J. (1993). Discourse and cognition. London, UK: Sage.
Engeström, Y. (1993). Developmental studies of work as a testbench of activity theory: The case
of primary care medical practice. In S. Chaiklin & J. Lave (Eds.), Understanding practice:
Perspectives on activity and context (pp. 64–103). Cambridge, UK: Cambridge University Press.
Fine, M., Torre, M. E., Burns, A., & Paine, Y. A. (2006). Youth research/participatory methods for
reform. In D.C.-S. Thiessen (Ed.), International handbook of student experience in elementary
and secondary school (pp. 805–828). New York, NY: Springer.
Gavey, N. (2005). Just sex: The cultural scaffolding of rape. New York, NY: Routledge.
Geertz, C. (1973). The interpretation of cultures: Selected essays. New York, NY: Basic Books.
Geertz, C. (1974). “From the native’s point of view”: On the nature of anthropological understand-
ing. Bulletin of the American Academy of Arts and Sciences, 28(1), 26–45.
Geertz, C. (1983). Local knowledge: Further essays in interpretive understanding. New York, NY:
Basic Books.

Geertz, C. (1998, October 22). Deep hanging out. The New York Review of Books, 45(16), 69–71.
Glaser, B.G., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.
Goffman, E. (1989). On fieldwork. Journal of Contemporary Ethnography, 18, 123–132.
Gregg, G. (2007). Culture and identity in a Muslim society. New York, NY: Oxford University
Press.
Gremillion, H. (2003). Feeding anorexia: Gender and power at a treatment center. Durham, NC:
Duke University Press.
Hacking, I. (1995). Rewriting the soul. Princeton, NJ: Princeton University Press.
Hare-Mustin, R., & Marecek, J. (1994). Asking the right questions: Feminist psychology and sex
differences. Feminism & Psychology, 4, 531–537.
Hollway, W., & Jefferson, T. (2000). Doing qualitative research differently: Free association, nar-
rative and the interview method. London, UK: Sage.
Jay, R. (2010). Therapy for shopaholism—The addiction of shopping and spending. Retrieved from http://behavioralhealthcentral.com/index.php/2009061722136/Special-Features/therapy-for-shopaholism-the-addiction-of-shopping-and-spending.html
Kagan, J. (1998). Three seductive ideas. Cambridge, MA: Harvard University Press.
Kaner, A., & Prelinger, E. (2005). Crafting psychodynamic psychotherapy. Lanham, MD: Jason
Aronson.
Kidder, L.H. (1994, August). All pores open. Paper presented at the convention of the American
Psychological Association, Los Angeles, CA.
Kidder, L.H. (2000). Dependents in the master’s house: When rock dulls scissors. In S. Dickey &
K.M. Adams (Eds.), Home and hegemony: Domestic service and identity politics in South and
Southeast Asia (pp. 207–220). Ann Arbor: University of Michigan Press.
Kidder, L.H., & Fine, M. (1987). Qualitative and quantitative methods: When stories converge. In
M.M. Mark & R.L. Shotland (Eds.), Multiple methods in program evaluation (pp. 57–75). San
Francisco, CA: Jossey-Bass.
Kirschner, S.R., & Martin, J. (Eds.). (2010). The sociocultural turn in psychology. New York, NY:
Columbia University Press.
Koch, S. (1981). The nature and limits of psychological knowledge: Lessons of a century qua “sci-
ence.” American Psychologist, 36, 257–269.
Luttrell, W. (2009, May). Emergent seeing and knowing: Mapping practices of participatory visual
methods. Paper presented at the Conference on Qualitative Methods and Social Critique,
CUNY Graduate Center, New York.
Maguire, P. (2000). Doing participatory research: A feminist approach. Amherst, MA: Center for
International Education, University of Massachusetts.
Marecek, J. (1995). Gender, politics, and psychology’s ways of knowing. American Psychologist,
50, 162–163.
Marecek, J. (2001). After the facts: Psychology and the study of gender. Canadian Psychology,
42, 254–267.
Marecek, J., & Senadheera, C. (in press). “I drank it to put an end to me”: Narrating girls’ suicide
and self-harm in Sri Lanka. Contributions to Indian Sociology.
Michell, J. (2000). Normal science, pathological science and psychometrics. Theory & Psychology,
10, 639–667.
Ochs, E. (1999). Transcription as theory. In A. Jaworski & N. Coupland (Eds.), The discourse
reader (pp. 167–182). New York, NY: Routledge. (Original work published 1979)
Osatuke, K., & Stiles, W. B. (2011). Numbers in assimilation research. Theory & Psychology, 21,
200–219.
Peebles, P. (2006). The history of Sri Lanka. Westport, CT: Greenwood.

Potter, J. (1996). Discourse analysis and constructionist approaches. In J.T.E. Richardson (Ed.), Handbook of qualitative research methods for psychology and the social sciences (pp. 125–140). Leicester, UK: BPS Books.
Potter, J., & Wetherell, M. (1987). Discourse and social psychology. London, UK: Sage.
Pratto, F., & Walker, A. (2004). The bases of gendered power. In A. Eagly, A. Beall, & R. Sternberg
(Eds.), The psychology of gender (2nd ed.; pp. 242–286). New York, NY: Guilford.
Quinn, N. (2005). Finding culture in talk. New York, NY: Palgrave Macmillan.
Reekie, G. (1994). Reading the problem family: Poststructuralism and the analysis of social prob-
lems. Drug and Alcohol Review, 13, 457–465.
Russell, G.M. (2000). Voted out: The psychological consequences of anti-gay politics. New York,
NY: New York University Press.
Schiffrin, D. (1994). Approaches to discourse. Oxford, UK: Blackwell.
Smith, J.A. (Ed.). (2008). Qualitative psychology: A practical guide to research methods (2nd ed.).
London, UK: Sage.
Stacey, J. (1988). Can there be a feminist ethnography? Women’s Studies International Forum, 11,
21–27.
Stacey, J. (1991). Brave new families. New York, NY: Basic Books.
Stam, H.J. (2006). Pythagoreanism, meaning and the appeal to number. New Ideas in Psychology,
24, 240–251.
Strauss A., & Corbin J. (1990). Basics of qualitative research: Grounded theory procedures and
techniques. Thousand Oaks, CA: Sage.
Wang, C., & Burris, M.A. (1997). Photovoice: Concept, methodology, and use for participatory
needs assessment. Health Education & Behavior, 24, 369–387.
Wertz, F.J., Charmaz, K., McMullen, L., Josselson, R., Anderson, R., & McSpadden, E. (2011).
Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, dis-
course analysis, narrative research, and intuitive inquiry. New York, NY: Guilford.
Westerman, M.A. (2011). Conversation analysis and interpretive quantitative research on psycho-
therapy process and problematic interpersonal behavior. Theory & Psychology, 21, 155–178.
Wetherell, M., Taylor, S., & Yates, S. (2001). Discourse as data: A guide for analysis. London,
UK: Sage.
Yanchar, S.C. (2011). Using numerical data in explicitly interpretive, contextual inquiry: A “prac-
tical discourse” framework and examples from Engeström’s research on activity systems.
Theory & Psychology, 21, 179–199.

Jeanne Marecek is Wm. Kenan Professor Emerita of psychology at Swarthmore College, where
she was also affiliated with the Asian Studies Program and the Program in Gender and Sexuality
Studies. She has worked in Sri Lanka as a researcher and trainer since 1988, when she was a
Fulbright scholar. Her research in Sri Lanka concerns suicide and deliberate self-harm, gender
relations, and culture-specific healing practices. She has been a visiting scholar at universities in
Sweden and Norway, a fellow of the Swedish Collegium for Advanced Study in the Social Sciences
in Uppsala, and researcher in residence at the Centre for Advanced Study in Oslo. She holds a
Ph.D. from Yale University. Address: Department of Psychology, Swarthmore College,
Swarthmore, PA 19081, USA. [email: jmarece1@swarthmore.edu]
