TEACHING WRITING
Volume 6
Series Editor
Patricia Leavy
USA
Scope
The Teaching Writing series publishes concise instructional writing guides. Series
books each focus on a different subject area, discipline or type of writing. The books
are intended to be used in undergraduate and graduate courses across the disciplines
and can also be read by individual researchers or students engaged in thesis work.
Series authors must have a demonstrated publishing record and must hold a PhD,
MFA or the equivalent. Please email queries to the series editor at pleavy7@aol.com
Writing up Quantitative Research in the Social and
Behavioral Sciences
Marianne Fallon
Central Connecticut State University, USA
A C.I.P. record for this book is available from the Library of Congress.
“This book covers the ‘how to’ of writing research projects in a highly
engaging manner. Graduate students who are preparing to work on their
master’s thesis will get a lot out of this book. Undergraduates doing a
thesis or a capstone project will also find this book very helpful. Instructors
teaching research methods in the social sciences will find this book makes a
useful course companion.”
– Damon Mitchell, Professor of Criminology and Criminal Justice,
Central Connecticut State University
“I teach a research course for undergraduate seniors where they need to write
an APA-style research report. Fallon conveys challenging concepts in a clear,
meaningful, and engaging manner. The review of methods and statistics at
the beginning of the text is particularly useful and the advice offered to
emerging writers is extremely helpful and will encourage those struggling
to improve. The examples throughout the book are outstanding; they are
relevant, descriptive, and varied. The sample papers are excellent models for
emerging researchers navigating their first report.”
– Caitlin Brez, Assistant Professor of Psychology, Indiana State University
TABLE OF CONTENTS

Preface
Introduction

Section 1: Foundations for Writing Quantitative Research Reports in the Social and Behavioral Sciences

Chapter 1: Methodological Elements of Quantitative Research
Introduction
Forms of Quantitative Research
The Definitional Hierarchy
Types of Variables
Validity and Reliability
Summary and Practice

Chapter 2: Statistical Elements of Quantitative Research
Introduction
Quantitative Properties of Variables
Describing Your Data
Fitting Statistical Models to Your Data
Examining the Strength of Your Model
Summary and Practice

Chapter 3: Successfully Writing about Quantitative Research (or Anything)
Introduction
Mindset
The Rules
Writing Builds Character
Summary and Practice

Section 2: Writing a Quantitative Research Report

Chapter 4: What Question Did You Ask? And Why Should Anyone Care?
Introduction
What Is the Core Problem or Phenomenon You Have Studied?
PREFACE
report (or your professor made you). You want A Way. Any Way. This book
offers you A Way – actually Several Ways. You see, I do not believe in One
Right Way either.
I do, however, believe in Your Way. That’s what you’re searching for,
isn’t it? To be clear, Your Way still needs to conform to discipline-specific
guidelines. (This isn’t the Wild West.) Your Way still needs to demonstrate a
command of the relevant knowledge and a healthy dose of critical thinking.
(This is University.) And Your Way still needs to embrace the rules and
practices of good writing. (This is good. Period.)
My job, as I see it, is to help you develop Your Way. In that spirit I will
share My Way and Other People’s Ways. Then you can amalgamate those
suggestions into Your Way, which becomes Your Voice.
Many people helped me prepare this book. In particular, I would like
to thank my colleagues from departments across the social and behavioral
sciences for sharing their time, insight, and students’ work. Dr. Brian
Osoba helped me wade through the different approaches to econometrics
and economic forecasting. He also turned my attention to McCloskey’s
(2000) work, which should be required reading for any student of social
or behavioral science. Dr. Fiona Pearson and Dr. Stephen Adair provided
invaluable direction in sociology. Dr. Diana Cohen and Dr. Robbin Smith
supplied much needed guidance in Political Science. I also thank Dr. Karen
Ritzenhoff for her sage and practical advice.
Special thanks also extend to the students who graciously shared their work
and allowed me to edit it for the purposes of this volume: Cory Manento,
Anthony Huqui, and Selina Nieves. Thank you for being wonderful and
generous emerging researchers.
My immediate and extended family also deserve mention. My husband
Martin supplied much needed support, editing, and – above all – time, even
when my writing needs compromised his personal goals and fulfillment. My
daughters Catherine, Fiona, and Bridget mostly complied when I claimed
writing time during their waking hours. My mother and Aunt Mary patiently
listened to me talk about this book for over a year and helped me weather
the ebbs and flows. A dear family friend, Mary Kennedy, provided editorial
assistance.
Finally, I thank Patricia Leavy for affording me the opportunity to develop
this book and for the motivation to keep me going. Being a part of this series
on Teaching Writing is, for me, a celebration of learning and teaching.
INTRODUCTION
This brief, practical, and prescriptive primer will help emerging researchers
learn how to write high-quality, quantitative research reports in the social
and behavioral sciences. Students from Communications, Criminology or
Criminal Justice, Economics, Political Science, Psychological Science, and
Sociology will find the suggestions and samples contained herein particularly
helpful. Social Work or Education students may also benefit.
I am a psychological scientist and I discuss writing through that lens.
Let me explain by way of example: Becker (2007), a sociologist, interprets
anxiety over writing through his lens – culturally influenced rituals of writing
(e.g., the “right” type of paper, cleaning the house right before writing, etc.).
As a psychological scientist, I interpret those behaviors and anxieties as
stemming from beliefs about intelligence.
Despite my discipline-specific lens, I have tried to appeal to students
across the social and behavioral sciences. I have illustrated concepts with
examples from criminology, economics, political science, communications,
sociology, and social work. Many examples are interdisciplinary.
I wrote this book with the assumption that you have a working-level
knowledge of discipline-relevant research methodology and statistical terms.
That said, I elaborated on key methodological and statistical concepts that
are critical underpinnings for writing a strong quantitative report. My goal is
not to teach you how to conduct quantitative research, but to illustrate how
to effectively communicate a research project.
The book is divided into three major sections. The first section provides
foundational elements of quantitative research that apply across disciplines.
Chapters 1 and 2 review key methodological and statistical concepts. Chapter
3 focuses on discipline-transcendent writing strategies and practices.
The second section details the questions researchers answer within a
quantitative research report. These questions cohere around four major
themes:
• What question did you ask?
• What did you do to answer your question?
• What did you find?
• What do your findings mean?
SECTION 1: FOUNDATIONS FOR WRITING QUANTITATIVE RESEARCH REPORTS IN THE SOCIAL AND BEHAVIORAL SCIENCES
Imagine you are playing a baseball or softball game for the first time. Prior
to this moment, you’ve learned the vocabulary of the game – strike, double-
play, infield fly, ground-rule double. You’ve also learned how to make
decisions including how hard to throw the ball or whether to swing at a
pitch. You’ve internalized the rules that govern the game. And you have
likely experienced emotional ups and downs during your practice sessions.
You may have had to push through frustrating practice, be hopeful that the
next practice will get better, and focus your attention to learn effectively. In
short, before you take the field or step up to the plate, you’ve developed a
foundation to prepare you for the game.
This section will provide you with the foundation needed to write a
research report based on quantitative methods. Chapters 1 and 2 reacquaint
you with your equipment and decision-making tools – the methodological
and statistical concepts critical to communicating quantitative research
intelligently and professionally. Although you have likely encountered
these concepts in your Research Methods and/or Statistics courses, some
additional practice won’t hurt.
Chapter 3 addresses the art of writing well. You need to understand rules
that enable high-quality writing. But you also need to consider the emotional
and motivational aspects of writing – the “ups and downs” and the “heart”
required to succeed. This chapter strikes a balance between the cognitive and
emotional underpinnings of solid writing.
CHAPTER 1
METHODOLOGICAL ELEMENTS OF QUANTITATIVE RESEARCH
INTRODUCTION
Returning to the ball game analogy, what would happen if your coach told you
to bunt and you forgot what that meant? If you don’t know the terminology,
you can’t play the game well.
When you designed and conducted your research project, you took
abstract concepts and put them into practice. Writing about your research
project requires an even deeper level of application. Now you must explicitly
communicate your project using the terminology of social and behavioral
science. This chapter will help you review the methodological concepts most
critical to writing a strong research report.
First we distinguish methodological forms of quantitative research that
you are most likely to conduct as an emerging researcher. Next we describe
an organizational framework to help you understand how you move from
abstract constructs to aligned variables and concrete definitions. To bolster
confidence using the lexicon of quantitative research, we pay special
attention to the different ways we label variables. We conclude with validity
and reliability to enable you to interrogate your own and others’ research
thoroughly and critically.
Content Analysis
Content analysis involves carefully examining artifacts that function as a
medium for communication (Neuman, 2011), including songs, sculptures,
graphic designs, comic strips, newspaper articles, magazine advertisements,
books, films, television shows, tweets, Instagram pictures, letters, and much
more.1 Quantitative content analysis requires counting the occurrence or
rating the strength of particular social or behavioral phenomena within the
media. For example, Harold Zullow (1991) examined over three decades of
popular song lyrics for “pessimistic ruminations”, which involve dwelling
on the notion that negative events will persist far into the future and will
negatively impact endeavors (Zullow, Oettingen, Peterson, & Seligman,
1988). Pessimistic ruminations within popular songs predicted changes in
consumer optimism, which consequently predicted economic growth. So,
the more pessimistic songs were, the less optimistic people were about
their financial affairs, which stunted the economy. (Anyone want to bet that
pessimistic ruminations in songs by Taylor Swift and Katy Perry predict the
length of adolescents’ romantic relationships?)
TYPES OF VARIABLES
Let’s linger on the middle rung of the hierarchy. Social and behavioral
scientists label variables based on their methodological function. This
terminology is challenging to master, but essential for understanding your
project and communicating it clearly and accurately. We begin with the most
basic methodological distinction: measured and manipulated variables. We
extend our discussion to extraneous variables that could affect the relationship
between your measured and/or manipulated variables of interest.
Extraneous Variables
Extraneous variables are, frankly, nuisances. Whereas measured and
manipulated variables are central to your predictions and analysis, extraneous
variables are peripheral. They merit attention because of their potential
impact on the relationship between your variables of interest. Let’s say you
are analyzing episodes of The Walking Dead for hopeful, uplifting words or
actions (good luck). You surmise that earlier seasons contain more hopeful
content than later seasons (because the zombie apocalypse has a knack for
bringing people down). An extraneous variable could be the number of
episodes in a season. Season 1 has considerably less content to analyze than
later seasons. Another extraneous variable is the rapidity with which main
characters exit the show and fringe characters become central.
Extraneous variables are also problematic for studies involving direct
data collection. Imagine you examine how watching violent media affects
people’s perceptions of others. One group of participants watches a film
containing gratuitous violence and another group watches a film containing
violence necessary for survival. Then, participants complete a questionnaire
regarding how much they like the experimenter. To ensure that the content
of the videos changes person perception, you would present the media for
the same amount of time, dress in the same way for all participants, keep the
volume of the films similar, ask participants with imperfect vision to wear
their glasses or contacts while watching the media, etc. Researchers address
VALIDITY AND RELIABILITY

Your and others’ confidence in your findings depends upon the validity and
reliability of your methods and results. In your research report, you will
critically evaluate whether your study passes muster. According to Morling
(2015), understanding four broad forms of validity and three types of
reliability will help you interrogate your (and others’) findings.
Validity
Put broadly, validity involves making a conclusion or decision that is
appropriate, reasonable, accurate, and justifiable (Morling, 2015). In the
social and behavioral sciences, there are multiple forms of validity, including
external, construct, internal, and statistical validity.
population of pop songs from 2015, to R & B songs from 2015, or to pop
songs from 2005?
Internal validity. Your study exhibits high internal validity when you are
confident that the relationship between two or more variables is not due
to another variable. Within the design of your study, you carefully control
variables and/or measure variables that might explain the relationship
of interest. In short, you methodologically rule out potential alternative
explanations for the relationships you observed.
Internal validity is especially critical for true experiments that contain
at least one manipulated variable. To convincingly argue that change in
one variable causes systematic change in another, researchers must clearly
demonstrate that the variation was not likely caused by some extraneous
variable, an issue sometimes called the third-variable problem or tertium
quid (Field, 2013). (Who says Latin is a dead language?) To maximize
internal validity, researchers randomly assign participants to conditions,
or, in the case of within-participant manipulations, vary the order in which
participants experience conditions.
Reliability
Reliability involves how consistently an instrument or procedure produces
the same results. Thus, reliability has implications for construct validity. If
a questionnaire or procedure does not produce consistent results, it cannot
meaningfully reflect behavior, thought, feeling, or motivation. We will
discuss three main types of reliability: inter-item reliability (sometimes
called internal consistency), test-retest reliability, and inter-rater reliability.
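If you like to see things concretely, inter-rater reliability has a very simple starting point: percent agreement. The sketch below (Python, with invented ratings, not data from any study in this book) computes agreement between two hypothetical raters coding song lyrics as pessimistic or not; in practice, researchers often report a chance-corrected index such as Cohen’s kappa instead.

```python
# Hypothetical inter-rater reliability check: two raters code 10 song lyrics
# as pessimistic (1) or not pessimistic (0).
rater_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

# Percent agreement: the proportion of items the raters coded identically
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(agreement)  # 0.8, i.e., 80% agreement
```

Percent agreement is easy to compute and report, but it ignores agreement that would occur by chance alone, which is why chance-corrected indices are usually preferred.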
not all characteristics of a good life: being satisfied with life, believing they
are achieving their goals, etc.
NOTES

1. The traditional function of content analysis is to “make valid inferences from text” (Weber, 1990, p. 9). I present a broader view of content analysis that extends to visual and possibly tactile media.
2. Another form of secondary data analysis involves meta-analysis, whose goal is to provide an accurate estimate of the strength of a relationship or effect. In this method, researchers examine previously published studies and sometimes unpublished data that test the same research question. Then researchers estimate an effect size based on the available research. Although meta-analysis deserves mention, it is a special case of secondary data analysis. Discussing meta-analysis further is beyond the scope of this book.
3. http://www.clipartbest.com/umbrella-clip-art
CHAPTER 2
STATISTICAL ELEMENTS OF QUANTITATIVE RESEARCH
INTRODUCTION
Back to the game – you’re at the plate. Will the next pitch be a fastball?
Is this pitch in the strike zone and are you going to swing? Now you’re in
the field. Should you be satisfied with a force at second or go for a double
play? Do you try to pick off the runner on first and risk a balk? Decisions,
decisions! Taking action at the plate or in the field requires you to process
evidence and make a decision. Simply put, you conduct an internal statistical
analysis and act accordingly.
In quantitative research, statistical analysis occurs externally and
deliberately. Your analysis allows you to accomplish three important goals.
First, you describe your data. Second, you ascertain whether the variables
you examined in your sample are likely related in the population. Third, you
determine whether the relationships or effects are important.
Developing a solid foundation in statistics helps you ensure that your
study has statistical validity (see Chapter 1). We begin by reviewing the
quantitative properties of variables and then how to summarize the data using
descriptive statistics. We discuss fitting a model to your data, which enables
you to determine whether the relationships you observed in your study are
likely to be reflected in the population. We conclude by discussing effect
size, or the real-world importance of the observed relationship or effect.
QUANTITATIVE PROPERTIES OF VARIABLES

From a statistical perspective, variables can be divided into two broad classes:
categorical variables and continuous variables. This distinction influences
how you describe data and conduct inferential tests.
Categorical variables distinguish categories or classes. Some categorical
variables involve binary distinctions (employed/unemployed, enrolled/not
enrolled, bunch/fold). Other categorical variables specify multiple options,
none of which are necessarily more or better than another, just different
DESCRIBING YOUR DATA

Readers hunger for details about the data. Common descriptive statistics
include frequencies, measures of central tendency, and measures of
dispersion.
Frequencies
Calculating frequencies requires counting the number of occurrences of a
category; thus, frequencies are used to report categorical variables. Imagine
you are examining whether women are more likely than men to graduate
from university in 4 years. You could report the raw number of women
and men from the entering undergraduate class of 2011 who graduated in
4 years (e.g., 495 women, 266 men) or percentages (e.g., 65% of students
who graduated in 4 years were women). Favor percentages when you have
a large-ish sample and the number of members within categories is unequal.
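To see how raw counts translate into the percentage reported above, here is a minimal sketch in Python using the hypothetical numbers from the graduation example:

```python
# Hypothetical counts from the example: students who graduated in 4 years
women, men = 495, 266
total = women + men

# Percentage of 4-year graduates who were women
pct_women = 100 * women / total
print(f"{pct_women:.0f}% of students who graduated in 4 years were women")  # → 65%
```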
Measures of Dispersion
Like measures of central tendency, measures of dispersion are also single
values that describe distributions of continuous variables. Dispersion
statistics capture the sample’s variability, or how much individual scores or
observations differ from each other. The most commonly reported measure is
standard deviation, which reflects the approximate average that any value in
a sample deviates (i.e., differs) from the mean of the distribution.
You may be asking yourself: Why do I need to know the standard deviation
of a distribution? Isn’t the mean (or other measure of central tendency)
enough? Nope. Imagine you are in a class of 100 students and your professor
tells you that the average grade on the first exam was 80 out of 100. You scored
a 90, and you’re feeling pretty good about yourself. But you are curious –
how many other students scored a 90 or above? That’s where knowing the
standard deviation can be helpful. Let’s say the standard deviation of the
class grades is 8 points. Assuming that the grades are distributed normally
(i.e., in a bell curve), 95% of students earned between 64 and 96 (i.e., 2 standard
deviations above and below the mean). Your score of 90 is at the top end of
that distribution and four other students probably earned the same grade as
you (see Figure 2.1). Roughly 8 students scored better than you. Now, let’s
cut the standard deviation in half – 4 rather than 8. The average test grade is
the same (80) and your grade is still 90. With a smaller standard deviation,
your grade is even more impressive; you probably earned the highest grade
in the class. Thus, knowing the standard deviation helps you appreciate how
individual scores relate to all scores within the distribution.
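These intuitions follow directly from the normal curve, and you can check them with the Python standard library’s NormalDist (a quick illustrative sketch using the exam example’s numbers). Notice that roughly 95% of grades always fall within 2 standard deviations of the mean, whatever the standard deviation is; what changes is how extreme your 90 looks.

```python
from statistics import NormalDist

mean_grade, your_grade = 80, 90
for sd in (8, 4):
    grades = NormalDist(mean_grade, sd)
    # Proportion of the class within 2 standard deviations of the mean
    within_2sd = grades.cdf(mean_grade + 2 * sd) - grades.cdf(mean_grade - 2 * sd)
    # How many standard deviations above the mean a 90 sits
    z = (your_grade - mean_grade) / sd
    print(f"SD={sd}: {within_2sd:.1%} within 2 SDs; a 90 is {z:.2f} SDs above the mean")
```

With SD = 8, a 90 is 1.25 standard deviations above the mean; halve the SD to 4 and the same 90 becomes 2.50 standard deviations above the mean.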
Now, let’s take the same example and halve the standard deviation in both
groups to 3 (see Figure 2.3). The mean difference between the groups is
the same (still 10 points), but the dispersion has changed. Notice that the
curves are narrower and the amount of overlap between the distributions
is smaller. The less overlap between distributions, the more likely the
difference between the means reflects a real difference in the population. In
this case, the difference between students who received credit and those who
did not probably reflects a real difference in the population of undergraduate
students taking a similar course.
The goal of your analysis is to fit a statistical model to your data. Nate
Silver does this for a living (see http://fivethirtyeight.com/contributors/nate-
silver/). He asks interesting questions about sports, politics, economics, and
sociology. Then he answers those questions by estimating parameters (e.g.,
the mean, an unstandardized regression coefficient) and calculating test
statistics (e.g., F). Silver is rather good at it; in 2012 he accurately predicted
the presidential election winner in all 50 states… even Florida. The title of his
book, The Signal and the Noise (Silver, 2012), is the perfect metaphor for
assessing how your model fits the data.
To assess the fit of your model, you calculate a test statistic that compares
the signal (your model) to the noise. Let’s break that down. Individual
scores or values on your outcome (or dependent) variable differ. Some of
that variation is due to factors that you are not directly studying, such as
extraneous variables: respondents may be tired, cell phones interrupt a testing
session, raters differ slightly in their perceptions of behavior or content, etc.
That’s noise. But your scores also systematically vary with your predictor
(or independent) variables. That’s your signal. To compare signal to noise,
examine the ratio of systematic variation to unsystematic variation in the
outcome variable (Field, 2013).
If you have more signal than noise (or more systematic variation
than unsystematic variation), your test statistic will be greater than 1 (see
Equation 1).

test statistic = systematic variation / unsystematic variation = signal / noise    (1)
The larger your test statistic, the more variation your model explains (i.e.,
the better your model fits the data). You can then assess the probability of
obtaining a test statistic that large by chance alone using null hypothesis
significance testing.
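The signal-to-noise ratio can be made concrete with a small sketch. Assuming two hypothetical groups of exam scores (invented numbers, not data from the text), the code below forms the classic F-style ratio of systematic (between-group) variation to unsystematic (within-group) variation:

```python
from statistics import mean, variance

# Hypothetical exam scores for two groups (e.g., credit vs. no credit)
credit    = [78, 85, 90, 82, 88]
no_credit = [70, 75, 72, 78, 74]

grand = mean(credit + no_credit)

# Signal: systematic variation between the group means (df = 2 groups - 1)
ms_between = sum(len(g) * (mean(g) - grand) ** 2
                 for g in (credit, no_credit)) / (2 - 1)

# Noise: unsystematic variation within the groups
ms_within = sum((len(g) - 1) * variance(g)
                for g in (credit, no_credit)) / (len(credit) + len(no_credit) - 2)

F = ms_between / ms_within  # a value well above 1 means more signal than noise
print(F)
```

With these made-up scores the ratio comes out far above 1, so the model (group membership) explains much more variation than it leaves unexplained.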
run the risk of missing it 20 out of 100, or 1 out of 5 times. See Figure 2.4 for
a schematic of these potential outcomes.
Using the common alpha and beta levels, we are more likely to make a
Type II error (20%) than a Type I error (5%). Why are scientists generally
more “OK” with making a Type II error? Let’s work through a real-world
example: pregnancy. In reality, someone is or is not pregnant. This is akin to
an effect being present or absent in a population. The “test statistic” in this
case is a pregnancy test, which is sensitive to the hormone human chorionic
gonadotropin (hCG). Based on the concentration of hCG in urine, the test
provides a positive result (pregnant) or a negative result (not pregnant). A
Type I error occurs when the test produces a positive result when in reality
someone is not pregnant (i.e., a false positive). A Type II error arises when
the test yields a negative result when someone is actually pregnant (i.e., a
false negative). So, can you see why we would rather make a Type II error
than a Type I error?
Although Type I errors are generally less likely than Type II errors,
committing a Type I error could produce serious negative consequences in
certain circumstances. If you claim that a drug produces an effect when it
does not, consumers may endure side effects of the drug without the intended
benefit. To the extent that public policy is based on research, municipalities
and states could commit taxpayer revenue to projects or initiatives based on
a chance finding. The potential consequences of making Type I or II errors
emphasize the importance of empirical replication.
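One way to internalize the 5% Type I error rate is a quick simulation. The sketch below (Python, purely illustrative) runs many imaginary studies in which the null hypothesis is actually true, so every “significant” result is a false positive; with alpha set to .05, about 5% of the studies come out significant anyway.

```python
import random

random.seed(1)

alpha, studies, false_alarms = 0.05, 10_000, 0
for _ in range(studies):
    # When the null hypothesis is true, p-values are uniform on [0, 1]
    p = random.random()
    if p < alpha:
        false_alarms += 1  # a Type I error: "significant" with no real effect

print(false_alarms / studies)  # close to 0.05
```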
Estimating Error
You’ve calculated statistics from a sample that estimate the true value in the
population. Thus, those estimates contain some error (unsystematic variation
or noise). We will discuss two ways to estimate error: the standard error of
the sample means and confidence intervals.
Standard error is like standard deviation; it estimates average dispersion.
But rather than estimating variation among individual points in a sample,
standard error reflects the average dispersion of sample means. Imagine you
replicated the same study 50 times. The standard error would be the standard
deviation for all of those sample means. Standard error helps you determine
whether the mean you obtained in your study is likely to be comparable
to the population mean (Field, 2013). If the standard error is large, there
is a lot of variability between the means of different samples. Therefore,
your sample is probably not representative of the population. By contrast,
small standard error reflects little difference between sample means, so most
means (including yours) should be similar to the true population mean.
Confidence intervals provide boundaries within which the true population
mean is thought to fall. Typically, researchers calculate 95% confidence
intervals; if you replicated your study 100 times, the true population mean
will fall within this interval 95 out of 100 times (Field, 2013). Confidence
intervals are particularly useful for comparing sample means across conditions
of particular variables. Say you examined whether identifying as politically
conservative or liberal was related to a neural response to disgusting images
(see Ahn et al., 2014). You recorded blood flow to a part of the brain called
the amygdala, which is involved in emotional processing and memory.
You found that the neural response to disgusting pictures was greater in
participants who self-identified as conservatives compared to those who
identified as liberals. But is that difference likely to reflect a true difference
in the population? To answer this question, you should construct 95%
confidence intervals around the means representing change in blood flow for
conservatives and for liberals. Let’s say that the intervals do not overlap; that
is, the bottom of one interval does not cross the top of the other (see Figure
2.5). Non-overlapping intervals suggest that the samples were drawn from
two truly distinct populations – in this case, liberals and conservatives. If the
confidence intervals for liberals and conservatives overlapped a fair bit, we
would be less confident that the means really came from distinct populations
(see Figure 2.6). Confidence intervals can be calculated for any statistic, not
just sample means.
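A sketch can show what non-overlapping intervals look like in practice. The numbers below are invented for illustration (they are not data from Ahn et al., 2014), and the interval uses the normal critical value of about 1.96; with samples this small, researchers would normally use a t critical value instead.

```python
from statistics import NormalDist, mean, stdev

# Invented blood-flow change scores for two groups (arbitrary units)
conservatives = [2.1, 2.4, 2.8, 2.2, 2.6, 2.5, 2.3, 2.7]
liberals = [1.2, 1.5, 1.1, 1.4, 1.6, 1.3, 1.0, 1.5]

z = NormalDist().inv_cdf(0.975)  # about 1.96 for a 95% interval

for label, scores in (("conservatives", conservatives), ("liberals", liberals)):
    m = mean(scores)
    se = stdev(scores) / len(scores) ** 0.5  # standard error of the mean
    print(f"{label}: mean {m:.2f}, 95% CI [{m - z * se:.2f}, {m + z * se:.2f}]")
```

With these made-up scores the conservatives’ lower bound sits well above the liberals’ upper bound, the non-overlapping pattern described above.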
Confidence intervals correspond to the probability of making a Type
I error (Cumming & Finch, 2005). In Figure 2.5, you can see a gap
between the lower bound of the 95% CIs for conservatives and the
upper bound for liberals. If the boundaries of 95% CIs do not overlap,
the probability of making a Type I error is less than .01. Boundaries that
barely touch correspond to a p value of approximately .01. Boundaries
with moderate overlap (25% of the CI) reflect our typical α-level, .05.
The 95% CIs in Figure 2.6 overlap too much to be considered “statistically
significant”.
Figure 2.6. The same difference between means, but with larger,
overlapping confidence intervals
the model could be important. Should you ignore this potentially important
finding because the model is not statistically significant (but could be with
more statistical power)?
Simply put, determining whether a finding is statistically significant is
not the whole story. You need to know whether it is important; you need to
examine its effect size. Broadly stated, effect size is the “size of anything
that may be of interest” (Cumming, 2014, p. 34). Technically, the mean or
difference between means qualifies as an effect size. But means aren’t terribly
useful when trying to get a sense of how the phenomenon manifests across
multiple contexts. Researchers often investigate the same phenomenon using
different methods or measures and means would be reported in the original
units of their measurements. So, comparing the magnitude of a finding
across studies based on means alone would be difficult. Consequently, we
will focus on quantifying effect sizes using methods that do not depend on
the original units of measurement.
Some effect size statistics reveal how much variation your independent
or predictor variables explain in your dependent or outcome variables. A
correlation coefficient is probably the most straightforward example for
measuring the proportion of variation one variable explains in another.
Correlation coefficients describe the direction and strength of a relationship
between two variables. If you square the value of the correlation coefficient,
you obtain the proportion of variance one variable accounts for in another.
So, let’s say you have a correlation coefficient of .10. This value represents
a weak, positive relationship. Squaring this value gives you .01, which
corresponds to 1% of variance accounted for. Imagine the relationship is
strong with a correlation coefficient of .50 (Cohen, 1992). Squaring .5 gives
you .25, or 25% of variance accounted for. In behavioral and social research,
explaining 25% of the variance is worthy of jumping out of your chair and
dancing (your choice of style).
Other statistics quantify effect size in standard deviation units. Cohen’s d,
the most well-known statistic of this nature, is used to compare two means.
Without getting into too much detail, Cohen’s d tells you how many standard
deviations separate the means. Cohen (1992) notes that d values of .2, .5,
and .8, represent small, medium, and large effects, respectively. One notable
advantage of Cohen’s d is that the statistic does not depend on sample size.
That said, the larger your sample, the more likely you are to accurately
estimate the means and standard deviations of the population.2 Thus, larger
samples should yield more accurate – not necessarily larger – effect sizes.
statistical elements of quantitative research
Pearson r coefficients and Cohen’s d are only two examples of effect size
statistics. You can use other effect size metrics depending on the test statistics
you calculate.
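To see how standard-deviation-unit effect sizes work in practice, here is a minimal Python sketch of Cohen's d for two independent groups (the group scores are invented, and the pooled-standard-deviation formula is one common convention):

```python
import math

def cohens_d(group1, group2):
    """How many pooled standard deviations separate two group means."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # Sample variances (denominator n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Per Cohen (1992), d values of .2, .5, and .8 are small, medium, and large.
print(cohens_d([2, 4, 6], [1, 3, 5]))  # 0.5: a medium effect
```

Note that doubling the sample sizes with the same means and spread would leave d unchanged, which illustrates why the statistic does not depend on sample size.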
Statistics help you describe your data and determine whether your
findings are statistically significant and important. Grab another piece of
paper (physical or digital). Label your dependent or outcome variables as
categorical or continuous. List the descriptive statistics that you should
report for these variables. State the null hypotheses that you will evaluate.
For each null hypothesis, identify the inferential test that you should conduct
and corresponding effect size statistic.
After you conduct your analyses for each hypothesis, determine whether
your model is a good fit to the data using null hypothesis testing and
confidence intervals. Consider the likelihood of making a Type I or Type II
error. Examine effect sizes to assess how strong the observed relationship
may be in the population. And you have written more of your research report.
NOTES
2
Note carefully that large samples are NOT necessarily representative samples.
Depending on how the data are collected, a large sample can still be a biased
sample. Mathematically, the larger the sample, the more likely effect sizes will
accurately reflect the population. This is a probability, not a certainty.
CHAPTER 3
INTRODUCTION
To play ball successfully, you need to know the rules of the game, think
like a champion, and have heart. Notice I did not mention natural, inborn
talent. Clearly, talent helps, but it is not sufficient for success.
That, in a nutshell, is the plot of Moneyball (Lewis, 2004). By all accounts
Billy Beane was a natural athlete – a guaranteed hall-of-famer. But he did
not know how to convert setbacks on the field or at the plate into success, and
his career as a major league player ended before it began. Once Beane began
thinking differently about setbacks, he did quite well as the general manager
of the Oakland A’s.
Writing bears similarities to softball, baseball, or any complex skill
that you want to perform well. You should know the rules of writing and
appreciate how your mindset influences your approach to writing. Mindset
is the lens through which the rules of good writing make sense. Accordingly,
we’ll start there.
MINDSET
Fear is perhaps the biggest deterrent to writing well. Becker (2007) asked
his students, some of whom were accomplished academic writers, what they
were so afraid of. Students feared that their writing would be a disorganized
jumble of half-baked ideas. Further, students expected to be mercilessly
mocked for writing the “wrong” thing. Sound familiar?
I contend that fears about writing are connected to beliefs about intelligence
(i.e., how well you think). Fear arises from this progression of thoughts:
Writing reveals how I think.
Poor writing reflects poor thinking.
Poor thinking equals stupid.
Therefore, I am afraid to write because it will confirm that I’m stupid
and don’t belong in this program.
Codswallop! Your acceptance into your current program proves you have
the potential to think and write well. (Admissions did not make a mistake.)
Further, difficulty expressing your thoughts in writing does not necessarily
mean your thinking about the topic or problem is deficient. But people will
evaluate your thinking through your writing. So, you need to develop your
thinking about how to write better.
Beliefs about how – even whether – you develop intelligence affect how
you approach learning complex skills like writing (Dweck, 2006). Some
people consider their intelligence fixed and largely determined by genetics.
They’ve got a knack for writing or they don’t. Struggling with their writing
means that they aren’t cut out for it. Failing proves it. So they give up easily,
shying away from challenges and opportunities to improve. They ignore
critical feedback because it further confirms that they don’t have what it
takes (and never will). They feel threatened when others succeed because
they do not measure up.
Alternatively, you could cultivate a growth-oriented approach towards
intelligence (Dweck, 2006). People with a growth mindset believe that
intelligence is malleable and increases with deliberate, sustained effort and
adaptive strategy use. Challenges become exciting, struggling equates to
learning, setbacks are temporary and surmountable, criticism is constructive,
and successful colleagues inspire them. Not surprisingly, students with
growth mindsets are more likely to succeed in academic contexts than
students with fixed mindsets (Blackwell, Trzesniewski, & Dweck, 2007;
Grant & Dweck, 2003).
Take a moment to reflect on your beliefs about intelligence in general
and about writing in particular.1 Do you believe that you can develop your
writing skills through deliberate practice and effort? Do setbacks thwart
your attempts to write? Do you view writing as an opportunity to learn or
as an opportunity to prove that you are smart? Do you seek and use critical
feedback to improve your writing? Do you consider others’ good writing
inspirational and motivational?
If your answers reflect a growth mindset, you have laid some of the
groundwork required to develop your writing. If your responses betray
a fixed mindset, take heart. You can cultivate a more growth-oriented
approach. As a first step, we’ll explore how a fixed mindset underlies many
misapprehensions about how people become good writers. Then we’ll dispel
or at least challenge these myths by talking truth. (Even if you are more
oriented towards growth, these truths are worth reviewing. They serve as a
welcome reality check when the going gets tough.)
successfully writing about quantitative research
Myth: Good Writers Wait for Inspiration to Hit Before Starting to Write
Ah, the muse. When Homer invoked Calliope to inspire the Odyssey and
the Iliad, he unwittingly encouraged people with fixed mindsets to chase
rather than command their muse. Writing only while inspired gives the false
impression that writing should feel effortless. Once inspiration hits, ideas
spring forth rapidly and easily. Csikszentmihalyi (1996) calls this feeling
of total immersion “flow”. And it feels amazing. You are “in the zone” or
“in the pocket” and produce your “best, most efficient, and most satisfying
writing” (Flower & Hayes, 1977, p. 451).
Truth: Good Writers Schedule their Writing Time and Stick to their
Schedule
The best writers, like anyone else at the top of their field, are disciplined.
They deliberately practice even when they don’t feel particularly inspired
(Keyes, 2003; Silvia, 2007). To become disciplined, you need to do three
things: schedule your writing time, make clear goals for each writing session,
and track your progress.
You do not need large chunks of time to make steady progress on writing
your report. Try to schedule 5 hours a week, preferably 1 hour per day over
5 days. If you stick to this schedule, you will have devoted about 75 hours
over a 15-week semester. Would you rather those hours be spread out over
15 weeks? Or crammed into 2? Or 1?!?
Set concrete, actionable goals (e.g., write 100 words, draft the first two
paragraphs of your report). Goals do not necessarily involve putting words
on the page (Silvia, 2007). You could read and take notes on an article,
outline a section, perform analyses, make figures and tables – anything that
gets you closer to completing your report. That said, I would devote at least
2 sessions a week to actual writing. Otherwise, you can end up putting off
writing for too long.
Given that you are conducting quantitative research, you know a thing or
two about collecting data. So you know how to collect data measuring your
own writing progress. Choose meaningful metrics. The most important one
is simply whether you stuck to your writing schedule. You will have good
sessions and grim sessions, but you can still check the win column when
you’re done. You could also track word count or page count. Running up
counts is easier when you are drafting. When you are revising, the goal is to
decrease word and page counts because you are refining your thinking and
tightening your prose.
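If you want to track these metrics the way you would any other data, a hypothetical sketch in Python (the field names and numbers here are invented; a spreadsheet or notebook serves the same purpose):

```python
# Each entry logs one scheduled session: did you show up, and what did you produce?
sessions = [
    {"day": "Mon", "kept": True,  "words": 250},
    {"day": "Tue", "kept": True,  "words": 0},    # outlined a section instead
    {"day": "Wed", "kept": False, "words": 0},
    {"day": "Thu", "kept": True,  "words": 180},
]

kept = sum(s["kept"] for s in sessions)
words = sum(s["words"] for s in sessions)
print(f"Kept {kept} of {len(sessions)} sessions; drafted {words} words")
```

Whatever tool you use, the schedule-kept column is the metric that matters most.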
This discipline reaps rewards. First, writing regularly invites insight, those
moments when the pieces of an intractable puzzle suddenly fit. Although insight feels
like a bolt from the blue, it arises from your brain mulling over a problem
outside your conscious awareness (Metcalfe & Wiebe, 1987). Insight arrives
only after the ideas or details have had time to incubate (Flower & Hayes,
1977). Adhering to your writing schedule builds incubation into your routine.
Second, adhering to a schedule means that you no longer passively await
your muse or “flow”; you create conditions that promote it (Csikszentmihalyi,
1996). For example, minimizing distractions (i.e., turning off your phone,
social media, etc.) allows you to devote your full attention to your task.
Adjusting the amount of challenge you experience also helps. When your
skill level exceeds the task, you get bored. When tasks are beyond your
skill level, you get frustrated. So, the trick is to set goals that ratchet up the
challenge as you are building the skill. Realistically, do not expect to get into
flow during every writing session. No matter your skill level, checking your
references for accuracy is a yawner.
Third, writing habitually reduces your anxiety about writing a quantitative
research report. You are solving a complex, multifaceted problem. Routinely
chipping away at larger goals by focusing on smaller ones ensures that you
will make steady progress.
Sticking to your schedule is tough, but sometimes the bigger hurdle is
establishing the schedule in the first place. You have to be committed to
the goal, decide how much time you are willing to devote, and identify and
remove barriers. Let me share a personal example. I had convinced myself
that my life simply would not tolerate regular exercise. I couldn’t go to the
gym or leave my house for extended periods of time because of familial
responsibilities. So my first challenge was figuring out duration and a time
of day I could stick with. I decided on 30 minutes a day, 5 days a week before
the kids woke up. The second barrier was figuring out how to exercise in
my home. I found a series of DVDs that fit my time constraints and kept
me motivated. At first progress was slow, but a little over a year later I am
enjoying the benefits of being in good physical condition. I am now applying
the same lessons to my writing. (Sometimes we come late to the party, but at
least we show up.)
Truth: Good Writers Work to Make their Prose Accessible and Engaging
Writing that most people do not understand has limited utility. I do not
advocate oversimplifying or avoiding complexities. However, you must keep
readers on board. You can engage readers without sacrificing intellectual
rigor by accepting that “good writing is good teaching” (Bem, 2004, p. 4).
How do the best teachers engage their students? Bain (2004) offers some
keen suggestions, which I will extend to writing your quantitative research
report. Good teachers know their discipline; good writers know their topic
and their data well. Sharing this knowledge does not involve cataloging
information. Rather, writers take a reader-centered approach, tailoring
material to readers’ needs. Good writers constantly evaluate their writing
from the reader’s perspective, assessing what readers currently know and
what they might need to know before encountering a forthcoming point.
Good teachers have high standards for their students and create
environments in which students can reach these expectations (Bain, 2004).
Similarly, good writers should challenge readers to up their game. You want
your reader to learn something new and useful, or view an important problem
from a different perspective. However, you must create conditions within
your writing that enable readers to achieve this goal.
Effective teachers infuse lessons with humor and novelty to keep learners
engaged. Keeping your readers engaged is not so different from keeping
readers entertained. McCloskey (2000) argues:
Most academic prose, from both students and faculty, could use more
humor. There is nothing unscientific about self-deprecating jokes about
the sample size, and nothing unscholarly in dry wit about the failings of
intellectual proponents. (p. 43)
Humor, more than any other rhetorical device, requires bravery. Be prepared
for your attempts to tank every now and again, and be proud that you
attempted to enliven the conversation.
Novelty captures – or recaptures – readers’ interest. You can achieve
novelty by avoiding the “boilerplate” that plagues academic prose
(McCloskey, 2000). Boilerplate offerings include the predictable, mundane,
and soulless. Describing every source you read, skimmed, or threw a parting
glance towards qualifies as academic boilerplate. Professors identify padding
the way hawks spot mice. (It doesn’t end well for the mouse.) McCloskey
(2000) also warns writers against using the “roadmap” paragraph to introduce
sections of your report. She is convinced that the well-meaning high school
English teacher who coined “Tell the reader what you are going to say. Say
it. Say that you’ve said it,” is burning in Hell.
Good writers engage readers without sacrificing nuance or critical
thinking. Like good teachers, good writers breathe life into their report.3 The
result? Everyone learns.
(e.g., “I screw everything up”), and may call you to question its truth (e.g.,
“They’re just wrong; they clearly didn’t understand what I was trying to
say.”). You also might question your relationship with the person delivering
the feedback (e.g., “She’s out to get me. She plays favorites.”).
None of these thoughts will make you a better writer. Learning how to
effectively seek and process critical feedback will.
THE RULES
You learn the rules so you can have fun (McCloskey, 2000). You may
associate the rules of writing with grammar, that interminable assault of
niggly traps whose sole purpose is to make your life miserable. Proper
grammar is important; without it, you cannot communicate clearly. But we
will not review grammatical rules here.5 (Did I just hear a sigh of relief?)
Rather, we will discuss broader principles that underlie writing a strong
quantitative report.
Rule #1: Learn the Major Sections of the Report with Your Bare Hands
Quantitative research reports impose an overarching structure consisting of
several major sections. Appreciating the organization of a research report
makes it much easier to decide what you want to do with your knowledge
(Flower & Hayes, 1977). Use this blueprint to plan your writing and break
complex goals into more manageable ones.
Just as there is more than One Right Way to express ideas, there is more
than One Right Way to organize the major sections of the research report.
The most common organization across the social and behavioral sciences is
like a human hand, or what I call “The Humanoid” (see Figure 3.1).
We’ll address the detailed components that make up each “digit”
throughout the rest of this book. For now, we’ll concern ourselves with
describing the broad function of each section. The Introduction acquaints
readers with your research question and convinces them why they should
care about your research. The Literature Review educates readers about
what scholars currently know about your research question and leads
to your predictions. The Method details how you obtained your data,
including who or what you sampled. The Results (or Analysis) section
presents your findings. The Discussion contextualizes your findings and
identifies limitations, areas of future study, and broader implications of
your work.
Sometimes you will see the overall organization extended to six digits.
(Think Count Rugen in The Princess Bride.) The additional phalange is
the Conclusion, which appears after the Discussion (see Figure 3.1). The
Conclusion serves to highlight what is usually the final paragraph in the
Discussion of the 5-digit approach: to state in a nutshell what you examined,
why you studied it, what you found, what it means, and why someone should
care.
The streamlined, 4-digit approach (or, The Simpsons) is typical in
Psychological Science. The Introduction, taking its cue from the Borg,
assimilates the Literature Review (see Figure 3.1).6
Now that you have a sense of a report’s general organization, consider
how you might approach composing the digits. When I write a research
report, I draw upon a literary analogy. To me, a research report is a novel:
you set the scene, introduce characters, and establish plot points; insert rising
action; reach the climax; and fall back while wrapping up loose ends and
leaving your reader with something to think about. Your Introduction and
Literature Review orient readers and introduce important characters and plot
points. Your Method gets things moving. Your Results and Analysis are the
answers to your questions (finally!). Your Discussion and Conclusion sum
up everything and address lingering concerns.
our understanding of the human condition forward. When you act unethically,
you erode people’s confidence in you and in science. As such, you must give
proper credit to others’ work, convey information in your own words, and
hold your data inviolate.
Plagiarism involves failing to give proper credit for someone else’s work. Imagine that
you have completed your quantitative project after much blood, sweat,
and tears. You’re watching the campus news and up pops your professor
who happens to be describing your project. How exciting! And she never
mentions your name. (Really?!?) Always give credit where credit is due.
Referencing a source implies that you have carefully evaluated it. The
literature reviews of the primary sources you read will likely include other
works that look relevant for your research project. To cite these sources,
you must locate them and read them yourself. In so doing you confirm that
the source is indeed relevant and you ensure that you represent the work
accurately, rather than take someone else’s word for it (even if the source
should count as credible). In the rare event that you cannot locate the primary
source, you can cite the secondary source. (You can learn more about
referencing sources in Chapter 8.)
It’s not enough to simply reference your sources. Expressing an author’s
words in a slightly different format (i.e., changing a word here and there)
is also plagiarism because you are presenting that author’s voice largely as
your own. Here’s a rule of thumb: If four or more consecutive words are
identical to the source, you are at risk for plagiarizing.
Some emerging researchers try to sidestep plagiarism by immaculately
transcribing and citing multiple direct quotes. Although direct quotes can
be valuable when defining a construct or operationalizing a variable, use
them judiciously. Including too many quotations introduces a problem: a
jumble of unconnected, incoherent ideas. The goal is to paraphrase, or to state
something in your own words (and then properly cite the authors).
If you find yourself thinking, “I can’t paraphrase because I could never
express this idea as well as the original author did,” you’re mistaken.
Remember that everyone writes with a specific goal in mind. You are writing
in a different context, one with a specific purpose known only by you. Thus,
you should present the information in a way that furthers your purpose and
tells your story. (To be clear, tailoring your writing does NOT mean reporting
information inaccurately or omitting crucial information because you think it
strengthens your argument.)
How do you paraphrase effectively? Try this: Read the information that
you want to write about. Recite (out loud, if possible) the most important
information from the source and how you could use the information in your
paper (e.g., theory, major construct of interest, etc.). Put the source out of
sight. Choose a 3-digit number (e.g., 836) and start counting backwards by
3s (or 4s, 6s, or 7s if you want to mix it up) for at least 30 seconds. Counting
backwards should dislodge verbatim traces (i.e., exact wording) that might
linger in your short-term memory. What remains is the gist, or general idea
of the information, which you then express in your own words.
In addition to conveying others’ work ethically, you must collect and
present your data ethically. There are far too many reports of scientists
bowing to pressure and fabricating or altering data. Few examples are more
powerful than the scandal surrounding the MMR (Measles, Mumps, Rubella)
vaccine and autism (e.g., Dominus, 2011). In 1998, Dr. Andrew Wakefield
(and others) published a report claiming that the MMR vaccine was linked to
the appearance of autism-like behaviors in 12 young children. This finding
received a great deal of media attention and prompted thousands of parents
to avoid vaccinating their toddlers. However, multiple studies (and hundreds
of thousands of dollars) could not replicate Wakefield’s findings. A British
investigative journalist, Brian Deer, claimed that Wakefield misrepresented
the symptom timeline of most, if not all, of his participants. Given this
misrepresentation, coupled with other questionable behavior (e.g., taking
blood samples from children at his son’s birthday party because he needed
data from non-symptomatic controls, and accepting money for research from
lawyers attempting to sue MMR vaccine makers), the United Kingdom’s
General Medical Council banned Wakefield from
practicing medicine. Although Wakefield experienced serious personal
repercussions, a far more pressing problem is an outbreak of measles in
young adolescents, some of whose parents refused to vaccinate their children
for fear of autism (“Aftermath of an Unfounded Vaccine Scare”, 2013).
You can feel frustrated and disheartened when your results do not pan
out the way you had hoped. But your data are your data. Come by them
ethically and report them ethically. Distorting or fabricating your results can
have disastrous consequences that extend beyond your personal sphere of
influence.
does not clearly specify the connections between concepts, you have likely
jumped from Point A to Point C without realizing you glossed over Point B.
If she emphasizes relatively unimportant points, you have missed your mark.
Then buy her coffee or a snack – providing feedback is not easy, and she has
helped you improve your work.
You could also try reading your prose aloud, firmly and deliberately
(Flower & Hayes, 1977; McCloskey, 2000). Doing so will short-circuit
the internal editor that visually skims over clunky constructions and
subconsciously scans for points you intended to emphasize. As you read,
mark places where you trip over words or slow down; that’s your brain
working harder to process the material (Ferreira, 1991). You can also record
yourself reading your work and then listen to it without following your
“script”. Hearing how you intone your prose reveals what your reader will
likely emphasize.
Clear. You achieve clarity when your audience understands what you
intended to convey. To improve clarity, identify code words and unpack
them. Be explicit.
Not as clear as it could be: The Pew Research Center, who conducted
a study in 2011, contends that young adults between the ages of 18 to
24 are enrolling in college in record numbers. They go on to note that
student debt has increased exponentially.
Clearer: According to the Pew Research Center (2011), the percentage
of 18- to 24-year-olds enrolled in college has increased approximately
70% since the 1960s. However, student debt has more than tripled since
the 1980s, exceeding $20,000 on average.
Upon finishing your draft, you might not be convinced that your ideas
globally cohere. Read your topic sentences sequentially.7 If they form a
coherent summary of that section, you have constructed strong global
coherence.
Whatever your strategy, remember Project Runway and religiously invoke
Tim Gunn’s catchphrase: “Make it work”. That said, do not be afraid to
scrap an organizational structure that simply does not work. Reframing the
building may be a better option than retrofitting an ultimately unworkable
structure. Expert writers recognize when to raze and rebuild (Flower, 1993).
Rule #7 – Language!
Mark Twain notably quipped, “The difference between the right word and the
almost right word is the difference between lightning and a lightning bug.”
Words pack a punch, and we can be tempted to throw knockouts because
they forcefully convey our ideas. However, nudges can be just as or more
effective.
Choose your words felicitously and cautiously, especially when discussing
causal relationships between variables. You’ve memorized the adage,
“Correlation does not imply causation.” However, you may not be clear
about the conditions under which you can make a causal claim. To establish
a causal claim, a relationship must meet three criteria (Morling, 2015). First,
variables must covary (i.e., a reliable relationship must exist). Second, one
variable must clearly precede another (i.e., the cause must precede the effect).
Third, alternative explanations must be addressed and minimized.
at the bottom of the bill. Instead of the typical format for recording prices
(e.g., $10.99), you see full words (e.g., ten dollars and ninety-nine cents).
Although all the information about your meal is present, you expend vast
amounts of effort deciphering it because you are accustomed to the bill
being arranged in a certain order (e.g., drinks, starters, entrees, desserts)
and in a particular format. If the check were organized and presented in the
way you expected, you would be able to read and verify the information on
it very efficiently.
Reason #3: It helps you hone computer skills, especially word processing. Do
you know how to make a table? Use styles? Format a hanging indent? Do
you even know what one is? You will after writing your research report. To
satisfactorily format your report, you must use a word processing program
sophisticated enough to handle somewhat tricky formatting. Simple text
editing programs such as Notepad, TextEdit, and Google Docs will likely
lack the functionality to properly format your document.
Reason #4: It develops skills and habits needed to learn other systems of
reporting (one would hope). If you are not planning to write professionally
in your discipline, you may think: “I’m never going to write in this style
again. Why should I sink all this time into learning it?” At some point in your
career, you might have to familiarize yourself with a particular system of
reporting because your employer wants things done A Certain Way. Although
that Way will likely not involve a scholarly formatting style, you will still
have to learn the details of that system. Thus, having mastered a formatting
style may better prepare you for the challenge of learning another maybe-
not-quite-so-anal one.
You may think of writing your quantitative research report as just another
class assignment. But it can be a life-changing opportunity. Undertaking a
project of this scope requires you to leverage and build your character.
Think of someone running a marathon. She may have been born with a
naturally athletic build. She may have trained hard. But right now she’s on
mile 21 of 26.2. Natural, inborn athleticism or training is not going to get
her across that finish line when her muscles scream and sweat pours off her
body. Character will.
Paul Tough (2013) lists seven character strengths that social scientists
and educators believe promote academic success.8 They include: grit
(perseverance), optimism, self-control, gratitude, curiosity, zest (passion),
and social intelligence. Embracing the “truths” and following the “rules” of
this chapter requires you to leverage and further develop these strengths. For
example, to benefit from feedback you use social intelligence to empathize
with the person delivering the feedback. You are optimistic that you can
improve your work. When feedback is not as positive as you had hoped, you
muster grit to press on. You use self-control to carefully evaluate and address
the feedback. Finally, you express gratitude for the person taking the time
and energy to deliver your feedback.
Writing a quantitative research report is an intellectual marathon.
Regardless of your confidence or experience writing, you may find yourself
on the verge of giving up on mile 21. Take a deep breath. Let passion and
curiosity drive you. Keep sight of the goal. Stick to your plan. Dig deep and
push through. Be grateful for the opportunity and to those who helped you
along the way. (In case you are wondering, I tell myself these very things
every time I sit down to write.)
to embrace the “truths” and adhere to the “rules”. Write in longhand or type
your response to give shape to your musings.
NOTES
1
Your mindset can differ across domains; you can embrace a fixed mindset in one
domain (e.g., writing) and a growth mindset in another (e.g., scientific reasoning).
2
For other myths about writing, see Boice (1990).
3
I am indebted to Dr. Brian Osoba for this turn of phrase.
4
For a hilarious description of this process, watch Seth Meyers interview Bill
Hader: https://www.youtube.com/watch?v=kVHJc26t3Gc.
5
For a refresher, consult Thurman (2003) for basic grammar, and Strunk and
White’s (1999) Elements of Style for more advanced stylistic advice.
6
In case you are wondering, the medical terms for 4- and 6-fingeredness are
“symbrachydactyly” and “polydactyly”. I bet you’re happy I opted for pop culture
references.
7
Your Method and Results/Analysis section also need to achieve global coherence.
However, the structure of these sections is typically more prescribed than the
Introduction/Literature Review and Discussion.
8
These character strengths are derived from Peterson and Seligman’s (2004)
theory that posits 24 distinct, universal character strengths. Research involving
the role of character strengths in higher education is in its infancy. Other character
strengths, such as bravery, love of learning, and integrity, may predict academic
success in college students (Lounsbury, Fisher, Levy, & Welsh, 2009).
SECTION 2
INTRODUCTION
As you learned in Chapter 3, the social and behavioral sciences have more
than One Right Way of organizing a quantitative research report. Although
arrangements vary, all research reports answer the following four overarching
questions:
• What question did you ask (and why should anyone care)?
• What did you do?
• What did you find?
• What does it all mean (and why should anyone care)?
Chapters 4 through 7 discuss how you address these questions. Turns out
you tackle big questions by answering several “smaller” ones (smaller in
scope, not in importance). Your responses to these questions will appear
in different sections of your research report depending on your discipline. In
Section 3 you will see how some disciplines go about organizing responses
to these questions.
The question-based approach I’ve used in this section plays to your
strengths. As a student, you answer questions all the time; you know the drill.
Further, it is a straightforward strategy to ensure that you include essential
information in your report. But, here’s the caveat (a.k.a. my recurring
nightmare) – you approach your research report as you would a final exam.
You carefully answer each question on a separate page. Then you mash
the pages together and, Voila! You have your report. At this point I wake
up clammy and hyperventilating. With labored breath, I croak: “Connect
the dots!” These questions are not islands unto themselves; they form an
integrated network of ideas. Accordingly, your responses to these questions
need to form a coherent whole.
Chapter 8, cleverly titled “Odds and Ends,” includes important
components that supplement the major sections of your report. Many of
these components are bookends, such as your title page and appendices.
Other components, such as citations, are integrated throughout your report.
CHAPTER 4
INTRODUCTION
Readers feel invested in your study when you identify a mutual end (Flower &
Hayes, 1977). In a quantitative research report, that end is the answer to
your research question. Thus, your readers need to clearly understand your
research question. But before stating your question, you need to focus readers
on the core problem or phenomenon that you have studied. This might be
easier said than done; you may have examined several constructs, concepts,
or theories. However, you need to decide upon one ring to rule them all – one
core concept to anchor your report.
For example, let’s say your research question was: Does using Facebook
affect feelings of social anxiety (i.e., how anxious one is around other
answered. Who or what is “124” and why is he, she, or it spiteful? What
is a screaming and why is it coming across the sky? What’s the fatherly
advice and why does it provoke so many thoughts? But these questions
may be red herrings for the real theme of the book. In scientific writing, the
opener clearly teases your core problem or phenomenon. The ideal audience
reaction after reading your opener should be, “Really? Show me.” Consider
McCloskey's (2000) punchy hook: "Every economist knows by now that
monopoly does not much reduce income. Every economist appears to be
mistaken” (p. 36). She’s got my attention and I want to see how she’s going
to support that claim.
If you would like a more tempered approach to hooking your reader, Kail
(2015) provides three suggestions.1 You could open with a familiar behavior
that is poorly understood. Stand-up comics who revel in observational
humor (e.g., Katt Williams, Louis C. K., Paula Poundstone) use this strategy
religiously. Although comics can get away with “Ever notice how…”, your
approach should be more like: “Go to any family-friendly restaurant and
notice how many family members are holding electronic devices.” Readers
can likely relate to this real-world situation and will want to read on to see
whether your study confirms their expectations. This approach is especially
helpful if you are studying a basic research question. This clear and practical
application provides a familiar idea for readers to hold on to as they work
their way through abstract theory and constructs. It also reminds you of the
potential applications of your research, which will be important when you
discuss what your findings mean.
A second strategy is to begin with a hypothetical situation followed by
a rhetorical question, as in: “Imagine a stranger approaches you and asks
you to buy her a cup of coffee. What would you do?” Readers’ answers will
depend on how they envision the stranger, which could launch a discussion
of stereotypes or prosocial behavior. A great opener allows you to pivot to
your core problem or phenomenon while teasing the other constructs you
have studied.
A third approach involves opening with an interesting fact or statistic.
Edwards (2012) supplies a sociological example:
Sixty-four percent of mothers of preschoolers are in the labor force
(U.S. Census, 2010). More than 90 percent of fathers of preschoolers
are in the labor force (U.S. Census, 2000). (p. 45)
Readers see the disparity and expect the report to examine work-family
related issues in mothers and fathers with young children. This approach
may be particularly effective with applied research questions. Providing
Note. If your project crosses disciplines (e.g., behavioral economics), you may
find that the databases in other disciplines will provide helpful information.
You will likely find more literature on your general topic than you could
or should include in your manuscript. Locating potential sources is more
art than science, and takes a fair bit of practice. Your goal is to obtain a
manageable number of sources to evaluate – the sweet spot is usually between
20 and 50 “hits”. If you are using a scholarly database for the first time, you
would benefit from asking reference librarians for assistance. They can help
you broaden or narrow your search and suggest search terms that yield more
promising returns. A word to the wise: Keep a list of all the search terms
you have used. If you think you have exhausted all possibilities and are still
coming up short, ask your professor to suggest search terms. Showing him/her
your list will accomplish two goals. First, it can help your professor make
specific suggestions for terms that you may not have considered. Second, it
will attest that you made a conscientious effort to locate sources before you
asked for assistance.
Of course, it’s possible that very little literature directly relates to your
topic. Some students come to this realization and are disappointed – how
can you locate scholarly sources now? Chin up. You may be tapping into
an understudied area of research, and that’s extremely exciting! If you find
yourself in this position, think of your search in broader terms. For example,
imagine you are interested in examining stereotypes towards individuals who
are internationally and/or interracially adopted (Villanti, 2016). Previous
researchers investigated stereotypes towards adopted individuals in general,
but no research has examined issues regarding the combination of interracial
and international adoption. At this point, you would synthesize what is
known about stereotypes surrounding adoption and how those stereotypes
may have developed. You might also broaden your literature review to
include stereotypes about race and about foreign-born individuals.
Evaluate the literature. When you have a list of potential sources, you
need to judge whether the source is relevant to your purpose. Think of the
literature like a family tree. You want literature to be immediate family, not
third cousins twice removed. Examine the source’s title. Does it contain
constructs/issues/theories that are central to what you are examining? If so,
skim the abstract. Will the source provide the necessary context to understand
your problem or phenomenon? If so, skim the source. For books, examine
the table of contents and locate a chapter that is likely to be relevant to your
purpose. Also consider examining the book’s index for constructs, concepts,
or theories central to your project. For book chapters and theoretical or
review articles, pay particular attention to headings – they can help guide
your search. For empirical articles, skim the Introduction/Literature Review
and Discussion/Conclusion. Bear in mind that you will read these sources
more critically later – these suggestions are simply to help you determine
whether a source is worth delving into.
Many emerging researchers want to know how much literature is enough.
That depends largely on the scope of your question (i.e., the number of
constructs you are studying) and how much previous research exists. You
need to thoroughly substantiate your assertions without disrupting the flow
of ideas. Kail (2015) suggests between one and three citations per claim;
more would be overkill.
Now that you have your sources, you need to decide which information to
include in your Introduction. Select information that enables your audience
to develop a clear understanding of the concepts, theories, and/or issues
surrounding your particular research question. For empirical articles, focus
on the findings – you will soon discover whether the available research
converges on a common conclusion or diverges. Sources can also provide
theoretical or operational definitions for your constructs and variables.
Sometimes methodological information or limitations of the study are
important. Not all sources are created equal, and the best writers carefully
consider how much and what type of information readers need to know.
To ensure that you have included relevant information in your
Introduction, keep your specific research question in mind as you critically
read your sources. If a source does not inform your research question or
your methodology, your readers do not need to know about it. While you are
reading, jot down relevant ideas, definitions, methodology, findings, and/or
limitations. Write about how the source connects to your research project.
Sometimes you will find paraphrased information within a source that
appears relevant to your research question. All empirical journal articles
include a literature review that appears like a one-stop-shop of knowledge.
You may be seduced into paraphrasing someone else’s paraphrasing – it’s
right there, ripe for picking. However, those authors paraphrased those
sources with their specific purpose in mind – a purpose that differs from yours.
Remember, you are telling the story of your research, not someone else’s.
Further, those authors may not have represented others’ work accurately and
you may end up perpetuating an error if you “borrow” from them. Thus, you
should always track down the primary source and evaluate it for yourself.
Answering this question is perhaps the most potent means to convince your
audience that your report is worth reading. Clearly articulate how your
study will move your science forward. Specify the hole(s) or gap(s) in our
current knowledge that your research project will address. In other words,
scientifically justify your research.
In my experience as an instructor, emerging researchers find it challenging
to articulate how their project adds to a discipline. Here are some possibilities:
• Directly replicate a published study;
• Investigate an entirely “new” or unstudied phenomenon;
• Use different measures, stimuli, or manipulations to examine a known
phenomenon or test a theory (i.e., conceptual replication);
• Examine known phenomena in a different population or different time
period (i.e., generation/era);
• Test hypotheses for competing theories using a single method;
• Examine relationships between variables that previously have not been
linked;
• Provide additional evidence surrounding a controversy or ambiguous
phenomenon; or
• Validate a new instrument (i.e., questionnaire).
Explicitly specify the new information readers will learn from your
research. Then go a step further and convey why gaining this knowledge
is important. For basic research, your findings would produce greater
understanding of a phenomenon and support or refute a particular theory.
You might also have made a creative methodological contribution. Like
basic research, findings from applied research have implications for theory
and general knowledge. In addition, you should clearly articulate how your
findings would have real-world, practical application.
Now that you have specified the holes or gaps in readers’ knowledge,
describe how your study will fill them. For example, if you are testing
hypotheses for competing theories, describe your method and explain
why it is a good test of the theories. To examine relationships between
variables that previously have not been linked, note how you obtained
data for these variables through self-report questionnaires, observation, or
archival sources. If you are using different measures to examine a known
phenomenon, mention your specific measures and describe how they differ
from previous research. In short, provide enough detail about your study for
readers to understand how your research project addresses your justification
for conducting it.
A final cautionary note: Your justification cannot be personal or
anecdotal. You may have a deep, personal connection with your research
topic. Perhaps you are examining outcomes in juvenile offenders because of
direct experience or experiences of a close family member or friend. Having
this connection fuels your passion and gives meaning to your work. Being
deeply invested in your project is something every educator hopes for her
student. But your personal experience or connection with your topic does
not constitute scientific justification and does not belong in a research report.
As you were learning how to address the previous questions, you may have
wondered about the nitty gritty details of verb tense. If verb tense never
crossed your mind, humor me.
Use present tense to:
• theoretically define your core problem or constructs of interest.
• report statistics about the current incidence or prevalence of a phenomenon
(e.g., "According to the Centers for Disease Control, approximately 29
million Americans have diabetes.”).
• describe a theory.
Use past tense to:
• convey the methodology, findings, or limitations of previous empirical
research.
• describe aspects of your study (e.g., “In the current study, I examined
whether stereotype threat theory explained athletes’ ‘choking under
pressure’ during sports events”).
What is the core problem or question you studied? | Introduction | Introduction
How will you hook your reader? | Introduction | Introduction
What have other researchers and scholars learned about this issue? | Introduction | Literature Review
What is not known about your problem or question and why is it important to fill this gap in our knowledge? | Introduction | Introduction or Literature Review
What do you predict will happen and why? | Introduction | Literature Review or Method subsection
SUMMARY
outcome or pose questions that you will answer later in your report. What’s
on the horizon? You tell your readers how you went about answering your
questions.
NOTES
1 See Dunn (2004) and Kendell, Slik, and Chu (2000) for additional suggestions.
present tense.
CHAPTER 5
INTRODUCTION
Time to answer questions about the nuts and bolts of your research. The goal
is to provide enough information about how you conducted your study so
that anyone could replicate it. In particular, you should answer the following
questions:
• Who or what did you sample?
• How did you operationally define your variables?
• How did you collect your data?
Although answering these questions may feel like grunt-work
documentation, the information you provide helps your reader evaluate
the validity of your study. For example, describing your sample speaks
to whether your sample adequately represents the population from which
it is drawn (i.e., external validity). Your operational definitions detail how
you measured or manipulated the variables you theoretically defined (i.e.,
construct validity). The manner in which you collected your data helps
readers evaluate how well you minimized the impact of extraneous variables
(i.e., internal validity).
Social and behavioral scientists use three major methodological
approaches to obtain data. You can analyze the content of mediated messages,
examine secondary data using archival datasets, or collect data from human
participants. We’ll discuss content analysis, archival datasets, and primary
data collection with human participants1 in turn.
CONTENT ANALYSIS
Next, explain how you obtained and selected – sampled – these elements.
If you can access all possible elements within a population, you can randomly
draw from these elements. More commonly, researchers use purposive,
or relevant sampling (Krippendorff, 2004). From the entire population of
elements, you limit your sample to elements that possess criteria relevant
to your research question. Imagine that you are examining case law for
infractions of Title IX (i.e., the law prohibiting discrimination on the basis of
biological sex in federally funded education programs or activities). Within
a comprehensive case law database, you could search for cases involving
Title IX. Then you could limit cases to those involving universities, athletic
programs, and so on until you arrive at a manageable number of relevant
cases to analyze. If that number is still too large, you can randomly select
cases from your purposive sample.
Sometimes researchers compare elements across an important variable.
Going back to Title IX case law, perhaps you want to compare the decisions
of such cases within primary and secondary school settings. To ensure that
the conclusions you draw are based on the school setting, you would identify
other extraneous variables that might differ across the groups (e.g., the
range of years the lawsuits were filed, the States in which the complaints
were lodged, etc.). The more comparable your groups on these extraneous
variables, the more likely your variable of interest accounts for differences
between the groups.
Stability refers to consistency in the same coder's analysis of the same data
at two or more time periods (like test-retest reliability). Reproducibility is
akin to inter-rater reliability in observational studies. Here, more than one rater codes
the same content. Whereas stability involves the coder’s unique perspective,
reproducibility assesses shared understanding between coders (Weber,
1990). Even if you are using a computer program to analyze content, you
can still assess stability – you may have erred when you recorded the data
from the computer output or did not set the proper parameters for analysis.
So, you would reanalyze a subset of data to confirm reliability in your
measurements.
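Although this is a writing guide rather than a statistics manual, you may want to see how a reproducibility index is actually computed. The sketch below is entirely mine, and the codes are hypothetical; it implements Cohen's kappa, a common chance-corrected agreement statistic for two coders:

```python
import numpy as np

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders assigning nominal codes."""
    coder1, coder2 = np.asarray(coder1), np.asarray(coder2)
    # Observed agreement: proportion of units coded identically
    p_observed = np.mean(coder1 == coder2)
    # Chance agreement: summed products of each category's marginal proportions
    categories = np.union1d(coder1, coder2)
    p_chance = sum(np.mean(coder1 == c) * np.mean(coder2 == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Two coders categorize ten hypothetical cases (1 = athletics, 2 = other)
kappa = cohens_kappa([1, 1, 2, 2, 1, 2, 1, 1, 2, 1],
                     [1, 1, 2, 1, 1, 2, 1, 1, 2, 1])
```

A kappa near 1 signals strong shared understanding between coders; a kappa near 0 means agreement is no better than chance.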
on social media. You may choose to alter existing items on the scale or add
items to suit your purpose.
Probe your new or revised scale’s construct validity. At the very least,
examine internal consistency. You might pursue more advanced statistical
techniques such as factor analysis. As with all questionnaires, describe how
you reduced responses into composite scores.
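If you want to check internal consistency yourself, Cronbach's alpha can be computed directly from an item-response matrix. This Python sketch and its data are hypothetical illustrations of the calculation, not examples from any study discussed here:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                # number of items
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of composite scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five hypothetical respondents answering a three-item Likert scale
scale = np.array([[4, 5, 4],
                  [2, 2, 3],
                  [5, 5, 5],
                  [3, 3, 2],
                  [1, 2, 1]])
alpha = cronbach_alpha(scale)
```

Alpha values near 1 indicate that items hang together; values below about .70 are conventionally treated as questionable.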
Report the number of stimuli that participants experienced and how you
obtained or selected the stimuli. If your stimuli are inspired by or obtained
directly from published works or the internet, cite the author/source. You
also need to describe precisely how the stimuli systematically differed (i.e.,
your manipulation) and how you held certain characteristics of the stimuli
constant (i.e., control variables). Further, note any special apparatus you
used to present your stimuli (e.g., PowerPoint projector).
Most of these questions will be answered using past tense. However, use
present tense to describe what a published questionnaire measures (e.g.,
“The Mini-IPIP measures the Big 5 personality traits”). You might also use
present tense to describe an archival database (e.g., “The General Social
Survey includes over 5600 variables.”).
SUMMARY
NOTES
1 Some behavioral scientists conduct research with non-human animals. But that is beyond the scope of this book.
2 If your measure has subscales, you could calculate a composite score for each subscale.
CHAPTER 6
INTRODUCTION
Not all research reports explicitly address this question. Nevertheless, this
approach is especially helpful when studies involve complex experimental
designs incorporating multiple independent variables or sophisticated
regression or structural equation models. For example, economists often
specify stochastic equations corresponding to the “steps” in logistic multiple
regression or time-series analyses used in economic forecasting. Describing
your study’s design and plan for analysis helps your audience bridge the gap
between your data collection and your analysis.
The moment everyone has been waiting for – your findings! And yet you
may feel as if you are writing in a foreign language. You’re using statistical
terminology and talking about data in ways that are new to you. Still, your
job is to continue telling your story in a way that keeps everyone on board
and engaged.
Your analysis should read like a good textbook, posing clear questions and
subsequently answering them. Think about a textbook that you enjoyed
reading (take as much time as you need). I’d wager that the textbook had a
clear and intuitive overarching organization with topics flowing in a logical
manner. Within the chapters and major sections, the authors of this fabulous
Macrostructure
Exploratory analyses. The purpose of exploring your data is to probe for
problems that compromise subsequent analyses. The four most concerning
problems for emerging researchers are: invalid data points, non-linearity of
relationships, non-normal distributions of continuous variables, and outliers
(i.e., data points that are not statistically consistent with the majority of the
data) or points of influence.
Humans can produce invalid data. Data points have questionable
validity due to: suspected response bias (e.g., participants answer “3” for
all items on a questionnaire); response omissions (e.g., participants do not
provide responses for a certain percentage of items or trials); participant
disinterest, fatigue, or other incapacitation (e.g., being “under the
influence”); or technological/experimenter error (nobody’s perfect!). In
such cases, you may decide to discard those data points. Note how many
participants or observations you discarded and reasons underlying your
decision. Clearly report how many participants or observations remained
in your sample.
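A screen for suspected response bias like the one described above is easy to automate. In this purely hypothetical sketch, participants who gave the same answer to every item are flagged and set aside:

```python
import numpy as np

# Hypothetical responses: 5 participants x 6 Likert items (1-5)
responses = np.array([
    [3, 4, 2, 5, 3, 4],
    [3, 3, 3, 3, 3, 3],   # suspected response bias: identical answers throughout
    [1, 2, 1, 3, 2, 2],
    [5, 4, 5, 4, 5, 5],
    [2, 3, 4, 2, 3, 3],
])

# A participant whose responses have zero range answered every item identically
flagged = np.ptp(responses, axis=1) == 0
retained = responses[~flagged]
print(f"Discarded {flagged.sum()} participant(s); {len(retained)} remain.")
```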
If you use inferential tests to determine whether your predictions
convincingly reflect what occurs in the population, you need to examine
whether your data meet the assumptions of these tests. Many tests are
based on a linear model, so your data need to be linearly related to use
such tests. Creating a scatterplot is the easiest way to determine whether
the relationship between variables is linear. Some statistical tests such as
regressions and MANOVAs require that variables are not too strongly
related, or multicollinear. If you are using such tests, you would report
correlation coefficients to quantify the strength of the relationship between
predictor or dependent variables.
Many inferential tests also require your data to be normally distributed.
Report statistics quantifying skewness and kurtosis and plot your data
using histograms. If you are comparing groups of people and/or between-
participant conditions in your study, examine normality separately in each
group or condition. Imagine you are testing whether the jokes in Big Bang
Theory have increased in their geekiness over time. (Let’s not worry about
operationalizing “geekiness” for the moment!) You compare geekiness
ratings for seasons 1 and 2, 3 and 4, and 5 and 6. You should ensure that the
data in each of those epochs were normally distributed. If your data are not,
entertain transforming your data. Report the transformation you used and
the effect on the distribution. Alternatively, you can use advanced statistical
techniques that produce accurate statistics despite skewed or kurtotic
distributions (e.g., bootstrapping; see Field, 2013).
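To make skewness concrete, here is a hypothetical sketch that quantifies skew before and after a log transformation; the helper mirrors what library functions such as scipy.stats.skew report, and the data are invented:

```python
import numpy as np

def skewness(x):
    """Sample skewness: the mean cubed z-score (0 for a symmetric distribution)."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

rng = np.random.default_rng(42)
# Hypothetical positively skewed variable, e.g., per-episode "geekiness" ratings
ratings = rng.lognormal(mean=1.0, sigma=0.6, size=500)

# A log transformation often tames positive skew
log_ratings = np.log(ratings)
print(f"skewness before: {skewness(ratings):.2f}; after log: {skewness(log_ratings):.2f}")
```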
Be on the alert for individual data points that do not behave like the
majority of the data (i.e., outliers) or unduly influence the strength of a
relationship between continuous variables (i.e., points of influence or
leverage). Your statistical program will likely offer multiple ways – both
visual and statistical – to identify outliers and points of influence. Statisticians
debate the best ways to handle outliers (e.g., Osborne & Overbay, 2004). You
could report analyses with and without the discarded data, allowing readers
to evaluate the discarded values' influence. Alternatively, you could conduct
"robust" statistical analyses that reduce the influence of outliers (Field, 2013).
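One simple, commonly taught outlier rule flags values far from the mean in standard-deviation units. The data and cutoff below are hypothetical, and, as noted, statisticians debate which rule is best:

```python
import numpy as np

def flag_outliers(x, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean."""
    z = (x - x.mean()) / x.std(ddof=1)
    return np.abs(z) > threshold

# Hypothetical daily caffeine intake (mg); the last value clearly misbehaves.
# With a sample this small a 3-SD rule can never trigger, hence the 2.5 cutoff.
caffeine = np.array([120, 95, 150, 200, 80, 130, 110, 160, 90, 1450])
outliers = flag_outliers(caffeine)

# Report the analysis with and without the suspect point
print(f"Mean with outlier: {caffeine.mean():.0f} mg; "
      f"without: {caffeine[~outliers].mean():.0f} mg")
```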
choose to discard data from such trials. For variables measuring accuracy,
you should describe what constitutes an accurate response – sometimes it’s
not obvious. Imagine that participants freely recalled words from a list. You
would describe the criteria you used to determine whether the response was
accurate. Were minor spelling errors OK (e.g., “bananna” for “banana”)?
Would you accept plural forms of singular nouns (e.g., “cats” for “cat”)?
Once you’ve clearly described inclusion criteria, report how you reduced
responses into a single score (e.g., sum, mean).
Microstructure
Now that you have a sense of the broad elements of analysis, we should
address the minor detail of writing about specific analyses. Your description
of any analysis should include four major components: (1) the hypothesis
(or research question); (2) the analysis you used; (3) which variables were
incorporated in the analysis and the role each played (e.g., independent,
dependent variable); and (4) the finding(s).
Analysis. Once you have reminded your readers of the purpose of the
analysis, identify which analysis you conducted to test the prediction or
research question. Continuing on from the example above: “…I conducted a
probit regression…”
Variables. Now you’re cooking. After stating the statistical analysis you
have used, clearly state the variables in question and their roles. Following
the example above, “…using mother’s college major, Math SAT, and high
school science GPA as predictor variables.” Of course, the way you describe
and label your variables depends entirely on your specific analysis. Imagine
you investigated whether online blogging reinforced content and improved
students’ performance on a subsequent midterm (Lindberg, 2015). You might
write:1
effect size statistics; and (5) convey whether your findings support your
predictions.
That’s a lot of information and a straightforward example is in order. Let’s
return to the question of predicting young women’s participation in STEM
majors in college. We’ll simplify the question to whether having a mother
who majored in a STEM discipline is associated with participation in STEM.
I would write:
To examine whether having a mother who majored in a STEM discipline
is associated with a woman's participation in STEM programs, I
conducted a 2 × 2 chi-square test of independence. As expected,
students whose mothers majored in STEM were more than twice as
likely to participate in STEM programs (75%) as students whose
mothers did not (35%), χ2(1, N = 134) = 8.25, p < .01, φ = 0.25.
Note how I accomplished all five goals within the second sentence. Go
ahead and marvel in wonder and admiration – I don’t mind. Realize that
more complex analyses would require additional deconstruction (i.e., more
sentences).
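If you would like to see where numbers like these come from in software, scipy can run the same kind of test. The counts below are hypothetical; they are not the data behind the example above:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 counts: rows = mother majored in STEM (yes/no),
# columns = student participates in a STEM program (yes/no)
table = np.array([[30, 10],
                  [33, 61]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
phi = np.sqrt(chi2 / n)   # effect size for a 2 x 2 table
print(f"chi2({dof}, N = {n}) = {chi2:.2f}, p = {p:.3f}, phi = {phi:.2f}")
```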
Two additional questions emerge when you are reporting findings: First,
how should you report analyses with transformed data? You conducted
the analyses with transformed data, so report the inferential statistics from
these analyses. Note that transforming data alters the original units of your
measurement. Thus, researchers often report descriptive statistics both in
text and in tables and figures using original, non-transformed data.
Second, what happens when you conclude that there is no relationship or
effect, or that your model poorly fits the data (i.e., null effects)? Historically,
statistics for null effects have not often appeared in the scholarly literature.
However, discipline-specific style guides have begun calling for these
statistics. This shift reflects a deeper appreciation of the limitations of null
hypothesis testing and the importance of effect size. If you have a large
enough sample, any effect or relationship will be statistically significant. But
your effect size may be paltry, or quite unimportant in the grand scheme of
things. Conversely, your test statistic may have almost been large enough
for you to reject the null hypothesis, but your study did not have enough
statistical power. Nevertheless, your effect size is reasonably healthy. If your
sample is representative, then that effect should hold with more observations
(Cumming, 2013). Thus, omitting null results may artificially downplay
important effects or relationships. I advocate reporting null results – it is
good practice.
You’ve heard that “a picture is worth a thousand words”. That adage literally
proves true in your research report. (OK, maybe not 1000 words worth, but
you get the idea.) Providing visual representations of your findings in tables
or figures is helpful for at least two reasons. First, you summarize a lot of
information efficiently, reducing your audience’s fatigue and confusion.
Second, you highlight what you consider to be key findings in tables and
figures. If all of your findings are embedded within your prose, your readers
may not appreciate the importance of particular findings over others. In
short, tables and figures harness attention.
Tables
Tables summarize scads of statistics. You can report descriptive statistics
for several groups or conditions at a glance. In addition, tables can organize
findings for complex statistical analyses, such as ANOVAs or regressions.
Imagine if you had to incorporate all that information in written prose! That
would be cumbersome for you as well as your reader. Tables free you to
highlight main points in your prose and help readers more easily digest your
analyses and findings.
Histograms
Histograms visually depict the frequency distribution of values along a
single variable (see Figure 6.1). Rather than represent the frequency of each
potential value, histograms “bin” values, condensing several values into a
range of scores. Histograms clearly capture skew, kurtosis, or multimodality
in a distribution. Further, histograms depict outlying data points. Thus, you
can use histograms to support many analytic decisions, such as transforming
the data to achieve a normal distribution, creating categorical variables
out of continuous variables, and identifying and potentially discarding
outliers.
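The "binning" idea can be made concrete without a plotting package. In this hypothetical sketch, each bin's count becomes a crude text bar:

```python
import numpy as np

rng = np.random.default_rng(3)
scores = rng.normal(loc=100, scale=15, size=300)   # hypothetical test scores

# Bin the values: each bar of a histogram is a count within a range of scores
counts, edges = np.histogram(scores, bins=10)
for count, left, right in zip(counts, edges[:-1], edges[1:]):
    print(f"{left:6.1f} to {right:6.1f}: {'#' * (count // 3)}")
```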
Scatterplots
Scatterplots illustrate the relationship between two continuous or discrete
variables (see Figure 6.2). You can discern many important things from
scatterplots including linearity, direction, strength, outliers or points of
influence, and the impact of third or moderating variables.
If the points in your scatterplot appear to travel upward from left to right,
you have a positive linear relationship (as values of one variable increase,
so does the other). Points giving the impression of moving downward from
left to right represent a negative linear relationship (as values of one variable
increase, values of the other decrease). The direction of the relationship is
particularly clear if the scatterplot includes the least squares regression line
(or “line of best fit”), which minimizes the distance between it and all other
points in the distribution.
Scatterplots also give readers information about the strength of the
relationship. The steepness of the slope of the least-squares regression
line does not exclusively determine the strength of the relationship. You
need a non-zero slope for a relationship to exist, but the distance between
observed points and the predicted values specified on the least-squares
regression line determines the strength of a relationship. The closer the
points in the distribution are to the least-squares line, the better the fit
(and the stronger the relationship).
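This distinction between slope and strength can be seen in a small Python sketch (the data are invented): two variables share roughly the same least-squares slope, but the one whose points hug the line has a much higher correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 450, 100)  # hypothetical predictor values

# Same underlying slope (0.01), different scatter around the line.
tight = 0.01 * x + rng.normal(0, 0.2, 100)  # points hug the line
loose = 0.01 * x + rng.normal(0, 2.0, 100)  # points stray far from it

for label, y in [("tight", tight), ("loose", loose)]:
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line of best fit
    r = np.corrcoef(x, y)[0, 1]             # strength of the relationship
    print(f"{label}: slope = {slope:.3f}, r = {r:.2f}")
```

The fitted slopes come out nearly identical, while the correlation for the noisy variable is far weaker.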
Outliers or points of influence stick out like a sore thumb on scatterplots.
In Figure 6.2, one point does not look like the others: Someone drinks about
450 mg of caffeine on average and is not very anxious, which bucks the
positive trend between caffeine intake and anxiety.
Scatterplots can also help you discern whether you have a third-variable
problem or a moderating variable. Third-variable problems arise when it
looks like you have a relationship between two variables, but the association
is completely explained by another variable. In Figure 6.3, the overall
What did you find?
Remember that mean differences alone do not help you determine whether
a difference in your sample accurately reflects a difference in the population.
So that readers can make this determination at a glance, include error bars
around the means. These error bars look like whiskers that extend above and
below each mean. They can represent standard error of the mean or 95%
confidence intervals; report this in your figure’s caption.
In Figure 6.5, the nonoverlapping 95% confidence intervals show us that
mean anxiety significantly increases across levels of caffeine intake. But this
pattern is slightly different for men and women. At moderate and high levels
of caffeine intake, men and women report comparable levels of anxiety (the
95% confidence intervals overlap considerably). However, at low levels of
caffeine intake, women appear to report higher levels of anxiety than men
(the 95% confidence intervals overlap very little).
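Computing the values the error bars span is straightforward. Here is a minimal Python sketch using invented anxiety scores for a single caffeine-intake group; 1.96 is the normal-approximation multiplier, and for a sample this small a t critical value would be slightly more accurate:

```python
import math
from statistics import mean, stdev

# Hypothetical anxiety scores for one group (values invented).
group = [42, 51, 47, 55, 49, 60, 44, 52, 58, 46]

m = mean(group)
sem = stdev(group) / math.sqrt(len(group))  # standard error of the mean
half = 1.96 * sem                           # ~95% CI half-width (normal approx.)

# The error bars would extend from (m - half) to (m + half) above and below the mean.
print(f"M = {m:.1f}, SEM = {sem:.2f}, 95% CI = [{m - half:.1f}, {m + half:.1f}]")
```

Whichever quantity you plot (SEM or 95% CI), say so in the figure's caption.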
If you are reporting many analyses, use subheadings to help your readers
remain focused. Subheadings could include headings used in this chapter (i.e.,
Exploratory analyses, Primary analyses, Secondary analyses). Alternatively,
you could use elements of your hypotheses as subheadings (e.g., Biological
sex as a moderator of anxiety and caffeine intake). Regardless of your
approach, remember your goal: your audience needs to understand the
answer(s) to your research question(s).
Reporting analyses can sound formulaic (ironic, I know). As you move
from analysis to analysis, you may begin to use the same sentence structure,
emphasizing clarity over style. Once you've clearly stated the findings,
work to reduce boilerplate. Why not be clear and engaging while reporting
numbers?
SUMMARY
You started with questions and now you have answers. You helped your
readers navigate a sea of statistical analyses using an effective macrostructure,
clear microstructure, and illustrative tables and figures. You stated whether
your data support your predictions, but that’s just a tease. Now you explain
what it all means.
NOTES
1 If you included a detailed design/proposed analysis section in your report, you may not need to provide such detailed information about your analysis here.
2 Bar graphs aren't limited to continuous data; they can also represent frequencies for categorical data.
CHAPTER 7
INTRODUCTION
Explanations begin with your main findings. Then, connect your findings
to existing literature and theory and examine your findings from multiple
angles. Sounds straightforward until you realize that studies have three
possible outcomes. Your findings support your predictions, do not support
your predictions, or partially support your predictions. Let’s tackle strategies
for explaining each outcome.
What does it all mean?
comprehension and you use widely available practice passages from the
SAT (Scholastic Aptitude Test). Some participants may have had direct
experience with those passages or already know something about the content
within those passages.
In the case of primary data collection, researchers can unwittingly increase
within-participant variance through their behavior and characteristics.
Dressing unprofessionally or acting disinterested or aloof may signal that
you do not care about your study. If you don’t care, why should participants?
Encouraging some, but not other, participants to do their best or pay attention
will also inflate within-participant variability.
Environmental or situational variables can also affect participants’
responses. In many cases of primary data collection, the testing environment
can be reasonably controlled or at least monitored. The temperature and
lighting of a testing room can be held constant. Noise and distractions can be
minimized (hopefully!). However, collecting data online relinquishes a fair
amount of control over the testing environment.
Sometimes the manner in which stimuli or questionnaires are presented
increases within-group variation. Ideally, you want all participants to
experience stimuli or questionnaire items in the same manner. Imagine you
presented a series of images designed to elicit happiness. You would ensure
that all images are equally sized, are presented using the same photographic
filter (sorry, Instagram), and appear within the same medium (on paper or
presented through a projector). With questionnaires, you would make sure
that the typeface is clear and large enough to read easily.
You can never completely eradicate within-participant variance when
humans supply data. Even in content analyses, within-participant variance
can creep into measurements when humans make subjective judgments about
content. Science involving humans does not happen in a sterile environment
or in a vacuum. As such, you need to stomach some mess.
Your research, flawed though it is, still shapes our current understanding of a
topic. Avoid giving voice to every little thing that is wrong with your study.
Select the most important points that readers would find insightful and have
the most promise of moving science forward.
One of the amazing things about being human is our ability to project
ourselves into the future and be hopeful (Lopez, 2013). Your research study
is one piece in a vast puzzle; you can shape what the puzzle looks like now
and in the future. What’s next?
Future directions emerge organically from limitations. Should someone
attempt to extend your findings in another context or to a different population?
Should someone address the methodological issues that arose as you were
conducting your study? The more insightful the limitation, the more exciting
the potential for future study.
Surprising or counterintuitive findings beg for future study. Obtaining
data from an open source website, a team of researchers examined 3200
selfies from people living in five major world cities (http://selfiecity.net/).
They found that head tilt is quite common in selfies but surprisingly more
pronounced in women, to the tune of 50%. Although these data have not
been published in a scholarly peer-reviewed source, they are an excellent
example of how surprising findings can inspire future research, even across
disciplinary boundaries.
Your findings might naturally point to the next logical step needed to solve
a problem or understand a phenomenon. Your study has gotten us one link
farther in a long chain of research. What’s the next link? Take the plunge –
shape future science.
Science contributes to the public good. Explore how your findings might
apply outside of your specific context. Who might use your findings and how
might they apply them in the real world? Municipal school board members
or state legislators might use your research on the economics of local public
schools to inform funding decisions. Educators might use your research
on adaptive strategy use to enhance their students’ learning. Non-profit
organizations might use your research on sociological factors underlying
teen pregnancy to steer outreach programs.
By and large, the answers to the previous questions appear in the Discussion
section of a research report. However, a conclusion section (i.e., the six-
fingered approach) adds spice. Table 7.1 illustrates where the responses to
each question might live in your research report.
Table 7.1. Typical residences of answers for what your findings mean
SUMMARY
You have explored how your findings add to our existing knowledge, inform
theory, are limited, shape future lines of exploration, and can be applied to
real-world contexts. You answered questions and opened the door for more.
Your audience leaves your report satisfied, yet hungry. Before the sun sets on
your research story, you must attend to some odds and ends.
CHAPTER 8
INTRODUCTION
TITLE PAGE
Like a birth announcement, the title page announces your study. The title
is your audience’s first encounter with your project, so carefully consider
names for your baby. Titles should be short and informative. In 15 words or
fewer, readers should understand the major constructs you’ve investigated.
Avoid traditional title traps, including "The Effect of X on Y" and "The
Relationships Between X, Y, and Z in Q". Kail (2015) also recommends
shunning clever-but-too-cute titles that rely on readers’ knowledge of pop
culture. Consider a recent criminology article: “Smells Like Teen Spirit:
Evaluating a Midwestern Teen Court” (Norris, Twill, & Kim, 2011). The
song Smells Like Teen Spirit is timeless, but it came out 20 years before this
article. Construct a title that teases your take-home message: “Increasing
Sanctions in Teen Court May Lead to Greater Recidivism and Drop-Out
Rate”. (This was actually Norris and colleagues’ conclusion – you’d never
know it from their title.)
As proud parent of your project, your name and institutional affiliation
follows the title. Depending on your formatting style, you might also include:
word count; contact information for the corresponding author (that’s you);
acknowledgments, or who you would like to credit or thank (that’s me –
just kidding); and potential conflicts of interest (e.g., your funding source, if
applicable). Title pages generally have specific formatting requirements, so
carefully review your discipline’s preferred style guide.
ABSTRACT
IN-TEXT CITATIONS
In-text citations occur within the body of your research report. The majority
of formatting styles used in the social and behavioral sciences employ
parenthetical citations1 using an author-date citation system. Authors’ names
and date of publication (usually the year) appear within the citation. This
form of citation applies for journal articles, books, and articles in the popular
press that have a clear author. The term "author" also applies to groups of
people or organizations that identify themselves under a single name (e.g.,
Association for Psychological Science [APS]).
Odds and ends
When do I use the abbreviation “et al.”? “Et al.” is short for “et alia”,
or “and others” in Latin. This shortcut saves you the trouble of writing out
names of multiple authors. Simply type the surname of the first author, and
follow that with “et al.”. Be mindful of your periods when using “et al.” –
placing the period after “et” or not including one at all may send your
professor into equal measures of apoplectic shock and fitful rage.3
Things get really interesting when you have more than two authors. The
rules for APSA and MLA are most straightforward: If there are four or more
authors, use “et al.”. ASA is more nuanced. Always use “et al.” with four or
more authors; start using “et al.” with three authors after you’ve cited them
the long way once. APA rules were invented to keep therapists in business.
For sources written by three to five authors, use “et al.” upon the second
reference. Always use “et al.” with six or more authors.
When (and how) do I include page numbers? With MLA, you always
include page numbers within in-text citations. Other citation formats require
page numbers only when directly quoting a source. The page number(s)
appear(s) after the year of publication. Link page ranges with a hyphen; do
not insert spaces around the hyphen. And that’s where the similarities end.
In APA, include an abbreviation for the page number(s) (p. or pp.). APSA
requires no abbreviation or punctuation. ASA wants publication year and
page number separated by a colon with no spaces.
At this point, you wouldn't be surprised to learn that there are more rules
about citing particular types of sources (e.g., government documents, court
cases, electronic media, etc.), secondary citing, and citing multiple works
by the same author(s). This chapter is intended to help you cite the most
common sources within the social and behavioral sciences (i.e., journal
articles, books, and book chapters in edited books). For more specialized in-text
citations, refer to a publication manual or reputable online source such
as https://owl.english.purdue.edu/.

Table 8.1. Examples of forethought and afterthought in-text citations and citation rules across formatting styles

APA
One author: Fallon (2016); (Fallon, 2016)
Two authors: Fallon and Smith (2016); (Fallon & Smith, 2016)
Three authors: Fallon, Smith, and Jones (2016); (Fallon, Smith, & Jones, 2016)
When to use "et al.": 3 to 5 authors, starting with 2nd reference; 6 or more authors, starting with 1st reference
Page numbers: Fallon (2016, p. 232) or Fallon (2016, pp. 232–233)

APSA
One author: Fallon (2016); (Fallon 2016)
Two authors: Fallon and Smith (2016); (Fallon and Smith 2016)
Three authors: Fallon, Smith, and Jones (2016); (Fallon, Smith, and Jones 2016)
When to use "et al.": 4 or more authors, starting with 1st reference
Page numbers: Fallon (2016, 232–233)

ASA
One author: Fallon (2016); (Fallon 2016)
Two authors: Fallon and Smith (2016); (Fallon and Smith 2016)
Three authors: Fallon, Smith, and Jones (2016); (Fallon, Smith, and Jones 2016)
When to use "et al.": 3 authors, starting with 2nd reference; 4 or more authors, starting with 1st reference
Page numbers: Fallon (2016:232–33)

MLA*
One author: Fallon…(32); (Fallon 32)
Two authors: Fallon and Smith…(32); (Fallon and Smith 32)
Three authors: Fallon, Smith, and Jones…(32); (Fallon, Smith, and Jones 32)
When to use "et al.": 4 or more authors, starting with 1st reference
Page numbers: Fallon (232–233)
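Because the "et al." rules are easy to misapply, it can help to see them written out as logic. Here is a sketch in Python of the APA rules summarized in Table 8.1 (the function name is my own invention, and only the parenthetical form is shown):

```python
def apa_intext(authors, year, first_reference=True):
    """Build a parenthetical APA citation, following the rules in Table 8.1."""
    n = len(authors)
    if n == 1:
        names = authors[0]
    elif n == 2:
        names = f"{authors[0]} & {authors[1]}"
    elif 3 <= n <= 5 and first_reference:
        names = ", ".join(authors[:-1]) + f", & {authors[-1]}"
    else:  # 3-5 authors on later references, or 6+ authors from the start
        names = f"{authors[0]} et al."
    return f"({names}, {year})"

print(apa_intext(["Fallon", "Smith", "Jones"], 2016))
print(apa_intext(["Fallon", "Smith", "Jones"], 2016, first_reference=False))
```

The APSA, ASA, and MLA columns would differ only in the thresholds and punctuation.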
Table 8.2. Examples of journal article, book, and book chapter citations across formatting styles

APA
Journal article: Fallon, M., Smith, A. B., & Jones, C. D. (2016). The magic of confidence intervals. Journal of Supreme Awesomeness, 2, 125–130. doi: 123.4567.89
Book: Fallon, M., Smith, A. B., & Jones, C. D. (2016). The magic of confidence intervals. New York, NY: Awesome Publishers.
Book chapter: Fallon, M., Smith, A. B., & Jones, C. D. (2016). The magic of confidence intervals. In M. Fallon (Ed.), Statistics for the win (pp. 125–130). New York, NY: Awesome Publishers.

APSA
Journal article: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. 2016. "The Magic of Confidence Intervals." Journal of Supreme Awesomeness 2 (2): 125–30.
Book: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. 2016. The Magic of Confidence Intervals. New York: Awesome Publishers.
Book chapter: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. 2016. "The Magic of Confidence Intervals." In Statistics for the Win, ed. Marianne Fallon. New York: Awesome Publishers, 125–30.

ASA
Journal article: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. 2016. "The Magic of Confidence Intervals." Journal of Supreme Awesomeness 2(2):125–130. doi: 123.4567.89
Book: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. 2016. The Magic of Confidence Intervals. New York: Awesome Publishers.
Book chapter: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. 2016. "The Magic of Confidence Intervals." Pp. 125–130 in Statistics for the Win, edited by M. Fallon. New York: Awesome Publishers.

MLA
Journal article: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. "The Magic of Confidence Intervals." Journal of Supreme Awesomeness 2.2 (2016): 125–130. Print.*
Book: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. The Magic of Confidence Intervals. New York: Awesome Publishers, 2016. Print.
Book chapter: Fallon, Marianne, Anna B. Smith, and Caliope D. Jones. "The Magic of Confidence Intervals." Statistics for the Win, Ed. Marianne Fallon. New York: Awesome Publishers, 2016. 125–130. Print.

Note. I have made up these references for illustrative purposes. But I wish the Journal of Supreme Awesomeness actually existed.
*MLA always specifies the medium of the source. APA, APSA, and ASA identify the medium when it is not a print source.
Once you have listed all references, organize them alphabetically using the
primary author’s surname.5 Further, format them using a “hanging indent”
where the first line of each reference is flush left with subsequent lines indented.
When do I use the abbreviation “et al.”? Only MLA and APA have explicit
rules about using “et al.” within references. MLA gives researchers the option
of using “et al.” for three or more authors. APA specifies using “et al.” only
with eight or more authors. In such cases, you list the first six authors, include
an ellipsis (…), and list the final author.
What do I capitalize? Here, too, APA is the odd man out. Titles of articles,
books, and book chapters in APA are capitalized using sentence case,
where only the first word, proper nouns, and words after colons and em
dashes are capitalized. However, journals are listed in Title Case, in which
content, or principal, words are capitalized and function words (e.g., articles,
prepositions, conjunctions) are not.4 All other formats use Title Case for all
titles of articles, book chapters, books, and periodicals.
What information is italicized? Book titles and journal titles are italicized
across every formatting style. APA also italicizes the volume number of
journals.
Do I include digital object identifiers (doi’s)? APA and ASA require doi’s
when they are available for journal articles. These appear at the end of the
reference and are preceded by "doi:". (Be wary of well-intentioned
software that automatically capitalizes the "d".)
TABLES AND FIGURES
Bring tables and figures to readers' attention within the text. Beginning a
sentence with “As illustrated in Figure 1” immediately draws readers’
attention to a powerful visual summary of the data. Despite mentioning
tables and figures directly in the text, you make readers hunt for them on
separate pages at the end of the report.6
APPENDICES
SUMMARY
Writing the major sections of your research report oriented you towards
the big picture, but the devil is in the details. Preface your report with an
engaging title and abstract, include immaculate in-text citations throughout,
and wrap it up with carefully formatted references, tables and figures, and
appendices. Congratulations, emerging researcher! Your report is complete.
NOTES
1 Chicago style uses two citation systems: the author-date system and the "notes and bibliography" system. In this book we will focus only on the author-date system.
2 ASA also permits chronological ordering of multiple sources as long as the author remains consistent throughout the manuscript (ASA, 2010). Some formatting styles suggest separating sources with commas in certain contexts. Here is an example from the Style Manual for Political Science (APSA, 2006): (Confucius, 1951; see also Gurdjieff, 1950, Wanisaburo, 1926, and Zeller, 1914).
3 Why is there so much fuss around periods? Why abbreviate "alia"? In Latin, "alia" is the neuter form of "other". Technically, you could have a bunch of all-male "others" (i.e., "alii") or all-female "others" (i.e., "aliae"). So, the abbreviation "al." takes care of all permutations. You will remember this moment when you graduate from your program and you're wondering whether to call yourself an alumnus or alumna – you'll just settle on "alum".
4 In APA, Title Case has the additional rule that any word that is four or more letters – even if it is a function word (e.g., that) – is capitalized. To my knowledge, that rule does not exist in the MLA, ASA, or APSA manuals.
5 In MLA, if there are two works by the same author(s), don't list the author(s) on the second reference. Instead, include three dashes where the name(s) should be.
6 One exception is in APSA style – for review purposes, APSA requests tables and figures be placed within the text. However, accepted manuscripts need to include tables and figures on separate pages at the end of the report.
SECTION 3
EXAMPLES OF QUANTITATIVE
RESEARCH REPORTS
• Line spacing. I used single spacing throughout the sections with additional
spacing between sections and headers to help your eyes parse the major
sections and subsections of the paper. All formatting styles require double-
spacing and most do not incorporate additional spacing between sections.
• Page numbers and running heads. These are non-existent in the samples;
look them up!
• Figure and table placement. To improve readability, I included figures
and tables where they would most logically appear within the narrative.
Depending on your formatting style, you might place them at the end of
your manuscript.
• References. References should be the same point-size as the rest of the
manuscript (typically 12-point font). The point-size included in the sample
papers is considerably smaller.
• Pagination. Some formatting styles call for page breaks following the
abstract and other sections of the research report.
• Appendices. No appendices are included.
Even if these sample papers do not derive from your home discipline,
they illustrate how the concepts and strategies included in earlier chapters
manifest across contexts. The exterior features may be different, but the
heartbeat is the same. Think of songs that you can’t help but move to – or
at least tap a finger or foot to. Rhythmic patterns that balance complexity
and predictability make people want to dance (Witek, Clarke, Wallentin,
Kringelbach, & Vuust, 2014). It doesn't matter if it's Ray Charles, Pharrell,
Mark Ronson, or Aretha Franklin; you dance because your body wants to fill
the space between the beats (Doucleff, 2014). That’s what these sample papers
do – in their unique way they find the optimal space between predictability
and complexity so readers can get their groove on.
CHAPTER 9
CONTENT ANALYSIS
INTRODUCTION
Mr. Cory Manento conducted this content analysis within the discipline of
Political Science. Political scientists study how political systems originate,
develop, and operate. Analysis can extend to governments, policies (including
legislation), political institutions, and political behavior. Mr. Manento used
textual analysis to examine Supreme Court decisions of the “exclusionary
rule”, a law stating that evidence collected in violation of one’s constitutional
rights may be inadmissible in a court of law. The study beautifully illustrates
content analysis at the word level and adds a dimension of time to capture the
evolution of the Court’s thinking (and politics). It is formatted using APSA
and illustrates the six-digit Count Rugen organizational structure.
SAMPLE PAPER
Abstract
In 1914, the Supreme Court recognized the exclusionary rule as an essential
constitutional remedy to protect citizens from illegally seized evidence
in court (Weeks v. United States). Subsequently, the Court extended the
exclusionary rule to the States in Mapp v. Ohio (1961). The Court appears
to have shifted to the ideological right on this issue, carving out exceptions
to the rule. In 1984, the Burger Court created the “good faith” exception in
United States v. Leon. The even more conservative Roberts Court created the
“knock-and-announce” exception in Hudson v. Michigan (2006), and made
exceptions for police error in Herring v. United States (2009). To confirm the
rightward trajectory of modern exclusionary rule jurisprudence, I analyzed
the content in the majority and dissenting opinions in these five cases. The
Introduction
You are a college student attending a State University and you reside in
an on-campus dormitory. You have signed a contract with the University
acknowledging that the University reserves the right to inspect your room at
any time for contraband (e.g., candles, toaster ovens, alcohol, etc.) in plain
sight. You placed a small bottle of alcohol in your bottom desk drawer, which
you closed fully before you left for Winter Break. During room inspections,
the inspector accidentally kicks the drawer, opening it and revealing the
alcohol. The University schedules you for a disciplinary hearing and
potential expulsion. Can the University legally initiate disciplinary hearings
based on this search and seizure?
The Fourth Amendment to the United States Constitution protects people’s
right to be secure in their “persons, houses, papers, and effects against
unreasonable searches and seizures,” and says that “no Warrants shall issue,
but upon probable cause, supported by Oath or affirmation, and particularly
describing the place to be searched, and the persons or things to be seized.”
The Supreme Court applied the exclusionary rule to Fourth Amendment
cases as a legal protection for citizens that could face criminal penalties
because of illegally obtained evidence. Nolo’s Plain-English Law Dictionary
defines the exclusionary rule as: “A rule of evidence that disallows the use
of illegally obtained evidence in criminal trials.” In other words, the rule
prevents the government from using most evidence gathered in violation of
the Constitution.
Over the past century, the applications of and exceptions to the exclusionary
rule have been challenged within the Supreme Court. Further, since 1972,
Supreme Court decisions have been predominantly conservative, with the
most extreme conservative rulings occurring since 1991 (Martin & Quinn,
2002, 2014). Three important cases on exclusionary rule jurisprudence have
come before the Court following this ideological shift to the right: United
States v. Leon (1984), Hudson v. Michigan (2006), and Herring v. United
States (2009). Two other landmark cases occurred before the shift: Weeks v.
United States (1914) and Mapp v. Ohio (1961). Together, these five cases contain the original
application of the exclusionary rule, the major expansions and exceptions
made to the rule, and the rule’s current state. Thus, the purpose of this paper
is to analyze the content of the majority and dissenting decisions in these
cases and examine whether the ideological shift of the Court is manifest in
the language of these decisions.
Literature Review
Two main areas of literature are relevant to this research: the Supreme
Court’s interpretation of the exclusionary rule and the methodological
effectiveness of word quantification and the use of words as data. Before
describing legal scholars’ interpretation of exclusionary rule jurisprudence,
I present the facts of each case under study and the Court’s rulings.
is quite conservative on this issue. Kennedy has never voted to impose the
exclusionary rule in a Fourth Amendment case. Even when Kennedy was
a judge on the Ninth Circuit, he voted against applying the exclusionary
rule in Leon. In his dissent in Leon, Kennedy lamented that “[w]hatever
the merits of the exclusionary rule, its rigidities become compounded
unacceptably when courts presume innocent conduct when the only common
sense explanation for it is ongoing criminal activity.” Justices Scalia and
Thomas have been reliable conservatives on nearly every issue since they
have been on the Court, including the exclusionary rule. Similarly, Justice
Alito has voted conservatively regarding the exclusionary rule. Indeed,
Alito and Chief Justice Roberts served in the Reagan-era Edwin Meese
Justice Department, whose primary goal was to attack the exclusionary rule
(Bandes, 2009). During that time, Roberts found himself engaged in "a
campaign to amend or abolish the exclusionary rule,” when he was a young
lawyer in the Reagan Administration (Liptak, 2009).
Clearly, the current Court’s findings regarding the exclusionary rule
have swung toward a more conservative application of the law. To better
understand this ideological shift, it is important to analyze the content of the
majority decisions and dissents.
exceptions to the rule have occurred in recent years, particularly within the
Roberts Court, exclusionary rule opinions offer an excellent opportunity to
examine the Court’s shift to the ideological right.
Research Design
The intent of this research project is to expose patterns in the rhetoric used
in five Supreme Court cases regarding the exclusionary rule. The majority
opinions in Weeks and Mapp, and the dissenting opinions in Leon, Hudson,
and Herring were designated liberal opinions. The dissenting opinion
in Mapp and the majority opinions in Leon, Hudson, and Herring were
categorized as conservative opinions. Note that Weeks was a unanimous
decision and did not have a dissenting opinion.
I chose the word as my unit of analysis. Specifically, I examined content
words (nouns, proper nouns, verbs, adjectives, or adverbs) rather than function
words (articles, conjunctions, interjections, prepositions, etc.). Different
forms of content words, such as past tense or plural, were considered one
word. Words that express the same idea (i.e., synonyms) were recorded as
one word (e.g., “officer” and “policeman”). However, words that are similar
but express different ideas (e.g., “exclude” and “exclusionary”) were counted
as distinct words.
Using an online word frequency counter, I calculated the 30 most
frequently used words in each opinion. Then, I selected particular words
that conveyed political ideology and examined the frequency of those words
across the opinions. After examining the most frequently used words in each
opinion, I focused on the following words and their variants: “Constitution”,
“privacy”, “right” (as in one’s constitutional right, not ideology), “deter”,
and “cost”.
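The counting procedure described here can be reproduced with a few lines of code. A minimal Python sketch follows; the opinion text below is invented for illustration, and the variant table mirrors the study's rule of collapsing different word forms into one word:

```python
import re
from collections import Counter

# A toy stand-in for one opinion's text (wording invented for illustration).
opinion = """The exclusionary rule deters unconstitutional searches. Deterrence
has costs, but the Constitution protects the right to privacy. Constitutional
rights deter overreach."""

words = re.findall(r"[a-z]+", opinion.lower())
total = len(words)

# Collapse variants into one stem, as the study did (e.g., "deter"/"deterrence").
variants = {"constitutional": "constitution", "deters": "deter",
            "deterrence": "deter", "rights": "right", "costs": "cost"}
counts = Counter(variants.get(w, w) for w in words)

# Frequency of each target word as a percentage of all words in the opinion.
for stem in ("constitution", "privacy", "right", "deter", "cost"):
    print(f"{stem}: {100 * counts[stem] / total:.1f}%")
```

Dividing each count by the opinion's total word count yields percentages comparable across opinions of different lengths, which is what the figures in this paper report.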
I predicted a shift in viewing the exclusionary rule as a constitutional
right and necessity to a court-created remedy. Over time, I expected words
connoting civil liberties (i.e., “Constitution”, “right”, and “privacy”) to
decrease and words connoting criminal deterrence (i.e., “deter” and “cost”)
to increase. Further, I expected liberal opinions to address civil liberties more
frequently than criminal deterrence and the opposite pattern for conservative
opinions.
Analysis
My hypotheses were partially supported. As illustrated in Figure 1,
“constitution/al/ality” was frequently used in the Weeks (0.61%) and the
Mapp liberal majorities (0.72%), and sharply declined in the conservative
For words conveying punishment (Figure 4), the general pattern was
reversed. Variants of “deter” increased across cases, especially within
conservative majority decisions. In Mapp, there was hardly any mention of
“deter” (0.03% or 0.06%), whereas in Herring, the conservative majority
(0.54%) and liberal minority opinions (0.38%) addressed deterrence more
frequently.
Discussion
My hypotheses were partially supported. Overall, the frequency of words
connoting civil liberties (i.e., variants of “constitution”, “privacy”, and “right”)
decreased over time. Further, words conveying deterrence increased within
the Court opinions. In most cases, changes in the frequency of occurrence were
linked to the ideology underlying the opinion. Not surprisingly, in conservative
majority opinions the frequency of civil-liberties variants decreased the most
drastically, whereas variants connoting deterrence increased. The opposite
pattern was mostly mirrored in the liberal opinions.
However, there were exceptions to these general patterns. For example,
conservative opinions using variants of “right” remained relatively consistent
across the five cases. Conservative opinions may have focused on the public’s
right to safety rather than a right to privacy. Similarly, the word "cost" was
very frequent in the liberal minority opinion in Leon. Perhaps the cost was to
an individual’s civil liberties rather than public safety.
Such aberrations may have arisen from dissenting opinions being
written in response to majority opinions. For example, the Mapp dissent
used “constitution/al/ality” with almost the same frequency (0.70%) as the
majority (0.72%). However, this explanation does not hold within other
Conclusion
The present study supports the contention that the ideology of the
Supreme Court on the exclusionary rule has been shifting to the right,
especially recently. The Court has made several exceptions to the rule,
making it more difficult for defendants to be protected from unreasonable
searches and seizures. Recent majority opinions argue that the societal costs
of applying the exclusionary rule outweigh the benefits of deterring police
from committing Fourth Amendment violations. Should this trend continue,
the exclusionary rule may be swept away as a court-created remedy rather
than upheld as a constitutional right. Lock your drawers.
References
Bandes, Susan A. 2009. “The Roberts Court and the Future of the Exclusionary
Rule.” Paper for the American Constitutional Society for Law and Policy.
California v. Minjares. 1979. 443 U.S. 916.
Herring v. United States. 2009. 555 U.S. 135.
Hudson v. Michigan. 2006. 547 U.S. 586.
Liptak, Adam. 2009. “Justices Step Closer to Repeal of Evidence Ruling.” New York
Times, January 31.
Laver, Michael, Benoit, Kenneth, and Garry, John. 2003. “Extracting Policy Positions
from Political Texts Using Words as Data.” American Political Science Review
97 (May): 311–31.
Maclin, Tracey and Rader, Jennifer. 2012. “No More Chipping Away: The Roberts
Court Uses an Axe to Take Out the Fourth Amendment Exclusionary Rule.”
Mississippi Law Journal 81 (May): 1184–227.
Mapp v. Ohio. 1961. 367 U.S. 643.
Martin, Andrew D. and Quinn, Kevin M. 2002. “Dynamic Ideal Point Estimation via
Markov Chain Monte Carlo for the U.S. Supreme Court 1953–1999.” Political
Analysis 10: 134–153.
Martin, Andrew D. and Quinn, Kevin M. 2014. “Martin-Quinn Scores.”
http://mqscores.berkeley.edu/measures.php
McGuire, Kevin T. and Vanberg, Georg. 2005. “Mapping the Policies of the U.S.
Supreme Court: Data, Opinions, and Constitutional Law.” Presented at the
Annual Meeting of the American Political Science Association, Washington
DC.
Nolo’s Plain-English Law Dictionary. “Exclusionary Rule.” http://www.nolo.com/
dictionary/exclusionary-rule-term.html
Rice, Douglas. 2012. “Measuring the Issue Content of Supreme Court Opinions
through Probabilistic Topic Models.” Presented at the 2012 Midwest Political
Science Association Conference, Chicago.
United States v. Leon. 1984. 468 U.S. 897.
Weeks v. United States. 1914. 232 U.S. 383.
CHAPTER 10
INTRODUCTION
SAMPLE PAPER
Abstract
This paper examines how race moderates the relationship between
occupational prestige and economic attainment. Data from the 2008 General
Social Survey were analyzed. Although racial minorities have lower
occupational prestige than whites, the effect is quite small. Racial minorities
are more likely than whites to attribute economic disparity to race at all levels
of prestige, but especially at low levels of prestige. Whites are more likely
to own homes than racial minorities at all levels of occupational prestige
and particularly at moderate levels of prestige. Race and occupational
prestige predict unique portions of variation in income. However, race
and occupational prestige do not significantly interact to predict income.
Chapter 10
Taken together, these findings partially support the conclusion that race is
a stronger predictor of economic attainment than occupational prestige. For
some indicators, occupational prestige moderates the relationship between
race and economic attainment.
Introduction
How worthy does society perceive your job to be? Is such worthiness
rewarded economically? According to a recent Harris poll of over 2,000
adults in the United States (The Harris Poll 2014), the most prestigious
occupations include physicians, military officers, firefighters, scientists, and
nurses. Nevertheless, in the same poll, parents were most likely to encourage
their children to become engineers – a career with less prestige than the
aforementioned professions. Why?
Parental attitudes mirror the fact that occupational prestige does
not actually measure the economic attainment of occupations. Prestige is
strictly a measure of the perceived social value attributed to a job. The National
Opinion Research Center (NORC) first started measuring occupational
prestige in the 1940s to identify social status associated with professions
(Duncan 1961). Occupational prestige has been measured regularly as part of
the General Social Survey since the 1970s (The General Social Survey 2014)
and has remained extremely stable over decades (Nakao and Treas 1994).
However, more recent data show generational differences in perceived
prestige (The Harris Poll 2014).
From a conceptual standpoint, occupational prestige is not the best
indicator of economic attainment. Nevertheless, occupational prestige may
mitigate income inequality related to demographic factors, particularly race.
To test this hypothesis, I examine whether occupational prestige moderates
the relationship between race and income, as indexed by home ownership
and yearly income. Further, I investigate whether occupational prestige plays
a role in people’s perceptions of economic inequality due to race. I contend
that occupational prestige partially mitigates economic inequality across
races, but it does not erase it.
Secondary Analysis of archival data
than racial minorities. Xu and Leffler (1992) reported that racial minorities
earn less than whites.
Other researchers examine economic inequality across race using economic
indicators other than income. Smith and Elliott (2002) examined whether
ethnic concentrations in occupations and industries influence how people of
different races attain positions of authority. Workplace authority gives
access to other social benefits allocated through the labor market, such as
higher pay, medical benefits, pensions, etc. In general, whites are more
likely to attain supervisory positions than nonwhites. However, ethnic
concentration is related to the kind of supervisory position that minorities
attain. If the ethnic concentration is among entry-level workers, then the
ethnic minority likely supervises entry-level workers (a basic supervisory
role). Conversely, if the ethnic concentration is at higher-level positions, the
ethnic minority has a greater chance of obtaining a higher-level supervisory
role managing upper-level workers. This hypothesis is known as the “sticky
floor”.
In addition to economic inequalities, a widening body of research documents
differences in occupational prestige across race. Generally, whites have
higher occupational prestige than racial minorities (Lemelle 2002; Xu and
Leffler 1992). Espino and Franz (2002) report a relationship between skin
color and occupational prestige. For some Hispanic groups (Mexican and
Cuban, but not Puerto Rican), people with lighter skin tones tend to have
jobs with higher occupational prestige. Within the health care sector, a sector
with quite high occupational prestige overall, Aguirre, Wolinsky, Niederauer,
Keith, and Fann (1989) found that whites have higher occupational prestige
than racial minorities.
The previous literature clearly indicates racial inequality in economic
attainment and in occupational prestige. However, few if any studies have
examined the intersection between occupational prestige and race as predictors
of economic attainment. In the present study, I replicate previous findings
documenting racial differences in economic attainment and in occupational
prestige. Using data from the most recent General Social Survey, I examine
two indicators of economic attainment: yearly income and home ownership.
In addition to examining economic attainment, the GSS offers
the unique possibility of investigating beliefs explaining economic disparity.
Specifically, I examine whether occupational prestige moderates the
relationship between race and belief that income inequality is due to racial
discrimination. Consistent with dual labor market theory, I expect whites
to report higher levels of economic attainment and occupational prestige
METHODS
Data
I used responses from the 2008 General Social Survey (GSS; National
Opinion Research Center, 2008) to address my research question. Since
1972, the General Social Survey has collected data on various social factors
from community-dwelling adults in the continental United States. The 2008
General Social Survey contained 2023 respondents.
Measures
The following variables were used in this study: RACE (Race of Respondent),
PRESTG80 (Respondent’s Occupational Prestige Score [1980]), RINCOM06
(Respondent’s Income), DWELOWN (Does Respondent Own or Rent
Home), and RACDIF1 (Economic Differences Due to Discrimination). The
independent variables were RACE and PRESTG80. The dependent variables
were RINCOM06, DWELOWN, and RACDIF1.
Methods
First, I used an analysis of variance to examine whether respondents’ race
(RACE) is related to their occupational prestige (PRESTG80). To increase
statistical power, I recoded RACE into two categories: whites (n = 1559) and
racial minorities (n = 464). I expect whites to report having jobs with higher
occupational prestige than minorities.
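An analysis of variance of this kind, comparing prestige scores across the two recoded race categories, can be sketched with SciPy. The score vectors below are invented placeholders, not GSS data:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical PRESTG80 scores for the two recoded RACE groups.
prestige_white = np.array([44, 51, 39, 62, 48, 55, 43, 50])
prestige_minority = np.array([38, 45, 33, 52, 41, 36, 40, 35])

# One-way ANOVA: does mean occupational prestige differ across groups?
F, p = f_oneway(prestige_white, prestige_minority)
```

With two groups, this one-way ANOVA is equivalent to an independent-samples t-test (F = t²).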
Second, I used a multivariate cross-tabulation with a chi-square test of
independence to investigate whether racial minorities high in occupational
prestige are more likely than whites to believe that racial differences in
wealth indicators are due to discrimination. The recoded RACE acts as
the independent variable. PRESTG80 functions as the control variable;
I recoded PRESTG80 into a three-level categorical variable of occupational
prestige: low (n = 750), moderate (n = 638), and high (n = 635). RACDIF1, a
binary response, acts as the dependent variable. I predict that racial minorities
at all levels of prestige are more likely to believe there are economic
differences between whites and racial minorities in the United States because
of racial discrimination.
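As a rough illustration of the planned test, the chi-square statistic and Cramer's ν for one layer of the cross-tabulation can be computed with SciPy. The counts below are hypothetical, not GSS data:

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi_square_with_v(table):
    """Chi-square test of independence plus Cramer's V (reported as nu)."""
    table = np.asarray(table)
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    n = table.sum()
    v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    return chi2, p, v

# Hypothetical RACE (rows) x RACDIF1 (columns: yes, no) counts
# within the low-prestige layer of the control variable.
low_prestige = np.array([[210, 340],
                         [120, 80]])
chi2, p, v = chi_square_with_v(low_prestige)
```

Repeating the computation within each prestige layer mirrors the multivariate cross-tabulation described above.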
Hypothesis 3
Figure 2 illustrates the interaction between race and occupational prestige
on actual economic attainment, as indicated by home ownership.
Race, occupational prestige, and home ownership were significantly and
moderately associated, χ2(1, N = 1327) = 72.42, p < .001, Cramer’s ν = .234.
For each level of occupational prestige, whites were more likely to own or
be in the process of buying a home than racial minorities [17.53 ≤ χ2 ≤
28.687, all p’s < .001, .187 ≤ Cramer’s ν ≤ .234]. Whereas the percentage of
whites owning homes increased almost 22% as a function of occupational
prestige, the percentage of racial minorities owning homes increased only
18%. Further, the greatest disparity across race emerged at moderate levels
of prestige.
Figure 2. The interaction between race and prestige for home ownership
Hypothesis 4
I conducted a multiple regression to examine whether occupational
prestige, race, and the interaction between prestige and race predicted unique
variation in income. Because income was negatively skewed, I applied a
square root transformation on the reversed scores. The values from the
regression model reflect the transformed scores, whereas Figure 3 depicts
original values.
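A reflect-and-square-root transformation of this kind can be sketched as follows; the helper name and income codes are illustrative, not the actual GSS values:

```python
import numpy as np

def reflect_sqrt(scores):
    """Square-root transform for negatively skewed data.

    Reflecting the scores (max + 1 - x) turns negative skew into
    positive skew, which the square root then pulls in.
    """
    scores = np.asarray(scores, dtype=float)
    return np.sqrt(scores.max() + 1 - scores)

# Hypothetical income codes clustered near the top of the scale.
income = np.array([3, 18, 20, 22, 23, 24, 25, 25])
transformed = reflect_sqrt(income)
```

Because reflection reverses the scale, larger raw incomes map to smaller transformed values, so the sign of a coefficient predicting the transformed outcome is flipped relative to the raw scores.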
The overall regression model was statistically significant, F(3, 1185) =
87.09, p < .001, and accounted for approximately 17.9% of the variation in
income. As expected, whites earned more than racial minorities, β = −0.12,
t(1187) = −4.40, p < .001. The part correlation indicates that approximately
11.6% of the variation in income can be explained by race alone. Further,
people with higher occupational prestige earned more than people with low
occupational prestige, β = 0.27, t(1187) = 3.40, p = .001. Approximately
8.9% of the variation in income was uniquely explained by occupational
prestige. Although the interaction between race and occupational prestige
was not significant, β = 0.13, t(1187) = 1.65, p = .098, Figure 3 suggests a
trend consistent with other analyses that the greatest impact of race occurs at
disparity was due to racial discrimination was moderately stronger for racial
minorities than for whites. However, this relationship weakened dramatically
as occupational prestige increased.
For more tangible economic indicators, the mitigating role of occupational
prestige was less clear. With respect to home ownership, whites were more
likely to own homes than racial minorities and this disparity was greatest at
moderate levels of prestige. Perhaps racial minorities do not feel comfortable
committing to a home investment until they reach higher levels of
occupational prestige. Alternatively, racial minorities with moderate levels
of occupational prestige may be less likely to be approved for a mortgage
than whites with comparable levels of prestige. Although lending practices
for Black people improved over the 1990s, Black people were still more than
twice as likely to be turned down for a mortgage as their white counterparts
(Phillips, 2003). Note that the data for the present study were collected in
2008 before the subprime lending collapse. The data from the 2018 GSS
may reveal greater racial disparity in home ownership with less mitigation
by occupational prestige.
I observed no moderating effect of occupational prestige on the
relationship between race and overall income. Although there was a trend
suggesting that occupational prestige mitigated racial income inequality at
higher levels of prestige, the interaction was not statistically significant.
Even if the interaction achieved statistical significance, it would account for
very little variation in income.
Several limitations qualify these conclusions. First, data were substantially
incomplete for some variables. For example, only 1327 out of 2023 possible
respondents reported home ownership. Further, only 1274 out of 2023 possible
respondents answered questions about economic disparity being due to
racial discrimination. Thus, the sample may not be truly representative of
the general population. Second, there are two important factors that I did
not control for in these analyses: biological sex and educational attainment.
Clearly, an income gap exists between men and women, and this gap differs
across race. Although educational attainment is associated with occupational
prestige (more prestigious jobs such as physicians and scientists require
many years of schooling), the relationship is not perfect. Future research
should use a broader sample to adequately account for these variables. Third,
the overall trends reported in the present paper are for a cross-section of the
entire United States. The moderating effect of occupational prestige on the
relationship between race and economic attainment may manifest differently
across regions of the United States that differ in their beliefs and display of
References
Aguirre, Benigno E., Fredric D. Wolinsky, John Niederauer, Verna Keith, and
Lih-Jiuan Fann. 1989. “Occupational Prestige in the Health Care Delivery
System.” Journal of Health and Social Behavior, 30:315–29.
Duncan, Otis D. 1961. “A Socioeconomic Index for All Occupations.” Pp. 109–38
in Occupations and Social Status, edited by Albert J. Reiss, Jr. New York:
Free Press.
Espino, Rodolfo and Michael M. Franz. 2002. “Latino Phenotypic Discrimination
Revisited: The Impact of Skin Color on Occupational Status.” Social Sciences
Quarterly, 83(2):612–23.
Hirsch, Eric. 1980. “Dual Labor Market Theory: A Sociological Critique.”
Sociological Inquiry, 50(2):133–45.
Lemelle, Anthony. 2002. “The Effects of the Intersection of Race, Gender, and
Educational Class on Occupational Prestige.” The Western Journal of Black
Studies, 26(2):89–97.
Nakao, Keiko and Judith Treas. 1994. “Updating Occupational Prestige and
Socioeconomic Scores: How the New Measures Measure Up.” Pp. 1–72
in Sociological Methodology, edited by Peter Marsden. Washington, DC:
American Sociological Association.
Phillips, Sandra. 2003. “African Americans and Mortgage Lending Discrimination”.
The Western Journal of Black Studies, 27(2):65–79.
Smith, Ryan A. and James R. Elliott. 2002. “Does Ethnic Concentration Influence
Employees’ Access to Authority? An Examination of Contemporary Urban
Labor Markets.” Social Forces, 81(1):255–79.
The General Social Survey. 2014. “About the GSS”. Chicago. Retrieved March 2,
2016 (http://gss.norc.org/About-The-GSS)
The Harris Poll. 2014. “Doctors, Military Officers, Firefighters, and Scientists
Seen as Among America’s Most Prestigious Occupations”. September 10.
New York. Retrieved March 14, 2016 (http://www.theharrispoll.com/politics/
Doctors__Military_Officers__Firefighters__and_Scientists_Seen_as_
Among_America_s_Most_Prestigious_Occupations.html)
Xu, Wu, and Ann Leffler. 1992. “Gender and Race Effects on Occupational Prestige,
Segregation, and Earnings”. Gender & Society, 6(3):376–92.
CHAPTER 11
INTRODUCTION
Ms. Selina Nieves collected data from other undergraduates for her study
in Psychological Science. Psychological scientists examine behaviors,
cognitions, and emotions with the goal of understanding how humans
experience the world. Emerging researchers in criminology, communications,
and behavioral economics often take a psychological approach in posing
and answering research questions. Ms. Nieves compared men’s and women’s
liking for happy and sad musical excerpts after participants were placed in a
sad mood.
Her study illustrates a true experimental manipulation and also contains
published self-report measures. Consequently, you will see how to describe
stimuli as well as questionnaires. Further, this research report contains
excellent examples of secondary analyses, including manipulation checks.
The paper is formatted in APA style (APA, 2010) and illustrates the four-
finger Simpson structure.
SAMPLE PAPER
Abstract
Research has well established that music modulates people’s mood. More
recent research has demonstrated that the relationship between mood and
music is bidirectional: people’s mood affects their musical preferences.
Although people generally gravitate towards liking happy music,
people who are in a sad mood increase their liking for sad music. Given
documented sex differences in emotional responsiveness and general
mood, I compared men’s and women’s preferences for happy- and sad-
sounding music after putting them in a sad mood. People in a sad mood
liked sad music more than they liked happy music, p < .001, ηp2 = .826.
Further, men and women provided comparable liking ratings for happy and
sad music, p = .810, ηp2 = .002. Regardless of biological sex, misery loves
company.
Primary data collection
Method
Participants
The sample consisted of 40 undergraduates (21 females, 19 males) from
a Northeastern regional public university. Students received course credit
for their participation. Participants were between 18 and 25 years of age
(M = 20.00 years, SD = 1.89). The majority of participants (62.5%) identified
as non-Hispanic Caucasians and racial/ethnic identification did not differ
across men and women, χ2(4, N = 40) = 0.78, p = .942, Cramer’s ν = .139.
All participants reported having good hearing and eyesight. Over half of
the sample (60%) reported having some musical training: 1–3 years (n = 6),
4–5 years (n = 4), and 6+ years (n = 7). Seven participants did not report the
duration of their musical training.
Materials
Demographics questionnaire. Participants reported their age, biological
sex, race and ethnicity, and answered questions about their student status
(e.g., total number of credits, cumulative GPA, extracurricular activities).
Further, participants described their previous musical training, which I
defined as knowing how to read music or being involved in activities where
you would have to know how to read music and understand music concepts,
such as tempo. Participants confirmed that they had normal hearing and
were wearing corrective lenses if applicable (see Appendix A).
Positive and Negative Affect Schedule (PANAS; Watson, Clark, &
Tellegen, 1988). This self-report scale consists of 20 items measuring
positive (e.g., alert) and negative (e.g., afraid) affect. Respondents rated how
well each word described their current momentary mood using a 5-point scale
from 1 (very slightly or not at all) to 5 (extremely). Both the positive and
negative subscales exhibited adequate inter-item consistency (Cronbach’s
α = .89 and α = .85, respectively; see Appendix B).
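Inter-item consistency of the kind reported for the PANAS subscales can be computed from a respondents-by-items score matrix; the small matrix below is invented for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical ratings for three items on the 1-5 PANAS response scale.
scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 4, 5],
                   [3, 3, 2]])
alpha = cronbach_alpha(scores)
```

When items co-vary strongly, the variance of the total score swamps the summed item variances and alpha approaches 1.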
Sad mood induction. To induce a sad mood, participants viewed Hunter
et al.’s (2011) stimuli consisting of 12 color images depicting sad content
(e.g., injured animals). Each picture was sized to fill a PowerPoint slide.
Brief written descriptions appeared under each image (e.g., “injured turtle”)
to emphasize negative affect. Using a Direct RT script (Empirisoft, 2012),
images appeared on the computer screen in a random order for 15 seconds
each. Participants rated each image on a 7-point bipolar sad-happy rating
scale using the computer’s keypad. Responses ranged from 0 (extremely
sad) to 6 (extremely happy). Upon making a response, the next image
appeared until participants rated all 12 images. Then participants selected
one of the images and wrote a reflective statement about it for 2 minutes (see
Appendix C).
Musical excerpts. Participants listened to six 30-second musical excerpts.
Five selections were from Hunter et al.’s (2011) stimuli that derived from
commercial recordings in an assortment of musical styles. I provided the
sixth excerpt. Excerpts were presented through headphones at equal volume
(see Appendix D).
Procedure
This study received Institutional Review Board approval. I conducted my study in a
small computer lab testing up to 5 participants at a time. Each participant sat
at his or her own computer terminal. Dividers were placed between terminals
for privacy. I informed participants that the purpose of my study was to
examine the relationship between mood and music. Participants then read
and signed the informed consent.
Participants completed the demographics questionnaire and the PANAS.
Next, they experienced the sad mood induction. As a manipulation check,
participants completed the PANAS again. Then I instructed participants
to put on their headphones to listen to six short musical excerpts. After
listening to each excerpt, participants answered three questions: “How
much did you LIKE the music?”, “How happy or sad did the music
SOUND?”, and “How familiar did the music SOUND?” Consistent with
Hunter et al. (2011), participants recorded responses on a 7-point scale
(0 to 6).
Results
Sad Mood Induction
Image ratings. To confirm that participants found the images sad, I
averaged ratings across the 12 images. Ratings (M = 1.04, SD = 0.70, 95%
CI [0.82, 1.27]) were significantly below the neutral threshold of 3.00,
t(39) = 17.78, p < .001, Cohen’s d = 2.81. Although women (M = 0.90,
SD = 0.59, 95% CI [0.63, 1.17]) found the images more sad on average than
did men (M = 1.20, SD = 0.78, 95% CI [0.82, 1.58]), this difference was not
statistically significant, t(38) = 1.38, p = .175, Cohen’s d = 0.43.
Mood. I examined whether participants’ mood changed after experiencing
the sad mood induction. Although negative affect scores on the PANAS
were positively skewed, the change in negative affect was normally
distributed for both men and women. I conducted a 2 × 2 mixed-model
MANOVA with time (pre-induction, post-induction) as a within-participant
factor and biological sex (male, female) as a between-participants factor.
Sums of positive and negative affect scores were the dependent variables.
Positive affect significantly decreased after induction (Mpre = 29.97, SD =
9.69; Mpost = 25.33, SD = 9.76), F(1, 38) = 29.57, p < .001, η2p = .438.
Conversely, negative affect significantly increased after induction (Mpre =
13.85, SD = 4.52; Mpost = 15.98, SD = 5.91), F(1, 38) = 7.87, p = .008, η2p =
.172. Neither the main effect of biological sex nor the interaction between
biological sex and time was statistically significant for positive or negative
affect scores (all p’s > .174). Thus, the sad mood induction effectively
changed participants’ mood and was comparably salient for men and women.
Primary Analysis
To evaluate whether men and women differed in their liking of the happy
and sad excerpts, I conducted a 2 × 2 mixed-model ANOVA with valence
(happy, sad) as a within-participant factor and biological sex (male, female)
as a between-participants factor. Findings are illustrated in Figure 1. Contrary
to prediction, participants vastly preferred sad music to happy music,
F(1, 38) = 88.24, p < .001, η2p = .699. I observed neither a main effect of
biological sex, F(1, 38) = 1.420, p = .241, η2p = .036, nor an interaction between
biological sex and valence, F(1, 38) = 0.018, p = .894, η2p < .001.
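For a 2 × 2 mixed design in which the within factor has two levels, two of the ANOVA tests reduce to t-tests on per-participant difference scores: an independent-samples t comparing the groups' difference scores is exactly the interaction test (t² = F), and a one-sample t on the pooled differences approximates the within-factor main effect. A sketch with invented ratings, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_1samp, ttest_ind

# Hypothetical liking ratings (0-6 scale); one entry per participant.
sad_m = np.array([5, 4, 5, 6, 4])
happy_m = np.array([2, 1, 2, 3, 2])
sad_f = np.array([5, 5, 5, 6, 5])
happy_f = np.array([2, 2, 1, 3, 2])

diff_m = sad_m - happy_m  # each man's sad-minus-happy liking
diff_f = sad_f - happy_f  # each woman's sad-minus-happy liking

# Valence effect: are the pooled difference scores nonzero?
t_val, p_val = ttest_1samp(np.concatenate([diff_m, diff_f]), 0)

# Sex x valence interaction: do the difference scores differ by sex?
t_int, p_int = ttest_ind(diff_m, diff_f)
F_interaction = t_int ** 2  # equals the mixed-ANOVA F(1, N - 2)
```

This reduction only works for a two-level within factor; with more levels, a full mixed-model ANOVA routine is needed.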
Secondary Analyses
Affect ratings for musical excerpts. Following the approach of the
primary analysis, I conducted a 2 × 2 mixed-model ANOVA to evaluate
whether participants reliably identified the intended valence of the musical
excerpts. Participants rated happy excerpts (M = 3.33, SD = 0.47) significantly
more positively than sad excerpts (M = 0.93, SD = 0.83), F(1, 38) = 179.93,
p < .001, η2p = .83. Ratings appeared comparable across biological sex,
F(1, 38) = 0.07, p = .800, η2p = .070, and there was no interaction between
biological sex and affect, F(1, 38) = 0.06, p = .810, η2p < .001.
Figure 1. Mean liking ratings for happy and sad musical excerpts across men and
women. Error bars represent ± 1 SE.
Discussion
In the current study, I compared men’s and women’s liking of happy and
sad music while they were in a sad mood. Whereas Hunter and colleagues
(2011) found that inducing a sad mood produced comparable liking ratings
for happy and sad excerpts, I found that participants who felt sad preferred
sad music to happy music by more than a factor of 2. Further, I found no
differences in music liking or emotional processing between women and
men.
Why did participants in my study vastly prefer sad music to happy music
while in a sad mood? Participants in my sample may have been music
empathizers who viewed music as emotional communication (Kreutz,
Schubert, & Mitchell, 2008). By contrast, music systematizers take a more
analytical approach to music listening. Future research should examine
whether preference for sad music while sad is associated with musical
listening style.
The testing situation may also explain the current results. Hunter et
al. (2011) tested participants individually; I tested participants in groups.
Perhaps emotional contagion occurred in my study. In emotional contagion,
people perceive others’ emotional behavior and unconsciously replicate
it. The act of replicating emotional behavior causes people to feel that
behavior themselves, presumably through the action of the mirror neuron
system (Dezecache, Jacob, & Grèzes, 2015). Although participants in my
study listened to music through headphones and partitions were placed
between computer stations, participants could still see each other. Perhaps
participants perceived others’ body language and facial expressions,
increasing the salience of the mood induction and subsequent liking for sad
music.
Despite evidence suggesting that women respond more strongly to
emotional stimuli than do men (e.g., Fujita et al., 1991; Grossman & Wood,
1993; Panksepp, 1995) and that women tend to make choices to prolong a
sad state (Butler & Nolen-Hoeksema, 1994), I observed no sex differences
in music liking or emotional processing. The images and musical excerpts
in the present study were gender neutral. Using stimuli that activate gender
schemas may produce differences in people who identify strongly with a
particular gender and corresponding stereotypes.
The current findings have implications for music therapy. Matsumoto
(2002) reported that deeply sad participants felt less sad after listening to
sad music. Although participants in the current study were not “deeply sad”,
the current findings suggest that people like emotional stimuli that match
their current mood. The act of matching behavior to current cognitions and
emotions may reduce cognitive dissonance and improve mood.
My findings add to the literature on mood congruency in the musical
domain (Hunter et al., 2010; Hunter et al., 2011). Men and women appear
to exhibit comparable mood congruency for music. The particularly strong
effect of liking sad music while in a sad mood observed in the present study
may be due to empathizing deeply with music or experiencing emotional
contagion. Misery may indeed love company.
References
Butler, L. D., & Nolen-Hoeksema, S. (1994). Gender differences in responses to depressed
mood in a college sample. Sex Roles, 30, 331–346. doi:10.1007/BF01420597
Dezecache, G., Jacob, P., & Grèzes, J. (2015). Emotional contagion: Its scope and limitations.
Trends in Cognitive Sciences, 19, 297–299. doi:10.1016/j.tics.2015.03.011
DirectRT (Version 2012) [Computer software]. New York, NY: Empirisoft Corporation
Fujita, F., Diener, E., & Sandvik, E. (1991). Gender differences in negative affect and well-
being: The case for emotional intensity. Journal of Personality and Social Psychology,
61, 427–434.
Grossman, M., & Wood, W. (1993). Sex differences in intensity of emotional
experience: A social role interpretation. Journal of Personality and Social Psychology,
65, 1010–1022.
Hunter, P. G., Schellenberg, E., & Griffith, A. T. (2011). Misery loves company: Mood-
congruent emotional responding to music. Emotion, 11, 1068–1072. doi:10.1037/
a0023749
Hunter, P. G., Schellenberg, E., & Schimmack, U. (2008). Mixed affective responses to
music with conflicting cues. Cognition and Emotion, 22, 327–352.
doi:10.1080/02699930701438145
Hunter, P. G., Schellenberg, E., & Schimmack, U. (2010). Feelings and perceptions of
happiness and sadness induced by music: Similarities, differences, and mixed
emotions. Psychology of Aesthetics, Creativity, and the Arts, 4, 47–56. doi:10.1037/
a0016873.
Isaacowitz, D. M., Toner, K., Goren, D., & Wilson, H. R. (2008). Looking while unhappy:
Mood-congruent gaze in young adults, positive gaze in older adults. Psychological
Science, 19, 848–853.
Kring, A. M., & Gordon, A. H. (1998). Sex differences in emotion: Expression, experience,
and physiology. Journal of Personality and Social Psychology, 74(3), 686–703.
doi:10.1037/0022-3514.74.3.686
Matsumoto, J. (2002). Why people listen to sad music: Effects of music on sad moods.
Japanese Journal of Educational Psychology, 50(1), 23–32.
Matt, G. E., Vázquez, C., & Campbell, W. K. (1992). Mood-congruent recall of affectively
toned stimuli: A meta-analytic review. Clinical Psychology Review, 12, 227–255.
Nolen-Hoeksema, S. (1990). Sex differences in depression. Stanford University Press.
Panksepp, J. (1995). The emotional sources of ‘chills’ induced by music. Music Perception,
13, 171–207.
Watson, D., Clark, L., & Tellegen, A. (1988). Development and validation of brief measures
of positive and negative affect: The PANAS scales. Journal of Personality and Social
Psychology, 54, 1063–1070.
EPILOGUE
Well, emerging researcher, you’ve been on quite a wild ride. Take a moment
to ponder all you have accomplished. Developing a research question,
formulating and implementing a methodology, and analyzing the data
stretched your creativity, judgment, and perspective. But these feats, notable
as they are, pale in comparison to writing your report.
Think about it. You introduced readers to your research question by
hooking them, contextualizing your question with extant research and
theory, and convincing readers why your question was worth studying.
You described how you went about answering your question in a way that
others could replicate. You described what you found and supported your
conclusions with a second language – statistics. You considered alternative
explanations for your findings, critically examined limitations, discussed
potential applications of your research, and – again – made readers care
about your work. You wrangled with seemingly countless formatting rules.
And, to top it off, you revised and reworked your prose to make it accessible
to the widest audience. Wow.
Should your journey end here? It can, but it doesn’t have to. In writing
your quantitative research report, you have made yourself part of something
much grander than you probably realized. You have become a member of
the scientific community. Scientists not only conduct research; they share
their research with anyone willing to listen or read. Could you present your
research at a local, regional, or national conference? Is your work publishable
in a scholarly outlet? Could you write a blog about your research?
Carl Sagan noted, “We live in a society dependent on science and
technology, in which hardly anyone knows anything about science and
technology.” As the next generation of social and behavioral scientists,
you have the opportunity to change that. Take a step in that direction by
partnering with your professor and taking your research report to the next
level. Communicate your research so that you can push science and society
forward.
REFERENCES
Aftermath of an unfounded vaccine scare. (2013, May 22). The New York Times.
Retrieved from http://nyti.ms/1xXWlkV
Ahn, W.-Y., Kishida, K. T., Gu, X., Lohrenz, T., Harvey, A., Alford, J. R., Smith, K.
B., Yaffe, G., Hibbing, J. R., Dayan, P., & Montague, P. R. (2014). Nonpolitical
images evoke neural predictors of political ideology. Current Biology, 24, 2693–
2699. doi:10.1016/j.cub.2014.09.050
American Political Science Association Committee of Publications. (2006). APSA
style manual for political science. Washington, DC: American Political Science
Association. Retrieved from www.apsanet.org
American Psychological Association. (2010). Publication manual of the American
Psychological Association (6th ed.). Washington, DC: American Psychological
Association.
American Sociological Association. (2010). American Sociological Association
style guide (4th ed.). Washington, DC: American Sociological Association.
Ames, D. R., Rose, P., & Anderson, C. P. (2006). The NPI-16 as a short measure
of narcissism. Journal of Research in Personality, 40, 440–450. doi:10.1016/j.
jrp.2005.03.002
Bain, K. (2004). What the best college teachers do. Cambridge, MA: Harvard
University Press.
Becker, H. S. (2007). Writing for social scientists: How to start and finish your
thesis, book, or article (2nd ed.). Chicago, IL: University of Chicago Press.
Bem, D. J. (2004). Writing the empirical article. In J. M. Darley, M. P. Zanna, &
H. L. Roediger III (Eds.), The compleat academic: A career guide (2nd ed.).
Washington, DC: American Psychological Association.
Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of
intelligence predict achievement across an adolescent transition: A longitudinal
study and an intervention. Child Development, 78, 246–263.
Boice, R. (1990). Faculty resistance to writing-intensive courses. Teaching of
Psychology, 17, 13–17.
Bonaccio, S., & Reeve, C. L. (2010). The nature and relative importance of students’
perceptions of the sources of test anxiety. Learning and Individual Differences,
20, 617–625. doi:10.1016/j.lindif.2010.09.007
Bransford, J. D., & Johnson, M. K. (1972). Contextual prerequisites for understanding:
Some investigations of comprehension and recall. Journal of Verbal Learning
and Verbal Behavior, 11, 717–726.
Brown, P. C., Roediger, III, H. L., & McDaniel, M. A. (2014). Make it stick: The
science of successful learning. Cambridge, MA: Harvard University Press.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Heiman, G. W. (2002). Research methods in psychology (3rd ed.). New York, NY:
Houghton Mifflin.
Howe, M. J. A. (1999). Genius explained. Cambridge, United Kingdom: Cambridge
University Press.
Ifcher, J., & Zarghamee, H. (2014). The happiness of single mothers: Evidence
from the general social survey. Journal of Happiness Studies, 15, 1219–1238.
doi:10.1007/s10902-013-9472-5
Kalis, P., & Neuendorf, K. A. (1989). Aggressive cue prominence and gender
participation in MTV. Journalism Quarterly, 66, 148–154, 229.
Kendall, P. C., Silk, J. S., & Chu, B. C. (2000). Introducing your research report:
Writing the introduction. In R. J. Sternberg (Ed.), Guide to publishing in
psychology journals (pp. 41–57). New York, NY: Cambridge University Press.
Kerlinger, F. N. (1979). Behavioral research: A conceptual approach. New York,
NY: Holt, Rinehart, & Winston.
Keyes, R. (2003). The writer’s book of hope. New York, NY: Holt.
Kreider, R. M. (2005). Number, timing, and duration of marriages and divorces:
2001 (Household Economics Studies). Washington, DC: U.S. Bureau of the
Census. Retrieved from https://www.census.gov/prod/2005pubs/p70-97.pdf
Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd
ed.). Thousand Oaks, CA: Sage.
Lay, C. (1986). At last, my research article on procrastination. Journal of Research
in Personality, 20, 474–495.
Lee, L., Frederick, S., & Ariely, D. (2006). Try it, you’ll like it: The influence of
expectation, consumption, and revelation on preferences for beer. Psychological
Science, 17, 1054–1058.
Lewis, M. (2004). Moneyball: The art of winning an unfair game. New York, NY:
W. W. Norton & Co.
Lindberg, R. (2015, July). Does online course enhancement contribute to learning
and social connectivity? (Unpublished master’s thesis). Central Connecticut
State University, New Britain, CT.
Lopez, S. J. (2013). Making hope happen. New York, NY: Atria.
Lounsbury, J. W., Fisher, L. A., Levy, J. J., & Welsh, D. P. (2009). An investigation
of character strengths in relation to the academic success of college students.
Individual Differences Research, 7, 52–69.
Martin, S. P. (n.d.). Growing evidence for a “divorce divide”? Education and marital
dissolution rates in the U.S. since the 1970s (Russell Sage Foundation Working
Paper Series). Retrieved from http://www.russellsage.org/research/reports/steve-
martin
Metcalfe, J., & Wiebe, D. (1987). Intuition in insight and noninsight problem
solving. Memory & Cognition, 15, 238–246. doi:10.3758/BF03197722
Modern Language Association. (2008). MLA style manual and guide to
scholarly publishing (3rd ed.). New York, NY: Modern Language Association of
America.
Strunk W., Jr., & White, E. B. (1999). The elements of style (4th ed.). Boston, MA:
Longman.
Thurman, S. (2003). The only grammar book you’ll ever need: A one-stop source for
every writing assignment. Avon, MA: Adams Media.
Tough, P. (2012). How children succeed: Grit, curiosity, and the hidden power of
character. New York, NY: Mariner Books.
VandenBos, G. R. (Ed.). (2015). APA dictionary of psychology (2nd ed.). Washington,
DC: American Psychological Association.
van Baaren, R. B., Holland, R. W., Kawakami, K., & van Knippenberg, A. (2004).
Mimicry and prosocial behaviour. Psychological Science, 15, 71–74.
Villanti, A. (2016, March). Positive attitudes toward interracially and internationally
adopted children. Poster presented at the Eastern Psychological Association
Annual Conference, New York, NY.
Vincent, R. C., Davis, D. K., & Boruszkowski, L. A. (1987). Sexism on MTV: The
portrayal of women in rock videos. Journalism Quarterly, 64, 750–755, 941.
Vygotsky, L. (1962). Thought and language. Cambridge, MA: MIT Press.
Weber, R. P. (1990). Basic content analysis (2nd ed.). Newbury Park, CA: Sage.
Witek, M. A. G., Clarke, E. F., Wallentin, M., Kringelbach, M. L., & Vuust, P.
(2014). Syncopation, body movement, and pleasure in groove music. PLoS ONE,
9, e94446. doi:10.1371/journal.pone.0094446
Zinsser, W. (1988). Writing to learn. New York, NY: Harper.
Zinsser, W. (2006). On writing well: The classic guide to nonfiction. New York, NY:
Harper.
Zullow, H. M. (1991). Pessimistic rumination in popular songs and newsmagazines
predict economic recession via decreased consumer optimism and spending.
Journal of Economic Psychology, 12, 501–526. doi:10.1016/0167-4870(91)90029-S
Zullow, H. M., Oettingen, G., Peterson, C., & Seligman, M. E. P. (1988).
Pessimistic explanatory style in the historical record: CAVing LBJ, presidential
candidates, and East versus West Berlin. American Psychologist, 43, 673–682.
doi:10.1037/0003-066X.43.9.673
ABOUT THE AUTHOR