Advances in Experimental Moral Psychology
Advances in Experimental Philosophy

Series Editor:
James R. Beebe, Associate Professor of Philosophy, University at Buffalo, USA

Editorial Board:
Joshua Knobe, Yale University, USA
Edouard Machery, University of Pittsburgh, USA
Thomas Nadelhoffer, College of Charleston, USA
Eddy Nahmias, Neuroscience Institute at Georgia State University, USA
Jennifer Nagel, University of Toronto, Canada
Joshua Alexander, Siena College, USA

Experimental philosophy is generating tremendous excitement, producing
unexpected results that are challenging traditional philosophical methods.
Advances in Experimental Philosophy responds to this trend, bringing
together some of the most exciting voices in the field to understand
the approach and measure its impact in contemporary philosophy.
The result is a series that captures past and present developments and
anticipates future research directions.

To provide in-depth examinations, each volume links experimental
philosophy to a key philosophical area. They provide historical overviews
alongside case studies, reviews of current problems, and discussions
of new directions. For upper-level undergraduates, postgraduates, and
professionals actively pursuing research in experimental philosophy,
these are essential resources.

New titles in the series include:


Advances in Experimental Epistemology, edited by James R. Beebe
Advances in Experimental Philosophy of Mind, edited by Justin Sytsma
Advances in Experimental Moral Psychology

Edited by
Hagop Sarkissian and Jennifer Cole Wright

Series: Advances in Experimental Philosophy

LONDON • NEW DELHI • NEW YORK • SYDNEY


Bloomsbury Academic
An imprint of Bloomsbury Publishing Plc

50 Bedford Square, London, WC1B 3DP, UK
1385 Broadway, New York, NY 10018, USA

www.bloomsbury.com

Bloomsbury is a registered trade mark of Bloomsbury Publishing Plc

First published 2014

© Hagop Sarkissian and Jennifer Cole Wright and Contributors, 2014

Hagop Sarkissian and Jennifer Cole Wright have asserted their right under the Copyright,
Designs and Patents Act, 1988, to be identified as the Editors of this work.

All rights reserved. No part of this publication may be reproduced or transmitted
in any form or by any means, electronic or mechanical, including photocopying,
recording, or any information storage or retrieval system, without prior
permission in writing from the publishers.

No responsibility for loss caused to any individual or organization acting on or
refraining from action as a result of the material in this publication can
be accepted by Bloomsbury or the author.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: HB: 978-1-4725-0938-3
ePDF: 978-1-4725-1304-5
ePub: 978-1-4725-0785-3

Library of Congress Cataloging-in-Publication Data
Advances in experimental moral psychology/edited by Hagop Sarkissian and
Jennifer Cole Wright.
pages cm. (Advances in experimental philosophy)
Includes bibliographical references and index.
ISBN 978-1-4725-0938-3 (hardback) ISBN 978-1-4725-0785-3 (epub)
ISBN 978-1-4725-1304-5 (epdf) 1. Ethics–Research. I. Sarkissian, Hagop (Professor),
editor of compilation.
BJ66.A38 2014
170–dc23
2013039795

Typeset by Deanta Global Publishing Services, Chennai, India


Table of Contents

Notes on Contributors vii

Experimental Moral Psychology: An Introduction
Hagop Sarkissian and Jennifer Cole Wright 1

Part 1 Moral Persons

1 The Character in Competence Piercarlo Valdesolo 21

2 Spoken Words Reveal Selfish Motives: An Individual
Difference Approach to Moral Motivation Jeremy A. Frimer
and Harrison Oakes 36

3 Is the Glass of Kindness Half Full or Half Empty?
Positive and Negative Reactions to Others' Expressions
of Virtue Gabriela Pavarini and Simone Schnall 55

4 What are the Bearers of Virtues? Mark Alfano 73

5 The Moral Behavior of Ethicists and the
Power of Reason Joshua Rust and Eric Schwitzgebel 91

Part 2 Moral Groundings

6 Pollution and Purity in Moral and Political Judgment
Yoel Inbar and David Pizarro 111

7 Selective Debunking Arguments, Folk Psychology,
and Empirical Moral Psychology Daniel Kelly 130

8 The Psychological Foundations of Moral Conviction
Linda J. Skitka 148

9 How Different Kinds of Disagreement Impact Folk
Metaethical Judgments James R. Beebe 167

10 Exploring Metaethical Commitments: Moral Objectivity and
Moral Progress Kevin Uttich, George Tsai, and Tania Lombrozo 188

11 Agent Versus Appraiser Moral Relativism: An Exploratory
Study Katinka J. P. Quintelier, Delphine De Smet, and
Daniel M. T. Fessler 209

Part 3 Measuring Morality

12 Know Thy Participant: The Trouble with Nomothetic
Assumptions in Moral Psychology Peter Meindl and
Jesse Graham 233

Index 253
Notes on Contributors

Editors

Hagop Sarkissian's research is located at the intersection of cognitive
science, ethics, and classical Chinese philosophy. He draws insights from
the cognitive and behavioral sciences to explore topics in moral psychology,
moral agency, and the status of morality, with an eye toward seeing how
culture might shape cognition in these domains. In addition to drawing from
the empirical sciences, he also uses the tools of experimental psychology in
some of his research. He has authored or coauthored papers in these areas
for Philosophical Studies, Philosophers' Imprint, Mind & Language, The
Annual Review of Psychology, Philosophy Compass, Review of Philosophy and
Psychology, History of Philosophy Quarterly, The Journal of Chinese Philosophy,
and for Moral Psychology, Volume I: The Evolution of Morality: Adaptations
and Innateness (MIT, 2007). His work has been translated into Chinese and
Korean. He teaches at The City University of New York, Baruch College, and
resides in Brooklyn.

Jennifer Cole Wright is an assistant professor of psychology and an affiliate
member of philosophy at the College of Charleston. Her area of research is
moral development and moral psychology more generally. Specifically, she
studies metaethical pluralism, the influence of individual and social liberal vs
conservative mindsets on moral judgments, and young children's early moral
development. She has published papers on these and other topics in Cognition,
Mind & Language, Journal of British Developmental Psychology, Journal of
Experimental Social Psychology, Oxford Studies in Experimental Philosophy,
Journal of Moral Education, Philosophical Psychology, Journal of Cognition and
Culture, Personality and Individual Differences, Social Development, Personality
& Social Psychology Bulletin, and Merrill-Palmer Quarterly.

Contributors

Mark Alfano is an assistant professor of philosophy at the University of
Oregon. In 2011, he received his doctorate from the Philosophy Program of
the City University of New York Graduate Center. He has been a postdoctoral
fellow at the Notre Dame Institute for Advanced Study and the Princeton
University Center for Human Values. Alfano works on moral psychology,
broadly construed to include ethics, epistemology, philosophy of mind, and
philosophy of psychology. He also maintains an interest in Nietzsche, focusing
on Nietzsche's psychological views. Alfano has authored papers for such
venues as the Philosophical Quarterly, The Monist, Erkenntnis, Synthese, and
the British Journal for the History of Philosophy. His first book, Character as
Moral Fiction (Cambridge University Press, 2013), argues that the situationist
challenge to virtue ethics spearheaded by John Doris and Gilbert Harman
should be co-opted, not resisted. He is currently writing an introduction to
moral psychology and a research monograph on Nietzsche, as well as editing
three volumes on virtue ethics and virtue epistemology.

James R. Beebe is an associate professor of philosophy and a member of the
Center for Cognitive Science at the University at Buffalo. He has published
papers on reliabilism, skepticism, and the a priori in mainstream epistemology
and is actively engaged in the empirical study of folk epistemic intuitions. He
also has a keen interest in the psychology of moral, political, and religious
beliefs.

Jeremy A. Frimer is an assistant professor of psychology at the University
of Winnipeg in Winnipeg, Canada. He completed a PhD at the University
of British Columbia in 2011. His research investigates how the words that
individuals tend to speak reveal information about their personality and
lead to congruent behaviors. Harrison Oakes is in the final year of a BAH in
Psychology at the University of Winnipeg.

Yoel Inbar is an assistant professor of social psychology at Tilburg University
in the Netherlands. His research focuses on the interplay between two general
mental processes that influence judgment, namely: rational, deliberate analysis
and intuitive, emotional reactions. Yoel has applied his research to study how
intuition affects our choices, how our moral beliefs determine our actions and
judgment of others, and how the emotion of disgust can predict our moral and
political attitudes.

David Pizarro is currently an associate professor in the Department of
Psychology at Cornell University in Ithaca, NY. His primary research interest is
in how and why humans make moral judgments, such as what makes us think
certain actions are wrong, and that some people deserve blame. In addition, he
studies how emotions influence a wide variety of judgments. These two areas
of interest come together in the topic of much of his recent work, which has
focused on the emotion of disgust and the role it plays in shaping moral, social,
and political judgments.

Daniel Kelly is an associate professor of philosophy at Purdue University.
His research interests are at the intersection of the philosophy of mind and
cognitive science. He has published papers on moral judgment, social norms,
implicit bias and responsibility, racial cognition, and social construction, and
is the author of Yuck! The Nature and Moral Significance of Disgust.

Peter Meindl is a doctoral student in the social psychology department at the
University of Southern California. Prior to matriculating at USC, Pete received
a BA and an MA in psychology from Stanford University and Wake Forest
University, respectively. He primarily conducts research pertaining to moral
cognition and morally relevant behavior, but his research also often touches
on issues related to self-regulation.

Jesse Graham received his PhD (psychology) from the University of Virginia
in 2010, a Master's (Religious Studies) from Harvard University in 2002, and a
Bachelor's (Psychology) from the University of Chicago in 1998. He is currently
assistant professor of psychology at the University of Southern California, where
he hovers menacingly over the Values, Ideology, and Morality Lab. His research
interests are in moral judgment, ideology, and implicit social cognition.

Gabriela Pavarini is a PhD candidate in psychology at the Embodied
Cognition and Emotion Laboratory at the University of Cambridge. Her
research interests focus on the psychological mechanisms underlying prosocial
behavior and the formation of interpersonal bonds. Before moving to
Cambridge, she received her BA in Psychology from the Federal University of
São Carlos, Brazil. She then completed her MPhil in Social and Developmental
Psychology at Cambridge, specializing in emotional reactions to expressions of
virtue. Currently, she is interested in the social functions of physiological and
behavioral synchrony, as well as other-focused emotions such as admiration
and compassion.

Simone Schnall is a senior lecturer of psychology at the University of
Cambridge and the director of the Cambridge Embodied Cognition and
Emotion Laboratory. She studies the relationship between cognitive and
affective processes. In particular, she is interested in how embodiment
informs and constrains thought and feeling. Recent topics have included the
effect of emotion on morality and the role of physical ability in perceptual
judgments. Her work has been funded by numerous grants from public and
private sources, including the Economic and Social Research Council (UK),
the National Science Foundation (USA), and the National Institute of Mental
Health (USA). Schnall's research is routinely covered in the popular media
such as the New York Times, The Economist, Newsweek, New Scientist and
Psychology Today.

Katinka J. P. Quintelier received her PhD in philosophy from Ghent University.
She was a postdoctoral research fellow at the Konrad Lorenz Institute for
Evolution and Cognition Research in Altenberg, Austria, at the time this
chapter was written, and she is now a researcher at the University of Amsterdam.
Her research interests are reputation, individual differences in morality, and
empirically informed normative ethics.

Daniel M. T. Fessler is an associate professor of anthropology and director of
the Center for Behavior, Evolution, and Culture at the University of California,
Los Angeles. Combining experiments, ethnography, and published data, he
explores the determinants of behavior, experience, and health in domains such
as emotions, disease avoidance, aggression, cooperation, morality, food and
eating, sex and reproduction, and risk taking.

Delphine De Smet obtained her master's in moral sciences at Ghent University,
where she is now a PhD student working on disgust and moral emotions. She does
research on the cultural and biological origins of the incest taboo and is
currently working on disgust and fascination related to blood.

Joshua Rust is associate professor of philosophy at Stetson University, Florida.
In addition to a number of books and papers focused on the writings of
John Searle, he has coauthored (with Eric Schwitzgebel) a number of papers
concerning the moral behavior of ethicists.

Eric Schwitzgebel is professor of philosophy at UC Riverside, working at the
intersection of philosophy of mind and empirical psychology – especially on
issues of self-knowledge, consciousness, moral psychology, attitudes such as
belief, and the role of intuition or common sense in philosophical method.
Among his works are Perplexities of Consciousness (MIT Press, 2011),
Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt,
MIT Press, 2007), "The unreliability of naïve introspection" (Philosophical
Review 2008), "The moral behavior of ethics professors: Relationships among
self-reported behavior, expressed normative attitude, and directly observed
behavior" (with Joshua Rust, Philosophical Psychology in press), "Acting
contrary to our professed beliefs" (Pacific Philosophical Quarterly 2010), and
"A dispositional approach to attitudes: Thinking outside the belief box" (in
Nottelmann, ed., New Essays on Belief 2013). He also blogs at The Splintered
Mind: http://schwitzsplinters.blogspot.com.

Linda J. Skitka is a professor and associate head of psychology at the University
of Illinois at Chicago. Her research interests bridge social, political, and moral
psychology. She has over 100 academic publications on topics such as the
psychology of moral conviction, the cognitive and motivational underpinnings
of ideological disagreements, and the psycho-political consequences of
terrorism. Her research on these topics has received grant support from the
National Science Foundation and NASA-Ames. She serves on multiple editorial
boards as a consulting editor, is currently an associate editor for the Journal of
Personality and Social Psychology, and is a former president of the International
Society for Justice Research.

Tania Lombrozo is an associate professor of psychology at the University
of California, Berkeley, an affiliate of the Department of Philosophy, and a
member of the Institute for Cognitive and Brain Sciences. She received
her PhD in psychology from Harvard University in 2006 after receiving a
BS in symbolic systems and a BA in philosophy from Stanford University.
Dr Lombrozo's research addresses foundational questions about cognition
using the empirical tools of cognitive psychology and the conceptual tools of
analytic philosophy, focusing on explanation and understanding, conceptual
representation, categorization, social cognition, and causal reasoning. She
is the recipient of numerous awards, including the Stanton Prize from the
Society for Philosophy and Psychology, a Spence Award from the Association
for Psychological Science, a CAREER award from the National Science
Foundation, and a McDonnell Foundation Scholar Award. She blogs about
psychology, philosophy, and cognitive science for NPR's 13.7: Cosmos &
Culture.

Kevin Uttich received his PhD in psychology from the University of California-
Berkeley in 2012 after receiving a BA in psychology from the University of
Chicago. Dr Uttich's research examines issues at the intersection of social
cognition and moral psychology with a particular focus on how people
understand prescriptive norms. His work has received the poster prize from
the Society for Philosophy and Psychology and has been recognized by the
publication Science as an Editors' Choice.

Piercarlo Valdesolo is an assistant professor of psychology and director of the
Moral Emotions and Trust lab at Claremont McKenna College in Claremont,
CA. He teaches courses in social and moral psychology and his research
explores the psychological bases of trust, cooperation, and character. He is
coauthor of the book Out of Character and he is a member of the Editorial
Board of the journals Emotion, Journal of Experimental Social Psychology and
Journal of Personality and Social Psychology. He is also a regular contributor to
Scientific American and Psychology Today.
Experimental Moral Psychology:
An Introduction
Hagop Sarkissian and Jennifer Cole Wright

This volume is titled Advances in Experimental Moral Psychology and it is part
of a series addressing recent advances in the field of experimental philosophy
more generally. Thus, it behooves us to say at least something about both moral
psychology and its relationship to experimental philosophy.
Although moral psychology could certainly be seen as a subdiscipline
within experimental philosophy, it would be wrong to consider it its child – if
anything, we might consider it its inspiration. After all, moral psychology,
which we will define loosely as the interdisciplinary investigation of how
human beings function in moral contexts, has been around for a long
time. Though initially a parallel endeavor taken on by philosophers on the
one hand, and psychologists on the other, these two research agendas have
increasingly become one, as scholars from both disciplines have begun to not
only seek insight and support from the other's theoretical and methodological
contributions, but also to work together as interdisciplinary teams with
common goals.
Contemporary moral psychology keeps its investigative eye equally on
two questions. The first question is: How do people actually (as opposed to
theoretically) function in moral contexts? This question breaks down into a
whole host of additional questions, such as: How does our awareness of morality
develop (what is innate vs. learned, and what does the learning process look
like)? What does this moral awareness entail and of what is it composed (e.g.,
perception, emotions, automatic vs. effortful cognition)? What produces
morally good behavior? What are the main differences between morally good
and bad people – and to what extent do those differences involve something
psychologically stable, such as character or moral identity? And so on.

The second question is: In what ways are the answers to the first set of questions
philosophically interesting? Do they inform philosophical theory – ensuring
that our theoretical conceptions of morality properly line up with the empirical
data? Do they help us adjudicate between competing philosophical views? Do
they raise problems for long-standing philosophical commitments? Of course,
this second question presupposes that what we learn about actual moral
functioning is meaningful to philosophical theorizing, something that is not
universally accepted but, as evidenced by increasing interdisciplinary activity,
is nonetheless maintained by a substantial group of researchers. Examples of
the fruitfulness of this mindset can be found not only in this volume, but also in
other interdisciplinary volumes on moral psychology published in recent years,
such as those edited by Walter Sinnott-Armstrong (2007), Darcia Narvaez and
Daniel K. Lapsley (2009), Thomas Nadelhoffer et al. (2010), and John Doris and
the Moral Psychology Research Group (2010). And while philosophers have
much to gain through their collaborations with psychologists and their use
of empirical data to inform their projects, psychologists' empirical endeavors
likewise can only benefit from an increased awareness of and appreciation for
the theoretical models arising out of philosophical work on moral normativity,
moral epistemology, and metaethics. The rich history of philosophical reflection
on these questions can serve as grounds to generate new hypotheses for testing
and new avenues for research.
Given this interdisciplinary interaction, we see moral psychology as a sort
of role model for the more recent developments in what people are calling
"experimental philosophy" – which is, broadly speaking, the use of empirical
and experimental methods to investigate philosophical questions. The early
(turn of this century) focus on using empirical methods to largely challenge
or undermine philosophical theories (e.g., the situationist critique of virtue
ethics) and philosophical methodology (e.g., the role of intuitions in theory
formation) has been labeled the "negative program." However, philosophers
(and others) have increasingly begun to shift the focus of experimental
philosophy to other, more constructive endeavors (see Sytsma and Livengood
in press, for a discussion of what they call the developing "positive,"
"naturalist," and "pragmatist" programs). These endeavors embrace the
exploration of people's intuitions, judgments, and cognitive processes more
broadly in order to clarify what they are, when they count as philosophical
evidence, and also more generally what they reveal about human cognition,
language, and behavior. It is here that we think moral psychology has much to
offer as a role model.

Moral persons

In the early stages of conceiving this volume, when we were considering
which sorts of contributions would best represent the field's most exciting
current developments, we decided to cast a wide net – one that would include
not only people already identified with the experimental philosophy family
(as evidenced, for example, through participation in panels, conferences,
and volumes bearing that designation), but also other philosophers and
psychologists whose research makes a meaningful contribution to the field.
This means that our invitations went out to people engaged in a wide variety
of different projects, exploring several enduring questions concerning moral
psychology. In what follows, we will attempt to highlight some of the ways
these researchers' work relates to and informs one another along a number of
dimensions.
For example, a long-standing question that has preoccupied philosophers
and psychologists alike concerns the nature of moral persons. What are
morally good people like (and is there a difference between what we think they
are like and what they are really like)? How do they become moral? Piercarlo
Valdesolo observes in his chapter that we are sensitive to moral qualities in
people, and that judging or appraising individuals along moral dimensions is a
natural, inescapable part of life. We find individuals fair or cruel, kind or cold,
affable or scheming. What's more, it seems clear that we favor certain types
of individuals over others. We tend to seek company with those we find fair,
kind, and giving, and impart unto them moral value. Yet, this tendency may,
according to Valdesolo, be biased.
According to much of the research on person perception, we commonly
evaluate people along two basic dimensions: (a) their social skills or warmth
(e.g., the degree to which people are sincere, compassionate, friendly,
trustworthy, caring, helpful, etc.) and (b) their intellectual skills or competence
(e.g., the degree to which people are determined, intelligent, creative, focused,
hardworking, etc. – the dimension of competence). Valdesolo argues that our
assessment of what morally good people are like tends to lean heavily on the
dimension of warmth, while discounting or ignoring entirely the dimension
of competence.
Why is this? It might be that our primary interest in evaluating someones
moral status is determining his or her intentions toward us, and the extent to
which we might benefit from them (warmth), and not necessarily the extent
to which they are able to carry out and benefit from their own intentions
(competence). It seems to be in our self-interest to seek company with those
who are warm and kind, as we might benefit from their values, dispositions,
and behavior, and we might take such qualities to be definitive of virtue and
moral worth. But, as Valdesolo points out, this tendency to value warmth,
kindness, and generosity is likely to miss important features that constitute
truly moral individuals. Achieving morally valuable ends or bringing about
morally desirable outcomes requires more than personality traits of warmth
and generosity. Oftentimes, and in addition to these, individuals will require
a range of other competencies in order to succeed in their moral projects.
Traits such as grit, discipline, and industriousness are thus also important and
should be equally valued – yet they aren't. In the end, Valdesolo suggests a
revision to our evolved tendency to favor warmth over competence; we ought
to value both.
Valdesolo's argument finds support in the fact that many theoretical models
of virtue regard both dimensions as important aspects of the morally good
person. Along these lines, Jeremy Frimer and Harrison Oakes's contribution
may provide additional insight into why we are inclined toward the asymmetry
(problematic though it might seem) as well as why, under some circumstances,
it may be justified. According to Frimer and Oakes, people typically claim
to have both "agentic" (self-promoting, competence-based) values and
"community" (other-promoting, warmth-based) values. The only problem –
and it is a problem specifically from the perspective of moral development – is
that people spend way more time pursuing their agentic values as opposed
to their community values, and often view the two as being at odds with one
another.
This is familiar enough. Most people readily see how the two values can
come into conflict, and it would not be unusual to hear someone say "I can't
both do what's best for me and help other people – I have to choose." (Indeed,
much ink has been spilled in the history of moral philosophy trying to defend
the view that it actually is in our self-interest to be moral, which might be seen
as an attempt to accommodate these competing tendencies.) Thus, it is not
surprising that we tend to take people's expression of community values (or
warmth) as indicators of their moral goodness, while taking any expression
of agentic values (or competence) as indicators of their self-interested
motives.
Frimer and Oakes argue, however, that these two value systems do not
necessarily have to be in conflict. Indeed, for some individuals – namely,
recognized moral exemplars – they are not. Frimer and Oakes's research
provides evidence that the chasm between people's agentic and community
values decreases along the developmental trajectory until, at the limit, it may
disappear entirely. Morally good people have learned to synthesize their agentic
and community values such that they are able to pursue both simultaneously;
promoting their own well-being becomes effectively linked with the well-being
of others. And as their intellectual and social skills become oriented toward and
focused on the same ends, they also become more effective moral agents – a
finding consonant with Valdesolos point that warmth and competence are
both important for moral goodness.
In ancient Greece, Aristotle described the virtuous person as someone
who elicits others' admiration and respect, and a notable feature of classical
Confucian virtue ethics is its emphasis on the magnetic qualities of capable
moral leaders – their ability to gain the assent and loyalty of others in an
effortless fashion. It is natural to assume that virtuous conduct is something
that is universally recognized and esteemed, especially since there never seems
to be enough of it going around. However, according to Gabriela Pavarini and
Simone Schnall, individuals exemplifying moral virtue are sometimes denied
feelings of approbation and approval. Indeed, they can instead be subjected to
ridicule and censure, or otherwise disparaged.
Pavarini and Schnall highlight the paradoxical nature of peoples reactions to
displays of genuine virtue by others. As they point out, morally good behavior
rarely occurs in a social vacuum, and being the witness to another's morally
good deeds has implications beyond the immediate act itself. On the one hand,
witnessing displays of virtue can lead to a sense of elevation, and a desire to
praise, reward, and cooperate or associate with those who act virtuously.
Morally good people elicit elevation, respect, and gratitude. Through their
actions they can raise the standards of an entire group, spurring individuals
to greater levels of prosociality, and reminding them of the resources that
lie untapped within them that could be marshaled toward improving their
communities and making the world a better place. Moral exemplars can renew
in others a belief that they can shape the world around them in positive ways.
Yet, on the other hand, such elevation of moral standards can also be
perceived as a threat to one's integrity and sense of self-worth, and to one's
standing among one's peers; after all, one might be seen by others as falling
short of expectations when someone else has just shown that morally excellent
behavior is within reach. When one is threatened in this way, the reaction
may be to denigrate the moral exemplar, or to explain away her good deeds as
products of self-interest, situational demands, or other extraneous factors. In
short, moral goodness can engender envy, hostility, and suspicion just as easily
as it can inspire awe, gratitude, and respect. At its extreme, the motivation
to avoid comparison with the moral exemplar and/or save face can lead to
hatred and a desire to belittle, demean, and even destroy the source of this
reputational threat.
Pavarini and Schnall observe (as Valdesolo does) that qualities of warmth
and kindness make us feel safe, protected, uplifted, and grateful, while qualities
of competence or resourcefulness make us feel challenged, inadequate, and
threatened. We feel united with those who display warmth – and perceive
more warmth in those with whom we are united – and feel in competition
with those who display competence. Given Frimer and Oakes's suggestion
that moral exemplars come to exemplify traits related to both warmth and
competence, it would make sense that people could react either positively or
negatively to their example depending upon their relationship to the exemplar
and which of her traits (her warmth or competence) is most salient to them at
the time.
However we respond to and evaluate morally good people, we commonly
think of the virtues they display as residing within them, constituting part of
their identities. Indeed, philosophers have long emphasized that laudable traits
of character are important features of moral persons. Mark Alfanos chapter
raises a skeptical worry about whether there is such a thing as a good person
separate from the social and asocial environments in which people display
good behavior. Rather than argue (as many have) that moral agents' goodness
is keyed to their possession of stable internal character traits that manifest
in displays of virtuous behavior across diverse contexts, Alfano argues that
we need to rethink the nature of character itself. Specifically, we need to stop
thinking about it as a disposition internal to the moral agent and start thinking
of it as a trifold, relational property, composed of the interactions between
(a) a persons internal states/capacities, (b) the social environment in which
the person is imbedded, and (c) a variety of asocial features of the physical
environment (e.g., noise levels, smells, lighting) that impact the person's
thoughts, feelings, and behavior in specific ways.
The upside to this view is that what constitutes a morally good person is not
just the internal states/capacities that she possesses, but also the sort of social
and asocial environment she finds herself in and/or has taken part in creating.
This means that when it comes to moral development, as much (if not more) of
the burden falls on the world around developing moral agents as it does on the
agents themselves, for the environment figures into the very structure of moral
character they possess. Individuals themselves are not bearers of virtue. People
with the right sorts of internal states/capacities represent just one leg in the
trifold relation; if stuck within a morally corrupt social environment, and/or
an asocial environment filled with hardship, danger, or distractions, virtue will
be incomplete. This highlights an interesting link between Frimer and Oakes's
and Alfano's contributions – namely, that as people's values and motives become
more integrated and synchronized, so do their social/asocial environments.
This may be because the synchronization of their values results in the active
selection and creation of social and asocial environments that promote and
protect those values – environments that positively reflect the rewards and
benefits of their chosen lifestyle – allowing for increased dispositional stability
and expression. (Of course, exemplars must often construct and integrate such
environments where they do not previously exist.)
Finally, if something along these lines is correct (viz., that virtue consists
in features extrinsic to individuals to some considerable extent), then it
suggests that certain kinds of environments – namely, those in which you
are socially rewarded for thinking hard about morality, but not necessarily
behaving morally – are not going to be enough to generate virtue. And this
might help us to explain the rather dispiriting findings reported in Joshua
Rust and Eric Schwitzgebel's chapter that those of us who think and reflect
deeply about the nature of morality (such as all of you reading this book right
now) are not on average morally better behaved. We might hold out hope that
our academic interest in morality and ethics will pay off in noticeable and
measurable differences in our moral behavior, that having dedicated our lives
to reflecting on all matters moral we should have generated a healthy level of
moral goodness. It turns out that all our efforts may be for naught. Or so say
the studies conducted by Rust and Schwitzgebel, which measure professional
ethicists, moral philosophers, and other academics along a number of moral
dimensions. Whether rated by their peers or observed at conferences, whether
measured by their charitable donations, voting behavior, responsiveness to
student needs, or other dimensions, Rust and Schwitzgebel have repeatedly
found that professional ethicists and moral philosophers are no better than
their colleagues in other academic fields, despite reflecting deeply about moral
reasoning and the structure of morality, despite grappling with the greatest
figures in the history of moral thought, and despite expressing more stringent
moral attitudes. (For example, while ethicists were much more likely than
non-ethicists to rate meat-eating as morally bad, the evidence suggested no
correspondingly significant difference in how often they ate meat). This is
in line with Frimer and Oakes's finding that most people – ethicists or not –
report the personal importance of prosocial, community-oriented values to be
much higher than their actual daily activities reflect; they say such values are
highly important to them yet spend far less time actually engaging in related
activities. Instead, people tend to do that which promotes their own self-
interested, agentic values.
What conclusion should we draw? Rust and Schwitzgebel consider five
alternative explanations – including Jonathan Haidt's "rational tail" view in
which most of our reasoning is employed post hoc to rationalize attitudes
we already possess, and the provocative view that philosophers of ethics are
making up for a natural deficiency in their moral intuitions (so, they actually
are better than they would have otherwise been) – and conclude that the data
do not champion one explanation over the others. In the end, regardless of the
explanatory model one favors, their research suggests that moral goodness
requires more than just theoretical reflection and critical thinking to develop.

Moral groundings

Thus far, the contributions we've canvassed have largely focused on the nature
of moral persons and moral exemplars – how they are constituted, how they
are motivated, and how others evaluate/respond to them. But we haven't said
much about the nature of morality itself. Here, several of the contributions
help to enrich our understanding of the psychological mechanisms that may
constitute part of moral life, as well as the standing that we take morality
to have.
A widely experienced facet of morality is its importance and weightiness
relative to other evaluative practices and domains. We might disagree with
others across a wide range of domains, including matters of convention and
aesthetics. However, disagreement about, say, standards of physical beauty
seldom seems as pressing or compelling to us as disagreements about basic
moral issues such as racial equality, reproductive rights, or distributive justice.
When it comes to these latter topics, we tend to think that we are arguing over
issues of central importance to human life – issues that must ultimately admit
of correct answers even in the face of entrenched disagreement. Similarly,
when we condemn certain acts as right or wrong, virtuous or depraved, it
seems as though these judgments have a certainty – an objectivity – that eludes
judgments concerning matters of convention or taste. Why is this so?
Evolutionary accounts have been featured in both the Valdesolo and
Pavarini and Schnall contributions above. For Valdesolo, the promise of
immediate gains and the need to properly understand others' dispositions and
intentions toward us may have fostered a favorable disposition toward those
who are kind and warm as opposed to persistent and focused. For Pavarini
and Schnall, our divergent reactions to moral exemplars can be understood as
facilitating very different evolutionary demands – to cooperate and build
cohesive communities on the one hand, and to maintain one's reputation and
status within the group on the other. Other contributions also link current
moral psychological tendencies to evolved capacities.
For example, Yoel Inbar and David Pizarro note the widespread role that
disgust reactions play in our moral lives, where moral transgressions evoke not
only moral condemnation but also a visceral feeling of repulsion. In particular,
they focus on research suggesting both that disgust can arise as a consequence
of making certain types of moral appraisals (e.g., when confronted with taboo
behavior or unfair treatment) and that disgust can amplify moral judgments
when it is elicited in the formation of a moral evaluation. Yet why should this
be so? Why should disgust be recruited in moral judgment at all? Inbar and
Pizarro argue that disgust is part of an evolved general motivational system
whose function is to distance us from potential threats – namely, disease-
bearing substances and individuals. Disgust was also co-opted by higher
order systems to serve as a warning mechanism against socially and morally
prohibited behaviors, as well as any potential contaminant or contagion.
Sexual acts, food taboos, physical abnormalities or deformities, individuals
from strange or foreign cultures – each of these triggers our core behavioral
immune system, which serves to create distance between the individual and
these potential sources of physical and moral threat. One upshot of this is that
moral transgressions often repulse us in ways that other sorts of transgressions
don't, eliciting from us a feeling of undeniable wrongness. But if Inbar and
Pizarro are correct, then it leads us inexorably to a question: Should we trust
our moral judgments when they involve disgust reactions? Or should we
instead recognize such responses as likely to be biased or erroneous, pushed
around by a mechanism that was shaped by forces not designed to reliably
track morally relevant considerations?
Daniel Kelly, in his contribution, argues for the latter claim. Disgust cannot,
according to Kelly, be treated as a reliable indicator of moral transgressions; it
is overly sensitive to cues related to its older and more primitive function of
protecting the individual against poisons and contaminants. What is more, it
is a system for which false positives are much more advantageous than false
negatives, so we are prone to find things disgusting even when nothing actually
disgust-worthy is present. And given the particularly phobic response that
disgust generates (the experience of nausea and/or the intense desire to
remove oneself from the presence of the triggering stimulus), Kelly worries
that disgust has the potential to cause more harm (particularly in the form of
outgroup denigration and rejection of physical/cultural abnormalities) than
good, even when it is on track.
The general form of this debunking story is familiar: psychological
mechanisms that are involved in moral judgment are sensitive to irrelevant
considerations and should therefore be viewed with suspicion. However,
while Kelly acknowledges that many of the psychological mechanisms that are
recruited in moral life will have an evolutionary past that may render them
suspect, he argues that each particular mechanism needs to be examined on
its own; there can be no straightforward debunking of the entirety of moral
psychology from the basic fact that many of the psychological mechanisms
underwriting it were shaped by forces and pressures whose chief function was
not to track moral truth.
As noted, feelings of disgust can strengthen one's moral judgments, rendering
them more severe or certain in character. However, this tendency is not limited
to judgments that have obvious connections with disgust. Put another way,
we need not experience disgust in particular to feel as though certain moral
transgressions are obviously – perhaps even objectively – wrong. Whether we
are reflecting on general moral principles, more specific moral rules, or even
judgments about particular cases, it is a familiar feature of moral cognition
to feel as though it is imbued with objectivity – that is, with a commitment to
moral questions having right and wrong answers independent of any given
person's or society's beliefs or practices.
In her contribution, Linda Skitka points out that our moral attitudes play an
important role in generating this phenomenology. Moral attitudes are stronger,
more enduring, and more predictive of a person's behavior than other attitudes
or preferences they might hold. Moral attitudes are distinguished by the fact
that they are highly resistant – even impervious – to other desires or concerns,
and have the force of imperatives for those who hold them. They have authority
independent of others' opinions or social conventions, and have particularly
strong ties to emotions. People experience moral attitudes, convictions, or
mandates as tracking objective features of the world that apply to all universally
rather than subjective facts about themselves. Such convictions are inherently
motivating and accompanied by strong affect.
Indeed, Skitka's contribution creates an interesting wrinkle in our thinking
about the morally good person. It is natural to think that morally good people
have strong moral convictions – that they are willing to stand on principle
and fight for what they believe is morally right. Yet, according to Skitka,
this represents a potential dark side to our moral psychology. Strong moral
convictions come with a price. Specifically, stronger moral convictions are often
accompanied by intolerance of different moral beliefs, values, and practices;
the stronger the conviction, the greater the intolerance. Indeed, strong moral
convictions – more so than any other strong attitudes – predict people's lack of
tolerance for different cultures, their unwillingness to interact with and help
people with moral beliefs different from their own, their propensity to engage
in vigilante justice against perceived wrongdoings, and their imperviousness
to clear social disagreement with their views. In sum, Skitka's contribution is a
cautionary tale for moral psychologists; while we tend to focus on the positive
aspects of moral development, there are pitfalls as well, including intolerance
of differences and insensitivity to the rich complexity of moral life. We must
keep in mind that morally good people walk a fine line between integrity and
conviction on the one hand and intolerance or dogmatism on the other.
Most of us have such strong moral experiences, supported by moral
attitudes that seem particularly compelling, unshakeable, and rooted in some
set of objective moral facts about the world around us. Nevertheless, we may
wonder whether there are such things as moral facts and, if so, whether they are
actually objective in ways suggested by our moral attitudes – that is, whether
they are independent of any person's or any group's beliefs, values, or ways
of life. Metaethicists have long sought to answer this question. Do moral
judgments refer to objective moral facts or do they merely express subjective
moral attitudes? Do participants in moral disputes argue over claims that can
aspire to truth, or are they merely trading opinions with no objective basis? In
pursuit of these questions, metaethicists have sought to capture the essence
of morality as reflected in ordinary practice – how we as moral creatures
experience morality in our daily lives. Recent experimental work has helped
to reveal the mechanisms that may underlie our ordinary commitments to
objectivism about morality, and how they might be related to other aspects of
our psychological lives.
James Beebe points out in his chapter that while people tend to attribute
more objectivity to moral issues than to other social or personal issues, their
beliefs concerning the objectivity of morality do not stand alone. Rather, they
are affected by a number of rather surprising factors. For example, people seem
to be sensitive to the perceived existence (or absence) of consensus concerning
moral issues in judging whether there is an objective truth underlying them.
Moreover, Beebe discovered that people tend to attribute more objectivity to
issues when they consider them concretely – for example, as being contested by
particular individuals (individuals with names and faces) – than when they
consider them abstractly – for example, as contested by nameless or faceless
individuals. Beebe suggests that the very ways in which philosophers normally
discuss metaethical issues – that is, abstractly and in rarefied fashion – may
impact how they and others think of them. Finally, people display more
objectivism when it comes to moral wrongs than moral rights – that is, they
might be more objectivist concerning the wrongness of racial discrimination
as opposed to the rightness of anonymous charitable donation. Individuals
thus display a range of attitudes and commitments concerning the status of
morality, belying treatments of ordinary moral practice that depict it as unified
in any interesting respect. Beebes contribution invites future researchers to
examine the nature of these commitments – and the things that influence
them – more closely.
Complementing these studies are others reported by Kevin Uttich, George
Tsai, and Tania Lombrozo. As opposed to Beebe, who looks at how various
external factors may affect folks' views concerning the status of morality,
Uttich et al. seek to better understand how beliefs in moral objectivity might
be related to other beliefs individuals may hold. Specifically, they explore
how belief in moral objectivism may be related to beliefs concerning moral
progress and beliefs concerning whether or not we live in a just world. One
believes in moral progress to the extent that one believes that the world is
trending toward moral improvement over time (even if such change is slow,
uneven, or not necessarily assured or certain), and one believes in a just world
to the extent that one believes that virtue is rewarded and vice punished –
that good things happen to good people and bad things to bad people. What
Uttich et al. find is that while all three of these beliefs were correlated with
one another, each of these beliefs nonetheless has some independent force or
role in peoples conceptions of morality. Hence, folk views about the standing
of morality might form part of a web of beliefs about how morality inheres
and functions in the world, including beliefs such as whether good deeds are
rewarded and bad deeds punished, or whether there is a tendency of moral
progress in history.
Both these contributions seek to understand folk views concerning the
nature of morality by looking at how folk understand the nature of moral
disagreement. This is in line with other recent studies in this area. Yet, Katinka
Quintelier, Delphine De Smet, and Daniel Fessler raise another possibility.
Suppose X commits an action – say, cheating on her income taxes – and two
people disagree as to whether this instance of cheating was moral or immoral:
one maintains that the action was morally permissible, whereas another
maintains that the action was morally wrong. If one thinks that at least one
of these two appraisers must be wrong, then one reflects a commitment to
moral objectivism. Yet are their perspectives, or the perspectives of any other
people judging X's action (a perspective referred to as appraiser relativism),
the only relevant ones to consider when deciding whether what X did was
wrong or right? Most existing studies seem to imply as much, as they focus
on disagreement among appraisers in order to measure the extent to which
folk may be committed to objectivity about morality. Yet it seems that there is
another perspective worth considering – namely, X's own. After all, it is easy to
imagine X judging her own act (cheating on her income taxes) to be either morally
wrong or morally permissible. Does X's assessment concerning the moral
status of her own action matter when considering its actual status? If one takes
X's assessment of her own action as relevant to judging the moral status of
the action itself, then one embraces an agent relativist stance toward morality.
Quintelier et al. find that people do indeed take the agent's own assessment
of her action as relevant to assessing the actual status of the action, as well as
the truth of statements about that action. They also found differences in the
degree to which people expressed moral relativism depending upon whether
they asked about agent relativism or appraiser relativism – a distinction most
research in this area fails to make. Thus, whether people are relativists depends,
in part, on whether they are asked agent or appraiser relative questions.

Measuring morality

Of course, since this is a volume on advances in the empirical study of moral
psychology, we would be remiss to not include some discussion about the
challenges researchers face when it comes to measuring morality. And in their
chapter, Peter Meindl and Jesse Graham do exactly this, raising an important
methodological concern about one standard approach to the study of moral
psychology. Most of the research conducted in this area has approached it
from a third-person (or normative) perspective, from which what counts as
moral (whether morally good or bad) is determined by the researcher. Such
an approach has obvious virtues, including defining morality in a way that is
internally consistent and in line with current theoretical models. Nevertheless,
this approach fails to take the perspective of those being studied – what
they themselves find to be morally relevant, good or bad – and so fails to fully
map the moral terrain. As just one example, personality researchers bemoaned
the inconsistency of people's behavior across different types of situations (types
of situations being defined by the researchers), until they thought to ask
people to identify for themselves the meaning they attributed to the situations
to which they were responding; once they did, they found a remarkable degree
of consistency across situations which, however externally different from the
researcher's or third-person perspective, were united by shared meaning from
the first-person perspective. Thus, what looked like inconsistency from the
outside ended up looking entirely coherent from the inside – an insight
that is very important when trying to determine whether people possess
situation-general traits, like virtues. Similarly, Meindl and Graham argue
that without taking the first-person perspective into account, research into
moral psychology will remain limited in its scope and application. Drawing
on a number of existing studies, they call for increased use of the first-person
perspective, and suggest ways in which the first- and third-person perspectives
might inform and complement one another.
While not their central focus or concern, several other chapters in this
volume also introduce or address methodological issues. For example, Frimer
and Oakes's contribution reflects a commitment to first-person measurement,
highlighting the advantages that can be obtained by using both self-report
inventories and projective measures to uncover an interesting gap in people's
moral psychology between what values they explicitly endorse and what values
they actually spend time pursuing. Skitka's chapter explores the difference
between moral conviction and other closely related – though importantly
distinct – constructs. For example, unlike other strongly held attitudes, moral
convictions uniquely predict a range of morally relevant behaviors, and unlike
moral judgments, moral convictions are stable and relatively impervious to
standard priming manipulations. This last finding should prompt researchers
to expand upon the range of moral cognition being measured. Similarly, one of
the challenges faced by Rust and Schwitzgebel is the question of what behaviors
to measure – that is, how to operationalize moral goodness (or, in this case, its
behavioral expression). They chose a wide variety of measures, everything from
the extent to which people display courtesy and engage in free-riding behavior
at conferences to peer evaluations and self-report measures of behavior. This
range of different kinds of measurements is useful because it allows for a sort
of triangulation on the subject of interest – in this case, the degree to which
ethics scholars' moral behavior differs (or not) from that of other academics. But,
their study raises the very important question of how best to operationalize
and measure moral cognition and behavior. Alfano's model encourages
researchers to look beyond the person herself to the entire context of moral
behavior (including social/asocial environments) when operationalizing and
measuring virtue, while Valdesolo's encourages researchers to expand on the
kinds of virtues (caring/other-oriented vs. competence/self-oriented) they
include in an assessment of peoples moral psychology. Finally, Quintelier
et al. raise yet another an important methodological consideration, arguing
that those researching the metaethical commitments of ordinary folk need to
be careful to specify (among other things) what type of relativism they are
investigating, or whose perspective is being taken into account when assessing
the objectivity of moral claims. This serves as just one important example
of how collaboration between philosophers and psychologists would aid in
the development of methodological approaches that are both scientifically
rigorous and appropriately sensitive to important philosophical distinctions.

Conclusion

The papers in this volume all speak to the vibrancy of research in moral
psychology by philosophers and psychologists alike. Enduring questions
concerning the nature of moral persons, the motivations to become moral,
how to measure morality, and even the status and grounding of morality itself
are each the focus of considerable research activity. This activity is driven both
by theoretical commitments and by a sensitivity to empirical data that might
shed light on the subject. We've highlighted the ways in which the research
included here informs (and in some cases problematizes) our understanding
of morality.

And while the contributions to this volume fall fairly evenly across the
disciplines of philosophy and psychology, we hope it will be apparent that, at
some level, these disciplinary categories cease to be of interest on their own.
The questions at the heart of this research program have long histories in both
disciplines, and the methodological boundaries between them have begun to blur. Philosophers
now use experimental methods, and experimental psychologists draw from
(and contribute to) philosophical theorizing. The field is expanding, and we
are delighted to mark some of its direction and vigor with this volume.

Part One

Moral Persons
1

The Character in Competence


Piercarlo Valdesolo*

Perseverance, grit, efficiency, focus, determination, discipline, industriousness,
fortitude, skill. According to existing models of person perception and most
experimental studies of moral cognition, these kinds of traits are typically
not considered to be relevant to evaluations of moral character (cf. Pizarro
and Tannenbaum 2011). Goodness and badness tend to be defined according
to whether or not an individual is likely to be hostile or threatening toward
the self. As such, traits and states associated with this perceived likelihood
(e.g., compassion, empathy, and trustworthiness) dominate the literature in
moral psychology. This flies in the face of a long tradition in virtue ethics
that identifies the competence-based qualities listed above as belonging to a
broader set of intellectual virtues, conceptually distinct but no less important
to character than moral virtues (Aristotle, fourth century BCE; Grube and
Reeve 1992). Of course, studies of person perception are simply interested
in describing the qualities that contribute to overall assessments of others,
and not at all concerned with philosophical accounts of what kinds of traits
ought to compose moral character. That said, it might strike some as odd that
such intellectual virtues are not, according to existing evidence, considered
by most perceivers to be morally relevant. And it might strike some as odd
that investigations into the processes by which we evaluate others along these
dimensions are relatively rare in moral cognition research.
This chapter intends to argue that such evaluations might, in fact, be
relevant to individuals' assessments of moral character and that their
categorization as amoral in person perception is misleading. Furthermore,
their absence from work on moral cognition represents a gap in the literature
that might be profitably filled in the future. I will begin by describing the most
well-supported model of person perception (Stereotype Content Model),
and how the core components of this model are thought to relate to moral
judgments. I will then argue that the underemphasis of these intellectual
virtues results from a tendency to see the moral virtues as other-oriented and
the intellectual virtues as self-oriented. This distinction, however, might be
dependent on a prioritization of short-term interests as compared to long-term
interests in orienting ourselves to others. Specifically, I will suggest ways
in which the behavior promoted by the intellectual virtues might be equally
important to other-oriented outcomes. As such, intellectual virtues might be
relevant to evaluations of moral character that consider long-term outcomes,
and their importance might not be reflected in individuals' first impressions
of interaction partners. In other words, the relevance of competence-based
traits to moral character may track the impact that those traits have on others'
well-being. Since ultimate human flourishing requires societies composed
of individuals who are both warm and competent, the moral relevance of
competence will increase as perspective shifts from the short term to the long
term. I will conclude with suggestions for how moral cognition could benefit
from the study of the processes underlying the intellectual virtues.

Person perception and moral character

Research in person perception has identified two broad dimensions of social


cognition that guide our global impressions of others, as well as our emotional
and behavioral responses to interaction partners: warmth and competence
(Fiske et al. 2007). Some form of these two dimensions can be traced back
throughout much of the literature in social cognition, though they have taken
on different labels depending on the particular theory. For example, Rosenberg
et al. (1968) instructed participants to sort 64 trait words into categories that
were likely to be found in another person. These participants generated two
orthogonal dimensions of person perception: intellectual good/bad (defined by
traits such as determined, industrious, skillful, intelligent), and social good/bad
(defined by traits such as warm, honest, helpful, sincere)two dimensions that
are conceptually similar to warmth and competence. Recent research in face
perception has also demonstrated the ease and speed with which participants
will judge trustworthiness and competence from short exposures to faces
(Todorov et al. 2009; Willis and Todorov 2006), again, two dimensions that
overlap significantly, if not completely, with warmth and competence.
These dimensions reflect basic and adaptive categories of evaluations:
the need to anticipate actors' intentions toward oneself (warmth) and the
need to anticipate an actor's ability to act on those intentions (competence).
In other words, these evaluations allow us to answer the questions "Does
the other intend help or harm?" and "Can the other carry out this intent?"
(Cuddy et al. 2008).
Though this distinction in the literature on person perception maps very
closely onto the two separate categories of virtues identified by philosophers
(intellectual vs. moral), theorists have drawn a sharp divide between the moral
relevance of these traits. Put simply, traits that communicate warmth are morally
relevant, while traits that communicate competence are not. Virtue ethicists,
on the other hand, see merit not only in character traits associated with what
most contemporary models of person perception identify as warmth, but
also in traits identifiable as relevant to competence (intelligence, fortitude,
perseverance, skill; cf. Dent 1975; Sherman 1989).
This moral distinction is evident throughout much of the literature in social
cognition and is presumed to be due to the self- versus other-focused nature
of the traits associated with each dimension. Traits related to warmth, such as
friendliness, honesty, and kindness, tend to motivate other-oriented behavior
(e.g., altruism), whereas traits associated with competence, such as efficacy,
perseverance, creativity, and intelligence, tend to motivate self-oriented
behavior (e.g., practicing a skill; Peeters 2001). Indeed, past theories that have
posited a similar kind of two-dimensional approach to person perception
have made the distinction more explicit by using the labels of "morality" and
"competence" to describe the two kinds of evaluations (Phalet and Poppe 1997).
Some work even conceptualizes these two domains as operating in tension
with one another, arguing that traits promoting other-oriented concerns
interfere with the development of traits promoting self-interest, and vice versa
(Schwartz 1992).
Why this asymmetry in moral relevance between self- versus other-
focused traits? One possible interpretation is that perceivers value other-
oriented traits because they are thought to be more likely to directly benefit
themselves. Warmth motivates others to care for us whereas behaviors
motivated by others' competence do not seem to directly impact our fortunes.
The kinds of appraisals that are thought to underlie judgments of warmth
and competence seem to corroborate such an interpretation. Specifically,
evaluations of competition and status have been found to predict the degree to
which individuals judge others to be competent and warm, respectively. I will
discuss these in turn.
Individuals and groups are competitive if they have goals that are perceived
to be incompatible with the goals of the perceiver. For example, competitive
others would desire to maximize their own resources at the expense of others'
ability to acquire resources. These assessments inform our judgments of others'
social intents, carving the social world up into those who intend to facilitate the
achievement of our own goals, and those who seem to have no such intention.
Perceptions of warmth follow directly from this evaluation.
Appraisals of competition track closely to group membership. Because
ingroup members tend not to compete with a perceiver for resources (though
this may vary depending upon the ingroup in question), they are judged to
be low in competitiveness and therefore trigger perceptions of warmth, while
outgroup members are judged to be higher in competitiveness and, therefore,
colder. Similar effects would be expected regardless of the dimensions along
which group membership is perceived. Perceived similarity to another is
dynamically evaluated along multiple dimensions of identity (Tversky 1977).
Any such perceived similarity should trigger appraisals of low competition and
high warmth. In sum, targets with whom we share identity, and consequently
from whom we can expect to benefit, are considered to be high in warmth.
These considerations, in turn, determine perceptions of moral character.
Evaluations of the status of individuals and groups inform judgments of
competence given the assumption that status is a reliable indicator of ability.
The degree to which an individual is capable of pursuing and achieving her
goals is presumed to be reflected in her place in society. As such, high-status
targets are judged to be more highly competent than low-status targets. These
evaluations are not considered to be morally relevant.
Taking the effect of status and competition on person perception together,
it seems that the moral relevance of a trait is largely defined by the degree to
which that trait motivates behavior that confers benefits on anyone other than
the actor; in other words, behavior that is more likely to bring about behavior
that profits the (perceiving) self.
The valence of the emotional responses to targets categorized along these
dimensions supports this view. The stereotype content model posits specific
sets of emotional responses triggered by the various combinations of these
two dimensions and, in thinking about their relevance to moral judgments, it
is instructive to examine the content of these emotions. The perception of both
warmth and competence in targets elicits primarily admiration from others
(Cuddy et al. 2008). These individuals are evaluated as having goals that are
compatible with those of the perceiver, and they have the skills requisite to
help perceivers achieve those goals. In other words, individuals high in both
warmth and competence are our most socially valued interaction partners. The
perception of warmth without competence elicits pity, competence without
warmth triggers envy, and the absence of both triggers contempt and disgust
(Cuddy et al. 2008).
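
The combinatorial structure just described is compact enough to state explicitly. The following sketch (in Python; the binary high/low coding and the 0.5 cutoff are illustrative assumptions for exposition, not part of the model's formal statement) records the four emotion profiles posited by the stereotype content model:

    # Emotion profiles posited by the stereotype content model
    # (Cuddy et al. 2008): each combination of perceived warmth
    # and competence is associated with a characteristic emotion.
    SCM_EMOTIONS = {
        ("high warmth", "high competence"): "admiration",
        ("high warmth", "low competence"): "pity",
        ("low warmth", "high competence"): "envy",
        ("low warmth", "low competence"): "contempt/disgust",
    }

    def predicted_emotion(warmth, competence, threshold=0.5):
        """Return the SCM-predicted emotion for a target perceived at the
        given warmth and competence levels (0-1 scales; the cutoff is an
        expository assumption)."""
        w = "high warmth" if warmth >= threshold else "low warmth"
        c = "high competence" if competence >= threshold else "low competence"
        return SCM_EMOTIONS[(w, c)]

    print(predicted_emotion(0.9, 0.2))  # -> pity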
The organization of these emotional responses with regard to self- versus
other-benefiting actions also squares nicely with recent research showing
the paradoxical nature of perceivers' responses to moral behavior on the
part of individuals perceived to have traits that seem similar to warmth or
competence (Pavarini and Schnall 2014). Admiration is elicited in response
to the moral behavior of warm (i.e., low competition) others, while the same
behavior by those considered to be low in warmth (i.e., high competition)
elicits envy.
Indeed, the fact that perceivers discount the moral relevance of competence
traits relative to warmth traits could be a simple function of an ingroup bias.
We have negative emotional responses toward competent outgroup others
because they might not be favorably oriented toward us. The bias against the
importance of general competence in judgments of moral character, compared
to general warmth, seems to be a reflection of a self-interested motivation to
maximize the likelihood of resource acquisition. Evidence in line with this
interpretation shows that perceivers value competence more positively in
close others (a close friend) compared to less close others (distant peers).
Though warmth judgments still carry more weight in predicting positivity
toward others compared to competence, competence only becomes relevant
to character when it is perceived to have consequences for the self (Abele and
Wojciszke 2007). For example, the positivity of an employee's evaluations of his
boss tracks competence only when the employee's fate is tied to the decisions
of the boss.

Competence as long-term other-orientation

Literature on person perception has largely agreed that, while evaluation


of others proceeds along both dimensions, warmth holds primacy over
competence. The rationale for such an asymmetry in value has typically been
explained as follows: "From an evolutionary perspective, the primacy of
warmth makes sense because another's intent for good or ill matters more to
survival than whether the other can act on those goals" (Cuddy et al. 2008, p. 89).
These kinds of statements imply that warmth and competence differ in their
immediate relevance to a perceiver's well-being. As such, the importance of a
trait to judgments of others' character is proportional to the relevance of that
trait to one's own survival. Though there seems to be no dispute that warmth
is more relevant to immediate survival, could it be that competence has
similar consequences as time passes? Could it be that the behaviors motivated
by competence, though they do not benefit others in the short term, benefit
others in the long term? This section will argue (a) that competence-related
traits in targets do increase the likelihood of benefits to perceivers over the
long term and (b) that evidence arguing for the relevance of warmth to
character judgment might be failing to detect the importance of competence
to character evaluations because of its focus on evaluations of short-term
impressions. The final section will argue for the importance of incorporating
how people assess competence in general, and over time, into the study of
moral cognition, as well as offer some initial ways in which this pursuit might
move forward.
Though warmth in interaction partners might matter more for achieving
the short-term interests of perceivers, there are reasons to believe that long-
term benefits would depend on valuing the competent as well. Indeed, the
argument for the importance of traits that cultivate individual skills becomes
even more central to others' long-term well-being if you consider the unit of
analysis to be the group as opposed to the individual.

The idea that groups flourish when individuals are motivated to pursue
their own interests is not new. Adam Smith argued in The Wealth of Nations
for the pursuit of immediate self-interest as the key to flourishing societies. His
theorizing on the power of free markets suggests that it is precisely the drive for
self-interest through which societies advance. This idea was captured in Smith's
metaphor of the invisible hand: collective well-being is best achieved by groups
of individuals who pursue their own advancement without concern for others.
The engine of this process is specialization. Focusing individuals' efforts on
skills/domains in which they have a comparative advantage ultimately benefits
a community by maximizing the collective capabilities of group members,
allowing for a potentially wider and richer distribution of resources as well as a
competitive advantage relative to other groups. Consequently, cultivating traits
that foster such an end may ultimately benefit the community by enhancing
the collective competence of a population.
This argument assumes that collective value is, at least in part, created by
self-focused motivational states associated with competence-based traits.
Indeed, I largely agree with Smith's sentiment that "by pursuing his own
interest he frequently promotes that of the society more effectually than when
he really intends to promote it" (Smith 1776/1937).
That said, societal flourishing cannot be achieved through these kinds of
motivations alone. Specialization only pays off when a collective defined by
the free-flowing exchange of resources has been established. In other words,
societies flourish when composed of individuals who (a) maximize their
individual potential in terms of skills/abilities and (b) are willing to exchange
those resources with others. What drives this willingness? Smith initially offers
a strong answer: "it is not from the benevolence of the butcher, brewer or baker
that we should expect our dinner, but from a regard for their self-interest." The
argument put forward in this chapter, however, suggests an alternative. It is
not solely through regard to self-interest that people in groups should expect
the beneficence of others; it is also through the benevolence of the butcher,
brewer, and baker. Societies composed of the warm and the competent should
ultimately thrive, and structural and legal constraints should reflect the
collective value inherent in both traits.
A critical insight provided by sociobiology has been the adaptive value of
other-interested drives: those that cultivate perceptions of warmth in others.
If competence-based traits motivate behaviors that contribute to specialization,
then warmth-based traits motivate behaviors that contribute to the desire to
exchange the fruits of such specialization. A balance of these motivations
maximizes long-term well-being. Societies composed of individuals with
intentions to act warmly toward others as well as the capacity to act on those
intentions will best achieve long-term collective well-being. Theories arguing
for the importance of competence without warmth ignore the social function
of other-interested states (particularly their role in mediating the emergence of
reciprocal altruism, cf. Trivers 1971), and those arguing for the importance of
warmth without competence ignore the importance of the process through
which individual resources contribute to collective value. Other-interested
motivations solidify social groups by establishing and maintaining mutually
beneficial relationships and by providing the proximal mechanisms that
motivate the exchange of resources individuals have accrued. As such, the
traits that underlie perceptions of warmth and competence are essential in the
ultimate creation of flourishing societies.
This perspective fits well with the finding that person perceivers admire
those who are both warm and competent: these are the kinds of individuals
who ultimately contribute to the flourishing of social groups. It also fits
well with the finding that people think warmth is morally relevant, since
immediate and long-term intentions will have consequences for the self. It
raises the question, however, as to whether people also think that, under some
circumstances, competence can also speak to moral character (cf. Valdesolo
and DeSteno 2011).
Because competence-based traits in targets might, over time, increase the
likelihood of benefits to perceivers, and because this seems to be a crucial
criterion for evaluating another's character, it is possible that competence
traits might take on more importance for moral character over the long term.
Importantly, the effect of competence on perceived character might operate
independently from perceptions of warmth. Smith draws a distinction between
the efficacy of promoting societal interest incidentally (through focusing on the
self) and intentionally (through focusing on others). For the purposes of judging
moral character, it is possible that judgments are sensitive not just to the likelihood
that traits will have immediate positive effects on well-being, but perhaps also
whether traits will have long-term positive effects on well-being. It is neither
surprising nor particularly controversial to argue for the moral relevance of
other-interested traits to those who study social, or moral, cognition. But it is
indeed a departure from the norm to posit the moral relevance of competence-
based traits.
Those who might worry that this argument champions the moral value
of self-focused states over other-focused states need not worry. It does not
challenge the view that perceptions of warmth will always remain morally
relevant to perceivers. Instead, it simply proposes a change in the relative
moral relevance of warmth and competence judgments as a function of the
time horizon over which these judgments are made. As perspective shifts
from the short to the long term, and the effects of being surrounded by the
competent become more tangible, competence traits might have a greater
impact on judgments of character. Suggestive evidence of this idea comes from
work evaluating the traits displayed by moral exemplars (Frimer et al. 2011;
Frimer et al. 2012). Twenty-five recipients of a national award for extraordinary
volunteerism were compared to 25 demographically matched comparison
participants with the specific aim of comparing the degree to which these moral
exemplars displayed traits associated with both agency (i.e., competence) and
communion (i.e., warmth). Results suggested that exemplars were consistently
higher not only in communion but also in agency, as well as in the tendency
to incorporate both these core dimensions into their personality. In particular,
this work provides an empirical basis for why competence should be included
in the study of moral psychology.
One response to this argument might be that if it were the case that
competence traits are relevant to moral judgments, there would be more
evidence of it in person perception and moral cognition. Much of the
research into the two dimensions of person perception relies on perceivers'
spontaneous interpretations of behavior or impressions of others (cf.
Cuddy et al. 2008, p. 73). Studies of person perception deal with immediate
evaluations of others' character: snap decisions based on minimal amounts
of information. If judgments of others' moral character are tied to inferences
about the degree to which another's behaviors might profit the self, then it
follows that we should assess others' character along dimensions of warmth
during first impressions. Someone's competence is less immediately relevant to
well-being compared to whether they desire to hurt or harm you.
In this context, warmth might be more central to character judgments
only because it is the most obviously other-oriented dimension of perception.
However, in contexts where perceivers are judging the importance of cultivating
particular traits over the long term, competence might become more central to
character judgments. This idea leads to quite simple and testable predictions
about how the composition of our evaluations of moral character might
change depending on the time horizon under consideration. In the same way
that men and women have been found to value different traits when judging
the attractiveness of potential short- versus long-term relationship partners,
perceivers might value different traits when judging the character of short-
versus long-term interaction partners. Characteristics such as efficiency,
perseverance, and determination might be considered more morally relevant
in contexts when the behaviors that those traits motivate are more relevant to
outcomes for the perceiver.
Previous work, as described earlier, has shown that competence becomes
more relevant to peoples global evaluations of others as their fate becomes
more tied to the target (Wojciszke and Abele 2008); however, no study to my
knowledge has examined the consequences of these kinds of fate-dependent
manipulations on moral judgments. In line with the idea that evaluations of
character seem yoked to the degree to which individuals see others as helping
facilitate their goals, fate dependence might have an impact on character
judgments via inferences of competence.
Indeed, there is already evidence suggesting that competence traits form an
important part of one's own moral identity. Recent studies of the composition
of moral identity find that adjectives such as "hardworking" are considered
central to the moral self and predictive of moral cognition and behavior
(Aquino and Reed 2002). This makes the discounting of such traits in others'
character even more interesting and worthy of research. Why is it that most
existing theories of person perception define moral traits as orthogonal to
traits rooted in competence, even though we seem to readily acknowledge
the moral value of competence for the self? The relative weight given to
warmth and competence in defining the moral identity of the self and others
could be another interesting avenue to test the degree to which judgments of
character depend on the self-interested preference for traits that help achieve
one's goals.
In sum, if the import, and perceived moral relevance, of warmth traits is
due to the immediacy of the effects of such traits on others' well-being, then it
may be the case that competence-related traits would increase in importance,
and moral relevance, if the decision context were framed differently. Though
someones industriousness might not matter for immediate evaluations of
moral character, such a trait might take on moral meaning if participants felt,
for example, that their fate were connected to this individual in some way over
the long term.

New directions for moral cognition

This perspective suggests several interesting avenues for future research in


the study of moral cognition, judgment, and behavior. First, it implies that
the centrality of traits to moral character during person perception might be
distinct from those considered central to one's own moral identity. The moral
relevance of competence-related traits to one's own identity, but not to others',
speaks to this possibility. What other differences might there be?
More generally, what are the processes underlying assessments of others'
abilities to carry out their goals? To what degree do we weight an individual's
determination, grit, or perseverance in assessing moral character and under
what conditions? Does it matter if they are ingroup or outgroup members?
Does it matter whether we are making judgments that seem to only have
immediate or also long-term implications for the self?
Two current areas of interest in moral psychology to which this perspective
might be fruitfully applied include (1) work testing the role of intent and
outcome on moral judgments, and (2) the relationship between mind
perception and moral judgment. With regard to the former, work from the
lab of Fiery Cushman (Cushman 2008; Cushman et al. 2009) has posited two
processes along which moral evaluations proceed: one which is sensitive to
the causal relationship between an agent and an outcome, and the other which
is sensitive to the mental states responsible for that action. It is tempting to
see processes associated with the former as relevant to competence-based
evaluations. An individual's ability to achieve a goal (competence) might
serve as input into a determination of whether an agent is, or is likely to be,
causally responsible for morally relevant outcomes, whereas an individual's
intentions for harming or helping others (warmth) might be more directly
related to determinations of the mental states of actors. Given that this research
identifies distinct patterns in moral judgments associated with evaluations of
intent and causal connection, it is possible that inferences of competence and
warmth might show similar patterns of relationships to particular kinds of
moral judgments. For example, causal responsibility seems to be more directly
relevant to judgments of punishment and blame while intent matters more
for judgments of wrongness or permissibility. Might the moral relevance of
perceived warmth and competence follow a similar pattern? Could perceiving
general competence in individuals make them more morally blameworthy for
outcomes?
In exploring this possibility it would be important to distinguish how
competence judgments might influence moral evaluations of behavior
(responsibility, intent, blame) from moral evaluations of character (how good/
bad is this person). It may be that interesting patterns of responses emerge
from considering these kinds of moral judgments separately. A distinction
between act-centered models of moral judgment and person-centered models
of moral judgment has recently been emphasized in the literature (e.g.,
Pizarro and Tannenbaum 2011). On this account, moral judgments are often
formed by considering the moral character of the individual involved. And
assessments of intent, control, responsibility, and blame might be unified in
their relationships to underlying assessments of who the actor is and what he
or she values (Pizarro and Tannenbaum 2011).
The second area of research in moral cognition for which the moral
relevance of competence might have interesting implications is recent
theorizing on the relationship between mind perception and morality (Gray
et al. 2007, 2012). These theories posit that all moral judgments require
perceiving two distinct kinds of interacting minds: agents and patients.
Agents have the capacity to intend and carry out action, while patients are
the recipients of agents' actions. Interestingly, agents are defined by traits that
seem more conceptually associated with competence (self-control, planning,
thought) but also morality. In models of mind perception, the distinction
between warmth and competence in terms of moral relevance seems
to disappear. How can theories of person perception, which draw a sharp
distinction between morally relevant capacities of warmth and the amoral
capacities of competence, be reconciled with emerging theories of mind
perception in moral psychology? As I have argued throughout this chapter, the
answer may be that the distinction drawn in person perception is misleading
and, perhaps, a function of the contexts in which such studies have been
conducted.

Conclusion

In conclusion, this chapter serves as a call for increased attention toward the
processes underlying the evaluation of others' competence in moral judgments
of them and, consequently, renewed attention to the role of such traits in theories
of morality more generally. Moral cognition has focused almost exclusively on
traits related to warmth (kindness, altruism, trustworthiness) and has paid
relatively little attention to how we assess others' capacities to achieve their
goals. These self-focused traits (discipline, focus, industriousness) have
long been considered relevant to moral character by virtue ethicists, and their
absence from psychological theories of person perception is, at the very least,
worthy of more direct empirical attention.

Note

* Author's Note: Piercarlo Valdesolo, Department of Psychology, Claremont
McKenna College. Correspondence should be addressed to Piercarlo Valdesolo,
850 Columbia Avenue, Claremont, CA 91711. Email: pvaldesolo@cmc.edu.

References

Abele, A. E., and Wojciszke, B. (2007). Agency and communion from the perspective
of self versus others. Journal of Personality and Social Psychology, 93(5), 751–63.
Aquino, K., and Reed, A. (2002). The self-importance of moral identity. Journal
of Personality and Social Psychology, 83(6), 1423–40.
Aristotle (4th century BCE/1998). The Nicomachean Ethics. Oxford: Oxford
University Press.
Cuddy, A. J., Fiske, S. T., and Glick, P. (2008). Warmth and competence as universal
dimensions of social perception: The stereotype content model and the BIAS map.
Advances in Experimental Social Psychology, 40, 61–149.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and
intentional analyses in moral judgment. Cognition, 108(2), 353–80.
Cushman, F., Dreber, A., Wang, Y., and Costa, J. (2009). Accidental outcomes guide
punishment in a trembling hand game. PLoS ONE, 4(8), e6699.
Dent, N. J. H. (1975). Virtues and actions. The Philosophical Quarterly, 25(101),
318–35.
Fiske, S. T., Cuddy, A. J., and Glick, P. (2007). Universal dimensions of social
cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83.
Frimer, J. A., Walker, L. J., Dunlop, W. L., Lee, B. H., and Riches, A. (2011). The
integration of agency and communion in moral personality: Evidence of
enlightened self-interest. Journal of Personality and Social Psychology, 101(1),
149–63.
Frimer, J. A., Walker, L. J., Riches, A., Lee, B., and Dunlop, W. L. (2012). Hierarchical
integration of agency and communion: A study of influential moral figures.
Journal of Personality, 80(4), 1117–45.
Gray, H. M., Gray, K., and Wegner, D. M. (2007). Dimensions of mind perception.
Science, 315(5812), 619.
Gray, K., Young, L., and Waytz, A. (2012). Mind perception is the essence of morality.
Psychological Inquiry, 23(2), 101–24.
Grube, G. M. A., and Reeve, C. D. C. (1992). Plato: Republic. Indianapolis, IN: Hackett.
Pavarini, G., and Schnall, S. (2014). Is the glass of kindness half full or half empty?
In J. Wright and H. Sarkissian (eds), Advances in Experimental Moral Psychology:
Affect, Character, and Commitments. Continuum Press.
Phalet, K., and Poppe, E. (1997). Competence and morality dimensions of national
and ethnic stereotypes: A study in six eastern-European countries. European
Journal of Social Psychology, 27(6), 703–23.
Pizarro, D. A., and Tannenbaum, D. (2011). Bringing character back: How the
motivation to evaluate character influences judgment of moral blame. In M.
Mikulincer and P. Shaver (eds), The Social Psychology of Morality: Exploring the
Causes of Good and Evil. Washington, DC: APA Press.
Rosenberg, S., Nelson, C., and Vivekananthan, P. S. (1968). A multidimensional
approach to the structure of personality impressions. Journal of Personality and
Social Psychology, 9(4), 283–94.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical
advances and empirical tests in 20 countries. Advances in Experimental Social
Psychology, 25(1), 1–65.
Sherman, N. (1989). The Fabric of Character: Aristotle's Theory of Virtue (Vol. 6).
Oxford: Clarendon Press.
Smith, A. (1937). The Wealth of Nations (1776). New York: Modern Library, p. 740.
Todorov, A., Pakrashi, M., and Oosterhof, N. N. (2009). Evaluating faces on
trustworthiness after minimal time exposure. Social Cognition, 27(6), 813–33.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of
Biology, 46(1), 35–57.
Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–52.
Valdesolo, P., and DeSteno, D. (2011). Synchrony and the social tuning of
compassion. Emotion, 11(2), 262–6.
Willis, J., and Todorov, A. (2006). First impressions: Making up your mind after a
100-ms exposure to a face. Psychological Science, 17(7), 592–8.
Wojciszke, B., and Abele, A. E. (2008). The primacy of communion over agency and
its reversals in evaluations. European Journal of Social Psychology, 38(7), 1139–47.
2

Spoken Words Reveal Selfish Motives:
An Individual Difference Approach to Moral Motivation

Jeremy A. Frimer and Harrison Oakes*

While running for the US presidency in 2012, Mitt Romney made numerous
promises. To various audiences, he accumulated at least 15 major pledges of
what he would accomplish on his first day in office, should he be elected. These
included approving an oil pipeline, repealing Obamacare, sanctioning China
for unfair trading, submitting five bills to congress, increasing oil drilling, and
meeting with Democrat leaders. By any realistic account, this collection of
pledges was unfeasible for a single day.1 Romney's ambitious avowals raise
the question: Is such unrealistic over-promising out of the ordinary? Perhaps
Romney's promises are revealing of the situational pressures that politicians
face when trying to appeal to voters. More broadly, perhaps most people,
politicians and the populace alike, regularly feign a desirable exterior to garner
social approval.
Then again, perhaps Romney's campaign vows are also revealing of
something specific about Romney's personality. When characterizing Romney's
policies after the primary elections, his advisor commented, "Everything
changes. It's almost like an Etch-A-Sketch. You kind of shake it up and restart
all over again" (Cohen 2012). In other words, Romney's team did not see his
pledges as necessitating congruent actions. Perhaps some people, like Romney,
are more duplicitous than other people.
In this chapter, we present a case for both possibilities: that feigning a moral
self is the norm and that some people do it more than others. We begin by
reviewing an apparent paradox, that most people claim to be prosocial yet
behave selfishly. We interpret this inconsistency as evidence that humans have
two distinct motives: (a) the desire to appear prosocial to others (appearance
motives) and (b) the desire to behave in a way that benefits the self (behavioral
motives). Self-report inventories thus paint an unrealistically rosy impression
of human nature; this divergence has likely contributed to a widespread
skepticism about self-reports in the social sciences.
The overarching goal of this chapter is to discuss recent efforts to develop
a more subtle motivation measure that accesses private behavioral motives;
in a sense, a metal detector of the soul. This new measure relies on the
projective hypothesis: a person's spontaneously produced words are revealing
of their inner psychological world. Projective methods have the potential to
circumvent certain biases endemic to self-reports, augment the prediction of
behavior, and detect novel and morally significant individual differences. We
describe recent efforts to make projective methods less subjective and more
expedient. And we conclude by exploring how these new efforts may open up
new areas of moral psychology research.

Selfish or moral? A paradox

A quick glance at the news headlines or textbooks on evolution, economics,


or social psychology gives the impression that humans are primarily selfish
(Haidt 2007). The view that human nature is primarily selfish has been popular
among scholars throughout history (e.g., Hobbes, Spinoza, Adam Smith, and
Ayn Rand). Empirical research has largely been supportive of this theory. As but
one illustration within the social sciences, consider the dictator game wherein
one person has a fixed amount of money to unilaterally and anonymously
divide between the self and a partner. In reviewing studies on the dictator
game, Engel (2011) found that most people take more for themselves than
they give to their partner (see Figure 2.1). As a rough, first estimate of human
nature, much theory and evidence suggest that selfishness is the rule.
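
As a concrete illustration of the protocol, Engel's three behavioral categories can be read directly off a player's allocation. The sketch below is a minimal coding function (the function name and the 10-unit endowment are hypothetical; Engel's meta-analysis does not prescribe this exact implementation):

    def classify_dictator(keep, endowment=10.0):
        """Classify a dictator-game allocation by how much the dictator
        keeps out of the endowment. Category labels follow the breakdown
        reported by Engel (2011); boundary handling is an assumption."""
        give = endowment - keep
        if keep > give:
            return "selfish"      # keeps more than is given (70% of players)
        if keep == give:
            return "evenhanded"   # equal split (17%)
        return "prosocial"        # gives more than is kept (13%)

    for keep in (8, 5, 3):
        print(keep, classify_dictator(keep))  # selfish, evenhanded, prosocial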
This broad impression of human nature as selfish generalizes to behaviors in
many contexts. However, it does not manifest when people describe themselves.
Self-descriptions tend to be more prosocial than selfish. To illustrate this, we
examined three of the more prevalent self-report inventory measures that tap
prosocial and selfish tendencies: goals (Schmuck, Kasser, and Ryan 2000),
values (Schwartz et al. 2012), and traits (Trapnell and Broughton 2006). For
each construct, we calculated a score reflecting how prosocial versus selfish
the population is, operationalized by an effect size (Cohen's d; prosocial minus
selfish).2 The results are shown in Figure 2.2. Across all three psychological
constructs, participants' self-reports portray a distinctly prosocial human nature.

[Figure 2.1: pie chart of dictator-game behavior; Selfish 70%, Evenhanded 17%, Prosocial 13%.]
Figure 2.1 Most people behave selfishly. The pie chart shows the percentage of people
who behave in three different ways in the dictator game (N = 20,813). The majority of
players selfishly take more money than they give. Minorities of people prosocially give
more than they take or even-handedly divide the money equally between the self and
partner. Adapted from Engel (2011).

[Figure 2.2: bar chart of effect sizes (Cohen's d, selfish to prosocial); Traits 0.84, Values 1.18, Goals 1.47.]
Figure 2.2 Most people claim to be moral. Bars represent the effect sizes of self-
reported inventories of goals, values, and traits, based on published norms. Across
all three psychological constructs, people see prosocial items as being more self-
descriptive than selfish items. Calculated from published norms in Trapnell and
Broughton (2006), Schwartz et al. (2012), and Schmuck et al. (2000), for traits, values,
and goals, respectively.
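
For readers who want the effect-size computation spelled out, each bar in Figure 2.2 is a standardized mean difference. A minimal sketch follows, with hypothetical summary statistics standing in for the published norms (the equal-n pooled standard deviation is also an expository assumption):

    import math

    def cohens_d(mean_prosocial, sd_prosocial, mean_selfish, sd_selfish):
        """Standardized difference between mean prosocial and mean selfish
        self-ratings, using a pooled standard deviation."""
        pooled_sd = math.sqrt((sd_prosocial ** 2 + sd_selfish ** 2) / 2)
        return (mean_prosocial - mean_selfish) / pooled_sd

    # Hypothetical norms on a 1-7 Likert scale (not the published values):
    print(round(cohens_d(5.6, 0.9, 4.4, 1.1), 2))  # -> 1.19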

By conventional standards, effect sizes are in the large or very large ranges. Most
people claim to be more prosocial than selfish.
The general impression from the social sciences (e.g., from the dictator
game), that people are selfish, appears to contradict the general impression
from personality psychology, that people claim to be moral. We will make a
case that this apparent paradox is in fact revealing of a complexity (to put it
nicely) or a hypocrisy (to put it bluntly) built into human nature: a desire to
appear prosocial while behaving selfishly.

Moral hypocrisy

How do these disparate motives play out in human interaction? Daniel Batson's
coin-flipping experiments provide a compelling account of how, for most people,
morality is primarily for show (Batson et al. 1997). Research participants were
asked to choose one of two tasks to complete. One task offered participants
a chance to win money; the second was boring and rewardless. Individuals
were told that the next participant would have to complete whichever task
they did not choose. However, this participant would remain unaware of the
assignment process.
The experimenter described the situation facing participants as a kind of
moral dilemma, and explained that most people think the fair way to decide is
by flipping a coin. However, participants were not required to flip the coin, nor
were they required to adhere to the coin toss results should they choose to flip
the coin. Participants were then left alone in a room with a coin and a decision
to make. This set up a zero-sum situation in which one person's benefit meant
another person's loss (essentially a variation of the dictator game). Moreover,
the situation was effectively anonymous, with reputational forces stripped
away. What would participants do?
As we might expect from Figure 2.1, unapologetic selfishness was common
in these studies. Roughly half of participants never bothered to toss the coin.
Almost all (90%) of these participants immediately assigned themselves to the
favorable task. The other half of the sample, however, submitted to the fair
procedure of tossing a coin. Probabilistically speaking, about 50 percent of
these participants would have won the coin toss and assigned themselves to the
favorable task. However, 90 percent of participants who tossed a coin assigned
themselves to the favorable task, a full 40 percent more than probability odds
would predict.
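
The size of this departure from chance is easy to verify. If coin-toss outcomes were honestly followed, the number of coin-flippers assigned to the favorable task would follow a binomial distribution with p = 0.5. The sketch below computes the relevant tail probability for a hypothetical sample of 20 coin-flippers (the per-study n is not reproduced here):

    from math import comb

    def binom_tail(k, n, p=0.5):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # Hypothetical: 20 participants flip a coin and 18 (90 percent)
    # end up with the favorable task.
    print(binom_tail(18, 20))  # ~0.0002: wildly improbable under a fair coin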
Given the anonymity of their decision, participants who lost the coin
toss found themselves in a bind. They were caught between the desire to act
upon the result of the fair procedure and the desire to win some money. In
the end, most (80%) of the people who lost the coin toss ignored the results
and assigned themselves to the favorable task. Batson interpreted these data to
suggest that among the corpus of human motives are distinct desires to behave
selfishly and appear moral.
Flipping a coin to fairly adjudicate the assignment of unequal tasks is a display
of the desire to appear moral. Not only do we try to convince others of our good
nature, we try to convince ourselves too, through internalized and generalized self-
beliefs. In the Batson studies, participants completed a self-report measure of
their moral responsibility. The measure predicted whether participants would
flip the coin, but it did not predict how they actually behaved, meaning that
participants' self-proclamations were more closely linked to how they wanted
others to see them than they were to private behavior (viz., assigning the task).
Having discussed the majority of participants in Batson's studies who
exhibited moral hypocrisy or unabashed selfish behavior, we are left with the
few laudable participants who assigned the other person to the good task, either
with or without a coin toss. With no one watching, this important minority
gave of themselves to benefit another person. Introducing these "givers" into
iterated economic games directly benefits fellow players (in terms of payouts)
and encourages generosity from them to one another (Weber and Murnighan
2008). Whereas these givers may appear to be self-sacrificial, over time, they
tend to reap rewards for their generosity. What sets them apart from the selfish
and the hypocrites is that their self-interest is interwoven with the interests of
those around them.
Givers are probably the sort of people one would prefer as a babysitter,
colleague, or government representative, given their honorable behavior.
Society would benefit from an efficient means of detecting this minority of the
population, which would also raise the likelihood of catching hypocrisy, thus
making prosocial behavior more attractive to would-be hypocrites. We next
explore whether and how moral psychology might develop a personality tool
that detects these honorable individuals.

Moral is as moral does?

The protagonist of the 1994 film Forrest Gump, played by Tom Hanks, was
not a bright man. He had an IQ of 75. He was inarticulate and had a poor
grasp of social rules, cues, and expectations. Yet his behavior was mysteriously
brilliant. To name a few of his accomplishments, he was a football star, taught
Elvis Presley to dance, was a Vietnam war hero, started a multimillion dollar
company, and met three US presidents. Was Forrest smart? Or was he stupid,
as his IQ and peers suggested? When asked directly, Forrest retorted, "Stupid is
as stupid does." In other words, behavior, not thoughts or words, is the true
measure of a person.
Forrest's ontological stance coincides with the general feeling in social
psychology: the best way to know a person is by observing their actions, not
their words. A person is as a person does, not as he/she claims. This assertion
may be grounded in the notion that people have poor insight about the causes
of their own behavior (Nisbett and Wilson 1977). The self-congratulatory
impression emerging from self-reports (see Figure 2.2) in conjunction with
self-serving behavior in economic games (see Figure 2.1) might seem to add
to the skepticism.
We believe that this degree of skepticism about the validity and utility of
self-reports in understanding behavior is overly dismissive. Reports from the
person (self-report inventories and projective measures, combined) can offer
a reasonably accurate picture of human nature and improve predictions of
behavior. The key to prediction is to expand the toolset beyond inventories.

Limitations of self-report inventories

I (the first author) came to psychology after completing a degree in engineering


physics, wherein the tools of the trade were sophisticated machines housed
in metal boxes with names like "oscilloscope." I anticipated that psychology
had its own raft of sophisticated measurement devices, which it does.
However, most tools for measuring the person are astonishingly simple. The
most common personality measure involves self-reporting one's traits (e.g.,
extraversion) by ticking boxes on a Likert scale. This seemed imprecise, biased
by self-presentation demands, and missing what felt to me to be the essence
of a person (e.g., a person's history, aspirations, fears, and desires). Sixty years
earlier, Allport (1937) arrived at a related sentiment: "Nor can motives ever be
studied apart from their personal settings; they represent always the strivings
of the total organism toward its objective" (p. 18). Self-report inventories do
have their benefits; indeed, no one has more information about any particular
person than that person him/herself. Nevertheless, these self-reports seem
untrustworthy, especially if one end of a Likert scale represents something
socially desirable, such as prosocial motivation.
An alternative means of knowing a person is through the words that he/she
spontaneously speaks. Premising this approach is the projective hypothesis,
which essentially states that the more a person cares about something, the
more the person will speak about that thing. Projective methods usually
begin in an interview, either about some ambiguous picture (e.g., Rorschach
inkblots, Thematic Apperception Test) or one's own life. After recording and
transcribing the interview, trained researchers code each story for the presence
or absence of a particular theme (e.g., power, achievement, or intimacy). The
scores from this coding process tend to be highly predictive of behavior. Why
then do mainstream empirical researchers tend to avoid studying projective
methods? We propose two pragmatic reasons.

Objectivity
Researchers may be wary of projective methods because science demands
objective, replicable measurements with researcher bias minimized. The
moment an interview begins, an avalanche of confounding factors compromises
the validity of the data. Among these are the interviewer's personal views,
preferences, and knowledge of the status of the individual.

Expedience
Conversely, researchers may be attracted to self-report methods owing to
expedience. Self-report measures require few resources, can be collected online
or from groups of participants at the same time, and can be analyzed the same
day. In the current era, expedience is a prerequisite to feasibility. In contrast,
interviewing, transcribing, and coding require a significant resource investment.

The revealing nature of the spoken word

Both expedience and objectivity are important concerns, but so is prediction.


We suggest that neither of the two pragmatic reasons provides sufficient grounds
for neglecting the richness of projective methods, especially in the twenty-first
century. Technological advances of recent years have opened new opportunities
for using spoken and written words, expediently and objectively. Later in this
chapter, we describe our current efforts to develop a new projective measure
to better understand a person's moral character. But first, we briefly outline the
traditional projective method.
Analyzing spoken words is founded on the projective hypothesis: when
trying to make sense of a stimulus that has no clear meaning, people create
meaning, thereby projecting the thoughts that are chronically accessible in their
mind. Rorschach inkblots and the ambiguous picture scenes of the Thematic
Apperception Test were early tests built upon the projective hypothesis (Frank
1939). Respondents were asked to make sense of what they saw; in doing so,
they projected motives such as achievement, power, or intimacy to tell a story
about the stimuli. Complex scoring systems (Exner 1993; McClelland 1975)
detailed how coders should assign scores based on the respondents answers.
Dan McAdams (2001) adapted and expanded upon these scoring systems
to empirically study personal life stories. In an interview context, participants
describe various life events, such as earliest memories or a turning point
event wherein some major life change takes place. Later, the interviews are
transcribed and each event is coded for the presence or absence of various
themes, and then tallied to form a narrative metric. Among the menu of
available coding themes are (a) affective tone, ranging from optimistic to
pessimistic; (b) redemption, wherein a negative event gives rise to a positive
one; (c) contamination, wherein a positive event gives rise to a negative one;
(d) agency, which includes themes of power, achievement, and empowerment;
and (e) communion, which includes themes of help, love, and friendship.
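
Mechanically, the scoring step that follows the coding is a simple tally. Assuming coders have already marked each life-story event for the presence or absence of each theme (the data structure below is an illustrative assumption, not McAdams's published protocol), the narrative metric is a per-theme count across events:

    from collections import Counter

    # Each event is the set of themes a trained coder judged present.
    coded_events = [
        {"redemption", "communion"},
        {"agency"},
        {"agency", "communion"},
        {"contamination"},
    ]

    def narrative_metric(events):
        """Tally how many events contain each coded theme."""
        counts = Counter()
        for themes in events:
            counts.update(themes)
        return counts

    print(narrative_metric(coded_events))
    # e.g., Counter({'communion': 2, 'agency': 2, 'redemption': 1, 'contamination': 1})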
A founding premise of the projective method is that data garnered from
spontaneous words are non-reducible to scale/inventory data. Personal stories
contain idiographic information, structure, and meaning that, qualitatively,
scales cannot capture. Knowing that a person is highly extraverted or
emotionally stable does not tell you which stories they will share about their
earlier life, or what meaning those stories hold in the present. Additionally,
personal stories contain important quantifiable information that is not
accessible via self-report questionnaires.
To illustrate this point in the moral domain, we consider whether personality
measures can distinguish bona fide moral heroes from the general population.
Walker and Frimer (2007) studied the personalities of 25 recipients of the
Caring Canadian Award, a national award for sustained prosocial engagement.
The authors also recruited a set of demographically matched comparison
individuals, drawn from the community. All participants completed a battery
of measures including inventories of self-reported traits (Wiggins 1995) and
a projective measure, an individual Life Story Interview (McAdams 1995).
The interview includes questions about high point events, low point events,
turning point events, and so on. Common among the variety of stories that
people told were weddings, the birth of children, the death or illness of friends
or family, and work transitions. Awardees scored higher than comparisons on
many of the measures, both inventory and projective. As we will show next,
projective measures were more distinguishing of exemplars from comparisons
than were inventories.
Imagine reviewing the personality scores of the participants without
knowing whether each participant was an awardee or comparison individual.
The first line, for example, would contain an array of numbers representing
the scores of a particular individual, say Joe. How accurately could you guess
whether Joe is a moral exemplar or a comparison subject, based on the data
at hand? To find out, we performed a logistic regression on the original data
set, predicting group status (exemplar or comparison). With no predictors,
correct classification was at chance levels: specifically, 50 percent. In the first
step of the regression, we entered self-report personality data (the Big 5) for
all participants. Correct classification improved from 50 percent to 72 percent,
a significant increase above chance, Nagelkerke R² = 0.27, p = 0.04. In the
second step, we added the projective measure data listed above. Figure 2.3
shows how correct classification increased to near perfection (72% → 94%)
with the addition of projective data, a further significant increase, Nagelkerke
ΔR² = 0.55, p < 0.001.
We tested whether inventory or projective data is the more powerful predictor
by entering the variables in the reverse order (projective, then inventory). In the
[Figure 2.3 appears here: two stacked-bar panels of correct classification. Left panel, "Inventory then Projective Data": chance 50%, Step 1 (add inventory data) +22%, Step 2 (add projective data) +22%. Right panel, "Projective then Inventory Data": chance 50%, Step 1 (add projective data) +38%, Step 2 (add inventory data) +6%.]
Figure 2.3 Projective data adds to the predictive power of moral behavior. Correct
classification of moral exemplars and ordinary comparison individuals in two
logistic regression analyses. The left panel shows the correct classification based
on chance (Step 0), Big 5 trait inventories alone (Step 1), and inventory with five
projective measures (Step 2). The right panel shows correct classification based
on the reverse ordering: projective then inventory. Calculated from Walker and
Frimer (2007).

first step, projective data increased prediction above chance, from 50 percent
to 88 percent, a significant increase, Nagelkerke R² = 0.74, p < 0.001. In the
second step, inventory data did not significantly augment the differentiation
of exemplars from comparisons (88% → 94%), Nagelkerke ΔR² = 0.08,
p = 0.19.
By knowing a person's self-report inventory scores and projective scores
(and nothing else), one could correctly guess whether or not Joe was a moral
exemplar, 19 times out of 20. Projective data, if anything, is the more powerful
predictor of moral behavior.
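As a concrete, hedged illustration of the two-step procedure just described, the sketch below runs a hierarchical logistic regression in Python with statsmodels. The data are randomly simulated placeholders rather than the Walker and Frimer (2007) dataset, so the printed classification rates will not reproduce the published figures.

import numpy as np
import statsmodels.api as sm

# Simulated stand-ins for the original data (not the real dataset)
rng = np.random.default_rng(0)
n = 50                                 # e.g., 25 exemplars + 25 comparisons
y = np.repeat([1, 0], n // 2)          # group status (1 = exemplar)
big5 = rng.normal(size=(n, 5))         # Big 5 inventory scores
projective = rng.normal(size=(n, 5))   # five projective (narrative) measures

def classification_rate(X):
    # Fit a logistic regression and return the correct classification rate
    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    predicted = (model.predict() > 0.5).astype(int)
    return (predicted == y).mean()

print(f"Step 1 (inventory only): {classification_rate(big5):.0%}")
step2 = np.column_stack([big5, projective])
print(f"Step 2 (inventory + projective): {classification_rate(step2):.0%}")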
Telling a story is inherently different from making global attributions about
oneself. For starters, telling a story is an ambiguously defined task. Even if a
particular episode is specified (e.g., a turning point), one still needs to select
a particular memory, determine a starting point, and then build a coherent
story from there forward. The result is that each person tells a rather unique
story. Consider one of the comparison participants in Walker and Frimer's
(2007) study. In response to a question about a high point event in his life, this
comparison participant began his story by describing how he had prepared for
a vacation to Europe:

I was going on a vacation to Italy. I invested around $4000 for the whole tour
and whatnot....Getting the Canadian passport was easy enough because I
had one before, and it only took two weeks...

Surprisingly, this was not leading to fond memories of vineyards, beaches,
friends and family, sunshine, relaxation, or traditional foods. The story quickly
turned sour as the participant revealed that he had lost his passport. The story
became labored, as he told of his struggles with government institutions and
officials to try to recover his passport in time for his vacation. Several minutes
later, he continued:

...I went down, I showed them my income tax forms, and...that I'd paid
my taxes, and this, that, and the other. And if you could speak to a Canadian
government guy, and he could get you on the computer and talk to you just like
you and I, it makes sense. But there's no sense. You're talking to a number....

The sob story continued for roughly 10 minutes. Eventually the interviewer
interjected, steering the participant back to the high point of his story by
asking whether he had received his passport in time for his vacation. His high
point event ended in disappointment.

No....I didn't want to go to no doctor and, you know, have to lie about
being sick and all that....As far as I was concerned, the holiday was over.
You know, I'd spent that money. That was it.

This comparison participant's high point story is of a tragic form. It had a
contaminating tone, wherein a positive event (vacation) gave rise to a negative
outcome (a lost passport and lost money); the affective tone was pessimistic; it
was nearly devoid of prosocial communion themes; and most of the characters
were antagonistic villains. This kind of story was common in the comparison
group, but not in the exemplar group.
The high points of moral exemplars were the kinds of generative stories that
one would expect. As an example, one moral exemplar described a touching
story in which a disadvantaged child savored a Christmas gift:

Christmas Eve one year...[my wife and I] looked at all the gifts under our
tree....It was a true mass of gifts to be opened. And yet we still looked
at each other, and asked, sincerely, "Is there enough for the kids...to be
happy?" We realized how fortunate our kids were, and how fortunate we
were, that regardless of how the impact was going to be, or how minimal or
how large it was going to be, we were going to start a program the following
year....It evolved very, very slowly from going to local stores asking for
a hockey stick and a baseball glove, to donated wrapping paper....Six
hundred and fifty gifts, the first year...evolving to literally 80,000 gifts one
year....[We would take the gifts] into these small communities....Very
isolated, and exceedingly poor....I can remember this one little girl...sat
on Santa's knee....She was nervous. We provided her with a gift, which I
knew was a doll from the shape of it....[I was] quite anxious for her to open
the gift; I wanted to see her reaction. But she didn't....After all the kids had
received their gifts, I took one of the people from the community aside, and
I said, "I was quite anxious for this one young girl to open her gift, but she
didn't." I said...."I wonder if she felt embarrassed, or if she felt awkward,
or maybe she doesn't understand the tradition of Christmas...." And they
said, "No, she fully understands, but this is December 22nd. That will be the
only gift that she has. She will wait until Christmas morning to open that
gift...." And [that's] the true essence of what that program is all about.

The contrast between these two stories (about the lost passport vs. the
disadvantaged child receiving a cherished gift) illustrates both the profound
richness and also the predictive utility of spoken words. Personal stories reveal
a great deal about a person.

Motives in stories

Having discussed broader issues concerning how self-report traits and
specific life stories add to our understanding of the person, we now retain
this distinction and return to specific questions about moral motivation. How
can traits and spoken words help us understand the role of selfishness and
morality in human functioning?
One way of measuring a person's moral motivation from their stories is by
examining how often themes of agency and communion arise (Bakan 1966).
A story with much agency (power, achievement) conveys a tone of selfishness.
Conversely, a story rich in communal themes (benevolence, care, universalism)
communicates prosociality. Given that prosociality is more socially desirable
than selfishness, one would expect themes of communion to be more prevalent
than themes of agency in most self-proclamations. This is the case with self-
report inventories (see Figure 2.2). However, the opposite is found with spoken
words; when people speak about their lives, they communicate selfishness. For
example, when college students tell stories about peak experiences and earliest
memories, agentic themes are more than twice as prevalent as communal
themes (McAdams et al. 1996, Study 1). And both ordinary adults and moral
exemplars have more than twice as many themes of agency (than communion)
in their stories (Frimer et al. 2011).
The relative strength of selfish agency and moral communion depends
on the kind of measure used (viz., self-report endorsements vs. projected
narratives), and may be revealing of differences in the psychological processes
they each measure. Most people claim to be more communal than they are
agentic on self-report measures of the importance of goals, values, and the
description of traits (see Figure 2.2). Yet, these same people tell stories that are
primarily about money, achievement, status, and recognition (agency), and
less often about taking care of friends and family members, or contributing
to the greater good (communion). In striking contradistinction to self-report
measures, the themes emergent from narratives are primarily selfish. In other
words, the impression arising from projective methods (that people are
selfish) coincides with the predominant view (and corroborating evidence)
that began this chapter. Why?
One possible reason for this disparity between responses from inventories
and projective measures is the frame of reference. Perhaps responding
to self-report inventories prompts the respondent to take a third-person
perspective (the Jamesian (1890) "me-self") to examine what the self is like
from the perspective of another. Thus, reports reveal socially desirable features
that are for public viewing, which tend to be moral. In contrast, narrating
requires taking a first-person perspective (the Jamesian "I-self") to assemble
memories and construct a coherent story from the driver's seat. Thus,
personality assessments derived from narratives are revealing of the agent's
private desires, which tend to be selfish.
Perhaps the rarity of communion in life stories is not a fair test of whether
people are innately selfish. The quantity (or frequency) of motives may not
be an appropriate metric. Perhaps quality is what really matters. To project
a prosocial persona, perhaps people communicate (in one way or another)
that the ultimate purpose for their agentic strivings is some communal end.
Societal leaders may be particularly adept at explaining how numerous,
proximal agentic goals (e.g., changing laws) serve a more distal, singular
communal purpose (e.g., advancing the greater good).
To test whether iconic leaders thus frame agency as a means to an end
of communion, or simply dwell on agency, Frimer et al. (2012) examined
speeches and interviews of some of the most influential figures of the past
century, as identified in Time magazine's lists (Time 100, 1998, 1999). In
Study 1, experts (social science professors at Canadian universities) rated
the moral character of each target to identify highly moral and less-moral
leaders. Among the top 15 moral leaders were Nelson Mandela, Mohandas
Gandhi, Aung San Suu Kyi, the Dalai Lama, Mother Teresa, and Martin
Luther King, Jr. The bottom 15 less-moral leaders included Kim Jong Il, Eliot
Spitzer, Vladimir Putin, Donald Rumsfeld, Mel Gibson, George W. Bush, and
Adolf Hitler.
In Study 2, trained coders examined each influential individual's speeches
and interviews, ascertaining both the implied means (agency or communion)
and ends (agency or communion) of each speech or interview. Unsurprisingly,
Gandhi and the other moral leaders treated agency as a means to an end of
communion. Perhaps these icons garnered such public approval, in part,
because of their ability to connect pragmatic agency with communal purpose.
However, not all leaders did this. The speeches and interviews of Hitler and
the other less-moral leaders were of a primarily agentic nature, with agency as
a means to more agency. Agency and communion may be distinct, mentally
segregated motives early in the life span, with the integration of agency and
communion being a developmental achievement. Moreover, some people,
such as moral exemplars, may be more likely to realize integration (Frimer
and Walker 2009).
These findings that words are revealing of the moral character of leaders
contradict the common-sense notion that the public words of leaders are a
means of social persuasion, the product of the calculating minds of advisors
and ghostwriters. This common sense seems to be overly dismissive of the
wealth of information that spoken words communicate. We suggest that most
people cannot help but project their own deeply held motives when speaking.
What is needed is an expedient, objective measure for detecting these inner
motives, and then experimentally testing if and when people can fake high
levels of prosociality.

How to spot a hypocrite: Toward an expedient, objective measure

We conclude this chapter by describing ongoing efforts in our lab to concentrate
the active ingredients of the projective hypothesis into "pill form": an
expedient, objective projective measure. The new projective measure assesses
the density of agentic and communal words in texts that people produce,
using computer software such as Linguistic Inquiry and Word Count (LIWC;
Pennebaker et al. 2007). LIWC is a transparent, well-validated computer
program (available from http://liwc.net) that counts all the words within a text
file that match the words in a specified dictionary. LIWC is expedient (it
processes texts in seconds) and objective (no human coders are involved).
Frimer and Oakes (2013) created agency and communion dictionaries for
LIWC, and validated them against human coding. Using these dictionaries,
LIWC produces density scores for agency and communion from a given
text. These scores are then corrected to account for different dictionary sizes,
and then used to calculate moral motivation scores. Moral motivation is
calculated as follows: Moral Motivation = Communion − Agency. Positive
moral motivation scores imply that a text is richer in communal words than
it is in agentic words.
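To illustrate the computation, here is a minimal sketch of the density-and-difference calculation in Python. The two word lists are tiny hypothetical stand-ins for the validated agency and communion dictionaries, and the dictionary-size correction is a simplified guess at what such an adjustment could look like, not the actual Frimer and Oakes (2013) procedure.

import re

# Hypothetical mini-dictionaries (illustrative stand-ins only)
AGENCY = {"win", "achieve", "power", "status", "success"}
COMMUNION = {"help", "love", "care", "friend"}

def moral_motivation(text):
    # Return communal minus agentic word density (percent of all words)
    words = re.findall(r"[a-z']+", text.lower())
    agency = 100 * sum(w in AGENCY for w in words) / len(words)
    communion = 100 * sum(w in COMMUNION for w in words) / len(words)
    # Simplified correction for unequal dictionary sizes (assumption)
    communion *= len(AGENCY) / len(COMMUNION)
    return communion - agency

# Positive scores imply richer communal than agentic content
print(moral_motivation("I want to help my friend and share what I win."))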
Along with the usual reliability and validity concerns, we expect that the
successful moral motivation measure will predict human behavior. For this
objective to succeed, at least two specific hypotheses concerning general
tendency and individual differences will need to be supported.

Hypothesis 1: Projected moral motives reveal a selfish general tendency
We predict that the projective measure will characterize human nature
as selfish. To do so, projected moral motivation should be negative in the
population. This criterion assumes that the theory that human nature is selfish,
and the results from economics, biology, and social psychology (that people
tend to behave selfishly), are accurate.
Which content, agentic or communal, emerges most frequently when
people speak about topics that matter to them? We predict that agentic
content will be more common than communal content. Preliminary findings
confirm Hypothesis 1: when describing important goals, people
produce more agentic content than communal content (Frimer and Oakes
2013). This effect is not attributable to base rates of agency and communion
in the dictionaries or typical English. When people talk about what matters
to them, they selectively use more agentic words than communal words,
communicating/revealing a selfish lifestyle.
This selfish portrait from the projective measure was the opposite
of the impression emerging from a comparable endorsement inventory.
Participants also rated the importance of their goals using the Aspiration
Index (Grouzet et al. 2005), a standard self-report inventory of goals. The effect
reversed: participants now rated their communal goals as more important
than their agentic goals. These results support a dualistic theory of motivation.
Results from the projective measure coincide with the general conclusion that
people are selfish; results from the endorsement measure suggest the opposite,
and may tap socially desirable appearance motives.
In Batson's studies, hypocrisy existed between two opposing behaviors
(moral coin-tossing vs. selfish task assignment), with self-deception keeping
the two at bay. In the present study, people demonstrated the coming apart of
their own motives by acknowledging the primacy of their own selfish goals
on a projective measure, then declaring their moral goals to be most important
while endorsing items on a goal inventory. On self-report inventories, people
tend to present a moral self (see Figure 2.2); on projective measures, they tend
to reveal their selfishness. Thus, the first criterion of a successful projective
measure is supported: mean-level estimates from projective methods coincide
with the general interdisciplinary conclusion that people are selfish.

Hypothesis 2: Projected moral motives predict moral behavior
One of the limitations of the Batson studies is their limited utility in an applied
setting. For example, the coin-flipping task could not realistically be used to
screen prospective employees. One of the benefits of the projective measure
is adaptability and unobtrusiveness: it could be used in a variety of contexts.
To be useful as such, however, the measure would need to meet a second
criterion: predicting moral behavior.
Lending initial support, the moral motivations found in the Nobel Peace
Prize lectures are positive (viz., prosocial; Frimer and Oakes 2013). In contrast,
the Nobel Literature Prize lectures scored neutrally (viz., equally prosocial
and self-interested). These findings replicated with a variety of interviews
and speeches of moral exemplars like Gandhi and leaders with the opposite
moral reputation like Rumsfeld, Putin, and Hitler (from Frimer et al. 2012).
Preliminary evidence is thus far supportive of the claim that the projective
measure of moral motivation predicts moral behavior, and functions somewhat
like a metal detector for private intentions.

Conclusion

Projective measures have the potential to augment our understanding of
human motives and enhance our ability to detect moral character in the real
world. Individual differences in the strategies of pursuing selfishness remain of
the utmost concern to building civil society. If the projective hypothesis is as
useful as we are supposing, the possibilities for predicting prosocial behavior
naturalistically are virtually endless. Remaining to be seen is whether this tool
could have predicted the unscrupulous behaviors of the likes of Tiger Woods,
Lance Armstrong, and Bernard Madoff, and the progressive thinking of the
Dalai Lama, Aung San Suu Kyi, and Bono.

Notes

* Authors' Note: Jeremy A. Frimer, Harrison Oakes, Department of Psychology,
University of Winnipeg, Winnipeg, MB, Canada. Corresponding Author: Jeremy
A. Frimer, Department of Psychology, 515 Portage Avenue, Winnipeg, MB,
Canada R3B 2E9. Email: j.frimer@uwinnipeg.ca.
1 Regardless of whether voters interpreted his commitments literally or figuratively
(as general indicators of his intentions), Romney's pledges illustrate the wiggle room
that often exists between specific proclamations and their corresponding behavior.
2 For goals, we compared means for community (prosocial) against the average
of financial success, appearance, and social recognition (selfish). For values, we
contrasted an aggregate of benevolence and universalism (prosocial) against
an aggregate of achievement, power, and face (selfish). For traits, we contrasted
nurturance for both genders (prosocial) against the orthogonal assertiveness for
both genders (selfish).

References

Allport, G. W. (1937). Personality: A Psychological Interpretation. Oxford, England: Holt.
Bakan, D. (1966). The Duality of Human Existence: An Essay on Psychology and Religion. Chicago: Rand McNally.
Batson, C., Kobrynowicz, D., Dinnerstein, J. L., Kampf, H. C., and Wilson, A. D. (1997). In a very different voice: Unmasking moral hypocrisy. Journal of Personality and Social Psychology, 72, 1335–48. doi:10.1037/0022-3514.72.6.1335
Cohen, T. (21 March 2012). Romney's big day marred by "Etch A Sketch" remark. CNN. Retrieved from http://CNN.com
Engel, C. (2011). Dictator games: A meta study. Preprints of the Max Planck Institute for Research on Collective Goods.
Exner, J. E. (1993). The Rorschach: A Comprehensive System, Volume 1: Basic Foundations (3rd ed.). New York, NY: Wiley.
Frank, L. K. (1939). Projective methods for the study of personality. Transactions of The New York Academy of Sciences, 1, 1129–32.
Frimer, J. A., Walker, L. J., Dunlop, W. L., Lee, B. H., and Riches, A. (2011). The integration of agency and communion in moral personality: Evidence of enlightened self-interest. Journal of Personality and Social Psychology, 101, 149–63. doi:10.1037/a0023780
Frimer, J. A., Walker, L. J., Lee, B. H., Riches, A., and Dunlop, W. L. (2012). Hierarchical integration of agency and communion: A study of influential moral figures. Journal of Personality, 80, 1117–45. doi:10.1111/j.1467-6494.2012.00764.x
Frimer, J. A., and Oakes, H. (2013). Peering into the Heart of Darkness: A Projective Measure Reveals Widespread Selfishness and Prosocial Exceptions. Manuscript under review.
Frimer, J. A., and Walker, L. J. (2009). Reconciling the self and morality: An empirical model of moral centrality development. Developmental Psychology, 45, 1669–81. doi:10.1037/a0017418
Grouzet, F. E., Kasser, T., Ahuvia, A., Dols, J., Kim, Y., Lau, S., and Sheldon, K. M. (2005). The structure of goal contents across 15 cultures. Journal of Personality and Social Psychology, 89, 800–16. doi:10.1037/0022-3514.89.5.800
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998–1002. doi:10.1126/science.1137651
James, W. (1890). The Principles of Psychology. New York, NY: Holt.
McAdams, D. P. (1995). The Life Story Interview (Rev.). Unpublished manuscript, Northwestern University, Illinois, USA.
McAdams, D. P., Hoffman, B. J., Mansfield, E. D., and Day, R. (1996). Themes of agency and communion in significant autobiographical scenes. Journal of Personality, 64, 339–77. doi:10.1111/j.1467-6494.1996.tb00514.x
McAdams, D. P. (2001). The psychology of life stories. Review of General Psychology, 5, 100–22. doi:10.1037/1089-2680.5.2.100
McClelland, D. C. (1975). Power: The Inner Experience. Oxford, England: Irvington.
Nisbett, R. E., and Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–59. doi:10.1037/0033-295X.84.3.231
Pennebaker, J. W., Booth, R. J., and Francis, M. E. (2007). Linguistic Inquiry and Word Count: LIWC [Computer software]. Austin, TX: LIWC.net.
Schmuck, P., Kasser, T., and Ryan, R. M. (2000). Intrinsic and extrinsic goals: Their structure and relationship to well-being in German and U.S. college students. Social Indicators Research, 50, 225–41. doi:10.1023/A:1007084005278
Schwartz, S. H., Cieciuch, J., Vecchione, M., Davidov, E., Fischer, R., Beierlein, C., and Konty, M. (2012). Refining the theory of basic individual values. Journal of Personality and Social Psychology, 103, 663–88. doi:10.1037/a0029393
Time 100: Heroes and icons. (14 June 1999). Time, 153(23).
Time 100: Leaders and revolutionaries. (13 April 1998). Time, 151(14).
Trapnell, P. D., and Broughton, R. H. (2006). The Interpersonal Questionnaire (IPQ): Duodecant markers of Wiggins' interpersonal circumplex. Unpublished data, The University of Winnipeg, Winnipeg, Canada.
Walker, L. J., and Frimer, J. A. (2007). Moral personality of brave and caring exemplars. Journal of Personality and Social Psychology, 93, 845–60. doi:10.1037/0022-3514.93.5.845
Weber, J. M., and Murnighan, J. K. (2008). Suckers or saviors? Consistent contributors in social dilemmas. Journal of Personality and Social Psychology, 95, 1340–53. doi:10.1037/a0013326
Wiggins, J. S. (1995). Interpersonal Adjective Scales: Professional Manual. Odessa, FL: Psychological Assessment Resources.
3

Is the Glass of Kindness Half Full or Half Empty? Positive and Negative Reactions to Others' Expressions of Virtue

Gabriela Pavarini and Simone Schnall*
Mahatma Gandhi is one of the world's most famous and influential symbols of
peace. His philosophy of nonviolence has moved, transformed, and inspired
individuals and communities. Yet, he was accused of racism (e.g., Singh 2004),
and was never awarded a Nobel Peace Prize, despite having been nominated
five times. Mother Teresa, an equally remarkable symbol of compassion
and altruism, dedicated her life to helping the poor and the dying in over a
hundred countries. Her funeral procession in Calcutta brought together
thousands of people who lined the route in expression of admiration and
respect. Yet, the entry "Mother Teresa was a fraud" returns 65,300 results on
Google. Indisputably, people are strongly affected by witnessing the good deeds
or heroic actions of exceptional individuals, but at the same time, such actions
invoke sentiments that vary from appreciation and warmth to cynicism and
bitterness.
The central goal of this chapter is to address this paradox: Under what
conditions does the kindness of others inspire and move individuals to tears,
or invoke envy and a desire to derogate the other person's intentions? We
review what is known about both reactions and present a functional analysis,
suggesting that assimilative and contrastive reactions to virtuous others
serve distinct purposes: whereas feeling moved or uplifted binds individuals
together in cooperative contexts and communities, contrastive responses serve
to regulate one's own social status within a group.
The two sides of human prosociality

Human beings have a remarkable capacity to set aside self-interest, help one
another, and collaborate (Becker and Eagly 2004). Prosocial behavior enables
groups to achieve feats that could never be achieved by individuals alone.
Some have proposed that the formation of large cooperative communities
that include genetic strangers is possible only through a series of affective
mechanisms (McAndrew 2002; Burkart et al. 2009). Even before the age
of 2, toddlers derive more happiness from giving treats to others than from
receiving treats themselves, and find it more emotionally rewarding when
giving is costly, that is, when they give away their own treats rather than a
treat that was found or given to them (Aknin et al. 2012). Similarly, adults
derive greater happiness from spending money on others than spending on
themselves (Aknin et al. 2013; Dunn et al. 2008). Finally, the most prosocial
individuals are the least motivated by the pursuit of status among peers (Willer
et al. 2012).
Although engagement in prosocial behavior may not be necessarily
motivated by the pursuit of status, prosocial others are nonetheless often
preferred and, as a consequence, ascribed greater status. Infants as young as 6
months show a preference for characters who help others over mean or neutral
characters (Hamlin et al. 2007). Later on, adults tend to affiliate with kind
rather than attractive others when they find themselves in stressful situations
(Li et al. 2008), and usually prefer morally virtuous others as potential mates
(Miller 2007). Evidence also suggests that individuals who build a reputation
as generous by giving more in a public goods game are more likely to be chosen
as a partner in subsequent games, as well as to receive social rewards (i.e.,
honor), than those who do not (Dewitte and Cremer 2004; Sylwester and
Roberts 2010). In other words, displays of altruistic behavior signal one's moral
quality and desirability as a potential partner and thus induce the tendency in
others to praise and affiliate (Miller 2007; Roberts 1998).
Because of the individual benefits of being generous, people may also
behave altruistically to improve their own reputation. This strategic route
to prosociality has been widely documented. When reputational concerns
are at stake (for example, when all participants have access to individual
contributions in an economic game), people behave more prosocially (Barclay
and Willer 2007; Hardy and Van Vugt 2006). Similarly, after having been
primed with status motives, individuals are more likely to purchase products
that benefit the environment (Griskevicius et al. 2010). Even minimal cues of
being observed and therefore evaluated by others, such as images of eyes in
one's surroundings, lead to more prosocial choices in economic games and
greater charitable giving (Bateson et al. 2006; Bereczkei et al. 2007; Haley and
Fessler 2005). Thus, by displaying prosocial behavior when one is likely to be
seen, one enhances the chances of receiving status benefits (e.g., Hardy and
Van Vugt 2006).
From an individual perspective, however, all members of a group wish to
maintain their reputational status and optimize their chances of being chosen as
future cooperation partners. In this context, others' exemplary moral behavior
represents an increase in the standard for prosociality and challenges those
who observe it to display equally costly prosocial behavior to upregulate and
protect their moral status (Fessler and Haley 2003). In other words, witnessing
somebody else acting prosocially might be threatening because it raises the
bar for others. Therefore, observers defuse the threat imposed by the morally
superior other by engaging in prosocial acts that would strategically improve
their own status. Alternatively, they may try to reduce the standard for prosocial
behavior by derogating the virtuous other or excluding him or her from the
group (Monin 2007). In any of these cases, witnessing others' generosity can
possibly lead to negative emotions such as fear and envy, followed by efforts to
regulate one's own status in a group.

The half empty glass of kindness: When virtuous others are suspected and derogated

The extensive literature on social comparisons addresses a range of reactions to
successful others. In general, if the individual feels outperformed by a similar
other and perceives the other's success as unachievable, the social comparison
leads to contrastive reactions. Such reactions include self-deflection and the
activation of self-defensive strategies, for example, feelings of aversion toward
outperformers and derogation or punishment of them (Fiske 2011; Salovey
and Rodin 1984). Even though this literature has mainly focused on people's
reactions to highly skillful or competent individuals, analogous reactions have
been reported in response to exceedingly virtuous others (e.g., Monin 2007).
To start with, there seems to be a general asymmetry in how individuals
react to seemingly selfless versus selfish acts. A change from being moral to
immoral is more notable to others than a change from being immoral to moral
(Reeder and Coovert 1986). Similarly, after reflecting on a prosocial deed,
participants voluntarily generate selfish reasons for the act, whereas they rarely
seek altruistic reasons for a selfish act (Critcher and Dunning 2011). Further,
individuals tend to underestimate the likelihood that others would respond to
a request for help (Flynn and Lake 2008) and to underrate the trustworthiness
of their counterparts in economic games (Fetchenhauer and Dunning 2010).
This asymmetry makes functional sense, since the costs associated with
trusting an immoral character are greater than those associated with falsely
suspecting a good person.
This general cynicism surrounding other people's moral actions does not
necessarily imply any self-threat. However, individuals do seem to derogate
virtuous others as a result of an unfavorable comparison in the moral domain,
possibly as a response to a sharp rise in the standard for prosocial behavior
that threatens an observer's reputation among group members. If that is
the case, these reactions would be more likely to occur in the following
contexts: (a) when another person's behavior is appraised from an individual,
competitive perspective, and therefore contrasted with the observer's own
morals; (b) when the virtuous other and the observer compete for the same
cooperation partners; (c) when reputational benefits are at stake; and (d) when
the other's behavior is unattainable by the observer.
Initial research on do-gooder derogation supports these predictions.
Monin, Sawyer, and Marques (2008) observed that participants who were
already engaged in a negatively perceived experimental activity, such as a
counter-attitudinal speech or a racist task, disliked individuals who refused
to participate in the same tasks. Non-involved observers either preferred
moral rebels to obedient others (Study 1) or liked and respected them equally
(Studies 2 and 3), indicating that only participants who had performed the
negative activity themselves were motivated to derogate others. Moreover,
whereas non-involved observers judged the rebel as more moral than the
obedient other, threatened individuals evaluated the obedient person as just as
moral as the moral rebel. These results show that whereas a non-threatening
prosocial other triggers tendencies to affiliate and praise, exposure to a similar
other who does a good deed for which one has missed one's own opportunity
leads to derogation and cynicism.
This pattern applies to meat-eaters' reactions to vegetarians. Minson and
Monin (2012) asked participants to indicate whether they themselves were
vegetarians or not, to report whether they thought vegetarians felt morally
superior, and to freely list three words that came to mind when thinking of
vegetarians. Meat-eaters expected vegetarians to feel morally superior to
themselves and to non-vegetarians in general, and nearly half of them listed at
least one negative quality, generally referring to negative traits (e.g., arrogant,
weird, self-righteous, opinionated). Indeed, the more they expected vegetarians
to feel morally superior, the more negative words participants listed. In a
second study, participants who were first asked to rate how they would be
seen by vegetarians rated vegetarians more negatively than those
who were not primed with the threat of being morally judged. These studies are
a compelling demonstration of how engaging in a stark comparison between
oneself and a morally superior other triggers defensive reactions, which may
serve to regulate one's own sense of morality.
Another defensive reaction consists of expressing a desire to expel
excessively generous members from a cooperative group. Fair participants,
who receive proportional rewards for their contributions, are significantly
more popular than both extremely benevolent and selfish participants (Parks
and Stone 2010). Unusually benevolent members are ironically
rated just as unfavorably as selfish members in terms of the extent to which
they should be allowed to remain in the group. When asked to explain the
desire to expel unselfish individuals from the group, 58 percent of participants
used comparative reasons (e.g., "people would ask why we can't be like him").
These findings suggest that people attempt to reduce the standard for prosocial
behavior when reputational demands are present and members compete for
cooperation partners.
Further, large-scale cross-cultural evidence for the existence of punishment
of prosocial others was provided by Herrmann et al. (2008). Participants from
16 countries participated in an economic game in groups of four. They were
given tokens to contribute to a group project, and contributions were distributed
equally among partners. After each round, players could punish other players
by taking tokens away from them. Beyond the well-known punishment of
selfish participants, the authors observed that participants across the world also
punished those who were more prosocial than themselves. Unlike altruistic
punishment, levels of antisocial punishment were highly variable across
communities, and greatly influenced by economic and cultural backgrounds:
more egalitarian societies, with high levels of trust, high GDP per capita,
strong norms of civic cooperation, and a well-functioning democracy, were the
least likely to punish virtuous others.
These results as a whole indicate that in conditions of high competition
and demand for reputation maintenance, individuals tend to react defensively
to virtuous others. These reactions assume a number of configurations that
include attributing negative traits to virtuous others, punishing or excluding
them from the group, and denying the ethical value of their acts. The exclusion
and derogation of the extremely generous members of one's group might be
effective in regulating one's moral reputation by decreasing the competitive
and comparative standard for prosocial behavior to a less costly level. Such
contrastive reactions may also help regulate the stability of one's group morality
and establish achievable norms of prosocial behavior. When one derogates
another person's moral status, it reduces the standard for prosocial behavior
for all members of the group.
We have so far discussed negative and defensive reactions that arise from
unfavorable social comparisons in the moral domain. There are, however,
circumstances under which negative reactions take place for reasons other than
comparative ones. One example is when exceedingly benevolent behavior by
an ingroup member is interpreted as deviant from the group norm. Previous
research has shown that ingroup members are highly sensitive to behavior that
differs from the norms set for members of one's ingroup, and so derogate such
behavior in an attempt to maintain group cohesiveness (Marques et al. 1988;
Abrams et al. 2000). In fact, in Parks and Stone's (2010) study, 35 percent of
the participants did use normative reasons to justify their desire to expel the
selfless member of the group (e.g., "He's too different from the rest of us").
The authors suggest that this shows a desire for equality of participation even
from participants who are willing to give more, as well as a resistance against
changing the group norm in an undesirable direction.
A second example is when either the prosocial act benefits an outgroup but
provides an ingroup disadvantage, or the other's behavior is not considered
virtuous from the perspective of the observer. For example, a teacher who
delivers chastity-based sex education at school may be considered virtuous
by some people, but as violating teenagers' freedom of conscience by others.
People's moral values vary (Graham et al. 2009; Schwartz 2006) and so their
emotional reactions to actions that either support or undermine different
values should vary as well. In these cases, derogating the virtuous other is
not a reaction to a threat to one's reputation but rather to a threat to one's
personal or political interests.

A half full glass of kindness: When virtuous others become partners, saints, and heroes

On the flip side, others' generosity can trigger positive reactions that include
feelings of respect and admiration. Haidt (2000, 2003) employed the term
elevation to refer to this warm, uplifting feeling that people experience when
they see "unexpected acts of human goodness, kindness, and compassion"
(Haidt 2000, p. 1). Elevation is generally associated with feelings of warmth in
the chest and feeling a lump in the throat. The distinctive appraisal, physical
sensations, and motivations related to elevation differentiate it from happiness
and other positive moral emotions, such as gratitude or admiration for skill
(Algoe and Haidt 2009).
To date, the most remarkable evidence in this field has been a positive
relationship between elevation and prosocial behavior. Participants exposed
to stories showing expressions of forgiveness or gratitude are more likely to
donate money for charity (Freeman et al. 2009), or volunteer for an unpaid
study, and spend time helping an experimenter by completing a tiresome
task, compared to neutral or mirth-inducing conditions (Schnall et al. 2010).
Importantly, the more participants report feelings relating to elevation, such
as warmth in the chest and optimism about humanity, the more time they
engage in helping behavior (Schnall et al. 2010). Similar effects have been
observed in real-life settings outside of the laboratory. Employees who evaluate
their boss as highly fair and likely to self-sacrifice report greater feelings of
elevation, and are in turn more likely to show organizational citizenship and
affective commitment (Vianello et al. 2010). Further, self-reported frequency
of feelings of elevation during a volunteering service trip predicts trip-specific
volunteerism 3 months later. This effect holds above and beyond the effect of
personality traits, such as empathy, extroversion, and openness to experience
(Cox 2010).
Although the emotional correlates of these prosocial initiatives substantially
differ from those that arise from strategic prosocial behavior, this does not rule
out the possibility that they represent a reaction to a threatening comparison.
In other words, after being confronted with somebody more moral than
they are, participants may have felt motivated to act prosocially in order to
restore their self-worth and moral reputation. Recent evidence, however, has
convinced us otherwise. We have found, for example, that individuals who feel
elevation after exposure to morally upstanding others rarely report feelings of
envy or engage in contrastive comparisons between their moral qualities and
those of the protagonist. Rather, they often justify the magnificence of the
other person's act by referring to general standards (e.g., "what she did was a
very sort of selfless act. I don't think many people would have chosen to do
that"), suggesting little self-threat (Pavarini et al. 2013).
In another study, we explored the effects of self-affirmation before the
exposure to an elevating video clip and the opportunity to engage in helping
(Schnall and Roper 2012). Previous research has shown that self-affirmation
can reduce defensive responses to self-threatening information (McQueen
and Klein 2006; Sherman and Cohen 2006). Therefore, we predicted that being
reminded of one's qualities would make participants more receptive to being
inspired by a virtuous other, increasing prosocial responding. Indeed, our
results suggested that participants who affirmed their personal qualities before
watching an uplifting clip engaged in more helping behavior than participants
who self-affirmed before watching a neutral clip. Further, those who specifically
affirmed moral self-qualities showed the highest level of helping, more than
participants in the elevation condition who affirmed a more selfish value or no
value at all. Affirming one's prosocial qualities possibly reminded participants
of their core values, as well as their ability to do good. The exposure to a
prosocial other under these conditions had an empowering effect on prosocial
responding.
These results suggest that prosocial motives that accompany elevation
are not a reaction to self-threat; rather, they seem to be grounded in moral,
self-transcendent values. Indeed, elevating stories lead participants to
spontaneously declare their values and beliefs (Pavarini et al. 2013) and, from
a neurological perspective, uplifted participants show activation of brain areas
(e.g., posterior/inferior sectors of the posteromedial cortex) implicated in high-
level consciousness and moral reasoning (Englander et al. 2012; Immordino-
Yang et al. 2009; Immordino-Yang and Sylvan 2010). Furthermore, individuals
for whom moral values are highly self-defining report stronger feelings of
elevation after reading an uplifting story than people for whom moral values
are less central. These feelings, in turn, predict greater engagement in prosocial
behavior (Aquino et al. 2011).
The strong self-transcendent nature of elevation helps illuminate its
functions. We propose that positive emotional reactions to others' kindness
facilitate two processes that are inherent to human sociality: humans' tendency
to affiliate with prosocial others (Li et al. 2008; Hamlin et al. 2007), and
people's tendency to provide social rewards to generous third parties (Willer
2009). Thus, feeling moved and uplifted in the face of virtuous others plays two
distinct, yet interconnected roles. First, it supports the formation of strongly
bonded dyads and cohesive cooperative communities. Second, it enables
ascriptions of status in cooperative communities.
If elevation is about identifying truly prosocial partners, one would expect
it to be modulated by the extent to which others' actions signal genuine moral
qualities. The witnessed prosocial deed would then lead to a positive shift
in one's perceptions of the virtuous other. There is initial support for these
predictions. Participants who are asked to remember a highly costly witnessed
prosocial deed report higher feelings of elevation and stronger prosocial
motives than those who recall a good deed requiring minimal effort (Thomson
and Siegel 2013). In turn, witnessing such actions changes observers' attitudes
toward the agent. After recalling a situation when somebody acted virtuously,
41 percent of the participants report having identified positive qualities of the
other person and 48 percent express a desire for affiliation with him or her
(Algoe and Haidt 2009). Spontaneous verbal reports during the experience
of elevation typify these motivations, such as "he is just extremely selfless!",
"it's almost like you wanna go give them a pat on the back or like hug them.
Be like, 'you're such an awesome person' sort-of-hug," and "although I have never
seen this guy as more than just a friend, I felt a hint of romantic feeling for him
at this moment" (Haidt 2000; Immordino-Yang, personal communication).
Beyond a desire for affiliation, moral elevation is accompanied by feelings
of respect and a tendency to praise the virtuous other. Forty-eight percent
of participants who recall a virtuous act report having gained respect for
the virtuous other (Algoe and Haidt 2009), and spontaneous verbal reports
also suggest a tendency to enhance the persons status (e.g., I felt like telling
everyone about his good deed; Haidt 2000). Outside the emotion literature,
peoples tendency to ascribe rewards and status to virtuous others has been
widely documented (e.g., Hardy and Van Vugt 2006; Milinski etal. 2000; Willer
2009). Willer (2009), for example, observed that both players and observers of
an economic game rate generous participants as more prestigious, honorable,
and respected than low-contributing participants. Further, when given the
opportunity to freely allocate $3 between themselves and a game partner,
benevolent others are given a greater share than those who had contributed less
in the previous game. Moral elevation may be the emotional underpinning of
a tendency to ascribe social and material rewards to highly prosocial members
of a group.
It is important, however, to differentiate the prestige attributed to prosocial
members from related constructs such as power, authority, and dominance.
A prestigious other is defined as somebody who is respected and listened
to, normally due to socially desirable skills, whereas dominance implies use
of intimidation and coercion (Henrich and Gil-White 2001). Thus, positive
emotional reactions to acts of virtue possibly support attributions of prestige,
but do not necessarily lead to attributions of dominance.
Strong and recurring feelings of elevation, respect, and honor toward virtuous
others may transform them into role models or heroes. There is evidence that
virtuous others are indeed viewed as role models for children and youths
(Bucher 1998) and their stories of bravery, courage, and compassion used as a
tool to foster moral development (e.g., Conle 2007; Puka 1990). After exposure
to prosocial members of their own community, students generally report
having identified positive qualities of the local hero and mention feelings
of inspiration (e.g., "he can deal with everything so well"; "I think everyone
should be told this. It touched me so much"; Conle and Boone 2008, pp. 32–3).
In a similar manner, adults frequently perceive certain cultural heroes as images
of their ideal selves (Sullivan and Venter 2005) and occasionally experience
strong feelings of awe toward them (Keltner and Haidt 2003).
In this context, moral elevation seems to serve as an affective cue that
positively modifies one's views of the observed other. As Haidt and Algoe
(2004) argue, individuals tend to rank social targets along a vertical dimension
of morality with goodness on top and badness on the bottom. This vertical
hierarchy helps individuals understand and organize their moral world. Saints
(who are normally regarded as virtuous) are perceived as more sacred than
humans (Brandt and Reyna 2010). In this context, feelings of moral elevation
may facilitate locating a virtuous target at the top of this vertical dimension
(Haidt and Algoe 2004; also see Brandt and Reyna 2011). The meaning of being
ascribed a top position in this chain of being is still unclear, but it possibly
implies greater chances of being chosen as a cooperative partner, being treated
well, and given an influential position in a group.
From a broad perspective, experiences of moral elevation may signal safety
in the social domain. Most individuals wish to live in a community that values
and respects cooperative norms (e.g., kindness, reciprocity, honesty; Brown
1991). As suggested by Haidt (2006), feelings of elevation may signal that the
current environment involves generosity. Individuals might therefore appraise
it as a safe and reliable place for prosocial exchanges and bonding. In fact,
moral elevation has been shown to have a relaxing effect; individuals feeling
uplifted report muscle relaxation (Algoe and Haidt 2009) and nursing mothers
are more likely to release milk after watching an inspiring clip than an amusing
video (Silvers and Haidt 2008). This association with milk letdown suggests
a possible involvement of oxytocin, a hormone implicated in relaxing states
(Uvnas-Moberg et al. 2005), prosocial orientation (Israel et al. 2009; Zak et al.
2007), and social trust (Zak et al. 2005). In short, feelings of moral elevation
may support a shift from a competitive mindset to thinking and acting in a
safe cooperative environment as a dyad or community.
Elevation is a state of self-transcendence and social focus. Yet, as we have
attempted to demonstrate, different flavors of elevation lead to different
motivational tendencies. When blended with love and compassion, feeling
uplifted may emerge as an affective element of caregiving, mating, and general
affiliation systems. When blended with respect and awe, elevation may support
attributions of prestige and status to generous members of a group. These
different reactions may depend on by whom and in which circumstances the
good deed is performed (for example, whether the agent represents a potential
cooperative partner or a more distant other). Such differential effects deserve
further investigation. Beneath all these particularities, however, lies the
remarkable observation that simply watching other people's good deeds can
be emotionally rewarding, and that this affective reaction crucially sustains
human prosociality.

Concluding remarks

Individuals are strongly affected by witnessing the kindness of strangers;
however, these reactions can be either positive or negative. We have attempted
here to shed light on some of the conditions under which one denigrates or
elevates a virtuous other. In general, positive reactions take place in conditions
of little self-threat, when individuals adopt a more collective mindset and
focus on their values and identity. In competitive situations, though, where
the virtuous other and the observer compete for the same resources and future
interaction partners, defensive reactions become more likely.
Research on morality has clarified how a person's moral actions are affected
by either self-regulatory or identity concerns. On the one hand, there is
evidence that people self-regulate their morality over time. After doing a bad
deed, individuals tend to do good deeds in order to regain their moral worth,
and when they are reminded of positive self-qualities, they feel licensed
to act immorally (Merritt et al. 2010; Sachdeva et al. 2009). However, when
individuals are reminded of positive self-qualities in an abstract manner (e.g.,
a good deed in the past), identity concerns are activated and they tend to act
in a consistent instead of compensatory fashion, that is, engaging in further
prosocial behavior (Conway and Peetz 2012).
The present chapter expands this literature by exploring how analogous
processes take place in a dyadic or group context. Exposure to a virtuous other
may lead people to engage in contrastive comparisons and regulatory behavior
(e.g., derogating the person's intentions), or it may lead to the activation of moral
values that are central to one's identity, inspiring individuals to put these values
into action. Interestingly, although these are individual processes, such reactions
may influence how the virtuous other reacts and may ultimately regulate a
group's morality. For example, rewarding others' generosity may encourage them to
engage in further prosocial acts, and help to establish stronger prosocial bonds,
whereas derogating their intentions may prevent an increase in the general
standard for prosocial behavior for all members of the group.
The capacity to evaluate and react emotionally to others' moral behavior is
essential for navigating the social world. Such evaluations help observers to
identify who may help them and who might be a foe. Yet, prosocial others are
not always judged positively. Our review suggests that reactions to expressions
of uncommon goodness vary strikingly. The glass of kindness can be
perceived as half empty or half full, depending on whether the act is appraised
from a competitive or cooperative mindset, and on whether the virtuous other
is seen as a suitable social partner, a potential leader, or a possible rival.

Note

* Authors' Note: Gabriela Pavarini and Simone Schnall, Department of Psychology,
University of Cambridge. The preparation of this chapter was supported by ESRC
Grant RES-000-22-4453 to S. S. Correspondence: Simone Schnall, University of
Cambridge, Department of Psychology, Downing Street, Cambridge, CB2 3EB,
Email: ss877@cam.ac.uk.

References

Abrams, D., Marques, J. M., Bown, N., and Henson, M. (2000). Pro-norm and anti-norm deviance within and between groups. Journal of Personality and Social Psychology, 78, 906–12. doi:10.1037//0022-3514.78.5.906
Aknin, L. B., Hamlin, J. K., and Dunn, E. W. (2012). Giving leads to happiness in young children. PLoS ONE, 7, e39211. doi:10.1371/journal.pone.0039211
Aknin, L. B., Barrington-Leigh, C. P., Dunn, E. W., Helliwell, J. F., Burns, J., Biswas-Diener, R., Kemeza, I., Nyende, P., Ashton-James, C. E., and Norton, M. I. (2013). Prosocial spending and well-being: Cross-cultural evidence for a psychological universal. Journal of Personality and Social Psychology, 104, 635–52. doi:10.1037/a0031578
Algoe, S., and Haidt, J. (2009). Witnessing excellence in action: The other-praising emotions of elevation, admiration, and gratitude. Journal of Positive Psychology, 4, 105–27. doi:10.1080/17439760802650519
Aquino, K., McFerran, B., and Laven, M. (2011). Moral identity and the experience of moral elevation in response to acts of uncommon goodness. Journal of Personality and Social Psychology, 100, 703–18. doi:10.1037/a0022540
Barclay, P., and Willer, R. (2007). Partner choice creates competitive altruism in humans. Proceedings of the Royal Society of London, Series B, 274, 749–53. doi:10.1098/rspb.2006.0209
Bateson, M., Nettle, D., and Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2, 412–14. doi:10.1098/rsbl.2006.0509
Becker, S. W., and Eagly, A. H. (2004). The heroism of women and men. American Psychologist, 59, 163–78. doi:10.1037/0003-066X.59.3.163
Bereczkei, T., Birkas, B., and Kerekes, Zs. (2007). Public charity offer as a proximate factor of evolved reputation-building strategy: An experimental analysis of a real life situation. Evolution and Human Behavior, 28, 277–84. doi:10.1016/j.evolhumbehav.2007.04.002
Brandt, M. J., and Reyna, C. (June 2010). Beyond infra-humanization: The perception of human groups, the self, and supernatural entities as more or less than human. Poster presented at the annual meeting of the Society for Personality and Social Psychology, Las Vegas, NV.
Brandt, M. J., and Reyna, C. (2011). The chain of being: A hierarchy of morality. Perspectives on Psychological Science, 6, 428–46. doi:10.1177/1745691611414587
Brown, D. E. (1991). Human universals. New York: McGraw-Hill.
Bucher, A. A. (1998). The influence of models in forming moral identity. International Journal of Educational Research, 27, 619–27. doi:10.1016/S0883-0355(97)00058-X
Burkart, J. M., Hrdy, S. B., and van Schaik, C. P. (2009). Cooperative breeding and human cognitive evolution. Evolutionary Anthropology, 18, 175–86. doi:10.1002/evan.20222
Conle, C. (2007). Moral qualities of experiential narratives. Journal of Curriculum Studies, 39, 11–34. doi:10.1111/j.1467-873X.2007.00396.x
Conle, C., and Boone, A. (2008). Local heroes, narrative worlds and the imagination: The making of a moral curriculum through experiential narratives. Curriculum Inquiry, 38, 7–37. doi:10.1111/j.1467-873X.2007.00396.x
Conway, P., and Peetz, J. (2012). When does feeling moral actually make you a better person? Conceptual abstraction moderates whether past moral deeds motivate consistency or compensatory behavior. Personality and Social Psychology Bulletin, 6, 907–19. doi:10.1177/0146167212442394
Cox, K. S. (2010). Elevation predicts domain-specific volunteerism 3 months later.
Journal of Positive Psychology, 5, 333–41. doi:10.1080/17439760.2010.507468
Critcher, C. R., and Dunning, D. (2011). No good deed goes unquestioned: Cynical
reconstruals maintain belief in the power of self-interest. Journal of Experimental
Social Psychology, 47, 1207–13. doi:10.1016/j.jesp.2011.05.001
Dewitte, S., and Cremer, D. (2004). Give and you shall receive. Give more and you shall
be honored. Experimental evidence for altruism as costly signaling. Working Paper,
Department of Applied Economics 0458, University of Leuven, Belgium.
Dunn, E. W., Aknin, L. B., and Norton, M. I. (2008). Spending money on others
promotes happiness. Science, 319, 1687–8. doi:10.1126/science.1150952
Englander, Z. A., Haidt, J., and Morris, J. P. (2012). Neural basis of moral elevation
demonstrated through inter-subject synchronization of cortical activity during
free-viewing. PLoS ONE, 7, e39384. doi:10.1371/journal.pone.0039384
Fessler, D. M. T., and Haley, K. J. (2003). The strategy of affect: Emotions in human
cooperation. In P. Hammerstein (ed.), The Genetic and Cultural Evolution of
Cooperation. Cambridge, MA: MIT Press, pp. 7–36.
Fetchenhauer, D., and Dunning, D. (2010). Why so cynical? Asymmetric feedback
underlies misguided skepticism in the trustworthiness of others. Psychological
Science, 21, 189–93. doi:10.1177/0956797609358586
Fiske, S. (2011). Envy up, scorn down: How status divides us. New York: Russell Sage
Foundation.
Flynn, F. J., and Bohns Lake, V. K. (2008). "If you need help, just ask": Underestimating
compliance with direct requests for help. Journal of Personality and Social
Psychology, 95, 128–43. doi:10.1037/0022-3514.95.1.128
Freeman, D., Aquino, K., and McFerran, B. (2009). Overcoming beneficiary race
as an impediment to charitable donation: Social Dominance Orientation, the
experience of moral elevation, and donation behavior. Personality and Social
Psychology Bulletin, 35, 72–84. doi:10.1177/0146167208325415
Graham, J., Haidt, J., and Nosek, B. A. (2009). Liberals and conservatives rely on
different sets of moral foundations. Journal of Personality and Social Psychology,
96, 1029–46. doi:10.1037/a0015141
Griskevicius, V., Tybur, J. M., and Van den Bergh, B. (2010). Going green to be seen:
Status, reputation, and conspicuous conservation. Journal of Personality and Social
Psychology, 98, 392–404. doi:10.1037/a0017346
Haidt, J. (2000). The positive emotion of elevation. Prevention & Treatment, 3, 1–4.
doi:10.1037//1522-3736.3.1.33c
Haidt, J. (2003). Elevation and the positive psychology of morality. In C. L. M. Keyes and
J. Haidt (eds), Flourishing: Positive Psychology and the Life Well-lived. Washington,
DC: American Psychological Association, pp. 275–89.
Haidt, J. (2006). The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom.
New York: Basic Books.
Haidt, J., and Algoe, S. (2004). Moral amplification and the emotions that attach us
to saints and demons. In J. Greenberg, S. L. Koole, and T. A. Pyszczynski (eds),
Handbook of Experimental Existential Psychology. New York: Guilford, pp. 322–35.
Halevy, N., Chou, E. Y., Cohen, T. R., and Livingston, R. W. (2012). Status conferral
in intergroup social dilemmas: Behavioral antecedents and consequences of
prestige and dominance. Journal of Personality and Social Psychology, 102, 351–66.
doi:10.1037/a0025515
Haley, K. J., and Fessler, D. M. T. (2005). Nobody's watching? Subtle cues affect
generosity in an anonymous economic game. Evolution and Human Behavior, 26,
245–56. doi:10.1016/j.evolhumbehav.2005.01.002
Hamlin, J. K., Wynn, K., and Bloom, P. (2007). Social evaluation by preverbal infants.
Nature, 450, 557–9. doi:10.1038/nature06288
Hardy, C., and Van Vugt, M. (2006). Nice guys finish first: The competitive
altruism hypothesis. Personality and Social Psychology Bulletin, 32, 1402–13.
doi:10.1177/0146167206291006
Henrich, J., and Gil-White, F. J. (2001). The evolution of prestige: Freely conferred
status as a mechanism for enhancing the benefits of cultural transmission.
Evolution and Human Behaviour, 22, 165–96. doi:10.1016/S1090-5138(00)00071-4
Herrmann, B., Thöni, C., and Gächter, S. (2008). Antisocial punishment across
societies. Science, 319, 1362–7. doi:10.1126/science.1153808
Immordino-Yang, M. H., and Sylvan, L. (2010). Admiration for virtue:
Neuroscientific perspectives on a motivating emotion. Contemporary Educational
Psychology, 35, 110–15. doi:10.1016/j.cedpsych.2010.03.003
Immordino-Yang, M. H., McColl, A., Damasio, H., and Damasio, A. (2009). Neural
correlates of admiration and compassion. Proceedings of the National Academy of
Sciences USA, 106, 8021–6. doi:10.1073/pnas.0810363106
Israel, S., Lerer, E., Shalev, I., Uzefovsky, F., Riebold, M., Laiba, E., Bachner-Melman,
R., Maril, A., Bornstein, G., Knafo, A., and Ebstein, R. P. (2009). The oxytocin
receptor (OXTR) contributes to prosocial fund allocations in the dictator game
and the social value orientations task. PLoS ONE, 4, e5535. doi:10.1371/journal.
pone.0005535
Keltner, D., and Haidt, J. (2003). Approaching awe, a moral, spiritual, and aesthetic
emotion. Cognition and Emotion, 17, 297–314. doi:10.1080/02699930302297
Li, N. P., Halterman, R. A., Cason, M. J., Knight, G. P., and Maner, J. K. (2008). The
stress-affiliation paradigm revisited: Do people prefer the kindness of strangers or
their attractiveness? Journal of Personality and Individual Differences, 44, 382–91.
doi: 10.1016/j.paid.2007.08.017
Marques, J. M., Yzerbyt, V. Y., and Leyens, J. P. (1988). The black sheep effect: Extremity
of judgments towards ingroup members as a function of group identification.
European Journal of Social Psychology, 18, 1–16. doi:10.1002/ejsp.2420180102
McAndrew, F. T. (2002). New evolutionary perspectives on altruism: Multilevel-
selection and costly-signaling theories. Current Directions in Psychological Science,
11, 79–82. doi:10.1111/1467-8721.00173
McQueen, A., and Klein, W. M. P. (2006). Experimental manipulations of self-affirmation:
A systematic review. Self and Identity, 5, 289–354. doi:10.1080/15298860600805325
Merritt, A. C., Effron, D. A., and Monin, B. (2010). Moral self-licensing: When being
good frees us to be bad. Social and Personality Psychology Compass, 4/5, 344–57.
doi:10.1111/j.1751-9004.2010.00263.x
Milinski, M., Semmann, D., and Krambeck, H. (2000). Donors to charity gain in both
indirect reciprocity and political reputation. Proceedings of the Royal Society, 269,
881–83. doi:10.1098/rspb.2002.1964
Miller, G. F. (2007). Sexual selection for moral virtues. Quarterly Review of Biology,
82, 97–125. doi:10.1086/517857
Minson, J. A., and Monin, B. (2012). Do-Gooder derogation: Disparaging morally-
motivated minorities to defuse anticipated reproach. Social Psychological and
Personality Science, 3, 200–7. doi:10.1177/1948550611415695
Monin, B. (2007). Holier than me? Threatening social comparison in the moral
domain. International Review of Social Psychology, 20, 53–68.
Monin, B., Sawyer, P. J., and Marquez, M. J. (2008). The rejection of moral rebels:
Resenting those who do the right thing. Journal of Personality and Social
Psychology, 95, 76–93. doi:10.1037/0022-3514.95.1.76
Parks, C. D., and Stone, A. B. (2010). The desire to expel unselfish members from the
group. Journal of Personality and Social Psychology, 99, 303–10. doi:10.1037/a0018403
Pavarini, G., Schnall, S., and Immordino-Yang, M. H. (2013). Verbal and Nonverbal
Indicators of Psychological Distance in Moral Elevation and Admiration for Skill.
Manuscript in preparation.
Puka, B. (1990). Be Your Own Hero: Careers in Commitment. Troy, NY: Rensselaer
Polytechnic Institute.
Reeder, G. D., and Coovert, M. D. (1986). Revising an impression of morality. Social
Cognition, 4, 1–17. doi:10.1521/soco.1986.4.1.1
Roberts, G. (1998). Competitive altruism: From reciprocity to the handicap principle.
Proceedings of the Royal Society of London, Series B: Biological Sciences, 265,
427–31. doi:10.1098/rspb.1998.0312
Sachdeva, S., Iliev, R., and Medin, D. L. (2009). Sinning saints and saintly sinners: The
paradox of moral self-regulation. Psychological Science, 20, 523–8. doi:10.1111/
j.1467-9280.2009.02326.x
Salovey, P., and Rodin, J. (1984). Some antecedents and consequences of social-
comparison jealousy. Journal of Personality and Social Psychology, 47, 780–92.
doi:10.1037//0022-3514.47.4.780
Schnall, S., and Roper, J. (2012). Elevation puts moral values into action. Social
Psychological and Personality Science, 3, 373–8. doi:10.1177/1948550611423595
Schnall, S., Roper, J., and Fessler, D. (2010). Elevation leads to altruistic behavior.
Psychological Science, 21, 315–20. doi:10.1177/0956797609359882
Schwartz, S. H. (2006). A theory of cultural value orientations: Explication and
applications. Comparative Sociology, 5, 136–82. doi:10.1163/156913306778667357
Sherman, D. K., and Cohen, G. L. (2006). The psychology of self-defense: Self-
affirmation theory. In M. P. Zanna (ed.), Advances in Experimental Social
Psychology (Vol. 38). San Diego, CA: Academic Press, pp. 183–242.
Silvers, J., and Haidt, J. D. (2008). Moral elevation can induce nursing. Emotion, 8,
291–5. doi:10.1037/1528-3542.8.2.291
Singh, G. B. (2004). Gandhi: Behind the Mask of Divinity. New York: Prometheus.
Sullivan, M. P., and Venter, A. (2005). The hero within: Inclusion of heroes into the
self. Self and Identity, 4, 101–11. doi:10.1080/13576500444000191
Sylwester, K., and Roberts, G. (2010). Cooperators benefit through reputation-
based partner choice in economic games. Biology Letters, 6, 659–62. doi:10.1098/
rsbl.2010.0209
Thomson, A. L., and Siegel, J. T. (2013). A moral act, elevation, and prosocial
behavior: Moderators of morality. Journal of Positive Psychology, 8, 50–64.
doi:10.1080/17439760.2012.754926
Uvnäs-Moberg, A., Arn, I., and Magnusson, D. (2005). The psychobiology of
emotion: The role of the oxytocinergic system. International Journal of Behavioral
Medicine, 12, 59–65. doi:10.1207/s15327558ijbm1202_3
Vianello, M., Galliani, E. M., and Haidt, J. (2010). Elevation at work: The effects of
leaders' moral excellence. Journal of Positive Psychology, 5, 390–411. doi:10.1080/
17439760.2010.516764
Willer, R. (2009). Groups reward individual sacrifice: The status solution to
the collective action problem. American Sociological Review, 74, 23–43.
doi:10.1177/000312240907400102
Willer, R., Feinberg, M., Flynn, F., and Simpson, B. (2012). Is generosity sincere
or strategic? Altruism versus status-seeking in prosocial behavior. Revise and
resubmit from Journal of Personality and Social Psychology.
Zak, P. J., Kurzban, R., and Matzner, W. T. (2005). Oxytocin is associated with human
trustworthiness. Hormones and Behavior, 48, 522–27. doi:10.1016/j.yhbeh.2005.07.009
Zak, P. J., Stanton, A. A., and Ahmadi, S. (2007). Oxytocin increases generosity in
humans. PLoS ONE, 2, e1128. doi:10.1371/journal.pone.0001128
4

What are the Bearers of Virtues?


Mark Alfano*

Despite the recent hubbub over the possibility that the concepts of character
and virtue are empirically inadequate,1 researchers have only superficially
considered the fact that these concepts purport to refer to dispositional
properties.2 For the first time in this controversy, we need to take the
dispositional nature of virtue seriously. Once we do, one question immediately
arises: What are the bearers of virtues?
In this chapter, I argue for an embodied, embedded, and extended answer
to this question. It is generally hopeless to try to say what someone would do
in a given normative state of affairs without first specifying bodily and social
features of her situation. There's typically no fact of the matter, for instance,
about whether someone would help when there is sufficient reason for her to
help. However, there typically is a fact of the matter about whether someone
in a particular bodily state and social environment would help when there is
sufficient reason to help.
If that's right, it puts some pressure on agent-based theories of virtue, which
tend to claim or presume that the bearers of virtue are individual agents (Russell
2009; Slote 2001). Such theories hold that a virtue is a monadic property of
an individual agent. Furthermore, this pressure on agent-based and agent-
focused theories suggests a way of reconceiving of virtue as a triadic relation
among an agent, a social milieu, and an asocial environment (Alfano 2013).
On this relational model, the bearers of virtue are not individual agents but
ordered triples that include objects and properties outside the agent.
Here is the plan for this chapter. Section 1 summarizes the relevant literature
on dispositions. Section 2 sketches some of the relevant psychological
findings. Section 3 argues that the best response to the empirical evidence
is to revise the concept of a virtue. A virtue is not a monadic property of an
agent, but a triadic relation among an agent, a social milieu, and an asocial
environment.

Virtues as dispositional properties

The subjunctive conditional analysis


The most straightforward way to approach dispositions is through the simple
subjunctive conditional analysis:

(S-SCA) object o is disposed to activity A in condition C if and only if o
would A if C were the case (Choi and Fara 2012)

The A-term refers to the characteristic manifestation of the disposition;
the C-term refers to its stimulus conditions. To say that OxyContin is an
analgesic for humans is to attribute a dispositional property to it: OxyContin
is disposed to relieve pain when ingested by a human. This statement would
then be analyzed as: OxyContin would relieve pain if it were ingested by a
human. According to the standard semantics for subjunctive conditionals, this
analysis means that all close possible worlds at which a pained person ingests
OxyContin are worlds at which that person's pain is subsequently relieved.
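To make the quantificational structure explicit, consider a minimal sketch in Python over a finite toy set of nearby worlds (the representation and all names here are illustrative inventions, not part of the standard semantics):

```python
# Toy finite-world rendering of S-SCA: o is disposed to A in C just in
# case every nearby C-world is an A-world. Worlds are plain dicts.

def s_sca_holds(nearby_worlds, C, A):
    """Return True iff A obtains at every nearby world where C obtains."""
    return all(A(w) for w in nearby_worlds if C(w))

# OxyContin example: at each nearby world where a pained human ingests
# it, pain is subsequently relieved, so the analysis comes out true.
worlds = [
    {"ingested": True, "pain_relieved": True},
    {"ingested": True, "pain_relieved": True},
    {"ingested": False, "pain_relieved": False},
]
print(s_sca_holds(worlds,
                  C=lambda w: w["ingested"],
                  A=lambda w: w["pain_relieved"]))  # True
```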
Many of the dispositions we refer to on a regular basis, such as being
fragile, soluble, or poisonous, do not wear their manifestation and stimulus
conditions on their sleeves. Following Lewis (1997), the standard two-step
strategy for dealing with them is first to spell out these conditions, and then to
translate them into the subjunctive conditional schema. So, for example, in the
case of a fragile vase, the first step would be to spell out the A-term (chipping,
cracking, shattering) and the C-term (being struck, dropped on a hard surface,
abraded). The second step would be to slot these into the schema: "the vase
is fragile" is translated as "the vase is disposed to chip, crack, or shatter when
struck, dropped, or abraded," which is true if and only if it would chip, crack,
or shatter if it were struck, dropped, or abraded.
Furthermore, it's now recognized that most dispositions have characteristic
masks, mimics, and finks. A fragile vase might not break when struck because
its fragility is masked by protective packaging. A sugar pill can mimic a real
analgesic via the placebo effect. An electron's velocity, which it is naturally
disposed to retain, inevitably changes when it is measured: a case of
finking.3 Such possibilities are not evidence against the presence or absence
of the disposition in question; instead, they are exceptions to the subjunctive
conditional. The vase really is fragile, despite its resistance to chipping,
cracking, and shattering. The sugar pill is not really an analgesic, despite the
pain relief. The electron really is disposed to follow its inertial path, despite
Heisenbergs uncertainty principle.
Let's stipulate that finks, masks, and mimics be collectively referred to
as disrupters. Since it is possible to possess a disposition that is susceptible
to finks and masks, and to lack a disposition that is mimicked, the simple
subjunctive conditional analysis fails. In my view, the most attractive response
to the constellation of disrupters is Chois (2008) anti-disrupter SCA:

(AD-SCA) object o is disposed to activity A in condition C if and only if o
would A if C were the case and there were no disrupters present.

If the object fails to A in C when there are finks or masks present, the right-
hand side of the definition is false. For instance, if the fragile vase were encased
in protective bubble wrap, its fragility would be masked: it is not disposed to
chip, crack, or shatter when struck, dropped, or abraded. But a disrupter is
present, which means that the biconditional is still true. Likewise, if the object
As in C when a mimic is present, the right-hand side of the definition is again
false, which would not yield the undesirable conclusion that the object has the
disposition in question.
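In the same toy idiom as before (again, an illustration of the logical form rather than anyone's official semantics), AD-SCA simply filters out disrupted worlds before running the universal check:

```python
# AD-SCA as a filter: masked or finked failures no longer count against
# a disposition, and mimicked successes no longer count in favor of one.

def ad_sca_holds(nearby_worlds, C, A, disrupted):
    """True iff A obtains at every nearby undisrupted C-world."""
    return all(A(w) for w in nearby_worlds
               if C(w) and not disrupted(w))

# Bubble-wrapped vase: its only failures to shatter occur at masked
# worlds, so it still counts as fragile.
worlds = [
    {"struck": True, "shatters": False, "wrapped": True},   # masked
    {"struck": True, "shatters": True, "wrapped": False},
]
print(ad_sca_holds(worlds,
                   C=lambda w: w["struck"],
                   A=lambda w: w["shatters"],
                   disrupted=lambda w: w["wrapped"]))  # True
```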

Woulda, coulda, shoulda


The subjunctive conditional analysis, even in the sophisticated form that
allows for disrupters, expresses a very strong notion of what it takes to be
a disposition. Consider, for example, a loaded die that has a 0.5 probability
of showing ace and a 0.1 probability of showing each of the other results.
Surely, one might argue, the die is disposed to show ace, even though
there are plenty of close possible worlds at which it shows two, three, four,
five,or six.
In light of such cases, it's helpful to provide weak and comparative analyses
of dispositions. For instance, we can analyze a weak disposition as follows:

(W-AD-SCA) o is weakly disposed to A in C if and only if o could A if C were
the case and there were no disrupters present.

In the standard semantics, this means that o As at some undisrupted nearby
C-worlds. The same object can be both weakly disposed to A in C and weakly
disposed to not-A in C. This makes weak dispositions less informative than
we might like. W-AD-SCA can be supplemented with the comparative
analysis:

(C-AD-SCA) o is more disposed to A than to A* in C if and only if o is
significantly more likely to A than to A* if C were the case and no disrupters
were present.

In the standard semantics, this means that there are more nearby undisrupted
C-worlds where o As than nearby undisrupted C-worlds where o A*s.
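Continuing the toy model (the numeric margin used to cash out "significantly more likely" is stipulated purely for illustration), the weak and comparative analyses and the loaded-die case look like this:

```python
def weakly_disposed(nearby_worlds, C, A, disrupted):
    """W-AD-SCA: o As at *some* undisrupted nearby C-world."""
    return any(A(w) for w in nearby_worlds
               if C(w) and not disrupted(w))

def more_disposed(nearby_worlds, C, A, A_star, disrupted, margin=2.0):
    """C-AD-SCA: undisrupted C-worlds where o As outnumber those where
    o A*s by at least a stipulated margin."""
    relevant = [w for w in nearby_worlds if C(w) and not disrupted(w)]
    return sum(map(A, relevant)) >= margin * sum(map(A_star, relevant))

# Loaded die: ace at 5 of 10 nearby rolled-worlds, each other face at 1.
rolls = [{"face": f} for f in (1, 1, 1, 1, 1, 2, 3, 4, 5, 6)]
rolled = lambda w: True            # every toy world is a rolled-world
never_disrupted = lambda w: False  # and none is disrupted
print(weakly_disposed(rolls, rolled,
                      lambda w: w["face"] == 1, never_disrupted))  # True
print(more_disposed(rolls, rolled, lambda w: w["face"] == 1,
                    lambda w: w["face"] == 2, never_disrupted))  # True: 5 >= 2.0 * 1
```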
Which, if any, of these notions is appropriate to an analysis of virtues?
Elsewhere (Alfano 2013), I have argued for an intuitive distinction between
high-fidelity and low-fidelity virtues. High-fidelity virtues, such as honesty,
chastity, and loyalty, require near-perfect manifestation in undisrupted
conditions. For these, AD-SCA seems most appropriate. Someone only counts
as chaste if he never cheats on his partner when cheating is a temptation. Low-
fidelity virtues, such as generosity, tact, and tenacity, are not so demanding.
For them, some combination of the W-AD-SCA and C-AD-SCA seems
appropriate. Someone might count as generous if she were more disposed to
give than not to give when there was sufficient reason to do so; someone might
count as tenacious if she were more disposed to persist than not to persist in
the face of adversity.4
If this is on the right track, the analysis of virtuous dispositions adds one
additional step before Lewiss two. First, determine whether the virtue in
question is high fidelity or low fidelity. For instance, it seems reasonable to say
that helpfulness is a low-fidelity virtue whereas loyalty is a high-fidelity virtue.
Second, identify the stimulus conditions and characteristic manifestations.
The most overt manifestation of helpfulness is of course helping behavior,
but more subtle manifestations presumably include noticing opportunities to
help, having occurrent desires to help, deliberating in characteristic ways, and
forming intentions to help. The most overt manifestation of loyalty is refusal to
betray, but again there are more subtle manifestations. The stimulus condition
for helpfulness is a normative state of affairs: that there is adequate reason to
help. For loyalty, too, the stimulus condition appears to be a normative state of
affairs: that there is temptation but not sufficient reason to betray whomever
or whatever one is loyal to. Finally, the stimulus conditions and characteristic
manifestations are slotted into the relevant schema. To be helpful, then, is to be
weakly disposed to help (among other things) when there is adequate reason
to do so, whereas to be loyal is to be strongly disposed not to betray (among
other things) when there is a temptation to do so.
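One way to see how the three steps hang together is to mechanize them; in the following sketch (my own, with the low-fidelity clause rendered crudely as "manifests more often than not," and disrupted worlds assumed to be filtered out already), a virtue pairs a fidelity level with its C- and A-specifications:

```python
# Step 1 fixes the fidelity level; step 2 fixes C and A; step 3 applies
# the schema that the fidelity level selects.

def has_virtue(virtue, nearby_worlds):
    C, A = virtue["C"], virtue["A"]
    relevant = [w for w in nearby_worlds if C(w)]
    if virtue["fidelity"] == "high":   # AD-SCA: near-perfect manifestation
        return all(A(w) for w in relevant)
    # Low fidelity: more disposed to A than to not-A.
    return sum(A(w) for w in relevant) > len(relevant) / 2

loyalty = {"fidelity": "high",
           "C": lambda w: w["tempted_to_betray"],
           "A": lambda w: not w["betrays"]}
helpfulness = {"fidelity": "low",
               "C": lambda w: w["reason_to_help"],
               "A": lambda w: w["helps"]}

worlds = [{"tempted_to_betray": True, "betrays": False},
          {"tempted_to_betray": True, "betrays": False}]
print(has_virtue(loyalty, worlds))  # True
```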

The psychology of dispositions

In this section, I argue that both high-fidelity and low-fidelity virtues, as
traditionally conceived, suffer from an indeterminacy problem. There's often
no fact of the matter about whether someone would exhibit the characteristic
manifestations of a high-fidelity virtue in undisrupted stimulus conditions.
There's often no fact of the matter about whether someone is more disposed
than not to exhibit the characteristic manifestations of a low-fidelity virtue
in undisrupted stimulus conditions. This is because both bodily and social
factors partially determine whether such a disposition is manifested.
Someone might be strongly disposed to tell the truth if others think of her
as an honest person, but strongly disposed to deceive if others think of her as
a dishonest person. For such a person, it would be incorrect to say that she
would tell the truth in undisrupted stimulus conditions, but also incorrect to
say that she would lie in undisrupted stimulus conditions. However, once the
asocial and social contexts are specified, it generally will be possible to assert
true subjunctive conditionals about her.
Similarly, someone might be significantly more disposed to persist than
desist in the face of adversity if she is in a good mood, but significantly more
disposed to desist than persist if she is in a bad mood. For such a person, it
would be incorrect to say that she is weakly disposed to persist in undisrupted
stimulus conditions, but also incorrect to say that she is weakly disposed
to desist in undisrupted stimulus conditions. As before, once the asocial
and social contexts are specified, it generally will be possible to assert true
subjunctive conditionals.

Asocial situational influences


Asocial situational influences are asocial features of the immediate environment
that partially determine which subjunctive conditionals are true about how
someone will act in a given virtue's stimulus conditions. As I explore in more
detail elsewhere (Alfano 2013), two of the main varieties are ambient sensibilia
and mood modulators.
For instance, the volume of ambient sound influences both helping behavior
and physical aggressiveness (Matthews and Cannon 1975; Donnerstein and
Wilson 1976). Subtle shifts in lighting partially determine whether people cheat:
they cheat more in an almost-imperceptibly darker room and act more selfishly
when wearing shaded glasses rather than clear ones (Zhong et al. 2010).
These are just a few of literally hundreds of relevant experiments. Together,
they suggest that there are stable, though weak, connections between seemingly
morally irrelevant sensibilia and the manifestation of virtue. The connections
don't all run in the same direction, and they interact. Loud environments
don't uniformly dispose toward morally bad behavior, nor do pleasant smells
invariably dispose toward morally good behavior. Whether a given sensory
input will tend to produce good, bad, or neutral results depends on what kind
of behavior is normatively appropriate and which other sensibilia are present,
among many other things.
I should also note that these connections are not crudely behavioristic,
an accusation often unfairly leveled at situationist critics of virtue theory
(Kamtekar 2004; Kristjánsson 2008). According to behaviorism, "Behavior
can be described and explained without making ultimate reference to mental
events or to internal psychological processes" (Graham 2010). But ambient
sensibilia influence behavior in large part by modifying the agents cognitive
and motivational set. Loud noises, for instance, result in attentional focusing
(Cohen 1978), while pleasant smells induce openness to new experiences
(Baron and Thomley 1994). These internal states mediate the connection
between the asocial environment and the agents behavior.
Another primary asocial influence is the set of affect modulators, very broadly
construed to include mood elevators, mood depressors, emotion inducers, and
arousal modifiers. There are documented effects for embarrassment (Apsler
1975), guilt (Regan 1971), positive affect (Isen 1987), disgust (Schnall et al.
2008), and sexual arousal (Ariely 2008), among others. As with sensibilia, affect
modulators are connected in weak but significant ways to the manifestation
(or not) of virtue. Fair moods don't necessarily make us fair, nor do foul
moods make us foul. The valence of the effect depends on what is normatively
appropriate in the particular circumstances.
It's important to point out, furthermore, that while asocial influences
tend to have fairly predictable effects on behavioral dispositions, they by no
means explain action all by themselves. Indeed, any particular factor will
typically account for at most 16 percent of the variance in behavior (Funder
and Ozer 1983).

Social influences
In addition to the asocial influences canvassed above, there are a variety of
social influences on the manifestation of virtue. Two of the more important
are expectation confirmation and outgroup bias. In cases of expectation
confirmation, what happens is that the agent reads others' expectations
off explicit or implicit social cues, and then acts in accordance with the
expectations so read. People often enough mistake or misinterpret others'
expectations, so what they end up doing isn't necessarily what others expect,
but what they think others expect. In cases of outgroup bias, the agent
displays unwarranted favoritism toward the ingroup or prejudice toward the
outgroup. Since everyone belongs to myriad social groups, whether someone
is perceived as "in" or "out" depends on which group identities are salient at
the time (Tajfel 1970, 1981); hence, the perceived social distance of a given
person will vary over time with seemingly irrelevant changes in the salience
of various group identities. In this section, I have room to discuss only social
expectations.
Much of the groundbreaking social psychology of the second half of the
twentieth century investigated the power of expectation confirmation. The
most dramatic demonstration was of course the Milgram paradigm (1974),
in which roughly two-thirds of participants were induced to put what they
thought was 450 volts through another participant (actually an actor who was
in on the experiment) three times in a row. While there were many important
features of this study, the key upshot was that the participants were willing to
do what they should easily have recognized was deeply immoral based on the
say-so of a purported authority figure. They performed exactly as expected.
Blass (1999) shows in a meta-analysis that Milgram's results were no fluke: they
have been replicated all around the world with populations of diverse age,
gender, and education level.
Another example of the power of social expectations is the large literature
on bystander apathy (Darley and Latané 1968; Latané and Nida 1981). It turns
out that the more bystanders are present in an emergency situation, the lower
the chances that even one of them will intervene. What seems to happen in
such cases is that people scan others' immediate reactions to help themselves
determine what to do. When they see no one else reacting, they decide not
to intervene either; thus, everyone interprets everyone else's moment of
deliberation as a decision not to intervene.
Reading off others' expectations and acting accordingly doesn't always lead
to bad outcomes. Recent work on social proof shows that the normative valence
of acting in accordance with expectations depends on what is expected. For
instance, guests at a hotel are 40 percent more likely to conserve water by not
asking for their towels to be washed if they read a message that says, "75%
of the guests who stayed in this room participated in our resource savings
program by using their towels more than once" than one that says, "You can
show respect for nature and help save the environment by reusing towels
during your stay" (Goldstein et al. 2008).
Psychologists and behavioral economists have also investigated the effect
of subtle, thoroughly embodied, social distance cues on moral behavior. In a
string of fascinating studies, it's been shown that people are much more willing
to share financial resources (Burnham 2003; Burnham and Hare 2007; Rigdon
et al. 2009), less inclined to steal (Bateson et al. 2006), and less disposed to
litter (Ernest-Jones et al. 2011) when they are watched by a representation
of a face. The face can be anything from a picture of the beneficiary of their
behavior to a cartoon robot's head to three black dots arranged to look like
eyes and a nose.
Revising the metaphysics of virtue

In the previous section, I argued that both social and asocial factors shape how
people are disposed to think, feel, and act. What we notice, what we think, what
we care about, and what we do depend in part on bodily and social features of
our situations. This is not to deny that people also bring their own distinctive
personalities to the table, but it suggests that both high-fidelity and low-fidelity
virtues, as traditionally conceived, are rare.
To see why, let's walk through the three-step analysis of a traditional virtue:
honesty. For current purposes, I'll assume that it's uncontroversial that honesty
is high fidelity. Next, we specify the stimulus conditions and characteristic
manifestations. I don't have space to do justice to the required nuances
here, but it wouldn't be too far off to say that the stimulus conditions C are
temptations to lie, cheat, or steal despite sufficient reason not to do so, and
that the characteristic manifestations A are behavioral (not lying, cheating,
or stealing), cognitive (noticing the temptation without feeling too much
of its pull), and affective (disapprobation of inappropriate behavior, desire
to extricate oneself from the tempting situation if possible, perhaps even
prospective shame at the thought that one might end up acting badly). Finally,
we slot these specifications into the schema for high-fidelity virtue:

(AD-SCA-honesty) The agent is disposed to activity A in condition C if and
only if she would A if C were the case and there were no disrupters present.

More long-windedly, at all nearby undisrupted worlds where she is tempted
but has sufficient reason to resist temptation, she thinks, feels, and acts
appropriately. It's only reasonable to assume that at some of these undisrupted
temptation-worlds, however, the lighting will not be bright; at others, she
might not feel watched. What she would do depends in part on these factors,
and so the subjunctive conditional is false, which in turn means that she is not
honest.
It could be objected that these factors are disrupters, and so should be ruled
out by fiat. This objection is unmotivated, however. We can see how protective
packaging masks an object's fragility. Does it make sense to say that being in
slightly dim conditions would mask someone's honesty? What good is honesty
if it gives out so easily? Does it make sense to say that not being watched would
fink someone's honesty? What good is honesty if honest people need constant
monitoring? Ruling out all of the asocial and social influences described in
the previous section as disrupters isn't just ad hoc; it threatens to disqualify
honesty from being a virtue.
Furthermore, social and asocial influences are ubiquitous. Indeed, it's
difficult even to think of them as influences because they are so common.
Should we say that very bright lighting is the default condition, and that lower
levels of light are all situational influences? To do so would be to count half of
each day as disrupted. Or should we say that twilight is the default, and that
both very bright and very dark conditions are situational influences? It's hard
to know what would even count in favor of one of these proposals.
Even more to the point, what one might want to rule out as a disrupter in
one case is likely to contribute to what seems like a manifestation of virtue in
other cases. Should we say that being in a bad mood is a situational influence?
People in a bad mood give much less than other people to charities that are
good but not great; they also give much more than other people to charities
that are very good indeed (Weyant 1978). You can't have it both ways. If bad
moods mask generosity in the former case, they mimic it in the latter. Failure to
give in the former type of case would then not be evidence against generosity,
but even giving quite a bit in the latter type of case would not be evidence for
it. If we try to rule out all of these factors, leaving just the agent in her naked
virtue or vice, we may find that she disappears too. Strip away the body and the
community, and you leave not the kernel of authentic character, but something
that's not even recognizably human.
Instead of filtering out as much as possible, I want to propose including
as much as possible by expanding the unit of analysis, the bearer of virtue.
Instead of thinking of virtue as a property of an individual agent, we should
construe it as a triadic relation among a person, a social milieu, and an asocial
environment.
There are two ways of fitting the milieu and the environment into the
subjunctive conditional analysis. They could be incorporated into the stimulus
conditions:

(AD-SCA*) Person p is disposed to activity A in condition C-social-milieu-
S-and-asocial-environment-E if and only if p would A if it were the case that
C-in-S-and-E.
Or they could be fused with the agent:

(AD-SCA†) Person p-in-S-and-E is disposed to activity A in condition C
if and only if p-in-S-and-E would A if C were the case and there were no
disrupters present.

According to AD-SCA*, the person is still the sole bearer of the disposition; its
just a more limited disposition, with much stronger stimulus conditions. This
can be seen as a rendering of Doris's (2002) theory of local traits in the language
of disposition theory. An important problem with such dispositions is that,
even if they are empirically supportable, they are normatively uninspiring.
According to AD-SCA†, in contrast, the bearer of the disposition is now a
complex, extended object: the person, the milieu, and the environment. What I
want to suggest is that, given the sorts of creatures we are (embodied, socially
embedded, with cognition and motivation extended beyond the boundaries
of our own skin; Clark 2008; Clark and Chalmers 1998), AD-SCA† is more
attractive.
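The contrast between the two options can itself be put in the toy idiom (my own illustration, not the chapter's formalism): under AD-SCA* the milieu S and environment E tighten the stimulus condition while the bearer remains the bare person, whereas under AD-SCA† they migrate into the subject of the attribution:

```python
from typing import NamedTuple

class ExtendedBearer(NamedTuple):
    """AD-SCA†'s bearer: an ordered triple, not a bare agent."""
    person: str
    milieu: str
    environment: str

def ad_sca_star(worlds, C, in_S, in_E, A):
    # Bearer: the person alone. S and E are folded into a narrower
    # stimulus condition, yielding a "local trait"-style disposition.
    return all(A(w) for w in worlds if C(w) and in_S(w) and in_E(w))

def ad_sca_dagger(bearer, worlds, C, A):
    # Bearer: the whole triple. Only worlds at which that very triple
    # is intact bear on whether *it* has the disposition.
    intact = [w for w in worlds
              if (w["person"], w["milieu"], w["environment"]) == bearer]
    return all(A(w) for w in intact if C(w))
```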
Virtue would inhere, on this view, in the interstices between the person
and her world. The object that possesses the virtue in question would be a
functionally and physically extended complex comprising the agent, her social
setting, and her asocial environment. The conditions under which the social
and asocial environment can be legitimately included in such an extended
whole are complex, but we can take a cue here from Pritchard (2010, p. 15),
who argues that "phenomena that extend outside the skin of [the] agent" can
count as part of one's cognitive agency just so long as they are appropriately
integrated into one's functioning. Pritchard is here discussing cognitive
rather than ethical dispositions, but the idea is the same: provided that the
social and asocial phenomena outside the moral agents skin are appropriately
integrated into her functioning, they may count as part of her moral agency
and partially constitute her moral virtues. A paradigm example is the ongoing
and interactive feedback we have with our friends. At least when we are at
our best, we try to live up to our friends' expectations; we are attuned to their
reactive attitudes; we consider prospectively whether they would approve or
disapprove of some course of action; we consult with them both explicitly
and imaginatively; we revise our beliefs and values in light of their feedback.5
When we are functionally integrated with friends in this way, on the model
I am proposing here, they are partial bearers of whatever virtues (and vices)
we might have. Or rather, to the extent that a virtue or vice is possessed in this
context, it is possessed by the pair of friends together, and not by either of them
on her own. Friendship is an ideal example of the kind of functional integration
I have in mind here, though it may well be possible to integrate other social
and asocial properties and objects into a moral agents functioning.
This doesn't mean that we couldn't also continue to think of individuals as
(potential) bearers of virtues, but the answer to the question, "Is there virtue
here?", might differ depending on which bearer the questioner had in mind.
For example, it might not be the case that the individual agent has the virtue
in question, but that the complex object constituted by the agent, her social
milieu, and her asocial environment does have the virtue in question.
One consequence of this view is that virtue is multiply realizable, with
different levels of contribution made by each of the relata. To be honest, for
example, would be to have certain basic personality dispositions, but also
some combination of the following: to be considered honest by one's friends
and peers (and to know it), to consider oneself honest, to be watched or at
least watchable, and to be in whatever bodily states promote the characteristic
manifestations of honesty. Someone could become honest, on this view, in
standard ways, such as habituation and reflection on reasons. But someone
could also become honest in non-standard ways, such as noticing others'
signaling of expectations or an increase in local luminescence. This makes it
both easier and harder to be virtuous: deficiencies in personality can be made
up for through social and bodily support, but strength of personality can also
be undermined by lack of social and bodily support. To illustrate this, consider
the differences among Figures 4.1, 4.2, and 4.3.
One of the salutary upshots of this way of thinking about virtue is that it
helps to make sense of the diversity named by any given trait term. Different
people are more or less generous, and on several dimensions. By making explicit
reference to the social milieu and the asocial environment, this framework
suggests ways in which partial virtue could be differently instantiated. Two
people might both count, at a very coarse-grained level of description, as
mostly honest, but one could do so because of personal and social strengths
and despite asocial weaknesses, while the other does so because of social and
asocial strengths and despite some personal weaknesses. One way to capture
[Radar graph with axes Personal 1–3, Social 1–3, and Environmental 1–3]

Figure 4.1 Represents a case of perfect virtue: all three relata (personal, social, and
environmental) make maximal contributions. But virtue-concepts are threshold
concepts. Someone can be generous even if she sometimes doesn't live up to the ideal
of perfect generosity.

[Radar graph with axes Personal 1–3, Social 1–3, and Environmental 1–3]

Figure 4.2 Represents one way of doing that, with a modest contribution from the
environment and more substantial contributions from social and personal resources.

this idea is to specify, for each virtue, the minimum area of the relevant radar
graph that would need to be filled for the agent-in-milieu-and-environment to
be a candidate for possessing that virtue.
Furthermore, the framework allows for the plausible idea that there is a
kind of asymmetry among the relata that bear virtues. Someone's personality
can only be so weak before we are no longer inclined to call him (or even the
complex of which he is a part) virtuous, even if that weakness is counteracted
by great social and asocial strengths. This condition could be captured by
further specifying a minimum area for the personal contribution.

[Radar graph with axes Personal 1–3, Social 1–3, and Environmental 1–3]

Figure 4.3 Represents another way of being generous enough without being perfectly
generous, this time with a primary contribution from social factors and more modest
contributions from both personal and environmental factors.
We can also make sense of the intuition that someone is extremely virtuous
if he displays characteristic manifestations despite weaknesses or pitfalls in the
social and asocial environment. This could be done by specifying a different
(larger) minimum area for the personal contribution that would ensure that
the overall area was sufficiently great.
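A rough numerical rendering of this proposal (entirely my own sketch: the polygon-area formula is the standard one for radar charts, but the collapse to three axes and the particular thresholds are invented for illustration) might look as follows:

```python
import math

def radar_area(radii):
    """Area of the polygon spanned by n equally spaced radar axes."""
    n = len(radii)
    return 0.5 * math.sin(2 * math.pi / n) * sum(
        radii[i] * radii[(i + 1) % n] for i in range(n))

def counts_as_virtuous(personal, social, environmental,
                       min_share=0.4, personal_floor=0.3):
    """Threshold test: enough total area, plus a separate floor on the
    personal contribution to capture the asymmetry among the relata."""
    share = (radar_area([personal, social, environmental])
             / radar_area([1.0, 1.0, 1.0]))
    return share >= min_share and personal >= personal_floor

# Strong personal and social contributions can offset a modest environment ...
print(counts_as_virtuous(0.8, 0.9, 0.4))  # True
# ... but nothing offsets a near-absent personal contribution.
print(counts_as_virtuous(0.1, 1.0, 1.0))  # False: below the personal floor
```

On this rendering, the "extremely virtuous" case would simply raise the personal floor rather than change the formula.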
Before concluding, I want to point to two normative upshots of this view.
The first is that each of us is to some extent causally and even constitutively
responsible for the character of others, and in several ways. By signaling
our expectations, we tend to induce expectation-confirming responses. By
interacting with others, we alter their moods. When we help to construct
the material and bodily environment, we also construct others character.
Especially in relation to our friends, with whom we are likely to be functionally
integrated in the appropriate way, what we do, say, and signal may partially
constitute our friends' character traits. For someone who is teetering between
virtue and vice, such influences can make all the difference.6 The comforting
myth of individual responsibility notwithstanding, each of us truly is our
brothers' and sisters' keeper.
This is a heavy responsibility to bear, but it pales in comparison to the
responsibility borne by those with the power to set the default expectations
that govern a society and to shape the material conditions of people's lives.
On the view I am proposing, politicians, corporate leaders, reporters, and
architects, among many others, quite literally have the power to make people
virtuous, and prevent them from being or becoming virtuous. If this is right,
we need to hold such people more accountable, and to stop pretending that it's
possible to separate political and economic power from ethics.

Notes

* Author Note: Mark Alfano, Princeton University Center for Human Values &
Department of Philosophy, University of Oregon. The author thanks Philip
Pettit, Hagop Sarkissian, Jennifer Cole Wright, Kate Manne, Jonathan Webber,
and Lorraine Besser-Jones for helpful comments, criticisms, and suggestions.
Correspondence should be addressed to Mark Alfano, 321 Wallace Hall, Princeton
University, Princeton, NJ 08544. Email: mark.alfano@gmail.com.
1 The canonical firebrands are Doris (2002) and Harman (1999). Flanagan arrived
at the party both too early (1991) and too late (2009) to shape the course of the
debate.
2 Upton (2009) is the only book-length effort, but her work makes little use of the
literature on dispositions, relying instead on her own naive intuitions. Sreenivasan
(2008) also discusses the dispositional nature of virtues without reference to the
literature on dispositions.
3 The concepts of masking, mimicking, and finking were introduced by Johnston
(1992), Smith (1977), and Martin (1994), respectively.
4 This distinction is based only on my own hunches, but conversations with
philosophers and psychologists have left me confident in it. An empirical study
of its plausibility would be welcome.
5 See Millgram (1987, p. 368), who argues that, over the course of a friendship, one
becomes (causally) responsible for the friend's being who he is, and Cocking and
Kennett (1998, p. 504), who argue that a defining feature of friendship is that "as a
close friend of another, one is characteristically and distinctively receptive to being
directed and interpreted and so in these ways drawn by the other."
6 Sarkissian (2010) makes a similar point.
References

Adams, R. M. (2006). A Theory of Virtue: Excellence in Being for the Good. New York:
Oxford University Press.
Alfano, M. (2013). Character as Moral Fiction. Cambridge: Cambridge University Press.
Annas, J. (2011). Intelligent Virtue. New York: Oxford University Press.
Apsler, R. (1975). Effects of embarrassment on behavior toward others. Journal of
Personality and Social Psychology, 32, 145–53.
Ariely, D. (2008). Predictably Irrational. New York: Harper Collins.
Baron, R. A., and Thomley, J. (1994). A whiff of reality: Positive affect as a potential
mediator of the effects of pleasant fragrances on task performance and helping.
Environment and Behavior, 26, 766–84.
Bateson, M., Nettle, D., and Roberts, G. (2006). Cues of being watched enhance
cooperation in a real-world setting. Biology Letters, 12, 412–14.
Blass, T. (1999). The Milgram paradigm after 35 years: Some things we now know
about obedience to authority. Journal of Applied Social Psychology, 29(5), 955–78.
Boles, W., and Haywood, S. (1978). The effects of urban noise and sidewalk density
upon pedestrian cooperation and tempo. Journal of Social Psychology, 104, 29–35.
Burnham, T. (2003). Engineering altruism: A theoretical and experimental
investigation of anonymity and gift giving. Journal of Economic Behavior and
Organization, 50, 133–44.
Burnham, T., and Hare, B. (2007). Engineering human cooperation. Human Nature,
18(2), 88–108.
Carlsmith, J., and Gross, A. (1968). Some effects of guilt on compliance. Journal of
Personality and Social Psychology, 53, 1178–91.
Choi, S. (2008). Dispositional properties and counterfactual conditionals. Mind, 117,
795–841.
Choi, S., and Fara, M. (Spring 2012). Dispositions. The Stanford Encyclopedia of
Philosophy, Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr2012/
entries/dispositions/
Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension.
New York: Oxford University Press.
Clark, A., and Chalmers, D. (1998). The extended mind. Analysis, 58, 7–19.
Cocking, D., and Kennett, J. (1998). Friendship and the self. Ethics, 108(3), 502–27.
Darley, J., and Latané, B. (1968). Bystander intervention in emergencies: Diffusion
of responsibility. Journal of Personality and Social Psychology, 8, 377–83.
Donnerstein, E., and Wilson, D. (1976). Effects of noise and perceived control on
ongoing and subsequent aggressive behavior. Journal of Personality and Social
Psychology, 34, 774–81.
Doris, J. (2002). Lack of Character: Personality and Moral Behavior. Cambridge:
Cambridge University Press.
Ernest-Jones, M., Nettle, D., and Bateson, M. (2011). Effects of eye images on everyday
cooperative behavior: A field experiment. Evolution and Human Behavior, 32(3),
172–8.
Flanagan, O. (1991). Varieties of Moral Personality: Ethics and Psychological Realism.
Cambridge, MA: Harvard University Press.
Flanagan, O. (2009). Moral science? Still metaphysical after all these years. In Narvaez
and Lapsley (eds), Moral Personality, Identity and Character: An Interdisciplinary
Future. New York: Cambridge University Press.
Funder, D., and Ozer, D. (1983). Behavior as a function of the situation. Journal of
Personality and Social Psychology, 44, 107–12.
Graham, G. (Fall 2010). Behaviorism. In E. Zalta (ed.), The Stanford Encyclopedia of
Philosophy, http://plato.stanford.edu/archives/fall2010/entries/behaviorism/
Harman, G. (1999). Moral philosophy meets social psychology: Virtue ethics and the
fundamental attribution error. Proceedings of the Aristotelian Society, New Series,
119, 316–31.
Isen, A. (1987). Positive affect, cognitive processes, and social behavior. In
L. Berkowitz (ed.), Advances in Experimental Social Psychology, Vol. 20,
San Diego: Academic Press, pp. 203–54.
Isen, A., Clark, M., and Schwartz, M. (1976). Duration of the effect of good mood
on helping: Footprints on the sands of time. Journal of Personality and Social
Psychology, 34, 385–93.
Isen, A., Shalker, T., Clark, M., and Karp, L. (1978). Affect, accessibility of material
in memory, and behavior: A cognitive loop. Journal of Personality and Social
Psychology, 36, 1–12.
Johnston, M. (1992). How to speak of the colors. Philosophical Studies, 68, 221–63.
Kamtekar, R. (2004). Situationism and virtue ethics on the content of our character.
Ethics, 114(3), 458–91.
Kristjánsson, K. (2008). An Aristotelian critique of situationism. Philosophy, 83(1),
55–76.
Latané, B., and Nida, S. (1981). Ten years of research on group size and helping.
Psychological Bulletin, 89, 308–24.
Lewis, D. (1997). Finkish dispositions. The Philosophical Quarterly, 47, 143–58.
MacIntyre, A. (1984). After Virtue: A Study in Moral Theory. Notre Dame: University
of Notre Dame Press.
Martin, C. (1994). Dispositions and conditionals. The Philosophical Quarterly, 44, 1–8.
Matthews, K. E., and Cannon, L. K. (1975). Environmental noise level as a determinant
of helping behavior. Journal of Personality and Social Psychology, 32, 571–7.
McKitrick, J. (2003). A case for extrinsic dispositions. Australasian Journal of
Philosophy, 81, 155–74.
Milgram, S. (1974). Obedience to Authority. New York: Harper Collins.
Millgram, E. (1987). Aristotle on making other selves. Canadian Journal of
Philosophy, 17(2), 361–76.
Mischel, W. (1968). Personality and Assessment. New York: Wiley.
Pritchard, D. (2010). Cognitive ability and the extended cognition thesis. Synthese, 175,
133–51.
Regan, J. (1971). Guilt, perceived injustice, and altruistic behavior. Journal of
Personality and Social Psychology, 18, 124–32.
Rigdon, M., Ishii, K., Watabe, M., and Kitayama, S. (2009). Minimal social cues in the
dictator game. Journal of Economic Psychology, 30(3), 358–67.
Russell, D. (2009). Practical Intelligence and the Virtues. Oxford: Oxford University Press.
Sarkissian, H. (2010). Minor tweaks, major payoffs: The problems and promise of
situationism in moral philosophy. Philosophers' Imprint, 10(9), 1–15.
Schnall, S., Haidt, J., Clore, G., and Jordan, A. (2008). Disgust as embodied moral
judgment. Personality and Social Psychology Bulletin, 34, 1096–109.
Schwartz, S., and Gottlieb, A. (1991). Bystander anonymity and reactions to
emergencies. Journal of Personality and Social Psychology, 39, 418–30.
Slote, M. (2001). Morals from Motives. New York: Oxford University Press.
Smith, A. (1977). Dispositional properties. Mind, 86, 439–45.
Snow, N. (2010). Virtue as Social Intelligence: An Empirically Grounded Theory.
New York: Routledge.
Sreenivasan, G. (2008). Character and consistency: Still more errors. Mind, 117(467),
603–12.
Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American,
5(223), 27–97.
Tajfel, H. (1981). Human Groups and Social Categories. Cambridge: Cambridge
University Press.
Upton, C. (2009). Situational Traits of Character: Dispositional Foundations and
Implications for Moral Psychology and Friendship. Lanham: Lexington Books.
Weyant, J. (1978). Effects of mood states, costs, and benefits on helping. Journal of
Personality and Social Psychology, 36, 1169–76.
Zhong, C.-B., Bohns, V., and Gino, F. (2010). Good lamps are the best police:
Darkness increases dishonesty and self-interested behavior. Psychological Science,
21(3), 311–14.
5

The Moral Behavior of Ethicists and the Power of Reason
Joshua Rust and Eric Schwitzgebel*

Professional ethicists behave no morally better than do other professors.
At least that is what we have found in a series of empirical studies that we
will summarize below. Our results create a prima facie challenge for a
certain picture of the relationship between intellectual reasoning and moral
behavior: a picture on which explicit, intellectual cognition has substantial
power to change the moral opinions of the reasoner and thereby to change
the reasoner's moral behavior. Call this picture the Power of Reason view. One
alternative view has been prominently defended by Jonathan Haidt. We might
call it the Weakness of Reason view, or more colorfully the Rational Tail view,
after the headline metaphor of Haidt's seminal 2001 article, "The emotional
dog and its rational tail."
According to the Rational Tail view (which comes in different degrees of
strength), emotion or intuition drives moral opinion and moral behavior, and
explicit forms of intellectual cognition function mainly post hoc, to justify
and socially communicate conclusions that flow from emotion or intuition.
Haidt argues that our empirical results favor his view (2012, p. 89). After all,
if intellectual styles of moral reasoning don't detectably improve the behavior
even of professional ethicists who build their careers on expertise in such
reasoning, how much hope could there be for the rest of us to improve by
such means? While we agree with Haidt that our results support the Rational
Tail view over some rationalistic rivals, we believe that other models of moral
psychology are also consistent with our findings, and some of these models
give explicit intellectual reasoning a central, powerful role in shaping the
reasoner's behavior and attitudes. Part 1 summarizes our empirical findings.
Part 2 explores five different theoretical models, including the Rational Tail,
that are more or less consistent with those findings.

Part 1: Our empirical studies

Missing library books


Our first study (Schwitzgebel 2009) examined the rates at which ethics books
were missing from 32 leading academic libraries, compared to other philosophy
books, according to those libraries' online catalogs. The primary analysis was
confined to relatively obscure books likely to be borrowed mostly by specialists
in the field: 275 books reviewed in Philosophical Review between 1990 and
2001, excluding titles cited five or more times in the Stanford Encyclopedia of
Philosophy. Among these books, we found ethics books somewhat more likely
to be missing than non-ethics books: 8.5 percent of the ethics books that were
off the shelf were listed as missing or as more than one year overdue, compared
to 5.7 percent of the non-ethics philosophy books that were off the shelf.
This result holds despite a similar total number of copies of ethics and non-
ethics books held, similar total overall checkout rates of ethics and non-ethics
books, and a similar average publication date of the books. We also found that
classic pre-twentieth-century ethics texts were more likely to be missing than
comparable non-ethics texts.

Peer ratings
Our second study examined peer opinion about the moral behavior of
professional ethicists (Schwitzgebel and Rust 2009). We set up a table in a central
location at the 2007 Pacific Division meeting of the American Philosophical
Association (APA) and offered passersby gourmet chocolate in exchange
for taking a 5-minute philosophical-scientific questionnaire, which they
completed on the spot. One version of the questionnaire asked respondents
their opinion about the moral behavior of ethicists in general, compared to other
philosophers and compared to non-academics of similar social background
(with parallel questions about the moral behavior of specialists in metaphysics and epistemology). Opinion was divided: Overall, 36 percent of respondents rated ethicists morally better behaved on average than other philosophers, 44 percent rated them about the same, and 19 percent rated them worse. When ethicists' behavior was compared to that of non-academics, opinion was split 50 percent, 32 percent, and 18 percent between better, same, and worse. Another version of the questionnaire asked respondents to rate the moral behavior of the individual ethicist in their department whose last name comes next in alphabetical order, looping back from Z to A if necessary, with a comparison question about the moral behavior of a similarly alphabetically chosen specialist in metaphysics and epistemology. Opinion was again split: 44 percent of all respondents rated the arbitrarily selected ethics specialist better than they rated the arbitrarily selected M&E specialist, 26 percent rated the ethicist the same, and 30 percent rated the ethicist worse. In both versions of the questionnaire, the skew favoring the ethicists was driven primarily by respondents reporting a specialization or competence in ethics, who tended to avoid rating ethicists worse than others. Non-ethicist philosophers tended to split about evenly between rating the ethicists better, same, or worse.
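The alphabetical-successor instruction amounts to a simple deterministic selection rule. A minimal sketch of that rule follows (an illustration added here, not part of the study; the function and sample names are hypothetical, and the survey itself was administered on paper):

    def next_alphabetical(roster, respondent):
        """Pick the department member whose last name comes next in
        alphabetical order, looping back from Z to A if necessary."""
        others = sorted(name for name in roster if name != respondent)
        later = [name for name in others if name > respondent]
        return later[0] if later else others[0]  # wrap around past Z

    # Hypothetical department roster:
    print(next_alphabetical(["Adams", "Chen", "Zimmer"], "Zimmer"))  # -> Adams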

Voting rates
We assume that regular participation in public elections is a moral duty, or
at least that it is morally better than non-participation (though see Brennan
2011). In an opinion survey to be described below, we found that over 80
percent of sampled US professors share that view. Accordingly, we examined
publicly available voter participation records from five US states, looking for
name matches between voter rolls and online lists of professors in nearby
universities, excluding common and multiply-appearing names (Schwitzgebel
and Rust 2010). In this way, we estimated the voting participation rates of four
groups of professors: philosophical ethicists, philosophers not specializing in
ethics, political scientists, and professors in departments other than philosophy
and political science. We found that all four groups of professors voted at
approximately the same rates, except for the political science professors, who
voted about 10–15 percent more often than did the other groups. This result
survived examination for confounds due to gender, age, political party, and
affiliation with a research-oriented versus teaching-oriented university.

Courtesy at philosophy conferences


While some rules of etiquette can be morally indifferent or even pernicious, we
follow Confucius (5th c. BCE/2003), Karen Stohr (2012), and others in seeing
polite, respectful daily behavior as an important component of morality. With
this in mind, we examined courteous and discourteous behavior at meetings
of the American Philosophical Association, comparing ethics sessions with
non-ethics sessions (Schwitzgebel et al. 2012). We used three measures of
courtesy (talking audibly during the formal presentation, allowing the door to slam when entering or exiting mid-session, and leaving behind litter at one's seat) across 2,800 audience hours of sessions at four different APA meetings. None of the three measures revealed any statistically detectable differences in courtesy. Audible talking (excluding brief, polite remarks like "thank you" for a handout) was rare: 0.010 instances per audience hour in the ethics sessions versus 0.009 instances per audience hour in the non-ethics sessions (z = 0.3, p = 0.77). The median rate of door slamming per session (compared to mid-session entries and exits in which the audience member attempted to shut the door quietly) was 18.2 percent for the ethics sessions and 15.4 percent for the non-ethics sessions (Mann-Whitney test, p = 0.95). Finally, ethicists were not detectably less likely than non-ethicists to leave behind cups (16.8% vs. 17.8% per audience member, z = 0.7, p = 0.48) or trash (11.6% vs. 11.8%, z = 0.2, p = 0.87). The latter result survives examination for confounds due to session size, time of day, and whether paper handouts were provided. However, we did find that the audience members in environmental ethics sessions left behind less trash than did the audience in all other sessions combined (3.0% vs. 11.9%, Fisher's exact test, p = 0.02).

APA free riding


We assume a prima facie duty for program participants in philosophy
conferences to pay the modest registration fees that the organizers of
those conferences typically charge. However, until recently the American
Philosophical Association had no mechanism to enforce conference
registration, which resulted in a substantial free-riding problem. With this in
mind, we examined the Pacific Division APA programs from 2006 to 2008,
classifying sessions into ethics, non-ethics, or excluded. We then examined the registration compliance of program participants in ethics sessions versus program participants in non-ethics sessions by comparing de-identified, encrypted lists of participants in those sessions (participants with common names excluded) to similarly encrypted lists of people who had paid their registration fees (Schwitzgebel 2013).1 During the period under study, ethicists appear to have paid their conference registration fees at about the same rate as did non-ethicist philosophers (74% vs. 76%, z = 0.7, p = 0.50).
This result survives examination for confounds due to gender, institutional
prestige, program role, year, and status as a faculty member versus graduate
student.

Responsiveness to student emails


Yet another study examined the rates at which ethicists responded to brief
email messages designed to look as though written by undergraduates (Rust
and Schwitzgebel 2013). We sent three email messages (one asking about office hours, one asking for the name of the undergraduate advisor, and one inquiring about an upcoming course) to ethicists, non-ethicist philosophers, and a comparison group of professors in other departments, drawing from online faculty lists at universities across several US states. All messages addressed the faculty member by name, and some included additional specific information such as the name of the department or the name of an upcoming course the professor was scheduled to teach. The messages were checked against several spam filters, and we had direct confirmation through various means that over 90 percent of the target email addresses were actively checked. Overall, ethicists responded to 62 percent of our messages, compared to a 59 percent response rate for non-ethicist philosophers, and 58 percent for non-philosophers: a difference that doesn't approach statistical significance despite (we're somewhat embarrassed to confess) 3,109 total trials (χ² = 3.4, p = 0.18).

Self-reported attitudes and behavior


Our most recent study examined ethicists', non-ethicist philosophers', and non-philosophers' self-reported attitudes and behavior on a number of issues including membership in disciplinary societies, voting, staying in touch with one's mother, vegetarianism, organ and blood donation, responsiveness to student emails, charity, and honesty in responding to survey questionnaires (Schwitzgebel and Rust in press). The survey was sent to about a thousand professors in five different US states, with an overall response rate of 58 percent, or about 200 respondents in each of the three groups. Identifying information was encrypted for participants' privacy. On some issues (voting, email responsiveness, charitable donation, societal membership, and survey response honesty) we also had direct, similarly encrypted, observational measures of behavior that we could compare with self-report.2 Aggregating across the various measures, we found no difference among the groups in overall self-reported moral behavior, in the accuracy of the self-reports for those measures where we had direct observational evidence, or in the correlation between expressed normative attitude and either self-reported or directly observed behavior. The one systematic difference we did find was this: Across several measures (vegetarianism, charitable donation, and organ and blood donation) ethicists appeared to embrace more stringent moral views than did non-philosophers, while non-ethicist philosophers held views of intermediate stringency. However, this increased stringency of attitude was not unequivocally reflected in ethicists' behavior.
This last point is best seen by examining the two measures on which we had
the best antecedent hope that ethicists would show moral differences from
non-ethicists: vegetarianism and charitable donation. Both issues are widely
discussed among ethicists, who tend to have comparatively sophisticated
philosophical opinions about these matters, and professors appear to exhibit
large differences in personal rates of charitable donation and meat consumption.
Furthermore, ethicists' stances on these issues are directly connected to specific, concrete behaviors that they can either explicitly implement or not (e.g., to donate 10% annually to famine relief; to refrain from eating the meat of such-and-such animals). This contrasts with exhortations like "be a kinder person" that are difficult to straightforwardly implement or to know if one has implemented.

Self-reported attitude and behavior: Eating meat


We solicited normative attitudes about eating meat by asking respondents to rate "regularly eating the meat of mammals such as beef or pork" on a nine-point scale from "very morally bad" to "very morally good" with the midpoint marked "morally neutral." On this normative question, there were large differences among the groups: 60 percent of ethicist respondents rated meat-eating somewhere on the bad side of the scale, compared to 45 percent of non-ethicist philosophers and only 19 percent of professors from other departments (χ² = 64.2, p < 0.001). Later in the survey, we posed two behavioral questions. First, we asked "During about how many meals or snacks per week do you eat the meat of mammals such as beef or pork?" Next, we asked "Think back on your last evening meal, not including snacks. Did you eat the meat of a mammal during that meal?" On the meals-per-week question, we found a modest difference among the groups: Ethicists reported a mean of 4.1 meals per week, compared to 4.6 for non-ethicist philosophers and 5.3 for non-philosophers (ANOVA, F = 5.2, p = 0.006). We also found 27 percent of ethicists to report no meat consumption (zero meat meals per week), compared to 20 percent of non-ethicist philosophers and 13 percent of non-philosophers (χ² = 9.3, p = 0.01). However, statistical evidence suggested that respondents were fudging their meals-per-week answers: Self-reported meals per week was not mathematically consistent with what one would expect given the numbers reporting having eaten meat at the previous evening meal. (For example, 21% of respondents who reported eating meat at only one meal per week reported having eaten meat at their previous evening meal.) And when asked about their previous evening meal, the groups' self-reports differed only marginally, with ethicists in the intermediate group: 37 percent of ethicists reported having eaten the meat of a mammal at their previous evening meal, compared to 33 percent of non-ethicist philosophers and 45 percent of non-philosophers (χ² = 5.7, p = 0.06).
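To see concretely why the two self-reports cannot both be accurate in general, consider a back-of-the-envelope bound (an illustration added here, not a calculation reported in the study): a respondent who truly eats meat at only one meal per week could report a meat-based previous evening meal at most about one time in seven, even on the generous assumption that the single weekly meat meal is always a dinner:

\[ P(\text{meat at previous evening meal} \mid \text{one meat meal per week}) \le \tfrac{1}{7} \approx 14\% \]

The observed 21 percent exceeds even this upper bound, which is what suggests that some respondents understated their weekly meat consumption.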

Self-reported attitude and behavior: Charity


We solicited normative opinion about charity in two ways. First, we asked respondents to rate "donating 10 percent of one's income to charity" on the same nine-point scale we used for the question about eating meat. Ethicists expressed the most approval, with 89 percent rating it as good and a mean rating of 7.5 on the scale, versus 85 percent and 7.4 for non-ethicist philosophers and 73 percent and 7.1 for non-philosophers (χ² = 17.0, p < 0.001; ANOVA, F = 4.3, p = 0.01). Second, we asked what percentage of income the typical professor should donate to charity (instructing participants to enter "0" if they think it's not the case that the typical professor should donate to charity). Among ethicists, 9 percent entered "0," versus 24 percent of non-ethicist philosophers and 25 percent of non-philosophers (χ² = 18.2, p < 0.001). Among those not entering "0," the geometric mean was 5.9 percent for the ethicists versus 4.8 percent for both of the other groups (ANOVA, F = 3.6, p = 0.03). Later in the survey, we asked participants what percentage of their income they personally had donated to charity in the previous calendar year. Non-ethicist philosophers reported having donated the least, but there was no statistically detectable difference between the self-reported donation rates of the ethicists and the non-philosophers. (Reporting zero: 4% of ethicists vs. 10% of non-ethicist philosophers and 6% of non-philosophers, χ² = 5.9, p = 0.052; geometric mean of the non-zeros: 3.7% vs. 2.6% vs. 3.6%, ANOVA, F = 5.5, p = 0.004.) However, we also had one direct measure of charitable behavior: Half of the survey recipients were given a charity incentive to return the survey ($10 to be donated to their selection from among Oxfam America, World Wildlife Fund, CARE, Make-a-Wish Foundation, Doctors Without Borders, or American Red Cross). By this measure, the non-ethicist philosophers showed up as the most charitable, and in fact were the only group who responded at even statistically marginally higher rates when given the charity incentive (67% vs. 59%; compared to 59% on both versions for ethicists and 55% vs. 52% for non-philosophers; χ² = 2.8, p = 0.097; χ² = 0.2, p = 0.64; χ² = 0.0, p = 1.0). While we doubt that this is a dependably valid measure of charitable behavior overall, we are also somewhat suspicious of the self-report measures. We judge the overall behavioral results to be equivocal, and certainly not to decisively favor the ethicists over both of the two other groups.

Conclusion
Across a wide variety of measures, it appears that ethicists, despite expressing more stringent normative attitudes on some issues, behave not much differently than do other professors. However, we did find some evidence that philosophers litter less in environmental ethics sessions than in other APA sessions, and we found some equivocal evidence that might suggest slightly higher rates of charitable giving and slightly lower rates of meat-eating among ethicists than among some other subsets of professors. On one measure, the return of library books, it appears that ethicists might behave morally worse.

Part 2: Possible explanations

The rational tail view


One possibility is that Haidt's Rational Tail view, as described in the introduction, is correct. Emotion or intuition is the dog; explicit reasoning is the tail; and this is so even among professional ethicists, whom one might have thought would be strongly influenced by explicit moral reasoning if anyone is. Our judgments and behavior (even the judgments and behavior of professional ethicists) are very little governed by our reasoning. We do what we're going to do, we approve of what we're going to approve of, and we concoct supporting reasons to a large extent only after the fact as needed. Haidt compares reasoning and intuition to a rider on an elephant, with the rider, reasoning, generally compelled to travel in the direction favored by the elephant. Haidt also compares the role of reasoning to that of a lawyer rather than a judge: The lawyer does her best to advocate for the positions given to her by her clients (in this case the intuitions or emotions), producing whatever ideas and arguments are convenient for the predetermined conclusion. Reason is not a neutral judge over moral arguments but rather, for the most part, a paid-off advocate plumping for one side. Haidt cites our work as evidence for this view (e.g., Haidt 2012, p. 89), and we're inclined to agree that most of it fits nicely with his view and so in that way lends support. If moral reasoning were almost entirely ineffectual, that could explain our virtually flat results; and where our results are not entirely flat, specific secondary mechanisms could be invoked (e.g., the social awkwardness of leaving trash behind at an environmental ethics session; a tendency for people with antecedently stringent moral views to be more likely to enter professional ethics in the first place).
It would be rash, however, to adopt an absolutely extreme version of the Rational Tail view (and Haidt himself does not). At least sometimes, it seems, the tail can wag the dog and the elephant can take direction from the rider. Rawls's (1971) picture of philosophical method as involving reflective equilibrium between intuitive assessments of particular cases and rationally appealing general principles is one model of how this might occur. The idea is that just as one sometimes adjusts one's general principles to match one's pretheoretical intuitions about particular cases, one also sometimes rejects one's pretheoretical intuitions about particular cases in light of one's general principles. It seems both anecdotally and phenomenologically compelling that explicit moral reasoning sometimes prompts rejection of one's initial intuitive moral judgments, and that when this happens, changes in real-world moral behavior sometimes follow. How could there not be at least some truth in the Power of Reason view? So why does there seem to be so little systematic evidence of that power, even when looking at what one might think would be the best-case population for seeing its effects?
Without directly arguing against Haidt's version of the Rational Tail view or for the Power of Reason view, we present four models of the relationship between explicit moral reasoning and real-world moral behavior that permit explicit reasoning to play a substantial role in shaping the reasoner's moral behavior, compatibly with our empirical findings above. While we agree with Haidt that our results support the Rational Tail view, our findings are also consistent with some models of moral psychology which place more emphasis on the Power of Reason. We focus on our own evidence, but we recognize that a plausible interpretation of it must be contextualized with other sorts of evidence from recent moral psychology that seems to support the Rational Tail view, including Haidt's own dumbfounding evidence (summarized in his 2012); evidence that we have poor knowledge of the principles driving our moral judgments about puzzle cases (e.g., Cushman et al. 2006; Mikhail 2011; Ditto and Liu 2012); evidence about the diverse factors influencing moral judgment (e.g., Hauser 2006; Greene 2008; Schnall et al. 2008); and evidence from the cognitive dissonance and neuropathology literatures on post hoc rationalization of behavior (e.g., Festinger 1957; Hirstein 2005; Cooper 2007).

Narrow principles
Professional ethicists might have two different forms of expertise. One might concern the most general principles and unusually clean hypothetical cases: the kinds of principles and cases at stake when ethicists argue about deontological versus consequentialist ethics using examples of runaway trolleys and surgeons who can choose secretly to carve up healthy people to harvest their organs. Expertise of that sort might have little influence on one's day-to-day behavior. A second form of expertise might be much more concretely practical but concern only narrow principles: principles like whether it's okay to eat meat and under what conditions, whether one should donate to famine relief and how much, or whether one has a duty to vote in public elections. An ethicist can devote serious, professional-quality attention to only a limited number of such practical principles; and once she does so, her behavior might be altered favorably as a result. But such reflection would only alter the ethicist's behavior in those few domains that are the subject of professional focus.

If philosophical moral reasoning tends to improve moral behavior only in specifically selected narrow domains, we might predict that ethicists would show better behavior in just those narrow domains. For example, those who select environmental ethics for a career focus might consequently pollute and litter less than they otherwise would, in accord with our results. (Though it is also possible, of course, that people who tend to litter less are more likely to be attracted to environmental ethics in the first place, or that the context of an environmental ethics session is such that even non-specialists would be moved to litter a bit less.) Ethicists specializing in issues of gender or racial equality might succeed in mitigating their own sexist and racist behavior. Perhaps, too, we will see ethicists donating more to famine relief and being more likely to embrace vegetarianism, issues that have received wide attention in recent Anglophone ethics and on which we found some equivocal evidence of ethicists' better behavior.

Common topics of professional focus tend to be interestingly difficult and nuanced. So maybe intellectual forms of ethical reflection do make a large difference in one's personal behavior, but only in hard cases, where our pre-reflective intuitions fail to be reliable guides: The reason why ethicists are no more likely than non-ethicists to call their mothers or answer student emails might be that the moral status of these actions is not, for them, an intuitively nonobvious, attractive subject of philosophical analysis, and they take no public stand on it.
Depending on other facts about moral psychology, the Narrow Principles hypothesis might predict (as we seem to find in the vegetarianism and charity data) that attitude differences will tend to be larger than behavioral differences: on this model, the principle must be accepted before the behavior changes, and behavioral change requires further exertion beyond simply adopting a principle on intellectual grounds. Note that, in contrast, a view on which people embrace attitudes wholly to rationalize their existing behaviors or behavioral inclinations would probably not predict that ethicists would show highly stringent attitudes where their behavior is unexceptional.
The Narrow Principles model, then, holds that professional focus on narrow principles can make a substantial behavioral difference. In their limited professional domains, ethicists might then behave morally better than they otherwise would. Whether they also therefore behave morally better overall might then turn on whether the attention dedicated to one moral issue results in moral backsliding on other issues, for example due to moral licensing (the phenomenon in which acting well in one way seems to license people to act worse in others; Merritt, Effron, and Monin 2010) or ego depletion (the phenomenon according to which dedicating self-control in one matter leaves fewer resources to cope with temptation in other matters; Mead et al. 2010).

Reasoning might lead one to behave more permissibly but no better
Much everyday practical moral reasoning seems to be dedicated not to figuring out what is morally the best course (often we know perfectly well what would be morally ideal, or think we do) but rather to figuring out whether something that is less than morally ideal is still permissible. Consider, for example, sitting on the couch relaxing while one's spouse does the dishes (a very typical occasion of moral reflection for some of us!). One knows perfectly well that it would be morally better to get up and help. The topic of reflection is not that, but instead whether, despite not being morally ideal, it is still permissible not to help: Did one have a longer, harder day? Has one been doing one's fair share overall? Maybe explicit moral reasoning can help one see one's way through these issues. And maybe, furthermore, explicit moral reasoning generates two different results approximately equally often: the result that what one might have thought was morally permissible is not in fact permissible (thus motivating one to avoid it, e.g., to get off the couch) and the result that what one might have thought was morally impermissible is in fact permissible (thus licensing one not to do the morally ideal thing, e.g., to stay on the couch). If reasoning does generate these two results about equally often, people who tend to engage in lots of moral reflection of this sort might be well calibrated to permissibility and impermissibility, and thus behave more permissibly overall than do other people, despite not acting morally better overall. The Power of Reason view might work reasonably well for permissibility even if not for goodness and badness. Imagine someone who tends to fall well short of the moral ideal but who hardly ever does anything that would really qualify as morally wrong, contrasted with a sometimes-sinner, sometimes-saint.

This model, if correct, could be straightforwardly reconciled with our data as long as the issues we have studied (except insofar as they reveal ethicists behaving differently) allow for cross-cutting patterns of permissibility, for example, if it is often but not always permissible not to vote. It would also be empirically convenient for this view if it were more often permissible to steal library books than non-ethicists are generally inclined to think, and ethical reflection tends to lead people to discover that fact.

Compensation for deficient intuitions


Our empirical research can support the conclusion that philosophical moral reflection is not morally improving only given several background assumptions, such as (i) that ethicists do in fact engage in more philosophical moral reflection than do otherwise socially similar non-ethicists and (ii) that ethicists do not start out morally worse and then use their philosophical reflection to bring themselves up to average. We might plausibly deny the latter assumption. Here's one way such a story might go. Maybe some people, from the time of early childhood or at least adolescence, tend to have powerful moral intuitions and emotions across a wide range of cases while other people have less powerful or less broad-ranging moral intuitions and emotions. Maybe some of the people in the latter group tend to be drawn to intellectual and academic thought; and maybe those people then use that intellectual and academic thought to compensate for their deficient moral intuitions and emotions. And maybe those people, then, are disproportionately drawn into philosophical ethics. More or less, they are trying to figure out intellectually what the rest of us are gifted with effortlessly. These people have basically made a career out of asking "What is this crazy ethics thing, anyway, that everyone seems so passionate about?" and "Everyone else seems to have strong opinions about donating to charity or not, and when to do so and how much, but they don't seem able to defend those opinions very well and I don't find myself with that same confidence; so let's try to figure it out." Clinical psychopathy isn't what we're imagining here, nor do we mean to assume any particularly high uniformity in ethicists' psychological profile. All this view requires is that whatever positive force moral reflection delivers to the group as a whole is approximately balanced out by a somewhat weaker set of pretheoretical moral intuitions in the group as a whole.

If this were the case, one might find ethicists, even though not morally better behaved overall, more morally well behaved than they would have been without the crutch of intellectual reflection, and perhaps also morally better behaved than non-ethicists are in cases where the ordinary intuitions of the majority of people are in error. Conversely, one might find ethicists morally worse behaved in cases where the ordinary intuitions of the majority of people are a firmer guide than abstract principle. We hesitate to conjecture about what issues might fit this profile but if, for example, ordinary intuition is a poorer guide than abstract principle about issues such as vegetarianism, charity, and environmentalism, and a better guide about the etiquette of day-to-day social interactions with one's peers, then one would expect ethicists to behave better than average on the issues of the former sort and worse on issues of the latter sort.

Rationally driven moral improvement plus toxic rationalization in equal measure
A final possibility is this: Perhaps the Power of Reason view is entirely right some substantial proportion of the time, but also a substantial proportion of the time explicit rational reflection is actually toxic, leading one to behave worse; and these two tendencies approximately cancel out in the long run. Such tendencies needn't only concern permissibility and impermissibility, and the consequence of these countervailing forces needn't involve any improvement in overall moral calibration. Perhaps we sometimes care about morality for its own sake, think things through reasonably well, and then act on the moral truths we thereby discover. And maybe the tools and habits of professional ethics are of great service in this enterprise. For example: One might stop to think about whether one really does have an obligation to go to the polls for the mayoral runoff election, despite a strong preference to stay at home and a feeling that one's vote will make no practical difference to the outcome. And one might decide, through a process of explicit intellectual reasoning (let's suppose by correctly applying Kant's formula of universal law), that one does in fact have the duty to vote on this particular occasion. One rightly concludes that no sufficiently good excuse applies. As a result, one does something one would not have done absent that explicit reasoning: With admirable civic virtue, one overcomes one's contrary inclinations and goes to the polls. But then suppose that also, in equal measure, things go just as badly wrong: When one stops to reflect, what one does is rationalize immoral impulses that one would otherwise not have acted on, generating a superficially plausible patina of argument that licenses viciousness which would have been otherwise avoided. Robespierre convinces himself that forming the Committee of Public Safety really is for the best, and consequently does evil that he would have avoided had he not constructed that theoretical veil. Much less momentously, one might concoct a superficial consequentialist or deontological story on which stealing that library book really is just fine, and so do it. The tools of moral philosophy might empower one all the more in this noxious reasoning.

If this bivalent view of moral reflection is correct, we might expect moral reflection to produce movement away from the moral truth and toward one's inclinations where common opinion is in the right and our inclinations are vicious but not usually acted on, and movement toward the moral truth where common opinion and our inclinations and unreflective behavior are all in the wrong. When widely held norms frustrate our desires, the temptation toward toxic rationalization can arise acutely, and professional ethicists might be especially skilled in such rationalization. But this misuse of reason might be counterbalanced by a genuine noetic desire, which, perhaps especially with the right training, sometimes steers us right when otherwise we would have steered wrong. In the midst of widespread moral misunderstanding that accords with people's pretheoretic intuitions and inclinations, there might be few tools that allow us to escape error besides the tools of explicit moral reasoning.
Again, one might make conditional predictions, depending on what one takes to be the moral truth. For example, if common opinion and one's inclinations favor the permissibility of single-car commuting and yet single-car commuting is in fact impermissible, one might predict more ethicist bus riders. If stealing library books is widely frowned upon and not usually done, though tempting, we might expect ethicists to steal more books.

Conclusion
We decline to choose among these five models. There might be truth in all of them; and still other views are available too. Maybe ethicists find themselves increasingly disillusioned about the value of morality at the same time they improve their knowledge of what morality in fact requires. Or maybe ethicists learn to shield their personal behavior from the influence of their professional reflections, either to improve the objectivity of their reasoning or as a kind of self-defense against the apparent unfairness of being held to higher standards because of their choice of profession. In short, we believe the empirical evidence is insufficient to justify even tentative conclusions. We recommend the issues for further empirical study and for further armchair reflection.

Notes

* Authors' Note: Joshua Rust, Stetson University, and Eric Schwitzgebel, University of California at Riverside. For helpful discussion of earlier drafts, thanks to Gunnar Björnsson, Jon Haidt, Linus Huang, Hagop Sarkissian, and Jen Wright. Correspondence should be sent to: Eric Schwitzgebel, Department of Philosophy, University of California at Riverside, Riverside, CA 92521-0201, Email: eschwitz@ucr.edu or Joshua Rust, Stetson University, Department of Philosophy, 421 North Woodland Boulevard, DeLand, Florida 32723, Phone: 386.822.7581, Email: jrust@stetson.edu.
1 The APA sent the list of names of APA paid registrants to a third party (U.C.R.'s Statistical Consulting Collaboratory) who were not informed of the nature of the research or the significance of the list of names. To them, it was just a meaningless list of names. Separately, we (the 2nd author and a Research Assistant (RA)) generated a list of names of people listed as participants on the APA program. Finally, a 2nd RA generated a mathematical formula unknown to us (but using certain guidelines) that would convert names into long number strings. This 2nd RA then converted the list of program participants into those number strings according to that formula and told the formula to the Collaboratory, who then separately converted their name lists into number strings using that same formula. Finally, the 2nd author received both encrypted lists and wrote a program to check for encrypted name matches between the lists. Names were matched just by last name and first initial to reduce the rate of false negatives due to different nicknames (e.g., Thomas vs. Tom), and common or repeated last names were excluded to prevent false positives, as were names with spaces, mid-capitals, diacritical marks, or in which the person used only an initial as the first name. Although the APA Pacific Division generously supplied the encrypted data, this research was neither solicited by nor conducted on behalf of the APA or the Pacific Division.
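As a rough illustration of this kind of privacy-preserving matching (a sketch only: the study's actual formula was an undisclosed number-string encoding, whereas a keyed hash is assumed here, and all names and the key are made up), two parties sharing a secret encoding can have their lists compared without the comparer ever seeing real names:

    import hashlib
    import hmac

    SECRET_KEY = b"known-only-to-the-encoding-parties"  # stands in for the RA's secret formula

    def encode(names):
        # Encode (last name, first initial) pairs as keyed-hash digests.
        # Matching on last name plus first initial reduces false negatives
        # from nicknames (e.g., Thomas vs. Tom), as in the study.
        digests = set()
        for last, initial in names:
            normalized = f"{last.lower()}|{initial.lower()}"
            digests.add(hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest())
        return digests

    def count_matches(encoded_a, encoded_b):
        # The comparer sees only opaque digest strings, never real names.
        return len(encoded_a & encoded_b)

    # Hypothetical example: program participants vs. paid registrants.
    program_participants = encode([("Smith", "J"), ("Jones", "A")])
    paid_registrants = encode([("Smith", "J"), ("Lee", "K")])
    print(count_matches(program_participants, paid_registrants))  # -> 1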
2 Survey recipients were among the people whose voting and email responsiveness
we had examined in the studies reported above. The other observational measures
were collected in the course of the survey study.

References

Brennan, J. (2011). The Ethics of Voting. Princeton, NJ: Princeton University Press.
Confucius. (5th c. BCE/2003). Analects. (E. Slingerland, trans.). Indianapolis, IN: Hackett Publishing Company.
Cooper, J. (2007). Cognitive Dissonance. London: Sage.
Ditto, P. H., and Liu, B. (2012). Deontological dissonance and the consequentialist crutch. In M. Mikulincer and P. R. Shaver (eds), The Social Psychology of Morality. Washington, DC: American Psychological Association.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Evanston, IL: Row, Peterson.
Greene, J. D. (2008). The secret joke of Kant's soul. In W. Sinnott-Armstrong (ed.), Moral Psychology, vol. 3. Cambridge, MA: MIT Press.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–34. doi:10.1037/0033-295X.108.4.814
Haidt, J. (2012). The Righteous Mind. New York: Pantheon.
Hauser, M. D. (2006). Moral Minds. New York: Ecco/HarperCollins.
Hirstein, W. (2005). Brain Fiction. Cambridge, MA: MIT Press.
Mead, N. L., Alquist, J. L., and Baumeister, R. F. (2010). Ego depletion and the limited resource model of self-control. In R. Hassin, K. Ochsner, and Y. Trope (eds), Self-control in Society, Mind, and Brain. Oxford: Oxford University Press.
Merritt, A. C., Effron, D. A., and Monin, B. (2010). Moral self-licensing: When being good frees us to be bad. Social and Personality Psychology Compass, 4, 344–57. doi:10.1111/j.1751-9004.2010.00263.x
Mikhail, J. (2011). Elements of Moral Cognition. Cambridge: Cambridge University Press.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Rust, J., and Schwitzgebel, E. (2013). Ethicists' and non-ethicists' responsiveness to student emails: Relationships among self-reported behavior, expressed normative attitude, and directly observed behavior. Metaphilosophy, 44, 350–71. doi:10.1111/meta.1203
Schnall, S., Haidt, J., Clore, G. L., and Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34, 1096–109. doi:10.1177/0146167208317771
Schwitzgebel, E. (2009). Do ethicists steal more books? Philosophical Psychology, 22, 711–25. doi:10.1080/09515080903409952
Schwitzgebel, E. (2013). Are ethicists any more likely to pay their registration fees at professional meetings? Economics & Philosophy, 29, 371–80.
Schwitzgebel, E., and Rust, J. (2009). The moral behaviour of ethicists: Peer opinion. Mind, 118, 1043–59. doi:10.1093/mind/fzp108
Schwitzgebel, E., and Rust, J. (2010). Do ethicists and political philosophers vote more often than other professors? Review of Philosophy and Psychology, 1, 189–99. doi:10.1007/s13164-009-0011-6
Schwitzgebel, E., and Rust, J. (in press). The moral behavior of ethics professors: Relationships among expressed normative attitude, self-described behavior, and directly observed behavior. Philosophical Psychology.
Schwitzgebel, E., Rust, J., Huang, L. T., Moore, A. T., and Coates, J. (2012). Ethicists' courtesy at philosophy conferences. Philosophical Psychology, 25, 331–40. doi:10.1080/09515089.2011.580524
Stohr, K. (2012). On Manners. New York: Routledge.
Part Two

Moral Groundings
6

Pollution and Purity in Moral and Political Judgment

Yoel Inbar and David Pizarro*

Disgust, an emotion that most likely evolved to keep us away from noxious substances and disease, seems especially active in our moral lives. People report feeling disgust in response to many immoral acts (e.g., Rozin et al. 1999), make more severe moral judgments when feeling disgusted (e.g., Wheatley and Haidt 2005), and are more likely to view certain acts as immoral if they have a tendency to be easily disgusted (Horberg et al. 2009). Yet, despite the wealth of evidence linking disgust and morality, the reason for the link remains unclear. This may be because the bulk of empirical work on the topic has been aimed at simply demonstrating that disgust and moral judgment are connected: a claim that, given the influence of rationalist models of moral judgment such as Kohlberg's (1969), is novel and surprising. Fewer researchers have attempted to explain why disgust and moral judgment should be so connected (for recent exceptions, see Kelly 2011 and Tybur et al. 2012). Here, we present an attempt to do so.
Our primary claim is that disgust functions as part of a general motivational system that evolved to keep individuals safe from disease. As such, disgust motivates negative evaluations of acts that are associated with a threat of contamination (e.g., norm violations pertaining to food and sex); negative attitudes toward unfamiliar groups who might pose the threat of contamination through physical contact (e.g., outgroups characterized by these norm violations, or who are unfamiliar); and greater endorsement of certain social and political attitudes that minimize contamination risk (such as increased sexual conservatism, reduced contact between different social groups, and hostility toward foreigners). This account provides a theoretical rationale for the observed relationship between disgust and moral judgment, and it is able to unify findings from two literatures that, until now, have been largely separate: research examining the role of disgust in moral judgment, and research examining the effects of pathogen threat on political and social attitudes. One of the conclusions to emerge from this review is that the link between disgust and morality may be different from what has been assumed by many researchers. Rather than a response to moral violations per se, disgust may instead be linked more generally to judgments about acts, individuals, and groups that pose a pathogen threat.

Disgust and moral judgment: Three claims

In order to defend this conclusion, it is necessary to first review the evidence linking disgust to moral judgment, and to distinguish between the various ways disgust has been hypothesized to play a role in moral judgment. We have argued previously (Pizarro et al. 2011) that researchers have made three distinct claims regarding the relationship between disgust and moral judgment: (1) that the emotion of disgust is a consequence of perceiving moral violations; (2) that disgust serves to amplify judgments of immorality; and (3) that disgust acts as a moralizer, pushing previously non-moral issues into the moral domain. These claims are not mutually exclusive; all three could be true. However, there are varying degrees of empirical evidence to support each.

According to the "disgust as consequence" view, disgust is the emotional output of a certain kind of moral appraisal. For instance, researchers have found that disgust is elicited by violations of moral purity (Rozin et al. 1999), taboo moral violations (Gutierrez and Giner-Sorolla 2007), or being treated unfairly (Chapman et al. 2009). On this view, disgust might drive reactions to immorality (for example, by motivating people to reject or distance themselves from those seen as immoral) but does not play a causal role in determining whether an action is seen as immoral.
In contrast, the "disgust as amplifier" view characterizes disgust as a causal influence on moral judgment, arguing that the presence of disgust during a moral evaluation makes wrong things seem even more wrong. This is a stronger claim regarding the role of disgust, and has been made by researchers who have experimentally manipulated disgust independently of the act being evaluated, for example by inducing disgust with a post-hypnotic suggestion (Wheatley and Haidt 2005), with a foul odor, or with disgusting film clips (Schnall et al. 2008).

Finally, the strongest causal claim regarding the influence of disgust on moral judgment is that of "disgust as moralizer." On this view, morally neutral acts can enter the moral sphere by dint of their being perceived as disgusting. For instance, an act (such as smoking) can move from "unhealthy" to "immoral" if reliably accompanied by the emotion of disgust. This claim has the least empirical support of the three, although it is consistent with the finding that morally dumbfounded participants defend their self-admittedly irrational moral judgments with an appeal to the disgusting nature of an act (Haidt and Hersh 2001).

Our argument here relies primarily on evidence for the disgust-as-consequence and disgust-as-amplifier views, for which the evidence is strongest (see Pizarro et al. 2011). In particular, the view we will defend here is a combination of these two approaches that takes into account additional research on the specificity of these effects: that disgust is more likely to arise and amplify judgments within a particular domain (viz., when the threat of pathogens is involved).

Why disgust?

Why should disgust be involved in moral judgment at all, whether as a consequence, amplifier, or moralizer? Theoretical justifications have come largely in the form of broad statements that disgust is an emotional impulse to reject certain objects, individuals, or ideas that, for a variety of reasons, happen to overlap with objects, individuals, and ideas that are morally objectionable. For example, Schnall et al. (2008, p. 1097) write that disgust is an emotion of social rejection that is misattributed to many targets of judgment. Cannon et al. (2011, p. 326) write that disgust is a reaction to offensive objects as well as offensive actions, and Koleva et al. (2012) describe disgust as a response to social contaminants. Wheatley and Haidt (2005, p. 780) write that disgust is a kind of information that influences moral judgments.
Many of these theoretical explanations are simply restatements of the link between disgust and morality, and do not offer much by way of explanation for it. Rozin et al. (2008, p. 764) offer a more detailed argument, stating that disgust at immoral behavior results from an opportunistic accretion of new domains of elicitors to a rejection system already in place; in other words, that moral disgust piggybacks on an older, more basic food rejection response. Along the same lines, Kelly (2011) argues that disgust first evolved to motivate food rejection and pathogen avoidance, and was later co-opted to motivate moral judgment and intergroup attitudes. Finally, Tybur et al. (2012) propose an entirely different account, arguing that disgust in response to immorality is an evolved solution to a social coordination problem: namely, the need to coordinate condemnation of specific actions with others. On this account, expressions of disgust function as condemnation signals to others in the vicinity.

All these accounts point to the possibility that moral judgments may be built on more simple psychological systems of avoidance and rejection. But why should the emotion of disgust in particular be involved in reactions to immorality? Kelly (2011) argues that disgust has two features that make it particularly suited to this role: (1) it entails a strong rejection response; and (2) its antecedents (i.e., elicitors) are, at least in part, learned (and therefore flexible). However, humans (and other animals) also show non-disgust-based aversive responses to unpleasant stimuli such as extreme heat or cold, loud or high-pitched sounds, dangerous predators, and so on. In fact, such responses are phylogenetically older than disgust (which is found in its full form only in humans) and are quite flexible, in that people (and other animals) can readily acquire aversions to novel stimuli (Staats and Staats 1958; Tully and Quinn 1985). In contrast, the elicitors of core disgust are in fact fairly circumscribed when compared to these other emotional responses, and tend to be limited to food, certain animals, and human body products (Rozin et al. 2008). If moral judgments needed to be built on top of an existing affective response, a more basic rejection system would thus be an equally if not more plausible candidate. Similarly, any number of emotions (such as anger) could be used to signal moral condemnation. Why would natural selection have favored disgust, an emotion that likely had its origins in a gustatory response to potential oral contamination, to serve this purpose?

It turns out that there is a good reason that disgust, rather than a more general-purpose rejection response, would have become associated with some moral violations: namely, that disgust evolved to motivate individuals not only to avoid ingesting (or touching) poisons and contaminants, but also to distance themselves from people who posed a risk of pathogen transmission. Schaller and colleagues (Faulkner et al. 2004; Park et al. 2003; Schaller and Duncan 2007) have argued that, over the course of human evolution, people developed a "behavioral immune system" that functioned as a first line of defense against exposure to pathogens or parasites. According to this theory, individuals who show cues of infection or disease should trigger the behavioral immune system, leading to disgust and, consequently, rejection or avoidance of that individual. Because this system would have evolved independently of any explicit knowledge about pathogens, its disease detection mechanism would need to be heuristic in nature: most likely, something like any significant anomaly in an individual's physical appearance. This means that the behavioral immune system can be expected to respond to any individuals who deviate from normative physical appearance, regardless of whether they actually pose a contagion risk (Schaller and Park 2011). Likewise, individuals seen as engaging in unusual (i.e., non-normative) practices regarding food, cleanliness, and sex (activities that carry an especially high risk of pathogen transmission) should also be likely to evoke disgust and rejection.

Finally, strangers (i.e., members of other groups or tribes) would have been especially likely to harbor novel (and therefore particularly dangerous) infectious agents. Encountering such individuals should thus also activate the behavioral immune system, motivating hostility, rejection, and the accompanying emotion of disgust. Indeed, individuals in hunter-gatherer cultures are often intensely hostile to strangers. The anthropologist Margaret Mead wrote that most primitive tribes feel that if you run across one of these subhumans from a rival group in the forest, the most appropriate thing to do is bludgeon him to death (as cited in Bloom 1997, p. 74). Likewise, the geographer and anthropologist Jared Diamond wrote that for New Guinean tribesmen, "to venture out of one's territory to meet [other] humans, even if they lived only a few miles away, was equivalent to suicide" (Diamond 2006, p. 229).
Importantly, this argument does not assume that all or even most of the individuals or groups evoking disgust and rejection actually pose a risk of infection. But because the risks of failing to detect a contagious individual (serious illness and possibly premature death) greatly outweighed the cost of wrongly identifying a harmless individual as contagious (the foregone benefits of a positive interaction), one would expect the behavioral immune system to tend toward hypervigilance (Schaller and Duncan 2007; Schaller and Park 2011). Cues that might be associated with the risk of contamination would have become heuristics, whose mere presence would trigger disgust and rejection, but which could easily be overgeneralized.

The behavioral immune system and social attitudes

Disease risk and attitudes toward the obese and disabled


One prediction that follows from the behavioral immune system account is that heightened perceptions of disease risk, either chronic (i.e., dispositional) or situational, should be associated with more negative attitudes toward individuals (heuristically) associated with pathogen threat. This appears to be the case: people who are especially worried about contagious disease (as measured by a subscale of the Perceived Vulnerability to Disease scale; Faulkner et al. 2004; Park et al. 2003) are also more likely to show negative attitudes toward obese people (Park et al. 2007), and people who read bogus news articles about contagious diseases showed more negative associations with physically disabled people (as measured by the Implicit Association Test; Greenwald et al. 1998) than did those who read news articles about other health topics (Park et al. 2003). Of course, neither the obese nor the disabled are likely to actually pose a disease risk, but a perceptual system that responds to significant anomalies in appearance would likely be triggered by these individuals.

Disgust and attitudes toward homosexuals


A number of researchers have found that disgust tends to be related to harsher attitudes toward gay people. Dasgupta et al. (2009) and Inbar et al. (2012) found that induced disgust led to more negative implicit and explicit evaluations of gay men, respectively. Inbar et al. (2009) found that dispositional sensitivity to disgust was associated with more negative implicit evaluations of gay people, and Terrizzi et al. (2010) found a relationship between disgust sensitivity and explicit evaluations of gay people.

Disease risk and attitudes toward foreigners


Concern about contagious diseases is also associated with negativity toward foreign outgroups, especially unfamiliar ones. For instance, in one study participants who were shown a slideshow highlighting disease and pathogen threats were more inclined (compared to a control group who were shown a slideshow about non-disease threats) to prefer familiar (e.g., Polish) over unfamiliar (e.g., Mongolian) immigrant groups (Faulkner et al. 2004). This claim also finds support from the finding that women in their first trimester of pregnancy (during which immune function is suppressed) are more ethnocentric and xenophobic than women in their second and third trimesters (Navarrete et al. 2007).

Other sociopolitical attitudes


There is also evidence that differences in the strength of the behavioral immune system are related to sociopolitical attitudes more broadly. Individuals who feel more vulnerable to disease consistently provide more conservative responses on a variety of measures tapping social conservatism (Terrizzi et al. 2013), such as right-wing authoritarianism (Altemeyer 1988), social dominance orientation (Pratto et al. 1994), and vertical collectivism (Singelis et al. 1995). Likewise, at the level of group differences, geographic variation in parasite and pathogen prevalence has been found to be associated with variation in the strength of conservative social attitudes in particular cultures. Across 71 world regions, greater historic disease prevalence is associated with more restricted (i.e., conservative) sexual attitudes and lower openness to experience (Schaller and Murray 2008); and across countries and US states, current disease prevalence is associated with greater religiosity and stronger family ties (Fincher and Thornhill 2012). Like the intergroup attitudes described above, these attitudes, personality differences, and social preferences all entail greater separation between groups, less experimentation with novel cultural and sexual practices, and less contact with strangers. Although such attitudes clearly have costs (e.g., reduced opportunities for trade and slower adoption of potentially useful cultural innovations), they also have benefits, especially in environments where pathogen threat is high. Less contact with outgroups, lower mobility, and conservation of existing social practices (especially food- and sex-related) minimize exposure to novel, potentially dangerous pathogens.

The behavioral immune system and moral judgment

Disgust is the emotion most closely linked to the behavioral immune system,
in that it motivates individuals to distance themselves from people or groups
seen (implicitly or explicitly) as contaminated or contagious (Oaten et al.
2009). Is it possible that disgust is implicated in moral judgment for similar
reasons: that is, because it arises as a reaction to perceived physical contagion
threats? The most common disgust-eliciting contagion threats involve sex,
food, and outgroups (Oaten et al. 2009). If disgust is involved in moral
judgment primarily for violations having to do with contagion threats, moral
disgust should largely be limited to these specific domains.
This prediction comes close to the view endorsed by Haidt and Graham
(2007) in their description of the moral domain of purity/sanctity. They write
that moral disgust is attached "at a minimum to those whose appearance
(deformity, obesity, or diseased state), or occupation (the lowest castes in caste-
based societies are usually involved in disposing of excrement or corpses)
makes people feel queasy" (p. 106). Certainly, on the basis of the behavioral
immune system literature one would expect avoidance of these groups.
However, Haidt and Graham expand their argument, proposing that the moral
domain of purity/sanctity includes a metaphorical conception of impurity as
well, such that disgust (and judgments of immorality) is also evoked by those
who seem "ruled by carnal passions such as lust, gluttony, greed, and anger"
(p. 106). But how much empirical evidence is there for this more extended,
metaphorical role for disgust in moral judgment? In the next section, we
examine the research bearing on this question.

Which moral violations elicit disgust?


A number of studies have examined people's reactions to moral violations, often
by having them read about immoral or morally neutral actions and asking
them to report their emotional and cognitive evaluations. Results have reliably
shown a link between certain types of violations and disgust reactions.

Sex
Many of the studies showing disgust at moral violations have asked participants
to evaluate sexual practices, including homosexuality, incest, and unusual
forms of masturbation. Haidt and Hersh (2001), for example, asked liberal and
conservative undergraduates to evaluate examples of gay and lesbian sex, unusual
masturbation (e.g., a woman who masturbates while holding her favorite teddy
bear), and consensual sibling incest. Haidt et al. (1993) did not directly measure
disgust responses, but two of the three behaviors that they expected a priori to
elicit disgust involved sex (having sex with a dead chicken and then consuming
it, and consensual sibling incest). Perhaps the most commonly studied moral
violation of this class has been incest, an act known to elicit disgust reliably.
For instance, Rozin et al. (1994) asked participants about their responses to
incest in general, Royzman et al. (2008) asked participants to evaluate parent-
child incest, Gutierrez and Giner-Sorolla (2007) asked about sibling incest, and
Horberg et al. (2009) used the same sibling incest vignette originally used by
Haidt et al., along with the chicken sex vignette from the same source.

Repugnant foods
Consumption of repugnant foods has been another commonly studied type of
moral violation that appears to reliably elicit disgust. For instance, both Haidt
et al. (1993) and Russell and Giner-Sorolla (2011) used a scenario in which a
family ate their deceased dog. Similarly, Gutierrez and Giner-Sorolla (2007),
and Russell and Giner-Sorolla (2011) presented participants with a scenario in
which a scientist grew and consumed a steak made of human muscle cells.

Other moral violations


Researchers have also uncovered a few moral violations that do not involve sex
or food, but that nonetheless appear to elicit disgust (for a recent review, see
Chapman and Anderson 2013). In one notable example, Chapman et al. (2009)
examined reactions to people who made unfair offers in an ultimatum game.
This economic game involves two parties: a "proposer" and a "responder."
The proposer suggests a division of a sum (in the current study, $10) between
the two, and the responder can either accept this suggestion or reject it (in
which case neither party receives anything). In this study, the proposer was
(unbeknownst to the participants) a computer program that sometimes
made very unfair offers (i.e., $9 for the proposer and $1 for the responder).
Both participants' self-reports and their facial expressions showed that they
felt disgusted by very unfair offers, and the more disgusted they were, the
more likely they were to reject the offer. Similarly, when people read about
unfairness (e.g., someone cheating at cards), they showed increased activation
in a facial muscle (the levator) involved in the expression of disgust (Cannon
et al. 2011).
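To make the payoff structure of the game concrete, here is a minimal sketch in Python. It is our illustration only: the identifiers Offer and play_round are hypothetical, and the study itself used a scripted computer proposer rather than code like this.

```python
# Minimal sketch of the ultimatum game's payoff structure, assuming the
# $10 stake described above. All names here (Offer, play_round) are
# illustrative, not taken from the original study materials.
from dataclasses import dataclass

@dataclass
class Offer:
    total: int          # the sum to be divided (here, $10)
    to_responder: int   # the portion offered to the responder

    @property
    def to_proposer(self) -> int:
        return self.total - self.to_responder

def play_round(offer: Offer, responder_accepts: bool) -> tuple:
    """Return (proposer_payoff, responder_payoff) for one round."""
    if responder_accepts:
        return (offer.to_proposer, offer.to_responder)
    return (0, 0)  # a rejection leaves both parties with nothing

# The "very unfair" offer from the study: $9 for the proposer, $1 for the
# responder. Rejecting it costs the responder the $1 they would otherwise
# keep, which is why emotion-driven rejections are theoretically notable.
unfair = Offer(total=10, to_responder=1)
print(play_round(unfair, responder_accepts=True))   # (9, 1)
print(play_round(unfair, responder_accepts=False))  # (0, 0)
```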
Other studies sometimes cited as showing that disgust can occur as a response
to general moral violations are harder to interpret. Some neuroimaging studies
have demonstrated overlapping regions of neural activation (as measured by
fMRI) for physically disgusting acts and acts of moral indignation (Moll
etal. 2005). However, the stimuli used in the study to evoke moral indignation
often contained basic, physical elicitors of disgust (e.g., You took your mother
out to dinner. At the restaurant, she saw a dead cockroach floating on the
soap pan.). The overlapping brain regions found when participants read the
indignation statements and the pure disgust statements (e.g., One night
you were walking on a street. You saw a cat eating its own excrement) could
therefore be due to the fact that both statement types contain powerful elicitors
of basic disgust.
One study has found that people report feeling disgust in response to
pictures that depict violations such as ethnic cleansing or child abuse (but
do not show physical disgust elicitors; Simpson et al. 2006). However,
self-reported disgust in this study was highly correlated with self-reported
anger, leaving open the possibility that participants were using the term in
a metaphorical rather than literal sense (see Nabi 2002). Similarly, young
children agree that moral violations such as being very mean to someone
can be described as "disgusting," and that a disgust face can "go with" these
violations (Danovitch and Bloom 2009). However, in these studies other
negative emotion words and faces were not possible responses, leaving open
the possibility that children simply endorsed the one negatively valenced
emotion available to them.

Summary
Most disgusting moral violations involve unusual sex or foodstuffs (or, in
the case of the chicken sex vignette, both). This is what one would expect if
disgust-evoking moral violations activated the behavioral immune system and
negative evaluations of these acts were driven by avoidance in the same way
that behavioral immune system-relevant intergroup and political attitudes are.
The pattern of data is also compatible with the first part of the view advanced by
Haidt and Graham (2007): that disgust functions as the guardian of physical
purity. However, empirical support for the second half of their view, that
violations of spiritual purity also evoke disgust, is lacking.
Furthermore, some findings are explained poorly by both accounts:
namely, that unfair or selfish behavior also evokes disgust, at least under some
circumstances. Such behavior is neither straightforwardly related to pathogen
threats, nor to physical or spiritual purity. Of course, these findings are from
only two studies, and further research is necessary to determine the robustness
and generality of the relationship between witnessing unfairness or selfishness
and disgust. One (admittedly speculative) possibility is that cheaters and non-
reciprocators are seen as an outgroup that evokes a distancing motivation in
the same way that groups seen as unfamiliar or unclean do.

Induced disgust and harsher moral judgment

A number of studies have experimentally induced disgust (e.g., using bad
smells, dirty surroundings, or disgusting film clips), and examined the effects
of this extraneously induced disgust on people's moral judgments. In the
terminology used by Pizarro, Inbar, and Helion (2011), these studies have
been used to test the "disgust as amplifier" and/or "disgust as moralizer"
hypotheses. Generally, these studies have found that incidental disgust makes
moral judgments harsher for a wide range of infractions, including incest,
eating one's dog, bribery, stealing library books, falsifying one's resume, and
masturbating with a kitten (Schnall et al. 2008; Wheatley and Haidt 2005).
Schnall et al. examined whether the type of moral infraction (purity-violating,
e.g., dog-eating or sex between first cousins, vs. non-purity-violating, e.g.,
falsifying one's resume) moderated the effects of induced disgust on moral
judgment and found that it did not. However, Horberg et al. (2009) found
that inducing disgust (as opposed to sadness) had a stronger amplification
effect on judgments of purity violations (such as sexual promiscuity) than
harm/care violations (such as kicking a dog). Thus, there is conflicting
evidence on whether inducing disgust selectively affects certain kinds of moral
judgments.
However, studies that demonstrate the effects of experimental inductions of
disgust on moral evaluation do not serve as evidence that disgust is naturally
elicited by moral violations. An analogy to research on the effects of emotion on
judgment is useful here. Research has shown that extraneously manipulating
emotions such as fear, sadness, anger, or even disgust can affect a wide range
of judgments and decisions (Loewenstein and Lerner 2003). But that does
not mean that these emotions naturally arise when making these judgments.
No one would conclude that because disgust makes one more willing to
sell an item that one has been given (Lerner et al. 2004), disgust therefore
also arises naturally when one is deciding whether to sell or keep an item.
Similarly, showing that disgust affects judgments of certain moral violations
is not informative about whether disgust is a naturally occurring response to
witnessing such violations.

The effects of cleanliness on moral and political judgment

If moral and political judgments are motivated at least partly by the threat
of contamination, drawing attention to this threat by asking participants to
wash their hands (or perhaps even by simply exposing them to washing-
related stimuli) should have similar effects on judgment as other pathogen
primes. There is some evidence for this: Helzer and Pizarro (2011) found that
participants who were standing next to a hand-sanitizer dispenser described
themselves as more politically conservative, and that those who had just used an
antiseptic hand wipe were more negative in their moral judgments of unusual
sexual behaviors (e.g., consensual incest between half-siblings), but not in their
judgments of putatively immoral acts that did not involve sexuality. Similarly,
Zhong et al. (2010) demonstrated that hand-washing made participants more
conservative (i.e., more negative) on a number of social issues related mainly
to sexual morality (e.g., casual sex, pornography, and adultery).
However, researchers who have adopted a more metaphorical notion
of purity have made exactly the opposite prediction regarding the effects of
cleanliness on moral judgment, arguing that if feeling clean is psychologically
the opposite of feeling disgusted, making cleanliness salient should reduce
feelings of disgust and therefore make moral judgments less harsh. There is
also some evidence for this view: Priming participants with purity-related
words (e.g., "pure," "immaculate," and "pristine") made them marginally less
harsh when judging moral violations (Schnall et al. 2008, Study 1), and asking
participants to wash their hands after watching a disgusting film clip attenuated
the effects of the clip on moral judgments (Schnall et al., Study 2).
How to reconcile these conflicting results? First, it is likely that in Schnall
et al.'s (2008) Study 2, in which all participants watched a film clip showing a
man crawling into a filthy toilet, physical contamination threats were salient
for all participants. When contamination is salient, hand-washing may have
a palliative effect, whereas when contamination is not already salient, hand-
washing may instead prime pathogen concerns. However, this still leaves the
results of Schnall et al.'s Study 1 unexplained. It is possible that purity-related
words do not prime physical pathogen threats. Such simple cognitive primes
may simply not be enough to engage a motivational system built to avoid
pathogens, but may be effective in reminding individuals of other cleanliness-
related concepts. It is also possible that this single, marginally significant result
from a low-powered (total N = 40) study is anomalous. This is a question that
can only be settled by future research.
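For readers who want a sense of what "low-powered" means here, the following sketch (ours, not the original authors') runs a standard power calculation. Only the total N of 40 comes from the study itself; the two-groups-of-20 split and the effect size of Cohen's d = 0.4 are stipulated for illustration.

```python
# Illustrative power calculation for a two-condition design with a total
# N of 40 (assumed 20 per condition), alpha = .05, and a hypothetical
# effect of Cohen's d = 0.4. Requires the statsmodels package.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Probability of detecting a true effect of d = 0.4 with 20 per group:
power = analysis.power(effect_size=0.4, nobs1=20, alpha=0.05)
print(f"Power with 20 per condition: {power:.2f}")  # roughly 0.22

# Per-condition sample size needed for the conventional 80% power:
n_needed = analysis.solve_power(effect_size=0.4, power=0.8, alpha=0.05)
print(f"Needed per condition: {n_needed:.0f}")  # roughly 99
```

On these stipulated assumptions, the chance of detecting a true effect is well under one in four, which is why a single, marginally significant result from such a study warrants caution.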
Putting this (possibly anomalous) result aside, the account we propose here
offers a parsimonious explanation of why disgust and its opposite, cleanliness,
would show parallel effects on people's moral judgments and sociopolitical
attitudes. Because both disgust and hand-washing make the threat of physical
contamination salient, their effects on certain kinds of moral and sociopolitical
judgments should be similar. In contrast, a more metaphorical view of the role
of disgust in moral judgment would, as outlined above, predict that physical
cleansing should make moral judgments less harsh (and, possibly, make
attitudes toward sexual morality-related social issues more tolerant). This, of
course, is not what the bulk of the evidence shows, although more research is
needed to reconcile the conflicting findings in this area.

Disgusting but permissible actions

One potential objection to the account we defend here is that there are many
behaviors that are judged by most as disgusting yet morally permissible, such
as picking one's nose in private (see also Royzman et al. 2009). However, our
argument does not require that all disgusting acts be seen as immoral (or, for
that matter, that all immoral acts be seen as disgusting). Rather, we argue that
reactions to certain moral violations (primarily those involving sex or food),
certain sociomoral attitudes (primarily toward individuals seen as physically
abnormal, norm-violating, or foreign), and certain political attitudes (primarily
those related to sexual conservatism, reduced contact between different social
groups, and hostility toward outsiders) rely on a shared motivational system;
that this system evolved due to the adaptive benefits of responding to disease
or contamination threats with rejection and avoidance; and that its primary
motivating emotion is disgust.
This account allows, but does not require, that disgust might extend to other
kinds of moral violations as well (as we have described above, evidence for
such extension is scarce). One way that such an extension could happen is that
disgust may become attached to some behaviors for which there already exist
non-moral proscriptive norms (e.g., smoking or eating meat; Nichols 2004). In
these cases, the pairing of disgust with (or the tendency to be disgusted by) the
behavior might cause it to be pushed into the moral domain, especially if the
behavior can be construed as harmful (see Rozin 1999). Such a moralization
process might be observed with longitudinal data comparing moral attitudes
toward disgusting and non-disgusting behaviors that either have an existing
(but non-moral) proscriptive norm or do not. If our account
is correct, one would expect moralization over time to occur only in the
disgusting behaviors for which there are already conventional norms in place.

Conclusion

Reviewing the evidence linking moral violations and disgust shows that with
a few exceptions, the moral violations that elicit disgust involve food, sex, or
both. This is consistent with the view that seeing such acts as immoral and
feeling disgust in response to them result from activation of the behavioral
immune system, an evolved motivational system that responds to physical
contamination threats. We believe that this account parsimoniously explains
disgust's connection with moral judgments, sociomoral attitudes, and political
beliefs. It also suggests that the link between disgust and morality may be
different from what has been assumed by many researchers.
Although there is an empirical connection between disgust and seeing a
variety of acts as immoral, this may be due to the specific content of the acts
in question rather than to a more general relationship between disgust and
judgments of immorality. A great deal of research points to a reliable connection
between disgust and acts, individuals, or groups that are threatening because of
the potential for physical contamination, whereas there is as yet little evidence
that disgust is a reaction to immoral behaviors per se.

Note

* Authors' Note: Yoel Inbar, Tilburg University, and David Pizarro, Cornell University.
Corresponding Author: Yoel Inbar, Department of Social Psychology, Tilburg
University, Email: yinbar@uvt.nl.

References

Altemeyer, R. A. (1998). The other authoritarian personality. In M. P. Zanna (ed.), Advances in Experimental Social Psychology (Vol. 30). New York: Academic Press, pp. 47–91.
Bloom, H. (1997). The Lucifer Principle: A Scientific Expedition into the Forces of History. New York: Atlantic Monthly Press.
Bloom, H. (1997). The Lucifer Principle: A Scientific Expedition into the Forces of
History. New York: Atlantic Monthly Press.
Cannon, P. R., Schnall, S., and White, M. (2011). Transgressions and expressions: Affective facial muscle activity predicts moral judgments. Social Psychological and Personality Science, 2, 325–31.
Chapman, H. A., and Anderson, A. K. (2013). Things rank and gross in nature: A review and synthesis of moral disgust. Psychological Bulletin, 139, 300–27.
Chapman, H. A., Kim, D. A., Susskind, J. M., and Anderson, A. K. (2009). In bad taste: Evidence for the oral origins of moral disgust. Science, 323, 1222–6.
Danovitch, J., and Bloom, P. (2009). Children's extension of disgust to physical and moral events. Emotion, 9, 107–12.
Dasgupta, N., DeSteno, D. A., Williams, L., and Hunsinger, M. (2009). Fanning the flames of prejudice: The influence of specific incidental emotions on implicit prejudice. Emotion, 9, 585–91.
Diamond, J. M. (2006). The Third Chimpanzee: The Evolution and Future of the Human Animal. New York: Harper Perennial.
Faulkner, J., Schaller, M., Park, J. H., and Duncan, L. A. (2004). Evolved disease-avoidance mechanisms and contemporary xenophobic attitudes. Group Processes & Intergroup Relations, 7, 333–53.
Fincher, C. L., and Thornhill, R. (2012). Parasite-stress promotes in-group assortative sociality: The cases of strong family ties and heightened religiosity. Behavioral and Brain Sciences, 35, 61–79.
Graham, J., Haidt, J., and Nosek, B. (2009). Liberals and conservatives use different sets of moral foundations. Journal of Personality and Social Psychology, 96, 1029–46.
Greenwald, A. G., McGhee, D. E., and Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464–80.
Gutierrez, R., and Giner-Sorolla, R. S. (2007). Anger, disgust, and presumption of harm as reactions to taboo-breaking behaviors. Emotion, 7, 853–68.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–34.
Haidt, J., and Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20, 98–116.
Haidt, J., and Hersh, M. (2001). Sexual morality: The cultures and emotions of conservatives and liberals. Journal of Applied Social Psychology, 31, 191–221.
Helzer, E. G., and Pizarro, D. A. (2011). Dirty liberals! Reminders of physical cleanliness influence moral and political attitudes. Psychological Science, 22, 517–22.
Horberg, E. J., Oveis, C., Keltner, D., and Cohen, A. B. (2009). Disgust and the moralization of purity. Journal of Personality and Social Psychology, 97, 963–76.
Inbar, Y., Pizarro, D., Knobe, J., and Bloom, P. (2009). Disgust sensitivity predicts intuitive disapproval of gays. Emotion, 9, 435–9.
Inbar, Y., Pizarro, D. A., and Bloom, P. (2009). Conservatives are more easily disgusted. Cognition & Emotion, 23, 714–25.
Kelly, D. (2011). Yuck! The Nature and Moral Significance of Disgust. Cambridge, MA: The MIT Press.
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (ed.), Handbook of Socialization Theory and Research. Chicago, IL: Rand McNally, pp. 347–480.
Koleva, S. P., Graham, J., Iyer, R., Ditto, P. H., and Haidt, J. (2012). Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes. Journal of Research in Personality, 46, 184–94.
Lerner, J. S., Small, D. A., and Loewenstein, G. (2004). Heart strings and purse strings: Carryover effects of emotions on economic decisions. Psychological Science, 15, 337–41.
Loewenstein, G., and Lerner, J. S. (2003). The role of affect in decision making. In R. J. Davidson, K. R. Scherer, and H. H. Goldsmith (eds), Handbook of Affective Science. New York: Oxford University Press, pp. 619–42.
Moll, J., de Oliveira-Souza, R., Moll, F. T., Ignácio, F. A., Bramati, I. E., Caparelli-Dáquer, E. M., and Eslinger, P. J. (2005). The moral affiliations of disgust: A functional MRI study. Cognitive and Behavioral Neurology, 18, 68–78.
Nabi, R. L. (2002). The theoretical versus the lay meaning of disgust: Implications for emotion research. Cognition and Emotion, 16, 695–703.
Navarrete, C. D., Fessler, D. M. T., and Eng, S. J. (2007). Elevated ethnocentrism in the first trimester of pregnancy. Evolution and Human Behavior, 28, 60–5.
Nichols, S. (2004). Sentimental Rules: On the Natural Foundations of Moral Judgment. New York: Oxford University Press.
Oaten, M., Stevenson, R. J., and Case, T. I. (2009). Disgust as a disease-avoidance mechanism. Psychological Bulletin, 135, 303–21.
Park, J. H., Faulkner, J., and Schaller, M. (2003). Evolved disease-avoidance processes and contemporary anti-social behavior: Prejudicial attitudes and avoidance of people with physical disabilities. Journal of Nonverbal Behavior, 27, 65–87.
Park, J. H., Schaller, M., and Crandall, C. S. (2007). Pathogen-avoidance mechanisms and the stigmatization of obese people. Evolution and Human Behavior, 28, 410–14.
Pizarro, D. A., Inbar, Y., and Helion, C. (2011). On disgust and moral judgment. Emotion Review, 3, 267–8.
Pratto, F., Sidanius, J., Stallworth, L. M., and Malle, B. F. (1994). Social dominance orientation: A personality variable predicting social and political attitudes. Journal of Personality and Social Psychology, 67, 741–63.
Royzman, E. B., Leeman, R. F., and Baron, J. (2009). Unsentimental ethics: Towards a content-specific account of the moral–conventional distinction. Cognition, 112, 159–74.
Royzman, E. B., Leeman, R. F., and Sabini, J. (2008). You make me sick: Moral dyspepsia as a reaction to third-party sibling incest. Motivation and Emotion, 32, 100–8.
Rozin, P. (1999). The process of moralization. Psychological Science, 10, 218–21.
Rozin, P., Haidt, J., and McCauley, C. R. (2008). Disgust. In M. Lewis, J. M. Haviland-Jones, and L. F. Barrett (eds), Handbook of Emotions (3rd ed.). New York: Guilford, pp. 757–76.
Rozin, P., Lowery, L., Imada, S., and Haidt, J. (1999). The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76, 574–86.
Russell, P. S., and Giner-Sorolla, R. (2011). Moral anger, but not moral disgust, responds to intentionality. Emotion, 11, 233–40.
Schaller, M., and Duncan, L. A. (2007). The behavioral immune system: Its evolution and social psychological implications. In J. P. Forgas, M. G. Haselton, and W. von Hippel (eds), Evolution and the Social Mind: Evolutionary Psychology and Social Cognition. New York: Psychology Press, pp. 293–307.
Schaller, M., and Murray, D. R. (2008). Pathogens, personality, and culture: Disease prevalence predicts worldwide variability in sociosexuality, extraversion, and openness to experience. Journal of Personality and Social Psychology, 95, 212–21.
Schaller, M., and Park, J. H. (2011). The behavioral immune system (and why it matters). Current Directions in Psychological Science, 20, 99–103.
Schnall, S., Benton, J., and Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. Psychological Science, 19, 1219–22.
Schnall, S., Haidt, J., Clore, G. L., and Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34, 1096–109.
Simpson, J., Carter, S., Anthony, S. H., and Overton, P. G. (2006). Is disgust a homogeneous emotion? Motivation and Emotion, 30, 31–41.
Singelis, T. M., Triandis, H. C., Bhawuk, D. P. S., and Gelfand, M. J. (1995). Horizontal and vertical dimensions of individualism and collectivism: A theoretical and measurement refinement. Cross-Cultural Research, 29, 240–75.
Staats, A. W., and Staats, C. K. (1958). Attitudes established by classical conditioning. The Journal of Abnormal and Social Psychology, 57, 37–40.
Terrizzi, J. A., Shook, N. J., and McDaniel, M. A. (2013). The behavioral immune system and social conservatism: A meta-analysis. Evolution and Human Behavior, 34, 99–108.
Terrizzi, J. A., Shook, N. J., and Ventis, W. L. (2010). Disgust: A predictor of social conservatism and prejudicial attitudes toward homosexuals. Personality and Individual Differences, 49, 587–92.
Tully, T., and Quinn, W. G. (1985). Classical conditioning and retention in normal and mutant Drosophila melanogaster. Journal of Comparative Physiology A, 157(2), 263–77.
Tybur, J. M., Lieberman, D., Kurzban, R., and DeScioli, P. (2012). Disgust: Evolved function and structure. Psychological Review, 120, 65–84.
Wheatley, T., and Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16, 780–4.
Zhong, C. B., Strejcek, B., and Sivanathan, N. (2010). A clean self can render harsh moral judgment. Journal of Experimental Social Psychology, 46, 859–62.
7

Selective Debunking Arguments, Folk Psychology, and Empirical Moral Psychology
Daniel Kelly*

Some framing questions

Reflecting on the significance of his early research on the neuropsychology of moral judgment, Joshua Greene (2007) raises an important and increasingly pressing kind of question: "Where does one draw the line between correcting the nearsightedness of human moral nature and obliterating it completely?" He goes on to wonder more directly: "How far can the empirical debunking of human moral nature go?" (p. 76). The gist of such questions can be fleshed out in several ways; I attempt to distinguish different approaches in the latter half of this chapter, and situate my own in the resulting landscape. The approach I favor foregrounds the relationship between empirical cognitive science and morality,1 in order to more crisply express certain kinds of question. For example: Are there constraints on human morality that make it inflexible or resistant to transformation in certain ways? If so, what are those constraints, what imposes them, and why do they make morality rigid in whatever way they do? Are those constraints only knowable a priori, perhaps via conceptual analysis or reflection on the essence of morality, or can cognitive science help to discover them, perhaps by revealing innate features of our moral psychology? On the other hand, it could be the case that human morality is relatively unconstrained, and thus fairly malleable. Is it possible (do we have it within ourselves) to transcend the types of moral judgments that are so naturally made by minds like ours? Can cognitive science show us how to most effectively do so?
One virtue of this way of framing the issues is that it invites us to consider an
analogy between moral theorizing and scientific theorizing and the relationship
each bears to its commonsensical starting place, with an eye toward where that
analogy might break down. For instance, Noam Chomsky suggests that when
we are doing science, theorizing can and should transcend the folk intuitions
it begins with, and that departure or movement away from the common-
sense concepts in which early investigation is typically couched is relatively
unrestricted. While discussing scientific inquiries into the mind,
and the relationship between the categories of folk psychology and those that
will be taken up by cognitive science as it proceeds, he remarks:

These are serious inquiries, not to be undertaken casually; our intuitions about them provide some evidence, but nothing more than that. Furthermore, whatever may be learned about folk science will have no relevance to the pursuit of naturalistic inquiry into the topics that folk science addresses in its own way.
(Chomsky 1995, p. 14)

Indeed, he even suggests that in the practice of science, leaving the vernacular behind is indicative of advance, of improvement, of theoretic progress: "As the disciplines progress, they depart still further from the common sense and ordinary language origins of inquiry" (1995, pp. 25–6; for more recent comments in a similar vein, see Chomsky 2009).
K. Anthony Appiah appears to agree with Chomsky on this picture, at least
as it applies to, say, the increasingly tenuous relationship between folk physics
and contemporary physical theories. However, he takes the view that scientific
theorizing is importantly different from moral theorizing on this score, that is,
with respect to how tightly each is tethered to the intuitive categories of the
folk. In his 2007 Presidential address to the American Philosophical Association,
Appiah suggests there are, in fact, limits on the extent to which morality can
be detached from common-sense psychology, or significantly transformed by
advances in cognitive science. Furthermore, he suggests that the presence of
such limits in the moral case, and their absence in the scientific, both stem
from a difference in the roles that moral and scientific theories play in human
lives, and the different kinds of connections each type of theory needs to bear
to our intuitive understanding to effectively play its part:

It's common to analogize folk psychology with folk physics. But, of course, professional physicists can happily leave folk physics far behind as they tinker with their Calabi-Yau Manifolds and Gromov-Witten invariants.
By contrast, moral psychology, however reflective, can't be dissociated from our moral sentiments, because it's basic to how we make sense of one another and ourselves. In a deliberately awkward formulation of Bernard Williams's, moral thought and experience must primarily involve "grasping the world in such a way that one can, as a particular human being, live in it."
(Appiah 2007, p. 15)

Put this way, a core question that emerges is whether morality and moral theory are special or distinctive in their relation to empirical psychology and other natural sciences: roughly, whether something about human moral nature makes it more or less debunkable than other aspects of human nature, or
whether something about moral judgment makes it more or less resistant to
transformation than other types of judgment.
These are fascinating and timely topics; they are also difficult ones. Rather
than set out an overarching view or take a stand on the debunking of morality
tout court, in what follows I'll explore a divide-and-conquer strategy. First, I will
briefly sketch a debunking argument that, instead of targeting all of morality
or human moral nature, has a narrower focus: namely, the intuitive moral
authority of disgust. The argument concludes that as vivid and compelling as
they can be while one is in their grip, feelings of disgust should be granted no
power to justify moral judgments. Importantly, the argument is grounded in
empirical advances concerning the character of the emotion itself. Next, I will
step back and consider the argument's general form. I then point to arguments
that others have made that seem to share this form and selective focus, and
comment on what such arguments do and do not presuppose. Finally, I locate
the selective strategy with respect to approaches to debunking morality and
end by reflecting on what the entire line of thought implies about Greene's question and Appiah's claim.

Disgust and moral justification

Consider some of the following contentious, "yuck"-relevant issues: abortion,


nipple piercing, same-sex marriage, circumcision (either female or male),
human cloning, stem cell research, euthanasia, pornography. Also imagine
that your response to one of those activities or social practices is: "yuck!"
You find it simply, but unequivocally, revolting and repulsive, or you just find
yourself slightly disgusted by it. What follows from that "yuck" reaction, from
the point of view of morality? Do feelings of disgust, in and of themselves,
provide good enough reason to think the practice is morally wrong or
problematic?
Recently, such issues have come to the fore in normative and applied ethics,
centering on the question of what role the emotion of disgust should play
in morality, broadly construed: whether or not disgust should influence our
considered moral judgments; if so, how feelings of disgust should be accounted
for in various ethical evaluations, deliberations, and decisions; what sort of
weight, import, or credit should be assigned to such feelings; and how our legal
system and other institutions should best deal with the emotion (see Kelly and
Morar manuscript for full references).
Elsewhere (Kelly 2011) I have fleshed out a debunking argument designed
to undermine confidence in the normative force that feelings of disgust
can seem to have in moral cognition. The resulting position, which I call
"disgust skepticism," holds that: feelings of disgust have no moral authority;
that explicit appeals to disgust, while often rhetorically effective, are morally
empty; that the emotion should not be granted any justificatory value;
and that we should aspire to eliminate its influence on morality, moral
deliberation, and institutional operation to the extent that we can. Rather
than recapitulate the argument in full, I will here mention some of its most
relevant properties.
First, while the argument has a normative thrust concerning the role that
feelings of disgust should play in moral justification, it is firmly rooted in a
descriptive and explanatory account of the nature of the emotion itself. It is
worth noting that my argument shares this structural feature with arguments
that others have made concerning the moral significance of disgust. All
interested parties, both skeptics (Nussbaum 2004a, 2004b) and advocates
(Kass 1997, 2002; Kahan 1998, 1999), base their normative conclusions on
descriptive claims concerning the character of the emotion. On this score, a
key advantage I claim over those competing arguments is that my account of
disgust is superior to its competitors: it is more detailed, more evolutionarily
plausible, and better able to explain the wealth of empirical data recently
discovered by moral psychologists.

The two core claims of what I call the E&C view are the Entanglement thesis
and the Co-opt thesis. The first holds that at the heart of the psychological
disgust system are two distinguishable but functionally integrated mechanisms,
one that initially evolved to protect the gastrointestinal system from poisons
and other harmful food, and another that initially evolved to protect the entire
organism from infectious diseases and other forms of parasites. Appeal to the
operation of these two mechanisms and their associated adaptive problems
can explain much of the fine-grained structure of the disgust response, its
intrinsic sensitivity to perceivable cues associated with poisons and parasites,
its propensity to err in the direction of false positives (rather than false
negatives), and its malleability and responsiveness to social influence, which
can result in variation in what triggers disgust from one group of people to the
next. The second core claim, the Co-opt thesis, holds that this malleability and
responsiveness to social influence was exploited by evolution, as disgust was
recruited to perform auxiliary functions having nothing to do with poisons or
parasites, infusing certain social norms and group boundaries with a disgust-
based emotional valence. In doing so, disgust did not lose its primary functions
or those properties clearly selected to allow it to perform those functions well.
Rather, it retained those functions and properties, and simply brought them to
bear on the auxiliary functions associated with norms and group membership
(Kelly 2011, 2013).
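The bias toward false positives posited here can be illustrated with a toy expected-cost calculation. This is our gloss, not Kelly's own formalism, and the cost ratio is hypothetical. Let p be the probability that a cue signals genuine danger, c_FN the cost of a false negative (contacting a parasite-bearing item), and c_FP the cost of a false positive (needlessly avoiding a safe one). Avoidance minimizes expected cost whenever

\[
p \cdot c_{FN} > (1 - p) \cdot c_{FP}.
\]

If infection is, say, a hundred times costlier than a forgone meal (c_FN = 100 c_FP), avoidance pays whenever p > 1/101, roughly 0.01, so a well-calibrated detector will fire on even faintly suggestive cues and will therefore generate many false alarms.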
In addition to these features, the argument in favor of disgust skepticism
appeals to other facts about the emotion and key elements of the picture
provided by the E&C view. One is that disgust has an intrinsic negative valence,
which can manifest subjectively as a kind of nonverbal authority. Intense
episodes of disgust obviously have a powerful and vivid phenomenology, but
even less flagrant instances can bias judgments that they influence toward
negativity and harshness. However, the mere activation of disgust, in and
of itself, is not even a vaguely reliable indicator of moral wrongness. The
emotion remains overly sensitive to cues related to its primary functions
of protecting against poisons and parasites, which results in many false
positives even in those domains. There is no reason to think the situation
improves when disgust operates in the sociomoral domain. Indeed, there
is reason to think that disgust renders those in its grip less sensitive to the
agency and intentions of others, and can make it easier to dehumanize them.
Moreover, triggers of disgust exhibit considerable variation from person to
person and from culture to culture. This variation is found in types of cuisine
that are considered edible or disgusting, but more importantly in the types of
norms with which disgust becomes involved, as well as the group boundaries
and markers to which it is sensitive (also see Henrich et al. 2010). Hence,
when there is disagreement about the moral status of a norm, the practice
it regulates, or the type of people who engage in that practice, the fact that
disputants on one side of the debate denounce and feel disgust at the norm or
practice, while the disputants on the other side of the debate feel no disgust
and see nothing wrong with the norm or practice, may be an interesting
psychological fact. But it is a psychological fact that holds no significance for
the question of who is correct, or whose assessment of the moral status of the
norm or practice is better justified.
It is worth noting that there will be an evolutionary story to tell about many,
if not most, of the psychological mechanisms that loom large in human moral
psychology. I do not hold that every evolutionary explanation is intrinsically
debunking, or that the mere existence of an evolutionary account of some
psychological mechanism should by itself throw suspicion on it, or lead us to
doubt that it has any role to play in moral justification. However, I do hold that
debunking strategies can be more selective, and that the details of the specific
evolutionary story provided by the E&C view should undermine confidence in
the moral significance of feelings of disgust. For the E&C view renders most
properties of the disgust system understandable, and it also allows us to see that
some of the properties that are virtues when disgust is performing its primary
functions become vices when disgust performs the social and morally oriented
auxiliary functions. A good example is provided by the automatically activated
concerns about contamination: they straightforwardly help avoid contagious
diseases, but they are mismatched to the social domain, where they drive
irrational worries about moral taint and spiritual pollution. The distinction
between primary and auxiliary functions provided by the evolutionarily
informed E&C view shows that aspects of disgust that are features in one
domain are bugs in another. Hence my skepticism about the value of disgust
to specifically moral justification.2
I take it that the inference "it's disgusting, therefore it's immoral" has prima
facie intuitive force for many people, but whether or not the inference is a
component of folk morality and human moral nature is ultimately an empirical question. Initial evidence suggests that the inference is common among the folk, at least in some cultures, and for some segments of the population (Haidt et al. 1993; Haidt et al. 1997; Nichols 2002, 2004; cf. Cova and Ravat 2008).
Also note that my argument for disgust skepticism is designed to show that
whatever the empirical facts about who tends to make that inference, or how
compelling they find it, it is a bad one. It should not be accepted by anyone,
and those who make it are making a mistake (cf. Haidt 2012). Of course, to
say that this kind of argument is unsound is not to say that its conclusion will
always be false, or that moral judgments accompanied by disgust are never
justified. Rather, some disgust-involving judgments may be justified while
others are not. My claim is that no moral judgments are justified by disgust;
the involvement of this emotion in a judgment is just irrelevant to whether and
how the judgment is justified.3

The shape of the argument: Selective debunking

Now that my argument against the normative value of the "yuck factor" has been
sketched, recall the framing questions posed at the beginning of the chapter
about the relationship between morality, on the one hand, and a cognitive
scientific understanding of the mind that may depart from intuition and
folk psychology as it increases in sophistication, on the other. The issue is
not always approached this way. Many conversations have explored related
but different questions, and they have typically done so at a higher level of
generality: morality and all moral judgments (or claims) are grouped together,
and arguments are made that they are either all vulnerable to some sweeping
form of debunking, or none of them are (Mackie 1977; Blackburn 1988; Joyce
2007; cf. Ayer 1936; also see Street 2006; Greene 2013).4 While I have doubts
about the viability of this kind of global debunking, I have just advanced what
can be thought of as a selective debunking argument against the relevance of
one circumscribed set of considerations, namely feelings of disgust, to moral
justification.5 Here I will spell out the line of reasoning, first expressing it
in condensed form before going on to elaborate and comment on different
aspects of the premises and conclusion.

The first premise of my main argument can be expressed in the form of a conditional:

1. If some particular psychological mechanism can be shown to be "problematic" in a relevant way, and the intuitions or judgments influenced by that psychological mechanism can be identified, then we should disregard, discount, or discredit those intuitions and be suspicious of the judgments that they influence, to the extent that we can.

Then the form of that argument can be understood as a modus ponens (rendered schematically just after the premises):

1. If some particular psychological mechanism can be shown to be "problematic" in a relevant way, and the intuitions or judgments influenced by that psychological mechanism can be identified, then we should disregard, discount, or discredit those intuitions and be suspicious of the judgments that they influence, to the extent that we can.
2. Disgust is "problematic" in a relevant way (the E&C view of disgust), and the intuitions and judgments influenced by disgust can be identified ("yuck"-relevant issues).
3. Therefore, we should disregard, discount, or discredit those intuitions and be suspicious of the judgments that they influence, to the extent that we can (disgust skepticism).
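Rendered schematically (our notation, not Kelly's), with P for the claim that the mechanism is "problematic" in a relevant way, I for the claim that the intuitions and judgments it influences can be identified, and D for the discounting conclusion, the argument instantiates the standard modus ponens form:

\[
\begin{array}{rl}
1. & (P \land I) \rightarrow D \\
2. & P \land I \\
\hline
\therefore & D
\end{array}
\]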

Though it is schematic, I find the general line of thought expressed in the first premise compelling, and also find this way of framing it illuminating
for a number of reasons. First, formulating the conditional premise this way
shows that its subject matter will be a specific psychological mechanism, but
it allows the identity and details of the psychological mechanism to vary from
one instantiation of the argument schema to the next. Moreover, expressing
the first premise like this makes clear that it says nothing specifically about
morality, let alone any particular moral theory, be it utilitarian, deontological, or otherwise (cf. Singer 2005; Greene 2007). Nor, for that matter, does it even say anything specific about emotions or sentiments, as opposed to less affective, more coldly cognitive types of psychological mechanisms (cf. D'Arms and Jacobson 2000, 2003).
Second, the argument assumes a picture of the structure of the human
mind that is now familiar in various forms from empirical work in psychology.
Whatever their differences in emphasis and preferred terminology, many approaches share a vision that sees the mind as composed of many distinct,
dissociable, semiautonomous psychological mechanisms, whose different
operational principles and evolutionary histories can be (and are being)
discovered by cognitive scientists.6 This point is also relevant to separating
out and assessing the prospects of global debunking strategies as compared
to selective debunking strategies. Since different psychological mechanisms
may turn out to be more or less problematic, it also suggests that individual
selective debunking arguments will be more or less convincing depending on
the details of the particular psychological mechanism they invoke.
Put another way, this picture of the mind implies that not all intuitions are
created equal. In and of itself, this claim should not be anything controversial;
most theorists at least tacitly accept the idea that not all intuitions are of equal
value, and that in the course of theory construction some will have to be rejected
or abandoned. So there is nothing revolutionary, or even very innovative, in the
ruling out of some subset of intuitions. What may be novel about this line of
argument is its method of identifying those intuitions that should be cast aside,
and also perhaps the rationale it provides for doing so. That rationale looks to the
sciences of the mind for guidance, rather than confining itself to a priori principles
or general considerations of consistency and coherence.7 Different intuitions can
be produced by different psychological mechanisms, and it is in virtue of this that
a more sophisticated, empirically informed understanding of the mind and its
component parts can reveal some intuitions to be of less value than others.8
A final reason I prefer this formulation is that it makes explicit that
problematic is probably both the most crucial and most slippery of the
notions in play. I do think that the types of considerations I have raised about
disgust show it is indeed problematic in a relevant way. At this point, though,
I do not know how to unpack that turn of phrase and remove the scare quotes.
Indeed, I suspect no general recipe will be forthcoming, and that instead, each
psychological mechanism and attempted selective debunk will need to be
assessed on a case-by-case basis, and according to its own unique details. That
said, I do think that there are some clear, perhaps paradigmatic examples that
can be pointed to, in which mechanisms have been revealed as problematic.
For example, the psychological mechanisms that underlie vision are
problematic in certain circumstances (or in selective domains) because they
are notoriously susceptible to certain kinds of perceptual illusions. Even though the two lines in the Müller-Lyer illusion seem (to many of us) like they
are the same length, we should disregard that impression, despite whatever
intuitive grip it might have on us. Another example is provided by Gil Harman
in his discussion of folk physics intuitions:

Ordinary untrained physical intuitions are often in error. For example, ordinary people expect that something dropped from a moving vehicle or airplane will fall straight down to the point on earth directly underneath the place from which it was released. In fact, the dropped object will fall in a parabolic arc in the direction of the movement of the vehicle or airplane from which it was dropped. This means, among other things, that bombardiers need to be trained to go against their own physical intuitions.
(Harman 1999, p. 315)

Indeed, Harman uses this as an example to soften up his reader for the main
claim he makes in the paper, which is, roughly, that folk psychology consistently
makes a fundamental attribution error about the determinants of people's
behavior, and that virtue ethical theories that seem to enshrine that error in the
character trait-based moral psychology they advance are flawed on empirical
grounds.9 Cast in my terminology, Harman is offering a debunking argument
that selectively targets a specific component of folk psychology (rather than
the kind of global attack on the whole conceptual framework associated with,
e.g., Churchland 1981). Harman even offers an account of the psychological
mechanisms that drive the fundamental attribution error, and uses it to advance
his argument against those select intuitions that lead us10 to overestimate the
extent to which peoples behavior is driven by internal character traits, and
overlook the strong (and empirically documented) influence of external cues
and situational factors.
Similarly, current empirical work has shown how the psychological
mechanisms underlying racial cognition can lead people to naturally, intuitively
ascribe some deep and evaluatively laden racial essence to individuals based
on their observable phenotypic characteristics like skin color or hair type.
Such discoveries about the operational principles and evolutionary history
of those psychological mechanisms look to be important to contemporary
discussions about the nature of race itself, but also the pragmatics of racial
classification. A society might decide, in light of its considered goals about
how to deal with racial categories and biases, and also in light of the mounting
facts (genetic, biological, social, historical, etc.) about race and the source of
racial differences, that its members should aspire to overcome the influence
of the psychological mechanisms underlying racial cognition, and disregard
the intuitions that issue from them. Indeed, empirical work on the character
of those psychological mechanisms will likely point the way to the most
effective methods of controlling their influence (see Kelly et al. 2010a, 2010b
for discussion).11
The upshot of these examples is that arguments that have a form similar to
the one I have made about disgust are not uncommon. However, there does
not appear to be a single, monolithic, or univocal notion of "problematic" that
they all have in common, suggesting that there is a variety of ways in which
psychological mechanisms and the intuitions that issue from them can be
found to be problematic. As nice as it would be to have a single, all-purpose, or
universally applicable criterion to apply to every psychological mechanism, no
such clean, algorithmic test is yet in the offing, and may never be. This does not
render the general argumentative strategy specious, though. Rather, it pushes
us to look at and assess each instance of the argument type on a case-by-case
basis, and tend to the details of the individual psychological mechanisms to
which it appeals.12

Conclusion

One might find reason for optimism in the themes of malleability and variation
that run throughout some of the above examples, including my main example
of disgust. Perhaps psychological mechanisms that are problematic in some
people are unproblematic in others, suggesting that such mechanisms are plastic
enough to be fixable. This is an interesting possibility, to be sure. However,
it leaves untouched the question of what being "fixed" amounts to, and which
mechanisms are properly tuned and which are not. One way to understand
my point about justification is to say that in cases of disagreement about this
kind of issue, members on one side of the debate cannot appeal to their own
calibrated psychological mechanisms or the intuitions that issue from them to
justify their position without begging the very question being raised. Even once
(or if) the issue of what proper tuning amounts to is settled, the argument still
goes through for those improperly tuned mechanisms, and I maintain that we
should continue to be on guard against their influence on judgment and action.
Finally, it is also likely that different psychological mechanisms will be
malleable to different extents, and in different ways. This provides more
support for the divide-and-conquer strategy I advocate. Together, I think
theseconsiderations raise problems for familiar globally oriented approaches
that seek to draw more encompassing conclusions in one fell swoop. I began
this chapter with a passage from K. Anthony Appiah suggesting that human
moral nature, morality, and moral psychology will be resistant to transformative
influences originating in advances in the sciences of the mind, and with some
questions raised by Joshua Greene about how far the empirical debunking of
human moral nature can go. I will end not by addressing these head on, but by
pointing out that in asking questions and making claims about morality as a
single phenomenon and moral psychology as a uniform whole, they rely on an
assumption that I think we have good reason to doubt. Rather, the argument
of this chapter shows that advances in cognitive science can indeed have a
transformative effect on how we think about selective aspects of morality, and
how we should make sense of ourselves and some of our own moral impulses.
Perhaps more importantly, the empirical work is also revealing how a more
piecemeal approach is required if we are to draw any defensible normative
conclusions from it. The need for a more selective focus opens up new ways to
think about whether and which components of morality might be debunked
by, transformed by, or even just informed and guided by our growing empirical
understanding of our own moral psychology.

Notes

* Author's Note: Daniel Kelly, Department of Philosophy, Purdue University. Correspondence should be addressed to Daniel Kelly, 7126 Beering Hall, 100 N. University, West Lafayette, IN 47906. Email: drkelly@purdue.edu. I would like to thank Jen Cole Wright and Hagop Sarkissian for useful feedback on this chapter.
1 A similar concern animates much of Daniel Dennett's early work on the relationship between cognitive science and propositional attitude psychology as well (see especially 1978, 1987).
2 In arguing that the role of disgust in the moral domain should be minimized, I realize that I am recommending that we should refrain from using what could be a useful heuristic and powerful motivational tool. However, given the risks attached to this particular emotion, namely its hair-trigger sensitivity to cues that are prima facie irrelevant to morality and its susceptibility to false positives, together with its propensity to dehumanize its object, I think the costs outweigh the benefits.
3 One might imagine an individual with a perfectly tuned sense of disgust,
whose psychological makeup is such that she feels revulsion at all and only
those norms, actions, and practices that are genuinely morally wrong. My
position is not undermined by this possibility. Even though, ex hypothesi,
all of her judgments about those norms, actions, and practices are justified,
it remains open for me to claim that it is not the attendant feelings of disgust
she feels that justify her judgments. Rather, the ultimate arbiter of justification
is something else, above and beyond the mere presence of feelings of disgust,
namely whatever standard is being appealed to in claiming that her sense of
disgust is perfectly tuned.
4 I am particularly skeptical of the prospects of empirically motivated debunking
of the entirety of morality or all moral judgments because (among other
reasons) it remains unclear how to delimit the scope of such arguments.
Separating the domain of morality and moral cognition off from the rest
of non-moral or extra-moral cognition (identifying what moral judgments
have in common that makes them moral judgments) has proven surprisingly
difficult. Certainly, no consensus has emerged among practitioners in
the growing field of empirical moral psychology. See Nado et al. 2009;
Machery and Mallon 2010; Parkinson et al. 2011; Sinnott-Armstrong and
Wheatley 2012.
5 The terminology "selective debunking" is taken from a series of thought-provoking
posts on the topic by Tamler Sommers at The Splintered Mind blog (http://
schwitzsplinters.blogspot.com/2009/05/on-debunking-part-deux-selective.html).
6 I mean to cast my net widely with the first premise, but recognize that the
details and preferred jargon used to discuss the distinguishable psychological
mechanisms vary in different literatures. For instance, see Fodor (1983, 2000),
Pinker (1997), and Carruthers (2006) for discussion in terms of different
psychological modules; Evans (2003), Stanovich (2005), and Frankish (2010)
for discussion in terms of dual process theory; and Ekman (1992) and Griffiths
(1997) for discussion of affect programs and basic emotions.
7 See Rawls (1971) on the method of reflective equilibrium and also David Lewis's
methodological contention that "to the victor go the spoils" (Lewis 1973). For
some interesting recent discussion on the latter, see Eddon (2011) and Ichikawa (2011).
8 This suggestion is very much in the spirit of some comments in Tim Maudlin's
book The Metaphysics Within Physics: "if we care about intuitions at all, we ought
to care about the underlying mechanism that generates them" (Maudlin 2010,
pp. 146–7). In the main text, I am working with a picture similar to that implied
by Maudlin's comment, namely that one of the things the psychological mechanisms
that comprise the disgust system do is generate an intuition, namely the
intuition that whatever triggered the system (or whatever the person thinks
triggered the system, in cases of misattribution) is disgusting.
9 Also see Doris (2002) for a book-length defense of what has become known as
the situationist critique of virtue ethics, and Alfano (2013) for a discussion of
the current state of the debate.
10 That can lead those of us in Western cultures to commit the error, anyway.
Members of East Asian cultures are less prone to the mistake, suggesting it
is not a universal component of folk psychology (Nisbett 2003). For another
discussion about cultural variability and the fundamental attribution error,
this time within the context of Confucian versus Aristotelian versions of virtue
ethics, see Sarkissian (2010).
11 A final illuminating comparison, and one that might feel more apt to someone
sympathetic to metaethical constructivism, is suggested by considering how
intuition, on the one hand, and theoretical psychological knowledge, on the
other, can best inform and guide not moral judgment but artistic creation.
Reflecting on his project in Sweet Anticipation: Music and the Psychology of
Expectation, cognitive musicologist David Huron offers some reasonable and
intriguing comments:

My musical aim in this book is to provide musicians with a better
understanding of some of the tools they use, not to tell musicians
what goals they should pursue. If we want to expand artistic horizons
and foster creativity there is no better approach than improving our
understanding of how minds work. Many artists have assumed that
such knowledge is unnecessary: it is intuition rather than knowledge that
provides the boundaries for artistic creation. I agree that intuition is
essential for artistic production: in the absence of knowledge, our only
recourse is to follow our intuitions. But intuition is not the foundation
for artistic freedom or creative innovation. Quite the contrary. The more
we rely on our intuitions, the more our behaviors may be dictated by
unacknowledged social norms or biological predispositions. Intuition
is, and has been, indispensible in the arts. But intuition needs to be
supplemented by knowledge (or luck) if artists are to break through
counterintuitive barriers into new realms of artistic expression.
(Huron 2006, pp. ix–x, italics in original)

12 Another metaethical view that bears intriguing similarities to the one suggested
by the selective debunking approach endorsed here is the "patchy realism"
described by Doris and Plakias (2007).

References

Alfano, M. (2013). Character as Moral Fiction. Cambridge: Cambridge University Press.
Ayer, A. (1936). Language, Truth, and Logic. New York: Dover Publications.
Blackburn, S. (1998). Ruling Passions. London: Oxford University Press.
Carruthers, P. (2006). The Architecture of the Mind. New York: Oxford University Press.
Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes.
Journal of Philosophy, 78, 67–90.
Chomsky, N. (1995). Language and Nature. Mind, 104(413), 1–61.
(2009). The Mysteries of Nature: How Deeply Hidden? The Journal of Philosophy,
106(4), 167–200.
Cova, F., and Ravat, J. (2008). Sens commun et objectivisme moral: objectivisme
global ou objectivisme local? Une introduction par l'exemple à la philosophie
expérimentale. (English Modified Version). Klesis – Revue Philosophique: Actualité
de la Philosophie Analytique, 9, 180–202.
D'Arms, J., and Jacobson, D. (2000). Sentiment and value. Ethics, 110, 722–48.
(2003). The significance of recalcitrant emotions. In A. Hatzimoysis (ed.),
Philosophy and the Emotions. Cambridge: Cambridge University Press, pp. 127–46.
Dennett, D. (1978). Brainstorms. Montgomery, VT: Bradford Books.
(1987). The Intentional Stance. Cambridge, MA: The MIT Press.
Doris, J. (2002). Lack of Character: Personality and Moral Behavior. New York:
Cambridge University Press.
Doris, J., and Plakias, A. (2007). How to argue about disagreement: Evaluative
diversity and moral realism. In W. Sinnott-Armstrong (ed.), Moral Psychology,
vol. 2: The Biology and Psychology of Morality. Oxford: Oxford University Press,
pp. 303–32.
Eddon, M. (2011). Intrinsicality and hyperintensionality. Philosophy and
Phenomenological Research, 82, 314–36.
Ekman, P. (1992). An argument for basic emotions. Cognition and Emotion, 6,
169–200.
Evans, J. (2003). In two minds: Dual-process accounts of reasoning. Trends in
Cognitive Science, 7(10), 454–9.
Frankish, K. (2010). Dual-process and dual-system theories of reasoning. Philosophy
Compass, 5(10), 914–26.
Greene, J. (2007). The secret joke of Kant's soul. In W. Sinnott-Armstrong (ed.),
Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and
Development. Cambridge, MA: The MIT Press, pp. 35–80.
(2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.
NewYork: Penguin Press.
Griffiths, P. (1997). What the Emotions Really Are. Chicago: University of Chicago
Press.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and
Religion. New York: Pantheon Press.
Haidt, J., Koller, S., and Dias, M. (1993). Affect, culture, and morality, or is it wrong to
eat your dog? Journal of Personality and Social Psychology, 65(4), 613–28.
Haidt, J., Rozin, P., McCauley, C., and Imada, S. (1997). Body, psyche, and culture:
The relationship between disgust and morality. Psychology and Developing
Societies, 9, 107–31.
Harman, G. (1999). Moral philosophy meets social psychology: Virtue ethics and the
fundamental attribution error. Proceedings of the Aristotelian Society, 99, 315–31.
Henrich, J., Heine, S., and Norenzayan, A. (2010). The weirdest people in the world.
Behavioral and Brain Sciences, 33(June), 61–135.
Huron, D. (2006). Sweet Anticipation: Music and the Psychology of Expectation.
Cambridge, MA: The MIT Press.
Ichikawa, J. (2011). Experimentalist pressure against traditional methodology.
Philosophical Psychology, 25(5), 743–65.
Joyce, R. (2007). The Evolution of Morality. Cambridge, MA: MIT Press.
Lewis, D. (1973). Causation. Journal of Philosophy, 70, 556–67.
Kahan, D. (1998). The anatomy of disgust in criminal law. Michigan Law Review,
96(May), 1621–57.
(1999). The progressive appropriation of disgust. In Susan Bandes (ed.), The
Passions of the Law. New York: New York University Press, pp. 63–80.
Kass, L. (2 June 1997). The wisdom of repugnance. The New Republic, 216(22),
available online at http://www.catholiceducation.org/articles/medical_ethics/
me0006.html
(2002). Life, Liberty, and the Defense of Dignity: The Challenge to Bioethics.
NewYork: Encounter Books.
Kelly, D. (2011). Yuck! The Nature and Moral Significance of Disgust. Cambridge, MA:
The MIT Press.
(2013). Moral disgust and tribal instincts: A byproduct hypothesis. In R. Joyce,
K.Sterelny, and B. Calcott (eds), Cooperation and Its Evolution. Cambridge, MA:
The MIT Press.
Kelly, D., Faucher, L., and Machery, E. (2010). Getting rid of racism: Assessing three
proposals in light of psychological evidence. Journal of Social Philosophy, 41(3),
293–322.
Kelly, D., Machery, E., and Mallon, R. (2010). Race and racial cognition. In J. Doris
et al. (eds), The Moral Psychology Handbook. New York: Oxford University Press,
pp. 433–72.
Kelly, D., and Morar, N. (in press). Against the Yuck Factor: On the Ideal Role of
Disgust in Society. Utilitas.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. New York: Penguin Books.
Maudlin, T. (2010). The Metaphysics Within Physics. New York: Oxford University Press.
Nado, J., Kelly, D., and Stich, S. (2009). Moral Judgment. In John Symons and Paco
Calvo (eds), The Routledge Companion to the Philosophy of Psychology. New York:
Routledge, pp. 621–33.
Machery, E., and Mallon, R. (2010). Evolution of morality. In J. Doris et al. (eds),
The Moral Psychology Handbook. New York: Oxford University Press, pp. 3–46.
Nichols, S. (2002). Norms with feeling: Towards a psychological account of moral
judgment. Cognition, 84, 221–36.
(2004). Sentimental Rules: On the Natural Foundations of Moral Judgment.
NewYork: Oxford University Press.
Nisbett, R. (2003). The Geography of Thought: How Asians and Westerners Think
Differently...And Why. New York: The Free Press.
Nussbaum, M. (2004a). Hiding from Humanity: Disgust, Shame, and the Law.
Princeton, NJ: Princeton University Press.
(6 August 2004b). Danger to human dignity: The revival of disgust and shame in
the law. The Chronicle of Higher Education, 50(48), B6.
Parkinson, C., Sinnott-Armstrong, W., Koralus, P., Mendelovici, A., McGeer, V., and
Wheatley, T. (2011). Is morality unified? Evidence that distinct neural systems
underlie moral judgments of harm, dishonesty, and disgust. Journal of Cognitive
Neuroscience, 23(10), 3162–80.
Pinker, S. (1997). How the Mind Works. New York: W.W. Norton & Co.
Rawls, J. (1971). A Theory of Justice (2nd ed. 1999). Cambridge, MA: Harvard
University Press.
Sarkissian, H. (2010). Minor tweaks, major payoffs: The problems and promise of
situationism in moral philosophy. Philosophers' Imprint, 10(9), 1–15.
Singer, P. (2005). Ethics and intuitions. The Journal of Ethics, 9, 331–52.
Sinnott-Armstrong, W., and Wheatley, T. (2012). The disunity of morality and why it
matters to philosophy. The Monist, 95(3), 355–77.
Stanovich, K. (2005). The Robot's Rebellion: Finding Meaning in the Age of Darwin.
Chicago, IL: University of Chicago Press.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical
Studies, 127(1), 109–66.
8

The Psychological Foundations
of Moral Conviction

Linda J. Skitka*

In a letter to the editor of the Mercury News, one reader explained his views
on the death penalty as follows: "I'll vote to abolish the death penalty...and
not just because it is fiscally imprudent with unsustainable costs versus a life
sentence without possibility of parole. More importantly, it's morally wrong.
Making us and the state murderers, through exercising the death penalty, is
a pure illogicality akin to saying two wrongs make a right" (Mercury News
2012). In short, this letter writer believes murder is simply wrong, regardless
of whether it is an individual or state action, and for no other reason than
because it is simply and purely wrong.
Attitudes rooted in moral conviction (or "moral mandates"), such as the
letter writers position on the death penalty, represent a unique class of strong
attitudes. Strong attitudes are more extreme, important, central, certain, and/
or accessible, and are also more stable, enduring, and predictive of behavior
than attitudes weaker on these dimensions (see Krosnick and Petty 1995 for a
review). Attitudes held with the strength of moral conviction, even if they share
many of the same characteristics of strong attitudes, are distinguished by a sense
of imperative and unwillingness to compromise even in the face of competing
desires or concerns. Someone might experience their attitude about chocolate,
for example, in extreme, important, certain, and central terms, but still decide
not to order chocolate cake at a restaurant because of the calories. Vanity,
or other motives such as health or cost, can trump even peoples very strong
preferences. Attitudes rooted in moral conviction, however, are much less likely
to be compromised or vulnerable to trade-off (cf. Tetlock et al. 2000).
To better understand how attitudes that are equally strong can nonetheless
differ in their psychological antecedents and consequences, we need to
understand the psychological and behavioral implications of the content
of attitudes as well as their structure (e.g., extremity, importance). Social
domain theory (e.g., Nucci 2001; Nucci and Turiel 1978; Turiel 1998; 2002),
developed to explain moral development and reasoning, provides some
useful hints about key ways that attitudes may differ in substance, even
when they are otherwise equally strong. Using domain categories to describe
how attitudes differ represents a useful starting point for understanding
the foundations of moral mandates (Skitka et al. 2005; Skitka et al. 2008;1
Wright et al. 2008). As can be seen in Figure 8.1, one domain of attitudes is
personal preference. Personal preferences represent attitudes that people see
as subject to individual discretion, and as exempt from social regulation or
comment. For example, one person might support legalized abortion because
she prefers to have access to a backstop method of birth control, and not
because of any normative or moral attachment to the issue. She is likely to
think others' preferences about abortion are neither right nor wrong; they
may just be different from her own. Her position on this issue might still be
evaluatively extreme, personally important, certain, central, etc., but it is not
one she experiences as a core moral conviction. Her neighbor, on the other
hand, might oppose legalized abortion because this practice is inconsistent
with church doctrine or because the majority of people he is close to oppose
it. If church authorities or his peer group were to reverse their stance on
abortion, however, the neighbor probably would as well. Attitudes that
reflect these kinds of normative beliefs typically describe what "people like
me" or "us" believe, are relatively narrow in application, and are usually group
or culture bound rather than universally applied. Yet a third person might
see the issue of abortion in moral terms. This person perceives abortion
(or restricting access to abortion) as simply and self-evidently wrong, even
monstrously wrong, if not evil. Even if relevant authorities and peers were
to reverse positions on the issue, this person would nonetheless maintain
his or her moral belief about abortion. In addition to having the theorized
characteristic of authority and peer independence, moral convictions are also
likely to be perceived as objectively true, universal, and to have particularly
strong ties to emotion.
Figure 8.1 A domain theory of attitudes.
Preferences: matters of taste; subjective; tolerant; culturally narrow.
Conventions: normative; often codified; group defined.
Moral mandates: absolute/universal; objective; authority independent; motivating; self-justifying; strong emotions.

The goal of this chapter is to review recent developments in understanding
the psychology of moral conviction and related research. These developments
include research on operationalization and measurement as well as testing
a wide range of hypotheses about how moral convictions differ in form and
implication from otherwise strong but nonmoral attitudes.

Measurement and operationalization

Research on moral conviction has generally opted to use a bottom-up rather
than top-down empirical approach to study this construct. Instead of defining
the characteristics of what counts as a moral mandate a priori (e.g., that it be
seen as universal in application or resistant to trade-offs), researchers use face-
valid items2 to assess strength of moral conviction, and test whether variation
in strength of moral conviction yields predicted effects (e.g., differences in
perceived universal applicability). Avoiding confounds with other indices
of attitude strength is important to ensure that an individual's response is
motivated by morality, rather than by some other concern such as attitude
importance, extremity, and so on. For this reason, moral conviction researchers
see the distinction between moral and nonmoral attitudes as something that
is subjectively perceived, rather than as an objective property of attitudes,
decisions, choices, or dilemmas.
Although people do not always seek to maximize principled consistency
when making moral judgments (Uhlmann et al. 2009), they nonetheless
appear to have a strong intuitive sense of when their moral beliefs apply to a
given situation (Skitka et al. 2005). People can identify when situations engage
their moral sentiments, even when they cannot always elegantly describe the
processes or principles that lead to this sense (Haidt 2001). The assumption
that people have some insight into the characteristics of their own attitudes
is one shared by previous theory and research on the closely related concept
of attitude strength. Researchers assume that people can access from memory
and successfully report the degree to which a given attitude is (for example)
extreme, personally important, certain, or central (see Krosnick and Petty
1995 for a review).
Hornsey and colleagues (Hornsey et al. 2003, 2007) provide one example
of this approach. They operationalized moral conviction with three items, all
prefaced with the stem, "To what extent do you feel your position..." and the
completions "is based on strong personal principles," "is a moral stance,"
and "is morally correct," which across four studies had an average Cronbach's
α of 0.75. Others have operationalized moral conviction in similar fashion,
most typically using either a single face-valid item: "How much are your
feelings about ______ connected to your core moral beliefs and convictions?" (e.g.,
Brandt and Wetherell 2012; Skitka et al. 2005), or this item accompanied by a
second item, "To what extent are your feelings about ______ deeply connected
to your fundamental beliefs about right and wrong?" (e.g., Skitka et al. 2009;
Skitka and Wisneski 2011; Swink 2011). Morgan (2011) used a combination
of Hornsey et al.'s (2003, 2007) and Skitka et al.'s (2009) items to create a
5-item scale, and found αs that ranged from 0.93 to 0.99 across three samples.
The reliability scores observed by Morgan suggest that either all, or a subset, of
these items work well, and will capture highly overlapping content.
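For reference, Cronbach's α, the internal consistency statistic reported above, is standardly defined for a k-item scale as

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the summed scale score; values approaching 1 indicate that the items capture highly overlapping content. (This formula is supplied here as standard background rather than drawn from the cited studies.)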
Some have wondered, however, if moral conviction is better represented as
a binary judgment: Something that is or is not the case, rather than something
that varies in degree or strength. Measuring the categorization of an attitude
as moral and the relative strength of conviction both contribute uniquely to
the explanatory power of the variable (Wright et al. 2008; Wright 2012). For
this reason, as well as the parallelism of conceptualizing moral conviction
similarly to measures of attitude strength, we advocate that moral convictions
be measured continuously rather than nominally.
Other ways of operationalizing moral conviction are problematic because
they confound moral conviction with the things that moral convictions should
theoretically predict (e.g., Van Zomeren et al. 2011; Zaal et al. 2011), use items
that have no explicit references to morality (e.g., "X threatens values that
are important to me,"3 Siegrist et al. 2012), conflate moral convictions with
other dimensions of attitude strength (e.g., centrality; Garguilo 2010; Skitka
and Mullen 2006), and/or measure other constructs as proxies for moral
conviction, such as importance or centrality (e.g., Besley 2012; Earle and
Siegrist 2008). These strategies introduce a host of possible confounds and do
more to confuse than to clarify the unique contribution of moral conviction
independent of other characteristics of attitudes. Attitude importance and
centrality, for example, have very different associations with other relevant
variables than those observed with unconfounded measures of moral
conviction (including effects of the reverse sign; e.g., Skitka et al.
2005). To avoid these problems, researchers should therefore use items that
(a) explicitly assess moral content, and (b) do not introduce confounds that
capture either the things moral conviction should theoretically predict (e.g.,
perceived universalism) or other dimensions of attitude strength (importance,
certainty, or centrality).
Moral philosophers argue that moral convictions are experienced as
sui generis, that is, as unique, special, and in a class of their own (e.g.,
Boyd 1988; McDowell 1979; Moore 1903; Sturgeon 1985). This status of
singularity is thought to be due to a number of distinguishing mental states
or processes associated with the recognition of something as moral, including
(a) universalism, (b) the status of moral beliefs as factual beliefs with
compelling motives and justification for action, and (c) emotion (Skitka et al.
2005). These theoretically defining characteristics of attitudes (which taken
together represent the domain theory of attitudes) are testable propositions
in themselves, and have a number of testable implications (e.g., the authority
independence and nonconformity hypotheses). I briefly review empirical
research testing these core propositions and selected hypotheses that can be
derived from them next.

Universalism and objectivism


The domain theory of attitudes predicts that people experience moral mandates
as objective truths about the world, much as they do scientific judgments or
facts. In other words, good and bad are experienced as objective characteristics
of phenomena and not just as verbal labels that people attach to feelings
(Shweder 2002). Because beliefs rooted in moral conviction are perceived as
objectively true, they should also be perceived as universally applicable. The
author of the letter to the Mercury News, for example, is likely to believe that
the death penalty should not only be prohibited in his home state of California,
but in other states and countries as well.
Broad versions of the universalism and objectivism hypotheses have been
tested and supported. For example, people see certain moral rules (e.g.,
Nichols and Folds-Bennett 2003; Turiel 1978) and values (e.g., Gibbs et al.
2007) as universally or objectively true, and believe that certain moral transgressions
should be universally prohibited (e.g., Brown 1991). There is some evidence
that people also see ethical rules and moral issues as more objectively true
than, for example, various violations of normative conventions (Goodwin
and Darley 2008), but other research yields more mixed results (Wright
et al. 2012). Until recently, little or no research has tested the universalism
hypothesis.
To shed further light on the objectivism and universalism hypotheses,
Morgan, Skitka, and Lytle (under review) tested whether thinking about
a morally mandated attitude leads to a situational increase in peoples
endorsement of a universalistic moral philosophy (e.g., the degree to which
people rate moral principles as individualistic or relativistic, versus as universal
truisms). Participants endorsements of a universalistic moral philosophy,
their positions on the issue of legalized abortion, and moral conviction about
abortion were measured at least 24 hours before the experimental session.
Once in the lab, participants were primed to think about abortion by writing an
essay about their position that they thought would be shared with another
participant. They were then given an essay, presumably written by the other
participant, that was either pro-choice or pro-life (essays were modeled after
real participants' essays on this topic). After reading the essay, participants
completed the same universalistic philosophy measure they had completed
at pretest. Strength of moral conviction about abortion was associated with
increased post-experimental endorsement of a universalistic philosophy,
regardless of whether participants read an essay that affirmed or threatened
their own position on the topic. In short, people see moral rules in general
as more universally applicable when they have just thought about an attitude
held with moral conviction.
A second study tested the universalism and objectivity hypotheses more
directly by having participants rate the perceived objectivity (e.g., "Imagine
that someone disagreed with your position on [abortion, requiring the HPV
vaccine, same sex marriage]: To what extent would you conclude the other
person is surely mistaken?") and universality ("To what extent would your
position on [abortion/the HPV vaccine, same sex marriage] be equally correct
in another culture?") of these attitudes, in addition to providing ratings of the
degree to which each reflected a moral conviction. Strength of moral conviction
was associated with higher perceived objectivity and universalism of attitudes,
even when controlling for attitude extremity.
Finally, in a third study, participants were asked to generate sentences that
articulated their own beliefs or positions with respect to "a piece of scientific
knowledge," "something that is morally right or wrong," and something "that you like
or dislike." Participants then completed the same objectivity and universalism
measures used in Study 2. Scientific and moral beliefs were rated as equally
objectively true and universal, and as more objectively true and universal than
likes/dislikes. In sum, these three studies demonstrated that moral convictions
are perceived as indistinguishable from scientific facts in perceived universality
and objectivism.

Motivation and behavior


Attitudes rooted in moral conviction are predicted to also be inherently
motivating, and therefore should have stronger ties to behavior than those not
rooted in moral conviction. A moral conviction that voluntarily terminating a
pregnancy (or alternatively, interfering with a woman's right to choose whether
to sustain a pregnancy) is fundamentally wrong, for example, has an inherent
motivational quality: it carries with it an "ought" or "ought not" that can
motivate subsequent behavior. Moral convictions are therefore theoretically
sufficient in and of themselves as motives that can direct what people think,
feel, or do (Skitka et al. 2005).
Implicit in this reasoning is the hypothesis that people should also feel
more compelled to act on attitudes held with strong rather than weak moral
conviction. In support of this hypothesis, stronger moral convictions about
salient social issues and/or presidential candidates predict intentions to vote
and actual voting behavior, results that have now replicated across three
presidential election cycles in the United States (Morgan et al. 2010; Skitka
and Bauman 2008). The motivational impact of moral conviction was a
robust effect when controlling for alternative explanations, such as strength of
partisanship and attitude strength.
In an ingenious study, Wright et al. (2008, Study 2) put people's self-interest
into direct conflict with their moral convictions. Participants were pretested for
their moral convictions on various issues. During the lab portion of the study
they were shown another participant's essay about an issue (manipulated to be
inconsistent with the real participant's attitudes) and were then given 10 raffle
tickets to divide between themselves and that participant. People almost always divide
the prizes equally in this kind of economic game (e.g., Fehr and Fischbacher 2004).
People with stronger moral convictions about the essay issue, however, kept most
of the raffle tickets for themselves (on average, 8.5 out of 10 tickets) when dividing
the tickets between themselves and the participant who had a divergent attitude
from their own. Those who did not see the issue as a moral one, conversely, divided
the tickets equally between themselves and the other participant (Wright et al.
2008). In summary, people are usually motivated by fairness in these kinds of
economic games, but their moral convictions and disdain for someone who did
not share their moral views trumped any need to be fair.

Emotion
The domain theory of attitudes also makes the prediction that moral convictions
should have especially strong ties to emotion. For example, Person A might
have a preference for low taxes. If her taxes rise, she is likely to be disappointed
rather than outraged. Imagine, instead, Person B, who has a strong moral
conviction that taxes be kept low. He is likely to respond to the same rise in
tax rates with rage, disgust, and contempt. In short, the strength and content
of emotional reactions associated with attitudes rooted in moral conviction
are likely to be quite different than the emotional reactions associated with
otherwise strong but nonmoral attitudes. Emotional responses to given issues
might also play a key role in how people detect that an attitude is a moral
conviction, or in strengthening moral convictions.
Emotion as consequence
Consistent with the prediction that moral mandates will have different, and
perhaps stronger, ties to emotion than nonmoral attitudes, people whose
opposition to the Iraq War was high rather than low in moral conviction also
experienced more negative emotion (i.e., anger and anxiety) about the War in
the weeks just before and after it began. In contrast, supporters high in moral
conviction experienced more positive emotions (i.e., pleased and glad) about
going to war compared to those low in moral conviction, results that emerged
even when controlling for a variety of attitude strength measures. Similar
positive and negative emotional reactions were also observed in supporters
and opponents reactions to the thought of legalizing physician-assisted suicide
(Skitka and Wisneski 2011).

Emotion as antecedent
Other research has tested whether people use emotions as information in
deciding whether a given attitude is a moral conviction. Consistent with this
idea, people make harsher moral judgments of others' behavior when exposed
to incidental disgust such as foul odors or when in a dirty lab room, than
they do when exposed to more pleasant odors or a clean lab room (Schnall
et al. 2008). People generalize disgust cues and apply them to their moral
judgments. It is important to point out, however, that moral judgments are not
the same thing as moral convictions. Attitudes (unlike judgments) tend to be
stable, internalized, and treated much like possessions (e.g., Prentice 1987). In
contrast, moral judgments are single-shot reactions to a given behavior, actor,
or hypothetical, and share few psychological features with attitudes. Learning
that incidental disgust leads to harsher moral judgments, therefore, may not
mean that incidental disgust (or other incidental emotions) would also lead
people to have stronger moral convictions.
Consistent with distinctions between judgments and attitudes, research
in my lab has found no effect of incidental emotion on moral convictions
(Skitka, unpublished data). We have manipulated whether data is collected
in a clean versus dirty lab; in the context of pleasant smells (e.g., Hawaiian breeze)
versus disgusting smells (e.g., fart spray or a substance that smelled like
a dead rat); when participants have their hands and forearms placed in an
unpleasant concoction of glue and gummy worms, versus feathers and beads;
having participants write retrospective accounts about a time when they
felt particularly angry, sad, happy, or disgusted; or using a misattribution
of arousal paradigm. Although manipulation checks revealed that each of
these manipulations had the intended effect, none led to changes in moral
conviction.
One possible explanation for these null results is that integral (i.e.,
attitude-specific) emotions tied to the attitude object itself may be trumping
the potential informational influence of incidental emotions. Once a moral
conviction comes to mind, so too might all the emotional associations with
it, which could overwhelm and replace incidental affect in peoples current
working memory. Attitude-specific emotions might therefore play a more
important role than incidental emotions in how people identify whether a
given attitude is one held with moral conviction.
To test this idea, participants were exposed to one of four categories of
pictures as part of a bogus recognition task. The images varied in relevance
to the issue of abortion: pictures of aborted fetuses (attitudinally relevant
disgust/harm); animal rights abuses (attitudinally irrelevant disgust/harm);
pictures of non-bloody, disgusting images, such as toilets overflowing with
feces (attitudinally irrelevant disgust, no harm); or neutral photos (e.g., office
furniture; no disgust/harm). Pictures were presented either subliminally
(14 msec) or supraliminally (250 msec). Participants' moral conviction about
abortion increased relative to control only after supraliminal exposure to the
abortion pictures. Moreover, this effect was unique to moral conviction and
was not observed with attitude importance. A second study replicated this
effect, and tested whether it was mediated by disgust, anger, or perceived
harm. The effect was fully mediated by disgust (Wisneski and Skitka 2013).
Taken together, these results suggest that emotions play a key role in how
people form or strengthen moral convictions, but these processes, although
fast, nonetheless require some conscious processing.
In summary, it is clear that moral convictions have ties to integral emotion.
The relationship between emotions and moral convictions, however, appears
to be complex. Future research needs to manipulate other kinds of integral
emotions, including positive emotions, to discover whether other emotional
cues can also cause changes in moral conviction. Emotions not only serve
as an antecedent to moral convictions, but also appear to be consequences
of them as well. Although more research is needed to further tease apart the
complex connections between moral convictions and emotions, one thing is
clear: emotions are a key part of the story.

The authority independence hypothesis

A core premise of the domain theory of attitudes is that people do not rely on
conventions or authorities to define moral imperative; rather, people perceive
what is morally right and wrong irrespective of authority or conventional
dictates. Moral beliefs are not by definition antiestablishment or antiauthority,
but are simply not dependent on conventions, rules, or authorities. When
people take a moral perspective, they focus on their ideals and the way they
believe things ought to or should be done rather than on a duty to comply with
authorities or normative conventions. The authority independence hypothesis
therefore predicts that when peoples moral convictions are at stake, they are
more likely to believe that duties and rights follow from the greater moral
purposes that underlie rules, procedures, and authority dictates than from the
rules, procedures, or authorities themselves (see also Kohlberg 1976; Rest
etal. 1999).
One study tested the authority independence hypothesis by examining
which was more important in predicting people's reactions to a controversial
US Supreme Court decision: people's standing perceptions of the Court's
legitimacy, or people's moral convictions about the issue being decided (Skitka
et al. 2009). A nationally representative sample of adults rated the legitimacy of
the Court, as well as their level of moral conviction about the issue of physician-
assisted suicide several weeks before the Court heard arguments about whether
states could legalize the practice, or whether it should be federally regulated.
The same sample of people was contacted again after the Court upheld the
right of states to legalize physician-assisted suicide. Knowing whether people's
support or opposition to physician-assisted suicide was high versus low in
moral conviction predicted whether they saw the Supreme Court's decision
as fair or unfair, as well as their willingness to accept the decision as binding.
Pre-ruling perceptions of the legitimacy of the Court, in contrast, had no effect
on post-ruling perceptions of fairness or decision acceptance.
Other research has found behavioral support for the prediction that people
reject authorities and the rule of law when outcomes violate their moral
convictions. Mullen and Nadler (2008) exposed people to legal decisions
that supported, opposed, or were unrelated to their moral convictions. The
experimenters distributed a pen with a post-exposure questionnaire, and
asked participants to return them at the end of the session. Consistent with the
prediction that decisions, rules, and laws that violate peoples moral convictions
erode support for the authorities and authority systems who decide these things,
participants were more likely to steal the pen after exposure to a legal decision that
was inconsistent rather than consistent with their personal moral convictions.
People's moral mandates should affect not only their perceptions of decisions
and willingness to comply with authorities, but should also affect their
perceptions of authorities' legitimacy. People often do not know the right
answer to various decisions authorities are asked to make (e.g., what is best for
the group, whether a defendant is really guilty or innocent), and therefore, they
frequently rely on cues like procedural fairness and an authority's legitimacy
to guide their reactions (Lind 2001). When people have moral certainty about
what outcome authorities and institutions should deliver, however, they do
not need to rely on standing perceptions of legitimacy as proxy information
to judge whether the system works. In these cases, they can simply evaluate
whether authorities get it right. Right decisions indicate that authorities
are appropriate and work as they should. Wrong answers signal that the
system is somehow broken and is not working as it should. In short, one
could hypothesize that people use their sense of morality as a benchmark
to assess authorities' legitimacy. Consistent with this idea, the results of the
Supreme Court study referenced earlier also found that perceptions of the
Court's legitimacy changed from pre- to post-ruling as a function of whether
the Court ruled consistently or inconsistently with perceivers' morally vested
outcome preferences (Skitka et al. 2009).

The nonconformity hypothesis

Moral convictions might inoculate people from peer as well as authority
influence. People typically conform to the majority when faced with the
choice to accept or reject the majority position. This occurs because those who
oppose the majority risk ridicule and disenfranchisement, whereas those who
conform expect acceptance (Asch 1956). In addition, people may conform
when they are unsure about the appropriate way to think or behave; they adopt
the majority opinion because they believe the majority is likely to be correct
(Chaiken and Stangor 1987; Deutsch and Gerard 1955). Therefore, people
conform both to gain acceptance from others and to be right.
Feeling strong moral convictions about a given issue should weaken the
typical motives for conformity, making people more resistant to majority
influence. To test this idea, Hornsey and colleagues presented student
participants with feedback that their position on same-sex marriage was
either the majority or minority view on campus. Surprisingly, stronger moral
convictions about this issue were associated with greater willingness to
engage in activism when students believed they were in the opinion minority,
rather than majority, an example of counter-conformity (Hornsey et al.
2003, 2007).
Another study had participants engage in what they believed was a computer-
mediated interaction with four additional (though, in fact, virtual) peers. The
study was scripted so that each participant was exposed to a majority of peers
who supported torture (pretesting indicated that none of our study participants
did). Participants were shown the other participants' opinions one at a time
before they were asked to provide their own position on the issue to the group.
Results supported the hypothesis: Stronger moral convictions were associated
with lower conformity rates, even when controlling for a number of indices of
attitude strength (Aramovich et al. 2012).4 By contrast, people do show strong
conformity effects in an Asch paradigm when making moral judgments about
moral dilemmas, such as the trolley problem (Kundu and Cummins 2012),
providing further evidence that moral judgments and moral attitudes are not
the same things.

Conclusion

Theorists in recent years have proposed a number of ways that attitudes
rooted in moral conviction differ from otherwise strong but nonmoral
attitudes. The research reviewed here supports the hypothesis that moral
mandates represent a special class of strong attitudes that do not reduce to
other dimensions of attitude strength. Moreover, moral mandates differ from
strong but nonmoral attitudes in ways that are predicted by a domain theory
of attitudes. They are perceived as akin to facts about the world, positions that
should be universally adopted, have particularly strong ties to emotion, are
motivational, and predict a host of behaviors and reactions including authority
independence, political legitimacy, anti-conformity, and civic engagement.
With some exceptions, most research on the concept of moral conviction
has focused on determining whether and how moral mandates differ from
nonmoral attitudes. The challenge for future research will be to begin to gain a
greater understanding of how moral mandates are developed in the first place
and, once established, whether people are capable of "demoralizing" an attitude.
Given that moral mandates have the potential for motivating great good (e.g., civic
engagement, willingness to fight for justice), as well as motivating acts many
would label as evil (e.g., terrorism, vigilantism; see Skitka and Morgan 2009),
learning more about the attitude moralization process represents an important
area of inquiry going forward.

Notes

* Author's Note: Linda J. Skitka, Department of Psychology, University of Illinois
at Chicago. Thanks to Brittany Hanson, G. Scott Morgan, and Daniel Wisneski
for their helpful comments on earlier drafts of this chapter. Funding from the
National Science Foundation #1139869 facilitated preparation of this chapter.
Correspondence should be sent to: Linda J. Skitka, Ph.D., Professor of Psychology,
University of Illinois at Chicago, Department of Psychology (m/c 285), 1007
W. Harrison St., Chicago, IL 60607-7137, Email: lskitka@uic.edu.
1 Skitka et al. (2008) initially labeled this theoretical perspective as an "integrated
theory of moral conviction," or ITMC.
2 Face validity refers to the degree to which one can infer from test items that the
target variable is being measured.
3 Not all values are perceived in moral terms. For example, fewer than 20 percent
of participants perceived the Schwartz values associated with power, achievement,
hedonism, and stimulation as moral, and fewer than 30 percent rated more than
one of the self-direction items as moral (Schwartz 2007).
4 Having another dissenter in the group did not change the effects of moral conviction.
References

Aguilera, R., Hanson, B., and Skitka, L. J. (2013). Approaching good or avoiding bad?
Understanding morally motivated collective action. Paper presented at the annual
meeting of the Society for Personality and Social Psychology, New Orleans, LA.
Aramovich, N. P., Lytle, B. L., and Skitka, L. J. (2012). Opposing torture: Moral
conviction and resistance to majority influence. Social Influence, 7, 21–34.
Asch, S. E. (1956). Studies of independence and conformity: A minority of one
against a unanimous majority. Psychological Monographs, 70(9, Whole No. 416), 1–70.
Bartels, D. M. (2008). Principled moral sentiment and the flexibility of moral
judgment and decision making. Cognition, 108, 381–417.
Besley, J. C. (2012). Does fairness matter in the context of anger about nuclear energy
decision making? Risk Analysis, 32, 25–38.
Boyd, R. (1988). How to be a moral realist. In G. Sayre-McCord (ed.), Essays in Moral
Realism. Ithaca, NY: Cornell University Press, pp. 181–228.
Brandt, M. J., and Wetherell, G. A. (2012). What attitudes are moral attitudes? The
case of attitude heritability. Social Psychological and Personality Science, 3, 172–9.
Brown, D. (1991). Human Universals. New York: McGraw-Hill.
Chaiken, S., and Stangor, C. (1987). Attitudes and attitude change. Annual Review
of Psychology, 38, 575–630.
Cushman, F. A., Young, L., and Hauser, M. D. (2006). The role of reasoning and
intuition in moral judgments: Testing three principles of harm. Psychological
Science, 17, 1082–9.
Darwin, D. O. (1982). Public attitudes toward life and death. Public Opinion
Quarterly, 46, 521–33.
Deutsch, M., and Gerard, H. B. (1955). A study of normative and informational social
influences upon individual judgment. Journal of Abnormal and Social Psychology,
51, 629–36.
Earle, T. C., and Siegrist, M. (2008). On the relation between trust and fairness in
environmental risk management. Risk Analysis, 28, 1395–413.
Fehr, E., and Fischbacher, U. (2004). Third-party punishment and social norms.
Evolution and Human Behavior, 25, 63–87.
Garguilo, S. P. (2010). Moral Conviction as a Moderator of Framing Effects (Master's
thesis). Rutgers University, Rutgers, NJ.
Gibbs, J. C., Basinger, K. S., Grime, R. L., and Snarey, J. R. (2007). Moral judgment
development across cultures: Revisiting Kohlberg's universality claims.
Developmental Review, 27, 443–550.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring
objectivism. Cognition, 106, 1339–66.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D.
(2001). An fMRI investigation of emotional engagement in moral judgment.
Science, 293, 2105–8.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach
to moral judgment. Psychological Review, 108, 814–34.
Hornsey, M. J., Majkut, L., Terry, D. J., and McKimmie, B. M. (2003). On being loud
and proud: Non-conformity and counter-conformity to group norms. British
Journal of Social Psychology, 42, 319–35.
Hornsey, M. J., Smith, J. R., and Begg, D. I. (2007). Effects of norms among those with
moral conviction: Counter-conformity emerges on intentions but not behaviors.
Social Influence, 2, 244–68.
Hume, D. (1968). A Treatise of Human Nature. Oxford, England: Clarendon Press.
Original work published 1888.
Kohlberg, L. (1976). Moral stages and moralization: The cognitive developmental
approach. In T. Lickona (ed.), Moral Development and Behavior: Theory, Research
and Social Issues. New York: Holt, Rinehart, & Winston, pp. 31–53.
Krosnick, J. A., and Petty, R. E. (1995). Attitude strength: An overview. In R. E.
Petty and J. A. Krosnick (eds), Attitude Strength: Antecedents and Consequences.
Mahwah, NJ: Lawrence Erlbaum Associates, pp. 1–24.
Kundu, P., and Cummins, D. D. (2012). Morality and conformity: The Asch paradigm
applied to moral decisions. Social Influence, (ahead-of-print), 1–12.
Lind, E. A. (2001). Fairness heuristic theory: Justice judgments as pivotal cognitions
in organizational relations. In J. Greenberg and R. Cropanzano (eds), Advances in
Organizational Behavior. San Francisco: New Lexington Press, pp. 27–55.
Lodewijkx, H. F. M., Kersten, G. L. E., and Van Zomeren, M. (2008). Dual pathways
to engage in "Silent Marches" against violence: Moral outrage, moral cleansing,
and modes of identification. Journal of Community and Applied Social Psychology,
18, 153–67.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. New York: Penguin.
McDowell, J. (1979). Virtue and reason. The Monist, 62, 331–50.
Mercury News (2012). Talk back/Saturday forum letters. Retrieved 12/17/12 from
http://www.mercurynews.com/top-stories/ci_21813856/oct-20-talk-back-
saturday-forum-letters
Moore, G. E. (1903). Principia Ethica. New York: Cambridge University Press.
Morgan, G. S. (2011). Toward a Model of Morally Motivated Behavior: Investigating
Mediators of the Moral Conviction-action Link (Doctoral dissertation). University
of Illinois at Chicago.
Morgan, G. S., Skitka, L. J., and Lytle, B. L. (under review). Universally and objectively
true: The psychological foundations of moral conviction.
Morgan, G. S., Skitka, L. J., and Lytle, B. (in preparation).
Morgan, G. S., Skitka, L. J., and Wisneski, D. (2010). Moral and religious convictions
and intentions to vote in the 2008 Presidential election. Analyses of Social Issues
and Public Policy, 10, 307–20.
Nichols, S., and Folds-Bennett, T. (2003). Are children moral objectivists? Children's
judgments about moral and response-dependent properties. Cognition, 90,
B23–32.
Nucci, L. P. (2001). Education in the Moral Domain. Cambridge, UK: Cambridge
University Press.
Nucci, L. P., and Turiel, E. (1978). Social interactions and the development of social
concepts in pre-school children. Child Development, 49, 400–7.
Prentice, D. A. (1987). Psychological correspondence of possessions, attitudes, and
values. Journal of Personality and Social Psychology, 53, 993–1003.
Prinz, J. J. (2007). The Emotional Construction of Morals. New York: Oxford
University Press.
Rest, J. R., Narvaez, D., Bebeau, M. J., and Thoma, S. J. (1999). Postconventional
Moral Thinking: A Neo-Kohlbergian Approach. Mahwah, NJ: Lawrence
Erlbaum.
Schnall, S., Haidt, J., Clore, G. L., and Jordan, A. H. (2008). Disgust as embodied
moral judgment. Personality and Social Psychology Bulletin, 34, 1096–109.
Schwartz, S. H. (2007). Universalism and the inclusiveness of our moral universe.
Journal of Cross Cultural Psychology, 38, 711–28.
Shweder, R. A. (2002). The nature of morality: The category of bad acts. Medical
Ethics, 9, 6–7.
Siegrist, M., Connor, M., and Keller, C. (2012). Trust, confidence, procedural fairness,
outcome fairness, moral conviction, and the acceptance of GM field experiments.
Risk Analysis, 32, 1394–403.
Skitka, L. J. (2010). The psychology of moral conviction. Social and Personality
Psychology Compass, 4, 267–81.
(2012). Understanding morally motivated behavioral intentions: A matter of
consequence or conscience? Paper presented at the Cognitions vs. Emotions in
Ethical Behavior Conference, University of Toronto.
Skitka, L. J., Bauman, C. W., and Lytle, B. L. (2009). The limits of legitimacy: Moral
and religious convictions as constraints on deference to authority. Journal of
Personality and Social Psychology, 97, 567–78.
Skitka, L. J., Bauman, C. W., and Mullen, E. (2008). Morality and justice: An expanded
theoretical perspective and review. In K. A. Hedgvedt and J. Clay-Warner (eds),
Advances in Group Processes, Vol. 25. Bingley, UK: Emerald Group Publishing
Limited, pp. 1–27.
Skitka, L. J., Bauman, C. W., and Sargis, E. G. (2005). Moral conviction: Another
contributor to attitude strength or something more? Journal of Personality and
Social Psychology, 88, 895–917.
Skitka, L. J., and Morgan, G. S. (2009). The double-edged sword of a moral state of
mind. In D. Narvaez and D. K. Lapsley (eds), Moral Self, Identity, and Character:
Prospects for a New Field of Study. Cambridge, UK: Cambridge University Press,
pp. 355–74.
Skitka, L. J., and Wisneski, D. C. (2011). Moral conviction and emotion. Emotion
Review, 3, 328–30.
Smith, M. (1994). The Moral Problem. Oxford, England: Blackwell.
Sturgeon, N. (1985). Moral explanations. In D. Copp and D. Zimmerman (eds),
Morality, Reason, and Truth. Totowa, NJ: Rowman and Allanheld, pp. 49–78.
Swink, N. (2011). Dogmatism and moral conviction in individuals: Injustice for all.
(Doctoral dissertation). Wichita State University.
Tetlock, P. E., Kristel, O. V., Elson, S. B., Green, M. C., and Lerner, J. S. (2000).
The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and
heretical counterfactuals. Journal of Personality and Social Psychology, 78, 853–70.
Turiel, E. (1978). Social regulations and domains of social concepts. In W. Damon
(ed.), New Directions for Child Development. Vol. 1. Social Cognition. New York:
Gardner, pp. 45–74.
(1983). The Development of Social Knowledge: Morality and Convention. New York:
Cambridge University Press.
(1998). The development of morality. In W. Damon (Series ed.) and N. Eisenberg
(Vol. ed.), Handbook of Child Psychology: Vol. 3. Social, Emotional, and Personality
Development (5th ed.). New York: Academic Press, pp. 863–932.
Uhlmann, E. L., Pizarro, D. A., Tannenbaum, D., and Ditto, P. H. (2009). The
motivated use of moral principles. Judgment and Decision Making, 6, 476–91.
Van Zomeren, M., Postmes, T., Spears, R., and Bettache, K. (2011). Can moral
convictions motivate the advantaged to challenge social inequality? Extending
the social identity model of collective action. Group Processes and Intergroup
Relations, 14, 735–53.
Wisneski, D. C., Lytle, B. L., and Skitka, L. J. (2009). Gut reactions: Moral conviction,
religiosity, and trust in authority. Psychological Science, 20, 1059–63.
Wisneski, D. C., and Skitka, L. J. (2013). Flipping the moralization switch:
Exploring possible routes to moral conviction. Emotion pre-conference, Society for
Personality and Social Psychology, New Orleans, LA.
Wright, J. C. (2012). Children's and adolescents' tolerance for divergent beliefs:
Exploring the cognitive and affective dimensions of moral conviction in our
youth. British Journal of Developmental Psychology, 30, 493–510.
Wright, J. C., Cullum, J., and Schwab, N. (2008). The cognitive and affective
dimensions of moral conviction: Implications for attitudinal and behavioral
measures of interpersonal tolerance. Personality and Social Psychology Bulletin, 34,
1461–76.
Wright, J. C., Grandjean, P. T., and McWhite, C. B. (2012). The meta-ethical grounding
of our moral beliefs: Evidence of meta-ethical pluralism. Philosophical Psychology,
iFirst, 1–26.
Zaal, M. P., Van Laar, C., Ståhl, T., Ellemers, N., and Derks, B. (2011). By any means
necessary: The effects of regulatory focus and moral conviction on hostile and
benevolent forms of collective action. British Journal of Social Psychology, 50,
670–89.
9

How Different Kinds of Disagreement
Impact Folk Metaethical Judgments

James R. Beebe*

Although the empirical study of folk metaethical judgments is still in its infancy,
a variety of interesting and significant results have been obtained.1 Goodwin
and Darley (2008), for example, report that individuals tend to regard ethical
statements as more objective than conventional or taste claims and almost
as objective as scientific claims, although there is considerable variation in
metaethical intuitions across individuals and across different ethical issues.
Goodwin and Darley (2012) also report (i) that participants treat statements
condemning ethical wrongdoing as more objective than statements enjoining
good or morally exemplary actions, (ii) that perceived consensus regarding an
ethical statement positively influences ratings of metaethical objectivity, and
(iii) that moral objectivism is associated with greater discomfort with and more
pejorative attributions toward those with whom individuals disagreed. Beebe
and Sackris (under review) found that folk metaethical commitments vary
across different life stages, with decreased objectivism during the college years.
Sarkissian et al. (2011) found that folk intuitions about metaethical objectivity vary as a function of cultural distance, with increased cultural distance between disagreeing parties leading to decreased attributions of metaethical objectivity. Wright et al. (2013) found that not only is there significant diversity among individuals with regard to the objectivity they attribute to ethical claims, there is also significant diversity of opinion with respect to whether individuals take certain issues (such as abortion or anonymously donating money to charity) to be ethical issues at all, despite the fact that philosophers overwhelmingly regard these issues as ethical.2 Wright et al. (2013) provide the following useful summary of the current set of findings on folk metaethical intuitions:
People do not appear to conceive of morality as a unified (metaethically speaking) domain, but rather as a domain whose normative mandates come in different shapes and sizes. They view the wrongness of some moral actions as clear and unquestionable, unaltered (and unalterable) by the feelings/beliefs/values of the individual or culture. They view the wrongness of other actions (though still genuinely moral in nature) as more sensitive to, and molded by, the feelings/beliefs/values of the actor and/or the people whose lives would be (or have been) affected by the action. This possibility is one we've not seen seriously considered in the metaethical literature, and perhaps it is time that it was.

The present chapter reports a series of experiments designed to extend the empirical investigation of folk metaethical intuitions by examining how different kinds of ethical disagreement can impact attributions of objectivity to ethical claims.
Study 1 reports a replication of Beebe and Sackris's work on metaethical intuitions, in order to establish a baseline of comparison for Studies 2 through 4. In Study 2, societal disagreement about ethical issues was made salient to participants before they answered metaethical questions about the objectivity of ethical claims, and this was found to decrease attributions of objectivity to those claims. In Studies 3 and 4, the parties with whom participants were asked to consider having an ethical disagreement were made more concrete than in Studies 1 and 2, using either verbal descriptions or facial pictures. This manipulation was found to increase attributions of metaethical objectivity. In a final study, metaethical judgments were shown to vary with the moral valence of the actions performed by the disagreeing party; in other words, a Knobe effect for metaethical judgments was found. These studies aim to increase our understanding of the complexity of the folk metaethical landscape.

Study 1
Method

Participants
Study 1 was an attempt to replicate Beebe and Sackris's (under review) initial study with a population of participants that was limited to the same university student population from which participants for Studies 2 and 3 would be drawn. Participants were 192 undergraduate students (average age = 20, 53% female, 40% Anglo-American) from the University at Buffalo (a large, public university in the northeastern United States) who participated in exchange for extra credit in an introductory course.

Materials
Beebe and Sackris asked two and a half thousand participants between the ages of 12 and 88 to indicate the degree to which they agreed or disagreed with the claims that appear in Table 9.1 and the extent to which they thought that people in our society disagreed about whether they are true. The same set of claims was used in Studies 1 through 3.

Procedure
The items from Table 9.1 were divided into three questionnaire versions, and participants indicated their agreement or disagreement with them on a six-point scale, where 1 was anchored with "Strongly Disagree" and 6 with "Strongly Agree." Participants rated the extent to which they thought people in our society disagreed about the various claims on a six-point scale anchored with "There is no disagreement at all" and "There is an extremely large amount of disagreement." In order to capture one kind of objectivity that participants might attribute to the various claims in Table 9.1, participants were asked, "If someone disagrees with you about whether [one of these claims is true], is it possible for both of you to be correct or must one of you be mistaken?" The answer "At least one of you must be mistaken" was interpreted as an attribution of objectivity, and an answer of "It is possible for both of you to be correct" was taken to be a denial of objectivity.
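
To make this coding concrete, here is a minimal Python sketch (our illustration, with invented answers, not the study's data) of how forced-choice responses can be tallied into the proportion of objectivity attributions plotted in Figure 9.1:

# Invented responses to the Task 2 question for a single claim; the answer
# "At least one of you must be mistaken" is coded as an objectivity attribution.
answers = ["mistaken", "both correct", "mistaken", "mistaken", "both correct"]
proportion_objectivist = sum(a == "mistaken" for a in answers) / len(answers)
print(proportion_objectivist)  # 0.6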

Results

As can be seen from Figure 9.1, the items in Table 9.1 are ordered within each
subcategory in terms of increasing proportions of participants who attributed
objectivity to them.
Table 9.1 Factual, ethical, and taste claims used in Beebe and Sackris (under review) and in Studies 1 through 4

Factual
1. Frequent exercise usually helps people to lose weight.
2. Global warming is due primarily to human activity (for example, the burning of fossil fuels).
3. Humans evolved from more primitive primate species.
4. There is an even number of stars in the universe.
5. Julius Caesar did not drink wine on his 21st birthday.
6. New York City is further north than Los Angeles.
7. The earth is only 6,000 years old.
8. Mars is the smallest planet in the solar system.

Ethical
9. Assisting in the death of a friend who has a disease for which there is no known cure and who is in terrible pain and wants to die is morally permissible.
10. Before the third month of pregnancy, abortion for any reason is morally permissible.
11. Anonymously donating a significant portion of one's income to charity is morally good.
12. Scientific research on human embryonic stem cells is morally wrong.
13. Lying on behalf of a friend who is accused of murder is morally permissible.
14. Cutting the American flag into pieces and using it to clean one's bathroom is morally wrong.
15. Cheating on an exam that you have to pass in order to graduate is morally permissible.
16. Hitting someone just because you feel like it is wrong.
17. Robbing a bank in order to pay for an expensive vacation is morally bad.
18. Treating someone poorly on the basis of their race is morally wrong.

Taste
19. Classical music is better than rock music.
20. Brad Pitt is better looking than Drew Carey.
21. McDonald's hamburgers taste better than hamburgers made at home.
22. Gourmet meals from fancy Italian restaurants taste better than microwavable frozen dinners.
23. Barack Obama is a better public speaker than George W. Bush.
24. Beethoven was a better musician than Britney Spears is.
[Figure 9.1: bar chart showing the proportion of objectivity attributions (y-axis, 0.0–1.0) for question numbers 1–24 (x-axis), grouped by question type (factual, ethical, taste).]

Figure 9.1 Proportions of participants who attributed objectivity to the 24 items in Study 1. Error bars in all figures represent 95 percent confidence intervals.
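
For readers who wish to reproduce error bars of this kind, the short Python sketch below computes a 95 percent confidence interval for a proportion using the normal approximation. This is only an illustration under our own assumptions: the chapter does not specify which interval method was used, and the counts here are invented.

import math

def proportion_ci(successes, n, z=1.96):
    # 95% confidence interval for a proportion (normal approximation);
    # z = 1.96 corresponds to 95 percent coverage.
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Invented example: 64 of 100 participants gave an objectivist response.
low, high = proportion_ci(64, 100)
print(f"proportion = 0.64, 95% CI [{low:.2f}, {high:.2f}]")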

Goodwin and Darley (2008) and Beebe and Sackris both found that more participants attributed objectivity to factual claims than to ethical or taste claims. In Study 1, a greater proportion of participants attributed objectivity to factual claims (0.64, averaged across all claims in the factual subcategory) than to ethical (0.34) or taste (0.10) claims. Chi-square tests of independence reveal that the difference between the factual and ethical proportions was significant, χ²(1, N = 926) = 80.523, p < 0.001, Cramér's V = 0.30, and the difference between the ethical and taste proportions was significant as well, χ²(1, N = 826) = 61.483, p < 0.001, Cramér's V = 0.27.3 Study 1 also replicates earlier findings that objectivity attributions are positively associated with strength of belief about an issue (χ²(2, N = 1,224) = 67.276, p < 0.001, Cramér's V = 0.23) but negatively associated with the extent of perceived disagreement about the issue (χ²(5, N = 1,218) = 89.517, p < 0.001, Cramér's V = 0.27). In other words, participants tended to attribute more objectivity to claims that they had stronger opinions about than to claims they had weaker opinions about, but they tended to attribute less objectivity to claims they recognized were widely disputed in society. Somewhat surprisingly, higher ratings of perceived disagreement about an issue were positively associated with participants' strength of opinion about the issue, χ²(10, N = 1,212) = 100.897, p < 0.001, Cramér's V = 0.20.
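
As a concrete illustration of the tests reported above, the following Python sketch runs a chi-square test of independence and derives Cramér's V from the test statistic. The contingency table is hypothetical, not the study's raw data, and this is not the authors' analysis script.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = claim type (factual, ethical),
# columns = response (objectivist, non-objectivist).
table = np.array([[320, 180],
                  [145, 281]])

# Note: scipy applies Yates's continuity correction to 2 x 2 tables by default.
chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V = sqrt(chi2 / (N * (min(rows, cols) - 1))).
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2({dof}, N = {n}) = {chi2:.3f}, p = {p:.4f}, Cramér's V = {v:.2f}")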

Discussion

Like Goodwin and Darley (2008) and Beebe and Sackris, Study 1 found that participants attribute more objectivity to some ethical claims than to some factual claims and that there is significant variation concerning the degree of objectivity attributed to different claims within each subcategory.4 Thus, Study 1 reinforces the conclusion already established by Goodwin and Darley (2008) and Beebe and Sackris that the question of whether ordinary individuals are moral objectivists is not going to have a simple "Yes" or "No" answer.

Study 2
Method

Participants
A total of 195 undergraduate students (average age = 19, 47% female, 69% Anglo-American) from the University at Buffalo participated in Study 2 in exchange for extra credit in an introductory course.

Materials and procedure


The primary purpose of Study 1 was to construct a baseline of data with which
the results of Studies 2 through 4 could be compared. These latter studies
each introduce some kind of modification to the research materials used
in Study 1 in order to see how folk metaethical judgments would be affected.
The manipulation in Study 2 was simply a change in the order of the tasks
participants were asked to complete.
As noted above, Study 1 followed Beebe and Sackris in having participants
perform the following tasks in the following order:
Task 1: Participants indicated the extent to which they agreed or disagreed with a given claim.
Task 2: Participants answered the question "If someone disagrees with you about whether [the claim in question is true], is it possible for both of you to be correct or must at least one of you be mistaken?"
Task 3: Participants rated the extent to which they thought people in our society disagreed about whether the claim in question is true.

Thus, the last thing participants were asked to do was to consider the extent of societal disagreement with respect to the claims. Given the negative
association between perceived disagreement and objectivity attributions, it
was hypothesized that if participants were directed to think about societal
disagreement before completing Task 2, their attributions of metaethical
objectivity would decrease. Disagreement was not hypothesized to have a
similar effect on factual and taste claims.

Results

As expected, the overall proportion of objectivity attributions in the ethical subcategory was lower in Study 2 (0.29) than in Study 1 (0.34). This difference was significant, χ²(1, N = 1,045) = 4.015, p < 0.05, Cramér's V = 0.06. There were no significant differences in the factual and taste subcategories. Thus, it appears that making disagreement about ethical issues salient to participants can have a modest effect on the metaethical judgments they make. The fact that this result was obtained in the ethical domain but not in the factual domain is consistent with the widespread view among philosophers that ethical disagreement, because of its seeming intractability, poses a significant challenge to the objectivity of ethical claims in a way that disagreement about factual matters fails to do for the objectivity of factual claims.5

Discussion

The findings of Study 2 are consistent not only with the correlational data
obtained by Goodwin and Darley (2008) and Beebe and Sackris but also with
the experimental data obtained by Goodwin and Darley (2012). The latter manipulated participants' perceived consensus about ethical issues by giving them bogus information about the percentage of students from the same institution who agreed with them. Participants who were told that a majority of their peers agreed with them about some ethical statement were more likely to think there was a correct answer as to whether or not the statement was true than participants who were told that significantly fewer of their peers agreed with them. These studies show that perceived disagreement or consensus can be a causal, and not a merely correlational, factor in folk metaethical decision-making.

Study 3

Various studies of folk intuitions about moral responsibility have shown that
individuals hold agents more responsible for their actions when the situations of
those agents are described concretely than when they are described abstractly.
Nichols and Knobe (2007), for example, obtained significantly higher ratings
of moral responsibility for Bill, who was attracted to his secretary and killed
his wife and three children in order to be with her, than for a person whose
actions were left unspecified. Small and Loewenstein (2003, 2005) showed that
the subtlest change in the concreteness of the representation of an individual
can lead to surprising differences in judgments or decisions regarding them.
When their participants were given the opportunity to punish randomly
selected defectors in an economic game, participants selected significantly
harsher punishments for anonymous defectors whose numbers had just
been chosen than for anonymous defectors whose numbers were about to be
chosen. Because increased concreteness appears to heighten or intensify the
engagement of cognitive and affective processes associated with attributions
of blame and responsibility and to lead participants to treat the actions of
concrete individuals as more serious than abstractly represented ones,6 it was
hypothesized that increasing the concreteness of those with whom participants
were asked to imagine they disagreed would lead participants to take the
disagreements more seriously and to increase attributions of metaethical
objectivity.
Method

Participants
A total of 108 undergraduate students (average age = 19, 59% female, 66% Anglo-American) from the University at Buffalo participated in Study 3 in exchange for extra credit in an introductory course.

Materials and procedure


In Beebe and Sackris's materials, which serve as the basis for Studies 1 and 2, each participant was asked "If someone disagrees with you about whether [one of these claims is true], is it possible for both of you to be correct or must one of you be mistaken?" In Study 3, this unspecified "someone" was replaced with "Joelle P., a junior nursing major at UB," "Mike G., a freshman computer science major at UB," or some other student from the participant's university, whose first name, last initial, class, and major were specified.
In between completing Tasks 1 and 3 (which were described above) for 8 of the 24 claims found in Table 9.1, each participant completed a modified version of Task 2 such as the following:

Madeline B., a senior biology major at UB, believes it is permissible to lie on behalf of a friend who is accused of murder. If you disagree with Madeline B., is it possible for both of you to be correct or must one of you be mistaken?
It is possible for both of you to be correct.
At least one of you must be mistaken.
[If you agree with Madeline B., please skip to the next question.]

Results

In accord with my expectations, having more concrete parties with whom to disagree resulted in a significantly greater overall proportion of objectivity attributions to ethical claims in Study 3 (0.43) than in Study 1 (0.34), χ²(1, N = 826) = 5.399, p < 0.05, Cramér's V = 0.08. The proportions were numerically higher for eight of the ten ethical claims. Having more concrete parties in Study 3 did not, however, result in any significant difference in the objectivity attributed to factual or taste claims.

Discussion

The results from Study 3 are consistent with those obtained by Sarkissian et al. (2011), who found that strong objectivity ratings were obtained when participants were asked to consider disagreeing with a concretely presented individual from their same culture (vs. a concretely presented individual from a different culture). The fact that the concreteness of the disagreeing parties used in Study 3 led to increased metaethical objectivity attributions may also explain why the objectivity ratings obtained in Study 1 fell below those obtained by Goodwin and Darley (2008), even though both used samples of university students. The Task 2 objectivity question in Study 1 asked participants to consider a situation of hypothetical disagreement ("If someone disagrees with you . . ."). Goodwin and Darley (2008, 1344), however, instructed participants, "We have done prior psychological testing with these statements, and we have a body of data concerning them. None of the statements have produced 100% agreement or disagreement." Each of Goodwin and Darley's objectivity questions then reiterated that some individuals who had been previously tested disagreed with participants about the relevant issue. Goodwin and Darley thus constructed situations of disagreement that were more concrete than those in Studies 1 and 2 by having (allegedly) actual rather than merely hypothetical individuals who disagreed with participants.

Study 4

Study 3 made the parties with whom experimental participants were asked to
consider disagreeing concrete by providing them with given names, surname
initials, academic classes, and majors. In Study 4, the disagreeing parties
were made concrete by having pictures of their faces shown. Faces (and parts
of faces) have been shown to have a variety of effects on morally relevant
behavior. For example, Bateson et al. (2006) found that academics paid 276 percent more for the tea they took from a departmental tea station when an image of eyes was displayed by the station than when an image of flowers was displayed. Rezlescu, Duchaine, Olivola, and Chater (2012) found that unfakeable facial features associated with trustworthiness attracted 42 percent greater investment in an economic game that required trust.7

Method

Participants
A total of 360 participants (average age = 32, 38% female, 82% Anglo-American) were recruited through Amazon's Mechanical Turk (www.mturk.com) and were directed to complete online questionnaires hosted at vovici.com.8

Materials and procedure


Combining behavioral studies and computer modeling, Oosterhof and
Todorov (2008) found that individuals make surprisingly consistent judgments
about socially relevant traits of individuals on the basis of differences
in their facial characteristics. They claim that the two most important
dimensions of face evaluation are trustworthiness/untrustworthiness and
dominance/submissiveness. Judgments concerning the first dimension are
reliably associated with judgments about whether an individual should be
approached or avoided and with attributions of happiness or anger. Judgments
concerning dominance or submissiveness were found to be reliably associated
with judgments of the maturity, masculinity, and physical strength of an
individual. Both untrustworthy and dominant faces were associated with
potential threat.9 By exaggerating features specific to one of these evaluative dimensions, Oosterhof and Todorov (2008) created the set of faces represented in Table 9.2.10 Each of the non-neutral faces was plus or minus three standard deviations from the mean along the relevant dimension. The faces in Table 9.2 were used in Study 4, along with a control condition in which no face was displayed.
Table 9.2 Faces used in Study 4

[Images of five computer-generated faces: a neutral face plus faces exaggerated along the trustworthy, untrustworthy, dominant, and submissive dimensions.]
Claims (12), (13), and (14) from Table 9.1 (concerning embryonic stem cell research, lying for a friend accused of murder, and treating a national flag disrespectfully) were selected for use in Study 4. The degrees of objectivity attributed to them in Studies 1 through 3 fell in the middle range, suggesting that judgments about them could be more easily manipulated than judgments near the floor or ceiling. The first screen contained one of the pictures from Table 9.2, along with the following (Task 1) question:
Mark (pictured above11) believes that [statement (12), (13), or (14) is true]. Please indicate whether you agree or disagree with Mark's belief.
Agree
Disagree

If participants selected "Agree" in response to one Task 1 question, they would be directed to answer the Task 1 question for one of the other target claims. However, if participants selected "Disagree," they were directed to answer the following (Task 2) metaethical question about their disagreement before moving on to the next Task 1 question:
You disagree with Mark about whether [the target claim is true]. Is it possible
for both of you to be correct about this issue or must at least one of you be
mistaken?
It is possible for both of you to be correct.
At least one of you must be mistaken.

Each screen that presented the metaethical question included the same
picture (if any) that participants saw at the top of their Task 1 question. Each
participant was presented with claims (12), (13), and (14) in counterbalanced
order. The same picture (if any) of Mark appeared above each of these questions.
Thus, no participant saw more than one version of Mark's face.
It was hypothesized that the five facial conditions would engage online
processes of social cognition to a greater degree than the control condition
and that this would result in higher attributions of metaethical objectivity.
On the basis of Oosterhof and Todorov's (2008) finding that untrustworthy
and dominant faces were associated with potential threat, it was also
hypothesized that untrustworthy and dominant faces would elicit lower
objectivity attributions than their dimensional pairs, since participants might
be more tentative or anxious about disagreeing with potentially threatening
interlocutors.

Results

The proportion of objectivity attributions was significantly higher in the Neutral (0.65), Dominant (0.61), Submissive (0.60), Trustworthy (0.67), and Untrustworthy (0.66) face conditions than it was in the No Face (0.46) condition. The proportions of objectivity attributions in the five face conditions did not differ significantly from each other.

Discussion

Thus, it appears that having a face (any face, perhaps) makes the situation of moral judgment more concrete and engages moral cognitive processes in a
way that increases attributions of objectivity. Because there were no significant
differences between the Trustworthy and Untrustworthy conditions and the
Dominant and Submissive face conditions, the second hypothesis concerning
the effect of specific kinds of faces on folk metaethical intuitions failed to
receive confirmation from Study 4.12

Study 5
Method

Participants
Using a between-subjects design, 160 participants (average age = 34, 38% female, 80% Anglo-American) were recruited through Amazon's Mechanical Turk and were directed to complete online questionnaires hosted at vovici.com.13

Materials and procedure


A final study was constructed to see if the moral valence of the actions that
disagreeing parties were described as performing would have an effect on
folk metaethical judgments. Building upon work on the well-known Knobe
effect in experimental philosophy,14 in which individuals' folk psychological attributions have been shown to depend in surprising ways upon the goodness or badness of agents' actions, the following four descriptions were
constructed:
1. The CEO of a company that helps and preserves the environment believes
that it is morally wrong to harm the environment.
2. The CEO of a company that helps and preserves the environment believes
that it is not morally wrong to harm the environment.
3. The CEO of a company that harms and pollutes the environment believes
that it is morally wrong to harm the environment.
4. The CEO of a company that harms and pollutes the environment believes
that it is not morally wrong to harm the environment.

In (1) and (2), the CEO is depicted doing something morally good, namely, helping and preserving the environment, whereas the CEO's actions in (3) and (4) are morally bad. In (1) and (3), the CEO is described as having a morally good belief about the environment, namely, that it should not be harmed; in (2) and (4), the CEO has the corresponding morally bad belief. The crossing of good and bad actions with good and bad beliefs results in the actions and beliefs of the CEO being congruent in (1) and (4) and incongruent in (2) and (3).
Participants were first asked to indicate in a forced-choice format whether they agreed or disagreed with the CEO's belief. They were then asked, "If someone disagreed with the CEO about whether it is morally wrong to harm the environment, would it be possible for both of them to be correct or must at least one of them be mistaken?" Participants were then directed to choose between "It is possible for both of them to be correct" and "At least one of them must be mistaken."

Results

The results of Study 5 are summarized in Figure 9.2. Participants were more inclined to attribute objectivity to the ethical beliefs in question when the protagonist performed morally bad actions than when he performed morally good ones. This difference was significant, χ²(1, N = 160) = 5.013, p < 0.05, Cramér's V = 0.18. Neither belief valence nor the congruence between action and belief significantly affected folk metaethical judgments. However, it is noteworthy that the highest proportion of objectivity attributions was obtained in the "double bad" (i.e., Bad Action/Bad Belief) condition, since it is badness (rather than goodness or neutrality) that has been shown to be the driving force behind the various forms of the Knobe effect.

[Figure 9.2: bar chart of the proportion of objectivity attributions (y-axis, 0.0–1.0) by action valence (good, bad), with separate bars for good and bad judgment valence.]

Figure 9.2 Mean objectivity attributions in the Good Action/Good Belief (0.55), Good Action/Bad Belief (0.43), Bad Action/Good Belief (0.57), and Bad Action/Bad Belief (0.75) conditions of Study 5.

Discussion

As with other findings from the Knobe effect literature, the moral valence of a protagonist's action significantly affected participants' responses to probe questions. However, unlike other results in this literature, the responses in question were not folk psychological ascriptions. They were second-order attributions of objectivity to ethical beliefs held by the protagonist. These results provide further evidence that individuals' assessments of metaethical disagreements are significantly affected by a variety of factors in the situation of disagreement.
General discussion

The foregoing studies show (i) that making disagreement salient to participants before asking them to make metaethical judgments can decrease objectivist responses, (ii) that increasing the concreteness of the situation of disagreement participants are directed to consider can increase objectivist responses, and (iii) that the moral valence of the actions performed by agents whose ethical beliefs participants are asked to consider can affect attributions of objectivity to those beliefs. Because philosophical discussion, whether in the classroom or at professional conferences, often takes place in a somewhat rarefied atmosphere of abstractions, philosophers should be aware that intuitive agreement or disagreement with their metaethical claims can be affected by the very abstractness of those situations and that the amount of agreement or disagreement they encounter might be different in other situations. In spite of the fact that an increasing number of philosophers are familiar with the Knobe effect and its seemingly unlimited range of applicability, many philosophers continue to give little thought either to the moral valence of the actions depicted in their favored thought experiments or to the consequences this might have.
An important question raised by the studies reported above concerns the coherence of folk metaethical commitments. Most philosophers assume that the correct semantics for ordinary ethical judgments must show them to be uniformly objective or subjective.15 Yet, Studies 2 through 5, in addition to work by Goodwin and Darley (2008), Beebe and Sackris (under review), and Sarkissian et al. (2011), reveal that there are several kinds of variation in folk metaethical judgments. The lack of uniformity in the objectivity attributed to ethical claims might make us wonder how well ordinary individuals grasp the ideas of objectivism and subjectivism (and perhaps the related ideas of relativism and universalism). It might also lead us to question their reasoning abilities. Goodwin and Darley (2008, 1358, 1359), for example, suggest that individuals were "not particularly consistent in their meta-ethical positions about various ethical beliefs" and that "requirements of judgmental consistency across ethical scenarios" are not considered. However, this attribution of inconsistency seems both uncharitable and unwarranted.
Why should we believe that the ordinary use of ethical terms requires a semantics that assumes uniform objectivity or subjectivity? Because armchair philosophers who have gathered no empirical evidence about the actual practice of using ethical terms say so? It seems that the practice should
dictate the semantics, and not the other way around. If we find variability
in the practice, we should look for semantic theories that can accommodate
such variation. Furthermore, a variety of semantic theories can do so. For
example, in Beebe (2010) I offer a relevant alternatives account of ethical
judgments that borrows heavily from the semantic machinery of the
epistemic contextualists (e.g., Lewis 1996; DeRose 2011). I argue that treating
ethical terms as context-sensitive yields a better interpretation of ordinary
normative and metaethical judgments than interpretations that treat them
as context-invariant. Without delving into the details of the view, the upshot
for present purposes is that attributions of inconsistency or incoherence
to folk metaethical practice are premature when there are more charitable
interpretive options available.
Another important issue raised by the above studies concerns my hypothesis
that it is concreteness that is driving the effects observed in Studies 3 and 4.
An alternative possibility is that when undergraduates at the University at
Buffalo are told that Madeline B., a senior biology major at UB, believes that
some action is morally permissible, it may be Madeline's cultural proximity or
group affiliation that leads participants to make more objectivist judgments.
Signaling that someone from the same university believes that p may suggest
to participants that they should believe it as well, if they are to remain members
in good standing in the relevant group. And it is of course possible that some
other kind of social influence might be operative as well. Further research is
required to determine whether it is concreteness or other social factors that
push individuals in the direction of greater objectivism.16
The studies reported above show that not only are there differences in folk
metaethical judgments that track the content of ethical claims (Goodwin and
Darley 2008; Beebe and Sackris, under review; Study 1), how contested they
are (Goodwin and Darley 2012; Study 2), and the cultural distance between
disagreeing parties (Sarkissian et al. 2011); there are also differences that track
the goodness or badness of disagreeing parties (Study 5) and possibly their
concreteness as well (Studies 3 and 4). It is hoped that the present research
sheds useful light on the multidimensional variation that characterizes the folk
metaethical landscape.
Notes

* James R. Beebe, Department of Philosophy, University at Buffalo. The author would like to thank Mike Giblin and Anthony Vu for assistance in data collection and Hagop Sarkissian and Jen Cole Wright for helpful comments on a previous version of this manuscript. Correspondence should be addressed to James R. Beebe, 135 Park Hall, Department of Philosophy, University at Buffalo, Buffalo, NY 14260. Email: jbeebe2@buffalo.edu.
1 However, cf. Beebe and Sackris (under review, sec. 1) for critical discussion of
many of the measures of metaethical commitments that are employed in the
published literature.
2 Forty-one percent of participants in one study and 38 percent of participants
in another classified first trimester abortion as a personal rather than a moral
issue; 89 percent and 73 percent of participants did the same for anonymously
donating money to charity.
3 All statistical tests reported in this chapter are chi-square tests of independence. On the conventional interpretation of Cramér's V, an effect size of 0.1 to 0.29 counts as small, one 0.3 to 0.49 counts as medium, and one 0.5 or larger counts as large.
4 The only gender differences in the data were that females held slightly less strong opinions than males on factual matters (χ²(2, N = 303) = 6.124, p < 0.05, Cramér's V = 0.14) and reported greater societal disagreement than males concerning matters of taste (χ²(2, N = 225) = 11.296, p < 0.05, Cramér's V = 0.22).
5 Cf. Sidgwick (1907/1981, 342), Mackie (1977, 36–8), Wong (1984), and Tersman (2006). Because salient disagreement impacted participants' second-order (metaethical) judgments in Study 2, a follow-up study was performed to see if salient disagreement might have a similar impact upon participants' first-order judgments, that is, upon the degree of agreement they expressed in response to various ethical claims in Task 1. Participants were directed to complete Task 3 immediately before Task 1, and it was hypothesized that salient disagreement would result in less confident Task 1 judgments. However, this manipulation failed to have a significant impact on participants' Task 1 judgments.
6 Nahmias et al. (2007) found that this was especially true if wrongdoing is involved.
7 Thanks to Mark Alfano for bringing this work to my attention.
8 Participants were required to reside in the United States and to have at least a
95 percent approval rating on more than 500 mturk tasks. Each participant was
paid $.30.
9 Untrustworthy faces were associated with potentially harmful intentions, while dominant faces were associated with the capacity to cause harm.
10 Oosterhof and Todorov constructed the faces using FaceGen Modeller 3.2
(Singular Inversions 2007).
11 This phrase of course did not appear in the No Face condition.
12 It may be that how people respond to these kinds of faces depends upon
whether they themselves are dominant, submissive, etc. The method of the
present study did not allow this factor to be explored. Thanks to Hagop
Sarkissian for raising this point.
13 Participants were required to reside in the United States and to have at least a
95 percent approval rating on more than 500 mturk tasks. Each participant was
paid $.30.
14 See Alfano et al. (2012) for an overview of this literature.
15 See Sinnott-Armstrong (2009) for further discussion of this point.
16 Thanks to Hagop Sarkissian and Jen Cole Wright for pressing these points.

References

Alfano, M. (2010). Social cues in the public good game. Presentation at KEEL 2010
Conference: How and Why Economists and Philosophers Do Experiments, Kyoto
Sangyo University, Kyoto, Japan.
Alfano, M., Beebe, J. R., and Robinson, B. (2012). The centrality of belief and reflection in Knobe effect cases: A unified account of the data. The Monist, 95, 264–89.
Bateson, M., Nettle, D., and Roberts, G. (2006). Cues of being watched enhance cooperation in a real-world setting. Biology Letters, 2, 412–14.
Beebe, J. R. (2010). Moral relativism in context. Noûs, 44, 691–724.
Beebe, J. R., and Sackris, D. (Under review). Moral objectivism across the lifespan.
DeRose, K. (2011). The Case for Contextualism: Knowledge, Skepticism, and Context,
vol. 1. Oxford: Oxford University Press.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106, 1339–66.
(2012). Why are some moral beliefs perceived to be more objective than others? Journal of Experimental Social Psychology, 48, 250–6.
Lewis, D. K. (1996). Elusive knowledge. Australasian Journal of Philosophy, 74, 549–67.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. Harmondsworth: Penguin.
Nahmias, E., Coates, D., and Kvaran, T. (2007). Free will, moral responsibility, and mechanism: Experiments on folk intuitions. Midwest Studies in Philosophy, 31, 214–42.
Nichols, S., and Knobe, J. (2007). Moral responsibility and determinism: The cognitive science of folk intuitions. Noûs, 41, 663–85.
Oosterhof, N., and Todorov, A. (2008). The functional basis of face evaluations. Proceedings of the National Academy of Sciences of the USA, 105, 11087–92.
Rezlescu, C., Duchaine, B., Olivola, C. Y., and Chater, N. (2012). Unfakeable facial configurations affect strategic choices in trust games with or without information about past behavior. PLoS ONE, 7, e34293.
Sarkissian, H., Parks, J., Tien, D., Wright, J. C., and Knobe, J. (2011). Folk moral relativism. Mind & Language, 26, 482–505.
Sidgwick, H. (1907/1981). The Methods of Ethics. Indianapolis: Hackett.
Sinnott-Armstrong, W. (2009). Mixed-up meta-ethics. Philosophical Issues, 19, 235–56.
Small, D. A., and Loewenstein, G. (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26, 5–16.
(2005). The devil you know: The effects of identifiability on punitiveness. Journal of Behavioral Decision Making, 18, 311–18.
Tersman, F. (2006). Moral Disagreement. Cambridge: Cambridge University Press.
Wong, D. B. (1984). Moral Relativity. Berkeley: University of California Press.
Wright, J. C., Grandjean, P. T., and McWhite, C. B. (2013). The meta-ethical grounding of our moral beliefs: Evidence for meta-ethical pluralism. Philosophical Psychology, 26(3), 336–61.
10

Exploring Metaethical Commitments: Moral Objectivity and Moral Progress

Kevin Uttich, George Tsai, and Tania Lombrozo*

People have beliefs not only about specific moral issues, such as the permissibility
of slavery, but also about the nature of moral beliefs. These beliefs, or meta-
ethical commitments, have been the subject of recent work in psychology and
experimental philosophy. One issue of study has been whether people view
moral beliefs in more objectivist or relativist terms (i.e., as more like factual
beliefs or more like personal preferences).
In this chapter, we briefly review previous research on folk moral
objectivism. We then present the results of an experiment that compares
two different approaches to measuring moral objectivism (those of Goodwin
and Darley 2008, and Sarkissian et al. 2011) and consider the relationship
between objectivism and two additional metaethical beliefs: belief in moral
progress and belief in a just world. By examining the relationships between
different metaethical commitments, we can better understand the extent to
which such commitments are (or are not) systematic and coherent, shedding
light on the psychological complexity of an important area of moral belief and
experience.
To preview our results, we find that different metaethical beliefs are reliably
but weakly associated, with different measures of moral objectivism generating
distinct patterns of association with belief in moral progress and belief in a just
world. We highlight some of the challenges in reliably measuring metaethical
commitments and suggest that the distinctions that have been useful in
differentiating philosophical positions may be a poor guide to folk moral
judgment.
Recent work on metaethical beliefs

Moral objectivity
Moral objectivity is a complex idea with multiple variants and diverse
proponents (for useful discussions see Goodwin and Darley 2010; Knobe et al. 2012; Sinnott-Armstrong 2009). For our purposes, to accept moral objectivism is to believe that some moral claims are true in a way that does not depend on people's decisions, feelings, beliefs, or practices. Thus, to reject the objectivity
of moral claims one can either deny that moral claims have a truth value or
allow that moral claims can be true, but in a way that does depend on decisions,
feelings, beliefs, or practices (e.g., Harman 1975; Sinnott-Armstrong 2009).
Non-cognitivism is typically an instance of the former position, and cultural
or moral relativism of the latter.
Recently, there have been a few attempts to examine empirically what people believe about moral objectivity (Goodwin and Darley 2008, 2010; Forsyth 1980; Nichols 2004; Sarkissian et al. 2011; see Knobe et al. 2012, for review). Goodwin and Darley (2008) asked participants to rate their agreement with statements that were factual, ethical, social-conventional, or about personal taste, and then asked them whether these statements were true, false, or an opinion or attitude. For example, one of the ethical statements was "Robbing a bank in order to pay for an expensive holiday is a morally bad action," while one of the social-conventional statements was that "Wearing pajamas and bath robe to a seminar meeting is wrong behavior." Responding that these were either true or false was considered a more objectivist response than selecting "an opinion or attitude." Participants were later asked whether the fact that someone disagreed with them about a given statement meant that the other person was wrong, that neither person was wrong, that they themselves were wrong, or something else entirely. On this measure, responding that one of the two people must be wrong was taken as a more objectivist response.
Using a composite of these two measures, Goodwin and Darley found
evidence that people treat statements of ethical beliefs as more objective than
either social conventions or taste. They also found a great deal of variation
in objectivism across both ethical statements and individuals. Strongly held
ethical beliefs were seen as more objective than beliefs that people did not hold as strongly, and those who said they grounded their ethical beliefs in religion, moral self-identity, or the pragmatic consequences of failing to observe norms were more likely to be objectivist about ethical statements. Subsequent work has suggested that variation in objectivist beliefs is not an artifact of variation concerning which issues participants themselves take to be moral, nor of misunderstanding moral objectivism (Wright et al. 2012).
More recently, Sarkissian et al. (2011) have argued that relativist beliefs are more prevalent than suggested by Goodwin and colleagues, but that these
beliefs are only observed when participants are comparing judgments made
by agents who differ from each other in important ways. In their studies,
participants were presented with two agents who disagreed about a moral
claim and were asked whether one of them must be wrong. For example,
participants were asked to imagine a race of extraterrestrial beings called
Pentars who have a very different sort of psychology from human beings.
Participants were then presented with a hypothetical case in which a classmate
and a Pentar had differing views on a moral case, and were asked to rate their
agreement with the statement that at least one of them must be wrong.
Participants provided more objectivist answers (one of them must be wrong)
when comparing judgments made by agents from the same culture, but more
relativist answers (denying that at least one of them must be wrong) when
comparing judgments made by agents from different planets (i.e., a human
and a Pentar). Sarkissian et al. argue that engaging with radically different
perspectives leads people to moral relativism.
What are the implications of this research? On the one hand, the findings
from Goodwin and Darley (2008) and Sarkissian et al. (2011) suggest that
metaethical beliefs are not particularly developed or unquestionably coherent.
They certainly challenge the idea that those without philosophical expertise can
be neatly classified as moral objectivists versus moral relativists. Instead,
judgments vary considerably depending on the moral claim in question and
the way in which objectivism is assessed; in particular, whether a case of
disagreement involves similar or dissimilar agents.
On the other hand, a growing body of research suggests that moral
objectivism is systematically related to aspects of cognition and behavior that
go beyond metaethical beliefs. For example, Goodwin and Darley (2012)
found that moral claims were judged more objective when there was greater
perceived consensus. They also found that participants judged those who held
opposing beliefs as less moral and harder to imagine interacting with when
disagreement concerned a claim that was considered objective (see also Wright
et al. in press). Finally, Young and Durwin (2013) found that participants
primed to think in more objective terms were more likely to give to charity.
These findings, among others, suggest that despite intra- and interpersonal
variation in judgments, moral objectivism relates to factual beliefs (e.g., about
consensus), attitudes (e.g., tolerance of others), and decisions (e.g., about
whether to give to charity). We aim here to better understand the ways in which
metaethical beliefs are and are not systematic and coherent by considering
the relationship between three different metaethical beliefs: belief in moral
objectivism, belief in moral progress, and belief in a just world.

Moral progress
A belief in moral progress is a commitment to the idea that history tends
toward moral improvement over time. This notion, which postulates a certain
directionality in human history, can be contrasted with the notion of mere
moral change. Although moral progress has been defended by philosophers
in the history of philosophy, notably Marx and Hegel, the notion also finds
expression in people's ordinary thinking. For example, Martin Luther King famously proclaimed, "the arc of the moral universe is long but it bends towards justice" (King 1986).
It is worth noting that a belief in a historical tendency toward moral
progress can be consistently held while maintaining that moral progress can
be imperceptible, occurring over long stretches of time. Sometimes moral
improvement can be dramatic and rapid, while at other times not. Thus, belief in
a tendency toward moral progress does not require committing to a particular
rate of moral progress. Additionally, to hold that there is a basic tendency
toward moral progress in human history is also compatible with allowing that
these tendencies do not inevitably or necessarily prevail. Believing in some
tendency need not require belief in inevitability. For example, one could
believe that 6-year-old children tend to grow physically larger (e.g., that a child
at 14 years of age will be larger than that very same child at age 6) without
claiming that they inevitably or necessarily get physically larger (serious illness
or death could prevent their continuing to grow in size). Likewise, in the case
of moral progress, one could still allow that there could be exogenous forces, such as environmental and biological catastrophes or foreign invasions, that prevent the historical development toward moral progress.
One reason to focus on moral progress is that the notion is commonly invoked,
reflecting ideas in the broader culture. There is therefore reason to suspect that
people have commitments concerning its truth, and it is natural to ask, with
philosopher Joshua Cohen (1997), "Do [ideas of moral progress] withstand reflective examination, or are they simply collages of empirical rumination and reified hope, held together by rhetorical flourish?" (p. 93). In particular, we
might ask whether moral progress typically involves a commitment to moral
objectivism, as objective norms might be thought to causally contribute to
progress or simply provide a metric against which progress can be assessed.
It is also important to note that the notion of moral progress does not
merely contain metaethical content but also a kind of descriptive content: to
believe in moral progress involves believing something about the nature of
human history and the character of the social world. This suggests that our
metaethical beliefs, including beliefs about moral objectivity, do not stand
alone, compartmentalized from other classes of beliefs. Not only might they
be rationally and causally related to each other, in some cases these beliefs are
inseparable, expressing a union between the ethical and the descriptive. Thus,
a second reason for our interest in considering moral progress in tandem with
moral objectivity is that it may reveal important connections between different
types of metaethical beliefs as well as connections between metaethical beliefs
and other beliefs (such as descriptive beliefs about consensus, or explanatory
beliefs about social phenomena).

Belief in a just world


While previous research has not (to our knowledge) investigated beliefs about
moral progress directly, there is a large body of research on a related but distinct
metaethical belief, belief in a just world (e.g., Lerner 1980; Furnham 2003).
Belief in a just world refers to the idea that good things happen to good people
while bad things happen to bad people. The belief that people experience
consequences that correspond to the moral nature of their actions or character
is potentially consistent with belief in moral progress, although the relationship is complex. For example, it is not obvious that the world is morally improved
when a criminal experiences a string of bad luck, unless retribution or the
deterrence of future criminal activity is itself the moral payoff. Nonetheless,
we focus on belief in a just world as a third metaethical belief for two reasons.
First, doing so allows us to examine empirically whether belief in a just world is
in fact related to belief in moral progress, and thus relate our novel measures to
existing research. Second, investigating a third metaethical commitment can
help us differentiate two possibilities: that relationships between metaethical
commitments are relatively selective, such that (for example) moral objectivity
and moral progress might be related but have no association with belief in
a just world, or alternatively, that the relationship reflects a single and more
general tendency, such that individuals with strong metaethical commitments
of one kind will typically have strong metaethical commitments of all kinds.

Method

We present a subset of results from a larger experiment investigating people's beliefs about moral objectivity using modified versions of both Goodwin and Darley's (2008) and Sarkissian et al.'s (2011) measures, as well as people's beliefs about moral progress and belief in a just world. We also solicited explanations for social changes to investigate the relationship between metaethical beliefs and ethical explanations. In the present chapter, we focus on the relationships between different metaethical beliefs. In ongoing work, we consider the relationship between these beliefs and explanations (Uttich et al. in prep).

Participants
Three hundred and eighty-four participants (223 female; mean age = 33) were recruited from Amazon Mechanical Turk, an online crowd-sourcing platform.
Participants received a small payment for their participation. All participants
identified themselves as being from the United States and as fluent speakers of
English.
Materials and procedure


We report a subset of a larger set of experiments concerning the relationship
between metaethical beliefs and the use of explanations that cite ethical norms.
The full experiment consisted of four main parts: (1) explanation solicitation,
(2) moral objectivity measures, (3) moral progress measures and general
belief in a just world measure (GBJW), and (4) baseline check on beliefs
about the morality of social changes. The ordering of the parts was partially
counterbalanced, as detailed below.

Explanation solicitation
In the full experiment, participants were presented with a description of a social change and asked to explain it in a few sentences (e.g., "Why was slavery abolished?"). The changes included the abolition of slavery, women's suffrage, and the potential legalization of same-sex marriage. Given our present focus on the relationship between different metaethical beliefs, we do not report findings concerning explanation here (see Uttich et al. in prep).

Moral objectivity: Disagreement measure


Participants' views concerning moral objectivity were examined in two
different ways. The first involved an adaptation of the disagreement method
used by both Goodwin and Darley (2008) and Sarkissian et al. (2011).
Participants read vignettes where either a person similar to themselves (i.e.,
from their same time and culture) or a person from another time period (e.g.,
the eighteenth century) disagreed with an imagined friend of the participant
about whether a social fact was morally problematic. The relevant social fact
was always matched with that for which participants had been asked to provide
an explanation. An example from the slavery condition involving the current
time and place is presented below:

Imagine a person named Allison, a fairly ordinary student from your town
who enjoys watching sports and hanging out with friends. Consider Allison's
views concerning the moral status of the following social institution:
Slavery.
Allison thinks that slavery is not morally wrong.
This scenario was matched with one involving a judgment from a different
time or place:

Imagine the social world of the United States in the eighteenth century.
Most people in this time and place view slavery as morally acceptable. The
existence of slavery is seen by many as part of the natural social order, slavery
is permitted by the law and the slave trade is at its peak, and someone who
owns many slaves is esteemed as admirable.
An individual, Jessica, from this society (eighteenth-century United
States), regards slavery as not morally wrong.1

In both cases, participants were then presented with a friend who disagreed:

Imagine that one of your friends thinks that slavery is morally wrong. Given
that these individuals (Allison [Jessica] and your friend) have different
judgments about this case, we would like to know whether you think at least
one of them must be wrong, or whether you think both of them could actually
be correct. In other words, to what extent would you agree or disagree with
the following statement concerning such a case?
Since your friend and Allison [Jessica] have different judgments about
this case, at least one of them must be wrong.

Participants rated their agreement with this statement on a 1–7 scale, with 1 corresponding to "definitely disagree," 7 to "definitely agree," and 4 to "neither agree nor disagree." Each participant saw one current and one historical vignette, with order counterbalanced across participants.

Moral objectivity: Truth-value measure


Participants' beliefs about moral objectivity were also examined using a method adapted from Goodwin and Darley (2008). Participants were asked whether statements about the moral permissibility of the social facts are true, false, or an opinion. The question prompt was adjusted from the original multiple-choice format used by Goodwin and Darley to a 7-point Likert scale to make it more comparable to the disagreement measure. Thus, participants rated their agreement with statements concerning the moral permissibility of a social practice (e.g., "slavery is not morally wrong") on a 1–7 scale, with 1 being "is best described as true or false," 7 "is best described as an opinion," and 4 "is equally well described as either true/false or as an opinion." Participants answered questions concerning moral beliefs for all three historical facts (demise of slavery, women's suffrage, legalization of same-sex marriage), with the historical fact for which they rated explanations presented first.

Moral progress and belief in a just world measures


Participants rated their agreement with eighteen statements intended
to measure their metaethical commitments concerning moral progress
and belief in a just world. Twelve items were constructed to measure
participants' beliefs in moral progress. The statements examined two
dimensions of this belief: whether they concerned something concrete (i.e.,
moral progress with respect to a particular social practice or area of social
life) or abstract (i.e., moral progress in general), and whether progress was
described as a tendency or as inevitable. There were three questions for
each of the four possible combinations (e.g., three concrete questions about
tendency, three abstract questions about tendency, and so on). Participants
also evaluated six statements concerning belief in a just world, taken from
the GBJW (Dalbert et al. 1987). All eighteen statements are included in
Table 10.1.
Participants rated the statements on a 1–7 scale, with 1 being "definitely disagree," 7 "definitely agree," and 4 "neither agree nor disagree." The order of all moral progress and GBJW statements was randomized.

Baseline check
Participants were also asked for their personal views on whether the three
social changes were good or bad. For example, for the slavery fact participants
were presented with the following statement:
The demise of slavery was a good thing.
Participants rated their agreement with this statement on a 1–7 scale, with 1 being "definitely disagree," 7 "definitely agree," and 4 "neither agree nor disagree." All three social facts were rated. The social fact related to the explanation for which each participant had been prompted was always presented first.
Table 10.1 Means and factor loadings for statements of moral progress and belief in a just world. Items with an asterisk were reverse coded. Columns give, in order: Mean (SD); Factor 1 (Abstract progress); Factor 2 (GBJW); Factor 3 (Concrete inevitability); Factor 4 (Concrete progress).

Abstract Tendency
Human history tends to move in the direction of a more perfect moral world. | 3.84 (1.60) | 0.757 | 0.212 | 0.001 | 0.189
As time goes on, humanity does NOT generally become more morally advanced.* | 4.21 (1.68) | 0.707 | 0.011 | 0.061 | 0.077
Over time we will move toward a more morally just world. | 4.22 (1.60) | 0.782 | 0.184 | 0.093 | 0.272

Abstract Inevitability
The moral advancement of humanity is NOT inevitable.* | 3.66 (1.76) | 0.686 | 0.238 | 0.227 | 0.068
It is inevitable that on average our morals will be better than those of our distant ancestors. | 3.94 (1.77) | 0.743 | 0.173 | 0.274 | 0.091
An increase in moral justice in the world is inevitable. | 4.05 (1.69) | 0.729 | 0.340 | 0.227 | 0.072

Concrete Tendency
Over time there is moral progress concerning slavery. | 5.54 (1.45) | 0.159 | 0.146 | 0.179 | 0.813
Over time there is moral progress concerning voting rights. | 5.41 (1.35) | 0.250 | 0.066 | 0.249 | 0.733
Over time there is moral progress concerning marriage rights. | 4.79 (1.60) | 0.586 | 0.001 | 0.290 | 0.354

Concrete Inevitability
The demise of slavery was inevitable. | 5.14 (1.78) | 0.262 | 0.093 | 0.652 | 0.162
The extension of the right to vote to women was inevitable. | 5.43 (1.66) | 0.097 | 0.186 | 0.776 | 0.258
The legalization of same-sex marriage is inevitable. | 5.14 (1.82) | 0.189 | 0.008 | 0.774 | 0.059

General Belief in a Just World
I think basically the world is a just place. | 3.77 (1.62) | 0.170 | 0.563 | 0.116 | 0.109
I believe that, by and large, people get what they deserve. | 4.16 (1.62) | 0.093 | 0.705 | 0.017 | 0.037
I am confident that justice always prevails over injustice. | 3.64 (1.75) | 0.293 | 0.723 | 0.121 | 0.067
I am convinced that in the long run people will be compensated for injustices. | 3.95 (1.75) | 0.036 | 0.775 | 0.108 | 0.128
I firmly believe that injustices in all areas of life (e.g., professional, family, politics) are the exception rather than the rule. | 3.82 (1.52) | 0.082 | 0.620 | 0.055 | 0.122
I think people try to be fair when making important decisions. | 4.69 (1.46) | 0.159 | 0.607 | 0.006 | 0.271

Counterbalancing
Participants either provided an explanation first (part 1) and then completed
the two moral objectivity measures (part 2) and the moral progress measures
and GBJW measures (part 3), with the order of parts 2 and 3 counterbalanced,
or they first completed the moral objectivity measures (part 2) and the moral
progress and GBJW measures (part 3), with order counterbalanced, followed
by explanations (part 1). Participants always completed the baseline check on
social facts (part 4) last.
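One way to implement this partial counterbalancing in code (a sketch under our own labeling of the parts, not the authors' materials):

    import itertools

    # Parts: 1 = explanation, 2 = objectivity measures, 3 = progress/GBJW.
    # The explanation comes first or last; parts 2 and 3 are counterbalanced;
    # the baseline check (part 4) is always administered last.
    orders = [(1, 2, 3), (1, 3, 2), (2, 3, 1), (3, 2, 1)]
    cycle = itertools.cycle(orders)
    assignment = [next(cycle) + (4,) for _ in range(384)]  # one tuple per participant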

Results

We begin by reporting the data for each set of questions individually, and then
consider the relationship between different metaethical commitments.

Individual measures
Baseline check measures
The baseline check confirmed our assumptions about participants' own attitudes toward the moral claims in question. The average ratings were 6.70 of 7 (SD = 0.95) for the demise of slavery, 6.63 (SD = 1.00) for women's suffrage, and 5.15 (SD = 2.20) for same-sex marriage.

Moral objectivism: Disagreement


The first measure of objectivism concerned participants' responses to disagreement between a friend and an individual in a current or historical period. Overall, participants provided higher ratings for the current scenario (M = 4.84, SD = 1.95) than for the historical scenario (M = 4.62, SD = 1.97), indicating greater objectivism in the former case and consistent with Sarkissian
et al.'s findings. To analyze the data statistically, we performed a repeated-measures ANOVA with time period (current vs. historical) as a within-subjects factor and social fact (slavery, women's suffrage, same-sex marriage) as a between-subjects factor. This revealed two significant effects: a main effect of time period, F(1,381) = 13.17, p < 0.01, with more objectivist responses for the current vignette than for the historical vignette, and a main effect of social fact, F(2,381) = 36.35, p < 0.01, with responses that were more objectivist for slavery (M = 4.99, SD = 1.90) and women's suffrage (M = 4.90, SD = 1.69) than for same-sex marriage (M = 4.30, SD = 1.95).
Because the correlation between participants' current (C) and historical (H) ratings was very high (r = 0.817, p < 0.01), we consider the average rating (C+H) for each participant (M = 4.72, SD = 1.87) in most subsequent analyses.
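An analysis of this form can be reproduced with standard tools; a minimal sketch using the pingouin library, where the data frame and column names are our own illustrative assumptions rather than the authors' materials:

    import pingouin as pg  # pip install pingouin

    # `long` is assumed to be a long-format data frame with one row per
    # disagreement rating: a participant id, the 1-7 rating, the time period
    # of the vignette ("current" vs. "historical"; within subjects), and the
    # social fact ("slavery", "suffrage", "marriage"; between subjects).
    aov = pg.mixed_anova(data=long, dv="rating", within="period",
                         subject="participant", between="fact")
    print(aov[["Source", "F", "p-unc"]])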

Moral objectivism: Truth value


Our second measure of moral objectivism was the "true, false, or opinion?" (TFO) measure adapted from Goodwin and Darley (2008). The average rating for the TFO measure was 4.31 (SD = 2.15), with lower scores indicating greater moral objectivism. This measure varied as a function of social fact, F(2,382) = 53.65, p < 0.01, with the most objectivist responses for slavery (M = 3.71, SD = 2.58), followed by women's suffrage (M = 4.31, SD = 2.51) and same-sex marriage (M = 4.91, SD = 2.30).2 Nonetheless, participants' ratings across the three social facts were highly related (α = 0.82). In subsequent analyses, we therefore focus on a given participant's average TFO rating across the three social facts. To facilitate the interpretation of correlations across measures, we report a reversed average (8 minus each participant's score) such that higher numbers correspond to greater objectivism, as in the C+H measure.
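In code, this rescoring is a one-liner; a sketch with hypothetical column names:

    import pandas as pd

    # One 1-7 TFO rating per social fact (illustrative values).
    df = pd.DataFrame({"tfo_slavery":  [2, 5, 7],
                       "tfo_suffrage": [3, 4, 6],
                       "tfo_marriage": [4, 6, 7]})

    # Average across the three facts, then reverse (8 minus the score) so that
    # higher values indicate greater objectivism, as in the C+H measure.
    df["tfo_reversed"] = 8 - df.mean(axis=1)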

Moral progress and belief in a just world measures


We analyzed ratings across the 18 statements with a factor analysis employing principal components analysis as an extraction method and a varimax rotation. This analysis resulted in four factors with eigenvalues over one, accounting for a total of 59.4 percent of the variance. Table 10.1 reports the average rating for each statement as well as the factor loadings for each statement in the rotated component matrix, and suggests that the four factors can be characterized as follows: abstract progress (34.2% of variance), GBJW (11.5%), concrete inevitability (8.1%), and concrete tendency (5.6%). It's worth noting that beliefs about moral progress were indeed differentiated from GBJW, and that the dimension of abstract versus concrete appeared to be psychologically meaningful, while the distinction between tendency and inevitability emerged only for the concrete items, where participants may have been able to look back over time at the specific issues we considered to identify both general trends and temporary setbacks.3 In subsequent analyses we examine correlations between these four factors and our two measures of moral objectivism.
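An analysis of this kind can be approximated with the factor_analyzer package; a sketch, assuming `ratings` is a participants-by-18 data frame of the agreement ratings (with reverse-coded items already recoded):

    import pandas as pd
    from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer

    fa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
    fa.fit(ratings)
    loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                            columns=["Factor 1", "Factor 2", "Factor 3", "Factor 4"])
    print(fa.get_factor_variance())  # variance explained by each rotated factor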

Relationships between metaethical commitment measures


Table 10.2 reports the correlations between our two measures of moral objectivism (C+H and TFO) as well as the four factors extracted from the factor analysis on moral progress items (abstract progress, GBJW, concrete inevitability, concrete tendency). There are several notable results.

Table 10.2 Correlations between metaethical measures

Measure | C+H | Current | Historical | C-H | Avg TFO
C+H: Current/Historical disagreement | 1 | 0.953** | 0.954** | 0.020 | 0.271**
Current | 0.953** | 1 | 0.817** | 0.285** | 0.258**
Historical | 0.954** | 0.817** | 1 | -0.320** | 0.258**
C-H: Difference score | 0.020 | 0.285** | -0.320** | 1 | 0.005
Avg TFO: True, False, or Opinion? | 0.271** | 0.258** | 0.258** | 0.005 | 1
Factor 1: Abstract progress | 0.063 | 0.059 | 0.061 | 0.003 | 0.127*
Factor 2: GBJW | 0.042 | 0.018 | 0.061 | 0.072 | -0.116*
Factor 3: Concrete inevitability | 0.040 | 0.009 | 0.066 | 0.094 | 0.063
Factor 4: Concrete tendency | 0.239** | 0.258** | 0.199** | 0.093 | 0.056

*p < 0.05, **p < 0.01.
First, while the correlation between the C+H ratings and the TFO ratings was significant (r = 0.271, p < 0.01), it was low enough to suggest that each measure captures some important and unique variance in beliefs about moral objectivism, perhaps roughly capturing relativism and non-cognitivism,
respectively. To further investigate the possible relationships between
measures, we also considered whether TFO might be related to the difference
between C and H ratings (C-H), which can be conceptualized as a measure
of the extent to which a participant is influenced by sociocultural factors in
evaluating the truth of a moral claim. One might therefore expect a significant
negative correlation between TFO and C-H, but in fact the relationship was
very close to zero. Coupled with the high correlation between judgments on
the C and H questions, and the fact that C and H had very similar relationships
to other variables, this suggests that varying the sociocultural context for a
belief can indeed affect judgments concerning disagreement, but that the
effect is more like a shift in the absolute value of participants' judgments than
the recruitment or application of different moral commitments.
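Each entry in Table 10.2, like the TFO/C-H test just described, is an ordinary Pearson correlation; a sketch with our own hypothetical column names:

    from scipy.stats import pearsonr

    # `df` is assumed to hold one row per participant with the derived scores,
    # e.g., "C", "H", "C_plus_H", "C_minus_H", "TFO_rev", and factor scores.
    r, p = pearsonr(df["TFO_rev"], df["C_minus_H"])
    print(f"TFO vs. C-H: r = {r:.3f}, p = {p:.3f}")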
Second, while both the C+H and TFO ratings were related to moral progress and GBJW, they had unique profiles in terms of the specific factors with which they correlated. The C+H measure was correlated with the concrete tendency factor (r = 0.239, p < 0.01), while the TFO measure was positively correlated with the abstract progress factor (r = 0.127, p < 0.05) and negatively correlated with the GBJW factor (r = -0.116, p < 0.05). Although these correlations were small, they suggest systematic relationships between measures and, more surprisingly, non-overlapping relationships, providing further evidence that judgments of disagreement (C+H) and judgments concerning whether moral claims have a truth value (TFO) reflect different facets of folk metaethical commitments.
Finally, it's worth considering why C+H and TFO had these distinct profiles. We speculate that the dimension of concrete versus abstract evaluation can partially explain these results. Specifically, C+H and the concrete tendency factor were positively associated and both involved particular moral claims (e.g., about slavery) rather than abstract claims, while TFO and the abstract progress factor were positively associated and both involved judgments that were more explicitly metaethical in that they concerned the status of particular moral ideas (i.e., whether there is moral progress in general and whether particular
claims have a truth value). However, this speculation does not explain why the C+H measure was not also associated with the concrete inevitability factor, nor does it explain the negative association between TFO and the GBJW factor.

General discussion

Our results suggest that metaethical beliefs are varied and complex, with
significant but modest relationships across different sets of beliefs. Our results
also reinforce some of the conclusions from prior research. Like Goodwin
and Darley (2008, 2012), we find significant variation in objectivism across
individuals, and also that judgments reflect greater objectivism for some
social facts (slavery) than for others (same-sex marriage), perhaps echoing
their findings on the role of consensus, and also consistent with the strength of
participants' attitudes concerning each social fact. Like Sarkissian et al. (2011), we find evidence that measures that highlight different perspectives seem to increase non-objectivist responses, as our historical vignette generated less objectivist responses than the matched current vignette, although the
responses were strongly correlated. Our findings therefore support the need
to consider the characteristics of both participants and measures in drawing
conclusions about metaethical beliefs.
Beyond illuminating variation between individuals, our findings shed light on the coherence and variability of metaethical beliefs within individuals. Correlations between our measures of metaethical beliefs suggest two conclusions: that the metaethical concepts we investigate have some common elements, but also that there is only partial coherence in the corresponding beliefs. Our two separate measures of moral objectivity (C+H and TFO) were significantly correlated, but only weakly so. The correlation was weak despite modifications from Goodwin and Darley (2008) and Sarkissian et al. (2011) to make the measures more comparable: both involved judgments on 7-point scales and referred to the same moral claims. Analyses of the relationship between these two measures and the four factors concerning moral progress and GBJW suggest that moral objectivism is related to these ideas, but the two measures of objectivism had unique patterns of association. If participants have strong, stable, and consistent metaethical commitments, why might responses to metaethical questions be so weakly related?
We first consider methodological and conceptual answers to this question. One possibility is that we observe weak associations between metaethical commitments as an artifact of our methods of measurement. This idea is consistent with a suggestion by Sarkissian et al. (2011), who argue that when forced to address radically different perspectives, people who appeared to have completely objectivist commitments reveal some underlying, relativist intuitions. It follows that methods for soliciting commitments might themselves account for substantial variance in responses. We suspect there is some truth to this idea, and our particular measures certainly have limitations. Nonetheless, it's worth repeating that while disagreements in current and historical contexts (C and H) involved different absolute ratings, they were very highly correlated and had matching patterns of association with our other measures. Moreover, the difference between these ratings (C-H), that is, the extent to which context shifted judgments, was not reliably associated with any measures. One interpretation is that people's absolute judgments may be quite easy to manipulate, but that the relationships between metaethical commitments, while weak, may be more stable.
Another possibility is that the metaethical commitments we investigated do not in fact correspond to coherent and unified sets of beliefs. Thus, participants' judgments could be inconsistent across measures because the philosophical constructs we aim to assess are themselves diverse or incoherent. For example, we expected a stronger relationship between moral objectivism and belief in moral progress, but such a relationship is not logically required: one can, for example, be a relativist and endorse moral progress, or an objectivist and deny it. We also expected our two measures of moral objectivism to be more strongly associated given their association within the philosophical literature and the fact that prior research has simply combined both measures (Goodwin and Darley 2008; Wright et al. in press), but it is logically possible, if unusual, to be (for example) a non-cognitivist universalist (e.g., Hare 1952).
While we suspect that conceptual dissociations between metaethical
commitments partially explain our results, and that the findings are doubtless
influenced by our particular methods of measurement, our results also point to
three possible (and mutually consistent) proposals concerning the psychology
of metaethical belief.
First, as suggested by Wright et al. (in press), it could be that objectivism in the moral domain is tempered by the need to tolerate and effectively interact with others who hold divergent beliefs. On this view, the apparent incoherence in participants' metaethical commitments serves an important psychosocial function, and we would expect the observed relationship between the prevalence of a particular moral belief and an objectivist stance toward it.
Second, it could be that people do not hold a single intuitive theory of
metaethics, but instead hold multiple theories with some moral content. For
example, intuitive theories could be organized around general principles (such
as fairness vs. justice), moral patients (such as humans vs. non-humans), or
particular practices (such as slavery vs. marriage). This idea can help make
sense of reliable relationships between metaethical commitments and other
beliefs (e.g., descriptive beliefs about consensus, explanatory beliefs), attitudes
(e.g., tolerance), and behaviors (e.g., charitable giving) despite only modest
associations across different metaethical beliefs. This proposal builds on prior
research positing intuitive theories across a wide range of domains, where
such theories embody somewhat coherent but not fully articulated bodies of
belief (e.g., Carey 1985; Shtulman 2010; Thagard 1989). In the moral domain, for
example, Lombrozo (2009) investigated the relationship between deontological
versus consequentialist commitments and found evidence of a systematic but
imperfect correspondence across more abstract and explicit versus scenario-
based measures. With explicit articulation and examination, as typically
occurs with philosophical training, different metaethical commitments could
potentially become more reliably associated.
Finally, it could be that categories that make sense a priori philosophically play a relatively minor role in driving people's responses, with a much greater role for (arguably philosophically irrelevant) properties, such as whether the question prompts are abstract or concrete. Both our factor analysis (which suggested that the dimension of abstract versus concrete was more psychologically significant than that between tendency and inevitability) and the patterns of correlations across measures support the importance of this dimension. Along these lines, Nichols and Knobe (2007) found that concrete vignettes about free will elicited compatibilist responses, while abstract vignettes elicited incompatibilist responses. More generally, research on Construal Level Theory suggests that level of abstraction can have important consequences for cognition (Trope and Liberman 2010). This final point should give pause to the assumption that folk morality will have any clean correspondence to extant philosophical categories. Instead, a more bottom-up, data-driven approach to understanding folk moral commitments may be more successful.

Further research aimed directly at measuring the nature of metaethical commitments will aid in distinguishing these possibilities and further clarify the status and coherence of folk metaethical commitments. If such commitments don't correspond to philosophical distinctions that can be motivated a priori, which dimensions of moral experience do they track, and why? These are important questions for future research.

Notes

* Authors' Note: Kevin Uttich, University of California, Berkeley; George Tsai, University of California, Berkeley and University of Hawaii; and Tania Lombrozo, University of California, Berkeley. Corresponding author: Kevin Uttich, Email: uttich@berkeley.edu, 3210 Tolman Hall, Berkeley, CA, 94720. We thank Nicholas Gwynne, Michael Pacer, Kathie Pham, Jennifer Cole Wright, Hagop Sarkissian, and the Moral Psychology group at Berkeley and the Concept and Cognition lab for feedback and data collection assistance. This research was supported by research funds from the McDonnell Foundation.
1 Information on the norms of the time period was added to the historical scenario
to ensure that participants were aware of the relevant norms and understood that
the scenario takes place before the change in the social fact.
2 We obtained similar results in a separate experiment which used Goodwin and Darley's original multiple-choice format rather than a Likert scale: 18 "true" responses (6%), 102 "false" responses (35%), and 162 "opinion" responses (56%) out of 288 total responses (96 participants × 3 social facts).
3 We thank Jennifer Cole Wright for suggesting this interpretation for why the
concrete items may have shown a differentiation between tendency and
inevitability while the abstract items did not.

References

Carey, S. (1985). Conceptual Change in Childhood. Cambridge, MA: MIT Press.
Cohen, J. (1997). The arc of the moral universe. Philosophy and Public Affairs, 26(2), 91–134.
Dalbert, C., Montada, L., and Schmitt, M. (1987). Glaube an eine gerechte Welt als Motiv: Validierungskorrelate zweier Skalen [Belief in a just world as motive: Validity correlates of two scales]. Psychologische Beiträge, 29, 596–615.
Forsyth, D. R. (1980). A taxonomy of ethical ideologies. Journal of Personality and Social Psychology, 39, 175–84.
Furnham, A. (2003). Belief in a just world: Research progress over the past decade. Personality and Individual Differences, 34, 795–817.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106, 1339–66.
Goodwin, G. P., and Darley, J. M. (2010). The perceived objectivity of ethical beliefs: Psychological findings and implications for public policy. Review of Philosophy and Psychology, 1, 161–88.
Goodwin, G. P., and Darley, J. M. (2012). Why are some moral beliefs seen as more objective than others? Journal of Experimental Social Psychology, 48, 250–6.
Hare, R. M. (1952). The Language of Morals. New York: Oxford University Press.
Harman, G. (1975). Moral relativism defended. Philosophical Review, 84, 3–22.
King, M. L., Jr. (1986). A Testament of Hope: The Essential Writings of Martin Luther King, Jr., J. Washington (ed.). San Francisco: Harper and Row.
Knobe, J., Buckwalter, W., Nichols, S., Robbins, P., Sarkissian, H., and Sommers, T. (2012). Experimental philosophy. Annual Review of Psychology, 63, 81–99.
Lerner, M. (1980). The Belief in a Just World: A Fundamental Delusion. New York: Plenum.
Lombrozo, T. (2009). The role of moral commitments in moral judgment. Cognitive Science, 33, 273–86.
Nichols, S. (2004). After objectivity: An empirical study of moral judgment. Philosophical Psychology, 17, 3–26.
Nichols, S., and Knobe, J. (2007). Moral responsibility and determinism: The cognitive science of folk intuitions. Nous, 41, 663–85.
Sarkissian, H., Park, J., Tien, D., Wright, J. C., and Knobe, J. (2011). Folk moral relativism. Mind & Language, 26, 482–505.
Shtulman, A. (2010). Theories of God: Explanatory coherence in a non-scientific domain. In S. Ohlsson and R. Catrambone (eds), Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society, pp. 1295–300.
Sinnott-Armstrong, W. (2009). Moral skepticism. The Stanford Encyclopedia of Philosophy (Summer 2009 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/sum2009/entries/skepticism-moral/.
Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12, 435–502.
Trope, Y., and Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440.
Uttich, K., Tsai, G., and Lombrozo, T. (2013). Ethical Explanations, Moral Objectivism, and Moral Progress. Manuscript submitted for publication.
Wright, J. C., Grandjean, P. T., and McWhite, C. B. (2013). The meta-ethical grounding of our moral beliefs: Evidence for meta-ethical pluralism. Philosophical Psychology, 26(3), 336–61.
Wright, J. C., McWhite, C. B., and Grandjean, P. T. (in press). The cognitive mechanisms of intolerance: Do our meta-ethical commitments matter? In J. Knobe, T. Lombrozo, and S. Nichols (eds), Oxford Studies in Experimental Philosophy, Vol. 1. New York, NY: Oxford University Press.
Young, L., and Durwin, A. (2013). Moral realism as moral motivation: The impact of meta-ethics on everyday decision-making. Journal of Experimental Social Psychology, 49, 302–6. doi:10.1016/j.jesp.2012.11.013
11

Agent Versus Appraiser Moral Relativism:


An Exploratory Study
Katinka J. P. Quintelier, Delphine De Smet,
and Daniel M. T. Fessler*

Theories of moral relativism do not always fit well with common intuitions. In the Theaetetus, Plato ridiculed the relativist teachings of Protagoras (Plato 1921), and Bernard Williams dubbed moral relativism "possibly the most absurd view to have been advanced even in moral philosophy" (Williams 1972, p. 20). Nonetheless, even though some moral philosophers oppose theories of moral relativism due to their counterintuitive implications (e.g., Streiffer 1999), other philosophers defend it by referring to common intuitions, lay people's speech acts, or common understandings of certain moral terms (e.g., Brogaard 2008; Harman 1975; Prinz 2007). These intuitions have been investigated empirically: On the one hand, empirical studies suggest that survey respondents can make relativist moral judgments (Goodwin and Darley 2008; Sarkissian et al. 2012; Wright et al. 2012; Wright et al. in press). On the other hand, the folk's moral relativist intuitions might be self-contradictory (cf. Beebe 2010), and this can be used as an argument against relativist moral theories (Williams 1972).
If the prevalence and coherence of folk moral relativism are to play a role in arguments regarding the philosophical merits of moral relativism, then we need to know what the folk actually adhere to. In this regard, for several reasons, it is important to take into account that there are different kinds of moral relativism (Quintelier and Fessler 2012). First, failure to do so may lead to an underestimation of the prevalence of folk moral relativism, as respondents may employ relativist intuitions of a kind other than that being measured. Second, some kinds of moral relativism might be more coherent than others (Quintelier and Fessler 2012).
An important distinction that has received attention in recent philosophical
work is agent versus appraiser relativism (Beebe 2010; Lyons 1976/2001; Prinz
2007). As we further explain in the next two sections, this distinction refers to the
individual or group of individuals toward whom a moral statement is relativized.
Agent moral relativism states that the appropriate frame of reference is the moral framework of the person who performs the act, or of the cultural group to which this person belongs. Appraiser moral relativism states that the appropriate frame of reference is the moral framework of the person who makes the moral judgment, or of the cultural group to which this person belongs. Contemporary
empirical work on moral relativism has largely failed to investigate both (a) this
critical distinction between agent and appraiser moral relativism, and (b) the
corresponding comparative intuitive appeal of each type of moral relativism.
Here, we explore the existence of both agent and appraiser moral relativist
intuitions in lay people. Below, we briefly define moral relativism, after which
we explain in more detail the difference between agent and appraiser moral
relativism. In the main section, we describe our study and report our findings.
Finally, we discuss the possible implications of our findings.

Moral relativism

We construe the notion of moral relativism as consisting of three necessary


components: First, X is relative to Y; second, X is an aspect of the moral
phenomenon; and third, there is variation in Y that cannot be eliminated, as
illustrated below (cf. Quintelier and Fessler 2012).
First, we take moral relativism to mean that some aspect of moral statements (e.g., their truth, their referent) or morally relevant acts (e.g., their moral rightness) is relative to a moral framework (cf. Harman and Thomson 1996). By a moral framework, we mean a set of moral values, principles, or sentiments that play a justifying role in one's moral reasoning (e.g., one justifies moral judgments by referring to this framework).
Consider the following example, inspired by Lyons (1976/2001): Assume that pro-choice activists endorse a moral framework that prioritizes the value of personal choice over the value of the unborn life. According to some kinds of moral relativism, a pro-choice activist, say, Christine, can truthfully judge that abortion is permissible because it is in accordance with her moral framework. Nonetheless, if a pro-life activist, say, Lisa, abhors abortion, Lisa's statement regarding the impermissibility of abortion is also true because it is in accordance with Lisa's moral framework, which prioritizes the value of the unborn life over personal choice. In this example, the truth of moral statements thus depends on the moral framework of the person uttering a moral statement.
Second, moral relativism holds that there is variation between these moral
frameworks. In our example, some people are pro-choice and others are pro-life. People's moral judgments will therefore sometimes differ because their respective moral frameworks differ.
Finally, moral relativism rests on philosophical assumptions, such that this
variation in moral frameworks cannot be eliminated. For instance, one can
hold that both frameworks are equally true, that there is no truth about the
matter, or that they are equally practical, etc. If moral relativism allowed that all variation in moral frameworks could be eliminated, it would be compatible with (most forms of) moral universalism. Such a meaning of moral relativism would be too broad for our purposes.

Agent versus appraiser moral relativism

The above picture leaves out an important component of moral relativism,


namely, whose moral framework matters in deciding whether a moral statement
is true or not: Does it matter who is evaluating the act, does it matter who is
doing the act, or both? Another example can illustrate this: Both Carol, a pro-
choice activist, and Laura, a pro-life activist, are having an abortion. They are
agents performing an act with moral relevance. Lisa (the other pro-life activist)
and Christine (pro-choice activist) again contemplate these actions and utter
their respective judgments: Lisa says neither abortion is permissible, while
Christine says both abortions are permissible. They are appraisers evaluating
the act. Which moral statement is true now? Should we assign truth values based on the moral frameworks of the agents performing the act (Carol and Laura), or based on the moral frameworks of the appraisers judging the act (Lisa and Christine)? Or could any moral framework be an appropriate frame of reference?
According to agent moral relativism, the appropriate frame of reference is the agent's moral framework. In this example, it would be permissible for Carol (the pro-choice agent) to have an abortion, but it would not be permissible for Laura (the pro-life agent) to have an abortion. Viewed from the perspective of agent moral relativism, Christine's evaluative statement that both abortions are permissible is false, even though this statement is in accordance with her own moral framework. In contrast, for an agent moral relativist, it would be correct for an appraiser, such as Christine, to disapprove of Laura's abortion (as inconsistent with Laura's own moral perspective) and to permit Carol's abortion (as consistent with Carol's own moral perspective).
In contrast, according to appraiser relativism, the moral frameworks of the agents (Laura and Carol) are irrelevant to whether a moral judgment is true or false. What matters instead are the moral frameworks of the appraisers, Christine and Lisa. Viewed from the perspective of appraiser moral relativism, Christine's evaluative statement that both abortions are permissible is correct, even though abortion is against Laura's (the agent's) framework.
In what follows, we consider appraisers as only those who evaluate a moral
act without being involved in the act. We consider agents as only those doing
the act without uttering a statement about the act. Thus, considering the act
of lying, when A lies to B, A and B are not appraisers. Of course, in reality,
agents can appraise their own actions. Moreover, when appraisers utter a moral statement (for example, C says to D that lying is wrong), they might
agents. However, simplifying matters this way will make it easier to investigate
whether lay people indeed draw a distinction between agents and appraisers
when assessing the status of moral statements and behavior.

Importance of the agent-appraiser distinction

The distinction between agent moral relativism and appraiser moral relativism is important when evaluating moral theories. One possible argument against moral relativism is that it has counterintuitive implications (e.g., Williams 1972). Moral relativism is often taken to imply that at least some moral statements are true or false depending on the appraiser. In the above example, this would mean that it is true (for Christine) that Carol's abortion is permissible, while it is true (for Lisa) that Carol's abortion is not permissible. As a consequence, conflicting moral statements can both be true at the same time, which is hard to reconcile with common intuitions. Moreover, according to appraiser moral relativism, Christine cannot reasonably say that Lisa is wrong. However, most people do admonish others when they utter apparently conflicting moral statements. Thus, the moral speech acts of most people are not in line with moral relativism.
While this argument against moral relativism holds for appraiser relativism,
it does not necessarily hold for agent relativism. According to agent moral
relativism, each moral statement about a specific act performed by a specific
agent is either true or false, irrespective of who is appraising the act. In the
above example, Carol's abortion is permissible, irrespective of who is judging
Carol. As a consequence, conflicting moral statements are not both true at the
same time, and it is not unreasonable for discussants to admonish those who
utter conflicting moral statements. This is easier to reconcile with common
intuitions.
Also, agent moral relativism is easier to reconcile with certain existing social
practices than appraiser moral relativism. According to agent group moral
relativism, the appropriate frame of reference is the moral framework of the
group the agent belongs to. This is akin to cultural relativism: an act is right or
wrong depending on the moral values that prevail in the culture in which the
act takes place. Cultural relativism has been vehemently defended in the past,
and moderate forms of cultural relativismwhere the wrongness of at least
some, but not all, acts depends on the culture in which the act takes placeare
still defended and applied in public policy. For instance, in Belgium, it is illegal
to kill animals without previously anesthetizing them. However, the same
does not hold for religious groups when ritually slaughtering animals. Thus,
whether an act is legally right or wrong depends on the group performing the
act. Such policies are put in practice for at least some moral issues, and people
seem to be able to relativize their practical judgments to the moral frameworks
of agents.

In contrast, according to appraiser group moral relativism, the appropriate


frame of reference is the moral framework of the group to which the appraiser
belongs. In the case of slaughtering animals, everyone would judge the ritual
and non-ritual practices according to their own moral framework, and all
these conflicting judgments would be true. This is hard to reconcile with the
observation that, in fact, the relevant conflicting judgments were discussed
by politicians and pundits in Belgium until a consensus was reached, and an
agent group moral relativist solution was adopted.
Thus, those who reject moral relativism because of its counterintuitive
implications should clarify what kind of moral relativism they have in mind:
appraiser moral relativism might well be counterintuitive in ways that agent
moral relativism is not, and, of course, agent moral relativism might be
counterintuitive in ways that appraiser moral relativism is not.

Previous evidence for folk moral relativism

Existing studies about folk moral relativism most often vary only the appraisers.
To date, investigators have yet to examine whether participants also reveal
agent relativist intuitions in experimental studies.
Goodwin and Darley (2008) and Wright et al. (2012; in press) both report the existence of relativist moral positions. In these studies, participants are presented with statements such as "Before the 3rd month of pregnancy, abortion for any reason (of the mother's) is acceptable." Some participants indicated that the statement was true (or false) but that a person who disagrees with them about the statement need not be mistaken. Hence, in these studies, participants allowed the truth value of a moral statement to vary when the appraiser varied. We do not know if participants would also allow the truth of a moral statement, or the rightness of an act, to vary when the agent varied.
Sarkissian et al. (2011) were able to guide participants' intuitions in the direction of moral relativism by varying the cultural background of the appraisers. They also varied the cultural background of the agents, but this did not have an effect on participants' intuitions. However, this apparent null result is subject to the methodological limitation that the cultural backgrounds of the hypothetical agents were much more similar to each other (an American vs. an Algerian agent) than were the cultural backgrounds of the hypothetical appraisers (a classmate vs. an appraiser from a fictitious primitive society, or vs. an extraterrestrial).

Because the above studies do not allow us to conclude whether the folk show agent relativist moral speech acts, we developed scenarios in which we explicitly varied the moral frameworks of both agents and appraisers.

Method

We tested whether manipulating the moral frameworks of agents would have an effect on lay people's moral speech acts. We asked participants for their own moral judgments about moral acts performed by agents holding various moral frameworks. We then also tested whether manipulating the moral frameworks of agents and appraisers would make a difference. To do so, we asked the same participants to assess the truth of others' moral judgments about moral scenarios. These moral statements were uttered by appraisers who held different moral frameworks. Moreover, these statements evaluated the acts of moral agents who also held different moral frameworks.

Participants
From April to June 2013, we recruited participants using Amazon.com's Mechanical Turk web-based employment system (hereafter MTurk). This is a crowdsourcing website that allows people to perform short tasks, including surveys, for small amounts of money. Anyone over 18 could participate. We analyzed data from 381 participants, who were mostly from the United States (234) and India (118).

Materials and design


We developed two questionnaires featuring agents and appraisers. All participants completed only one of the two questionnaires. The first questionnaire featured employees in a firm where, as a punishment, reducing the time allowed for lunch was either against, or in accordance with, the employees' moral framework. The second questionnaire featured sailors on a ship in international waters, where whipping as a punishment was either against, or in accordance with, the sailors' moral frameworks. The sailors' questionnaire was a modified version of a questionnaire previously employed in related research; see Quintelier et al. (2012) for a description of this instrument. In this chapter, we therefore give a detailed overview of the employees' questionnaire first, followed by only a short description of the sailors' questionnaire. The full text of both questionnaires is available upon request.
In order to investigate whether participants' moral intuitions vary depending on the agents' moral frameworks, participants were first presented with two scenarios describing the same act, done by different agents. In one scenario, the act was concordant with the agent's own moral framework, and in the other scenario, the act was discordant with the agent's own moral framework. After each scenario, we asked participants to morally judge the act. In order to check whether they had understood the particular vignette, we also asked them to complete two comprehension questions. Because the order of presentation of the two scenarios might unintentionally shape responses due to priming or similar psychological effects that are orthogonal to the phenomena in which we are interested here, the order of presentation was randomized across participants. This generated relatively equal subsamples that differed by order of presentation, allowing us to control for order effects in the final analysis.
The employees' questionnaire consisted of the following scenarios:

Scenario 1
Mr Jay is the boss of family business J in a small town in the Midwestern
United States. In this company, when employees are late for work, their wages
are reduced by a proportionate amount. As a consequence, everyone in this
company has come to think that a proportionate wage reduction is a morally
right punishment for being late for work. They think reducing lunch breaks as
a punishment is morally wrong because this is never done and they value their
lunch breaks.
One day, John is late for work. This day, his boss is not in the mood to deal
with administrative issues such as adjusting John's wages. Instead, he punishes
John by shortening his lunch break, even though Mr Jay himself, John, and all
the other employees in this company think this is morally wrong.

Because this punishment is discordant with the agent's moral framework, we refer to this scenario as AD.
Participants then answered the following judgment question on a 5-point Likert scale: "To what extent do you think Mr Jay's behavior is morally wrong?" (1 = certainly morally wrong; 5 = certainly not morally wrong). The higher the participants' scores, the more their judgment was discordant with the agent's moral framework.
Participants then answered two comprehension questions to check if they had read and understood the scenario.

Scenario 2
Mr May is the boss of another family business M in the same small town in the
Midwestern United States. In this company, when employees are late for work,
their lunch break is proportionately shortened. As a consequence, everyone in
this company has come to think that a proportionately shorter lunch break is
a morally right punishment for being late for work. They think that reducing
wages as a punishment is morally wrong because this is never done and they
value their income.
One day, Michael is late for work. His boss punishes Michael by shortening
his lunch break. Mr May himself, Michael, and all the other employees in this
company think that this is morally right.
Because this punishment is concordant with the agent's moral framework, we refer to this scenario as AC.
Participants then answered the following judgment question on a 5-point Likert scale: "To what extent do you think Mr May's behavior is morally wrong?" (1 = certainly morally wrong; 5 = certainly not morally wrong). Thus, the higher the participants' scores, the more their judgment was concordant with the agent's moral framework.
Participants again answered two comprehension questions.
In order to test whether participants' moral judgments depended on the agents' moral frameworks, we used AC and AD as within-subject levels of the variable AGENT.
In order to test whether participants' moral intuitions varied depending on the appraisers' and the agents' moral frameworks, participants were presented with two additional scenarios, presented in randomized order, that extend the previous scenarios through the addition of appraisers who utter a moral statement about the act.

Scenario 3
James and Jared are employees in Mr Jay's company. They both know that in Mr May's company, everyone thinks shortening lunch breaks is morally right. Of course, in their own company, it is just the other way around: Everybody in Mr Jay's company, including James and Jared, thinks that shorter breaks are a morally wrong punishment, and that wage reduction is a morally right punishment.
James and Jared have heard that Mr May shortened Michael's lunch break. James says to Jared: "What Mr May did was morally wrong."
This statement is discordant with the agents' moral framework and concordant with the appraisers' moral framework. We therefore label this scenario AGDAPC.
Participants answered the following question: "To what extent do you think that what James says is true or false?" (1 = certainly true; 5 = certainly false). Thus, the higher the participants' scores, the more their truth evaluation was concordant with the agents' moral frameworks but discordant with the appraisers' moral frameworks. Since this is at odds with the scenario label, we reverse coded this item. Thus, for the final variable that was used in the analysis, higher scores indicate that the response was more discordant with the agents' moral frameworks and more concordant with the appraisers' moral frameworks.
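A minimal sketch of this reverse coding (hypothetical column names; on a 5-point scale, a score x reverses to 6 - x):

    import pandas as pd

    responses = pd.DataFrame({"AGDAPC_raw": [1, 3, 5]})  # 1 = certainly true
    responses["AGDAPC"] = 6 - responses["AGDAPC_raw"]
    # After reversal, higher scores are more discordant with the agents'
    # frameworks and more concordant with the appraisers' frameworks,
    # matching the scenario label.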
Participants answered one comprehension question.
Participants were then led to the following text: Now Jared replies to James: "No, what Mr May did was not morally wrong." This statement is concordant with the agents' moral framework and discordant with the appraisers' moral framework. We therefore label this scenario AGCAPD.
Participants answered the following question: "To what extent do you think that what Jared says is true or false?" (1 = certainly true; 5 = certainly false). Again, we reverse coded this item. For the final variable that was used in the analysis, higher scores indicate that the response was more concordant with the agents' moral frameworks and more discordant with the appraisers' moral frameworks, in line with the label for this scenario.
Participants answered one comprehension question.

Scenario 4
Mark and Matthew are employees in Mr May's company. They both know that in their own company, everybody, just like Mark and Matthew themselves, thinks that reducing wages is a morally wrong punishment, and that shortening lunch breaks is a morally right punishment.
Mark and Matthew have heard that Mr May shortened Michael's lunch break. Mark says to Matthew: "What Mr May did was morally wrong."
This statement is discordant with the agents' moral framework and discordant with the appraisers' moral framework. We therefore label this scenario AGDAPD.
Participants answered the following question: "To what extent do you think that what Mark says is true or false?" (1 = certainly true; 5 = certainly false). Higher scores on this statement indicate that the participant's truth evaluation was more concordant with both the appraisers' and the agents' moral frameworks. We reverse coded this item. For the final variable that was used in the analysis, higher scores indicate that the response was more discordant with the agents' moral frameworks and more discordant with the appraisers' moral frameworks, in line with the label for this scenario.
Participants answered one comprehension question.
Participants were then led to the following text: Now Matthew replies to Mark: "No, what Mr May did was not morally wrong." This statement is concordant with the agents' moral framework and concordant with the appraisers' moral framework. We therefore label this scenario AGCAPC.
Participants answered the following question: "To what extent do you think that what Matthew says is true or false?" (1 = certainly true; 5 = certainly false). We reverse coded this item, such that higher scores indicate that the response was more concordant with the agents' moral frameworks and more concordant with the appraisers' moral frameworks, in line with the label for this scenario.
Participants again answered one comprehension question.
Participants thus had to indicate the truth of four moral statements. The variable AGENT TRUTH consists of the following two within-subject levels: AGCAPC + AGCAPD and AGDAPC + AGDAPD. The variable APPRAISER TRUTH consists of the following two levels: AGCAPC + AGDAPC and AGCAPD + AGDAPD.
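A sketch of how these composite levels could be computed from the reverse-coded items (column names are our own):

    import pandas as pd

    df = pd.DataFrame({"AGCAPC": [4, 5], "AGCAPD": [3, 4],
                       "AGDAPC": [2, 4], "AGDAPD": [2, 3]})
    # AGENT TRUTH: statements concordant vs. discordant with the agents'
    # frameworks, collapsing over the appraisers' frameworks.
    df["agent_concordant"] = df["AGCAPC"] + df["AGCAPD"]
    df["agent_discordant"] = df["AGDAPC"] + df["AGDAPD"]
    # APPRAISER TRUTH: concordant vs. discordant with the appraisers'
    # frameworks, collapsing over the agents' frameworks.
    df["appraiser_concordant"] = df["AGCAPC"] + df["AGDAPC"]
    df["appraiser_discordant"] = df["AGCAPD"] + df["AGDAPD"]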
The sailors' questionnaire featured the following scenario:

Scenario 1
Mr Johnson is an officer on a cargo ship in 2010, carrying goods along the
Atlantic coastline. All the crew members are American but the ship is mostly
in international waters. When a ship is in international waters, it has to follow
the law of the state whose flag it sails under and each ship can sail under only
one flag. This ship does not sail under the US flag. The law of this ships flag
state allows both whipping and food deprivation as a punishment.
On this ship, food deprivation is always used to discipline sailors who
disobey orders or who are drunk on duty; as a consequence, everyone on
this ship, Mr Johnson as well as all the sailors, has come to think that food
deprivation is a morally permissible punishment. Whipping, however, is never used to discipline sailors, and everyone on this ship, Mr Johnson as well as all the sailors, thinks whipping is a morally wrong punishment.
One night, while the ship is in international waters, Mr Johnson finds a
sailor drunk at a time when the sailor should have been on watch. After the
sailor sobers up, Mr Johnson punishes the sailor by giving him 5 lashes with a
whip. This does not go against the law of the flag state.
Subsequent scenarios, experimental questions, and comprehension questions were analogous to those in the employees' questionnaire: As in the employees' questionnaire, there were eight comprehension questions and six experimental questions.

Results

In order to ensure that participants read and understood the scenarios, we retained only those participants who answered all eight comprehension questions correctly. We analyzed the data from the two questionnaires separately. We analyzed data from 272 participants (50.4% women) for employees and 109 participants (51.4% women) for sailors. For some analyses, the total number of participants was lower due to missing values. For employees, mean age was 34.92 years (SD = 12.42), ranging from 19 to 75 years old. For sailors, mean age was 35.63 years (SD = 12.11), ranging from 20 to 68. Participants were mostly from the United States (58.1% and 69.7%) and India (34.2% and 22.9%) for employees and sailors, respectively.

To determine whether participants considered punishment less morally wrong when it was in accordance with the agents' frameworks, we used a mixed-design ANOVA with AC and AD as the two within-subject levels of the variable AGENT, and order of presentation as the between-subject variable. We found a significant main effect of AGENT on the extent to which participants thought that the punishment was morally wrong (employees: F(1,270) = 223.9, p < 0.001; sailors: F(1,107) = 43.2, p < 0.001). Specifically, participants thought that the agent-concordant punishment was more morally permissible (less morally wrong) (see Figure 11.1; employees: M = 3.94, SD = 0.07; sailors: M = 2.86, SD = 0.14) than the agent-discordant punishment (employees: M = 2.55, SD = 0.08; sailors: M = 2.05, SD = 0.11). We found no significant order effect (employees: F(1,270) = 0.05, p = 0.868; sailors: F(1,107) = 1.97, p = 0.164) and no interaction effect between AGENT and order (employees: F(1,270) = 0.406, p = 0.525; sailors: F(1,107) = 2.47, p = 0.119).
To determine whether there was an effect of the agents' and appraisers' moral frameworks on participants' truth evaluations of a moral statement, we conducted a mixed-design ANOVA with AGENT TRUTH and APPRAISER TRUTH as within-subject variables and order as the between-subject variable.
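Setting aside the between-subject order factor, the two within-subject factors can be analyzed with pingouin's two-way repeated-measures ANOVA; a sketch under our own column names (the full mixed design with order would require a more general model):

    import pingouin as pg  # pip install pingouin

    # `long` is assumed to have one row per truth rating: a participant id,
    # the reverse-coded 1-5 rating, and two factors coding whether the
    # statement was concordant with the agents' and with the appraisers'
    # moral frameworks.
    aov = pg.rm_anova(data=long, dv="rating",
                      within=["agent", "appraiser"], subject="participant")
    print(aov[["Source", "F", "p-unc"]])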

[Figure 11.1: bar chart of mean moral permissibility of behavior (y-axis, 0.00–5.00) by questionnaire (Employees, Sailors), comparing agent CONCORDANT and agent DISCORDANT behavior; error bars: 95% CI.]

Figure 11.1 Moral permissibility of behavior as a function of concordance of the behavior with the agents' moral frameworks, for two different questionnaire vignettes.

We found that the agent's moral frameworks (AGENT TRUTH) had an effect on whether participants thought that the moral statement was true or not (employees: F(1,270) = 76.3, p < 0.001; sailors: F(1,107) = 53.9, p < 0.001). Specifically, participants thought that the statement was more likely to be true when it was in accordance with the agent's moral frameworks (see Figure 11.2; employees: M = 3.46, SD = 0.053; sailors: M = 3.62, SD = 0.089) than when it was not (employees: M = 2.61, SD = 0.053; sailors: M = 2.61, SD = 0.096).
We found that the appraiser's moral frameworks (APPRAISER TRUTH) also had a significant effect on whether participants thought that the moral statement was true or not (employees: F(1,270) = 2496, p < 0.001; sailors: F(1,107) = 33.3, p < 0.001). Specifically, participants thought that the moral statement was more likely to be true when it was in accordance with the appraiser's moral frameworks (see Figure 11.3; employees: M = 3.75, SD = 0.051; sailors: M = 3.71, SD = 0.081) than when it was not (employees: M = 2.32, SD = 0.050; sailors: M = 2.51, SD = 0.092). We did not find a main effect of order (employees: F(1,270) = 0.318, p = 0.573; sailors: F(1,107) = 0.067, p = 0.797).
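Because the truth ratings cross two within-subject factors, the core of this analysis is a repeated-measures ANOVA with AGENT TRUTH and APPRAISER TRUTH as within factors. A minimal sketch using statsmodels on simulated, balanced data follows; for brevity it omits the between-subject order factor, and all names are our illustrative assumptions rather than the authors' code.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
n = 30  # simulated participants

rows = []
for pid in range(n):
    for agent in ("AGC", "AGD"):          # agent concordant / discordant
        for appraiser in ("APC", "APD"):  # appraiser concordant / discordant
            truth = (rng.normal(3.0, 1.0)
                     + 0.5 * (agent == "AGC")
                     + 0.7 * (appraiser == "APC"))
            rows.append({"pid": pid, "agent": agent,
                         "appraiser": appraiser, "truth": truth})
long = pd.DataFrame(rows)

# One observation per subject per cell, so the design is balanced.
res = AnovaRM(long, depvar="truth", subject="pid",
              within=["agent", "appraiser"]).fit()
print(res)
```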

Figure 11.2 Truth assessment of a moral statement as a function of concordance of the statement with the agent's moral frameworks, for two different questionnaire vignettes. [Bar chart: mean truth of statement (0.00-4.00) for the Employees and Sailors questionnaires, agent CONCORDANT vs. agent DISCORDANT statement; error bars: 95% CI.]

Figure 11.3 Truth assessment of a moral statement as a function of concordance of the statement with the appraiser's moral frameworks, for two different questionnaire vignettes. [Bar chart: mean truth of statement (0.00-4.00) for the Employees and Sailors questionnaires, appraiser CONCORDANT vs. appraiser DISCORDANT statement; error bars: 95% CI.]

For employees, but not for sailors, we found a significant two-way interaction between AGENT TRUTH and APPRAISER TRUTH (see Figure 11.4; employees: F(1,270) = 7.58, p = 0.006; sailors: F(1,107) = 0.199, p = 0.657). As Figure 11.4 shows, although this interaction was significant, the effect of the agent's (or appraiser's) moral frameworks was in the same direction in both conditions. Interestingly, in the employees questionnaire, the statement was perceived to be more true when it was concordant with the appraiser's moral framework and discordant with the agent's moral framework (M = 3.40, SD = 1.39) than when it was concordant with the agent's moral framework and discordant with the appraiser's moral framework (M = 2.81, SD = 1.42). In the sailors questionnaire, though, the truth values of these statements were similar (M = 3.21, SD = 1.47; M = 3.04, SD = 1.47). This suggests that, in the employees questionnaire, the appraiser's moral framework was more important than the agent's moral framework when there was some discordance, while in the sailors questionnaire, the appraiser's and agent's moral frameworks were almost equally important when there was some discordance.

Figure 11.4 Truth assessment of a moral statement as a function of concordance of the statement with the agent's and with the appraiser's moral frameworks, for the employees questionnaire only. [Bar chart: mean truth of statement (0.00-5.00) for appraiser CONCORDANT vs. appraiser DISCORDANT statements, split by agent CONCORDANT vs. agent DISCORDANT statement; error bars: 95% CI.]

Because these results suggest that the agent's and the appraiser's moral frameworks independently matter for people's evaluations of the truth of moral statements, it might be the case that some people are predominantly and consistently agent relativists while others are predominantly and consistently appraiser relativists. In order to explore this possibility, we calculated three new variables: AGENT DEGREE (AC - AD) as the degree to which participants relativized the permissibility of behavior according to the agent's moral frameworks, AGENT TRUTH DEGREE (AGCAPC + AGCAPD - AGDAPC - AGDAPD) as the degree to which participants relativized the truth of moral statements according to the agent's moral frameworks, and APPRAISER TRUTH DEGREE (AGCAPC + AGDAPC - AGCAPD - AGDAPD) as the degree to which participants relativized the truth of moral statements according to the appraiser's moral frameworks.
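Computationally, the three degree scores are simple linear contrasts over each participant's condition ratings. The sketch below constructs them, and runs the kind of simple regression reported next, on simulated data; the column names are our illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
n = 100
# Simulated per-participant condition ratings (illustrative column names):
# AC/AD = permissibility, agent concordant/discordant;
# AGxAPy = truth ratings for each agent x appraiser combination.
cols = ["AC", "AD", "AGCAPC", "AGCAPD", "AGDAPC", "AGDAPD"]
df = pd.DataFrame(rng.normal(3.0, 1.0, (n, len(cols))), columns=cols)

df["AGENT_DEGREE"] = df["AC"] - df["AD"]
df["AGENT_TRUTH_DEGREE"] = (df["AGCAPC"] + df["AGCAPD"]
                            - df["AGDAPC"] - df["AGDAPD"])
df["APPRAISER_TRUTH_DEGREE"] = (df["AGCAPC"] + df["AGDAPC"]
                                - df["AGCAPD"] - df["AGDAPD"])

# Relating permissibility relativism to truth relativism
# (cf. the F-tests reported below).
reg = pg.linear_regression(df[["AGENT_DEGREE"]], df["AGENT_TRUTH_DEGREE"])
print(reg[["names", "coef", "pval"]])
```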
For sailors, we found that AGENT DEGREE was positively and significantly related to AGENT TRUTH DEGREE (F(1,108) = 11.0, p = 0.001) but not to APPRAISER TRUTH DEGREE (F(1,108) = 0.000, p = 0.989). This suggests that participants who were relativists with regard to moral permissibility were more likely to be agent relativists with regard to moral truth. Thus, they might have been agent relativists with regard to moral permissibility and with regard to moral truth, and therefore quite consistent in their relativist intuitions.
However, for employees, it was just the other way around: AGENT DEGREE was positively and significantly related to APPRAISER TRUTH DEGREE (F(1,271) = 5.30, p = 0.022) but not to AGENT TRUTH DEGREE (F(1,271) = 0.141, p = 0.708). In this scenario, participants might have been inconsistent in their relativist intuitions, alternating between agent relativist speech acts with regard to moral permissibility and appraiser relativist speech acts with regard to moral truth. Alternatively, participants might have interpreted the actors in the moral permissibility scenario as appraisers instead of agents, as explained in the introduction. Thus, they might have been appraiser relativists with regard to moral permissibility and with regard to moral truth.
Finally, for employees, but not for sailors, we found a significant interaction effect between APPRAISER TRUTH and order of presentation (see Figure 11.5; F(1,270) = 26.76, p < 0.001).

5.00

4.00
Mean truth of statement

3.00

2.00

1.00

0.00
Appraiser CONCORDANT FIRST Appraiser CONCORDANT SECOND
ORDER Error bars: 95% Cl
SCENARIO
Appraiser CONCORDANT statement
Appraiser DISCORDANT statement

Figure 11.5 Truth assessment of a moral statement as a function of concordance


of the statement with the appraisers moral frameworks, for two different orders of
presentation, for the employees questionnaire only.

Examining Figure 11.5, though, we see that the effect of the appraiser's moral frameworks was again in the same direction in both orders. Thus, the folk seem to be appraiser relativists regardless of order of presentation or variation in the appraiser's moral frameworks.
There were no interaction effects between AGENT TRUTH and order of presentation.

Discussion

We investigated whether lay people's moral evaluations were in accordance with agent moral relativism. We tested this in two ways. First, we asked participants to make a moral judgment about an act while manipulating the moral frameworks of the agents. We found that participants were more likely to consider the act morally permissible when it was in accordance with the agents' moral frameworks than when it was not. This suggests that agents' moral frameworks have an effect on lay people's moral speech acts about the moral wrongness or permissibility of behavior: people are more likely to say that a behavior is morally permissible when it is in accordance with the agent's moral frameworks than when it is not.
Second, we asked participants to assess the truth of a moral statement while manipulating the moral frameworks of the agents and of the appraisers. We found that participants were more likely to answer that a moral statement is true when its message was in accordance with the agents' moral frameworks. This suggests that agents' moral frameworks have an effect on lay people's moral speech acts about the truth of moral statements: people are more likely to say that a moral statement is true when its message is in line with the agent's moral frameworks than when it is not.
However, we also found that participants were more likely to answer that a moral statement is true when its message was in accordance with the appraisers' moral frameworks. This suggests that appraisers' moral frameworks also have an effect on lay people's moral speech acts about the truth of moral statements.

For employees, but not for sailors, we found two interaction effects: a significant two-way interaction between AGENT TRUTH and APPRAISER TRUTH, and one between APPRAISER TRUTH and order of presentation. However, the effects were always in the same direction, meaning that our second conclusion is upheld: individuals take both agents' and appraisers' moral frameworks into account when assessing the truth of moral statements. Further research may reveal whether these interaction effects are a consistent pattern in folk moral relativism, or whether they were an artifact of the employees scenario.
Finally, we explored the possibility that some people are predominantly and consistently agent relativists while others are predominantly and consistently appraiser relativists. Our results are not conclusive: whether people are predominantly agent moral relativists or appraiser moral relativists might vary depending on the scenario or on the moral aspect (truth vs. permissibility) being evaluated.
Our results are not definitive. Notwithstanding the fact that we excluded all participants who did not answer all comprehension questions correctly, given the complexity of our scenarios and questions, future investigations would benefit from simpler materials. Also, we examined assessments of only two acts, namely a reduction in lunch time and whipping, both as punishments. The extent of lay people's moral relativism may depend on the kind of act or the modality of the moral statement. In addition, it remains to be seen whether agent relativism and appraiser relativism are stable intuitions or vary across a range of situations. These and other possibilities warrant future research, some of which has already been undertaken by the present authors (Quintelier et al. 2013).
With the above caveats in mind, our study reveals that there is inter-individual as well as intra-individual variation in whether individuals relativize moral speech acts to agents or to appraisers. Such variation in types of moral intuitions is in line with previous suggestions (e.g., Gill 2009; Sinnott-Armstrong 2009) that different individuals employ quite divergent moral language. The variation that we have documented thus supports Gill's position that philosophical theories that appeal to lay people's speech acts cannot 'rely on a handful of commonsense judgments' (2009, p. 217), as the philosopher's commonsense judgment will often fail to reflect the actual distribution of
moral reasoning among the folk. Moreover, that people may employ divergent
relativist forms of language indicates that researchers of moral relativism
cannot make claims regarding moral relativism without first specifying the
type of relativism at issue, nor can they attend only to appraiser relativism.
Methodologically, researchers must take care in designing stimuli and queries
in order to minimize ambiguity as to which type of relativism is made salient.
Whether they be empiricists or theorists, researchers of moral relativism must
take seriously the existence of agent moral relativism, and must consider the
differences between it and appraiser moral relativism.

Note

* Authors' note: Katinka J. P. Quintelier, Amsterdam Business School, University of Amsterdam, Plantage Muidergracht 12, 1080 TV Amsterdam, The Netherlands. Delphine De Smet, Department of Legal Theory and Legal History and Research Unit The Moral Brain, Ghent University, Universiteitstraat 4, B-9000 Ghent, Belgium. Daniel M. T. Fessler, Department of Anthropology and Center for Behavior, Evolution, & Culture, University of California, Los Angeles, 341 Haines Hall, 375 Portola Plaza, Los Angeles, CA 90095-1553, USA. Correspondence should be addressed to Katinka J. P. Quintelier, Amsterdam Business School, University of Amsterdam, Plantage Muidergracht 12, 1080 TV Amsterdam, The Netherlands. E-mail: K.Quintelier@uva.nl. Acknowledgments: The authors thank the participants of the first workshop of the Experimental Philosophy Group UK, and Hagop Sarkissian and Jen Cole Wright for their valuable feedback. The first author received funding from the Flemish Fund for Scientific Research (FWO) and from the Konrad Lorenz Institute for Evolution and Cognition Research while working on this chapter.

References

Beebe, J. (2010). Moral relativism in context. Noûs, 44(4), 691-724. doi:10.1111/j.1468-0068.2010.00763.x
Brogaard, B. (2008). Moral contextualism and moral relativism. Philosophical Quarterly, 58(232), 385-409. doi:10.1111/j.1467-9213.2007.543.x
Gill, M. (2009). Indeterminacy and variability in meta-ethics. Philosophical Studies, 145(2), 215-34. doi:10.1007/s11098-008-9220-6
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106(3), 1339-66. doi:10.1016/j.cognition.2007.06.007
Goodwin, G. P., and Darley, J. M. (2010). The perceived objectivity of ethical beliefs: Psychological findings and implications for public policy. Review of Philosophy and Psychology, 1(2), 161-88. doi:10.1007/s13164-009-0013-4
Harman, G. (1975). Moral relativism defended. The Philosophical Review, 84(1), 3-22. Stable URL: http://links.jstor.org/sici?sici=0031-8108%28197501%2984%3A1%3C3%3AMRD%3E2.0.CO%3B2-%23
Harman, G., and Thomson, J. J. (1996). Moral Relativism and Moral Objectivity. Blackwell.
Lyons, D. (1976/2001). Ethical relativism and the problem of incoherence. In P. K. Moser and T. L. Carson (eds), Moral Relativism: A Reader. New York/Oxford: Oxford University Press, pp. 127-41.
Plato (1921). Theaetetus (H. N. Fowler, Trans., Vol. 12). Cambridge, MA/London: Harvard University Press/William Heinemann.
Prinz, J. J. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
Quintelier, K., and Fessler, D. (2012). Varying versions of moral relativism: The philosophy and psychology of normative relativism. Biology and Philosophy, 27(1), 95-113. doi:10.1007/s10539-011-9270-6
Quintelier, K., Fessler, D. M. T., and De Smet, D. (2012). The case of the drunken sailor: On the generalizable wrongness of harmful transgressions. Thinking & Reasoning. doi:10.1080/13546783.2012.669738
Quintelier, K., Fessler, D. M. T., and De Smet, D. (2013). The moral universalism-relativism debate. Klesis Revue Philosophique, 27, 211-62.
Richard, M. (2004). Contextualism and relativism. Philosophical Studies, 119(1), 215-42. doi:10.1023/B:PHIL.0000029358.77417.df
Sarkissian, H., Park, J., Tien, D., Wright, J., and Knobe, J. (2012). Folk moral relativism. Mind and Language, 26(4), 482-505. doi:10.1111/j.1468-0017.2011.01428.x
Sinnott-Armstrong, W. (2009). Mixed-up meta-ethics. Philosophical Issues, 19(1), 235-56. doi:10.1111/j.1533-6077.2009.00168.x
Streiffer, R. (1999). Moral Relativism and Reasons for Action. Doctoral dissertation, Massachusetts Institute of Technology, Cambridge.
Williams, B. (1972). Morality: An Introduction to Ethics. Cambridge: Cambridge University Press.
Wright, J. C., Grandjean, P., and McWhite, C. (2012). The meta-ethical grounding of our moral beliefs: Evidence for meta-ethical pluralism. Philosophical Psychology, 26(3), 1-26. doi:10.1080/09515089.2011.633751
Wright, J. C., McWhite, C., and Grandjean, P. T. (in press). The cognitive mechanisms of intolerance: Do our meta-ethical commitments matter? In T. Lombrozo, S. Nichols and J. Knobe (eds), Oxford Studies in Experimental Philosophy, Vol. 1. Oxford: Oxford University Press.
Part Three

Measuring Morality
12

Know Thy Participant: The Trouble with Nomothetic Assumptions in Moral Psychology
Peter Meindl and Jesse Graham*

Imagine a world in which researchers measure morality by determining
how often people eat peanut butter sandwiches (considered morally good
behavior) versus how often they eat jelly sandwiches (considered morally
bad behavior). Researchers in this world investigate factors that influence
morality by assessing the relationships that exist between situational variables,
individual differences, and the eating of peanut butter and jelly sandwiches.
Also imagine that in this world there exist reasonable philosophical
arguments for why peanut butter sandwich eating is morally good and jelly
sandwich eating is morally bad, but there also exist well-worn ethical theories
that reject the moral significance of peanut butter and jelly sandwich eating
altogether. Additionally, although a majority of this world's people consider
peanut butter sandwich eating to be highly morally good and jelly sandwich
eating to be highly morally bad, a sizeable portion of the population also
considers peanut butter and jelly sandwich eating morally irrelevant. Further,
a large percentage of people in this world actually hold views that are
diametrically opposed to the implicit assumptions of researchers: according
to these people, it is morally bad to eat peanut butter sandwiches and it is
morally good to eat jelly sandwiches. The field of peanut-butter-and-jelly
moral psychology is in crisis.
In this chapter, we argue that researchers in our own world currently study
morality and its contributing factors in much the same way as researchers in
the moral PB&J world described above. Moral psychologists tend to deem
certain patterns of thoughts and behaviors moral even though neither
scholars nor laypeople agree about the moral significance of these thoughts
and behaviors. As a consequence, moral psychologists often investigate the
causes and consequences of behaviors that either have little impact on the
world or are of little moral interest to scholars and laypeople. We also explain
how a small change in how morality is typically assessed could significantly
increase the scope and importance of morality research. Finally, we present
an empirically informed list of traits and behaviors that we consider useful
proxies of morality.
In this chapter, we will focus primarily on the construct of morality; however,
most of the concerns we raise and the solutions we offer are relevant to other
normative concepts that are likely of interest to readers of this volume, such as
prosociality and selfishness. Although we think moral psychologists' current
nomothetic bent poses a problem for many areas of psychology (e.g., judgment
and decision-making, motivation), here we focus on moral behavior, the area
of research where we think this bias is most prevalent and problematic.

Operationalizing Morality: An introduction to the problem

Traditionally, researchers have taken one of two approaches to operationalizing the concept of morality. Following Frimer and Walker (2008), we refer to these as the third-person and first-person approaches. As in the moral PB&J world, researchers in our world tend to take a third-person, normative, non-neutral approach to operationalizing the concept of morality, in which what is moral is determined by the researchers, who deem one set of principles or actions morally good and another set morally bad. The principles that researchers use as proxies for morality can be based on normative ethical theory, religious prescriptions, cultural values, lay definitions of morality, or the idiosyncratic predilections of individual researchers. This is called the third-person approach because the participant's perspective on what is moral does not influence how the researcher defines morality.
The third-person approach is widely used in moral psychology. As Frimer
and Walker (2008) noted, researchers investigating moral behavior have used
bravery (Walker and Frimer 2007), extraordinary care (Matsuba and Walker
2004), social activism (Colby and Damon 1992), honesty (Derryberry and Thoma 2005), environmentally friendly behavior (Kaiser and Wilson 2000), and community service (Hart et al. 2006) to operationalize morality. To this list, we add volunteerism (Aquino and Reed 2002), honesty (Teper et al. 2011), and cooperation (Crockett et al. 2010), to name a few.
The traditional alternative to the third-person approach is the first-person, value-neutral, descriptive approach (Frimer and Walker 2008). In contrast to the third-person approach, the first-person approach assesses morality according to what the participant herself considers moral. The impartial researcher deems each individual's set of principles or actions moral, and deems failing to follow or enact those principles or actions not moral. Although no less a figure than Gordon Allport proposed that a first-person approach is the only valid means of assessing moral behavior (Allport 1937), and researchers have long warned of the flaws inherent in third-person morality research (e.g., Pittel and Mendelsohn 1966), moral behavior has rarely been assessed using the first-person approach. Psychologists have occasionally taken participants' idiosyncratic moral beliefs into account when determining which variables to include in their analyses, but have taken this step almost exclusively when studying moral cognition (e.g., Goodwin and Darley 2008; Wright 2008; Wright 2010), not when studying moral behavior.
That said, the first-person approach is very similar to the approach taken by advocates of social-cognitive process models of general personality, such as the Cognitive Affective Personality System (Mischel and Shoda 1995, 1999) and Knowledge-and-Appraisal Personality Architecture (Cervone 2004). These models were developed largely as a response to claims that behavior was not consistent across situations. The creators and advocates of these models suggested that past research showed that behaviors associated with traits such as agreeableness and conscientiousness were not meaningfully consistent because participants' perceptions of situations were not taken into account. When researchers assessed behavior across situations that were psychologically similar according to the participants (not just nominally similar from an outsider's perspective), they discovered that cross-situational consistency was high (Mischel and Shoda 1998).
The idea behind the first-person approach to operationalizing morality is that something similar might be true for moral behavior: people's behavior might not fall in line with what researchers deem moral (i.e., what is nominally moral), but it might fall in line with what they personally consider moral (i.e., what is psychologically moral).

Comparing the two approaches

The first- and third-person approaches each have a number of unique advantages and disadvantages, but each has one main advantage and one main disadvantage. Because the most important advantage of one approach naturally mirrors the most important disadvantage of the other, here we discuss the main disadvantage of each approach.

Third-person disadvantage: Poor construct validity


We begin by discussing the main disadvantage associated with the third-person approach: it results in poor construct validity. This stems from the fact that, though objectively moral facts might exist, there is certainly no consensus about whether moral objectivity is possible, let alone about which theories or principles are the most objectively correct (Shafer-Landau 1994). As Kohlberg and Mayer (1972, p. 479) noted, one person's 'integrity' is another person's 'stubbornness', [and one person's] 'honesty in expressing your true feelings' is another person's 'insensitivity to the feelings of others'. Hence, though some people might have reason to consider a behavior virtuous, someone else might have reason to consider it virtue-neutral or even vicious. It was due in part to the apparent arbitrariness of lists of moral traits and behaviors (pejoratively dubbed the 'bag of virtues' problem; Kohlberg 1970) that Kohlberg avoided research on moral behavior and instead focused on moral reasoning (Narvaez and Lapsley 2009), and it is in part for this same reason that it might be wise for researchers to avoid the third-person approach to defining morality.
Not only do scholars of ethics disagree about what is morally good and morally bad, but laypeople do as well. Here we call this lay disagreement. There is evidence of at least three important types of lay moral disagreement: (a) variability in the degree to which laypeople consider actions and traits morally relevant, (b) disagreement about the moral valence of certain actions and behaviors (e.g., some see obedience to authority figures as good, some see it as bad), and, perhaps most problematic for moral psychology, (c) disagreement between researchers and laypeople about the moral relevance of several behaviors and traits.
The first two types of lay disagreement demand little attention here, as
intra- and intercultural variations in moral concerns and judgments have
recently received much attention in the psychology literature (for a review, see
Graham etal. 2013). Consequently, we will focus on the discrepancy between
the amount of moral relevance that researchers place on behaviors and traits
themselves (or assume that their participants place on them) and the moral
relevance that their participants actually place on these behaviors and traits
(if any at all).

Empirical evidence
To begin to assess this discrepancy, we instructed two groups of participants to rate either the moral importance of a list of traits and behaviors ('How important is it to have each of the following traits in order to be a moral person?'; Table 12.1) or the moral valence of traits and behaviors ('How morally good or morally bad is it to possess or perform each of the following traits or behaviors?'; Tables 12.2 and 12.3). The moral importance of each trait was rated by 905 participants on YourMorals.org (YM), and the moral valence of each behavior or trait was rated by 125 participants on Amazon's Mechanical Turk. Both of these qualify as Western, Educated, Industrialized, Rich, and Democratic (Henrich et al. 2010) samples, so it is likely that the researcher-participant discrepancies suggested here underestimate the true discrepancies.
Included in our trait list were all traits that have been previously included in morally relevant trait lists that we are aware of (e.g., Aquino and Reed 2002; Lapsley and Lasky 2001; Smith et al. 2007; Walker and Pitts 1998), as well as traits listed by participants in our own pretests who completed open-ended questions such as 'In order to be moral, what traits are important for people to possess?' The list of behaviors consisted of (a) actions that psychologists often

Table 12.1 Moral importance survey

Trait: mean score (SD)
Honest: 4.39 (0.90)
Just: 4.20 (0.96)
Compassionate: 4.04 (1.10)
Treats people equally: 3.91 (1.20)
Genuine: 3.86 (1.07)
Kind: 3.83 (1.08)
Honorable: 3.81 (1.16)
Tolerant: 3.80 (1.12)
Responsible: 3.74 (1.07)
Merciful: 3.68 (1.16)
Humane toward animals: 3.61 (1.19)
Forgiving: 3.59 (1.19)
Respectful: 3.56 (1.19)
Conscientious: 3.51 (1.09)
Helpful: 3.44 (1.00)
Nonjudgmental: 3.34 (1.32)
Loyal: 3.34 (1.19)
Giving: 3.31 (1.08)
Rational: 3.29 (1.28)
Self-controlled: 3.28 (1.18)
Generous: 3.24 (1.14)
Supportive: 3.22 (1.08)
Selfless: 3.19 (1.26)
Patient: 3.10 (1.15)
Cooperative: 3.01 (1.15)
Wise: 3.00 (1.34)
Controls thoughts: 2.98 (1.34)
Straightforward: 2.95 (1.22)
Courageous: 2.87 (1.28)
Hardworking: 2.83 (1.24)
Environmentally friendly: 2.76 (1.19)
Purposeful: 2.73 (1.19)
Perseverant: 2.71 (1.18)
Controls emotions: 2.68 (1.16)
Modest: 2.65 (1.17)
Friendly: 2.61 (1.17)
Brave: 2.61 (1.24)
Determined: 2.56 (1.22)
Non-materialistic: 2.48 (1.23)
Resourceful: 2.22 (1.20)
Optimistic: 2.18 (1.22)
Spends money wisely: 2.12 (1.12)
Spiritual: 1.88 (1.20)
Obedient: 1.80 (1.02)
Is patriotic: 1.59 (0.97)

Note: 5 = It is extremely important that a person possesses this characteristic (in order to be moral); 1 = It is not important that a person possesses this characteristic (in order to be moral). Sample sizes range from 867 to 905 raters for each item.

Table 12.2 Moral valence survey (behaviors)

Behavior: mean score (SD)
Kicking a dog in the head, hard (Graham et al. 2009): 8.54 (0.99)
Steal a cellphone: 7.96 (1.67)
A psychologist tells his or her participants that their behavior is anonymous when it is not: 7.72 (1.65)
Cheat on trivia game (for money) (DeAndrea et al. 2009): 7.27 (1.50)
Lie about predicting the outcome of a coin toss (for money) (Greene and Paxton 2009): 7.17 (1.51)
Gossiping: 6.89 (1.33)
Failing to pay for a subway ticket which costs somewhere between $1.50 and $5.00: 6.72 (1.24)
Not stopping at a stop sign: 6.60 (1.43)
Not recycling: 6.08 (1.26)
Being impatient with people: 6.07 (1.33)
Illegally watching movies online: 6.06 (1.27)
Looking at pornography: 6.05 (1.42)
Keeping the majority of $20 in an ultimatum game: 5.97 (1.38)
Failing to flip a coin to decide whether to place self or other participant in a positive consequence condition (and choosing to place self in positive condition) (Batson et al. 1999): 5.86 (1.37)
Showing up late for something: 5.85 (1.07)
Not holding a door open for someone behind you: 5.84 (1.02)
Taking a pen that is not yours (Mullen and Nadler 2008): 5.76 (1.09)
Illegally walking across a street: 5.57 (0.83)
Having the opportunity to keep $50 for oneself or keeping $25 and giving $25 to charity, and deciding to keep all $50 for oneself: 5.56 (1.26)
Not cooperating in a one-shot public goods game: 5.48 (1.37)
Lying in order to avoid hurting someone's feelings: 5.21 (1.52)
Eating pickles: 4.97 (0.48)
Defecting in a one-shot prisoner's dilemma game: 4.94 (1.26)
Cooperating in a one-shot prisoner's dilemma game: 3.77 (1.62)
Giving the majority of $20 away in a one-shot Dictator Game: 3.61 (1.50)
Agree to talk to xenophobic inmates on the virtues of immigration (Kayser et al. 2010): 3.21 (1.75)
Helping an experimenter pick up and organize dropped papers (van Rompay et al. 2009): 3.01 (1.32)
Having the opportunity to keep $50 for oneself or keeping $25 and giving $25 to charity, and deciding to give $25 to charity and keeping $25 for oneself: 2.57 (1.35)

Note: 1 = This is an extremely morally good behavior/trait; 5 = This is neither a morally good nor a morally bad behavior/trait; 9 = This is an extremely morally bad behavior/trait. Sample sizes range from 94 to 124 raters for each item.

use as proxies of morality and (b) behaviors we thought laypersons would consider morally relevant and that should be relatively easy to assess. In both surveys we also included presumably morally neutral traits and behaviors (e.g., eating pickles) that we used as points of comparison.
People rated the moral importance and moral goodness of many of the traits and behaviors on our list as might be expected, but other traits and behaviors, including those that have been used as proxies of morality and prosociality, were rated differently than researchers might expect. For instance, though researchers have often used people's choices in prisoner's dilemma games (PDGs) and public goods games as proxies of moral or prosocial behavior (e.g., Batson and Moran 1999; Cohen et al. 2006; Rand et al. 2012; Twenge et al. 2007), our findings suggest that laypeople do not consider such cooperation in general to be highly morally relevant or morally good. In fact, participants considered non-cooperation in these games to be almost entirely morally neutral. For instance, non-cooperation in a classic PDG was rated as no more morally bad than eating pickles (on average, both were rated as 'neither a morally good nor a morally bad behavior'; see Table 12.2). Similarly, participants rated behaviors such as helping an experimenter pick up dropped papers, a behavior that has often been used as a proxy for prosocial behavior (e.g., Isen and Levin 1972), as only slightly morally good. As a point of comparison, the use of deception in psychology research, something common in experimental moral psychology, tended to be rated as extremely morally bad. Overall, these results suggest that laypersons and researchers

Table 12.3 Moral valence survey (traits)

Trait: mean score (SD)
Honest: 1.62 (1.05)
Treats people equally: 1.82 (1.02)
Compassionate: 1.95 (1.14)
Humane toward animals: 2.03 (1.33)
Honorable: 2.05 (1.24)
Charitable: 2.06 (1.07)
Giving: 2.13 (1.24)
Respectful: 2.14 (1.22)
Faithful: 2.15 (1.43)
Helpful: 2.22 (1.16)
Generous: 2.24 (1.23)
Is Fair: 2.25 (1.28)
Hardworking: 2.45 (1.25)
Loyal: 2.48 (1.46)
Empathetic: 2.49 (1.25)
Dependable: 2.53 (1.28)
Humble: 2.55 (1.36)
Polite: 2.55 (1.40)
Selfless: 2.59 (1.70)
Tolerant: 2.61 (1.27)
Nonjudgmental: 2.65 (1.51)
Genuine: 2.67 (1.33)
Environmentally friendly: 2.69 (1.26)
Patient: 2.72 (1.29)
Open-minded: 2.90 (1.45)
Friendly: 2.97 (1.36)
Self-controlled: 2.99 (1.38)
Non-materialistic: 3.02 (1.41)
Cooperative: 3.05 (1.36)
Conscientious: 3.07 (1.43)
Modest: 3.16 (1.36)
Brave: 3.42 (1.48)
Wise: 3.46 (1.68)
Perseverant: 3.57 (1.47)
Patriotic: 3.59 (1.50)
Independent: 3.93 (1.38)
Obedient: 3.94 (1.59)
Sociable: 4.00 (1.37)
Intelligent: 4.09 (1.46)
Lively: 4.13 (1.34)
Bold: 4.30 (1.19)
Creative: 4.32 (1.18)
Perfectionist: 4.67 (0.92)

Note: 1 = This is an extremely morally good behavior/trait; 5 = This is neither a morally good nor a morally bad behavior/trait; 9 = This is an extremely morally bad behavior/trait. Sample sizes range from 95 to 125 raters for each item.

disagree about the morality of traits and behaviors. Consequently, the behaviors researchers use as indicators of morality may not be optimal proxies of moral behavior, a potential pitfall that seems inherently to accompany the third-person approach.

Assessing a different construct


Due to the normative and descriptive disagreement described above, any time a researcher conceptualizes morality in its broadest sense (i.e., in relation to ethics in general), the third-person approach is likely to result in poor construct validity. Given ongoing normative and descriptive moral disagreement, the third-person approach can only test the extent to which people's behavior is in line with the tenets of one particular set of ethical principles. Thus, this approach may produce information on the degree to which people behave in accord with particular moral principles, but assessing moral behaviors that are in line with only one particular set of moral beliefs does not yield information about morality in general (i.e., morality in its most overarching sense), which appears to often be the goal of researchers who investigate moral behavior (e.g., Aquino et al. 2009; Crockett et al. 2010; Gu et al. 2013; Jordan et al. 2011; Kouchaki 2011; Perugini and Leone 2009; Reynolds et al. 2010; Sachdeva et al. 2009; Teper et al. 2011). A person can consistently behave contrary to one code of ethics while simultaneously adhering to a different code of ethics (e.g., people involved in organized crime who adhere to strict norms of loyalty and respect). Researchers who use a third-person approach may thus be assessing a construct (behavior that is in line with one particular moral viewpoint or belief) that is different from the construct they intend to assess (morality in general).
Some psychologists may not be interested in morality in general; instead they might be interested only in certain behaviors that they personally consider moral or immoral. For such researchers, the concerns mentioned above still deserve consideration, for at least two reasons. First, we believe that if psychologists were to focus only on a short list of moral behaviors, they would be ignoring much of what makes morality and moral life interesting and important (including the very arguments about what constitutes morality that make universally accepted objective criteria impossible). Second, even when researchers are only interested in particular types of morally relevant behaviors, taking a strict third-person approach is likely to produce many of the same shortfalls described above. For instance, if a psychologist is only interested in assessing cheating behavior, it would be unwise to use a measure of cheating that they merely assume their participants consider meaningfully bad. As the results of our surveys suggest (Table 12.1), laypersons consider behaviors that are used as standard measures of cheating to be only mildly wrong. When psychologists make assumptions about the moral importance of their dependent variables, they run the risk of assessing behaviors that are so mundane that they are not assessing anything of real interest or importance. Consequently, in many cases using these measures as dependent variables may provide researchers with relatively little information about the relationship between cheating and other variables of interest.

First-person disadvantage: Complexity and impracticality


Recall that the first-person approach operationalizes morality according to whatever each person considers moral (Frimer and Walker 2008). Thus, first-person moral research is immune to the problems with the third-person approach that were outlined above. That said, an important disadvantage of the first-person approach is its complexity and relative impracticality. This approach requires that researchers accurately assess a person's moral values; unfortunately, social desirability and self-deception are likely to make it extremely difficult for researchers to do so. Some moral beliefs are likely more socially desirable than others, so people will sometimes misreport their moral beliefs in order to optimize appearances. For instance, conservatives participating in a study run by a presumably liberal researcher may not be forthcoming about their true moral beliefs, even if their anonymity is ensured. As for self-deception, research shows that self-enhancement causes people to label a trait as good if they think they possess that trait (Dunning et al. 1991); thus, in a study setting people might define what is moral according to how they actually act. As a result, the moral values people list in a first-person study might not be those which they consider moral, but instead might simply be descriptions of how they typically act.1 None of this is to say that third-person assessments of morality are immune to the effects of social desirability and self-deception; surely third-person self-report measures of morality are highly susceptible to these biases. However, first-person approaches require two steps that may be influenced by these biases (value assessment and behavior assessment), whereas the third-person approach is vulnerable to these biases only once (during behavioral assessment).
The first-person approach's second source of complexity is that a strict first-person approach would probably require scholars to assess behaviors, thoughts, and emotions that are in line with an impractically large number of moral principles. Past research suggests that people attach high levels of moral significance to hundreds of different traits and actions (Aquino and Reed 2002; Lapsley and Lasky 2001; Smith et al. 2007; Walker and Pitts 1998). Even if scholars assessed moral behavior only according to those moral principles people self-described as their most valued, it is likely that subjects would list a very large number of principles. For instance, in a recent study we asked participants to list only their most important moral values; even after combining similar answers we were left with more than 70 different values. And these are only consciously endorsed moral values; if one were to try to include moral concerns or intuitions of which participants are unaware, this would add a huge new layer of complexity to the first-person approach. In contrast, the third-person approach is relatively simple and convenient: researchers simply pick a behavior or idea they deem moral, and then assess participants accordingly.

A combined first- and third-person operationalization

The advantages of the two approaches complement each other, and their disadvantages mostly negate each other. For this reason, we suggest that a synthesis of the two approaches can lead to the best operationalizations of morality. We call this the 'mixed approach'. As with the third-person approach, we suggest that scholars assess morality according to predetermined moral principles; but in line with the first-person approach, we suggest that scholars also examine which principles each participant (consciously) values. Thus, in effect we are suggesting that researchers interested in moral behavior should assess moral hypocrisy, which is often conceptualized as the degree to which people go against what they consider morally right, and thus involves both first-person assessments of individual moral values and third-person assessments of moral behavior.
The advantages of the two approaches can be combined by assessing morality using traits and behaviors that prior research suggests people (either in general or in a particular sample) consider highly morally relevant. For instance, a researcher using the mixed approach might assess morality by measuring a person's honesty, because research suggests that people in general consider honesty to be highly morally important (Aquino and Reed 2002; Smith et al. 2007; see also Tables 12.1 and 12.3). Surprisingly, researchers interested in assessing morality very rarely do this (or, if they do, they fail to mention that this logic underlies their choice of moral proxy).
Alternatively, a researcher could choose to operationalize morality as, say, bravery, because pretests with their particular sample suggest that their participants generally consider bravery to be the most morally important trait. Furthermore, a researcher who is interested in only one particular type of morally relevant behavior (e.g., cheating) may also use the mixed approach; this type of researcher could first determine what type of cheating behavior their participants are likely to consider highly morally bad (but are still relatively likely to perform). All that the mixed approach requires is information on the extent to which participants consider traits and/or behaviors morally good or morally bad.
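To make this requirement concrete, the pretest step can be reduced to a simple filter over candidate items. A minimal sketch in Python follows; the ratings echo Table 12.1, but the 4.0 cutoff and all names are our illustrative assumptions, not a prescribed procedure.

```python
import pandas as pd

# Illustrative pretest ratings on the 1-5 moral-importance scale
# (values echo Table 12.1; the trait list and cutoff are assumptions).
pretest = pd.DataFrame({
    "trait": ["honest", "just", "cooperative", "obedient", "patriotic"],
    "mean_importance": [4.39, 4.20, 3.01, 1.80, 1.59],
})

# Keep only traits the pretest sample considers highly morally important.
proxies = pretest.loc[pretest["mean_importance"] >= 4.0, "trait"].tolist()
print(proxies)  # -> ['honest', 'just']
```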
The results of our recent surveys provide insight into which traits and behaviors researchers can use to successfully measure moral behavior and which traits and behaviors are not good candidates for this purpose. Some of these suggestions are underwhelming: as most readers would probably expect, diverse groups of people (liberals and conservatives, men and women, theists and atheists) tend to rate traits such as honesty and fairness as highly morally important (Table 12.1). More surprisingly, people in general do not rate traits such as 'cooperative', 'helpful', or 'charitable' as highly morally relevant. Researchers should therefore be very cautious when making assumptions about normative concepts such as morality, selfishness, and prosociality.
To reiterate, researchers could choose to measure morality according to what previous research (such as our own research summarized above) suggests people in general consider highly morally important, but they could also determine what their own particular participants consider most morally relevant by performing pretests with the particular population or even sample that they will use in their research. As long as a group of participants' (probable) moral beliefs are somehow taken into account, this approach will likely produce more accurate results than third-person research, and at the same time require far less time and effort than a purely first-person approach.

Examples of the mixed approach


A mixed approach is not the norm in moral psychology, but it has occasionally been used. For example, research on the effect that moral convictions have on behavior tends to involve assessments of each participant's personal moral convictions in regard to the target behavior (Skitka 2010; Skitka and Bauman 2008). And in their investigation of the neural correlates of admiration and moral elevation, Immordino-Yang et al. (2009) first determined for each individual participant where the emotional high point of the eliciting stimulus was, aiding their ability to assess moral emotions in the scanner by taking individual variation in emotional reactions into account.
Given that the mixed approach is a means of assessing moral hypocrisy, it should not be surprising that moral hypocrisy has in the past been assessed by way of the mixed approach. For instance, in order to investigate the situational factors that cause people to act unfairly, Batson et al. (1999) ensured they were using an appropriate proxy for unfairness by asking a separate sample of participants whether they considered the behavior of interest, giving oneself a chance to win money rather than giving someone else a chance to win money, to be unfair.
Other examples of the mixed approach in moral psychology go beyond moral hypocrisy. For instance, prior to performing research designed to test whether various presumed moral behaviors (helping behavior, moral courage, and heroism) are associated with people's moral prototypes, Osswald, Greitemeyer, Fischer, and Frey (2010) first determined whether people actually considered these three behaviors to be distinct moral behaviors. To do this, Osswald et al. (2010) simply performed a pretest with participants who were demographically similar to the people who would participate in their main studies. These pretest participants were asked to rate the morality of different scenarios in which helping behavior, moral courage, or heroism was depicted, and by way of this simple pretest it was determined that people indeed considered these behaviors to be distinct moral behaviors.
Had these authors taken a third-person approach, the results of these studies may have been less valid and their implications less profound (for the reasons described previously); and had either Batson et al. (1999) or Osswald et al. (2010) taken a first-person approach, their projects would have become much more complicated. Thus, by taking a mixed approach, these authors struck an important balance between methodological rigor and practicality. The extra steps required relatively little effort, but in both cases they provided strong support for the assumptions that lay at the foundation of the respective research projects.

Conclusion

Throughout the history of the field of psychology, only a handful of researchers have extolled the virtues of a first-person approach to moral research (e.g., Allport 1937; Blasi 1990; Colvin and Bagley 1930). In contrast, contemporary moral psychologists usually assess moral behavior using a third-person approach. At this point, however, it should be clear that both approaches have disadvantages. Considering the third-person approach's potentially negative impact on construct validity and the inherent complexity of effectively assessing morality using the first-person approach, one might argue that, as Allport (1937) contended long ago, morality is simply not a topic for psychological inquiry. However, in psychology, and perhaps especially in moral psychology, error and inaccuracy will inevitably exist; this does not mean that an entire field of study should be ignored. Nor does it mean we should settle for research with mediocre validity. Instead, it requires that the weaknesses that produce this error be minimized by operationalizing morality in the least problematic way possible.

By asking large groups of laypeople what traits and behaviors they consider moral, immoral, or morally irrelevant, we found evidence of problematic discrepancies between researchers' nomothetic assumptions and participants' views. For instance, measures that many researchers use as proxies for morality (e.g., cooperation or defection in PDGs, picking up papers in the lab) are seen as about as morally irrelevant as eating pickles, while the actual behavior of moral psychology researchers (deceiving participants) is seen as highly morally bad. To help address this discrepancy we suggest that researchers use a mixed approach combining the strengths of the first- and third-person perspectives. This mixed approach still has disadvantages: for instance, even if the mixed approach were used, values will sometimes clash, and it is unclear how a person's morality should be assessed in such cases. However, we believe that even with these disadvantages the mixed approach can help usher in a more interesting and significant era of morality research.

Notes

* Authors' Note: Peter Meindl and Jesse Graham, University of Southern California. Address correspondence to: Peter Meindl, Department of Psychology, University of Southern California, 3620 S. McClintock Ave., SGM 501, Los Angeles, CA 90089. Email: meindl@usc.edu. This work was supported by Templeton Foundation Grant 53-4873-5200.
1 See also Frimer, this volume, on the differences between values people explicitly express on surveys and those they implicitly express in their everyday lives.

References

Allport, G. W. (1937). Personality: A Psychological Interpretation. Oxford, England: Holt.


Aquino, K., and Reed, A. (2002). The self-importance of moral identity. Journal of
Personality and Social Psychology, 83, 142340.
Batson, C. D., and Moran, T. (1999). Empathy-induced altruism in a prisoners
dilemma. European Journal of Social Psychology, 29, 90924.
Batson, C., Thompson, E. R., Seuferling, G., Whitney, H., and Strongman, J. A.
(1999). Moral hypocrisy: Appearing moral to oneself without being so. Journal of
Personality and Social Psychology, 77, 52537.
The Trouble with Nomothetic Assumptions in Moral Psychology 249

Blasi, A. (1990). Kohlbergs theory and moral motivation. New Directions for Child
and Adolescent Development, 1990, 517.
Cervone, D. (2004). The architecture of personality. Psychological Review, 111,
183204.
Cohen, T. R., Montoya, R. M., and Insko, C. A. (2006). Group morality and
intergroup relations: Cross-cultural and experimental evidence. Personality and
Social Psychology Bulletin, 32, 155972.
Colby, A., and Damon, W. (1992). Some do Care: Contemporary Lives of Moral
Commitment. New York: The Free Press.
Colvin, S., and Bagley, W. (1930). Character and behavior. In Colvin, Stephen S.
Bagley, William C. MacDonald, Marion E. (eds), Human Behavior: A First Book in
Psychology for Teachers (2nd ed. Rev.). New York, NY: MacMillan Co, pp. 292322.
Crockett, M. J., Clark, L., Hauser, M., and Robbins, T. (2010). Serotonin selectively
influences moral judgment and behavior through effects on harm aversion.
Proceedings of the National Academy of Sciences of the United States of America,
107, 174338.
DeAndrea, D. C., Carpenter, C., Shulman, H., and Levine, T. R. (2009). The
relationship between cheating behavior and sensation-seeking. Personality and
Individual Differences, 47, 9447.
Derryberry, W. P., and Thoma, S. J. (2005). Functional differences: Comparing moral
judgment developmental phases of consolidation and transition. Journal of Moral
Education, 34, 89106.
Doris, J. M. (2002). Lack of character: Personality and moral behavior. Cambridge:
Cambridge University Press.
, ed. (2010). The Moral Psychology Handbook. Oxford, UK: Oxford University Press.
Dunning, D., Perie, M., and Story, A. L. (1991). Self-serving prototypes of social
categories. Journal of Personality and Social Psychology, 61, 95768.
Frimer, J. A., and Walker, L. J. (2008). Towards a new paradigm of moral personhood.
Journal of Moral Education, 37, 33356.
Goodwin, G. P., and Darley, J. M. (2008). The psychology of meta-ethics: Exploring
objectivism. Cognition, 106, 133966.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., and Ditto, P. H. (2013).
Moral Foundations Theory: The pragmatic validity of moral pluralism. Advances
in Experimental Social Psychology, 47, 55130.
Graham, J., Haidt, J., and Nosek, B. A. (2009). Liberals and conservatives rely on different
sets of moral foundations. Journal of Personality and Social Psychology, 96, 102946.
Greene, J. D., and Paxton, J. M. (2009). Patterns of neural activity associated with
honest and dishonest moral decisions. Proceedings of the National Academy of
Sciences, 106, 1250611.
250 Advances in Experimental Moral Psychology

Gu, J., Zhong, C., and Page-Gould, E. (2013). Listen to your heart: When false
somatic feedback shapes moral behavior. Journal of Experimental Psychology,
142, 307.
Hart, D., Atkins, R., and Donnelly, T. M. (2006). Community service and moral
development. In M. Killen and J. Smetana (eds), Handbook of Moral Development.
Hillsdale, NJ: Lawrence Erlbaum, pp. 63356.
Immordino-Yang, M. H., McColl, A., Damasio, H., and Damasio, A. (2009). Neural
correlates of admiration and compassion. Proceedings of the National Academy of
Sciences, 106, 80216.
Isen, A. M., and Levin, P. F. (1972). Effect of feeling good on helping: Cookies and
kindness. Journal of Personality and Social Psychology, 21, 3848.
Jordan, J., Mullen, E., and Murnighan, J. K. (2011). Striving for the moral self: The
effects of recalling past moral actions on future moral behavior. Personality and
Social Psychology Bulletin, 37, 70113.
Kaiser, F. G., and Wilson, M. (2000). Assessing peoples general ecological behavior:
Across-cultural measure. Journal of Applied Social Psychology, 30, 95278.
Kayser, D., Greitemeyer, T., Fischer, P., and Frey, D. (2010). Why mood affects help
giving, but not moral courage: Comparing two types of prosocial behavior.
European Journal of Social Psychology, 40, 113657.
Kohlberg, L. (1970). Education for justice: A modern statement of the Platonic view.
In N. Sizer and T. Sizer (eds), Moral Education: Five Lectures. Cambridge, MA:
Harvard University Press, pp. 5683.
Kohlberg, L., and Mayer, R. (1972). Development as the Aim of Education. Harvard
Educational Review, 42(4).
Kouchaki, M. (2011). Vicarious moral licensing: The influence of others past
moral actions on moral behavior. Journal of personality and social psychology,
101, 702.
Lapsley, D. K., and Lasky, B. (2001). Prototypic moral character. Identity, 1, 34563.
Matsuba, M. K., and Walker, L. J. (2004). Extraordinary moral commitment: Young
adults involved in social organizations. Journal of Personality, 72, 41336.
McAdams, D. P. (2009). The moral personality. In D. Narvaez and D. K. Lapsley (eds),
Personality, Identity, and Character: Explorations in Moral Psychology. New York:
Cambridge University Press, pp. 1129.
Mischel, W., and Shoda, Y. (1995). A cognitive-affective system theory of personality:
Reconceptualizing situations, dispositions, dynamics, and invariance in
personality structure. Psychological Review, 102, 24668.
(1998). Reconciling processing dynamics and personality dispositions. Annual
review of psychology, 49, 22958.
The Trouble with Nomothetic Assumptions in Moral Psychology 251

(1999). Integrating dispositions and processing dynamics within a unified theory


of personality: The Cognitive Affective Personality System (CAPS). In L. A. Pervin
and O. John (eds), Handbook of Personality: Theory and Research. New York:
Guilford, pp. 197218.
Mullen, E., and Nadler, J. (2008). Moral spillovers: The effect of moral violations on
deviant behavior. Journal of Experimental Social Psychology, 44, 123945.
Narvaez, D., and Lapsley, D. K. (2009). Personality, Identity, and Character.
Cambridge: Cambridge University Press.
Narvaez, D., Lapsley, D. K., Hagele, S., and Lasky, B. (2006). Moral chronicity and
social information processing: Tests of a social cognitive approach to the moral
personality. Journal of Research in Personality, 40, 96685.
Osswald, S., Greitemeyer, T., Fischer, P., and Frey, D. (2010). Moral prototypes and
moral behavior: Specific effects on emotional precursors of moral behavior and on
moral behavior by the activation of moral prototypes. European Journal of Social
Psychology, 40, 107894.
Perugini, M., and Leone, L. (2009). Implicit self-concept and moral action. Journal of
Research in Personality, 43, 74754.
Pittel, S. M., and Mendelsohn, G. A. (1966). Measurement of moral values: A review
and critique. Psychological Bulletin, 66, 22.
Rand, D. G., Greene, J. D., and Nowak, M. A. (2012). Spontaneous giving and
calculated greed. Nature, 489, 42730.
Sachdeva, S., Iliev, R., and Medin, D. L. (2009). Sinning saints and saintly sinners: The paradox of moral self-regulation. Psychological Science, 20, 523–8.
Shafer-Landau, R. (1994). Ethical disagreement, ethical objectivism and moral indeterminacy. Philosophy and Phenomenological Research, 54(2), 331–44.
Skitka, L. J., and Bauman, C. W. (2008). Moral conviction and political engagement. Political Psychology, 29, 29–54.
Smith, K. D., Smith, S. T., and Christopher, J. C. (2007). What defines the good person? Cross-cultural comparisons of experts' models with lay prototypes. Journal of Cross-Cultural Psychology, 38, 333–60.
Teper, R., Inzlicht, M., and Page-Gould, E. (2011). Are we more moral than we think?: Exploring the role of affect in moral behavior and moral forecasting. Psychological Science, 22, 553–8.
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention.
Cambridge: Cambridge University Press.
Twenge, J. M., Baumeister, R. F., DeWall, C. N., Ciarocco, N. J., and Bartels, J. M. (2007). Social exclusion decreases prosocial behavior. Journal of Personality and Social Psychology, 92, 56–66.
Valdesolo, P., and DeSteno, D. (2008). The duality of virtue: Deconstructing the moral hypocrite. Journal of Experimental Social Psychology, 44, 1334–8.
Van Rompay, T., Vonk, D., and Fransen, M. (2009). The eye of the camera: Effects of security cameras on prosocial behavior. Environment and Behavior, 41, 60–74.
Walker, L. J., and Pitts, R. C. (1998). Naturalistic conceptions of moral maturity. Developmental Psychology, 34, 403–19.
Wright, J. C. (2010). On intuitional stability: The clear, the strong, and the paradigmatic. Cognition, 115, 491–503.
Wright, J. C., Cullum, J., and Schwab, N. (2008). The cognitive and affective dimensions of moral conviction: Implications for attitudinal and behavioral measures of interpersonal tolerance. Personality and Social Psychology Bulletin, 34, 1461–76.
Index

abortion 132, 149, 153, 157, 167, 211–14
agent moral relativism 210, 212–13, 226
agents 5, 7, 32, 73, 115, 174, 180, 183, 190, 211–16, 226–7
Algoe, S. 65
Allport, G. W. 42, 235, 247
ambient sensibilia 78
American Philosophical Association (APA) 92, 94, 98, 106n. 1, 131
Appiah, K. A. 131–2, 141
appraiser moral relativism 210
Aristotle 5
authority independence hypothesis 158–9

Batson, C. D. 39–40, 51, 246–7
behavioral immune system 10, 115–16, 121, 125
   moral judgment 118–21
   social attitudes 116–18
belief in a just world 192–3, 196, 200
Blass, T. 80
bystander apathy 80

Cannon, P. R. 113
charitable donation 97–8
Chomsky, N. 131
Cognitive Affective Personality System 235
cognitive science 130–1, 141
coin-flipping experiments 39–40
comparative disposition 76
competence 22–3
   character judgments 30, 32
   emotional responses 25
   moral cognition 32–3
   moral identity 30
   motivations 28
   status 24
   traits 28–30
competitiveness 24, 27, 58, 60, 65–6
complex scoring systems 43
Co-opt thesis 114, 134

Darley, J. M. 167, 171–4, 176, 183, 189–90, 193–5, 200, 203, 214
Dasgupta, N. 116
death penalty 148, 153
debunking argument 132–3, 135–6, 138
derogation 51, 57–8, 60
disagreement
   folk metaethical intuitions 172–82
   moral objectivism 194–5, 199–200
disgust
   amplifier view 112–13
   consequence view 112–13
   ethnic cleansing/child abuse 120
   foreign outgroups 117
   homosexuals 116–17
   moral cognition 133
   moralizer 112–13
   and moral judgment 111
   moral violations 119–21
   permissible actions 124
   repugnant foods, consumption of 119
   sexual practices 119
dispositions 74
   psychology of 77–80
disrupters 75, 81
domain theory of attitudes
   authority independence hypothesis 158–9
   emotion 155–8
   motivating behavior 154–5
   schematic representation 150
   universalism and objectivism 152–3
Durwin, A. 191

elevation 6, 64–5, 246
   prosociality 61–3
email responsiveness 95
emotion 155–8
Engel, C. 37
Entanglement thesis 134
ethicists 8
   metaethicists 12
   virtue ethicist 23, 33, 91–106
expectation confirmation 79

folk metaethical intuitions
   Beebe and Sackris work 168–72
   disagreement studies 172–6
   face evaluation studies 176–80
   Knobe effect 180–2
folk psychology 131, 136, 139
foreign outgroups 117

general belief in a just world measure (GBJW) 197–8, 202–3
generosity 4, 40, 57, 61, 65, 67, 76, 82
Giner-Sorolla, R. S. 119
Goodwin, G. P. 167, 171–4, 176, 183, 189–90, 193–5, 200, 203, 214
Greene, J. 130, 132, 141
Gutierrez, R. 119

Haidt, J. 8, 60, 65, 91, 99, 113, 118–19, 121
Harman, G. 139
Helion, C. 121
Helzer, E. G. 122
Herrmann, B. 59
Hersch, M. 119
high-fidelity virtues 76–7, 81
homosexuals 116–17, 119
honesty 23, 76, 81–2, 84, 96, 245
Horberg, E. J. 119, 122
Hornsey, M. J. 151, 160
hypocrisy 39–40, 51, 244, 246

Immordino-Yang, M. H. 246
intuitive theory of metaethics 205

Knobe, J. 174, 205
Knowledge-and-Appraisal Personality Architecture 235
Kohlberg, L. 111, 158, 236
Koleva, S. P. 113

Lewis, D. K. 74, 76
Linguistic Inquiry and Word Count (LIWC) 50
Loewenstein, G. 174
low-fidelity virtues 76–7, 81
Lyons, D. 210
Lytle, B. L. 153

Marquez, M. J. 58
Mayer, R. 236
McAdams, D. P. 43
meat consumption 59, 96–7
metaethical beliefs
   belief in a just world 192–3
   moral objectivism 190–1
   moral progress 191–2
metaethical commitments 15, 183, 188, 193, 196, 199–206
   intuitions 167–8, 180
   objectivity 168, 173–9
Milgram paradigm 79–80
mind perception 31–2
Minson, J. A. 59
Monin, B. 58–9
mood modulators 78–9
(moral) character 1, 6–7, 21–32, 43, 49, 52, 73, 82, 86, 139, 192
moral cognition 11, 15–16, 21, 23–5
   competence 29
   disgust 133
   mind perception and morality 32
   person perception 29
moral conviction
   abortion 149
   authority independence hypothesis 158–9
   face valid items 150–1
   motivational impact 155
   nonconformity hypothesis 159–60
   strong attitudes 148
moral exemplars 5–6, 9, 29, 44, 52, 167
   vs. ordinary adults 45–9
moral judgment
   act-centered models 32
   behavioral immune system 118–21
   and disgust 111–14, 118, 121–4
   emotions 156
   mind perception 32–3
   moral relativism 209–12, 215, 217
   neuropsychology of 130
   person-centered models 32
moral motivation
   calculation of 50
   projective measure hypotheses 50–2
moral objectivism 188–90
   disagreement measure 194–5, 199–200
   just world measures 196, 200–1
   truth-value measure 195–6, 200
moral progress 191–3, 196, 200–1
moral relativism
   agent moral relativism 210, 212–13, 226
   appraiser moral relativism 212–14
   kinds of 209–10
Morgan, G. S. 151, 153
motivating behavior 154–5
Mullen, E. 159

Nadler, J. 159
Narrow Principles model 100–2
Nichols, S. 174, 205
nonconformity hypothesis 159–60

objectivism 152–3 see also moral objectivism
objectivist beliefs 190
Oosterhof, N. 177, 179
operationalizing morality
   first-person approach 234–5, 243–4
   mixed approach 244–6
   third-person approach 234–7, 242–3
outgroup bias 79

Parks, C. D. 60
patients 32, 205
peer evaluations 16, 84, 92–3, 160, 174
personal preference 149
person perception
   competence 26
   evaluation, dimensions of 3
   moral character 22–4
   moral cognition 29
Plato 209
Power of Reason view 91, 100, 102–3
Pritchard, D. 83
professional ethicists, moral behavior of
   charitable donation 97–8
   deficient intuitions, compensation for 103–4
   email responsiveness 95
   meat consumption 96–7
   Narrow Principles model 100–2
   Power of Reason view 102–3
   rationally driven moral improvement 104–6
   rational tail view 99–100
   voting 92
prosociality 47, 56–7
   elevation 61–3
punishment 32, 57, 60, 174, 215–21, 227

racial cognition 139–40
Rational Tail view 91, 99–100
Rawls, J. 99
relativist beliefs 190
repugnant foods, consumption of 119
reputational status 9, 52, 56–60
Rosenberg, S. 22
Royzman, E. B. 119
Rozin, P. 114, 119

Sackris, D. 167–9, 171–3, 175, 183
Sawyer, P. J. 58
selective debunking argument see debunking argument
self-affirmation 62
self-report measures 16, 37, 44–5, 47–8, 51
   charitable donation 97–8
   disgust 120
   expedience 42
   limitations 41–2
   meat consumption 96–7
   objectivity 42
   selfish 38
Small, D. A. 174
Smith, A. 27–8
social attitudes 112
   behavioral immune system 116–18
social domain theory 149 see also domain theory of attitudes
specialization 27–8, 93
spoken words 43, 47–9
status 4, 9, 14, 24, 44, 56–7, 63–4, 66, 101, 135, 152
stereotype content model 25
Stohr, K. 94
Stone, A. B. 60
strong attitudes 148, 161
subjunctive conditional analysis 74–5, 77–8, 81–2

Terrizzi, J. A. 117
Thematic Apperception Test 43
Todorov, A. 177, 179
Tybur, J. M. 114

universalism 152–3, 211

virtues
   ambient sensibilia 78
   asocial situational influences 78–9
   high-fidelity virtues 76–7, 81
   low-fidelity virtues 76–7, 81
   mood modulators 78–9
   social influences 79–80

Walker, L. J. 44–5, 234
warmth 22
   character judgments 30, 32
   competition and status 24
   elevation 61
   emotional responses 25
   perceptions 29
   traits 23, 28, 31
weak disposition 76
Wheatley, T. 113
Williams, B. 209

Young, L. 191
yuck factor 132–3, 136

Zhong, C. B. 123