
POST-MIDTERM HANDOUTS ON ETHICS

The Role of Feelings in Moral Decision-Making by Dr. John D. Banja


During the past decade, the field of ethics has witnessed a great deal of interest in the role
of emotions and feelings in making moral decisions. There are a number of reasons for this,
perhaps beginning with the appearance of feminist, or “care,” approaches to ethics in the
1980s. This model emerged as a counterpoint to 2500 years of Western moral philosophy,
whose major figures, such as Plato and Immanuel Kant, were very concerned about feelings
influencing moral decisions. They argued that allowing feelings to sway ethical reflection
would distort moral reasoning and that if anything is important about making moral
decisions, it is that moral reasoning be pristinely logical and not “contaminated” by
emotions, passions, and sentiments.

In response, scholars such as Carol Gilligan and Nel Noddings have argued that a wholesale
distrust of feelings neglects the importance of relationships in our moral life and that
purely logical or rational approaches to ethical situations ignore the degree to which
empathy, emotional attachments, communal ties, and respecting another’s feelings inform
the moral dimensions of those relationships.

Perhaps the primary reason a great deal of ethical attention has recently turned toward
feelings and emotions in our moral life is the continuing contributions of neuroscientists
who demonstrate that all thinking and reasoning, not just moral contemplation, is laced
with feeling (or affect). Neurologist Antonio Damasio, one of the leading figures in this
research, has offered convincing findings that whenever reasoning and decision making are
occurring, the brain’s feeling circuits are coactive with its reasoning or thinking circuits. In
fact, dissociating feeling from thinking is largely a neurological impossibility. As we
consciously (and mostly unconsciously) process our thoughts, beliefs, perceptions,
imaginings, memories, anticipations, and action plans, the brain’s feeling and reasoning
circuits are working together to arrive at a plan that “feels right” to the organism.

Immensely impressive about Damasio’s work is the way in which he and his colleagues
have shown that when the brain’s feeling circuits have been damaged by trauma or disease,
decisional ability becomes severely impaired. In contrast to the Kantian or Platonic
repudiation of feelings in making decisions, Damasio and his colleagues have shown that damage
to the brain’s feeling circuitry often disposes the individual to extremely poor or
maladaptive decision-making, especially in important life matters involving relationships,
finances, and job performance. As I will discuss below, feelings are increasingly being
understood as a kind of “compass” for the brain’s reasoning.
Feelings direct reason or the intellect toward what is and isn’t important; they prioritize
the factors or elements in a moral situation such that our decision making is efficient and
not haphazard; they appraise an object or situation as desirable, undesirable, dangerous, or
neutral; and they help to encode memories (eg, “That was awful” or “That was marvelous”).
Consequently, behaviors that end with satisfying results are reinforced, and those that
don’t become “learning experiences” that shouldn’t be repeated. (Thus, sociopaths, who
cannot seem to learn from trial and error or from the consequences of their actions, have
been consistently found to have serious malfunctioning in the circuitry that connects the
feeling with the thinking parts of the brain, primarily in the amygdala-anterior cingulate-
frontal and prefrontal cortical loops.)

Now, it is certainly true that feelings can occasionally distort reasoning, especially when we
argue backwards, that is, when we already know the conclusion we want—usually because
that conclusion is infused with a passionate commitment, whereas its contrary strikes us as
disgusting—and we select and adapt our evidence to suit the very conclusion we want to
adopt. As I will discuss, these cases are all too familiar.

On the other hand, especially as we go about the mundane activities of daily life, the brain’s
feeling circuits are utterly indispensable in making decisions that manifest our “values,”
what we believe is important. So I want to spend this article reclaiming the significance of
feelings in making moral decisions. Because I don’t want to get too neurologically technical,
I consistently use the word “feeling” in the following, although “emotion” might
occasionally be more accurate, whereas the safest, all-purpose technical word to use is
probably “affect.” Even so, I think the points will come across intelligibly and fairly
accurately. Also, although the following material can apply to any profession, I offer
examples familiar to case managers so that my readers’ feeling circuitry will be positively
disposed.

Feelings and Attention

An interesting distinction between feelings and moods is that feelings (and emotions) are
about things, situations, or objects, whereas moods are largely diffuse, generalized
sensations that aren’t about anything in particular. Thus, moods tend to be cognitively
troublesome, as they often shape whatever we are thinking about into the mold of the
mood. So when I am gloomy, everything I think about is colored with negativity, but when I
am upbeat, everything in my world is marvelous or hopeful.

Feelings and especially emotions, however, tend to be object oriented, or about something.
This is why feelings are so important in our daily living. They direct and narrow attention
and, by doing so, inevitably determine what will or won’t figure in our decision making.
Thus, beginners, or persons who are untutored in a particular subject, feel flustered,
anxious, or lost, because they don’t know what to attend to first or how to prioritize
multiple tasks. Their feelings signal their ineptness. Their feelings don’t know where to
turn.

Every veteran nurse is probably able to empathize with the beginner, because the latter
often doesn’t know what to do first or next, especially if the situation departs from a classic
textbook example. Notice that pure reasoning won’t perform these functions very
effectively. Pure reason doesn’t “care” about anything; it doesn’t provide motivation to do
anything; it neither has values nor finds anything interesting. Feelings, on the other hand,
supply that “oomph” to our thinking, by telling it where to go and what to concentrate on.
When feelings are vague, incoherent, or contradictory—as they often are with beginners
who haven’t had enough experience with their subject— their resulting actions often seem
hesitant, directionless, arbitrary, or harried.

To take an ethical example, I occasionally attend an ethics grand rounds that is presented
by a very bright and well-intentioned individual who isn’t particularly steeped in ethical
analysis. Frequently, the presenter provides the audience with a case study that is dense
with clinical details about the patient’s medical history, employment history, drug regimen,
vital signs, lab tests, findings, differential diagnoses, and so forth, even though none of this
information is relevant to the ethical dimensions of the case. This typifies “misplaced
salience.” Because a grand-rounds type of ethical reflection might provoke anxiety in this
individual, he or she spends an inordinate amount of time discussing items with which he
or she is familiar and comfortable. This person cannot use his or her feelings (or moral sense or
moral intuition) as a resource to focus attention on what is important.

Alternatively, how often has a health professional tried to avoid a particular topic or
change the direction of a conversation when its content starts evoking feelings that he or
she finds upsetting? For example, how many health professionals know that a patient is
dying but carefully avoid calling attention to the fact for fear that doing so would provoke
emotional discomfort among family members or with the patient?

I believe that one of the best ways to sharpen our moral attention is to immerse ourselves
in ethical case studies and observe how experts analyze ethical conflicts. This kind of
practice both sharpens the intellect in acquainting it with reasons for or against a decision
and instructs and orients our feelings as to what, in the occasionally vast assemblage of
moral factors, should most “concern” us. (Strong feelings inevitably express our concerns,
incidentally.) The health professional anxiously presenting that ethics grand rounds hasn’t
had those learning experiences. His or her brain must learn what elements in a moral
situation it should attend to, and it can do that only by repeated encounters with ethical
dilemmas. Over time, these encounters should facilitate optimal feeling-thinking-acting
triads to develop and result in the most satisfying or gratifying end points—something that
individuals who are feeling-impaired (such as narcissists, sociopaths, and borderline
personalities) cannot do.

Feelings Are Informative

In conjunction with the way feelings define the salient or important items in a moral
situation, they also are a key source of information in selecting which strategies we
ultimately use. Take, for example, a case manager whose supervisor asks her to do
something that doesn’t feel right to her; she is told “not to worry” and that “everybody does
it.” This case manager’s moral feelings are appraising the supervisor’s request as ethically
problematic, but her supervisor is countering by giving her reasons to think, and especially
to feel, differently. Interestingly, the supervisor is trying to get the case manager to believe
differently about what she is doing, in the hopes of getting her to feel differently about it. Of
course, if this diversion succeeds and the case manager’s moral feelings are blunted and
replaced by her supervisor’s moral sensibilities (ie, “This is how you should feel when I ask
you to …”), the case manager might be rewarded or at least not penalized, which tends to
reinforce the replacement of the case manager’s original sensibilities with her company’s.

It is quite extraordinary how feelings, rather than the intellect, are the first alert to an
ethically troublesome situation. Yet this should not be surprising when we consider how
feelings are so blatantly evident when people heatedly argue about controversial topics
such as abortion, physician-assisted suicide, managed care, the medical use of marijuana,
and so on. Many people are “passionate” about these beliefs, which brings up a third
feature of feelings.

Feelings and Belief Formation

Not only do feelings enable us to determine what is important and give us critical feedback
on what is going on around us, they are crucial in belief formation. When we have coupled a
thought process with a particular behavior and seen it consistently succeed (eg, “Wait until
your anger subsides before making a decision”), its favorable quality upon recall
recommends that we repeat it in the future. The more we believe something, the more our
feelings of certainty, confidence, commitment, and so forth deepen and increase the
likelihood that we are going to act in a way congruent with that belief.

Furthermore, we tend to develop new beliefs whose feeling components are similar to
those we already hold, and we tend to reject information that conflicts with those beliefs.
We might refer to this phenomenon as “belief clusters” because we often aggregate or
assimilate beliefs on the basis of the way they resonate with preexisting beliefs. For
example, it is common to find persons with disability opposing embryonic stem cell
research, abortion, physician-assisted suicide, and the discontinuation of life-prolonging
treatment, in part because they understand all these phenomena to pose considerable
threat or peril to the welfare of persons with disability in general. As one article noted,
these life-ending actions manifest a “hurtful attitude” toward persons with disability, as
they imply that “imperfect” or “defective” organisms should not live.

On a simpler note, can you imagine Newt Gingrich ideologically turning into Ted Kennedy,
or vice versa? The point here is that feelings are not only informative, but because of the
way they affect belief formation, they can account for who we are at the deepest levels of
our professional psyches. Consider, for example, how the noncompliant diabetic patient
commonly arouses the health professional’s ire or frustration, which illustrates how that
professional’s deep feelings are tightly bound up with his or her identity. The diabetic
patient’s noncompliance is a figurative slap in the face to the health professional, whose
passionate dedication to relieving pain and suffering is being arrogantly dismissed by this
patient. How easy it is to dislike this patient and convince ourselves that he or she is not
worth our time or effort, given the way he or she can be an unending source of anger and
frustration.

Monitoring Beliefs and Feelings

Beliefs can stimulate feelings: if I believe that “All CEOs make too much money,” I might
well be envious or resentful when I think about them. Alternatively, feelings can stimulate
beliefs: if I exclaim, “The era of disease management is upon us!” you can bet I believe that
its core issues are important and that more health professionals should broaden their
disease management knowledge base.

Given the way beliefs and their associated feelings are linked, it is important that we get
both our beliefs and feelings right when it comes to moral decision making. In other words,
it is important that we hold morally correct beliefs that in turn are supported by robust and
reinforcing moral feelings. People with personality disorders, for instance, manifest
impairments in just such feeling-belief dyads. Because narcissists feel anxious or insecure
about their lovability, they develop grandiose beliefs about themselves and their
accomplishments and behave obnoxiously. Because obsessive-compulsives believe their
work must be absolutely perfect, they feel terribly anxious and uncomfortable around
lackadaisical workers and go into a tizzy when a task isn’t completed just so.

All of this warrants at least two things. The first is that all of us should constantly monitor
our moral feelings in light of Kant’s and Plato’s worries. Our moral beliefs indeed can be
disrupted or distorted by our feelings. The case manager, for example, who is particularly
fond of a particular physician might discount evidence that he occasionally provides poor
care. The case manager who dislikes a particular client might lose her objectivity in writing
reports or believe that this patient isn’t “owed” the kind of advocacy she normally extends
to clients. The case manager who suspects a client is malingering—a quasi-belief probably
infused with feelings of suspicion or even hostility—might find herself inclined to accept
only evidence that confirms her suspicions (or to readily believe the worst that she hears
about him). Will these case managers be able to review both their feelings and their beliefs
in a reasonably objective way, have the courage of their feelings when they seem reliable,
but try hard to give their client every reasonable benefit of the doubt when their feelings aren’t
adequately supported by data? Advocacy demands that very objectivity.

The second ethical warrant suggested by the admixture of feelings with beliefs is that
organizations must insist on and reward right moral action and express disgust for
behavior that falls below ethical standards. Creating and maintaining such a moral
atmosphere will largely be the work of an organization’s leadership, as its moral pulse or
sense of conscience rarely develops from the bottom up. Consequently, and as I have
recommended in previous columns, organizations should incorporate ethics training and
set aside time for their case managers to analyze difficult cases when they experience
moral confusion or when they believe (and feel!) that they are being ethically
compromised.

Because our feelings are a product of our neural circuitry, and our neural circuitry is a
product of evolution, feelings include not only the brute, primitive emotions of fear, anger,
and envy but also a kaleidoscope of affective shadings and colorings whereby we can enjoy
the most sophisticated kinds of experiences, especially those involving art, culture,
literature, and spirituality. Ultimately, though, feelings have evolved to serve a basic
function: to help us make decisions that preserve the health and integrity of our lives.
Thinking needs feeling to infuse it with direction and value, but feeling needs thinking for
analyzing and calibrating what our needs, interests, desires, proclivities, and intentions
should reasonably be.

It is interesting how feelings and thinking serve to adjust one another so that when they are
at their best, they move together like Fred Astaire and Ginger Rogers. When they are
catastrophically out of synch, they can disrupt nations and trigger worldwide calamities.
Determining the correct moral course of action by selecting the right reasons supported by
the right feelings is no easy task. It is, however, what moral responsibility and
accountability ultimately demand.

Reason and Impartiality as Minimum Requirements for Morality

Reason is the ability of the mind to think, understand, and form judgments through a
process of logic. It is the distinctively human capacity to use new or existing information
as a basis for consciously making sense of things by applying logic. It is also associated
with thinking, cognition, and intellect.

Impartiality means manifesting objectivity. It is the quality of being unbiased and
objective in making moral decisions, underscoring that a [morally] impartial person
makes moral decisions relative to the welfare of the majority and not for specific people
alone.

Why are Reason and Impartiality the minimum requirements for morality?

Reason and impartiality become the basic prerequisites for morality because one is
expected to deliver clear, concise, rightful, and appropriate judgments, made out of logic
and understanding, in an unbiased and unprejudiced manner, while considering the
general welfare, in order to arrive at sound moral decisions.

Ethical Decision-Making
In this module, we provide some guiding principles and pathways to help guide ethical
decision-making. These are a series of basic questions that should be asked when
confronted with ethical dilemmas. These are often complex situations with no clear-cut
resolution, and without a right or wrong answer. But these decision-making processes
will go a long way towards helping all of us make informed decisions that can justify
consequent actions.

Ethical Reasoning Can Be Taught: Ethical reasoning is a way of thinking about issues
of right and wrong. Processes of reasoning can be taught, and school is an appropriate
place to teach them. The reason is that, although parents and religious schools may
teach ethics, they do not always teach ethical reasoning. See the article by Sternberg,
Robert J., "Teaching for Ethical Reasoning in Liberal Education," Liberal Education 96.3
(2010): 32-37.

And, like learning to play baseball or play the violin, it's important to practice early and
often. So, let's get started:

Beneficence
Beneficence is the concept that scientific research should have as a goal the welfare of
society. It is rooted in medical research, where the central tenet is "do no harm" (with the
corollaries: remove harm, prevent harm, optimize benefits, "do good"). For a more
expansive introduction to beneficence, see the essay on The Principles of Beneficence
in Applied Ethics from the Stanford Encyclopedia of Philosophy. Some simple guiding
questions in applying the concept of beneficence to ethical dilemmas include:
o Who benefits?
o Who are the stakeholders?
o Who are the decision-makers?
o Who is impacted?
o What are the risks?
Take a look at the video 'Causing Harm'
(http://ethicsunwrapped.utexas.edu/video/causing-harm)--"Causing harm explores
the different types of harm that may be
caused to people or groups and the potential reasons we may have for justifying these
harms." From "Ethics Unwrapped", McCombs School of Business, University of Texas-
Austin.

A 7-Step Guide to Ethical Decision-Making


The following is a summary of the seven-step guide to ethical decision-making (Davis,
M. (1999). Ethics and the University. New York: Routledge, pp. 166-167).

1. State the problem.
o For example, "there's something about this decision that makes me
uncomfortable" or "do I have a conflict of interest?"
2. Check the facts.
o Many problems disappear upon closer examination of the situation, while
others change radically.
o For example, consider the persons involved, laws, professional codes, and
other practical constraints.
3. Identify relevant factors (internal and external).
4. Develop a list of options.
o Be imaginative; try to avoid a "dilemma": not "yes" or "no" but whom to go to,
what to say.
5. Test the options. Use some of the following tests:
o harm test: Does this option do less harm than the alternatives?
o publicity test: Would I want my choice of this option published in the
newspaper?
o defensibility test: Could I defend my choice of this option before a
congressional committee or committee of peers?
o reversibility test: Would I still think this option was a good choice if I were
adversely affected by it?
o colleague test: What do my colleagues say when I describe my problem and
suggest this option as my solution?
o professional test: What might my profession's governing body for ethics say
about this option?
o organization test: What does my company's ethics officer or legal counsel say
about this?
6. Make a choice based on steps 1-5.
7. Review steps 1-6. How can you reduce the likelihood that you will need to make a
similar decision again?
o Are there any precautions you can take as an individual (announce your
policy on the question, change jobs, etc.)?
o Is there any way to have more support next time?
o Is there any way to change the organization (for example, suggest policy
change at next departmental meeting)?
[Having made a decision based on the process above, are you now prepared
to ACT?]

A Seven Step Process for Making Ethical Decisions--An example from the "Orientation
to Energy and Sustainability Policy" course at Penn State.

Additional Approaches to Ethical Decision Making

Ethical Decision-Making Model based on work by Shaun Taylor.


Shaun Taylor's presentation, Geoethics Forums, given at the 2014 Teaching GeoEthics
workshop, provided a simple model to help students engage in ethical decision-making
that includes: a) the context/facts of the situation, b) the stakeholders, and c) the
decision-makers; d) these inform a number of alternate choices, e) which are mediated
through the evaluation of impacts and negotiations among the parties, leading to f)
selection of an optimal choice. Taylor provides guidance for what makes a good ethical
dilemma discussion, including:
o Trust, respect, disagreement without personal attacks
o Being judgmental vs. making a judgment
o Emphasize process vs. conclusion
o Uncertainty is OK
o Description then prescription
Teaching Activity: GeoEthics Forums--The Grey Side of Green (a guide for ethics
decision making)

Daniel Vallero also addressed ethical decision making in his presentation at the 2014
Teaching GeoEthics workshop, defining this six-step approach to ethical decision
making:

1. State or define the problem/issue.
2. Gather information ("facts") from all sides.
3. Delineate all possible resolutions.
4. Apply different values, rules, principles, and regulations to the different options.
5. Resolve conflicts among values, rules, etc.
6. Make a decision and act.

Moral Courage
Moral courage is a prosocial behavior with high social costs and no (or rare) direct
rewards for the actor. In situations that demand a morally courageous intervention,
instances of injustice happen: human rights are violated, persons are treated unfairly
and in a degrading manner, or natural and cultural assets are endangered. Such
situations involve discrimination against foreigners or other minorities, violence and
aggression against weaker individuals, sexual harassment or abuse, mobbing (workplace
bullying), or illegal business practices.

It is “the expression of personal views and values in the face of dissension and rejection”
and occurs “when an individual stands up to someone with power over him or her (e.g., a
boss) for the greater good.”
Similarities and Differences Between Moral Courage and Other Prosocial
Concepts
Differences Between Moral Courage and Helping Behavior: The Role of Negative
Social Consequences
The anticipated negative social consequences in case of prosocial action distinguish
moral courage from other prosocial behaviors. For helping behavior, positive
consequences, such as plaudits or acknowledgment, can be expected. Moral courage,
however, can result in negative social consequences, such as being insulted, excluded,
or even attacked. Helping or donating could also lead to negative consequences for the
help giver (e.g., losing time or money) but not to negative social consequences.
People’s implicit theories of moral courage and helping behavior do in fact differ:
perceptions of prosocial behavior as an act of moral courage depend on expected
negative social consequences for the actor, whereas perceptions of prosocial behavior
as helping behavior do not.
Moral Courage and Heroism
Moral courage shows certain similarities with heroism. Heroism has been described as
taking risks “on behalf of one or more other people, despite the possibility of dying or
suffering serious physical consequences.” Regarding the possibility of suffering serious physical
consequences, moral courage and heroism overlap: As already mentioned, when a
person acts morally courageously he or she runs the risk of negative social
consequences such as being insulted by a perpetrator; moreover, an act of moral
courage can also result in physical violence by the perpetrator against the helper and
thus lead to serious injuries or even to death. An important difference, however,
between heroism and moral courage is that in the immediate situation (and also
afterward), a hero can expect positive social consequences, such as applause or
admiration. In contrast, in the immediate moral courage situation (and often also
afterward) a helper cannot expect positive outcomes but rather negative social
consequences, such as being insulted, excluded, or even persecuted by one or more
perpetrators.
Classical Determinants of Helping Behavior and Their Failure to Predict Moral
Courage
Certain classical determinants or predictors of helping behavior exist. These predictors
cannot simply be transferred to moral courage. Relevant experiments are reported in the
next section.
The Role of Bystanders
Plenty of studies have revealed that the presence of others inhibits helping behavior. The
moral courage situations faced in the presence of bystanders, however, are recognized
more quickly and less ambiguously as real emergency situations than harmless (helping)
situations are. Furthermore, the costs for the victim in case of nonintervention are
higher in a dangerous moral courage situation than in a more harmless helping
situation. Thus, arousal in a moral courage situation is higher than in a helping situation,
and an intervention becomes more probable, independently of whether a passive
bystander is present or not.
The Role of Mood
A further classical determinant of helping behavior is mood. Previous studies
demonstrated that the decision to help is influenced by the mood of the potential helper.
People are more likely to help others when they are in a positive relative to a neutral
mood because helping others is an excellent way of maintaining or prolonging positive
mood (Isen & Levin, 1972). In addition, negative, relative to neutral, mood states are
shown to increase prosocial behavior because helping dispels negative mood (Carlson
& Miller, 1987; Cialdini, Baumann, & Kenrick, 1981). Because moral courage situations
are associated with fewer anticipated positive social consequences and more
anticipated negative social consequences relative to helping situations, one may expect
that showing moral courage actually worsens an actor’s mood. As a consequence,
whereas positive and negative mood states (as opposed to neutral mood states) can be
expected to lead to more helping behavior, mood should not affect moral courage.
Factors That Foster Moral Courage
In the preceding sections we demonstrated factors that do not promote moral courage.
In the following sections we seek to show variables that have the potential to foster
morally courageous behavior.
The Role of Norms
The importance of social norms for promoting prosocial behavior has been
demonstrated in a variety of studies. It was demonstrated that anger, awareness of the
situation, and responsibility takeover (i.e., feeling more responsible to act) mediated
the intention to intervene. When prosocial norms were made salient, participants
reported more anger, a higher awareness of the situation, and more responsibility
takeover. Anger, awareness of the situation, and responsibility takeover in turn fostered
the intention to show moral courage. Prosocial norms thus have the potential to foster
moral courage, but they have to be strongly activated beforehand to display an effect.
The Role of Anger
In our description of moral courage, anger is an integral component: When a person
acts morally courageously, he or she is in most cases angry at a perpetrator or he or
she is upset because of injustice, violations of human dignity, and so on. Anger seems
to play an important role for moral courage. Anger possibly motivates or strengthens the
intentions to act or the behavior itself. But what kind of anger is this? The following
theoretical considerations demonstrate that a conclusive answer cannot be given yet.
When people show moral courage, they stand up for a greater good and seek to
enforce ethical norms without considering their own social costs, because one or more
perpetrators have violated ethical norms, human rights, or democratic values.
Therefore, one could surmise that the anger related to moral courage is moral outrage.
Moral outrage is anger provoked by the perception that a moral standard (in most cases
a standard of fairness or justice) has been violated.
Personality and Moral Courage

Besides situational factors that promote moral courage, dispositional variables also play an
important role. As noted earlier, Niesta et al. (2008) found justice sensitivity, civil
disobedience, resistance to group pressure, and moral mandate to be conducive
determinants of moral courage. In an earlier study, Kuhl (1986) demonstrated that high
self-assurance, which in turn affects how difficult the situation is perceived to be, fosters
moral courage. Hermann and Meyer (2000) also found self-assurance, self-efficacy, and social
competence as well as moral beliefs and responsibility takeover to be important. In a study
with more than 700 pupils, Labuhn, Wagner, van Dick, and Christ (2004) showed that the
more empathy and interethnic contact and the less dominance orientation pupils had,
the higher was their intention to show moral courage.
