Clarifications:
An agent's duty at any given time, according to act utilitarianism, is not to act so that the
resulting world has as much overall well-being as a world can have, but to act so that the
resulting world has as much well-being as any world that could have resulted from the
acts that were among the agent's options at the time of acting.
Maximisation is relative to the options available to the agent; it is not maximisation in
the sense of leaving no increases to be achieved subsequently.
What are we to make of a world's overall well-being? It is the sum of the well-being had by
the entities capable of having well-being (sentient creatures). Morality is NOT concerned
with achieving the greatest happiness for the greatest number - it is often the case that the
most beneficial act is different from the act that would distribute the benefit most widely.
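The option-relative maximisation described above can be stated formally. The notation below is illustrative, not drawn from the notes: O is the agent's set of available acts, and W(a) the total well-being of the world resulting from act a.

```latex
% Illustrative notation (assumed): O = set of acts available to the agent;
% W(a) = \sum_{s} w_s(a), the sum over sentient creatures s of the
% well-being w_s(a) each would have in the world resulting from act a.
a \text{ is right} \iff W(a) \ge W(a') \quad \text{for all } a' \in O
```

Note the relativisation to O: the right act need only be unsurpassed among the agent's options, not the best of all possible worlds.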
The moral value of an action does not depend, at all, on whether the act complies with any
kind of moral rule. That said, act utilitarianism is not blind to the usefulness of moral rules
as heuristic tools. Similarly, an understanding of customary morality is important for
foreseeing probable consequences and the likelihood of harm being caused. However,
rules have no right-making characteristics. Actions are not morally evaluated by
reference to rules, even if rules are invoked as heuristic devices.
Important Distinctions:
NB: Act utilitarianism requires the maximisation of well-being but is compatible with
various distinct concepts of well-being.
Similarly, different conceptions of act utilitarianism differ over whether the moral
value of an act depends on its actual consequences or on those the agent intended
(or could reasonably have expected when the act was performed).
Important also to distinguish between the criterion of moral evaluation and an
action-guiding principle. The majority of criticisms aimed at act utilitarianism are
predicated on the belief that AU is offered as an action-guiding principle as well as a
criterion of moral evaluation.
Advantages:
Objections:
Rule Utilitarianism:
Definition: An action is right in so far as it conforms to an authoritative moral code or set
of rules whose general acceptance value is at least as good (in promoting well-being) as
that of any available alternative code.
The authoritative moral code is NOT merely a decision-procedure. It provides the moral
standard according to which actions are morally evaluated.
NB: like act utilitarianism, RU can adopt different theories of the good, frame its theory
in terms of actual or expected outcomes, and make either average or total utility the
object of promotion.
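The definition above can be put formally. The symbols are assumed for illustration: C ranges over available moral codes, and A(C) is the general-acceptance value of code C.

```latex
% Illustrative notation (assumed): \mathcal{C} = set of available moral codes;
% A(C) = the general-acceptance value of code C, i.e. the well-being that
% would be promoted if C were generally accepted.
a \text{ is right} \iff a \text{ conforms to a code } C^* \text{ such that }
A(C^*) \ge A(C) \quad \text{for all } C \in \mathcal{C}
```

The contrast with act utilitarianism is that maximisation operates at the level of codes, not of individual acts; an individual right act need not itself be optimific.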
Clarifications:
Advantages:
1. John Harsanyi: Other things being equal, a rule-utilitarian society would enjoy a much
higher level of social utility than an act-utilitarian society would
- If the moral norms of one's society allow for things like promises being broken whenever
this would return a marginal gain in utility, trust in one another will be undermined and
people will have less incentive to plan their future activities on the expectation that
promises made to them will be kept.
- Co-ordination effect: act utilitarians will be unable to produce desirable outcomes
through collective action - voting being the standard example.
Issue with Harsanyi: he assumes that an act utilitarian society will internalise only
the AU principle, thereby using the standard of moral rightness as the decision-
procedure. He does not demonstrate that a society of rule-utilitarians would enjoy a
higher level of utility than a society of sophisticated (multi-level) act utilitarians
who rely on secondary principles as heuristic devices at the deliberative level.
2. Coherence with moral intuitions. Maintains the binding force of special obligations such
as promises, and the partial concern given to, say, one's children. Brad Hooker argues along
these lines, using moral intuitions evidentially to support a theory of the good which
assigns intrinsic value to things other than welfare (such as the general inviolability of
promises), and arguing that morality does not demand the level of personal sacrifice that
AU seems to [rule-consequentialism].
Objections:
1. Collapses into act utilitarianism? [See what does general adoption entail?]. This
objection depends on the particular formulation of rule utilitarianism.
2. Rubber Duck objection. Named after the article by Frances Howard-Snyder: Rule
Consequentialism is a Rubber Duck. Argues that RU is not really a utilitarian theory
because it is not agent-neutral. Howard-Snyder maintains that consequentialism is by
definition agent-neutral: its action prescriptions can be made without any essential
reference to the agent. Rule utilitarianism, by contrast, allows for agent-focused features
such as greater concern for one's immediate family.
- Is this really convincing? Is ethical egoism not a consequentialist theory?
- Issue of name? What does this objection say about the viability of RU as a moral
theory?
3. Incoherence objection. Also known as the rule-worship objection. One assumes that,
for RU, maximising utility is a goal of ultimate importance - hence the authoritative moral
code derives its authority in virtue of the code's ability to maximise utility. If maximising
utility is of such paramount importance, however, then whenever one has a choice
between obeying the ideal moral code and performing an action that contravenes the code
but would produce more utility, why should the code be followed?
- J.J.C. Smart: "I can understand 'it is optimific' as a reason for action, but why should 'it
is a member of a class of actions which are usually optimific' or 'it is a member of a
class of actions which as a class are more optimific than any alternative general class'
be a good reason?"
- Summary: RU insists that we should abide by a set of rules even when the same
considerations that recommend those rules in the first place count in favour of
breaking them.
- Challenge: Not all RU theories will endorse the idea that maximising utility is a goal of
overriding significance.
Satisficing Utilitarianism:
Definition: The right action is that which promotes a good enough outcome, where 'good
enough' need not be optimal.
Accordingly, this position holds that there is some threshold of good such that, if the
threshold is met, the action qualifies as right. An action which goes above the particular
threshold will be considered supererogatory.
Advocated by Michael Slote (1984).
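The threshold structure can be sketched formally. The notation is assumed for illustration: V(a) is the value of the outcome of act a, and t the 'good enough' threshold.

```latex
% Illustrative notation (assumed): V(a) = value of the outcome of act a;
% t = the 'good enough' threshold.
a \text{ is right} \iff V(a) \ge t
% Acts whose value exceeds the threshold, V(a) > t, are supererogatory:
% permitted and praiseworthy, but not required.
```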
Clarifications:
Issues:
One main challenge facing proponents of satisficing utilitarianism is to explain just when
an outcome is 'good enough' (Bradley 2006). Is there some absolute minimum of
goodness that any act must promote in order to be good enough, or is the threshold
always determined relative to the quality of options available to you at the time?
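Bradley's question admits two formal readings; both formulations below are illustrative reconstructions, not Bradley's own notation, with V(a) the value of act a's outcome and O the agent's option set.

```latex
% Two readings of the 'good enough' threshold (illustrative notation):
% Absolute: some fixed level of goodness t_0 suffices, whatever the options.
a \text{ is right} \iff V(a) \ge t_0
% Relative: the threshold tracks the best available option, for some k < 1.
a \text{ is right} \iff V(a) \ge k \cdot \max_{a' \in O} V(a')
```

On the absolute reading an agent with only poor options may be unable to act rightly; on the relative reading the standard rises and falls with what the agent could have done.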
Some have suggested that satisficing is merely a nuanced form of maximisation. Robert
Goodin: "maximisation under the constraints of time and information costs is the best
sense I can make of satisficing utilitarianism" (2012).
If satisficing utilitarianism is adopted, then an agent can be justified in doing less good
than they are capable of doing. Such arbitrary failures to maximise the good seem
to warrant feelings of blame: if one can easily save two lives but saves only one,
claiming that saving one life was good enough, the objector may reply that the moral
obligation in the relevant case is stronger.