
Iancu Daramus

"Scientists must make social/political value judgments when providing expert advice to policy-makers, and this is a problem for democracy." Critically discuss this claim.

In my view, a scientist's task as policy advisor involves two components: conducting research (often at the specific request of the advisee) and communicating it to the policy-maker. In the first part of my essay, I will analyse these two aspects, arguing that
social/political value judgments do play a role in both. To clarify what is at stake, I claim that
value judgments (for brevity, I will drop the "social/political" label from now on) are not
merely added at the end of the research, perhaps in an attempt at persuading the politician.
Rather, I believe they are part and parcel of the research process itself. Some might see this
as a problem which attacks the very fabric of democracy, given that the values of some are
allowed to take precedence over the values of others. In the second part of my essay, I
argue that this problem can be defused, and that the potential consequences of forcefully
trying to impose neutrality can be even greater than those arising out of non-neutrality.
The unsteady fact/value distinction
Let us note from the outset that we are on potentially shaky ground when trying to posit a
clear-cut distinction between values (understood as sources of normativity, i.e. of how the
world should be) and facts (understood as descriptions of how the world is). According to a
particular view of neutral science, it is only statements about the latter, tested by confrontation with experience, that should play a part in science. Yet, in "Two Dogmas of Empiricism", Quine argued forcefully that our judgments face the tribunal of experience together, not individually [1]. In other words, what is to be counted as a fact (or, more importantly, as an
empirical refutation) depends on other statements in our set of beliefs. If we make sufficient
amendments elsewhere in this set, we can still hold on to a particular fact that experience
should have refuted. Nonetheless, this seems a somewhat cheap victory against those
claiming that values do not, and should not, influence research. Thus, following Richard Rudner, I aim to
pursue a different line of attack.
1. Providing advice
Accepting hypotheses is a function of risk
Rudner's argument can be summarized as follows. First, by the very nature of research, a
scientist must accept hypotheses. Second, no hypothesis can ever be completely verified.
Consequently, the scientist must decide when the available evidence is strong enough to
justify him in accepting it. This, however, poses a problem: how strong is "strong enough" [2]? From the start, it is clear we cannot have a one-size-fits-all rule to help us, since any such rule would further prompt the question of its own justification, the amount of evidence supporting it and the reason why that particular amount was deemed sufficient. Thus, Rudner argues that our decision will be based on the significance of a possible mistake: the greater the risks, the higher the demand for evidence. But here is the crucial bit: the seriousness of the risks will be weighed on an ethical scale, to measure how bad they would be. Hence, value judgments necessarily influence scientists in accepting hypotheses.

[1] Quine (1951: 38)
There is no single function of risk
In reply, Richard Jeffrey has challenged Rudner's conclusion [3], arguing that there is no single
function of risk that we can measure. Given that there are many possible contexts of action
in which the results of research can be used, there must necessarily be different utilities
arising out of these various possible implementations. Thus, it does not make sense to talk
of assessing "risk", as if all risks were amenable to some common (ethical) denominator [4]. To
use his example, the same level of safety for a vaccine test might be considered acceptable
or not, depending on whether we are vaccinating our pet monkey or our children.
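Rudner's point, and Jeffrey's reply to it, can both be put in simple expected-loss terms. The following is a minimal decision-theoretic sketch (the function and the numbers are my own illustration, not drawn from either paper): the evidence threshold for accepting "the vaccine is safe" is the break-even point between the cost of a wrong acceptance and the cost of a wrong rejection, so the same body of evidence can suffice in one context of action and fail in another.

```python
# Illustrative sketch (numbers hypothetical): accept a hypothesis only if the
# probability that it is true exceeds a threshold set by the cost of error.

def acceptance_threshold(cost_false_accept, cost_false_reject):
    """Expected-loss break-even point: accept H when P(H) > threshold.

    Accepting when H is false costs `cost_false_accept`; rejecting when H
    is true costs `cost_false_reject`. Accept iff the expected loss of
    accepting is lower: (1 - p) * cost_false_accept < p * cost_false_reject.
    """
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# Jeffrey's example: the same evidence, two contexts of action.
p_safe = 0.98  # probability, on the evidence, that the vaccine is safe

# Vaccinating a pet monkey: a wrong acceptance is bad, but not catastrophic.
monkey = acceptance_threshold(cost_false_accept=10, cost_false_reject=1)
# Vaccinating children: a wrong acceptance is weighted far more heavily.
children = acceptance_threshold(cost_false_accept=1000, cost_false_reject=1)

print(p_safe > monkey)    # True: the evidence suffices in the low-stakes case
print(p_safe > children)  # False: the same evidence does not suffice here
```

Note that nothing in the evidence changes between the two calls; only the (ethical) weighting of outcomes does.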
The scientist as a decision-maker with multiple aims
Before I offer my own take on this issue, let us note that, even if we grant the point, the
situation is by no means as clear-cut as Jeffrey believes it to be. In the words of Churchman:
it is quite natural to ask where scientific decision making stops, for it is clear that scientists make
many decisions in addition to the "ultimate" decision to accept or reject an hypothesis, which Jeffrey
finds so objectionable. Scientists decide what makes a relevant observation, what controls should be
applied in taking observations, how many observations ought to be made, what "model" to use as a
framework for observations, and so on.
And if one goes even further, and wants to "assign probabilities" to hypotheses, I'm sure that he will
have to evaluate the relative worth of (1) more observations, (2) greater scope of his conceptual
model, (3) simplicity, (4) precision of language, (5) accuracy of the probability assignment. Such a
scientist, whether or not he thinks of himself as "accepting" hypotheses, is a decision maker with
multiple aims, and the criteria of optimal decision making depend on the values of these aims. [5]

Thus, we see that there are many junctures at which values can influence the
scientific process, apart from the end point of trying to analyse consequences. While the

[2] Rudner (1953: 3)
[3] Jeffrey also argued that scientists do not (and should not) accept hypotheses, but should merely present probabilities attached to a statement. Yet, as Rudner notes, this requires scientists to accept the hypothesis that the probability is p for some other particular hypothesis (with the further question of whether and why the evidence is sufficient for this claim); see Rudner (1953: 4).
[4] Jeffrey (1956: 242)
[5] Churchman (1956: 246)

values in question need not be social and political, they can be, which will suffice for our purposes [6].
It's a Wonderful Life
To return to the previous section, I dispute the contention that we are faced with a
heteronomy of aims that somehow induces a "paralysis of analysis". Granted, the same error made by a scientist researching, say, the time-keeping uses of caesium can lead to
someone missing a train or to a nuclear reactor blowing up. Yet one can accept this without
at the same time saying that there is no ethical judgment to be made concerning the impact
of possible miscalculation. For, in all these bad outcomes, a rare substance with alternative uses is wasted. Economists would probably call this a problem of opportunity costs, but I would prefer to call it the Wonderful Life problem, in reference to Frank Capra's classic
movie, in which a businessman asks what the world would have been like had he never
existed. Similarly, a scientist can ask what the world would look like without his (potentially
bad) theory. At the very minimum, there would not be the waste of human and material resources involved in attempting to replicate experiments (as per scientific protocol).
Yet this minimum need not be insignificant. If you think of research that has no immediately
discernible practical purpose (e.g. on the expansion of the universe), the practical
consequence of potential error is not that our anti-expansionary goggles wouldn't be thick enough, but that huge budgets (cf. the cost of the LHC in Geneva) would have gone to waste [7]. Thus, Rudner's point holds: there is at least some measure of ethical importance
that can be rescued from the indeterminacy of numerous possible uses.
Furthermore, Jeffrey's analysis paints a picture of neutral results being made to serve a pre-existing array of possible uses. However, as in the case of the atom bomb, research can
sometimes open up previously non-existent possibilities. In such cases, some of the new
possible outcomes can be overwhelmingly worse than others. Presumably, Jeffrey would
argue that, even in such cases, one cannot assess all the outcomes. Nonetheless, I find such
a reply to be unconvincing, for two reasons. First, I think that no plausible version of
consequentialism could require that one evaluate all possible consequences of an action or,
alternatively, ask that one refrain from any ethical judgments until such an all-embracing
evaluation can be performed. If, then, as seems to me to be the case, our judgments must
depend on the best of our knowledge concerning possible outcomes, I think that a scientist
has a very relevant degree of information and foresight, at least with regard to hitherto
impossible bad outcomes whose very possibility he has created. Second, someone of a
[6] A point of clarification might be in order. I am not denying that scientific/epistemic values (such as rigour, replicability etc.) play a role in choosing among these different ends. Rather, I am disputing the claim that they are the only values.
[7] Again, a similar point of clarification: I am not saying scientists should always be asking themselves whether or not they are wasting money that could have built hospitals or ended world hunger, for no research could get off the ground under such stringent requirements. I am merely arguing against the idea of such considerations never playing a role in science: they do, and they should.

deontological persuasion could claim that it is impermissible to introduce some potential harms (e.g. mass nuclear annihilation), no matter how small their probability, and irrespective of any other utilitarian calculations. To put it in Wonderful Life terms, a world
where certain events are impossible can be said to be qualitatively different from a world
where the same events have a vanishingly small probability of occurring (even if they do not
occur). Thus, to sum up this section, the diversity of aims does not dispense with the need
for value judgments, be they minimal (e.g. opportunity costs of using a particular
substance) or more substantive (e.g. when the aims in question are qualitatively different).
We will now turn to the second task of the scientist qua policy advisor, that of
communicating evidence to the policy-maker.
The problem of communicating evidence to the policy-maker
We have seen that values cannot be done away with, even in the pure realm of research.
Insofar as the scientist is now to communicate his findings to the policy-maker, values come
to play an even larger role. By the very nature of politics, decision-makers lack the time or
abilities to fully engage with all the subtleties of research. Take the case of the
Intergovernmental Panel on Climate Change reports. They include a summary for policy-makers, which aims to condense several hundred pages into several dozen. Necessarily, this
practical constraint is tantamount to a Procrustean bed, insofar as the diversity of scientific opinions must be recast using ready-made categories such as "likely" or "very unlikely". Thus, when scientists attempt to match their beliefs to this standardized scale, personal values will necessarily play a role [8].
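The "ready-made categories" are, in the IPCC's case, a calibrated likelihood scale. A rough sketch of the recasting follows; the thresholds approximate the IPCC uncertainty guidance and should be treated as illustrative rather than authoritative.

```python
# Sketch of the Procrustean bed: continuous degrees of belief recast into a
# handful of likelihood labels. Thresholds approximate the IPCC calibrated
# uncertainty guidance (illustrative; see the official guidance note).

def ipcc_label(p):
    """Map a probability of a finding to an IPCC-style likelihood label."""
    if p >= 0.99:
        return "virtually certain"
    if p >= 0.90:
        return "very likely"
    if p >= 0.66:
        return "likely"
    if p > 0.33:
        return "about as likely as not"
    if p > 0.10:
        return "unlikely"
    if p > 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

# Two scientists whose credences differ only marginally land in different
# boxes; deciding which side of a boundary a belief falls on is exactly
# where personal judgment enters.
print(ipcc_label(0.89))  # "likely"
print(ipcc_label(0.91))  # "very likely"
```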
Personal values will also play a role, albeit a different one, insofar as the scientist talking to
the policy-makers is also a citizen. Thus, he can no longer claim complete ignorance with regard to the possible uses of his research. Presumably, the scientist must have (or must ask for) some indication of the policy-maker's intentions. Consequently, given such
information, I find it absurd to require scientists to refrain from any judgment. At this point
more than ever, we must follow James Gaa in saying that scientists cannot be both rational
and irresponsible (immoral) at the same time [9].
2. A problem for democracy?
Perfect democratic equality is impossible
Given what we have said so far, a critic might argue that scientists can use their authority to
persuade politicians into pushing forward a particular ethical agenda. Thus, scientists are given a greater say than other members of the democratic community on matters having to do with more than procedural justice, i.e. on issues of "the good" and not just "the right" [10].

[8] Steele (2012: 899)
[9] Gaa, quoted in Douglas (2009: 62)

However, I believe that much of the strength of the objection relies on a
desire for perfect democratic equality, in which everyone's input counts exactly equally.
This, I hold, is impossible, which makes the objection lose much of its bite. To put this point in perspective, Arrow's impossibility theorem showed that no voting system can simultaneously satisfy a set of basic fairness criteria, thus placing severe limits on what we should expect from democratic systems. Consequently, perhaps we need to abandon the criterion of ideal equality in favour of comparative approaches aimed at assessing relative degrees of inequality [11].

Upon due reflection, I do not think this point need be so controversial. Scientists' values do influence
research and policy decisions, much as the values of teachers influence education and the
values of politicians influence laws. Of course, this is not to say that anything goes, or that
the degree of influence should be the same across the board. The only point I am making is
that we should not be surprised if, as humans, we are not able to jump over our value-laden
shadow.
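The limits that Arrow's theorem places on preference aggregation can be glimpsed in a much simpler and older result, the Condorcet paradox. A small sketch, with hypothetical ballots of my own:

```python
# Condorcet's paradox: pairwise majority voting over three hypothetical
# ballots yields a cycle, so "what the majority prefers" picks out no best
# option at all. (A weaker cousin of Arrow's theorem, shown here because
# it fits in a few lines.)

ballots = [
    ("A", "B", "C"),  # voter 1 ranks A over B over C
    ("B", "C", "A"),  # voter 2 ranks B over C over A
    ("C", "A", "B"),  # voter 3 ranks C over A over B
]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# The majority prefers A to B, B to C, and yet C to A: a cycle.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```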
Neutrality can be a greater problem
At the opposite end of the spectrum, my worry is that attempts to impose neutrality in
science might create a bigger problem for democracy than the alleged partiality of scientists.
The first thing to note is that the value-free ideal of science has not always been there. As
Heather Douglas notes, it became enshrined, by and large, during the second half of the twentieth century. Very importantly, part of the reason behind its rise to prominence was ideological:
either one supported the US, democracy and neutral science, or one supported the USSR,
totalitarianism and the Marxist mixing of science and values [12]. Thus, we see the
requirement for neutrality arising in a very non-neutral, political way. Foucault has made a
similar point in noting that
There is a battle "for truth", or at least "around truth", it being understood once again that by truth I do not mean "the ensemble of truths which are to be discovered and accepted", but rather "the ensemble of rules according to which the true and the false are separated and specific effects of power attached to the true" [13]

Hence, one might worry about the current political aims of requiring neutrality on the part of
scientists. Specifically, it is a common strategy to dismiss as "partisan" those scientific claims that challenge the status quo, or to say that scientists are "overstepping their territory". Of course, if
we think about the terms of political discourse, it is easy to see why the requirement of
neutrality is more than convenient. Compare the following two hypothetical conclusions of
[10] From a historical point of view, there is much to be said in favour of scientists helping to debunk bigotry, racism etc. Thus, there is a sense in which, had scientists been given an even larger, undemocratic say, a certain elevation of the general public would have occurred earlier than it did. Moreover, if one thinks about current-day problems, such as the belief in the biological second-hand status of women or the inclusion of climate change denial in school curricula, it is clear that scientists could still play a similar beneficial role. However, I do not wish to go further down this paternalistic route.
[11] Sen (2006)
[12] Douglas (2009: 49)
[13] Rabinow (1991: 74)

a scientific study: "the youth unemployment rate is 65%, which is an egregious waste of human potential" and "the youth unemployment rate is 65%". When presented with the latter, a politician might much more easily reply: "under the previous government, the rate was 67%", or "we have reduced inflation by 5%, which, given the Phillips curve, necessarily entails higher unemployment", etc. By solely affixing a quantitative label, we can sweep the
moral significance under the rug. We see this abundantly in the case of the climate change
debate, where corporate-funded sceptics badger scientists into offering models with higher and higher levels of certitude: by shifting the focus of attention onto numbers rather than the bigger picture, profitable business interests can go undisturbed. Thus, to sum up the last
two sections: there is a sense in which rejecting or accepting the value-free ideal of science
amounts to the choice between some scientists gaining undue influence and large actors
(corporations, politicians) gaining even more influence (or at least going unchallenged). To
my mind, the greater challenge to democracy lies in the latter case.
Conclusion
We have seen that values necessarily influence the interaction between scientists and
policy-makers, on several different levels. First, at the level of research, there is an ethical
dimension of risk that cannot be done away with, one which imposes constraints on the
scientific process itself. Second, at the level of direct communication, practical constraints
allow values to creep in. Finally, there are different risks surrounding (non-)neutrality in relation to having a democratic voice. To bring the first and second parts of the essay together, science neither is nor should be a separate, morally insulated part of society. A fortiori, value judgments such as "science makes no value judgments" are bound to fail.

Bibliography

C. West Churchman (1956): "Science and Decision Making", Philosophy of Science, Vol. 23, No. 3 (Jul., 1956), pp. 247-249
Heather E. Douglas (2009): Science, Policy, and the Value-Free Ideal, University of Pittsburgh Press
Richard Jeffrey (1956): "Valuation and Acceptance of Scientific Hypotheses", Philosophy of Science, Vol. 23, No. 3 (Jul., 1956), pp. 237-246
W.V.O. Quine (1951): "Main Trends in Recent Philosophy: Two Dogmas of Empiricism", The Philosophical Review, Vol. 60, No. 1, pp. 20-43
Paul Rabinow (ed.) (1991): The Foucault Reader: An Introduction to Foucault's Thought, Penguin
Richard Rudner (1953): "The Scientist Qua Scientist Makes Value Judgments", Philosophy of Science, Vol. 20, No. 1 (Jan., 1953), pp. 1-6
Amartya Sen (2006): "What Do We Want from a Theory of Justice?", The Journal of Philosophy, Vol. 103, No. 5, pp. 215-238
Katie Steele (2012): "The Scientist qua Policy Advisor Makes Value Judgments", Philosophy of Science, Vol. 79, No. 5, pp. 893-904
