
Kant (3:30)

Framework stuff
I value morality. The meta-ethic is procedural moral realism. Korsgaard ONE
clarifies:
The Sources of Normativity CHRISTINE M. KORSGAARD THE TANNER LECTURES
ON HUMAN VALUES Delivered at Clare Hall, Cambridge University November 16
and 17, 1992 tannerlectures.utah.edu/_documents/a-to-z/k/korsgaard94.pdf
What distinguishes substantive from procedural realism is a view about the relationship between the answers to moral questions and our procedures for arriving at those answers. The procedural moral realist thinks that there are answers to moral questions because there are correct procedures for arriving at them. But the substantive moral realist thinks that there are correct procedures for answering moral questions because there are moral truths or facts that exist independently of those procedures, which those procedures track.35

Prefer procedural realism, since substantive realism relies on an implausible epistemology.


Korsgaard TWO:
The Sources of Normativity CHRISTINE M. KORSGAARD THE TANNER LECTURES
ON HUMAN VALUES Delivered at Clare Hall, Cambridge University November 16
and 17, 1992 tannerlectures.utah.edu/_documents/a-to-z/k/korsgaard94.pdf
Substantive realism conceives the procedures for answering normative questions as ways of finding out about a certain part of the world, the normative part. To that extent, substantive moral realism is distinguished not by its view about what kind of truths there are, but by its view of what kind of subject ethics is. It conceives ethics as a branch of knowledge: knowledge of the normative part of the world. Substantive moral realism has been criticized in many ways. It has been argued that we have no reason to believe in intrinsically normative entities or objective values. They are not harmonious with the Modern Scientific World View, nor are they needed for giving scientific explanations. Since the time of Hume and Hutcheson, it has been argued that there is no reason why such entities should motivate us, disconnected as they are from our natural sources of motivation. Many of these criticisms have been summed up in John Mackie’s famous “Argument from Queerness.” Here it is in Mackie’s own words: If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe [known]. Correspondingly, if we were aware of them, it would have to be by some special faculty of moral perception or intuition, utterly different from our ordinary ways of knowing everything else. Plato’s Forms give a dramatic picture of what objective values would have to be. The Form of the Good is such that knowledge of it provides the knower with both a direction and an overriding motive; something’s being good both tells the person who knows this to pursue it and makes him pursue it. An objective good would be sought by anyone who was acquainted with it, not because of any contingent fact that this person, or every person, is so constituted that he desires this end, but just because the end has to-be-pursuedness somehow built into it. Similarly, if there were objective principles of right and wrong, any wrong (possible) course of action would have not-to-be-doneness somehow built into it.36 And Mackie suggests, nothing could be like that. Of course Mackie doesn’t really prove that such entities couldn’t exist. But he does have a point, although I think it is not the point he meant to make.

Takes out Cummiskey (equality), Nagel, Sinhababu, impact-justified, and oppression-only frameworks. They assume substantive moral realism by assuming the correct ethic happens to track the good.

Morality must stem from reason.


Velleman 1 [J. David Velleman, Self to Self, ch. 2, “A Brief Introduction to Kantian Ethics”]:

The authority we are questioning would be vindicated, in each case, by the production of a sufficient reason.

What this observation suggests is that any purported source of practical authority depends on reasons for obeying it—and hence on the authority of reasons. Suppose, then, that we attempted to question the authority of reasons themselves, as we earlier questioned other authorities. Where we previously asked "Why should I act on my desire?" let us now ask "Why should I act for reasons?" Shouldn't this question open up a route of escape from all requirements?

As soon as we ask why we should act for reasons, however, we can hear something odd in our question. To ask "Why should I?" is to demand a reason; and so to ask "Why should I act for reasons?" is to demand a reason for acting for reasons. This demand implicitly concedes the very authority that it purports to question—namely, the authority of reasons. Why would we demand a reason if we didn't envision acting for it? If we really didn't feel required to act for reasons, then a reason for doing so certainly wouldn't help. So there is something self-defeating about asking for a reason to act for reasons. The foregoing argument doesn't show that the requirement to act for reasons is inescapable. All it shows is that this requirement cannot be escaped in a particular way: we cannot escape the requirement to act for reasons by insisting on reasons for obeying it. For all that, we still may not be required to act for reasons.

Yet the argument does more than close off one avenue of escape from the requirement to act for reasons. It shows that we are subject to this requirement if we are subject to any requirements at all. The requirement to act for reasons is the fundamental requirement, from which the authority of all other requirements is derived, since the authority of other requirements just consists in there being reasons for us to obey them. There may be nothing that is required of us; but if anything is required of us, then acting for reasons is required.

Hence the foregoing argument, though possibly unable to foreclose escape from the requirement to act for reasons, does succeed in raising the stakes. It shows that we cannot escape the requirement to act for reasons without escaping the force of requirements altogether. Either we think of ourselves as under the requirement to act for reasons, or we think of ourselves as under no requirements at all. And we cannot stand outside both ways of thinking and ask for reasons to enter into one or the other, since to ask for reasons is already to think of ourselves as subject to requirements.

If we didn’t have to act on reasons, we would never have to act on anything. So if morality isn’t based on reason, skepticism is true.

Reason is universalizable.
Velleman 2:
Reasons that are Universally Shared¶ In Kant's view, being a person consists in being a rational creature, both
cognitively and practically. And Kant thought that our rationality gives us a glimpse of—and hence an aspiration toward—a
perspective even more inclusive than that of our persisting individual selves. Rational
creatures have access to a
shared perspective, from which they not only see the same things but can also see the visibility of those
things to all rational creatures.¶ Consider, for example, our capacity for arithmetic reasoning. Anyone who adds
2 and 2 sees, not just that the sum is 4, but also that anyone who added 2 and 2 would see
that it's 4, and that such a person would see this, too, and so on. The facts of elementary arithmetic are thus
common knowledge among all possible reasoners, in the sense that every reasoner knows them,
and knows that every reasoner knows them, and knows that every reasoner knows that every reasoner knows
them, and so on.¶ As arithmetic reasoners, then, we have access to a perspective that is constant not
only across time but also between persons.

Thus the standard is acting on universalizable reasons. Additional clarifications:


1. A nonideal form of deontology is the only way to balance a state’s responsibility to
protect its people with its obligation to maintain international peace. Therefore, my
framework outweighs on actor specificity.

Doyle ‘10
[International Theory (2010), 2:1, 87–112, Cambridge University Press, doi:10.1017/S1752971909990248, “Kantian nonideal theory and nuclear proliferation,” Thomas E. Doyle II] [SS]

Taking a recent cue from Ken Booth (2007), I therefore contend that problems of nuclear proliferation ought to be
approached as problems of political/moral theory as much as problems of strategy or technology. And, in a
move that tries to correct one strand of the Cold War nuclear ethics literature, I maintain that this analysis should not rely
exclusively on the principles of ideal moral theory, whose (mis)applications reinforce the perception that morality is irrelevant to
politics. Instead, it
should carefully consider those nonideal moral principles whose aim is to
alleviate national, regional or global insecurity, and/or realize a greater measure of
international justice while not insisting that each and every injustice is addressed all at once.
To say that contemporary nuclear proliferation is a paradigm case for nonideal moral theory, then, is to recognize generally how the
formal is dependent upon the informal. In particular, it is to see how NPT violations can threaten to undo an
important informal norm – that is, the nuclear taboo (see e.g. Bunn, 2006; Chyba et al., 2006; Bakanic, 2008). It recognizes
also that morality permits states to do that which is otherwise impermissible in order to prevent
greater injustices from happening.7 Accordingly, the most relevant question to pose is: to what extent might
nonideal moral principles permit leaders of any nonnuclear weapon state to violate a
voluntarily assumed legal obligation to refrain from acquiring nuclear weapons? I take it that this
question is best addressed by Kantian nonideal theory. Some might wonder ‘why Kant?’ Otfried Hoffe (2006) argues that Kant is
the only ‘great thinker’ to put peace among states and peoples as a fundamental principle of
philosophy. Moreover, Kant is regarded as taking an unusually rigorous moral approach. We might thus expect Kant to insist on
adherence to ideal principles in all cases, entailing an unequivocal opposition to nuclear proliferation and a corresponding
requirement that all states undertake unilateral and global nuclear disarmament without condition. Indeed, idealist leaning
nuclear ethicists during the Cold War interpreted Kant in this way (Donaldson, 1985; Lee, 1985; McMahan, 1985). However,
they overlooked those parts of Kant’s writings that comprise a less than fully articulated nonideal
theory, including Perpetual Peace (PP), Lectures on Ethics (LE), portions of the Metaphysics of Morals (MM) and Religion within
the Limits of Reason Alone (Religion). Their mistake was to apply the more familiar and ideal elements of
the Groundwork of the Metaphysics of Morals (GMM), the Critique of Practical Reason (CPR2) and Kant’s late and notorious essay
On the Supposed Right to Lie from Philanthropy (RTL) to
the problem of nuclear deterrence in which none of
the assumptions of ideal theory relevantly apply (Kant, 1785, 1793, 1795, 1797a, 1797b). If, however, a
Kantian nonideal theory can be adequately sketched and validly applied to the problem of
contemporary nuclear proliferation, a significant correction can be made to the dominant accounts that over
emphasize or misapply his ideal moral principles. Second, the corresponding relevance of morality for politics and political inquiry
might be rehabilitated (once more).8 Third, a certain methodological parsimony is achieved. If Kantian nonideal theory concludes
that Iranian nuclear proliferation is permissible under certain strict conditions, we should expect other less rigorous moral theories
to be capable of similar accommodations. Finally, if the explication is successful, it would follow that some instances
of
nuclear proliferation are not inconsistent with Kant’s ultimate vision of achieving perpetual
peace.

Universalizing in terms of actor specificity (debate drill)


Kant’s analysis is a better guide for government action: governments do not act blindly; they act on their intentions and reasons to decide what the best route ought to be.
Government actors cannot be merely pragmatic; their action must be couched in procedural moral realism. There IS a right procedure for doing things.
Util admits that process is important (it recommends the best process for the policymaker), so it concedes the thesis of the negative. If we must respect a government that engages in procedure, we must respect every agent who uses procedures. So if you take a path using reason, you cannot violate other agents who use reason. No matter what.
The NC is about self-defense, making sure people don’t attack the country. The aff should be rejected because it stops countries from doing what they need to do.

2. My framework is not about intent, actions or omissions, or intent-foresight.


The NC is about the maxim acted on, which concerns itself with the structure of the action. Thus we imagine simultaneously willing the original maxim in the world of the universalized maxim; if a logical contradiction is generated, the maxim is prohibited.
3. Here’s how to weigh. For Kant, perfect duties come before imperfect duties, because perfect duties can never be violated, whereas imperfect duties constantly entail trade-offs. Mahon:[1]
The "Contradiction in Conception Test" establishes perfect duties. The "Contradiction in Will Test" establishes imperfect duties.

(i) Contradiction in Conception Test. Take the maxim: "In order to get money, I will tell a lie." Now perform the following thought-experiment: consider a world in which everyone acts on that maxim. Call it the "World of the Universalized Maxim". You are also included in "everyone", so imagine that in this world you are also acting on this maxim. What would be the case in this world? Well, in that world, everyone would lie to get money. But, apparently, in that world, a lie would never get you money. A lie would be useless. No-one would believe what anyone said when they were trying to get money. So, in that world, YOU would not get money by telling a lie. There is a contradiction in CONCEIVING of a law of lying to get money -- it would never work! What does this test show? It shows that lying cannot work by being practiced by everyone. Lies can only work by being practiced by some people and not others. The person who tells a lie is relying on its not being the case that people lie to get what they want. He or she is relying on its being the case that people tell the truth to get what they want. He or she is making an exception of himself or herself, in principle, to the rule of truthfulness. Such a failure -- there is a contradiction in CONCEIVING of a law of lying ever working -- entails that acting on that maxim is impermissible, or forbidden. However, Kant holds that ANY maxim involving lying, since it will never get you what you want, will fail the test. Hence, all lying is impermissible, or wrong. In the case of an action that is forbidden, or wrong, it follows that the opposite action is required, or right. In this case, one is required not to lie (or to be truthful in one's assertions). Furthermore, since the contradiction is in the very conceiving, it follows that there is a perfect duty not to lie (or, a perfect duty to be truthful in one's assertions).

(ii) Contradiction in Willing Test. Take the maxim: "In order to be happy, I will not help anyone else." Again, perform the thought-experiment: consider a world in which everyone (including you) acts on that maxim. What would be the case in this world? Well, in that world, no-one would help anyone. I would not help anyone, and no one would help me. Kant thinks that it is indeed possible to CONCEIVE of a law of non-beneficence (or pure self-reliance). However, he claims, no rational being would ever will to live in such a world "inasmuch as cases might often arise in which one would have need of the love and sympathy of others and in which he would deprive himself, by such a law of nature springing from his own will, of all hope of the aid he wants for himself." In such a world, that is, I have willed that no-one help ME when I am in need of help. But no rational being would ever will this, since she will have various ends to pursue, and she will indeed will whatever means are necessary to the pursuit of those ends, and sometimes this will involve the help of others. How could it be rational, granted that I have various ends to pursue, to will that other people never help you under any circumstances whatsoever? You would never will it! What does this test show? It shows that non-beneficence would never be willed by anyone. The person who wills non-beneficence is relying on its being the case that others help him. He really wants others to help him, and him not to help others. So, once again, he is making an exception of himself, in principle, to the rule of helping others. Such a failure -- there is a contradiction in WILLING a law of non-beneficence -- entails that acting on that maxim is impermissible, or forbidden. However, Kant holds that ANY maxim involving non-beneficence, since it will never get you what you want, will fail the test. Hence, non-beneficence, or principled refusal to help others, is impermissible, or wrong. In the case of an action that is forbidden, or wrong, it follows that the opposite action is required, or right. In this case, one is required not to not be beneficent (or to be beneficent insofar as one can). Since the contradiction is in the willing only, however, it follows that there is only an imperfect duty to be beneficent insofar as one can. What this means is that, first of all, one can never, in being beneficent, violate any perfect duty (e.g. one cannot help others by lying). Secondly, since it is an imperfect duty, one is not required to be beneficent all the time, as one is with perfect duties. Perfect duties include: Do not lie, Do not steal, Do not murder. Imperfect duties include: Help others when you can, Develop your natural talents, Develop your moral perfection.

But I also co-opt all reasons to prefer the aff standard, since even util impacts link back to my NC; they are just imperfect duties and come second, since you can conceive of a world where bad things happen. Thus, you always accept the NC standard.
Case
Nation-states will always be in a relation of war and have a right to defend themselves against
their neighbors. Thus, lethal weapons and standing armies are allowed as deterrents.

Doyle ‘10
[International Theory (2010), 2:1, 87–112, Cambridge University Press, doi:10.1017/S1752971909990248, “Kantian nonideal theory and nuclear proliferation,” Thomas E. Doyle II] [SS]

Kantian nonideal theory recognizes that conceptions of and commitments to national interest drive
national security policy in today’s world. But, it should be emphasized that Kant’s nonideal theory is a bridge between this
world and that world governed by moral legitimacy and public right. Just as Kant regarded indirect duties (e.g. to treat animals
humanely) as tutors in the service of ideal duties (e.g. treating humans humanely), Kant regards the duties of public
right that nation–states bear as a condition necessary to the construction of long-term security and peace. Kant
contends that an international state of nature is a mutual relation of constant war, which
necessarily constitutes a wrongful condition (MM, 6:343; PP, 8:349). Each country has a self-defense
right, which is identical to the right to go to war (MM, 6:346). Even so, Kant disagrees with Clausewitz that the logic of warfare is
that of unlimited violence (Walzer, [1977] 2000: 23). Warfare must always leave open the possibility of leaving the state of nature
and entering the rightful condition of a pacific federation (MM, 6:347). Thus, Kant
permits standing armies as a
temporary instrument to deter or respond proportionally to aggression (PP, 8:345). On the other hand,
no hostile act during war is morally permissible (or even instrumentally rational!) if it undermines the trust necessary to establish a
future peace (PP, 8:346).21 It is this tension between permitting the existence of armed forces for deterrence and the requirement
to forego hostile actions that undermines future trust that directly relates to our current inquiry. Since a (vastly) weaker or
disadvantaged state might not be capable of effective defense without recourse to deception, we want to know if, for Kant,
permissible uses of deception include the issuance of nuclear deterrence threats where the credibility of the bluff depends upon the
acquisition of nuclear weapons.

The advocacy text: States will adopt a no-first-use policy and will use nuclear weapons only against military centers, and only if they are not allies of a country with a nuclear umbrella.

If a state has been threatened, the threat is only against a military center, and/or another state has launched a nuclear weapon, the nation has a duty to use whatever means necessary to prevent further injustice. In this scenario, it would be immoral for the state to NOT use nuclear weapons.

Doyle ‘10
[International Theory (2010), 2:1, 87–112, Cambridge University Press, doi:10.1017/S1752971909990248, “Kantian nonideal theory and nuclear proliferation,” Thomas E. Doyle II] [SS]

Now, an intention to carry out nuclear deterrent threats in the context of deterrence failure is an intention to engage in nuclear war. Analysis of this category obviates the need to apply the Kantian view on lying in self-defense, since Aspirant means to make good on retaliatory threats. Let us evaluate the only three options. The first is Aspirant’s policy to carry out deterrent threats on military and population/government centers. This choice is tantamount to desiring Rival’s annihilation, which violates the doctrine of right (MM, 6:235) as well as the nonideal proscriptions against enmity and vengeance in LE. On Kantian terms, the Formula of Universal Law would not pass any maxim that corresponds to a policy of comprehensive nuclear reprisal. To see why, let us assess two varied formulations. One might be called a maxim of overkill: ‘In all instances of nuclear deterrence failure, the victim of aggression must retaliate by means of nuclear strikes against the aggressor’s military, political, and population centers’. This maxim is easily rejected, for it is the clearest case of state annihilation, that Kant prohibits absolutely. However, it might be compared to a maxim of strict nuclear reciprocity: ‘In all instances of nuclear first strikes, it is the duty of the victim to retaliate in kind’. One of the most challenging cases would be where Aspirant suffers a nuclear first strike on one military center and one industrial center. The maxim dictates that the retaliatory nuclear strike must hit one of Rival’s military and industrial centers. In many cases, acting on this maxim falls significantly short of state annihilation, even though it probably involves high levels of civilian casualties. It might even promote the re-establishment of deterrence in as much as it reinforces the expectation that nuclear escalation will be punished and de-escalation will be rewarded (Gauthier, 1984). However, a Kantian assessment of this maxim is largely independent of these consequentialist concerns. Recall that the Formula of Universal Law draws an uncompromising bottom line where even Rival’s wrongdoing cannot justify Aspirant’s maxim of strict nuclear reciprocity. The indiscriminate destruction of human life, even when Aspirant and Rival destroy only one military and industrial center apiece, cannot be willed as a universal law of nature. And if Aspirant really intended to allow strict nuclear reciprocity only for themselves and no one else, such partiality cannot be reconciled with the Formula of Universal Law’s requirement to transform a maxim into a law that obligates all relevant actors. The same analysis applies to any policy of carrying out deterrent threats solely against population/government centers. However, for Aspirant to carry out deterrent threats solely against military centers seems prima facie consistent with Kant’s view on the right of national defense, and it parallels some applications of just war theory on the problem of limited nuclear warfighting (Ramsey, 1962; Orend, 2000). Once acquired, a low-yield nuclear device might annihilate one or more of Rival’s army divisions, naval task forces, or air-force bases, severely crippling its capacity to continue to aggress. More importantly, a maxim that corresponds to this intention appears to pass the universality test. Aspirant could assent in principle to a rule that permits all nuclear-armed states to threaten and carry out exclusively counterforce nuclear reprisals, much in the same way that nationalist morality permits all states to use conventional force in self-defense.23 This isn’t to say that Rival can read off Aspirant’s intentions from its nuclear procurement behavior. And this is not to say that in the process of nuclear miniaturization required to produce these weapons Aspirant might not retain its larger nuclear devices. It is to say, though, that Aspirant’s maxim on this point can be imagined without formal contradiction. Moreover, were Aspirant to miniaturize its arsenal and then verifiably decommission or destroy its larger devices, Rival might come to believe that Aspirant had abandoned any policy of mutually assured destruction in favor of a policy of severely limited counterforce warfare. There are, however, significant constraints on this nuclear defense right. Intending to carry out threats against counterforce targets would be impermissible on Kantian nonideal principles if they led to countervalue strikes or a counterforce escalation that entailed massive collateral damage. In conventional warfighting, the just-war doctrine of double effect permits unintended and limited killing of noncombatants on grounds of military necessity. However, at some point an escalation of counterforce strikes cannot avoid ruining the surrounding eco-systems and in turn injuring or destroying innocent human life. It would then be false to claim that Aspirant only intended to do good by undertaking counterforce strikes of this kind. It would also be false to claim these counterforce strikes were necessary evils, and that the goodness of the outcome made the cost of the counterforce strikes worth it (Orend, 2000: 164). Barring these prohibiting conditions, though, my reading of Kantian nonideal theory suggests that Aspirant’s choice to carry out deterrent threats strictly against military centers of Rival is permissible. Given the special nature of the nuclear threat environment, Kantian nonideal theory permits Aspirant to do all that is consistent with the reciprocity corollary to defend itself. And if Aspirant truly faces a nuclear threat from Rival, their NPT commitments do not clearly over-ride their national defense obligations. Indeed, the inclusion of Article X into the NPT is evidence that states parties are already committed to this position. What of the other policy options? The most salient options are that Aspirant might levy deterrent threats against Rival’s military or/and their government and population centers but never intend to actually strike any target or never intend to strike any population or government centers. The important practical difference between this set of possibilities and those already considered is that Aspirant expresses a threat against both military and government/population centers. The important moral difference is that this latter set of options embodies the intent to deceive. What moral assessments now follow on the basis of Kant’s views on lying in self-defense and the Formula of Universal Law? First, we recall that the purpose of lying to criminal aggressors is to deflect or avoid aggression and that lying to liars or assailants is not an injustice to them. If, for instance, North Korea’s nuclear threats are bluffs, they nevertheless are regarded by some to have helped dissuade America (i.e. the Bush Administration) from launching anticipatory military strikes against Pyongyang (Smith, 2006: Ch. 4). This kind of deterrent threat by itself then appears at first glance to be permissible within the bounds of reciprocity. That said, the lie told to the criminal aggressor in Kant’s example does not include a threat of harm. In contrast, the promise of harm that the threat conveys – which is an attitude Aspirant intends to cultivate even though he does not actually intend to carry out the threat – activates Rival’s hostility and the corresponding difficulty in trust building. Such threats are inconsistent with Kant’s nonideal Sixth Preliminary Article that proscribes acts of hostility ‘as would have to make mutual trust impossible during a future peace’ (PP, 8:346). In plain terms, a persuasive nuclear deterrent threat that Aspirant secretly intends to never carry out still inflicts a determinate harm that mere deflection or avoidance of aggression does not, namely the construction of an existential insecurity in the threatened state and, to the extent the threat is made public, the creation of existential fear among people that are ‘ends in themselves’.24 Still, given the kind of anarchy that is in today’s world, it would be a mistake to think that Kant would absolutely forbid the practice of using deception in nuclear deterrent threats. Assuming that Rival previously uttered a credible existential threat to Aspirant, and assuming that Aspirant is not an ally of a country with a nuclear umbrella of its own, a nuclear deterrent lie told to Rival is not unjust, even if it generally increases the tendency to not believe statements of this kind. A credible deterrent lie prevents or deflects aggression without causing further injury. In the same fashion, the reciprocity corollary advanced in the fourth Section establishes the possibility that, given a world of nuclear-armed states that have already instituted nuclear deterrent regimes and have made hostile threats of their own, Aspirant might reasonably conclude that advancing persuasive nuclear deterrent lies is necessary for national defense. And to establish the credibility of those threats, it would be morally permissible for Aspirant to acquire nuclear weapons even though the NPT forbids it.
1NCs – Deterrence DA
1NC – Generic
Link 1 – AFF causes nuke war -- attempting to prevent prolif makes nuclear war
inevitable – independently turns the AFF
Keck 2014 (Zachary Keck, Managing Editor of The Diplomat and Deputy Editor of e-International Relations, "A Global Zero World Would Be MAD: Abolishing nuclear weapons would make the world more violent and, paradoxically, more prone to nuclear warfare," The Diplomat, March 24, 2014, https://thediplomat.com/2014/03/a-global-zero-world-would-be-mad/)

In fact, global
nuclear disarmament, if achieved, is likely to lead to a less peaceful world and one
where the threat of nuclear war is, paradoxically, much greater.

One of the biggest dangers of nuclear disarmament is not that a rogue nation would cheat, but
that there would be no nuclear deterrence to prevent conventional conflicts between great
powers. Nearly seven decades removed from the end of the last great power conflict, it’s easy to understate just how destructive
these wars can be. For that reason, it’s imperative that we periodically revisit history.

The number of deaths in the last great power conflict, WWII, is generally calculated to be anywhere from 50 to 70 million people,
which includes civilian and military deaths. However, the global population was only about 2.25 billion at the start of WWII, or less
than a third of the current global population of 7.152 billion. Thus, assuming the same level of lethality, a great power conflict today
would result in between 150 and 210 million deaths, many times greater than an accidental nuclear launch or nuclear terrorist
attack, however devastating both would be.

There’s little reason to believe that a global war today— even if fought conventionally— would
not be many times more lethal than WWII, however. Although strategic bombings were
certainly a factor in WWII, for much of the war technology and rival air forces limited their
effectiveness.

Offensive operations against civilian populations in a modern conflict would be much more
effective. To begin with, most nations would turn to launching ballistic and cruise missiles in
unprecedented quantities. Like Korea and Vietnam, but unlike most of WWII, there would essentially be no
methods for defending civilian population centers against these missiles.

Moreover, because of urbanization, populations are far more concentrated than they were in
WWII. According to the UN, the number of people living in urban areas more than quadrupled between 1950 and 2005, increasing
from 732 million (29 percent of total population) to 3.2 billion (49 percent of population). In 2010 more than half the world
population was living in cities and this number is expected to rise to 60 percent by 2030. By mid-century, a full 70 percent of the
world’s population, or 6.4 billion people, will be urban dwellers.

Thus, the combination of missile attacks for which there are few defenses, combined with much
greater population density, would alone make WWIII much more lethal than either of its
predecessors.

But as deadly as a modern conventional war would be in a nuclear free world, the real danger is
that it wouldn’t remain conventional. Along with making great power conflict far more likely,
global nuclear disarmament offers no conceivable mechanism to ensure that such a war would
remain non-nuclear. In fact, common sense would suggest that immediately following the
outbreak of hostilities — if not in the run-up to the war itself — every previous nuclear power
would make a rapid dash to reconstruct their nuclear forces in the shortest amount of time.
The result would not merely be a return to the nuclear world we currently inhabit. Rather, some
countries would reconstruct their nuclear weapons more quickly than others, and no power
could be sure of the progress their rivals had made. The “winners” in this nuclear arms race
would then have every incentive to immediately use their new nuclear capabilities against their
adversaries in an effort to quickly end the conflict, eliminate others’ nuclear weapons-making
capabilities, or merely out of fear that others will launch a debilitating strike on its small and
vulnerable nuclear arsenal. There would be no mutually assured destruction in such an
environment; a “use-it-or-lose-it” mentality would prevail.
Chem/Bio Weapons Turn
1NCs – Chemical and Biological Weapons
(CBW)
1NC – Disease
Limiting nuclear weapons shifts to CBWs – state and non-state threats
Neil Narang 15, Assistant Professor in the Department of Political Science @ UCSB, Ph.D.,
UC San Diego in 2012, “All Together Now? Questioning WMDs as a Useful Analytical Unit for
Understanding Chemical and Biological Weapons Proliferation”, Nonproliferation Review,
22:3-4, 457-468, DOI: 10.1080/10736700.2016.1153184 [“- ” mass removed]
The first inference that one may be tempted to draw from past findings is that a policy focused on achieving reductions in the global nuclear stockpile could cause a rise in chemical and biological weapons proliferation as more states view them as a “poor man’s atomic bomb.” As noted above, our findings suggested that states appear to seek chemical and biological weapons for many of the same reasons as they pursue nuclear weapons. Furthermore, our findings also indicate that states that do not possess nuclear weapons appear to be systematically more likely to pursue chemical and biological weapons than states that do possess them. When combined, it may seem reasonable to suppose that, conditional on some level of demand for one of these types of weapons, reductions in the global supply of nuclear weapons could cause some states to pursue chemical and biological weapons as “imperfect substitutes” for the deterrence and compellence benefits of nuclear weapons.

A second inference that one may be tempted to draw is that a strengthened NPT may increase the risk of chemical and biological weapons proliferation. Understood in the terms of our study, policies and institutions designed to monitor and sanction the unilateral pursuit or dissemination of nuclear weapons material and technical expertise—like the NPT or the Nuclear Suppliers Group—might be understood as supply constraints that effectively increase the transaction costs of nuclear weapons acquisition. Furthermore, previous research has shown that the supply of sensitive nuclear assistance and civilian nuclear assistance are both positively associated with the risk of nuclear weapons pursuit and acquisition across states and over time.17 When combined, it may seem reasonable to suppose that, given some demand for a “weapon of mass destruction,” chemical and biological weapons could seem like relatively cheaper pursuits under a more robust global nuclear nonproliferation regime that further regulates the supply of nuclear weapons.

A third inference that one may be tempted to draw is that reductions in the global supply of nuclear weapons and a strengthening of the nuclear nonproliferation regime could increase the risk of chemical and biological weapons pursuit by terrorist groups. If one is willing to assume terrorist groups aim to influence governments by threatening to impose costs in order to achieve concessions—whether this be through strategies like coercion, provocation, spoiling, or outbidding—then it may seem reasonable to suppose that limiting the availability of nuclear weapons might shift the demand to other coercive instruments such as chemical and biological weapons.18

CBW use causes extinction – states and terrorists prove


Piers Millett and Andrew Snyder-Beattie 17, PhD, Senior Research Fellow at the
Future of Humanity Institute, where he focuses on pandemic and deliberate disease and the
implications of biotechnology, consults for the World Health Organization on research and
development for public health emergencies, spent more than a decade working for the
Biological Weapons Convention, the international treaty that bans these weapons; Director
of Research at the Future of Humanity Institute, University of Oxford, where he manages a
number of research, outreach, and fundraising activities; “Existential Risk and Cost-Effective
Biosecurity”, Health Security, Volume 15, Number 4, 2017 Mary Ann Liebert, Inc,
http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC5576214&blobtype=pdf
[copied weird so i removed “.x”, {, *, and “- ”] [figures and tables removed]
How worthwhile is it spending resources to study and mitigate the chance of human extinction from biological risks? The risks of such a catastrophe are presumably low, so a skeptic might argue that addressing such risks would be a waste of scarce resources. In this article, we investigate this position using a cost-effectiveness approach and ultimately conclude that the expected value of reducing these risks is large, especially since such risks jeopardize the existence of all future human lives. Historically, disease events have been responsible for the greatest death tolls on humanity. The 1918 flu was responsible for more than 50 million deaths,1 while smallpox killed perhaps 10 times that many in the 20th century alone.2 The Black Death was responsible for killing over 25% of the European population,3 while other pandemics, such as the plague of Justinian, are thought to have killed 25 million in the 6th century—constituting over 10% of the world’s population at the time.4 It is an open question whether a future pandemic could result in outright human extinction or the irreversible collapse of civilization.

A skeptic would have many good reasons to think that existential risk from disease is unlikely. Such a disease would need to spread worldwide to remote populations, overcome rare genetic resistances, and evade detection, cures, and countermeasures. Even evolution itself may work in humanity’s favor: Virulence and transmission is often a trade-off, and so evolutionary pressures could push against maximally lethal wild-type pathogens.5,6 While these arguments point to a very small risk of human extinction, they do not rule the possibility out entirely. Although rare, there are recorded instances of species going extinct due to disease—primarily in amphibians, but also in 1 mammalian species of rat on Christmas Island.7,8 There are also historical examples of large human populations being almost entirely wiped out by disease, especially when multiple diseases were simultaneously introduced into a population without immunity. The most striking examples of total population collapse include native American tribes exposed to European diseases, such as the Massachusett (86% loss of population), Quiripi-Unquachog (95% loss of population), and the Western Abenaki (which suffered a staggering 98% loss of population).9

In the modern context, no single disease currently exists that combines the worst-case levels of transmissibility, lethality, resistance to countermeasures, and global reach. But many diseases are proof of principle that each worst-case attribute can be realized independently. For example, some diseases exhibit nearly a 100% case fatality ratio in the absence of treatment, such as rabies or septicemic plague. Other diseases have a track record of spreading to virtually every human community worldwide, such as the 1918 flu,10 and seroprevalence studies indicate that other pathogens, such as chickenpox and HSV-1, can successfully reach over 95% of a population.11,12 Under optimal virulence theory, natural evolution would be an unlikely source for pathogens with the highest possible levels of transmissibility, virulence, and global reach. But advances in biotechnology might allow the creation of diseases that combine such traits. Recent controversy has already emerged over a number of scientific experiments that resulted in viruses with enhanced transmissibility, lethality, and/or the ability to overcome therapeutics.13-17 Other experiments demonstrated that mousepox could be modified to have a 100% case fatality rate and render a vaccine ineffective.18 In addition to transmissibility and lethality, studies have shown that other disease traits, such as incubation time, environmental survival, and available vectors, could be modified as well.19-21

Although these experiments had scientific merit and were not conducted with malicious intent, their implications are still worrying. This is especially true given that there is also a long historical track record of state-run bioweapon research applying cutting-edge science and technology to design agents not previously seen in nature. The Soviet bioweapons program developed agents with traits such as enhanced virulence, resistance to therapies, greater environmental resilience, increased difficulty to diagnose or treat, and which caused unexpected disease presentations and outcomes.22 Delivery capabilities have also been subject to the cutting edge of technical development, with Canadian, US, and UK bioweapon efforts playing a critical role in developing the discipline of aerobiology.23,24 While there is no evidence of state-run bioweapons programs directly attempting to develop or deploy bioweapons that would pose an existential risk, the logic of deterrence and mutually assured destruction could create such incentives in more unstable political environments or following a breakdown of the Biological Weapons Convention.25 The possibility of a war between great powers could also increase the pressure to use such weapons—during the World Wars, bioweapons were used across multiple continents, with Germany targeting animals in WWI,26 and Japan using plague to cause an epidemic in China during WWII.27

Non-state actors may also pose a risk, especially those with explicitly omnicidal aims. While rare, there are examples. The Aum Shinrikyo cult in Japan sought biological weapons for the express purpose of causing extinction.28 Environmental groups, such as the Gaia Liberation Front, have argued that “we can ensure Gaia’s survival only through the extinction of the Humans as a species . we now have the specific technology for doing the job . several different [genetically engineered] viruses could be released” (quoted in ref. 29). Groups such as R.I.S.E. also sought to protect nature by destroying most of humanity with bioweapons.30 Fortunately, to date, non-state actors have lacked the capabilities needed to pose a catastrophic bioweapons threat, but this could change in future decades as biotechnology becomes more accessible and the pool of experienced users grows.31,32

What is the appropriate response to these speculative extinction threats? A balanced biosecurity portfolio might include investments that reduce a mix of proven and speculative risks, but striking this balance is still difficult given the massive uncertainties around the low-probability, high-consequence risks. In this article, we examine the traditional spectrum of biosecurity risks (ie, biocrimes, bioterrorism, and biowarfare) to categorize biothreats by likelihood and impact, expanding the historical analysis to consider even lower-probability, higher-consequence events (catastrophic risks and existential risks). In order to produce reasoned estimates of the likelihood of different categories of biothreats, we bring together relevant data and theory and produce some first-guess estimates of the likelihood of different categories of biothreat, and we use these initial estimates to compare the cost-effectiveness of reducing existential risks with more traditional biosecurity measures. We emphasize that these models are highly uncertain, and their utility lies more in enabling order-of-magnitude comparisons rather than as a precise measure of the true risk. However, even with the most conservative models, we find that reduction of low-probability, high-consequence risks can be more cost-effective, as measured by quality-adjusted life year per dollar, especially when we account for the lives of future generations. This suggests that despite the low probability of such events, society still ought to invest more in preventing the most extreme possible biosecurity catastrophes.
Case
1. No warrant for reflective equilibrium: it's just a definition.
2. TURN: deontology best explains why rights matter and why people are equal; at best they have a half solution.
3. This card assumes util as a starting point and only argues that rule util is better than act util, but deontology is better than both of them!
4. TURN: deontology is the most intuitive: it explains why we think there are side constraints on our actions like don't rape and don't torture. Util leads to all kinds of unintuitive conclusions, like torturing the terrorist's daughter or putting us all in a medically induced coma and pumping us full of serotonin.
5. Don't vote off of arguments NOT READ DURING THE SPEECH: this card is heavily lined down and has no actual warrants.
6. This is substantive moral realism, which my framework critiques: Feldman's theory assumes pain and pleasure are intrinsically valuable, which is impossible because no material object could induce obligation on its own.
7. Infinite regress: actions have infinite consequences, so A. we'd be stuck calculating forever and B. we'd never know if anything is moral or not.
8. Util is repugnant because it can justify oppressing some to avoid future pain for others.
C1
States are not likely to escalate further; history proves it.

Ganguly ‘19

[Sumit Ganguly, Foreign Affairs, March 5, 2019, https://www.foreignaffairs.com/articles/india/2019-03-05/why-india-pakistan-crisis-isnt-likely-turn-nuclear] [SS]

No one can say for sure, but history suggests that there is cause for optimism. During the Kargil
War, India worked to contain the fighting to the regions around Pakistan’s original incursions and the war concluded with
no real threat of nuclear escalation. Less than two years later, the two countries plunged into crisis once again. In
December 2001, five terrorists from the Pakistan-based groups Lashkar-e-Taiba and Jaish-e-Mohammed attacked the parliament
building in New Delhi with AK-47s, grenades, and homemade bombs, killing eight security guards and a gardener. In response, India
launched a mass military mobilization designed to induce Pakistan to crack down on terrorist groups. As Indian troops deployed to the
border, terrorists from Pakistan struck again. In May 2002, three men killed 34 people in the residential area of an Indian army camp
in Kaluchak, in Jammu and Kashmir. Tensions spiked. India seemed poised to unleash a military assault
on Pakistan. Several embassies in New Delhi and Islamabad withdrew their nonessential personnel and issued travel advisories. The
standoff lasted for several months, but dissipated when it became apparent that India lacked viable military options and that
the long mobilization was taking a toll on the Indian military’s men and materiel. The United States also helped ease tensions by
urging both sides to start talking. India claimed victory, but it was a Pyrrhic one, as Pakistan failed to sever its ties with a range of
terrorist organizations. Now
that both sides have gone through the motions, neither is likely
to escalate any further. Other nuclear states have also clashed without resorting to
nuclear weapons. In 1969, China, then an incipient nuclear weapons state, and the Soviet Union, a full-fledged nuclear
power, came to blows over islands in the Ussuri River, which runs along the border between the two countries. Several hundred
Chinese and Soviet soldiers died in the confrontation. Making matters worse, Chinese
leader Mao Zedong had a
tendency to run risks and dismissed the significance of nuclear weapons, reportedly telling
Indian Prime Minister Jawaharlal Nehru that even if half of mankind died in a nuclear war, the other half would survive and
imperialism would have been razed to the ground. Yet despite Mao’s views, the crisis ended without going
nuclear, thanks in part to the efforts of Soviet Prime Minister Alexei Kosygin, who took the first step by travelling to Beijing for
talks. There’s reason to believe that the current situation is similar. Pakistan’s overweening military
establishment undoubtedly harbors an extreme view of India and determines
Pakistan’s policy toward its neighbor. The military, however, is not irrational. In India,
although Prime Minister Narendra Modi has a jingoistic disposition, he, too, understands the risks of
escalation, and he has a firm grip on the Indian military. Another source of optimism comes from what political scientists call
the “nuclear revolution,” the idea that the invention of nuclear weapons fundamentally changed
the nature of war. Many strategists argue that nuclear weapons’ destructive power is so great that states understand the
awful consequences that would result from using them—and avoid doing so at all costs. Indian and Pakistani strategists are no
different from their counterparts elsewhere. Even Pakistani Prime Minister Imran Khan, a political neophyte, underscored the
dangers of nuclear weapons in his speech addressing the crisis last week. And Modi, for all his chauvinism, has scrupulously avoided
referring to India’s nuclear capabilities. The decision by India and Pakistan to allow their jets to cross the border represents a major
break with the past. Yet so far both countries have taken only limited action. Their principal aim, it appears, is what the political
scientist Murray Edelman once referred to as “dramaturgy”—theatrical gestures designed to please domestic audiences. Now that both
sides have gone through the motions, neither is likely to escalate any further. Peering into the nuclear abyss concentrates the mind
remarkably.

Their card is from 2009; if they were right, why aren't we all dead by now? And their claim that leaders won't actually be rational has no stats or evidence behind it.
Miscal
US and Russia also have access to ABMs so they can stop any nuclear attack.

Curley

[Britannica.com, Antiballistic missile (ABM), https://www.britannica.com/technology/Atlas-American-launch-vehicles, Robert Curley, Senior Editor]

Antiballistic missile (ABM), Weapon designed to intercept and destroy ballistic missiles. Effective
ABM systems have been sought since the Cold War, when the nuclear arms race raised the spectre of complete destruction by
unstoppable ballistic missiles. In the late 1960s both the U.S. and the Soviet Union developed nuclear-
armed ABM systems that combined a high-altitude interceptor missile (the U.S. Spartan and Soviet Galosh) with a terminal-
phase interceptor (the U.S. Sprint and Soviet Gazelle). Both sides were limited by the 1972 Treaty on Antiballistic Missile Systems to
one ABM location each; the U.S. dismantled its system, while the Soviet Union deployed one around Moscow. During the 1980s the
U.S. began research on an ambitious Strategic Defense Initiative against an all-out Soviet attack, but this effort proved expensive and
technically difficult, and it lost urgency with the collapse of the Soviet Union. Attention shifted to “theatre” systems such as the U.S.
Patriot missile, which was used with limited effect against conventionally armed Iraqi Scud missiles during the Persian Gulf
War (1990–91). In 2002 the U.S. formally withdrew from the ABM treaty in order to develop a defense
against limited missile attack by smaller powers or “rogue” states.

Conventional conflict, arms races, and cyberattacks cause extinction through non-nuclear emerging technologies. The AFF can't solve; the last miscal argument still links.

Klare 18—Michael T. Klare, professor emeritus of peace and world security studies at
Hampshire College and senior visiting fellow at the Arms Control Association (“The
Challenges of Emerging Technologies,” Arms Control Association, December 2018,
https://www.armscontrol.org/act/2018-12/features/challenges-emerging-technologies)
Today, a whole new array of technologies—artificial intelligence (AI), robotics, hypersonics, and
cybertechnology, among others—is being applied to military use, with potentially far-ranging
consequences. Although the risks and ramifications of these weapons are not yet widely recognized, policymakers
will be compelled to address the dangers posed by innovative weapons technologies and
to devise international arrangements to regulate or curb their use. Although some early efforts have been undertaken in this
direction, most notably, in attempting to prohibit the deployment of fully autonomous weapons systems, far more work is
needed to gauge the impacts of these technologies and to forge new or revised control mechanisms as deemed appropriate.
Tackling the arms control implications of emerging technologies now is becoming a matter of ever-increasing urgency as the
pace of their development is accelerating and their potential applications to warfare are multiplying. Many analysts believe
that the utilization of AI and
robotics will utterly revolutionize warfare, much as the introduction of tanks,
airplanes, and nuclear weapons transformed the battlefields of each world war. “We
are in the midst of an ever
accelerating and expanding global revolution in [AI] and machine learning, with enormous
implications for future economic and military competitiveness,” declared former U.S. Deputy Secretary of Defense Robert
Work, a prominent advocate for Pentagon utilization of the new technologies.1 The Department of Defense is spending billions
of dollars on AI, robotics, and other cutting-edge technologies, contending that the United States must maintain leadership in
the development and utilization of those technologies lest its rivals use them to secure a future military advantage. China and
Russia are assumed to be spending equivalent sums, indicating the initiation of a vigorous arms race in emerging technologies.
“Our adversaries are presenting us today with a renewed challenge of a sophisticated, evolving threat,” Michael Griffin, U.S.
undersecretary of defense for research and engineering, told Congress in April. “We are in turn preparing to meet that
challenge and to restore the technical overmatch of the United States armed forces that we have traditionally held.”2 In
accordance with this dynamic, the United States and its rivals
are pursuing multiple weapons systems
employing various combinations of AI, autonomy, and other emerging technologies. These include, for
example, unmanned aerial vehicles (UAVs) and unmanned surface and subsurface naval vessels capable of being assembled in
swarms, or “wolfpacks,” to locate enemy assets such as tanks, missile launchers, submarines and, if communications are lost
with their human operators, decide to strike them on their own. The Defense Department also has funded the development of
two advanced weapons systems employing hypersonic technology: a hypersonic air-launched cruise missile and the Tactical
Boost Glide (TBG) system, encompassing a hypersonic rocket for initial momentum and an unpowered payload that glides to
its destination. In the cyberspace realm, a variety of offensive and retaliatory cyberweapons are being developed by the U.S.
Cyber Command for use against hostile states found to be using cyberspace to endanger U.S. national security. The
introduction of these and other such weapons on future battlefields will transform every aspect of
combat and raise a host of challenges for advocates of responsible arms control. The use of fully autonomous weapons in
combat, for example, automatically raises questions about the military’s ability to comply with the laws of war and
international humanitarian law, which require belligerents to distinguish between enemy combatants and civilian bystanders.
It is on this basis that opponents of such systems are seeking to negotiate a binding international ban on their deployment.
Even more worrisome, some of the weapons now in development, such as unmanned anti-submarine wolfpacks and
the TBG system, could theoretically endanger the current equilibrium in nuclear relations among
the major powers, which rests on the threat of assured retaliation by invulnerable second-strike forces, by opening or seeming
to open various first-strike options. Warfare in cyberspace could also threaten nuclear stability by
exposing critical early-warning and communications systems to paralyzing attacks and prompting anxious leaders to
authorize the early launch of nuclear weapons. These are only some of the challenges to global security and arms control that
are likely to be posed by the weaponization of new technologies. Observers of these developments, including many who have
studied them closely, warn that the development and weaponization of AI and other emerging technologies is occurring faster
than efforts to understand their impacts or devise appropriate safeguards. “Unfortunately,” said former U.S. Secretary of the
Navy Richard Danzig, “the
uncertainties surrounding the use and interaction of new military
technologies are not subject to confident calculation or control.”3 Given the enormity of the risks
involved, this lack of attention and oversight must be overcome. Mapping out the implications of the new technologies for
warfare and arms control and devising effective mechanisms for their control are a mammoth undertaking that requires the
efforts of many analysts and policymakers around the world. This piece, an overview of the issues, is the first in a series for
Arms Control Today (ACT) that will assess some of the most disruptive emerging technologies and their war-fighting and arms
control implications. Future installments will look in greater depth at four especially problematic technologies: AI,
autonomous weaponry, hypersonics, and cyberwarfare. These four have been chosen for close examination because, at this
time, they appear to be the furthest along in terms of conversion into military systems and pose immediate challenges for
international peace and stability. Artificial Intelligence AI is a generic term used to describe a variety of techniques for
investing machines with an ability to monitor their surroundings in the physical world or cyberspace and to take independent
action in response to various stimuli. To invest machines with these capacities, engineers have developed complex algorithms,
or computer-based sets of rules, to govern their operations. An AI-equipped aerial drone, for example, could be equipped with
sensors to distinguish enemy tanks from other vehicles on a crowded battlefield and, when some are spotted, choose on its
own to fire at them with its onboard missiles. AI
can also be employed in cyberspace, for example to watch
for enemy cyberattacks and counter them with a barrage of counterstrikes. In the future, AI-invested
machines may be empowered to determine if a nuclear attack is underway and, if so, initiate a retaliatory strike.4 In this sense,
AI is an “omni-use” technology, with multiple implications for war-fighting and arms control.5
Many analysts believe that AI will revolutionize warfare by allowing military commanders to bolster or, in some cases, replace
their personnel with a wide variety of “smart” machines. Intelligent systems are prized for the speed with which they can
detect a potential threat and their ability to calculate the best course of action to neutralize that peril. As warfare among the
major powers grows increasingly rapid and multidimensional, including in the cyberspace and outer space domains,
commanders may choose to place ever-greater reliance on intelligent machines for monitoring enemy actions and initiating
appropriate countermeasures. This could provide an advantage on the battlefield, where rapid and informed action could
prove the key to success, but also raises numerous concerns, especially regarding nuclear “crisis stability.” Analysts worry that
machines will accelerate the pace of fighting beyond human comprehension and possibly take actions
that result in the unintended escalation of hostilities, even leading to use of nuclear weapons. Not only are AI-
equipped machines vulnerable to error and sabotage, they lack an ability to assess the context of
events and may initiate inappropriate or unjustified escalatory steps that occur too rapidly for humans to
correct. “Even if everything functioned properly, policymakers could nevertheless effectively lose the ability to control
escalation as the speed of action on the battlefield begins to eclipse their speed of decision-making,” writes Paul Scharre, who
is director of the technology and national security program at the Center for a New American Security.6 As AI-equipped
machines assume an ever-growing number and range of military functions, policymakers will have to determine what
safeguards are needed to prevent unintended, possibly catastrophic consequences of the sort suggested by Scharre and many
others. Conceivably, AI could bolster nuclear stability by providing enhanced intelligence about enemy intentions and reducing
the risk of misperception and miscalculation; such options also deserve attention. In the near term, however, control efforts
will largely be focused on one particular application of AI: fully autonomous weapons systems. Autonomous Weapons Systems
Autonomous weapons systems, sometimes called lethal autonomous weapons systems, or “killer robots,” combine
AI and drone technology in machines equipped to identify, track, and attack enemy assets on their own. As defined by
the U.S. Defense Department, such a device is “a weapons system that, once activated, can select and engage targets without
further intervention by a human operator.”7 Some such systems have already been put to military use. The Navy’s Aegis air
defense system, for example, is empowered to track enemy planes and missiles within a certain radius of a ship at sea and, if it
identifies an imminent threat, to fire missiles against it. Similarly, Israel’s Harpy UAV can search for enemy radar systems over
a designated area and, when it locates one, strike it on its own. Many other such munitions are now in development, including
undersea drones intended for anti-submarine warfare and entire fleets of UAVs designed for use in “swarms,” or flocks of
armed drones that twist and turn above the battlefield in coordinated maneuvers that are difficult to follow.8 The
deployment of fully autonomous weapons systems poses numerous challenges to
international security and arms control, beginning with a potentially insuperable threat to the laws of war and
international humanitarian law. Under these norms, armed belligerents are obligated to distinguish between enemy
combatants and civilians on the battlefield and to avoid unnecessary harm to the latter. In addition, any civilian casualties that
do occur in battle should not be disproportionate to the military necessity of attacking that position. Opponents of lethal
autonomous weapons systems argue that only humans possess the necessary judgment to make such fine distinctions in the
heat of battle and that machines will never be made intelligent enough to do so and thus should be banned from deployment.9
At this point, some 25 countries have endorsed steps to enact such a ban in the form of a protocol to the Convention on Certain
Conventional Weapons (CCW). Several other nations, including the United States and Russia, oppose a ban on lethal
autonomous weapons systems, saying they can be made compliant with international humanitarian law.10 Looking further
into the future, autonomous weapons systems could pose a potential threat to nuclear stability by investing their owners with
a capacity to detect, track, and destroy enemy submarines and mobile missile launchers. Today’s stability, which can be seen
as an uneasy nuclear balance of terror, rests on the belief that each major power possesses at least some devastating second-
strike, or retaliatory, capability, whether mobile launchers for intercontinental ballistic missiles (ICBMs), submarine-launched
ballistic missiles (SLBMs), or both, that are immune to real-time detection and safe from a first strike. Yet, a nuclear-armed
belligerent might someday undermine the deterrence equation by employing undersea drones to pursue and destroy enemy
ballistic missile submarines along with swarms of UAVs to hunt and attack enemy mobile ICBM launchers. Even the mere
existence of such weapons could jeopardize stability by encouraging an opponent in a crisis to launch a nuclear first strike
rather than risk losing its deterrent capability to an enemy attack. Such an environment would erode the underlying logic of
today’s strategic nuclear arms control measures, that is, the preservation of deterrence and stability with ever-diminishing
numbers of warheads and launchers, and would require new or revised approaches to war prevention and disarmament.11
Hypersonic Weapons Proposed hypersonic weapons, which can travel at a speed of more than five times the speed of sound, or
more than 5,000 kilometers per hour, generally fall into two categories: hypersonic glide vehicles and hypersonic cruise
missiles, either of which could be armed with nuclear or conventional warheads. With hypersonic glide vehicle systems, a
rocket carries the unpowered glide vehicle into space, where it detaches and flies to its target by gliding along the upper
atmosphere. Hypersonic cruise missiles are self-powered missiles, utilizing advanced rocket technology to achieve
extraordinary speed and maneuverability. No such munitions currently exist, but China, Russia, and the United States are
developing hypersonic weapons of various types. The U.S. Defense Department, for example, is testing the components of a
hypersonic glide vehicle system under its Tactical Boost Glide project and recently awarded a $928 million contract to
Lockheed Martin Corp. for the full-scale development of a hypersonic air-launched cruise missile, tentatively called the
Hypersonic Conventional Strike Weapon.12 Russia, for its part, is developing a hypersonic glide vehicle it calls the Avangard,
which it claims will be ready for deployment by the end of 2019, and China in August announced a successful test of the Starry
Sky-2 hypersonic glide vehicle described as capable of carrying a nuclear weapon.13 Whether armed with conventional or
nuclear warheads, hypersonic weapons pose a variety of challenges to international stability and arms control. At the heart of
such concerns is these weapons’ exceptional speed and agility. Anti-missile systems that may work against existing threats
might not be able to track and engage hypersonic vehicles, potentially allowing an aggressor to contemplate first-strike
disarming attacks on nuclear or conventional forces while impelling vulnerable defenders to adopt a launch-on-warning
policy.14 Some analysts warn that the mere acquisition of such weapons could “increase the expectation of a disarming
attack.” Such expectations “encourage the threatened nations to take such actions as devolution of command-and-control of
strategic forces, wider dispersion of such forces, a launch-on-warning posture, or a policy of preemption during a crisis.” In
short, “hypersonic threats encourage hair-trigger tactics that would increase crisis instability.”15 The development of
hypersonic weaponry poses a significant threat to the core principle of assured retaliation, on which today’s nuclear strategies
and arms control measures largely rest. Overcoming that danger will require commitments on the part of the major powers
jointly to consider the risks posed by such weapons and what steps might be necessary to curb their destabilizing effects. The
development of hypersonic munitions also introduces added problems of proliferation. Although the bulk of research on such
weapons is now being conducted by China, Russia, and the United States, other nations are exploring the technologies involved
and eventually could produce such munitions on their own. In a world of widely disseminated hypersonic weapons,
vulnerable states would fear being attacked with little or no warning time, possibly impelling them to conduct pre-emptive
strikes on enemy capabilities or to commence hostilities at the earliest indication of an incoming missile. Accordingly, the
adoption of fresh nonproliferation measures also belongs on the agenda of major world leaders.16 Cyberattack Secure
operations in cyberspace, the global web of information streams tied to the internet, have become essential for the continued
functioning of the international economy and much else besides. An extraordinary tool for many purposes, the internet is also
vulnerable to attack by hostile intruders, whether to spread misinformation, disrupt vital infrastructure, or steal valuable data.
Most of those malicious activities are conducted by individuals or groups of individuals seeking to enrich themselves or sway
public opinion. It is increasingly evident, however, that governmental bodies, often working in conjunction with some of those
individuals, are employing cyberweapons to weaken their enemies by sowing distrust or sabotaging key institutions or to
bolster their own defenses by stealing militarily relevant technological know-how. Moreover, in the event of a crisis or
approaching hostilities, cyberattacks could be launched on an adversary’s early-warning,
communications, and command and control systems, significantly impairing its response
capabilities.17 For all these reasons, cybersecurity, or the protection of cyberspace from malicious attack, has become a
major national security priority.18
2NR
You vote on Kant. We win the fw debate.

The standard is acting on universalizable reasons

1. The meta-ethic is procedural moral realism, which rules out util because util assumes
substantive moral realism. Substantive moral realism assumes that there is
something with a "to-be-pursuedness" built into it, and nothing like that exists. Even
EXTINCTION doesn't work like that
2. They concede fw, so don't let them read any new arguments; they already read fw and
should have read ext, which would be harder to negate

The Lee cards
