
OTEC Supplement

Aff
Politics Link Turns
OTEC Bipartisan
OTEC is bipartisan in Congress
Akaka, U.S. Senator from Hawaii, 7
[Daniel Kahikina, June 2007, Akaka U.S. Senate website, Senate Passes Progressive Energy Policy,
http://akaka.senate.gov/public/index.cfm?FuseAction=Newsletters.Home&month=6&year=2007&relea
se_id=1779]
On June 21, 2007, sustainable energy policy took a giant leap forward when the United States Senate
passed H.R. 6, the Renewable Fuels, Consumer Protection, and Energy Efficiency Act of 2007. This
comprehensive measure, passed with strong bipartisan support, will grow our economy, strengthen
our national security, and protect our environment. If enacted into law, it will put us on a path toward
reducing our reliance on oil, foreign and domestic, by increasing the production and supply of renewable
fuels and reducing the amount of energy we use, both at home and at work.
For the first time since 1975, our bill raises standards for new cars and trucks from 25 to 35 miles per
gallon by 2020. That will save us up to one billion gallons of gasoline every day.
The bill also reduces crude oil consumption by more than 10 percent over the next 15 years by
producing more renewable fuels domestically. There are numerous incentives that will benefit Hawaii
farmers who are interested in growing the feedstock necessary for the production of cellulosic ethanol,
providing a boost to our agricultural industry.
New energy efficiency standards for lighting, appliances and water use, will save electricity and half a
trillion gallons of water every year. Because government should lead by example, we will also
dramatically improve the energy efficiency of federal buildings and vehicles, saving billions of American
taxpayer dollars while helping our planet.
The bill protects consumers from price gouging and supply manipulation during times of national crisis,
and also invests in technologies that will drive our energy future, like carbon capture and storage which
holds the hope of containing carbon emissions from existing power sources before they ever reach the
air.
I am also pleased that an amendment I introduced which authorized $50 million to establish a research
and development program to promote marine and hydrokinetic sources of renewable energy was
included in the bill. The funding will allow researchers to study innovative ways to harness energy
from our ocean's waves, tides, and currents, as well as free flowing water in rivers, lakes and streams,
and ocean thermal energy conversion. It establishes up to six National Ocean Energy Research Centers
which will work with institutions such as the University of Hawaii to conduct research, development,
demonstration and testing of these new ocean energy technologies. This is an exciting prospect,
especially for Hawaii and the coastal regions of the mainland as studies have shown huge potential in
this vast source of renewable energy.
GOP Supports Renewables
Majority of Republicans support renewables and climate mitigation; the GOP is
changing
Hower, Sustainable Brands, Associate Editor, 13
[Mike Hower, Triple Pundit, Contributor, 4-04-13, Triple Pundit, Ending the Debate: Most Republicans
Actually Support Increased Renewable Energy Use, http://www.triplepundit.com/2013/04/breaking-
debate-republicans-actually-support-increased-renewable-energy/, accessed 7-07-13]
Apparently, the debate over global warming is not as big as the hard-liners at Fox News and on Capitol
Hill would lead us to believe. A recent study released by Yale and George Mason University found that
nearly 80 percent of Republicans and Republican-leaning Independents support increasing renewable
energy use and more than 60 percent believe the United States should take action to address climate
change.
Interestingly, the report also found that only a third of Republican respondents agree with the GOP's
position on climate change, which has changed dramatically since 2008.
AT: Oil Lobbies
Renewable lobbies are fighting and winning against oil lobbying
Raymundo, Latin Post columnist, 14
[Shawn, 4-20-14, Latin Post, Koch Brothers, Conservatives & Oil Companies Lobby States Using
Renewable Energy Sources: Alternative, Solar Power And Environmentalism Gaining Popularity,
http://www.latinpost.com/articles/10814/20140420/koch-brothers-conservatives-oil-companies-lobby-
states-using-renewable-energy-sources-alternative-solar-power-and-environmentalism-gaining-
popularity.htm, accessed 7-9-14, AKS]
As more and more states are beginning to utilize solar energy and adapt to other clean green energy
solutions, conservative lobby groups and oil tycoons have aggressively started pushing back against
alternative energy.
The Koch brothers, anti-tax activist Grover Norquist and a number of powerful companies in the
nation have started running campaign ads in Arizona, Kansas and North Carolina that paint renewable
energy as a greedy bad guy, according to the Los Angeles Times.
With the help of solar power companies, environmentalists are battling back against big oil companies
and their lobbyists over states that have implemented two types of energy policies: net metering and
renewable energy requirements.
Net metering allows homeowners or businesses that have solar panels installed on roofs to sell back
extra electricity to the power grid at attractive rates. The other policy requires utility companies to
generate at least 10 percent of renewable energy, the Times reported.
The majority of states in the U.S. have begun operating under at least one of the two policies if not
both. The only states to not use net metering or generate power from renewable energy are Alabama,
Idaho, Mississippi and Tennessee.
South Dakota and Texas are the only two states without metering programs but generate a
percentage of their power from renewable energy, according to the Times.


AT: ASW DA/DOD CP

Litany of alt causes to ASW: China and Iran development, faltering tech, and natural
ocean variability
Keller, writer for Military and Aerospace, 12
[John, December 2012, U.S. anti-submarine capability is eroding, and it may be too late to turn it
around, http://www.militaryaerospace.com/blogs/aerospace-defense-blog/2012/12/u-s-anti-
submarine-capability-is-eroding-and-it-may-be-too-late-to-turn-it-around.html, 7/18/14, TYBG]
Here's a not-so-comforting thought. The U.S. Navy's anti-submarine warfare (ASW) skills are getting
rusty during the same period that quiet submarine technology in China and Iran is improving at a
noticeable rate.
I wish that were the only bad news on the submarine warfare front, but it isn't. We have U.S. ASW
capability going backward, submarine capability of U.S. strategic adversaries going forward, and U.S.
Navy capability as a whole in decline, according to a top Navy official.
"We're long past the point of doing more with less," says Under Secretary of the Navy, Robert Work.
"We are going to be doing less with less in the future."
Work was quoted in an AOL blog by Sydney J. Freedberg Jr. headlined "U.S. Military Will Have To Do 'Less
With Less': Hill Must Vote On Money."
Freedberg wasn't finished there, however. "The capacity of the US and allied navies to hunt enemy
submarines has suffered even as potential adversaries like China and Iran have built up their sub
fleets," he blogged in a piece headlined Navy's Sub-Hunting Skills Declined While China, Iran Built More
Submarines .
The subtle message here is that vital U.S. Navy ASW capability is eroding due to a longtime emphasis on
counter insurgency, and with strong prospects for a dwindling future Navy budget, it might already be
too late to turn around the ASW decline.
Yikes.
You can talk about stealth aircraft technology all you want, but there's really only one kind of military
stealth vehicle on the planet, and that's the submarine.
Stealth aircraft might have low radar cross sections, but they still can be seen with the naked eye, and
heard from long distances. Aircraft, no matter what their futuristic shapes, have a difficult time hiding
from ever-more-sophisticated electro-optical sensors.
Land vehicles? They still have substantial infrared signatures, and they can be seen and heard just like
aircraft. Surface ships? Please. Big metal objects against a cool, flat surface. Not much ability to hide
there.
But submarines, they're a different story. It's true that ASW technology is advancing throughout the
world, and today's advanced diesel-electric submarines are as close to silent as you can get.
The ocean, however, is a difficult and unpredictable environment in which to hunt submerged vessels.
Water columns at different depths, water densities, and salinity levels often can be a difficult, if not
impossible, barrier to even the most sophisticated sonar sensors.
Sophisticated U.S. submarines for decades have enjoyed the ability to hide from almost everyone.
Today, however, it's getting tougher to do as adversaries make up technological ground quickly.

Perm: establish a DOE and DOD formal liaison for research and development of ocean
thermal energy conversion; this is the conclusion to their author's article
Shapiro, Energy Consultant to TRW and Tracor, 81
[David I. Shapiro, Energy and Sea Power: Challenge for the Decade, Military Applications and
Implications for Ocean Thermal Energy Conversion Systems, p. 129, 7/18/14, TYBG]
Major recommendations resulting from this preliminary assessment of military applications and
implications of OTEC facility and plantship equipment and platforms are:
1. Establishment of Department of Defense - Department of Energy Liaison - Formal liaison should be
established between the Department of Defense (DOD) and the Department of Energy (DOE) on a
weekly basis, at least initially. This would require a part-time DOD billet at DOE to be filled by a senior-
level U.S. Navy officer. In this manner, a current information base regarding rapidly changing OTEC
technology, proposed OTEC facility and plantship deployments, and international developments can
be maintained at DOD. Furthermore, the liaison, if effectively instituted, could provide the foundation
for informed responses required by the Secretary of Defense (together with other affected Executive
Branch secretaries) by Sections 101 and 102 of Public Law 96-320, the Ocean Thermal Energy
Conversion Act of 1980.

OTEC can produce 16.5 million kg of hydrogen and sequester 726 thousand tonnes of
CO2 per year
Baird, University of Calgary, B.S. in Chemistry 14 (Jim, University of Calgary, B.S. in Chemistry
July 10, 2014, Owner at Global Warming Mitigation Method, Carbon Sequestering Energy Production
http://theenergycollective.com/jim-baird/423076/carbon-sequestering-energy-production Accessed:
July 18, 2014, KS)
According to a 2005 NREL [study] it takes about 52.5 kWh of electricity to produce 1 kg of hydrogen through
electrolysis. If my calculations are correct a 100 MW OTEC plant could therefore produce about
16,500,000 kilograms of hydrogen a year and since this is equivalent to 16,500,000 gallons of gasoline
at current price of roughly $3.70/gallon, the value of the hydrogen generated would be about $62
million.
With every mole of hydrogen produced however you would also generate about 1 mole of Na, if you
electrolyzed the desalination concentrate, and this in turn would precipitate 1 mole of CO2 out of the
ocean and/or atmosphere. Since a mole of CO2 weighs 44 grams (*) you could therefore sequester
about 726,000 tonnes of CO2 per year with a 100 MW OTEC plant.
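
The hydrogen arithmetic in the card can be reproduced directly. Below is a minimal sketch in Python, assuming the plant runs continuously at its full 100 MW rating all year, that electrolysis needs 52.5 kWh per kilogram of hydrogen (the NREL figure cited above), and that one kilogram of hydrogen is valued like one gallon of gasoline at $3.70; the capacity-factor and valuation assumptions are implied by the card, not independent data.

# Minimal arithmetic sketch of the hydrogen figures in the Baird card.
# Assumptions: continuous full-power operation all year; 52.5 kWh per kg of H2;
# 1 kg of H2 valued like 1 gallon of gasoline at $3.70.
PLANT_MW = 100
HOURS_PER_YEAR = 24 * 365                                # 8,760 hours
KWH_PER_KG_H2 = 52.5
GASOLINE_PRICE_PER_GALLON = 3.70

annual_kwh = PLANT_MW * 1000 * HOURS_PER_YEAR            # 876,000,000 kWh per year
kg_h2_per_year = annual_kwh / KWH_PER_KG_H2              # roughly 16.7 million kg per year
value_usd = kg_h2_per_year * GASOLINE_PRICE_PER_GALLON   # roughly $62 million per year

print(f"{kg_h2_per_year:,.0f} kg of H2 per year, worth about ${value_usd:,.0f}")

Run as written, the sketch gives about 16.7 million kilograms of hydrogen and roughly $62 million in value, consistent with the card's 16.5 million kg and $62 million figures.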

Carbon-neutral energy sources like OTEC will be net carbon negative; feasibility of
aquaculture proves
Jovine, inventor, 8 (Raffael, inventor, Oct 29, 2008, Patent Application: US 8278082 B2, Method of
carbon sequestration http://www.google.com/patents/EP2364201A1?cl=en Accessed: July 18, 2014,
KS)

The method makes use of an OTEC process that has been modified to make it suitable for use in
combination with large-scale land-based aquaculture of coccolithophorid algae for carbon
sequestration. OTEC is a method for generating electricity that utilizes the temperature difference that
exists between deep and shallow waters. The use of OTEC is currently limited to particular geographical
areas where the temperature difference between the warm surface water and the cold deep seawater is
large, ideally at least 20 °C. This temperature difference only really occurs in equatorial waters, defined
as lying between 10° N and 10° S, which are adequate.3 In these areas, coastal land use is intensive. In
contrast, in areas where there is significant availability of coastal land that lies unused, the surface
water is cool and, thus, there is an insufficient temperature differential for OTEC to function
efficiently.
The inventor hereof has realized that by heating seawater using solar energy under controlled
conditions, a sufficient temperature difference can be established between this heated water and
cooler, nutrient-rich deep seawater to allow OTEC to be used in regions where land is unutilized
and/or available. This modified OTEC system can thus be used in regions with vast areas of arid, desert
or under-utilized or non-productive coastal land (such as in the Gulf States, the Californian Peninsula in
Mexico, Australia, Western, Northern and Southern Africa and Chile), which are ideal for large-scale
land-based aquaculture. Because this land is commonly not used for economic benefit, the economics of
this modified system become practical for carbon sequestration.
This solves several problems from which land-based aquaculture of marine algae suffers, at least in
the context of carbon sequestration. Of course, pumping nutrient-rich seawater onto land requires
large amounts of energy. Given that the purpose of these aquaculture preserves is to provide a net
carbon sink, any CO2 produced in order to supply the water for the aquaculture must be more than
offset within the CO2 that is sequestered by the algae. Thus, only renewable, or carbon neutral,
sources of energy are suitable in this context. Examples of renewable energy sources include solar
energy, high altitude and ground level wind power, tidal, hydroelectric, and biomass fueled power
generation. Other carbon neutral, although not renewable energy sources, such as nuclear or
geothermal power may also be suitable to generate the power necessary to pump the large quantities
of water for the aquaculture preserves. Both nuclear and geothermal power have the advantage of
creating heated water that could be exploited by OTEC.


NEG: Compiled Frontlines

Aquaculture

1NC
Aquaculture Bad
Aquaculture fails and is environmentally hazardous: significant waste, diseases,
chemical run-off, and invasive species mean the technology doesn't protect fish
stocks
Mother Jones, news organization, 6
[March/April, Mother Jones, Is Aquaculture the Answer?,
http://www.motherjones.com/politics/2006/03/aquaculture-answer, accessed 7/2/14, TYBG]
Q: Can aquaculture have a negative impact on the environment? A: Yes, very much so. Organic wastes
from fish cages in public waters can have a significant effect on the surrounding water quality. Waste
from fish farms can include: fecal matter and uneaten food, along with chemicals used in farming such
as pesticides, herbicides, and antibiotics. Because fish and other organisms are kept in close proximity,
they are more likely to breed diseases. All of these can impact the surrounding environment. In
addition, many "crops" are transported from one region to another for farming, which can introduce
new organisms and diseases into the surrounding area. A few examples: In New Brunswick, despite the
fact that salmon farming sites occupy less than 0.01 percent of the coastal area in their region, scientists
have found significant degradation of the water in the surrounding area. A: lowered oxygen levels,
replacement of native seaweeds with invasive species, algal blooms, reduction of wild species, and a
loss of nursery habitat for wild fish. In the United States, a pathogen that attacks the eastern oyster
was likely introduced into the U.S. Atlantic and Gulf coasts through aquaculture. There are also
concerns that fish will escape from the fish farms and either breed with wild fish, affecting genetic
diversity and decreasing their survivability, or else compete for food and spread diseases. Over the
past decade, nearly one million non-native Atlantic salmon have escaped from fish farms and
established themselves in streams of the Pacific Northwest. Q: Are some fish farms dependent on wild
fisheries for food? A: Yes. Many farmed fish species are carnivorous, so other wild fish species must be
harvested to maintain the farm. Carnivorous species of fish, such as salmon, trout, tuna, grouper, and
cod, require a protein rich, high-energy diet. Herring are used to make salmon feed, so they have been
heavily harvested in the wild, putting pressure on the many other wild species that depend on herring
for food. So fish farms can end up leading to over-fishing of the very wild fisheries they are supposed
to help protect.
Mangrove DA
Mangroves are on the verge of extinction; loss kills fish populations
Polidoro et al., professor of Environmental Chemistry @ ASU, 10
[Beth A., Kent E. Carpenter, Lorna Collins, Norman C. Duke, Aaron M. Ellison, Joanna C. Ellison, Elizabeth
J. Farnsworth, Edwino S. Fernando, Kandasamy Kathiresan, Nico E. Koedam, Suzanne R. Livingstone,
Toyohiko Miyagi, Gregg E. Moore, Vien Ngoc Nam, Jin Eong Ong, Jurgenne H. Primavera, Severino G.
Salmo III, Jonnell C. Sanciangco, Sukristijono Sukardjo, Yamin Wang, Jean Wan Hong Yong, PLOSONE,
04/08/10, The Loss of Species: Mangrove Extinction Risk and Geographic Concern,
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010095, 07/01/14, PD]
Mangrove species are uniquely adapted to tropical and subtropical coasts, and although relatively low in
number of species, mangrove forests provide at least US $1.6 billion each year in ecosystem services
and support coastal livelihoods worldwide. Globally, mangrove areas are declining rapidly as they are
cleared for coastal development and aquaculture and logged for timber and fuel production. Little is
known about the effects of mangrove area loss on individual mangrove species and local or regional
populations. To address this gap, species-specific information on global distribution, population status,
life history traits, and major threats were compiled for each of the 70 known species of mangroves. Each
species' probability of extinction was assessed under the Categories and Criteria of the IUCN Red List of
Threatened Species. Eleven of the 70 mangrove species (16%) are at elevated threat of extinction.
Particular areas of geographical concern include the Atlantic and Pacific coasts of Central America,
where as many as 40% of mangrove species present are threatened with extinction. Across the globe,
mangrove species found primarily in the high intertidal and upstream estuarine zones, which often
have specific freshwater requirements and patchy distributions, are the most threatened because they
are often the first cleared for development of aquaculture and agriculture. The loss of mangrove
species will have devastating economic and environmental consequences for coastal communities,
especially in those areas with low mangrove diversity and high mangrove area or species loss. Several
species at high risk of extinction may disappear well before the next decade if existing protective
measures are not enforced. Introduction The importance of mangroves for humans and a variety of
coastal organisms has been well documented [1]–[7]. Mangrove forests are comprised of unique plant
species that form the critical interface between terrestrial, estuarine, and near-shore marine
ecosystems in tropical and subtropical regions. They protect inland human communities from damage
caused by coastal erosion and storms [8]–[11], provide critical habitat for a variety of terrestrial,
estuarine and marine species [5], [12]–[14], and serve as both a source and sink for nutrients and
sediments for other inshore marine habitats including seagrass beds and coral reefs [2], [15].
Mangrove species that form dense and often monospecific stands are considered foundation
species that control population and ecosystem dynamics, including fluxes of energy and nutrients,
hydrology, food webs, and biodiversity [16]. Mangroves have been widely reviewed [17] as supporting
numerous ecosystem services including flood protection, nutrient and organic matter processing,
sediment control, and fisheries. Mangrove forests are the economic foundations of many tropical
coastal regions [18] providing at least US$1.6 billion per year in ecosystem services worldwide [7]. It is
estimated that almost 80% of global fish catches are directly or indirectly dependant on mangroves
[1], [19]. Mangroves sequester up to 25.5 million tonnes of carbon per year [20], and provide more than
10% of essential organic carbon to the global oceans [21]. Although the economic value of mangroves
can be difficult to quantify, the relatively small number of mangrove species worldwide collectively
provide a wealth of services and goods while occupying only 0.12% of the world's total land area [22].
Turn: Aquaculture destroys mangroves and decreases fish stocks
Emerson, Supervising editor of Aquatic Sciences and Fisheries Abstracts and PhD in
Oceanography, 99
[Craig, Cambridge Scientific Abstracts, December, Aquaculture Impacts on the Environment,
http://www.csa.com/discoveryguides/aquacult/overview.php, 07/01/14, PD]
Nowhere are the negative impacts on the natural environment more apparent than with shrimp
farming and the associated destruction of mangrove forests22. In Asia, over 400,000 hectares of
mangroves have been converted into brackishwater aquaculture for the rearing of shrimp. Farmed
shrimp boost a developing country's foreign exchange earnings, but the loss of sensitive habitat is
difficult to reconcile. Tropical mangroves are analogous to temperate salt marshes, a habitat critical to
erosion prevention, coastal water quality, and the reproductive success of many marine organisms.
Mangrove forests have also provided a sustainable and renewable resource of firewood, timber, pulp,
and charcoal for local communities. To construct dyked ponds for shrimp farming, these habitats are
razed and restoration is extremely difficult. Unfortunately, shrimp ponds are often profitable only
temporarily as they are subject to disease and to downward shifts in the shrimp market. Growing
political pressure in western countries may restrict the shrimp market in response to consumers'
avoidance of environmentally-unfriendly products. More significantly, Japan's economy is experiencing
difficulty at present, and Japan is the world's largest market for shrimp; when the market falls, ponds
are abandoned. A return to traditional fishing is not always possible because the lost mangroves no
longer serve as nursery areas which are critical for the recruitment of many wild fish stocks.
Unemployment prospects cannot always balance short-term gains. It is clear that socio-economic
effects are as important as pollution and ecological damage when evaluating the sustainability of
aquaculture.
Eutro Turn
Eutrophication and resulting algal blooms turn the case: toxic food,
increased disease transmission, and decreased production
Emerson, Ph.D in Oceanography from Dalhousie University, 99
[Craig, 12-99, CSA, Aquaculture Impacts on the Environment,
http://www.csa.com/discoveryguides/aquacult/overview.php, 6-25-14, IC]
An increasingly significant effect of intensive fish culture is eutrophication of the water surrounding
rearing pens or the rivers receiving aquaculture effluent. Fish excretion and fecal wastes combine with
nutrients released from the breakdown of excess feed to raise nutrient levels well above normal,
creating an ideal environment for algal blooms to form. To compound the problem, most feed is
formulated to contain more nutrients than necessary for most applications. In Scotland, an estimated
50,000 tonnes of untreated and contaminated waste generated from cage salmon farming goes directly
into the sea, equivalent to the sewage waste of a population of up to three quarters of Scotland's
population7. Once the resulting algal blooms die, they settle to the bottom where their decomposition
depletes the oxygen. Before they die, however, there is the possibility that algal toxins are produced.
Although any species of phytoplankton can benefit from an increased nutrient supply, certain species
are noxious or even toxic to other marine organisms and to humans. The spines of some diatoms (e.g.
Chaetoceros concavicornis) can irritate the gills of fish, causing decreased production or even death8.
More importantly, blooms ("red tides") of certain species such as Chattonella marina often produce
biological toxins that can kill other organisms. Neurotoxins produced by several algal species can be
concentrated in filter-feeding bivalves such as mussels and oysters, creating a serious health risk to
people consuming contaminated shellfish (e.g. paralytic shellfish poisoning9). Fish is low in fat and
considered a healthy alternative to other meats, but consumers cannot ignore the potential health
risks of cultured species, just as they must not ignore the risks associated with terrestrial agriculture. In
addition to shellfish contaminated with toxic algae, cultured seafood can pose additional concerns from
disease transmission. Most fish pathogens are not hazardous to humans, but some fish pathogens such
as Streptococcus bacteria can infect humans10. High levels of antibiotics and genetically-engineered
components in fish feed (e.g. soya additives) can also pose risks. The challenge for regulatory agencies
like the Food & Drug Administration in the United States is to ensure that these risks are "acceptable".
Red Tide Turn
Eutrophication and resulting algal blooms turn the case: toxic food,
increased disease transmission, and decreased production
Emerson, Ph.D in Oceanography from Dalhousie University, 1999
[Craig, 12-99, CSA, Aquaculture Impacts on the Environment,
http://www.csa.com/discoveryguides/aquacult/overview.php, 6-25-14, IC]
An increasingly significant effect of intensive fish culture is eutrophication of the water surrounding
rearing pens or the rivers receiving aquaculture effluent. Fish excretion and fecal wastes combine with
nutrients released from the breakdown of excess feed to raise nutrient levels well above normal,
creating an ideal environment for algal blooms to form. To compound the problem, most feed is
formulated to contain more nutrients than necessary for most applications. In Scotland, an estimated
50,000 tonnes of untreated and contaminated waste generated from cage salmon farming goes directly
into the sea, equivalent to the sewage waste of a population of up to three quarters of Scotland's
population7. Once the resulting algal blooms die, they settle to the bottom where their decomposition
depletes the oxygen. Before they die, however, there is the possibility that algal toxins are produced.
Although any species of phytoplankton can benefit from an increased nutrient supply, certain species
are noxious or even toxic to other marine organisms and to humans. The spines of some diatoms
(e.g. Chaetoceros concavicornis) can irritate the gills of fish, causing decreased production or even
death8. More importantly, blooms ("red tides") of certain species such as Chattonella marina often
produce biological toxins that can kill other organisms. Neurotoxins produced by several algal species
can be concentrated in filter-feeding bivalves such as mussels and oysters, creating a serious health risk
to people consuming contaminated shellfish (e.g. paralytic shellfish poisoning9).
Fish is low in fat and considered a healthy alternative to other meats, but consumers cannot ignore the
potential health risks of cultured species, just as they must not ignore the risks associated with
terrestrial agriculture. In addition to shellfish contaminated with toxic algae, cultured seafood can pose
additional concerns from disease transmission. Most fish pathogens are not hazardous to humans, but
some fish pathogens such as Streptococcus bacteria can infect humans10. High levels of antibiotics and
genetically-engineered components in fish feed (e.g. soya additives) can also pose risks. The challenge
for regulatory agencies like the Food & Drug Administration in the United States is to ensure that these
risks are "acceptable".

AT: Algae
Algae fails; none of their studies consider the impacts of large-scale algae production.
Here's the only piece of evidence that does
Science Daily 2012
[October 24, Science Daily, National Academy of Sciences: Large-scale production of biofuels made from
algae poses sustainability concerns, http://www.sciencedaily.com
/releases/2012/10/121024133413.htm, CV, 7-8-14, JY]
Oct. 24, 2012 -- Scaling up the production of biofuels made from algae to meet at least 5 percent --
approximately 39 billion liters -- of U.S. transportation fuel needs would place unsustainable
demands on energy, water, and nutrients, says a new report from the National Research Council.
However, these concerns are not a definitive barrier for future production, and innovations that would
require research and development could help realize algal biofuels' full potential. Biofuels derived from
algae and cyanobacteria are possible alternatives to petroleum-based fuels and could help the U.S. meet
its energy security needs and reduce greenhouse gas emissions, such as carbon dioxide (CO2). Algal
biofuels offer potential advantages over biofuels made from land plants, including algae's ability to grow
on non-croplands in cultivation ponds of freshwater, salt water, or wastewater. The number of
companies developing algal biofuels has been increasing, and several oil companies are investing in
them. Given these and other interests, the National Research Council was asked to identify
sustainability issues associated with large-scale development of algal biofuels. The committee that
wrote the report said that concerns related to large-scale algal biofuel development differ depending on
the pathways used to produce the fuels. Producing fuels from algae could be done in many ways,
including cultivating freshwater or saltwater algae, growing algae in closed photobioreactors or open-
pond systems, processing the oils produced by microalgae, or refining all parts of macroalgae. The
committee's sustainability analysis focused on pathways that to date have received active attention.
Most of the current development involves growing selected strains of algae in open ponds or closed
photobioreactors using various water sources, collecting and extracting the oil from algae or collecting
fuel precursors secreted by algae, and then processing the oil into fuel. The committee pointed out
several high-level concerns for large-scale development of algal biofuel, including the relatively large
quantity of water required for algae cultivation; magnitude of nutrients, such as nitrogen,
phosphorus, and CO2, needed for cultivation; amount of land area necessary to contain the ponds
that grow the algae; and uncertainties in greenhouse gas emissions over the production life cycle.
Moreover, the algal biofuel energy return on investment would have to be high, meaning more energy
would have to be produced from the biofuels than what is required to cultivate algae and convert them
to fuels. The committee found that to produce the amount of algal biofuel equivalent to 1 liter of
gasoline, between 3.15 liters to 3,650 liters of freshwater is required, depending on the production
pathway. Replenishing water lost from evaporation in growing systems is a key driver for use of
freshwater in production, the committee said. In addition, water use could be a serious concern in an
algal biofuel production system that uses freshwater without recycling the "harvest" water. To produce
39 billion liters of algal biofuels, 6 million to 15 million metric tons of nitrogen and 1 million to 2
million metric tons of phosphorus would be needed each year if the nutrients are not recycled, the
report says. These requirements represent 44 percent to 107 percent of the total nitrogen use and 20
percent to 51 percent of the total phosphorus use in the U.S. However, recycling nutrients or utilizing
wastewater from agricultural or municipal sources could reduce nutrient and energy use, the committee
said. Another resource that could limit the amount of algal biofuels produced is land area and the
number of suitable and available sites for algae to grow. Appropriate topography, climate, proximity
to water supplies -- whether freshwater, inland saline water, marine water, or wastewater -- and
proximity to nutrient supplies would have to be matched carefully to ensure successful and
sustainable fuel production and avoid costs and energy consumption for transporting those resources
to cultivation facilities. If the suitable sites for growing algae are near urban or suburban centers or
coastal recreation areas, the price of those lands could be prohibitive. A national assessment of land
requirements for algae cultivation that takes into account various concerns is needed to inform the
potential amount of algal biofuels that could be produced economically in the U.S. One of the primary
motivations for using alternative fuels for transportation is reducing greenhouse gas emissions.
However, estimates of greenhouse gas emissions over the life cycle of algal biofuel production span a
wide range; some studies suggest that algal biofuel production generates less greenhouse gas
emissions than petroleum-based fuels while other studies suggest the opposite. These emissions
depend on many factors in the production process, including the amount of energy needed to
dewater and harvest algae and the electricity sources used. The committee emphasized that the crucial
aspects to sustainable development are positioning algal growth ponds close to water and nutrient
resources and recycling essential resources. With proper management and good engineering designs,
other environmental effects could be avoided, the committee said. Examples include releasing harvest
water in other bodies of water and creating algal blooms and allowing harvest water to seep into ground
water. For algal biofuels to contribute a significant amount of fuels for transportation in the future,
the committee said, research and development would be needed to improve algal strains, test
additional strains for desired characteristics, advance the materials and methods for growing and
processing algae into fuels, and reduce the energy requirements for multiple stages of production. To
aid the U.S. Department of Energy in its decision-making process regarding sustainable algal biofuel
development, the committee proposed a framework that includes an assessment of sustainability
throughout the supply chain, a cumulative impact analysis of resource use or environmental effects,
and cost-benefit analyses.
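
To put the committee's per-liter figures in absolute terms, here is a minimal sketch in Python that simply scales the ranges quoted above (3.15 to 3,650 liters of freshwater per liter of fuel) to the 39-billion-liter production scenario; the multiplication is illustrative only and uses no numbers beyond those reported in the card.

# Illustrative scaling of the National Research Council figures quoted above.
# Inputs are taken directly from the card: 39 billion liters of algal biofuel,
# and 3.15 to 3,650 liters of freshwater per liter of fuel produced.
BIOFUEL_LITERS = 39e9            # roughly 5 percent of U.S. transportation fuel
WATER_PER_LITER_LOW = 3.15       # best-case production pathway
WATER_PER_LITER_HIGH = 3650      # worst-case production pathway

water_low = BIOFUEL_LITERS * WATER_PER_LITER_LOW     # about 1.2e11 liters per year
water_high = BIOFUEL_LITERS * WATER_PER_LITER_HIGH   # about 1.4e14 liters per year

print(f"Freshwater demand: {water_low:.2e} to {water_high:.2e} liters per year")

Under the quoted ranges, the 39-billion-liter scenario implies on the order of 120 billion to 140 trillion liters of freshwater per year, depending on the production pathway.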
AT: Food Scarcity
New agricultural models prove no food scarcity will exist
Eisenstein, Analyst and Senior Writer, graduated from Yale University in 1989 with
a degree in Mathematics and Philosophy, 13
(Charles, 09-3-13, Resilience, Permaculture and The Myth Of Scarcity,
http://www.resilience.org/stories/2013-09-03/permaculture-and-the-myth-of-scarcity, accessed 7-2-14,
YLP)
Food scarcity, in fact, is a myth. In Sacred Economics I cite research showing that when it is done properly, organic growing
methods can deliver two to three times the yield of conventional methods. (Studies showing the opposite are
poorly constructed. Of course if you take two fields and plant each with a monocrop, then the one without pesticides will do worse than the
one with, but that isn't really what organic farming is.) Conventional agriculture doesn't seek to maximize yield per acre; it seeks
to maximize yield per unit of labor. If we had 10% of the population engaged in agriculture rather than the
current 1%, we could easily feed the country without petrochemicals or pesticides. It turns out, though, that my statistics are way
too conservative. The latest permaculture methods can deliver much more than just double or triple the yield of
conventional farming. I recently came across this article by David Blume chronicling his nine-year permaculture enterprise in California.
Running a CSA for 300-450 people on two acres of land, he achieved yields eight times what the Department of Agriculture says is possible per
square foot. He didn't do it by mining the soil either; soil fertility increased dramatically over his time there. When people project an
imminent food crisis based on population growth or Peak Oil, they take for granted the agricultural methods we practice today. Thus, while
the transitional period may involve temporary food shortages and real hardship, permaculture methods can easily
feed the peak world population of perhaps 10 or 11 billion we'll see by mid-century. It is true that the old, control-based methods
of agriculture are nearing the peak of their productive potential. Further investments in this kind of technology are bringing diminishing
marginal returns; witness the proliferation of Roundup-resistant weeds and the necessity of new kinds of herbicides to deal with them.
This parallels the situation with so many other kinds of control-based technology, whether in
medicine, in education, politics. We are indeed nearing the end of an era.
AT: Water Wars
Water wars promote cooperation; international law checks the impact
IRIN News, 4/22
[Analysis: Water and conflict, http://www.irinnews.org/report/99972/analysis-water-and-conflict,
7/2/14, TYBG]
In 2001 then UN Secretary-General Kofi Annan declared: "Fierce competition for fresh water may well become a source of conflict and wars in
the future." A year later he revised that position, saying water problems could be "a catalyst for cooperation." Delivered
after decades of water war threats that never materialized, Annan's conflicting statements hint at the complexity of understanding how
water and conflict interact. Ahead of a water security summit in Malaysia on 23 April called the 2014 Aid & International Development Forum,
IRIN talked to experts to learn more. "If you come at it analysing water like other resources, you'll see an absence of conflict in a lot of
situations where you might expect to see conflict," explained Janani Vivekananda, the environment, climate change and security manager at
London-based International Alert's peacebuilding issues programme. In fact, some scholars point to an ancient Babylonian
conflict 4,500 years ago as being the only true water war to have ever occurred. But when one fifth of the
world's population faces water scarcity and another 1.6 billion people live in countries where infrastructure is too weak to get water to where it
is needed, why is there not more violence over this precious resource? And how dire are warnings from the US Central Intelligence Agency such
as: "During the next 10 years, many countries will experience water problems - shortages, poor water quality, or floods - that will risk
instability and state failure, [and] increase regional tensions"? By way of an answer, experts point to unique characteristics
surrounding water: it is a natural resource that makes it both valuable and a challenge to control;
there is an international legal framework that encourages local cooperation; and there is also a
ubiquitous understanding that sustained water access is vital for life to continue.
AT: Seaweed
Global seaweed aquaculture is impossible: the oceans are too crowded and no
organizations exist to oversee their development
Costa-Pierce, University of New England Marine Science Center Director, 2012
[Barry A., 1-29-12, Ecological Aquaculture, EXPANSION OF SEAWEED AQUACULTURE IS NOT A
PANACEA: IT NEEDS ECOLOGICAL APPROACHES TO DEVELOPMENT AND AN ENLIGHTENED
GOVERNANCE SYSTEM TO BE AN IMPORTANT SOLUTION TO GLOBAL FOOD SECURITY,
http://ecologicalaquaculture.org/welcome/2012/01/29/expansion-of-seaweed-aquaculture-is-not-a-
panacea-it-needs-ocean-space-and-a-governance-system-to-be-an-important-solution-to-global-food-
security/, 6-25-14, IC]
Seaweed aquaculture is now being seen in a new light for good reasons, but again is being viewed as yet another new
panacea. It is neither new, nor is it a panacea. Scientists have well recognized all of its advantages versus land-based farming. The
development of alternative, socio-ecological approaches to the science and management of aquaculture has been formalized (see the
Ecosystem Approach to Aquaculture by the Food and Agriculture Organization (FAO) of the United Nations). However, large scale global
projections of seaweed farms of 180,000 square kilometers, the size of Washington state, and seaweeds
providing enough protein for the entire world population, or setting aside 3% of the world's oceans for seaweed farming to
meet world energy needs are fanciful, and get us nowhere. The governance systems of coastal states, and of the
ocean commons, are completely inadequate to handle these types of large scale seaweed aquaculture
developments. China is the world's largest producer of farmed seaweed (63% of global production), but just one massive seaweed farm
(located in Jiazhou Bay in Quingdao) accounts for almost half of global production (and is visible from space!). Needless to say, China has a very
unique governance system that is incompatible with that of most coastal nations. Overall, the governance systems to manage the
ocean commons for large scale seaweed developments do not exist (see Walljasper, 2010). Expansion of
seaweed aquaculture will be viewed by many ocean agencies and decision-makers as yet another new use of already crowded ocean
space, and will compete for that space with increased maritime traffic, energy and mining developments, just to
name a few, so rigorous spatial planning within a participatory governance system with plans for adaptive management will be
needed (see examples: the R.I. Ocean Special Area Management Plan and the Massachusetts Ocean Management Plan). In many parts of the
world, such participatory ocean planning and adaptive management processes do not exist, or are in their infancy.
Marine aquaculture can flourish on a crowded ocean planet only if such processes exist. If they do our greatest
opportunity may be within the jurisdiction of small island states with their vast coastlines and huge exclusive economic zones (EEZs). As H.E. Mr.
Peter Thomson, Permanent Representative of Fiji to the United Nations stated recently on behalf of the Alliance of Small Island States at the
Rio+20 Second Preparatory Committee Meeting in New York, "We are not small island nations, but large ocean nations."
AT: Food Prices
Internal market policies solve global agricultural price fluctuations
Hendrix, visiting fellow at the Peterson Institute for International Economics, is
Assistant Professor of Government at the College of William & Mary, 2011
[Cullen S. Hendrix, 7-11, "Markets vs. Malthus: Food Security and the Global Economy," Policy Brief
Number PB11-12, http://www.mi.iie.com/publications/pb/pb11-12.pdf, E. Liu]
Beginning with the Agricultural Act of 1949 in the United States and the Treaty of Rome in the European
Union, enhancing food self-sufficiency through programs of domestic subsidies has been a goal of
agricultural policy in the developed world. Japan's agricultural markets are characterized by even more
massive distortions. In spite of significant pressure during the Doha round of WTO negotiations, the
United States and European Union have been reticent to back away from significant domestic support
for agriculture, and Japanese liberalization has been glacial. In the midst of the 2007–08 crisis, Michel
Barnier, current EU Commissioner for Internal Market and Services and then French Minister of
Agriculture and Fisheries, defended the EU's policy on food self-sufficiency, arguing, "Food is not
televisions or cars. You can't leave all that to the laws of the market."11 While food sovereignty (or
reducing import dependency, at least) has been on the agenda in the United States and Europe for
decades, it is back in vogue in the developing world. In 1960, developing countries ran food trade
surpluses totaling $1 billion; by the beginning of the 21st century, defi- cits were the norm and 48 of 63
low income countries, and 45 of the 46 least developed countries, were net food importers. Speaking
this year at the World Social Forum in Dakar, former Brazilian president Luiz Inacio Lula da Silva said that
African nations should pursue a green revolution and move toward food self-sufficiency, noting,
"There can be no sovereignty without food sovereignty."12 During the price spike of 2007–08, over 85
percent of the 105 emerging and developing countries surveyed by the World Bank had taken some
policy measures to reduce the transmission of world food prices to domestic consumers. While these
measures included a reduction of import restrictions, they also included releasing food from reserves,
price controls and consumer subsidies, direct cash transfers, food-for-work programs, food rations or
stamps, and export restrictions, some of which, particularly export restrictions, have large market-
distorting effects (FAO 2008, World Bank 2008, 2009). Acute crises often call for extensive market
interventions. In the aftermath of these crises, however, renewed emphasis has been placed on food
sovereignty or food self-sufficiency as a durable policy goal by many developing countries. Even Qatar
and Saudi Arabia, two arid countries with extremely limited access to renewable water, announced
plans to become food self-sufficient through a mixture of increasing domestic production and leasing
farmland abroad. Qatar plans to increase cultivated land by over 140 percent in the next decade using
water from solar-powered desalinization plants.13 Olivier de Schutter, UN Special Rapporteur on the
Right to Food, recently called on the G20 to help developing countries reverse dependence on food
imports via producer subsidies and protected markets.14 There is some evidence that agricultural
protectionism is rising. Since 2007, global average import tariffs on maize, the main staple in much of
Latin American and Africa, have increased by 68 percent, while the percentage of duty free maize
imports has dropped by over a third (see figure 3). Tariffs for wheat, however, have seen secular
declines throughout the period.
Sea Colonies
1NC
Colonies fail: Laundry List
[] Don't solve extinction: nations can still launch nukes at the colonies; even if the
colonies are underwater, the aquaculture has to be above
Floating settlements fail: political, physical, and technological complications plague
development and cause costs to balloon
Lee, member of the Association of Professional Futurists and founder of Strategic
Foresight Investments, 04
[James H., February 2004, Journal of Futures Studies, Microtopias, http://www.jfs.tku.edu.tw/8-
3/A02.pdf, 06/29/14, PD]
Political considerations aside, there are also many purely physical challenges to creation of a floating
microtopia. The sea is a harsh and difficult environment for construction. Storms, waves and ocean
currents all pose potential design challenges. The ocean is largely a 3% solution of salt water, which has known
corrosive qualities. Biofouling from barnacles and local plant life can impair mobility and longevity for
any ocean-going craft. These issues can turn the creation of a floating paradise into a potentially
expensive proposition. Projects such as the Freedom Ship will cost billions of dollars to complete. Countless new country
projects have floundered due to inadequate funding. Unless these projects are made profitable (such as at ResidenSea),
very few of them will be funded as a matter of philanthropic principle. Furthermore, there is also the issue of financing. One
could hardly expect to apply for a 30-year mortgage backed by the Federal Housing Authority for an
experimental seastead. (How would it pass inspection?) This issue alone could place many of these alternative communities out of
consideration for many. Assuming that a microtopia is established, there is also the question of sustainability. A
microtopia may be responsible for the generation of its own food, energy, raw materials and
economic support. A "balance of energy is required such that a microtopia should rave a self-sufficient supply of energy, or else it
would have to rely upon out- side sources. Many plans for microtopias rely on OTEC technology (explained
earlier). While this is uniquely appropriate for oceanic colonies, it is still a largely theoretical
technology. Also, OTECs have some scalability issues. Large OTECs can exploit much greater thermal differential and
therefore generate significant levels of productivity. Unfortunately, initial start-up costs increase dramatically with size as
well. There are currently no operating OTEC plants. Several projects have treated OTEC as a "virtually free"
energy source, when it is in reality very expensive. Alternatives to this would be solar or wind energy, which are generally
competitive with traditional sources of energy only when located "off-grid."
[] The Buoyancy and Energy internal link concedes that sea colonies are built slowly
and one at a time, meaning by the time enough of the human population lives on the
VLFSs, their impacts would have already occurred
Aquaculture Fails
Aquaculture fails and is environmentally hazardous: significant waste, diseases,
chemical run-off, and invasive species mean the technology doesn't protect fish
stocks
Mother Jones, news organization, 6
[March/April, Mother Jones, Is Aquaculture the Answer?,
http://www.motherjones.com/politics/2006/03/aquaculture-answer, accessed 7/2/14, TYBG]
Q: Can aquaculture have a negative impact on the environment? A: Yes, very much so. Organic wastes from fish cages in public
waters can have a significant effect on the surrounding water quality. Waste from fish farms can
include: fecal matter and uneaten food, along with chemicals used in farming such as pesticides,
herbicides, and antibiotics. Because fish and other organisms are kept in close proximity, they are more likely to breed
diseases. All of these can impact the surrounding environment. In addition, many "crops" are transported
from one region to another for farming, which can introduce new organisms and diseases into the
surrounding area. A few examples: In New Brunswick, despite the fact that salmon farming sites occupy less than 0.01 percent of the
coastal area in their region, scientists have found significant degradation of the water in the surrounding area.
A: lowered oxygen levels, replacement of native seaweeds with invasive species, algal blooms,
reduction of wild species, and a loss of nursery habitat for wild fish. In the United States, a pathogen that
attacks the eastern oyster was likely introduced into the U.S. Atlantic and Gulf coasts through
aquaculture. There are also concerns that fish will escape from the fish farms and either breed with
wild fish, affecting genetic diversity and decreasing their survivability, or else compete for food and
spread diseases. Over the past decade, nearly one million non-native Atlantic salmon have escaped from fish farms and established
themselves in streams of the Pacific Northwest. Q: Are some fish farms dependent on wild fisheries for food? A: Yes. Many farmed fish
species are carnivorous, so other wild fish species must be harvested to maintain the farm.
Carnivorous species of fish, such as salmon, trout, tuna, grouper, and cod, require a protein rich, high-
energy diet. Herring are used to make salmon feed, so they have been heavily harvested in the wild, putting pressure on the
many other wild species that depend on herring for food. So fish farms can end up leading to over-
fishing of the very wild fisheries they are supposed to help protect.
Chamberland Solvency Args
a. Submarine collisions and sonar interference
Chamberland, Nuclear Engineer, 2007
[Dennis, Former United States Naval Officer and M.S. in Bioenvironmental engineering @ Oklahoma
State, Undersea Colonies: the future of permanent undersea settlements, page 184, JF]
But submarines are most famous for their conspicuous lack of windows and if they fail to update their
charts or the navigator is not paying attention, it could be a very bad day for everyone. The seas of the world
are full of submarines from many nations and to prevent undersea collisions, everybody has to be paying attention. It is likely
that all undersea colonies will have some form of acoustic warning device so that if a submarine passes within their range, they can activate it
immediately. This requires that the submariners of the world know what that noise means and that it therefore warns them away instead of
actually attracting them! The other hazard is exposure to the submarine's powerful sonar. Inside an undersea colony,
if a submarine is located feet away and pings the colony directly, the acoustic energy could cause
damage to the structure or harm the hearing of those inside. However, the undersea colony will be the noisiest thing in
the oceans and submarines will certainly hear it miles away. Hopefully they will understand the source of that noise by looking on their
corrected charts before they rush over to see what it is! Of all the nameless hazards, submarines worry me the most.
b. Hurricanes
Chamberland, Nuclear Engineer, 7
[Dennis, Former United States Naval Officer and M.S. in Bioenvironmental engineering @ Oklahoma
State, Undersea Colonies: the future of permanent undersea settlements, page 184, JF]
For colonies in hurricane or typhoon prone waters, these are the next biggest threat. As we have discussed in
previous chapters, these storms come with a mighty punch of hydrodynamic energy. It is very probable that
colonists in the path of a storm will be evacuated. In this event, the greatest threat to the colonists will be
their attitude. If they see themselves as permanent colonists with no intention of ever returning to
the surface, they may resist evacuation. As Mission Commander, I would require that all possible crew members submit to a
psychological evaluation to pre-determine any such mindsets. Permanence as a resident of the sea in no way means that they will never again
stick their heads above the surface. It means the Aquatican views the undersea dominion as their permanent home. It does not mean they will
never visit relatives, take vacations or in many cases take a shore based job and return home to Aquatica each day. And it most certainly does
not mean they cannot leave in the face of a hurricane! It is no more of a shame for an Aquatican to evacuate to
safety in a hurricane threat than any other land based citizen. A well placed hurricane or typhoon has
just as much potential to leave an undersea colony in shambles as it does any other city.
c. Oxygen depletion
Chamberland, Nuclear Engineer, 2007
[Dennis, Former United States Naval Officer and M.S. in Bioenvironmental engineering @ Oklahoma
State, Undersea Colonies: the future of permanent undersea settlements, page 223, JF]
An undersea habitat, whether it is on shore or under the sea, is a closed system. That means the very
second you close the door, you begin depleting your oxygen and creating a cloud of carbon dioxide
that you will have to re-breathe. This system is therefore a dangerous one by its most elemental
definition. I would strongly suggest you take the very same approach I always have when building an
undersea habitat: leave the windows and doors off until you are ready for the water.


Warming
1NC
No Warming
No warming: historical data disproves the climate change hypothesis and statistical
analysis disproves the predictive capability of climate models
Fyfe, Research Scientist with the Canadian Centre for Climate Modeling, et al, 13 [John,
with Nathan Gillett, Research Scientist with the Canadian Centre for Climate Modeling, and Francis Zwiers, Director of the Pacific Climate
Impacts Consortium and Adjunct Professor in the Dept. of Mathematics and Statistics of the University of Victoria, September, Overestimated
Global Warming Over the Past 20 Years, Nature Climate Change, Vol. 3, p. 767-769]
Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per
decade (95% confidence interval)1. This rate of warming is significantly slower than that simulated by the
climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered
trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5
models (see Supplementary Information). These models generally simulate natural variability including that associated
with the El NioSouthern Oscillation and explosive volcanic eruptions as well as estimate the combined response of
climate to changes in greenhouse gas concentrations, aerosol abundance (of sulphate, black carbon and organic
carbon, for example), ozone concentrations (tropospheric and stratospheric), land use (for example, deforestation) and solar
variability. By averaging simulated temperatures only at locations where corresponding observations exist, we find an
average simulated rise in global mean surface temperature of 0.30 0.02 C per decade (using 95% confidence
intervals on the model average). The observed rate of warming given above is less than half of this simulated rate ,
and only a few simulations provide warming trends within the range of observational uncertainty (Fig. 1a).
The inconsistency between observed and simulated global warming is even more striking for temperature trends
computed over the past fifteen years (19982012). For this period, the observed trend of 0.05 0.08 C per decade
is more than four times smaller than the average simulated trend of 0.21 0.03 C per decade (Fig. 1b). It is worth noting
that the observed trend over this period not significantly different from zero suggests a temporary hiatus in
global warming 24. The divergence between observed and CMIP5- simulated global warming begins in the early 1990s, as can be seen
when comparing observed and simulated running trends from 19702012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively).
The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the
CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the
slowdown in global warming over the past fifteen years. This interpretation is supported by statistical
tests of the null hypothesis that the observed and model mean trends are equal, assuming that either: (1) the models are exchangeable with
each other (that is, the truth plus error view); or (2) the models are exchangeable with each other and with the observations (see
Supplementary Information). Differences between observed and simulated 20-year trends have p values (Supplementary Information) that
drop to close to zero by 1993–2012 under assumption (1) and to 0.04 under assumption (2) (Fig. 2c). Here we note that the smaller the p value
is, the stronger the evidence against the null hypothesis. On this basis, the rarity of the 1993–2012 trend difference under assumption (1) is
obvious. Under assumption (2), this implies that such an inconsistency is only expected to occur by chance once in 500 years, if 20-year periods
are considered statistically independent. Similar results apply to trends for 1998–2012 (Fig. 2d). In conclusion, we reject the null hypothesis that
the observed and model mean trends are equal at the 10% level. One possible explanation for the discrepancy is that forced
and internal variation might combine differently in observations than in models. For example, the forced
trends in models are modulated up and down by simulated sequences of ENSO events, which are not expected to
coincide with the observed sequence of such events. For this reason the moderating influence on global
warming that arises from the decay of the 1998 El Niño event does not occur in the models at that time. Thus we
employ here an established technique to estimate the impact of ENSO on global mean temperature, and to incorporate the
effects of dynamically induced atmospheric variability and major explosive volcanic eruptions 5,6. Although these three
natural variations account for some differences between simulated and observed global warming, these differences do
not substantively change our conclusion that observed and simulated global warming are not in
agreement over the past two decades (Fig. 3). Another source of internal climate variability that may contribute to the
inconsistency is the Atlantic multidecadal oscillation7 (AMO). However, this is difficult to assess as the observed and simulated
variations in global temperature that are associated with the AMO seem to be dominated by a large and concurrent signal of presumed
anthropogenic origin (Supplementary Fig. S1). It is worth noting that in any case the AMO has not driven cooling over the
past 20 years. Another possible driver of the difference between observed and simulated global warming is increasing stratospheric
aerosol concentrations. Results from several independent datasets show that stratospheric aerosol
abundance has increased since the late 1990s, owing to a series of comparatively small tropical volcanic eruptions (ref. 8). Although
none of the CMIP5 simulations take this into account, two independent sets of model simulations
estimate that increasing stratospheric aerosols have had a surface cooling impact of about 0.07 °C per
decade since 1998 (refs 8, 9). If the CMIP5 models had accounted for increasing stratospheric aerosol, and had responded with the same surface
cooling impact, the simulations and observations would be in closer agreement. Other factors that contribute to the
discrepancy could include a missing decrease in stratospheric water vapour (ref. 10; whose processes are not well
represented in current climate models), errors in aerosol forcing in the CMIP5 models, a bias in the prescribed solar
irradiance trend, the possibility that the transient climate sensitivity of the CMIP5 models could be on average
too high (refs 11, 12), or a possible unusual episode of internal climate variability not considered above (refs 13, 14). Ultimately the
causes of this inconsistency will only be understood after careful comparison of simulated internal climate variability and climate model
forcings with observations from the past two decades, and by waiting to see how global temperature responds over the coming decades.
No Impact
Even if warming is real it doesn't cause extinction – the newest IPCC report concludes
with other independent studies that catastrophe is impossible because of slowing population
growth, energy efficiency, and innovation
Ridley, writer at the Financial Post, 14
[Matt, 6/19/14, The Financial Post, Junk Science Week: IPCC commissioned models to see if global
warming would reach dangerous levels this century. Consensus is no,
http://business.financialpost.com/2014/06/19/ipcc-climate-change-warming/, accessed 7/1/14, TYBG]
The debate over climate change is horribly polarized. From the way it is conducted, you would think that only two positions are possible: that
the whole thing is a hoax or that catastrophe is inevitable. In fact there is room for lots of intermediate positions, including the view I hold,
which is that man-made climate change is real but not likely to do much harm, let alone prove to be the
greatest crisis facing humankind this century. After more than 25 years reporting and commenting on this topic for various
media organizations, and having started out alarmed, that's where I have ended up. But it is not just I that hold this view. I share it with a very
large international organization, sponsored by the United Nations and supported by virtually all the world's governments: the
Intergovernmental Panel on Climate Change (IPCC) itself. The IPCC commissioned four different models of what might
happen to the world economy, society and technology in the 21st century and what each would mean
for the climate, given a certain assumption about the atmosphere's sensitivity to carbon dioxide.
Three of the models show a moderate, slow and mild warming, the hottest of which leaves the planet
just 2 degrees Centigrade warmer than today in 2081-2100. The coolest comes out just 0.8 degrees
warmer. Now two degrees is the threshold at which warming starts to turn dangerous, according to the scientific consensus. That is to say,
in three of the four scenarios considered by the IPCC, by the time my children's children are elderly, the earth will still not have experienced any
harmful warming, let alone catastrophe. But what about the fourth scenario? This is known as RCP8.5, and it produces
3.5 degrees of warming in 2081-2100. Curious to know what assumptions lay behind this model, I decided to look up the original papers
describing the creation of this scenario. Frankly, I was gobsmacked. It is a world that is very, very implausible. For a start,
this is a world of continuously increasing global population so that there are 12 billion on the
planet. This is more than a billion more than the United Nations expects, and flies in the face of the
fact that the world population growth rate has been falling for 50 years and is on course to reach zero
(i.e., stable population) in around 2070. More people mean more emissions. Second, the world is
assumed in the RCP8.5 scenario to be burning an astonishing 10 times as much coal as today, producing
50% of its primary energy from coal, compared with about 30% today. Indeed, because oil is assumed to have
become scarce, a lot of liquid fuel would then be derived from coal. Nuclear and renewable technologies contribute little, because of a slow
pace of innovation and hence fossil fuel technologies continue to dominate the primary energy portfolio over the entire time horizon of the
RCP8.5 scenario. Energy efficiency has improved very little. These are highly unlikely assumptions. With abundant
natural gas displacing coal on a huge scale in the United States today, with the price of solar power
plummeting, with nuclear power experiencing a revival, with gigantic methane-hydrate gas resources
being discovered on the seabed, with energy efficiency rocketing upwards, and with population
growth rates continuing to fall fast in virtually every country in the world, the one thing we can say
about RCP8.5 is that it is very, very implausible. Notice, however, that even so, it is not a world of catastrophic pain. The per
capita income of the average human being in 2100 is three times what it is now. Poverty would be history. So it's hardly Armageddon. But
there's an even more startling fact. We now have many different studies of climate sensitivity based on
observational data and they all converge on the conclusion that it is much lower than assumed by the
IPCC in these models. It has to be, otherwise global temperatures would have risen much faster than they have over the past 50 years.
As Ross McKitrick noted on this page earlier this week, temperatures have not risen at all now for more than 17 years.
With these much more realistic estimates of sensitivity (known as transient climate response), even RCP8.5 cannot produce
dangerous warming. It manages just 2.1 °C of warming by 2081-2100. That is to say, even if you pile crazy assumption
upon crazy assumption till you have an edifice of vanishingly small probability, you cannot even
manage to make climate change cause minor damage in the time of our grandchildren, let alone
catastrophe. That's not me saying this; it's the IPCC itself.

Modeling
The aff doesn't get modeled – this means they can win all of their other internal links,
but OTEC isn't sufficient to solve warming because structural international barriers
prevent a global shift
Klein, environmental writer, 14
[Ezra, 6/5/14, Vox, 7 reasons America will fail on climate change,
http://www.vox.com/2014/6/5/5779040/7-reasons-America-fail-global-warming, accessed 7/2/14,
TYBG]
In 2006, China passed the United States as the world's leading emitter of carbon dioxide. And their
emissions aren't projected to peak until 2030. They talk about capping carbon emissions, but as Plumer writes, there's
little reason to be optimistic. "So far, when China has had to choose between economic growth and
cutting its emissions, it usually chooses growth." At the same time, the hope is that India will continue to
develop and Indonesia will continue to develop and Brazil will continue to develop and Sub-Saharan
Africa will see growth surge. All that development is carbon intensive, at least using current technologies. If all goes
well for the world's poor it's going to go very badly for the planet. This is climate change's ugliest tradeoff: it pits our
most fundamental economic goal against our core environmental imperative. In the modern world,
better lives are more carbon-intensive lives. As people get richer they want to eat meat and drive cars and live in bigger
homes and travel to wonderful places. They know that America powered its growth with cheap fossil fuels and they
don't find it very credible when we warn them against doing the same, particularly when we're not
radically upending our lives and our economy to transition to renewable fuels. In a dangerous but brilliant
essay, Chris Hayes builds on work by Bill McKibben to put numbers to what we're asking countries and companies to do. The basic estimate is
that we can safely burn about 565 gigatons of carbon dioxide by midcentury. But experts think there's about 2,700 gigatons of carbon dioxide in
proven fossil fuel reserves and much more might yet be discovered. Consider what that means: The work of the climate
movement is to find a way to force the powers that be, from the government of Saudi Arabia to the
board and shareholders of ExxonMobil, to leave 80 percent of the carbon they have claims on in the
ground. That stuff you own, that property you're counting on and pricing into your stocks? You can't
have it. Given the fluctuations of fuel prices, it's a bit tricky to put an exact price tag on how much money all that unexcavated carbon would
be worth, but one financial analyst puts the price at somewhere in the ballpark of $20 trillion. So in order
to preserve a roughly habitable planet, we somehow need to convince or coerce the world's most
profitable corporations and the nations that partner with them to walk away from $20 trillion of
wealth. The nearest thing to an economic analogue in American history, Hayes argues, is abolitionism. But this isn't just about
America. This carbon is locked underground in China and Uzbekistan and Iran and Russia and Nigeria
and Venezuela. It's owned by energy companies, in some cases, but it's owned by nations in others.
The kind of international cooperation (and, perhaps, international redistribution) required to pass,
implement and verify viable carbon caps is completely unprecedented, at least outside of wartime.
Adaptation Solves
No extinction – we have time to adapt
Mendelsohn, Professor of Environmental Studies at Yale University, 9
(Robert O., Climate Change and Economic Growth,
http://www.growthcommission.org/storage/cgdev/documents/gcwp060web.pdf)
These statements are largely alarmist and misleading. Although climate change is a serious problem that deserves attention,
society's immediate behavior has an extremely low probability of leading to catastrophic
consequences. The science and economics of climate change is quite clear that emissions over the next few decades will lead to
only mild consequences. The severe impacts predicted by alarmists require a century (or two in the case of Stern
2006) of no mitigation. Many of the predicted impacts assume there will be no or little adaptation. The net economic impacts
from climate change over the next 50 years will be small regardless. Most of the more severe impacts will take more than a century or even a
millennium to unfold and many of these potential impacts will never occur because people will adapt. It is not at all
apparent that immediate and dramatic policies need to be developed to thwart long-range climate risks. What is needed are long-run balanced
responses.
Too Late
Too late to solve warming – we can't avoid it
Dickinson 9
(Pete, 26 August 2009, The Socialist Alternative, Global Warming: Can we avoid it?,
http://www.socialistalternative.org/news/article19.php?id=1142, 7/1/14, JA) Note: paper cited is by
Susan Solomon – atmospheric chemist working for the National Oceanic and Atmospheric Administration;
Gian-Kasper Plattner – Group, Institute of Geophysics and Planetary Physics, UCLA; and Reto Knutti –
Institute for Atmospheric and Climate Science, PhD
New research is claiming that concentrations of carbon dioxide (the main greenhouse gas, CO2) will remain high for
at least 1,000 years, even if greenhouse gases are eliminated in the next few decades. The climate scientists who
produced this work assert that the effects of global warming, such as high sea levels and reduced rainfall in certain areas, will also
persist over this time scale. (The findings are in a paper published in February in the Proceedings of the National Academy
of Sciences by researchers from the USA, Switzerland and France, www.pnas.org/cgi/doi/10.1073/pnas.0812721106 ) Most previous
estimates of the longevity of global warming effects, after greenhouse gases were removed, have ranged from a few decades to a century, so
this new analysis could represent a development with very serious implications, including political ones. For example, those
campaigning for action on climate change could be disheartened and climate sceptics could opportunistically say that
nothing should be done because it is now too late. The authors of the paper make various estimates of CO2 concentrations based on
the year emissions are cut, assumed to be from 2015 to 2050. They make optimistic assumptions, for instance, that
emissions are cut at a stroke rather than gradually, and that their annual rate of growth before cut-off is 2%, not the 3%
plus witnessed from 2000-05. They then estimate what the effects would be on surface warming, sea level rise and
rainfall over a 1,000-year period using the latest climate models. The results of the melting of the polar ice caps are not included in the
calculations of sea levels, only the expansion of the water in the oceans caused by the surface temperature increase so, as the authors point
out, the actual new sea level will be much higher. The best-case results for surface warming, where action is taken in 2015 to eliminate
emissions, show that over 1,000 years the temperature rises from 1.3 to 1.0 degree centigrade above pre-industrial levels.
The worst case, where action is delayed to 2050, predicts surface temperatures will increase from just under to just over four degrees by 2320
and then remain approximately constant for the rest of the millennium. High levels of CO2 persist in the atmosphere because, over long
timescales, reduction of the gas is dependent on the ability of the oceans to absorb it, but there are limits to
this due to the physics and chemistry of deep-ocean mixing. On the other hand, the amount of heat in the atmosphere that can be
absorbed by the sea, the key way surface temperatures are decreased, is limited by the same scientific laws. As a result,
carbon concentrations cannot fall enough to force temperatures down while there is simultaneously reduced cooling due to
limited heat loss to the oceans.

AT: Phytoplankton
Phytoplankton sequestration is minimal – defer to recent scientific studies
Science Daily, 9
[1/29/2009, Iron Fertilization To Capture Carbon Dioxide Dealt A Blow: Plankton Stores Much Less
Carbon Dioxide Than Estimated, http://www.sciencedaily.com/releases/2009/01/090128183744.htm,
7/2/14, TYBG]
A possible solution to global warming may be further away than ever, according to a new report published in the journal Nature. Scientists
measuring how much of the greenhouse gas carbon dioxide is locked away in the deep ocean by
plankton when it dies found that it was significantly less than previous estimates. Plankton is a natural sponge
for carbon dioxide. It occurs naturally in the ocean and its growth is stimulated by iron, which it uses for photosynthesis and growth. When
plankton dies it sinks to the bottom of the ocean locking away some of the carbon it has absorbed from the atmosphere. Fertilising plankton by
the artificial addition of iron has long been proposed as a potential way to geo-engineer the removal of carbon dioxide from the atmosphere.
Researchers analysed an area of the Southern ocean known to be naturally rich in iron and their
report reveals that the amount of carbon sequestered to the deep ocean for a given input of natural
iron falls far short of previous geo-engineering estimates. This has serious implications for proposals to influence climate
change through iron fertilisation of the sea.
AT: Sequestration Barry
CO2 emissions of OTEC plants outweigh sequestration
Barry, Naval architect and co-chair of the Society of Naval Architects and Marine
Engineers, 08
(Christopher, 7-1-08, Renewable Energy World, Ocean Thermal Energy Conversion and CO2
Sequestration, http://www.renewableenergyworld.com/rea/news/article/2008/07/ocean-thermal-
energy-conversion-and-co2-sequestration-52762, accessed 7-1-14, YLP)
The actual effectiveness of OTEC in raising ocean fertility and thereby sequestering carbon still has to be
verified, and there has to be a careful examination of other possible harmful environmental impacts – an old saying among
engineers is "it seemed like a good idea at the time." The most important issue is that the deep water already has
substantial dissolved carbon dioxide, and so an OTEC plant may actually release more carbon than it
sequesters, or it might just speed up the existing cycle, sending down as much as it brings up with no net effect. This
question has to be answered before OTEC is implemented. It may also be possible to optimize sequestration by being selective about the
depths that water is drawn from, or possibly by adding other trace nutrients, especially those that enhance species that sequester carbon in
shells. An OTEC plant optimized for ocean fertility will also probably be different than one optimized to
generate power, so any OTEC-based carbon scheme has to include transfer payments of some sort – it
won't come for free. Finally, who owns the ocean thermal resource? Most plants will be in international waters, though these waters
tend to be off the coasts of the developing world.
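The card's worry is a sign question: the same deep water that fertilizes plankton also carries dissolved CO2 back to the surface. A toy mass balance makes the point; every number below is a placeholder chosen only to show that the answer flips with the relative magnitudes, which is exactly the measurement Barry says still has to be made.

```python
# Toy mass balance for the question the card raises: does an OTEC plant's
# upwelling sequester more carbon (via fertilized plankton that sinks) than it
# releases (via CO2 already dissolved in the deep water it brings up)?
# All figures are hypothetical placeholders, not measured data.

def net_sequestration(upwelled_m3_per_day,
                      deep_co2_release_kg_per_m3,
                      export_uptake_kg_per_m3):
    """Net carbon benefit per day (kg CO2); negative means the plant is a net source."""
    released = upwelled_m3_per_day * deep_co2_release_kg_per_m3
    sequestered = upwelled_m3_per_day * export_uptake_kg_per_m3
    return sequestered - released

# Hypothetical plant moving one million cubic metres of deep water per day:
print(net_sequestration(1e6, deep_co2_release_kg_per_m3=0.05,
                        export_uptake_kg_per_m3=0.04))   # negative -> net emitter
print(net_sequestration(1e6, deep_co2_release_kg_per_m3=0.03,
                        export_uptake_kg_per_m3=0.04))   # positive -> net sink
```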
AT: Bio-d

IPCC concedes there's no real risk of extinction – their own simulations disprove the
impact
Bojanowski, Graduate Geologist and writer at Spiegel Online, 4/26
[Axel, UN Backtracks: Will Global Warming Really Trigger Mass Extinctions?,
http://www.spiegel.de/international/world/new-un-climate-report-casts-doubt-on-earlier-extinction-
predictions-a-960569.html, 7/1/14, TYBG]
The last remaining passenger pigeon, Martha, died a century ago in a Cincinnati zoo. The bird's downfall was having tender, tasty meat so
pleasing to the human palate. Hundreds of species have suffered the same fate in modern times. The last Tasmanian wolf died in an Australian
zoo in 1936. Two years later, the final remaining Schomburgk's deer met its end as a pet in a Thai temple. The Chinese river dolphin hasn't been
sighted for years either. In total, 77 species of mammal, 130 birds, 22 reptiles and 34 amphibians have vanished from the face of the earth since
1500, according to the IUCN Red List of Threatened Species. Humans have shrunk the habitats of many life forms, through unsustainable
agriculture, fishing or hunting. And it is going to get even worse. Global warming is said to be threatening thousands of animal and plant species
with extinction. That, at least, is what the Intergovernmental Panel on Climate Change (IPCC) has been predicting for years. But the UN
climate body now says it is no longer so certain. The second part of the IPCC's new assessment report is due to be
presented next Monday in Yokohama, Japan. On the one hand, a classified draft of the report notes that a further "increased extinction risk for
a substantial number of species during and beyond the 21st century" is to be expected. On the other hand, the IPCC admits that
there is no evidence climate change has led to even a single species becoming extinct thus far.
'Crocodile Tears' At most, the draft report says, climate change may have played a role in the disappearance of a few amphibians, fresh water
fish and mollusks. Yet even the icons of catastrophic global warming, the polar bears, are doing surprisingly
well. Their population has remained stable despite the shrinking of the Arctic ice cap. Ragnar Kinzelbach, a zoologist at the
University of Rostock, says essential data is missing for most other life forms, making it virtually impossible to forecast the potential effects of
climate change. Given the myriad other human encroachments in the natural environment, Kinzelbach says,
"crocodile tears over an animal kingdom threatened by climate change are less than convincing." The
draft report includes a surprising admission by the IPCC -- that it doubts its own computer simulations
for species extinctions. "There is very little confidence that models currently predict extinction risk
accurately," the report notes. Very low extinction rates despite considerable climate variability during past hundreds of thousands of years
have led to concern that "forecasts for very high extinction rates due entirely to climate change may be overestimated." In the last assessment
report, Climate Change 2007, the IPCC predicted that 20 to 30 percent of all animal and plant species faced a high risk for extinction should
average global temperatures rise by 2 to 3 degrees Celsius (3.6 to 5 degrees Fahrenheit). The current draft report says that scientific
uncertainties have "become more apparent" since 2007.

Too many alt causes that the aff can't resolve – population, habitat destruction,
pollution, and agriculture
RCF, 13
[Rainforest Conservation Fund, 5) Causes of recent declines in biodiversity,
http://www.rainforestconservation.org/rainforest-primer/2-biodiversity/g-recent-losses-in-
biodiversity/5-causes-of-recent-declines-in-biodiversity, 7/1/14, TYBG]
a. Human population growth: The geometric rise in human population levels during the twentieth century is
the fundamental cause of the loss of biodiversity. It exacerbates every other factor having an impact on
rainforests (not to mention other ecosystems). It has led to an unceasing search for more arable land for food
production and livestock grazing, and for wood for fuel, construction, and energy. Previously undisturbed areas
(which may or may not be suitable for the purposes to which they are constrained) are being transformed into agricultural or pasture land,
stripped of wood, or mined for resources to support the energy needs of an ever-growing human population. Humans also tend to
settle in areas of high biodiversity, which often have relatively rich soils and other attractions for
human activities. This leads to great threats to biodiversity, especially since many of these areas have numerous endemic species.
Balmford, et al., (2001) have demonstrated that human population size in a given tropical area correlates with the number of endangered
species, and that this pattern holds for every taxonomic group. Most of the other effects mentioned below are either consequent to the human
population expansion or related to it. The human population was approximately 600 million in 1700, and one billion in 1800. Just now it
exceeds six billion, and low estimates are that it may reach 10 billion by the mid-21st century and 12 billion by 2100. The question is whether
many ecological aspects of biological systems can be sustained under the pressure of such numbers. Can birds continue to migrate, can larger
organisms have space (habitat) to forage, can ecosystems survive in anything like their present form, or are they doomed to impoverishment
and degradation? b. Habitat destruction: Habitat destruction is the single most important cause of the loss of
rainforest biodiversity and is directly related to human population growth. As rainforest land is
converted to ranches, agricultural land (and then, frequently, to degraded woodlands, scrubland, or desert), urban areas
(cf. Brasilia) and other human usages, habitat is lost for forest organisms. Many species are widely distributed and thus,
initially, habitat destruction may only reduce local population numbers. Species which are local, endemic, or which have specialized habitats are
much more vulnerable to extinction, since once their particular habitat is degraded or converted for human activity, they will disappear. Most
of the habitats being destroyed are those which contain the highest levels of biodiversity, such as lowland tropical wet forests. In this case,
habitat loss is caused by clearing, selective logging, and burning. c. Pollution: Industrial, agricultural and waste-based
pollutants can have catastrophic effects on many species. Those species which are more tolerant of pollution will
survive; those requiring pristine environments (water, air, food) will not. Thus, pollution can act as a selective agent. Pollution of water
in lakes and rivers has degraded waters so that many freshwater ecosystems are dying. Since almost 12% of
animals species live in these ecosystems, and most others depend on them to some degree, this is a very serious matter. In developing
countries approximately 90% of wastewater is discharged, untreated, directly into waterways. d.
Agriculture: The dramatic increase in the number of humans during the twentieth century has instigated a
concomitant growth in agriculture, and has led to conversion of wildlands to croplands, massive
diversions of water from lakes, rivers and underground aquifers, and, at the same time, has polluted water
and land resources with pesticides, fertilizers, and animal wastes. The result has been the destruction, disturbance or
disabling of terrestrial ecosystems, and polluted, oxygen-depleted and atrophied water resources. Formerly, agriculture in different regions of
the world was relatively independent and local. Now, however, much of it has become part of the global exchange economy and has caused
significant changes in social organization.
At worst, warming only causes biodiversity gains – best scientific studies flow aff
Bastach, Writer for the Daily Caller, 5/15/14
[Michael, Global Warming Is Increasing Biodiversity Around The World,
http://dailycaller.com/2014/05/15/global-warming-is-increasing-biodiversity-around-the-world/,
7/1/14, TYBG]
A new study published in the journal Science has astounded biologists: global warming is not harming biodiversity, but
instead is increasing the range and diversity of species in various ecosystems. Environmentalists have long
warned that global warming could lead to mass extinctions as fragile ecosystems around the world are made unlivable as temperatures
increase. But a team of biologists from the United States, United Kingdom and Japan found that global
warming has not led to a decrease in biodiversity. Instead, biodiversity has increased in many areas on land and in the
ocean. "Although the rate of species extinction has increased markedly as a result of human activity across the biosphere, conservation has
focused on endangered species rather than on shifts in assemblages," reads the editor's abstract of the report. The study says
species turnover was "above expected but do not find evidence of systematic biodiversity loss." The
editor's abstract adds that the result "could be caused by homogenization of species assemblages by invasive species, shifting distributions
induced by climate change, and asynchronous change across the planet." Researchers reviewed 100 long-term species
monitoring studies from around the world and found increasing biodiversity in 59 out of 100 studies
and decreasing biodiversity in 41 studies. The rate of change in biodiversity was modest in all of the studies, biologists said.
AT: Dead Zones
Dead zones are naturally self-limiting – when oxygen nears zero, iron gets locked into
sulfides, cutting off the plankton growth that drives runaway hypoxia
Griffin, writer for Science World Report, 5/19
[Catherine, Dangerous Ocean 'Dead Zones' May be Limited by Natural Cutoff Switch,
http://www.scienceworldreport.com/articles/14810/20140519/dangerous-ocean-dead-zones-limited-
natural-cutoff-switch.htm, 7/1/14, TYBG]
Dead zones can impact ocean ecosystems in large ways, causing fish species to decline and populations
of animals to move to different locations in order to survive. Now, scientists have found that there may
be a natural limiting switch to keep these ocean systems from developing persistent dead zones--and
it all has to do with iron.
Iron is a crucial catalyst for fueling biological productivity in the ocean. Without it, microscopic plants
called phytoplankton cannot fully consume nitrates and phosphates. This, in turn, limits their growth.
Iron can enter the ocean through river sediments, windblown dust and continental margin sediments.
Yet this iron needs to be dissolved rather than locked up in sediments so that the phytoplankton can
use it; it appears that oxygen, in this case, is the key.
In a high-oxygen environment, most of the iron that is dissolved in the water precipitates and turns into
iron oxide coatings on particles, which sink to the seafloor; this makes it unavailable to phytoplankton.
When a hypoxic environment occurs, though, the iron oxides dissolve and may diffuse back into the
water column. This makes them available to fertilize plankton growth which can make a hypoxic state
worse as the plankton decays and sinks to the seafloor.
"When this moderate hypoxic state occurs, the iron release fuels more biological productivity and the
organic particles fall to the sea floor where they decay and consume more oxygen, making hypoxia
worse," said Florian Scholz, one of the researchers, in a news release. "That leads to this feedback loop
of more iron release triggering more productivity, triggering more iron release."
Yet there's apparently a limiting factor for hypoxia. The researchers examined concentrations of
sediments dating back 140,000 years and made some interesting discoveries.
"But we found that when the oxygen approaches zero a new group of minerals, iron sulfides, are
formed," said Scholz in a news release. "This is the key to the limit switch because when the iron gets
locked up in sulfides, it is no longer dissolved and thus not available to the plankton. The runaway
hypoxia stops and the hypoxic region is limited."
The findings reveal that there is a limit for hypoxic conditions. That said, dead zones still remain an
issue, and we should continue to limit the amount of pollutants which enter our coastal waters.
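A toy iteration of the feedback loop the card describes can make the "limit switch" easier to picture: hypoxia releases iron, iron fuels plankton, decaying plankton consumes oxygen, and only when oxygen approaches zero does iron precipitate as sulfides and shut the loop down. All units, rates, and thresholds below are invented for illustration; this is not a model of real ocean chemistry.

```python
# Toy version of the iron/oxygen feedback with a near-zero-oxygen cutoff.
oxygen = 4.0   # arbitrary units of dissolved oxygen
iron = 0.1     # arbitrary units of dissolved iron

for step in range(30):
    if oxygen > 0.2:
        iron += 0.05                    # hypoxia keeps releasing iron from sediments
    else:
        iron = max(iron - 0.08, 0.0)    # near-zero O2: iron precipitates as sulfides
    productivity = iron                 # more dissolved iron -> more plankton growth
    oxygen = max(oxygen - 0.3 * productivity, 0.0)  # decaying plankton consumes oxygen
    print(f"step {step:2d}: O2 = {oxygen:4.2f}, dissolved Fe = {iron:4.2f}")
```

Running it shows oxygen declining faster and faster while iron accumulates, then iron dropping once oxygen bottoms out, which is the shape of the mechanism the card attributes to iron sulfide formation.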
AT: Acidification
No impact to ocean acidification – it's over-exaggerated and species can adapt
Ridley, British Scientist and Journalist, 12 (Matt, Jan 7, Taking Fears of Acid Oceans With a
Grain of Salt, Wall Street Journal,
http://online.wsj.com/news/articles/SB10001424052970203550304577138561444464028, Tang)
Coral reefs around the world are suffering badly from overfishing and various forms of pollution. Yet many experts argue that the greatest
threat to them is the acidification of the oceans from the dissolving of man-made carbon dioxide emissions. The effect of acidification,
according to J.E.N. Veron, an Australian coral scientist, will be "nothing less than catastrophic.... What were once thriving coral gardens that
supported the greatest biodiversity of the marine realm will become red-black bacterial slime, and they will stay that way." This is a common view. The Natural
Resources Defense Council has called ocean acidification "the scariest environmental problem you've
never heard of." Sigourney Weaver, who narrated a film about the issue, said that "the scientists are freaked out." The head of the
National Oceanic and Atmospheric Administration calls it global warming's "equally evil twin." But do the scientific data support
such alarm? Last month scientists at San Diego's Scripps Institution of Oceanography and other
authors published a study showing how much the pH level (measuring alkalinity versus acidity) varies
naturally between parts of the ocean and at different times of the day, month and year. "On both a
monthly and annual scale, even the most stable open ocean sites see pH changes many times larger than the
annual rate of acidification," say the authors of the study, adding that because good instruments to measure
ocean pH have only recently been deployed, "this variation has been under-appreciated." Over coral reefs,
the pH decline between dusk and dawn is almost half as much as the decrease in average pH expected over the next 100 years. The noise
is greater than the signal. Another recent study, by scientists from the U.K., Hawaii and Massachusetts, concluded
that "marine and freshwater assemblages have always experienced variable pH conditions," and that "in
many freshwater lakes, pH changes that are orders of magnitude greater than those projected for the 22nd-
century oceans can occur over periods of hours." This adds to other hints that the ocean-acidification
problem may have been exaggerated. For a start, the ocean is alkaline and in no danger of becoming acid
(despite headlines like that from Reuters in 2009: "Climate Change Turning Seas Acid"). If the average pH of the ocean drops to 7.8 from 8.1 by
2100 as predicted, it will still be well above seven, the neutral point where alkalinity becomes acidity. The central concern is that
lower pH will make it harder for corals, clams and other "calcifier" creatures to make calcium
carbonate skeletons and shells. Yet this concern also may be overstated. Off Papua New Guinea and the Italian
island of Ischia, where natural carbon-dioxide bubbles from volcanic vents make the sea less alkaline, and off the Yucatan, where
underwater springs make seawater actually acidic, studies have shown that at least some kinds of
calcifiers still thrive, at least as far down as pH 7.8. In a recent experiment in the Mediterranean, reported in
Nature Climate Change, corals and mollusks were transplanted to lower pH sites, where they proved
"able to calcify and grow at even faster than normal rates when exposed to the high [carbon-dioxide]
levels projected for the next 300 years." In any case, freshwater mussels thrive in Scottish rivers, where
the pH is as low as five. Laboratory experiments find that more marine creatures thrive than suffer when carbon dioxide lowers the
pH level to 7.8. This is because the carbon dioxide dissolves mainly as bicarbonate, which many calcifiers use as raw material for carbonate.
Human beings have indeed placed marine ecosystems under terrible pressure, but the chief culprits are overfishing and pollution. By
comparison, a very slow reduction in the alkalinity of the oceans, well within the range of natural
variation, is a modest threat, and it certainly does not merit apocalyptic headlines.
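The card's numeric argument is easy to check because pH is a base-10 logarithmic scale. The short sketch below works the arithmetic for the projected drop from about 8.1 to 7.8: hydrogen-ion concentration roughly doubles, yet the value stays above the neutral point of 7.0, which is the card's "still alkaline" claim. This is plain arithmetic on the numbers quoted above, not additional evidence.

```python
# pH arithmetic behind the card: a 0.3 drop on a log10 scale ~ doubling of [H+],
# while both values remain above the neutral point of 7.0 (still alkaline).
def hydrogen_ion(pH):
    """Hydrogen-ion concentration (mol/L) implied by a given pH."""
    return 10 ** (-pH)

current, projected_2100 = 8.1, 7.8
ratio = hydrogen_ion(projected_2100) / hydrogen_ion(current)

print(f"[H+] increases by a factor of {ratio:.2f}")   # ~2.0x
print(f"still alkaline? {projected_2100 > 7.0}")      # True
```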
AT: Disease
Satellite monitoring of environmental factors prevents disease spread
Walter-Range and John, Research analysts for the Space Foundation 10 (Micah and Mariel,
research analyst for the Space Foundation, September 1, 2010, Disease and Pandemic Early Warning,
Space Foundation,
http://www.spacefoundation.org/sites/default/files/downloads/Solutions_from_Space_Disease_and_P
andemic_Early_Warning_0.pdf, 7/12/13)
Remote sensing satellites cannot directly detect disease outbreaks but they are able to detect a wide range of
environmental factors, such as ground water, vegetation, or flooding. 1 Before a model can be developed, an
association must be found between environmental factors and the ecology of the disease agent or
host. This is usually possible for vector-borne diseases, in which a third party, or vector, is necessary to
transmit the disease. Malaria, which is spread by mosquitoes, provides a good example. Mosquitoes breed in water, so
they are often more prevalent when there is a greater amount of surface water. Increased amounts of surface water or rainfall, which can be
detected by remote sensing satellites, represent a possible predictor for an outbreak of malaria in regions where the disease is known to exist.
2 These models are more effective when they integrate other data sources that help to identify multiple
links between environmental factors and a disease. In addition, some models incorporate the biological
process of susceptibility, exposure, infection, and recovery. This requires an understanding of what causes people to be
particularly vulnerable to a particular disease, the ways in which people come into contact with the disease, the process by which the infection
affects the body, and the process of recovery. 3 It is also important for these models to include information about the
region being studied, often referred to as geospatial information. For example, predictions of areas at risk of outbreak should take into
account the population density throughout the region. If an area likely to have many mosquitoes is also near a village, there is a higher risk of a
malaria outbreak than would be the case for a very sparsely populated area. Once these associations have been identified,
historical data is used to demonstrate that there is a correlation between the environmental factors
and disease outbreaks. In addition to the satellite imagery and population data, it is necessary to gather
epidemiological data, including information about when and where outbreaks have occurred in the past, in order to validate the
connection. This data can be difficult to acquire, particularly for rural areas or in developing countries. Because of the wide range of
environmental factors that could affect the spread of disease in different areas, it is necessary to have data representing as much of the area of
interest as possible. This first step, which includes identifying and validating links between diseases and environmental factors, is usually carried
out by researchers either in academia or government. 5
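As a rough illustration of the kind of model the card describes (satellite-derived environmental variables combined with geospatial population data to flag areas at elevated outbreak risk), here is a minimal sketch. The weights, thresholds, and example grid cells are invented for illustration; a real system would calibrate them against the historical epidemiological records the card mentions.

```python
# Minimal sketch of an environmental early-warning risk score for a
# vector-borne disease such as malaria. Weights/thresholds are placeholders.

def outbreak_risk(rainfall_mm, surface_water_index, pop_density_per_km2):
    """Return a 0-1 risk score for a grid cell (illustrative weighting only)."""
    # More rainfall / standing water -> more mosquito breeding habitat
    habitat = min(rainfall_mm / 300.0, 1.0) * 0.5 + min(surface_water_index, 1.0) * 0.5
    # Risk only matters where there are people to infect
    exposure = min(pop_density_per_km2 / 500.0, 1.0)
    return habitat * exposure

cells = [
    {"name": "rural wetland, near village", "rain": 280, "water": 0.9, "pop": 400},
    {"name": "arid, sparsely populated",     "rain": 40,  "water": 0.1, "pop": 5},
]
for c in cells:
    score = outbreak_risk(c["rain"], c["water"], c["pop"])
    print(f"{c['name']}: risk = {score:.2f}")
```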
Warming doesn't cause disease – predictions are speculative, unfounded, and
empirics prove
Moore, Senior Fellow at Hoover Institution for Stanford University, 1
[Thomas, February 2001, Stanford, Why Global Warming Doesn't Cause Disease,
http://web.stanford.edu/~moore/WarmingandDisease.html, accessed 7/1/14, TYBG]
Even if the White House ignores WCR's frequent, informative messages on global warming and health, these officials should pay attention to
the experts on disease. Both the scientific community and the medical establishment say the frightful
forecasts are unfounded, exaggerated, or misleading. Further, and more important for policy-makers
to note, these rumors of an upsurge in disease and early mortality stemming from climate change do
not require action to reduce greenhouse gas emissions. As Science reports: "Predictions that global warming
will spark epidemics have little basis, say infectious-disease specialists, who argue that public health measures will
inevitably outweigh effects of climate." The article adds: "Many of the researchers behind the dire
predictions concede that the scenarios are speculative." The director of the division of vector-borne
infectious diseases at the Centers for Disease Control and Prevention (CDC), Duane Gubler, calls those
prophecies "'gloom and doom' based on 'soft data.'" Others attribute them to "simplistic thinking."
These experts agree that "breakdowns in public health rather than climate shifts are to blame for the recent disease outbreaks." Even El Nino,
our most recent climate scapegoat, cannot take the blame for recent epidemics. The claim that dengue fever epidemics in Latin America in1994
and 1995 were due in part to El Nino is simply wrong. Science quotes dengue experts at the Pan American Health Organization: "The
epidemics resulted from the breakdown of eradication programs aimed at Aedes aegypti in the 1970s and the
subsequent return of the mosquito. Once the mosquito was back the dengue followed." Oh, no! Not U.S. The CDC's Gubler also
dismisses the idea that these diseases may spread into the United States. According to Science, "He calls such
predictions 'probably the most blatant disregard for other factors that influence disease
transmission.'" Mosquito control programs, implemented decades ago, eliminated the insects that
had inflicted these diseases on Americans for centuries. Heat has little, if anything, to do with it. The Gulf
Coast states in the United States are warmer now than the Caribbean islands that are currently suffering from dengue fever. As Gubler says, "If
temperature was the main factor, we would see epidemics in the Southern U.S. We have the mosquito; we have higher temperatures and
constant introduction of viruses, which means we should have epidemics, but we don't." According to Science, even those who have
made these dire predictions agree that the forecasts are speculative, but justify them as playing "a
useful role in consciousness-raising." In other words, let's scare people with hobgoblins in order to convince them to do "the
right thing": Give up cheap energy.
Oil
1NC
AT: Peak Oil
Peak oil is over-exaggerated – it's not coming anytime soon – their ev doesn't assume new
extraction tech
Klare, a professor of peace and world security studies at Hampshire College and the
defense correspondent of The Nation, 1/9
[Michael, 1/9/14, TheNation.com, Peak Oil Is Dead. Long Live Peak Oil!,
http://www.thenation.com/article/177859/peak-oil-dead-long-live-peak-oil, 7/1/14, JA]
Among the big energy stories of 2013, peak oil – the once-popular notion that worldwide oil production would soon reach a maximum
level and begin an irreversible decline – was thoroughly discredited. The explosive development of shale oil and
other unconventional fuels in the United States helped put it in its grave. As the year went on, the eulogies came
in fast and furious. "Today, it is probably safe to say we have slayed peak oil once and for all, thanks to the
combination of new shale oil and gas production techniques," declared Rob Wile, an energy and economics reporter
for Business Insider. Similar comments from energy experts were commonplace, prompting an R.I.P. headline at Time.com announcing, "Peak
Oil is Dead." Not so fast, though. The present round of eulogies brings to mind Mark Twain's famous line: "The reports of my death have
been greatly exaggerated." Before obits for peak oil theory pile up too high, let's take a careful look at these assertions. Fortunately, the
International Energy Agency (IEA), the Paris-based research arm of the major industrialized powers, recently did just that – and the results were
unexpected. While not exactly reinstalling peak oil on its throne, it did make clear that much of the talk of a perpetual gusher of American shale
oil is greatly exaggerated. The exploitation of those shale reserves may delay the onset of peak oil for a year or so, the agency's experts noted,
but the long-term picture "has not changed much with the arrival of [shale oil]." The IEA's take on this subject is especially noteworthy because
its assertion only a year earlier that the US would overtake Saudi Arabia as the world's number one oil producer sparked the "peak oil is dead"
deluge in the first place. Writing in the 2012 edition of its World Energy Outlook, the agency claimed not only that the United States is
"projected to become the largest global oil producer" by around 2020, but also that with US shale production and Canadian tar sands coming
online, "North America becomes a net oil exporter around 2030." That November 2012 report highlighted the use of advanced production
technologies – notably horizontal drilling and hydraulic fracturing ("fracking") – to extract oil and natural gas from once inaccessible rock,
especially shale. It also covered the accelerating exploitation of Canada's bitumen (tar sands or oil sands), another resource previously
considered too forbidding to be economical to develop. With the output of these and other unconventional fuels set to explode in the years
ahead, the report then suggested, the long awaited peak of world oil production could be pushed far into the future. The release of the 2012
edition of World Energy Outlook triggered a global frenzy of speculative reporting, much of it announcing a new era of American energy
abundance. "Saudi America" was the headline over one such hosanna in the Wall Street Journal. Citing the new IEA study, that paper heralded a
coming US energy boom "driven by technological innovation and risk-taking funded by private capital." From then on, American energy
analysts spoke rapturously of the capabilities of a set of new extractive technologies, especially fracking, to unlock oil and natural gas from
hitherto inaccessible shale formations. "This is a real energy revolution," the Journal crowed. But that was then. The most recent edition of
World Energy Outlook, published this past November, was a lot more circumspect. Yes, shale oil, tar sands, and other unconventional fuels will
add to global supplies in the years ahead, and, yes, technology will help prolong the life of petroleum. Nonetheless, it's easy to forget that we
are also witnessing the wholesale depletion of the world's existing oil fields and so all these increases in shale output must be balanced against
declines in conventional production. Under ideal circumstances – high levels of investment, continuing technological progress, adequate
demand and prices – it might be possible to avert an imminent peak in worldwide production, but as the latest IEA report makes clear, there is
no guarantee whatsoever that this will occur. Inching Toward the Peak Before plunging deeper into the IEA's assessment, let's take a quick look
at peak oil theory itself. As developed in the 1950s by petroleum geologist M. King Hubbert, peak oil theory holds that any individual oil field (or
oil-producing country) will experience a high rate of production growth during initial development, when drills are first inserted into an oil-
bearing reservoir. Later, growth will slow, as the most readily accessible resources have been drained and a greater reliance has to be placed on
less productive deposits. At this point – usually when about half the resources in the reservoir (or country) have been extracted – daily output
reaches a maximum, or peak, level and then begins to subside. Of course, the field or fields will continue to produce even after peaking, but
ever more effort and expense will be required to extract what remains. Eventually, the cost of production will exceed the proceeds from sales,
and extraction will be terminated. For Hubbert and his followers, the rise and decline of oil fields is an inevitable consequence of natural forces:
oil exists in pressurized underground reservoirs and so will be forced up to the surface when a drill is inserted into the ground. However, once a
significant share of the resources in that reservoir has been extracted, the field's pressure will drop and artificial means – water, gas, or
chemical insertion – will be needed to restore pressure and sustain production. Sooner or later, such means become prohibitively expensive.
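As an aside on the mechanics being described here: the classic Hubbert curve models annual production as the derivative of a logistic function of cumulative extraction, so output rises, peaks when roughly half of the ultimately recoverable resource has been produced, and then declines. The sketch below is purely illustrative; the URR, peak year, and steepness values are arbitrary assumptions and do not describe any real field or country.

```python
# Illustrative Hubbert production curve: the derivative of a logistic function.
import math

def hubbert_production(t, urr=2000.0, peak_year=2005, steepness=0.08):
    """Annual production at year t for a resource with the given ultimately
    recoverable reserves (URR), peak year, and curve steepness."""
    x = math.exp(-steepness * (t - peak_year))
    return urr * steepness * x / (1.0 + x) ** 2

for year in range(1960, 2061, 20):
    print(year, round(hubbert_production(year), 1))
```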
Peak oil theory also holds that what is true of an individual field or set of fields is true of the world as a whole. Until about 2005, it did indeed
appear that the globe was edging ever closer to a peak in daily oil output, as Hubbert's followers had long predicted. (He died in 1989.) Several
recent developments have, however, raised questions about the accuracy of the theory. In particular, major private oil companies have taken
to employing advanced technologies to increase the output of the reservoirs under their control, extending the lifetime of existing fields
through the use of what's called enhanced oil recovery, or EOR. They've also used new methods to exploit fields once considered inaccessible
in places like the Arctic and deep oceanic waters, thereby opening up the possibility of a most un-Hubbertian future. In developing these new
technologies, the privately owned international oil companies (IOCs) were seeking to overcome their principal handicap: most of the world's
"easy oil" – the stuff Hubbert focused on that comes gushing out of the ground whenever a drill is inserted – has already been consumed or is
controlled by state-owned national oil companies (NOCs), including Saudi Aramco, the National Iranian Oil Company, and the Kuwait National
Petroleum Company, among others. According to the IEA, such state companies control about 80 percent of the world's known petroleum
reserves, leaving relatively little for the IOCs to exploit. To increase output from the limited reserves still under their control – mostly located in
North America, the Arctic, and adjacent waters – the private firms have been working hard to develop techniques to exploit "tough oil." In this,
they have largely succeeded: they are now bringing new petroleum streams into the marketplace and, in doing so, have shaken the foundations
of peak oil theory. Those who say that "peak oil is dead" cite just this combination of factors. By extending the lifetime of existing fields through
EOR and adding entire new sources of oil, the global supply can be expanded indefinitely. As a result, they claim, the world possesses a
relatively boundless supply of oil (and natural gas). This, for instance, was the way Barry Smitherman of the Texas Railroad Commission
(which regulates that state's oil industry) described the global situation at a recent meeting of the Society of Exploration Geophysicists. Peak
Technology In place of peak oil, then, we have a new theory that as yet has no name but might be called "techno-dynamism." There is, this
theory holds, no physical limit to the global supply of oil so long as the energy industry is prepared to, and allowed to, apply its technological
wizardry to the task of finding and producing more of it. Daniel Yergin, author of the industry classics, The Prize and The Quest, is a key
proponent of this theory. He recently summed up the situation this way: "Advances in technology take resources that were not physically
accessible and turn them into recoverable reserves." As a result, he added, estimates of the total global stock of oil keep growing. From this
perspective, the world supply of petroleum is essentially boundless. In addition to conventional oil – the sort that comes gushing out of the
ground – the IEA identifies six other potential streams of petroleum liquids: natural gas liquids; tar sands and extra-heavy oil; kerogen oil
(petroleum solids derived from shale that must be melted to become usable); shale oil; coal-to-liquids (CTL); and gas-to-liquids (GTL). Together,
these unconventional streams could theoretically add several trillion barrels of potentially recoverable petroleum to the global supply,
conceivably extending the Oil Age hundreds of years into the future (and in the process, via climate change, turning the planet into an
uninhabitable desert). But just as peak oil had serious limitations, so, too, does techno-dynamism. At its core is a belief that rising world oil
demand will continue to drive the increasingly costly investments in new technologies required to exploit the remaining hard-to-get petroleum
resources. As suggested in the 2013 edition of the IEA's World Energy Outlook, however, this belief should be treated with considerable
skepticism. Among the principal challenges to the theory are these: 1. Increasing Technology Costs: While the costs of developing a resource
normally decline over time as industry gains experience with the technologies involved, Hubbert's law of depletion doesn't go away. In other
words, oil firms invariably develop the easiest "tough oil" resources first, leaving the toughest (and most costly) for later. For example, the
exploitation of Canada's tar sands began with the strip-mining of deposits close to the surface. Because those are becoming exhausted,
however, energy firms are now going after deep-underground reserves using far costlier technologies. Likewise, many of the most abundant
shale oil deposits in North Dakota have now been depleted, requiring an increasing pace of drilling to maintain production levels. As a result,
the IEA reports, the cost of developing new petroleum resources will continually increase: up to $80 per barrel for oil obtained using advanced
EOR techniques, $90 per barrel for tar sands and extra-heavy oil, $100 or more for kerogen and Arctic oil, and $110 for CTL and GTL. The market
may not, however, be able to sustain levels this high, putting such investments in doubt. 2. Growing Political and Environmental Risk: By
definition, "tough oil" reserves are located in problematic areas. For example, an estimated 13 percent of the world's undiscovered oil lies in the
Arctic, along with 30 percent of its untapped natural gas. The environmental risks associated with their exploitation under the worst of weather
conditions imaginable will quickly become more evident – and so, faced with the rising potential for catastrophic spills in a melting Arctic,
expect a commensurate increase in political opposition to such drilling. In fact, a recent increase has sparked protests in both Alaska and Russia,
including the much-publicized September 2013 attempt by activists from Greenpeace to scale a Russian offshore oil platform – an action that
led to their seizure and arrest by Russian commandos. Similarly, expanded fracking operations have provoked a steady increase in anti-fracking
activism. In response to such protests and other factors, oil firms are being forced to adopt increasingly stringent environmental protections,
pumping up the cost of production further. 3. Climate-Related Demand
Reduction: The techno-optimist outlook assumes that oil demand will keep rising, prompting investors to provide the added funds needed to
develop the technologies required. However, as the effects of rampant climate change accelerate, more and more polities are likely to try to
impose curbs of one sort or another on oil consumption, suppressing demand - and so discouraging investment. This is already happening in
the United States, where mandated increases in vehicle fuel-efficiency standards are expected to significantly reduce oil consumption. Future
demand destruction of this sort is bound to impose a downward pressure on oil prices, diminishing the inclination of investors to finance
costly new development projects. Combine these three factors, and it is possible to conceive of a technology peak not unlike the peak in oil
output originally envisioned by M. King Hubbert. Such a techno-peak is likely to occur when the easy sources of tough oil have been
depleted, opponents of fracking and other objectionable forms of production have imposed strict (and costly) environmental regulations on
drilling operations, and global demand has dropped below a level sufficient to justify investment in costly extractive operations. At that point,
global oil production will decline even if supplies are boundless and technology is still capable of unlocking more oil every year. Peak Oil
Reconsidered: Peak oil theory, as originally conceived by Hubbert and his followers, was largely governed by natural forces. As we have
seen, however, these can be overpowered by the application of increasingly sophisticated technology.
Reservoirs of energy once considered inaccessible can be brought into production, and others once
deemed exhausted can be returned to production; rather than being finite, the world's petroleum
base now appears virtually inexhaustible. Does this mean that global oil output will continue rising, year after year, without
ever reaching a peak? That appears unlikely. What seems far more probable is that we will see a slow tapering of output over the next decade
or two as costs of production rise and climate change - along with opposition to the path chosen by the energy giants - gains momentum.
Eventually, the forces tending to reduce supply will overpower those favoring higher output, and a peak in production will indeed result, even if
not due to natural forces alone. Such an outcome is, in fact, envisioned in one of three possible energy scenarios the IEA's mainstream experts
lay out in the latest edition of World Energy Outlook. The first assumes no change in government policies over the next 25 years and sees world
oil supply rising from 87 to 110 million barrels per day by 2035; the second assumes some effort to curb carbon emissions and so projects
output reaching only 101 million barrels per day by the end of the survey period. It's the third trajectory, the 450 Scenario, that should raise
eyebrows. It assumes that momentum develops for a global drive to keep greenhouse gas emissions below 450 parts per million - the
maximum level at which it might be possible to prevent global average temperatures from rising above two degrees Celsius (and so cause
catastrophic climate effects). As a result, it foresees a peak in global oil output occurring around 2020 at about 91 million barrels per day, with a
decline to 78 million barrels by 2035. It would be premature to suggest that the 450 Scenario will be the immediate roadmap for humanity,
since it's clear enough that, for the moment, we are on a highway to hell that combines the IEA's first two scenarios. Bear in mind, moreover,
that many scientists believe a global temperature increase of even two degrees Celsius would be enough to produce catastrophic climate
effects. But as the effects of climate change become more pronounced in our lives, count on one thing: the clamor for government action will
grow more intense, and so eventually we're likely to see some variation of the 450 Scenario take shape. In the process, the world's demand for
oil will be sharply constricted, eliminating the incentive to invest in costly new production schemes. The bottom line: global peak oil remains in
our future, even if not purely for the reasons given by Hubbert and his followers. With the gradual disappearance of easy oil, the major
private firms are being forced to exploit increasingly tough, hard-to-reach reserves, thereby driving up the cost of production and potentially
discouraging new investment at a time when climate change and environmental activism are on the rise. Peak oil is dead! Long live
peak oil!
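The card's "techno-peak" argument reduces to a cost-versus-price comparison: each successive unconventional stream gets developed only if its break-even cost stays under what a demand-constrained market will pay. Below is a minimal Python sketch of that logic; the per-barrel break-even figures are the IEA numbers quoted in the card, while the price ceiling and the supply increments are hypothetical values chosen purely for illustration.

```python
# Illustrative sketch of the card's "techno-peak" logic. The break-even costs
# are the IEA figures quoted above; the price ceiling and supply increments
# are hypothetical numbers chosen for illustration, not forecasts.

# (stream name, break-even cost in $/barrel, assumed added supply in mb/d)
unconventional_streams = [
    ("advanced EOR",            80, 4),
    ("tar sands / extra-heavy", 90, 3),
    ("kerogen and Arctic oil", 100, 2),
    ("CTL and GTL",            110, 2),
]

base_output = 87       # mb/d, roughly current world output per the card
price_ceiling = 95     # hypothetical demand-constrained price, $/barrel

output = base_output
for name, breakeven, added_supply in unconventional_streams:
    if breakeven <= price_ceiling:
        output += added_supply
        print(f"{name}: viable at ${breakeven}/bbl, output rises to {output} mb/d")
    else:
        print(f"{name}: needs ${breakeven}/bbl, above the ${price_ceiling} ceiling, so not developed")

print(f"Peak output under these assumptions: {output} mb/d")
```

Under these assumed numbers, output stops growing once the remaining streams price above the ceiling, which is the card's point that production can peak even if the resource base is effectively boundless.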
AT: Oil Dependence
Fracking solves oil dependency - our evidence is future-predictive
Spencer, Middle East Correspondent, 13 [Richard, citing Fatih Birol, chief economist of the IEA, Dec. 23. 2013,
Fracking boom frees the US from old oil alliances, http://www.telegraph.co.uk/earth/energy/oil/10476647/Fracking-boom-frees-the-US-
from-old-oil-alliances.html, 6-28-14, Tang]
Yet this is what he said in his annual outlook on the important trends of the moment: "By around 2020, the United States is
projected to become the largest global oil producer," he wrote. "The result is a continued fall in US oil
imports, to the extent that North America becomes a net oil exporter around 2030. "The United
States, which currently imports around 20 per cent of its total energy needs, becomes all but self-
sufficient in net terms - a dramatic reversal of the trend seen in most other energy-importing
countries." Far from sending its armies round the world to secure oil, in other words, before long it
will be sending marketing consultants to sell it. A dramatic reversal indeed: it seems like only yesterday that every
commentator was talking about Peak Oil, the theory that the world's biggest reserves had all been discovered and that the planet's booming
population would all soon be scrapping over the dribbles that still came out of the pipelines that remained. Now, with fracking
technology revolutionising output in the United States - soon to be followed by other countries - that
talk has dried up rather quicker than the pipelines ever did. The effects on the countries concerned, particularly those
belonging to the OPEC oil cartel, are just starting to be realised. The Saudis, for example, have begun to notice. "Our country is facing
continuous threat because of its almost total dependency on oil," Prince Alwaleed bin Talal, the country's most prominent businessman, wrote
in an open letter to the oil minister and his own uncle, King Abdullah, in the summer, urging them to wake up to the danger. "The world is
increasingly less dependent on oil from OPEC countries including the kingdom." By the time new talks with Iran
over its nuclear programme were announced in the autumn, such sensitive issues had even reached the ever-cautious Saudi press. Moreover,
writers had begun to notice that the issue wasn't just economic - important though it is for Saudi Arabia's oil-dependent economy and public
finances for oil sales to keep up, prices to remain stable, and the cash to keep rolling in. It has been lost on no-one that the United States in
general, and President Barack Obama in particular, is less concerned nowadays to soothe Saudi Arabia's highly-strung nerves on regional
politics than it used to be. The outcome of those talks, a deal about which Saudi Arabia remains deeply nervous, was one result. Royal
spokesmen have fired off an increasingly furious series of warnings of the dangers the United States is running by making concessions to Iran,
as well as by not taking the military route to regime change in Syria, the biggest threat to peace in the Middle East. The Saudi Gazette, for one,
made the linkage clear in a piece by its energy analyst, Syed Rashid Husain. "Roles are getting switched," he wrote. "Global energy
geopolitics is undergoing a major metamorphosis as manifested by recent regional political
developments. After all, Washington is no more that dependent on Middle Eastern crude supplies as
it was until a few years back."
AT: China
Increasing cooperation on climate change and shale boom proves relations are
resilient
Thiegles, Domestic Energy Policy Analyst for EagleFordTexas, 6/20/14
(Shane, US could find unlikely energy ally in China, http://eaglefordtexas.com/news/id/128463/us-find-unlikely-energy-ally-china/, accessed
6/27/14, LLM)
For all that America and China butt heads and position themselves as rivals in the global energy game, however, the truth is that our
ongoing energy markets will be inextricably linked moving forward. Ongoing talk says the US is all but certain
to loosen restrictions on natural gas exports soon, and oil may not be far behind. When that happens, we
will have a huge monetary incentive to sell as much of our massive energy stockpile as we can. There's
more natural gas in America than we can use, leading to rock bottom utility prices but also an almost total lull in natural gas shale drilling, such
as is found in Louisiana's Haynesville and New York's Marcellus shales. Demand and production are also growing at almost the same rate,
meaning that if we want this gas to be profitable we have to find more prospective buyers. American companies have already
made deals with China for Liquid Petroleum Gas, and it's a safe bet that China will be willing to buy
whatever we have to sell. The US is also positioning itself as a leader in the worldwide fight against
climate change, and a widespread Chinese adoption of US gas would make us look like a diplomatic
leader. Recent studies suggesting natural gas won't reduce long-term ozone damage would also be silenced, with focus shifting to the fight
against a public health hazard. As hydraulic fracturing starts to catch on in the rest of the world, energy
supplies and influence will continue to shift in response. Countries like the UK, Venezuela and Mexico are attempting to
tap into local shale deposits in hopes of replicating US success. If supply raises accordingly, the resulting price drop could make natural gas
unprofitable for the foreseeable future. If the US wants to make money for its efforts, and China wants to buy up
diverse power options, the two could do worse than to form a partnership sooner rather than later.
AT: Grids Fragile
The DOE just invested millions in new grid infrastructure and security measures -
solves their impact
DOE, US Department of Energy, 14
[6/11/14, Department of Energy, Energy Department Invests Over $10 Million to Improve Grid
Reliability and Resiliency, http://www.energy.gov/articles/energy-department-invests-over-10-million-
improve-grid-reliability-and-resiliency, accessed 7/2/14, TYBG]
As part of the Obama Administration's commitment to a strong and secure power grid, the Energy
Department today announced more than $10 million for projects that will improve the reliability and
resiliency of the U.S. electric grid and facilitate quick and effective response to grid conditions. This
investment - which includes six projects across five states: California, Hawaii, Missouri, North Carolina and Washington - will help further
the deployment of advanced software that works with synchrophasor technology to better detect
quickly-changing grid conditions and improve day-to-day grid reliability. Through advanced sensors and
monitoring devices, U.S. utilities now have unprecedented insight into the power grid - helping industry
make decisions that may prevent power outages before they happen and adeptly respond to changing
grid conditions without disruption, said Patricia Hoffman, Assistant Secretary for the Energy Department's Office of Electricity
Delivery and Energy Reliability. By partnering with utilities and software developers, the Energy Department
can help the U.S. electric industry maintain more reliable and resilient power systems. In the United States,
advanced sensors and monitoring devices are giving utilities unprecedented visibility to see what is happening throughout the grid. For
example, synchrophasors can measure the instantaneous voltage, current and frequency at specific
locations on the grid - giving utilities the ability to foresee and respond to changing grid conditions,
make decisions that prevent power outages and speed up restoration. Synchrophasor technology provides time-
stamped data 30 times per second - about 100 times faster than conventional technology. With the support of the Recovery Act, the
Energy Department worked with utilities to deploy more synchrophasors throughout the United
States. In 2009, there were approximately 200 synchrophasors connected to the grid. Today, thanks in part to these Recovery Act
investments, there are about 1,700. By creating software that analyzes and visualizes the complex data captured by synchrophasors, these
projects announced today will help industry better leverage this new technology and maintain a
strong and reliable power grid. The six awards announced today, subject to final negotiation, include:
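For context on the scale the card describes, here is a back-of-the-envelope sketch of the data volumes involved. The synchrophasor count (about 1,700) and the 30-samples-per-second rate come from the card; the conventional SCADA polling interval of roughly one reading every four seconds is an assumption used only to illustrate the "about 100 times faster" comparison.

```python
# Back-of-the-envelope data volumes implied by the card. The synchrophasor
# count (1,700) and the 30 samples/second rate are from the card; the
# conventional SCADA interval of one reading every ~4 seconds is an assumption
# used only to illustrate the "about 100 times faster" comparison.

synchrophasors = 1_700
samples_per_second = 30
scada_interval_s = 4
seconds_per_day = 86_400

pmu_per_day = synchrophasors * samples_per_second * seconds_per_day
scada_per_day = synchrophasors * (seconds_per_day // scada_interval_s)

print(f"Synchrophasor fleet: {pmu_per_day:,} time-stamped measurements per day")
print(f"Conventional SCADA:  {scada_per_day:,} measurements per day")
print(f"Ratio: about {pmu_per_day / scada_per_day:.0f}x")
```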
AT: Cyberattacks
All of their evidence comes from paranoid reporters - there's no impact to
cyber terror, and their evidence is based on anonymous tips from amateur
cyber-security watchdogs
Rid, cyber-security writer, 13
[Thomas, 3/13/13, Foreign Policy, The Great Cyberscare,
http://www.foreignpolicy.com/articles/2013/03/13/the_great_cyberscare, accessed 7/1/14, TYBG]
The Pentagon, no doubt, is the master of razzmatazz. Leon Panetta set the tone by warning again and again of an
impending "cyber Pearl Harbor." Just before he left the Pentagon, the Defense Science Board delivered a remarkable report,
Resilient Military Systems and the Advanced Cyber Threat. The paper seemed obsessed with making yet more drastic historical comparisons:
"The cyber threat is serious," the task force wrote, "with potential consequences similar to the nuclear threat of the Cold War." The
manifestations of an all-out nuclear war would be different from cyberattack, the Pentagon scientists helpfully acknowledged. But then they
added, gravely, that "in the end, the existential impact on the United States is the same." A reminder is in order: The world has yet to
witness a single casualty, let alone fatality, as a result of a computer attack. Such statements are a
plain insult to survivors of Hiroshima. Some sections of the Pentagon document offer such eye-
wateringly shoddy analysis that they would not have passed as an MA dissertation in a self-respecting
political science department. But in the current debate it seemed to make sense. After all a bit of fear helps to claim -- or keep --
scarce resources when austerity and cutting seems out-of-control. The report recommended allocating the stout sum of $2.5 billion for its top
two priorities alone, protecting nuclear weapons against cyberattacks and determining the mix of weapons necessary to punish all-out cyber-
aggressors. Then there are private computer security companies. Such firms, naturally, are keen to pocket some of the government's money
earmarked for cybersecurity. And hype is the means to that end. Mandiant's much-noted report linking a coordinated and coherent campaign
of espionage attacks dubbed Advanced Persistent Threat 1, or "APT1," to a unit of the Chinese military is a case in point: The firm offered far
more details on attributing attacks to the Chinese than the intelligence community has ever done, and the company should be commended for
making the report public. But instead of using cocky and over-confident language, Mandiant's analysts should have used Words of Estimative
Probability, as professional intelligence analysts would have done. An example is the report's conclusion, which describes APT1's work:
"Although they control systems in dozens of countries, their attacks originate from four large networks in Shanghai -- two of which are allocated
directly to the Pudong New Area," the report found. Unit 61398 of the People's Liberation Army is also in Pudong. Therefore, Mandiant's
computer security specialists concluded, the two were identical: "Given the mission, resourcing, and location of PLA Unit 61398, we conclude
that PLA Unit 61398 is APT1." But the report conspicuously does not mention that Pudong is not a small neighborhood ("right outside of Unit
61398's gates") but in fact a vast city landscape twice the size of Chicago. Mandiant's report was useful and many attacks indeed originate in
China. But the company should have been more careful in its overall assessment of the available
evidence, as the computer security expert Jeffrey Carr and others have pointed out. The firm made it too easy for Beijing to dismiss the
report. My class in cybersecurity at King's College London started poking holes into the report after 15 minutes of red-teaming it -- the New
York Times didn't. Which leads to the next point: The media want to sell copy through threat inflation. "In Cyberspace,
New Cold War," the headline writers at the Times intoned in late February. "The U.S. is not ready for a cyberwar," shrieked the Washington
Post earlier this week. Instead of calling out the above-mentioned Pentagon report, the paper actually published two supportive articles on it
and pointed out that a major offensive cyber capability now seemed essential "in a world awash in cyber-espionage, theft and disruption." The
Post should have reminded its readers that the only military-style cyberattack that has actually created physical
damage -- Stuxnet -- was actually executed by the United States government. The Times, likewise, should have asked tough
questions and pointed to some of the evidential problems in the Mandiant report; instead, it published what appeared like an elegant press
release for the firm. On issues of cybersecurity, the nation's fiercest watchdogs too often look like hand-tame
puppies eager to lap up stories from private firms as well as anonymous sources in the security
establishment. Finally, the intelligence community tags along with the hype because the NSA and CIA are
still traumatized by missing 9/11. Missing a "cyber 9/11" would be truly catastrophic for America's spies, so erring on the side of
caution seems the rational choice. Yes, Director of National Intelligence James Clapper's recent testimony was more nuanced than reported and
toned down the threat of a very serious cyberattack. But at the same time America's top spies are not as forthcoming with more detailed
information as they could be. We know that the intelligence community, especially in the United States, has far better
information, better sources, better expertise, and better analysts than private companies like Symantec, McAfee, and Kaspersky
Lab. But for a number of reasons they keep their findings and their analysis classified. This means that the quality of
the public debate suffers, as experts as well as journalists have no choice but to rely on industry
reports of sometimes questionable quality or anonymous informants whose veracity is hard to assess.
AT: Hydrogen
Structural barriers prevent hydrogen adoption - it's too expensive, inefficient, and
difficult to store - the lack of infrastructure means even if the aff solves, it doesn't
happen before peak oil does
McGlaun, writer for Daily Tech, 13
[Shane, 3/22/13, Daily Tech, VW CEO Says That Hydrogen Fuel Cells Have Failed to Live up to
Promises,
http://www.dailytech.com/VW+CEO+Says+That+Hydrogen+Fuel+Cells+Have+Failed+to+Live+up+to+Pro
mises/article30189.htm, accessed 7/1/14, TYBG]
A few years ago there were a number of automotive manufacturers putting serious money into hydrogen fuel-cell vehicles. These vehicles
promised to have a driving range similar to a conventional gasoline-powered automobile, but produce no emissions to pollute the atmosphere.
However, the vehicles faced several daunting challenges, including the lack of a hydrogen fuel
infrastructure and the fact that hydrogen is highly flammable and difficult to store. Volkswagen CEO Martin
Winterkorn stated this week that hydrogen fuel cells have failed to live up to promises and are unlikely to
become an efficient and cost-effective way to power cars in the near future. Winterkorn said, "I do not see
the infrastructure for fuel cell vehicles, and I do not see how hydrogen can be produced on large scale
at reasonable cost. I do not currently see a situation where we can offer fuel cell vehicles at a
reasonable cost that consumers would also be willing to pay." While Volkswagen doesn't see a near-term future with
hydrogen vehicles, other manufacturers continue to move forward with the technology. Mercedes-Benz reached a deal with Ford and Nissan-
Renault with a goal of selling the first production fuel-cell vehicle starting in 2017. Back in 2010, a study was published
predicting 670,000 fuel cell powered vehicles would be sold annually within a decade. So far, that
prediction doesn't seem likely to come true. A sleek car glides past the undulating hedgerows of a country lane. The only
sounds it makes are snatches of Vivaldi from the stereo, and the exhaust pipe emits nothing more noxious than water vapour. As it passes, a
cloud of butterflies takes flight into the clean summer air. Proponents of hydrogen-powered vehicles have long
envisioned this as the future of motoring. But today, that dream is almost as distant as ever and
increasingly serves as a distraction in the quest to cut greenhouse gas emissions by replacing petrol. At first glance, hydrogen looks like a
suitable alternative. It has a higher energy density (by mass) than petrol, and could be distributed to filling stations through pipelines. And
although specially designed internal combustion engines can burn hydrogen directly, hydrogen is even more efficient when it drives a fuel cell
to generate electricity. A decade ago, governments and funding agencies drew up ambitious plans to develop cheaper fuel cells and to enable
cars to store practicable quantities of hydrogen. In 2003, President George Bush committed $720 million to the research effort. But by 2009,
it was clear that hydrogen was no quick fix, and US energy secretary Steven Chu diverted much of the
funding into battery research. It was the right move. When the hydrogen economy concept was coined in the early
1970s, advocates such as electrochemist John Bockris expected cheap, plentiful nuclear power to produce hydrogen by electrolysing water.
Using hydrogen as an energy carrier in this way made sense at the time - power-line losses made hydrogen a more efficient way to move
energy over long distances, and battery technology simply wasn't good enough to propel electric vehicles much faster or further than a milk float.
But nuclear accidents, although extremely rare, have made many governments wary of investing in
extra nuclear power stations. And they have also exposed the hidden costs of nuclear power: cleaning
up the accidents and dealing with radioactive waste. So, instead, more than 90% of the world's hydrogen is produced
from fossil fuels, through steam reforming of natural gas, for example, which also produces carbon dioxide. That carbon dioxide could be
sequestered underground, but it isn't, because carbon capture and storage technology is not sufficiently well developed and the costs are
astronomical. Cleaning up: Wind or solar power could be used to drive electrolysis plants, but isn't that clean
electricity better used to feed today's more efficient power grids, and to charge lithium-ion batteries that far outstrip
those available in the 1970s? The fuelling points for battery-powered cars are a relatively simple extension to our existing power grid, and new
technology is reducing recharging times. Hydrogen, in contrast, requires an entirely new supply infrastructure.
That's why the only hydrogen car on the road was, until recently, the Honda FCX Clarity; just a few
dozen drive around southern California - the only place in the US with a sufficient network of
hydrogen filling stations. In February, Hyundai launched its Tucson ix35 hydrogen fuel cell vehicle, and hopes to make 1,000 of them
for the European market. Compare that with the European commission's hydrogen roadmap, which forecasts an incredible 1 million hydrogen
fuel cell vehicles by 2020. Storing hydrogen on board a car also requires expensive pressure vessels or
cryogenic systems. Chemists and engineers have worked hard to find alternatives, such as adsorbing
hydrogen onto porous materials, or using hydrogen-dense molecules to release hydrogen on demand.
For example, Matthias Beller at the University of Rostock, Germany, recently unveiled a ruthenium catalyst that can generate hydrogen from
methanol at a relatively mild 65-95 °C. But while the ruthenium catalyst is a lovely bit of chemistry, it is not a
breakthrough for the hydrogen economy: the reaction releases carbon dioxide, which is much harder
to capture from millions of cars than it is at a single power station; the catalyst turnover frequency
reached 4,700 h⁻¹, many orders of magnitude from practicability; and it relies on ruthenium, global
stocks of which are thought to be only about 5,000 tonnes. Blind optimism: In February, the UKH2Mobility partnership
issued a report suggesting that 1.5 million hydrogen-powered vehicles could be on the road in the UK by 2030. Yet even this optimistic
report noted that the effort would only reduce carbon dioxide emissions by about 3 million tonnes -
less than the world currently emits in one hour. Hydrogen will undoubtedly find transport niches, but talk of
hydrogen powering a substantial proportion of the planet's billion cars (and counting) is driven more
by techno-optimism than evidence. Faster and more significant impacts would come from improving battery technology,
investing in clean electricity sources and developing carbon sequestration. The hydrogen economy is alluring, but it is a distraction from the
important task of decarbonising our transport system.
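The card's claim that the UKH2Mobility savings amount to "less than the world currently emits in one hour" can be checked with simple arithmetic. The 3-million-tonne figure is from the card; the roughly 35 Gt/yr global fossil CO2 figure is an assumed round number for the early 2010s, not something the card states.

```python
# Sanity check of the "less than the world emits in one hour" comparison.
# The 3 Mt saving is from the card; the ~35 Gt/yr global fossil CO2 figure is
# an assumed round number for the early 2010s.

uk_h2_saving_mt = 3            # million tonnes CO2 per year, per the card
global_annual_gt = 35          # assumed global fossil CO2 emissions, Gt/yr
hours_per_year = 365 * 24

global_hourly_mt = global_annual_gt * 1_000 / hours_per_year
print(f"Global emissions per hour: about {global_hourly_mt:.1f} Mt CO2")
print(f"Claimed saving of {uk_h2_saving_mt} Mt is "
      f"{uk_h2_saving_mt / global_hourly_mt:.0%} of one hour's emissions")
```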
Can't solve oil - hydrogen isn't ready and the fuel chain is highly inefficient
Strahan, writer for New Scientist Magazine, 8
[David, 9/28/8, New Scientist, Whatever Happened to the Hydrogen Economy?
http://www.newscientist.com/article/mg20026841.900-whatever-happened-to-the-hydrogen-
economy.html?full=true, accessed 7/1/14, TYBG]
WHATEVER happened to the hydrogen economy? At the turn of the century it was the next big thing, promising a future of infinite clean energy
and deliverance from climate change. Generate enough hydrogen, so the claim went, and we could use it to transform the entire energy
infrastructure - it could supply power for cars, planes and boats, buildings and even portable gadgets, all without the need for dirty fossil fuels.
Enthusiasts confidently predicted the breakthrough was just five to 10 years away. But today, despite ever-worsening news on
global warming and with peak oil looming, the hydrogen economy seems as distant as ever. Even in
Iceland, whose grand ambitions for a renewable hydrogen economy once earned it the title Bahrain of the north, visible progress has been
modest. After years of research, the country now boasts one hydrogen filling station, a handful of hydrogen cars, and one whale-watching boat
with a fuel cell for auxiliary power. A trial of three hydrogen-powered buses ended in 2007, when two were scrapped and the third was
consigned to a transport museum. More trials are planned, but that was before the meltdown of the country's banking system. In California,
where governor Arnold Schwarzenegger promised a "hydrogen highway" with 200 hydrogen filling stations by 2010, there are just five open to
the public. Ten hydrogen-fuelled buses are due to come into service in London by 2010, but a plan for 60 smaller hydrogen vehicles was
recently scrapped. Despite the setbacks, there is still enormous effort going into hydrogen research. "Fuel cells have been a roller
coaster of hype and disillusionment," says Martin Green of Johnson Matthey, which makes fuel-cell components for the car
industry, "but I am more confident now that the hydrogen economy is going to happen than ever before." Real products are now inching closer
to market (see map). Honda claims to be the first company with a fuel-cell car, the FCX Clarity, in large-scale production. The company will
make just 200 of these cars over three years, leasing them to customers for $600 per month, but so far Honda has shifted only three.
Meanwhile General Motors (GM) has released the first 100 of its Equinox fuel-cell cars in a free trial for potential customers around the world.
The company claims to have spent more than $1.2 billion on hydrogen R&D, and its research boss, Larry Burns, believes a market for fuel-cell
vehicles will have emerged by 2014. So could hydrogen finally be ready for take-off, or will the mirage continue to recede? Enthusiasts
claim the remaining hurdles are not so much technical as financial, and that mass production will
bring costs down dramatically. But so far the fuel cell, which lies at the heart of the entire hydrogen
project (see "Hydrogen basics"), has remained stubbornly expensive - and bringing the cost down means
changing the technology. One problem is that hydrogen fuel cells, seen as a way to provide electricity in homes as well as
vehicles, rely on precious-metal catalysts like platinum. A conventional automotive fuel-cell stack contains up to 100 grams of
platinum, which could cost more than $3000 at today's prices. For the hydrogen economy to happen, the amount of
platinum used in fuel cells has to come down, and soon. Green says this won't be a problem. He is convinced that car makers
will be able to slash the amount of platinum needed to just 20 grams per car by the time the technology is commercialised, which he foresees in
the middle of the next decade. He also points out that the platinum can be recycled. Yet the numbers still look daunting. Global car production
in 2007 was just over 71 million, and even with only 20 grams of platinum per car a wholesale shift to hydrogen fuel cells would need 1420
tonnes of platinum per year, six times current production. At that rate the world's resources of platinum-group metals would be gone in 70
years, with output peaking long before reserves are exhausted. And that calculation makes no allowance for any growth in car production, or
for the use of fuel cells in homes. "Platinum is really scarce, and only produced in five mines around the world",
says Armin Reller of the University of Augsburg in Germany, a former adviser on hydrogen to the Swiss
government. Reller has studied the resource constraints on a range of new technologies (New Scientist, 23 May 2007, p 34) and is
convinced that hydrogen can only be a partial solution at best, because it won't be possible to get
platinum out of the ground quickly enough. "When you introduce new technologies the dynamics are such that even if you
have the reserves, you can't produce them in time." It looks as if finding an alternative to platinum is a key challenge. For the hydrogen
economy to happen, industry must also come up with clean ways of producing it. Most hydrogen is currently made in refineries
by heating natural gas with steam in the presence of a catalyst, but this usually relies on energy from
fossil fuels and can generate carbon dioxide as a by-product. Because of this, the climate benefits of fuel-
cell vehicles are scarcely better than those of petrol hybrids, according to a 2003 study led by Malcolm Weiss at the
Massachusetts Institute of Technology. To make hydrogen cleanly and in bulk will almost certainly mean using renewable energy to electrolyse
water, though this process is costly and energy-intensive. Here too an enormous research effort is under way. A small British company, ITM
Power, says it has found a way to slash the costs of electrolysis, allowing it to produce a small-scale electrolyser that will eventually be so cheap
that every home could have one. This would also solve the hydrogen distribution problem. Instead of a system of pipelines, production could be
decentralised, with fuel produced close to where it will be consumed. All this because the company has invented a new material which it says
solves a long-standing conundrum of electrolysis. Industrial electrolysis uses huge cells containing a liquid electrolyte like potassium hydroxide
solution. This is alkaline, and so requires a nickel catalyst, much more plentiful and far cheaper than platinum. However, the hydrogen and
oxygen gas must be kept separate within the cell - they are explosive when combined - and the equipment needed to do this with a liquid
electrolyte would make the cell too bulky and costly for home use. In the 1960s, NASA developed fuel cells that replaced liquid electrolytes with
proton exchange membranes (PEMs), and the technology was applied to electrolysers too. However, the membranes were acidic, and an acidic
membrane needs a platinum catalyst. What's more, the membranes themselves remain hugely expensive. Now ITM Power claims to have
found the holy grail of both electrolysis and fuel cell technologies: a membrane that can be made alkaline so nickel can replace platinum. Using
half a dozen commonly available hydrocarbons, it has developed a solid but flexible polymer gel that is three times as conductive as existing
PEMs. Thanks to its simplicity and the fact that it is made from readily available materials, it should also be massively cheaper. The company
claims that with mass production its membrane would cost just $5 per square metre, compared to $500 for existing PEMs. As a result, ITM
Power says the electrolyser would cost $164 per kilowatt of capacity, against a current average of $2000 per kilowatt. To start with, the
company is building 10 of its "green box" electrolysers, each about the size of a large refrigerator. Jim Heathcote, chief executive of ITM Power,
won't say what they will cost - certainly tens of thousands of pounds each - though he claims that mass production will bring the price tag down
to less than 10,000 each. These home electrolysers will be connected to mains water, the company says, and at least partially driven by solar
panels or a wind turbine. The hydrogen produced could be used to drive a generator or fuel cell to produce electricity. It could also drive a car
powered either by a fuel cell or an internal combustion engine converted to run on hydrogen. Heathcote argues this set-up would not only be
low carbon but also reduce reliance on power grids, which he believes will become increasingly unreliable. But do the sums add up? Take
Heathcote's own home, where he has installed 60 square metres of solar panels - more than twice the average on UK properties with solar
installations. Heathcote's array, costing 50,000, generates about 10,000 kilowatt-hours (kWh) per year. Connected to ITM's electrolyser, which
is about 60 per cent efficient, the solar cells would produce enough hydrogen annually to yield 6000 kWh if used to power fuel cells. However,
the average house in the UK uses almost four times as much energy as that each year. If that same hydrogen were used to power ITM's
converted Ford Focus, the results would be scarcely better. Using the output of Heathcote's home, the car could travel about 7200 kilometres a
year, about half the average annual mileage of a British car. "It sounds absurd," Heathcote admits, "but that's how every technology starts.
There are early adopters and then mass production brings costs down hugely." He accepts that many homes will never go completely off-grid,
but he believes that with extra insulation many could use ITM Power's approach to obtain most of their household energy. And while he also
admits that hydrogen cars will probably never be powered solely from the roof of the house, he maintains the fuel could still be produced by a
home electrolyser using other energy sources, such as off-peak nuclear power. The problems don't end there, though. ITM Power
might have found a way to slash the costs of electrolysis, but nobody has solved a more fundamental problem: the
inefficiency of the whole hydrogen fuel chain. Energy losses The point was made forcefully by Gary Kendall of the
conservation group WWF in a recent report called Plugged In. Kendall, a chemist who previously spent almost a decade working for
ExxonMobil, highlights how the energy losses in the fuel chain - from electrolysis, to compression of the hydrogen for use, to
inefficiencies in the fuel cell itself - mean that only 24 per cent of the energy used to make the fuel does any
useful work on the road. By contrast, battery-powered electric vehicles and plug-in hybrids, with no electrolysis or compression to
worry about, use 69 per cent of the original energy. "Cars running on hydrogen would need three times the energy of those running directly on
electricity, and that would force us to build many more wind turbines," says Kendall. "The developed world needs to completely decarbonise
electricity generation by 2050, so we can't afford to just throw away three-quarters of the primary energy turning it into hydrogen."
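Two of the card's numerical claims - the platinum requirement for a wholesale shift to fuel-cell cars and the roughly threefold energy penalty of the hydrogen fuel chain - follow directly from the figures it quotes. A short sketch reproducing both calculations, using only the card's own inputs:

```python
# Reproduces the card's two calculations: platinum demand for a wholesale
# shift to fuel-cell cars, and the well-to-wheel efficiency gap between
# hydrogen and battery-electric drivetrains. All inputs are the card's own
# figures; only the rounding is added.

cars_per_year = 71_000_000       # global car production, 2007 (card)
platinum_per_car_g = 20          # optimistic platinum loading per stack (card)
current_pt_output_t = 1_420 / 6  # card: 1,420 t/yr would be six times current output

pt_needed_t = cars_per_year * platinum_per_car_g / 1_000_000
print(f"Platinum needed: {pt_needed_t:,.0f} t/yr, "
      f"about {pt_needed_t / current_pt_output_t:.0f}x current production")

# WWF "Plugged In" fuel-chain figures quoted in the card
hydrogen_chain_eff = 0.24
battery_chain_eff = 0.69
print(f"Hydrogen car needs about {battery_chain_eff / hydrogen_chain_eff:.1f}x "
      f"the energy of a battery-electric car per km driven")
```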
AT: EMP Attack
No real threat of EMPs - the impact would be containable
Weinberger, writer for Foreign Policy, 10
[Sharon, 2/17/10, Foreign Policy, The Boogeyman Bomb,
http://www.foreignpolicy.com/articles/2010/02/17/the_boogeyman_bomb?page=0,0]
If the primary threat is from a crudely constructed EMP weapon launched from a Scud-type missile,
that sort of weapon wouldn't have nearly the capabilities needed to take out U.S. infrastructure, he
argues. Butt estimates that such a device, with a one-kiloton yield, would have to be launched much lower in the atmosphere, and thus would
have more localized effects. "Serious long-lasting consequences of a one-kiloton EMP strike would likely be
limited to a state-sized region of the country," he writes. True, an EMP that affected even a single state would be, no doubt,
traumatic and disruptive, but it would also be recoverable, and more importantly, fall far short of a "continental-scale time machine."
In the end, advocates for EMP preparation could end up being their own worst enemy. The unlikely
scenarios they peddle lend themselves to caricature. And though there are certainly some intellectual heavyweights
among those who have warned about the effects of EMP -- like Johnny Foster, the former head of Lawrence Livermore National Laboratory --
critics have derided EMP defense supporters for relying on the likes of science fiction writer William R. Forstchen to help bolster their case. By
talking about "time machines" and turning the EMP bomb into something that goes bump in the night, those
advocating for better defenses risk pushing the issue further into the margins of science fiction.
AT: US-China War
No US-China war - nuclear deterrence and geography mean that despite tensions,
conflict would never erupt
Keck, international relations and defense writer, 13
[Zachary, 7/12/13, The Diplomat, Why China and the US (Probably) Won't Go to War,
http://thediplomat.com/2013/07/why-china-and-the-us-probably-wont-go-to-war/, accessed 7/1/14,
TYBG]
But while trade cannot be relied upon to keep the peace, a U.S.-China war is virtually unthinkable
because of two other factors: nuclear weapons and geography. The fact that both the U.S. and China
have nuclear weapons is the most obvious reason why they won't clash, even if they remain fiercely
competitive. This is because war is the continuation of politics by other means, and nuclear weapons make
war extremely bad politics. Put differently, war is fought in pursuit of policy ends, which cannot be
achieved through a total war between nuclear-armed states. This is not only because of nuclear weapons' destructive
power. As Thomas Schelling outlined brilliantly, nuclear weapons have not actually increased humans' destructive capabilities. In fact, there is
evidence to suggest that wars between nomads usually ended with the victors slaughtering all of the individuals on the losing side, because of
the economics of holding slaves in nomadic societies. What makes nuclear weapons different, then, is not just their
destructive power but also the certainty and immediacy of it. While extremely ambitious or desperate
leaders can delude themselves into believing they can prevail in a conventional conflict with a
stronger adversary because of any number of factors - superior will, superior doctrine, the weather, etc. -
none of this matters in nuclear war. With nuclear weapons, countries don't have to prevail on the battlefield
or defeat an opposing army to destroy an entire country, and since there are no adequate defenses
for a large-scale nuclear attack, every leader can be absolute certain that most of their country can be
destroyed in short-order in the event of a total conflict. Since no policy goal is worth this level of
sacrifice, the only possible way for an all-out conflict to ensue is for a miscalculation of some sort to
occur. Most of these can and should be dealt with by Chinese and U.S. leaders holding regular senior-
level dialogues like the ones of the past month, in which frank and direct talks about redlines are
discussed. These can and should be supplemented with clear and open communication channels,
which can be especially useful when unexpected crises arise, like an exchange of fire between low-level naval officers in
the increasingly crowded waters in the region. While this possibility is real and frightening, it's hard to imagine a plausible
scenario where it leads to a nuclear exchange between China and the United States. After all, at each
stage of the crisis leaders know that if it is not properly contained, a nuclear war could ensue, and the
complete destruction of a leader's country is a more frightening possibility than losing credibility
among hawkish elements of society. In any case, measured means of retaliation would be available to the party wronged, and
behind-the-scenes diplomacy could help facilitate the process of finding mutually acceptable
retaliatory measures. Geography is the less appreciated factor that will mitigate the chances of a U.S.-
China war, but it could be nearly as important as nuclear weapons. Indeed, geography has a history of allowing
countries to avoid the Thucydides Trap, and works against a U.S.-China war in a couple of ways. First, both the
United States and China are immensely large countries - according to the Central Intelligence Agency, the U.S. and China
are the third and fourth largest countries in the world by area, at 9,826,675 and 9,596,961 square km respectively. They also have difficult
topographical features and complex populations. As such, they are virtually unconquerable by another power. This is
an important point and differentiates the current strategic environment from historical cases where
power transitions led to war. For example, in Europe where many of the historical cases derive from, each state genuinely had to
worry that the other side could increase their power capabilities to such a degree that they could credibly threaten the other sides national
survival. Neither China nor the U.S. has to realistically entertain such fears, and this will lessen their
insecurity and therefore the security dilemma they operate within. Besides being immensely large countries,
China and the U.S. are also separated by the Pacific Ocean, which will also weaken their sense of
insecurity and threat perception towards one another. In many of the violent power transitions of the past, starting with
Sparta and Athens but also including the European ones, the rival states were located in close proximity to one another. By contrast,
when great power conflict has been avoided, the states have often had considerable distance
between them, as was the case for the U.S. and British power transition and the peaceful end to the
Cold War. The reason is simple and similar to the one above: the difficulty of projecting power across
large distances - particularly bodies of water - reduces each side's concern that the other will
threaten its national survival and most important strategic interests. True, the U.S. operates extensively in China's
backyard, and maintains numerous alliances and partnerships with Beijing's neighbors. This undeniably heightens the risk of conflict. At the
same time, the British were active throughout the Western Hemisphere, most notably in Canada, and the Americans maintained a robust
alliance system in Western Europe throughout the Cold War. Even with the U.S. presence in Asia, then, the fact that the
Chinese and American homelands are separated by the largest body of water in the world is
enormously important in reducing their conflict potential, if history is any guide at least. Thus, while every
effort should be made to avoid a U.S.-China war, it is nearly unthinkable one will occur.
AT: Middle East War
No Middle Eastern war - de-escalation and global deterrence check conflict - best
empirical cases prove no conflict despite tensions
Terrill, writer for the Strategic Studies Institute, 9
[W. Andrew Terrill, September, Strategic Studies Institute, Escalation and intrawar deterrence During
limited wars in the middle east, http://www.strategicstudiesinstitute.army.mil/pdffiles/pub941.pdf,
accessed 7/1/14, TYBG]
The number of declared nuclear powers has expanded significantly in the last 20 years to include Pakistan, India, and
North Korea. Additionally, other powers such as Iran are almost certainly striving for a nuclear weapons
capability while a number of countries in the developing world possess or seek biological and
chemical weapons. In this milieu, a central purpose of this monograph by W. Andrew Terrill is to reexamine two
earlier conflicts for insights that may be relevant for ongoing dangers during limited wars involving
nations possessing chemical or biological weapons or emerging nuclear arsenals. Decision-makers from the United States
and other countries may have to consider the circumstances under which a smaller and weaker enemy will
use nuclear weapons or other mass destruction weapons. Some of Dr. Terrill's observations may be particularly useful for policymakers
dealing with future crises involving developing nations that possess weapons of mass destruction (WMD). Although it is possible that the United
States could be a party to such a conflict, any crisis involving nuclear weapons states is expected to be of inherent concern to Washington, even
if it is not a combatant. Dr. Terrill has examined two important Middle Eastern wars. These conflicts are the
1973 Arab- Israeli War and the 1991 Gulf War. This monograph may be particularly valuable in providing
readers, including senior military and political leaders, with a discussion of the implications of these
historical case studies in which WMD-armed nations may have seriously considered their use but
ultimately did not resort to them. Both of these wars were fought at the conventional level, although
the prospect of Israel using nuclear weapons (1973), Egypt using biological weapons (1973), or Iraq using
chemical and biological weapons (1991) were of serious concern at various points during the fighting. The prospect
of a U.S. war with WMD-armed opponents (such as occurred in 1991) raises the question of how escalation
can be controlled in such circumstances and what are the most likely ways that intrawar deterrence can break down. This monograph
will consider why efforts at escalation control and intrawar deterrence were successful in the two case studies and
assess the points at which these efforts were under the most intensive stress that might have caused them to fail. Dr. Terrill notes that
intrawar deterrence is always difficult and usually based on a variety of factors that no combatant can
control in all circumstances of an ongoing conflict. The Strategic Studies Institute is pleased to offer this monograph as a
contribution to the national security debate on this important subject as our nation continues to grapple with a variety of problems associated
with the proliferation of nuclear, biological, and chemical weapons. This analysis should be especially useful to U.S. strategic leaders and
intelligence professionals as they seek to address the complicated interplay of factors related to regional security issues and the support of local
allies. This work may also benefit those seeking greater understanding of long range issues of Middle
Eastern and global security. We hope this work will be of benefit to officers of all services as well as other U.S. Government officials
involved in military planning, and that it may cause them to reconsider some of the instances where intrawar deterrence seemed to work well
but may have done so by a much closer margin than future planners can comfortably accept. In this regard, Dr. Terrill's work is
important to understanding the lessons of these conflicts which might otherwise be forgotten or
oversimplified. Additionally, an understanding of the issues involved with these earlier case studies
may be useful in future circumstances where the United States may seek to deter wartime WMD use by
potential adversaries such as Iran or North Korea. The two case studies may also point out the inherent difficulties in doing so and the need to
enter into conflict with these states only if one is prepared to accept the strong possibility that any efforts to control escalation have a good
chance of breaking down. This understanding is particularly important in a wartime environment in which all parties should rationally have an
interest in controlling escalation, but may have trouble doing so due to both systemic and wartime misperceptions and mistakes that distort
communications between adversaries and may cause fundamental misunderstandings about the nature of the conflict in which these states
may find themselves embroiled.