1AC – BIT – “Tear down that wall” edition
1AC vs. K
1: Warming
Lack of U.S.-China cooperation over green financing is preventing emission reductions
– Only a Bilateral investment strategy can meet investment demands
Hart et al ’16 - Senior Fellow and Director of China Policy at the Center for American Progress, professor of
energy and environmental policy at Tufts, Senior Fellow at the Center, Research Associate on China and Asia
Policy, Research Assistant on the Energy Policy team, (Melanie, Kelly Sims Gallagher, Pete Ogden, Blaine
Johnson, Ben Bovarnick, “Green Finance: The Next Frontier for U.S.-China Climate Cooperation,”
https://www.americanprogress.org/issues/security/report/2016/06/13/139276/green-finance-the-next-frontier-for-u-s-china-climate-cooperation/, DS)
When it comes to the climate arena, the United States and China are enjoying a wave of international goodwill resulting from the role each played
in rallying other nations to achieve the iconic Paris climate agreement. In November 2014, China and the United States stood
shoulder to shoulder as the first two countries to announce their post-2020 national greenhouse gas
emission reduction targets and remained constructive partners on the path to reaching a historic outcome in Paris this past December.
Now, as the United States and China put new policies in place to achieve their national targets and fulfill
their domestic and international commitments, both countries confront a common challenge: mobilizing
sufficient investment at home to meet domestic energy, climate, and environmental protection goals,
while at the same time steering outbound investments toward sustainable projects in other nations that
support, rather than undermine, those nations’ climate targets. In this Center for American Progress issue brief, the
authors consider the key domestic and international policies that were recently—or are currently
being—put in place by China and the United States to achieve their respective climate goals. In addition, we
evaluate the implications of these policies—both positive and negative—for green investment domestically and globally. Finally, we provide
recommendations for enhanced cooperation in this space. How green financing enables emission
reductions China and the United States emerged from Paris with clear climate goals but incomplete
blueprints for how they would achieve them. To a large extent, this was inevitable, as there are no silver bullets for the kinds
of ambitious transformations that the United States and China have committed to achieving. Rather, both countries need to develop
a range of new policies across a number of interrelated sectors to either replace or build upon the policy
landscape that currently exists. Regardless of the path each country pursues, however, one thing is increasingly clear: Mobilizing
finance will be critical to achieving the needed emission reductions. In this regard, China is guided by three top-tier
national climate targets: peak carbon dioxide emissions around 2030, and aim to peak before 2030 if possible; increase the nonfossil fuel portion
of the nation’s energy mix from 11.2 percent at year-end 2014 to around 20 percent around 2030; and reduce carbon intensity—which is the amount
of carbon emitted per unit of gross domestic product, or GDP—to 60 percent to 65 percent below 2005 levels around 2030. China already
has a number of domestic policy measures in place to move it toward these climate goals. Those existing
policy measures include increasingly stringent energy efficiency standards for motor vehicles, industrial
equipment, and appliances; a feed-in-tariff scheme that pays renewable energy producers a premium
for the power they generate; and fast-track coal control and emission peak programs that impose particularly ambitious coal use and
emission reduction targets in regions that, when added together, produced more than 66 percent of China’s GDP in 2014. In addition, under
China’s new five-year plan for 2016 to 2020, its leaders are working to reform the electric regulatory
system and impose more stringent coal caps in the nation’s inland and western regions. Power-sector reform
will be particularly critical to this effort because China’s state-run power grid has been a bottleneck for clean energy
expansion. In September 2015, Chinese President Xi Jinping stated that Beijing plans to move the nation
toward a “green dispatch” system that would put renewable energy at the top of the priority list for
transmission across the nation’s overloaded power grids. That would be a critical step toward meeting
China’s international commitment to nearly double the nonfossil fuel portion of its energy mix by 2020.
China’s other major new climate policy is a national emissions trading system that is expected to cover the nation’s power sector as well as six or
more major industries starting in 2017. The
effectiveness of these and other new policies remains to be seen, but
there is wide recognition that any path forward will require scaled-up investment. When Chinese officials
speak of “green finance”—which they do increasingly frequently—they are referring precisely to the
public and private investment that China will require to meet its environmental challenges, which include
its climate targets. According to the latest estimates, China will need to invest up to $6.7 trillion in low-carbon
industries by 2030, or around $300 billion to $445 billion per year over the next 15 years to meet its goals
under the Paris Agreement. According to China’s Institute of Finance and Capital Markets, at most, only 10 percent to 15
percent of that investment will come from public funds; the vast majority will need to come from the
private sector. Meanwhile, in the absence of comprehensive energy and climate legislation, the Obama administration is
working to implement a series of policies and regulations needed to put the United States on a track to
achieve its Paris commitment of 26 percent to 28 percent reduction in greenhouse gas pollution below
2005 levels by 2025. As of 2015, U.S. emissions were 12 percent below 2005 levels, so the trajectory is consistent with the target. At the
federal level, this includes the Clean Power Plan, which will for the first time regulate greenhouse gas
emissions from power plants; performance standards for motor vehicles; regulations on methane
emissions from new oil and gas sources; and reforms to U.S. policy on coal leasing on public lands, all of
which are being complemented by action at the state and local levels. All told, the United States and China
are making significant efforts to reduce domestic emissions. Both countries are demonstrating strong
leadership on domestic climate policy, and that has opened up new opportunities for mutually beneficial
bilateral and multilateral cooperation. The United States and China are already collaborating through the Climate Change Working
Group, which has launched multiple collaborative projects under the U.S.-China Strategic and Economic Dialogue, or S&ED; the U.S.-China Clean
Energy Research Center, which brings U.S. and Chinese experts together for joint clean energy technology development; and the Mission
Innovation initiative, which aims to raise research and development funding across multiple sectors, including clean energy sectors in the United
States and China. U.S. and Chinese officials also are engaged in a Domestic Policy Dialogue, formally established at the 2015 S&ED, which is a
bilateral forum for sharing lessons learned from each nation’s climate policy experiences to date. Going forward, there is room to expand
these initiatives. Possible areas for enhanced cooperation on domestic policy include reducing non-carbon
dioxide greenhouse gas emissions, improving measurement capabilities for land-use and forestry-sector
climate impacts and for policies for the power sector, technological innovation, and resilience policy.
Mobilizing green financing to meet domestic investment needs Despite the array of collaborative exchanges that are
already underway, the United States and China are not yet collaborating in any significant manner on one of
their most important shared challenges: how to mobilize private-sector investment to achieve their
emission reduction goals. Building out a new clean energy economy requires significant investment capital.
A BIT is necessary to facilitate a transition to renewable energy sources – BIT creates
bilateral cooperation and open, demanding markets
Winglee 15 -- Winglee is a former Research Assistant at a DC think tank where she worked on U.S.-China
economic relations. Her current research focus is on the intersection of sustainable and economic development
(Michelle, “A Bright Spot in US-China Relations: Renewable Energy”, http://thediplomat.com/2015/08/a-bright-spot-in-us-china-relations-renewable-energy//bj)
Rather than letting clean energy fall victim to another trade dispute, the U.S.
and China should recognize the opportunity in
cooperation on bilateral investment that could bypass trade frictions and help both sides capture the
positive externalities of green technology. The United States and China are currently in the midst of negotiating a
Bilateral Investment Treaty (BIT), which has the potential to create new incentives to invest in each other’s
clean energy sector. With China not in the U.S.-led Trans Pacific Partnership (TPP) trade agreement and the
United States not involved in Asia’s Regional Comprehensive Economic Partnership (RCEP) trade initiative, the U.S.-China Bilateral Investment Treaty offers a singular opportunity for the two countries to engage, and
not to favor the red or blue, but the green. According to Melanie Hart of the Center for American Progress, moving
toward a clean energy economy in the United States will require more than $1 trillion of investment in
the electricity grid, new fuels, mass transit, power generation, and manufacturing. The United States is a
relatively secure investment destination, home of leading solar technology, and has a strong domestic
market for clean energy. With Obama’s new regulatory plan under the EPA, establishing first-ever national standards to limit carbon
pollution from power plants, demand for clean energy is expected to increase. Meanwhile, China, with about $3.8
trillion in foreign exchange reserves, is increasingly employing its money towards outward investment
and has strong incentives to invest in clean energy. In China, coal accounts for about 60 percent of China’s CO2
emissions, which are causing massive health problems because of the smog they generate as well as social
discontent. In June, Chinese Premier Li Keqiang submitted a carbon-curbing plan to the UN, pledging to cut China’s greenhouse gas
emissions per unit of gross domestic product by 60-65 percent from 2005 levels. However, even with the right incentives, supply does not
always meet demand. Good
policies are necessary to capitalize on opportunities. As Hart points out, foreign
companies operating in the United States are disadvantaged. U.S. tax credits for residents and corporations that
generate energy through renewable sources primarily help large and well-established companies that can pay the high upfront costs for
renewable projects. Foreign
and smaller companies with less operational capacity need investment incentives
that can help reduce considerably high upfront costs and risk from the start. Another clean energy
incentive, loan guarantees issued by the U.S. Department of Energy, would be especially hard for a Chinese company to
obtain given the political controversies of U.S. government benefits to a Chinese company. Meanwhile,
companies like Apple, Google, and even Goldman Sachs have been trailblazing investment in solar energy. In February, Apple Chief Executive,
Tim Cook announced an $850 million agreement to buy enough solar energy from lead developer, First Solar, to power all of its California
operations. Though Cook certainly deserves credit for proactively decreasing the company’s carbon footprint, U.S. tax policies and creative
financing techniques have also made this commercially profitable. In a Wall Street Journal interview with Lisa Jackson, the woman overseeing
Apple’s environmental policy, she commented, “The difference in what we’re going to pay for the power through this deal and what we would
pay commercially is hundreds of millions of dollars.” On the other side of the Pacific, China is creating financial incentives for clean energy too,
though by providing free or low-cost loans and artificially cheap input components, land, and energy designated to promote the renewables
sector. In April, Apple made forays into the China arena, agreeing to back two larger solar farms in China.
Both sides have recognized
the need to adjust domestic policy and provide government support, but can the two countries work
together? Few companies have been able to help capture clean energy’s positive externality, and the
U.S. and China have yet to figure out how to make collaboration happen at the international policy level.
Financial support for clean energy does not measure up to the tax breaks and other policies propping up fossil fuels. An
IMF study estimated that the cost of global fossil fuel subsidies in 2015 would amount to $5.3 trillion or $14.5 billion a day. China’s energy
hungry domestic market could help validate new technologies that burn coal more cleanly. The
U.S. demand for residential solar
has also risen dramatically and stands to benefit from Chinese investments that could help finance more
clean energy jobs. As the two biggest carbon emitters globally, the United States and China have the
most to gain from allowing clean energy to access international markets of scale. The United States has
the opportunity to set a new tone before Xi’s state visit to the White House this September and seize upon this
opportunity where interests align. The visit could perpetuate economic tensions and frictions that have lasted since China’s
accession to the WTO in 2001, or establish a more cooperative relationship towards a sustainable future that
better aligns economic incentives with environmental ones under a green BIT.
Specifically, BIT creates a broader green financing framework that spills over globally
Schwartz ’16 - vice chairman of the Goldman Sachs Group Inc. and the Beijing-based chairman of Goldman
Sachs in the Asia-Pacific region, (Mark, “China gears up for “green finance” to fight environmental crisis,”
http://blogs.ft.com/beyond-brics/2016/02/24/china-gears-up-for-green-finance-to-fight-environmental-crisis/,
DS)
In the 1990s, trade was the defining issue of the US-China economic relationship. Today, as much as any other issue,
the environment binds the two giants of
the global economy together. This week, leaders from the international financial community are gathering in Shanghai for preparatory meetings in advance of the G20
summit in Hangzhou this September. Among the most prominent items on the agenda is green finance – public and private investment in environmental protection and climate change
mitigation. The US-Chinese economic relationship stands at a critical point. For three decades, China has enjoyed unprecedented growth
fueled by government-led investment and the export of manufactured goods, many of them to the American market. China’s leadership is now navigating a
difficult transition from an investment-led to a consumer-driven model. In a nation of 1.4bn people with a $10tn economy, this
transition is a significant challenge. Green finance offers an important avenue for China to demonstrate its commitment
while delivering positive results for its citizens. China’s environmental challenges are serious. Rapid urbanisation has left nearly two-thirds of China’s
groundwater unfit for human consumption and a fifth of its arable land contaminated. In Beijing alone, last December saw the Chinese leadership issue its first red alert warning citizens of
dangerously high smog levels. Later in the month, they had to issue the warning again. Environmental challenges, moreover, can no longer be
confined to any one country or continent. Carbon emissions or environmental degradation from China, as
with those from any other country, pose a threat to the world’s shared environment. The Chinese
leadership recognises the severity of this challenge and has embarked on a series of green finance
initiatives to respond. For the first time in Chinese Communist Party history, green finance was written into the party’s latest five-year plan last November. The Green Finance Task Force of the China Council for International Cooperation on Environmental Development also released
recommendations last year for establishing a modern green finance system. These included creating a green stock index, a green
ratings system, public-private green funds, and nationwide carbon and pollution trading markets. In
December, the People’s Bank of China established a green bond market to complement green bank
lending. China was also the first country to publish national guidelines on the issuance of green bonds. Green finance in China presents a massive opportunity for international investors.
The chief economist of the Research Bureau at the People’s Bank of China (PBoC), Ma Jun, and leader of the green finance effort for China, estimates that China will need to
invest at least $320bn per year in green sectors over the next five years. Yet, current fiscal resources
can cover no more than 15 per cent of that total. The opening of China’s green market can bring
significant private sector investment to meet this shortfall. Investors are interested: Goldman Sachs
estimates that “green services” is a $1tn potential market over the next five years. Clean energy is another
significant market opportunity. Harnessing market principles and innovative financial structures such as
securitisations and yield vehicles can catalyze access to deep liquid public capital markets. Green finance
also offers an opportunity to further embed investment within the broader framework of US-China
relationship, where the environment has been an important pillar of cooperation. In 2014, the United States and China
signed a bilateral agreement on carbon reduction. Our two countries worked closely with the international community to successfully reach a global climate agreement at the COP21 negotiations
in Paris in November. I for one am optimistic that ongoing negotiations for a bilateral investment treaty can lead to further market opening
in multiple areas, including green finance. China is also making green finance a focus area of its G20 presidency. As part of these efforts, the PBoC, together with Bank of England,
is chairing a Green Finance Study Group to develop recommendations to mobilise green investment globally. Of course, China still has significant steps to take
to align its green finance efforts with global standards. China’s current guidelines support some broader definitions for what green bonds can be used for and there is room for international
harmonisation. Implementation and enforcement of green finance efforts have to follow. Progress on green finance, nevertheless, can help restore
international confidence that China is committed to a balanced and sustainable future — an outcome that
China’s population and investors worldwide would welcome.
Studies show a BIT would cause widespread renewables shift
Luh ’15 – staff writer for Prospect Journal of International Affairs, (Angela, “LOOKING FORWARD IN U.S.-CHINA RELATIONS: OPPORTUNITIES AND RISKS,” https://prospectjournal.org/2015/05/08/looking-forward-in-u-s-china-relations-opportunities-and-risks/, DS)
3. Trade and investment A US-China Bilateral Investment Treaty (BIT), a policy initiative that encourages increased cross-border investment,
has been in the works for a while now. The treaty
would incentivize Chinese foreign direct investment in the U.S.
and vice versa. It would also give the U.S. and China a rules-based apparatus for their transactions. A BIT can serve two important
purposes. One, it would encourage China to move forward with open market reforms. Improved market access could deter China’s anti-competition policies that have blocked U.S. businesses from entering the Chinese market. Second,
it is crucial for the U.S. to open
its own doors to Chinese investment. One of the areas that would greatly benefit from Chinese
investment is the renewable energy sector. China is the world’s largest investor in renewable energy
technology; a recent report suggests that China could get 85 percent of its electricity from renewable
resources by 2050. Industry sensitivities have prevented large-scale Chinese energy investments in the
U.S. in the past. The U.S. should put its resources behind the BIT and encourage U.S.-China cooperation
in the development of renewables.
Development of a competitive green tech industry is the ONLY chance to stave off
extinction from warming
Hood ‘15 -- (Marlowe Hood, "Renewables key in race against climate change clock,"
http://phys.org/news/2015-11-renewables-key-climate-clock.html /KentDenver-NK)
Any plausible game plan for capping the rise of Earth's surface temperature depends on replacing
fossil fuels with energy sources that generate little or no carbon pollution.¶ That means renewables,
especially solar and wind, both of which face fewer constraints to growth than more established clean
energy: a river can be dammed only so many times, and nuclear remains expensive and controversial.¶ But humanity has dithered for so long
in the fight against global warming, experts say, that the window of opportunity for decarbonising the global economy fast enough to avoid
devastating climate change is barely ajar.¶ "The
cost and difficulty of mitigating greenhouse gases increases every
year, time is of the essence," Maria van der Hoeven, executive director of the International Energy Agency, said in a special IEA
report on energy and climate change released earlier this year.¶ The world's nations -– gathering in Paris in a month to
ink the first-ever universal climate pact—have set a target of limiting global warming to two degrees
Celsius (3.6 degrees Fahrenheit) above pre-industrial levels.¶ Cross that red line, scientists say, and
there will, almost literally, be hell to pay.¶ Science also tells us that, if we are to respect the 2 C limit, future greenhouse gas
emissions cannot exceed a total "budget" of about 1,000 billion tonnes of carbon dioxide.¶ Carbon-cutting pledges from nearly 150 nations,
unveiled on Friday, put us on track for a 3 C world.¶ This is a vast improvement on doing nothing. But even this unprecedented effort would use
up three quarters of that carbon budget by 2030, leaving very little margin for closing the remaining gap.¶ That's
where the
transition from fossil fuels to renewables comes in.¶ "The economics have been shifting on both sides
of the equation," said Alden Meyer, a veteran climate specialist with the Union of Concerned
Scientists in Washington. "The least-cost global strategy is to rapidly reduce our reliance on fossil fuels
and switch into the clean-energy economy."¶ Scaling up quickly¶ Energy production accounts for two-thirds of global greenhouse gas
emissions, and thus transformation of this sector is crucial, he and other experts said.¶ "Decarbonising
energy is probably the
quickest way to decarbonise the world," Adnan Amin, director general of the International Renewable Energies Agency, told
AFP.¶ The question, however, is whether solar, wind and other clean energy options can scale up quickly enough.¶ According to the UN
climate science body, the Intergovernmental Panel on Climate Change (IPCC), low-carbon energy must
account for at least 80 percent of global electricity production by 2050 to have a better-than-even
chance of staying under the 2 C threshold.¶ The good news is that renewables are expanding rapidly and attracting
investment.¶ Nearly half of all new installed power generation capacity in 2014 was in renewables—37 percent wind, a third solar and a quarter
hydro, according to the IEA.¶ Investment in the sector totalled more than a quarter of a trillion dollars in the same year, an 8.5 percent increase
over 2013.¶ "Capital markets have already begun to shift away from dirty technology to clean technology," Christiana Figueres, executive
secretary of the UN Framework Convention on Climate Change, told journalists Friday in releasing an analysis of national emissions-reduction
pledges.¶ Confounding predictions only a decade ago, the cost of solar and wind energy has plummeted.¶ Future greenhouse gas emissions
cannot exceed a total "budget" of about 1,000 billion tonnes of carbon dioxide, scienc¶ Future greenhouse gas emissions cannot exceed a total
"budget" of about 1,000 billion tonnes of carbon dioxide, science shows¶ "Generating electricity from renewables is cost competitive on the
grid or beating most conventional sources" is some areas, said Amin.¶ Fossil fuels highly subsidised¶ In poor countries, this holds out the
possibility of skipping past the fossil fuel stage of development, much in the way some regions went from no phones to cell phones.¶ "I think
India" – where 300 million people are without electricity – "is realising that it may be easier and more cost effective for them to provide
sustainable energy services to hundreds of millions of villagers through a decentralised renewable-based strategy," said Meyer. "They and other
countries are poised to leapfrog the fossil fuel age."¶ India has invested massively in clean energy, and pledged to install 175 gigawatts of
renewable capacity by 2022.¶ But renewables only account for about 20 percent of global
electricity generation, and three-quarters of that is hydro. Of total energy consumption -– overwhelmingly dominated by coal, oil and
gas –- less than five percent comes from clean technology, excluding nuclear.¶ The transition towards a low-carbon economy is also hampered
by fossil fuel subsidies totalling more than half-a-trillion dollars every year, four times the amount allocated for renewables.¶ Which is why, experts say, the Paris climate summit, which starts at the end of
this month, is so crucial.¶ "COP21" -– the Paris climate summit -– "needs to give a global and long-term signal to the world economy that is
relevant to investors," said Martin Kaiser, a climate analyst from Greenpeace.
Warming is real and anthropogenic – short-term trends are irrelevant
Adams ‘16 (Andrew; 4/16/16; Degree in agriculture and cites NASA and IPCC studies; Prince George Citizen, “There’s no debating
scientific facts,” http://www.princegeorgecitizen.com/opinion/columnists/there-is-no-debating-scientific-facts-1.2229437)
Last week I wrote about the signs of early spring and put a few jabs at climate
change deniers. This column did exactly what I had
hoped. It sparked conversation on the topic. Those who commented on the article were in fact climate change deniers, stating random
outliers of data in the overall trend, which is akin to the Republican senator of Oklahoma who brought a snowball to the senate
floor as evidence that global warming was a hoax. I am so glad this type of outlandish behavior has not manifested itself in Canadian politics as
of yet. Weather is what you get and climate is what you expect. This week I
hope to explain climate change to those who
don't fully understand the science behind it. I write this column with a mere bachelor of science and only a handful of classes in a
human and environmental interaction masters program before I left school to tackle other adventures that I felt academia would only prevent
me from doing all the while furthering my student debt. So while I am not an expert on this topic I do however have an understanding of the
scientific process and natural processes that allow us to understand climate change. Glancing into my personal library one could reasonably
make the statement that I may have a better understanding than your average Joe. It's
true the climate has always been
changing. While observed records of our climate indeed are not extreme in age, pollen in lake sediment,
trapped air bubbles and neutrons in glaciers can give us a reasonable degree of accuracy (of the past
800,000 years according to NASA) when looking to past climate fluctuations. In our last century of climatic
observations we have observed an overall increase of approximately 0.74 degrees Celsius in global
temperatures according to NASA and the IPCC. While this number does not seem significant, it is when you live in an extreme
environment such as the arctic. Think back to your history book's description of the Franklin expedition, now remember last week's stories from
CBC on the cruise ships traveling the Northwest Passage with thousands of people aboard the ships. 97
percent of climate scientists
agree that this warming (which is happening) is not caused by orbital variation nor sun spots or solar flares.
These experts agree this climate change is anthropogenic. While I believe Prince George has no doubt its share of
scientific geniuses, I don't believe that there is a scientific genius in P.G. that is more informed on climate
change than the leading 97 percent of top climate scientists. It is true that the climate has been warm
before and this is not the problem. The problem is the rate at which the change is occurring. According to
NASA, "As the Earth moved out of ice ages over the past million years, the global temperature rose a
total of four to seven degrees Celsius over about 5,000 years. In the past century alone, the
temperature has climbed 0.7 degrees Celsius, roughly ten times faster than the average rate of ice-age-recovery
warming." We are now in the sixth great extinction on Earth. In fact geologists are now calling our current Epoch the
Anthropocene as our industrial existence has now left its mark geologically on Earth forever. In 1750,
there was 250 PPM of carbon dioxide (the most important greenhouse gas) in our atmosphere now there is 400 PPM. If
you were to drive a car somehow up through our atmosphere for 100 kilometres you would then be in outer space. This is how small our
atmosphere is. It
is ludicrous to think that all of our industrial emissions have not been able to change the
composition of our thin veil of an atmosphere. It saddens me that some still deny these dire facts because we have work to
do and no time to waste. There is no one to blame but ourselves. To those who think this is a nefarious plot against the
common man from the government and scientists, I think you must first assume our government is intelligent enough to push such a plot as
this onto the public and ask yourself, why would they do such a thing, what would be the benefit, and also, "Have I been spending too much
time on YouTube watching conspiracy theories?" P.S. The Earth is not flat.
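The rate comparison NASA is quoted making above (4-7 °C over roughly 5,000 years of ice-age recovery versus 0.7 °C over the past century) can be checked with simple arithmetic. This sketch uses only the figures cited in the card; the exact multiple depends on which end of the 4-7 °C range is used.

```python
# Compare the ice-age-recovery warming rate with the modern rate,
# using only the figures quoted in the NASA passage above.
ice_age_rise_c = (4 + 7) / 2        # midpoint of the 4-7 C range
ice_age_years = 5000
modern_rise_c = 0.7                 # past century, per NASA
modern_years = 100

ice_age_rate = ice_age_rise_c / ice_age_years   # deg C per year
modern_rate = modern_rise_c / modern_years

ratio = modern_rate / ice_age_rate
print(f"Ice-age recovery: {ice_age_rate:.5f} C/yr")
print(f"Past century:     {modern_rate:.5f} C/yr")
print(f"Modern warming is ~{ratio:.1f}x faster")
```

On these figures the modern rate comes out roughly six to nine times the ice-age-recovery average, consistent with NASA's order-of-magnitude "roughly ten times faster" characterization.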
It’s happening fast – timeframe is a reason to vote aff.
Holthaus ’15 (Eric; 9/5/15; Meteorologist with Slate, citing MIT studies; Rolling Stone, “The Point of No Return: Climate Change
Nightmares Are Already Here,” www.rollingstone.com/politics/news/the-point-of-no-return-climate-change-nightmares-are-already-here-20150805)
Attendant with this weird wildlife behavior is a
stunning drop in the number of plankton — the basis of the ocean's
food chain. In July, another major study concluded that acidifying oceans are likely to have a "quite
traumatic" impact on plankton diversity, with some species dying out while others flourish. As the oceans absorb
carbon dioxide from the atmosphere, it's converted into carbonic acid — and the pH of seawater declines.
According to lead author Stephanie Dutkiewicz of MIT, that trend means "the whole food chain is going to be
different." The Hansen study may have gotten more attention, but the Dutkiewicz study, and others like it, could have even more dire
implications for our future. The rapid changes Dutkiewicz and her colleagues are observing have shocked some of their fellow scientists into
thinking that yes, actually, we're
heading toward the worst-case scenario. Unlike a prediction of massive sea-level rise just decades away, the warming and acidifying oceans represent a problem that seems to have
kick-started a mass extinction on the same time scale. Jacquelyn Gill is a paleoecologist at the University of Maine.
She knows a lot about extinction, and her work is more relevant than ever. Essentially, she's trying to save the species that are alive
right now by learning more about what killed off the ones that aren't. The ancient data she studies shows "really compelling
evidence that there can be events of abrupt climate change that can happen well within human life
spans. We're talking less than a decade."
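The acidification mechanism Dutkiewicz describes above hides its scale behind the logarithmic pH unit: a small pH drop is a large relative increase in hydrogen-ion concentration. The specific figures in this sketch (surface-ocean pH falling from about 8.2 pre-industrial to about 8.1 today) are an outside assumption, not values stated in the card.

```python
# pH is -log10 of hydrogen-ion concentration, so a small pH drop
# conceals a large relative increase in acidity.
# Assumed figures (not from the card): surface-ocean pH ~8.2
# pre-industrial, ~8.1 today.
ph_before = 8.2
ph_after = 8.1

h_before = 10 ** (-ph_before)
h_after = 10 ** (-ph_after)

increase = (h_after / h_before - 1) * 100
print(f"H+ concentration up ~{increase:.0f}% for a 0.1 pH drop")  # ~26%
```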
It’s reversible – we haven’t reached the tipping point – structural policy changes
create a positive domino effect.
Lemoine & Traeger ‘16 (Derek & Christian; 1/18/16; Assistant Professor of Economics at the University of Arizona & PhD in
Economics at the University of Heidelberg, Department of Agricultural & Resource Economics; Nature Climate Change 6, “Economics of tipping
the climate dominoes,” pg 514-519, http://www.nature.com/nclimate/journal/v6/n5/full/nclimate2902.html)
The threat of climate tipping points plays a major role in calls for aggressive emission reductions to
limit warming to 2 °C (refs 1,2,3,4). The scientific literature is particularly concerned with the possibility of
a ‘domino effect’ from multiple interacting tipping points 5, 6, 7, 8, 9, 10. For instance, reducing the effectiveness of
carbon sinks amplifies future warming, which in turn makes further tipping points more likely. Nearly all of the preceding quantitative economic
studies analyse optimal policy in the presence of a single type of tipping point that directly reduces economic output11, 12, 13, 14. This type of
tipping point affects the potential for further tipping points only indirectly: the
resulting reduction in emissions will generally
reduce the likelihood of triggering further tipping points. So far, only a single paper analyses optimal
climate policy in the presence of tipping points that alter the physical climate system 15, specifically a
temperature feedback tipping point and the carbon sink tipping point described above. These two tipping points
could interact directly; however, that paper considers only a single type of tipping point at a time. The present study synthesizes the tipping
point models from the previous literature to provide the first analysis of optimal climate policy when tipping points can directly interact. Our
study integrates all three types of previously modelled tipping points into a single integrated assessment model that combines smooth and
reversible changes with irreversible regime shifts. Each tipping point is stochastically triggered at an unknown threshold. We solve for the
optimal policy under Bayesian learning. Optimality means that resources within and across periods are distributed to maximize the expected
stream of global welfare from economic consumption over time under different risk states. The
optimal policy must anticipate all
possible thresholds, interactions and future policy responses. The anticipation of learning acknowledges
that future policymakers will have new information about the location of temperature thresholds and
can also react to any tipping points that may have already occurred. Learning over the threshold location also avoids
the assumption implicit in ref. 14 that tipping will eventually occur with certainty if temperatures stay permanently above the level where
tipping points are possible. Finally, going beyond the conventional focus on optimal policy, we also calculate the welfare cost of delaying
optimal climate policy. We demonstrate the value of monitoring for tipping points that have already been triggered, so that policy
can
adjust and reduce the probability of a single tipping point turning into a domino effect.
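The "domino effect" logic in the Lemoine & Traeger card can be illustrated with a toy Monte Carlo: one tipping point (a weakened carbon sink) amplifies warming, which raises the hazard of a second (feedback) tipping point, and mitigation lowers the chance of the cascade. This is a deliberately simplified sketch, not the authors' integrated assessment model; every parameter value below is an illustrative assumption.

```python
import random

# Toy Monte Carlo of interacting tipping points: tipping the carbon
# sink amplifies warming, which raises the per-year hazard of a
# second feedback tipping point. All numbers are illustrative.
def simulate(mitigate, years=100, trials=5000, seed=1):
    rng = random.Random(seed)
    second_tip = 0
    for _ in range(trials):
        temp = 1.0                          # deg C above pre-industrial
        rate = 0.01 if mitigate else 0.03   # baseline warming per year
        sink_tipped = False
        feedback_tipped = False
        for _ in range(years):
            temp += rate * (1.5 if sink_tipped else 1.0)  # amplification
            if not sink_tipped and rng.random() < 0.002 * temp:
                sink_tipped = True
            if not feedback_tipped and rng.random() < 0.001 * temp:
                feedback_tipped = True
        if feedback_tipped:
            second_tip += 1
    return second_tip / trials

p_delay = simulate(mitigate=False)
p_act = simulate(mitigate=True)
print(f"P(second tipping point) without mitigation: {p_delay:.2f}")
print(f"P(second tipping point) with mitigation:    {p_act:.2f}")
```

The qualitative result (mitigation cuts the probability of the second tipping point by reducing both the direct hazard and the amplification channel) is the interaction the card's "domino" argument turns on.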
Defer to consensus – denialists are biased and ignore real science.
Cook ’16 (John; 4/16/16; Climate Communication Fellow for the Global Change Institute at The University of Queensland; IOP Science,
“Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,”
http://iopscience.iop.org/article/10.1088/1748-9326/11/4/048002/pdf)
Efforts to measure scientific consensus need to identify a relevant and representative population of
experts, assess their professional opinion in an appropriate manner, and avoid distortions from
ambiguous elements in the sample. Approaches that have been employed to assess expert views on anthropogenic global warming (AGW)
include analyzing peer reviewed climate papers (Oreskes 2004; C13), surveying members of the relevant scientific community (Bray and von Storch 2007, Doran and
Zimmerman 2009, Bray 2010, Rosenberg et al 2010, Farnsworth and Lichter 2012, Verheggen et al 2014, Stenhouse et al 2014, Carlton et al 2015), compiling public
statements by scientists (Anderegg et al 2010), and mathematical analyses of citation patterns (Shwed and Bearman 2010). We
define domain
experts as scientists who have published peer-reviewed research in that domain, in this case, climate
science. Consensus estimates for these experts are listed in table 1, with the range of estimates resulting primarily from differences in selection of the expert
pool, the definition of what entails the consensus position, and differences in treatment of no position responses/ papers. The studies in table 1 have taken various
approaches to selecting and querying pools of experts. Oreskes (2004) identified expressions of views on AGW in the form of peer-reviewed papers on ‘global
climate change’. This analysis found no papers rejecting AGW in a sample of 928 papers published from 1993 to 2003, that is, 100% consensus among papers stating
a position on AGW. Following
a similar methodology, C13 analyzed the abstracts of 11 944 peer-reviewed
papers published between 1991 and 2011 that matched the search terms ‘global climate change’ or
‘global warming’ in the ISI Web of Science search engine. Among the 4014 abstracts stating a position
on human-caused global warming, 97.1% were judged as having implicitly or explicitly endorsed the
consensus. In addition, the study authors were invited to rate their own papers, based on the contents of the full paper, not just the abstract. Amongst
1381 papers self-rated by their authors as stating a position on human-caused global warming, 97.2%
endorsed the consensus. Shwed and Bearman (2010) employed citation analysis of 9432 papers on global warming and climate published from 1975
to 2008. Unlike surveys or classifications of abstracts, this method was entirely mathematical and blind to the content of
the literature being examined. By determining the modularity of citation networks, they concluded, ‘Our results reject the claim
of inconclusive science on climate change and identify the emergence of consensus earlier than
previously thought’ (p. 831). Although this method does not produce a numerical consensus value, it
independently demonstrates the same level of scientific consensus on AGW as exists for the fact that
smoking causes cancer. Anderegg et al (2010) identified climate experts as those who had authored at least 20 climate-related publications and chose
their sample from those who had signed public statements regarding climate change. By combining published scientific papers and
public statements, Anderegg et al determined that 97%–98% of the 200 most-published climate
scientists endorsed the IPCC conclusions on AGW. Other studies have directly queried scientists, typically choosing a sample of
scientists and identifying subsamples of those who self-identify as climate scientists or actively publish in the field. Doran and Zimmerman (2009)
surveyed 3146 Earth scientists, asking whether ‘human activity is a significant contributing factor in
changing mean global temperatures,’ and subsampled those who were actively publishing climate
scientists. Overall, they found that 82% of Earth scientists indicated agreement, while among the
subset with greatest expertise in climate science, the agreement was 97.4%. Bray and von Storch (2007) and Bray
(2010) repeatedly surveyed different populations of climate scientists in 1996, 2003 and 2008. The questions did not specify a time period for climate change
(indeed, in 2008, 36% of the participants defined the term ‘climate change’ to refer to ‘changes in climate at any time for whatever reason’). Therefore, the
reported consensus estimates of 40% (1996) and 53% (2003) (which included participants not stating a view on AGW) suffered from both poor control of expert
selection and ambiguous questions. Their 2008 study, finding 83% agreement, had a more robust sample selection and a more specific definition of the consensus
position on attribution. Verheggen et al (2014) surveyed 1868 scientists, drawn in part from a public repository of climate scientists (the same source as was used by
Anderegg et al), and from scientists listed in C13, supplemented by authors of recent climate-related articles and with particular effort expended to include
signatories of public statements critical of mainstream climate science. 85% of all respondents (which included a likely overrepresentation of contrarian non-scientists) who stated a position agreed that anthropogenic greenhouse gases (GHGs) are the dominant driver of recent global warming. Among
respondents who reported having authored more than 10 peer-reviewed climate related publications,
approximately 90% agreed that greenhouse gas emissions are the primary cause of global warming.
Stenhouse et al (2014) collected responses from 1854 members of the American Meteorological Society (AMS). Among members whose area of
expertise was climate science, with a publication focus on climate, 78% agreed that the cause of global
warming over the past 150 years was mostly human, with an additional 10% (for a total of 88%) indicating the warming was caused
equally by human activities and natural causes. An additional 6% answered ‘I do not believe we know enough to determine the degree of human causation.’ To
make a more precise comparison with the Doran and Zimmerman findings, these respondents were emailed one additional survey question to ascertain if they
thought human activity had contributed to the global warming that has occurred over the past 150 years; among the 6% who received this question, 5% indicated
there had been some human contribution to the warming. Thus, Stenhouse
et al (2014) concluded that ‘93% of actively
publishing climate scientists indicated they are convinced that humans have contributed to global
warming.’ Carlton et al (2015) adapted questions from Doran and Zimmerman (2009) to survey 698 biophysical scientists across various disciplines, finding
that 91.9% of them agreed that (1) mean global temperatures have generally risen compared with pre-1800s levels and that (2) human activity is a significant contributing factor in changing mean global
temperatures. Among the 306 who indicated that ‘the majority of my research concerns climate change or the impacts of climate change’, there was 96.7%
consensus on the existence of AGW. The Pew Research Center (2015) conducted a detailed survey of 3748 members of the American Association for the
Advancement of Science (AAAS) to assess views on several key science topics. Across this group, 87% agreed that ‘Earth is warming due mostly to human activity.’
Among a subset of working PhD Earth scientists, 93% agreed with this statement. Despite the diversity
of sampling techniques and approaches, a consistent picture of an overwhelming consensus among
experts on anthropogenic climate change has emerged from these studies. Another recurring finding is
that higher scientific agreement is associated with higher levels of expertise in climate science (Oreskes
2004, Doran and Zimmerman 2009, Anderegg 2010, Verheggen et al 2014). How can vastly different interpretations of consensus arise? A significant
contributor to variation in consensus estimates is the conflation of general scientific opinion with expert
scientific opinion. Figure 1 demonstrates that consensus estimates are highly sensitive to the expertise of the
sampled group. An accurate estimate of scientific consensus reflects the level of agreement among
experts in climate science; that is, scientists publishing peer-reviewed research on climate change. As
shown in table 1, low estimates of consensus arise from samples that include non-experts such as scientists
(or non-scientists) who are not actively publishing climate research, while samples of experts are
consistent in showing overwhelming consensus. Tol (2016) reports consensus estimates ranging from 7% to 100% from the same
studies described above. His broad range is due to sub-groupings of scientists with different levels of expertise. For example, the sub-sample with 7% agreement
was selected from those expressing an 'unconvinced' position on AGW (Verheggen et al 2014). This selection criterion does not provide a valid estimate of
consensus for two reasons: first, this subsample was selected based on opinion on climate change, predetermining the level of estimated consensus. Second, this
does not constitute a sample of experts, as non-experts were included. Anderegg (2010) found that nearly one-third of the unconvinced group lacked a PhD, and
only a tiny fraction had a PhD in a climate-relevant discipline. Eliminating less published scientists from both these samples resulted in consensus values of 90% and
97%–98% for Verheggen et al (2014) and Anderegg et al (2010), respectively. Tol's (2016) conflation of unrepresentative non-expert sub-samples and samples of
climate experts is a misrepresentation of the results of previous studies, including those published by a number of coauthors of this paper. In addition to varying
with expertise, consensus estimates may differ based on their approach to studies or survey responses that do not state an explicit position on AGW. Taking a
conservative approach, C13 omitted abstracts that did not state a position on AGW to derive its consensus estimate of 97%; a value shown to be robust when
compared with the estimate derived from author responses. In contrast, in one analysis, Tol (2016) effectively treats no-position abstracts as rejecting AGW,
thereby deriving consensus values less than 35%. Equating no-position papers with rejection or an uncertain position on AGW is inconsistent with the expectation of
decreasing reference to a consensual position as that consensus strengthens (Oreskes 2007, Shwed and Bearman 2010). Powell (2015) shows that applying Tol's
method to the established paradigm of plate tectonics would lead Tol to reject the scientific consensus in that field because nearly all current papers would be
classified as taking 'no position'. 4. Conclusion We
have shown that the scientific consensus on AGW is robust, with a
range of 90%–100% depending on the exact question, timing and sampling methodology. This is supported by
multiple independent studies despite variations in the study timing, definition of consensus, or differences in methodology including surveys of scientists, analyses
of literature or of citation networks. Tol (2016) obtains lower consensus estimates through a flawed methodology, for example by conflating non-expert and expert
views, and/or making unsupported assumptions about sources that do not specifically state a position about the consensus view . An accurate understanding of
scientific consensus, and the ability to recognize attempts to undermine it, are important for public climate literacy. Public perception of the scientific consensus has
been found to be a gateway belief, affecting other climate beliefs and attitudes including policy support (Ding et al 2011, McCright et al 2013, van der Linden et al
2015). However, many in the public, particularly in the US, still believe scientists disagree to a large extent about AGW (Leiserowitz et al 2015), and many political
leaders, again particularly in the US, insist that this is so. Leiserowitz et al (2015) found that only 12% of the US public accurately estimate the consensus at 91%–
100%. Further, Plutzer et al 2016 found that only 30% of middle-school and 45% of high-school science teachers were aware that the scientific consensus is above
80%, with 31% of teachers who teach climate change presenting contradictory messages that emphasize both the consensus and the minority position.
Misinformation about climate change has been observed to reduce climate literacy levels (McCright et al 2016, Ranney and Clark 2016), and manufacturing doubt
about the scientific consensus on climate change is one of the most effective means of reducing acceptance of climate change and support for mitigation policies
(Oreskes 2010, van der Linden et al 2016). Therefore, it should come as no surprise that the most common argument used in contrarian op-eds about climate
change from 2007 to 2010 was that there is no scientific consensus on human-caused global warming (Elsasser and Dunlap 2012, Oreskes and Conway 2011). The
generation of climate misinformation persists, with arguments against climate science increasing relative to policy arguments in publications by conservative
organisations (Boussalis and Coan 2016). Consequently,
it is important that scientists communicate the overwhelming
expert consensus on AGW to the public (Maibach et al 2014, Cook and Jacobs 2014). Explaining the 97% consensus has
been observed to increase acceptance of climate change (Lewandowsky et al 2013, Cook and
Lewandowsky 2016) with the greatest change among conservatives (Kotcher et al 2014). From a broader
perspective, it doesn't matter if the consensus number is 90% or 100%. The level of scientific
agreement on AGW is overwhelmingly high because the supporting evidence is overwhelmingly
strong.
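The Cook card's dispute with Tol comes down to an arithmetic choice: whether abstracts stating no position count against the consensus. Using only the C13 figures quoted above (11,944 abstracts, 4,014 stating a position, 97.1% of those endorsing), the two treatments can be reproduced directly.

```python
# Reproduce how treatment of 'no position' abstracts changes the
# consensus estimate, using C13's figures quoted in the card above.
total_abstracts = 11944
stating_position = 4014
endorse_share = 0.971

endorsing = round(stating_position * endorse_share)

# C13's approach: consensus among abstracts stating a position.
c13_estimate = endorsing / stating_position

# Tol-style approach: effectively counting no-position abstracts
# as non-endorsing.
diluted_estimate = endorsing / total_abstracts

print(f"Among position-stating abstracts:       {c13_estimate:.1%}")
print(f"Counting no-position as non-endorsing:  {diluted_estimate:.1%}")
```

The second figure lands below 35%, matching the card's description of how Tol (2016) derives "consensus values less than 35%" from the same underlying data.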
Climate change produces massive human injustices on impoverished nations,
communities, and populations—crosses lines of race and gender—policy response key
Quipu 13
Project Quipu, examining the manner in which financial news is reported in the popular media, The Hot
Spring Network proposes to create a system whereby live-update, rss-technology, and financial and
editorial expertise, come together to produce a reliable up-to-the-minute resource for evaluating broad
economic trends and engagements, without limiting analysis to single-parameter references like GDP or
individual stock indices, “Climate Justice is About Preventing Structural Violence”, March 11,
https://web.archive.org/web/20130311092246/http://www.casavaria.com/cafesentido/2013/03/11/9120/climate-justice-is-about-preventing-structural-violence/
When we discuss climate change, global warming or the human-caused destabilization of global climate
patterns, we think of science, of energy, of the natural environment. We do not, often enough, think
about justice. And when we do, it is in the context of the right of rapidly industrializing nations to emit
as much CO2 as the nations that led the industrial revolution from the early 19th to the early 20th
centuries.¶ Then, when we are more thoughtful, we come to Tuvalu, Vanuatu, the Maldives and
Micronesia, where entire nations may need to be evacuated, as the rising seas spurred on by rampant
CO2 emissions, rush in. (Incidentally, in the US, places like the Rockaways, in Queens, and low-lying areas
of Staten Island, the New Jersey shore and the entire state of Delaware, are now making plans for
possible evacuation to higher ground, as sea levels rise and become increasingly expensive to manage.)¶
But there are other ways in which climate destabilization plays havoc with the calculus of human justice.
The question of justice begins, surely, with the unconstrained emission of climate-destabilizing gases, or
greenhouse gases. But it leads us, eventually, to the downstream impacts, felt in places like rural
Pakistan, where the melting of glaciers has contributed to a rash of catastrophic floods, whole regions
have been set back decades by the devastation.¶ In places like South Sudan and Darfur, in western
Sudan, or northern Nigeria, or northwestern China, devastating and expanding droughts, which amount
to comprehensive desertification in some cases, have left communities without reliable access to
drinking water. This often means women are forced by culture and by circumstance, to spend hours
each day migrating in search of water.¶ The relationship between clean water access and the rights and
security of women and girls is often much closer than people realize. At the 2011 World Bank Civil
Society Forum, it was cited by the head of the World Bank as one of the main drags on progress in the
civil liberties and economic opportunity enjoyed by women and girls around the world. The know-how
to build wells and access to reliable water pumps can break that barrier and free women’s time and
energy to motivate more self-determined activities, including education, leadership and political
decision-making.¶ Where the climate is more comprehensively destabilized, we face the worst
degradation of the human condition, and this poses real challenges, structurally, politically and
socially, to the advancement of rights, as political and cultural conventions prioritize the practical
application of power. We need to address this dysfunction, as we plan our response to the climate crisis,
and consider policy options that will liberate individuals, families and communities, decentralizing
political power and giving human beings a voice.¶ Stakeholders need to have a voice that is significant,
and that is heard, and we need to examine the frameworks through which we motivate change, in order
to make sure they do not block that necessary hearing, without which we will not achieve the best
possible outcome for real people.
Scenario One is extinction –
Warming collapses the planet.
Griffin ’15 (David; 1/14/15; Claremont Philosophy Professor, citing diversity of climatologists; CNN, “The climate is ruined. So can
civilization even survive?” http://www.cnn.com/2015/01/14/opinion/co2-crisis-griffin/)
Although most of us worry about other things, climate
scientists have become increasingly worried about the survival
of civilization. For example, Lonnie Thompson, who received the U.S. National Medal of Science in 2010, said that virtually all
climatologists "are now convinced that global warming poses a clear and present danger to
civilization." Informed journalists share this concern. The climate crisis "threatens the survival of our civilization,"
said Pulitzer Prize-winner Ross Gelbspan. Mark Hertsgaard agrees, saying that the continuation of
global warming "would create planetary conditions all but certain to end civilization as we know it."
These scientists and journalists, moreover, are worried not only about the distant future but about the condition of the planet for their own
children and grandchildren. James Hansen, often considered the world's leading climate scientist, entitled his book "Storms of My
Grandchildren." The
threat to civilization comes primarily from the increase of the level of carbon dioxide
(CO2) in the atmosphere, due largely to the burning of fossil fuels. Before the rise of the industrial age, CO2 constituted only 275 ppm
(parts per million) of the atmosphere. But it is now above 400 and rising about 2.5 ppm per year. Because of the CO2 increase, the
planet's average temperature has increased 0.85 degrees Celsius (1.5 degrees Fahrenheit). Although this
increase may not seem much, it has already brought about serious changes. The idea that we will be safe from
"dangerous climate change" if we do not exceed a temperature rise of 2C (3.6F) has been widely accepted. But many informed people have
rejected this assumption. In the opinion of journalist-turned-activist Bill McKibben, "the one degree we've raised the temperature already has
melted the Arctic, so we're fools to find out what two will do." His warning is supported by James Hansen, who declared that "a target of two
degrees (Celsius) is actually a prescription for long-term disaster." The
burning of coal, oil, and natural gas has made the
planet warmer than it had been since the rise of civilization 10,000 years ago. Civilization was made
possible by the emergence about 12,000 years ago of the "Holocene" epoch, which turned out to be the
Goldilocks zone - not too hot, not too cold. But now, says physicist Stefan Rahmstorf, "We are catapulting ourselves way out of
the Holocene." This catapult is dangerous, because we have no evidence civilization can long survive with
significantly higher temperatures. And yet, the world is on a trajectory that would lead to an increase of 4C (7F) in this century. In
the opinion of many scientists and the World Bank, this could happen as early as the 2060s. What would "a 4C world" be like?
According to Kevin Anderson of the Tyndall Centre for Climate Change Research (at the University of
East Anglia), "during New York's summer heat waves the warmest days would be around 10-12C (18-21.6F) hotter [than today's]." Moreover, he has said, above an increase of 4C only about 10% of the
human population will survive. Believe it or not, some scientists consider Anderson overly optimistic. The
main reason for pessimism is the fear that the planet's temperature may be close to a tipping point that
would initiate a "low-end runaway greenhouse," involving "out-of-control amplifying feedbacks." This
condition would result, says Hansen, if all fossil fuels are burned (which is the intention of all fossil-fuel corporations and many governments).
This result "would make most of the planet uninhabitable by humans." Moreover, many scientists
believe that runaway global warming could occur much more quickly, because the rising temperature
caused by CO2 could release massive amounts of methane (CH4), which is, during its first 20 years, 86
times more powerful than CO2. Warmer weather induces this release from carbon that has been stored in methane hydrates, in
which enormous amounts of carbon -- four times as much as that emitted from fossil fuels since 1850 -- has been frozen in the Arctic's
permafrost. And yet now the Arctic's temperature is warmer than it had been for 120,000 years -- in other words, more than 10 times longer
than civilization has existed. According to Joe Romm, a physicist who created the Climate Progress website, methane release from thawing
permafrost in the Arctic "is the most dangerous amplifying feedback in the entire carbon cycle." The amplifying feedback works like this: The
warmer temperature releases millions of tons of methane, which then further raise the temperature, which in turn releases more methane.
The resulting threat of runaway global warming may not be merely theoretical. Scientists have long
been convinced that methane was central to the fastest period of global warming in geological history,
which occurred 55 million years ago. Now a group of scientists have accumulated evidence that
methane was also central to the greatest extinction of life thus far: the end-Permian extinction about
252 million years ago. Worse yet, whereas it was previously thought that significant amounts of permafrost would not melt, releasing
its methane, until the planet's temperature has risen several degrees Celsius, recent studies indicate that a rise of 1.5 degrees would be enough
to start the melting. What
can be done then? Given the failure of political leaders to deal with the CO2
problem, it is now too late to prevent terrible developments. But it may -- just may -- be possible to
keep global warming from bringing about the destruction of civilization. To have a chance, we must,
as Hansen says, do everything possible to "keep climate close to the Holocene range" -- which means,
mobilize the whole world to replace dirty energy with clean as soon as possible.
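The Griffin card's concentration figures (above 400 ppm, rising about 2.5 ppm per year, against a 275 ppm pre-industrial baseline) support a simple linear projection. Actual CO2 growth is not linear, and the 450 ppm benchmark used below (often associated with the 2 °C target) is an outside assumption, so this is only an order-of-magnitude sketch.

```python
# Linear projection using the card's figures: CO2 above 400 ppm,
# rising ~2.5 ppm per year, pre-industrial level 275 ppm.
current_ppm = 400
rate_ppm_per_year = 2.5
preindustrial_ppm = 275

# Assumed benchmark (not from the card): 450 ppm is commonly
# associated with the 2 C target.
target_ppm = 450
years_to_target = (target_ppm - current_ppm) / rate_ppm_per_year

doubling_ppm = 2 * preindustrial_ppm
years_to_doubling = (doubling_ppm - current_ppm) / rate_ppm_per_year

print(f"Years to 450 ppm at current rate:       {years_to_target:.0f}")
print(f"Years to doubled pre-industrial CO2:    {years_to_doubling:.0f}")
```

At the card's stated rate the 450 ppm benchmark is roughly two decades away, which is the timeframe argument the affirmative is making.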
Scenario Two is diseases –
Climate change unleashes catastrophic diseases – makes normal response impossible.
Hoberg & Brooks ’15 (Eric & Daniel; 2/16/15; Field biologist, former Chief Curator of the U.S. National Parasite Collection of the
Agriculture Research Service, U.S. Department of Agriculture & Professor of Evolutionary Biology at the University of Toronto, specializes in
biodiversity, systematics, and conservation biology; RSTB, “Evolution in action: climate change, biodiversity dynamics and emerging infectious
disease,” rstb.royalsocietypublishing.org/content/370/1665/20130553?con&dom=zum&src=syndication)
Host–parasite systems are ubiquitous. Understanding the factors that generate, maintain and constrain
these associations has implications for broad ecological and environmental issues including the dynamics of
EIDs [29,39,61], biological control, biological introductions and invasions, and biotic responses to climate change [25]. The
Stockholm Paradigm postulates that parasite specialists can shift rapidly to novel (naive) hosts via ecological fitting (EF). EF
between hosts and parasites occurs with high enough frequency to influence host range dynamics and
the diversity of species and interactions among species. Although no quantitative statement of this importance can yet be
made, it is clear from the above discussion that shifts onto relatively unrelated hosts appear routinely in phylogenetic analyses and are
observed readily in contemporary time. These observations are fundamental for EID studies: EIDs
arise when parasite species
begin infecting and causing disease in host species with which they have no previous history of
association. If the nature of host specificity is such that the potential for EF is small, host shifts are likely to be rare and attention can be
focused on managing each EID as it emerges. Little attention need be paid to its origins, beyond a search for the taxonomic identity of the
parasite acting as the pathogen, and its immediate reservoir. If the potential for EF is large, however, then host shifts are likely to be common,
and a more predictive, pre-emptive framework for managing EID will be needed, greatly increasing the challenge of an already difficult
problem. Humanity
has tended to react to emerging diseases as they occur, using our understanding of
epidemiology in an attempt to mitigate the damage done. If the Stockholm Paradigm reflects a
fundamentally correct explanation of the evolution of interspecific associations, then reactive
management policies for dealing with emerging diseases cannot be economically sustainable. This
implies that an additional strategy that could be employed in conjunction with those reactive tactics is
being proactive. We can use our knowledge of what has happened in the past to help us anticipate the
future. It is a kind of evolutionary risk assessment. Just as we cannot stop climate change, we cannot
stop these emerging diseases. We believe, however, that proactive risk management [36,62] is less
expensive, and thus more effective, than responding after the crisis. A broader macroevolutionary picture for general
processes of expansion and invasion is emerging, which links historical and contemporary systems. Historical conservatism is pervasive, and it is
evident that equivalent mechanisms have structured faunal assembly in the biosphere and that episodes of expansion and isolation have
alternated over time. Fine-scale (landscape) processes as a mosaic within larger events, while important, are idiosyncratic and more strongly
influenced by chance and founder events. Thus, in
contemporary associations, under the influence of accelerating
change, we cannot always predict which components of the biota will come into proximity or contact,
the duration of these events or the temporal order in which faunal mixing occurs. In these instances, the
importance of adaptation may be diminished, whereas the persistence of parasites and pathogens
through broad sloppy fitness space can be seen as the capacity to use rapidly changing resources
without narrow restriction to any particular ecological/environmental setting. Climate and disturbance-driven
taxon pulses coupled with oscillations in host range can be expected to influence the frequency of EID, because they create episodes of
geographical range shifts and expansions. The episodes, in turn, increase biotic mixing and the opportunities for EF to occur. The current EID
crisis is ‘new’ only in the sense that this is the first such event that scientists have witnessed directly. Previous
episodes through
earth history of global climate change and ecological perturbation, broadly defined, have been
associated with environmental disruptions that led to EID [16,17,62]. From an epidemiological standpoint,
episodes of global climate change should be expected to be associated with the origins of new host–
parasite associations and bursts of EID. The combination of taxon pulses and EF suggests that host and parasite species with the
greatest ability to disperse should be the primary sources of EID [58,62–64]. Palaeontological studies suggest that species with large
geographical ranges and with high ability to disperse are most successful at surviving large-scale environmental perturbation and mass
extinctions [65]. Thus, the
species most successful at surviving global climate changes will be the primary
sources of EID, so host extinction will not limit the risk of EID. The planet is thus an evolutionary and
ecological minefield of EID through which millions of people, their crops and their livestock wander
daily.
That causes extinction – burnout is wrong.
Kerscher ’14 (Karl-Heinz; 2014; Professor, unclear where because every website about him is in German; Wissenschaftliche Studie,
“Space Education,” 92 pages)
The death toll for a pandemic is equal to the virulence, the deadliness of the pathogen or pathogens, multiplied by
the number of people eventually infected. It has been hypothesized that there is an upper limit to the
virulence of naturally evolved pathogens. This is because a pathogen that quickly kills its hosts might not
have enough time to spread to new ones, while one that kills its hosts more slowly or not at all will allow
carriers more time to spread the infection, and thus likely out-compete a more lethal species or strain. This simple model
predicts that if virulence and transmission are not linked in any way, pathogens will evolve towards low
virulence and rapid transmission. However, this assumption is not always valid and in more complex
models, where the level of virulence and the rate of transmission are related, high levels of virulence
can evolve. The level of virulence that is possible is instead limited by the existence of complex populations
of hosts, with different susceptibilities to infection, or by some hosts being geographically isolated. The size of the host population and
competition between different strains of pathogens can also alter virulence. There are numerous historical examples of
pandemics that have had a devastating effect on a large number of people, which makes the possibility
of global pandemic a realistic threat to human civilization.
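The card's opening claim is a simple arithmetic relation: expected deaths equal virulence (a case fatality rate) times the number of people eventually infected. A minimal sketch, with entirely hypothetical figures, shows why "burnout" only caps the toll when high virulence suppresses spread:

```python
# Sketch of the relation stated in the card:
# expected deaths = virulence (case fatality rate) x people infected.
# All figures below are hypothetical, for illustration only.

def death_toll(virulence, infected):
    """Expected deaths given a case fatality rate and total infections."""
    return virulence * infected

# A highly virulent pathogen that burns out early can kill fewer people
# than a milder one that spreads widely -- the "burnout" intuition:
fast_killer = death_toll(virulence=0.5, infected=10_000)        # 5,000
slow_spreader = death_toll(virulence=0.02, infected=1_000_000)  # 20,000

# The card's rebuttal: when virulence and transmission are linked (or hosts
# are structured into complex populations), both factors can stay large at
# once, so no natural upper limit on the product is guaranteed.
```

The sketch is only a restatement of the card's two-factor model; it assumes nothing beyond the multiplication the evidence itself describes.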
Scenario Three is biodiversity –
Warming destroys biodiversity.
Science Daily ’11 (Science Daily; 9/24/11; Senckenberg Research Institute and Natural History Museum, citing the Biodiversity &
Climate Research Centre; Science Daily, “Global warming may cause higher loss of biodiversity than previously thought,"
https://www.sciencedaily.com/releases/2011/08/110824091146.htm)
If global warming continues as expected, it is estimated that almost a third of all flora and fauna species
worldwide could become extinct. Scientists from the Biodiversity and Climate Research Centre (Biodiversität
und Klima Forschungszentrum, BiK-F) and the SENCKENBERG Gesellschaft für Naturkunde discovered that the
proportion of actual biodiversity loss should quite clearly be revised upwards: by 2080, more than 80 % of
genetic diversity within species may disappear in certain groups of organisms, according to researchers in the title story
of the journal Nature Climate Change. The study is the first world-wide to quantify the loss of biological diversity
on the basis of genetic diversity. Most common models on the effects of climate change on flora and fauna concentrate on
"classically" described species, in other words groups of organisms that are clearly separate from each other morphologically. Until now,
however, so-called cryptic diversity has not been taken into account. It encompasses the diversity of genetic variations and deviations within
described species, and can only be researched fully since the development of molecular-genetic methods. As
well as the diversity of
ecosystems and species, these genetic variations are a central part of global biodiversity. In a
pioneering study, scientists from the Biodiversity and Climate Research Centre (BiK-F) and the Senckenberg Gesellschaft für Naturkunde
have now examined the influence of global warming on genetic diversity within species. Over 80 percent
of genetic variations may become extinct The distribution of nine European aquatic insect species, which still exist in the
headwaters of streams in many high mountain areas in Central and Northern Europe, was modelled. They have already been widely
researched, which means that the regional
distribution of the intra-species diversity and the existence of
morphologically cryptic, evolutionary lines are already known. If global warming does take place in the
range that is predicted by the Intergovernmental Panel on Climate Change (IPCC), these creatures will be pushed back
to only a few small refugia, e.g. in Scandinavia and the Alps, by 2080, according to model calculations. If Europe's climate warms up
by up to two degrees only, eight of the species examined will survive, at least in some areas; with an increase in temperature
of 4 degrees, six species will probably survive in some areas by 2080. However, due to the extinction of local populations,
genetic diversity will decline to a much more dramatic extent. According to the most pessimistic projections, 84
percent of all genetic variations would die out by 2080; in the "best case," two-thirds of all genetic
variations would disappear. The aquatic insects that were examined are representative of many species of mountainous regions of
Central Europe.
Rapid biodiversity decline causes extinction – triggers global environmental collapse
and exacerbates threats.
Torres ’16 (Phil; 4/11/16; Founder of the X-Risks Institute, an affiliate scholar at the Institute for Ethics and Emerging Technologies; The
Bulletin, “Biodiversity loss: An existential risk comparable to climate change," thebulletin.org/biodiversity-loss-existential-risk-comparable-climate-change9329)
According to the Bulletin of Atomic Scientists, the two greatest existential threats to human civilization stem from climate change and nuclear
weapons. Both pose clear and present dangers to the perpetuation of our species, and the increasingly dire climate situation and nuclear
arsenal modernizations in the United States and Russia were the most significant reasons why the Bulletin decided to keep the Doomsday Clock
set at three minutes before midnight earlier this year. But there
is another existential threat that the Bulletin overlooked in its
Doomsday Clock announcement: biodiversity
loss. This phenomenon is often identified as one of the many consequences of climate
change, and this is of course correct. But biodiversity loss is also a contributing factor behind climate change. For
example, deforestation in the Amazon rainforest and elsewhere reduces the amount of carbon dioxide removed from
the atmosphere by plants, a natural process that mitigates the effects of climate change. So the causal relation between
climate change and biodiversity loss is bidirectional. Furthermore, there are myriad phenomena that are driving biodiversity
loss in addition to climate change. Other causes include ecosystem fragmentation, invasive species, pollution, oxygen depletion caused by
fertilizers running off into ponds and streams, overfishing, human overpopulation, and overconsumption. All of these phenomena have a direct
impact on the health of the biosphere, and all would conceivably persist even if the problem of climate change were somehow immediately
solved. Such considerations warrant decoupling biodiversity loss from climate change, because the former has been consistently subsumed by
the latter as a mere effect. Biodiversity
loss is a distinct environmental crisis with its own unique syndrome of
causes, consequences, and solutions—such as restoring habitats, creating protected areas (“biodiversity parks”), and practicing
sustainable agriculture. The sixth extinction. The repercussions of biodiversity loss are potentially as severe as
those anticipated from climate change, or even a nuclear conflict. For example, according to a 2015 study published
in Science Advances, the best available evidence reveals “an exceptionally rapid loss of biodiversity over the
last few centuries, indicating that a sixth mass extinction is already under way.” This conclusion holds, even on the most
optimistic assumptions about the background rate of species losses and the current rate of vertebrate extinctions. The group classified as
“vertebrates” includes mammals, birds, reptiles, fish, and all other creatures with a backbone. The article argues that, using its conservative
figures, the average loss of vertebrate species was 100 times higher in the past century relative to the background rate of extinction. (Other
scientists have suggested that the current extinction rate could be as much as 10,000 times higher than normal.) As the authors write, “The
evidence is incontrovertible that recent extinction rates are unprecedented in human history and highly unusual in Earth’s history.” Perhaps the
term “Big Six” should enter the popular lexicon—to add the current extinction to the previous “Big Five,” the last of which wiped out the
dinosaurs 66 million years ago. But the concept of biodiversity encompasses more than just the total number of species on the planet. It also
refers to the size of different populations of species. With respect to this phenomenon, multiple studies have confirmed that wild populations
around the world are dwindling and disappearing at an alarming rate. For example, the 2010 Global Biodiversity Outlook report found that the
population of wild vertebrates living in the tropics dropped by 59 percent between 1970 and 2006. The report also found that the population of
farmland birds in Europe has dropped by 50 percent since 1980; bird populations in the grasslands of North America declined by almost 40
percent between 1968 and 2003; and the population of birds in North American arid lands has fallen by almost 30 percent since the 1960s.
Similarly, 42 percent of all amphibian species (a type of vertebrate that is sometimes called an “ecological indicator”) are undergoing
population declines, and 23 percent of all plant species “are estimated to be threatened with extinction.” Other studies have found that some
20 percent of all reptile species, 48 percent of the world’s primates, and 50 percent of freshwater turtles are threatened. Underwater, about 10
percent of all coral reefs are now dead, and another 60 percent are in danger of dying. Consistent with these data, the 2014 Living Planet
Report shows that the global population of wild vertebrates dropped by 52 percent in only four decades—from 1970 to 2010. While biologists
often avoid projecting historical trends into the future because of the complexity of ecological systems, it’s tempting to extrapolate this figure
to, say, the year 2050, which is four decades from 2010. As it happens, a 2006 study published in Science does precisely this: It projects past
trends of marine biodiversity loss into the 21st century, concluding that, unless significant changes are made to patterns of human activity,
there will be virtually no more wild-caught seafood by 2048. Catastrophic consequences for civilization. The
consequences of this
rapid pruning of the evolutionary tree of life extend beyond the obvious. There could be surprising effects of biodiversity
loss that scientists are unable to fully anticipate in advance. For example, prior research has shown that localized
ecosystems can undergo abrupt and irreversible shifts when they reach a tipping point. According to a 2012
paper published in Nature, there are reasons for thinking that we may be approaching a tipping point of this sort in the global ecosystem,
beyond which the consequences could be catastrophic for civilization. As the authors write, a
planetary-scale transition could
precipitate “substantial losses of ecosystem services required to sustain the human population.” An
ecosystem service is any ecological process that benefits humanity, such as food production and crop
pollination. If the global ecosystem were to cross a tipping point and substantial ecosystem services
were lost, the results could be “widespread social unrest, economic instability, and loss of human life.”
According to Missouri Botanical Garden ecologist Adam Smith, one of the paper’s co-authors, this could occur in a matter of decades—far more
quickly than most of the expected consequences of climate change, yet equally destructive. Biodiversity
loss is a “threat
multiplier” that, by pushing societies to the brink of collapse, will exacerbate existing conflicts and
introduce entirely new struggles between state and non-state actors. Indeed, it could even fuel the rise of terrorism.
(After all, climate change has been linked to the emergence of ISIS in Syria, and multiple high-ranking US officials, such as former US Defense
Secretary Chuck Hagel and CIA director John Brennan, have affirmed that climate change and terrorism are connected.) The reality is that we
are entering the sixth mass extinction in the 3.8-billion-year history of life on Earth, and the impact of this event could be felt by civilization “in
as little as three human lifetimes,” as the aforementioned 2012 Nature paper notes. Furthermore, the widespread decline of biological
populations could plausibly initiate a dramatic transformation of the global ecosystem on an even faster timescale: perhaps a single human
lifetime. The
unavoidable conclusion is that biodiversity loss constitutes an existential threat in its own
right. As such, it ought to be considered alongside climate change and nuclear weapons as one of the most
significant contemporary risks to human prosperity and survival.
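The extrapolation the Torres card gestures at (the 2014 Living Planet Report's 52 percent decline over 1970-2010, naively carried forward one more 40-year period to 2050) is simple compounding. This is a sketch of the arithmetic only, not an ecological forecast:

```python
# Naive compounding of the reported 52% vertebrate-population decline
# (1970-2010) forward one more 40-year period, as the card suggests.
# Illustrative arithmetic only, not a prediction.

decline_per_period = 0.52                    # reported drop, 1970-2010
remaining_2010 = 1.0 - decline_per_period    # 48% of 1970 abundance left
remaining_2050 = remaining_2010 * (1.0 - decline_per_period)

print(round(remaining_2050, 4))  # about 0.23, i.e. ~23% of 1970 levels
```

Note the compounding assumption: a repeated 52 percent decline leaves 0.48 x 0.48 of the original abundance, not 1.0 - 2 x 0.52, which is why the trend cannot simply be added.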
Scenario Four is resources –
Emissions collapse sustainable food chains and oceanic production.
Mills ’15 (Robyn; 10/13/15; Media & Communications Officer at the University of Adelaide, citing Australian Research Council Associate
Professor Ivan Nagelkerken; Adelaide.edu, “Global marine analysis suggests food chain collapse,”
https://www.adelaide.edu.au/news/news81042.html)
Global marine analysis suggests food chain collapse Tuesday, 13 October 2015 A world-first global
analysis of marine responses to climbing human CO2 emissions has painted a grim picture of future
fisheries and ocean ecosystems. Published today in the journal Proceedings of the National Academy of Sciences (PNAS), marine
ecologists from the University of Adelaide say the expected ocean acidification and warming is likely to
produce a reduction in diversity and numbers of various key species that underpin marine ecosystems
around the world. “This ‘simplification’ of our oceans will have profound consequences for our current way of life,
particularly for coastal populations and those that rely on oceans for food and trade,” says Associate Professor Ivan
Nagelkerken, Australian Research Council (ARC) Future Fellow with the University’s Environment Institute. Associate Professor
Nagelkerken and fellow University of Adelaide marine ecologist Professor Sean Connell have conducted a ‘meta-analysis’ of the data from 632
published experiments covering tropical to arctic waters, and a range of ecosystems from coral reefs, through kelp forests to open oceans. “We
know relatively little about how climate change will affect the marine environment,” says Professor
Connell. “Until now, there has been almost total reliance on qualitative reviews and perspectives of potential
global change. Where quantitative assessments exist, they typically focus on single stressors, single
ecosystems or single species. “This analysis combines the results of all these experiments to study the
combined effects of multiple stressors on whole communities, including species interactions and
different measures of responses to climate change.” The researchers found that there would be “limited
scope” for acclimation to warmer waters and acidification. Very few species will escape the negative
effects of increasing CO2, with an expected large reduction in species diversity and abundance across the globe. One exception
will be microorganisms, which are expected to increase in number and diversity. From a total food web
point of view, primary production from the smallest plankton is expected to increase in the warmer waters but this often doesn’t translate
into secondary production (the zooplankton and smaller fish) which shows decreased productivity under ocean acidification. “With higher
metabolic rates in the warmer water, and therefore a greater demand for food, there is a mismatch with
less food available for carnivores ─ the bigger fish that fisheries industries are based around,” says Associate
Professor Nagelkerken. “There will be a species collapse from the top of the food chain down.” The analysis also
showed that with warmer waters or increased acidification or both, there would be deleterious impacts on habitat-forming species for example
coral, oysters and mussels. Any
slight change in the health of habitats would have a broad impact on a wide
range of species these reefs harbour. Another finding was that acidification would lead to a decline in dimethylsulfide gas (DMS)
production by ocean plankton, which aids cloud formation and thereby helps control the Earth’s heat exchange.
Loss of resource chains and ocean life cause global famine and extinction.
Young ’14 (Grace; 1/17/14; thesis submitted for a Bachelor of Science in Mechanical & Ocean Engineering at MIT; thesis, “Missiles &
Misconceptions: Why We Know More About the Dark Side of the Moon than the Depths of the Ocean”
http://mseas.mit.edu/publications/Theses/Grace_C_Young_BS_Thesis_MIT2014.pdf)
The misconceptions that drove spending on space were mirrored in our lack of knowledge about the ocean's importance. Our ambivalence
about the ocean is reflected in the vast disparity in research funding. Today, however, we are beginning to understand how dependent we are
on the ocean, and how the
impact of human-induced climate change, pollution, and overfishing on the ocean are far
more threatening to our survival than whether we “control the heavens." The ocean, which covers 71% of Earth's surface, produces
at least half the oxygen we breathe and filters deadly carbon dioxide.86 It is a crucial regulator of global climate and weather, but one we do not
understand. Since 1950 there has been a dramatic increase in extreme weather,87 requiring billions of dollars spent globally towards repair and
response efforts. Moreover, eight of the world's top ten largest cities are located on the seacoast. The ocean they adjoin is profoundly changing
in complex ways we do not understand. Marine
species are disappearing before we know of their existence. These
species are not only matters of curiosity, but can hold secrets to understanding life and medicine, and are integral to the health of
marine ecosystems. The oceans have become 26% more acidic since the start of the Industrial Revolution and
continue to acidify at an unprecedented rate.88 Acidification affects marine ecosystems; it especially harms shelled creatures
such as oysters and mussels that filter water,89 but can benefit sea grass and other invasive plants that will overwhelm ecosystems and
accelerate the extinction of marine animal species.90 At
the same time acidification from climate change is threatening
entire ecosystems, industrial and agricultural pollution, plus increasing volumes of human trash are
threatening to overwhelm the ocean's ability to regenerate. The National Academy of Science estimated that in 1975
more than 750 tons of garbage was dumped into the ocean every hour.91 Fortunately, in 1987 the US ratified Marpol Annex V, an international
treaty that made it illegal to throw non-biodegradable trash overboard from ships in the waters of signatory countries. While this is progress,
the MARPOL law is difficult to enforce. Governments do not know where or when dumping happens because there is no infrastructure for
monitoring or policing the vast oceans. Sadly, Nature magazine reported that during the 1990s debris in the waters near Britain doubled, and
debris in the Southern Ocean encircling Antarctica increased one hundred fold.92 Today we do not know how much trash is in the ocean.
Author Donovan Hohn noted in 2008, “Not even oceanographers can tell us exactly how much floating scruff is out there; oceanographic
research is simply too expensive and the ocean too varied and vast."93 But the number is not good. Stranded
whales and other
marine life with trash filling their bellies serve as a powerful harbinger for what is to come (Figure 11), and
more oceanographic research is needed. Along with pollution and climate change, overfishing is among the greatest threats
facing our ocean and human wellbeing. A study in Science projected that all commercial fish and seafood species will
collapse by 2048.94 Already, populations of large fish, including tuna, swordfish, marlin, cod, halibut, skates, flounder, and others, have
been reduced by 90% since 1950, according to a 2003 study in Nature.95 A world without seafood will harm developing nations the most. More
than 3.5 billion people globally depend on the ocean for their primary source of food, and most of those people
are in fast-growing developing regions of Asia and Africa.96 In 20 years, the number could double to 7 billion.97 Fortunately, according to a
pivotal paper published in Science in 2006, overfishing
is proven to be a reversible problem, but only if humans act
effectively within the next decade.98 Otherwise, global malnutrition and famine are on the horizon, as so far
aquaculture has not been able to keep up with the dramatic losses of wild catch. “Unless we fundamentally change the way we manage all the
oceans species together, as working ecosystems, then this century is the last century of wild seafood," marine ecologist Steve Palumbi
warned.99 NOAA has made substantial progress in regulating US fisheries, although that fact must be taken with a grain of salt because the US
imports 91% of its seafood.100 Moreover, the most catastrophic overfishing is occurring in international waters where traditional industrial
fishing nations continue to resist stronger efforts at global regulation. Realizing the ocean's importance to humankind, President Kennedy
became a staunch advocate for ocean research shortly before he died. Exactly a month before his assassination, he asked Congress to double
the nation's ocean research budget and greatly expand ocean research for the sake of worldwide security and health. He called for a global
ocean research initiative: The ocean, the atmosphere, outer space, belong not to one nation or one ideology, but to all mankind, and as science
carries out its tasks in the years ahead, it must enlist all its own disciplines, all nations prepared for the scientific quest, and all men capable of
sympathizing with the scientific impulse.101 He had no chance to see his plans through, however, and his successor, Lyndon Johnson, was
focused on space as the “high ground" and “control of the heavens" for perceived military and geo-political reasons. 4.3 Extent of
Oceanographic Knowledge During the space race, leaders believed that the ocean was an already conquered territory. In 1962, President
Kennedy called space a “new ocean,"102 although 95% of the ocean remains unseen by human eyes.103 As mentioned previously, Johnson
suggested space technology would be to the 20th century what ships were to the British Empire for the past millennia,104. Kennedy echoed
Johnson's words: We set sail on this new sea because there is new knowledge to be gained, and new rights to be won, and they must be won
and used for the progress of all people. For space science, like nuclear science and technology, has no conscience of its own. Whether it will
become a force for good or ill depends on man, and only if the United States occupies a position of preeminence can we help decide whether
this new ocean will be a sea of peace or a new terrifying theater of war.105 The truth remains, however, that we have not conquered the seas.
As discussed in Sections 2.2 and 3.2, ocean exploration
has largely been a surface affair. 90% of the ocean's
volume, the dark, cold environment we call the deep sea, is largely unknown.106 In 1960, when Jacques Piccard and Don Walsh became
the first men to reach the deepest part of the ocean, they saw only two fish,107 so it was mistakenly envisioned that the deep ocean was
essentially lifeless. In reality, however, it is teeming with life. Tim Shank, a deep-sea biologist at Woods Hole Oceanographic Institution,
explained why the explorers did not see much life near the Mariana Trench: The waters above the Challenger Deep are extremely unproductive
in part because algae at the surface prevents food from being cycled in deeper waters. “If it had been a trench with a productive water column,
like the Kermadec Trench near New Zealand, I think he would have seen much more biology," he told Nature.108 Fantastic photos from
Cousteau's shallow water missions helped to fill the gap, showing brilliant life in sea, but those only scratched the surface. An estimated two
thirds of marine species are yet to be discovered.109 In 2014, NASA's budget is $17 billion. Its space exploration budget alone is $3.8 billion,110
hundreds of times more than NOAA's office of ocean exploration and research budget of $23.7 million.111k The discrepancy in funding for
ocean exploration, particularly in comparison to that for space, has lasting effects that inhibit efforts for continued exploration. After his
mission to the Mariana Trench in 2012, James Cameron candidly told the press that the state of today's ocean exploration is “piss poor."112 He
continued, The public needs to understand that the US government is no longer in a leadership position when it comes to science and
exploration, as they were in the 1960s and 1970s. We have this image of ourselves in this country as number one, leading edge, that sort of
thing and it is just not the case.113 Cameron, who privately funded his journey to the Mariana Trench, noted that private individuals such as
Eric Schmidt, Google's former chief executive and founder of the Schmidt Ocean Institute, have made strides in trying to make up for what
governments are not doing, but progress is still slow due to lack of government infrastructure. Author Ben Hellwarth explains: [P]rivate groups--including the team of Jacques Cousteau, who was as great a pitchman and fundraiser as anyone--would find sea dwelling and exploration a
tough business to pursue, especially without a government-primed infrastructure and market like the one that evolved for space travel. The
situation was something like tech mogul Elon Musk trying to launch SpaceX without the benefit of a space station or the many trails NASA
blazed with its billions.114 To illustrate, Hellwarth elaborates with the recent history of the undersea habitat Aquarius: The kind of public
interest and unbridled enthusiasm that has long sustained the space program and NASA's multibillion-dollar budgets has never materialized for
like-minded quests into the ocean. Last year's near closure of the world's only sea base was the latest case in point. If you can't name this
unique, American-run undersea outpost, you are not alone, and that's at least part of the problem. It's called the Aquarius Reef Base, and for
the past two decades, this school-bus-sized structure has been operating a few miles south of the Florida Keys and a few fathoms below the
surface. From its beginning Aquarius has typically had to squeak by on less than $3 million a year, sometimes much less, a drop in the fiscal
bucket by space program standards. (NASA's estimated cost of a single space shuttle launch, for example, was $450 million.) Then last year the
National Oceanic and Atmospheric Administration, which owns Aquarius, decided to pull the plug on the base. An organized effort to save
Aquarius created an unusual surge in media and other attention, not major front-page headlines, to be sure, but there was at least a discernible
spike.115 Even after the Cold War ended in the early 1990s with the fall of the Berlin Wall, NASA's budget remained dramatically larger than budgets
for ocean research. The reason for the budget disparity has less to do with commercial or military reasons, and more to do with lingering geopolitical issues and inertia from the Cold War, including constituencies in Congress, an independent governmental agency, and established
defense contractors that benefit from government-funded space exploration. Contractors such as Boeing and Lockheed Martin, for example,
have immense capacity to lobby Congress for further funding. Ocean exploration, on the other hand, had almost no constituency outside of the
scientific community, which alone has little political clout. Because of the lingering effects of misconceptions, ocean exploration lags far behind
space exploration, to the point that our dearth of oceanographic knowledge may result in serious harm to humankind in the next generation.
5 Conclusion: Will There Be a Sputnik for the Ocean?
The sea, the great unifier, is man's only hope. Now, as never before, the old phrase has a
literal meaning: We are all in the same boat. – Jacques Cousteau
Since 5000 BC, humans have progressed from star-gazers to moon-walkers and
from shallow-water swimmers to deep-sea explorers. Technological innovation drove exploration in both space and sea to unprecedented
levels, particularly during the mid-1900s. With the start of the Cold War, however, ocean exploration proceeded at a snail's pace compared to
space research. This sudden shift in priority was due to misconceptions about the military and geopolitical importance of space and the ocean's
importance to human wellbeing. Looking back, there are many "what ifs" in the history of exploration. For example, what if Eisenhower had his
wish of making NASA part of the Department of Defense? Then we most likely would not have reached the moon or Mars because those NASA
missions were not primarily military-oriented. What if the Soviets launched the first deep sea vehicle rather than the first orbiting satellite?
Might there have been a Sputnik-like reaction towards the ocean rather than space? What if Kennedy hadn't been assassinated and had gotten his wish of
creating a global ocean research initiative in the 1960s? Looking ahead, progress in
ocean exploration and management
looks dire. This is especially tragic because marine environments and ecosystems are degrading, even
disappearing, at the fastest rate in 300 million years,116 as they face the triple threat of acidification,
warming, and deoxygenation. “The health of the ocean is spiraling downwards far more rapidly than we
had thought... The situation should be of the gravest concern to everyone since everyone will be affected by changes in the ability of the
ocean to support life on Earth," Professor Alex Rogers of Oxford University emphasizes.117 The US government probably will not fund the
necessary research anywhere near the scale it continues to fund space research. As such, scientists are increasingly looking for private and
industrial support. James Cameron, the Cousteau legacy, and Eric Schmidt among others are showing that privately-funded ocean exploration is
possible. The underfunded and oft-delayed “SeaOrbiter" project, which aims to be the ocean equivalent of a space station, shows how difficult
fund-raising for such projects can be.118 Yet SeaOrbiter would cost a tiny fraction of a single space shuttle flight. That the ocean was a place for
international collaboration probably hurt it during the decades of Cold War hysteria; but hopefully we can now use that to an advantage, to
bring nations together. The European Organization for Nuclear Research (CERN) showed how large-scale multinational research, funded by a
combination of governments and industry sectors, can be successful. The future of ocean exploration might depend on an oceanographic version
of CERN. Or, it could be in research studies tied to national interests, like the space program. As a recent national forum on the future of the
ocean stated, ocean exploration is an urgent necessity, and an issue of national security.119 Let us hope that not only the US government, but
also the entire global community recognizes the importance of aggressive ocean research and management before it is too late.
Plan:
The United States federal government should offer guaranteed investment in federal
and state infrastructure programs in exchange for both countries acceding to the
U.S.-China Bilateral Investment Treaty
2: Framing
Scenario analysis is pedagogically valuable – enhances creativity and self-reflexivity,
deconstructs cognitive biases and flawed ontological assumptions, and enables the
imagination and creation of alternative futures.
Barma et al. 16 – (May 2016, [Advance Publication Online on 11/6/15], Naazneen Barma, PhD in Political
Science from UC-Berkeley, Assistant Professor of National Security Affairs at the Naval Postgraduate
School, Brent Durbin, PhD in Political Science from UC-Berkeley, Professor of Government at Smith
College, Eric Lorber, JD from UPenn and PhD in Political Science from Duke, Gibson, Dunn & Crutcher,
Rachel Whitlark, PhD in Political Science from GWU, Post-Doctoral Research Fellow with the Project on
Managing the Atom and International Security Program within the Belfer Center for Science and
International Affairs at Harvard, “‘Imagine a World in Which’: Using Scenarios in Political Science,”
International Studies Perspectives 17 (2), pp. 1-19,
http://www.naazneenbarma.com/uploads/2/9/6/9/29695681/using_scenarios_in_political_science_isp
_2015.pdf)
**FYI if anyone is skeptical of Barma’s affiliation with the Naval Postgraduate School, it’s worth looking at her publication
history, which is deeply opposed to US hegemony and the existing liberal world order:
a) co-authored an article entitled “How Globalization Went Bad” that has this byline: “From terrorism to global warming,
the evils of globalization are more dangerous than ever before. What went wrong? The world became dependent on
a single superpower. Only by correcting this imbalance can the world become a safer place.”
(http://cisac.fsi.stanford.edu/publications/how_globalization_went_bad)
b) her most recent published scenario is entitled “World Without the West,” supports a Non-Western reinvention of the
liberal order, and concludes that “This argument made a lot of people uncomfortable, mostly because of an endemic
and gross overestimation of the reach, depth and attractiveness of the existing liberal order”
(http://nationalinterest.org/feature/welcome-the-world-without-the-west-11651)
Over the past decade, the “cult of irrelevance” in political science scholarship has been lamented by a growing
chorus (Putnam 2003; Nye 2009; Walt 2009). Prominent scholars of international affairs have diagnosed the roots of the gap between academia and policymaking,
made the case for why political science research is valuable for policymaking, and offered a number of ideas for enhancing the policy
relevance of scholarship in international relations and comparative politics (Walt 2005,2011; Mead 2010; Van Evera 2010; Jentleson and Ratner 2011; Gallucci 2012; Avey and Desch 2014). Building on these insights, several
initiatives have been formed in the attempt to “bridge the gap.”2 Many of the specific efforts put in place by these projects focus on providing scholars
with the skills, platforms, and networks to better communicate the findings and implications of their research to the
policymaking community, a necessary and worthwhile objective for a field in which theoretical debates, methodological training, and publishing norms tend more and more toward the abstract and
esoteric.
Yet enhancing communication between scholars and policymakers is only one component of bridging the gap between international affairs theory and practice.
Another crucial component of this bridge is the generation of substantive research programs that are actually policy
relevant—a challenge to which less concerted attention has been paid. The dual challenges of bridging the gap are especially acute for graduate students, a particular irony since many enter the discipline with the explicit
hope of informing policy. In a field that has an admirable devotion to pedagogical self-reflection, strikingly little
attention is paid to techniques for generating policy-relevant ideas for dissertation and other research topics. Although
numerous articles and conference workshops are devoted to the importance of experiential and problem-based learning, especially through techniques of simulation that emulate policymaking processes (Loggins 2009; Butcher
2012; Glasgow 2012; Rothman 2012; DiCicco 2014), little has been written about the use of such techniques for generating and developing innovative research ideas.
This article outlines an experiential and problem-based approach to developing a political science research program
using scenario analysis. It focuses especially on illuminating the research generation and pedagogical benefits of this technique by describing the use of scenarios in the annual New Era Foreign Policy
Conference (NEFPC), which brings together doctoral students of international and comparative affairs who share a demonstrated interest in policy-relevant scholarship.3 In the introductory section, the article outlines the practice
of scenario analysis and considers the utility of the technique in political science. We argue that scenario analysis should be viewed as a tool to stimulate problem-based learning for doctoral students and discuss the broader
scholarly benefits of using scenarios to help generate research ideas. The second section details the manner in which NEFPC deploys scenario analysis. The third section reflects upon some of the concrete scholarly benefits that
have been realized from the scenario format. The fourth section offers insights on the pedagogical potential associated with using scenarios in the classroom across levels of study. A brief conclusion reflects on the importance of
developing specific techniques to aid those who wish to generate political science scholarship of relevance to the policy world.
What Are Scenarios and Why Use Them in Political Science?
Scenario analysis is perceived most commonly as a technique for examining the robustness of strategy. It can immerse decision makers in future states
that go beyond conventional extrapolations of current trends, preparing them to take advantage of
unexpected opportunities and to protect themselves from adverse exogenous shocks. The global petroleum company Shell, a
pioneer of the technique, characterizes scenario analysis as the art of considering “what if” questions about possible future worlds. Scenario analysis is thus typically seen as
serving the purposes of corporate planning or as a policy tool to be used in combination with simulations of decision making. Yet scenario
analysis is not inherently limited to these uses. This section provides a brief overview of the practice of scenario analysis and the motivations underpinning its
uses. It then makes a case for the utility of the technique for political science scholarship and describes how the scenarios deployed at
NEFPC were created.
The Art of Scenario Analysis
We characterize scenario analysis as the art of juxtaposing current trends in unexpected combinations in
order to articulate surprising and yet plausible futures, often referred to as “alternative worlds.” Scenarios
are thus explicitly not forecasts or projections based on linear extrapolations of contemporary patterns,
and they are not hypothesis-based expert predictions. Nor should they be equated with simulations,
which are best characterized as functional representations of real institutions or decision-making processes (Asal 2005).
Instead, they are depictions of possible future states of the world, offered together with a narrative of the
driving causal forces and potential exogenous shocks that could lead to those futures. Good scenarios thus rely on explicit causal propositions that, independent of
one another, are plausible—yet, when combined, suggest surprising and sometimes controversial future worlds. For example, few predicted the dramatic fall in oil prices toward the end of 2014. Yet independent driving forces,
such as the shale gas revolution in the United States, China’s slowing economic growth, and declining conflict in major Middle Eastern oil producers such as Libya, were all recognized secular trends that—combined with OPEC’s
decision not to take concerted action as prices began to decline—came together in an unexpected way.
While scenario analysis played a role in war gaming and strategic planning during the Cold War, the real antecedents of the contemporary practice are found in corporate futures studies of the late 1960s and early 1970s (Raskin et
al. 2005). Scenario analysis was essentially initiated at Royal Dutch Shell in 1965, with the realization that the usual forecasting techniques and models were not capturing the rapidly changing environment in which the company
operated (Wack 1985; Schwartz 1991). In particular, it had become evident that straight-line extrapolations of past global trends were inadequate for anticipating the evolving business environment. Shell-style scenario planning
“helped break the habit, ingrained in most corporate planning, of assuming that the future will look much like the present” (Wilkinson and Kupers 2013, 4). Using scenario thinking, Shell anticipated the possibility of two Arab-induced oil shocks in the 1970s and hence was able to position itself for major disruptions in the global petroleum sector.
Building on its corporate roots, scenario analysis has become a standard policymaking tool. For example, the Project on Forward Engagement advocates linking systematic foresight, which it defines as the disciplined analysis of
alternative futures, to planning and feedback loops to better equip the United States to meet contemporary governance challenges (Fuerth 2011). Another prominent application of scenario thinking is found in the National
Intelligence Council’s series of Global Trends reports, issued every four years to aid policymakers in anticipating and planning for future challenges. These reports present a handful of “alternative worlds” approximately twenty
years into the future, carefully constructed on the basis of emerging global trends, risks, and opportunities, and intended to stimulate thinking about geopolitical change and its effects.4 As with corporate scenario analysis, the
technique can be used in foreign policymaking for long-range general planning purposes as well as for anticipating and coping with more narrow and immediate challenges. An example of the latter is the German Marshall Fund’s
EuroFutures project, which uses four scenarios to map the potential consequences of the Euro-area financial crisis (German Marshall Fund 2013).
Several features make scenario analysis particularly useful for policymaking.5 Long-term global trends
across a number of different realms—social, technological, environmental, economic, and political—combine in often-unexpected ways to
produce unforeseen challenges. Yet the ability of decision makers to imagine, let alone prepare for,
discontinuities in the policy realm is constrained by their existing mental models and maps. This limitation is
exacerbated by well-known cognitive bias tendencies such as groupthink and confirmation bias (Jervis 1976;
Janis 1982; Tetlock 2005). The power of scenarios lies in their ability to help individuals break out of conventional
modes of thinking and analysis by introducing unusual combinations of trends and deliberate
discontinuities in narratives about the future. Imagining alternative future worlds through a structured
analytical process enables policymakers to envision and thereby adapt to something altogether
different from the known present.
Designing Scenarios for Political Science Inquiry
The characteristics of scenario analysis that commend its use to policymakers also make it well suited to helping political scientists generate and develop policy-relevant research programs. Scenarios are
essentially textured, plausible, and relevant stories that help us imagine how the future political-economic world could
be different from the past in a manner that highlights policy challenges and opportunities. For example, terrorist organizations are a known threat that have captured the attention of the policy
community, yet our responses to them tend to be linear and reactive. Scenarios that explore how seemingly unrelated vectors of change—the rise of a new peer competitor in the East that diverts strategic attention, volatile
commodity prices that empower and disempower various state and nonstate actors in surprising ways, and the destabilizing effects of climate change or infectious disease pandemics—can be useful for illuminating the nature and
limits of the terrorist threat in ways that may be missed by a narrower focus on recognized states and groups. By illuminating the potential strategic significance of specific and yet poorly understood opportunities and threats,
scenario analysis helps to identify crucial gaps in our collective understanding of global political-economic trends and dynamics. The notion of “exogeneity”—so prevalent in social science scholarship—applies to models of reality,
not to reality itself. Very simply, scenario analysis can throw into sharp relief often-overlooked yet pressing
questions in international affairs that demand focused investigation.
Scenarios thus offer, in principle, an innovative tool for developing a political science research agenda. In practice,
achieving this objective requires careful tailoring of the approach. The specific scenario analysis technique we outline below was designed and refined
to provide a structured experiential process for generating problem-based research questions with contemporary international policy relevance.6 The first step in the process of creating the scenario set described here was to
identify important causal forces in contemporary global affairs. Consensus was not the goal; on the contrary, some of these causal statements represented competing theories about global change (e.g., a resurgence of the nation-state vs. border-evading globalizing forces). A major principle underpinning the transformation of these causal drivers into possible future worlds was to “simplify, then exaggerate” them, before fleshing out the emerging story
with more details.7 Thus, the contours of the future world were drawn first in the scenario, with details about the possible pathways to that point filled in second. It is entirely possible, indeed probable, that some of the causal
claims that turned into parts of scenarios were exaggerated so much as to be implausible, and that an unavoidable degree of bias or our own form of groupthink went into construction of the scenarios. One of the great strengths
of scenario analysis, however, is that the scenario discussions themselves, as described below, lay bare these especially implausible claims and systematic biases.8
An explicit methodological approach underlies the written scenarios themselves as well as the analytical process around them—that of case-centered, structured, focused comparison, intended especially to shed light on new
causal mechanisms (George and Bennett 2005). The use of scenarios is similar to counterfactual analysis in that it modifies certain
variables in a given situation in order to analyze the resulting effects (Fearon 1991). Whereas counterfactuals are
traditionally retrospective in nature and explore events that did not actually occur in the context of known history, our scenarios are deliberately forward-looking and are designed to explore potential futures that could unfold. As such, counterfactual analysis is especially well suited to identifying how individual events might
expand or shift the “funnel of choices” available to political actors and thus lead to different historical outcomes (Nye 2005, 68–69), while forward-looking scenario analysis can better illuminate surprising intersections and
sociopolitical dynamics without the perceptual constraints imposed by fine-grained historical knowledge. We see scenarios as a complementary resource for
exploring these dynamics in international affairs, rather than as a replacement for counterfactual analysis, historical case studies, or
other methodological tools.
In the scenario process developed for NEFPC, three distinct scenarios are employed, acting as cases for analytical comparison. Each scenario, as detailed below, includes a set of explicit “driving forces” which represent hypotheses
about causal mechanisms worth investigating in evolving international affairs. The scenario analysis process itself employs templates (discussed further below) to serve as a graphical representation of a structured, focused
investigation and thereby as the research tool for conducting case-centered comparative analysis (George and Bennett 2005). In essence, these templates articulate key observable implications within the alternative worlds of the
scenarios and serve as a framework for capturing the data that emerge (King, Keohane, and Verba 1994). Finally, this structured, focused comparison serves as the basis for the cross-case session emerging from the scenario
analysis that leads directly to the articulation of new research agendas.
The scenario process described here has thus been carefully designed to offer some guidance to policy-oriented graduate students who are otherwise left to the relatively unstructured norms by which political
science dissertation ideas are typically developed. The initial articulation of a dissertation project is generally an idiosyncratic and personal undertaking (Useem 1997; Rothman 2008),
whereby students might choose topics based on their coursework, their own previous policy exposure, or the topics studied by their advisors. Research agendas are thus typically developed by looking for “puzzles” in existing
research programs (Kuhn 1996). Doctoral students also, understandably, often choose topics that are particularly amenable to garnering research funding. Conventional grant programs typically base their funding priorities on
extrapolations from what has been important in the recent past—leading to, for example, the prevalence of Japan and Soviet studies in the mid-1980s or terrorism studies in the 2000s—in the absence of any alternative method for
identifying questions of likely future significance.
The scenario approach to generating research ideas is grounded in the belief that these traditional
approaches can be complemented by identifying questions likely to be of great empirical importance in
the real world, even if these do not appear as puzzles in existing research programs or as clear extrapolations
from past events. The scenarios analyzed at NEFPC envision alternative worlds that could develop in the medium (five
to seven year) term and are designed to tease out issues scholars and policymakers may encounter in
the relatively near future so that they can begin thinking critically about them now. This timeframe
offers a period distant enough from the present as to avoid falling into current events analysis, but not
so far into the future as to seem like science fiction. In imagining the worlds in which these scenarios might come to pass, participants learn
strategies for avoiding failures of creativity and for overturning the assumptions that prevent scholars
and analysts from anticipating and understanding the pivotal junctures that arise in international
affairs.
Valid, descriptive theories of the world are an essential prerequisite to emancipatory
critique – epistemic decolonization is impossible without reclaiming the concept of
objectivity.
Jones 04 – (August 2004, Branwen Gruffydd, PhD in Development Studies from the University of Sussex,
Senior Lecturer in International Political Economy at Goldsmiths University of London, “From
Eurocentrism to Epistemological Internationalism: power, knowledge and objectivity in International
Relations,” Paper presented at Theorising Ontology, Annual Conference of the International Association
for Critical Realism, University of Cambridge, http://www.csog.group.cam.ac.uk/iacr/papers/Jones.pdf)
The ‘common-sense’ view pervading recent discussions of epistemology, ontology and methodology in IR asserts that objectivity implies value-free neutrality. However, objective social inquiry has an inherent tendency to be critical, in various senses. To the
extent that objective knowledge provides a better and more adequate account of reality than other
ideas, such knowledge is inherently critical (implicitly or explicitly) of those ideas.30 In other words, critical social
inquiry does not (or not only) manifest its ‘criticalness’ through self-claimed labels of being critical or
siding with the oppressed, but through the substantive critique of prevailing ideas. Objective social
knowledge constitutes a specific form of criticism: explanatory critique. The critique of dominant ideas or
ideologies is elaborated through providing a more adequate explanation of aspects of the world, and in so
doing exposing what is wrong with the dominant ideology. This may also entail revealing the social conditions which give rise to ideologies, thus exposing the
necessary and causal relation between particular social relations and particular ideological conceptions.
In societies which are constituted by unequal structures of social relations giving rise to unequal power and conflicting interests, the reproduction of those structured relations
is in the interests of the powerful, whereas transformation of existing structured relations is in the interests of the weak.
Because ideas inform social action they are causally efficacious either in securing the reproduction of
existing social relations (usually as an unintended consequence of social practice), or in informing social action aimed at transforming social relations. This is why
ideas cannot be ‘neutral’. Ideas which provide a misrepresentation of the nature of society, the causes of unequal social conditions, and the conflicting interests of the weak and powerful, will tend to
help secure the reproduction of prevailing social relations. Ideas which provide a more adequate account of the way society is structured and how structured social relations produce concrete conditions of inequality and
exploitation can potentially inform efforts to change those social relations. In this sense, ideas which are false are ideological and, in serving to promote
the reproduction of the status quo and avoid attempts at radical change, are in the interests of the powerful. An account which is objective
will contradict ideological ideas, implicitly or explicitly criticising them for their false or flawed accounts of reality. The criticism here arises not, or not only, from pointing out the coincidence
between ideologies and the interests of the powerful, nor from a prior normative stance of solidarity with the oppressed, but from
exposing the flaws in dominant ideologies through a more adequate account of the nature and causes
of social conditions.31
A normative commitment to the oppressed must entail a commitment to truth and objectivity,
because true ideas are in the interest of the oppressed, false ideas are in the interest of the
oppressors. In other words, the best way to declare solidarity with the oppressed is to declare one’s
commitment to objective inquiry.32 As Nzongola-Ntalaja (1986: 10) has put it:
It is a question of whether one analyses society from the standpoint of the dominant groups, who have
a vested interest in mystifying the way society works, or from the standpoint of ordinary people, who
have nothing to lose from truthful analyses of their predicament.
The philosophical realist theory of science, objectivity and explanatory critique thus provides an
alternative response to the relationship between knowledge and power. Instead of choosing
perspectives on the basis of our ethical commitment to the cause of the oppressed and to
emancipatory social change, we should choose between contending ideas on the basis of which
provides a better account of objective social reality. This will inherently provide a critique of the
ideologies which, by virtue of their flawed account of the social world, serve the interests of the powerful.
Exemplars of explanatory critique in International Relations are provided in the work of scholars such as Siba Grovogui, James Gathii, Anthony Anghie, Bhupinder Chimni, Jacques Depelchin, Hilbourne Watson, Robert Vitalis,
Sankaran Krishna, Michel-Rolph Trouillot 33 . Their work provides critiques of central categories, theories and discourses in the theory and practice of IR and narratives of world history, including assumptions about sovereignty,
international society, international law, global governance, the nature of the state. They expose the ideological and racialised nature of central aspects of IR through a critical examination of both the long historical trajectory of
imperial ideologies regarding colonized peoples, and the actual practices of colonialism and decolonisation in the constitution of international orders and local social conditions. Their work identifies the flaws in current ideas by
revealing how they systematically misrepresent or ignore the actual history of social change in Africa, the Caribbean and other regions of the Third World, both past and present – during both colonial and neo-colonial periods of
the imperial world order. Their work reveals how racism, violence, exploitation and dispossession, colonialism and neo-colonialism have been central to the making of contemporary international order and contemporary doctrines
of international law, sovereignty and rights, and how such themes are glaring in their absence from histories and theories of international relations and international history.
Objective social knowledge which accurately depicts and explains social reality has these qualities by
virtue of its relation to its object, not its subject. As Collier argues, “The science/ideology distinction is an epistemological one, not a social one.” (Collier 1979: 60). So,
for example, in the work of Grovogui, Gathii and Depelchin, the general perspective and knowledge of conditions in and
the history of Africa might be due largely to the African social origins of the authors. However the judgement
that their accounts are superior to those of mainstream IR rests not on the fact that the authors are African, but on the
greater adequacy of their accounts with respect to the actual historical and contemporary production of
conditions and change in Africa and elsewhere in the Third World. The criteria for choosing their accounts over others derive from the relation between the ideas and their
objects (what they are about), not from the relation between the ideas and their subjects (who produced them). It is vital to retain explicitly some commitment to objectivity in social inquiry, to the notion that the proper criterion
for judging ideas about the world lies in what they say about the world, not whose ideas they are.
A fundamental problem which underlies the origin and reproduction of IR’s eurocentricity is the overwhelming dominance
of ideas produced in and by the west, and the wilful and determined silencing of the voices and histories of the
colonised. But the result of this fundamental problem is flawed knowledge about the world.
Eurocentricity is therefore a dual problem concerning both the authors and the content of knowledge,
and cannot be resolved through normative commitments alone. It is not only the voices of the colonised, but the histories of colonialism, which have
been glaring in their absence from the discipline of International Relations.
Overcoming eurocentricity therefore requires not only concerted effort from the centre to create space
and listen to hitherto marginalised voices, but also commitment to correcting the flaws in prevailing
knowledge – and it is not only ‘the Other’ who can and should elaborate this critique. A vitally important implication of
objectivity is that it is the responsibility of European and American, just as much as non-American or non-European scholars, to decolonise IR. The importance of objectivity in social inquiry defended here can perhaps be seen as a
form of epistemological internationalism. It is not necessary to be African to attempt to tell a more
accurate account of the history of Europe’s role in the making of contemporary Africa and the rest of the world, for
example, or to write counter-histories of ‘the expansion of international society’ which detail the systematic barbarity of so-called Western
civilisation. It is not necessary to have been colonised to recognise and document the violence, racism, genocide and dispossession which have characterised European expansion over five hundred years.
Root cause explanations of International Relations don’t exist – methodological
pluralism is necessary to reclaim IR as emancipatory praxis and avoid endless political
violence.
Bleiker 14 – (6/17, Roland, Professor of International Relations at the University of Queensland,
“International Theory Between Reification and Self-Reflective Critique,” International Studies Review,
Volume 16, Issue 2, pages 325–327)
This book is part of an increasing trend of scholarly works that have embraced poststructural critique but want to ground it in more positive political foundations, while retaining a reluctance to return to the positivist tendencies
that implicitly underpin much of constructivist research. The path that Daniel Levine has carved out is innovative, sophisticated, and convincing. A superb scholarly achievement.
For Levine, the key challenge in international relations (IR) scholarship is what he calls “unchecked reification”: the widespread and dangerous process of forgetting “the distinction between theoretical concepts and the real-world things they mean to describe or to which they refer” (p. 15). The dangers are real, Levine stresses, because IR deals with some of the most difficult issues, from genocides to war. Upholding one subjective position without critical scrutiny can thus have far-reaching consequences. Following Theodor Adorno—who is the key theoretical influence on this book—Levine takes a post-positive position and assumes that the world cannot be known outside of our human perceptions and the values that are inevitably intertwined with them. His ultimate goal is to overcome reification, or, to be more precise, to recognize it as an inevitable aspect of thought so that its dangerous consequences can be mitigated.
Levine proceeds in three stages: First he reviews several decades of IR theories to resurrect critical moments when scholars displayed an acute awareness of the dangers of reification. He refreshingly breaks down
distinctions between conventional and progressive scholarship, for he detects self-reflective and critical moments in scholars that are usually associated with
straightforward positivist positions (such as E.H. Carr, Hans Morgenthau, or Graham Allison). But Levine also shows how these moments of self-reflexivity never lasted long and were driven
out by the compulsion to offer systematic and scientific knowledge.
The second stage of Levine’s inquiry outlines why IR scholars regularly closed down critique. Here, he points to a range of factors and phenomena, from peer review processes to the speed at which academics are meant to publish. And here too, he eschews conventional wisdom, showing that work conducted in the wake of the third debate, while explicitly post-positivist and critiquing the reifying tendencies of existing IR scholarship, often lacked critical self-awareness. As a result, Levine believes that many of the respective authors failed to appreciate sufficiently that “reification is a consequence of all thinking—including itself” (p. 68).
The third objective of Levine’s book is also the most interesting one. Here, he outlines the path toward what he calls “sustainable critique”: a form of self-reflection that can counter the dangers of reification. Critique, for him, is not just something that is directed outwards, against particular theories or theorists. It is also inward-oriented, ongoing, and sensitive to the “limitations of thought itself” (p. 12).
The challenges that such a sustainable critique faces are formidable. Two stand out: First, if the natural tendency to forget the origins and values of our concepts are as strong as Levine and other Adorno-inspired theorists believe
they are, then how can we actually recognize our own reifying tendencies? Are we not all inevitably and subconsciously caught in a web of meanings from which we cannot escape? Second, if one constantly questions one's own
perspective, does one not fall into a relativism that loses the ability to establish the kind of stable foundations that are necessary for political action? Adorno has, of course, been critiqued as relentlessly negative, even by his
second-generation Frankfurt School successors (from Jürgen Habermas to his IR interpreters, such as Andrew Linklater and Ken Booth).
The responses that Levine has to these two sets of legitimate criticisms are, in my view, both convincing and useful at a practical level. He starts off by depicting reification not as a flaw that is meant to be expunged, but as an a priori condition for scholarship. The challenge then is not to let it go unchecked.
Methodological pluralism lies at the heart of Levine's sustainable critique. He borrows from what Adorno calls a “constellation”: an
attempt to juxtapose, rather than integrate, different perspectives. It is in this spirit that Levine advocates multiple methods to understand the same event or
phenomena. He writes of the need to validate “multiple and mutually incompatible ways of seeing” (p. 63, see
also pp. 101–102). In this model, a scholar oscillates back and forth between different methods and paradigms, trying to
understand the event in question from multiple perspectives. No single method can ever adequately
represent the event or should gain the upper hand. But each should, in a way, recognize and capture details
or perspectives that the others cannot (p. 102). In practical terms, this means combining a range of methods
even when—or, rather, precisely when—they are deemed incompatible. They can range from
poststructural deconstruction to the tools pioneered and championed by positivist social sciences.
The benefit of such a methodological polyphony is not just the opportunity to bring out nuances and new
perspectives. Once the false hope of a smooth synthesis has been abandoned, the very incompatibility of
the respective perspectives can then be used to identify the reifying tendencies in each of them. For Levine, this is
how reification may be “checked at the source” and this is how a “critically reflexive moment might thus be
rendered sustainable” (p. 103). It is in this sense that Levine's approach is not really post-foundational but, rather, an attempt
to “balance foundationalisms against one another” (p. 14). There are strong parallels here with arguments advanced by assemblage thinking and complexity theory—
links that could have been explored in more detail.
Including the state in analysis is necessary for effective scholarship – the historical
dominance of “state-centrism” is an argument for, not against, its relevance.
Booth 14 – (7/25, Ken, former E. H. Carr Professor of the Department of International Politics at
Aberystwyth University, “International Relations: All That Matters,” google books)
Scholars love to debate the definition of their discipline. This is hardly surprising, as there is always a great deal riding on where one draws the line between what is in or out. In this book,
‘international relations’ is defined simply as the international level of world politics. By ‘international level’ I mean the interactions largely (but not exclusively) of sovereign states; by ‘world politics’ I mean ‘who gets what, when and how across the world’, to stretch Harold Lasswell’s classical definition of ‘politics’. The reason for accentuating the international level of world politics is twofold. First, as already mentioned, the international is a level with enormous ‘causal weight’. Second, to engage with ‘who gets what, when and how across the world’ without a coherent focus such as ‘the
international level’ is to invite bewilderment in the face of information overload. This problem is evident in many of the doorstep-sized textbooks
about ‘world’ or ‘global’ politics: what in the world is not a matter of ‘world politics’? The formulation proposed offers a distinct focus (‘the
international’), while being empirically open (‘the world’). I owe this way of thinking largely to C.A.W. Manning, an early doyen of IR, who described
academic international relations as having ‘a focus but not a periphery’. By focusing on the international, critics will say that
I have succumbed to a ‘state-centric’ view of the world. This is the idea that states are the fundamental reality of world politics. Such a view is sometimes
also described as being ‘statist’, meaning endorsing the idea that the state is and should be the highest level of both political decision-making and loyalty. My position is more
complicated: I want to recognize the empirical significance of states and their relations without being
statist politically or ethically. This is like an atheist arguing about ‘religion’. An atheist cannot for long discuss religion without talking about God, but this does not
make the atheist ‘God-centric’; it only means that the atheist is aware of the significance of God when talking about religion. The book will argue that the international level
of world politics is state-dominated in an empirical sense (some states are the most powerful ‘actors’ in the world) without
succumbing to state-centrism in a normative sense (believing that the contemporary states-system represents the best of all possible worlds). Later
chapters will underline that states are not the only actors at the international level: some multinational corporations have more clout than some states. Nonetheless, it would be
foolish to play down the continuing significance of especially the most powerful states in determining ‘who
gets what’ across the world, or the continuing ‘causal weight’ of state interactions in shaping the ‘when and how’ of things happening. Recognizing these
empirical realities is perfectly consistent with accepting that one of the aims of studying IR is to challenge what
is done, and why, and consider whether different worlds are possible and desirable. Matters of continuity and change are always present in international relations. According to the
‘realist’ tradition (explained later), the international level or system has had, again in Waltz’s term, a distinct ‘texture’ (a persisting set of
characteristics) over the centuries. This continuity allows us to have a time-transcending understanding of the situations, dilemmas and crises faced
by leaders and peoples in other places in other eras. Critics of this view - those dazzled by what’s new - tend to argue that talk of ‘texture’ exaggerates
continuity. This is mistaken. There can be no doubt that we live in a new era when it comes to technology and its potential, for example, but have relations
between political units fundamentally changed? We cannot, and should not, assume that everything will always be the same, but it would be
foolish to underestimate the stubborn continuities of state interactions.
Middle-range theorizing is good – proceeding from the particular to the universal
avoids the dangers of reification inherent in top-down theory formation.
Bennett 13 – (2013, Andrew, PhD in Public Policy from the Kennedy School of Government at Harvard
University, Professor of Government at Georgetown University, “The mother of all isms: Causal
mechanisms and structured pluralism in International Relations theory,” European Journal of
International Relations 19(3) 459 –481)
Finally, scholars can use the taxonomy to develop middle-range or typological theories about how combinations of mechanisms interact in shaping outcomes for specified cases or populations. A typological theory is a theory that not only defines individual independent variables and the hypothesized causal mechanisms that shape their effects, but provides ‘contingent generalizations on how and under what conditions they [these variables] behave in specified conjunctions or configurations to produce effects on specified dependent variables’ (George and Bennett, 2005: 235). Typological theories can model complex causal relations, including non-linear relations, high-order interaction effects, and processes involving many variables. Examples from comparative politics as well as IR include theories on alliance burden-sharing (Bennett et al., 1994), national unification (Ziblatt, 2006), welfare capitalism (Esping-Andersen, 1990), national political economies (Hall and Soskice, 2001), and rebel–home state–host state relations (Bennett, 2012; for analysis of some of these examples and a long list of typologies in IR and comparative politics, see Collier et al., 2012). The taxonomy thus encompasses the building blocks of theorized mechanisms that can be brought together in different conjunctions to develop typological theories on how combinations of variables behave. Typological theories allow for cumulative theorizing as scholars can add variables or re-conceptualize them to higher or lower levels of abstraction (Elman, 2005), and such theories can be fruitfully and cumulatively modified as they encounter anomalous cases or expand to encompass new types of cases (for discussion of the example of the evolution of typological theorizing on alliance burden-sharing, see Bennett, 2011). Adding variables of course adds to the complexity of the theory, but researchers can pare back this complexity by controlling for some of the variables and exploring only subsets of the full typological space in any one study. An additional advantage of shifting from the isms and
rooting the IR field more
clearly in theories about causal mechanisms is that this can re-energize interchanges among the IR subfield, the other subfields of political
science, and the other social sci - ences. These dialogues should be a two-way street, with borrowing and learning in both directions. The IR field has already shared with the American politics subfield many theories about
mechanisms of power and institutions, for example, but the taxonomy of mechanisms serves as a reminder that the study of American politics could benefit by paying closer attention to theories on legitimacy, persuasion, norms,
socialization, and identity that have been developed more fully in IR. Cross-field discourse with comparative politics can also benefit by moving away from the tribal language of the isms, which is not widely used in comparative politics. Where the two subfields have intersected in ways that focus on causal mechanisms rather than paradigmatic isms, cross-pollination and collaboration have flourished. This is particularly true in the study of civil and ethnic conflicts and their interaction with foreign states and transnational actors. In this research program, explanations have focused on mechanisms involving greed, grievance, transactions costs, mobilization, framing, informational
asymmetries and credible commitments problems, ethnic security dilemmas, principal–agent relations, and many other factors, and comparativists and IR scholars have collaborated and drawn readily on each other in their
research (Checkel, 2012; Collier and Hoeffler, 2004; Fearon and Laitin, 2003; Kalyvas, 2003; King, 2004; Lichbach, 1998; Salehyan, 2009). Similarly, theoretical concepts on causal mechanisms translate far more readily between IR
and economics, psychology, sociology, and history than does the ingrown and esoteric language of the isms.
Conclusions
The field of political science has moved increasingly over the last two decades toward mechanism-based explanations of complex phenomena. This shift is related to the development of variants of scientific realism in the philosophy of science, but it has been hampered by limited understanding among political scientists of how mechanism-based explanations differ from explanations built upon earlier philosophies of science. As Peter Hall has persuasively argued (2003), our ontological theories of how politics works, which increasingly embrace complexity, have outrun our epistemological notions of how to study politics, which still cling to the vestiges of forms of positivism that lost favor among philosophers decades ago. A focus on explanation via reference to causal mechanisms offers one way of bringing our ontological assumptions, epistemological approaches, and research methods back into alignment. Yet political scientists have been hesitant to commit fully to this move because they have lacked a clear sense of the philosophical costs and benefits of mechanism-based explanations. The present article has argued that although mechanism-oriented
explanations are not without their own drawbacks, they are an improvement over Kuhn’s concept of ‘paradigms’ and Lakatos’s notion of ‘research programs.’ Whereas Kuhnian paradigms and Lakatosian research programs both
foundered, in different ways, on the difficulties of justifying large sets of partially testable interrelated ideas, the concept of theories about discrete causal mechanisms allows for middle-range typological theories that are more
localized, if also more complex. IR scholars also need assurance that explanation via mechanisms does not entirely lack the key attraction of paradigms or research programs: a structured discourse that provides a framework
around which we can organize cumulative research findings. Why should we move away from the ‘isms’ — realism, liberalism, and constructivism in the IR subfield; rational choice, historical institutionalism, and other ‘isms’ in the
study of American and comparative politics — and toward causal mechanisms if the latter contribute only to a hodgepodge of discrete explanations of individual cases? Here, the taxonomy of causal mechanisms introduced above shows how extant paradigms and research programs have implicitly relied on causal mechanisms all along and can be mapped onto an approach that focuses on explanatory mechanisms without reifying them into grand schools of thought. Cumulation and progress enter in as increasingly refined theories on individual mechanisms and as improvements in typological theories on how combinations of
mechanisms interact to shape outcomes in problem-based research programs. Mechanism-oriented theorizing poses important costs, particularly a loss of parsimony compared to extant paradigms and research programs. Still, researchers using this approach to theory-building can choose different trade-offs along the spectrum between parsimony and complexity. In the end, there is a strong philosophical basis for rooting the study of politics in theories about causal mechanisms, and it is possible to do so while maintaining a structured discourse and cumulating research findings.