The notion of “existential risk”
as a framework for the anticipation
of technoscientific progress
Roberto Paura
International Workshop on Anticipation, Agency and Complexity – University of Trento, 6-8 April 2017
Definition:
An existential risk is one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
(Bostrom, 2002)
A genealogy of the idea
Perception of these risks emerged very recently as an outcome of three trends:
 The growing awareness of the interdependence between humanity and the biosphere, and the
perception of humanity as a planetary species.
 The emergence of the theory of mass extinctions and the related comprehension of the
mechanisms causing extinctions.
 The rise of a technological civilization, equipped with tools developed by mankind that are able to compromise its own existence as a civilization.
These three trends reached a point of convergence during the second half of the 20th Century.
A Genealogy of the Idea / 1
Rachel Carson’s Silent Spring (1962) showed the impact of DDT on birds’ food chains and the possible consequences for humankind, raising awareness of the biosphere’s vulnerability to human industrialization.
Stewart Brand persuaded NASA to release a photo of the whole Earth (the «Blue Marble») in 1968. It led to the ecological movement represented by the Whole Earth Catalog and the first Earth Day (1970).
In 1969 James Lovelock presented the «Gaia hypothesis» for the first time at a seminar; he later developed it together with Lynn Margulis. Gaia: A New Look at Life on Earth (1979) suggested the biosphere’s capacity for self-regulation and the potential negative impact of humankind’s activities. It fostered awareness of the risks related to ozone depletion, discovered in the 1970s.
A Genealogy of the Idea / 2
From 1900 to 1960 the world population doubled from 1.5 to 3 billion.
Paul R. Ehrlich’s The Population Bomb (1968) relaunched Malthus’ thesis on the risks of overpopulation in a world of finite resources (food above all). However, the Green Revolution averted the nightmare scenarios depicted by Ehrlich (billions of deaths from famine, wars over resources, etc.).
The Club of Rome, established in 1968, raised global awareness of the risks of unlimited growth in a finite world. In 1972 The Limits to Growth report showed the long-term trends of economic growth and the related systemic risks, in particular the strong connection between human development and the depletion of natural resources.
A Genealogy of the Idea / 3
During the 1960s, scientists began to consider the possibility that the Earth could be hit by asteroids and comets. Rare minerals that could be explained as byproducts of past violent collisions were found in the Arizona Meteor Crater.
In 1980 Luis and Walter Alvarez published in Science the paper «Extraterrestrial Cause for the Cretaceous-Tertiary Extinction», proposing that a cosmic collision caused the extinction of the dinosaurs. Supporting evidence was later found beneath the Yucatán Peninsula.
We now know of five mass extinctions in Earth’s history («the Big Five»), which led to the conclusion that life can be extinguished on a large scale in very short times.
A Genealogy of the Idea / 4
Since the first atomic tests, concerns have arisen about nuclear fallout. In 1954 fallout from the Bravo test hit the crew of the Japanese ship Daigo Fukuryu Maru, raising awareness of the large-scale dispersal of radiation.
In 1966 the RAND Corporation calculated that multiple nuclear explosions could raise enough ash into the atmosphere to obscure sunlight. In 1983 a team of scientists published in Science the paper «Nuclear Winter: Global Consequences of Multiple Nuclear Explosions», which showed that, as a consequence of a global nuclear war, sunlight would be blocked by 99% and global temperatures would drop by 20-35 °C, causing billions of deaths from famine.
Since 1947, the Doomsday Clock of the Bulletin of the Atomic Scientists has warned about the existential risk of nuclear war.
A Genealogy of the Idea / 5
The issue of climate change includes aspects of all three trends analyzed above: awareness of the impact of human civilization on the biosphere, fear of a new mass extinction (the «sixth extinction», the first one caused not by external events but by a living species) and a growing comprehension of the side effects of technoscientific progress.
The new notion of «Anthropocene», proposed by a
number of geologists and still debated, summarizes the
idea that human development is changing the whole
Earth, with potential catastrophic effects not just for the
biosphere, but even for our civilization.
Alan Weisman’s The World Without Us (2007) and Elizabeth
Kolbert’s The Sixth Extinction (2014) popularized the idea
of climate change as an existential risk.
A Genealogy of the Idea / 6
In The Imperative of Responsibility: In Search of an Ethics for the Technological Age (1979) Hans Jonas argued that the ecological crisis stems from unrestrained scientific and technological development occurring without an updated ethical framework to serve as a guide. Now that our technological power threatens the natural balance, our responsibility extends beyond interhuman relations to the biosphere and should incorporate long-term effects in any forecast. Humanity should be responsible towards future generations and the world they will inherit.
The precautionary principle (Rio Declaration, 1992): «In
order to protect the environment, the precautionary
approach shall be widely applied by States according to
their capabilities. Where there are threats of serious or
irreversible damage, lack of full scientific certainty shall
not be used as a reason for postponing cost-effective
measures to prevent environmental degradation».
Classifying existential risks / 1
Bostrom (2002) proposed a first taxonomy based on the
assumption that the final goal of human civilization is to
reach a post-human level of maturity («the singularity»):
 Bangs: Earth-originating intelligent life goes extinct in a relatively sudden disaster resulting from either an accident or a deliberate act of destruction.
 Crunches: the potential of humankind to develop into
posthumanity is permanently thwarted although human
life continues in some form.
 Shrieks: some form of posthumanity is attained but it is
an extremely narrow band of what is possible and
desirable.
 Whimpers: a posthuman civilization arises but evolves in
a direction that leads gradually but irrevocably to
either the complete disappearance of the things we
value or to a state where those things are realized to
only a minuscule degree of what could have been
achieved.
Classifying existential risks / 2
Later, Bostrom (2013) proposed the following taxonomy
based on the possible outcomes of existential risks:
 Human extinction: humanity goes extinct prematurely,
i.e. before reaching technological maturity.
 Permanent stagnation: humanity survives but never reaches technological maturity (subclasses: unrecovered collapse, plateauing, recurrent collapse).
 Flawed realization: humanity reaches technological
maturity but in a way that is dismally and irremediably
flawed (subclasses: unconsummated realization,
ephemeral realization).
 Subsequent ruination: humanity reaches technological
maturity in a way that gives good future prospects, yet
subsequent developments cause the permanent
ruination of those prospects.
Classifying existential risks / 3
Distinction between endogenous (or anthropogenic) risks
and exogenous risks (e.g. asteroid collision).
In Our Final Hour (2003) Lord Martin Rees lists the following
anthropogenic risks:
 Nuclear war or mega-terrorist attacks.
 Biohazards: spreading of high-mortality pathogens (e.g. viruses) in the environment, whether natural or engineered for terrorist purposes.
 Laboratory mistakes: for instance, the grey goo scenario in the field of nanotechnology, or the production of strange matter in particle colliders.
 Human impact on the environment: extinction of living species, overpopulation, depletion of natural resources, climate change, global warming.
Classifying existential risks / 4
Apart from the ones already mentioned by Rees, John
Casti (2012) lists other risks for the technological civilization:
 Internet black-out.
 Collapse of global supply chain.
 Electromagnetic pulse (EMP) in the atmosphere deactivating all electronic systems.
 Collapse of globalization and a new world disorder.
 Depletion of fossil fuels before the transition to a post-fossil civilization, and collapse to a pre-industrial state.
 Interruption of the water and electricity supply chains.
 Global deflation and collapse of financial markets worldwide.
All these X-Events are related to the growing systemic complexity of our civilization. According to Casti, we need either to simplify our systems or to adopt more complex tools to anticipate and manage these risks.
Studying existential risks / 1
Centre for the Study of Existential Risk (CSER), University of Cambridge
Co-founders:
• Huw Price, Bertrand Russell Professor of Philosophy, Cambridge
• Martin Rees, Emeritus Professor of Cosmology & Astrophysics, Cambridge
• Jaan Tallinn, Co-founder of Skype
Research agenda:
 Developing a general methodology for the management of extreme risk. The aim is to
develop a set of methods and protocols specifically designed for the identification,
assessment and mitigation of this class of risks. These methods will complement other
projects on risks in particular domains.
 Analysis of specific potential risks. Focus areas include: artificial intelligence;
biotechnology risks; climate change and ecological risks; and systemic risks and fragile
networks.
Studying existential risks / 2
Managing Extreme Technological Risks (ETRs)
 Evaluation of ETRs: understanding the limitations of
standard cost-benefit analysis approaches when
applied to ETRs, and developing a version more suited
to this context.
 Horizon-Scanning for ETRs: identifying potential ETRs well ahead of time will provide the opportunity to study them and, if necessary, take steps to mitigate or avoid them.
 Responsible Innovation and ETRs: how to engage the broader scientific community, encouraging risk awareness and social responsibility in the development of technologies with the greatest transformative potential, without discouraging innovation.
 ETRs and the Culture of Science: studying epistemic risk,
inductive risk, scientific pluralism and the uses and
limitations of the precautionary principle, within the
context of managing ETRs.
Studying existential risks / 3
Specific risk areas:
 Unsustainable environmental change and potentially catastrophic
ecosystem shifts. The aim is to improve understanding of how
ecological tipping points, climate change and socio-political
systems interact with each other.
 Dispel unjustified concern regarding biosciences and biotechnologies, while highlighting risks that need to be taken seriously.
 Forward planning and research to avoid unexpected
catastrophic consequences in the development of artificial
intelligence.
 A better understanding of the emergent systemic risks of fragile
networks: complex interactions between a rising global
population, pressure on natural resources, more complex supply
chains, and an increasing reliance on both interconnected
technologies and interconnected markets.
Studying existential risks / 4
Future of Humanity Institute (FHI), University of Oxford
Founder: Nick Bostrom, Professor of Philosophy, University of Oxford
Research Agenda:
 Macrostrategy: detailed analysis of future technology capabilities and impacts, existential risk assessment, anthropics, population ethics, human enhancement ethics, game theory, consideration of the Fermi paradox, and other indirect arguments.
 Main question: methodological challenges for risks with low
probabilities and high stakes. For instance, estimating the risk
of black hole formation at LHC (Ord et al., 2010). «We can
reduce the possibility of unconscious bias in risk assessment
through the simple expedient of splitting the assessment into
a ‘blue’ team of experts attempting to make an objective risk
assessment and a ‘red’ team of devil’s advocates
attempting to demonstrate a risk, followed by repeated turns
of mutual criticism and updates of the models and
estimates».
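The Ord et al. (2010) point can be made concrete with a toy calculation (not from the slides; all numbers below are hypothetical): when a safety argument yields a tiny risk estimate, the probability that the argument itself is flawed can dominate the final figure.

```python
# Toy illustration of Ord, Hillerbrand & Sandberg (2010): a risk estimate
# P(event) should account for the chance that the safety argument producing
# it is itself flawed. All numbers are made-up placeholders.

def adjusted_risk(p_event_if_sound, p_argument_flawed, p_event_if_flawed):
    """Combine the model's risk estimate with the chance the model is wrong."""
    p_sound = 1.0 - p_argument_flawed
    return p_event_if_sound * p_sound + p_event_if_flawed * p_argument_flawed

# A safety case claims a 1-in-a-billion risk, but we give the argument itself
# a 1-in-1000 chance of being flawed, with a 1-in-a-million fallback estimate
# of the risk in that case. The flaw term (1e-3 * 1e-6 = 1e-9) alone matches
# the naive estimate, roughly doubling the overall risk figure.
risk = adjusted_risk(1e-9, 1e-3, 1e-6)
print(f"{risk:.3e}")
```

The qualitative lesson matches the quoted red-team/blue-team proposal: for low-probability, high-stakes risks, scrutinizing the argument is as important as the estimate it produces.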
Studying existential risks / 5
Research Agenda:
 AI Safety
 Technical: solving the technical challenges of building AI systems that remain safe even when highly capable. For instance, how to ensure that an AI «will not learn to prevent (or seek!) being interrupted by the environment or a human operator» (Orseau & Armstrong, 2016).
 Strategic: to understand and shape the strategic landscape
of long-term AI development. Examples of this kind of
research include preventing arms races between groups, the
possible dynamics of an intelligence explosion, and the
extent to which inputs like hardware and software will
contribute to long-run AI development.
 Main question: the global desirability of different forms of
openness in AI development, overcoming secretive and
proprietary approaches that could increase the risks of
an AI race (Bostrom, 2017).
Studying existential risks / 6
Future of Life Institute (FLI)
Founders:
• Jaan Tallinn, Co-founder of Skype
• Max Tegmark, Professor of Cosmology, MIT & Co-founder of the
Foundational Questions Institute
• Meia Chita-Tegmark, Ph.D. candidate in Developmental Sciences at Boston University
• Viktoriya Krakovna, Research scientist at DeepMind
• Anthony Aguirre, Professor of Physics, University of California Santa Cruz & Co-founder of the Foundational Questions Institute
Research Agenda:
 Benefits and risks of artificial intelligence
 In January 2015 FLI published an open letter on research priorities for robust and beneficial artificial intelligence, signed by thousands of experts worldwide; it was followed by a $10 million donation from Elon Musk to work on these issues. In January 2015 an AI Safety Conference was also organized in Puerto Rico.
Studying existential risks / 6 (cont.)
Research agenda:
 AI Safety
 In July 2015 FLI published an open letter, signed by AI and robotics researchers, concerning autonomous weapons able to select and engage targets without human intervention. They asked for a global ban.
 In January 2017 FLI organized the Asilomar conference on Beneficial AI, which produced the Asilomar AI Principles: 23 principles on research policy, ethics, values and longer-term issues related to AI. The aim was to replicate the success of the famous Asilomar conference on recombinant DNA in 1975, where hundreds of scientists agreed upon voluntary guidelines to ensure the safety of that new technology.
Studying existential risks / 7
Research agenda:
 Benefits and risks of biotechnology: how can we live
longer and healthier lives while avoiding risks such as
engineered pandemics?
 The case of CRISPR
 Climate change: after transforming our environment to
allow farming and burgeoning populations, how can we
minimize negative impact on climate and biodiversity?
 Nuclear technology: how can we enjoy nuclear power
and ultimately fusion power while avoiding nuclear
terrorism or war?
 FLI is working in support of the UN Nuclear Weapons Ban Negotiations, on the assumption that nuclear weapons represent one of the biggest threats to our civilization (even though the US and 40 other nations decided to boycott the negotiations).
Conclusions/ 1
In what sense does the notion of existential risk work better than the precautionary principle for the anticipation of technoscientific progress?
The far future argument (Baum, 2015): «people should care
about human civilization into the far future, and thus, to
achieve far future benefits, should seek to confront these
catastrophic threats».
Problems:
 people prefer to spend more money and time to help
close friends and family than distant acquaintances;
they presumably would spend even less for members of
far future generations;
 current electoral structures favor the short-term;
 the rise of decentralized capitalist/democratic political economies and the fall of authoritarian political economies have diminished major long-term planning.
Conclusions/ 2
 The precautionary principle is more focused on the potential side effects of current findings, while scholars of existential risks properly consider even very unlikely events (e.g. asteroid impacts, grey goo) that could pose a great threat to human civilization in the long term.
 Effective use of foresight and anticipatory tools.
 Applying the precautionary principle to every new scientific finding could halt technoscientific progress altogether (e.g. GMOs, CRISPR). Existential risk analysis encourages technoscientific progress while at the same time considering ways to promote beneficial outcomes and to avoid threats and risks.
 For instance, existential risk organizations do not call for halting R&D programs in strong artificial intelligence; rather, they develop strategies to make AI beneficial and to ban potential hazards like autonomous weapons.
References
Baum S.D. (2015), The Far-Future Argument for Confronting Catastrophic Threats to Humanity: Practical
Significance and Alternatives, «Futures», vol. 72.
Bostrom N. (2002), Existential Risks. Analyzing Human Extinction Scenarios and Related Hazards, «Journal of
Evolution and Technology», vol. 9 no. 1.
Bostrom N. (2013), Existential Risk Prevention as Global Priority, «Global Policy», vol. 4 no. 1.
Bostrom N. (2017), Strategic Implications of Openness in AI Development, «Global Policy» (forthcoming).
Carson R. (1962), Silent Spring, Houghton Mifflin, Boston.
Casti J.L. (2012), X-Events. The Collapse of Everything, William Morrow, New York.
Ehrlich P.R. (1968), The Population Bomb, Sierra Club, San Francisco.
Jonas H. (1984), The Imperative of Responsibility: In Search of an Ethics for the Technological Age, University of Chicago Press.
Kolbert E. (2014), The Sixth Extinction, Henry Holt and Company, New York.
Lovelock J. (1979), Gaia. A New Look at Life on Earth, Oxford University Press.
Ord T., Hillerbrand R., Sandberg A. (2010), Probing the improbable: methodological challenges for risks with low
probabilities and high stakes, «Journal of Risk Research», vol. 13 no. 2.
Orseau L., Armstrong S. (2016), Safely Interruptible Agents, in «Uncertainty in Artificial Intelligence», Proceedings of the 32nd Conference, Jersey City, NJ, USA.
Rees M. (2003), Our Final Hour, Basic Books, New York.
Turco R.P., Toon O.B., Ackerman T.P., Pollack J.B., Sagan C. (1983), Nuclear Winter: Global Consequences of
Multiple Nuclear Explosions, «Science», vol. 222 no. 4630.
Weisman A. (2007), The World Without Us, St. Martin’s Press, New York.
www.instituteforthefuture.it
[email protected]
facebook.com/italianinstituteforthefuture
@institutefuture