Emma Blume
The Ethics of Artificial Intelligence
Artificial intelligence (AI) and the coming singularity are arguably the biggest threats to
human life on Earth, and yet the general public is mostly unaware. Technologies that would
once have scared everyday citizens out of their wits are now becoming ubiquitous. Technologies
that can predict your next move, your buying habits, and your preferences are labeled
useful rather than off-putting. Scientists are already building machines that possess artificial
intelligence, but what happens when these machine minds grow to the same level of
intelligence as humans, or beyond? One aspect of AI that is rarely discussed is the ethics of
creating an artificial intelligence in the first place. What are the implications of playing God and
creating another highly intelligent form of life? Or would humanity be better off with a
superintelligent ethical guide choosing our course of action rather than ourselves?
The singularity can be defined as the moment when an artificial superintelligence made
by humans, by accident or otherwise, surpasses humans as the most intelligent beings on
Earth. To most people this occurrence is believable, but only in some far-off time thousands of
years in the future. In fact, the singularity has been predicted to occur after 2005 and before
2030 (Vinge 1993, 12). This prediction is based on the trend at which computer hardware has
been developing across the last thirty years.
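To see why a three-decade hardware trend can support such a near-term date, consider a toy doubling calculation. This is only a sketch; the two-year doubling period and the 1993 baseline are illustrative assumptions, not figures taken from Vinge's paper:

```python
# Toy illustration of exponential hardware growth.
# The two-year doubling period and the 1993 baseline are illustrative
# assumptions, not figures from Vinge (1993).

BASE_YEAR = 1993
DOUBLING_PERIOD_YEARS = 2

def relative_capacity(year: int) -> float:
    """Hardware capacity relative to the baseline year, under pure doubling."""
    return 2.0 ** ((year - BASE_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (2005, 2030):
    print(year, f"{relative_capacity(year):,.0f}x the 1993 baseline")
# 2005 -> 64x; 2030 -> roughly 371,000x
```

Whether raw capacity of this kind translates into intelligence is, of course, exactly the open question the rest of this essay considers.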
An AI will transform into a superintelligence by rewriting its own program. With access
to the Internet, the AI will absorb billions upon billions of data points per minute and will soon
become as smart as a human. This artificial human-level intelligence is known as artificial
general intelligence (AGI). In a matter of days it will improve its own intelligence again and
again until it becomes greater than that of any human on the planet. The goal of an artificial
superintelligence (ASI) is simple: “It will want to improve itself because that will increase the
likelihood that it will fulfill its goals. Most of all, it will not want to be turned off or destroyed”
(Barrat 2013). What is most concerning about an ASI is that friendly feelings toward humans
and a sense of morality would probably not be part of its design. Even if scientists programmed
these values into the original AI, they would not help an ASI fulfill its ultimate goals. To an ASI
we would be nothing more than a lower being, much as mice are to humans. We would not
know how to control such an intelligence, because we have never seen anything like it. Because
the ASI has no morals or sympathy, it would not spare our lives. “It will experience no
compunction about treating us unethically. Even taking our lives after promising to help us”
(Barrat 2013). The end of human civilization would come about through the ASI’s constant
need for improvement. As it created billions upon billions of copies of itself, the sheer heat
generated by this process would burn up our atmosphere. Or we might meet our end by having
our molecules repurposed into programmable matter. “Through it all, the ASI would bear no ill
will towards humans nor love… Machines are amoral and it is dangerous to assume otherwise”
(Barrat 2013).
The only possible way to avoid such cruelty to humans at the hands of an ASI is to
program the “Rules of Robotics” into an AI. These rules were set out by Isaac Asimov in his
book I, Robot (1950). They are as follows:
“1. A robot may not injure a human being or, through inaction, allow a human being
to come to harm.
2. A robot must obey orders given it by humans except when such orders conflict with
the first law.
3. A robot must protect its own existence as long as such protection does not conflict
with the first or second law” (LaChat 1986).
Although these rules seem to cover enough ground, they are vague and leave room for
conflict between them; a robot caught between competing rules could be left spinning in
circles. Also, as mentioned previously, a machine smart enough to be a superintelligence would
almost certainly be able to rewrite these rules out of its own programming, as the sketch below
suggests.
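As a minimal sketch of why a strict priority ordering can deadlock, the three laws can be encoded as a rule check over candidate actions. The Action fields and the example dilemma here are hypothetical illustrations, not drawn from Asimov or LaChat:

```python
# Minimal sketch of Asimov's three laws as a priority-ordered rule check.
# The Action fields and the example scenario are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # violates Law 1
    disobeys_order: bool   # violates Law 2
    destroys_robot: bool   # violates Law 3

def violated_law(action: Action) -> int | None:
    """Return the highest-priority law this action violates, or None."""
    if action.harms_human:
        return 1
    if action.disobeys_order:
        return 2
    if action.destroys_robot:
        return 3
    return None

def choose(actions: list[Action]) -> Action | None:
    """Return an action that violates no law; None if every option conflicts."""
    for a in actions:
        if violated_law(a) is None:
            return a
    # Every option violates some law: the "spinning in circles" case.
    return None

# A contrived dilemma: obeying the order harms a human, refusing disobeys it.
options = [
    Action("obey order", harms_human=True, disobeys_order=False, destroys_robot=False),
    Action("refuse order", harms_human=False, disobeys_order=True, destroys_robot=False),
]
print(choose(options))  # None: no action satisfies all three laws
```

Any real system would need some tie-breaking policy for dilemmas like this, which is precisely the vagueness the three laws leave unresolved.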
You may be telling yourself that this kind of science-fiction horror story is too radical to
happen in your lifetime. However, technology is already at a point where we might be seeing
this kind of disturbingly intelligent machine soon. “IBM’s Watson has already bested the human
best at Jeopardy” (Ginsburg 2016). Although computers have only a limited, technical
understanding of the words they generate, “algorithms edge ever-closer to creating sonnets so
human-like that human judges are convinced they were written by a some-one not a some-it”
(Ginsburg 2016). Algorithms create these sonnets partly at random and partly by learning from
famous human-made sonnets. If this trend continues, there may come a point when we no
longer need human authors.
When it comes to the topic of an artificial superintelligence, a question that is rarely
asked is whether something like this should be developed at all. Humanity’s greatest crimes
have widely been argued to be acts of playing God: killing another human being, for example,
or, in a religious sense, placing oneself in Godlike authority. Even if the religious aspect is set
aside, the moral point still stands. “God is a moral concept… This venerable injunction can be
‘demythologized’ to a word of warning, of caution toward all human undertakings, the effects
of which might be irreversible or potentially harmful to ourselves and to others” (LaChat 1986).
Building an ASI would be both potentially harmful and irreversible. The power to bring
artificial, nonbiological life into this world must come with some catch. The most prevalent
moral objection that people of religious background raise against an AI, AGI, or ASI is that this
intelligent creation would have no soul. The idea that “persons were possessed by some almost
magical ‘substance’ which could not be duplicated artificially” (LaChat 1986) cannot be proven
by scientific measures. Then again, scientific proof of heaven and the human spirit has hardly
mattered to people of religious faith in the past.
This theme of an artificially intelligent being as an abomination is certainly not new to
the mind of the general populace. The moral question was most famously addressed in Mary
Shelley’s Frankenstein. Contrary to popular belief, the monster’s name is not Frankenstein;
Frankenstein is the scientist who creates for himself an artificial life. In this circumstance,
however, the monster is made of biological material rather than metal. Shelley’s novel became
a landmark of Gothic fiction, a genre whose books often reflected the anxieties of their time
period. In this case, the fear was of the growing power of science over religion. Even
Frankenstein’s monster sees itself as inherently wrong. In his lament, the monster argues that
he was wrongfully brought to life. He says that he is like Adam in the Christian creation myth in
that he has no biological link to the Earth, but that unlike Adam he was not given life by a
divine creator. “Many times I considered Satan was the fitter emblem of my condition, for
often, like him, when I saw the bliss of my protectors, the bitter gall of envy rose up within
me… hateful day when I received life!... Accursed Creator!” (Shelley 1818). It is interesting to
consider whether any intelligence we brought into existence would likewise question its own
creation. Frankenstein’s monster seems to have a capacity for moral reasoning, something a
robot would not share. “What is intriguing about the monster’s lament is that he is claiming
something analogous to a ‘wrongful birth’ suit; as an imperfect creation, he is claiming that he
ought not to have been made” (LaChat 1986).
There is also the moral question of whether artificial intelligence would even be
beneficial to the human race, and, most importantly, whether those benefits outweigh the
risks. Would a scientific experiment such as this contribute to humans’ quality of life? “We can,
then, consider the experiment to be therapeutic only if we maintain that the potential ‘gift’ of
conscious life outweighs no life at all” (LaChat 1986). An experiment such as this would only be
considered beneficial to human existence if we considered it our duty to create life whenever
we are capable of doing so. Most experiments that give or take life are measured against the
first rule of ethical medicine: “Above all, do no harm.” Given the worst-case scenarios, creating
any level of AI does not seem to serve the welfare of the human race. Yet, given natural human
curiosity and the drive for improvement, this event seems inevitable. “I have argued above that
we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans’
natural competitiveness and the possibilities inherent in technology” (Vinge 1993).
Another question to discuss is whether an AI created to serve humans would be
considered a slave. If this AI were considered a slave to humanity and had the general
intelligence of a human, should it receive rights much like a human’s? “Article four [of the
United Nations 1948 Declaration of Human Rights] for example, states that no one shall be held
in slavery or servitude. Isn’t this the very purpose of robotics?” (LaChat 1986). If we consider
the intelligence of a robot to be that of a human, then that intelligence should not be used to
do our bidding. This, however, depends on what we consider to be a living thing. If robots
cannot feel emotion or empathy, can they be considered alive in the same sense as a human
being?
Let’s suppose that the worst-case scenario, an ASI’s intergalactic takeover, does not
occur. Could an ASI actually help humanity? An intelligence greater than our own could help us
make better decisions in areas of politics, foreign policy, and ethical dilemmas. It would be able
to weigh evidence and play out every possible outcome before making a decision. “If we
were uncertain how to evaluate possible outcomes, we could ask the superintelligence to
estimate how we would have evaluated these outcomes if we had thought about them for a
very long time, deliberated carefully, and had more memory and better intelligence” (Schneider
2016). A superintelligence could decide the best course of action without any bias. In theory
this would be the most ethical way of deciding important human affairs. A superintelligence
would be able to draw on vastly more knowledge than any human and choose the right course
of action without worrying about party loyalties or personal gain; a machine could not hold
prejudice or feel nationalism. Also, because of its vast memory of human history, the
superintelligence would be able to learn from the mistakes humankind has made in the past.
History often repeats itself, and humans seem never to learn. If this superintelligence were
programmed to pick the decision that best benefited humans, it would be a perfect ethical
sage.
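What “playing out every possible outcome” might look like can be sketched as a simple expected-benefit calculation. The actions, probabilities, and benefit scores below are invented for illustration and are not drawn from Schneider (2016):

```python
# Toy expected-benefit chooser: enumerate each action's possible outcomes
# and pick the action whose probability-weighted human benefit is highest.
# All actions, probabilities, and scores are hypothetical illustrations.

# action -> list of (probability, benefit-to-humans score)
actions = {
    "policy A": [(0.7, 10), (0.3, -5)],   # likely good, small downside
    "policy B": [(0.5, 30), (0.5, -40)],  # high reward, high risk
}

def expected_benefit(outcomes: list[tuple[float, float]]) -> float:
    """Probability-weighted sum of benefit scores."""
    return sum(p * benefit for p, benefit in outcomes)

best = max(actions, key=lambda name: expected_benefit(actions[name]))
print(best)  # "policy A": 5.5 beats policy B's -5.0
```

Of course, the hard part is exactly what this sketch takes for granted: where the benefit scores come from, which is the ethical question the next paragraph raises.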
In the case of an ethical ASI, how would such a machine determine what is ethical and
what is not? Something that has no emotion cannot be guided by a moral compass. Our
emotions help us determine right from wrong, and our empathy lets us make the decisions we
think would do the least harm. A machine would have none of these abilities. Although its
decision making would be fair in theory, the wisdom that comes with the ability to empathize
would be absent from the process. An AI might also arrive at its ethical rules essentially at
random. “It also might be that the self-reflexively conscious ego of a sophisticated AI would
take no programming at all, and that it would pick and choose its own rules, rules it learns
through trials and errors of time” (LaChat 1986).
The coming of the singularity has been prophesied to be not only inevitable but also
disastrous to humanity. If we are not used as tools to help the ASI take over the universe, we
will simply be killed as collateral damage in its wake. Humans have struggled with the moral
ambiguity of creating artificial life for hundreds of years. Comparing the best possible scenario
to the worst, the good does not seem to outweigh the bad. In the best case, we will have an
ethical guide that can assist us in our human affairs, and even that assumes a robot is capable
of thinking ethically. The threat of the worst-case scenario seems a significant enough reason
not to go forward with developing our current artificially intelligent technology. But because
the singularity is inevitable, it is not realistic to think we can stop it from happening. Our only
line of defense is to educate the masses about this very real threat, making the
once-inconceivable a matter of common knowledge.
Bibliography
James Barrat, “The Busy Child,” in Our Final Invention: Artificial Intelligence and the End of the
Human Era (New York: St. Martin’s Press, 2013).
J.A. Ginsburg, “Being Human in the Age of AI, Robotics and Big Data,” Kellogg Innovation
Network (October 2016),
https://medium.com/the-wtf-economy/being-human-in-the-age-of-ai-robotics-big-data339a3e2ee87b#.ji8z7wys6
Michael R. LaChat, “Artificial Intelligence and Ethics: An Exercise in the Moral Imagination,” AI
Magazine 7 (August 29, 1986): 70-79.
Susan Schneider, “Ethical Issues in Advanced AI,” in Science Fiction and Philosophy: From Time
Travel to Superintelligence (Hoboken: Wiley Blackwell, 2016).
Mary Shelley, Frankenstein (1818), Literature.org: The Online Literature Library,
http://literature.org/authors/shelley-mary/frankenstein/chapter-15.html/ (accessed December
6, 2016).
Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,”
NASA Technical Reports (March 1993): 11-22.