Ethical Problems of Artificial Intelligence
By Zach R. Williams
Even as scientists work toward a new generation of artificial intelligence,
the ethics and morality of such a creation are already being debated among those same
scientists, philosophers, and ethicists. Many questions arise when discussing the
prospect of an artificially intelligent entity, and since such an entity seems not only
possible but likely in the near future, these questions must be considered
sooner rather than later. It is not only the possibility of creating an artificially intelligent
being that must be considered, but also the ethical consequences of such an action.
There are some who would argue against the creation of such entities altogether.
The problem, many argue, lies in creating a fully conscious, self-aware, and possibly
autonomous being and simply using that being for our own selfish purposes. As Tom
Fadial wrote, “The current task, however, requires the creation of a race of conscious
beings for a specific purpose (cleaning up our mess). This raises a different concern as it
would involve using conscious beings as means and not ends in themselves.” (1) What
Fadial is referring to here is what is known as Immanuel Kant’s Practical Imperative, a
staple in ethical theory which states: “So act as to treat humanity, whether in thine own
person or in that of any other, in every case as an end withal, never as a means only...”(2)
This means that one should never treat another merely as a means, as something of
only instrumental value. Granted, some could argue that these entities are not strictly part of
humanity, but they are products of humanity, as well as conscious and self-aware, which gives
them a value all their own. In an article for BBC News, Dylan Evans informs us that, “The US
military plans to have a fifth of its combat units fully automated by the year 2020.” (3)
This, again, is a case in which we would be using conscious beings for our own ends.
Furthermore, should an army of such entities be created, what happens to Asimov’s laws,
which do not pertain to robots specifically created to do harm? By treating a conscious
being as a tool to be used for one’s own ends, we not only run the risk of degrading our
own moral character but, as Tom Fadial puts it, “Without ensuring the quality of life of
our creations, we run the risk of ‘playing god’ and losing, of creating beings that would
rather not have been created.” (1)
Some would continue to argue that said entities are still not a part of humanity
and therefore deserve no ethical consideration. However, in the development of
artificially intelligent entities, scientists and engineers continue to make them more and
more human-like in shape and appearance. According to Dylan Evans of BBC News,
“David Hanson, an American scientist who once worked for Disney, has developed a
novel form of artificial skin that bunches and wrinkles just like human skin, and the robot
heads he covers in this can smile, frown, and grimace in very human-like ways.” (3) If
these entities are not human, then why are they created to mimic humans in every
conceivable way?
As ridiculous as this may seem to some, these are valid quandaries that deserve
serious thought. After all, there was a time when animals were not seen as moral
characters and were abused before the notion of animal rights became the popular view.
The same can be said of slaves over a century ago. Slaves were not considered to be
moral characters or members of humanity; they were property, much as these entities could be
viewed. It is logical to assume that having this same discussion concerning slaves’ rights
would have been laughable to a slave master. One of South Korea’s leading roboticists,
Jong-Hwan Kim, puts it this way: “as robots will have their own internal states such as
motivation and emotion, we should not abuse them. We will have to treat them in the
same way that we take care of pets.” (4) This is valid because the animal rights
movement and how we treat animals in our society today could have a great bearing on
how we would treat artificially intelligent entities in the future. Such a notion has been
discussed at length by D. Calverley, as he concludes that “notwithstanding the divergence
between animals and androids, the way in which animals are viewed as moral objects
worthy of consideration is a meaningful way to look at androids.” (4) This is because
over time animals have been given moral weight as their consciousness and will have
been recognized. As AIEs, or androids as Calverley calls them, become more and more
human, such things must be considered.
David Levy takes this notion a step further when considering the issues of our
own humanity and the effects it will have on the next generation should we treat such
beings as property. “If our children see it as acceptable behaviour from their parents to
scream and shout at a robot or to hit it, then, despite the fact that we can program robots
to feel no such pain or unhappiness, our children might well come to accept that such
behaviour is acceptable in the treatment of human beings.” (4) This is no small issue of
concern since children will emulate the behavior of their parents or others they may
admire. To put it somewhat differently, Jaron Lanier is concerned for our own
sense of humanity, warning that giving robots rights may “widen the moral circle too
much,” and that we may lose a sense of what makes us special as human beings should
such a thing as robot rights come to pass.
Wendell Wallach, author of the book Moral Machines: Teaching Robots Right
From Wrong has several other concerns pertaining to the development of actually
teaching an artificially intelligent entity morals, or right from wrong. He asks, “Do rule-based ethical theories such as utilitarianism, Kant’s categorical imperative or even
Asimov’s laws for robots, provide practical procedures (algorithms) for evaluating
whether an action is right?” (5) As well as, “How might an artificial agent develop
knowledge of right and wrong, moral character, and the propensity to act appropriately
when confronting new challenges?” (5) One must ask if such a thing as moral character
or a knowledge of right and wrong can be programmed. Do humans learn this way or is
it the experience of doing right and wrong things that teaches us? Furthermore, should
AIEs be programmed with emotion in order to understand human behavior better, and if
so what effect will this have on how they act and even feel?
Ultimately, these are all questions being considered as the development of
artificial intelligence continues. Dozens of other questions remain regarding the ethical
theory of robots, called roboethics by some. Should any robots have rights or only
certain robots? Should robots and humans be allowed to be in a relationship? Should
robots be allowed to own property or have money? Throughout history, humanity has
often learned only after it has made a mistake; the lessons of slavery are not the least of
these. As time goes on, these questions must be answered, and with any hope they
will be answered before another mistake, like those made with slavery or the treatment of
animals, occurs.
Bibliography
1. Fadial, T. 2010. The Ethics of AI: Part One.
http://erraticwisdom.com/2010/04/05/the-ethics-of-ai-part-one
2. Somers, C. 2004. Vice & Virtue in Everyday Life. Belmont, CA:
Wadsworth/Thomson Learning.
3. Evans, D. 2007. The Ethical Dilemmas of Robotics.
http://news.bbc.co.uk/2/hi/6432307.stm
4. Levy, D. 2009. The Ethical Treatment of Artificially Conscious Robots.
http://www.springerlink.com/content/qn43n18226551422/fulltext.pdf
5. Wallach, W. 2010. The Challenge of Moral Machines.
http://philosophynow.org/issue72/72wallach2.htm