ROBOT MORALS AND HUMAN ETHICS: THE SEMINAR
Wendell Wallach
Yale University
“Imagine that you work for a computer company. You have just
been assigned to a team which is charged with coming up with an
approach for building a robot capable of making moral decisions. Where
would you start? How would you proceed? Is such a task one that you
believe is even possible?”
This is the situation I posed to undergraduates at the beginning of
the first class in a seminar titled Robot Morals and Human Ethics. The
seminar was an introduction to the new field of inquiry that is variously
known as machine ethics, machine morality, artificial morality,
computational ethics, robot ethics, and friendly AI.
Students’ fascination with emerging technologies, nurtured by
science fiction and new gadgets, provides a natural well-spring of issues
that can be used to teach ethics. In particular, a broad array of
philosophical, practical, and societal concerns naturally arise in the study
of robotics.
The students were an eclectic group, with some majoring in obvious
fields such as computer science, engineering, and philosophy. But there
were other members of the seminar whose major was psychology,
political science, cognitive science, or English. Very few students knew
much about research in fields other than their major. But they soon came
to realize that designing a computer or robot capable of factoring moral
considerations into its choices and actions is a truly interdisciplinary task.
The students, primarily juniors and seniors peppered with a couple
of motivated underclassmen, had all elected to be in the seminar. This, no
doubt, was a luxury that gave me some latitude to assign quite a bit of
reading. At the time (the seminar was taught twice) there was no
single text that covered the field, so all the readings were from primary
sources.
The field of machine morality is driven by a series of questions.
1. Do we need robots capable of making moral decisions? When? For
what?
2. Do we want computers making moral decisions?
3. Are robots the kinds of entities capable of making moral decisions?
4. Whose morality or what morality should be implemented?
5. How can we make ethics computable?
Only a few other teachers will have the opportunity or be interested
in offering a full course on machine ethics. But given the breadth of
themes we touched upon, others may find one or two of the topics from
this seminar useful for exposing their students to particular aspects of
ethics.
In our early meetings, we surveyed ethical and societal challenges
posed by the development of service robots and the military use of
weapons-carrying robots. The basic point I hoped students would grasp
is that the increasing autonomy of computers and robots means that
engineers and designers cannot always predict what these systems
will do when they encounter new situations with new inputs.
Some basic evaluation of moral considerations by the robots would be
necessary to ensure that their actions did not cause harm to humans or
other entities worthy of moral consideration. In other words,
implementing moral decision making in robots is a practical ethical
challenge.
However, I did poll students at the end of the first meeting as to
whether they believed robots would ever be able to make sophisticated
moral decisions. They were roughly divided on whether this was possible.
I polled them again on this question at the end of our last meeting.
During our first class, the students generally agreed that
implementing moral decision making faculties within robots was an
important task, although they differed on whether this was possible, and
if possible, whether the robots would be doing anything more than
executing instructions built into their program by engineers. A central
question was whether deterministic machines could make moral
decisions. This led directly into a discussion of free will and
determinism. I could generally rely on one of the computer science
majors to raise the question of whether decisions made by humans were
any freer than those that would eventually be made by thinking machines.
Thus an underlying tension in the course was set up very early in our
discussions. In what ways do we humans truly differ from the artificial
entities we will create?
Some of the students were aware of predictions that robots will
equal and then exceed human intelligence within the next few decades.
This transition is often referred to as the Singularity. But in order to keep
the Singularity from being a premise that would color all of our
reflections, I tried to ground the discussion more in the technologies we
have today, and what we can anticipate over the next decade. What exists
between here and the more intelligent robots being predicted? Might
some of the societal, ethical, and technological challenges encountered
along the way arrest the development of robots with higher order
cognitive faculties?
In considering how to implement moral decision making within
robots, two approaches come to mind:
1. Top-down: methods for implementing a set of rules (The Ten
Commandments or Asimov’s Laws for Robots) or a top-down
theory (utilitarianism or Kant’s categorical imperative) into a
computational system.
2. Bottom-up: approaches inspired by either evolution or moral
development that would facilitate a robot acquiring sensitivity to
moral considerations.
Three of the sessions were spent on top-down and bottom-up
approaches. These afforded the opportunity to teach the basics of
deontological and consequentialist theories, evolutionary psychology, and
moral education. Beyond introducing these subjects, our discussion
probed whether any of these theories or approaches is computationally
tractable. Could they be implemented in a robot?
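To give a concrete, if cartoonish, sense of what a top-down implementation might look like, here is a minimal sketch in Python. It is an illustration only, not drawn from the seminar materials: the duties, candidate actions, and the violates() check are invented placeholders, and in any real system the hard work would lie in making that check meaningful.

    # Illustrative sketch only: a caricature of a top-down, rule-based moral filter.
    # The duties, candidate actions, and the violates() check are invented
    # placeholders, not part of the seminar or of any deployed system.

    DUTIES = [
        "do not injure a human",
        "obey a human's instruction",
        "preserve your own existence",
    ]

    def violates(action: str, duty: str) -> bool:
        """Stand-in for the hard part: judging whether an action breaches a duty.
        A real robot would need perception, prediction, and interpretation here."""
        forbidden = {
            "push pedestrian": {"do not injure a human"},
            "ignore operator": {"obey a human's instruction"},
        }
        return duty in forbidden.get(action, set())

    def permissible(action: str) -> bool:
        """Permit an action only if it breaches none of the listed duties."""
        return all(not violates(action, duty) for duty in DUTIES)

    candidates = ["push pedestrian", "brake and warn", "ignore operator"]
    print([a for a in candidates if permissible(a)])  # -> ['brake and warn']

Even this toy makes the tractability worry visible: everything interesting is hidden inside violates(), which is exactly where top-down proposals tend to run into trouble.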
Students who were unfamiliar with fields of research outside their
major were at best getting an educated dilettante’s introduction.
Whenever a new subject came up, we broke for a five- to ten-minute
mini-lecture. A student who had majored in the relevant field would
attempt to teach her colleagues about the prisoner’s dilemma, rule
utilitarianism, genetic algorithms, or Lawrence Kohlberg’s theory of
moral development. After the student finished, I would underscore the
key points. Sometimes the student’s explanations were rather poor, so I
needed to be prepared with clear and quick summaries of key subject
matter. Another trick was to anticipate the themes for the coming week,
and to spend the last few minutes of the class with a mini-lecture that
outlined the fields which would be touched upon in our next meeting.
I had the luxury of motivated, expressive students. But if that is
absent, this course could also be taught through lectures. Better yet
would be lectures together with a once-a-week seminar.
For a mid-semester assignment, students were asked to pick a
top-down ethical theory and discuss whether that theory could be
implemented in a robot. The first year I taught the course, about a third
of the students elected to write on implementing rule utilitarianism in a
robot. Surprisingly, in the second year five students explored whether a
robot might be capable of evaluating choices and actions using Kant’s
categorical imperative.
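For readers unsure what "implementing rule utilitarianism" could even mean in code, the following toy sketch is my own illustration, not drawn from the student papers; the rules, scenarios, and welfare numbers are invented. It scores each candidate rule by the average welfare its general adoption would produce, then selects the best-scoring rule.

    # Toy illustration of a rule-utilitarian choice procedure. The rules,
    # scenarios, and welfare numbers are invented for exposition only.
    from statistics import mean

    # Each rule maps a scenario to the act it prescribes.
    RULES = {
        "always tell the truth": {"surprise party": "reveal", "triage": "reveal"},
        "conceal to prevent harm": {"surprise party": "conceal", "triage": "conceal"},
    }

    # Invented welfare scores for each (scenario, act) pair, summed over those affected.
    WELFARE = {
        ("surprise party", "reveal"): -2,
        ("surprise party", "conceal"): 5,
        ("triage", "reveal"): 4,
        ("triage", "conceal"): -6,
    }

    def rule_score(rule: str) -> float:
        """Average welfare if the rule were generally followed across all scenarios."""
        return mean(WELFARE[(scenario, act)] for scenario, act in RULES[rule].items())

    best_rule = max(RULES, key=rule_score)
    print(best_rule, rule_score(best_rule))  # prints the rule with the higher average score

Even at this scale the familiar objections are visible: where the welfare numbers come from, and who enumerates the scenarios and the parties affected.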
The strengths and weaknesses of either a top-down or bottom-up
approach soon became apparent to all the members of the class, and we
then turned to a hybrid approach that might combine the two.
Interestingly, virtue ethics can be looked at as a hybrid, given that the
virtues can be described as top-down aspects of character, but they are
also habits of mind and body learned through experience. Some theorists
have proposed that connectionist learning in a neural network has
similarities to the kind of learning Aristotle proposes in the Nicomachean
Ethics.
In our meetings we also touched upon philosophy of computing,
engineering ethics, and whether computers should be allowed to make
moral decisions. I assigned Caroline Whitbeck’s 1995 essay on “Teaching
Ethics to Scientists and Engineers: Moral Agents and Moral Problems”
to the students. I believed it would be helpful for them to appreciate the
differences between how engineers approach moral problems and how such
problems are discussed by moral philosophers or in political debates.
Up to this point our classes had addressed the prospect of robots
making appropriate ethical choices within limited contexts. However,
there was a constant pressure to consider whether more advanced
systems could be full artificial moral agents. A session was dedicated to
criteria for attributing moral agency, and whether a mindless system can
be a moral agent. This opened up into a broader discussion of what
machines could understand, emotional intelligence, embodied cognition,
and social skills. What, in addition to being able to reason about moral
challenges, would an artificial agent need in order to function as a moral
agent? Could we imagine a robot being a moral agent if it lacked
emotions of its own, consciousness, an understanding of the semantic
content of symbols, free will, or empathy, or if it was not embodied in a
world with other agents? Certainly even a disembodied computer might
make appropriate ethical choices within specific contexts without all
these additional attributes or cognitive skills. But full moral agency might
depend upon many capabilities in addition to moral reasoning,
capabilities that are often taken for granted when discussing humans.
Those without a background in computer science were surprised to
learn that there are active research programs in affective computing,
machine consciousness, embodied cognition, social robotics, and a theory of mind for robots. However, this does not mean it will be easy to
reproduce all these human-like abilities within computational systems.
Indeed, robots may be capable of acquiring certain functional skills in
ways that would be very different from the manner in which humans perform the same tasks. For example, sonar and other sensors have been
more useful in helping robots navigate around rooms and buildings than
visual perception.
In some respects computers have the potential to be better moral
decision makers than humans. Their rationality is less bounded, and
therefore they may be able to evaluate a greater array of responses to a difficult
challenge than a human would consider. In turn they might come up with
a better solution to a complex situation. Presumably, computers will not
be subject to jealousy, sexual pressures, or emotional hijackings, unless
simulated emotions are programmed into them. That is, computers are
more likely to emulate a stoic ideal. It has also been argued by Ronald Arkin
that a robot soldier is likely to be more ethical than a human soldier
because the robot can be relied upon to obey the laws of warfare and
rules of engagement.
Considerations as to whether robots could be full moral agents
opened the floodgates to a host of futuristic issues. Who should be held
accountable when a robot commits a harmful act? Could a robot be held
accountable for its own action? Does punishing a robot for a violation
make any sense? Should an advanced robot be given rights? If robots will
eventually be smarter than humans, then how can we ensure that such
advanced systems will be friendly to humans and human concerns? If we
cannot rely on future robots being friendly, should we relinquish research
on advanced forms of artificial intelligence?
A great deal of intellectual excitement is generated by the breadth of
serious philosophical and practical ethical questions that converge on the
field of machine ethics. While the explicit topic is implementing moral
decision making in robots, the implicit theme is human decision making
and ethics. How do we make moral decisions? Reflecting on building
artificial moral agents forces one to approach human moral decision
making in a particularly comprehensive manner.
At the end of the last meeting I again asked the students whether
they believed we would develop robots capable of making sophisticated
moral decisions. Most of the students believed we would, although a
significant minority of the students still felt we were unlikely to be able to
build robots with full human-like cognitive capabilities.
NOTE
The bibliography for this seminar is available upon request. The
original seminars relied on primary source materials, as there was no one
text at the time that surveyed the breadth of subjects we touched upon.
Moral Machines: Teaching Robots Right from Wrong (Oxford University Press
2009), a book I co-authored with Colin Allen, can serve as a text for
anyone interested in teaching all or some aspect of this seminar.
[email protected]