Philosophy of Artificial Intelligence: Robotics
My paper is about the philosophy of robotics in artificial intelligence. I will discuss several topics, including rights for robots, the ethical treatment of robots, and the difference between a “robot” and an android. I think that by better understanding robots today, humankind will have a better idea of what is possible in the future.
First, I will discuss the difference between a robot and an android. Although the terms have been used interchangeably, an android is by definition a robot that possesses human form, whereas a robot may, but need not. (1) Human form, in this instance, means the ability to pass for human in natural light. Robots are currently used in factories across the world and do not necessarily have to resemble humans.
[Figure: Android (left) – Robot (right)]
With the rise of technology, robots are being used more and more often. You’ll find robots in factories, in the military, and in schools and healthcare. As robots become more commonplace in society and people become more used to their presence, humans will eventually bring robots into their homes (as in Bicentennial Man). With this potential introduction into people’s homes, less robotic, more human-like entities will be desired. This is where androids come in. As seen in the picture above, we have already achieved nearly realistic human features for a robot, and as technology, computer technology in particular, becomes faster, smaller, and more efficient, mankind comes closer to achieving a true AI entity.
As robots take that fateful step into the human world, there comes a time when the purpose of robots and androids will be questioned. That time is now! In the spring of 2007, South Korea, a leading force in the world of technology and robotics, set out to codify future rules for robots. It seeks to treat intelligent robots ethically and to deal with some of the social issues brought about by interaction with robots, mainly addiction to robot interaction and human control over robots. The Korean charter’s initial rules will be similar to Asimov’s three laws of robotics, though it is thought that robotics technology is not yet at a level that requires such rules. (2)
Presently, the rights afforded to robots put them in the place of silent slaves, or pieces of property. Computers, robots, and androids cannot be accused of or charged with crimes, as they are not considered people, although one must remember that entire groups of humans have been treated in the same fashion in the past. The question, though, is not whether today’s computers and robots should have rights, but whether future robots that possess near-human intelligence should be treated as humans. It may also become a crime for humans to tamper with a robot’s code, allowing it to neglect or go against its built-in safety features. (3)
Science fiction fanatics have long imagined that robots may someday oppress or destroy mankind. Films like The Matrix, Blade Runner, and Terminator portray robots in a menacing, malevolent light. Whether to gain freedom, as in The Matrix, or as a means of survival, as in Blade Runner, these robots hurt and kill human beings.
Asimov combats these potential fears with his three laws, but as humans come closer to achieving an artificially intelligent robot, at what point will humans be forced to allow robots some level of freedom? And with this freedom, will robots act as humble servants, as benevolent protectors, or as the eventual destroyers of mankind? Given human nature, which will inevitably be programmed into robots, these machines might fill all of these roles, just as humans have in the past. I believe that, given the different intelligences, biases, and upbringings that robots will have, these machines will be as diverse as humans. It will be mankind’s ultimate decision whether to allow robots the freedom that humans so desire or to keep robots in a state of eternal servitude.
Bibliography
(1) Edmond Woychowsky, “The Difference Between Robots and Androids,” TechRepublic, April 13, 2010. http://www.techrepublic.com/blog/geekend/thedifference-between-robots-and-androids/4626
(2) Stefan Lovgren, “Robot Code of Ethics to Prevent Android Abuse, Protect Humans,” National Geographic News, March 16, 2007. http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html
(3) Robert A. Freitas Jr., “The Legal Rights of Robots.” http://www.rfreitas.com/Astro/LegalRightsOfRobots.htm
(4) Joel Marks, “Man in the Middle: Animals, Humans and Robots,” Philosophy Now. http://www.philosophynow.org/issue72/Man_in_the_Middle_Animals_Humans_and_Robots