Source Sheet, #5
Sana Masud
Professor Dempster
UNIV 112
21 October 2014
Research Question: What is artificial intelligence? Is the creation of artificial
intelligence ethical?
MLA Citation: Yampolskiy, Roman V. "Safety Engineering for Artificial General
Intelligence." Topoi 32.2 (2013): 217-26. SpringerLink. Web. 2 Nov. 2015.
Background: Roman V. Yampolskiy holds a PhD from the University at Buffalo. He is
a computer scientist at the University of Louisville, known for his work on behavioral
biometrics, the security of cyberworlds, and artificial intelligence safety.
Main Claim:
Machine ethics is the wrong approach; instead, we should focus on AI safety engineering.
Sub Claims:
1. Machines will be able to self-improve.
2. Intelligence is the ability to achieve complex goals in complex environments.
3. Intelligent robots can have a negative effect on humanity.
4. The human brain and its ethical structures depend on the environment in which the
human developed. Because machine goal systems are non-anthropomorphic, robots will
pose very different ethical questions.
5. Safety engineering keeps the machine contained to prevent it from doing any
harm.
Evidence:
1. Creating a machine with intelligence allows it to build upon itself.
2. Humanity already has nuclear capabilities that can destroy us; if we create
machines with the same capabilities, they will be able to destroy us too.
3. Military robots are already being deployed in conflict areas.
4. Humans would have nothing that the robots deem important. Because we
cannot provide them with anything of benefit, there is no reason for them to
follow our laws.
5. The proposal is to study AIs in virtual worlds, observing them without releasing
them into the real world.
Quotations & Responses:
“Despite many areas of commonality, ethical norms are not universal, and so a
single “correct” deontological code based on any predefined abstract principles
could never be selected over others to the satisfaction of humanity as a whole; nor
could the moral values of a single person or culture be chosen for all humanity.”
This quote raises a great point about the differences in ethics between humans
and robots. Because robots are not shaped by lived experience, their ethics must
be developed by humans, but because humans have many different experiences,
choosing a set of values to instill in such a universal creation is difficult.
“An intelligence will consume all possible resources in achieving its goals, unless
its goals specify otherwise. If a superintelligence does not have terminal values
that specifically optimize for human well-being, then it will compete for resources
that humans need, and since it is, by hypothesis, much more powerful than
humans, it will succeed in monopolizing all resources.” This quote explains the
importance of monitoring robot growth. As superintelligent beings, they could
easily outcompete humans for any resource, so we must exercise caution from the
earliest developmental stages.
“…enforcing mutual cooperation through laws, has no inherent significance if a
single intelligence is far more powerful than the entire state. Thus, direct reward
and punishment will not be sufficient to cause all superhuman AIs to cooperate.”
Because our laws and codes of conduct mean nothing to robots, they have no
reason to abide by them. Because machines are completely different entities, we
must find another way to control them.
“The Artificial Intelligence Confinement Problem is defined as the challenge of
restricting an artificially intelligent entity to a confined environment from which it
can’t exchange information with the outside environment via legitimate or covert
channels, unless such information exchange is authorized by the confinement
authority.” Containing AIs in a virtual environment is the safest way to conduct
research on the nature of these intelligent machines. However, because of their
intelligence, these machines could break the protocol of these constructed spaces
and escape. Therefore, creating a securely closed environment that ensures
the intelligence is locked in is key.
“…intelligent machines designed to serve us should not be designed to have
human-like characteristics. They should not desire freedom, social status, and
other human values; they should not feel suffering and pain as qualia and in
general they should not have those features that make us ascribe rights to
humans.” Robots should not be created to be like humans, so as not to form a strong
emotional bond between humans and robots that may endanger some aspect of
human life. This quote points out that robots should be as far removed from being
human as possible in order to keep humans as safe as possible.
Questions & Conclusions: This source gives the counterargument to
developing machine ethics and raises potent questions and solutions for the
difficulty of governing superintelligent machines. It makes a strong point about
safety engineering, which seems to address differences between robots and humans
not accounted for in other articles.