Philosophical issues of artificial intelligence
Alexey Melnikov
Institute for Theoretical Physics, University of Innsbruck
Philosophy of Scientific Computing, December 1, 2016, Innsbruck
Outline
✤ Interaction between philosophy and artificial intelligence
✤ Two big philosophical questions of artificial intelligence
✤ Risks of developing artificial intelligence
Alexey Melnikov. Philosophical issues of artificial intelligence
How can philosophy help AI?
Philosophy plays a relevant role for AI in clarifying its goals and methods.
Philosophers have been around far longer than computers, and they have been trying to resolve many of the same questions that artificial intelligence (AI) research is addressing:
✤ How can minds work?
✤ How do human minds work?
✤ Can nonhumans have minds?
“Much of AI already builds on works by philosophers” (Sloman, 1995)
Philosophy has an influence on AI both from a historical and a
methodological point of view.
V. Schiaffonati. Minds and Machines 13: 537–552 (2003)
How can AI help philosophy?
AI offers philosophy powerful tools for answering a variety of questions.
Some philosophers of science use the computational approach
provided by AI, partly because it has the tools to give detailed, causal
explanations of intelligent behavior.
Philosophers are making philosophical claims based on computer
simulations of machine learning agents.
General intelligence
Artificial intelligence algorithms are becoming better in tasks that were
considered to be very difficult for a computer program.
Handwriting recognition
Classification of images
F. Zamora-Martínez et al., Pattern Recogn. 47, 1642 (2014)
Y. Jia et al., in Proc. of the 22nd ACM MM, 675 (2014)
General intelligence
Robot adapts to injury
Self-driving cars
A. Cully et al., Nature 521, 503 (2015)
google.com/selfdrivingcar
General intelligence
Playing video games
Mastering the game of Go
V. Mnih et al., Nature 518, 529 (2015)
D. Silver et al., Nature 529, 484 (2016)
General intelligence
Current AI algorithms can only solve specific, externally-provided
problems.
Can a single AI agent be good at several different tasks?
Can a machine solve any problem that a person would
solve by thinking?
Can a machine display general intelligence?
The first big question: weak AI
Can a machine be intelligent?
A positive answer to this question would establish the existence of so-called weak AI.
Similar questions:
✤ Can a machine act intelligently?
✤ Can a machine think?
✤ Can a machine solve any problem that a person would solve by thinking?
S. Russell and P. Norvig. Artificial intelligence: A Modern Approach, 3rd edition (Prentice Hall, 2009)
The first big question: weak AI
Arguments supporting the existence of weak AI (the existence of intelligent machines):
✤ Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it (Dartmouth proposal).
✤ The mind can be viewed as a device operating on bits of information according to formal rules (Dreyfus).
✤ AI in computer science is usually defined as the quest for the best agent program on a given architecture (Russell, Norvig).
Weak AI is by definition possible: for any digital architecture with k bits of program storage there are exactly 2^k agent programs, and all we have to do to find the best one is enumerate and test them all.
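The enumeration argument can be written out directly. Everything concrete below, the agent encoding, the toy environment, and the scoring function, is an illustrative assumption and not part of the original argument:

```python
def enumerate_best_agent(k, score):
    """Brute-force the argument: a digital architecture with k bits of
    program storage admits exactly 2**k distinct programs, so we can
    enumerate and test them all to find the best one."""
    best_program, best_score = None, float("-inf")
    for n in range(2 ** k):
        program = [(n >> i) & 1 for i in range(k)]  # the n-th k-bit program
        s = score(program)
        if s > best_score:
            best_program, best_score = program, s
    return best_program, best_score

# Toy environment (an assumption for illustration): the program is a
# lookup table over 3 percepts, and the agent is rewarded for echoing
# each percept's parity back.
def echo_score(program):
    return sum(program[p] == p % 2 for p in range(len(program)))

best, best_score = enumerate_best_agent(3, echo_score)
print(best, best_score)  # [0, 1, 0] 3
```

Of course, the brute force is only a possibility proof: for any realistic k the search space of 2^k programs is astronomically large, so the argument establishes existence, not practicality.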
The first big question: weak AI
Arguments against the existence of weak AI:
✤ AI in computer science uses symbolic planning, automated theorem proving, computer vision, machine learning, and data mining. Once we understand how to solve a problem using these techniques, it is no longer considered to require intelligence. AI never gets credit for its achievements (Piater).
✤ Scaling up machine learning does not lead to human-level AI (no general intelligence).
✤ Human intelligence relies not only on conscious symbolic manipulation, but also on unconscious instincts, which would never be captured by formal rules (Dreyfus).
The first big question: weak AI
Similar question:
Can a machine think?
“The question of whether Machines Can Think is about as relevant as
the question of whether Submarines Can Swim” (Dijkstra, 1984).
The practical possibility of "thinking machines" has been with us for only about 50 years, not long enough for speakers of English to settle on a meaning for the word "think": does it require "a brain", or just "brain-like parts"?
Turing test
Can a machine be intelligent?
Can a machine act intelligently?
Can a machine think?
Instead of asking these difficult questions, we should perhaps ask whether machines can pass a behavioral intelligence test (Turing).
If a machine acts as intelligently as a human,
then it is as intelligent as a human
https://commons.wikimedia.org/wiki/File:Turing_Test_Version_3.svg
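The behavioral test is, at bottom, a protocol, which a short sketch can make concrete. Every name below (the respondents, the judge, the question list) is an invented placeholder, not Turing's own formulation:

```python
import random

def imitation_game(reply_human, reply_machine, guess_machine, questions):
    """One round of the imitation game: a judge questions two hidden
    respondents, then guesses which one is the machine. Returns True
    if the judge identifies the machine correctly."""
    labels = ["A", "B"]
    random.shuffle(labels)  # hide which label is human and which is machine
    respondents = dict(zip(labels, [reply_human, reply_machine]))
    transcript = {label: [respondents[label](q) for q in questions]
                  for label in ["A", "B"]}
    return respondents[guess_machine(transcript)] is reply_machine

# If the machine's answers are behaviorally indistinguishable from the
# human's, the judge can only guess, and is right about half the time:
# the machine passes the test.
human = lambda q: q.upper()
machine = lambda q: q.upper()                      # identical behavior
judge = lambda transcript: random.choice(["A", "B"])
trials = 10_000
hits = sum(imitation_game(human, machine, judge, ["hello"])
           for _ in range(trials))
print(0.4 < hits / trials < 0.6)  # True: no better than chance
```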
Turing test. Some criticism
If a machine acts as intelligently as a human,
then it is as intelligent as a human
The Turing test is explicitly anthropomorphic
(human behavior is attributed to non-human
entities).
Aeronautical engineers do not define the
goal of their field as “making machines that
fly so exactly like pigeons that they can fool
other pigeons” (Russell, Norvig).
http://csunplugged.org/the-turing-test/
http://benniemols.blogspot.co.at/2010_10_01_archive.html
https://commons.wikimedia.org/wiki/File:Turing_Test_Version_3.svg
The second big question: strong AI
Can a machine act intelligently by actually
thinking (not just by simulating thinking)?
A positive answer to this question would establish the existence of so-called strong AI.
Similar questions:
✤ Can a machine have a mind, mental states, and consciousness in the same way that a human being can?
✤ Can it feel how things are?
✤ Are human intelligence and machine intelligence the same?
✤ Is the human brain essentially a computer?
S. Russell and P. Norvig. Artificial intelligence: A Modern Approach, 3rd edition (Prentice Hall, 2009)
The second big question: strong AI
The position of AI researchers
“Most AI researchers take the weak AI hypothesis for granted, and
don't care about the strong AI hypothesis — as long as their
program works, they don't care whether you call it a simulation of
intelligence or real intelligence” (Russell, Norvig).
The second big question: strong AI
Many philosophers worry that a machine that passes the Turing test would still not actually be thinking, but would only be simulating thinking.
On the other hand, why should we insist on a higher standard for
machines than we do for humans? After all, in ordinary life we
never have any direct evidence about the internal mental states of
other humans.
Chinese Room argument
Imagine a hypothetical system that is clearly running a program
and passes the Turing test, but that equally clearly does not
understand anything of its inputs and outputs (Searle, 1980).
The system consists of a human who understands only English, equipped with a rule book written in English and stacks of slips bearing symbols that are meaningless to him. The instructions may include writing symbols on new slips of paper, finding symbols in the stacks, rearranging the stacks, and so on. Eventually, the instructions will cause one or more symbols to be transcribed onto a piece of paper that is passed back to the outside world.
http://theness.com/neurologicablog/index.php/ai-and-the-chinese-room-argument/
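The room's mechanics reduce to rule-following over uninterpreted symbols, which a few lines make vivid. The rule table and the symbol names are invented placeholders:

```python
# The "rule book" of Searle's room, reduced to a lookup table: purely
# syntactic rules mapping incoming symbols to outgoing symbols.
# Nothing in the system represents what any symbol means.
RULE_BOOK = {
    "SYMBOL-1": "SYMBOL-7",
    "SYMBOL-2": "SYMBOL-4",
    "SYMBOL-3": "SYMBOL-9",
}

def chinese_room(slip):
    """Follow the rule book mechanically: find the incoming symbol and
    transcribe the symbol the rules dictate onto the outgoing slip."""
    return RULE_BOOK[slip]

print(chinese_room("SYMBOL-2"))  # SYMBOL-4: a fluent reply, zero understanding
```

Searle's point is that executing such a table, however large, is all the human in the room ever does, yet from the outside the replies can look perfectly competent.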
Risks of developing AI
✤ We develop an AI that can learn complex concepts
✤ We develop an AI that can learn more complex concepts than we can
✤ An AI develops an AI that can learn more complex concepts than its creator can
✤ The latter AI does the same thing, only better and faster
✤ We can no longer understand what the AI does
✤ We can no longer control the AI
The Singularity
An AI develops a better AI, which develops an even better AI faster, and so on …
Risks of developing AI
✤ People might lose their jobs to machines
✤ AI machines might be used toward undesirable ends
✤ The use of AI machines might result in a loss of accountability
✤ The success of AI might mean the end of humans
OR
✤ Robots will take over all our labor, and we will have more leisure time
✤ Diseases will be eliminated
✤ Death will no longer be inevitable
✤ There will be no wars, hunger, or natural and human-made environmental disasters
Thank you for your attention!