Three Hard Problems (for computers)
True or false?
1. Machine intelligence will never equal human intelligence.
2. Humans are the most intelligent things that God could possibly create.
3. Humans are really just a very sophisticated machine.
4. Humans have never created anything that can outperform humans.
5. Some machines are already more intelligent than some humans.
What is Intelligence?
What (who) is intelligent?
World's First “Automated” Chess Player
The Turk (von Kempelen)
Sample Conversation with Eliza
Eliza: Please tell me your problem.
Me: I'm overwhelmed
Eliza: Did you come to me because you are overwhelmed?
Me: I did
Eliza: Say, do you have any psychological problems?
Me: I'm paranoid delusional
Eliza: How long have you been paranoid delusional?
Me: About three decades
Eliza: What does that suggest to you?
Me: That I'm in big trouble
Eliza: Do you believe it is normal to be in big trouble?
Me: Only if one is abnormal
Eliza: Are you saying no just to be negative?
Me: I didn't say no
Eliza: You are being a bit negative.
Me: What does that suggest to you?
Eliza: We were discussing you -- not me.
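Eliza's trick is shallow pattern matching: find a keyword pattern in the user's input and echo the captured words back inside a canned template. The following is a minimal sketch in Python; the three rules are invented for illustration (Weizenbaum's original script was far larger and also transformed pronouns).

```python
import re

# Each rule is (pattern, response template). The first matching rule wins;
# the catch-all at the end guarantees a reply. These rules are invented
# for illustration, not Weizenbaum's actual script.
RULES = [
    (re.compile(r"i'?m (.*)", re.I), "Did you come to me because you are {0}?"),
    (re.compile(r"i (.*)", re.I),    "Why do you say you {0}?"),
    (re.compile(r"(.*)", re.I),      "What does that suggest to you?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*match.groups())

print(respond("I'm overwhelmed"))  # Did you come to me because you are overwhelmed?
```

The point of the sketch is how little machinery produces the conversation above: no model of the user, no memory, just string substitution.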
What (who) is intelligent?
Where is the intelligence?
http://www.primates.com/chimps/chimpanzee-picture.jpg
http://upload.wikimedia.org/wikipedia/commons/4/4e/Human_and_chimp_brain.png
Some ways intelligence is measured
• IQ test
• Nobel prize
• SAT, ACT, grades
• Loquaciousness
• Silence (“better to remain silent…”)
• Success
• Survival
Can a machine be intelligent?
[Diagram: Area A, Area B, Area C: Schema, GM/NASA Robot, Watson]
Intelligence
Turing Test
A Brief History of Machine Intelligence
1653: Pascaline (Pascal)
1739: Artificial duck (Vaucanson)
1770: “Automated chess” (von Kempelen)
1879: Difference engine (Babbage)
1946: Eniac (Mauchly and Eckert)
1983: Machine of the Year (Time)
1997: Commander Data
In a Galaxy, Far, Far Away: R2D2
Consider the robots in various Sci-Fi movies…
What technical problems must be solved in order to have robots with the
capabilities presented in these movies?
(a) Voice recognition
(b) Realistic speech synthesis (with inflection)
(c) Natural language understanding
(d) 3D vision processing
(e) Extensive, readily accessible knowledge base
(f) Rapid learning ability
(g) Commonsense reasoning
(h) Power supply to sustain long-lasting and powerful motion
(i) Durable components
A few real robots…
http://asimo.honda.com/asimotv/
Humor12.com
http://i.livescience.com/images/080418-human-brain-02.jpg
Intelligence
The creative use of acquired knowledge in a
variety of environmentally constrained
situations.
This presumes the following:
hierarchical knowledge base, associative
memory, learning, symbol processing, concept
formation, problem solving, use of rules, creative
generalization, autonomy, multi-faceted
capabilities
“I believe that understanding intelligence involves understanding
how knowledge is acquired, represented, and stored; how
intelligent behavior is generated and learned; how motives, and
emotions, and priorities are developed and used; how sensory
signals are transformed into symbols; how symbols are
manipulated to perform logic, to reason about the past, and plan
for the future; and how the mechanisms of intelligence produce
the phenomena of illusion, belief, hope, fear, and dreams—and
yes even kindness and love. To understand these functions at a
fundamental level, I believe, would be a scientific achievement
on the scale of nuclear physics, relativity, and molecular
genetics.” (James Albus)
Possible Objections
Computing Machinery and Intelligence (A. M. Turing, 1950)
• (1) The Theological Objection Thinking is a
function of man's immortal soul. God has given
an immortal soul to every man and woman, but
not to any other animal or to machines. Hence no
animal or machine can think.
• (2) The 'Heads in the Sand' Objection "The
consequences of machines thinking would be too
dreadful. Let us hope and believe that they
cannot do so."
• (3) The Mathematical Objection There are a
number of results of mathematical logic which
can be used to show that there are limitations to
the powers of discrete-state machines.
Possible Objections
(continued)
• (4) The Argument from Consciousness This
argument is very well expressed in Professor
Jefferson's Lister Oration for 1949, from which I
quote. "Not until a machine can write a sonnet or
compose a concerto because of thoughts and
emotions felt, and not by the chance fall of
symbols, could we agree that machine equals
brain-that is, not only write it but know that it had
written it. No mechanism could feel (and not
merely artificially signal, an easy
contrivance) pleasure at its successes, grief
when its valves fuse, be warmed by flattery, be
made miserable by its mistakes, be charmed by
sex, be angry or depressed when it cannot get
what it wants."
Possible Objections
(continued)
(5) Arguments from Various Disabilities These
arguments take the form, "I grant you that you
can make machines do all the things you have
mentioned but you will never be able to make
one to do X". Numerous features X are
suggested in this connection. I offer a selection:
Be kind, resourceful, beautiful, friendly, have
initiative, have a sense of humor, tell right from
wrong, make mistakes, fall in love, enjoy
strawberries and cream, make some one fall in
love with it, learn from experience, use words
properly, be the subject of its own thought, have
as much diversity of behaviour as a man, do
something really new.
Possible Objections
(continued)
• (6) Lady Lovelace's Objection Our most detailed
information of Babbage's Analytical Engine comes from a
memoir by Lady Lovelace. In it she states, "The Analytical
Engine has no pretensions to originate anything. It can
do whatever we know how to order it to perform" (her
italics).
• (7) Argument from Continuity in the Nervous System The nervous
system is certainly not a discrete-state machine. A small error in the
information about the size of a nervous impulse impinging on a neuron
may make a large difference to the size of the outgoing impulse. It may
be argued that, this being so, one cannot expect to be able to mimic the
behaviour of the nervous system with a discrete-state system.
Possible Objections
(continued)
• (8) The Argument from Informality of Behaviour It is not
possible to produce a set of rules purporting to describe
what a man should do in every conceivable set of
circumstances.
• (9) The Argument from Extra-Sensory Perception …
These disturbing phenomena seem to deny all our usual
scientific ideas. How we should like to discredit them!
Unfortunately the statistical evidence, at least for telepathy,
is overwhelming. It is very difficult to rearrange one's ideas
so as to fit these new facts in... The idea that our bodies
move simply according to the known laws of physics,
together with some others not yet discovered but
somewhat similar, would be one of the first to go. This
argument is to my mind quite a strong one.
Three AI debates (Franklin)
1. Is AI even possible?
2. How should it be done?
3. Representations or not?
Is AI Possible?
Strong AI: Appropriately programmed computers have cognitive
states (i.e., are minds); programs are cognitive theories
Weak AI: Computers are only/just tools for the study of the mind
A proposal for the Dartmouth summer research project on Artificial Intelligence
“We propose that a 2 month, 10 man study of artificial intelligence be
carried out during the summer of 1956 at Dartmouth College in
Hanover, New Hampshire. The study is to proceed on the basis of
the conjecture that every aspect of learning or any other feature
of intelligence can in principle be so precisely described that a
machine can be made to simulate it. An attempt will be made to
find how to make machines use language, form abstractions and
concepts, solve kinds of problems now reserved for humans,
and improve themselves. We think that a significant advance can
be made in one or more of these problems if a carefully selected
group of scientists work on it together for a summer.”
J. McCARTHY, Dartmouth College
M.L. MINSKY, Harvard University
N. ROCHESTER, I.B.M Corporation
C.E. SHANNON, Bell Telephone Laboratories
August 31, 1955
Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005
Main topics for discussion at the AI conference, Dartmouth College 1956:
1. Automatic Computers
2. How Can a Computer be Programmed to Use a Language
3. Neuron Nets
4. Theory of the Size of a Calculation
5. Self-Improvement (Machine Learning)
6. Abstractions
7. Randomness and Creativity
Some Hard Problems
What do you see?
Interpret…
• The spaceship photographed Seattle flying to Mars.
• Time flies like an arrow…
Four Hard Problems
For humans:
1. Calculus
2. Chess
3. Perfect recall
4. Constructing precise algorithms for difficult problems
For computers:
1. Vision
2. Natural Language Processing
3. Commonsense Reasoning
4. Generalization
Moravec, H. (1998). “When will computer hardware match the human brain?” Journal of Evolution and Technology, Vol. 1.
Moravec, H. (2003). “Robots After All.” Communications of the ACM, October.
What is the key missing ingredient?
• Speed? (Moravec)
• Knowledge? – (Cyc/Lenat)
• Algorithm? – (e.g., vision, language)
Some Questions…
• What can machines learn?
• How can machines have emotions?
• Can machines ever be conscious?
• If we can build intelligent machines, should we?
• What is the future of mankind in a world of intelligent machines?
Ingredients for an intelligent system
1. perception/sensory processing (e.g., vision)
2. action (locomotion/mobility, reaching/grasping)
3. memory (learning, knowledge representation)
4. thought (reasoning, problem solving, planning, prediction, decision making, concept formation, categorization, generalization)
5. attention
6. motivation
7. language
8. creativity
9. consciousness
AI and Big Questions
A. What does it mean to be human?
B. What is intelligence?
C. Is intelligence inherently limited?
D. How does meaning arise from mindless mechanisms?
E. What defines purpose?
F. What is creativity?
G. Do we search the space of possibilities or do we create it?
H. How can mechanistic views of humans be reconciled with perspectives of meaning and value?
I. What is consciousness?
J. What does it mean to understand something?
K. What will advances in artificial intelligence/life mean for humans?
L. What are the essential differences in humans and other species?
M. How do we categorize things?
N. Why do we ask big questions?
“Embodiment”
http://www.is.umk.pl/~duch/Wyklady/komput/w12/cog_shop_research.html
Cog and Rodney: Which is the “person”? (Anne Foerst)
Is AI Possible? (Continued)
John Searle attacks the following:
1) that the appropriately programmed computer has cognitive states
2) that the programs explain human cognition
3) ELIZA, SHRDLU, or "any Turing machine simulation of human mental
phenomena" as such
Weapon: The Chinese Room Experiment
Searle, J. (1980). “Minds, Brains, and Programs.” The Behavioral and Brain Sciences, vol. 3.
Minds, Brains, and Programs
1. The Systems Reply
Person does not understand the Chinese story, but system does...
Searle: Person can internalize deciphering symbols and scratchpad,
do calculations in his head (sans room), but he still does not
understand Chinese. Since the system is now in him, neither does it.
2. The Robot Reply
A different kind of program drives a robot which interacts with the
world, thus really understanding the Chinese story (with the requisite
mental states)...
Searle: This approach "tacitly concedes that cognition is not solely a
matter of formal symbol manipulation." Furthermore, perceptual and
motor skills add nothing to understanding or intentionality. We can
extend the original thought experiment to make the human the robot's
homunculus, but the human still doesn't understand.
3. The Brain Simulator Reply
Our (now parallel) program simulates neural activity in the brain of a
person who understands Chinese...
Searle: Doesn't this beg the question? Isn't strong AI supposed to be
about any program working? (i.e., the idea that we should be able to
understand mind without understanding the brain). "If we had to know
how the brain worked in order to do AI, we wouldn't bother with AI."
4. The Combination Reply
Just combine the three previous replies (i.e., a super-duper Turing test
passing neural robot)...
Searle: We probably would ascribe intentionality to the device in the
absence of information about how it worked, but this doesn't help
strong AI, since we are basing our judgment on looks and behavior,
not on formal programs alone. "If we knew independently how to
account for its behavior without such assumptions, we would not
attribute intentionality to it, especially if we knew it had a formal
program."
5. The Other Minds Reply
People only know that other people understand anything (e.g.,
Chinese) by their behavior, and since a (hypothetical) computer can
pass the requisite behavior tests, we must say it is cognitive...
Searle: The issue is not about how one knows that others have
cognitive states, but what is involved in attributing cognitive states to
them (amen). Must be more than just computational processes and
related output.
6. The Many Mansions Reply
Arguments against strong AI only apply to current technology…
Searle: Redefines strong AI "as whatever artificially produces and
explains cognition." One can't be expected to argue against a
changing hypothesis...
The Computational Complexity Reply
Penrose: "there might be some `critical' amount of complication in an
algorithm which it is necessary to achieve in order that the algorithm
exhibit mental qualities." (The Emperor's New Mind p.20)
Searle: not addressed in original article – Elsewhere, uses a team of
non-Chinese speaking people
The Searle Doesn't Understand What It Means To Understand
Reply
Ok, so he’s not here to defend himself! – We probably don’t
understand this, either… However, what is understanding? (Penrose
touches on this lightly, but not adequately p.19) – Is it a label? - a
feeling? self-awareness? Compression (Baum)? What is a mental or
cognitive state?
How Should AI Be Done?
Top down: Psychological (behavioral) level
Bottom up: Physiological (neural) level
Emergence…
Representations or Not?
Of course… (for any non-trivial level of intelligence)
The real issues are
1. to what extent the representations must exist from the outset (nature) versus being learned (nurture), and
2. whether the representations are implicit or explicit.
Search
Some Maze-Following Algorithms
Pheromone trail
Traveler deposits “pheromones” as it traverses the maze and always takes the
path containing the least amount of pheromone (using a clockwise or random
check when faced with a choice of paths having equal amounts of pheromone)
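A minimal sketch of the pheromone-trail strategy in Python, assuming the maze is given as an adjacency map whose neighbor lists are in clockwise order; the node names and the example maze are invented for illustration.

```python
from collections import defaultdict

# Sketch of the pheromone-trail strategy. Each corridor (edge) carries a
# pheromone count; the traveler always takes the least-marked corridor,
# breaking ties by clockwise order (earliest-listed neighbor).
def pheromone_walk(maze, start, exit_node, max_steps=100):
    pheromone = defaultdict(int)          # pheromone count per corridor
    position, path = start, [start]
    for _ in range(max_steps):
        if position == exit_node:
            return path
        nxt = min(maze[position],
                  key=lambda n: (pheromone[frozenset((position, n))],
                                 maze[position].index(n)))
        pheromone[frozenset((position, nxt))] += 1   # deposit a pheromone
        position = nxt
        path.append(position)
    return None                           # step budget exhausted

maze = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(pheromone_walk(maze, "A", "D"))    # ['A', 'B', 'D']
```

Because pheromone accumulates on corridors already tried, the traveler is pushed away from over-visited dead ends without needing any explicit memory of its route.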
Stack memory
• Traveler remembers the choice made at each branch by placing it on a stack
• Processes choices in clockwise fashion
• Backtracks at dead ends to the last choice point and tries any untried path(s)
• Updates the stack as the maze is traversed
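The stack-memory strategy can be sketched in Python as an explicit-stack backtracking search, again assuming the maze is an adjacency map with clockwise neighbor lists (node names invented for illustration).

```python
# Sketch of the stack-memory strategy: push the choice made at each branch,
# and at a dead end pop back to the last branch that still has untried paths.
def stack_search(maze, start, exit_node):
    stack = [(start, iter(maze[start]))]  # (node, untried choices)
    visited = {start}
    while stack:
        node, untried = stack[-1]
        if node == exit_node:
            return [n for n, _ in stack]  # the stacked choices are the path
        for nxt in untried:
            if nxt not in visited:        # take the next clockwise choice
                visited.add(nxt)
                stack.append((nxt, iter(maze[nxt])))
                break
        else:
            stack.pop()                   # dead end: backtrack
    return None                           # exit unreachable

maze = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
print(stack_search(maze, "A", "D"))       # ['A', 'C', 'D']
```

Note that when the exit is found, the stack itself records the successful route, which is exactly the "remembering" this strategy buys over the memoryless alternatives.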
Wall to right (or left)
Traveler always makes turns at dead ends or choice points that keep the wall to its
right (or left)
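A sketch of the right-hand-rule version on a character-grid maze ('#' = wall); the grid layout, start, and goal below are invented for illustration. At each step the traveler prefers to turn right, then go straight, then turn left, then reverse, which keeps a wall on its right side.

```python
# Sketch of the wall-to-right strategy on a grid maze. heading is a
# (row delta, col delta) pair; rotating (dr, dc) clockwise gives (dc, -dr).
def wall_follow(grid, start, goal, heading=(0, 1), max_steps=200):
    r, c = start
    path = [start]
    for _ in range(max_steps):
        if (r, c) == goal:
            return path
        dr, dc = heading
        right = (dc, -dr)                 # 90 degrees clockwise
        # Preference order: right, straight, left, back.
        for dr2, dc2 in (right, heading, (-right[0], -right[1]), (-dr, -dc)):
            nr, nc = r + dr2, c + dc2
            if grid[nr][nc] != "#":
                heading = (dr2, dc2)
                r, c = nr, nc
                path.append((r, c))
                break
    return None                           # step budget exhausted

maze = ["#####",
        "#...#",
        "#.#.#",
        "#####"]
print(wall_follow(maze, (1, 1), (2, 3)))
```

The returned path revisits cells after dead ends, which is the expected cost of this strategy: it needs no memory at all, but it only works on mazes whose walls are all connected to the outer boundary (loops can trap it).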
Clone
Traveler clones itself at each branch and the maze is processed in parallel. At
least one version of the traveler will find the exit.
Depth-first or breadth-first search
Treat the branches as nodes (vertices) and the paths as arcs (edges) and perform
a depth-first or breadth-first search on the resulting graph.
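The graph view can be sketched as a breadth-first search in Python, assuming the branch points are given as an adjacency map (the example graph is invented for illustration). BFS explores level by level, so the first path that reaches the exit uses the fewest corridors; it also behaves like the clone strategy, since the whole frontier advances in lockstep.

```python
from collections import deque

# Sketch of breadth-first search on the maze-as-graph: vertices are branch
# points, edges are corridors. The queue holds partial paths from the start.
def bfs_path(graph, start, exit_node):
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == exit_node:
            return path                   # shortest path in corridor count
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                           # exit unreachable

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(bfs_path(graph, "A", "D"))          # ['A', 'B', 'D']
```

Swapping the `deque` for a stack (pop from the same end you push) turns this into depth-first search, which is essentially the stack-memory strategy restated on the graph.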
Additional Considerations
Paths of varying widths
Loops
General route detection and marking schemes
(for variable width paths)
Other obstacles (barriers in path, predators, etc.)
Learning maze following behaviors…
Learning (remembering) the most efficient path
Interesting questions
How would a human solve a maze?
What aspects of this problem are pertinent to modeling human wayfinding?
How is maze following indicative of the more general problem of “search”?