G52HPA: History and Philosophy of Artificial Intelligence
Lecture 2: Introduction to AI
Tony Pridmore and Natasha Alechina
School of Computer Science
{tpp,nza}@cs.nott.ac.uk

Outline of this lecture
• what is intelligence
• is AI possible in principle
  – philosophical implications of AI
• is AI possible in practice
• history of AI
• future prospects
What are AI programs?
• AI as a field is now 50 years old
• much has been accomplished, e.g., playing chess, expert systems, autonomous cars, etc.
• however, the status of these results is less clear—are they:
  – examples of intelligence
  – simulations of intelligence
  – ‘just’ computer programs …
• what can be accomplished in AI?

Starting from the beginning
• what is intelligence?
• is it the sort of thing that could be artificial?
  – if so, what are the philosophical implications?
• even if intelligence could be artificial, is an artificial intelligence feasible?
• clues from the history of AI and from philosophy
• implications for the future of AI
What is intelligence?
Examples of tasks requiring intelligence
• playing chess
• passing a (high school) chemistry exam
• planning space missions
• giving legal advice
• translating spoken English into spoken Swedish
• booking a holiday
• assembling flat-pack furniture
• learning how to play table tennis
What is intelligence?
• would a system that can perform these tasks be intelligent?
• even if it were, does this mean that general (human-level) AI is possible?
• if not, is there some other list of tasks which we would be happy to equate with human-level AI?

Is AI possible in principle?
• it could be that (Strong) AI is impossible in principle
  – there may be something about intelligence which means that no artificial system could be intelligent—souls or quantum microtubules …?
  – or that no artificial system could be intelligent in the same way a person is
• conversely, if AI is possible in principle, what does this imply about human intelligence?
Philosophy of AI
Accepting that (Strong) AI is possible in principle has implications for a wide range of philosophical issues:
• intentionality—how thoughts and other mental content can be about something
• what it means to ‘know’ or ‘learn’ something
• what it means to be responsible for an action
• what it means to be conscious
• and many others …

The Chinese room
• imagine a room containing a person who understands only English
• the room contains a rule book (written in English), and various stacks of paper—some blank, some with indecipherable inscriptions
• there is a small opening in the wall through which come slips of paper with indecipherable symbols
• the human finds matching symbols in the rule book and follows the instructions, which may include writing symbols on new bits of paper, finding symbols in the stacks, rearranging the stacks, etc.
• eventually the instructions tell the human to write one or more symbols on a piece of paper and pass it through the opening in the wall
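To make the setup concrete, here is a minimal program-like sketch of the room (an illustration added here, not part of the lecture): the ‘rule book’ is simply a lookup table from input symbol strings to output symbol strings, and the ‘person’ applies it mechanically. The phrases used are placeholders chosen for the example.

    # A minimal illustrative sketch (not from the lecture) of the Chinese room
    # as a program: the "rule book" is a table mapping input symbol strings to
    # output symbol strings, and the "person" applies it by blind lookup.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气好吗？": "今天天气很好。",
    }

    def chinese_room(slip: str) -> str:
        """Follow the rule book mechanically; no step involves understanding the symbols."""
        return RULE_BOOK.get(slip, "请再说一遍。")  # default reply if no rule matches

    print(chinese_room("你好吗？"))

Nothing in this lookup involves understanding the symbols, which is exactly the intuition the next slide builds on.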
The Chinese room and Strong AI
• now assume the symbols are Chinese characters
• from the outside, we see a system that is taking input in the form of Chinese sentences, and generating answers in Chinese that are obviously “intelligent”
• the human plays the role of the CPU, the rule book is the program and the stacks of paper are memory
• but nothing in the room understands Chinese
• running the right program does not necessarily generate understanding

Searle’s argument
1. certain kinds of objects are incapable of conscious understanding (of Chinese)
2. the person, paper and rule books are objects of this kind
3. if each of a set of objects is incapable of conscious understanding, then any system constructed from the objects is incapable of conscious understanding
4. therefore there is no conscious understanding of Chinese in the Chinese room
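The shape of the argument can be rendered schematically (this formalisation is an editorial summary, not Searle’s own wording), writing K(x) for “x is an object of the relevant kind” and U(x) for “x is capable of conscious understanding”:

\begin{align*}
\text{P1: } & \forall x\,\bigl(K(x) \rightarrow \neg U(x)\bigr) \\
\text{P2: } & K(\mathrm{person}) \wedge K(\mathrm{paper}) \wedge K(\mathrm{rulebook}) \\
\text{P3: } & \bigl(\forall x \in S:\ \neg U(x)\bigr) \rightarrow \neg U\bigl(\mathrm{system}(S)\bigr) \\
\text{C: }  & \therefore\ \neg U\bigl(\mathrm{system}(\{\mathrm{person},\mathrm{paper},\mathrm{rulebook}\})\bigr)
\end{align*}

Written this way, premise P3 (the composition step) is the one the Systems Reply below rejects.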
Systems reply to Searle
• if each of a set of objects is incapable of conscious understanding, then any system constructed from the objects is incapable of conscious understanding (premise 3 of Searle’s argument)
  – but humans are composed of molecules: if molecules are incapable of conscious understanding, humans are incapable of conscious understanding
• variant of the “Systems Reply” to Searle—although the human does not understand Chinese, the entire system (human, rule book and paper) does understand Chinese

Is AI possible in practice?
• even if AI is possible in principle, it may be impossible in practice
  – computational requirements
  – the effort required to program it
• what are the hard problems in AI?
  – some theoretical results (everything is “AI-complete”)
  – clues from the history of AI—what has turned out to be easy and what has turned out to be (surprisingly) hard
History of AI
• the understanding of intelligent behaviour in animals, humans and artificial systems was the original and is the ultimate goal of AI
• early AI projects combined several capabilities, such as sensing, problem-solving and action, in a single system
• individual components, e.g., problem-solving, often stressed “universal methods”

A selective history of AI
The history of AI can be broken down into three main phases:
• initial focus on ‘universal’ solutions
• fragmentation into sub-disciplines
• (partial) re-integration of results from sub-disciplines
General Problem Solver
• GPS (Newell & Simon 1961) solved simple puzzles (theorem proving, cryptarithmetic, etc.) using means-ends analysis
• designed to imitate human problem-solving methods
• the order in which the program considered subgoals, and the actions it performed, were similar to the way humans solved the same problem
• typical of the weak methods used in the early period of AI
• weak methods use general search techniques to combine simple problem-solving steps into complete solutions
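As a minimal sketch of the weak-method idea (an illustration, not GPS itself): a domain-independent means-ends loop picks an operator relevant to the difference between the current state and the goal, achieves that operator’s preconditions as subgoals, and chains the resulting steps into a plan. The toy ‘tea-making’ operators below are invented for the example.

    # Means-ends analysis over a toy state space (illustrative, not GPS).
    # States are sets of facts; operators are generic (pre, add, delete) triples.
    from collections import namedtuple

    Operator = namedtuple("Operator", ["name", "pre", "add", "delete"])

    OPERATORS = [
        Operator("boil-water", {"have-kettle"}, {"hot-water"}, set()),
        Operator("add-teabag", {"have-cup"}, {"teabag-in-cup"}, set()),
        Operator("pour-water", {"hot-water", "teabag-in-cup"}, {"tea-made"}, {"hot-water"}),
    ]

    def apply_plan(state, plan):
        """Apply a sequence of operator names to a state."""
        ops = {op.name: op for op in OPERATORS}
        for name in plan:
            op = ops[name]
            state = (state | op.add) - op.delete
        return state

    def means_ends(state, goal, depth=10):
        """Return a list of operator names achieving every goal fact, or None."""
        difference = goal - state
        if not difference:
            return []                      # nothing left to achieve
        if depth == 0:
            return None                    # give up rather than recurse forever
        for fact in difference:
            for op in OPERATORS:
                if fact in op.add:         # operator relevant to this difference
                    plan_pre = means_ends(state, op.pre, depth - 1)   # achieve preconditions first
                    if plan_pre is None:
                        continue
                    new_state = (apply_plan(state, plan_pre) | op.add) - op.delete
                    plan_rest = means_ends(new_state, goal, depth - 1)
                    if plan_rest is not None:
                        return plan_pre + [op.name] + plan_rest
        return None

    start = {"have-kettle", "have-cup"}
    print(means_ends(start, {"tea-made"}))
    # e.g. ['add-teabag', 'boil-water', 'pour-water'] (order may vary)

The only “knowledge” used here is the difference between state and goal, which is why such methods are general but scale poorly, as the later slides note.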
Shakey the robot (1966–1972)
Shakey was the first mobile robot to reason about its actions.
• multiple sensors (a TV camera, a triangulating range finder, and bump sensors)
• connected to DEC PDP-10 and PDP-15 computers via radio and video links
• programs for perception, world modelling, and acting (simple motion, turning, and route planning)
Fragmentation of AI
From the 1970s AI fragmented into sub-disciplines, each looking at a small part of the overall problem of intelligence, e.g.:
• problem-solving and search
• knowledge representation
• reasoning
• planning
• learning
• natural language processing
• vision
• and many others …

Knowledge-based systems
• weak methods rely on general (domain-independent) heuristics
• they often don’t scale well to larger problems—combinatorial explosion of possible solutions
• in the 1970s and 1980s the focus changed, placing a greater emphasis on domain knowledge—knowledge-based systems (KBS)
• use of domain-specific knowledge allows larger reasoning steps
• KBSs could handle typically occurring (rather than toy) problems in narrow domains, e.g., medical diagnosis
• can be characterised as a move from “first principles” to “expert knowledge”, or from what can be done (i.e., legal moves) to what should be done
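To illustrate the contrast with weak methods, here is a sketch of forward chaining over domain-specific rules (the diagnosis rules are invented toy examples, not taken from any real expert system): each rule encodes a large, meaningful reasoning step rather than a single move in a blind search.

    # Toy forward-chaining rule base (illustrative only).
    RULES = [
        # (conditions that must all hold, conclusion to add)
        ({"fever", "rash"}, "suspect-measles"),
        ({"suspect-measles", "not-vaccinated"}, "recommend-gp-visit"),
    ]

    def forward_chain(facts):
        """Repeatedly fire any rule whose conditions are satisfied."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "rash", "not-vaccinated"}))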
Neural networks
• one of the earliest approaches to AI—initial work by McCulloch & Pitts in 1943 and Hebb in 1949
• by 1962 Rosenblatt had shown that Perceptrons (single-layer networks) could be trained to match any input data, if a match was possible at all
• however, in 1969 Minsky & Papert showed that Perceptrons have significant limitations, and many people lost interest in the neural approach
• in the mid-1980s, multi-layer networks and the backpropagation learning algorithm triggered a resurgence of interest in neural networks
• applications include classification problems, e.g., handwriting recognition—harder to see how to apply NNs to other AI problems such as planning

Genetic algorithms
• based on the idea of natural selection—new solutions are produced by combining and mutating a population of existing solutions, with the “fittest” solutions being kept for the next “generation”
• early work by Friedberg in 1958 used “machine evolution” to mutate a (machine code) program into one that had good performance on a given task
• however, little progress was demonstrated and interest waned
• in the 1970s better problem representations and faster CPUs resulted in renewed interest in GAs
• now a widely used technique for solving combinatorial problems, even though GAs are often slower than, e.g., stochastic hill climbing
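A minimal sketch of the combine-mutate-select loop just described, on a toy problem (maximise the number of 1s in a bit string); the problem and parameters are illustrative only.

    # Toy genetic algorithm (illustrative, not a production GA).
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

    def fitness(genome):
        return sum(genome)                      # count of 1 bits

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)   # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    def evolve():
        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # keep the "fittest" half as parents for the next "generation"
            parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    best = evolve()
    print(best, fitness(best))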
The whole iguana
• while a lot of good work has been done in these subfields, this approach does have limitations:
  – independently developed part-solutions may make incompatible assumptions
  – we may end up solving the wrong problem, e.g., the ‘scene understanding problem’ in vision
  – the ‘homunculus problem’
• at some point we have to understand how all the various bits fit together
• need for work on complete systems

Intelligent agents
An agent is a complete system which integrates a range of (often relatively shallow) competences.
For example, the Oz project at CMU developed a range of ‘Broad Agents’ which integrated:
• goals and reactive behaviour
• natural language
• emotional state and its effect on behaviour
• memory and inference
in artificial creatures called ‘Woggles’ capable of participating in simple children’s stories.
Current state of the art
• playing chess
• passing a (high school) chemistry exam
• planning space missions
• giving legal advice
• translating spoken English into spoken Swedish
• booking a holiday
• assembling flat-pack furniture
• learning how to play table tennis

Xavier the robot (1993–2003)
Xavier is an office delivery robot:
• picks up and delivers post, faxes and printouts, returns library books, recycles cans, gets coffee and tells jokes
• determines the order in which to visit offices, plans a path from the current location to the next office to be visited, and follows the path reliably, avoiding static and dynamic obstacles
• responds to commands from a Web interface
Xavier and the elevator
Things AI is good at
• in some areas AI systems match or exceed human-level performance:
  – grandmaster level chess & checkers (bridge, backgammon, poker …)
  – complex planning and scheduling problems
  – high school maths and physics problems
  – and many others …
• i.e., problems that require specialist knowledge and/or complex reasoning
Things AI is not so good at
• AI is not so good at problems that (many) people find easy:
  – moving around in the physical world
  – understanding natural language
  – ‘commonsense’ reasoning
  – making up children’s stories
  – and many others …
• i.e., problems that require large amounts of different kinds of knowledge and/or imprecise or approximate reasoning

Bedtime stories
“One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe threatened to hit Irving if he didn’t tell him where some honey was. The End.”

“Joe Bear was hungry. He asked Irving Bird where some honey was. Irving refused to tell him, so Joe offered to bring him a worm if he’d tell him where some honey was. Irving agreed. But Joe didn’t know where any worms were, so he asked Irving, who refused to say. So Joe offered to bring him a worm if he’d tell him where a worm was. Irving agreed. But Joe didn’t know where any worms were, so he asked Irving, who refused to say. So Joe offered to bring him a worm if he’d tell him where a worm was … [eventually] The End.”

“Henry Squirrel was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned. The End.”
The future of AI
• will we ever get any better at these problems, or will AI always be limited to a narrow range of topics?

The next lecture
Philosophy I: Representation & Intentionality
Suggested reading:
• Russell & Norvig (2003), chapter 26
• Dennett (1996), chapter 1