Com1005: Machines and Intelligence
Amanda Sharkey
Last week ....
 Early AI programs
 The Logic Theorist
 GPS General Problem Solver
 Relationship to human thought?
 AI hype?
 AI techniques
 Search
 Means-Ends-Analysis
 Chess
 Illusion and AI
 Comparison to humans
 search
Early history of AI continued
 Lighthill Report 1973 – ended most support for AI in the UK
 Early enthusiasm for AI
 AI hype
 E.g. GPS – only useful for simple problems of a particular kind.
 Approach depends on pre-set rankings – too complicated for complex problems.
 Didn’t “scale up”
 Lack of knowledge?
Adding knowledge?
 Microworlds
 Expert Systems
 CYC
Microworld approach
 Minsky: supervised students looking at microworlds
 Blocks world
 Set of solid blocks placed on a tabletop. Task is to rearrange the blocks using a robot hand.
 Shrdlu: Terry Winograd (1972) at MIT
 Natural language understanding program
 Knows about its environment; can reason, plan and learn.
Shrdlu
 Terry Winograd, MIT
 “Understanding natural language” (1972)
 Simulated robot arm and blocks world
Label  type     size   X-position  Y-position
a      box      large  3           4
b      box      small  2           2
c      ball     large  4           2
d      ball     small  3           3
e      pyramid  large  2           3
f      pyramid  small  2           6
 Winograd’s method: based on logic and the idea that words point to things in the world.
 E.g. “pick up the ball to the right of the small box”
 Known instruction – pick up
 Find the objects that satisfy the constraints – balls c and d
 Ambiguous – can ask.
 If the answer is ‘the large one’ -> ball c
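A minimal sketch of how this kind of reference resolution might work, using the block descriptions from the table above. The dictionary representation, the `find` helper, and the reading of "right of" as a larger x-position are assumptions for illustration, not Winograd's code:

```python
# Toy SHRDLU-style reference resolution (illustrative only).
# Objects come from the blocks-world table above.
objects = [
    {"label": "a", "type": "box",     "size": "large", "x": 3, "y": 4},
    {"label": "b", "type": "box",     "size": "small", "x": 2, "y": 2},
    {"label": "c", "type": "ball",    "size": "large", "x": 4, "y": 2},
    {"label": "d", "type": "ball",    "size": "small", "x": 3, "y": 3},
    {"label": "e", "type": "pyramid", "size": "large", "x": 2, "y": 3},
    {"label": "f", "type": "pyramid", "size": "small", "x": 2, "y": 6},
]

def find(type_, constraint):
    """Return all objects of the given type that satisfy the constraint."""
    return [o for o in objects if o["type"] == type_ and constraint(o)]

# "pick up the ball to the right of the small box"
small_box = find("box", lambda o: o["size"] == "small")[0]    # box b at x=2
candidates = find("ball", lambda o: o["x"] > small_box["x"])  # balls c and d

if len(candidates) > 1:
    # Ambiguous reference: the system can ask a clarifying question.
    print("Which ball do you mean?", [o["label"] for o in candidates])
    # If the answer is 'the large one':
    answer = [o for o in candidates if o["size"] == "large"][0]
    print("Picking up ball", answer["label"])   # -> ball c
```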
 But Shrdlu’s knowledge of the world was limited.
 E.g. from Haugeland:
Build a steeple
SORRY I DON’T KNOW THE WORD ‘STEEPLE’
A ‘steeple’ is a stack that contains two green cubes and a pyramid.
I UNDERSTAND
Trade you the steeple for three red cubes
SORRY I DON’T KNOW THE WORD ‘TRADE’
A ‘trade’ is a free exchange of ownership
SORRY I DON’T KNOW THE WORD ‘FREE’
Sorry, I thought you were smarter than you are
SORRY I DON’T KNOW THE WORD ‘SORRY’.
 Shrdlu: domain-specific knowledge (as
opposed to domain-general) about
microworld.
 But does it really understand even its
microworld?
Expert systems
 Depth of knowledge about constrained
domain.
 Commercially exploitable, real applications
 Knowledge stored as production rules
 If the problem is P then the answer is A
Artificial Intelligence
 Understanding mind and intelligence
 Creating it, or modelling it
 AI and applications
 Using AI techniques to do useful things
 Creating the illusion of AI
Expert systems
 Basic idea – experts have knowledge, and this knowledge can be given to a computer program.
 1. Requires knowledge base – interview and
observe experts and convert words and actions
into knowledge base
 2. Reasoning mechanisms to apply knowledge
to problems: inference engine
 3. Mechanisms for explaining their decisions
 IF-THEN rules + facts + interpreter
 Forward chaining (start with facts and use rules to draw new conclusions)
 Backward chaining (start with a hypothesis, or goal, to prove and look for rules to prove that hypothesis).
Forward chaining – simple example
 Rule 1: IF hot AND smoky THEN ADD fire
 Rule 2: IF alarm-beeps THEN ADD smoky
 Rule 3: IF fire THEN ADD switch-on-sprinklers
 FACT1: alarm-beeps
 FACT2: hot
 (i) Check for rules whose conditions hold (R2). Add the new fact to working memory (FACT3: smoky)
 (ii) Check again (R1). Add new fact (FACT4: fire)
 (iii) Check again (R3). Sprinklers on!
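A minimal forward-chaining interpreter, as a sketch: the three rules above are encoded as (conditions, conclusion) pairs, and the loop keeps firing rules until nothing new can be added. This is an illustrative assumption, not any particular expert-system shell:

```python
# Toy forward-chaining interpreter (illustrative sketch).
# Each rule: (set of condition facts, fact to ADD when they all hold).
rules = [
    ({"hot", "smoky"}, "fire"),                 # Rule 1
    ({"alarm-beeps"},  "smoky"),                # Rule 2
    ({"fire"},         "switch-on-sprinklers"), # Rule 3
]

working_memory = {"alarm-beeps", "hot"}   # FACT1 and FACT2

changed = True
while changed:                            # cycle until no rule fires
    changed = False
    for conditions, conclusion in rules:
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)    # ADD the rule's conclusion
            print("fired:", sorted(conditions), "->", conclusion)
            changed = True

# Fires R2 (smoky), then R1 (fire), then R3 (switch-on-sprinklers),
# matching steps (i)-(iii) above.
```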
 Expert systems usually use production
rules (IF-THEN)
 E.g. MYCIN
 knowledge-based system for diagnosis and treatment of infectious diseases of the blood.
 Developed at Stanford University, California, in the mid-to-late 1970s.




Example of MYCIN rule
 If
 1. the stain of the organism is gram-positive, and
 2. the morphology of the organism is coccus, and
 3. the growth conformation of the organism is clumps
 Then there is suggestive evidence (0.7) that the identity of the organism is staphylococcus.
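Production rules of this kind are easy to represent as data. Below is an illustrative sketch of one way to encode the rule above together with its 0.7 certainty factor; MYCIN itself was written in Lisp, and this structure is an assumption, not its actual representation:

```python
# Hypothetical encoding of a MYCIN-style rule with a certainty factor.
rule = {
    "if": [
        ("stain", "gram-positive"),
        ("morphology", "coccus"),
        ("growth-conformation", "clumps"),
    ],
    "then": ("identity", "staphylococcus"),
    "certainty": 0.7,   # "suggestive evidence"
}

def apply_rule(rule, findings):
    """Return (conclusion, certainty) if every condition matches, else None."""
    if all(findings.get(attr) == value for attr, value in rule["if"]):
        return rule["then"], rule["certainty"]
    return None

findings = {"stain": "gram-positive",
            "morphology": "coccus",
            "growth-conformation": "clumps"}
print(apply_rule(rule, findings))  # (('identity', 'staphylococcus'), 0.7)
```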
 1979: performance of MYCIN shown to be
comparable to that of human experts.
 But never used in hospitals
 Knowledge base incomplete – didn’t know full
spectrum of infectious diseases
 Needed too much computing power
 Interface not good.
 Dendral
 Expert’s assistant – could work out from data from mass spectrographs which organic compound was being analysed.
 Heuristic search technique constrained by
knowledge of human expert.
 Advantages of expert systems
 Human experts can lose expertise
 Ease of transfer of artificial expertise
 No effect of emotion
 Low-cost alternative (once developed)
 Disadvantages of expert systems
 Lack of creativity, not adaptive, lacks sensory experience,
narrow focus, no common sense knowledge
 E.g. won’t notice if the medical history says a patient weighs 14 pounds and is 130 years old.
 More like idiot savants (people who perform remarkably well in one narrow domain), or automated reference manuals.
 Hubert Dreyfus criticisms
 1972 What computers can’t do
 1992 What computers still can’t do
 More to expert understanding than following rules
 E.g. learning to drive a car.
 Novice, thinking consciously
 Expert, can decide what to do without thinking
 But expert systems can still be a useful
tool, especially when used together with a
human expert.
 As long as we don’t expect too much of
them.
Interim Summary
 Classic AI techniques
 Search
 Knowledge representation
 Knowledge
 Microworlds – Shrdlu and blocks world
 Expert Systems
Knowledge representation
 Symbolic AI
 traditional AI
 Good Old-Fashioned AI (GOFAI)
 Emphasis on giving computers knowledge
about the world.
 Expert Systems
 problems: brittle
 No common sense
 Common sense?
 Making inferences – Scripts
 CYC
 Roger Schank and colleagues in the 1970s.
 Top down approach to language
understanding
 E.g. SAM Script Applier Mechanism
 Aim – to simulate humans’ ability to understand stories and answer questions
 SAM: Script Applier Mechanism
 E.g. “A woman walked into a restaurant. A waiter showed her to the table. She ordered a rare steak and a bottle of beer. As she left, she smiled at the cashier.”
 We assume she sat on a chair, the waiter brought her food, she ate it, and she paid for it.
 None of these things is stated in the text.
 We can infer them because we have knowledge of stereotypical situations like eating in restaurants.
 Roger Schank in the 1970s (Scripts, Plans, Goals and Understanding)
 John went to New York by bus. On the bus he talked to an old lady. When he left the bus, he thanked the driver. He took the subway to Leone’s. On the subway his pocket was picked. He got off the train and entered Leone’s. He had some lasagne. When the bill came he discovered he couldn’t pay. The management told him he would have to wash dishes. When he left, he caught a bus to New Haven.
SAM
 How did John get to Leone’s?
 JOHN TOOK A BUS TO NEW YORK AND THEN HE TOOK THE SUBWAY TO LEONE’S.
 Did anything unusual happen on the bus?
 I DON’T REMEMBER ANYTHING.
 Did anything unusual happen on the subway?
 A THIEF PICKED JOHN’S POCKET.
 What did John order?
 JOHN ORDERED LASAGNE
 Did anything unusual happen at the restaurant?
 JOHN DISCOVERED THAT HE COULDN’T PAY THE BILL SO HE HAD TO WASH DISHES.
 Why didn’t John have any money?
 BECAUSE A THIEF PICKED JOHN’S POCKET
 How did John get back to New Haven?
 JOHN WENT FROM LEONE’S BY SUBWAY AND THEN HE WENT BACK TO NEW HAVEN BY BUS.
 How was John able to take the bus to New Haven?
 PROBABLY JOHN HAD A TICKET.
 Leone story uses restaurant script, bus
script and subway script.
 Possible to show that people have
knowledge of a number of scripts –
sequences of actions that are typically
performed in certain situations.
 Computer programs given knowledge of stereotypical
situations.
 If certain “headers” of scripts are mentioned (e.g. he went to the RESTAURANT), then stored knowledge about typical events is retrieved.
 E.g. Restaurant script has several scenes
 Entering, Ordering, Eating, Exiting
 Eating – the waiter brings the food to the customer, the customer
eats the food.
 Exiting: the customer asks for the bill, the waiter brings the bill, the
customer pays the waiter, the customer leaves a tip, the customer
leaves the restaurant.
 They could use them to infer unmentioned events, as sketched below.
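A minimal sketch of a script as a data structure, and of how unmentioned events can be inferred from it. The scene names follow the slide; the representation itself is an illustrative assumption, not SAM's actual structures:

```python
# Toy script representation (illustrative; not SAM's implementation).
restaurant_script = {
    "header": "RESTAURANT",
    "scenes": {
        "entering": ["customer enters", "customer sits at table"],
        "ordering": ["waiter takes order"],
        "eating":   ["waiter brings food", "customer eats food"],
        "exiting":  ["customer asks for bill", "waiter brings bill",
                     "customer pays", "customer leaves tip",
                     "customer leaves restaurant"],
    },
}

# Events actually mentioned in a story.
story_events = {"customer enters", "waiter takes order", "customer pays"}

# Script events not mentioned in the story can be assumed to have happened.
for scene, events in restaurant_script["scenes"].items():
    for event in events:
        if event not in story_events:
            print(f"inferred ({scene}): {event}")
```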
 Scripts – related to Minsky’s Frames
 Expected structure of knowledge about a
domain.
 E.g. Mention “room” and we have
expectations
 Some always true – e.g. 4 walls
 Some may be so – e.g. that there is a window
 Top-down approaches
CYC (short for encyclopedia)
 Begun in 1984 by Doug Lenat and Edward Feigenbaum
 Aim to build knowledge base of common sense
knowledge which could allow AI systems to perform
human-like reasoning
 Trying to include all that humans know but wouldn’t
usually say.
 Frames and slots
 E.g. South Yorkshire
 Largest city: Sheffield
 Residents: Amanda Sharkey, Noel Sharkey
 Country: UK
 Supposed to reach a point where it could directly read texts and program itself.
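Frames and slots map naturally onto record-like structures. A minimal sketch of the South Yorkshire frame above; the dictionary representation is an assumption for illustration (CYC itself uses its own representation language, CycL):

```python
# Illustrative frame with slots (not CYC's actual CycL representation).
south_yorkshire = {
    "frame": "South Yorkshire",
    "isa": "County",        # parent frame, from which defaults could inherit
    "slots": {
        "largest-city": "Sheffield",
        "residents": ["Amanda Sharkey", "Noel Sharkey"],
        "country": "UK",
    },
}

# A query is just a slot lookup.
print(south_yorkshire["slots"]["largest-city"])   # Sheffield
```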
 CYC – belief that intelligence and understanding are rooted in explicit, language-like data structures
 CYC – large knowledge base
 But still like an expert system
 How is it connected to the real world?
 Knowledge and knowledge representation
key to:
 Traditional AI
 Classical AI
 Symbolic AI
 Different terms for same idea
Assessment
 20% written assignment (essay)
 5% group presentations
 25% practical assignment (next semester)
 50% exam (end of next semester)
Presentations
 5-10 minute presentations in weeks 10
and 11
 In tutorial groups
 Choose from the following list of topics....
 Who was Alan Turing?
 Computers versus Humans: the important differences
 Is the mind a computer?
 Artificial Intelligence and Games
 What challenges are left for Artificial Intelligence?
 The social effects of Artificial Intelligence: the good, the bad and the ugly
 Chatbots
 Computers and emotions
 AI and the media
 Fact or Fiction?: Artificial Intelligence in the movies
 Newell and Simon (1981)
 The physical symbol system hypothesis:
 A physical symbol system has the necessary and
sufficient means for general intelligent action.
 A computer is a physical symbol system –
 It manipulates symbols in accordance with
instructions in program.
 It can make logical inferences and “reason”
 In propositional logic, procedures for manipulating symbols
 E.g. Modus ponens
 If p, then q; p is the case; therefore q
 Symbols can represent states of affairs in the world, but can be processed without considering what they represent.
 Thought as the logical manipulation of symbols
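In standard inference-rule notation (premises above the line, conclusion below), modus ponens is written:

```latex
\[
\frac{p \rightarrow q \qquad p}{q}
\qquad \text{(modus ponens)}
\]
```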
 Physical Symbol System hypothesis
 A physical symbol system has the necessary and sufficient means for general intelligent action
 Strong Physical Symbol System hypothesis
 Only computers are capable of thought.
 Human mind is a computer
 Human thinking consists of symbol manipulation
 Symbolic model of mind
 Traditional view: language of thought
(Fodor, 1975)
 The mind is a symbol system and
cognition is symbol manipulation
 Symbols refer to external phenomena
 They can be stored in and retrieved from
memory, and transformed according to
rules.
 Also known as Functionalism
 Physical symbol system hypothesis – closely
related to Functionalism or Multiple Realisability.
 Thinking (symbol manipulation) can be carried
out on any machine
 Machine could be made of Swiss cheese.
 Mind is the software – can run on any hardware
 Brains, or computers, or machine made of cogs,
levers and springs (Babbage’s Analytical engine?).
Strong AI: appropriately programmed computer
really is a mind, can be said to understand, and
to have other cognitive states
Weak AI: a computer is a valuable tool for the
study of mind; can make it possible to formulate
and test hypotheses rigorously.
Chinese Room Argument
 John Searle: philosopher and critic of AI
 “according to strong AI, the computer is
not merely a tool in the study of mind;
rather the appropriately programmed
computer really is a mind, in the sense
that computers given the right programs
can be literally said to understand and
have other cognitive states”
Chinese room
 Gedanken (thought) experiment
 Imagine An operator in a room, with sets of rules about how to
manipulate symbol structures
 Slots in wall of room – paper can be passed in, and
out.
 Example of rule: if pattern is X, write 10001011010 on
next empty line of exercise book named input store.
 Carry out manipulations of those bits, then pair them
with chinese characters and pass out of box
 Symbols mean nothing to operator
 Instruction sets, and rules, correspond to
program that simulates ability of native
Chinese speaker.
 Symbols passed in and out correspond to
sentences in meaningful dialogue.
 Chinese room is able to pass the Turing
test!
 Searle: behaviour of operator is like that of
computer running program.
 Operator does not understand Chinese,
only understands instructions for
manipulating symbols.
 Computer running program does not
understand any more than the operator
does.
 Operator only needs syntax, not semantics
 Syntax – knowledge of formal properties of
symbols and how they can be combined.
 Semantics – relating symbols to real
world.
 Strong AI: Machine can be said to
understand the story.
 Searle – like the operator in the Chinese
room, the computer does not understand
the story.
 It just carries out certain operations in
response to its input, and produces
outputs as answers to questions.
 Argument against the Turing test
 – the assumption that a computer succeeding in the imitation game will have the same mental states as a human.
 But in the Chinese room
 Ask the system if it understands Chinese
 “Of course I do”
 Ask the operator
 “Search me, it’s just a bunch of meaningless squiggles”.
Arguments against Chinese Room
 Systems response
 The operator may not understand Chinese, but
the system as a whole understands Chinese.
 Searle’s rebuttal: if symbol operator doesn’t
understand Chinese, why should you be able to
say that operator + bits of paper + room
understands Chinese?
 System only behaves as though it understands
Chinese.
 Searle – question of whether a symbol
manipulator is capable of thought is not an
empirical one.
 Example of an empirical question: Are all ophthalmologists in New York over 25 years of age?
 Are all ophthalmologists in New York eye specialists? – not an empirical question.
Symbol Grounding
 One answer to the Chinese Room
 Computer needs a way of relating its symbols to objects in the real world.
 Traditional view – meaning of symbols comes from connecting them to the world “in the right way”
 Stevan Harnad: thought is symbol manipulation, but symbols are grounded in simpler representations of the world.
 E.g. idea of “zebra” grounded in representations of horse and stripes.
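Harnad's idea can be caricatured in a few lines: some symbols are grounded directly in (stand-ins for) perceptual categories, and others are defined compositionally from already-grounded ones. The representation below is purely an illustrative assumption:

```python
# Toy illustration of Harnad-style symbol grounding (an assumption-laden
# caricature). Directly grounded symbols point at perceptual categories;
# composed symbols are built from already-grounded ones.
grounded = {
    "horse":   "sensory-category-0",   # placeholder for a perceptual category
    "stripes": "sensory-category-1",
}

composed = {
    "zebra": ("horse", "stripes"),     # zebra = horse + stripes
}

def meaning(symbol):
    """Trace a symbol down to its grounded components."""
    if symbol in grounded:
        return grounded[symbol]
    return tuple(meaning(part) for part in composed[symbol])

print(meaning("zebra"))   # ('sensory-category-0', 'sensory-category-1')
```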
 Other solutions to symbol grounding
problem
 Ways of escaping from circularity of
defining symbols in terms of symbols.
 Adaptive behaviour and embodied
cognition – knowledge about objects in
real world.
Summary
 Knowledge?
 Microworlds
 Expert Systems
 Common sense knowledge
 Scripts and Frames
 CYC
 Symbolic AI
 Physical Symbol System Hypothesis
 Chinese Room
 Assignments
 Presentations – in weeks 10 and 11 in tutorial groups
 Next week – written assignments issued, due in Week 8