The Turing Test
One of the major divisions in AI (and you can see it in those definitions above) is between:
* those who think AI is the only serious way of finding out how WE work (since opening heads doesn't yet tell you much)
and
* those who want computers to do very smart things, independently of how WE work.
Cognitive scientists vs. Engineers.
Think about a reading computer that reads English (very well) from right to left!
What follows, if anything, from its success?
There is another group separate from the Cognitive
Scientists and Engineers we just distinguished: it is those
who are interested in attributing mental capacities to
machines--and this group could overlap with either of the
first two.
Their interest is the mentality of machines, not the machine-likeness of humans.
For Dennett, machines and people are in roughly the same position: we have a language for talking about how they work and why, which he calls FOLK PSYCHOLOGY, i.e. the propositional attitudes BELIEVE, INTEND, etc.
Here is Dennett, the major US philosopher concerned with AI:
In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: It thinks it should get its queen out early. This ascribes a propositional attitude to the program in a very useful and predictive way, for the designer went on to say one can usually count on chasing that queen around a board. But for all the many levels of explicit representation to be found in that program, nowhere is there anything roughly synonymous with 'I should get my queen out early' explicitly tokened.
Contrast Dennett, who doesn't really think people or machines have mental states--they are in the same position with respect to 'as if' explanation: it behaves AS IF it wants to get its queen out early.
Strong vs. Weak AI
An important distinction we shall need later, due to the
philosopher John Searle.
For him, WEAK AI is like Cognitive Science above (i.e. about people): it uses the machine representations and hypotheses to mimic human mental function, but never ascribes those properties to the machine.
For Searle, STRONG AI is the claim that machines programmed to produce the appropriate behaviour have the same mental states as people who behaved the same way would have--i.e. that machines can have MENTAL STATES.
But he says that in neither case should we
assume those correspond to anything
real inside, in the brain or the program.
The Turing Test
Turing’s test was about whether or not an
interrogator could tell a man from a woman!
An interrogator in another room asks
questions of a subject by teletype(!),
trying to determine their sex.
The subject is sometimes a man and
sometimes a woman.
Turing in 1950 published a
philosophical paper designed to stop
people arguing about whether or not
machines could think.
He proposed that the question be
replaced with a test, which was not
quite what is now called the Turing
Test.
If, after some agreed time, the interrogator
cannot distinguish situations where a
machine has been substituted for the
man/woman, we should just agree to say
the machine can think (says Turing).
NOTICE: the question of whether it is a
machine never comes up in the questions.
Nowadays, the ‘Turing Test’ is precisely
about whether the other is a machine or
not.
Turing's own objections:
Turing considered, and dismissed, possible
objections to the idea that computers can think.
Some of these objections might still be raised today.
Some objections are easier to refute than others.
Objections considered by Turing:
1. The theological objection
2. The ‘heads in the sand’ objection
3. The mathematical objection
4. The argument from consciousness
5. Arguments from various disabilities
6. Lady Lovelace’s objection
7. Argument from continuity in the nervous system
(8.) The argument from informality of behaviour
(9.) The argument from extra-sensory perception
The theological objection
‘…Thinking is a function of man’s immortal
soul. God has given an immortal soul to
every man and woman, but not to any other
animal or to machines. Hence no animal or
machine can think…’
Why not believe that God could give a soul to
a machine if He wished?
Heads in the sand objection
i.e. The consequence of machines
thinking would be too dreadful. Let us
hope and believe that they cannot do so.
- related to the theological argument: the idea that humans are superior to the rest of creation, and must stay so.
‘.. Those who believe in ..(this and the
previous objection).. would probably not
be interested in any criteria..’
Argument from consciousness
‘…This argument is very well expressed in Professor
Jefferson’s Lister Oration for 1949, from which I quote.
“Not until a machine can write a sonnet or compose a
concerto because of thoughts and emotions felt, and not
by the chance fall of symbols, could we agree that
machine equals brain – that is not only write it but know
that it had written it. No mechanism could feel (and not
merely artificially signal, an easy contrivance) pleasure
at its successes, grief when its valves fuse, be warmed
by flattery, be made miserable by its mistakes, be
charmed by sex, be angry or depressed when it cannot
get what it wants”..’
The only way one could be sure that a machine thinks is to be that machine and feel oneself thinking.
- similarly, the only way to be sure someone else thinks is to be that person.
How do we know that anyone is conscious? Solipsism.
Instead, we assume that others can think and are conscious--it is a polite convention. Similarly, we could assume that a machine which passes the Turing test is so too.
Argument from continuity in the nervous
system
The nervous system is continuous; the digital computer is a discrete state machine.
i.e. in the nervous system a small error in the information about the size of a nervous impulse impinging on a neuron may make a large difference to the size of the outgoing impulse.
Discrete state machines move by sudden jumps and clicks from one state to another. For example, consider the 'convenient fiction' that switches are either definitely on, or definitely off.
However, a discrete state machine can still give answers that are indistinguishable from those of a continuous machine.
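A minimal numerical sketch of that last point (illustration only, with made-up numbers): if the discrete machine's step size is far smaller than the noise in any observation, its answers cannot be told apart from the continuous machine's.

import random

def continuous_machine(x):
    return 3.1 * x + 0.2            # some continuous response (invented)

def discrete_machine(x, step=1e-6):
    y = continuous_machine(x)
    return round(y / step) * step   # quantised onto a fine grid of states

def observe(value, noise=1e-3):
    # an interrogator only ever sees outputs through noise much larger than the step
    return value + random.gauss(0, noise)

x = 0.37
print(observe(continuous_machine(x)), observe(discrete_machine(x)))
# The quantisation error (below the 1e-6 step) is swamped by the observation noise
# (~1e-3), so no questioning of outputs reveals which machine produced them.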
Consciousness
Thought and consciousness do not always go together.
Freud and unconscious thought.
Thought we cannot introspect about (e.g. searching for a forgotten name).
Blindsight (Weiskrantz) – removal of visual cortex, blind in certain areas, but can still locate a spot without consciousness of it.
Arguments from various disabilities, i.e. 'I grant that you can make machines do all the things you have mentioned, but you will never be able to make one do X'.
e.g. be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.
These criticisms are often disguised forms of the argument from consciousness.
Other objections
Copeland (1993) [see ‘Artificial Intelligence: a
philosophical introduction’] discusses 4 further
objections to Turing Test. The first three of these
he dismisses, and the fourth he incorporates into a
modified version of the Turing Test.
1. Too conservative: Chimpanzee objection
Chimpanzees, dolphins, dogs, and pre-linguistic
infants all can think (?) but could not pass Turing
Test.
But this only means that the Turing Test cannot be a litmus test (red = acid, not red = non-acidic):
- nothing definite follows if a computer/animal/baby fails the test,
i.e. a negative outcome does not mean the computer cannot think.
(In philosophical terms: the TT gives a sufficient, not a necessary, condition of thought.)
The mathematical objection
Results of mathematical logic can be used to show that there are limitations to the powers of discrete-state machines.
e.g. the halting problem: will the execution of a program P eventually halt, or will it run for ever? Turing (1936) proved that for any algorithm H that purports to solve the halting problem there will always be a program Pi such that H will not be able to answer the halting question for Pi correctly.
i.e. certain questions cannot be answered correctly by any formal system.
But, similar limitations may also apply to the
human intellect.
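A standard sketch of the diagonal argument behind this, in Python-style pseudocode (the oracle halts() is hypothetical -- the whole point is that it cannot exist):

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    ...

def trouble(p):
    # do the opposite of whatever the oracle predicts about p run on itself
    if halts(p, p):
        while True:
            pass        # loop forever
    else:
        return          # halt at once

# Does trouble(trouble) halt? If halts(trouble, trouble) is True, trouble(trouble)
# loops forever; if False, it halts. Either way the oracle is wrong, so no
# general-purpose halts() can exist.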
Lady Lovelace’s objection:
(memoir from Lady Lovelace about Babbage’s
Analytical Engine)
Babbage (1792-1871) and Analytical Engine:
general purpose calculator. Entirely mechanical.
Entire contraption never built – engineering not up
to it and no electricity!
‘..The Analytical Engine has no pretensions to
originate anything. It can do whatever we know
how to order it to perform..’
A computer cannot be creative, it cannot originate
anything, only carry out what was given to it by the
programmer.
But computers can surprise their programmers, i.e. by producing answers that were not expected.
The original data may have been given to the computer, but it may then be able to work out their consequences and implications (cf. the level of chess programs relative to their programmers).
2. Too easy: Sense organs objection
Turing Test focuses only on verbal responses:
the computer is able to use words without really
knowing their meanings.
(like taking a driving test that consists only of
answering questions).
Equip it with sense organs, and you can then test its knowledge of the world (remember turning the turtle over in 'Blade Runner' when spotting robots).
But there are many words which can be investigated without the computer having sense organs, e.g. 'teasing'.
Think about getting to know people through
webchat--can you?
3. Simulation objection
The simulation of X is never an X.
In the original Victorian parlour game, if the man misleads the interrogator into thinking he is a woman, that doesn't mean he IS a woman.
But there are two kinds of simulation:
a) simulation1, which lacks essential features of what is being simulated, e.g. in simulated death the person is not dead;
b) simulation2, exactly like what is being simulated, but not produced in the standard way (e.g. aircraft in wind tunnels).
One could claim that a program that passed the Turing Test was a simulation2.
You don’t argue with me.
WHY DO YOU THINK I DON’T ARGUE WITH YOU?
You are afraid of me.
DOES IT PLEASE YOU TO THINK I AM AFRAID OF
YOU?
My father is afraid of everybody.
WHAT ELSE COMES TO MIND WHEN YOU THINK OF
YOUR FATHER
Bullies
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE?
But, when we look at how Eliza works, this conversation seems less impressive.
1. Transform the input to change the point of view, e.g. change 'you' to 'I' etc.
2. If a pattern matches, then give the stored response.
3. Otherwise give a general-purpose response.
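A minimal sketch of that mechanism (illustrative only: the patterns and responses below are made up in the spirit of Weizenbaum's published examples, not taken from his code):

import re

# 1. Point-of-view transformation
REFLECT = {"i": "you", "my": "your", "you": "I", "your": "my", "am": "are"}

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

# 2. Keyword patterns with stored responses
PATTERNS = [
    (r"i need (.*)", "WHAT WOULD IT MEAN TO YOU IF YOU GOT {0}?"),
    (r".*\bmother\b.*", "TELL ME MORE ABOUT YOUR FAMILY."),
    (r"(.*) are all alike", "IN WHAT WAY?"),
]

def eliza(utterance):
    for pattern, response in PATTERNS:
        m = re.match(pattern, utterance.lower())
        if m:
            return response.format(*(reflect(g) for g in m.groups()))
    # 3. General-purpose fallback when nothing matches
    return "PLEASE GO ON."

print(eliza("I need some help"))   # WHAT WOULD IT MEAN TO YOU IF YOU GOT some help?
print(eliza("Perhaps I could learn to get along with my mother"))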
4. Black box objection.
This objection is presented by Copeland (1993), and not
rejected by him.
- a device whose inner workings are unknown.
The Turing Test relies only on outward behaviour.
But we might look inside the computer, to see how it 'thinks':
i.e. it might have a program which just has a list of sentences to be output in response to the sentences it encounters. All possibilities covered, and the computer not thinking at all, just matching sentences.
Later you will see more of programs like ELIZA: Joseph
Weizenbaum’s program with a therapeutic interviewer.
Men are all alike.
IN WHAT WAY?
They’re always bugging us about something or
other.
CAN YOU THINK OF A SPECIFIC EXAMPLE?
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE?
Pattern: *need* [can you do without]
e.g. I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP?
Pattern: *mother* [tell me more about your family]
e.g. Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY.
Eliza lacks:
• means of recognizing the grammatical structure of language
• means of converting the user's query into a representation of the underlying knowledge from which inferences can be drawn.
Parry, the paranoid program:
Consider PARRY, a much better
program:
PARRY had far more to say than
ELIZA and wanted to tell you things
It modelled a psychiatrically disordered
person (PARanoid) and hence it was
easy to excuse when it didn’t make
sense.
It remembered what you had said and
got ‘annoyed’ and ‘frightened’
It was claimed it had passed the TT.
Colby, K.M., Weber, S., Hilf, F.D. (1971)
Artificial Paranoia, Artificial Intelligence,
2, 1-25.
The computer is in capital letters
Why are you in the hospital?
I SHOULDN’T BE HERE.
Who brought you here?
THE POLICE.
What trouble did you have with the police?
COPS DON’T DO THEIR JOB.
He says I’m depressed much of the time.
I AM SORRY TO HEAR THAT YOU ARE DEPRESSED.
It’s true that I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU
NOT TO BE UNHAPPY?
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME
HELP?
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY.
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU?
My father.
YOUR FATHER
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE?
You are not very aggressive, but I think you don’t want me
to notice that.
WHAT MAKES YOU THINK I AM NOT VERY
AGGRESSIVE?
But notice here the AI theory being snuck in...
We don't have any INDEPENDENT evidence that people do assign grammatical structures to what they hear in dialogue,
nor that they use a great deal of background knowledge when carrying on a superficial conversation.
What do they do instead?
COPS ARREST THE WRONG PEOPLE.
They arrest the right ones sometimes.
THE LAW SHOULD GET THOSE ITALIAN
CROOKS.
Are you Italian?
MY BACKGROUND IS BRITISH BUT I
WAS BORN IN THIS COUNTRY.
Do you know any Italian crooks?
I TRY TO AVOID THE UNDERWORLD.
Are you afraid of the underworld?
THE MAFIA ARE THE WORST
GANGSTERS.
Are you connected with the Mafia?
YOU KNOW, THEY KNOW ME.
How do they know you?
THEY SPY ON ME.
Why do they?
THEY WANT TO GET FALSE EVIDENCE
TO FRAME ME.
What do they have against you?
THEY ARE OUT TO GET ME.
What for?
I HAVE INCRIMINATING EVIDENCE
AGAINST THE MAFIA.
Are you sure of all this?
YOU DON’T BELIEVE ME?
I am doing my best.
…silence
CAN’T YOU THINK OF A DECENT
QUESTION TO ASK?
You seem hostile to me.
SOME OF YOUR REMARKS ANNOY ME.
I am sorry.
FORGET THE EXCUSES, YOU GUYS
ARE ALL ALIKE.
Loebner Test [Turing test + money]
Colby’s PARRY
Perhaps the best ever performance: many users, robust, but not a normal subject (i.e. a paranoid)
primitive individual models, some control of
dialogue process; but it had lots to say!
Primitive simulation of intentionality
not syntax but fast pattern matching
much better than ELIZA
1997 winner: Intelligent Research Ltd,
designed and largely programmed by
members of Natural Language
Processing in this Department.
Fragment of winning transcript
Program: hi, my name is Catherine, what’s
yours?
Judge2: My name is Nathan.
Program: Well, its nice to meet you
Nathan. Did you see that story on CNN
last night about the lesbian couple who
came out at a White House party on
Sunday.
How is this achieved in PARRY?
WHAT IS YOUR MAIN PROBLEM
is first reduced to a canonical form (IS → BE, YOUR → YOU), and then words are dropped one at a time until a stored pattern matches:
WHAT BE YOU MAIN PROBLEM
BE YOU MAIN PROBLEM
WHAT BE MAIN PROBLEM
WHAT BE YOU PROBLEM  ← (this variant matches a stored pattern)
WHAT BE YOU MAIN
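A minimal sketch of that kind of matching (illustrative only, not Colby's actual code; the canonical forms, the stored pattern and its reply are assumptions based on the example above):

# Reduce words to canonical forms, then drop words one at a time until a stored
# pattern is found -- the strategy sketched in the PARRY example above.
CANON = {"is": "BE", "are": "BE", "am": "BE", "your": "YOU", "you": "YOU"}

STORED_PATTERNS = {"WHAT BE YOU PROBLEM": "I SHOULDN'T BE HERE."}  # invented pairing

def canonicalise(utterance):
    return [CANON.get(w.lower(), w.upper()) for w in utterance.split()]

def match(utterance):
    words = canonicalise(utterance)
    if " ".join(words) in STORED_PATTERNS:
        return STORED_PATTERNS[" ".join(words)]
    for i in range(len(words)):               # try dropping each word in turn
        key = " ".join(words[:i] + words[i + 1:])
        if key in STORED_PATTERNS:
            return STORED_PATTERNS[key]
    return "WHY DO YOU ASK?"                  # fallback

print(match("What is your main problem"))     # matches after dropping MAIN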
Judge2: No. I’m sorry I missed that. How
did the Prez take it?
Program: Ellen Degeneres was one of
them - she was kissing her lover in front
of Bill Clinton. Do you think he was very
embarassed?
Judge2: You tell me, I didn’t see it.
Program: Are you still a friend of Bill
Clinton
Judge2: Yes. I would like to talk about
something else. Do you know anything
about sculpture.
Competition started in 1991 by Hugh
Loebner.
Judges type questions in at a terminal; these are replied to either by a human or by a computer contestant.
Judges rate responses according to how human-like they appear: a prize of 2000 dollars goes to the most human-like. A grand prize of 100,000 dollars awaits a program that does better than one of the humans.
Program: Well I suppose he’s doing his best like
most of us, but he does seem to attract an
UNUSUAL amount of bad publicity, wouldn’t
you say -- can it
Judge2: Do you know what sculpture is?
Program: Well you’re more of a fan of his than I
am I guess --- though I can’t SWEAR what I’d
have done in that locked hotel room, I must
say.
SOME DAYS IT’S BETTER THAN OTHERS AT
CHANGING THE SUBJECT!
Maybe the idea of Turing Test should be abandoned.
Reason 1: Unitary notion of ‘intelligence’ too
simplistic. Too simplistic to think that it is useful to
assess whether computers possess ‘intelligence’, or
the ability to think.
Better to break down this question into smaller
questions.
- similar to the idea that a unitary measure of intelligence (i.e. intelligence as measured by IQ tests) is not very useful
- better to have tests that reveal the relative strengths
and weaknesses of individuals.
Could assess computers in terms of more specific abilities, e.g. the ability of a robot to navigate across a room, the ability of a computer to perform logical reasoning, metaknowledge (knowledge of its own limitations).
Reason 2: Too anthropocentric.
Too anthropocentric to insist that a program should work in the same way as humans do.
Dogs are capable of cognition, but would
not pass Turing Test. Still, producing
machine with cognitive and communicative
abilities of a dog would be (another)
challenge for AI.
But how can we NOT be anthropocentric
about intelligence? We are the only really
intelligent things we know, and language is
closer to our intelligence than any other
function we have…?
Turing Test (as now interpreted!) suggests that we base our
decision about whether a machine can think on its outward
behaviour, and whether we confuse it with humans.
Concept of Intelligence in humans
We talk about people being more or less intelligent. Perhaps
examining the concept of intelligence in humans will provide
an account of what it means to be intelligent.
What is intelligence? Intelligence is what is measured by
intelligence tests.
Potted history of IQ tests
Early research into individual differences:
1796: an assistant at Greenwich Observatory, recording when stars crossed the field of the telescope, consistently reported observations eight-tenths of a second later than the Astronomer Royal.
Discharged! It was later realized that observers respond to stimuli at different speeds – the assistant wasn't misbehaving, he just couldn't do it as quickly as the Astronomer Royal.
Francis Galton, in the latter half of the 19th century, was interested in individual differences.
He developed measures of keenness of the senses, and of mental imagery: early precursors of intelligence tests. He found evidence of genius occurring often in certain families.
Stanford-Binet IQ test
Alfred Binet (1857-1911) tried devising tests to find out how
“bright” and “dull” children differ.
His aim was educational – to provide appropriate education
depending on ability of child.
Emphasis on general intelligence.
Idea of quantifying the amount of intelligence a person has.
Nature of Intelligence
Binet and Wechsler assumed that intelligence is a general capacity.
Spearman: also proposed individuals possess a general intelligence
factor g in varying amounts, together with specific abilities.
Thurstone (1938): believed intelligence could be broken down into a number of primary abilities. Used factor analysis to identify 7 factors:
• verbal comprehension
• word fluency
• number
• space
• memory
• perceptual speed
• reasoning
Thurstone devised a test based on these factors: the Test of Primary Mental Abilities.
But the predictive power of the Test of Primary Mental Abilities was no greater than that of the Wechsler and Binet tests, and several of these factors correlated with each other.
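For a feel of what 'factor analysis' does here, a minimal sketch with invented data (not Thurstone's procedure or data; just scikit-learn's off-the-shelf FactorAnalysis applied to random scores):

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 7))    # 200 hypothetical subjects x 7 sub-tests
fa = FactorAnalysis(n_components=2)   # ask for two latent factors
loadings = fa.fit(scores).components_
print(loadings.shape)                 # (2, 7): how each sub-test loads on each factor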
Perhaps for now (till opening
heads helps) behaviour is all we
have.
Increasingly complex programs mean that looking inside machines doesn't tell you why they are behaving the way they are.
Those who don't think the TT effective must show why machines are in a different position from our fellow humans (i.e. not from OURSELVES!). Solipsism again.
The Stanford-Binet test makes use of the concept of mental age versus chronological age.
The intelligence quotient (IQ) is produced as the ratio of mental age to chronological age.
Items in the test are age-graded, and mental age corresponds to the level achieved in the test. A bright child's mental age is above his or her chronological age; a slow child's mental age is below his or her chronological age.
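A worked example with made-up figures (the ratio is conventionally multiplied by 100): a child of chronological age 8 who performs at the level of a typical 10-year-old has IQ = (10 / 8) × 100 = 125, while one performing at the 6-year-old level has IQ = (6 / 8) × 100 = 75.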
Move of emphasis from general to specific abilities
World War 1: US test ‘Army Alpha’. Tested simple reasoning, ability to
follow directions, arithmetic and information. Used to screen thousands of
recruits, sorting into high/low/intermediate responsibilities.
Beginning of measures of specialized abilities:
Realisation that a rating on a single dimension is not very informative, i.e. different jobs require different aptitudes.
eg 1919 Seashore: Measures of Musical Talent.
Tested ability to discriminate pitch, timbre, rhythm etc.
1939: Wechsler-Bellevue scale: goes beyond composite performance to
separate scores on different tasks. eg mazes, recall of information, memory
for digits etc.
Items divided into performance scale and verbal scale.
e.g. Performance item - Block design: pictured designs must be copied with blocks; tests the ability to perceive and analyse patterns.
Verbal item - Arithmetic: verbal problems testing arithmetic reasoning.
IQ tests provide one view of what intelligence is.
The history of intelligence testing shows that our conception of intelligence is subject to change:
a change from assuming there is a general intelligence factor, to looking at specific abilities.
But the emphasis is still on quantification, and on measuring how much intelligence a person possesses – it doesn't really say what intelligence is.
Specific and general theories seem to have similar predictive abilities about individual outcomes.
Try this right now:
PICK OUT THE ODD ONE
Cello
Harp
Drum
Violin
Guitar
Limitations of ability tests:
1. IQ scores do not predict achievement very well, although they can make gross discriminations. The predictive value of tests is better at school (correlations of between .4 and .6 between IQ scores on the Stanford-Binet and Wechsler tests and school grades), but less good at university.
• Possible reasons for poor prediction: it is difficult to devise tests which are culturally fair, and independent of educational experience. E.g. pick the one word that doesn't belong with the others:
Cello harp drum violin guitar
Children from higher income families chose 'drum'; those from lower income families picked 'cello'.
• Tests do not assess motivation or creativity.
2. Human-centred: animals might possess an intelligence, in a way that a computer does not, but it is not something that will show up in an IQ test.
3. Tests are only designed to predict future performance; they do not help to define what intelligence is, but again, the search for definitions is rarely helpful.
Arguments about meaning and understanding (and programs):
• Searle's Chinese Room argument
• The Symbol Grounding argument
• Bar-Hillel's argument about the impossibility of machine translation

Searle's example: The Chinese Room
An operator sits in a room; Chinese symbols
come in which O. does not understand. He
has explicit instructions (a program!) in English for how to get an output stream of
Chinese characters from all this, so as to
generate “answers” from “questions”. But of
course he understands nothing even though
Chinese speakers who see the output find it
correct and indistinguishable from the real
thing.
Searle is an important philosophical critic of Artificial Intelligence. See also his recent book:
Searle, J.R. (1997) The Mystery of Consciousness. Granta Books, London.
Weak AI: the computer is a valuable tool for the study of mind, i.e. we can formulate and test hypotheses rigorously.
Strong AI: the appropriately programmed computer really is a mind, can be said to understand, and has other cognitive states.
Read chapter 6 in Copeland (1993):
The curious case of the Chinese Room.
Clearer account: pgs 292-297 in
Sharples, Hogg, Hutchinson, Torrance
and Young (1989) ‘Computers and
Thought’ MIT Press: Bradford Books.
Original source: Minds, Brains and
Programs: John Searle (1980)
Can digital computers think?
Searle is an opponent of strong AI, and the Chinese Room is meant to show that strong AI is false.
It is an imaginary Gedankenexperiment, like the Turing Test.
Could take this as an empirical argument
- wait and see if AI researchers manage
to produce a machine that thinks.
Empirical means something which can be
settled by experimentation and
evidence gathering.
Example of empirical question:
Are all ophthalmologists in New York over
25 years of age?
Example of a non-empirical question:
Are all ophthalmologists in New York eye specialists?
Searle - ‘can a machine think’ is not an
empirical question. Something following
a program could never think.
Contrast this with Turing, who believed
‘Can machines think?’ was better seen as
a practical/empirical question, so as to
avoid the philosophy (it didn’t work!).
Chinese Room
Operator in room with pieces of paper.
Symbols written on paper which operator
cannot understand.
Slots in wall of room - paper can come in
and be passed out.
Operator has set of rules telling him/her
how to build, compare and manipulate
symbol-structures using pieces of paper
in room, together with those passed in
from outside.
Example of a rule:
if the pattern is X, write 100001110010001001001 on the next empty line of the exercise book labelled 'input store'.
Once the input has been transformed into sets of bits, perform a specified set of manipulations on those bits. Then pair the final result with Chinese characters in the 'Output store' and push it through the Output slot.
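A toy sketch of what such purely formal rule-following looks like as a program (the rule table and the Chinese strings are invented placeholders, not anything from Searle's paper):

# Purely formal symbol manipulation: the "operator" never needs to know what any
# symbol means, only which shapes match which rule.
RULES = {
    "你好吗": "我很好",            # placeholder question -> placeholder answer
    "你会说中文吗": "当然会",
}

def operator(incoming_symbols):
    # compare the shapes against the rule book and copy out whatever it dictates
    return RULES.get(incoming_symbols, "请再说一遍")   # default 'say it again' squiggle

print(operator("你好吗"))   # looks like a sensible answer to a Chinese speaker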
But symbols mean nothing to operator.
Instructions correspond to program which
simulates linguistic ability and
understanding of native speaker of
Chinese.
Sets of symbols passed in and out
correspond to sentences of meaningful
dialogue.
More than this: Chinese Room program is
able to pass the Turing Test with flying
colours!
According to Searle, behaviour of
operator is like that of computer running
a program. What point do you think
Searle is trying to make with this
example?
Searle: Operator does not understand
Chinese - only understands instructions
for manipulating symbols.
Behaviour of operator is like behaviour of
computer running same program.
Computer running program does not
understand any more than the operator
does.
Searle: operator only needs syntax, not
semantics.
Semantics - relating symbols to real
world.
Syntax - knowledge of formal properties
of symbols (how they can be
combined).
Mastery of syntax: mastery of set of rules
for performing symbol manipulations.
Mastery of semantics: to have
understanding of what those symbols
mean (this is the hard bit!!)
Example from Copeland.
Arabic sentence:
Jamal hamati indaha waja midah
Two syntax rules for Arabic:
a) To form the I-sentence corresponding to a given sentence, prefix the whole sentence with the symbol 'Hal'.
b) To form the N-sentence corresponding to any reduplicative sentence, insert the particle 'laysa' in front of the predicate of the sentence.
What would the I-sentence and N-sentence corresponding to the Arabic sentence be? (The sentence is reduplicative, and its predicate consists of everything following 'hamati'.)
Jamal hamati indaha waja midah
But the syntax rules tell us nothing about the semantics. 'Hal' forms an interrogative, and 'laysa' forms a negation. The question asks whether your mother-in-law's camel has belly ache:
Hal jamal hamati indaha waja midah
and the second sentence answers in the negative:
Laysa indaha waja midah
According to Searle, computers are just engaging in syntactical manoeuvres like this.
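A minimal sketch of how purely syntactic these two rules are when written as code (illustration only; the predicate has to be supplied by hand, since nothing in the rules says what any word means):

def i_sentence(sentence):
    # Rule (a): prefix the whole sentence with 'Hal'
    return "Hal " + sentence

def n_sentence(sentence, predicate):
    # Rule (b): insert 'laysa' in front of the (externally identified) predicate
    return sentence.replace(predicate, "laysa " + predicate, 1)

s = "jamal hamati indaha waja midah"
print(i_sentence(s))                         # Hal jamal hamati indaha waja midah
print(n_sentence(s, "indaha waja midah"))    # jamal hamati laysa indaha waja midah
# The program applies the rules perfectly while knowing nothing about camels,
# mothers-in-law or belly ache -- syntax without semantics.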
Searle: Program carries out certain
operations in response to its input, and
produces certain outputs, which are
correct responses to questions.
But it hasn't understood a question any more than an operator in the Chinese Room would have understood Chinese.
Treat the Chinese Room system as a black box and ask it (in Chinese) if it understands Chinese: "Of course I do".
Ask the operator (if you can reach them!) if he/she understands Chinese: "Search me, it's just a bunch of meaningless squiggles".
Remember back to PARRY
PARRY was not designed to show
understanding, but was often thought to
do so. We know it worked with a very
simple but large mechanism:
Why are you in the hospital?
I SHOULDN’T BE HERE.
Who brought you here?
THE POLICE.
What trouble did you have with the police?
COPS DON’T DO THEIR JOB.
Questions: is Searle’s argument
convincing?
Does it capture some of your doubts
about computer programs?
Responses to Searle:
1. Insist that the operator can in fact understand Chinese -
like the case in which a person who does not know the rules of chess plays chess while operating under post-hypnotic suggestion.
Compare blindsight subjects who can see but do not agree that they can -- consciousness of knowledge may be irrelevant here!
Strong AI: Machine can literally
be said to understand the
responses it makes
Searle's argument is that, like the operator in the Chinese Room, PARRY's computer does not understand anything it responds with -- which is certainly true of PARRY, but is it true in principle, as Searle wants?
Suppose for a moment Turing had believed in Strong AI. He might have argued:
a computer succeeding in the imitation game will have the same mental states that would have been attributed to a human,
e.g. understanding the words of the language being used to communicate.
But, says Searle, the operator cannot understand Chinese.
2. Systems Response (so called by Searle):
concede that the operator does not understand Chinese, but claim that the system as a whole, of which the operator is a part, DOES understand Chinese.
Copeland: Searle makes an invalid argument (operator = Joe).
Premiss: No amount of symbol manipulation on Joe's part will enable Joe to understand the Chinese input.
Therefore: No amount of symbol manipulation on Joe's part will enable the wider system of which Joe is a component to understand the Chinese input.
A burlesque of the same pattern, where the conclusion clearly doesn't follow:
Premiss: Bill the cleaner has never sold pyjamas to Korea.
Therefore: The company for which Bill works has never sold pyjamas to Korea.
Recent restatement of Chinese Room
Argument
From Searle (1997) The Mystery of
Consciousness
1. Programs are entirely syntactical
2. Minds have a semantics
3. Syntax is not the same as, nor by itself
sufficient for, semantics
Therefore programs are not minds. QED
Searle's rebuttal of the systems reply: if the symbol operator doesn't understand Chinese, why should you be able to say that the symbol operator (Joe) plus bits of paper plus the room understands Chinese?
System as a whole behaves as though it
understands Chinese. But that doesn’t
mean that it does.
Step 1 just states that a program written down consists entirely of rules concerning syntactical entities, that is, rules for manipulating symbols. The physics of the implementing medium (i.e. the computer) is irrelevant to the computation.
Step 2 just says what we know about human thinking. When we think in words or other symbols we have to know what those words mean - a mind has more than uninterpreted formal symbols running through it, it has mental contents or semantic contents.
Step 3 states the general principle that the Chinese Room thought experiment illustrates: merely manipulating formal symbols does not guarantee the presence of semantic contents.
‘..It does not matter how well the system
can imitate the behaviour of someone
who really does understand, nor how
complex the symbol manipulations are;
you can not milk semantics out of
syntactical processes alone..’
(Searle, 1997)
The Internalised Case
Suppose the operator learns up all these rules and tables and can do the trick in Chinese. On this version, the Chinese Room has nothing in it but the operator.
Can one still say the operator understands nothing of Chinese?
Consider: a man appears to speak French fluently, but says no, he doesn't really, he has just learned up a phrase book. He's joking, isn't he?
You cannot really contrast a person with rules-known-to-the-person.
We shall return at intervals to the
Chomsky view that language behaviour
in humans IS rule following (and he can
determine what the rules are!)
Searle says this shows the need for semantics, but 'semantics' means two things at different times:
• Access to objects via FORMAL objects (more symbols), as in logic and the formal semantics of programs.
• Access to objects via physical contact and manipulation -- robot arms or prostheses (or what children do from a very early age).
Semantics fun and games
Programs have access only to syntax (says S.).
If he is offered a formal semantics (which is of one interpretation rather than another) – that's just more symbols (S's silly reply).
Soon you'll encounter the 'formal semantics of programs', so don't worry about this bit.
If offered access to objects via a robot prosthesis from inside the box, Searle replies that that's just more program, or that it won't have reliable ostension/reference like us.
Later moves:
S makes having the right stuff necessary for having I-states (becoming a sort of biological materialist about people: thinking/intentionality requires our biological make-up, i.e. carbon not silicon. Hard to argue with this, but it has no obvious plausibility).
He makes no program necessary – this is just circular – and would commit him to withdrawing intentionality from cats if …. etc. (Putnam's cats).
Remember Strong AI is the straw
man of all time
“computers, given the right programs can be
literally said to understand and have other
cognitive states”. (p.417)
Searle has never been able to show that any AI
person has actually claimed this!
Consider the internalised Chinese "speaker": is he mentally ill? Would we even consider that he didn't understand? What semantics might he lack? For answering questions about S's paper? For labels, chairs, hamburgers?
The residuum in S’s case is intentional states.
[Weak AI – mere heuristic tool for study of the
mind]
The US philosopher Putnam
made it hard to argue that things
must have certain properties.
He said: suppose it turned out that all cats
were robots from Mars.
What would we do?
Stop calling cats ‘cats’--since they didn’t have
the ‘necessary property’ ANIMATE?
Just carry on and agree that cats weren’t
animate after all?
Dennett: I-state is a term in S’s vocabulary for
which he will allow no consistent set of
criteria – but he wants people/dogs in and
machines out at all costs.
Suppose an English speaker learned up Chinese by tables and could give a good performance in it (and so would be like the operator OUT OF THE ROOM).
Would Searle have to say he had no I-states about things he discussed in Chinese?
Symbol grounding
Is there any solution to the issues raised
by Searle’s Chinese Room? Are there
any ways of giving the symbols real
meaning?
Harnad, S. (1990) The Symbol Grounding Problem. Physica D, 42, 335-346.
Copy of paper can be obtained from:
(http://www.cogsci.soton.ac.uk/harnad/genpub.html)
Computation consists of the manipulation of meaningless symbols.
For them to have meaning they must be grounded in a non-symbolic base.
Otherwise it is like trying to learn Chinese from a Chinese dictionary.
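A toy illustration of that dictionary point (the entries are invented): chasing definitions through a dictionary alone never bottoms out in anything non-symbolic -- every definition is just more symbols.

dictionary = {                       # hypothetical, circular entries
    "zhu": "a kind of gou",
    "gou": "the opposite of mao",
    "mao": "something related to zhu",
}

def define(word, depth=3):
    # chase definitions; all we ever get back is further undefined symbols
    if depth == 0 or word not in dictionary:
        return word
    return " ".join(define(w, depth - 1) for w in dictionary[word].split())

print(define("zhu"))   # still nothing but symbols, however deep we go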
Not enough for symbols to be ‘hooked up’
to operations in the real world. (See
Searle’s objection to robot answer.)
Symbols need to have some intrinsic
semantics or real meaning.
For Harnad, symbols are grounded in iconic representations of the world.
Alternatively, imagine that symbols
emerge as a way of referring to
representations of the world - representations that are built up as a result of
interactions with the world.
Does Harnad's account of symbol
grounding really provide an answer to
the issues raised by Searle’s Chinese
Room?
What symbol grounding do humans
have?
Symbols are not inserted into our heads
ready-made.
For example, before a baby learns to
apply the label ‘ball’ to a ball, it will
have had many physical interactions
with it, picking it up, dropping it, rolling
it etc.
For instance, a robot that learns from scratch
how to manipulate and interact with objects in
the world.
(Remember Dreyfus's argument that intelligent things MUST HAVE GROWN UP AS WE DO.)
In both accounts, symbols are no longer empty and meaningless, because they are grounded in a non-symbolic base - i.e. grounded in meaningful representations.
(Cf. formal semantics on this view!)
Another famous example linking
meaning/knowledge to
understanding:
This is the argument that we need stored
knowledge to show understanding.
Remember McCarthy’s dismissal of
PARRY--not AI because it did not know
who was president.
Is knowledge of meaning different from
knowledge? ‘The Edelweiss is a flower
that grows in the Alps’.
Famous example from history of
machine translation (MT)
Bar-Hillel's proof that MT was IMPOSSIBLE (not just difficult):
Little Johnny had lost his box
He was very sad
Then he found it
The box was in the PEN
Johnny was happy again
A child eventually forms a concept of what 'roundness' is, but this is based on a long history of many physical interactions with the object.
Perhaps robotic work in which symbols
emerge from interactions with the real
world might provide a solution.
See work on Adaptive Behaviour e.g.
Rodney Brooks.
Bar-Hillel's argument:
The words are not difficult, nor is the structure.
To get the translation right in a language where 'pen' is NOT both playpen and writing pen,
you need to know about the relative sizes of playpens, boxes and writing pens,
i.e. you need a lot of world knowledge.
One definition of AI is: knowledge based processing.
Bar-Hillel and those who believe in AI look at the 'box' example and
AGREE about the problem (it needs knowledge for its solution), but
DISAGREE about what to do (for AI it is a task, for B-H it is impossible).
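A toy illustration of the kind of knowledge based processing the box/pen example calls for (the size table and the rule are invented for illustration; this is not a real MT system):

# World knowledge: rough relative sizes (invented ordering, illustration only).
SIZE = {"writing pen": 1, "box": 2, "playpen": 3}

def disambiguate_pen(contained_object="box"):
    # pick the sense of 'pen' that could physically contain the given object
    candidates = [sense for sense in ("writing pen", "playpen")
                  if SIZE[sense] > SIZE[contained_object]]
    return candidates[0] if candidates else "writing pen"

# "The box was in the pen": only a playpen is big enough to hold a box.
print(disambiguate_pen("box"))   # playpen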