Transcript
Lecture 41 of 42
Philosophy of Mind
Discussion: Final Exam Review
Wednesday, 10 December 2008
William H. Hsu
Department of Computing and Information Sciences, KSU
KSOL course page: http://snipurl.com/v9v3
Course web site: http://www.kddresearch.org/Courses/Fall-2008/CIS730
Instructor home page: http://www.cis.ksu.edu/~bhsu
Reading for Next Class:
Chapters 1-14, 22-23, 26, Russell & Norvig, 2nd edition
PHILOSOPHY OF MIND
© 2006 Hilary Greaves
http://www.rci.rutgers.edu/~hgreaves/teaching/phil103/lectures.htm
The Two Central Problems in the Philosophy of Mind
 The Problem of Other Minds
 How can I know that other minds exist?
[Figure: my mind (thoughts, feelings, sensory experiences, etc.) vs. another person's mind]
The Problem of Other Minds (cont’d)
 How can I know what is going on in other minds?
[Figure: one person afraid, the other looking forward to next summer's holiday]
The Mind-Body Problem
 How are minds and their contents related to the physical, chemical & biological world?
Two (apparently unique) aspects of mental phenomena
 Consciousness
 Your mind is conscious. But as far as we know, ordinary bits of physical matter are not conscious.
 "What is consciousness?"
 You know!
Intentionality (or aboutness)
 Mental states can be about other things in the world.
 Your thought that Bush is a jerk is about George Bush.
[Figure: the thought "Bush is a jerk", with an "aboutness" arrow pointing at Bush]
Intentionality/aboutness
 Ordinary physical objects aren’t about anything.
 The desk is not about the chair.
Intentionality/aboutness
 (A painting can be about something...
Intentionality/aboutness
 ... But that's only because it was painted by someone who was thinking about the thing he was painting.)
Terminological note
 'Intentionality' has no special connection to intentions.
 Mental states that have 'aboutness' (= 'intentionality'):
 Bush's knowledge that Saddam Hussein is alive is about Saddam.
 Tom's fear of the dentist is about dentists.
 My desire that Australia will be fun is about Australia.
 Your intention to eat dinner this evening is about your dinner.
Intentionality/aboutness
 The challenge: an adequate explanation of what minds are should
explain how mental goings-on can be about things, since
ordinary physical goings-on do not have this feature.
Why explaining ‘aboutness’ is difficult
 How can thoughts be about things that exist in the world?
 There's no such thing as 'intentional string' physically tying a thought to its object.
 The asymmetry problem: aboutness can't be resemblance. (A portrait and its subject resemble each other equally, but only the portrait is about the subject.)
[Figure: the thought "Bush is a jerk", with an "aboutness" arrow pointing at Bush]
An attempt to explain what ‘aboutness’ is
 My thought is about Bush = my thought was ‘caused in the right
sort of way’ by Bush.
[Figure: a causal chain running from Bush to the thought "Bush is a jerk"]
Why the causal account doesn’t work
 We can also think about things
that don’t exist.
 Example: the thought "It would be fun to ride a unicorn." There is no unicorn to stand at the head of the causal chain.
Cartesian Dualism
 Minds and Matter are two fundamentally different kinds of "substances".
[Figure: mental stuff (minds): thoughts, feelings, sensory experiences, consciousness, intentionality, thought; vs. physical stuff (matter)]
Problems with Cartesian Dualism (I)
 It's mysterious what the "mind substance" is.
 What it’s not:
 Not made of matter
 Not located in space
 Not ‘extended’
 ...
Problems with Cartesian Dualism (II)
 It's a mystery how minds can interact causally with things made of matter.
 Mental causing physical:
[Figure: the thought "I'm going to raise my arm" (mental substance) causes the arm (matter) to rise]
Problems with Cartesian Dualism (II)
 Physical causing mental:
[Figure: sound waves (matter) cause a sound experience (mind)]
Problems with Cartesian Dualism (III)
 Are dualism and gradual evolution consistent? (At what point in a gradual evolutionary history would an immaterial mind-substance first have appeared?)
Problems with Cartesian Dualism (IV)
 It makes the Other Minds problem very hard to solve.
[Figure: my mind (thoughts, feelings, sensory experiences, etc.) vs. another person's mind]
"Materialist" solutions to the Mind-Body Problem
 Materialism: The claim that everything in the universe is made up of
matter.
The advance of materialism
 The advance of science has made materialism look very plausible to many philosophers & scientists.
 Astronomy: The heavenly bodies are made of matter and obey the laws of physics.
[Figure: the "heavens" and the earth]
The advance of materialism
 Evolutionary biology: No non-material processes or forces (e.g. God)
are needed to explain the design in the biological world.
The advance of materialism
 Advances in physiology & understanding the genetic code: There is no need for a "life force" ("élan vital").
[Figure: dead tiger vs. live tiger]
The advance of materialism
 Advances in understanding the way the brain works, helped by imaging technology, make it increasingly plausible that mental phenomena like thought and consciousness might be explained in terms of brain processes.
[Figure: brain imaging; "spooky stuff??"]
The “Reductive Materialism” hypothesis
 The “Reductive Materialism” hypothesis:
 social sciences can be reduced to (i.e. explained by appeal to)
psychology
 psychology can be reduced to biology
 biology can be reduced to chemistry
 chemistry can be reduced to physics
Behaviorism
 The way we tell that (e.g.) you're in pain is by observing your
behavior. The inspiration for behaviorism: maybe what it means to
say you're in pain also involves your behavior.
‘Behaviorism’ in psychology
 Methodological (or: psychological) behaviorists included B. F. Skinner
 Methodological behaviorism: If there is anything more to mental states than dispositions
to behave in certain ways, that 'extra bit' has nothing to do with science.
‘Behaviorism’ in philosophy
 Analytical (or: philosophical) behaviorists included Ludwig
Wittgenstein and Gilbert Ryle: there is nothing more.
[Figure: "I am feeling angry" EQUALS the angry behavior]
Analytical behaviorism
 Analytical behaviorism: Claims about
mental states and processes can be
“translated” into claims about patterns of
behavior.
 Example:
 “Tom has a toothache” = “Tom moans;
Tom says his tooth hurts; if someone
touches Tom’s tooth, Tom screams; etc."
Analytical behaviorism
 “Jenny is hungry” = ??
 “Jason wishes he could quit school” = ??
Behaviorism and the Problem of Other Minds
[Figure: "I am feeling angry" EQUALS the angry behavior]
Problems for behaviorism (I): Undefinable mental states
 Some mental states don't seem to be definable in this way.
 Listening to Bob Dylan
 Thinking about how big the universe is
Problems for behaviorism (II): Circular definitions
 Behaviorist “definitions” of mental state terms all turned out to be
circular (or just plain wrong).
 'Tom believes it will rain today' = 'Tom will either stay at home or drive to school today' IF Tom wants to stay dry, and Tom doesn't believe there's a shelter at the bus stop, and...
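
A toy sketch (my own illustration, not from the lecture; all function names are hypothetical) of why such "translations" regress: each behavioral definition of a belief appeals to further beliefs and desires, which need behavioral definitions of their own, so the translation never bottoms out in pure behavior.

```python
# Hypothetical behaviorist "definitions" that turn out mutually recursive.

def stays_home_or_drives(person):          # observable behavior
    return True

def carries_umbrella(person):              # observable behavior
    return True

def believes_no_shelter_at_stop(person):   # would need its own "definition"...
    return True

def believes_it_will_rain(person):
    # "stays home or drives ... IF he wants to stay dry AND doesn't
    # believe there's shelter" -- the IF-clause names more mental states.
    return (stays_home_or_drives(person)
            and wants_to_stay_dry(person)
            and believes_no_shelter_at_stop(person))

def wants_to_stay_dry(person):
    # Defining the desire behaviorally drags the belief back in: circular.
    return carries_umbrella(person) and believes_it_will_rain(person)

try:
    believes_it_will_rain("Tom")
except RecursionError:
    print("The 'translation' never bottoms out in pure behavior.")
```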
Problems for behaviorism (II): Circular definitions
 "James believes the exam will be hard" = ....??
Problems for behaviorism (III): The 'inverted spectrum' problem
 It seems possible for one person to have their spectrum of color experiences 'inverted' relative to another's. But according to behaviorism, this is not possible.
[Figure: two people, each saying "The flower looks yellow", whose inner color experiences may nonetheless differ]
Problems for behaviorism (IV): Some wrong predictions
 For behaviorists, two people who behaved in just the same way would have the same mental states. But there are cases in which this is clearly crazy.
 Dennett's thought experiment: curare plus "amnestic"
[Figure: administer general anaesthetic; operation; afterwards ask "How did it feel?"]
Problems for behaviorism (IV): Some wrong predictions
 "Curare": paralyses all voluntary muscles
 "Amnestic": has no effect until 2 hours after ingestion, whereupon it wipes out memory of those two hours
[Figure: administer curare + amnestic; operation; afterwards ask "How did it feel?"]
... The problem (an argument against behaviorism):
(P1) Behaviorism predicts that the patient who is given general anaesthetic has the same experiences as the patient who is given curare + amnestic.
(P2) But that's wrong! One is unconscious, and the other is in excruciating pain.
Therefore,
(C) Behaviorism is false.
The (Type-Type) Identity Theory
 The type-token distinction
 Example: the string "the the" contains two word tokens but only one word type.
 Greaves's belief that snow is white & your belief that snow is white are two token beliefs of the same type.
[Figure: two thought bubbles, each "Snow is white"]
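
A two-line sketch of the distinction (my illustration, not the lecture's): counting tokens versus types in "the the".

```python
words = "the the".split()
print(len(words))       # 2 tokens
print(len(set(words)))  # 1 type
```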
The (Type-Type) Identity Theory
 The type-type identity theory: Mental state types are identical with brain state types.
 Examples of mental state types:
 the belief that snow is white
 a burning pain in the index finger
 the thought that 17 is a prime number
 Example of a type-type identification:
 "Pain is c-fibers firing"
[Figure: a pain and a c-fiber firing as the same event (type)]
How the theory deals with the Other Minds Problem
 ..??
Problem for the (Type) Identity Theory: "Chauvinism" about the mental
 According to type-type identity theory:
 Animals with brains significantly different from ours can't feel pain or have other mental states.
 If there are organisms in the universe whose chemical composition is different from ours, they can't feel pain or pleasure, and they can't think.
 Extra-terrestrials can't even think about math (e.g. that 7 x 5 = 35).
Functionalism (aka “machine functionalism”)
 The emergence of computers and the computer model of the mind.
 Computers are symbol manipulators
 Programs specify how the symbols are to be manipulated.
[Figure: an ADDING PROGRAM takes inputs 2 and 3 and outputs 5]
Programs and physical devices
 Many different sorts of physical devices can run the same program...
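
A minimal sketch of this "multiple realizability" point (my own illustration; the device classes are hypothetical): the very same adding program runs unchanged on physically quite different substrates.

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Any physical substrate able to carry out primitive steps."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class SiliconChip(Device):
    def add(self, a, b):              # realized in transistors
        return a + b

class MechanicalCalculator(Device):
    def add(self, a, b):              # realized in gears: count b notches
        result = a
        for _ in range(b):
            result += 1
        return result

def adding_program(device: Device, x: int, y: int) -> int:
    # The same program, indifferent to what the device is made of.
    return device.add(x, y)

print(adding_program(SiliconChip(), 2, 3))           # 5
print(adding_program(MechanicalCalculator(), 2, 3))  # 5
```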
Minds and programs
 Minds are to brains as programs are to computers
 “The mind is the brain’s program”
[Figure: a computer's PROGRAM maps inputs (mouse clicks, keyboard strokes) to outputs (screen displays; printouts); the mind, as the brain's "program", maps inputs (light rays; hammers hitting thumb) to outputs (body movements; screams; sentences spoken)]
‘Functional states’
 Mental state concepts are functional concepts; mental states are functional states (not physical states).
 'Functional states' for the adding program:
[Figure: state "ready to do an add calculation"; Input 2; state "remember that a 2 has been input, and get ready to add a 2nd number"; Input 3; Output 5]
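
A short sketch of the slide's diagram as a machine table (my reconstruction, not the lecture's own code): each functional state is individuated by what it does with the next input, not by what it is physically made of.

```python
def adding_machine():
    state, first = "READY", None       # 'ready to do an add calculation'
    while True:
        token = yield
        if state == "READY":
            first = token              # remember that a 2 has been input...
            state = "AWAIT_SECOND"     # ...and get ready to add a 2nd number
        elif state == "AWAIT_SECOND":
            print(first + token)       # Output 5
            state = "READY"

m = adding_machine()
next(m)     # start the machine in its READY state
m.send(2)   # Input 2
m.send(3)   # Input 3 -> prints 5
```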
‘Functional states’ cont’d
[Figure: the input "see that it's raining" produces the belief that it's raining; together with the desire to stay dry and the belief that there is no shelter at the bus stop, these functional states produce the output "look for car keys"]
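
The same idea applied to mental states, as a hedged sketch (the names and the percept encoding are mine): each belief or desire is characterized purely by its causal role, i.e. which inputs produce it and which outputs it, together with the other states, produces.

```python
def behave(percepts: set) -> str:
    # Each 'mental state' below is just a role in the input->output mapping.
    believes_raining = "rain seen" in percepts             # caused by the input
    desires_dry = True                                     # a standing desire
    believes_shelter_at_stop = "shelter seen" in percepts
    if believes_raining and desires_dry and not believes_shelter_at_stop:
        return "look for car keys"                         # joint output
    return "walk to the bus stop"

print(behave({"rain seen"}))   # -> look for car keys
```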
What is...
 The functionalist solution to the Other Minds Problem?
 The functionalist solution to the chauvinism problem?
Strong AI
 A radical (?) implication of functionalism, "Strong AI": it is possible to build artificial minds with real mental states.
 A computer running the same program that your brain
is running would have the same mental states that
you have.
 It would be conscious, and thus feel pains and
pleasures, have emotions, etc.
 It would have thoughts with real intentionality.
Can machines think?
 If they can, then the fact that functionalism predicts
that they can counts in favor of the functionalist
theory. If not, it counts as an objection to
functionalism.
 The Turing test – if a machine passes the Turing
test, we cannot tell that it isn't really thinking
 It is a further step to say that if a machine passes the
Turing test, then it is thinking. But perhaps (?) this
extra step is very plausible.
[Figure: sample exchange. Judge: "What do you think about Saddam's trial?" Respondent: "I don't normally approve of the death penalty, but this guy deserves everything he gets."]
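
A minimal sketch of the test's protocol (entirely my own illustration; the respondents are stubs, not a real chatbot): the judge converses with two hidden parties and must identify the machine, and the machine "passes" if the judge does no better than chance.

```python
import random

def human(question):                       # stub respondent
    return "I don't normally approve of the death penalty, but..."

def machine(question):                     # hypothetical chatbot stub
    return "I don't normally approve of the death penalty, but..."

def turing_test(judge, rounds=1000):
    correct = 0
    for _ in range(rounds):
        players = [("human", human), ("machine", machine)]
        random.shuffle(players)            # judge cannot see who is who
        answers = [respond("What do you think about Saddam's trial?")
                   for _, respond in players]
        guess = judge(answers)             # index the judge picks as machine
        correct += (players[guess][0] == "machine")
    return correct / rounds                # near 0.5 means the machine passes

print(turing_test(lambda answers: random.randrange(2)))  # ~0.5
```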
Searle's Critique of Functionalism and Strong AI
 Roger Schank's project: Getting a computer to understand stories and answer questions about them the way people do.
 Story: "A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burnt to a crisp, and the man stormed out of the restaurant angrily, without paying for the burger or leaving a tip."
 Question: "Did the man eat the hamburger?" The program answers: "No."
What Schank’s computer is doing
 It’s manipulating symbols.
 Does this mean that it understands what the symbols mean (i.e.
understands the story, and understands its replies to the questions)?
[Figure: is the program really "thinking of a restaurant scene"? (aboutness)]
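
A toy sketch loosely in the spirit of Schank-style "scripts" (not his actual SAM program; the event names are mine): the answer comes from matching the story's events against a stereotyped restaurant script, by symbol manipulation alone.

```python
# Stereotyped restaurant script: the normal order of events.
restaurant_script = ["enter", "order", "food arrives", "eat", "pay", "leave"]

# Events actually asserted by the story.
story_events = ["enter", "order", "food arrives burnt",
                "storm out angrily", "no pay", "no tip"]

def answer_did_he_eat(script, events):
    # 'eat' appears in the script but is never instantiated by the
    # story's events, so the program answers "No" -- pure symbol matching.
    assert "eat" in script
    return "Yes" if "eat" in events else "No"

print(answer_did_he_eat(restaurant_script, story_events))  # -> No
```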
Searle’s argument
 Understanding is an ‘intentional state’ – when you understand a story,
you understand what it is about.
 Searle’s going to argue that the computer could be manipulating symbols
in all the right ways, without understanding the story (= without having
‘intentionality’).
 The "Chinese room argument": An argument that passing the Turing test
is not sufficient or thinking.
The setup of the "Chinese room"
[Figure: Chinese symbols come in as input; inside the room, someone who speaks no Chinese manipulates them by following a rulebook; Chinese symbols go out as output]
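
A minimal sketch of what happens inside the room (my illustration, with assumed rulebook entries, not Searle's text): rulebook lookup over uninterpreted symbols, where nothing in the code needs to know what any symbol means.

```python
# The 'rulebook': shape-matching rules pairing input symbols with output symbols.
rulebook = {
    "你好吗?": "我很好,谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(input_symbols: str) -> str:
    # Match the shapes, copy out the paired answer; no understanding required.
    return rulebook.get(input_symbols, "请再说一遍。")  # "Please say it again."

print(chinese_room("你好吗?"))  # -> 我很好,谢谢。
```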
Searle’s argument
(P1) Neither Searle nor any other part of the Chinese Room really understands Chinese.
Therefore,
(C1) The Chinese Room [i.e. the system] does not understand Chinese. (From (P1))
(P2) But the Chinese Room perfectly simulates someone who does understand Chinese.
Therefore,
(C2) Simulating understanding is not sufficient for having understanding. (From (P2), (C1))
Therefore,
(C3) Even if Schank's computer perfectly simulates human understanding of stories, it does not
follow that Schank's computer really understands stories.
Searle’s Account of Intentionality
 It is a “causal product” of the right kind of biological system.
[Figure: the ADDING PROGRAM again; "adding" is not a "causal product" of the symbol manipulation]
Searle’s Account of Intentionality
[Figure: pick up a brick and throw it at a window; the window breaking is a causal product of the brick being thrown]
Searle’s Account of Intentionality
 It cannot be created simply by symbol manipulation.
 Searle makes the same claims for consciousness.
A Problem for Searle’s View
 Either intentionality and consciousness are restricted to brains
like ours
 in which case he is committed to chauvinism
 Or brains quite different from ours can also produce intentionality
& consciousness
 in which case the Other Minds Problem looks to be unsolvable since
we can’t tell which brains just simulate consciousness & intentionality
& which really have it.
Some questions for YOU to ponder
 Can a suitably sophisticated computer which is NOT made out of
“meat” like the human brain
 have real intentionality (and thus real thoughts)?
 have real consciousness (feel real pain & pleasure, and know what it
is like to experience colors & tastes)?
 Does it matter? If so, why?
Some questions for YOU to ponder
 Can an extra-terrestrial with a suitably sophisticated brain that is very
different from our brain
 have real intentionality (and thus real thoughts)?
 have real consciousness (feel real pain & pleasure, and know what it is like to
experience colors & tastes)?
 How can we KNOW whether the computer or the extra-terrestrial has
REAL consciousness and REAL intentionality?