A New Artificial Intelligence 5
Kevin Warwick
Philosophy of AI II
• Here we will look afresh at some of the arguments
• Brain prosthesis experiment
• Chinese room problem
• Technological Singularity
Brain Prosthesis Experiment
• Assume we fully understand the workings of human brain cells and can make devices which perform exactly the same function.
• Surgical techniques have developed so that we can replace individual neurons with their microscopic equivalents without interrupting the workings of the brain.
• Cell by cell the whole brain is replaced. It is then restored, by a reversal, to its original state. (A conceptual sketch of the procedure follows below.)
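The structure of the experiment can be read as a simple procedure: swap one unit at a time for a functional duplicate, check that outward behavior never changes, then reverse every swap. The sketch below is purely illustrative and assumes version (a); the Unit class and the behaviour function are invented placeholders, not a model of real neurons.

```python
# Illustrative sketch of the thought experiment's procedure (version (a)).
# "Unit" and "behaviour" are invented placeholders, not neuroscience.
from dataclasses import dataclass

@dataclass
class Unit:
    function: int       # stands in for whatever the cell computes
    biological: bool    # True = original neuron, False = artificial device

def behaviour(brain):
    """Observable behavior depends only on what each unit computes,
    not on what it is made of -- the experiment's key assumption."""
    return sum(u.function for u in brain)

brain = [Unit(function=i, biological=True) for i in range(10)]
baseline = behaviour(brain)

# Cell by cell, replace each neuron with a device of identical function.
for i, u in enumerate(brain):
    brain[i] = Unit(u.function, biological=False)
    assert behaviour(brain) == baseline   # behavior never changes

# Reversal: restore the original biological cells.
for i, u in enumerate(brain):
    brain[i] = Unit(u.function, biological=True)
    assert behaviour(brain) == baseline
```

Note that the loop only tracks behavior; the dispute between versions (a) and (b) is whether anything the behaviour function cannot see, the conscious experience itself, survives the swaps.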
Question
• Would the person’s consciousness remain the same throughout the process?
• If the individual smells a flower when in both versions, either:
• (a) consciousness that generates the resultant feelings still operates in the technological version, which is conscious in the same way, or
• (b) conscious mental events in the normal brain have no connection to behavior and are missing in the technological brain, which is not conscious.
• Presumably after reversal the individual will be conscious, although they may suffer memory loss.
Oofle Dust and Modelling
• Version (b) is called epiphenomenal: something occurs but has no effect in the real world. Oofle dust – not science!
• Version (a) does require that replacement neurons, and their connections, are identical to the original. If we can, using present-day physics, accurately form a model of the human brain, then we should be able to carry out the experiment.
• One argument says that although we might be able to copy the neurons extremely closely, we will never be able to copy them ‘exactly’. Subtle differences due to chaotic behavior or quantum randomness would still exist, and these differences are critical.
Penrose/Warwick
• Penrose - it is our present-day understanding of physics that is to blame. For the very small elements that cannot be copied, “such non-computational action would have to be found in an area of physics that lies outside the presently known physical laws”. If we could discover these laws, then version (a) would be quite possible.
• Warwick - In this argument we are not concerned with whether or not the technological brain is conscious, but whether or not it is conscious in the same way as the original human brain. In the discussion of rational AI, the possibility of artificial intelligence being conscious, in its own way, is not in question. What is in question is whether this could be identical to human consciousness.
Reality Check
• No matter how good the model, there will be differences between the human and technological brain. But the model could be very close, which means that the form of consciousness exhibited by the technological brain could be so close to that of the human brain as makes no difference.
• This is a philosophical exercise. The human brain is a complex organ, full of highly connected neurons. If one neuron is actually removed, the overall effect may be negligible or it may be dramatic, with the individual’s behavior changing completely.
The Chinese Room
• The Chinese Room argument is a neat argument originated by John Searle in an attempt to show that a symbol-processing machine (a computer) can never be properly described as having a “mind” or “understanding” or being “conscious”, no matter how intelligently it may behave.
The Argument I
• A computer takes Chinese characters as input and follows the instructions of a program to produce other Chinese characters, which it presents as output.
• The computer does this so convincingly that it comfortably passes the Turing Test: it convinces a human Chinese speaker that it is itself a human Chinese speaker.
• It could be argued that the computer “understands” Chinese – Strong AI.
• Without “understanding” we cannot describe what the machine is doing as “thinking”. Because it does not think, it does not have a “mind” in anything like the normal sense of the word. Therefore “strong AI” is mistaken (Searle).
The Argument II
• Suppose that you are in a closed room and that you have a book with an English/Czech version of the same program. You can receive Chinese characters, process them according to the instructions, and produce Chinese characters as output. As the computer passed the Turing Test this way, it is fair to deduce that you will be able to do so as well.
• There is no difference between the computer’s role in the first case and the role you play in the latter. Each is simply following a program which simulates intelligent behavior. Yet you do not understand a word of Chinese. Since you do not understand Chinese, we can infer that the computer does not understand Chinese either. (A minimal sketch of such rule-following appears below.)
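To make “simply following a program” concrete, here is a minimal sketch of the kind of purely syntactic lookup the person in the room, or the computer, performs. The rule book and its entries are invented toy examples; the point is that however large such a table grows, executing it attaches no meaning to the symbols.

```python
# Toy sketch of purely syntactic rule-following: the rule book below is a
# made-up handful of input -> output pairs, not a real conversation system.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I am fine, thanks."
    "你是人吗？": "当然是。",       # "Are you human?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Match the input symbols against the rules and emit the prescribed
    output symbols; no meaning is attached to the characters shuffled here."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))   # a fluent-looking reply with zero understanding
```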
The Argument III
• Searle’s argument is that you have something more than the machine.
• You could ‘learn’ Chinese – but could the machine?
• You can ‘understand’ a language – the machine cannot. You have something extra that the computer does not have: consciousness.
• Humans have beliefs, while thermostats, adding machines, and shoes don’t.
Comments on the Chinese Room
• Obviously human-centric
• Much philosophical discussion
• But what exactly is the conclusion?
• Searle - There are “properties” in human neurons that give rise to the mind. These properties cannot be detected by anyone outside the mind, otherwise the computer couldn’t pass the Turing Test.
• This implies the human mind is epiphenomenal – oofle dust
Points
• Can do exactly the same argument (machine comms etc.) to prove that machines are conscious but humans are not
• Human/shoe – machine/cabbage
• Learning/programmed?
• Much philosophical argument though!
Technological Singularity
• Vinge (1993) “Within 30 years we will have the technological means to create superhuman intelligence”.
• Warwick (1998) “There is no proof, no evidence, no physical or biological pointers that indicate that machine intelligence cannot surpass that of humans”.
• Moravec (2000) Robots will match human intelligence in 50 years, then exceed it – they will become our “Mind Children”.
Ray Kurzweil
• “a strong trend toward the merger of human thinking with the world of machine intelligence”.
• “There will no longer be any clear distinction between humans and computers”.
• Singularity – point where humans lose control.
Stephen Hawking
• “In contrast with our intellect, computers double their performance every 18 months.” (See the arithmetic sketch below.)
• “The danger is real that they could develop intelligence and take over the world.”
• “We must develop as quickly as possible technologies that make a direct connection between brain and computer.”
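Taking the doubling claim at face value, the implied growth is easy to work out: performance after n years is 2^(n/1.5) times today's. Below is a small sketch; the unit baseline and the 30-year horizon (echoing Vinge) are illustrative assumptions, not data.

```python
# Rough arithmetic behind "doubling every 18 months". The 30-year horizon
# (echoing Vinge) and unit baseline are illustrative assumptions, not data.

DOUBLING_PERIOD_YEARS = 1.5

def relative_performance(years: float) -> float:
    """Performance relative to today after the given number of years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 15, 30):
    print(f"after {years:2d} years: x{relative_performance(years):,.0f}")
# after  3 years: x4
# after 15 years: x1,024
# after 30 years: x1,048,576
```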
Cyborgs
• What is the intelligence, consciousness, and ability of a combined human/machine brain?
Next
• Turing Update – can machines communicate like a human?
Contact Information
• Web site: www.kevinwarwick.com
• Email: [email protected]
• Tel: (44)-1189-318210
• Fax: (44)-1189-318220
• Professor Kevin Warwick, Department of Cybernetics, University of Reading, Whiteknights, Reading, RG6 6AY, UK