CAN MACHINES THINK?

Seminar by





Annervaz (07305063)
Jaideep (06305R01)
K.V.M.V Kiran (04005031)
L. Srikanth (04005029)
Vasudevan (06405004)

“I propose to consider the question, 'Can
machines think?' ”

-Alan Turing
Motivation

Computers today are capable of doing tasks that were
previously thought to be exclusively in the human domain

Is there any limitation to a machine's capability?

Can a machine do everything that the human brain can do?

Machine “Intelligence” <=> Human Intelligence

Hot debate in AI community
Outline

Introduction

Can Machines Think?

Imitation game – Turing test

Proponents and opponents of TT

Introduction to interactive proof

TT as Interactive proof

Conclusion
“can” “machines” “think”



“can”

Theoretically possible?

Practically realizable?
“machines”

Every engineering technique permitted

Manner of operation need not be completely known
“think”

“thinking is as thinking does”

Intelligence necessary and sufficient for thinking
Intelligence (contd...)

“Intelligence” -

Based on action:

Rational action: Act to achieve best (possible) outcome,
given what one knows

Human actions: the Turing Test is designed to measure this
Intelligence (contd...)

Based on thinking:



Rational thinking: irrefutable reasoning processes;
argument structures that always yield correct conclusions
given correct premises

Human thinking: studied in cognitive science, which
attempts to combine AI and experimental techniques
from psychology
Artificial Intelligence:

when an entity other than a natural life form possesses
“intelligence”
Outline

Introduction

Can Machines Think?

Imitation game – Turing test

Proponents and opponents of TT

Introduction to interactive proof

TT as Interactive proof

Conclusion
Can Machines Think?
Mathematical Objection

Gödel's Theorem
Gödel's Theorem
• Any sufficiently powerful consistent axiomatic system is
necessarily incomplete, in that there will always be
statements that can neither be proved nor disproved
from its axioms.
• Corollary: no sufficiently powerful consistent formal system
can prove its own consistency.
Gödel's Theorem – Proof Outline


The key idea is to construct a proposition P which asserts "P
is not provable"
P is neither provable nor disprovable, as either case leads to a
contradiction

If P were false, P would be provable, i.e. the system would prove a false statement

If P were true, P would not be provable, i.e. there would be a true statement the system cannot prove
Implications of Gödel's
Theorem
• Roger Penrose's claim:
"Human mathematicians are not using a knowably
sound algorithm in order to ascertain mathematical truth."
If they were, it would constitute an algorithm which
can assert its own soundness, which Gödel's theorem
proves is impossible.
Implications of Gödel's
Theorem
• People seem to simply be able to "see" the truth of some
statements ("intuition" and "insight")
• Provability is a weaker notion than truth
• Lucas' argument:
• "Some Gödel statements, the machine will be unable to
produce as true, although a mind can see that it is true. And
so the machine will not be an adequate model of the mind."
Counter arguments
• Even humans are subject to Gödel-style limitations
• "Lucas cannot prove this statement"
• Every human other than Lucas can see that this statement is true
• But Lucas can never prove it, from just being 'inside' his
'system'.
• The mind may be exhibiting 'rational' inconsistency, and may
thereby be a formal system consistent with Gödel's theorem
Can Machines Think?
Consciousness Objection

Arguments

no mechanism can feel:


Anger, grief, warmth, pleasure
Counter Arguments

What is feeling?

Machine can have its own set of feelings.

Aren't external actions enough?
Can Machines Think?
Lady Lovelace Objection


Arguments

Machine is deterministic

Machine cannot originate anything
Counter Arguments

Learning Machines

Low-level determinism does not imply predictable
high-level behavior
Can Machines Think?
Continuity vs. discreteness

Arguments

The nervous system is continuous

A discrete-state system like a digital computer cannot simulate
the nervous system

Counter arguments:

Why should real thought be located only in a continuous-state
system?

A discrete-state system may still be intelligent

Church-Turing Hypothesis
Can Machines Think?
Theological Objection


Arguments

Thinking is a part of the human soul

Man is made in God's own image

An intelligent machine could be a threat to humans
Counter Arguments

Why can't God's image be passed on to machines?

Why be so pessimistic and selfish?
Outline

Introduction

Can Machines Think?

Imitation game – Turing test

Proponents and opponents of TT

Introduction to interactive proof

TT as Interactive proof

Conclusion
Imitation game – Turing test

Objective: the interrogator (C) determines which of the other
two players (A, a man, and B, a woman) is the man and which is the woman

B: tries to help C

A: tries to cause C to make the wrong identification
Imitation game – Turing test

Replace A with a machine

Will the interrogator decide wrongly as often as earlier?

This replaces 'Can Machines think?'

Practical version: "Will an average interrogator have
more than, say, a 70% chance of making the right
identification after 5 minutes of questioning?"
Turing Test

Test of adequacy of an agent's verbal behavior

TT is based on the idea that the ability to produce
sensible verbal responses is intelligence

Tests the "human action" part of intelligence

Can be formulated as the argument below:
Turing Test (contd...)


Premise 1: If an agent passes a TT, then it produces a
sensible sequence of verbal responses to a sequence
of verbal stimuli.
Premise 2: If an agent produces a sensible sequence
of verbal responses to a sequence of verbal stimuli, then
it is intelligent.

Conclusion: Therefore, if an agent passes a TT then it is
intelligent.
Outline

Introduction

Machines can't think

Counter arguments - 'machines can think!'

Imitation game – Turing test

Proponents and opponents of TT

Introduction to interactive proof

TT as Interactive proof

Conclusion
Opponents of Turing Test


It is widely agreed that TT is not a necessary
condition for intelligence.
Any machine would require sensory organs and
sociological training to pass a TT: a very difficult task
even for an "intelligent" machine.

TT is of little value in guiding actual AI research

Total test: should involve responding to all inputs,
not just verbal ones
Block's Argument

TT merely tests behavior

TT is silent about internal workings.

Memorizing machine: machine that stores sensible
responses to all possible sequences of verbal inputs

Practically infeasible but possible in principle

Such a machine does not fit into our concept of “intelligence”

So TT is not a sufficient condition for intelligence

Some extra conditions on the workings of the machine are
required
Proponents-Stuart's argument

TT a sufficient condition for intelligence!

logically necessary for intelligence

Extra conditions can be revealed by a TT

“Slight weakening” of proof criterion required

Weakening: Statistical Proof instead of a logical one

Weakening makes no conceivable difference from a
practical standpoint
TT Rephrased

Premise 1:

If an agent passes k rounds of a TT of at least one
minute in length, then (with a probability of error exponentially
small in k) it has the capacity to produce a sensible
sequence of verbal responses to a sequence of verbal
stimuli whose length is at least logarithmic in the storage
capacity of the agent, whatever they may be.
TT Rephrased (continued...)

Premise 2:

If an agent has the capacity to produce a sensible
sequence of verbal responses to a sequence of verbal
stimuli whose length is at least logarithmic in the storage
capacity of the agent, whatever they may be, then it is intelligent.

Conclusion:

If an agent passes k rounds of a TT of at least 1 minute
in length, then (with probability of error exponentially small in
k) it is intelligent.
Outline

Introduction

Machines can't think

Counter arguments - 'machines can think!'

Imitation game – Turing test

Proponents and opponents of TT

Introduction to interactive proof

TT as Interactive proof

Conclusion
Interactive Proof System

An interactive proof system is an abstract machine that models
computation as the exchange of messages between two parties

Prover P, with unlimited computational power

Verifier V, with polynomially bounded computational power

Assertion 's'

Randomization and Interaction

Multiple rounds of message-passing
Interactive Proof Example
graph non-isomorphism

Graph isomorphism
– a one-to-one mapping between the vertices of the two
graphs such that there is an edge between two vertices in the
first graph exactly when there is an edge between the
corresponding vertices in the other graph

Given graphs G0 and G1

P's assertion s

G0 and G1 are NOT isomorphic
Interactive Proof example
(s: Graphs G0 and G1 are NOT isomorphic)

P (bit b') [infinite resources]        V (bit b) [limited resources]

V: b = rand(0,1) [Prob = 0.5]
V: G' = random permutation of Gb
V: send G' to P

P: b' = 0 if Isomorphic(G0, G'), b' = 1 otherwise
P: send b' to V

V: s proved if b == b'
Interactive Proof Example(continued...)

V selects Gb randomly, b=0 or 1

V does a random permutation G' of Gb

V sends G' to P

P checks if G' is isomorphic to G0.

If so, P sends back b' = "0"; else b' = "1"

V checks if b = b'. If so, V accepts the proof (assertion proved);
else V rejects the proof
Interactive Proof Example
Graph Non-Isomorphism



b | Truth value of assertion | b' | Conclusion about assertion
0 | True (not isomorphic)    | 0  | True (not isomorphic)
1 | True (not isomorphic)    | 1  | True (not isomorphic)
0 | False (isomorphic)       | 0  | True (not isomorphic)  [false positive]
1 | False (isomorphic)       | 0  | False (isomorphic)

If G0 and G1 are isomorphic, then G' provides no help in guessing
the bit b
The prover then effectively guesses, being wrong about half the time
Probability of a false positive after k rounds is 1 in 2^k
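
The protocol and its error behaviour can be simulated directly. Below is a minimal Python sketch (not from the original slides): graphs are given as tiny undirected edge lists, and a brute-force isomorphism check stands in for the prover's unlimited computational power. When G0 and G1 really are non-isomorphic the verifier always accepts; when they are isomorphic, G' carries no information about b, so all k rounds are passed only with probability about 1/2^k.

import itertools
import random

def normalize(edges):
    # Represent each undirected edge as a sorted tuple.
    return frozenset(tuple(sorted(e)) for e in edges)

def isomorphic(g0, g1, n):
    # Brute-force check over all vertex permutations (fine for tiny n).
    for perm in itertools.permutations(range(n)):
        if normalize((perm[u], perm[v]) for u, v in g0) == g1:
            return True
    return False

def one_round(g0, g1, n):
    # Verifier: pick b at random and send a random relabelling G' of Gb.
    b = random.randint(0, 1)
    perm = list(range(n))
    random.shuffle(perm)
    gb = g0 if b == 0 else g1
    g_prime = normalize((perm[u], perm[v]) for u, v in gb)
    # Prover: b' = 0 if G' is isomorphic to G0, else b' = 1.
    b_prime = 0 if isomorphic(g0, g_prime, n) else 1
    # Verifier accepts the round iff b == b'.
    return b == b_prime

def run_protocol(g0, g1, n, k=20):
    # The assertion is accepted only if all k rounds are passed.
    return all(one_round(g0, g1, n) for _ in range(k))

# A path and a triangle on 3 vertices are not isomorphic: the prover can
# always identify which graph was permuted, so the verifier accepts.
path = normalize([(0, 1), (1, 2)])
triangle = normalize([(0, 1), (1, 2), (0, 2)])
print(run_protocol(path, triangle, 3))        # True

# If G0 and G1 are isomorphic, G' is consistent with both graphs, the
# prover's answer carries no information about b, and all k rounds are
# passed only about once in 2^k attempts.
other_path = normalize([(0, 2), (2, 1)])
fooled = sum(run_protocol(path, other_path, 3, k=5) for _ in range(1000))
print(fooled / 1000)                          # roughly 1/2**5 ≈ 0.03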
Outline

Introduction

Machines can't think

Counter arguments - 'machines can think!'

Imitation game – Turing test

Proponents and opponents of TT

Introduction to interactive proof

TT as Interactive proof

Conclusion
TT As Interactive Proof

capacity conception



If an agent has the capacity to produce a sensible
sequence of verbal responses to a sequence of verbal
stimuli, whatever they may be, then it is intelligent
generalizability
compactness conception


If an agent has the capacity to produce a sensible
sequence of verbal responses to a sequence of verbal
stimuli, whatever they may be, without requiring storage
exponential in the length of the sequence, then it is
intelligent
logarithmic storage
TT As Interactive Proof
capacity

P: Computer

V: Interrogator

assertion s: "P has the capacity to produce a sensible
sequence of verbal responses to a sequence of verbal
stimuli, whatever they may be"
TT As Interactive Proof
capacity

space: the set of all possible verbal stimulus sequences

tp: fraction of the space for which P can perform correctly

tl: lower bound of tp for acceptance

if tp > tl, then P has the general capacity
TT As Interactive Proof
capacity

select sample (size K) uniformly

t: fraction of sample for which P can perform correctly

ts: lower bound of t for passing
TT As Interactive Proof
capacity
false positive



tp < tl and t > ts

Pr[t > ts] < e^(-ck), using Chernoff bounds

Pr(false positive) decreases exponentially with k
The choice of ts, tl does not change the basic nature of the argument

Similarly, Pr(false negative) decreases exponentially with k
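
A minimal sketch of the bound behind the e^(-ck) claim, assuming the K sampled stimulus sequences are drawn independently and uniformly from the space, so that t is the mean of K independent indicator variables with expectation tp. For any passing threshold ts above the true fraction tp, Hoeffding's inequality gives
\[
\Pr[\,t \ge t_s\,] \;\le\; \exp\bigl(-2K\,(t_s - t_p)^2\bigr),
\]
which has the slide's form e^(-ck) with k = K and c = 2(ts - tp)^2; the false-negative case (tp > tl but t < ts) is bounded symmetrically.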
TT As Interactive Proof
compactness


Length of the sequence is greater than the logarithm of the storage
capacity
By quantum theory and the bounded volume of the universe, the
information capacity of the universe is estimated to be 10^185


A Turing Test of less than 1 minute is enough to judge
Is the above "critical TT length" too short? Counter-intuitive?
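
A rough calculation (illustrative only; the alphabet size A = 30 is an assumed figure, not from the slides) of why the critical length is so short: a machine that has memorized a sensible response to every stimulus sequence of n symbols over an alphabet of A symbols needs on the order of A^n stored entries, so with the 10^185 bound quoted above
\[
A^{n} \le 10^{185} \;\Longrightarrow\; n \le \frac{185}{\log_{10} A} \approx 125 \ \text{symbols for } A = 30,
\]
a length an interrogator typing at an ordinary pace exceeds well within a minute, which is why a test of about a minute already rules out a pure lookup table.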
TT As Interactive Proof
compactness

Reasons for short TT length requirement:



TT is unrestricted; queries of any sort on any topic are
allowed.
The machine we wish to unmask is of a particular sort: one
that has memorized answers to every possible such
query.
In IP the samples are independent. Here, the judge is free to use
knowledge from previous responses.

This only reduces the probability of error
Properties of IP and TT

Non-transferability

The proof is provided only to the verifier; it does not convince a third party

Lack of closure under composition

The guarantee fails under composition
What we Proved

Premise 1:

If an agent passes k rounds of a TT of at least one
minute in length, then (with a probability of error exponentially
small in k) it has the capacity to produce a sensible
sequence of verbal responses to a sequence of verbal
stimuli whose length is at least logarithmic in the storage
capacity of the agent, whatever they may be.
What we Proved

Premise 2 (based on the modified compactness conception):

If an agent has the capacity to produce a sensible
sequence of verbal responses to a sequence of verbal
stimuli whose length is at least logarithmic in the storage
capacity of the agent, whatever they may be, then it is intelligent.

Conclusion:

If an agent passes k rounds of a TT of at least 1 minute
in length, then (with probability of error exponentially small in
k) it is intelligent.
Conclusions






Varied definitions of intelligence and AI

Not much consensus among experts on how to
determine or measure it

Numerous attempts to define and measure it have been made
by scientists, philosophers, and engineers

One such attempt, the TT, is quite popular and was shown, with
a slight weakening, to be sufficient for intelligence

AI has influenced and has been influenced by various
other fields

The debate has both inspired and distracted from
research of practical value
References


Block, N. (1981). Psychologism and behaviorism. Philosophical
Review XC(1): 5-43.

Dennett, D. C. (1985). Can machines think? In: How We Know, ed.
M. Shafto. Harper and Row.

Harnad, S. (2006). The Annotation Game: On Turing (1950) on
Computing, Machinery, and Intelligence.

Hofstadter, D. R. (1999). Gödel, Escher, Bach. Basic Books.

Moor, J. H. (1976). An analysis of the Turing test. Philosophical
Studies 30: 249-257.
References

Stalker, D. (1978). Why Machines Can't Think: A Reply to
James Moor. Philosophical Studies 34: 317-320.

Shieber, S. M. (to appear). The Turing Test as interactive proof.
Noûs.

Shieber, S. M. (2006). Does the Turing Test Demonstrate
Intelligence or Not? In Proceedings of the Twenty-First
National Conference on Artificial Intelligence (AAAI-2006),
Boston, MA, 16-20 July.

Turing, A. (1950). Computing Machinery and Intelligence.
Mind 59(236): 433-460.