Why Artificial Intelligence is Very Hard
Theo Pavlidis
Distinguished Professor Emeritus
Stony Brook University
[email protected]
http://theopavlidis.com
What is Artificial Intelligence?
• A machine that replicates the functionality of
the human brain. (General or Strong AI)
“Around the Corner” since about 1945.
• A machine that does a specific task that
traditionally has been done by humans.
(Narrow or Weak AI). Each specific
application is treated as an engineering
problem. Numerous successes.
Successes in Narrow AI
(Seen in daily life)
• Restricted Speech Recognition (in Banking and Airline reservation systems, etc.)
• Credit Card Fraud Detection
• Web Tools (Shopping Suggestions, Mechanical Translation, etc.)
• Simple Robots (Roomba)
• 1D and 2D Bar Codes (in stores and in
shipping)
Successes in Narrow AI
(Not Seen Every Day)
• Chess Playing Machines
• Optical Character Recognition
• Industrial Inspection
• Biometrics (Fingerprints, Iris, etc.)
• Medical Diagnosis
Restricted Speech Recognition
• Grammar-driven models (using low-level context) have been quite successful.
• High-level context is even better, for example matching a speech fragment to a name on a list (a minimal sketch follows below).
• Successful applications include Airline reservation systems and Call Center monitoring.
• See a demonstration of using voice for web search at
http://www.youtube.com/watch?v=npRtTdGeWQA . The system is Nuance's Open Voice Search product and relies on personalization.
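A minimal sketch of that kind of list matching, with a made-up name list and recognized fragment (an illustration only, not the Nuance system):

```python
# Minimal sketch: match a possibly garbled recognized fragment against a
# known list of names -- the "high-level context".  Illustration data only.
from difflib import get_close_matches

PASSENGER_NAMES = ["Anderson", "Jensen", "Kasparov", "Pavlidis", "Thompson"]

def match_fragment(fragment, names, cutoff=0.6):
    """Return the best-matching name, or None if nothing is close enough."""
    matches = get_close_matches(fragment, names, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(match_fragment("Pavlides", PASSENGER_NAMES))   # -> 'Pavlidis'
print(match_fragment("Smith", PASSENGER_NAMES))      # -> None
```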
Web Shopping:
Learning User Preferences
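As a toy sketch, shopping suggestions can be learned from purchase co-occurrence along the following lines (the purchase histories and item names are made up; real recommenders are far more elaborate):

```python
# Toy sketch: suggest items that other customers bought together with
# the items already in the basket.  The purchase histories are made up.
from collections import Counter

PURCHASES = [
    {"camera", "memory card", "tripod"},
    {"camera", "memory card"},
    {"novel", "bookmark"},
    {"camera", "camera bag"},
]

def suggest(basket, purchases, k=2):
    """Rank items that co-occur most often with the basket's items."""
    counts = Counter()
    for history in purchases:
        if basket & history:                 # this customer shares an item
            counts.update(history - basket)  # count what else they bought
    return [item for item, _ in counts.most_common(k)]

print(suggest({"camera"}, PURCHASES, k=1))   # ['memory card']
```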
Household Robot
http://store.irobot.com/home/index.jsp
Making Reading Easy for
Computers
• Bar codes and two-dimensional symbologies
are much easier to read than text because:
– They are formally defined.
– They include well-defined error detection or, in some cases, error correction codes, thus providing their own context (see the check-digit sketch below).
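For a concrete example of this self-checking, here is a minimal sketch of the UPC-A check-digit rule used by common 1D retail bar codes (the first sample code is a commonly cited example; the second is deliberately corrupted):

```python
# Minimal sketch of the UPC-A check digit used by 1D retail bar codes.
# Positions are 1-indexed; digits in odd positions are weighted by 3.

def upc_a_check_digit(first_11_digits):
    """Compute the 12th (check) digit of a UPC-A code from the first 11."""
    digits = [int(c) for c in first_11_digits]
    odd_sum = sum(digits[0::2])    # positions 1, 3, ..., 11
    even_sum = sum(digits[1::2])   # positions 2, 4, ..., 10
    return (10 - (3 * odd_sum + even_sum) % 10) % 10

def upc_a_is_valid(code12):
    """Accept a scanned 12-digit code only if its check digit is consistent."""
    return upc_a_check_digit(code12[:11]) == int(code12[11])

print(upc_a_is_valid("036000291452"))   # True
print(upc_a_is_valid("036000291453"))   # False: last digit corrupted
```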
Examples of Two-Dimensional Symbologies
Maxicode (UPS)
PDF417 (FedEx, DMV)
Chess Playing Machines - 1
• Chess is a deterministic game, so a computer could in principle derive a winning strategy analytically. However, the number of possible move sequences is so large (on the order of 10^120) that even the fastest available computer would take billions of years to consider all possible moves.
• Skilled players may look some 20 moves ahead by pruning, i.e. ignoring non-promising moves (a pruning sketch follows below).
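A minimal sketch of pruning as programs do it (alpha-beta pruning over a tiny made-up game tree, where the leaf numbers stand for position scores):

```python
# Minimal sketch of alpha-beta pruning: branches that cannot affect the
# final choice are cut off, so far fewer positions need to be examined.
# The tiny game tree (nested lists of leaf scores) is made up.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of `node`, skipping provably useless branches."""
    if isinstance(node, (int, float)):        # leaf: a position score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                 # opponent will avoid this line,
                break                         # so prune the remaining children
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [6, [9, 2]], [1, 4]]
print(alphabeta(tree, maximizing=True))       # 6 for this made-up tree
```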
Chess Playing Machines - 2
• Around 1980 Ken Thompson developed a chess-playing program called Belle, based on a minicomputer with a hardware attachment used to generate moves very fast.
• Belle defeated all other computer programs and became the world computer chess champion.
• The use of special chess knowledge and special-purpose hardware has been the preferred approach ever since.
Deep Blue
(The IBM machine that beat the human world champion)
• A major focus of the effort was the development of special-purpose hardware.
• An expert chess player (Murray Campbell) contributed the evaluation functions for the moves generated by the hardware (a toy evaluation sketch follows below).
• The project had as a consultant an international grandmaster (Joel Benjamin, who had played Kasparov to a draw in 1994).
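For a sense of what an evaluation function is, here is a toy material-counting sketch; Deep Blue's actual evaluation weighed many more hand-tuned chess features, and the board encoding below is invented purely for illustration:

```python
# Toy sketch of a chess evaluation function: count material from White's
# point of view.  The piece values and the sample position are illustrative.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(board):
    """Positive means White is ahead.  `board` is a string of piece letters,
    uppercase for White and lowercase for Black (empty squares omitted)."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.lower(), 0)
        score += value if piece.isupper() else -value
    return score

# White: king, queen, two rooks.  Black: king, rook, knight, three pawns.
print(material_score("KQRR" + "krnppp"))   # (9 + 5 + 5) - (5 + 3 + 3) = 8
```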
Optical Character Recognition
(OCR)
• Printed text characters have small shape
variability and high contrast with the background.
• Spelling checkers (or ZIP code directories in postal applications) introduce low-level context.
• Reading of the checks sent for payment to American Express relies heavily on context.
– Payments are supposed to be in full and the amount due is known, so the number written on a check is analyzed to confirm whether or not it matches the amount due (see the sketch below).
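A minimal sketch of that kind of context check, with a made-up table of common character confusions (an illustration only, not the actual American Express system):

```python
# Minimal sketch: confirm whether an OCR'd courtesy amount on a check
# matches the amount already known to be due.  The confusion table and
# the sample strings are made up for illustration.
CONFUSIONS = {"O": "0", "o": "0", "l": "1", "I": "1", "S": "5", "B": "8"}

def normalize_amount(raw):
    """Map commonly confused characters to digits and drop separators."""
    cleaned = "".join(CONFUSIONS.get(c, c) for c in raw)
    return cleaned.replace("$", "").replace(",", "").strip()

def matches_amount_due(ocr_text, amount_due_cents):
    """True if the OCR'd string can be read as exactly the amount due."""
    text = normalize_amount(ocr_text)
    try:
        cents = round(float(text) * 100)
    except ValueError:
        return False
    return cents == amount_due_cents

print(matches_amount_due("$1,2O4.5O", 120450))   # True: 'O' read as '0'
print(matches_amount_due("$1,204.55", 120450))   # False: amounts differ
```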
An Aside: Why did OCR mature when the need for it had diminished?
• The algorithms used in the products of the 1990s
were known earlier but they were too complex to
be implemented effectively with the digital
technology of earlier times.
• When computer hardware became cheap enough
for good OCR, it also became cheap enough for
PCs, the Internet, and direct bank transfers.
• Keep this in mind in your business plans!
Features of Narrow AI
• Each Problem is Solved Separately even though certain common mathematical tools may be used (statistics, graph theory, signal processing, etc.).
• Each Solution Relies Heavily on Specific Environment Constraints, and performance (compared to that of humans) drops when these constraints are relaxed.
Why Not General AI?
• Why “waste” time with all the special cases and not solve the general problem once and for all?
• Why not use a “brain model” to solve all these
problems?
• Are advances in general computer technology
(hardware, systems) likely to help? Why not
wait for them rather than solving problems
piecemeal?
Humans may be machines, but they
are very different from computers
Some Experiments
Can you read these words?
Reading Demo - 1
Tentative binding of the letter shapes (bottom-up) is finalized once a word is recognized (top-down). Word shape and meaning override early cues.
Reading Demo - 2
New York State lacks proper facilities
for the mentally III.
The New York Jets won Superbowl III.
• Human readers may entirely ignore the shape of individual letters if they can infer the meaning through context (a toy sketch follows below).
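A toy sketch of how word-level context could override letter shapes in software; the tiny context table is made up for the two sentences above:

```python
# Toy sketch: choose between readings of an ambiguous token ("Ill" vs the
# Roman numeral "III") from the word that precedes it.  Made-up table.
CONTEXT_PREFERENCE = {
    "mentally": "Ill",
    "superbowl": "III",
    "chapter": "III",
}

def resolve(previous_word, candidates=("Ill", "III")):
    """Pick the candidate reading preferred by the preceding word."""
    preferred = CONTEXT_PREFERENCE.get(previous_word.lower())
    return preferred if preferred in candidates else candidates[0]

print(resolve("mentally"))    # 'Ill'
print(resolve("Superbowl"))   # 'III'
```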
Reading dot-matrix print and
fine laser print
From: T. Pavlidis, “Context Dependent Shape Perception,” in Aspects of Visual Form Processing (C. Arcelli, L. P. Cordella, and G. Sanniti di Baja, eds.), World Scientific, 1994, pp. 440-454.
What Neuroscientists Say
• “Perceptions emerge as a result of
reverberations of signals between different
levels of the sensory hierarchy, indeed across
different senses”. The author then goes on to
criticize the view that “sensory processing
involves a one-way cascade of information
(processing)”.
• Source: V. S. Ramachandran and S. Blakeslee, Phantoms in the Brain, William Morrow and Company Inc., New York, 1998, p. 56.
The Importance of Context
• “Human intelligence almost always thrives on
context while computers work on abstract
numbers alone. … Independence from
context is in fact a great strength of
mathematics.”
• Source: Arno Penzias, Ideas and Information, Norton, 1989, p. 49.
The Big Difference Between
Humans and Machines
• Humans (and animals) use prior knowledge to deal with sensory input. The process involves a combination of bottom-up and top-down processes.
• It is hard to develop algorithms for a barely understood process.
• Certainly, we cannot match human behavior with a machine unless the machine has prior knowledge of its environment.
The Big Obstacle to General AI
• We have too little knowledge of how the
brain works, especially how context is
inferred and brought into play.
• Adding more CPU power helps only if we
understand the problem (as in the case of
chess), so general advances in computing
are not likely to help.
Brain Models May Be Counter-productive
• Once we accept that humans and computers are fundamentally different machines, we should not try to imitate the way humans solve a problem.
• We should attack problems in their own right, given the nature of digital computers. Chess-playing machines are a prime example.
How to Choose a Problem to Work On
• The problem should be well defined in an algorithmic sense, and context should be available.
– For an example relying heavily on context see:
http://www.theopavlidis.com/technology/BoxDimensions/overview.htm
• In processing the input, it should be clear what kind of information we need to extract. (A mathematical model of the physical world must exist.)
• Do not be too concerned about limitations in present-day computer power.
Acknowledgements
• I want to thank Prof. Paul Pavlidis of the
University of British Columbia for several
constructive comments on an earlier draft of
this presentation.
• The link to the speech recognition system of
Nuance was provided by Prof. Amanda Stent
of Stony Brook University.