Where has Artificial Intelligence
been and where is it going?
a highly personal view
Outline of talk
1. A lighthearted look at what A.I. has been doing these past 45 years in order to survive.
2. A more serious look at what I believe we have learned
from work in A.I. and Cognitive Science.
3. Some speculations on whether current trends provide
any grounds for optimism for the future of A.I.
Where has A.I. been these past 45* years?
It has been battling the forces of darkness and has survived.
In the meantime it has split into many parts
and has assumed many different disguises.
* It has been roughly 45 years since the 1956 Dartmouth conference.
A.I. Has Survived Many Threats
It has survived:
• The illusions of innocence
• The foolishness of futurism
• The perils of popularity
• The dazzle of demos
• The shameless pursuit of short-term goals
• The corruptions of capitalism
It has survived the illusions of innocence
• The age of innocence: Cybernetics, Wiener, Turing, Newell
& Simon and the promise of mechanized intelligence
(That’s what drew me and many others to it)
Photographs courtesy of “Lord of the Rings”
It has survived foolhardy futurism
“It’s hard to predict, especially the future”
(Japanese saying)
Examples of foolhardy futurism…
• “A computer will be chess champion and will prove a significant
mathematical theorem within ten years” (Simon, 1957)
• “Within a generation the problem of creating ‘artificial intelligence’
will be substantially solved” (Minsky, 1967).
• “I believe that robots with human intelligence will be common within
50 years” (Hans Moravec, 1988)
• “…by the year 2020, [neural net computers] will have doubled about
23 times ... resulting in a speed of about 20 million billion neural
connection calculations per second, which is equal to the human
brain. …The emergence of machines that exceed human intelligence
in all its broad diversity is inevitable.” (Ray Kurzweil, 1999)
 "Guys like Kurzweil and Moravec ... somehow think that they can
take Moore's law and project things into the future and say that, by
2020, you'll have human-level intelligence. I don't believe that at all.
I think new ideas are required." (John McCarthy, 2001).
It has survived the perils of popularity
For many years (in the late 1970s and 1980s) A.I. was
the sexiest topic in computer
science and psychology
(which may be why many of
us are here!)
And that leads to certain distractions
“the perils of popularity”
It has survived the dazzle of demos
The ‘Gee Whiz’ School of Science:
• Toy systems (GPS, Eliza, Shrdlu, Shakey, Planner/Strips)
• Performance systems (Dendral, XCON, Prospector, DART)
But how do you know when you should be impressed by a demo?
• It's not easy to tell whether you should be impressed by a demo.
• How impressive the demo is depends on what else you assume the system can do.
• How impressive the demo is may depend on what tools are available.
It has survived the shameless pursuit of
short-term goals
• “Artificial intelligence used to mean
robots that think like people; now it
means software for rejecting junk e-mail.
Low expectations could yield better
applications, sooner."
Doug Lenat. MIT Tech Review, 2001
So, what's wrong with that?
• Short-term solutions rarely scale up.
• It is possible to attain spectacular short-term achievements without any fundamental new understanding of A.I. problems (e.g., Deep Blue).
• Short-term criteria don't tell you when your assumptions are hopelessly wrong and you should start over.
• Short-term problem-solving detracts from the bigger A.I. goal.
A.I. has even survived the corruptions of capitalism
Teknowledge, Tekmoney, CAIP, Cognicom, Gomi AI, ..
• What we tell venture capitalists vs. what we really believe
Some of the commercial claims make one wonder who our colleagues are working for!
"Sometimes I wonder if we haven't carried ecumenism a bit too far"
AI’s dilemma
• "In the end, we in AI get high marks on the applications side, to which we have been pushed by sponsor pressure and seduced by the lure of billions. And certainly, it is hard to quarrel with the good done. But from the perspective of answering big questions and realizing big dreams, the disappearance of AI into computer science retards progress by shifting the way AI people are judged and rewarded."
Patrick Winston, keynote address to the American
Association for Artificial Intelligence, July 20, 1999
Is it time to return to the original goal?
Can we keep our "eye on the prize"?*
Is it time to look back on what we've learned and to refocus on the scientific puzzles?
* From the title of Nils Nilsson's 1995 AI Magazine paper
What have we learned in the past 45 years?
Some current criticisms of AI (not my own views), which the slides that follow examine in light of lessons I have learned from Cognitive Science:
1. Representing and manipulating knowledge is not everything; KR has been overemphasized in AI;
2. Logical formalisms are hopeless for representing knowledge;
3. Human cognition has a lot in common with that of other organisms, so AI should start by simulating the simple ones first (e.g., insects) and letting complex ones evolve;
4. Intelligence is distributed among minds, bodies, and environments, and A.I. has not recognized these enough in its pursuit of KR.
(1) There is more to intelligence than processing knowledge.
No matter how little of the system deals with knowledge, it will always remain the core of intelligence, because intelligent behavior is essentially knowledge-dependent, or cognitively penetrable. This follows from the following important facts about (human) cognition:
• The equivalence classes of events under which intelligent agents' behavior is predictable are semantically defined: inputs with the same meaning have the same effect.
• It's how the world is represented, rather than how it actually is, that determines behaviour.
• There is no rule or principle of intelligent behavior that cannot be overturned by changing a person's goals and beliefs. In other words, some part of every intelligent behaviour is "cognitively penetrable".
For details, see my Computation and Cognition, MIT Press, 1984
(2) Logical formalisms are hopelessly
inadequate for representing
knowledge
As Winston Churchill said about
democracy: “It is the worst form of
government, except for all the rest”.
So also logic is the worst form of
knowledge representation, except for
all the rest.
• Much has been said about why "logic" is inadequate: e.g., it has trouble representing knowledge that is procedural, quantitative, pictorial (or sensory), higher-order ("meta-knowledge"), and indexical; although excellent work is being done on all of these problems. A reasonable assumption is that there may have to be more than one form of representation. Unfortunately, none of the non-symbolic ones proposed so far (e.g., neural nets, images) are adequate for reasoning.
• Logic, or logical calculus, is just another name for "symbol system", except that logic usually provides a formal semantics (i.e., a theory of how the meaning of expressions is composed from the meanings of their parts) and a system of inference (at least for deductive inference, if not for inductive, abductive, and practical reasoning). Nothing like that is available for any of the other forms of representation. (A toy illustration follows below.)
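To make "formal semantics plus a system of inference" concrete, here is a minimal sketch in Python (my own illustration, not anything from the talk; all names are invented for the example). It builds propositional formulas compositionally, gives them a truth-conditional semantics, and applies one deductive rule, modus ponens, by forward chaining.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """An atomic proposition, e.g. Atom("rain")."""
    name: str

@dataclass(frozen=True)
class Implies:
    """A conditional formula: antecedent -> consequent."""
    antecedent: object
    consequent: object

def meaning(formula, world):
    """Compositional semantics: the meaning (truth value) of a complex
    expression is computed from the meanings of its parts, relative to a
    'world' that assigns truth values to atoms."""
    if isinstance(formula, Atom):
        return world[formula.name]
    if isinstance(formula, Implies):
        return (not meaning(formula.antecedent, world)) or \
               meaning(formula.consequent, world)
    raise TypeError(f"unknown formula: {formula!r}")

def forward_chain(knowledge):
    """A system of inference: from P and P -> Q, conclude Q (modus
    ponens), repeated to a fixed point."""
    facts = set(knowledge)
    changed = True
    while changed:
        changed = False
        for f in list(facts):
            if (isinstance(f, Implies) and f.antecedent in facts
                    and f.consequent not in facts):
                facts.add(f.consequent)
                changed = True
    return facts

# Usage: from "rain" and "rain -> wet", the system infers "wet".
rain, wet = Atom("rain"), Atom("wet")
assert wet in forward_chain({rain, Implies(rain, wet)})

The point of the slide, restated in these terms: logic supplies both the semantics (meaning) and the inference machinery (forward_chain) as a matter of course, and nothing comparable comes for free with the proposed non-symbolic representations.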
So logic is a special form of symbol system; but why do we need symbol systems at all?
• What makes it possible to represent and process knowledge in a physical system, and to connect this knowledge with both perception and action, is what Newell & Simon called a Physical Symbol System. This is perhaps the single most important idea of the 20th century, because it made computing possible. But in order to be adequate for encoding human-level knowledge, physical symbol systems must meet some stringent conditions on format.
Conditions on the format of representations
1. The format must have the capacity to distinguish an unlimited number of distinct representations. This is called the criterion of productivity, which von Humboldt characterized as the "infinite use of finite means". This means it must be combinatorial.
2. The capacity for representation and inference must be systematic: in intelligent agents the capacity to represent or infer a certain situation is always accompanied by the capacity to represent or infer other related situations. This means it must be compositional.
Conditions on the format of an adequate system of representation
• The conditions of productivity and systematicity entail the compositionality of representations: complex representations are built out of simpler representations by rules of composition. (A minimal illustration follows below.)
• Any system of representation meeting these constraints is a logical calculus or language of thought (lingua mentis).
• The invention of systems of logic that can encode a wide range of beliefs is among the greatest achievements of the 20th century.
• For a more detailed argument see:
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism
and cognitive architecture: A critical analysis. Cognition,
28, 3-71.
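As a minimal illustration of productivity, systematicity, and compositionality (again my own sketch, not from Fodor & Pylyshyn), consider a tiny recursive term language in Python: finitely many primitive symbols, one uniform rule of composition that builds terms out of terms, and therefore unboundedly many distinct, systematically related representations.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Term:
    """A symbolic expression: a head symbol applied to zero or more
    sub-terms. Compositionality: complex terms are built from simpler
    terms by one uniform rule of composition."""
    head: str
    args: Tuple["Term", ...] = ()

john, mary = Term("john"), Term("mary")

def loves(x, y):
    return Term("loves", (x, y))

def believes(agent, proposition):
    return Term("believes", (agent, proposition))

# Systematicity: the same rule that yields loves(john, mary) automatically
# yields the related representation loves(mary, john).
assert loves(john, mary) != loves(mary, john)

# Productivity ("infinite use of finite means"): terms nest without bound,
# so finitely many symbols distinguish unlimited distinct representations.
nested = believes(mary, believes(john, loves(john, mary)))
print(nested)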
(3) It has been suggested that since human
intelligence is continuous with animal
intelligence, we should study lower organisms
(e.g., insects) in which cognition is simpler,
and then move up to human intelligence later.
• Comparative psychology does indeed show that many
aspects of vision, visual-motor coordination, as well as
ontological categories and conceptual systems (e.g.,
concepts such as physical object, animate, cause,
conspecific) and even parts of arithmetic are shared by
most organisms, including human infants.
• But the research strategy of working up from lower organisms will not work in general. Even though we may share a lot with lower organisms, what we do not share is critical; it is constitutive of "intelligence." Although many human cognitive capacities are found in animals, many other capacities appear suddenly as we go up the phylogenetic scale (evolution and "punctuated equilibrium" – Stephen Jay Gould).
• All humans, in contrast with members of other species, possess the capacity for language, the instinctive attribution of beliefs and desires to others, the capacity for counterfactual reasoning, and the need to provide a theoretical explanation for events around them (i.e., to practice inductive and abductive reasoning).
• Human vision, language, and action are at the service of goals and beliefs in a way that makes human behavior, unlike other animal behavior, largely stimulus-free, because all intelligent behaviour is "cognitively penetrable" by goals and beliefs.
(4) AI has traditionally downplayed the role of
the environment in intelligent action: But
intelligence must be situated and embodied.
• The first recognition of the importance of the
environment-agent relation in shaping behavior may
be in Simon’s ant example, but it now makes an
appearance in many areas of contemporary AI.
• There is considerable truth in the observation that
intelligence is situated and embodied, but the deep
intellectual puzzles still concern how the organism
represents the world, because (as discussed earlier):
It’s how the world is represented, rather than how
it actually is, that determines intelligent behavior.
Recent A.I. trends have also been in the direction of taking the agent's environment into account. Examples include:
• Emphasis on reactive planning, where plans allow for unexpected sensory input;
• Emphasis on active vision, in which vision interrogates the environment for relevant clues;
• Allowing for nonsymbolic aspects of reasoning through closer links with perception (e.g., "visual inference" – Jon Barwise);
• The use of indexicals in representations.
(See Lespérance & Levesque, 1995; Pylyshyn, 2000, 2001)
Forms of representation for a robot: using indexicals
(Figure from Pylyshyn, Z. W. (2000). Situating vision in the world. Trends in Cognitive Sciences, 4(5), 197-207; not reproduced here.)
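Since the figure itself is not reproduced, the idea can be gestured at with a toy Python sketch (my own, heavily simplified; the class names are invented for illustration): an indexical term like "that1" is bound directly to a perceptual token, rather than to a description, so the reference stays anchored to the object even when its properties change.

from dataclasses import dataclass

@dataclass
class Percept:
    """A transient sensory token for one visually tracked object."""
    location: tuple
    features: dict

@dataclass
class IndexicalRef:
    """A demonstrative term ("that1") bound to a percept, not to a
    description of the object."""
    label: str
    percept: Percept

# Descriptive representation: "the red block at (3, 4)" -- this loses its
# referent as soon as the block moves or changes colour.
descriptive = {"kind": "block", "colour": "red", "location": (3, 4)}

# Indexical representation: "pick up THAT" -- the term points at the
# percept itself, whatever the object's current properties are.
p = Percept(location=(3, 4), features={"kind": "block", "colour": "red"})
that1 = IndexicalRef("that1", p)
p.location = (5, 1)                       # the object moves...
assert that1.percept.location == (5, 1)   # ...and the index still refers to it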
Finally, after all this, are there
grounds for optimism about the
future of Artificial Intelligence?
• The simple answer is "Yes, of course, otherwise we would not be here!"
• But what are some promising trends that offset the enormous problems still to be solved?
Some grounds for optimism
1. There are very many accomplishments for which AI can take credit, despite the "AI winter" we have come through. The military, the airlines, traditional Operations Research domains, robotics, and the games world have all seen major AI achievements (see Nilsson, 1995). There are also many small accomplishments, like Google, seen daily on the web and in nearly every walk of life, that have widespread effects. Many have not used the term "artificial intelligence", but they nonetheless owe their technology to AI, which has infiltrated much of conventional CS.
2. Unlike in the early days of AI, many more approaches are now being entertained. While most of the grand polemical claims made by their advocates are almost certainly wrong, these approaches are leading to different lines of inquiry, which is healthy. The diversity is also producing powerful niche results within computational vision, robotics, computational linguistics, and speech understanding.
There is strength in diversity (also distractions):
Neats, Scruffies, Brooks, Minsky, Newell, Logic, Schank, McCarthy, Analogues, Fuzzy sets, Neural nets, Expert $ystems, Planning, Robots
Some grounds for optimism
3. One of the consequences of de-emphasizing pure reasoning has been more progress in areas where human cognitive skill is modular (and may not involve any reasoning):
• Vision (especially "early vision");
• Computational linguistics (a large part of grammatical analysis is modular);
• Speech recognition (much of phonetic analysis is modular);
• Visual-motor coordination (much of that is modular in humans and animals), which lends itself to Rod Brooks' approach;
• Hybrid systems, in which modular "smart" subsystems (especially vision and control) are combined with knowledge-based systems (e.g., UBC's hybrid controllers and U of T's cognitive robotics).
Some grounds for optimism
4. Finally, there has been a large (and unexpected) resurgence of interest in the very broad questions of the nature of intelligence, and its relation to consciousness, to biology, to evolution, and to technology. Books and articles by Kurzweil, Moravec, Joy, Wolfram, and Dennett, and the critical writings of Searle, Fodor, Lanier, and others, have once again returned the bigger questions of human and machine intelligence to centre stage.
While these may get people thinking about where we stand in history, they probably only move AI ahead by focusing public debate and perhaps awakening funding agencies. But some problems are just not ready for scientific scrutiny: they are the mysteries, as opposed to the puzzles. The trouble with mysteries, such as "What is consciousness?" and "What are the limits of AI?", is that they are inherently ill-posed: they are not stated in a way that could connect to a recognizable answer.
Yet despite the possibility that this new interest will let in misguided views and give us bad press, a multitude of approaches have to be tolerated, because nobody knows where the key discoveries will come from. So we shouldn't put them down, even when they are wrong (and even when they have larger grants than we do)!
…because then we could all lose…
References cited
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.
Lespérance, Y., & Levesque, H. J. (1995). Indexical knowledge and robot action – a logical account. Artificial Intelligence, 73, 69-115.
Levesque, H. J., & Lakemeyer, G. (2001). The Logic of Knowledge Bases. Cambridge, MA: MIT Press.
Nilsson, N. J. (1995). Eye on the prize. AI Magazine (Summer), 9-17.
Pylyshyn, Z. W. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3), 341-423.
Pylyshyn, Z. W. (2000). Situating vision in the world. Trends in Cognitive Sciences, 4(5), 197-207.
Pylyshyn, Z. W. (2001). Visual indexes, preconceptual objects, and situated vision. Cognition, 80(1/2), 127-158.
Reiter, R. (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. Cambridge, MA: MIT Press.
Winston, P. H. (1999, July). Why I am optimistic. Keynote address presented at the AAAI Annual Conference.