Ezhkova I.V.
Golden Ages of AI¹
Now, when e-Housing, e-Commerce, e-Democracy and e-Homo occupy our minds, and when recent results in the biological, cognitive and neuro-sciences are reshaping our vision of the origin, evolution and role of intelligence, it is time to look back at the most important moments in the field of AI and to applaud AI, as the grandmother of many of those fields, for its everlasting inspirational beauty, generosity, openness and pragmatic rationality.
Ten Years Ago…
"To explain a phenomenon is not to demean it. An astronomical theory ... or a chemical model
... does not lessen the fascination of the heavens at night or the beauty of the unfolding of a
flower. Knowing how we think will not make us less admiring of good thinking."
- Herbert A. Simon
While most sub-disciplines of computer science seem to be defined by their methods, AI seems more to have been defined by the source of its inspiration. In 1950, Alan Turing published his famous paper "Computing Machinery and Intelligence", which can be metaphorically described as posing the problem of whether an interrogator could distinguish a machine from a human merely by asking questions. This problem has become known as the "Turing Test" and has served as an inspiring vision and philosophical charter ever since the beginning of AI.
At the same time, the mainstream of researchers in AI simply understands the task of AI as the engineering of useful methods (artifacts), without much reflection on whether these take account of human attributes or cognition. Even now, three historically important problems of AI are not fully linked in the field: the definition of the field, adequate tools, and real applications.
This situation was discussed in depth ten years ago at the IJCAI'95 conference, which, as we can now see, was an important milestone in the development of AI. The conference finally opened the door of AI to many methods and techniques that had earlier not been so obviously acceptable to AI, such as classical statistics, pattern recognition, stochastic approaches, and so on. Taking into account this consolidating role of IJCAI'95, in this paper we offer a review of the conference, trying to restore the atmosphere of this important event for AI. The review was written for the European Commission and was intended to underline the most important trends in AI as they appeared at IJCAI'95.
It was a time when, after the euphoria over the first AI applications to expert systems in the early 1980s, researchers began to understand that AI systems were very brittle in their narrow orientation toward a priori well-defined domains, and that the problems which arose in AI were very often problems of the traditionally used methods themselves and not of AI. The IJCAI'95² conference demonstrated that, despite the separation of applications, artificial intelligence was trying to reinforce a foundation based on diverse schools of thought.

¹ See also:
Ezhkova I.V., "Artificial Intelligence: Conceptual Status and Applications", Computers and Artificial Intelligence, vol. 15, no. 6, 1996, pp. 589-620.
Ezhkova I.V., "Artificial Intelligence: Towards Integration", position paper at the Seventh International Conference on Artificial Intelligence and Information-Control Systems of Robotics, Smolenice, 1997.
By this time, the AI environment had ripened to favor new methods and valuable applications. This environment was prepared, first, by thinking deeply about the human aspects of AI, and by rethinking how important these matters may be to the definition and purpose of the field and to AI's pragmatic construction and realization. Second, the use of supposedly "unacceptable" methods no longer needed a defense; rather, there was healthy research into which methods may be used for the development of AI (stochastic, random, local, incomplete and even unsatisfiable), outside of the traditional systematic paradigm.
There was a blossoming of research using diverse techniques, illustrated at that moment by the rise of very promising researchers such as Stuart Russell (on the purposes of AI), Bart Selman (on the methods of AI) and Stephen Muggleton of the Oxford University Computing Laboratory (on concrete techniques), as well as many others. These were all developments that were expected to lead to better and more useful applications.
At the IJCAI'95 conference, the sessions on the applications of artificial intelligence were split off into a separate conference. At the same time, the papers in the main conference demonstrated that AI techniques were being incorporated more strongly into systems development itself, in areas such as software engineering, multimedia, and World Wide Web/Internet support.
However, looking back over the last decade, we must acknowledge that these developments did not remove the importance of cognition in AI. In this paper, by observing the state of AI a decade ago, we inspect again the inspirational roots and pragmatic prospects of AI as they were seen at that moment, in order to pose once more the questions, so important for AI, about the equilibrium between intelligence, mind and pragmatic rationalities.
How to define Artificial Intelligence
I previously mentioned the disconnection among the definition of the field, the adequacy of tools, and applications, which existed in the field of AI at that time. This report will now turn to understanding how these were finally linked by IJCAI'95, as demonstrated by the papers and exhibits there. In the IJCAI'95 sessions, the subject of the definition of the field, including cognitive and philosophical problems, showed up in many contexts. Most invited lectures and one of the three panels at the conference dealt with definition in some form.

² The International Joint Conference on Artificial Intelligence (IJCAI) took place in Montreal, Canada, on August 19-25, 1995. IJCAI offered a full program of tutorials, workshops, video presentations and lectures, with a continuous program of conferences, invited talks and panels, as well as a robot exhibition and competition that included a robot-building laboratory. Of the 1112 papers submitted, the Program Committee accepted only 249 for publication in the two conference volumes. These represented diverse AI areas, of which the most numerous papers (more than about 20 each) were on automated reasoning, learning, natural language, and action and perception. There were also many papers (more than 10 each) on planning, knowledge representation, qualitative reasoning and diagnosis, non-monotonic reasoning, constraint satisfaction, temporal reasoning, and reasoning under uncertainty. The remaining papers dealt with areas such as case-based reasoning, cognitive modeling, connectionist models, distributed AI, genetic algorithms, and knowledge-based technology. Several of the papers, including some of those given awards or presented as invited lectures, also dealt with the connections between artificial intelligence and other fields, including philosophy, biology, physics and mathematics. There were also 36 workshops, which took place on August 19 and 20, as well as the concurrent KDD conference.
One invited lecture at IJCAI'95 -- "Turing Test Considered Harmful" by Patrick Hayes of the Beckman Institute and Kenneth Ford of the University of West Florida -- argued that the Turing Test now leads the field to disown and reject its own successes. Hayes argued that AI should not be defined as an imitation of human abilities. If we abandon the Turing Test vision, then "... the goal naturally shifts from making artificial super-humans which can replace us to making super-humanly intelligent artifacts which we can use to amplify and support our own cognitive abilities..." Hayes characterized the vision of "making artificial super-humans" as the initial goal of the pioneers of the field, Feigenbaum, McCarthy and Minsky. Hayes then defined AI as "the engineering of cognition based on computational vision which runs through and informs all of cognitive science".
The pioneers of the field, such as Simon, McCarthy, Minsky and Feigenbaum, were in fact guided by the study of human intelligence, and sought a deeper understanding of human cognition in the hope that this effort would lead to better machines. Despite the mainstream engineering outlook mentioned above, a two-part panel at IJCAI'95 presented by John McCarthy of Stanford and Aaron Sloman of the University of Birmingham ("Philosophical Encounter: An Interactive Presentation of Some of the Key Philosophical Problems in AI and AI Problems in Philosophy") made clear that the cognitive orientation remains a force in the field.
According to McCarthy, "... human level artificial intelligence requires equipping a computer program with a philosophy. The program must have built into it a concept of what knowledge is and how it is obtained." He pointed out that he considers his main purpose now to be the development of a theory of "contexts", a term that arises as a necessary consequence of trying to deal with knowledge properly. McCarthy thus also believes that "Mind has to be understood one feature at a time".
As a practical matter, Sloman offered similar views, for example, that "...'mind' is a cluster concept referring to an ill defined collection of features, rather than a single property that is either present or absent." Both Sloman and McCarthy also spoke of "stances", the levels of analysis one may apply to a system, such as physical, intentional, design and function. Both asserted that philosophy and AI still have much to offer each other.
Rationality and Bounded Optimality
Stuart Russell of the University of California at Berkeley, one of the rising and more promising workers in AI at that moment, surveyed different definitions of AI in his Computers and Thought Award lecture at IJCAI'95. He argued that the most relevant position for AI is "to create and generate intelligence as a general property of systems, rather than as a specific property of humans". Following the agent-based view of AI, he began with the idea that intelligence is strongly related to the capacity for successful behavior. He then argued that the "bounded optimality" predicate should be considered a useful formal definition of intelligence. This predicate was defined as "the capacity to generate maximally successful behavior given the available information and computational resources." Loosely speaking, he considers an intelligent entity to be a rational agent whose actions make sense given the information the agent possesses about its goal.
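As a rough illustration of this definition (a minimal sketch under assumed names, not Russell's formalism), consider an agent that must act under a hard computation budget: it evaluates candidate actions only while resources remain, then commits to the best estimate found so far.

```python
import time

def bounded_best_action(actions, utility, budget_seconds):
    """Pick the best action discoverable within a fixed computation budget.

    A loose sketch of bounded optimality: behave as successfully as
    possible given limited information and computational resources,
    rather than searching exhaustively for a perfect answer.
    """
    deadline = time.monotonic() + budget_seconds
    best_action, best_value = None, float("-inf")
    for action in actions:
        if time.monotonic() >= deadline:
            break  # resources exhausted: act on the best estimate so far
        value = utility(action)  # possibly expensive evaluation
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```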
This position was well received by the majority of AI researchers. However, it is useful to remember that rationality is a property only of actions, not of the process by which they were produced, which leads to the observation that the only important thing is what the agent does, not what the agent thinks or even whether it thinks at all. Nevertheless, Russell represents a very pragmatic view of AI, one that serves to reduce the gap between theory and practice. Based on this viewpoint, Russell, in association with others, presented four additional talks in the technical sessions of the same IJCAI conference. Two of these ("Local Learning in Probabilistic Networks with Hidden Variables" and "The BATmobile: Towards a Bayesian Automated Taxi") were among the more interesting papers of the technical sessions.
Intuition, Insight and Inspiration
Let us turn now to the Research Excellence Award paper given by Herbert A. Simon, one of the most established and respected workers in the field. Simon dealt with how to decide whether AI can perform three admirable human thought functions: intuition, insight and inspiration. He argued that theories written in AI list-processing languages should be tested in exactly the same way as theories in the analytical mathematical languages of the physical sciences: judge the accuracy of the theory by comparing the behaviors it predicts with empirical experience. He then gave operational definitions of intuition, insight and inspiration, and showed that existing AI programs already exhibit them.
The program EPAM demonstrates intuition (the rapid discovery of a new idea without being able to trace a path of logic to reach it); a program combining EPAM with the General Problem Solver would demonstrate insight (finding a new, correct solution to a problem after a period of prior failures); and a program such as BACON demonstrates inspiration (the discovery of new knowledge). Evaluating such seemingly "human" forms of AI is thus not an issue of philosophy. Simon then concluded, and I concur, "To explain a phenomenon is not to demean it. An astronomical theory ... or a chemical model ... does not lessen the fascination of the heavens at night or the beauty of the unfolding of a flower. Knowing how we think will not make us less admiring of good thinking."
Tools: Towards Integration
Traditional systematic approaches in AI include search and inference techniques, for example as used for planning and reasoning. It is important to remember that the main limit of systematic methods is that a complete search of a solution space can, as a practical matter, be impossible in a limited (useful) time, due to the vast number of combinations that must be examined.
Alternative methods that have been explored include hill climbing, genetic algorithms and connectionist methods. To evaluate the direction of research and the methods acceptable to AI, let us return again to IJCAI'95, which discussed how recent insights into computationally hard problems had led to the development of new stochastic search methods.
The panel "Systematic Versus Stochastic Constraint Satisfaction" examined the direction of
research and methods traditionally used in AI. Edward Tsang of the Department of Computer Science
of the University of Essex, argued that stochastic methods have the advantage that they can be
terminated in a finite predetermined time, are enhanced by the newer hardware, and have potential to
tackle constraint optimization problems.
Bart Selman of the AT&T Bell Laboratories, in his interesting invited talk ("Stochastic Search
and Phase Transitions: AI Meets Physics"), reported that in the study of Boolean satisfiability, he used
an analogy with phase transition in physics and observed that there is a phase transition from the
mostly satisfiable to the mostly unsatisfiable. Therefore he concluded that there is an advantage in
using a combination of stochastic (incomplete) and systematic (compete) methods: a mixture of one
stochastic method (for model finding) and one systematic method (for theorem proving).
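To make the flavor of these stochastic (incomplete) methods concrete, here is a minimal WalkSAT-style sketch in Python (an illustrative reconstruction of the general technique Selman worked on, not his actual code): it repairs a random assignment by flipping variables in unsatisfied clauses, mixing random-walk moves with greedy ones.

```python
import random

def walksat(clauses, n_vars, max_flips=10_000, p_random=0.5):
    """WalkSAT-style stochastic local search for Boolean satisfiability.

    clauses: list of clauses, each a list of non-zero ints;
             literal v means variable v is true, -v means it is false.
    Returns a satisfying assignment (dict) or None if none was found
    within the flip budget -- the method is incomplete by design.
    """
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    def n_satisfied_after_flip(v):
        assign[v] = not assign[v]
        count = sum(satisfied(c) for c in clauses)
        assign[v] = not assign[v]
        return count

    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not satisfied(c)]
        if not unsatisfied:
            return assign  # model found
        clause = random.choice(unsatisfied)
        if random.random() < p_random:
            var = abs(random.choice(clause))  # random-walk step
        else:
            # greedy step: flip whichever variable satisfies the most clauses
            var = max((abs(lit) for lit in clause), key=n_satisfied_after_flip)
        assign[var] = not assign[var]
    return None  # failure does not prove unsatisfiability

# e.g. (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(walksat([[1, -2], [2, 3], [-1, -3]], n_vars=3))
```

Systematic (complete) methods, by contrast, can prove unsatisfiability but may need exponential time, hence the appeal of combining the two.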
Some of the considerations which must be taken into account in choosing stochastic methods include reliability, so-called "near optimality", and the risk of missing a solution. The main message of Selman's discussion was that stochastic methods now offer an available and useful alternative to traditional systematic methods in AI.
Collecting Data and Knowledge Bases
The question of how to collect data and knowledge, and which technique to use in each case, is still one of the most practical questions in AI. The panel "Very Large Knowledge Bases - Architecture vs. Engineering" focused on comparing the imperatives for collecting knowledge using different techniques. It considered five different approaches, the most favored at that time.

First, regarding large lexicons in machine translation and natural language projects, J. Carbonell of Carnegie Mellon University said that the focus should be on building learning architectures rather than on building truly massive knowledge bases. Second, regarding the use of parallel supercomputers in support of massive knowledge/data bases, J. Hendler of the University of Maryland focused on computational architectures that would support massive knowledge bases and hybrid knowledge/data bases and allow rapid access to them.
Third, perhaps the best-known effort in very large knowledge bases at that moment was the Cyc project, reported by D. Lenat of Cycorp, which focused on the "knowledge principle": to carry on with manual knowledge entry. Fourth, regarding work in Japan, Riichiro Mizoguchi of Osaka University reported that they had developed the largest electronic dictionary and were working on the "human media" project, which aimed to build a seamless information space supporting humans "traveling around" in such a large information space. Fifth, regarding the "cognitive architecture" of the SOAR project, P. Rosenbloom of the University of Southern California focused on integrating media information from both computer and human resources.
Application in Intelligent Web Search Support
Now it is difficult to believe that there was a time when Google did not exist. One of the papers which created excitement at IJCAI'95 was that of Henry Lieberman of the Media Laboratory of the Massachusetts Institute of Technology, entitled "Letizia: An Agent That Assists Web Browsing". Letizia is a background process that operates while a person is browsing the Web. It observes the key words used for searches. While the user is reading a particular Web page, Letizia automatically searches for other Web sites that may meet the apparent criteria desired and puts them in rank order.
The user may then access Letizia's proposed sites if desired, saving the time of waiting for additional searches. Letizia thus also exhibits a persistence of interest between accesses to Web sites, mirroring the normal patterns of users. It uses an "extensible object-oriented architecture" to facilitate determination of a user's actions, history and current interactive context, as well as the content of documents. Letizia avoids a combinatorial explosion in search by limiting the maximum number of accesses to non-local Web nodes per minute. Letizia operated in connection with the more conventional Web browsers of that time, such as Netscape or Mosaic.
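The heart of such an assistant can be caricatured in a few lines. The sketch below (hypothetical names and scoring, not Lieberman's implementation) ranks candidate pages by their overlap with keywords gathered from the user's recent browsing:

```python
def rank_candidates(recent_keywords, candidate_pages):
    """Rank candidate pages by overlap with the user's recent keywords.

    recent_keywords: set of lowercase words observed while browsing.
    candidate_pages: dict mapping URL -> page text.
    Returns URLs sorted by descending keyword overlap.
    """
    def score(text):
        return len(set(text.lower().split()) & recent_keywords)

    return sorted(candidate_pages,
                  key=lambda url: score(candidate_pages[url]),
                  reverse=True)

pages = {"a.html": "neural networks for vision",
         "b.html": "cooking with garlic"}
print(rank_candidates({"neural", "vision"}, pages))  # a.html ranks first
```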
Ten years ago, the use of AI for Web/Internet search was just a timely new application. Together with others, such as multimedia and software engineering, these applications emphasized new trends in AI, and today they still remain among the most representative of AI. At the same time, the most exciting part of any analysis is always to discover and predict the trendiest and most exciting applications; this time, however, we will leave that job to IJCAI'05 in Edinburgh.
Robot Competition
Until now, AI has not been appropriately used in robotics: successful robots appear to depend more on the robustness of their physical technology and on the intelligence of their creators. Robots have not yet used AI appropriately, for example by being built as complete systems having not just a lower reactive level but also a cognitive level. Top-level research from the AI field in planning support (for example, the GOLOG language by Ray Reiter of the University of Toronto) has yet to be utilized in robotics.
The Nomad 200, the winner of the robot competition at that year's conference, also showed this. The Nomad 200 was made by a company that had also won at the 1993 AAAI Conference in Washington, DC: Nomadic Technologies Inc. of Mountain View, California. In the competition, the robots had to navigate around unexpected obstacles. The Nomad 200 navigated using tactile, ultrasonic, infrared and two-dimensional radar sensing; it was based on UNIX, with a graphic simulator, motor control and data interfaces, and it used software developed at the University of Bonn. But the primary reason the robot won the competition appears to be technological: it used a color-sensitive camera and had a better memory.
Exhibition of Software: Who won?
The Exhibition of Software demonstrated the application of AI in software engineering. The two companies reviewed below succeeded at the Exhibition. They were working on Dynamic Object-Oriented Programming (Franz) and intelligent real-time systems (Gensym).
The Exhibition at IJCAI'95 showed that, as software applications became more intelligent, one trend in software engineering was to turn to Dynamic Object-Oriented Programming (OOP). At that moment, the most prominent OOP language was Smalltalk, which had become popular in the financial community and indeed represented the first generation of (static) OOP languages. However, because Smalltalk required every datum to be an object, computational performance was compromised. Moreover, Smalltalk supported only single inheritance; this restriction made the design of objects that are easily built by inheritance unnecessarily complex. These limitations caused Xerox PARC and other organizations to explore Dynamic OOP languages. Dynamic OOP was a software development technology that enabled applications to be tailored during development and after deployment, to meet the user's changing needs without accessing the source code.
The research led to the development of ANSI CLOS (Common Lisp Object System), which was exhibited by Franz, Inc. of Berkeley, California. CLOS is a second-generation Dynamic OOP language. CLOS has automatic memory management, method dispatching, good scalability and multiple inheritance. Due to these features, CLOS has good runtime performance and avoids the "brittle class problem", which occurs when even a small change in source code (for example, in C++) requires relinking and recompiling the entire system. CLOS provides a better ability than C++ to iterate a design, to go back and forth and change a model. CLOS was at that time the only object-oriented language to have become an ANSI standard.
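CLOS itself is a Lisp technology, but the two features at issue -- multiple inheritance and redefining behavior at runtime without relinking or recompiling -- can be sketched by analogy in Python, which is likewise dynamically object-oriented (the classes below are purely illustrative):

```python
class Sensor:
    def read(self):
        return 0.0

class Logger:
    def log(self, message):
        print(f"[log] {message}")

# Multiple inheritance: one class combines both behaviors,
# something a single-inheritance language cannot express directly.
class MonitoredSensor(Sensor, Logger):
    def read(self):
        value = super().read()
        self.log(f"read {value}")
        return value

sensor = MonitoredSensor()
sensor.read()

# Dynamic redefinition: change behavior at runtime, with no
# relinking or recompiling -- no "brittle class problem".
def calibrated_read(self):
    return 42.0

MonitoredSensor.read = calibrated_read
print(sensor.read())  # the existing instance picks up the new method
```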
General Electric, Pratt and Whitney, Boeing, Ford, Jaguar, Motorola, Texas Instruments and Microsoft had applied ANSI CLOS to diverse design problems. Franz exhibited CLOS in a version called Allegro CL for Windows. Dynamic OOP languages are better suited to managing complexity, and should better support user-extended software, rapid application development, intelligent agents, and application frameworks.
Gensym Corporation of Cambridge, Massachusetts, made "intelligent real-time systems" based on integrated technologies. It exhibited the "application development environment" G2 and a family of products based on G2. Gensym claimed that G2 increases user productivity tenfold compared with traditional programming in C or C++. G2 permits "cloning" of objects, in which each clone inherits all of the properties and behaviors of the original object. Objects can also be linked. G2 uses a "structured natural language" for programming, which is thus easier to learn and use.
The graphic displays support the major natural languages, including French, German, Spanish, Portuguese, English, Japanese and Korean. G2 can be considered an object-oriented graphic environment which can be used for real-time dynamic process monitoring, diagnosis, optimization, scheduling and control. It can also be used for simulations. G2 runs under a variety of standard operating systems, including UNIX, OpenVMS and several versions of Windows, and it works with network systems including TCP/IP, DECnet and Winsock. The software is compatible with diverse processors, including Intel, IBM, Motorola and PowerPC, and G2 has bridges to other common software products.
Under the Apple Tree…
In 1995, two processes were taking place in AI: extension of the field by adding new tools (an integration process), and extraction of new subfields from already well-recognized topics (a separation process). One of the topics undergoing separation in 1995 was Knowledge Discovery and Data Mining (KDD).
KDD grew from previously being a workshop within IJCAI to being the First International Conference on Knowledge Discovery and Data Mining, which took place at the same venue in Montreal on August 20-21, 1995. KDD/Data Mining had over 340 participants from around the world, with 11 plenary presentations, 3 invited talks (on machine learning, databases and statistics), a panel, 33 poster talks and 5 demonstrations. The conference had an interdisciplinary nature.
Data mining and KDD are defined in terms of the extraction of information from large databases. As was stated at the conference, "the view of KDD that data mining is only machine learning with lots of data is trivial". Database technology also starts with databases but is more concerned with storing and retrieving data; KDD starts with databases and takes advantage of database technology, but is concerned with mining data and discovering knowledge.
KDD/data mining was already being seriously applied. For example, the U.S. National Aeronautics and Space Administration applied KDD to process and classify images of stars. The U.S. Department of the Treasury was using KDD to identify possible cases of money laundering. The Human Genome Project was using Markov models to search biosequence databases. Private companies were using KDD to identify their best customers. IBM believed that there was a very large, multi-billion-dollar market in KDD.
Another example of an area that later became a well-integrated subject, influenced by the cognitive orientation of AI, was the study of context. It was an important topic for advancing solutions to many problems in AI that had arisen from the needs of real applications. Context had two separate workshops at IJCAI ("Context in Natural Language Processing" and "Modeling Context in Knowledge Representation and Reasoning"). As those attending decided, Context was also to be organized into a separate conference in the future; this happened in 1997, when the first international conference on Context took place in Rio de Janeiro.
CONCLUSION
IJCAI'95 demonstrated that AI had become a strong interdisciplinary field with diverse schools of thought and diverse techniques, and that it had developed promising trends toward useful practical applications. Now, just a few months before IJCAI'05, which will take place on 30 July - 5 August in Edinburgh, the motherland of AI, we look toward this important event with the hope of discovering new exciting and inspirational results in AI.