History of AI
• 1st generation: pre 1956
• 2nd generation: mid 1950's to 1970
• 3rd generation: 1970 to 2000
• 4th generation: 2000 - ??
1st generation: pre 1956
Alan Turing - Wrote a famous paper in 1937 in which he introduced the concept of the Turing Machine to explain why some problems are uncomputable. Later devised the Turing Test (1950) as an objective tool for deciding whether a program is "intelligent".
1st generation: pre 1956
Warren McCulloch (U of Illinois
psychiatrist) and Walter Pitts (18-year-old
mathematician) - In 1943, came up with a
mathematical model of a binary neuron which
could be emulated on a computer and which
could learn by adjusting the strengths of its connections to its inputs. Gave rise to the
study of perceptrons, which were developed
by Frank Rosenblatt in the 50's.
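This kind of neuron model is easiest to see in code. Below is a minimal sketch (Python, purely for illustration) of a single binary threshold unit trained with the classic perceptron update rule; the AND-gate data, learning rate, and epoch count are illustrative assumptions, not details from McCulloch and Pitts's 1943 paper or from Rosenblatt's perceptron work.

```python
# Minimal sketch of a single binary "neuron" (perceptron) that learns by
# adjusting the strengths of its input connections. The AND-gate data and
# learning rate below are illustrative assumptions only.

def step(x):
    """Threshold activation: fire (1) if the weighted input exceeds 0."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with binary targets."""
    w = [0.0, 0.0]   # connection strengths (weights)
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = step(w[0] * x1 + w[1] * x2 + b)
            error = target - output
            # Classic perceptron rule: nudge each weight whenever the output is wrong.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the AND function as a toy example.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)  # e.g. weights near [0.2, 0.1], bias near -0.2
```

The point is the one the slide makes: the unit "learns" purely by adjusting the strength of each input connection in response to its errors.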
1st generation: pre 1956
(Perceptrons were later criticized by Marvin
Minsky and Seymour Papert at MIT (1969)
and research along these lines was generally
abandoned until Bernard Widrow revitalized
the field in the late 80's with more powerful
backpropagation nets.)
1st generation: pre 1956
John von Neumann (Princeton
mathematician) - helped Mauchly and
Eckert develop the ENIAC (1946). Brilliant
computer science theoretician. Built EDVAC,
which embodied the stored program concept.
Thought a lot about AI issues. Used
McCulloch-Pitts neuron to describe his
theories.
1st generation: pre 1956
Claude Shannon - mathematician who
worked in the area of communication theory.
Showed how the binary switches in a
computer could actually store information.
Described a chess playing algorithm (1950)
that anticipated much of the more recent
work in game playing.
1st generation: pre 1956
W. Ross Ashby - U of Illinois. Wrote Design
for a Brain (1952), an influential analysis of
what would be necessary to emulate the
functions of an intelligent system.
2nd generation: mid 1950's to 1970
1956 Dartmouth conference (organized by
McCarthy and Minsky) defined the field.
Attended by all the big names in AI. AI
researchers at this time came up with the
physical symbol system hypothesis: humans think by manipulating symbols, so instead of
trying to have computers emulate the
hardware (neurons), AI research should
concentrate on symbol manipulation
(software).
2nd generation: mid 1950's to 1970
• John McCarthy - inventor of LISP, the language used for most AI work since then, especially in Natural Language Processing. Originated the term "Artificial Intelligence".
• Marvin Minsky - Worked in many areas of
AI. In knowledge representation, developed
the concept of "frames".
• Arthur Samuel - developed a magnificent
checkers playing program that learned from
playing its opponents.
2nd generation: mid 1950's to 1970
Allen Newell, Herb Simon, J. C. Shaw - developed Logic Theorist, which proved 38 of
the first 52 theorems from Russell and
Whitehead's Principia Mathematica (one
proof was shorter and more elegant than the
original!). This program was later expanded
into the General Problem Solver.
2nd generation: mid 1950's to 1970
• Ed Feigenbaum - developed DENDRAL,
one of the first expert systems.
• Seymour Papert - studied under Jean
Piaget & worked with Minsky. Developed
LOGO, worked on computer-assisted
instruction.
3rd generation: 1970 - 2000
• Terry Winograd - wrote SHRDLU, Blocks
World.
• Bertram Raphael - worked in robotics,
developed SHAKEY to respond to human
instructions.
• Nils Nilsson & Richard Fikes - developed STRIPS to achieve goals by applying a sequence of operators according to a plan (see the sketch after this list).
• Daniel Bobrow - wrote STUDENT, which
could solve algebraic word problems.
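To make the STRIPS idea concrete, here is a toy sketch: a world state is a set of facts, each operator lists its preconditions plus the facts it adds and deletes, and a plan is just a sequence of operators. The class names, predicates, and the two blocks-world operators below are hypothetical illustrations, not code or notation from the original STRIPS system.

```python
# Toy sketch of the STRIPS idea: a state is a set of facts; an operator has
# preconditions, an add list, and a delete list; a plan is a sequence of
# operators. All names here are illustrative, not from the original STRIPS.

class Operator:
    def __init__(self, name, preconditions, add_list, delete_list):
        self.name = name
        self.preconditions = set(preconditions)
        self.add_list = set(add_list)
        self.delete_list = set(delete_list)

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.delete_list) | self.add_list

def run_plan(state, plan, goal):
    """Apply the operators in order; report whether the goal is reached."""
    for op in plan:
        if not op.applicable(state):
            return False, state
        state = op.apply(state)
    return goal <= state, state

# Tiny blocks-world-style example (hypothetical facts and operators).
pickup_a = Operator("pickup(A)",
                    preconditions={"clear(A)", "ontable(A)", "handempty"},
                    add_list={"holding(A)"},
                    delete_list={"clear(A)", "ontable(A)", "handempty"})
stack_a_on_b = Operator("stack(A,B)",
                        preconditions={"holding(A)", "clear(B)"},
                        add_list={"on(A,B)", "clear(A)", "handempty"},
                        delete_list={"holding(A)", "clear(B)"})

initial = {"clear(A)", "clear(B)", "ontable(A)", "ontable(B)", "handempty"}
goal = {"on(A,B)"}
ok, final = run_plan(initial, [pickup_a, stack_a_on_b], goal)
print(ok)  # True: the two-step plan achieves the goal
```

A real planner searches for such a sequence automatically; this sketch only shows the operator representation the search works over.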
3rd generation: 1970 - 2000
• David Slate & Larry Atkin - developed Chess
4.5, the first world-class chess playing
program.
• Raj Reddy - HEARSAY - understood human
speech with an accuracy of 90% or better.
• Roger Schank and Robert Abelson - developed the idea of scripts to provide a framework for representing actions. Scripts model common-sense knowledge of stereotypical situations.
3rd generation: 1970 - 2000
• David Waltz - used a constraint satisfaction
approach for understanding visual scenes.
Can handle cracks, shadows, etc.
• Edward Shortliffe - developed the first real expert system, MYCIN, which could diagnose blood infections more accurately than most human physicians.
3rd generation: 1970 - 2000
• Richard Duda - developed PROSPECTOR, a geological analysis expert system which found a commercially valuable deposit of molybdenum. First expert system to incorporate Bayes' rule (a small worked example follows this list).
• Doug Lenat - working on CYC, a huge knowledge base (1 million items) of common-sense knowledge.
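For a sense of what it means to "incorporate Bayes' rule", here is a tiny worked example that updates the probability of a mineral deposit given one piece of geological evidence. The prior, sensitivity, and false-positive rate are made-up illustrative numbers, not values from PROSPECTOR.

```python
# Toy illustration of Bayes' rule as an expert system might use it to weigh
# one piece of evidence. All probabilities are made-up, illustrative numbers,
# not values from PROSPECTOR.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | evidence) from the prior and the two likelihoods."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothetical numbers: deposits are rare (1% prior); the geological
# indicator appears at 80% of real deposits and at 10% of barren sites.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.1)
print(round(posterior, 3))  # ~0.075: the evidence raises a 1% prior to about 7.5%
```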
4th generation: 2000 - ??
• John Hopfield - rediscovered neural
networks.
• Many current AI researchers (e.g., Bernard
Widrow, John Holland, David Goldberg, John
Koza) have recently turned from symbolic
processing to AI approaches which emulate
natural systems, such as evolution or
biological neural networks.
Summary
• The first period of development of the field of AI laid the foundations.
• The second period defined the field and explored many approaches to AI, but achieved success primarily with "toy" problems; expert systems are the one significant exception that has had a wide impact in the real world. This period also trained the 3rd generation of AI researchers.
Summary
• The third period focused on building intelligent systems in restricted domains, instead of trying to construct a general intelligent agent. It recognized the importance of a knowledge-based approach.
• The fourth period may be beginning now, with more of a focus on "soft" computing based on biological and physical metaphors: genetic algorithms, neural networks, simulated annealing, cellular automata, artificial life, etc.