
be taught new rules of thumb and strategies, regardless of how much new power was being fed to it. There was also the notion of predictability: the authors of 'Chess' have commented on the stress they felt during tournaments when their Type-B program behaved erratically under its different hard-coded rules. To this day, Type-A (brute-force) programs are the strongest applications available. Intelligent Type-B programs exist, but it is simply too easy to write a Type-A program and get exceptional play from raw computer speed alone. Grandmaster-level Type-B programs have yet to materialize, since more research must be done on understanding and abstracting the game of chess into (even more) rules and heuristics.
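
To make the Type-A idea concrete, here is a minimal sketch of the kind of fixed-depth, exhaustive search such programs rely on: negamax with alpha-beta pruning. The Game interface used here (legal_moves, apply, evaluate, is_terminal) is a hypothetical stand-in; a real engine would plug in chess move generation and a material evaluation, but the brute-force search loop itself is this small.

```python
# Minimal sketch of Type-A (brute-force) game search: fixed-depth negamax
# with alpha-beta pruning. The `game` object is a hypothetical interface,
# not a real chess library.

def alphabeta(state, depth, alpha, beta, game):
    """Best achievable score for the side to move, searching `depth` plies."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # static heuristic score for side to move
    best = float("-inf")
    for move in game.legal_moves(state):
        child = game.apply(state, move)
        # Negamax convention: a good position for the child is bad for us.
        score = -alphabeta(child, depth - 1, -beta, -alpha, game)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # prune: the opponent will never allow this line
    return best
```

Deepening the search on a faster machine strengthens play directly, which is why raw speed alone buys so much.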
When Did Artificial Intelligence Start?
(A Brief History)
Stephanie Haack is director of communications for the Computer Museum in Boston.
The quest for artificial intelligence is as
modern as the frontiers of computer science
and as old as Antiquity. The concept of a
"thinking machine" began as early as 2500
B.C., when the Egyptians looked to talking
statues for mystical advice. Sitting in the
Cairo Museum is a bust of one of these
gods, Re-Harmakis, whose neck reveals the
secret of his genius: an opening at the nape
just big enough to hold a priest.
Even Socrates sought the impartial
arbitration of a "thinking machine." In 450
B.C. he told Euthyphro, who in the name of
piety was about to turn his father in for
murder, "I want to know what is
characteristic of piety ... that I may have it
to turn to, and to use as a standard whereby
to judge your actions and those of other
men."
Automata, the predecessors of today's
robots, date back to ancient Egyptian
figurines with movable limbs like those
found in Tutankhamen's tomb. Much later,
in the fifteenth century A.D., drumming
bears and dancing figures on clocks were
the favorite automata, and game players
such as Wolfgang von Kempelen's Maelzel
Chess Automaton reigned in the eighteenth
century. (Kempelen's automaton proved to
be a fake; a legless master chess player was
hidden inside.) It took the invention of the
Analytical Engine by Charles Babbage in
1833 to make artificial intelligence a real
possibility. Babbage's associate, Lady
Lovelace, realized the profound potential of
this analyzing machine and reassured the
public that it could do nothing it was not
programmed to do.
Artificial intelligence (AI) as both a term
and a science was coined 120 years later,
after the operational digital computer had
made its debut. In 1956 Allen Newell, J. C.
Shaw and Herbert Simon introduced the
first AI program, the Logic Theorist, to
find the basic equations of logic as defined
in Principia Mathematica by Bertrand
Russell and Alfred North Whitehead. For
one of the equations, Theorem 2.85, the
Logic Theorist surpassed its inventors'
expectations by finding a new and better
proof.
Suddenly we had a true "thinking
machine"-one that knew more than its
programmers.
The Dartmouth Conference
An eclectic array of academic and corporate scientists viewed the demonstration of the Logic Theorist at what became the Dartmouth Summer Research Project on Artificial Intelligence. The attendance list read like a present-day Who's Who in the field: John McCarthy, creator of the popular AI programming language LISP and director of Stanford University's Artificial Intelligence Laboratory; Marvin Minsky, leading AI researcher and Donner Professor of Science at M.I.T.; and Claude Shannon, pioneer of information theory and AI, who was with Bell Laboratories.
By the end of the two-month
conference, artificial intelligence had found
its niche. Thinking machines and automata
were looked upon as antiquated
technologies. Researchers' expectations
were grandiose, their predictions fantastic.
"Within ten years a digital computer will be
the world's chess champion," Allen Newell
said in 1957, "unless the rules bar it from
competition."
Isaac Asimov, writer, scholar and author of the Laws of Robotics, was among the wishful thinkers. Predicting that AI (for which he still used the term "cybernetics") would spark an intellectual revolution, he wrote in his foreword to Thinking by Machine by Pierre de Latil:

Cybernetics is not merely another branch of science. It is an intellectual revolution that rivals in importance the earlier Industrial Revolution. Is it possible that just as a machine can take over the routine functions of human muscle, another can take over the routine uses of the human mind? Cybernetics answers, yes.

Many people imagined that by the year
1984 computers would dominate our lives.
Prof. M. W. Thring envisioned a world with
household robots, and B. F. Skinner forecast
that teaching machines would be
commonplace. Arthur L. Samuel, a
Dartmouth conference attendee from IBM,
suggested that computers would be capable
of learning, conversing and translating
language; he also predicted that computers
would house our libraries and compose
most of our music.
Getting Smarter
Artificial intelligence research has progressed considerably since the Dartmouth conference, but the ultimate AI
system has yet to be invented. The ideal AI
computer would be able to simulate every
aspect of learning so that its responses
would be indistinguishable from those of
humans.
Alan M. Turing, who as early as 1934 had
theorized that machines could imitate
thought, proposed a test for AI machines in
his 1950 essay "Computing Machinery and
Intelligence." The Turing Test calls for a
panel of judges to review typed answers to
any question that has been addressed to
both a computer and a human. If the
judges can make no distinction between the two answers, the machine may be
considered intelligent.
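
As a toy rendering of that criterion, one might tally how often judges mistake the machine's answers for a human's; note that the 50% threshold below is an illustrative assumption, since Turing's essay specifies no numeric cutoff.

```python
# Toy scoring of the Turing Test's indistinguishability criterion.
# The 50% threshold is an illustrative assumption, not part of Turing's essay.

def passes_turing_test(judgements):
    """judgements: (actual_author, judged_author) pairs, each 'human' or 'machine'."""
    verdicts = [judged for actual, judged in judgements if actual == "machine"]
    if not verdicts:
        return False
    fooled = sum(1 for judged in verdicts if judged == "human")
    return fooled / len(verdicts) >= 0.5
```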
It is 1984 as this is being written. A
computer has yet to pass the Turing Test,
and only a few of the grandiose predictions
for artificial intelligence have been realized.
Did Turing and other futurists expect too
much of computers? Or do AI researchers
just need more time to develop their
sophisticated systems? John McCarthy and
Marvin Minsky remain confident that it is
just a matter of time before a solution
evolves, although they disagree on what
that solution might be. Even the most
sophisticated programs still lack common
sense. McCarthy, Minsky and other AI researchers are studying how to program in that elusive quality: common sense.
McCarthy, who first suggested the term
"artificial intelligence," says that after
thirty years of research AI scholars still
don't have a full picture of what knowledge
and reasoning ability are involved in
common sense. But according to McCarthy
we don't have to know exactly how people
reason in order to get machines to reason.
McCarthy believes that a sophisticated programming language of mathematical logic will eventually be capable of common-sense reasoning, whether or not that is exactly how people reason.
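
A minimal sketch of that logic-based route, assuming a toy rule format (facts plus if-then rules, applied by forward chaining): McCarthy's actual work used first-order logic and, later, the situation calculus, but the flavor of mechanical inference is similar.

```python
# Toy forward-chaining inference: apply if-then rules to a set of facts
# until nothing new can be derived. The rule format is illustrative only.

def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"raining", "outside"}, "gets_wet"),
    ({"gets_wet"}, "carry_umbrella"),
]
print(forward_chain({"raining", "outside"}, rules))
# {'raining', 'outside', 'gets_wet', 'carry_umbrella'}
```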
Minsky argues that computers can't
imitate the workings of the human mind
through mathematical logic. He has
developed the alternative approach of
frame systems, in which one would record
much more information than needed to
solve a particular problem and then define
which details are optional for each
particular situation. For example, a frame
for a bird could include feathers, wings, egg
laying, flying and singing. In a biological
context, flying and singing would be
optional; feathers, wings and egg laying
would not.
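
A minimal sketch of the frame idea, assuming a simple slot-and-defaults representation (the Frame class and slot names below are illustrative, not Minsky's notation): a frame records everything typical of a concept, and a given context marks some slots as optional.

```python
# Toy Minsky-style frame: a bundle of typical slot values, some of which
# a given context may treat as optional defaults.

class Frame:
    def __init__(self, name, slots, optional=()):
        self.name = name
        self.slots = dict(slots)        # all typical details of the concept
        self.optional = set(optional)   # details this context treats as waivable

    def matches(self, observed):
        """True if `observed` agrees on every non-optional slot."""
        return all(
            observed.get(slot) == value
            for slot, value in self.slots.items()
            if slot not in self.optional
        )

# The bird frame from the text: in a biological context, flying and singing
# are optional; feathers, wings and egg laying are not.
bird = Frame(
    "bird",
    {"feathers": True, "wings": True, "lays_eggs": True,
     "flies": True, "sings": True},
    optional={"flies", "sings"},
)

penguin = {"feathers": True, "wings": True, "lays_eggs": True, "flies": False}
print(bird.matches(penguin))  # True: a flightless bird still fits the frame
```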
The common-sense question remains
academic. No current program based on
mathematics or frame systems has common
sense. What do machines think? To date,
they think mostly what we ask them to.