Physical symbol system
From Wikipedia, the free encyclopedia
See also: Philosophy of artificial intelligence
The physical symbol system hypothesis is a position in the philosophy
of artificial intelligence formulated by Newell and Simon (1963) as a
result of the success of GPS (the General Problem Solver) and subsequent
programs as models of cognition. It reads:
"A physical symbol system has the necessary and sufficient means
for general intelligent action."[1]

This claim is very strong: it implies both that human thinking is a kind
of symbol manipulation (because a symbol system is necessary for
intelligence) and that machines can be intelligent (because a symbol
system is sufficient for intelligence).[2] The hypothesis has been
strongly criticized by various parties, but it remains a core part of AI research.
A common critical view is that the hypothesis seems appropriate for
higher-level intelligence such as playing chess, but less appropriate for
commonplace intelligence such as vision.
A physical symbol system (also called a formal system) takes physical
patterns (symbols), combines them into structures (expressions) and
manipulates them (using processes) to produce new expressions. A
distinction is usually made between the kind of high-level symbols that
directly correspond with objects in the world, such as <dog> and <tail>,
and the more complex "symbols" that are present in a machine such as a
neural network.
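
The definition can be made concrete with a short sketch (the Python code
below is purely illustrative; the symbols, expressions and processes
shown are invented examples, not drawn from Newell and Simon):

    # A minimal sketch of a physical symbol system (illustrative only).
    # Symbols are atomic tokens, expressions are structures built from
    # symbols, and a "process" is any operation that produces new
    # expressions from existing ones.

    Symbol = str
    Expression = tuple  # e.g. ("HAS", "dog", "tail")

    def conjoin(a, b):
        """A process: combine two expressions into a larger structure."""
        return ("AND", a, b)

    def substitute(expr, old, new):
        """A process: derive a new expression by replacing a symbol."""
        return tuple(substitute(e, old, new) if isinstance(e, tuple)
                     else (new if e == old else e)
                     for e in expr)

    fact = ("HAS", "dog", "tail")  # a symbol structure for "a dog has a tail"
    print(conjoin(fact, substitute(fact, "dog", "cat")))
    # -> ('AND', ('HAS', 'dog', 'tail'), ('HAS', 'cat', 'tail'))

On the hypothesis, nothing more than machinery of this general kind,
suitably scaled up, is needed for intelligent action.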
Contents

1 Arguments in favor of the physical symbol system hypothesis
2 Criticism
o 2.1 Dreyfus and the primacy of unconscious skills
o 2.2 Searle and his Chinese Room
o 2.3 Brooks and the roboticists
o 2.4 Connectionism
o 2.5 Embodied philosophy
3 Notes
4 References


Arguments in favor of the physical symbol system hypothesis
In the early decades of AI research there were a number of very successful
programs that used high-level symbol processing, such as Allen Newell and
Herbert Simon's General Problem Solver[3] or Terry Winograd's SHRDLU. John
Haugeland named this kind of AI research "Good Old Fashioned AI" or GOFAI.[4]
Expert systems and logic programming are descendants of this tradition.
Psychological experiments carried out at the same time found that, for
difficult problems in logic, planning or any kind of "puzzle solving",
people used this kind of symbol processing as well. AI researchers were
able to simulate the step-by-step problem-solving skills of people with
computer programs. This collaboration and the issues it raised would
eventually lead to the creation of the field of cognitive science.[5]
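
The flavor of this step-by-step symbolic problem solving can be seen in a
toy state-space search (the code below is a sketch in the spirit of such
programs, not GPS itself; the states and operator names are invented):

    from collections import deque

    # A toy symbolic problem solver (illustrative only; not Newell and
    # Simon's actual GPS). States are symbol structures, operators are
    # named transformations, and the solver searches step by step for a
    # sequence of operators that turns the start state into the goal.

    def solve(start, goal, operators):
        """Breadth-first search over symbolic states; returns a plan."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, plan = frontier.popleft()
            if state == goal:
                return plan
            for name, apply_op in operators:
                nxt = apply_op(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
        return None  # no plan found

    # Example: rearrange a tuple of symbols using two operators.
    ops = [
        ("swap-first-two", lambda s: (s[1], s[0]) + s[2:]),
        ("rotate-left",    lambda s: s[1:] + s[:1]),
    ]
    print(solve(("B", "A", "C"), ("A", "B", "C"), ops))  # -> ['swap-first-two']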
These two lines of evidence suggested to Allen Newell and Herbert Simon
that "symbol manipulation" was the essence of both human and machine
intelligence.
The idea has philosophical roots in Hobbes (who claimed reasoning was
"nothing more than reckoning"), Leibniz (who attempted to create a logical
calculus of all human ideas), Hume (who thought perception could be
reduced to "atomic impressions") and even Kant (who analyzed all
experience as controlled by formal rules).[6] The latest version is called
the computational theory of mind, and is associated with philosophers
Hilary Putnam and Jerry Fodor.[7]
Criticism
Dreyfus and the primacy of unconscious skills
Main article: Dreyfus' critique of artificial intelligence
A version of the hypothesis was described by the philosopher Hubert
Dreyfus, who called it "the psychological assumption":

The mind can be viewed as a device operating on bits of information
according to formal rules.[8]
Dreyfus refuted this by showing that human intelligence and expertise
depend primarily on unconscious instincts rather than conscious
symbolic manipulation. Experts solve problems quickly by using their
intuitions, rather than by step-by-step trial-and-error searches. Dreyfus
intuitions, rather than step-by-step trial and error searches. Dreyfus
argued that these unconscious skills would never be captured in formal
rules.[9]
Searle and his Chinese Room
Main article: Chinese room
John Searle's Chinese Room argument, presented in 1980, attempted to show
that a program (or any physical symbol system) could not be said to
"understand" the symbols that it uses; that the symbols have no meaning
for the machine, and so the machine can never be truly intelligent.[10]
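The intuition behind the argument can be seen in miniature in a trivial
program (the rule table below is invented for illustration): the program
maps input symbols to output symbols by rote lookup, and nothing in it
can plausibly be said to understand them.

    # A toy "Chinese Room" (illustrative only): the program follows a
    # rulebook that pairs input symbols with output symbols. It can
    # produce sensible-looking replies without interpreting any of them.

    RULEBOOK = {
        "你好吗": "我很好",        # "How are you?" -> "I am fine"
        "你会说中文吗": "会一点",  # "Do you speak Chinese?" -> "A little"
    }

    def room(symbols):
        """Look up the input in the rulebook; no meaning is involved."""
        return RULEBOOK.get(symbols, "请再说一遍")  # "Please say that again"

    print(room("你好吗"))  # -> 我很好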
Brooks and the roboticists
Main articles: nouvelle AI, situated, and Moravec's paradox
In the late 1980s, a number of researchers advocated a completely new
approach to artificial intelligence, based on robotics. They believed
that, to show real intelligence, a machine needs to have a body: it needs
to perceive, move, survive and deal with the world. They argued that these
sensorimotor skills are essential to higher-level skills like commonsense
reasoning, and that abstract reasoning is actually the least interesting
or important human skill (see Moravec's paradox). They advocated building
intelligence "from the bottom up."[11]
In his 1990 paper "Elephants Don't Play Chess", robotics researcher Rodney
Brooks took direct aim at the physical symbol system hypothesis, arguing
that symbols are not always necessary since "the world is its own best
model. It is always exactly up to date. It always has every detail there
is to be known. The trick is to sense it appropriately and often enough."[12]
His strategy was to build independent modules that deal directly with the
world, and then to add further modules above this layer to implement
higher-level goals. He called this a subsumption architecture and named
the new field nouvelle AI.
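
The layering idea can be sketched as follows (a simplified illustration;
Brooks's actual controllers were networks of augmented finite-state
machines, and the behaviors and sensor names here are invented):

    # A simplified subsumption-style controller (illustrative only).
    # Each layer maps raw sensor readings directly to an action; layers
    # are consulted in priority order, standing in for the suppression
    # links of the real architecture.

    def avoid(sensors):
        """Lowest-level competence: do not collide with obstacles."""
        if sensors["obstacle_near"]:
            return "turn-away"

    def seek_goal(sensors):
        """Higher-level competence: head for a target when one is seen."""
        if sensors["goal_visible"]:
            return "move-toward-goal"

    def wander(sensors):
        """Default competence: keep moving through the world."""
        return "move-randomly"

    LAYERS = [avoid, seek_goal, wander]  # consulted in this order

    def control(sensors):
        """Return the action of the first layer that produces one."""
        for layer in LAYERS:
            action = layer(sensors)
            if action is not None:
                return action

    print(control({"obstacle_near": True, "goal_visible": True}))    # -> turn-away
    print(control({"obstacle_near": False, "goal_visible": False}))  # -> move-randomly

Note that no layer builds or consults a symbolic model of the world; each
reacts to the sensors directly, which is the point of Brooks's slogan.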
Connectionism
Main article: connectionism
Embodied philosophy
Main article: embodied philosophy
George Lakoff, Mark Turner and others have argued that our abstract skills
in areas such as mathematics, ethics and philosophy depend on unconscious
skills that derive from the body.
Notes
1. ^ Newell & Simon 1963 and Russell & Norvig 2003, p. 18
2. ^ Searle writes "I like the straight forwardness of the claim."
Searle 1980, p. 4
3. ^ Dreyfus 1979, pp. 130-148
4. ^ Haugeland 1985, p. 112
5. ^ Dreyfus 1979, pp. 91-129, 170-174 This type of research was
called "cognitive simulation."
6. ^ Dreyfus 1979, p. 156, Haugeland, pp. 15-44
7. ^ Horst 2005
8. ^ Dreyfus 1979, p. 156
9. ^ Dreyfus 1972, Dreyfus 1979, Dreyfus & Dreyfus 1986. See also
Russell & Norvig 2003, pp. 950-952, Crevier & 1993 120-132 and
Hearn 2007, pp. 50-51
10.^ Searle 1980, Crevier 1993, pp. 269-271
11.^ A popular version of this argument is given in Moravec 1988, p. 20
12.^ Brooks 1990, p. 3
References

Brooks, Rodney (1990), "Elephants Don't Play Chess", Robotics and
Autonomous Systems 6: 3-15,
<http://people.csail.mit.edu/brooks/papers/elephants.pdf>.
Retrieved on 30 August 2007.
Cole, David (Fall 2004), "The Chinese Room Argument", in Zalta,
Edward N., The Stanford Encyclopedia of Philosophy,
<http://plato.stanford.edu/archives/fall2004/entries/chinese-room/>.
Crevier, Daniel (1993), AI: The Tumultuous History of the Search for
Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3
Dreyfus, Hubert (1972), What Computers Can't Do, New York: Harper & Row,
ISBN 0060110821
Dreyfus, Hubert (1979), What Computers Still Can't Do, New York:
MIT Press.
Dreyfus, Hubert & Dreyfus, Stuart (1986), Mind over Machine: The
Power of Human Intuition and Expertise in the Era of the Computer,
Oxford, U.K.: Blackwell
Gladwell, Malcolm (2005), Blink: The Power of Thinking Without
Thinking, Boston: Little, Brown, ISBN 0-316-17232-4.
Haugeland, John (1985), Artificial Intelligence: The Very Idea,
Cambridge, Mass.: MIT Press.
Hobbes (1651), Leviathan.
Horst, Steven (Fall 2005), "The Computational Theory of Mind", in
Zalta, Edward N., The Stanford Encyclopedia of Philosophy,
<http://plato.stanford.edu/archives/fall2005/entries/computational-mind/>.
Kurzweil, Ray (2005), The Singularity is Near, New York: Viking
Press, ISBN 0-670-03384-7.
McCarthy, John; Minsky, Marvin & Rochester, Nathan et al. (1955),
A Proposal for the Dartmouth Summer Research Project on Artificial
Intelligence,
<http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html>.
Newell, Allen & Simon, H. A. (1963), "GPS: A Program that Simulates
Human Thought", in Feigenbaum, E.A. & Feldman, J., Computers and
Thought, McGraw-Hill
Russell, Stuart J. & Norvig, Peter (2003), Artificial Intelligence:
A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall,
ISBN 0-13-790395-2, <http://aima.cs.berkeley.edu/>
Searle, John (1980), "Minds, Brains and Programs", Behavioral and
Brain Sciences 3 (3): 417-457,
<http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html>
Turing, Alan (October 1950), "Computing machinery and
intelligence", Mind LIX (236): 433-460, ISSN 0026-4423,
doi:10.1093/mind/LIX.236.433