Intelligent Technology of the Future - Cognitive Science
Luleå University of Technology
Siri Johansson 2010
Natural and Artificial Intelligence
When talking about human behaviour and ethics, the discussion of what is natural and what
isn’t can be an intricate one. Nature is often referred to as an unquestioned authority, eluding
further explanations of the concept. When naturalness is defined, it is often done with an
excluding purpose. In this essay I’ll discuss whether or not it is critical to make this distinction on
the topic of intelligence. The Turing test will be used as a reference for this objective. We’ll
begin by taking a look at the definition of artificial intelligence (AI).
AI is dedicated to developing programs that enable computers to display intelligent behaviour.
Since the 1950s, AI has had its ups and downs. The real rise came when research was
reoriented from general-purpose weak methods to knowledge-intensive expert systems
that use gathered human knowledge within a restricted domain (Negnevitsky, 2002). Since this
shift in the 1970s, AI research has combined expert systems with artificial neural
networks, which can learn from experience, and fuzzy logic, which uses a more human way of
reasoning (with words, even using imprecise terms). It is important to point out that AI
researchers aren’t restricted to studying and imitating biological intelligence. The field is free to
explore methods involving far more computation than people are capable of. Although most
contemporary research is concerned with narrow applications, there is still an interest in the
long-term goal of building generally intelligent agents (Thomason, Stanford Encyclopedia of
Philosophy).
When attempting to define a boundary between natural and artificial intelligence, the Turing
test is often brought up. The test, or “imitation game” as it was first referred to by Turing (1950),
defines intelligence as the ability to perform cognitive tasks on the level of a human. The test
consists of a human judge interrogating one human and one machine through a text-only
channel. They both try to appear convincingly human. The machine is said to have passed the
test if the judge can’t tell it from the real human. Turing believed the question “Can machines
think?” to be irrelevant and instead proposed that his test answered the question "Can
machines do what we (as thinking entities) can do?” (Turing, 1950). The definition of
intelligence should consequently not be concerned with the method by which output is
produced. A system very different from what goes on in a human brain but capable of
producing satisfying answers should therefore be considered more human-like than a poorly
performing system where great efforts have been put into simulating the workings of organic
neurons. Turing himself was not particularly interested in defining intelligence in any other
terms than behavioural.
We commonly describe intelligent behaviour as the ability to think, understand and learn.
Natural thinking has a causal drive; it is goal-directed. Information is interpreted contextually
and by taking external as well as our own internal states into account, we are able to reason,
solve problems and come up with ideas. Learning is propelled by the gathering of
knowledge, by acting, by analyzing the consequences of actions taken, and by adjusting future
behaviour accordingly to better direct oneself towards one’s goals. In order for an AI to be able
to learn, it too has to possess a plastic mind, capable of detecting mistakes and calculating
consequences. The artificial neural networks mentioned earlier are able to simulate a natural
learning process. One argument against strong AI [1] is however that the ability to process
symbols according to rules doesn’t qualify as actual thinking. This is classically illustrated by
John Searle’s Chinese room thought experiment [2] (Hauser, 2005). According to Searle, it doesn’t
matter if intelligent behaviour is produced - as long as the symbols being manipulated lack
semantic content, no actual thought process can be said to have occurred. The Chinese room
has been accused of dualism and meets its opponents in connectionists among others, who
argue that a more brain-like system with many agents working in parallel could understand,
even if each single processor component can’t.
[1] Strong AI research intends to produce machines with an intelligence that matches or exceeds
that of human beings, whereas weak AI only claims that machines can act intelligently (without
possessing real understanding). (Wikipedia)
[2] “The human in the Chinese Room follows English instructions for manipulating Chinese
symbols, where a computer “follows” a program written in a computing language. The human
produces the appearance of understanding Chinese by following the symbol manipulating
instructions, but does not thereby come to understand Chinese. Since a computer just does what
the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by
following a program, comes to genuinely understand Chinese.” (Cole, Stanford Encyclopedia of
Philosophy)

The Turing test does evaluate linguistic aspects, namely syntax. But even if an AI is capable of
producing expressions with perfect syntax, does the sophistication of one’s language determine
the sophistication of the thoughts expressed by an individual, as claimed by the linguistic
relativity hypothesis [3]? Does language equal thought, equal intelligence? The strong
version of linguistic determinism supposes that thought is not possible without language.
While this notion has been heavily criticized and today is more or less disregarded, there are
contemporary thinkers who contend that it is the imprecision and flexibility of language that
allows for the existence of our creativity-based natural intelligence. Ben-Jacob and Shapira (2008)
convincingly argue that the rules of the Turing test are set from a machine’s perspective,
making it inherently inconsistent. It is suggested that the rules of the game must be modified to
let the special features of natural intelligence be expressed. For example, the Turing test doesn’t
have causality built in, as there are no rewards or punishments. For a “game”, this is quite an
unusual premise. Designing the test to better evaluate learning processes would make it harder
to pass for an AI, but perhaps bring it closer to detecting what the test is really after.
I believe that, since the days when Turing made his definition, we have expanded on the notion
of human intelligence and begun to appreciate the different forms it can take. One example is
autism, where great intelligence often lies behind a practically impenetrable façade due to a
lack of higher cognitive abilities. It has to some extent been the increased use of computers
and machine intelligence (as a bridge for communication) that has opened up to reveal some
people’s true level of intellect. Many persons lacking higher-level cognitive abilities would
probably fail the Turing Test. This means that the test can’t really be said to detect humanness.
It also leads to the conclusion that a machine could still be intelligent, even though it fails the
test. Only a positive result in the test gives a certain answer – a machine that fails can still
be intelligent, merely lacking the capability to imitate common human behaviour, just like many
people.
In the same way that our everyday definition of intelligence has to be revised to include some
deviating cases of human intellect, forcing the definition of machine intelligence into reaching a
human level in cognitive tasks seems hard to justify. As pointed out by Negnevitsky (2002),
trying to reach this elusive goal might be pointless. Hence, a relevant question is whether the
Turing Test really sets appropriate goals for AI research. The same problems are encountered as
with a traditional, human intelligence test – are we really testing for desirable or relevant
abilities? This of course depends on the context in which the test is executed and what the
results will be used for. Perhaps a model similar to Howard Gardner’s theory of multiple
intelligences could be applicable to AI. For each unique case, different desired profiles or
personalities could be developed for the AI.

[3] “Many thinkers have urged that large differences in language lead to large differences in
experience and thought. They hold that each language embodies a worldview, with quite different
languages embodying quite different views, so that speakers of different languages think about
the world in quite different ways. This view is sometimes called the Whorf-hypothesis or the
Whorf-Sapir hypothesis, after the linguists who made it famous.” (Swoyer, Stanford Encyclopedia
of Philosophy)
What is often implicitly meant by natural intelligence is the cognitive abilities of individual
primates. In order not to lose sight of other systems of intelligence, collective intelligence
should be mentioned. Out of the different discussions on intelligence that I’ve so far come
across, the large topic of collective intelligence has appeared most intriguing and relevant to the
developments of our time. The phenomenon occurs in colonies of bacteria, insects and humans
alike. In fact, distributed intelligence is a large area of study with a close relationship to AI. It
researches how people and computers can collectively act more intelligently than any
individual (Handbook of Collective Intelligence). My view is that computers should be
designed to complement human intelligence in the best possible way. This could mean a
computer capable of dealing with human input and fully able to understand us, but without
necessarily simulating our, sometimes inefficient, ways of reasoning and communicating. AI
should be used to help solve complex problems, using the collective capabilities of humans and
machines.
NASA’s recent discovery of microbes able to live off arsenic means we are currently redefining
the conditions necessary for the existence of life. What about our minds? Is the progress made
in cognitive science and related disciplines leading to any redefinitions of intelligence? The
distinction of natural and artificial seems obvious only as long as you care about the process,
the “machinery” directing behaviour. The newly discovered arsenic microbes prove that
organisms can use biochemistry in ways we’ve never dreamed of, and they provide us with more
places to look for life. In the search for thought, a more open definition of the word can likewise
enable us to see things that we wouldn’t have seen otherwise. If we are too narrow-minded in
what we are looking for, we might be missing out on valuable intelligence, be it natural or
artificial. I believe it is more interesting to discuss what things in this world can be and try to
imagine new ways of being, rather than spending too much time and intelligence defining the
boundaries.
Works Cited
Ben-Jacob, Eshel and Shapira, Yoash. 2008 “Meaning-Based Natural Intelligence Vs.
Information-Based Artificial Intelligence”
http://en.scientificcommons.org/42513974
Cole, David. “The Chinese Room Argument” Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/entries/chinese-room/#3
Last updated: Sep 22, 2009 Date of access: 09 Dec, 2010
Handbook of Collective Intelligence, MIT Center for Collective Intelligence.
http://scripts.mit.edu/~cci/HCI/index.php?title=Main_Page
Last updated Sep 19, 2010. Date of access: Dec 5, 2010
Hauser, Larry. “Chinese Room Argument” Internet Encyclopedia of Philosophy
http://www.iep.utm.edu/chineser/
Last updated: July 27, 2005 Date of access: 09 Dec, 2010
Negnevitsky, Michael. Artificial Intelligence – A Guide to Intelligent Systems
Harlow, England: Addison-Wesley, 2002
“Strong AI vs. weak AI” Wikipedia
http://en.wikipedia.org/wiki/Strong_AI_vs._weak_AI
Last modified: May 17, 2008 Date of access: 09 Dec, 2010
Swoyer, Chris. “The Linguistic Relativity Hypothesis” Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/entries/relativism/supplement2.html
Year of publication: Feb 2, 2003 Date of access: 06 Dec, 2010
Thomason, Richmond. “Logic and Artificial Intelligence” Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/entries/logic-ai/
Last updated: May 9, 2008 Date of access: 07 Dec, 2010
Turing, Alan. 1950 "Computing Machinery and Intelligence", Mind
http://mind.oxfordjournals.org/content/LIX/236/433