Is it possible to create a computer that mimics human intelligence by replicating the way
the human brain processes information?
Candidate Name: Emily Hernandez
Candidate Number: 003251-016
Advisor: Jenny Levi
Subject: Computer Science
Date: 4 November 2011
Word Count (Abstract): 300
Word Count (Essay): 3980
Abstract
In the computer age, attempts to create artificial intelligence have fueled many
technological advances. Increased processing capabilities have made it possible to design
machines that can rapidly compute and make inferences from inputs. True intelligence, however,
remains elusive. As neuroscience has advanced, many computer scientists have come to believe
that mimicking the processing method of the human brain is the key to achieving true
intelligence. Current systems, however, are not able to efficiently process the massive amounts
of varied information required to achieve brain-like processing, and the search is on for a
fundamentally new system. My research question is “Is it possible for scientists to replicate the
way the human brain processes information in a computer system?”
Before launching into a detailed study of the brain and artificial intelligence, I wanted to
define artificial intelligence, brain-like intelligence, and true intelligence. Once I better
understood these three terms, I sought to clarify how the brain processes information. After
realizing the complexity of brain processes, I explored the history of artificial intelligence and
concluded that traditional approaches to artificial intelligence seriously misinterpreted the brain’s
processing system. I examined flaws in traditional approaches, and sought information on
scientists making progress in the field of neuro-models of the brain’s processing system. First, I
focused on Hawkins's work to create a system that replicates the brain's visual learning system. As I continued looking for a model that incorporates all five senses of the brain's learning system, I discovered the work of Walter Freeman.
Freeman’s holistic approach to replicating the way a rabbit brain thinks and learns led me
to conclude that it is possible to replicate the human brain’s processing system. However, current
limits in efficiency, together with the added complexity of the human brain, mean this may take significant time to achieve.
Table of Contents
Abstract
Introduction
Important Definitions
Part 1: The Brain
Part 2: The Past
Part 3: Progress
Part 4: Walter Freeman
Part 5: Present Problems
Conclusion
Works Cited
Introduction
The quest to create a human-like computer has driven computer scientists seeking artificial intelligence for decades. This quest is not simply an effort to prove that we can accomplish the goal. Rather, it is important for applications ranging from security monitoring to the prediction of dangerous climate change and natural disasters (Hamm 92). If a computer system could process information with
the level of optimal efficiency and speed that the brain does, it could simultaneously gather and
process all types of sensory data, adapt as those data constantly change, and use those data to
support optimal decision-making, all without placing humans in harm’s way. Attempts to
achieve this goal by well-known scientists such as Minsky, Winograd, and Brooks all failed,
because their basic paradigm was based on classic computer architecture: a linear processing
system with separate data storage. At the time, scientists believed that the brain operated in this
manner. Recent advances in brain imaging and neuroscience, however, have led to a better
understanding of the unexpected ways in which the brain stores and processes information and
new approaches to try to reach the goal. Despite significant effort, scientists have yet to create a
computer system that possesses human-like intelligence.
Is it possible for scientists to create a computer that mimics human intelligence by
replicating the way the human brain processes information? As I explored the history of
artificial intelligence, I developed two main knowledge objectives: 1) to better understand how
the brain works; and 2) to determine why past research approaches have failed to produce the
desired result. Although a complete brain-like system does not yet exist, I discovered the
promising work of several scientists who have had success replicating portions of brain functions
in computer systems. Based on my discoveries, I have concluded that it is possible to develop a
human-like computer by replicating how the brain processes information.
Important Definitions
The definition of artificial intelligence (AI) has evolved over time. When the field
originated, the goal of AI was to create machines that could solve problems and learn on their
own (Dreyfus 40). To reach this goal, scientists employed computational learning, a process that
sifts huge quantities of data through a single unit, and then presents the best answer. Scientists
argued that since they had matched the way the brain represents the outside world as values, they
had created intelligent computers. Unfortunately, we now know that the brain does not act like a
computer’s processing unit. The brain does not store millions of facts, assign them values, and
then recall the “correct” answer (Dreyfus 41). The brain does not even function in a linear way.
In a system better represented by nonlinear graph theory, the brain’s neurons send signals and
form synaptic connections with one another, linking axons to dendrites in order to learn and to continually adapt to new sensory data. The brain's "thinking" is based on a system of continually changing, flexible connections between information and experiences.
To determine whether we can create human-like intelligence, one must first understand what the
word “intelligence” means. Intelligence is not simply about choosing the correct answer to a
problem. Intelligence is a predictive ability that determines how the world will change
(Perlovsky 2). In simpler terms, beings with intelligence have the ability to adjust to unexpected
occurrences with ideal efficiency. The brain’s ability to adapt and learn efficiently with a goal in
mind clearly demonstrates intelligence (Werbos 201). General problems are analyzed and solved
in the optimal amount of time (Werbos 201). At present, computers can be programmed to
behave intelligently under specific conditions, but they cannot consistently produce correct and appropriate answers that require contextual knowledge across a range of subjects. However, if a computer
system could be developed that actually processes information in the flexible and adaptive way
of the human brain, it is reasonable to expect we can produce true human-like intelligence in a
computer.
Part 1: The Brain
In order to determine whether or not it is possible to create a computer that possesses
human-like intelligence, one must first understand the basic components and capabilities of the
brain.
Understanding the neuron is the first step to understanding the brain. A newly born
neuron consists of a body. It extends arm-like projections to seek out neuron signals, also called
impulses or nerve pulses. The longest projection becomes the axon and acts as the generator of
impulses. The rest of the projections become dendrites, or neuron impulse receptors. (Von
Neumann 45-6)
[Figure 1: Neuronal Connections (Versace). Diagram of two neurons, labeling the axon, the dendrites, and the synapses that join them.]
The brain has a unique communication system that shapes the way it receives and
processes data. The fundamental way that neurons communicate is by tiny electrical disturbances
called "spike trains" (Browne 17). Stimulation of the end of an axon begins an impulse,
causing temporary chemical and electrical changes in neurons (Von Neumann 40, 45-6). The
pulse generated by the axon activates links, or synapses, between other neurons (Boahen n.p.). If
an impulse is successful, the disturbance spreads along axons to create a spike train (Von
Neumann 41). Data processing occurs in these synapses (Versace 33), the connecting points
between axons and dendrites. To form a link, axons produce “growth cones” that sense chemical
trails released by active neurons. The general rule is that neurons that “fire together wire
together” (Boahen n.p.). That is, in a sea of neuron signals, those that are active at the same time
accept signals from each other and form synaptic connections. Once a growth cone comes in
contact with a dendrite, a synapse is formed. (Boahen n.p.) Until a new stimulus creates a new
connection, that link is permanent and is used as an information processing point.
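To make the "fire together wire together" rule concrete, the following Python sketch, which I constructed purely for illustration, strengthens a simulated synapse only when two neurons are active in the same time step. The firing probabilities and learning rate are my own assumptions, not values drawn from the sources cited above.

import random

# Minimal Hebbian-learning sketch: the synaptic weight grows only when
# the pre- and post-synaptic neurons fire in the same time step.
# Firing probabilities and the learning rate are illustrative only.

LEARNING_RATE = 0.1

def hebbian_update(weight, pre_active, post_active):
    """Strengthen the synapse only when both neurons fire together."""
    if pre_active and post_active:
        weight += LEARNING_RATE * (1.0 - weight)  # saturates toward 1.0
    return weight

weight = 0.0
for step in range(100):
    pre = random.random() < 0.6           # the first neuron fires often
    post = pre or random.random() < 0.1   # the second usually fires with it
    weight = hebbian_update(weight, pre, post)

print(f"Synaptic weight after 100 steps: {weight:.2f}")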
The organization and quick reactions of the brain play a part in its efficiency. Although
the electrical components of the brain are slow-acting, the brain overcomes this barrier by
activating synapses between neurons (Boahen n.p.). These high-speed connections occur 10 quadrillion (10^16) times per second (Boahen n.p.). Moreover, only 10^-4 seconds elapse between the arrival of one nerve pulse and the reappearance of that pulse on an axon (Von Neumann 46). Even when factoring in fatigue, the delay between when a neuron accepts one pulse and is ready to accept a new pulse is only 1.5 × 10^-2 seconds (Von Neumann 46).
highly adaptive to new stimuli on a continuous basis.
Unless the components of the brain are closely examined, there appears to be no link
between computational intelligence and brain-like intelligence. However, similarities do exist.
Like binary code for a computer, nerve pulses are markers. The presence or absence of a pulse
can be represented by a value, similar to the binary values 1 and 0 (Von Neumann 43-4). Each
neuron accepts and emits pulses based on unique rules (Von Neumann 43-4). Sequences of
impulses define specific behaviors of the nervous system (Von Neumann 70). Likewise,
computer programmers input rules as sequences of computer codes to make the machine respond
to different activities. Many neuroscientists believe that the brain depends on “neural codes”, or
algorithms, to transform nerve impulses into thoughts, emotions, and perceptions (Horgan 38).
The major flaw in this theory is that scientists have identified only a few neural codes and have
not yet been able to mathematically prove this system.
These few similarities, easily outweighed by numerous differences between the brain and
current computer processing systems, may make it seem improbable that scientists can replicate
the brain’s adaptive processing system in computers. It is clear that the brain processes
information in a dynamic manner, not through a program with rigid rules. The brain’s
intelligence is achieved by a flexible system that can optimally and efficiently alter itself. If
scientists could create a new, non-linear computer processing system that replicates this flexible
reconfiguration, then it is plausible that system would be capable of brain-like intelligence.
Part 2: The Past
GOFAI (good old-fashioned artificial intelligence) approaches did not work. In my
opinion, this is because scientists did not base their work on an accurate understanding of how
the brain processes information.
For the past sixty years, computer software designed to replicate brain functions has been
based on the classic Von Neumann computer architecture with separate locations for data
processing and data storage. Using this system, computers can process only a fixed amount of
data at any given time, and data must be transferred back and forth between storage and
processing locations multiple times during the process. (Versace 33)
[Figure 2: Classic Von Neumann Computer Architecture (Versace). Diagram of a CPU linked to main memory by a data bus.]
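As a rough illustration of the bottleneck this architecture creates, the Python sketch below counts trips across the data bus in a toy fetch-process-store loop. The data set and transfer counting are invented for illustration and do not model any real machine.

# Toy model of the Von Neumann bottleneck: every operation forces data
# across the bus between separate memory and processing locations, so
# bus traffic grows with the size of the data. Counts are illustrative.

memory = list(range(1000))   # the separate data store
bus_transfers = 0

def fetch(address):
    global bus_transfers
    bus_transfers += 1       # one trip: memory -> CPU
    return memory[address]

def store(address, value):
    global bus_transfers
    bus_transfers += 1       # one trip: CPU -> memory
    memory[address] = value

# "Processing" the whole data set sends each item across the bus twice.
for address in range(len(memory)):
    store(address, fetch(address) + 1)

print(f"Items processed: {len(memory)}, bus transfers: {bus_transfers}")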
As early as the 1960s, scientists realized that the real problem was not about “storing
millions of facts” (Dreyfus 41). Any processing system with a large enough memory could
achieve this. Instead, the real issue was and is about common sense (Dreyfus 41). This problem,
known as the frame problem, became apparent shortly after the computational learning system
was devised. The frame problem asks how the computer should determine which two or three
facts are relevant in a given situation when a computer can store millions of pieces of
information at one time (Dreyfus 41). To try to get around this problem, Minsky attempted to
organize the millions of facts into ‘frames’, or common situations. Ideally, this would have
allowed the computer to organize relevant information, but all that resulted was a complicated
organizational system in which a tiny variable changed the entire outcome of the situation
(Dreyfus 42).
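A toy example, entirely of my own invention, may make this brittleness concrete. The Python sketch below encodes a "restaurant" frame as a dictionary of slots; one unanticipated variable is enough to derail the inference, because the system has no way to judge relevance.

# Toy "frame" in Minsky's sense: a stereotyped situation with slots and
# default expectations. The slots and rules here are invented solely to
# illustrate how one unanticipated variable breaks the inference.

restaurant_frame = {
    "customer_seated": True,
    "menu_delivered": True,
    "payment_method": "cash",
}

def infer_next_action(frame):
    # Rigid rule: correct only in the situations the designer foresaw.
    if frame["customer_seated"] and frame["menu_delivered"]:
        return "take order"
    return "seat customer"

print(infer_next_action(restaurant_frame))  # -> "take order"

# One small, unanticipated change to the situation...
restaurant_frame["power_outage"] = True
# ...and the frame still blindly answers "take order"; nothing tells the
# system that this new fact is the relevant one.
print(infer_next_action(restaurant_frame))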
The next step seemed obvious: instead of cramming the computer full of every possible
situation, why not limit the number of relevant features (Dreyfus 42)? AI micro-worlds were developed and appeared to model knowledge effectively. However, once these small-scale models were applied to larger domains, they failed (Dreyfus 42).
The next scientist who attempted to solve this problem was Brooks. Instead of using a
fixed model of the world, his robots continuously used sensors to determine movement and
change in the present world (Dreyfus 44-5). While his idea did mark an advance towards brain-like intelligence, Brooks's robots did not learn; they were simply programmed to respond to a few changing features in an advanced set of "if, then" scenarios (Dreyfus 45).
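A minimal sketch of such a controller, written in Python for illustration, might look like the following. The sensors, thresholds, and rules are hypothetical stand-ins for Brooks's far more sophisticated subsumption architecture.

# Minimal reactive controller in the spirit of Brooks's robots: no world
# model and no learning, just fixed "if, then" responses to live sensor
# readings. The sensors and thresholds are hypothetical.

def reactive_step(sensors):
    if sensors["obstacle_distance"] < 0.3:   # metres, assumed units
        return "turn_left"
    if sensors["light_level"] > 0.8:         # seek bright areas
        return "move_toward_light"
    return "move_forward"

# Identical sensor readings always produce identical actions: the robot
# responds to change in the world but never revises its own rules.
print(reactive_step({"obstacle_distance": 0.2, "light_level": 0.5}))
print(reactive_step({"obstacle_distance": 0.9, "light_level": 0.9}))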
Early computer scientists did not know that the human brain does not give equal meaning
to everything, nor does it “convert stimulus inputs into reflex responses” (Dreyfus 47). The
actions and reactions of the brain change with each new input. As a result, the same outside
stimuli will not produce the same signal set in response, because the brain is constantly updating
and revising its knowledge of the outside world (Horgan 37).
As time progressed and more information became available about the human brain, one
could see a trend of improvements towards replicating the brain’s processing system in a
computer. As many computer scientists pursued new programming approaches based on new
understanding of the brain’s processes, one man’s work fundamentally changed the field of
artificial intelligence by shifting it away from the brain's processes and towards behavioral aspects: Alan Turing.
The introduction of the Turing test was a pivotal point for the field of artificial intelligence.
This test suggests that if a computer can conduct a conversation with a human without the human
realizing that he or she is talking to a machine, then that computer is intelligent (Buttazzo 25).
Instead of focusing on the engineering and processing systems at work in the computer, this test
focuses on behavioral aspects (Hawkins 22). With the advent of the Turing test, researchers
turned away from studying the brain's processing system, focusing instead on creating human-like behavior in computers (Hawkins 22). Interestingly, only by creating a specialized topic area
of conversation has any computer been able to pass the Turing test (Buttazzo 25). For example,
scientists used this idea of specialization when creating the computer Deep Blue. By applying a
huge data set of rules, Deep Blue was able to analyze a chess game and find the most effective
move to defeat the world champion chess player (Buttazzo 25). By evaluating around 200
million possible moves per second, Deep Blue was able to defeat Garry Kasparov by sheer force (Boahen n.p.). Similarly, IBM's Watson was recently able to use very advanced algorithms to
answer complex questions in real time to defeat humans at the game “Jeopardy”. However,
neither system actually understands the game it plays or exhibits anything other than low-level
forms of brain-like learning.
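Deep Blue's brute-force approach can be caricatured in a few lines of Python: exhaustive minimax search over a game tree, scored by a hand-written evaluation function. The toy game below is invented; Deep Blue's actual search and evaluation were vastly more elaborate.

# Caricature of brute-force game search: exhaustive minimax over a tiny
# invented game in which a state is an integer, a move adds one to it or
# doubles it, and the "evaluation" simply prefers larger numbers.

def legal_moves(state):
    return [state + 1, state * 2] if state < 50 else []

def evaluate(state):
    return state   # hand-written scoring rule, the heart of brute force

def minimax(state, depth, maximizing):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = [minimax(m, depth - 1, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

# The search "understands" nothing; it only enumerates and compares.
print(minimax(1, depth=6, maximizing=True))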
Part 3: Progress
After decades of failed effort, it may seem impossible that scientists will ever produce a
machine with human-like intelligence. However, recent collaborative work by neuroscientists
and computer scientists has led to a resurgence of efforts to reverse-engineer the brain in order to
create intelligent computer systems.
Many of these efforts focus on new knowledge about the processing capabilities of the
cortex, the "brain region responsible for cognition" (Boahen n.p.). The neocortex makes up the
majority of the cortex and is the center of intelligence (Hawkins, On Intelligence 6). The
neocortical sheet is uniform; however, different sections handle different functions, from vision
to music, language, and motor skills (Hawkins, “Why” 22). Almost all high-level thought and
perception is handled by the neocortex (Hawkins, “Why” 22). Unlike rigid coding programs of
computers, the sections of the neocortex work together in one flexible algorithm (Hawkins,
“Why” 22). Information travels through hierarchical structures with neurons becoming more
complex as the levels increase. General information filters through each level of the neocortex
and is processed into focused pieces. (Hawkins, “Why” 22)
Because the neocortex thinks in terms of details and not complete objects, it can re-use
knowledge. While computer systems focus on full-length images for identification, the neocortex stores low-level visual details in low-level nodes. This allows basic characteristics to be filtered
through the hierarchy of nodes until a detailed image is reached. The neocortex does not have to
completely relearn an animal if it has previously seen a different animal that shares basic
characteristics, such as tails or fur. (Hawkins, “Why” 22)
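The Python sketch below, with feature names invented for illustration, shows this reuse: low-level details are stored once and shared, so a new animal that shares fur and a tail requires learning only what is genuinely new.

# Illustration of hierarchical reuse: low-level features are stored once
# and shared across objects, so recognizing a new animal does not mean
# relearning "fur" or "tail". Names and hierarchy are invented.

low_level_features = {"fur", "tail", "whiskers", "hooves"}  # learned once

objects = {
    "cat":   {"fur", "tail", "whiskers"},
    "horse": {"fur", "tail", "hooves"},
}

def learn_object(name, features):
    """Only genuinely new low-level features must be learned."""
    new_features = features - low_level_features
    low_level_features.update(new_features)
    objects[name] = features
    return new_features

# A dog shares fur and a tail with the cat; only "floppy_ears" is new.
print(learn_object("dog", {"fur", "tail", "floppy_ears"}))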
Jeff Hawkins, a neuroscientist fascinated by the possibilities the cortex holds for
intelligent machines, has created an algorithm that attempts to model the process of the
neocortex. Hawkins’s Hierarchical Temporal Memory (HTM) system is unlike other computer
systems in that it is not programmed. Instead, HTM is trained by exposure to sensory data so that it acts and learns like a young child. (Hawkins, "Why" 22) HTM attempts to imitate "the way humans
process visual imagery” (Hamm 92).
HTM utilizes the same hierarchical organization as the neocortex when recognizing
objects. Instead of storing an object as one memory, HTM breaks apart the details into levels to
piece together an image. Although an HTM system takes substantial time and memory to learn the very first object it encounters, subsequent objects reuse knowledge gained from the first. This allows for shorter training periods and more adaptive learning. (Hawkins,
“Why” 22)
Other important aspects of the HTM system are sequential pattern learning and
independent learning. Just as the neocortex matches new data to previously learned systems, an
HTM system can recognize patterns that it has seen before. (Hawkins, “Why” 22-3) Unlike other
computer systems, HTM does not have a lead node that dictates what other nodes will learn.
Each individual node adapts and changes its data as it learns, just as neurons in the brain
constantly form new connections. A lead node cannot exist because the system has no way of
knowing what it will learn in the future. (Hawkins, “Why” 24)
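To illustrate sequential pattern learning, here is a toy first-order sequence memory in Python. It is only a loose sketch of the idea: each observed transition updates its own counter with no lead node, but nothing here uses Hawkins's actual HTM algorithms.

from collections import defaultdict, Counter

# Toy first-order sequence memory: every observed transition is counted,
# and prediction returns the most frequent successor. A loose sketch of
# sequential pattern learning only, not Hawkins's actual HTM algorithms.

transitions = defaultdict(Counter)

def learn_sequence(sequence):
    for current, following in zip(sequence, sequence[1:]):
        transitions[current][following] += 1   # each entry updates itself

def predict(current):
    if not transitions[current]:
        return None                            # never seen: no prediction
    return transitions[current].most_common(1)[0][0]

learn_sequence(["dark clouds", "rain", "wet streets"])
learn_sequence(["dark clouds", "rain", "umbrellas"])

print(predict("dark clouds"))  # -> "rain", a previously seen pattern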
Through a combination of computer science and brain research, Hawkins has effectively
created a flexible system based on the brain. Although HTM does not have desires and motives
as do humans, it learns in the same way as the neocortex, proving that a brain-like computer can
be achieved by reverse engineering the brain. (Hawkins, “Why” 26) Hawkins’s work is
important, because HTM systems have practical uses in society. Since HTM systems can make
sense of data, they have applications in areas involving large data sets, such as oil drilling and
drug discovery. Picture recognition is another field in which HTM systems would thrive.
(Hawkins, "Why" 26)
Although there is no doubt that Hawkins’s system marks an incredible advance in
artificial intelligence, critics have noted that his system focuses solely on the visual aspect of the
brain. Without incorporating all the senses into one system, a true replica of the brain’s
processing system does not exist.
The field of “cognitive robotics”, devoted to making computers and artificial creations
behave like humans, has also seen dramatic advances. Historically, robots have mimicked human
performance, such as muscle movement and arm functions. As early as 1948, a basic form of a
cognitive robot was created by Grey Walter. His robots responded to light by moving forward.
RatSLAM, a flexible mapping program created later by Wyeth and Milford, is based on place cells in the hippocampus, the brain region responsible for spatial memory and navigation. By making no distinction
between learning and recalling information, this navigation and mapping system can adapt to
both short and long term changes. (Browne 17-18)
These scientists worked with individual senses in the brain, just as Hawkins focused only
on visual aspects. Although each project replicates part of the brain’s processing system, because
only one of the five senses is replicated, many believe that these models are unsatisfactory and
incomplete.
Part 4: Walter Freeman
Before I discuss the most advanced neurodynamic model of the brain, I must summarize
why all previously mentioned models have failed to meet the criteria for intelligence. The
infamous frame problem concerns how to assign meaning to the thousands of facts about the everyday world (Dreyfus 58). However, the everyday world, as humans see it, is already organized into levels of
significance and relevance (Dreyfus 58). Because all prior neuro-models are based on a standard
linear model, they have failed to address the frame problem and are therefore unsuccessful in
replicating human intelligence (Dreyfus 58). We now know that the brain does not passively
receive data and then assign meaning to it; the brain actively picks out relevant facts and binds
them together to make a better representation of the world (Dreyfus 59).
Because the nervous system is so complex, computer design will have to dramatically
change in order to produce human-like intelligence. To this day, computers have no common
sense, and instead require explicit programming (Horgan 38). Although they can perform
operations much faster than the brain can, computers have, with the exception of Hawkins's recent work, no real pattern recognition or visual processing skills (Boahen n.p.). Efficiency is
another concern: a supercomputer with brain-like functions weighs 1000 times more and
occupies 10,000 times more space than the brain (Boahen n.p.). Perhaps the biggest problem is that "Today's computers are essentially really fast abacuses. They're good at math but can't process complex streams of information in real time, as humans do" (Hamm 92). Basic perception is
impossible for computers, because they have a narrow range of abilities, usually limited to
completing one task (Hawkins, “Why” 21). Power is another serious problem. The brain operates
at around 100 millivolts, while computers require close to 1 volt to function (Versace 34).
The first scientist to seriously consider the idea that the brain is a nonlinear system was
Walter Freeman (Dreyfus 59). “On the basis of years of work on olfaction, vision, touch, and
hearing in alert and moving rabbits, Freeman has developed a model of rabbit learning based on
the coupling of the rabbit’s brain and the environment” (Dreyfus 60). Freeman’s work provides a
complete, multi-sense representation of an animal’s brain processing system. Because it
incorporates all five senses, his work can be considered a complete model of the way the
biological brain thinks and learns.
Freeman proved that the brain does not detect and process meaningless data about the
world. He has shown that selection of relevant features is based not on patterns, but on the
brain’s past experiences. The primary desire of a being is to fulfill its needs. When a need is
fulfilled, connections between neurons in the brain form and are strengthened. Each time the
need is subsequently met, the neurons “fire together and wire together” to create a “cell
assembly” (Dreyfus 60). As time passes and consistent sensory input occurs, cell assemblies
stay wired together and act together in the future. (Dreyfus 60-61) This system avoids any frame
or selection problem, because instead of focusing on detecting isolated data, the brain has cell
assemblies already adjusted based on past sensory input that signal the body to complete an
action (Dreyfus 61). Once again, Freeman correctly predicts that the constantly updating world
means that no two experiences are ever identical (Dreyfus 64). Therefore, each action slightly
modifies the processing system.
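A highly simplified Python sketch of this idea follows: connections strengthen only when co-activation coincides with a fulfilled need, gradually wiring a cell assembly together. The update rule and values are my own illustrative assumptions, not Freeman's actual neurodynamic equations.

import itertools

# Simplified cell-assembly sketch: connections between co-active neurons
# strengthen only when a need is fulfilled. The rule and values are
# illustrative assumptions, not Freeman's neurodynamic equations.

weights = {}   # (neuron_a, neuron_b) -> connection strength

def experience(active_neurons, need_fulfilled):
    if not need_fulfilled:
        return
    for a, b in itertools.combinations(sorted(active_neurons), 2):
        weights[(a, b)] = weights.get((a, b), 0.0) + 0.1

# Meeting the same need repeatedly wires a stable assembly together.
for _ in range(5):
    experience({"smell_food", "approach", "eat"}, need_fulfilled=True)
experience({"smell_food", "loud_noise"}, need_fulfilled=False)

print(weights)  # only the reinforced assembly has strong connections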
Based on Freeman's understanding of the brain, other computer scientists have worked to
create a replica of the brain’s processing system. Freeman himself has programmed his model of
the brain, with positive results, and his system could very well become the new paradigm for
artificial intelligence systems (Dreyfus 68). Scientists Robert Kozma and Peter Erdi have also
used Freeman’s model in an artificial dog that learns to run a maze (Dreyfus 73).
Part 5: Present Problems
Although progress has been made in replicating the processing system of the brain,
significant dilemmas prevent a brain-based robot from being created today. The three major problems with current computers are the efficiency problem, the power problem, and the
separation of software and hardware problem. MoNETA, a more recent system designed at
Boston University (Versace 32), attempts to solve both the power problem and the separation of
software and hardware problem.
MoNETA, Modular Neural Exploring Travel Agent, picks out important information in
its surroundings to help it survive. Because it is designed after “general-purpose intelligence”, it
can adapt to new environments without being retrained. Modeled after synapses in the brain,
MoNETA “recognizes, reasons, and learns” without programming. (Versace 35)
The components that make MoNETA successful are CPUs, GPUs, and memristors.
CPUs are flexible, neuron-like processing units. GPUs are inexpensive, rigid microprocessors
that perform limited operations. Memristors are electronic devices designed to mimic the signal
processing of the brain’s synapses. Like a synapse, they remember how much current has passed
through them without using power. (Versace 35-6) Memristors also allow the computer to
process and store data in the same place, solving a problem that has confronted scientists for
decades. Ironically, one of the most exciting things about the memristor is its high failure rate: although the technology is early in its development, the failure of a few individual memristors does not appear to affect the system it supports. Just as the brain's neural network does not fail when one neuron dies, the architecture of the memristor allows for defects (Versace 35-6).
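The memristor's defining behavior, remembering how much current has passed through it, can be sketched with a simple state model in Python. The linear update rule and parameter values below are a common textbook idealization, chosen only for illustration.

# Simple memristor sketch: the device's internal state integrates the
# charge that flows through it, and its resistance depends on that state.
# The linear update is a textbook idealization; values are illustrative.

R_ON, R_OFF = 100.0, 16000.0   # ohms, assumed device limits

class Memristor:
    def __init__(self):
        self.state = 0.0   # 0.0 = fully off, 1.0 = fully on

    def apply_current(self, current, dt, rate=0.5):
        # The state drifts with the charge (current x time) that flows.
        self.state = min(1.0, max(0.0, self.state + rate * current * dt))

    def resistance(self):
        return R_OFF + (R_ON - R_OFF) * self.state

device = Memristor()
for _ in range(3):
    device.apply_current(current=0.2, dt=1.0)   # drive pulses through it
print(f"Resistance after pulses: {device.resistance():.0f} ohms")
# Remove power and restore it: the state, and thus the stored value, remains.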
If the kinks in the MoNETA system can be worked out, it will effectively co-mingle
hardware and software in a processing system, making it the first to transfer, store, and process
information at the same time, just like the brain. However, both the rigidity of its GPUs, which are based on linear algebra, and the high power requirements of its CPUs currently make efficient operation of the system impossible. While memristors solve the power problem
due to their processing method, the technology cannot yet handle the volume of processing
necessary to replicate the brain’s processing system.
Conclusion
Based on my research, I believe that scientists will succeed in creating a computer that
mimics human intelligence by replicating the way the human brain processes information.
Early attempts to create human-like machine intelligence failed, because they were based
on fundamental misunderstandings about how the brain works. As time progressed, computer
scientists realized that their picture of how the brain organizes and processes information was too
simplistic: the brain is clearly non-linear and dynamic, rather than linear. Advances in
neuroscience and brain imaging technology have allowed computer scientists to better
understand how the brain works. This has led to new discoveries.
Three recent advances, in particular, lead me to believe that it will be possible to create a
computer that mimics human intelligence by replicating the way the human brain processes
information. First, Hawkins's work replicating the visual processing system of the neocortex
shows that individual senses in the brain can be duplicated in computers to create systems that
exhibit childlike thinking and learning. Second, MoNETA’s memristor component indicates that
computers can now process and store information in the same place while using minimal power,
like the brain. Finally, Walter Freeman’s complete neurodynamic model of the brain’s way of
learning and its subsequent use to produce human-like learning in a robotic dog proves that
scientists can create a replica of a biological brain’s processing system.
Creating a fully functioning, human-like robot is likely still twenty years away,
because scientists must resolve remaining issues associated with efficiency, power supply, and
the significantly greater complexities of the human brain as compared to other biological
systems. Nonetheless, I do believe that the combined and coordinated efforts of computer
scientists, mathematicians (graph theorists), neuroscientists, and engineers will realize the dream
of creating human-like intelligence in a machine.
Works Cited
Boahen, Kwabena. "Neuromorphic Microchips (cover story)." Scientific American 292.5 (2005): 56-63. Nursing & Allied Health Collection: Comprehensive. EBSCO. Web. 14 June 2011.
Browne, William, Kazuhiko Kawamura, and Jeffrey Krichmar. "Cognitive Robotics: New Insights into Robot and Human Intelligence by Reverse Engineering Brain Functions." IEEE Robotics and Automation Magazine 16.3 (2009): 17-18. Applied Science Full Text. Web. 14 June 2011.
Buttazzo, Giorgio. "Artificial Consciousness: Utopia or Real Possibility?" IEEE 34 (2001): 24-30.
Dreyfus, Hubert. "How Representational Cognitivism Failed and Is Being Replaced by Body/World Coupling." After Cognitivism: A Reassessment of Cognitive Science and Philosophy. Ed. Karl Leidlmair. Dordrecht: Springer, 2009. 39-73.
Hamm, Steve. "Building Computers That Mimic the Brain." Business Week 4110 (2008): 92.
Hawkins, Jeff. On Intelligence. New York: Times Books, 2004.
Hawkins, Jeff. "Why Can't a Computer Be More Like a Brain?" IEEE Spectrum 44 (2007): 20-6.
Horgan, John. "The Consciousness Conundrum." IEEE Spectrum 45.6 (2008): 36-41.
Perlovsky, Leonid. "Neural Dynamic Logic of Consciousness: The Knowledge Instinct." Neurodynamics of Higher-Level Cognition and Consciousness. Ed. Leonid Perlovsky and Robert Kozma. Heidelberg: Springer, 2007.
Versace, Massimiliano, and Ben Chandler. "The Brain of a New Machine." IEEE Spectrum 47 (2010): 30-7.
Von Neumann, John. The Computer and the Brain. New Haven and London: Yale University Press, 1958.
Werbos, Paul J. "Intelligence in the Brain: A Theory of How It Works and How to Build It." Neural Networks 22 (2009): 200-212.