Keynote Speech IV
Theme: The Body and Technologies
Computation as Salvation: Awaiting Binary Bodies
Hubert Dreyfus
Professor of Philosophy
University of California, Berkeley
Abstract

According to Ray Kurzweil: "Once computing speed reaches 10^16 operations per second—roughly by 2020—the trick will be simply to come up with an algorithm for the mind. When we find it, machines will become self-aware, with unpredictable consequences." This event is known as the technological singularity.

Wired Magazine tells us: there are singularity conferences now, and singularity journals. There has been a congressional report about confronting the challenges of the singularity, and [in 2007] there was a meeting at the NASA Ames Research Center to explore the establishment of a singularity university. Singularity University preaches that one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent Artificial Intelligence. Computers will become so powerful that they can model human consciousness. This will permit us to download our personalities into nonbiological substrates. When we cross this bridge, we become information. Then our bodies will be digitalized the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. And then, as long as we maintain multiple copies of ourselves to protect against a system crash, we won't die.

This current excitement is simply the latest version of a pattern that has plagued work in Artificial Intelligence since its inception. Marvin Minsky, Director of the MIT AI Laboratory, predicted: "Within a generation we will have intelligent computers like HAL in the film 2001: A Space Odyssey." But Minsky's research program failed and is now known as Good Old Fashioned AI. Rodney Brooks took over at MIT. He published a paper criticizing the GOFAI robots that used representations of the world and problem-solving techniques to plan the robot's movements. Rather, he reported that, based on the idea that the best model of the world is the world itself, he had "developed a different approach in which a mobile robot uses the world itself as its 'representation' – continually referring to its sensors rather than to an internal world model."

Brooks' approach is an important advance, but Brooks' robots respond only to fixed isolable features of the environment, not to context. It looked like AI researchers would have to turn to neuroscience and try to, as Kurzweil put it, reverse engineer the brain. But modeling the brain, with its billions of neurons each having on average ten thousand connections, may well require more knowledge than we now have or may ever have of the functional elements in the brain and how they are connected. If so, trying to "reverse engineer" the brain does not look promising. So Kurzweil had another idea. Since the design of the brain is in the genome, we could use our enormous computing power to model the brain DNA and then use that model DNA to grow an artificial brain. This seemed like a relatively sensible proposal, but developmental neuroscientists are outraged. Here's a typical response: Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome: what's in the genome is a collection of molecular tools … that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell to cell interactions, of which we understand only a tiny fraction. We have absolutely no way to calculate … all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell.

Why then are Kurzweil's speculations concerning the singularity accepted by elite computer experts and by seemingly responsible journalists? It seems to be the result of poor logic driven by a deep longing. Here religion and technology converge. As one author puts it, "the singularity is the rapture for nerds." Hardheaded naturalists desperately yearn for the end of our world where our bodies have to die, and eagerly await the dawning of a new world in which our bodies will be transformed into information and so we will achieve the promise of eternal life. As an existential philosopher, I suggest that we should give up this desperate attempt to achieve immortality by digitalizing our bodies and, instead, face up to our finitude.
I. Introduction
There is a new source of excitement in Silicon Valley. According to Ray Kurzweil:
Once computing speed reaches 10^16 operations per second—roughly by 2020—the trick will
be simply to come up with an algorithm for the mind. When we find it, machines will
become self-aware, with unpredictable consequences. This event is known as the
technological singularity.1
Kurzweil’s notion of a singularity is taken from cosmology, in which it signifies a border in
space time beyond which normal rules of measurement do not apply (the edge of a black hole,
for example).2
Kurzweil’ excitement is contagious. Wired Magazine tells us:
There are singularity conferences now, and singularity journals.
There has been a
congressional report about confronting the challenges of the singularity, and [in 2007] there
was a meeting at the NASA Ames Research Center to explore the establishment of a
singularity university. … Attendees included senior government researchers from NASA, a
noted Silicon Valley venture capitalist, a pioneer of private space exploration and two
computer scientists from Google.3
In fact:
Larry Page, Google’s … co-founder, helped set up Singularity University in 2008, and the
company has supported it with more than $250,000 in donations.” 4
Singularity University preaches a somewhat different story from the technological singularity
which requires the discovery of an algorithm. It goes like this: one day in the not-so-distant future,
the Internet will suddenly coalesce into a super-intelligent Artificial Intelligence, infinitely smarter
1. Mark Anderson, 03.24.08, "Never Mind the Singularity, Here's the Science," Wired Magazine: 16.04.
2. Ibid.
3. Gary Wolf, 03.24.08, "Futurist Ray Kurzweil Pulls Out All the Stops (and Pills) to Live to Witness the Singularity," Wired Magazine: 16.04.
4. Ashlee Vance, "Merely Human? That's so Yesterday," The New York Times, June 11, 2010.
than any of us individually and all of us combined; it will become alive in the blink of an eye, and
take over the world before humans even realize what’s happening.
That is, our bodies will be digitalized the way Google is digitizing old books, so that we can
live forever as algorithms inside the global brain. In the world envisaged by Kurzweil and the
Singularians:

[C]omputers [will] become so powerful that they can model human consciousness. This will
permit us to download our personalities into nonbiological substrates. When we cross
this…bridge, we become information. And then, as long as we maintain multiple copies of
ourselves to protect against a system crash, we won't die.5
Yes, it sounds crazy when stated so bluntly, but according to Wired these are ideas with
tremendous currency in Silicon Valley; they are guiding principles for many of the most influential
technologists.
I have no authority to speak about the possibility that computers will digitalize our bodies
thereby offering us eternal life, but I do know a good deal about the promises and disappointed hopes
of those who have predicted that computers will soon become intelligent. I hope to save you from
rushing off to Singularity University by recounting how the current excitement is simply the latest
version of a pattern that has plagued work in Artificial Intelligence since its inception. To judge
whether the singularity is likely, possible, or just plain mad, we need to see it in the context of the
half-century long attempt to program computers to be intelligent. I can speak with some authority
about this history since I've been involved almost from the start with what in the fifties came to be
called AI.
II. Stage I: The Convergence of Computers and Philosophy
When I was teaching Philosophy at MIT in the early sixties, students from the Artificial
5. Gary Wolf, Op. Cit.
Intelligence Laboratory would come to my Heidegger course and say in effect: “You philosophers have
been reflecting in your armchairs for over 2000 years and you still don’t understand how the mind
works.
We in the AI Lab have taken over and are succeeding where you armchair philosophers have
failed. We are now programming computers to exhibit human intelligence: to solve problems, to
understand natural language, to perceive, to play games and to learn.”
Phil Agre, a philosophically
inclined student at the AI Lab at that time, later lamented:
I have heard expressed many versions of the proposition …that philosophy is a matter of mere
thinking whereas technology is a matter of real doing, and that philosophy consequently can be
understood only as deficient.
I had no experience on which to base a reliable opinion on what computer technology could and
couldn’t do, but as luck would have it, in 1963 I was invited by the RAND Corporation to
evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive
Simulation.
Newell and Simon claimed that both digital computers and the human mind could
be understood as physical symbol systems, using strings of bits or streams of neuron pulses as
symbols representing the external world.
Intelligence, they claimed, didn’t require a body, but
merely required making the appropriate inferences from internal mental representations.
As
they put it: “A physical symbol system has the necessary and sufficient means for general
intelligent action.”6
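Their claim can be pictured in miniature: knowledge is a store of symbolic assertions, and intelligence is rule-governed inference over them. The sketch below is my own toy forward-chaining illustration of that picture, written in Python for concreteness; it is not Newell and Simon's actual programs (such as the Logic Theorist or GPS).

```python
# Toy "physical symbol system": facts are symbol triples, and "intelligence"
# is drawing formal inferences from them.  An illustration of the general
# picture only, not Newell and Simon's actual systems.
facts = {("hammer", "is-a", "tool"), ("tool", "used-for", "building")}

def forward_chain(facts):
    """Apply one transitivity-style rule until no new facts can be derived:
    if X is-a Y and Y used-for Z, then X used-for Z."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (y2, r2, z) in list(derived):
                if r1 == "is-a" and r2 == "used-for" and y == y2:
                    new_fact = (x, "used-for", z)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts))
# Derives ('hammer', 'used-for', 'building') -- but nothing here touches what
# a hammer means in an actual situation, which is the difficulty Dreyfus
# describes below.
```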
As I studied the RAND papers and memos, I found to my surprise that, far from replacing
philosophy, the pioneers in CS had learned a lot, directly and indirectly from the philosophers. They
had taken over Hobbes’ claim that reasoning was calculating, Descartes’ idea that the mind manipulated
mental representations, Leibniz’s idea of a “universal characteristic”—a set of primitives features in
which all knowledge could be expressed—Kant’s claim that concepts were rules, Frege’s formalization
of rules, and Russell’s postulation of logical atoms as the building blocks of reality. In short, without
realizing it, AI researchers were hard at work finding the rules and representations needed for turning
6. Newell, A. and Simon, H., "Computer Science as Empirical Inquiry: Symbols and Search," Mind Design, John Haugeland, ed., Cambridge, MA: MIT Press, 1988.
rationalist philosophy into a research program.
At the same time, I began to suspect that the critique of rationalism formulated by philosophers
in existentialist armchairs—especially by Martin Heidegger and Maurice Merleau-Ponty—as well as the
devastating criticism of traditional philosophy developed by Ludwig Wittgenstein, were bad news for
those working in AI. I suspected that by turning rationalism into a research program, AI researchers
had condemned their enterprise to reenact a failure.
III. Symbolic AI as a Degenerating Research Program
It looked like the AI research program was an exemplary case of what philosophers of science
call a degenerating research program. That is, a way of organizing research that incorporated a
basically wrong approach to its domain, so that its predictions constantly failed to pan out, and whose
believers were ready to abandon it as soon as they could find an alternative.
I was
particularly struck by the fact that, among other troubles, researchers were running up against the
problem of representing relevance in their computer models – a problem that Heidegger saw was
implicit in Descartes’ understanding of the world as a set of meaningless facts to which the mind
assigned what Descartes called values.
Heidegger warned that values are just more meaningless facts. To say a hammer has the
function of hammering leaves out the relation of hammers to nails and other equipment, to the point
of building things, and to the skills required when actually using a hammer.
Merely assigning formal
function predicates like "used in driving in nails" to brute facts, such as that hammers weigh five pounds,
couldn’t capture the meaningful organization of the everyday world in which hammering makes sense.
But Marvin Minsky, Director of the MIT AI Laboratory, unaware of Heidegger’s critique, was
convinced that representing a few million facts about a few million objects would solve what had
come to be called the commonsense knowledge problem.
In 1968, he predicted: “Within a
generation we will have intelligent computers like HAL in the film 2001: A Space Odyssey.”7 He
added: “In 30 years [i.e. by 1998] we should have machines whose intelligence is comparable to
man’s.8
It seemed to me, however, that the deep problem wasn’t storing millions of facts; it was
knowing which facts were relevant in any given situation. One version of this relevance problem
was called “the frame problem.”
If the computer is running a representation of the current state of its
world and something in the world changes, how does the program determine which of its represented
facts can be assumed to have stayed the same, and which would have to be updated? For example, if I
put up the shades in my office, which other facts about my office will change?
The intensity of the
light, perhaps the shadows on the floor, but presumably not the number of books.
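To put the difficulty in concrete terms, here is a minimal sketch (my own illustration, not code from any historical AI system) of a stored world model facing exactly this question: after one fact changes, which of the others can the program safely assume to be unchanged?

```python
# A toy "world model": an office represented as a dictionary of facts.
# Illustrative sketch only; not code from any GOFAI system.
office = {
    "shades": "down",
    "light_intensity": "dim",
    "shadows_on_floor": "long",
    "number_of_books": 37,
}

# Facts that *might* depend on other facts.  In a real system this table
# would have to cover every possible interaction -- which is the problem.
DEPENDS_ON = {
    "light_intensity": {"shades"},
    "shadows_on_floor": {"shades"},
    "number_of_books": set(),          # presumably unaffected, but how
}                                      # would the program know that?

def update(world, changed_fact, new_value):
    world[changed_fact] = new_value
    # The frame problem: which of the remaining facts may now be stale?
    # Either the programmer enumerates every dependency in advance
    # (hopeless for millions of facts), or the program re-checks everything.
    return [f for f, deps in DEPENDS_ON.items() if changed_fact in deps]

print(update(office, "shades", "up"))   # -> ['light_intensity', 'shadows_on_floor']
```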
Minsky suggested that, to avoid the frame problem, AI programmers could use what he called
frames—descriptions of typical situations like going to a birthday party in which only relevant facts
were listed.
So the frame for birthday parties, for example, after each new guest arrived, required the
program to check those and only those facts that were normally relevant to birthday parties—the
number of presents, for example, but not the weather—to see whether they had changed.
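In this picture a frame is just a stereotyped description with slots for the facts that normally matter in that kind of situation. The sketch below is my own minimal rendering of the idea, not Minsky's notation:

```python
# Minimal illustration of a Minsky-style frame: a stereotyped situation
# with slots for the facts deemed relevant in advance.  Hypothetical code.
birthday_party_frame = {
    "situation": "birthday party",
    "slots": ["guests", "presents", "cake", "host"],    # checked on each event
    "ignored": ["weather", "number_of_books"],          # never re-examined
}

def on_event(frame, world, event):
    """When something happens, re-check only the frame's relevant slots."""
    print(f"Event: {event}")
    for slot in frame["slots"]:
        print(f"  re-checking {slot}: {world.get(slot)}")

world = {"guests": 5, "presents": 4, "cake": "uncut", "host": "Marvin",
         "weather": "raining"}
on_event(birthday_party_frame, world, "a new guest arrives")
# The catch: something outside the frame had to decide that *this* situation
# is a birthday party rather than, say, a restaurant -- which needs a frame
# for selecting frames, and so on (the regress described next).
```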
But a system of frames isn’t in a situation, so in order to select the possibly relevant facts in
the current situation one would need a frame for recognizing the current situation as a birthday party,
that is for telling it from other social events such as ordering in a restaurant. But how, I wondered,
could the computer select from the thousands of frames in its memory the relevant frame for selecting
the social events frame, say, for selecting the birthday party frame, so as to see the current relevance
of, for example, an exchange of gifts rather than of money?
It seemed to me obvious that any AI
program using frames to organize millions of meaningless facts so as to retrieve the currently relevant
ones was going to be caught in a regress of frames for recognizing relevant frames for recognizing
relevant facts, and that, therefore, the frame problem wasn’t just a problem but was a sign that
something was seriously wrong with the whole approach of seeking to select a de-situated frame to
7. MGM 1968 press release for Stanley Kubrick's 2001: A Space Odyssey.
8. Marvin Minsky, as heard on Michael Krasny's KQED radio program Forum (circa 2008).
give meaning to a specific event in a specific situation. Indeed, Minsky has recently acknowledged
in Wired that AI has been brain dead since the early seventies, when it encountered the problem of
commonsense knowledge.9
Terry Winograd, the best of the AI graduate students back then, unlike his colleagues at M.I.T.,
wanted to try to figure out what had gone wrong. So in the mid-seventies Terry and I began having
weekly lunches to discuss the frame problem, the commonsense knowledge problem, and other such
difficulties in a philosophical context.
After a year of such conversations Winograd moved to Stanford where he abandoned work on
AI and began teaching Heidegger in his Computer Science courses. In so doing, he became the first
high-profile deserter from what was, indeed, becoming a degenerating research program. That is,
researchers began to have to face the fact that their optimistic predictions had failed.
John
Haugeland refers to the AI of symbolic rules and representations of that period as Good Old
Fashioned AI—GOFAI for short—and that name has been widely accepted as capturing its status as
an abandoned research program.
IV. Seeming exceptions to the claim that AI based on features and rules has totally failed: The success of Deep Blue and (perhaps) Jeopardy
But the history of computer chess, many claim, makes my criticism of AI look misguided.
I
wrote in 1965 in a RAND report that computers currently couldn’t play chess well enough to beat a
ten-year-old beginner. The AI people twisted my report on the limitations of current AI research into
a claim that computers would never play chess well enough to beat a ten-year-old. They then
challenged me to a game against MacHack, at the time the best M.I.T. program, which to their delight
beat me roundly.
Things stayed in a sort of standoff for about twenty years.
9. Wired Magazine, Issue 11:08, August 2003.
Then, given the dead end of
programming common sense and relevance, researchers redefined themselves as knowledge engineers
and devoted their efforts to building expert systems in domains that were divorced from everyday
common sense. They pointed out that in domains such as spectrograph analysis rules elicited from
experts had enabled the computer to perform almost as well as an expert. They then made wild
predictions about how all human expertise would soon be programmed.
At the beginning of AI
research, Yehoshua Bar-Hillel called this way of thinking the first-step fallacy. Every success was
taken to be progress towards their goal. My brother at RAND quipped, “It's like claiming that the
first monkey that climbed a tree was making progress towards flight to the moon.”
It turned out that competent people do, indeed, follow rules so the computer could be
programmed to exhibit competence, but that masters don’t follow rules, so expertise was out of reach.
In spite of the optimistic predictions based on early successes in simplified formal domains, there
were no expert systems that could achieve expertise in the messy everyday world.
Then, to every AI programmer’s delight, an IBM program, Deep Blue, beat Garry Kasparov
the world chess champion. The public, and even some AI researchers who ought to have known
better, concluded that Deep Blue’s masterful performance at chess showed that it was intelligent. But
in fact Deep Blue’s victory did not show that a computer running rules gotten from masters could beat
the masters at their own game.
What it showed was that computers had become so fast that they
could win by brute force enumeration. That is, by looking at 200 million positions per second, and
so looking at all possibilities as many as seven moves ahead, and then choosing the move that led to
the best position, the program could beat human beings who could look at most at about 300 relevant
moves in choosing a move.
But only in a formal game where the computer could process millions of
moves without regard for relevance could brute force win out over intelligence.
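The contrast Dreyfus is drawing, exhaustive enumeration versus a sense of what matters, is easy to see in outline. The sketch below is a generic fixed-depth brute-force game search, shown on a tiny subtraction game so that it is self-contained; it illustrates only the enumerate-everything structure, not Deep Blue's actual program, which ran on specialized hardware with many refinements.

```python
# Generic brute-force fixed-depth game search, illustrated on a toy
# subtraction game so the sketch is self-contained.  The structure is the
# point: enumerate every continuation, evaluate, pick the best, with no
# notion of which lines are relevant.

def legal_moves(pile):
    """In the toy game a player may remove 1, 2, or 3 stones from the pile."""
    return [n for n in (1, 2, 3) if n <= pile]

def evaluate(pile, my_turn):
    """Whoever cannot move (pile == 0) loses; unfinished positions score 0."""
    if pile == 0:
        return -1 if my_turn else +1
    return 0

def search(pile, depth, my_turn):
    """Enumerate every continuation to a fixed depth."""
    if pile == 0 or depth == 0:
        return evaluate(pile, my_turn)
    scores = [search(pile - m, depth - 1, not my_turn) for m in legal_moves(pile)]
    return max(scores) if my_turn else min(scores)

def best_move(pile, depth=7):
    """Pick the move leading to the best position `depth` plies ahead."""
    return max(legal_moves(pile), key=lambda m: search(pile - m, depth - 1, False))

print(best_move(10))   # -> 2: leaves a pile of 8, a lost position for the opponent
```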
An interesting variation of this same point is about to be tested. IBM claims it is about to
demonstrate a new phrase-based search engine named “Watson” that can play Jeopardy.
But
Jeopardy, one might think, takes place in the real world where a sense of relevance is essential. The
winner, it would seem, has to search through thousands of possibly relevant facts in the real world.
What the success of a Jeopardy program would show would be that, as in chess, relevance can be
replaced by brute force. Very, very roughly, if the program did a Google search for all the phrases in
the question and their relation to trillions of other grammatical phrases, and found the phrase that
turned up most frequently, it might well be the winning one.
So if “Watson” wins, it will show that,
with no sense of meaning and relevance, but enough brute speed to search and correlate a huge
amount of data, a computer can produce a syntactic substitute for relevance even in a non-formal
problem domain. This is a surprising result, but not one that can be taken to show that
computers can be programmed to behave in a humanly intelligent way.
The test for a machine’s ability to demonstrate intelligence remains the test proposed the
fifties by Alan Turing. The Turing Test, as it is called, requires two human beings and a computer.
A computer and a human being are placed in separate rooms. They are not visible to a second
person—the judge—whose job it is to decide which respondent is the human.
conversation by texting with the occupants of the two rooms.
The judge carries on a
The human being tries to get the judge
to understand that he is a human being; the computer tries to fool the judge into thinking that it is the
human.
According to Turing, if the computer succeeds in fooling the judge into thinking that it is a
human being, the computer counts as thinking. Each year the Turing Test has been performed, but
no programmer has ever come close to writing a program that enables a de-situated computer with no
sense of relevance to pass the test. There is no reason to think that even a successful “Watson”
program could pass it. Nonetheless, Kurzweil predicts that a computer will pass the Turing Test by
2029.
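The test Dreyfus describes is simple enough to state as a protocol. The sketch below is a schematic of the judging loop only, with trivial placeholder respondents and a placeholder judge of my own; it is not a real implementation, and passing the test obviously requires far more than the scaffolding shown.

```python
import random

# Schematic Turing Test protocol with placeholder respondents and judge.
def human_reply(question):
    return "Honestly, that is hard to put into words."

def machine_reply(question):
    return "Honestly, that is hard to put into words."

def run_trial(questions, judge):
    """One trial: the judge sees two anonymous transcripts and picks the human."""
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)                     # hide who is in which room
    transcripts = [[(q, reply(q)) for q in questions] for _, reply in respondents]
    guess = judge(transcripts)                      # judge returns index 0 or 1
    return respondents[guess][0] == "human"         # did the judge find the human?

guessing_judge = lambda transcripts: random.randrange(2)
wins = sum(run_trial(["Can you write me a sonnet?"], guessing_judge) for _ in range(1000))
print(f"judge identified the human in {wins}/1000 trials")
# Turing's criterion: the machine counts as thinking if judges can do no
# better than chance at telling the two apart.
```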
V. Stage II: An Alternative Approach: Eliminating Representations and Rules by Building Somewhat Embodied Behavior-Based Robots
In March 1986, the MIT AI Lab, under its new director, Patrick Winston, reversed Minsky's
attitude toward me and allowed several graduate students to invite me to give a talk.
Not everyone
was pleased. The graduate student responsible for the invitation reported to me that after it was
announced that I was giving a talk, "Minsky came into his office and shouted at him for ten minutes or
so for inviting me.”
I accepted the invitation anyway and called the talk: “Why AI Researchers should study
Heidegger’s Being and Time.”
I repeated what I had written in my book, What Computers Can’t Do:
“[T]he meaningful objects ... among which we live are not a model of the world stored in our mind or
brain; they are the world itself.”10
The year of my talk, Rodney Brooks, who had moved from Stanford to MIT, published a
paper criticizing the GOFAI robots that used representations of the world and problem solving
techniques to plan the robot’s movements.
He reported that, based on the idea that “the best model
of the world is the world itself,” he had “developed a different approach in which a mobile robot uses
the world itself as its “representation” – continually referring to its sensors rather than to an internal
world model.”11
Brooks gave me credit for “being right about many issues such as the way in which
people operate in the world is intimately coupled to the existence of their body,” 12 but he denied the
direct influence of Heidegger, saying:
In some circles, much credence is given to Heidegger as one who understood the dynamics of
existence. Our approach has certain similarities to work inspired by this German
philosopher . . . but our work was not so inspired.
It is based purely on engineering
considerations.13
10. Hubert Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason, MIT Press, 1992, 265-266.
11. Rodney A. Brooks, "Intelligence without Representation," Mind Design, John Haugeland, ed., The MIT Press, 1988, 416. (Brooks' paper was published in 1986.)
12. [Ibid. 42.] (not sure this is the correct reference)
13. R. Brooks, "Intelligence without Representation," 415.

[Haugeland explains Brooks' breakthrough denial of representation by using as an example
Brooks' robot Herbert that goes around the MIT robot lab picking up soda cans: Brooks uses what
he calls "subsumption architecture", according to which systems are decomposed not in the
familiar way by local functions or faculties, but rather by global activities or tasks. ... Thus,
Herbert has one subsystem for detecting and avoiding obstacles in its path, another for
wandering around, a third for finding distant soda cans and homing in on them, a fourth for
noticing nearby soda cans and putting its hand around them, a fifth for detecting something
between its fingers and closing them, and so on... fourteen in all. What's striking is that these
are all complete input/output systems, more or less independent of each other.14]
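Haugeland's description, independent task-level layers each running its own sense-act loop, can be sketched in a few lines. The following is a hypothetical illustration of the layering idea only, not Brooks's actual subsumption code (which was specified as augmented finite-state machines); a simple priority arbitration stands in for his suppression and inhibition wiring.

```python
# Hypothetical sketch of subsumption-style layering: each behavior is a
# complete sense-to-act loop that consults the sensors (the world itself)
# rather than a central world model.  Not Brooks's actual code.
def grasp_can(sensors):
    if sensors.get("can_between_fingers"):
        return "close fingers"

def home_in_on_can(sensors):
    if sensors.get("can_in_view"):
        return "approach can"

def avoid_obstacles(sensors):
    if sensors.get("obstacle_ahead"):
        return "turn away"

def wander(sensors):
    return "move forward"

# Higher-priority behaviors come first; the first layer with something to do
# takes over ("subsumes") the ones below it.
LAYERS = [grasp_can, home_in_on_can, avoid_obstacles, wander]

def act(sensors):
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command

print(act({"obstacle_ahead": False, "can_in_view": True}))   # -> 'approach can'
```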
Looking back at the frame problem, Brooks writes:
And why could my simulated robot handle it?
Because it was using the world as its model. It
never referred to an internal description of the world that would quickly get out of date if
anything in the real world moved.15
Brooks’ approach is an important advance, but Brooks’ robots respond only to fixed isolable
features of the environment, not to context. That is, by operating in a set of fixed worlds and
responding only to the small set of possibly relevant features in each, Brooks’ animats beg the
question of changing relevance and so finesse rather than solve the frame problem.
Still, Brooks comes close to an existentialist insight, viz. that intelligence is founded on and
presupposes the basic way of coping with relevance we share with animals, when he says:16
It is soon apparent, when "reasoning" is stripped away as the prime component of a robot's
intellect, that the dynamics of the interaction of the robot and its environment are primary
determinants of the structure of its intelligence. 17
Surprisingly, the modesty Brooks exhibited in choosing to approach AI by first constructing
simple insect-like devices did not save Brooks and Daniel Dennett, a philosopher at Tufts, from
falling for the first-step fallacy and repeating the extravagant optimism characteristic of AI researchers
in the sixties. As in the days of GOFAI, on the basis of Brooks’ success with ant-like devices,
instead of trying to make, say, an artificial spider, Brooks and Dennett decided to leap ahead and build
a humanoid robot. As Dennett explained in a 1994 report to The Royal Society of London:

A team at MIT of which I am a part is now embarking on a long-term project to design and
build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated
manipulation of objects, and a host of self-protective, self-regulatory and self-exploring
activities.18

14. John Haugeland, Having Thought: Essays in the Metaphysics of Mind (Cambridge, MA: Harvard University Press, 1998), 218.
15. Rodney A. Brooks, Flesh and Machines: How Robots Will Change Us, Vintage Books (2002), 168.
16. See Maurice Merleau-Ponty, The Structure of Behavior, A. L. Fisher, trans., Boston: Beacon Press, 2nd edition, 1966.
17. Brooks, "Intelligence without Representation," 418.
Dennett seems to reduce this project to a joke when he adds in all seriousness: “While we are at it, we
might as well try to make Cog crave human praise and company and even exhibit a sense of humor.”19
Of course, the “long term project” was short lived.
Cog failed to achieve any of its goals
and the original robot is already in a museum.20 But, as far as I know, neither Dennett nor anyone
connected with the project has published an account of the failure and what mistaken assumptions
underlay their absurd optimism.
In a personal communication to me Dennett blamed the failure on a
lack of graduate students and claimed that: “Progress was being made on all the goals, but slower than
had been anticipated.”21
If substantive progress were actually being made, however, the graduate students wouldn’t
have left, or others would have arrived to work on the project. Clearly some specific assumptions
must have been mistaken, but all we find in Dennett’s assessment is the first-step fallacy—the implicit
assumption that human intelligence is on a continuum with insect intelligence, and that therefore
adding a bit of complexity to what has already been done with animats counts as progress toward
humanoid intelligence.
In spite of the breakthrough of giving up internal representations, what AI researchers have to
face and understand is not only why our everyday coping couldn’t be understood in terms of
18. Daniel Dennett, "The Practical Requirements for Making a Conscious Robot," Philosophical Transactions of the Royal Society of London, A, v. 349, 1994, 133-146.
19. Ibid., 133.
20. Although, as of going to press in 2007, you couldn't tell it from the Cog web page (www.ai.mit.edu/projects/humanoid-robotics-group/cog/).
21. Private communication, Oct. 26, 2005. (My italics.)
inferences from symbolic representations, as Minsky’s rationalist approach assumed, but also why it
couldn’t be understood in terms of Brooks’ robot’s responses to meaningless fixed features of the
environment either.
AI researchers needed to consider the possibility that embodied beings like us have brains that
take as input energy from the physical universe, and respond in such a way as to open themselves to a
world organized in terms of their bodily capacities without their minds needing to impose meaning on
a meaningless given, as Minsky's frames require, or their brains converting stimulus input into
reflex responses, as in Brooks’s animats. That is, AI researchers would have to turn to neuroscience
and try to, as Kurzweil puts it, “reverse engineer” the brain.
VI. Stage III: Kurzweil's Suggestion
But modeling the brain’s with its billions of neurons with on the average 10 thousand
connections on each may well require more knowledge than we now have or may ever have of the
functional elements in brain and how they are connected. If so, trying to “reverse engineer” the
brain does not look promising. But Kurzweil has now had another idea. Since the design of the
brain is in the genome, perhaps we could use our enormous computing power to model the brain DNA
and then use that model DNA to grow an artificial brain.
This seemed like a relatively sensible proposal, but developmental neuroscientists are outraged.
Here's a typical response. P. Z. Myers, a biologist at the University of Minnesota, says:
Kurzweil knows nothing about how the brain works.
Its design is not encoded in the
genome: what's in the genome is a collection of molecular tools …that makes cells responsive
to interactions with a complex environment. The brain unfolds during development, by means
of essential cell to cell interactions, of which we understand only a tiny fraction. The end
result is a brain that is much, much more than simply the sum of the nucleotides (the DNA) that
encode a few thousand proteins…
[W]e have absolutely no way to calculate … all the possible interactions and functions
of a single protein with the tens of thousands of other proteins in the cell! 22
Myers continues:
I'll make a prediction, too. We will not be able to plug a single unknown protein
sequence into a computer and have it derive a complete description of all of its functions by
2020.
Conceivably, we could replace this step with a complete, experimentally derived
quantitative summary of all of the functions and interactions of every protein involved in brain
development and function, but I guarantee you that won't happen either. …
I'll make one more prediction. The media will not end their infatuation with this pseudoscientific dingbat, Kurzweil, no matter how uninformed and ridiculous his claims get. 23
VII. Conclusion
Why then are Kurzweil’s speculations concerning the singularity accepted by elite computer
experts and by seemingly responsible journalists? It seems to be the result of poor logic driven by a
deep longing. The logic rests on an inverse version of the first-step fallacy. The fallacy used to be
the claim that, since the first steps toward computer intelligence, we have been inching along a
continuum, so that any improvement, no matter how trivial, counts as progress.
Now the latest
desperate claim is that computers will achieve and surpass human intelligence not by inching along a
continuum, but by a radical discontinuity—the singularity—a break so radical that after it occurs our
science and logic will not apply.
The discontinuity is assumed to be so total that it allows sheer fantasy as to what is possible
on the other side of the discontinuity. In fact, the argument for the singularity, if you can call it an
argument, combines both bad arguments from continuity. Kurzweil assumes accelerating continuity
up to the singularity, and radical discontinuity afterwards.
22. Paul Zachary (PZ) Myers, "Ray Kurzweil does not understand the brain," posted August 17, 2010, http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does_not_understa.php.
23. Ibid.
Here religion and technology converge. As one author puts it, the singularity is the rapture
for nerds.
Hardheaded naturalists desperately yearn for the end of our world where our bodies have
to die, and eagerly await the dawning of a new world in which our bodies will be transformed into
information and so we will achieve the promise of eternal life.
As one computer scientist puts it:
[C]omputer scientists are human, and are as terrified by the human condition as anyone else.
We, the technical elite, seek some way of thinking that gives us an answer to death … 24
As an existential philosopher, I suggest that we may well have to give up this last desperate
attempt to achieve technological immortality and, instead, face up to our finitude.25
24. Jaron Lanier, Op-Ed Contributor, "The First Church of Robotics," The New York Times, August 9, 2010. Jaron Lanier, a partner architect at Microsoft Research and an innovator in residence at the Annenberg School of the University of Southern California, is the author, most recently, of You Are Not a Gadget.
25. The best we can do is live an embodied life full of meaning, intensity, and openness to the sacred. In a book I've just written with Sean Kelly, All Things Shining, we describe how such a life looked at the time of Homer and how in Moby Dick Melville suggests we can retrieve a current version of it.