Will machines outsmart man?
Scientists believe the point of 'Singularity' – where artificial intelligence surpasses that of
humans – is closer than we thought
Wendy M Grossman - guardian.co.uk, Wednesday 5 November 2008
http://www.guardian.co.uk/technology/2008/nov/06/artificialintelligenceai-engineering
They are looking for the hockey stick. Hockey sticks are the shape technology startups hope their
sales graphs will assume: a modestly ascending blade, followed by a sudden turn to a near-vertical long handle. Those who assembled in San Jose in late October for the Singularity
Summit are awaiting the point where machine intelligence surpasses that of humans and takes off
near-vertically into recursive self-improvement.
The key, said Ray Kurzweil, inventor of the first reading machine and author of 2005's The
Singularity Is Near, is exponential growth in computational power - "the law of accelerating
returns". In his favourite example, at the human genome project's initial speed, sequencing the
genome should have taken thousands of years, not the 15 scheduled. Seven years in, the genome
was 1% sequenced. Exponential acceleration had the project finished on schedule. By analogy,
enough doublings in processing power will close today's vast gap between machine and human
intelligence.
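As a rough sketch of that arithmetic (assuming, purely for illustration, that cumulative sequencing output doubled about once a year from the 1% mark), in Python:

fraction_done = 0.01   # assumed: 1% of the genome sequenced at year 7
year = 7
while fraction_done < 1.0:
    fraction_done *= 2              # one more annual doubling of cumulative output
    year += 1
    print(f"year {year}: ~{min(fraction_done, 1.0):.0%} sequenced")
# year 8: ~2%, year 9: ~4%, ... year 13: ~64%, year 14: ~100% - inside the 15-year schedule.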
This may be true. Or it may be an unfalsifiable matter of faith, which is why the singularity is
sometimes satirically called "the Rapture for nerds". It makes assessing progress difficult. Justin
Rattner, chief technology officer of Intel, addressed a key issue at the summit: can Moore's law,
which has the number of transistors packed on to a chip doubling every 18 months, stay in line
with Kurzweil's graphs? The end has been predicted many times but, said Rattner, although
particular chip technologies have reached their limits, a new paradigm has always continued the
pace.
"In some sense - silicon gate CMOS - Moore's law ended last year," Rattner said. "One of the
founding laws of accelerating returns ended. But there are a lot of smart people at Intel and they
were able to reinvent the CMOS transistor using new materials." Intel is now looking beyond
2020 at photonics and quantum effects such as spin. "The arc of Moore's law brings the
singularity ever closer."
Judgment day
Belief in an approaching singularity is not solely American. Peter Cochrane, the former head of
BT's research labs, says for machines to outsmart humans it "depends on almost one factor alone
- the number of networked sensors. Intelligence is more to do with sensory ability than memory
and computing power." The internet, he adds, overtook the capacity of a single human brain in
2006. "I reckon we're looking at the 2020 timeframe for a significant machine intelligence to
emerge." And, he said: "By 2030 it really should be game over."
Predictions like this flew at the summit. Imagine when a human-scale brain costs $1 - you could
have a pocket full of them. The web will wake up, like Gaia. Nova Spivack, founder of
EarthWeb and, more recently, Radar Networks (creator of Twine.com), quoted Freeman Dyson:
"God is what mind becomes when it has passed beyond the scale of our comprehension."
Listening, you'd never guess that artificial intelligence has been about 20 years away for a long
time now. John McCarthy, one of AI's fathers, thought, when he convened the first conference on
the subject in 1956, that they'd be able to wrap the whole thing up in six months. McCarthy calls
the singularity, bluntly, "nonsense".
Even so, there are many current technologies, such as speech recognition, machine translation,
and IBM's human-beating chess grandmaster Deep Blue, that would have seemed like AI at the
beginning. "It's incredible how intelligent a human being in front of a connected computer is,"
observed the CNBC reporter Bob Pisani, marvelling at how clever Google makes him sound to
viewers phoning in. Such advances are reminders that there may be valuable discoveries that
make attempts at even the wildest ideas worthwhile.
Dharmendra Modha, head of the cognitive computing group at IBM's Almaden research lab, is
leading a "quest" to "understand and build a brain as cheaply and quickly as possible". Last year,
his group succeeded in simulating a rat-scale cortical model - 55m neurons, 442bn synapses - in
the 8TB memory of a 32,768-processor IBM Blue Gene supercomputer. The key, he says, is not the
neurons but the synapses, the electrical-chemical-electrical connections between those neurons.
Biological microcircuits are essentially the same in all mammals. "An individual human
being is stored in the strength of the synapses."
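For scale, the memory budget implied by those figures; the per-synapse and per-neuron ratios below are simple inferences from the quoted numbers, not published specifications:

neurons = 55e6        # 55 million neurons, as quoted
synapses = 442e9      # 442 billion synapses, as quoted
memory_bytes = 8e12   # 8 TB of memory (decimal terabytes, for simplicity)

print(f"~{memory_bytes / synapses:.0f} bytes of memory per synapse")  # ~18 bytes
print(f"~{synapses / neurons:,.0f} synapses per neuron")              # ~8,000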
Smarter than smart
Modha doesn't suggest that the team has made a rat brain. "Philosophically," he writes on the
subject, "any simulation is always an approximation (a kind of 'cartoon') based on certain
assumptions. A biophysically realistic simulation is not the focus of our work." His team is using
the simulation to try to understand the brain's high-level computational principles.
But computational power is nothing without software. "Would the neural code that powers
human reasoning run on a different substrate?" the sceptical science writer John Horgan asked
Kurzweil, who replied: "The key to the singularity is amplifying intelligence. The prediction is
that an entity that passes the Turing test and has emotional intelligence ... will convince us that
it's conscious. But that's not a philosophical demonstration."
For intelligence to be effective, it has to be able to change the physical world. The MIT physicist
Neil Gershenfeld was therefore at the summit to talk about programmable matter. It's a neat trick:
computer science talks in ones and zeros, but these are abstractions representing the flow or
interruption of electric current, a physical phenomenon. Gershenfeld, noting that maintaining that
abstraction requires increasing amounts of power and complex programming, wants to turn this
on its head. What if, he asked, you could buy computing cells by the pound, coat them on a
surface, and run programs that assemble them like proteins to solve problems?
Gershenfeld is always difficult for non-physicists to understand, and his video of cells sorting
was no exception. Two things he said were clear. First: "We aim to create life." Second: "We
have a 20-year road map to make the Star Trek replicator."
Twenty years: 2028. Vernor Vinge began talking about the singularity in the early 80s (naming it
after the gravitational phenomenon around a black hole), and has always put the date at 2030.
Kurzweil likes 2045; Rattner, before 2050.
Turning back time
These dates may be personally significant. Rattner is 59; Vinge is 64. Kurzweil is 60, takes 250
vitamins and other supplements a day, and believes some of them can turn back ageing. If curing
all human ills will be a piece of cake for a superhuman intelligence, then the singularity carries
with it the promise of immortality - as long as you're still alive when it happens.
It is this connection between the singularity and immortality, along with the idea that
sufficiently advanced technology can solve every problem from climate change to the exhaustion
of oil reserves, that gives the summit the feel of a religious movement. Certainly, James Miller,
assistant professor of economics at Smith College, sounded evangelical when he reviewed how
best to prepare financially. He was optimistic, reviewing investment strategies and assuming
retirement funds won't be needed.
HowStuffWorks founder Marshall Brain, by contrast, explained why 50 million people will lose
their jobs when they can be replaced by robots. "In the whole universe, there is one intelligent
species," he said. "We're in the process of creating the second intelligent species."
The anthropologist Jane Goodall may disagree. She sees a different kind of singularity - the
growing ecological devastation of Africa - and worries about the disconnection between human
minds and hearts. "If we're the most intellectual animal," she said, "why are we destroying our
only home?"
If Goodall's singularity comes first, the other one might never happen at all - one of those
catastrophes that Vinge admits as the only thing he can imagine that could stop it.
Watch the Singularity Summit videos at:
http://www.vimeo.com/siai/videos/sort:oldest
. . . and go to the Summit home page at:
http://www.singularitysummit.com/
The Singularity Institute is pleased to host the Singularity Summit 2009, a rare gathering of
thinkers to explore the rising impact of science and technology on society. The summit has been
organized to further the understanding of a controversial idea – the singularity scenario.
Founded in 2006 by Tyler Emerson, Ray Kurzweil, and Peter Thiel, the inaugural summit was
held at Stanford, the first academic symposium focused on singularity dialogue.
ARTIFICIAL INTELLIGENCE: THE SINGULARITY
For over a hundred thousand years, the evolved human brain has held a privileged place in the
expanse of cognition. Within this century, science may move humanity beyond its boundary of
intelligence. This possibility, the singularity, may be a critical event in history, and deserves
thoughtful consideration.
OVERVIEW: WHAT IS A SINGULARITY?
What is the Singularity? The Singularity is the technological creation of smarter-than-human
intelligence. There are several technologies that are often mentioned as heading in this direction.
The most commonly mentioned is probably Artificial Intelligence, but there are others: direct
brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high-resolution
scans of the brain followed by computer emulation. Some of these technologies seem
likely to arrive much earlier than the others, but there are nonetheless several independent
technologies all heading in the direction of the Singularity – several different technologies
which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.
A future that contains smarter-than-human minds is genuinely different in a way that goes
beyond the usual visions of a future filled with bigger and better gadgets. Vernor Vinge
originally coined the term "Singularity" in observing that, just as our model of physics breaks
down when it tries to model the singularity at the center of a black hole, our model of the world
breaks down when it tries to model a future that contains entities smarter than human.
Human intelligence is the foundation of human technology; all technology is ultimately the
product of intelligence. If technology can turn around and enhance intelligence, this closes the
loop, creating a positive feedback effect. Smarter minds will be more effective at building still
smarter minds. This loop appears most clearly in the example of an Artificial Intelligence
improving its own source code, but it would also arise, albeit initially on a slower timescale,
from humans with direct brain-computer interfaces creating the next generation of brain-computer interfaces, or biologically augmented humans working on an Artificial Intelligence
project.
Some of the stronger Singularity technologies, such as Artificial Intelligence and brain-computer
interfaces, offer the possibility of faster intelligence as well as smarter intelligence. Ultimately,
speeding up intelligence is probably comparatively unimportant next to creating better
intelligence; nonetheless the potential differences in speed are worth mentioning because they
are so huge. Human neurons operate by sending electrochemical signals that propagate at a top
speed of 150 meters per second along the fastest neurons. By comparison, the speed of light is
300,000,000 meters per second, two million times greater. Similarly, most human neurons can
spike a maximum of 200 times per second; even this may overstate the information-processing
capability of neurons, since most modern theories of neural information-processing call for
information to be carried by the frequency of the spike train rather than individual signals. By
comparison, speeds in modern computer chips are currently at around 2GHz – a ten millionfold
difference – and still increasing exponentially. At the very least it should be physically possible
to achieve a million-to-one speedup in thinking, at which rate a subjective year would pass in 31
physical seconds. At this rate the entire subjective timespan from Socrates in ancient Greece to
modern-day humanity would pass in under twenty-two hours.
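The ratios in that paragraph can be checked directly from the quoted figures; the following sketch simply spells out the arithmetic:

signal_speed_mps = 150.0    # fastest neural signalling, as quoted
light_speed_mps = 3e8
firing_rate_hz = 200.0      # peak neural spike rate, as quoted
chip_clock_hz = 2e9         # modern chip clock, as quoted

print(light_speed_mps / signal_speed_mps)   # 2,000,000: light is two million times faster
print(chip_clock_hz / firing_rate_hz)       # 10,000,000: a ten-millionfold clock-rate gap

# A million-to-one subjective speedup compresses a year into about half a minute,
# and ~2,500 years of history (Socrates to today) into under a day:
seconds_per_year = 365.25 * 24 * 3600
print(seconds_per_year / 1e6)                 # ~31.6 physical seconds per subjective year
print(2500 * seconds_per_year / 1e6 / 3600)   # ~21.9 hours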
Humans also face an upper limit on the size of their brains. The current estimate is that the
typical human brain contains something like a hundred billion neurons and a hundred trillion
synapses. That's an enormous amount of sheer brute computational force by comparison with
today's computers – although if we had to write programs that ran on 200Hz CPUs we'd also
need massive parallelism to do anything in realtime. However, in the computing industry,
benchmarks increase exponentially, typically with a doubling time of one to two years. The
original Moore's Law says that the number of transistors in a given area of silicon doubles every
eighteen months; today there is Moore's Law for chip speeds, Moore's Law for computer
memory, Moore's Law for disk storage per dollar, Moore's Law for Internet connectivity, and a
dozen other variants.
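As a purely illustrative sketch of what an eighteen-month doubling time implies (the other benchmarks mentioned have their own, differing doubling periods):

doubling_time_years = 1.5   # the original Moore's Law figure quoted above
for years in (5, 10, 15, 20):
    growth = 2 ** (years / doubling_time_years)
    print(f"after {years} years: ~{growth:,.0f}x")
# after 5 years: ~10x, after 10 years: ~102x, after 15 years: ~1,024x, after 20 years: ~10,321x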
By contrast, the entire five-million-year evolution of modern humans from primates involved a
threefold increase in brain capacity and a sixfold increase in prefrontal cortex. We currently
cannot increase our brainpower beyond this; in fact, we gradually lose neurons as we age. (You
may have heard that humans only use 10% of their brains. Unfortunately, this is a complete
urban legend; not just unsupported, but flatly contradicted by neuroscience.) An Artificial
Intelligence would be different. Some discussions of the Singularity suppose that the critical
moment in history is not when human-equivalent AI first comes into existence but a few years
later when the continued grinding of Moore's Law produces AI minds twice or four times as fast
as human. This ignores the possibility that the first invention of Artificial Intelligence will be
followed by the purchase, rental, or less formal absorption of a substantial proportion of all the
computing power on the then-current Internet – perhaps hundreds or thousands of times as much
computing power as went into the original Artificial Intelligence.
But the real heart of the Singularity is the idea of better intelligence or smarter minds. Humans
are not just bigger chimps; we are better chimps. This is the hardest part of the Singularity to
discuss – it's easy to look at a neuron and a transistor and say that one is slow and one is fast, but
the mind is harder to understand. Sometimes discussion of the Singularity tends to focus on
faster brains or bigger brains because brains are relatively easy to argue about compared to
minds; easier to visualize and easier to describe. This doesn't mean the subject is impossible to
discuss; section III of our "Levels of Organization in General Intelligence" does take a stab at
discussing some specific design improvements on human intelligence, but that involves a
specific theory of intelligence, which we don't have room to go into here.
However, that smarter minds are harder to discuss than faster brains or bigger brains does not
show that smarter minds are harder to build – deeper to ponder, certainly, but not necessarily
more intractable as a problem. It may even be that genuine increases in smartness could be
achieved just by adding more computing power to the existing human brain – although this is not
currently known. What is known is that going from primates to humans did not require
exponential increases in brain size or thousandfold improvements in processing speeds. Relative
to chimps, humans have threefold larger brains, sixfold larger prefrontal areas, and 98.4%
similar DNA; given that the human genome has 3 billion base pairs, this implies that at most
twelve million bytes of extra "software" transforms chimps into humans. And there is no
suggestion in our evolutionary history that evolution found it more and more difficult to
construct smarter and smarter brains; if anything, hominid evolution has appeared to speed up
over time, with shorter intervals between larger developments.
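The "twelve million bytes" figure follows from the quoted numbers plus the standard assumption of two bits of information per base pair:

base_pairs = 3e9        # human genome size, as quoted
similarity = 0.984      # chimp-human DNA similarity, as quoted
bits_per_base = 2       # standard information content per base pair

differing_bases = base_pairs * (1 - similarity)     # ~48 million base pairs
extra_bytes = differing_bases * bits_per_base / 8   # ~12 million bytes
print(f"{differing_bases:,.0f} differing base pairs -> ~{extra_bytes/1e6:.0f} million bytes")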
But leave aside for the moment the question of how to build smarter minds, and ask what
"smarter-than-human" really means. And as the basic definition of the Singularity points out, this
is exactly the point at which our ability to extrapolate breaks down. We don't know because
we're not that smart. We're trying to guess what it is to be a better-than-human guesser. Could a
gathering of apes have predicted the rise of human intelligence, or understood it if it were
explained? For that matter, could the 15th century have predicted the 20th century, let alone the
21st? Nothing has changed in the human brain since the 15th century; if the people of the 15th
century could not predict five centuries ahead across constant minds, what makes us think we
can outguess genuinely smarter-than-human intelligence?
Because we have a past history of people making failed predictions one century ahead, we've
learned, culturally, to distrust such predictions – we know that ordinary human progress, given a
century in which to work, creates a gap which human predictions cannot cross. We haven't
learned this lesson with respect to genuine improvements in intelligence because the last genuine
improvement to intelligence was a hundred thousand years ago. But the rise of modern humanity
created a gap enormously larger than the gap between the 15th and 20th century. That
improvement in intelligence created the entire milieu of human progress, including all the
progress between the 15th and 20th century. It is a gap so large that on the other side we find, not
failed predictions, but no predictions at all.
Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence
are all interrelated. If you're smarter, that makes it easier to figure out how to build fast brains or
improve your own mind. In turn, being able to reshape your own mind isn't just a way of starting
up a slope of recursive self-improvement; having full access to your own source code is, in itself,
a kind of smartness that humans don't have. Self-improvement is far harder than optimizing
code; nonetheless, a mind with the ability to rewrite its own source code can potentially make
itself faster as well. And faster brains also relate to smarter minds; speeding up a whole mind
doesn't make it smarter, but adding more processing power to the cognitive processes underlying
intelligence is a different matter.
But despite the interrelation, the key moment is the rise of smarter-than-human intelligence,
rather than recursively self-improving or faster-than-human intelligence, because it's this that
makes the future genuinely unlike the past. That doesn't take minds a million times faster than
human, or improvement after improvement piled up along a steep curve of recursive self-enhancement.
One mind significantly beyond the humanly possible level would represent a
Singularity. That we are not likely to be dealing with "only one" improvement does not make the
impact of one improvement any less.
Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence,
and the result is an event so huge that there are no metaphors left. There's nothing remaining to
compare it to.
The Singularity is beyond huge, but it can begin with something small. If one smarter-than-human
intelligence exists, that mind will find it easier to create still smarter minds. In this respect
the dynamic of the Singularity resembles other cases where small causes can have large effects;
toppling the first domino in a chain, starting an avalanche with a pebble, perturbing an upright
object balanced on its tip. (Human technological civilization occupies a metastable state in which
the Singularity is an attractor; once the system starts to flip over to the new state, the flip
accelerates.) All it takes is one technology – Artificial Intelligence, brain-computer interfaces, or
perhaps something unforeseen – that advances to the point of creating smarter-than-human
minds. That one technological advance is the equivalent of the first self-replicating chemical that
gave rise to life on Earth.
For more information, continue with "Why Work Toward the Singularity?".
http://www.singinst.org/overview/whatisthesingularity
WHY WORK TOWARD A SINGULARITY?
If you traveled backward in time to witness a critical moment in the invention of science, or the
creation of writing, or the evolution of Homo sapiens, or the beginning of life on Earth, no
human judgment could possibly encompass all the future consequences of that event – and yet
there would be the feeling of being present at the dawn of something worthwhile. The most
critical moments of history are not the closed stories, like the start and finish of wars, or the rise
and fall of governments. The story of intelligent life on Earth is made up of beginnings.
Imagine traveling back in time to witness a critical moment in the dawn of human intelligence.
Suppose that you find an alien bystander already on the scene, who asks: "Why are you so
excited? What does it matter?" The question seems almost impossible to answer; it demands a
thousand answers, or none. Someone who valued truth and knowledge might answer that this
was a critical moment in the human quest to learn about the universe – in fact, the beginning of
that quest. Someone who valued happiness might answer that the rise of human intelligence was
a necessary precursor to vaccines, air conditioning, and the many other sources of happiness and
solutions to unhappiness that have been produced by human intelligence over the ages. There are
people who would answer that intelligence is meaningful in itself; that "It is better to be Socrates
unsatisfied than a fool satisfied; better to be a man unsatisfied than a pig satisfied." A musician
who chose that career believing that music is an end in itself might answer that the rise of human
intelligence mattered because it was necessary to the birth of Bach; a mathematician could single
out Euclid; a physicist might cite Newton or Einstein. Someone with an appreciation of
humanity, beyond the individual humans, might answer that this was a critical moment in the
relation of life to the universe – the beginning of humanity's growth, of our acquisition of
strength and understanding, eventually spreading beyond Earth to the rest of the galaxy and the
universe.
The beginnings of human intelligence, or the invention of writing, probably went unappreciated
by the individuals who were present at the time. But such developments do not always take their
creators unaware. Francis Bacon, one of the critical figures in the invention of the scientific
method, made astounding claims about the power and universality of his new mode of reasoning
and its ability to improve the human condition – claims which, from the perspective of a 21st-century human, turned out to be exactly right. Not all good deeds are unintentional. It does
occasionally happen that humanity's victories are won not by accident but by people making the
right choices for the right reasons.
Why is the Singularity worth doing? The Singularity Institute for Artificial Intelligence can't
possibly speak for everyone who cares about the Singularity. We can't even presume to speak for
the volunteers and donors of the Singularity Institute. But it seems like a good guess that many
supporters of the Singularity have in common a sense of being present at a critical moment in
history; of having the chance to win a victory for humanity by making the right choices for the
right reasons. Like a spectator at the dawn of human intelligence, trying to answer directly why
superintelligence matters chokes on a dozen different simultaneous replies; what matters is the
entire future growing out of that beginning.
But it is still possible to be more specific about what kinds of problems we might expect to be
solved. Some of the specific answers seem almost disrespectful to the potential bound up in
superintelligence; human intelligence is more than an effective way for apes to obtain bananas.
Nonetheless, modern-day agriculture is very effective at producing bananas, and if you had
advanced nanotechnology at your disposal, energy and matter might be plentiful enough that you
could produce a million tons of bananas on a whim. In a sense that's what nanotechnology is –
good-old-fashioned material technology pushed to the limit. This only begs the question of "So
what?", but the Singularity advances on this question as well; if people can become smarter, this
moves humanity forward in ways that transcend the faster and easier production of more and
more bananas. For one thing, we may become smart enough to answer the question "So what?"
In one sense, asking what specific problems will be solved is like asking Benjamin Franklin in
the 1700s to predict electronic circuitry, computers, Artificial Intelligence, and the Singularity on
the basis of his experimentation with electricity. Setting an upper bound on the impact of
superintelligence is impossible; any given upper bound could turn out to have a simple
workaround that we are too young as a civilization, or insufficiently intelligent as a species, to
see in advance. We can try to describe lower bounds; if we can see how to solve a problem using
more or faster technological intelligence of the kind humans use, then at least that problem is
probably solvable for genuinely smarter-than-human intelligence. The problem may not be
solved using the particular method we were thinking of, or the problem may be solved as a
special case of a more general challenge; but we can still point to the problem and say: "This is
part of what's at stake in the Singularity."
If humans ever discover a cure for cancer, that discovery will ultimately be traceable to the rise
of human intelligence, so it is not absurd to ask whether a superintelligence could deliver a
cancer cure in short order. If anything, creating superintelligence only for the sake of curing
cancer would be swatting a fly with a sledgehammer. In that sense it is probably unreasonable to
visualize a significantly smarter-than-human intelligence as wearing a white lab coat and
working at an ordinary medical institute doing the same kind of research we do, only better, in
order to solve cancer specifically as a problem. For example, cancer can be seen as a special case
of the more general problem "The cells in the human body are not externally programmable."
This general problem is very hard from our viewpoint – it requires full-scale nanotechnology to
solve the general case – but if the general problem can be solved it simultaneously solves cancer,
spinal paralysis, regeneration of damaged organs, obesity, many aspects of aging, and so on. Or
perhaps the real problem is that the human body is made out of cells or that the human mind is
implemented atop a specific chunk of vulnerable brain – although calling these problems raises
philosophical issues not discussed here.
Singling out "cancer" as the problem is part of our culture's particular outlook and technological
level. But if cancer or any generalization of "cancer" is solved soon after the rise of smarter-than-human intelligence, then it makes sense to regard the quest for the Singularity as a continuation
by other means of the quest to cure cancer. The same could be said of ending world hunger,
curing Alzheimer's disease, or placing on a voluntary basis many things which at least some
people would regard as undesirable: illness, destructive aging, human stupidity, short lifespans.
Maybe death itself will turn out to be curable, though that would depend on whether the laws of
physics permit true immortality. At the very least, the citizens of a post-Singularity civilization
should have an enormously higher standard of living and enormously longer lifespans than we
see today.
What kind of problems can we reasonably expect to be solved as a side effect of the rise of
superintelligence; how long will it take to solve the problems after the Singularity; and how
much will it cost the beneficiaries? A conservative version of the Singularity would start with the
rise of smarter-than-human intelligence in the form of enhanced humans with minds or brains
that have been enhanced by purely biological means. This scenario is more "conservative" than a
Singularity which takes place as a result of brain-computer interfaces or Artificial Intelligence,
because all thinking is still taking place on neurons with a characteristic limiting speed of 200
operations per second; progress would still take place at a humanly comprehensible speed. In this
case, the first benefits of the Singularity probably would resemble the benefits of ordinary human
technological thinking, only more so. Any given scientific problem could benefit from having a
few Einsteins or Edisons dumped into it, but it would still require time for research,
manufacturing, commercialization and distribution.
Human genius is not the only factor in human science, but it can and does speed things up where
it is present. Even if intelligence enhancement were treated solely as a means to an end, for
solving some very difficult scientific or technological problem, it would still be worthwhile for
that reason alone. The solution might not be rapid, even after the problem of intelligence
enhancement had been solved, but that assumes the conservative scenario, and the conservative
scenario wouldn't last long. Some of the areas most likely to receive early attention would be
technologies involved in more advanced forms of superintelligence: broadband brain-computer
interfaces or full-fledged Artificial Intelligence. The positive feedback dynamic of the
Singularity – smarter minds creating still smarter minds – doesn't need to wait for an AI that can
rewrite its own source code; it would also apply to enhanced humans creating the next generation
of Singularity technologies.
The Singularity creates speed for two reasons: First, positive feedback – intelligence gaining the
ability to improve intelligence directly. Second, the shift of thinking from human neurons to
more readily expandable and enormously faster substrates. A brain-computer interface would
probably offer a limited but real version of both capabilities; the external brainpower would be
both fast and programmable, although still yoked to an ordinary human brain. A true Artificial
Intelligence, or a human scanned completely into a sufficiently advanced computer, would have
total self-access. At this point one begins to deal with superintelligence as the successor to
current scientific research, the global economy, and in fact the entire human condition; rather
than a superintelligence plugging into the current system as an improved component. At this
point human nature sometimes creates an "Us Vs. Them" view of the situation – the instinct that
people who are different are therefore on a different side – but if humans and superintelligences
are playing on the same team, it would be straightforward for the most advanced mind at any
given time to offer a helping hand to anyone lagging behind; there is no technological reason
why humans alive at the time of the Singularity could not participate in it directly. In our view
this is the chief benefit of the Singularity to existing humans; not technologies handed down
from above but a chance to become smarter and participate directly in creating the future.
One idea that is often discussed along with the Singularity is the proposal that, in human history
up until now, it has taken less and less time for major changes to occur. Life first arose around
three and a half billion years ago; it was only eight hundred and fifty million years ago that multi-celled
life arose; only sixty-five million years since the dinosaurs died out; only five million
years since the hominid family split off within the primate order; and less than a hundred
thousand years since the rise of Homo sapiens sapiens in its modern form. Agriculture was
invented ten thousand years ago; Socrates lived two and a half thousand years ago; the printing
press was invented five hundred years ago; the computer was invented around sixty years ago.
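Laying those quoted intervals side by side makes the compression explicit; a small sketch:

milestones = [
    ("first life",           3.5e9),   # years ago, figures as quoted above
    ("multi-celled life",    8.5e8),
    ("dinosaur extinction",  6.5e7),
    ("hominid split",        5.0e6),
    ("modern Homo sapiens",  1.0e5),
    ("agriculture",          1.0e4),
    ("Socrates",             2.5e3),
    ("printing press",       5.0e2),
    ("computer",             6.0e1),
]
for (name, age), (_, next_age) in zip(milestones, milestones[1:]):
    # each gap is several-fold to ~50x shorter than the one before it
    print(f"{name:20s} {age:9.2e} yrs ago, ~{age / next_age:.0f}x further back than the next milestone")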
You can't set a speed limit on the future by looking at the pace of past changes, even if it sounds
reasonable at the time; history shows that this method produces very poor predictions. From an
evolutionary perspective it is absurd to expect major changes to happen in a handful of centuries,
but today's changes occur on a cultural timescale, which bypasses evolution's speed limits. We
should be wary of confident predictions that transhumanity will still be limited by the need to
seek venture capital from humans or that Artificial Intelligences will be slowed to the rate of
their human assistants (both of which I have heard firmly asserted on more than one occasion).
We can't see in advance the technological pathway the Singularity will follow, since if we were
that smart ourselves we'd already have done it. But it's possible to toss out broad scenarios, such
as "A smarter-than-human AI absorbs all unused computing power on the then-existent Internet
in a matter of hours; uses this computing power and smarter-than-human design ability to crack
the protein folding problem for artificial proteins in a few more hours; emails separate rush
orders to a dozen online peptide synthesis labs, and in two days receives via FedEx a set of
proteins which, mixed together, self-assemble into an acoustically controlled nanodevice which
can build more advanced nanotechnology." This is not a smarter-than-human solution; it is a
human imagining how to throw a magnified, sped-up version of human design abilities at the
problem. There are admittedly initial difficulties facing a superfast mind in a world of slow
human technology. Even humans, though, could probably solve those difficulties, given
hundreds of years to think about it. And we have no way of knowing that a smarter mind can't
find even better ways.
If the Singularity involves not just a few smarter-than-usual researchers plugging into standard
human organizations, but the transition of intelligent life on Earth to a smarter and rapidly
improving civilization with an enormously higher standard of living, then it makes sense to
regard the quest to create smarter minds as a means of directly solving such contemporary
problems as cancer, AIDS, world hunger, poverty, et cetera. And not just the huge visible
problems; the huge silent problems are also important. If modern-day society tends to drain the
life force from its inhabitants, that's a problem. Aging and slowly losing neurons and vitality is a
problem. In some ways the basic nature of our current world just doesn't seem very pleasant, due
to cumulative minor annoyances almost as much as major disasters. This may usually be
considered a philosophical problem, but becoming smarter is something that can actually address
philosophical problems.
The transformation of civilization into a genuinely nice place to live could occur, not in some
unthinkably distant million-year future, but within our own lifetimes. The next leap forward for
civilization will happen not because of the slow accumulation of ordinary human technological
ingenuity over centuries, but because at some point in the next few decades we will gain the
technology to build smarter minds that build still smarter minds. We can create that future and
we can be part of it.
If there's a Singularity effort that has a strong vision of this future and supports projects that
explicitly focus on transhuman technologies such as brain-computer interfaces and self-improving
Artificial Intelligence, then humanity may succeed in making the transition to this
future a few years earlier, saving millions of people who would have otherwise died. Around the
world, the planetary death rate is around fifty-five million people per year (UN statistics) –
150,000 lives per day, 6,000 lives per hour. These deaths are not just premature but perhaps
actually unnecessary. At the very least, the amount of lost lifespan is far more than modern
statistics would suggest.
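The per-day and per-hour figures are straightforward to derive from the quoted annual rate:

deaths_per_year = 55e6   # the UN figure cited above
print(f"~{deaths_per_year / 365:,.0f} per day")          # ~150,685
print(f"~{deaths_per_year / (365 * 24):,.0f} per hour")  # ~6,279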
There are also dangers for the human species if we can't make the breakthrough to
superintelligence reasonably soon. Albert Einstein once said: "The problems that exist in the
world today cannot be solved by the level of thinking that created them." We agree with the
sentiment, although Einstein may not have had this particular solution in mind. In pointing out
that dangers exist it is not our intent to predict a dystopian future; so far, the doomsayers have
repeatedly been proven wrong. Humanity has faced the future squarely, rather than running in
the other direction as the doomsayers wished, and has thereby succeeded in avoiding the oft-predicted disasters and continuing to higher standards of living. We avoided disaster by
inventing technologies which enable us to cope with complex futures. Better, more sustainable
farming technologies have enabled us to support the increased populations produced by modern
medicine. The printing press, telegraph, telephone, and now the Internet enable humanity to
apply its combined wisdom to problem-solving. If we'd been forced to move into the future
without these technologies, disaster probably would have resulted. The technology humanity
needs to cope with the coming decades may be the technology of smarter-than-human
intelligence. If we have to face challenges like basement laboratories creating lethal viruses or
nanotechnological arms races with just our human intelligence, we may be in trouble.
Finally, there is the integrity of the Singularity itself to safeguard. This is not necessarily the
most difficult part of the challenge, compared to the problem of creating smarter-than-human
intelligence in the first place, but it needs to be considered. It is possible that the integrity of the
Singularity needs no safeguarding; that any human from Gandhi to Stalin, if enhanced
sufficiently far beyond human intelligence, would end up being wiser and more moral than
anyone alive today; that the same holds true for all minds-in-general from enhanced chimpanzees
to arbitrarily constructed Artificial Intelligences. But this is not something we know in advance.
Since we don't know how many moral errors persist in our own civilization, safeguarding the
integrity of the Singularity – in our view – consists more of ensuring the will and ability to grow
wiser with increased intelligence than of trying to find perfect candidates for human intelligence
enhancement. An analogous problem exists for Artificial Intelligence, where the task is not
enforcing servitude on the AI or coming up with a perfect moral code to "hardwire", but rather
transferring over the features of human cognition that let us conceive of a morality improving
over time (see the section on Friendly Artificial Intelligence for more information).
Safeguarding the integrity of the Singularity is another reason for facing the challenge of the
Singularity squarely and deliberately. It may be that human intelligence enhancement will turn
out well regardless, but there is still no point in taking unnecessary risks by driving the projects
underground. If human intelligence enhancement is banned by the FDA, for example, this just
means that the first experiments will take place outside the US, slightly later than they otherwise
would have; increasing the possible risks, delaying the possible benefits. If human intelligence
enhancement is banned by the UN this means the experiments will take place offshore, out of the
public eye, and perhaps sponsored by groups that we would prefer not be involved – although
there is a significant chance it would turn out well regardless. In the case of Artificial
Intelligence there are certain specific things that must be done to place the AI in the same moral
"frame of reference" as humanity – to ensure the AI absorbs our virtues, corrects any
inadvertently absorbed faults, and goes on to develop along much the same path as a recursively
self-improving human altruist. Friendly Artificial Intelligence is not necessarily more difficult
than the problem of AI itself, but it does need to be handled along with the creation of Artificial
Intelligence. In both cases, we can best safeguard the integrity of the Singularity by confronting
the Singularity intentionally and with full awareness of the responsibilities involved.
What does it mean to confront the Singularity? Despite the enormity of the Singularity, sparking
the Singularity – creating the first smarter-than-human intelligence – is a problem of science and
technology. The Singularity is something that we can actually go out and do, not a philosophical
way of describing something that inevitably happens to humanity. It takes the sweep of human
progress and a whole technological economy to create the potential for the Singularity, just as it
takes the entire framework of science to create the potential for a cancer cure, but it also takes a
deliberate effort to run the last mile and fulfill that potential. If someone asks you if you're
interested in donating to AIDS research, you might reply that you believe that cancer research is
relatively underfunded and that you are donating there instead; you would probably not say that
by working as a stockbroker you support the world economy in general and thereby contribute as
much to humanity's progress toward an AIDS cure as anyone. In that sense, sparking the
Singularity is no different from any other grand challenge – someone has to do it.
At this moment in time, there is a tiny handful of people who realize what's going on and are
trying to do something about it. It is not quite true that if you don't do it, no one will, but the pool
of other people who will do it if you don't is smaller than you might think. If you're fortunate
enough to be one of the few people who currently know what the Singularity is and would like to
see it happen – even if you learned about the Singularity just now – we need your help because
there aren't many people like you. This is the one place where your efforts can make the greatest
possible difference – not just because of the tremendous stakes, though that would be far more
than enough in itself, but because so few people are currently involved.
The Singularity Institute exists to carry out the mission of the Singularity-aware – to accelerate
the arrival of the Singularity in order to hasten its human benefits; to close the window of
vulnerability that exists while humanity cannot increase its intelligence along with its
technology; and to protect the integrity of the Singularity by ensuring that those projects which
finally implement the Singularity are carried out in full awareness of the implications and
without distraction from the responsibilities involved. That's our dream. Whether it actually
happens depends on whether enough people take the Singularity seriously enough to do
something about it – whether humanity can scrape up the tiny fraction of its resources needed to
face the future deliberately and firmly.
We can do better. The future doesn't have to be the dystopia promised by doomsayers. The future
doesn't even have to be the flashy yet unimaginative chrome-and-computer world of traditional
futurism. We can become smarter. We can step beyond the millennia-old messes created by
human-level intelligence. Humanity can solve its problems – both the huge visible problems
everyone talks about and the huge silent problems we've learned to take for granted. If the nature
of the world we live in bothers you, there is something rational you can do about it. We can do
better with your support.
Don't be a bystander at the Singularity. You can direct your effort at the point of greatest impact
– the beginning.
http://www.singinst.org/overview/whyworktowardthesingularity/
THE AI IMPACT INITIATIVE
Advanced AI has the potential to impact every aspect of human life. We are in a critical window
of opportunity where we have powerful but temporary leverage to influence the outcome. Only a
small group of scientists are currently aware of the central issues, and it is essential to get input
from a broader range of thinkers.
The AI Impact Initiative will foster an interdisciplinary framework for the safe and beneficial
deployment of advanced AI. We will form a multidisciplinary body of experts to bring a broad
perspective to the critical issue of advanced AI's impact on humanity. This effort will involve
researchers with expertise in many different fields, including computer science, security,
cryptography, economics, industrial organization, evolutionary biology, cognitive science,
political theory, decision theory, physics, philosophy, ethics, religious thought, etc.
The AI Impact Initiative will host meetings over a period of 3-5 years to analyze the central
issues and produce strategic recommendations. An inaugural workshop will be held in 2008 or
2009 to bring together researchers from a variety of disciplines and begin the process of unifying
their insights. One of the goals of the Initiative is to produce documents that clearly express the
central issues so that a broader group of participants may usefully contribute. The Initiative's
longer term goal is to lay the foundation for a new science to study these issues. This will
involve creating expository materials, building an international network of scientists and
scholars, organizing workshops, and creating a comprehensive report to provide direction for
future research and development.
Singularity Summit 2007 Podcast: Rodney Brooks
August 2nd, 2007 – Tyler Emerson
Download audio (mp3)
Dr. Rodney Brooks will be keynoting at the Singularity Summit this September 8th and 9th on
“The Singularity: A Period Not An Event”. Tickets for the event can be purchased online.
Rodney Brooks is the Panasonic Professor of Robotics at MIT. He is also CTO of iRobot Corp
(Nasdaq: IRBT). He received degrees in pure mathematics from the Flinders University of South
Australia and the Ph.D. in Computer Science from Stanford University in 1981. He held research
positions at Carnegie Mellon University and MIT, and a faculty position at Stanford before
joining the faculty of MIT in 1984. He has published papers and books in model-based computer
vision, path planning, uncertainty analysis, robot assembly, active vision, autonomous robots,
micro-robots, micro-actuators, planetary exploration, representation, artificial life, humanoid
robots, and compiler design. Dr. Brooks is a Member of the National Academy of Engineering, a
Founding Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a
Fellow of the American Association for the Advancement of Science (AAAS), a Fellow of the
Association for Computing Machinery (ACM), a Corresponding Member of the Australian
Academy of Science, and a Foreign Fellow of the Australian Academy of Technological
Sciences and Engineering.
SIAI: Why We Exist and Our Short-Term Research
Program
July 31st, 2007 – Tyler Emerson
By Dr. Ben Goertzel and Tyler Emerson
Why SIAI Exists
As the 21st century progresses, an increasing number of forward-thinking scientists and
technologists are coming to the conclusion that this will be the century of AI: the century when
human inventions exceed human beings in general intelligence. When exactly this will happen,
no one knows for sure; Ray Kurzweil, for example, has estimated 2029.
Of course, where the future is concerned, nothing is certain except surprise; but the mere fact that
so many knowledgeable people (such as Stephen Hawking, Douglas Hofstadter, Bill Joy, and
Martin Rees) take the near advent of advanced AI as a plausible possibility, should serve as a
“wake-up call” to anyone seriously concerned about the future of humanity.
Singularity Summit 2007 Abstracts - Third Set
July 31st, 2007 – Tyler Emerson
The third set of abstracts for the Singularity Summit, this September 8th and 9th at San
Francisco’s beautiful Palace of Fine Arts. Tickets are only $50, and can be purchased here.
Innovative Applications of Early Stage AI
Neil Jacobstein, Teknowledge and Institute for Molecular Manufacturing
Early stage artificial intelligence has already produced a wide range of valuable but narrowly
focused knowledge systems applications in industry and government. Many of these applications
have performed complex tasks such as planning, monitoring, design, risk assessment, diagnosis,
training, process control, classification, and analysis. For example, AAAI’s Innovative
Applications of AI Conference has published hundreds of successful applications of AI. The
applications are in fields as diverse as biotechnology, space flight, manufacturing, security,
paleontology, construction, energy, music, military, intelligence, banking, telecommunications,
news media, management, law, emergency services, agriculture, treaty verification, and many
other areas. This talk will review the distribution of these applications across tasks and domains,
and discuss the patterns that connect these applications: what worked, what didn’t, and what are
the key trends. None of these systems exhibited general intelligence, but each documented our
ability to codify and distribute human problem solving knowledge, and put it to work. The
answer to the question of how far we are from advanced AI depends on the operational
definition of “advanced”. It is clear from the knowledge systems produced thus far that even
relatively straightforward applications can be valuable. The larger endeavor to produce AI
systems that learn and reason at human levels and beyond is promising, and will require both
enlightened research sponsorship and appropriate safeguards.
The Nature of Self-Improving Artificial Intelligence
Stephen M. Omohundro, Self-Aware Systems
Can we predict the behavior of systems that modify themselves? Can we design them to embody
our values even after many generations of self-improvement? This talk will present a framework
for answering questions like these. It shows that self-improving systems converge on a specific
cognitive architecture that arose out of von Neumann’s foundational work on microeconomics.
In these systems there is a universal principle which governs the organization of all levels of
physical and computational resources. They exhibit four natural drives: 1) efficiency, 2) self-preservation, 3) resource acquisition, and 4) creativity. Unbridled, these lead to both desirable
and undesirable behaviors.
The efficiency drive leads to algorithm optimization, data compression, atomically precise
physical structures, reversible computation, adiabatic physical action, the virtualization of the
physical, and governs a system’s choice of memories, theorems, language, and logic. The self-preservation drive leads to defensive strategies such as “energy encryption” for hiding resources
and promotes replication and game theoretic modelling. The resource acquisition drive leads to a
variety of competitive behaviors and promotes rapid physical expansion and imperialism. The
creativity drive leads to the development of new concepts, algorithms, theorems, devices, and
processes.
The best of these traits could usher in a new era of peace and prosperity; the worst are
characteristic of human psychopaths and could bring widespread destruction. How can we ensure
that this technology acts in alignment with our highest values? We have leverage both in
designing the systems’ initial values and in creating the social context within which they operate.
But we must have great clarity in imagining the future we want to create. We need not just a
logical understanding of the technology but a deep introspection into what we cherish most. With
both logic and inspiration we can work toward building a technology that empowers the human
spirit rather than diminishing it.
Preparing for Bizarreness: Open Source Physical Security
Christine L. Peterson, Foresight Nanotech Institute
Attempting to take action now to get ready for a world with strong AI is a highly daunting task.
Nevertheless it is worth considering our options, especially any that are useful in the nearer term
for other reasons. We can ask: In a world of powerful entities, how can individuals be protected?
The open source software experience inspires us to look for ways to transfer the advantages of
that process to the physical world. Open source has been particularly speedy at correction of
security vulnerabilities – precisely the kind of vulnerabilities we will need to guard against in a
world of highly powerful entities of various kinds. We can begin now to extend the principles of
open source into the physical world: we can start to make physical security “bottom-up”,
decentralized, collaborative, and transparent.
Vernor Vinge Podcast
July 31st, 2007 – Tyler Emerson
Download audio (mp3)
Cameron Reilly’s latest podcast interview is with Vernor Vinge, one of the best science fiction
authors in recent decades. His novels include True Names, A Fire Upon the Deep, A Deepness in
the Sky, and Rainbow’s End. He popularized the term “technological singularity” in his 1993
essay “The Coming Technological Singularity”.
Do You Care About Hypothetical Persons?
July 30th, 2007 – Michael Anissimov
At the Transvision conference, I had a conversation with a respected transhumanist on the issue
of existential risks and humanity’s future. He told me that he did not see existential risk as a big
deal because it threatens hypothetical persons in the future, but only because it threatens
the currently living population. This is the first time anyone has told me directly that they
use a discount rate of infinity when considering as-yet-to-be-born persons.
When environmentalists tell us to fight against global warming, and economists warn us about
the insolvency of Social Security, an often used motivator is to tell us to think about the world
we are handing off to our children. This may refer to one’s own children, but can be abstracted to
‘descendants’ - which includes other people’s children and the sum total continuation of the human
and eventually posthuman race.
What is confusing is that this motivator seems to work a lot on some people and not at all on
certain others. Despite the majority assigning some level of concern to hypothetical persons, at
least their immediate children, grandchildren, and even great-grandchildren, a significant
minority assigns them nothing. Having no crystal ball of where moral philosophy and consensus
will go in the future, it is very difficult for us to consider from our current vantage point whether
or not this tendency will be viewed in retrospect as an irrational bit of evolutionary baggage, or
genuine moral wisdom.
If we do care about hypothetical persons, and want to care more, might we eventually be able to
reprogram our minds to magnify this beneficial aspect of ourselves? Will we begin to care
more about the teraperson hypothetical collective in the Whirlpool Galaxy, 25 million years from
today, than our next-door neighbor we’ll see when we walk out the door in ten minutes? This
certainly seems to be what Nick Bostrom is suggesting in his Astronomical Waste paper:
“With very advanced technology, a very large population of people living happy lives could be
sustained in the accessible region of the universe. For every year that development of such
technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a
potential good, lives worth living, is not being realized. Given some plausible assumptions, this
cost is extremely large. However, the lesson for utilitarians is not that we ought to maximize the
pace of technological development, but rather that we ought to maximize its safety, i.e. the
probability that colonization will eventually occur.”
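A toy calculation can make the shape of Bostrom's argument visible. Every number below is an arbitrary assumption of mine, chosen only to show why, once the total value at stake is astronomically large, even a tiny gain in the probability of a good outcome can outweigh a large gain in speed.

    # All figures are arbitrary assumptions chosen for illustration, not estimates from the paper.
    TOTAL_VALUE = 1e35               # assumed value of a fully realized future (arbitrary units)
    LOSS_PER_YEAR_OF_DELAY = 1e-11   # assumed fraction of that value forgone per year of delay

    gain_from_one_year_speedup = TOTAL_VALUE * LOSS_PER_YEAR_OF_DELAY   # 1e24
    gain_from_tiny_safety_boost = TOTAL_VALUE * 1e-6                    # a 0.0001% higher chance of success: 1e29

    # Even a minuscule improvement in the odds of success outweighs a year of extra speed.
    print(gain_from_tiny_safety_boost / gain_from_one_year_speedup)     # 100000.0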
Despite Bostrom’s persuasive paper, there are people who simply don’t care, because they only
value lives as currently lived. As far as I can tell, there is nothing morally perverse about these
people; they simply have a different angle on the moral issue. Is one “right” and the other
“wrong”? Maybe, but ironically, it would require a hypothetical future person to give the answer
with confidence.
SIAI Interview Series: Ben Goertzel, Singularity Institute
July 30th, 2007 – Tyler Emerson
Dr. Ben Goertzel is SIAI’s Director of Research. In this interview, he explains the Singularity
Institute’s mission and research objectives. You can download the audio version here.
Singularity Summit 2007 Abstracts - Second Set
July 25th, 2007 – Tyler Emerson
Here is the second set of abstracts for the Singularity Summit, taking place this September 8th
and 9th at San Francisco’s beautiful Palace of Fine Arts. Tickets are only $50 and can be
purchased here.
Metaverse Singularity
Jamais Cascio, Center for Responsible Nanotechnology
There are numerous scenarios for how the Singularity might transpire, but implicit in most is the
notion that the technologies that trigger the Singularity themselves emerge from earlier
generations of systems and tools. One particularly rich potential progenitor is the spectrum of
technologies encompassed by the term “Metaverse.” Building upon my work in the recently
published Metaverse Roadmap Overview, I trace how each of the four Metaverse scenarios –
Augmented Reality, Lifelogging, Virtual Worlds and Mirror World – leads to very different types
of Singularities. I look at the ways in which these different Singularity models might interact,
and the implications each has for the likelihood of friendly and unfriendly AI.
Nine Years to a Positive Singularity – If We Really, Really Try
Dr. Ben Goertzel, Singularity Institute for Artificial Intelligence and Novamente
Common wisdom holds that powerful artificial general intelligence is decades to centuries off.
Even techno-futurist Ray Kurzweil projects a date of 2029 for human-level AI via human brain
emulation. My contention, however, is that powerful and beneficial AGI could come much
sooner – if sufficient attention and resources are devoted to the right approaches. My favored
approach involves integrating probabilistic and evolutionary learning, artificial economics, and
other cutting-edge computer science techniques in a cognitive architecture informed by cognitive
science and systems theory; and then embedding this architecture in virtual agents that interact
with humans and each other in online virtual worlds. Among other advantages, I argue that this
sort of AGI architecture is intrinsically better suited for stably ethical behavior than architectures
modeled more closely on the human brain, due to the presence of a coherent and logical goal system.
Current prototype work will be discussed, aimed at actualizing this approach via the release of
intelligent agents controlled by the Novamente AI Engine in Second Life and other virtual
worlds. Of course, it is difficult to place any kind of reliable estimate on the course of
development of this kind of technology, given the R&D that remains to be done, and the
uncertainties regarding funding and other practical exigencies. But radical success within less
than a decade does not seem an outrageously unlikely possibility, in the view of this AGI
researcher and entrepreneur.
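The phrase "coherent and logical goal system" can be illustrated with a deliberately tiny sketch. Nothing below reflects the actual Novamente design; it is my own hypothetical example of behavior selection driven by explicit, inspectable goals rather than opaque drives, with every name and number invented for the purpose.

    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        weight: float                       # relative importance of this goal

    def choose_action(goals, candidate_actions, predicted_effect):
        # Pick the action whose predicted effects best satisfy the weighted goals.
        # predicted_effect(action, goal) returns an estimated degree of goal
        # satisfaction in [0, 1]; in a real system it would come from a learned world model.
        def utility(action):
            return sum(g.weight * predicted_effect(action, g) for g in goals)
        return max(candidate_actions, key=utility)

    # Toy usage with hand-coded predictions for a virtual-world agent (all names hypothetical).
    goals = [Goal("assist_user", 0.7), Goal("learn_new_skill", 0.3)]
    effects = {("greet", "assist_user"): 0.6, ("greet", "learn_new_skill"): 0.1,
               ("explore", "assist_user"): 0.1, ("explore", "learn_new_skill"): 0.8}
    print(choose_action(goals, ["greet", "explore"], lambda a, g: effects[(a, g.name)]))  # greet

Because the goals and their weights sit in an explicit data structure rather than being implicit in learned drives, the system's priorities can be read, audited, and kept consistent, which is the property the abstract claims favors stably ethical behavior.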
Increased Intelligence, Improved Life
Peter Voss, Adaptive AI
Artificial General Intelligence (AGI) promises unprecedented advances not only in science and
technology, but also in ethics and social systems. However, business – and thus consumers – will
be the first to experience some of the enormous benefits of this emerging technology. This talk
will explore some of these improvements and try to make a case for how increased intelligence
leads to improved morality.
Valuing AIs
July 22nd, 2007 – Seth Baum
How much is an AI worth?
First, let’s distinguish between “instrumental” value and “intrinsic” value. Instrumental value is
value to something else (usefulness); intrinsic value is inherent value that requires nothing else
for it to be worth something. For example, we could say that clothing has instrumental value
because it keeps us warm, and our feeling of warmth has intrinsic value because it is worth
something even in the absence of anything else. Or, we could say that books have instrumental
value because they bring us knowledge, and our having knowledge has intrinsic value. (For more,
see value theory on Wikipedia.)
Singularity Summit 2007 Abstracts - First Set
July 21st, 2007 – Tyler Emerson
Abstracts from Singularity Summit speakers are starting to arrive. Here are the first three.
The Singularity: A Period Not An Event
Rodney Brooks, MIT Computer Science and AI Laboratory
Whatever writes future history will look back at what we are calling the singularity not as a
single event but as a period of time. The singularity period will encompass a time where a
collection of technologies were invented, developed, and deployed in fits and starts, driven not
by the imperative of the singularity itself, but by the normal economic and sociological pressures
of human affairs. A Hollywood treatment of the singularity would have a world just like today’s,
plus the singularity, as a singular event. In reality the world will be changing continuously due to
rapid growth in technologies that are both related and unrelated to the singularity itself. The
future will be embedded in a different world than the one we inhabit. And the AI systems we
create will not have the same desires, beliefs, and goals as today-us. Tomorrow-us will be much
better equipped for the changes that will take place in our world. This talk will explore how
things might unfold and how we will transform ourselves along the way.
Waiting for the Great Leap…Forward?
Dr. James Hughes, Institute for Ethics and Emerging Technologies
Sentient, self-willed, greater-than-human machine minds are very likely in the next fifty years.
But to ensure that they don’t threaten the welfare of the rest of the minds on the planet a number
of steps need to be taken. First, given their radically different architecture and origins,
developing software capacity for recognizing and relating to, perhaps having empathy for,
human sentience should be a design goal, even if machine minds are likely to evolve beyond
human perspectives and emotional traits. Second, building on the global networks established to
identify and respond to computer viruses, governments and cyber-security firms need to develop
detectors for and counter-measures for self-willed machine intelligence that may emerge, evolve,
or be accidentally or maliciously released. Those detectors and counter-measures may or may not
involve machine minds as well. Third, human beings should aggressively pursue cognitive
enhancement and cyber-augmentation in order to give themselves a competitive chance against
machine minds, economically and in the event of conflict. Fourth, since machine intelligence,
self-willed or zombie, is likely to displace the need for most human occupations by the middle of
the century, industrialized countries will need to renegotiate the relationship between education,
work, income, and retirement, extracting a general social wage from robotic productivity to lift
all boats, not just those of the shrinking group of workers and owners of capital. Finally, in order
to ensure that we do not re-capitulate slavery, we will need to be much clearer about what kinds
of minds, organic and machine, have what kinds of responsibilities and are owed which kinds of
rights. Machine minds with a capacity to understand and obey the obligations of a democratic
polity should be granted the rights to own property, vote and so on. Minds wishing to exercise
capacities as dangerous as weapons or motor vehicles should be licensed to do so, while even
more dangerous capacities (AI equivalents of bombs) will need to be restricted to control by, or
be integrated into the functioning of, accountable democratic governance.
The Road to Singularity: Comedic Complexity, Technological Thresholds, and Bioethical
Broad Jumps
Wendell Wallach, Yale Interdisciplinary Center for Bioethics
The prospect of implementing higher order cognitive faculties in AI presumes that theories about
the computational nature of mind are valid, that known technological issues can be solved, that
there are no major surprise technological thresholds that will need to be crossed, and that
computer scientists and public officials will find ways to navigate a broad array of ethical
challenges. While some of these concerns have received considerable attention, others are just
beginning to be noted. The ethical challenges, in particular, have not been well addressed. From
robots carrying weapons, to moral decision making faculties for AI, to institutional review
boards for robotic research, to political resistance to some categories of AI research, the
bioethical challenges, if not addressed, could undermine funding and public support
for advanced AI systems. Progress in developing moral decision making faculties for computers
is one area that engineers and designers can begin to tackle, and which will have a significant
impact. The successful development of artificial moral agents (AMAs) is a major step that will
help ameliorate other societal concerns regarding the development of advanced AI. The
pathways for implementing moral decision making faculties in AI include top-down, bottom-up,
and hybrid approaches. In addition, AMAs may require supra-rational faculties, such as social
skills, emotions, consciousness, and a theory of mind.
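As a rough illustration of the top-down, bottom-up, and hybrid pathways the abstract mentions, the sketch below is my own toy example rather than anything proposed by Wallach; the rule set, the stand-in "learned" scores, and the threshold are all invented.

    # Hypothetical rule set and scores; nothing here is from the talk itself.
    FORBIDDEN = {"deceive_user", "cause_injury"}

    def top_down_permits(action):
        # Top-down: check the action against explicitly programmed rules.
        return action not in FORBIDDEN

    def bottom_up_score(action, learned_model):
        # Bottom-up: score the action with a model trained on examples of approved and
        # disapproved behavior (a plain lookup stands in for the learned evaluator here).
        return learned_model.get(action, 0.5)

    def hybrid_choose(actions, learned_model, threshold=0.6):
        # Hybrid: hard rules filter first, learned judgment ranks what remains.
        allowed = [a for a in actions if top_down_permits(a)]
        acceptable = [a for a in allowed if bottom_up_score(a, learned_model) >= threshold]
        return max(acceptable, key=lambda a: bottom_up_score(a, learned_model), default=None)

    model = {"warn_user": 0.9, "stay_silent": 0.4, "deceive_user": 0.8}
    print(hybrid_choose(["warn_user", "stay_silent", "deceive_user"], model))  # warn_user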
Skepticism about Powerful AI and SIAI’s Mission
July 11th, 2007 – Tyler Emerson
Some skepticism about powerful AI and thus SIAI’s mission is reasonable, but it is important to
have a sense of proportion. Even if you are skeptical about powerful AI to the point where you
only give a 5% chance for its occurrence, it is important to balance your forecast with the
expected utility residing within that 5% chance to realize an astronomically large social good.
There are strong preexisting arguments in support of an astronomical amount of human value
resulting from realizing powerful AI safely. You need to have wide confidence intervals in both
directions, given the current uncertainty in this new area of knowledge; you cannot widen them
only in the direction that favors your particular forecast. That 5% forecast also needs to
account for the poor track record humans have as forecasters. Anyone who thinks their technical
assessment of the issues is sound enough to justify a 5% (or 90%) forecast is either someone we
should talk with, or someone showing sizable ignorance.
However, this misses the point: even if it were possible to know that a 5% assessment were
sound, the amount of social good that could be achieved by realizing powerful AI argues that the
small amount of effort presently marshaled in this direction is a rare form of human irrationality.
Unfortunately, very few people think like this. There are some remarkable people who already
do, though, and I hope there are more soon. A few include Peter Thiel, Aubrey de Grey, and
Barney Pell. I recommend exploring carefully how they think about these issues.
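The expected-utility point can be spelled out with a toy comparison. The probability is the skeptic's own 5%; the payoff and cost figures are assumptions of mine for illustration, not estimates anyone has endorsed.

    p_powerful_ai = 0.05                 # the skeptic's own forecast
    value_if_realized_safely = 1e15      # assumed social value of a safe outcome (arbitrary units)
    cost_of_serious_effort = 1e9         # assumed cost of a serious safety effort (arbitrary units)

    expected_benefit = p_powerful_ai * value_if_realized_safely   # 5e13
    # Under these assumptions the expected benefit exceeds the cost by a factor of 50,000.
    print(expected_benefit / cost_of_serious_effort)              # 50000.0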
2006 CONFERENCE ARCHIVES:
The Singularity Summit at Stanford, the first academic symposium focused on the singularity
scenario, brought together 1,300 people and 10 speakers, including Ray Kurzweil, Dr. Douglas R.
Hofstadter, and Dr. Sebastian Thrun, to explore the future of human and machine cognition.
Press coverage included a feature article in the San Francisco Chronicle.
Smarter than thou?
Stanford conference ponders a brave new world with
machines more powerful than their creators
Tom Abate, Chronicle Staff Writer
Friday, May 12, 2006
Is technology poised to develop machines that can outsmart their human creators?
And what will happen to mere mortals if such superintelligent machines arise?
These will be among the questions pondered when experts in artificial intelligence, brain
research and other futuristic fields gather at Stanford University on Saturday for what is being
called the Singularity Summit.
[TECH TALK: Chronicle innovation reporter Tom Abate interviews author/inventor Ray
Kurzweil on whether technology is poised to make machines that can outsmart their human
creators.]
Borrowing a term from physics, singularity suggests a horizon beyond which we can't see. It
describes the point at which some form of intelligence spawned by technology gains the ability
to rapidly improve its own programming -- becoming so powerful that we cannot predict what it
might do. At that point, its capabilities could exceed even the power of our imaginations.
"This could be very, very good if we get it right, and very, very bad if we get it wrong,'' said
Eliezer Yudkowsky, a research fellow with the Singularity Institute for Artificial Intelligence, a
nonprofit group in Palo Alto that is co-sponsoring the event.
The speakers' lineup will include inventor and author Ray Kurzweil, whose recent book, "The
Singularity Is Near," argues that a fusion of machine and biological intelligence is not only
imminent but beneficial.
"It's not going to be an invasion of intelligent machines coming over the horizon,'' Kurzweil said
recently. "We're going to merge with this technology. ... We're going to put these intelligent
devices inside our bodies and our brains to make us live longer and healthier."
More-skeptical speakers will include Douglas Hofstadter, a cognitive scientist at Indiana
University who is probably best known for his Pulitzer Prize-winning book, "Gödel, Escher,
Bach."
"I don't think it's inconceivable that some kind of singularity entity could eventually have
superior intelligence to humans, but I'd be very surprised if anything remotely like this happened
in the next 100 to 200 years,'' Hofstadter said, adding that if and when superintelligent machines
arise, the question will be, "whether we become animals in the zoo, or go extinct or just coexist
(with it) like ants.''
Organizers say more than 2,000 people have already signed up to hear these heady topics
discussed. They suggest that anyone who is not already a confirmed registrant for the free event
arrive early Saturday to wait in line for cancellations at Stanford's Memorial Auditorium. Other
speakers will include:
-- Max More, chairman of the Extropy Institute, a nonprofit group that espouses lengthening the
human lifespan and making other "improvements" to human physiology and character through
technology;
-- Nick Bostrom, director of the Future of Humanity Institute at Oxford University, which also
advocates "human enhancements" while simultaneously pondering the risks that a global
catastrophe -- whether self-inflicted through thermonuclear war or naturally occurring as in an
asteroid strike -- could wipe out the human species;
-- Environmentalist Bill McKibben, author of "Enough: Staying Human in an Engineered Age,"
who is expected to argue against such technological improvements.
In a way, the daylong summit is shaping up as the Bay Area coming-out party for the tech-inspired
philosophy called transhumanism. In a nutshell, transhumanism holds that genetics,
nanotechnology and robotics are converging, creating the potential for "human enhancements."
Saturday's event is supported by the Stanford Transhumanist Association, a student group that
embraces the view that humankind is poised to take evolution into its own hands.
"We should view ourselves as a species in transition," said Michael Jin, a 20-year-old sophomore
and founding member of the group. Jin and fellow sophomore Yonah Berwaldt have been
pleased at how quickly the event drew a close-to-capacity crowd based largely on word of mouth
and blog posts.
"There's a large audience for this sort of thing on the Internet,'' said Jin, who says it is possible
and desirable to engineer away psychological flaws such as selfishness. "This is a troubling
aspect of human nature and something we could actually fix,'' he said.
Although little known outside technological circles, transhumanism inspires intense opposition
from ethical watchdog groups that dispute the notion that such technological tweaking would
represent progress.
"As soon as you take issue, you're quickly labeled a Luddite,'' said Jennifer Lahl, national
director of the Center for Bioethics and Culture Network in Oakland. "But transhumanism begs
the question: What needs to be improved upon, who gets to decide and where does it end?"
Richard Hayes, executive director of Oakland's Center for Genetics and Society, likened modern
transhumanists to the early 20th-century futurists who were fellow travelers with the fascist
movements of that era.
"The transhumanists are fundamentally elitists," Hayes said. "Once they start enhancing
themselves toward post-human status they will have little concern with the rest of humanity."
Yudkowsky, the artificial intelligence expert with the Singularity Institute, said those fears stem
from Hollywood images such as the part-human, part-machine Borg of the "Star Trek" series
whose collective consciousness is akin to a form of telepathy.
"How do they (scriptwriters) know that telepaths aren't nice people?'' he said.
Nevertheless, the Singularity Institute sees itself as a watchdog, working inside this movement to
ensure that if and when smarter-than-human machines arise, they would behave like benign
genies to help solve such problems as global warming.
"Humanity seems to have two choices in the long run: superintelligence or extinction,"
Singularity Institute Executive Director Tyler Emerson said.
Todd Davies, a lecturer at Stanford and associate director of the university's cross-disciplinary
Symbolic Systems Program, said he knew the Singularity Summit would be controversial and
tried to ensure some diversity of views on the agenda.
"I'm not at all convinced the singularity is near or that it will be a good thing,'' Davies said,
adding that, "having the summit is a way to get these ideas on the table.
http://www.sfgate.com/cgibin/article.cgi?file=/c/a/2006/05/12/BUG9IIMG1V197.DTL&type=printable
2006 Audio and Video presentations:
http://www.singinst.org/media/