THINKING MACHINES
By William Thomas
For someone who can remember his first digital watch and his first calculator, eventually
followed by a parade of “portable” personal computers boasting miniature displays and
onboard storage measured in kilobytes—the overnight microchipping of modernity seems
like magic. When I notice it at all.
A cyborg named Prof. Kevin Warwick has even poked an arm into the ultimate human-machine interface by having that appendage surgically implanted with wireless arrays
communicating directly with computers at the University of Reading’s Dept. of
Cybernetics. “I was born human,” Warwick admits. But that’s been remedied. Now this
Brit is “augmented”. [WIRED Feb/00]
So is much of civilization. Even if the chips we constantly interact with remain outside
our skin, the machines so many of us serve with so much maintenance and expensive
upgrades have virtually taken over every technology-aspiring society on Earth.
Face it. Entire “wired” nations have become so seamlessly integrated with networked microchips that if a terrorist-triggered electromagnetic pulse or computer contagion ever fried them, our lives would be abruptly simplified in ways we might not be able to handle.
But the fate of our inventive species could also rest on an even bigger conundrum: Can an
array of silicon chips ever be called “intelligent”?
JUST FRIENDS
Every day and long into every night, hundreds of millions of people spend more time
with their personal computers than with other humans—including their kids and spouses.
As our co-dependence with these fabulous, tyrannical, time-sucking machines deepens,
only the inability of computers to recognize human emotions and respond appropriately
inhibits our bonding with devices that often seem so helpful and compliant we no longer
notice their pull.
What happens when computers become smart enough to carry on conversations with us?
After a few exchanges, will they become as bemused as humans playing with their pets?
Or will AI machines, wirelessly connected to a global brain beyond our ken, simply stop
wasting precious processing time on entities as slow and backward as humans?
A third option is already seeing nearly sentient, flesh-shredding weapon systems detonating tens of thousands of families in neighborhoods like Ramadi and Fallujah.
If a rogue nation currently devoting more talent and treasure
to mass murder than the next 10 biggest countries combined
continues its present trajectory, even more sophisticated
robots than the drones that blew up dozens of unidentified
civilian vehicles in Afghanistan, or the Aegis naval system
that downed a passenger jet, or the Patriot missiles that killed
British pilots over Iraq—will go on mindlessly killing humans
without guilt or regret or any feelings at all…until this
poisoned planet ends up inhabited only by self-replicating
machines.
Does this disturb you? It better! We cannot talk about war and violence—or the risks and
costs of the computers on our desks—without exploring the burgeoning shadow side of
AI.
GOING EXPONENTIAL
It’s all happening very fast. Named for Gordon Moore of Intel, who in 1965 predicted the exponential curve in chip complexity, Moore’s Law has seen the number of transistors on a processor doubling roughly every two years, with clock speeds climbing alongside. Over the past two decades, those speeds have jumped from under 5 MHz to more than 1,000 MHz (a megahertz being a million clock cycles per second). The original IBM PC, introduced in 1981, contained an Intel 8088 processor running at 4.77 MHz. The 1998 Pentium II hit 333 MHz. Today’s home computers have high-jumped the GHz boundary at a billion cycles a second.
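As a quick sanity check, the doubling period implied by those two endpoints (4.77 MHz in 1981, roughly 1,000 MHz twenty years on) can be computed directly; a minimal Python sketch:

```python
import math

# Doubling period implied by two clock-speed data points:
# the IBM PC's 4.77 MHz in 1981 and roughly 1,000 MHz twenty years later.
def doubling_time_months(f0_mhz, f1_mhz, years):
    """Months per doubling, given start/end speeds over a span of years."""
    doublings = math.log2(f1_mhz / f0_mhz)
    return years * 12 / doublings

print(round(doubling_time_months(4.77, 1000.0, 20), 1), "months per doubling")
```

That works out to a doubling in clock speed roughly every two and a half years; transistor counts, the quantity Moore actually tracked, doubled faster.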
Sometime soon, unless our collapsing space colony distracts the societies doing most of
the damage, we could find ourselves speaking to “someone” on the phone without being
able to tell if we are interacting with another human being, or a machine that would take
umbrage at being called a “machine”.
TURING HERE
When a talking computer finally passes this “Turing
Test” by passing for human, we will have to welcome
an alien civilization into our midst.
Alan Turing formalized the AI quest in 1950 with an
article on “Computing Machinery and Intelligence”
for the journal Mind. His opening sentence was a
show-stopper: “I propose to consider the question
‘Can machines think?’”
So far, no computer has faked out a human long
enough to pass the Turing Test. But that hasn’t
stopped us from becoming addicted to screens
incessantly alight with brain-popping information,
beckoning us ever deeper at the speed of each new
link.
Asking your computer for a backrub, on the other hand, quickly exposes its limitations.
IF YOU’RE SO SMART
Often misused, the term “Artificial Intelligence” (or “AI”) was coined in 1955 by John McCarthy, later a longtime Stanford professor, to define software systems that behave and make decisions using processes similar to those of humans and other creatures like ants. AI researchers Joanna
Bryson and Jeremy Wyatt flat-out declare, “Artificial Intelligence builds machines
capable of intelligent behavior.”
But just what is intelligent behavior? Are irrational, emotions-driven humans the best
yardstick? On the other hand, how can faking a few attributes of human intelligence
make some inert microprocessors “intelligent”?
“For all of the great things that computers do, there is one thing that they can’t do very well: Think,” needles Nick Loadholtes. (My PC just typed, Ouch.)
“If we don’t even know exactly how the brain works, how can we replicate the behavior or working of the brain?” Nitin Mendiratta argues. “If a machine can’t really think through or around a situation, instead of simply acting out a sequence of programmed steps, we are limited to a single rote perspective. And this is not thinking.”
Roger Penrose wonders about less tangible aspects of the mind and mentation that arise spontaneously from electro-chemical processes, and transcend those sparking synapses. This celebrated mathematical physicist and AI skeptic does not believe that true intelligence can be built into machines that by definition cannot have consciousness.
But what is consciousness? Where exactly is the threshold separating a sophisticated
electronic abacus from a self-aware machine that “knows” it’s a computer and relates
everything it learns to itself? Will a computer ever pray to a cyber god?
BLIND AS ROCKS AND NOT BIG TALKERS
Probably not out loud.
“Computers can do calculus, but they can’t learn to talk as well as a two-year-old child,”
observes Attila Narin. AI modeler Robert Kosara amplifies: “Artificial intelligence has
given us everything from PARRY the paranoid chatbot, to Japanese fuzzy logic rice
cookers. But it hasn’t yet produced a computer that can carry on a conversation.”
[narin.com; kosara.com]
If you would enjoy becoming excruciatingly rich by solving one of AI’s biggest
bottlenecks, just write a computer program that can understand and speak human
languages as well as we do. So far, all attempts have reportedly foundered on the vast
amount of programming required to include all contextual clues for the computer to sort
through—while trying to unscramble typically mangled human accents and syntax.
While you’re at it, why not program a computer to “see” and recognize images as well as a human? That’s another hard one for computers, who (and I mean that) don’t perceive the world like we do.
Unlike their hominid inventors, who developed superb pattern-recognition brain wiring
over several million years pursuing prey that could eat them, computers are dismal at
recognizing faces in a crowd. Just ask the innocent shoppers being yanked off big city
streets when AI surveillance systems tag them as “terrorists”.
[http://foldoc.doc.ic.ac.uk/foldoc/foldoc.cgi?artificial+intelligence; CNN.com]
The past is also tense for computers. “Sometimes people remark with a reference to an
event or happening in the past. How will a machine decipher that?” asks Nitin
Mendiratta.
As for intelligent machines, forget it, say many threatened humans. “There is no way
that a computer can ever exceed the state of simply being a data processor, dealing with
zeros and ones, doing exactly what it is told to do,” Narin insists. “Electronic machinery
will probably gain even more importance in the future, but it will never reach the point
where a machine has a life or could be called intelligent.”
I COMPUTE, THEREFORE I AM
When it comes to sifting data at breakneck speed,
making complex calculations, or crosschecking
every option before making another lightning-fast
chess move—today’s computers compute much
faster than we do. Searching for solutions, step-by-step procedures called algorithms look at millions of possibilities before instantly “deciding” on the answer that best fits a programmed set of simple rules stipulating: “If this, then that.”
If the answer doesn’t work, the algorithm corrects
itself before trying again. This is spooky and smart.
But is it “thinking”?
Nick Loadholtes lays out this hope for AI: “Since
computers are very good at understanding yes-or-no
questions (also known as true-or-false statements), by studying the decision-making
processes of people, a set of simple rules has been developed that are easy to
communicate to a computer.”
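That “guess, check, correct” loop can be sketched in a few lines; a toy number-guessing example (purely illustrative, not any real AI system):

```python
# Toy "guess, check, correct" loop in the spirit described above: the
# program proposes an answer, tests it against a simple yes/no rule,
# and corrects itself before trying again.
def guess_number(secret, low=0, high=100):
    """Binary search: each failed guess narrows the range before retrying."""
    guesses = 0
    while True:
        guess = (low + high) // 2
        guesses += 1
        if guess == secret:          # if this... then done
            return guess, guesses
        if guess < secret:           # answer didn't work: correct upward
            low = guess + 1
        else:                        # ...or downward, then try again
            high = guess - 1

print(guess_number(73))
```

Each wrong answer halves the remaining possibilities, which is why even a hundred candidates are exhausted in a handful of tries.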
Ironically, the quest for truly autonomous AI could founder from this half-century-long
fixation on replicating mushy biological brains with slick silicon wafers. Though slower
than a microprocessor, a brain cell also accumulates inputs that eventually trip an output.
It’s the mind-blowing way in which up to 100 billion neurons, each connected in simultaneous parallel operations to tens of thousands of other neurons throughout the brain and body, work together that cannot be replicated. Even young human brains can creatively compare and draw inferences from facts and abstractions that leave the most sagacious supercomputers far behind.
THE SHAMAN’S CAVE
After decades spent discovering and distilling the mechanics of human thinking into
formats computers can “understand”, AI remains as elusive as a passing thought. Even
the best artificial minds are too dumb to grab an umbrella before venturing out to fetch a
simulated newspaper in a downpour. The challenge of heuristics—defined by Narin as
“rules of thumb, educated guesses, intuitive judgments or simply common sense”—is yet
another seemingly insurmountable hurdle for machine intelligence, as well as some
machine-elected presidents.
There’s another catch. Even computers stuffed with AI algorithms that “learn” and
correct themselves in neural networks wired like a human brain “may not be regarded as
thinking, because these hardware components don’t actually
have any idea of what is happening,” protests Mendiratta.
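A single artificial “neuron” of the sort those networks wire together takes only a few lines; a minimal perceptron sketch that learns a toy AND gate (the learning rate, epoch count and task are illustrative assumptions, not any particular research system):

```python
# Minimal perceptron: a lone artificial "neuron" that accumulates weighted
# inputs, fires past a threshold, and nudges its weights after each mistake.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            fired = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - fired              # self-correction signal
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Toy task: learn the logical AND of two inputs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND])
```

The weights settle into a rule that fires only on (1, 1), yet, as Mendiratta would note, at no point does the neuron have “any idea of what is happening.”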
Ever since the geared tumblers of Babbage’s hand-cranked
“Difference Engine” displayed the first machine-math solutions
back in 19th century London, AI apologists have claimed that
their attempts to model human thought mirror back to us how
our own brains work. This sounds helpful. Until we consider
how this pervasive paradigm has restricted most discussions of
“mind”—whether animal, human or cosmic—to simplistic
mechanistic comparisons with machines that are basically
arrays of on/off switches.
Can whatever happens in our body and brain when we see a
picture or hear a reference to “Bush”, Marilyn Monroe, or
depleted uranium, be reduced to stick-chart diagrams?
What if the human brain is more than a very fancy switchboard? What if it’s a shaman’s
cave?
HAPPY FIGHTS
“Is it possible for a nonhuman to differentiate between disappointment and sadness? Can emotions be defined? How can we explain an emotional condition or state?” asks Nitin Mendiratta in his brilliant essay on “Emotional Machines”.
And what happens when the same action triggers different emotional responses in
different individuals?
“Happiness during a fight is not possible. However, an ironic, humorous, sarcastic remark is possible,” Mendiratta mentions. How can a computer tell the difference?
BODY LANGUAGE
What about bodies? All of our thinking is subordinate to our primary need to stay alive,
Robert Kosara maintains. Even our emotions manifest in physical symptoms like the
production of hormones, shivering, or becoming “turned on”—something a computer
might easily equate with a light switch.
Kosara sees AI’s biggest flaw in its total reliance on the mechanistic mimicry of thought,
without cognition’s complete information input from bodily sensations and memories.
“It is therefore shortsighted to try to build artificial minds not only without any body, but
also without even the concept of a body,” insists this AI vision specialist.
“How should an artificial mind ever be able to understand tiredness, excitement,
happiness or fear without ever having felt it? And by feeling, I mean the physical
symptoms, and the intellectual processes that accompany the fear of injury or death, for
example. A body-less mind can never understand that, and thus will never be able to
understand humans, let alone act like one.”
COMPUTERS CAN’T CRY
Emotions are an even bigger block for machines than for humans. In face-to-face human conversation, by some estimates more than 90% of communication takes place through facial expressions and body language. The problem for AI programs perplexed by infinitely inventive
human responses is that different people think, react, behave and speak differently when
stressed, distracted, elated, hungry, horny or exhausted. If you happen to be a machine
incapable of conjuring such emotional states, what then?
Modeling emotional states can be challenging, Narin admits. “Comparisons are the keys that open the emotional doors in our mind,” he suggests. Software based on simplistic
models of “action-reaction” can emulate emotions with algorithms that interpret and
modify each other.
Will such computational fakery ever amount to anything
more than a parlor trick? Is it hubris to seek so stubbornly
to mechanically replicate the specialness of humans (“spirit-persons”), who share much of their DNA with jellyfish, yet are so remarkably different?
Can human emotions be reduced to strings of 1’s and 0’s?
A computer programmed to say, “I am sad today” mocks
the feelings of a father whose entire being is breaking
apart as he cradles his dead daughter, killed by another
American “smart bomb”.
THE COLORS OF EMOTIONS
Attila Narin has a brainwave: “Perhaps they can be understood like colors. All colors are
made out of Red, Blue and Green. Perhaps with more than just three basic colors, we can
generate a hierarchical tree for mapping human emotions and predicting behavior.”
After years spent studying how AI-generated “ants” reproduce and compete on computer
monitors, Paul Almond urges a tack away from this dead end. Now this AI researcher
suggests that software imitating the morphing of the A-C-T-G symbols in our own
genetic code could speed the “evolution” of similarly programmed artificial systems.
If Artificial Intelligence ever arises from the primordial muck of computer code, our
definitions of “life” and “intelligence” will be radically redefined. Until then, how will
humans be able to tell if any alien machines our robot space probes happen to encounter
are actual “thinking” entities?
We’d better give this some thought. Or risk getting left in the stardust when our AI
emissaries hold the first conversations with extraterrestrial machine intelligence.
COMPUTERS CAN THINK, SORT OF
Humans and computer algorithms share the same four-step “thinking” process:
1. Gather information.
2. See if the information helps answer a question.
3. Perform an action based on that information.
4. Modify that action based on the feedback of results.
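The steps above read like a control loop, and can be sketched as one; a toy thermostat (entirely hypothetical, for illustration):

```python
# The steps above as a toy thermostat loop: gather a temperature reading,
# ask whether it answers "too cold?", act, and let the next reading
# feed back into the next decision.
def run_thermostat(readings, target=20.0):
    """Return the heater on/off decision after each reading."""
    decisions = []
    for reading in readings:          # 1. gather information
        too_cold = reading < target   # 2. does it answer the question?
        decisions.append(too_cold)    # 3. act on the information
        # 4. the following reading reflects the action's feedback
    return decisions

print(run_thermostat([18.0, 19.5, 20.5, 21.0]))
```

Whether running this loop counts as “thinking” is, of course, exactly the question at issue.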
“Computers and machinery think, learn and even correct themselves from their own mistakes, just like human beings,” attests Attila Narin. “Today advanced computer systems are capable of successfully passing this intelligence test.” [http://www.cs.bath.ac.uk/~jjb/web/whatisai.html]
Another test of intelligence is that if something acts intelligent, it is. Taking in
information, making either-or decisions, and learning from successes and failures “can be
regarded as thinking,” Narin postulates. “Yes, computers can do all this! We see numerous demonstrations of this ability in software applications and Internet portals.”
AI PICASSOS?
Borrowing characteristics from human intelligence and applying them as algorithms in a
transparent, user-friendly way can certainly result in intelligent behavior. But are such
rote responses truly “intelligent”?
Mere number crunching is not thinking. Abstract thinking is. “Can machines be creative?
How intelligent are human beings? Or is it all a headache without a head?” ponders
Mendiratta.
The human intellect is, above all, creative in the connections it makes and the conclusions
it reaches. Sure, an AI once defeated Grandmaster Garry Kasparov at chess, says Carol Stein. “But will they ever be able to match the genius of Einstein? Can they create
paintings like Picasso?”
They’re trying! At least 164,000 websites currently offer “computer art”.
When a computer generates poetry or stories, draws a picture, or composes music unassisted, it randomly chooses words, pictures or musical elements out of a programmed “vocabulary”. Then it “matches specific syntactic patterns by choosing the correct type of word or tone for a certain position in a sentence, poem or piece of music,” notes Narin in his charming English.
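The slot-filling procedure Narin describes fits in a dozen lines of Python; a toy “poet” with an assumed vocabulary (seeded so the output is repeatable):

```python
import random

# Toy pattern-filling "poet": choose random words from a programmed
# vocabulary and slot each into a fixed syntactic position.
VOCAB = {
    "adj":  ["silent", "electric", "hollow", "burning"],
    "noun": ["machine", "river", "mind", "winter"],
    "verb": ["dreams", "waits", "hums", "remembers"],
}
TEMPLATE = "the {adj} {noun} {verb}"

def compose_line(rng):
    """Fill each slot in the template with a word of the matching type."""
    return TEMPLATE.format(**{slot: rng.choice(words)
                              for slot, words in VOCAB.items()})

print(compose_line(random.Random(42)))
```

Real systems add grammar rules and far larger vocabularies, but the principle, random choice constrained by syntactic slots, is the same.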
Is this “art”?
LEAVE IT TO EXPERT SYSTEMS
Touted as the next best thing to human expertise earned over years of hard-won
experiences, “expert systems” are computer programs derived from specialists like
doctors, lawyers, pilots and petroleum prospectors to assist in tasks as radical as brain
surgery and ensuring runaway global warming—simply by activating a menu.
But life is not a simulation. Humans holding out for the sanctity of all organic lifeforms
insist there can be no silicon shortcuts to full human apprenticeship involving the total
experience of that for which mastery is sought.
OOPS!
Don’t forget Mr. Murphy, who lurks inside all computers waiting for his next chance to
crash the system. Teamed up with ever-dependable human denial and boneheaded
stupidity, the ever-present “Oops!” factor separates sentimental AI fantasies from an
atomic war triggered by a flock of birds.
What then? Say someone boils down the subject of brain surgery into a computer
program directing a laser-wielding robot in a real operating theater. Who or what do you
sue when something goes wrong and Uncle Fred is turned into a Volkswagen?
Will the offending computer be fined or incarcerated?
Or terminally unplugged?
WHO’S FLYING THE JET?
Besides being able to physically replicate itself through nanotechnology and obtain the
periodic recharging required to run its systems, any fully functional Artificial Intelligence
must be able to flawlessly separate crucial signals pouring in at near light-speed from the
extraneous “noise” in which they are immersed.
Some folks call this trait intelligence.
Onboard an Airbus approaching a runway threshold in “AutoLand” configuration, with
zero visibility, airspeed 180 knots, and no human hand on the controls—hundreds of
unsuspecting passengers depend on this particular expert system getting it right. Focusing
on data selected by its software, the jumbo jet’s redundant cockpit computers ask a series
of cross-checking questions thousands of times each second in order to achieve their
programmed goal of safely setting a 200-ton machine down on the paved patch of planet
already looming large in the windscreen.
WATCH YOUR LANGUAGE
A less extreme test of machine learning looks at how language is used in conversation. If
a machine can generate a conversation accurately enough to fool a human, “it must be
deemed intelligent,” Kosara claims.
Since when does human gullibility make a machine smart?
Look at the word “intelligence”, Narin suggests. It derives from the Latin “intellegere” (to understand, perceive, recognize and realize), whose root “legere” means to select, choose and gather.
“Intelligence is the ability to establish abstract links between details that do not
necessarily have any obvious relationship,” defines Narin. “The essential part of
intelligence, as the Latin word suggests, is the ability to look beyond the simple facts and
givens, to understand their connections and dependencies and thus to be able to derive
new abstract ideas.”
Whatever it is, the human intellect is not just used to solve problems, Narin reminds us.
“Intelligence is used to coordinate and master a life. It is reflected in our behavior and it
motivates us to achieve our aims, which are mainly devised by our intelligence as well.”
Try telling that to your laptop.
BOTS
Right now and at least until next month, our closest encounters with AI will be the
“chatbots” springing up online like quirky cartoon characters. A bot is a customized
computer-animated figure capable of speech and primitive facial expressions. Chatbots
allow people to interact online while remaining anonymous behind their iconic
masquerades.
The trick is to make bots instantly recognizable by web surfers catching the same wave
from Moscow to Katmandu. Ai.com’s chatbot is named Botson. “Introduce yourself to
him, if you haven’t already,” the site suggests.
Him?
I CHAT, THEREFORE I AM
According to Internet surveys, chatting is the primary reason most folks go online. “If
this is any indication of people’s openness to talking and sharing emotions through and
with machines, then we are seeing the tip of the emotional machine,” Kosara confirms.
Chatbots, he adds, “are small stepping stones towards building anything close to an
emotional machine.”
So far, bots can only understand simple emotive expressions like “Hi!” and “How are
you?” Just like English 101. Nevertheless, “self-learning” products like MS Office
Assistant, Web Monkey and Japanese Pet Robots are taking such AI toddlers mainstream.
This may be subarashii (“wonderful”) in Shinjuku, where human buyers can snap up the latest electronic
gadgets. But back in the real world, the rest of us are spending too much of our allotted
time trying to correct billing errors issued by corporate computers that hide behind AI
speech menus, which are soon reduced to baffled silence by our shouted expletives.
Try chilling by chatting with Alice. Or design your own bot at alice.pandorabots.com.
Choosing among 20 icons offered on this site, you can customize age, gender and
hairstyle, as well as skin, hair and eye colors—plus the “make-up, clothing and
accessories” for your personal representative on the worldwide web. Your bot can speak
in your own voice, or in the voice of anyone ever recorded.
Lip-synch is automatic.
TRANSCENDENCE OR TERMINATION?
It may already be too late for humans to keep up with our silicon usurpers. After taking
shortcuts not found in their software instructions, advanced computers are reaching
“conclusions” not anticipated by their mystified programmers.
Think about it: multiplying hordes of computers designing their own upgrades without
human intervention or even comprehension. Now visualize billions of microchips
independently interconnecting like networked neurons in a global brain that is not the
Internet you and I access, but a budding artificial intelligence from which human
understanding and participation are already excluded.
What is this about?
Are constantly yacking, self-programming computers closer to cognition than many
humans believe? Even with Moore’s Law and recent strides in subatomic technology
relentlessly cranking up computational power and surprises, Kosara figures it will be
another 20 years before even the brainiest computers match our own nattering neurons.
By then, the entire Arctic icecap could finish melting, bringing perpetual winter to places
like northern Europe, England and the US east coast, just as cheap oil runs out and
China’s water faucets run dry. Faced by such computer-generated scenarios, endangered
humanity might be tempted to start looking for silicon saviors more rational than us.
But what if the same profit-programmed AI already liquidating the last big fish and
rainforests “decides” that we aren’t worth the nuisance or risk of keeping around? Bill Joy, co-founder and chief scientist at Sun Microsystems, wonders if the future will discard us. Writing in Wired, Joy warned that accelerating advances in robotics, genetic engineering, and nanotechnology could make humans obsolete. [Wired Apr/00]
He ought to know. Joy helped develop the Berkeley version of UNIX, an industry standard operating system for research and education. Called by Fortune magazine “The Edison of the Internet”, he
also co-chaired the Presidential Information Technology Advisory Committee, tasked
with fast-tracking the development and adoption of 21st century technologies, regardless
of their darker implications.
Joy was jazzed by the work of legendary Ray Kurzweil, “a
prolific inventor and entrepreneur in artificial intelligence
technologies including character recognition, music
synthesis, speech recognition, and reading machines for the
blind,” reporter Douglas Dixon notes.
In his most recent book, The Age of Spiritual Machines,
Kurzweil enthuses over human evolution blending with
technological evolution. “We will see 1,000 times more
technological progress in the 21st century than we saw in
the 20th,” he promises—if we survive the ongoing Sixth
Great Extinction Event with a planet worth inhabiting.
[kurzweiltech.com]
MECH WARS?
As lithographically drawn transistors are replaced with individual molecules arranged in
nanoscale circuits, computers won’t have to “worry” about slowing down. As Joy wrote
in Wired, “By 2030, we are likely to be able to build machines, in quantity, a million
times as powerful as the personal computers of today.”
But Joy and others warn that newly emerging and mutating technologies will perform as
safely as space shuttles and nuclear power plants. All it will take, Joy says, is an
inevitable accident or abuse to release a “gray goo” of self-replicating, molecular-level
“assemblers” that spread uncontrollably across the planet, obliterating organic life.
“Given the incredible power of these new technologies, shouldn’t we be asking how we
can best coexist with them?” Joy asks. “If our own extinction is a likely, or even possible,
outcome of our technological development, shouldn’t we proceed with great caution?”
Not likely, pal. From splitting the atom to plucking an ionosphere-threatening HAARP, scientists have never hesitated to press the button on planet-risking technologies in order (they’ve always said) “to see what will happen.”
ROBOTS ‘R’ US
When it comes to preventing final extinction,
computers will hopefully prove smarter than us.
Remember, common sense is not their thing.
Meanwhile, as an ever smaller and smarter artificial intelligentsia infiltrates our lives disguised as toasters and Toyotas, how quickly we cease marveling as
each big “advance” toward silicon’s ultimate
takeover replaces last year’s “breakthrough” with yet
another carcinogenic convenience we cannot possibly
live with—or without.
“When the general public ceases to disbelieve the robot,” Narin notes, “robots will
become a part of us.” But even as tomorrow’s androids fight for their basic rights,
humans could be lining up in droves to be downloaded onto self-powered, self-repairing
nanochips. (Be sure to make backup copies!) If this happens, Homo sapiens will morph
into Homo silicon.
Will being reduced to numbers in a near-immortal machine be worth giving up kisses and
the scent of summer flowers?
DISHING UP SOME BRAINS
The quickest way to sidestep the exasperating contradictions inherent in devising silicon
replicas of the human mind is to design “organic” computers using living brain tissue in
place of mechanical switches.
CNN now reports a “brain-in-a-dish” capable of flying an F-22 jet flight simulator. “It’s essentially a dish with 60 electrodes arranged in a grid at the bottom,” explains Thomas
DeMarse, professor of biomedical engineering at the University of Florida. “Over that we
put the living cortical neurons from rats, which rapidly begin to reconnect themselves,
forming a living neural network—a brain.”
Next, the 25,000 neurons from the disembodied rat brain established two-way
connections with the flight “sim” program, similar to how neurons receive and interpret
signals from each other to control our bodies. Gradually, the brain on a plate learned to
control the three-dimensional path of the plane! [CNN.com Nov 04]
If AI rats start shooting down human pilots unable to think or turn as quickly, it will be
time to put away our mousetraps…and just leave the cheese.