The Leading Edge Forum Presents:
Get Smart: How Intelligent Technology Will Enhance Our World
CSC's Leading Edge Forum (LEF) is a thought leadership program that examines
the technology trends and issues affecting
us today and those that will impact us in the
future. Composed of chief technologists
from across CSC, the LEF explores
emerging technologies through sponsored
innovation and grants programs, applied
research, and alliances with research labs.
It examines technology marketplace
trends, best practices and the innovation
and collaboration among CSC, our clients
and our alliance partners.
In this ongoing series of reports about
technology directions, the LEF looks at the
role of innovation in the marketplace both
now and in the years to come. By studying
technology’s current realities and
anticipating its future shape, these reports
seek to provide organizations with the
necessary balance between tactical
decision making and strategic planning.
ABOUT THE DIRECTORS

William Koff (right)
Executive Director, Leading Edge Forum,
and Vice President and Chief Technology Officer,
CSC Consulting Group
Bill Koff is a chief architect with deep experience in managing technology-based programs
across a variety of applications and industries.
His expertise includes Web-based, distributed
and centralized systems, and object-oriented,
client-server and GUI technologies. He is a
frequent speaker on technology, architecture
and management issues. Bill is very involved in
CSC’s internal research and is an important
resource for technology innovation on CSC
consulting projects. His responsibilities include
advising CSC and its clients on critical information technology trends and guiding strategic
investments in leading-edge technology.
[email protected]
Paul Gustafson (left)
Director, Leading Edge Forum, and Senior Partner,
CSC Consulting Group
Paul Gustafson is an accomplished technologist
and proven leader in emerging technology,
applied research and strategy. Paul was among
the first Consulting Group recipients of CSC’s
Award for Technical Excellence in 1991. He has
also been recognized for his work with Index
Vanguard, a research and advisory program that
explored the business implications of emerging
technologies. Through his research in the early
1990s, Paul predicted the trend for organizations
to use intranets and extranets as a basis for
business communication and operations. As
LEF director, Paul brings vision and leadership
to a portfolio of programs that make up the
LEF and directs the technology research
agenda. He has published numerous papers
and articles on strategic technology issues and
speaks to executive audiences frequently on
these topics. [email protected]
Get Smart: How Intelligent Technology Will Enhance Our World
CONTENTS
Smart Systems: From Vision to Reality
    Today's Talk of Smart
    What is Smart?
System SQs: Five Attributes of Smart
    1. Adapting: Modifying Behavior to Fit the Environment
    2. Sensing: Bringing Awareness to Everyday Things
    3. Inferring: Drawing Conclusions from Rules and Observations
    4. Learning: Using Experience to Improve Performance
    5. Anticipating: Thinking and Reasoning about What to Do Next
Smart New World
    Safety from Continuous Monitoring
    Efficiency from Ubiquitous Smarts
    Convenience from Useful Robots
    Speed from All Things Digital
    Profitability from Business Intelligence
    Well-Being as Homo Superior
Appendix: Handy Web Sites
About the Author
Acknowledgments
Smart Systems:
FROM VISION TO REALITY
Smart systems have always fascinated us. From the Turing Test [1] – is it a computer or a person answering you? – to the omniscient HAL in "2001: A Space Odyssey," there has been a vision of intelligent systems. Since the first practical computers were conceived, computer designers have dreamed of creating intelligent computers that can think like humans.

While we didn't meet the goal of building computers capable of doing everything that HAL did by 2001 – computers still don't have common sense, vision or reliable natural language facilities – we have surpassed the film in many areas such as computer chess, graphics, miniaturization and mobility [2]. More importantly, practical applications of smart technology are emerging that will change our lives, and there is the promise of much more to come.

[1] The Turing test sets up an interrogator in one room and a human and a computer in the other room. The interrogator uses a terminal and keyboard to "chat" with both the human and the computer, but he does not know which is which. The computer deserves to be called intelligent if the interrogator cannot tell the difference between the human and the computer as they respond to him. See also http://cogsci.ucsd.edu/~asaygin/tt/ttest.html

[2] In HAL's Legacy: 2001's Computer as Dream and Reality, David Stork documents the areas in which reality has surpassed the vision in the film. He also notes that the film missed some trends entirely: laptops, personal digital assistants and miniaturization (HAL is as big as a school bus).
Today’s Talk of Smart
Perhaps you have already had a taste of
smarts, like driving through a tollgate
without stopping, getting a speeding
ticket in the mail that was issued automatically, or being rescued in an accident
thanks to your smart car. You may be
intrigued by the latest breed of intelligent
robotic pets or household appliances.
Or maybe you feel intruded upon by
the idea that smart cameras could be
watching your every move.
And then there is the Internet. You may
be impressed (or taken aback) by the
notion of smart profiling, where a Web
site knows your previous purchases and
recommends new things to buy based
on aggregate purchasing patterns. Then
there is the self-organizing Web site,
which knows its most popular content
and automatically filters it to the top so
it is easy to find. Even the notorious
Web search is starting to get smarter and
return the results you are looking for
thanks to the introduction of common
sense software that resolves ambiguities.
THE SMART APPLIANCES ARE COMING
In the 1950s, we were promised household appliances
that would be able to operate autonomously, relieving
people (typically housewives) from tedious jobs. Over
the next 30 years, the concept was further elaborated.
Smart coffee makers would know when we woke up
and brew a delicious pot. Smart washing machines
would detect their contents and adjust their washing
programs. Smart ovens would contain recipes, know
the weight of the roast and automatically set the oven
and cooking time. Smart fridges would know what was
in them and when to replenish out-of-stock items.
Unfortunately, little useful hardware emerged out of
the technodream of smart appliances, and most of
the early attempts look ridiculous today. The perfect
set of smart appliances, relieving us from all tedious
jobs, has definitely not yet arrived.
However, some of the recently launched brands of
“smart” appliances have made great headway. The
focus has clearly shifted from a blind admiration of
technology and gadgetry towards usefulness and
functional improvement.
Self-Adjusting Washing Machines
and Smart Alarm Clocks
For instance, South Korea's LG Electronics Co. has
introduced the Internet Turbo Drum washing machine.
Through a PC, the machine connects to a Web site
and downloads the most appropriate washing
instructions, depending on the user’s preferences
and the machine load. LG Electronics plans to
expand the washing machine with a unit that enables
remote control of the machine over the Internet.
Margherita2000 is a washing machine with a mobile
phone that enables it to communicate with the digital
service center and download new washing software
from the Internet. You can remotely monitor and
control the machine through the Internet or by
sending it SMS (short message service) messages.
The Margherita2000 washer is just one in the Ariston
Digital line of household appliances designed by the
Italian manufacturer Merloni Elettrodomestici. These
smart appliances are networked using Merloni’s
Web-Ready Appliance Protocol (WRAP). They can
communicate with each other and contact users
and the service center.
A particularly interesting appliance in this line is
Leon@rdo, a smart console used to manage the
other Ariston Digital appliances. Leon@rdo can also
surf the Web, send and receive e-mail, order groceries,
keep a calendar and store your electronic notes.
Sunbeam Corp.’s Thalia Products Inc. also has plans
for smart home appliances, including an alarm clock,
coffee maker, electric blanket, smoke and carbon
monoxide alarm and several kitchen appliances
(cooker, steamer, bread machine). The appliances use
Thalia’s Home Linking Technology to connect through
standard electrical wiring in the home to “talk” to
each other. For instance, the alarm clock can signal
the coffee pot to turn on and the electric blanket
to turn off.
While Thalia will produce the initial control devices
for the appliances, it plans, for the most part, to
license its HLT technology to other companies, as it
can be used with virtually any home device that is
powered by electricity or batteries.
Savvy ScreenFridge
Among the new breed of smart household appliances,
perhaps the most appealing is the ScreenFridge, the
smart refrigerator developed by Swedish appliance
manufacturer Electrolux. ScreenFridge is a large
American-style refrigerator equipped with a touch
screen, keyboard, video camera, microphone and
speaker. When you put something in the ScreenFridge,
you can read its barcode with the built-in scanner.
The fridge keeps track of its contents and gives tips
on how to store food correctly. It also contains a
digital cookbook with hundreds of recipes and can
suggest a menu based on its actual contents.
Taking its job as a refrigerator seriously, ScreenFridge
goes beyond food management, serving as the
de facto communications center of the home. The
fridge door traditionally holds notes, messages and
photographs; it is an informal command post for the
family. ScreenFridge embodies the electronic version
of this. Family members can key in messages for each
other or record voice or video messages with the
touch of a button. ScreenFridge can send and
receive e-mail, act as a TV and watch the house.
And, in case you forgot, the ScreenFridge also keeps
your food and drinks cool.
Today, ScreenFridge is a prototype that travels
the world and can be admired at exhibitions and
trade shows. Electrolux has no plans (yet) to bring
it to market – so for now, keep taping those notes
to the fridge, but keep your keyboard and video
skills sharp.
What is Smart?
Generally speaking, if a machine does
something that we think requires an
intelligent person to do, we consider
the machine to be smart.
Smart systems depart from traditional
systems by being oriented towards
problem-solving rather than traditional
process automation. Think of them as
adaptive rather than pre-programmed,
creative rather than computational.
What about Artificial Intelligence? Once an acclaimed topic of technologists' dinner-table discussion, AI is no longer in fashion. Yet AI is more prevalent than many may
think, to the point where it appears to
be following the way of the motor and
the computer by becoming embedded
in everyday things. Case in point: parents
buy My Real Baby for their children
because it is a toy, not because it is an
AI-enabled robot. Indeed, today’s
“Intel Inside” will evolve to “AI Inside”
as everything from toys to coffee pots
to cars gets smart.
There is a school of thought that says as
computing speeds reach and surpass the
processing speed of the human brain,
computers will have the capacity to be
intelligent like humans. But we must
consider both processing power and the
software that brings the processor to
life; of the two, software is the linchpin.
System intelligence is about having
software that is flexible and can learn –
software that can handle nuances and
discern when things don’t make sense,
like writing a check for zero dollars.
Today’s smarts are already having an
impact on our lives. What is really
behind it all, and how can we leverage
the technology underlying the smarts
for strategic advantage?
SILICON BRAIN POWER BY 2040?

[Chart: The Exponential Growth of Computing, 1900-2100. The vertical axis shows the calculations per second that $1,000 of computing buys, from 10^-10 to 10^60; the horizontal axis runs from 1900 to 2100. Reference levels mark the estimated computing capacity of one insect brain, one mouse brain, one human brain and all human brains. Source: The Age of Spiritual Machines, Ray Kurzweil]
We demand more from a smart system.
Like a friend or colleague, we expect it
to help us analyze situations and solve
problems, to provide judgment and
knowledge. Fundamentally, intelligence
centers on the ability to reason, no
matter how simplistic.
To help organizations recognize smart systems and leverage them, CSC has identified five attributes of smart systems, dubbed smart quotients. The SQs are what to look for, or aspire to, in a smart system. This report explores the five SQs, including examples and underlying technologies, and the challenges posed by bringing intelligence to things.

"By the year 2040, in accordance with Moore's law, your state-of-the-art personal computer will be able to simulate a society of 10,000 human brains, each of which would be operating at a speed 10,000 times faster than a human brain."
— Ray Kurzweil, Inventor
System SQs: Five Attributes of Smart
CSC’s smart quotients help organizations
understand the power and purpose of
smart systems. The SQs serve as a guide
to organizations for choosing the best
system to address their problems.
A smart system may exhibit one or a
mix of SQs, which are summarized
on page 7. The SQs overlap in some
ways; for example, a learning system
by definition is also an adapting system.
As smart systems evolve, they will be
limited only by human creativity.
Ultimately, system intelligence will
come from the innovative packaging
of existing and emerging technologies
that underpin the SQs.
The technologies that underpin the
SQs will fundamentally change the way
we live and conduct business. Smart
systems will become our skilled assistants,
adapting to us as needed and – over
time – disappearing into everyday life.
This change will not come overnight, as
the unfulfilled dream of smart reminds
us. However, continuing innovation and
technology advances will help today’s
smart systems overcome their limitations.
“Today, when systems try to be smart they
often show their stupidity,” says Paul
Gustafson, director of CSC’s Leading
Edge Forum. “Over time, when systems
really get smart and act with reason, we
will accept them as peers and won’t even
think of them as ‘smart.’ We will come
to expect ‘smart’ as a matter of course.”
As smart systems pervade our world,
they will boost productivity, efficiency,
personal comfort and convenience.
Personal digital assistants will adjust
to the user’s current environment
(Adapting). Smart materials will alert
us to danger or the need for repair
(Sensing). Smart systems will use deep
domains of knowledge to help manage
complexity (Inferring). Products will
serve us more ably as they learn our
needs over time (Learning). CEOs will
be able to evaluate the consequences
of high-level business decisions by having
smart systems anticipate their impact
and plan the best course of action
(Anticipating).
THE FIVE ATTRIBUTES OF SMART

Attribute (SQ): Adapting
Description: Modifying behavior to fit the environment.
Key Technologies: Adaptive networks, PnP, Jini, GPS, directory services, collaborative filtering, humanized interfaces, self-healing systems

Attribute (SQ): Sensing
Description: Bringing awareness to everyday things.
Key Technologies: Sensors, embedded systems, smart environments, smart materials, cameras

Attribute (SQ): Inferring
Description: Drawing conclusions from rules and observations.
Key Technologies: Expert systems, knowledge bases, inference engines, fuzzy logic

Attribute (SQ): Learning
Description: Using experience to improve performance.
Key Technologies: CBR, neural nets, genetic programming, intelligent agents

Attribute (SQ): Anticipating
Description: Thinking and reasoning about what to do next.
Key Technologies: Self-organizing systems, goal-directed systems, robots, artificial life, HAL 9000
1. Adapting: Modifying Behavior to Fit the Environment
Living systems can only survive by
adapting easily to their environment.
Dinosaurs became extinct because
they couldn’t adapt to changes in the
environment. Businesses become extinct
when they can’t adapt to change to
compete effectively. Conversely, man
has not become extinct thanks to his
incredible ability to adapt, surviving
war, famine, disease and the elements
down through the ages.
Like living systems, software systems
become extinct if they can’t adapt to
change. The ability to adapt to users and
the environment – to recognize context
– is one of the basic attributes of smart
systems. Can you plug your information
device into a port and have it work,
regardless of what device it is?
Designing adaptive systems is a challenging
task. When systems need to survive out
of context – processing unexpected
information, for example – advanced
artificial intelligence techniques are
needed to enable systems to adapt and
function smoothly. Despite the underlying
technical complexities, the goal remains
deceptively simple: to have computers
adapt to us instead of the other way
around.
To reach this goal, it is important to
focus on work in four areas: adaptive
networks, adaptive interfaces, adapting
to location, and adapting to system stress.
Adaptive Networks
In the future, communication networks
will adapt instantly, organizing themselves
into the most optimal configuration
without all the manual steps that were
once necessary. Most organizations
don’t even know everything that is on
their network, let alone how to optimize
it. An important by-product of adaptive networks will be a much better understanding of what is on the network, because so much intelligence resides in the network itself.
An experiment in adaptive networks at
the MIT Artificial Intelligence Laboratory
uses the post office paradigm. The basic
idea is that a communications network
like the post office is hierarchically
organized into regional centers, city
centers, local offices, etc. The researchers
instructed the network to organize its
nodes hierarchically depending on traffic
intensity and usage profiles. The result
was a perfectly balanced network that
optimizes its use of available resources.
The ideas behind this kind of experiment
are finding their way into corporate
networks and the Internet. Recent
advances in network technology enable
networks to adapt themselves to traffic
patterns without a complex planning
cycle. Today, self-learning networks
can learn what happens on a network
segment and boost router throughput
accordingly. Modern network architectures are increasingly self-healing; local
outages can be effectively circumvented
and even repaired autonomously.
Researchers at the University of North
Texas have implemented a system of
intelligent mobile agents (IMAs) that
route data through networks without
overloading the network. Each agent
can recognize its task, adapt to the
situation, and communicate what it is
doing to the other agents. Though IMAs
pose a potential security risk in that they
move through a network like a virus (if
malicious code were entered there would
be serious problems), their overall intent
is to offload busy systems managers.
New Internet protocols such as IPv6,
RSVP and DiffServ enable the network
to react dynamically to traffic patterns
and user demands, adapting throughput,
latency and other characteristics of the
network connection to the user. Today,
heavily used Web servers are mirrored on
several sites around the world. When a
user contacts the site by entering
its URL, he or she is automatically
directed to the most optimal site.
This choice is made instantly based
on server usage and traffic intensity
on the Internet.
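As a rough sketch of that redirection step (the mirror hosts, load figures and weighting below are invented, not any particular operator's policy), the chooser can score each mirror on measured latency plus current load and send the user to the cheapest one:

```python
# A minimal sketch (invented servers and numbers) of choosing the "most optimal"
# mirror for a user: score each mirror by its current load and the network
# latency measured toward the user, and direct the request to the best one.

mirrors = [
    {"host": "us-east.example.com", "load": 0.82, "latency_ms": 40},
    {"host": "eu-west.example.com", "load": 0.35, "latency_ms": 120},
    {"host": "asia-se.example.com", "load": 0.20, "latency_ms": 230},
]

def cost(mirror, load_weight=200.0):
    """Lower is better: latency in ms plus a penalty for a heavily loaded server."""
    return mirror["latency_ms"] + load_weight * mirror["load"]

best = min(mirrors, key=cost)
print("redirecting user to", best["host"])   # eu-west wins: lightly loaded, decent latency
```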
New network applications, built with
technologies like Jini, Universal PnP
(plug and play) HomeRF and Bluetooth,
instantaneously adapt to the devices on
the network. Devices form a community
– interoperate – according to the needs
and circumstances of the network.
Devices are added to the network easily
and automatically “announce” their
availability to the other devices on the
network. With Jini, the vision is that any
kind of network consisting of services
(devices, applications, databases, servers,
mobile appliances, storage devices,
printers, etc.) and clients (requestors
of those services) can be assembled
on the fly. This technology will be
very important in home environments,
joining smart appliances, home
computers and mobile devices into a
single networked community.
Note that while these emerging network
technologies are adaptive, capable
of adjusting to the environment, they
do not learn. Learning, a key capability
of smarts, is discussed in the fourth SQ.
SMART ANTENNAE
It’s not easy being a mobile phone, pager or handheld computer. The world around you is filled with electromagnetic
noise from thousands of disturbances, including cars, TVs, desktops, laptops and countless other wireless communication devices. Getting your message through can be demanding – you have to be near your listener (e.g., when
using a Bluetooth network), you have to shout very loud (draining battery power), or you have to be smart.
New techniques for modulating radio signals improve the connectivity of wireless devices. Smart antennae, under
development for use in mobile phones, immediately adjust their transmission characteristics to the environment, helping
ensure that the call goes through and sounds clear.
Smart antennae work like people, who have more than one ear to listen to conversations. Because our brain correlates
the reception of noise from two ears, we can easily separate noise from conversation (try this yourself by blocking one
ear in a noisy room). Smart antennae correlate reception from more than one antenna to filter out background noise
and recover the signal.
Smart antennae rely on complex mathematical processing. Sophisticated, powerful microprocessors called digital signal
processors are needed to disentangle a high-capacity communications link from busy background noise.
Smart antennae yield spectacular improvements in reception quality and a considerable increase in throughput over
wireless networks. A device with a smart antenna uses less power than a traditional antenna to transmit the same signal
in the same conditions, so batteries in mobile phones and handheld devices can last longer.
Smart antennae also enable the base stations of wireless systems such as GSM to be more widely spaced, resulting
in huge savings for network operators since fewer stations are needed to cover a given territory. Smart antennae will
become even more important when third-generation wireless services are introduced (e.g., General Packet Radio Service,
Universal Mobile Telecommunications System). Since these services need high bandwidth, particularly for transmitting
graphics and video, the ability to limit the number of base stations (and thereby data hops) will be crucial.
So listen up for smart antennae – you will be able to listen better and work smarter.
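To make the combining step concrete, here is a small sketch with invented sample counts, delays and noise levels. It shows the principle only, not any vendor's DSP algorithm: align what two antennas hear and add the feeds, so the wanted signal reinforces itself while the independent noise at each antenna partly cancels.

```python
# A minimal sketch (not from the report) of the idea behind smart antennae:
# estimate the relative delay between two antenna feeds, align them and combine,
# so the wanted signal adds coherently while the independent noise averages out.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 4000
signal = np.sign(rng.normal(size=n_samples))     # a simple random "transmission"
delay = 5                                        # antenna 2 hears the wavefront 5 samples later

antenna1 = signal + rng.normal(0, 3.0, n_samples)                   # noisy reception, antenna 1
antenna2 = np.roll(signal, delay) + rng.normal(0, 3.0, n_samples)   # noisy reception, antenna 2

# Estimate the relative delay by testing candidate shifts and picking the one
# whose alignment correlates best; this is the job the DSP performs continuously.
candidates = range(-20, 21)
estimated = max(candidates, key=lambda d: np.dot(antenna1, np.roll(antenna2, -d)))

# Combine: the signal adds coherently, the independent noise partially cancels.
combined = (antenna1 + np.roll(antenna2, -estimated)) / 2

def snr_db(received):
    """Signal-to-noise ratio of a received trace against the clean transmission."""
    noise = received - signal
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

print("estimated delay:", estimated)                  # recovers the 5-sample offset
print("one antenna SNR: %.1f dB" % snr_db(antenna1))
print("two antennas SNR: %.1f dB" % snr_db(combined)) # roughly 3 dB better
```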
In addition to adapting readily to devices,
networks must adapt to people. That is
the vision of nomadic computing: the
network tunes in to you, any place any
time, rather than you tuning in to it.
Nomadic computing has been explored
in depth by Leonard Kleinrock, professor
of computer science at the University of
California at Los Angeles and a founding
father of the Internet.
Kleinrock’s vision of nomadic computing
is simple: as people move from one
location to another, the computing and
communications infrastructure adjusts,
providing services in a transparent,
integrated and convenient way.
Nomadic computing is perhaps the
essence of adaptive systems. Nomadic
computing frees the computer from a
fixed location, like a desktop, to be a
true companion that adapts to the user’s
current environment. This includes
adapting to geography, device, application, social situation (at the office means
I’m a professional vs. at home means
I’m a parent) and electronic needs
(writing a memo vs. transmitting video).
As mobile devices proliferate, signs of
nomadic computing are emerging. New
protocols and distributed databases
enable networked computer systems
to adapt quickly to their users. PDAs
synchronize with the push of a button,
enabling us to walk away with our data.
When users log into Windows 2000
from a networked computer, the local
system adapts its settings to the user’s
profile, no matter where he or she is
logging in from.
Going a step further, one can envision
being able to log in from any location
and any device – even someone else’s –
and be greeted with your personalized
environment. “The vision of adaptive
networks is that you be able to access
your personal computing environment,
including enterprise applications, home
applications and data, from an airport
workspace,” says Paul Gustafson of the
Leading Edge Forum. “The network
knows it’s me and gives me everything
that’s mine, including the right security
levels and applications.”
Adaptive Interfaces
It is clear that the way a system responds
to its user is a telling sign of its adaptiveness. The more the system or device
knows about you and your current
situation, the better it can tailor information for you.
Many Web sites dynamically adapt to
their visitors. They actively monitor the
behavior of users on the site: what pages
do they consult, what information do
they ask for and what products do they
buy? This information is then compared
to a database of behavior patterns. When
a user revisits the site, it can adjust its
behavior to the user, presenting, for
example, news and ads that match the
user’s interests.
A popular technique used for this
matching is collaborative filtering.
The idea is based on word-of-mouth
advertising. When we choose a product
or service, we tend to follow the recommendations of friends, relatives and
colleagues. Collaborative filtering relies
on the recommendations of thousands
(collaborative) to formulate specific
advice for an individual about a purchasing decision or a problem. The
filter starts with a database of profiles
and preferences from people in an
online community (e.g., buyers at a
Web site). From the database a subset of
people are selected, using an intelligent
selection scheme, whom the system
determines have similar preferences
to the individual. The average of this
group’s preferences becomes the
recommendation the system makes to
the individual. Amazon.com’s book
recommendations are a well-known
example. Another is newsgroup filtering;
users can wade through mountains
of messages effectively, retrieving just
those that interest them.
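The neighborhood recipe just described can be sketched in a few lines. The users, items and ratings below are invented rather than taken from any real site: measure similarity between profiles, keep the closest few, and average their preferences into a prediction for the individual.

```python
# A minimal collaborative-filtering sketch with made-up users and ratings.
# It follows the recipe in the text: select the profiles most similar to the
# individual, then average that group's preferences into a recommendation.

# Ratings on a 1-5 scale; None means the person has not rated the item.
profiles = {
    "ann":   {"book_a": 5, "book_b": 1, "book_c": 4, "book_d": None},
    "bob":   {"book_a": 4, "book_b": 2, "book_c": 5, "book_d": 5},
    "carol": {"book_a": 1, "book_b": 5, "book_c": 2, "book_d": 1},
    "dave":  {"book_a": 5, "book_b": 1, "book_c": 5, "book_d": 4},
}

def similarity(p, q):
    """Similarity of two profiles: negative mean squared difference over items both rated."""
    common = [i for i in p if p[i] is not None and q.get(i) is not None]
    if not common:
        return float("-inf")
    return -sum((p[i] - q[i]) ** 2 for i in common) / len(common)

def recommend(user, k=2):
    """Predict the user's unrated items from the k most similar other profiles."""
    me = profiles[user]
    others = [name for name in profiles if name != user]
    neighbors = sorted(others, key=lambda o: similarity(me, profiles[o]), reverse=True)[:k]
    predictions = {}
    for item, rating in me.items():
        if rating is None:
            votes = [profiles[n][item] for n in neighbors if profiles[n][item] is not None]
            if votes:
                predictions[item] = sum(votes) / len(votes)   # the group's average preference
    return neighbors, predictions

neighbors, predictions = recommend("ann")
print("most similar profiles:", neighbors)        # ['dave', 'bob']
print("predicted ratings for ann:", predictions)  # {'book_d': 4.5}
```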
A more lighthearted application of
collaborative filtering is the Jester
program, developed at the University
of California at Berkeley. Jester uses
collaborative filtering to recommend
jokes based on how you rate a set of
sample jokes. Jester adapts its selections to your sense of humor; in theory,
it won’t tell you a joke you don’t like.
Many systems attempting to adapt to
the individual, his or her interests and
his or her environment use an AI technique called "Bayesian belief networks" to
understand the context of a particular
question or situation. Although often
bothersome, this technique is used in
Microsoft’s Office Assistant (the “dancing”
paperclip) to recognize when the user
needs assistance and then offer help,
such as with writing a letter or preparing
a business presentation.
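As a rough illustration of the kind of reasoning such a network performs (this is a toy model with invented events and probabilities, not Microsoft's actual implementation), a single hidden cause, the user needing help, can be updated from a couple of observable clues using Bayes' rule:

```python
# Toy belief-network sketch: one hidden cause ("user needs help") with two
# conditionally independent observations. All probabilities are invented.

P_NEEDS_HELP = 0.10                      # prior belief that the user is stuck

# P(observation | needs help) and P(observation | does not need help)
LIKELIHOODS = {
    "repeated_undo": (0.70, 0.10),
    "long_pause":    (0.60, 0.25),
}

def belief_user_needs_help(observations):
    """Posterior P(needs help | observations), assuming the observations are
    conditionally independent given the hidden cause (a naive Bayes update)."""
    p_h, p_not = P_NEEDS_HELP, 1.0 - P_NEEDS_HELP
    for obs, seen in observations.items():
        like_h, like_not = LIKELIHOODS[obs]
        if not seen:                      # use the complement if the event was absent
            like_h, like_not = 1 - like_h, 1 - like_not
        p_h *= like_h
        p_not *= like_not
    return p_h / (p_h + p_not)

print(belief_user_needs_help({"repeated_undo": True,  "long_pause": True}))   # ~0.65
print(belief_user_needs_help({"repeated_undo": False, "long_pause": False}))  # ~0.02
```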
In addition to adapting to the individual,
some systems adapt to a community of
individuals. Consider self-organizing
Web sites. These sites filter content on
an ongoing basis based on audience
feedback, floating the best content to
the top so it is the easiest to find. These
sites show that with some well-written
code and careful planning, a site can
take a random collection of articles or
links and turn them into a sophisticated,
highly usable system that adapts to its
audience’s tastes.
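A self-organizing page can be sketched in a few lines; the article titles and counts below are invented. Each item is scored by the audience's response, lightly smoothed so brand-new items are not judged on a handful of views, and the page is re-ranked so the best-received content floats to the top.

```python
# A minimal sketch of a self-organizing page: rank articles by audience feedback
# so the best-received content floats to the top. Titles and counts are invented.

articles = [
    {"title": "Intro to Jini",      "views": 1200, "clicks": 90},
    {"title": "What is Bluetooth?", "views": 800,  "clicks": 160},
    {"title": "GPS for Beginners",  "views": 300,  "clicks": 75},
    {"title": "IPv6 Overview",      "views": 50,   "clicks": 10},
]

def score(article, prior_clicks=5, prior_views=50):
    """Click-through rate smoothed with a small prior so brand-new articles
    are neither buried nor boosted on the strength of a handful of views."""
    return (article["clicks"] + prior_clicks) / (article["views"] + prior_views)

# Re-rank the page: most appreciated content first.
for article in sorted(articles, key=score, reverse=True):
    print(f'{score(article):.2f}  {article["title"]}')
```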
Another technique for adapting to the
user is to focus on his or her emotions;
this is called affective computing.
If computers could understand how we
feel, they could interact with us more
effectively, responding to our changing
feelings and emotions. For example, if
your computer could see you having
trouble with your PDA, it could help you.
If the computer saw you were pleased, it
could present you with a new challenge.
If it saw you walk into the room, it might
turn itself on.
Rosalind Picard, associate professor and
director of affective computing research
at the MIT Media Laboratory, is exploring
the detection and expression of emotions
by computers.

[Diagram: The MIT Media Lab's affective computing research areas center on the emotive human user and include sensing human affect, recognizing affect response patterns, understanding and modeling affect, synthesizing affect in machines, affective interface paradigms, affective wearable computers and affective computing applications. These research areas aim to bring fundamental improvements to computers in how they serve human users, including reducing the frustration that is prevalent in current human-computer interaction approaches. See http://www.media.mit.edu/affect/AC_affect.html. Source: MIT Media Lab]

One project tries to detect
frustration with users by monitoring
physiological parameters such as heart
rate, skin conductivity and respiration.
Such a system is useful in a car to detect
driver stress. If the driver seems anxious,
the car might slow down, play calming
music, or issue an alert to the driver.
"Computers don't need emotional abilities for the fanciful goal of becoming human, but for a more practical goal: to function with intelligence and sensitivity towards humans."
— Rosalind Picard, Associate Professor, MIT Media Laboratory

A more human-like form of affective computing is Bruzard, the Media Lab's interactive virtual collaborator that is designed to express emotion. Bruzard looks like a patient child and comes with a complete set of emotional expressions including happiness, sadness, surprise and anger. The character, which appears as an animated character on your computer screen, is slightly caricatured and exaggerates a bit when expressing emotions.
There are many ways in which Bruzard
could become your friend. He could,
for instance, be called upon to express
the status of your application, disk space
or operating system, perhaps calming you
down in the event of a technical problem.
The fact that Bruzard looks like a person
is significant. So-called humanized
interfaces contribute to the adaptiveness
of the interface in that people generally
respond well – i.e., adapt – to them.
Many of us prefer talking to a person
instead of a machine and transfer this
people preference to people-like
interfaces. We make an assumption
that the more people-like the interface
is, the smarter it is. Hubert Dreyfus,
a philosopher at the University of
California at Berkeley and one of the
most important critics of artificial
intelligence, has asserted that intelligent
behavior is impossible without a body.
Ordinary computers don’t have a face
to show how they feel or how they think
their users feel. So, one might ask, how
smart are they?
Some of the basic assumptions behind
affective computing and humanized
interfaces are controversial. One of the
main issues is that the dividing line
between natural, emotion-laden interaction and annoying stupidity is very thin.
Although Bruzard looks cute and gives a
convincing portrayal of real interaction, it
may tire us as quickly as Microsoft’s Office
Assistant with its irritating suggestions.
Nevertheless, it is obvious that the more
human-like the interface, the more
adaptive – and thus effective – the system
can be. This type of interface is experienced as more natural, making systems
less intimidating and thus enhancing
their utility.
Adapting to Location
The ability to know where you are is a
very simple but powerful capability.
Now extend that to systems, things and
other people: if you or a system know
where someone or something is, you
or the system can better adapt to the
current situation.
Say you need to find the closest hospital.
You turn on your PDA for directions.
Its response depends on where you
are: at work, at home or in your car in
a distant city.
Your PDA has a GPS (global positioning
system) receiver, which is turning where
you are into a powerful piece of information. Today’s GPS receivers come on
a single chip and can be inserted into
any device, providing the location of the
device to within a few meters. As GPS
receivers permeate the environment,
the power of location information
to enhance adaptive capabilities will
become increasingly evident.
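The "closest hospital" example reduces to a small calculation once the GPS fix is known. The sketch below, using invented coordinates, measures the great-circle distance to each known facility with the standard haversine formula and picks the nearest:

```python
# A minimal sketch of the "closest hospital" example: given the device's GPS fix,
# compute great-circle distances to known facilities and pick the nearest.
# The coordinates below are invented placeholders.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))     # 6371 km is the mean Earth radius

hospitals = {
    "City General":   (38.905, -77.040),
    "Riverside":      (38.870, -77.110),
    "Northside Care": (38.980, -77.000),
}

here = (38.900, -77.050)                   # the PDA's current GPS fix (invented)

name, dist = min(
    ((name, haversine_km(*here, *coords)) for name, coords in hospitals.items()),
    key=lambda pair: pair[1],
)
print(f"Nearest hospital: {name}, about {dist:.1f} km away")
```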
Thanks to GPS, people and cars can be
located immediately by polling their
mobile phone or communications device.
Starting at the end of 2001, U.S. legislation will require all cell phones sold to
have GPS receivers so 911 emergency
service can locate the caller. Cars with
GPS receivers will remember everywhere
they have been from the moment they
were manufactured, handy information
for buyers, insurance companies,
mechanics and police departments.
Parents can know where their children
are. Siemens AG, the German electrical
engineering and electronics company,
has combined a GPS receiver and cellular
phone into a gadget that tracks the
location of children wearing it. The
system is sold as a service with a monthly
fee of less than $20. It is at once a blessing
for parents and a demon invoking Big
Brother. A similar device under development at eWorldtrack Inc. in Anderson,
S.C., located an autistic child who had run
away. The company decided to market
the device inside a shoe for security
reasons (a loose gadget can be easily
dropped into someone’s purse or briefcase for clandestine surveillance).
In Texas, cows are donning GPS collars
to help ranchers determine exactly
where the cows are grazing. This helps
ranchers decide where to clear land for
cattle and where to leave the land alone.
The study, conducted by specialists at
Texas A&M and Southwest Texas State
University, enables ranchers to get an
unbiased report of cow behavior. The
data is collected without the intervention
of humans, who can distract the animals
and trigger irregular grazing patterns.
And in Virginia, researchers have
implanted a miniature GPS system in a
cow’s digestive tract, along with a Web
cam, to better understand the cow’s
dietary performance in connection
with where it is feeding.
In addition to enhancing safety, productivity and health, position information
can be just plain handy. By blending
location information with destination
information in a handheld, the device
lets you know where stores, restaurants,
and other services are in relation to
where you are. Magellan’s GPS
Companion or GeoDiscovery’s Geode
are GPS modules that can be added to
Handspring’s Visor, bringing to the
handheld what GeoDiscovery dubs “the
power of place.” Position information
combines the network and the user to
deliver new levels of adaptiveness.
Adapting to System Stress
If an information system is to survive,
it too, like a living creature, must be
able to adapt to its environment.
Computers still crash or freeze when
errors occur. Networks come to a crawl
or go crazy when they are deluged with
self-replicating viruses or when phony
DNS records travel on the network.
Unreliable computer systems have
caused nightmares at NASA, where
launches have been delayed and missions
lost due to software glitches. A Navy
“smart” warship reportedly floated
adrift for hours because of a crashed
server. Virtually every organization has a
war story about a malfunctioning
computer.
Today’s systems need the ability to adjust
to different levels of stress. When their
functioning is endangered, systems
should be able to adapt and go on,
without requiring a wholesale shut-down
and abandonment of their users.
Of course, critical systems can be backed up or clustered so that another system
takes over if the first one has a problem.
This is the “stronghold method” – using
brute force, like a medieval castle withstanding an attack. Many of tomorrow’s
smart devices will use more sophisticated
techniques to adapt to stress and failure.
Today’s smart disk systems continuously
monitor the behavior of their components. Using rules of thumb and past
experience to interpret the results of
this monitoring, the system can predict
the imminent failure of a drive. Such
fault detection systems will be more
common in the future; all devices will
monitor themselves and interpret their
behavior to anticipate trouble. They may
even call upon other devices or pieces
of software to help them resolve their
problems.
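The rule-of-thumb character of this monitoring is easy to sketch. The attribute names and thresholds below are invented, not any manufacturer's actual limits; the point is simply that several mildly abnormal readings taken together justify a warning before the drive fails outright:

```python
# A minimal sketch of rule-of-thumb health monitoring for a disk drive: compare a
# few monitored attributes against thresholds learned from past failures and warn
# before the drive actually dies. Attribute names and thresholds are invented.

THRESHOLDS = {
    "reallocated_sectors": 50,     # more remapped sectors than this is a bad sign
    "spin_up_time_ms":     900,    # a tired motor takes longer to spin up
    "temperature_c":       55,     # sustained heat shortens drive life
}

def assess_drive(readings):
    """Return (status, warnings) for one drive's current attribute readings."""
    warnings = [
        f"{attr} = {readings[attr]} exceeds threshold {limit}"
        for attr, limit in THRESHOLDS.items()
        if readings.get(attr, 0) > limit
    ]
    if len(warnings) >= 2:
        status = "predictive failure: schedule replacement and back up now"
    elif warnings:
        status = "degraded: watch this drive"
    else:
        status = "healthy"
    return status, warnings

status, warnings = assess_drive(
    {"reallocated_sectors": 112, "spin_up_time_ms": 1040, "temperature_c": 48}
)
print(status)
for w in warnings:
    print(" -", w)
```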
For instance, a server under attack by a
computer virus may detect the problem
in an early stage and dynamically call
upon an anti-virus provider for protection
and a cure. Taking this one step further,
computers may develop self-healing
capabilities, where a smart monitoring
system detects errors in an early stage
and initiates repair before the service
is interrupted.
In the future, digital immune systems
may help smart devices and computers
resist stress, an idea forwarded by
researchers from the International
Center for Theoretical Physics in Trieste,
Italy. Such an immune system would
work like the human immune system,
detecting viruses (and perhaps other
problems) and automatically developing
and implementing cures for the
“disease.” Several companies, including
IBM and Symantec, are working on
anti-virus systems along these lines.
But adapting to stress goes beyond
detecting attacks and viruses. When
computers become massively interconnected, a computer can alleviate
a performance problem by dynamically
requesting CPU power from other
machines on the network. Disk space in
a broadband network can be dynamically allocated (nearly) independent of
location, leading to a genuine “store
on the net” concept.
All these techniques require smart
software and smart agents that can
communicate with each other,
exchanging their needs and offerings
with the “community.” This leads to
an overall system that can dynamically
and unnoticeably adapt to a wide
range of stress situations, such as security
attacks, sudden performance requirements and component failure.
The more adaptive the system – whether
in terms of networks, people, location
or system condition – the better are its
chances for long-term survival.
2. Sensing: Bringing Awareness to Everyday Things
Sensing systems can acquire information
from the world around them and
respond to it. This ability to interpret
external signals and communicate back
is what makes these systems smart.
Sensing systems, which can handle quite
complex situations, yield consistent,
programmatic output.
The best known sensing systems are
chemical and nuclear plant control
systems. These large-scale systems
accept input from hundreds or even
thousands of sensors and regulate
temperature, pressure and power
throughout the plant. These systems
must be absolutely reliable; no matter
what happens in the plant, the system
must be able to respond with consistent,
acceptable behavior.
Not all sensing systems need to be large
and complex. New types of sensors that
are very small can detect a wide variety
of conditions and parameters. Microprocessors, ever shrinking, can be
embedded into virtually any device.
Tiny actuators are enabling small devices
to move. The creative combination of
sensors, microprocessors and actuators
will give rise to completely new types of
sensing systems in the environment, in
people, and in state-of-the-art devices.
Sensors in the Environment:
Cars, Roads and Rooms
Sensors in the environment bring new
levels of efficiency and convenience to
those they serve. For example, smart cars
and smart roads will use this technology
to improve the car driving experience.
Smart homes and smart offices will be
able to automatically adjust to their
occupants and the situation at hand
to enhance productivity.
Smart Cars. The car is a self-contained,
high-value environment, where driver
concentration and passenger comfort
and safety are central. This makes
the car a prime candidate for a smart
environment.
General Motors’ OnStar service has
been a pioneer in the area of car safety
and information services. OnStar is a
simple sensing system that combines
cellular phones, computers, and GPS
receivers in the car with a remote telephone call center. When the driver
presses the OnStar button in a GM car,
he or she is connected to the call center
while the car transmits the vehicle’s
position to the call center. The operator
can immediately see where the vehicle
is. If an accident occurs and the airbag
deploys, OnStar automatically notifies
the call center so emergency services can
quickly locate the car.
Also recognizing the car as a high-value
environment is Egery, a joint venture
between Vivendi Universal, a French
media and communications giant, and
PSA, a major European car manufacturer.
Egery has developed a wide range of
services for car drivers called Multi-Access
Motorist Services. MAMS include route
guidance and navigation, traffic information, parking services and mobile
office capabilities (e.g., drivers can
listen to their email, which is converted
to speech). The services are delivered
to cars through wireless network services
such as SMS and WAP. Drivers use a
hands-free voice interface to interact
with the system.
MAMS, which CSC has been working
on with Egery, also features multimedia
entertainment services for passengers
including interactive radio, Internet
access, TV and games. Passengers can
even shop and plan trips while on the
road.
As telematics (integrating technology
into cars) advances, MAMS will be
extended to take action when the auto
senses trouble. For example, the car
could display more than a warning
light, which only instills panic. Upon
sensing something wrong, the car could
immediately notify customer service,
transmit real-time diagnostics and
schedule an appointment at a nearby
service center – all while you continue
to drive.
As this scenario suggests, data about the
car will become increasingly important.
Today many high-end cars come
equipped with memory that continuously
registers the position and performance
of the car. Tomorrow, this kind of logging
will be a standard feature on every car.
A tiny disk drive or solid state memory
device will keep track of all parameters
of the car: position; speed; date and
time when the driver entered the car,
started the engine and stopped it; what
radio station was on; etc. This information will be used for diagnosing technical
problems and could some day even be
called on for investigating accidents,
just like the black box on an airplane.
In fact, car data recorders have been
installed in about half of GM’s 1999
car models and almost all its 2000 and
2001 cars. Several police forces in the
United States, and the police force in
Ontario, Canada, have the necessary
data retrieval systems and are using the
data for crash reconstruction.
Insurers can also make good use of car
data. Mayfield Village, Ohio-based
Progressive, the fourth-largest auto
insurance company in the United States,
has been testing a usage-based insurance
system, known as Autograph, in Texas
since 1998. A combination of an
on-board GPS receiver and cellular
technology logs when and where the
car is driven and reports this information
back to Progressive. The customer’s
insurance rate is based, in part, on
driving duration and locations. Technically, the usage-based system is feasible;
Progressive reported that customers
liked the control and cost savings.
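The mechanics of usage-based pricing are straightforward to sketch. The rates and surcharges below are invented for illustration and are not Progressive's actual rating formula; they only show how logged driving time and conditions can feed directly into a premium:

```python
# A minimal sketch of usage-based pricing in the spirit of the pilot described
# above; this is not Progressive's actual rating formula. The base rate, per-minute
# charge and risk surcharges are invented for illustration.

BASE_MONTHLY = 40.00          # flat portion of the premium, in dollars
PER_MINUTE   = 0.02           # charge per minute of driving
NIGHT_FACTOR = 1.5            # minutes driven late at night cost more
CITY_FACTOR  = 1.2            # minutes driven in dense urban areas cost more

def monthly_premium(trips):
    """trips: list of dicts with 'minutes', 'night' (bool) and 'urban' (bool)."""
    usage = 0.0
    for trip in trips:
        rate = PER_MINUTE
        if trip["night"]:
            rate *= NIGHT_FACTOR
        if trip["urban"]:
            rate *= CITY_FACTOR
        usage += trip["minutes"] * rate
    return BASE_MONTHLY + usage

logged_trips = [
    {"minutes": 25, "night": False, "urban": True},
    {"minutes": 90, "night": True,  "urban": False},
    {"minutes": 40, "night": False, "urban": False},
]
print(f"premium this month: ${monthly_premium(logged_trips):.2f}")   # $44.10
```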
Several car manufacturers are developing
smart systems that focus on the use of
the car rather than the car itself. Experimental face recognition systems built into
cars observe the driver’s behavior. The
purpose is not only to recognize the
driver (protects against car theft) but
also to analyze the mood of the driver
and his or her intentions. The system
would then issue warnings or even
prohibit certain operations with the car.
For instance, the car may “refuse” to
start if the driver seems to be drunk.
Motorola has teamed with MobilEye,
a Jerusalem-based company creating
robotic vision applications, to develop
technology that assists car drivers. A
camera on the car front connected to
an advanced vision system will keep a
“third eye” on the road. The system is
able to warn the driver of road bends,
nearby cars and obstacles, or if the car
is leaving its lane.
Most car manufacturers are working on
comparable products for inclusion in
their high-end vehicles. Simple warning
and collision avoidance systems have
already been incorporated in some
luxury cars. Toyota, Ford, Jaguar,
Mercedes and others have adaptive
cruise control systems on some models.
Radar detects nearby vehicles and warns
the driver of imminent danger. Some
trucks feature onboard radar that
detects obstacles, even in darkness, fog
or rain. Subaru has presented a car
that warns the driver when the car
leaves its lane. Other manufacturers
are developing intelligent braking
systems that automatically reduce speed
when on a collision course. Intelligent
transport systems, including crash
avoidance, is one of the research areas
of the U.S. National Highway Traffic
Safety Administration.
We can expect the first integrated intelligent car safety systems to enter the market
in 2002 or 2003. More advanced systems
that can recognize a variety of dangers
and obstacles will probably gain general
acceptance within a decade. Cars will
evolve into highly integrated computerized vehicles, equipped with sensors
and controlled by intelligent computers
that monitor and manage all aspects
of the car, from fuel consumption to
driver behavior and traffic conditions.
Smart Roads. Smart cars become
even smarter when they are combined
with smart roads. Small radio beacons in
roads can identify the road and convey
speed limits and other traffic information
to the car. Cars and roads can communicate with each other, exchanging position,
speed and destination information.
The traffic system can then assign a slot
in a driving lane and enforce minimal
distance from other vehicles.
Expect managed car traffic systems to
look like air traffic control systems, where
all participants are registered and
their journey is planned and centrally
managed. The ultimate goal is intelligent
traffic management: cars and roads
assemble into an automatic and self-regulating system, where the position
and speed of cars in different lanes is first
mutually negotiated and then controlled
by the traffic system. Intelligent traffic
systems will be much more efficient and
safe than today’s ad hoc, jammed traffic
systems.
The first steps towards intelligent traffic
management are well underway. Smart
Trek is a traffic monitoring system in
the Seattle area. Hundreds of sensors
and cameras along the highways gather
information on traffic density, traffic
jams and accidents. This information is
centralized and distributed over the
radio and the Internet. Before leaving
home or the office, one can look on the
Smart Trek site to see real-time traffic
density on highways. In this way, some
traffic problems can be avoided. Local
radio stations use the information to
broadcast traffic information. The same
information is also accessible through
mobile phones.
The Netherlands uses ATCS (Automatisch
Traject Controle Systeem), an automated
system developed by CSC to enforce
speed limits. ATCS is deployed on the
highway between Amsterdam and
Utrecht, monitoring car speed and
collecting evidence for fines.
Many other cities around the world
struggling with heavy traffic have implemented limited traffic management
systems that include notification about
traffic congestion and available parking
spaces, and supervision of speed limits
on highways. Although most of these
initiatives are limited and fragmented,
they manifest the evolution of cars and
roads towards smart transportation systems.
However, technology advances in this
area are slow because the technology
must be absolutely reliable and there are
no standards for systems or interfaces.
TRY TALKING YOUR WAY OUT OF THIS SPEEDING TICKET
The Netherlands has one of the busiest highway systems of the world, including huge traffic jams and a high rate of
traffic violations and accidents. The Dutch Ministry of Transportation wanted to improve the safety and traffic throughput
on highways by using information technology. In 1996 they awarded CSC Netherlands a contract to develop the world’s
first fully automated speed enforcement system for a three-lane highway.
The system, ATCS (Automatisch Traject Controle Systeem), has been operational since May 1997. ATCS monitors
traffic at three different locations 800 meters to 3 kilometers apart on the Amsterdam-Utrecht highway.
“At each of the locations, a camera and a pattern recognition system produce a ‘fingerprint’ of each passing car, together
with a time stamp,” explains Nico van der Herik, the main architect of ATCS. “The system then matches fingerprints,
snapped at different locations. If the travel time between successive locations is too short, the speed of the car was
too high.”
ATCS then extracts the car’s license plate number from the images and automatically generates a speeding ticket,
including the identification of the license plate owner and the time-stamped images showing the violation. The ticket
is mailed to the license plate owner.
“ATCS has proven to be very efficient,” says Martin Evertse, ATCS project manager. “The number of violators dropped
from 6 percent to 0.6 percent. Traffic flow has become smoother, decreasing pollution and increasing security."
So while you may be loath to receive a smart speeding ticket, many are breathing easier – not to mention driving safer.
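The matching step van der Herik describes reduces to a simple average-speed check once two sightings of the same fingerprint are paired. The sketch below uses invented distances, timestamps and fingerprints, not ATCS data:

```python
# A minimal sketch of the ATCS matching step described above: if the same car's
# "fingerprint" shows up at two camera sites too soon, its average speed over the
# stretch exceeded the limit. Distances, times and fingerprints are invented.

SITE_DISTANCE_M = 3000          # metres between two camera locations
SPEED_LIMIT_KMH = 100

# (fingerprint, site, timestamp in seconds) produced by the pattern-recognition step
sightings = [
    ("car-A", 1, 0.0), ("car-A", 2, 95.0),     # 3 km in 95 s
    ("car-B", 1, 4.0), ("car-B", 2, 130.0),    # 3 km in 126 s
]

def average_speed_kmh(t_first, t_second):
    """Average speed over the stretch, converted from m/s to km/h."""
    return (SITE_DISTANCE_M / (t_second - t_first)) * 3.6

# Match fingerprints across the two sites and flag the speeders.
first_seen = {}
for fingerprint, site, t in sightings:
    if site == 1:
        first_seen[fingerprint] = t
    elif fingerprint in first_seen:
        speed = average_speed_kmh(first_seen[fingerprint], t)
        verdict = "VIOLATION: issue ticket" if speed > SPEED_LIMIT_KMH else "ok"
        print(f"{fingerprint}: {speed:.0f} km/h over the stretch, {verdict}")
```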
Smart Facilities. Rooms in homes
and offices are another type of smart
environment. They echo the basics of
smart cars: cameras recognize occupants,
occupants interact naturally with the
environment, and the environment alerts
occupants to problems and other events.
Smart rooms in homes and offices are
aware of their occupants and adjust to
them. Temperature and lighting conditions adjust automatically to the occupant’s
preferences when he or she enters the
room. In the future, this may be triggered
by sensors in our bodies, following
the research of Kevin Warwick at the
University of Reading, U.K.
Professor Warwick received the first surgically inserted silicon chip transponder in his forearm. The transponder,
which transmits data and instructions to
computers in the immediate environment,
was designed to work with a smart
building. The building detects the presence of the person and can personalize
the environment for him by, for example,
diverting phone calls to the nearest
hand set, configuring network connections, and updating door signs and
location information. This is especially
helpful for mobile employees who set
up temporary offices (“hoteling”) at
company sites.
RECOGNIZING YOU
A key capability of any smart environment is to recognize its
occupants. Without asking for ID, smart cars recognize their drivers
and smart rooms recognize their occupants.
Recognizing individuals relies primarily on face recognition
technology. A camera detects the face of a person in its field of
view and is able to select the individual from a database of facial
information without interfering with the person. In addition to smart
environments, face recognition has many applications in surveillance
and security, including spotting criminals.
Face recognition is useful where strong person authentication is
necessary, or when people need to be identified from a distance
(e.g., on streets or in crowded rooms). However, today’s systems
need to view the person nearly face-on. A future challenge is to
recognize faces of people moving around and not looking directly
into the camera; in these situations current systems show 80 percent
success rates, versus 99 percent success rates for frontal images.
Another challenge is to recognize facial expressions (happiness,
anger) and the actions of occupants (sitting, walking, gesturing).
Face recognition systems will evolve into “person” recognition
systems, where face, behavior and voice will be analyzed to identify
a person. This will lead to virtually infallible biometric identification
systems and advanced unmanned surveillance systems.
Meanwhile, scaled-down versions of person identification systems
will be incorporated into tomorrow’s cars, homes and offices. In this
way the person and environment will be aware of each other and
will interact with each other in a natural way.
1. Decompose
= 0.22
-0.07
+...+ 0.26
2. Compare
(0.22, 0.31, -0.07, ..., 0.26)
People will seamlessly integrate with
smart environments, unaware of the
underlying technology. As the terminology
suggests, the behavior of a computer
controlled system is that of a computer.
A smart system, on the contrary, does
not behave like a computer; a smart
phone behaves like a phone, a smart
car behaves like a car. We should feel at
home in our smart home, not like we
are surrounded by computers.
One approach to face recognition is to decompose the rough image of a face into a series of “standard” faces, called eigenfaces. The collection of all contributions of the eigenfaces to the image is called the “template.” By comparing the template to a database of templates, the face can be identified.
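To make the decompose-and-compare idea concrete, here is a minimal Python sketch. It is not the system described above: the eigenface basis, image size and names are invented, and a real system would compute the eigenfaces from a training set of face images.

import numpy as np

def make_template(face, mean_face, eigenfaces):
    """Decompose: express the face as one weight per eigenface."""
    return eigenfaces @ (face - mean_face)

def identify(face, mean_face, eigenfaces, database):
    """Compare: return the name whose stored template is nearest to this face's template."""
    template = make_template(face, mean_face, eigenfaces)
    return min(database, key=lambda name: np.linalg.norm(database[name] - template))

# Toy usage with random numbers standing in for real images and a real eigenface basis.
rng = np.random.default_rng(0)
mean_face = rng.random(64 * 64)                      # a 64x64 image, flattened
eigenfaces = rng.random((20, 64 * 64))               # 20 "standard" faces
database = {"Jeff Jones": rng.random(20), "Claude Doom": rng.random(20)}
print(identify(rng.random(64 * 64), mean_face, eigenfaces, database))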
Sensors in People: Smart Pills
and the Bionic Man
In addition to being embedded in the
environment, sensors are finding their
way onto and into people (as Professor
Warwick’s embedded sensor illustrates).
These biosensors gather data from the
body and transmit it to a nearby (perhaps
wearable) computer for further processing. Today, tiny biosensors exist that
measure body temperature, pulse rate
and blood pressure.
Tomorrow’s smart environments will rely
on both traditional and new types of
sensors. Traditional sensors like cameras
and microphones are advancing rapidly
and fading into the environment, leading
to innovative security approaches and
smart environments. Like it or not, fans
at this year’s Super Bowl were monitored
unknowingly by cameras upon entering
the stadium. The cameras focused on
faces, one by one, and transmitted the
images to computers, which took less
than a second to compare them with
thousands of images of known criminals
and suspected terrorists. (Only one
match – a ticket scalper who disappeared
into the crowd – was found.)
Beyond the environment at large, sensors
can be found in smart materials, which
contain sensors that make the material
aware of its own condition. Sensors
embedded in car tires monitor tire
pressure, improving fuel consumption
and safety. The massive recall of
Firestone tires has rejuvenated interest
in tire pressure monitors after an earlier
regulation requiring them was dropped.
With such a dashboard monitor, the
driver can be warned when tire pressure
becomes alarmingly low. Similarly,
crystals inside helicopter rotor blades
can convey information on vibrations of
the blades; the pilot can be warned when
dangerous vibrations occur, averting
an accident.
One company uses biometric data
to enhance lifestyle and well being.
BodyMedia Inc. aims to sense a person’s
lifestyle, using an arm band laden with
sensors, in an attempt to address the
fact that most medical problems stem
from poor lifestyle choices. The arm
band collects metrics such as ambient
air temperature (indication of time
spent indoors versus outdoors), heat
flux across the skin (corresponds with
sleep versus wakefulness) and galvanic
skin response (hints at arousal). By
emphasizing general wellness information
and round-the-clock body monitoring,
BodyMedia hopes to help people regain
command of their lives and the factors
that contribute to a healthy lifestyle.
In addition to encouraging healthy
lifestyles, body monitors will become
generally acceptable, over time, for
monitoring the elderly and anyone with
a health risk. These monitors will
consist of tiny biosensors that sense our
vital signs and a microprocessor that
correlates and interprets the data.
When a dangerous condition arises,
the body monitor will automatically
alert emergency services and relatives.
Diabetics will be alerted as sugar levels
rise dangerously. No heart patient should
ever die from an unexpected attack.
Of course, continuous monitoring of
biometrics can have unsavory implications, just like GPS receivers. Health
insurers with access to your biometric
data could base their insurance premium
on your actual health profile derived
from the data. This is bad news for those
in poor health (though potentially good
news for the healthy).
Other types of biosensors use a combination of biological material and silicon to
create sensors that can detect chemicals,
bacteria and even DNA signatures.
Future biosensors will recognize particular cells or strings of DNA, enabling
selective drug delivery. The “smart pill”
will consist of a drug reservoir and a
biosensor, programmed to release the
drug only when it recognizes the correct
DNA in neighboring cells. Such a pill
could, for instance, release cell-destroying
drugs only to cancer cells by recognizing
the DNA defects in the cancer cells.
Steven Schwendeman, assistant professor
of pharmaceutical sciences at the
University of Michigan, thinks such
sensing drug delivery systems will become
common medical tools, like shots and
pills. “As biomaterials and delivery systems
continue to improve, my feeling is that
eventually we will all have little devices
that sit in our bodies to deliver drugs
and to do other things,” he says.
Some technologists think that biosensor
implant technology will lead to the
merging of man and machine. Today,
some forms of deafness can be treated
by implanting a tiny device that directly
stimulates the nerves of the patient’s
hearing system. Patients can enjoy oral
conversations and receive audible
warnings. Implants that stimulate visual
nerves have been implanted in blind
people, feeding an 8x8 pixel image from
a small frontal camera directly into the
brain. Currently, this gives only sufficient
vision ability to see doors and openings
and avoid objects. As the technology
advances, the blind may come to enjoy
higher resolution imagery, fed directly
into the brain.
Another approach is to implant an
artificial retina. In a recent landmark
trial sanctioned by the U.S. Food and
Drug Administration, a silicon chip-powered artificial retina was successfully implanted into the eyes of three blind patients. The prosthesis is two millimeters in diameter, thinner than a human hair, and contains approximately 3,500 individual light-sensitive solar cells.
SMART PILL
[Diagram: smart pill components – batteries, control circuitry, biosensor, artificial muscle membrane, drug release holes, drug sensor, and a biocompatible permeable membrane.]
This tiny delivery system, whose drug release holes are coated inside with
artificial muscle, could deliver the hormone melatonin to insomniacs.
Source: News in Engineering, Ohio State University College of Engineering, August 1999.
ARTIFICIAL SILICON RETINA
[Diagram: the implant is placed in the subretinal space; labeled structures include the cornea, iris, lens, inner and outer retina, and optic nerve.]
Implants will also help paralyzed patients,
who will receive a brain implant connected
to a robotic arm. By learning to stimulate
the implant, these patients will gain
control over the motions of the arm.
We may envision such “bionic” implants
for both disabled and healthy individuals.
“When the human nervous system is
connected directly to a machine, we
will rapidly learn to use it,” says Peter
Cochrane, co-founder of ConceptLabs.
“We will soon realize that we can extend
and enhance our limited functionality
through silicon implants. Ultimately we
might enjoy the choice of never forgetting some things whilst being able to
delete others. If we could enjoy a wider
range of sensorial stimuli, processing,
and memory, who could resist such an
extension of our limited humanity?”
The world’s first implantations of the Artificial Silicon Retina (ASR™) chip prostheses into the eyes of three patients with retinitis pigmentosa are giving hope to millions of people who suffer from vision loss. The ASRs, pioneered by Optobionics Corporation, are two millimeters in diameter, thinner than a human hair, and contain approximately 3,500 individual light-sensitive solar cells.
Sailors and military personnel will receive
implants that boost their vision up to 400
percent. Musicians will enjoy an implant
that enables them to hear frequencies
up to 100 kilohertz, enabling them to
hear previously unknown dimensions
of music. Perhaps a 128-terabyte brain
memory expansion will one day be a
fashionable birthday present.
Biosensors also provide the basic technology for artificial limbs. SILL (Smart
Integrated Lower Limb) is a project
aimed at the development of artificial
lower limbs. Multiple sensors provide
input, and a digital processor handles
output, controlling hydraulic joints and
actuators to provide a natural motion of
the artificial socket, knee and ankle in a
wide variety of circumstances. The first
results of the project are expected on
the market in 2002.
Sensors in State-of-the-Art
Devices
Sensors are reaching beyond the confines
of the environment and people, stepping
out into the world through leading-edge
research. Sensors are “out there” in
several state-of-the-art devices including
mechanical fish, farm combines and
even smart dust.
Sensor fish, artificial fish laden with
sensors, are being used in the Pacific
Northwest as guides to help salmon
reach the Pacific Ocean. For a young
salmon spawned on the Columbia or
Snake Rivers, reaching the Pacific Ocean
can be a deadly pursuit, as the fish must
negotiate through hydroelectric dams
with lethal spinning turbine blades. The
dams also cause severe pressure changes
that can stun the fish.
Enter the sensor fish. The six-inch fish,
developed by the Pacific Northwest
National Laboratory, is sent through the
dam to measure stress and strain and
report its findings to the laboratory. The
idea is to understand the conditions
facing the fish and design safer dams.
Farmers in Alabama, Georgia and
Tennessee are using combines outfitted with sensors and GPS, coupled with
satellite-based thermal remote sensing,
to diagnose soil conditions and crop
yield. This study, sponsored by the
Alabama Space Grant Consortium, the
Georgia Space Grant Consortium,
Auburn University and the University of
Georgia, combines remote sensing with
precision farming to help farmers better
manage their crops.
Finally, for the masses, there is smart
dust. Researchers at the University of
California at Berkeley are developing a
complete communications system the
size of a grain of sand. The Smart Dust
project, funded by the U.S. Defense
Advanced Research Projects Agency
(DARPA), is exploring both military
and commercial applications. Picture
sprinkling smart dust sensors on the
battlefield for surveillance, in the
warehouse to control inventory or
on products to track quality. Imagine
reaching for a box of cereal on the
grocer’s shelf and finding out it has sat
in 80 percent humidity for three days
(its flakes won’t be crunchy!).
As sensors generate a new wave of
information technology innovation, we
can expect a wealth of sensing systems
to emerge. These systems will lead to a
world where every system, device, person
and environment can be monitored
continuously – for better or worse.
“When the human nervous system is connected directly to a machine, we will rapidly learn to use it.”
— Peter Cochrane, Co-Founder, ConceptLabs
3. Inferring: Drawing Conclusions from Rules and Observations
Beyond adapting and sensing, a smart
system should be able to solve problems
using rules and observations and draw
conclusions that can help it perform
tasks. This ability, known as inferring, is
essential for any intelligent entity and
provides the “intelligence” in artificial
intelligence systems. It is akin to man’s
ability to see smoke and infer that fire
is nearby.
Getting Started:
The General Problem Solver
Originally, AI researchers thought that
they could build a “General Problem
Solver” that would mimic a human’s
ability to solve problems across a wide
variety of domains. Allen Newell and
Herb Simon monitored human subjects
in various problem-solving activities such
as playing games like chess. The behavior
of the subjects was recorded and broken
down into elementary components that
were then regarded as the basic bits
of problem-solving knowledge. This
knowledge was encoded into a set of
“production rules” that, in theory, could
be applied to solve any problem in any
domain better than a human because of
the tremendous speeds of the computer.
But something was missing. No one could
find a single set of rules that could be
applied to all problem domains. It was
noted that the methods humans use to
solve problems employ a great deal of
knowledge about the domain in addition
to our problem-solving skills. Doctors are
able to diagnose illness because they
have extensive knowledge of medicine;
likewise, mechanics need specific
knowledge of engines and transmissions
to be effective. What was missing was
“expert knowledge.”
Adding the Expert:
Expert Systems
Expert knowledge is a combination of a
theoretical understanding of the problem
and a collection of heuristic problem-solving rules that experience has shown
to be effective in a domain. Expert systems
– arguably the most popular approach
in AI – are knowledge-based programs
that capture the domain knowledge of an
expert in a specific, very narrow field
and provide “expert quality” solutions
restricted to that field.
The advantages of expert systems are
apparent when compared to conventional systems using databases – expert
systems operate at the knowledge level
while conventional systems operate at
the information level. Databases typically
contain information stored as text or
numbers that has no meaning outside
the database structure. Conventional
systems are considered to be operating
at the information level because they
merely manipulate the data. Knowledge
bases contain knowledge – compiled
chunks of information that represent
heuristics (rules of thumb), observations,
relationships or conclusions based on
the experience of experts in the domain.
The expert systems that use these
knowledge bases operate at the knowledge
level similar to their human expert
counterparts. It is no coincidence that
they tend to have greater success than
conventional systems in areas where we
normally rely on a human expert.
Programming at the knowledge level
makes it possible to represent knowledge
in a natural fashion. Experts tend to
express many problem-solving techniques
in terms of if-then rules that can be
readily coded. For instance, when you
tell a mechanic that your car won’t start,
he or she is likely to respond, “If the
engine doesn’t turn over, then check
the battery” – which is, of course, a rule.
Additionally, updating the system is
somewhat simplified, since each rule
approximates an independent chunk
of knowledge. Adding a rule about
adjusting the carburetor won’t require
modifying the rules concerning the
battery.
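To illustrate how such if-then rules can be captured as independent chunks of knowledge, here is a minimal Python sketch; the facts and rules are invented for illustration and are not drawn from any system described in this report. Adding a carburetor rule later simply means appending to the rule list, without touching the battery rule.

RULES = [
    # (rule name, condition over the known facts, conclusion to add)
    ("check-battery", lambda facts: "engine does not turn over" in facts, "check the battery"),
    ("check-fuel", lambda facts: "engine turns over but will not start" in facts, "check the fuel system"),
]

def infer(facts):
    """Forward-chain: keep firing rules until no rule adds a new conclusion."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, condition, conclusion in RULES:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine does not turn over"}))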
Tax programs are a good example for
comparing conventional programs and
expert systems. A conventional program
can do quite well by leading you through
the tax form to compute your earnings
and deductions, and then calculating
your taxes by referring to the tax tables
stored in a database. But an expert system
contains the experiential knowledge of
a tax accountant and can prompt you
for additional deductions, display tax
advice on video clips, alert you to the
possibility of an audit, and provide you
with recommendations that will help
you save money in future tax years.
CSC’s Civil Group has developed several
expert systems to help the National
Aeronautics and Space Administration
operate satellites, including REDEX,
which diagnoses hardware failures in
NASA tracking stations, and AMOS,
which automates the command and
control of NASA’s XTE satellite.
Ed Luczak, leader of the team that built
AMOS, points out, “In these days of
reduced budgets and increased workload,
NASA simply is not able to conduct
these types of missions without relying
on expert systems. Without expert
systems, some existing missions would be
curtailed or even canceled prematurely.
The next generation of missions – which
will involve constellations of multiple
spacecraft – would not even be possible.”
Expert systems have been around since 1965, when they were used to identify chemical compounds³ and diagnose bacterial infections⁴. Since then, they have been used to mimic an expert’s ability to interpret, predict, diagnose, monitor, and control in domains from architecture (design) to zoology (classification).
³ Dendral was designed to infer the structure of organic molecules from their chemical formulas.
⁴ MYCIN used expert medical knowledge to diagnose and prescribe treatment for spinal meningitis.
AMOS has enabled NASA to reduce
XTE staffing by 50 percent and is saving
the agency a total of $6.4 million over
the XTE mission lifetime.
AMOS, THE EXPERT SATELLITE OPERATOR
Operating satellites is by no means simple. First, they are complex
systems with limited systems resources. Next, one cannot afford to
lose control of the system. Finally, communication with satellites is
often slow and sparse; you can only communicate when the
satellite is within range of a ground station during a so-called “pass.”
NASA routinely operates satellites by using two operators in four
shifts. One operator monitors the health of the satellite; the other
sends the commands.
AMOS was developed by CSC for NASA to automate the command
and control of NASA’s XTE satellite. AMOS combines two expert
systems with paging and the Web. The system operates in “lights-out”
mode – i.e., without operators present.
[Diagram: AMOS architecture. Components include the external control center software, schedule generator, mission planning files, monitoring expert system, commanding expert system, manual operations interface, schedule executor, data server, middleware and glueware, and a paging and Web server, with phone and Internet access through a security firewall.]
The first expert system monitors the health of the satellite. The second system assembles commands and sends them to the satellite. When problems occur, the information is transferred to a Web site and an operator is paged. The operator can consult the Web site to evaluate the problem and determine a strategy. All subsystems communicate through specially developed middleware and glueware, which also interface to mission planning and scheduling.
The AMOS team worked closely with NASA operators to capture their knowledge into the expert systems. Since its inception, AMOS has handled over 20,000 passes of the XTE satellite.
Perhaps the best-known expert system is Deep Blue, the world-famous chess program. Deep Blue’s knowledge of
chess was extracted from an international
grandmaster and loaded on to specialized
hardware consisting of a chess-specific
processor chip combined with a
PowerParallel SP computer. This combination of specialized hardware and software
gave Deep Blue the ability to examine
and evaluate 200 million positions per
second. Deep Blue accomplished the
unthinkable on May 11, 1997, beating
World Champion Garry Kasparov in a
little over an hour. It was the first time
a current world champion had lost a
match to a computer opponent under
tournament conditions.
Boosting Productivity:
Expert System Shells
Early expert systems had to be custom-built and required as much effort as their conventional counterparts (MYCIN, for example, took about 20 person-years to develop). To
accelerate market adoption, expert
system shells were developed that
provided all of the components of an
expert system except the knowledge base,
thereby drastically reducing the time
and programming expertise required
to create an expert system. The barrier
then became cost – expert system
shells could cost as much as $50,000.
While expert systems could solve many
problems, they were reserved for the very
few that could justify such an investment.
That changed in 1984, when a group at
NASA Johnson Space Center created
the C Language Integrated Production
System (CLIPS) as a means of bringing
expert systems to the masses. CLIPS was
free to universities and anyone working
on a government contract, and available
at the nominal cost to everyone else. It
has since entered the public domain and
can be freely downloaded.
The legacy of CLIPS continues through
others that it inspired such as Jess.
Jess is a Java-based expert system shell
developed by Ernest Friedman-Hill
at Sandia National Laboratories in
Livermore, Calif. Using Jess, you can
build Java applets and applications that
have the capacity to “reason” using
knowledge you supply in the form of
declarative rules. Jess has not entered
the public domain, but it can be downloaded at little or no cost by agreeing
to certain licensing restrictions.
Large organizations took advantage of
these expert system shells to advance
their business. Computer maker Digital
Equipment Corporation built an expert
system to configure customer orders;
energy giant Schlumberger created an
expert system to aid in oil drilling.
Enhancing Adroitness:
Fuzzy Logic
One weakness of computing is that it
tends to force us into a “yes” or “no”
world, with little allowance for nuances
– responses like “maybe” or “a little bit.”
Humans are faced with this problem as
well, but we handle it better. For example,
if I ask you if a man who stands six feet two inches is tall, you’d probably
say yes. But you have no difficulty
understanding why his basketball team
may consider him short!
Fuzzy logic was developed by Lotfi Zadeh
in the 1960s to handle these types of
contradictions in natural language.
Fuzzy logic, a technology used in inferring systems, has since proven itself
in a wide variety of control problems
including automatic focusing for
cameras, elevator control, and anti-lock
brakes.
Rather than force us to use a rule that says, “If a man’s height is greater than six feet then he is tall,” fuzzy logic allows us to create sets of height (such as very short, short, average, tall, and very tall) and assign the man a degree of membership in each set. So a five-foot man might be considered to have 70% membership in the “very short” set, 20% membership in the “short” set and 5% in each of the “average” and “tall” sets.
This approach allows us to write rules
that use qualitative phrases such as “If
a man is very tall” or “If a car is going
very fast” and let the determination of
“very tall” or “very fast” depend on the
circumstances. It also allows us to view
someone as “38% tall” rather than forcing
us to decide if they are tall or not.
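Here is a minimal Python sketch of fuzzy membership, assuming simple triangular membership functions; the breakpoints are invented for illustration. One height can belong partly to several sets at once.

def triangular(x, left, peak, right):
    """Membership rises from 0 at `left` to 1 at `peak` and falls back to 0 at `right`."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

HEIGHT_SETS = {                                  # heights in inches
    "short": lambda h: triangular(h, 58, 63, 68),
    "average": lambda h: triangular(h, 64, 69, 74),
    "tall": lambda h: triangular(h, 70, 76, 82),
}

height = 72  # six feet
print({name: round(fn(height), 2) for name, fn in HEIGHT_SETS.items()})
# A six-foot man is partly "average" and partly "tall" at the same time.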
Fuzzy logic control techniques have
been applied to many electronic control
systems in the automotive industry such
as automatic transmissions, engine
control and Anti-lock Brake Systems
(ABS). The Mitsubishi Galant uses fuzzy
logic to control four of its automotive
systems. General Motors’ Saturn utilizes
fuzzy logic for automatic transmission
shift control.
Intel Corporation, the leading supplier
of microcontrollers for ABS, has an
agreement with Inform Software
Corporation, the leading supplier of
fuzzy logic tools and systems, to develop
ABS for cars. The fuzzy logic approach
to ABS allows developers to create more
complex rules, including rules that have
memory such as: “If the rear wheels are
turning slowly and a short time ago
the vehicle speed was high, then reduce
rear brake pressure.”
It is quite likely that your car has fuzzy
brakes, but your car dealer didn’t tell
you. That’s because many car manufacturers in the United States are concerned
about the negative connotation that
goes along with the word “fuzzy” since
it implies imprecision. According to
Constantin Von Altrock, founder of
Inform Software Corporation, there is
concern that a clever lawyer could persuade a lay jury that a fuzzy-logic
ABS is hazardous simply because of the
name.
Putting Inferring to Work:
Data Mining
A powerful application of inferring
technologies is data mining: harvesting
useful information from mountains of
data. Wal-Mart, the chain of over 2,000
retail stores, uploads 20 million point-of-sale transactions every day to an AT&T
massively parallel system with 483 processors running a centralized database.
At corporate headquarters, they want
to know trends down to the last Q-Tip,
but it would take several lifetimes for a
human analyst to glean anything from the
equivalent of two million books contained in a terabyte.
Data mining is the computer-assisted
process of digging through and analyzing
enormous sets of data and then extracting meaning from the data. Data mining
extracts patterns, changes, associations
and anomalies from large data sets
to describe past trends and predict
future ones.
Data mining differs from traditional
statistics in that statistics forms a
hypothesis and validates it against the
data. In contrast, data mining “discovers”
patterns and forms its own hypothesis.
For example, hospitals can discover
patterns for how many days a patient
occupies a bed for a given disease, or
which doctors’ patients had longer than
average stays for a given disease.
Organizations have become excited
about data mining because the digital
age has provided them with a wealth of
data they know is valuable – if only they
knew how to use it.

DATA MINING PROCESS: data sources (databases, flat files, newswire feeds and others) → preprocess data (collect, clean and store in a data warehouse or mapping scheme) → search for patterns (queries, rules, neural nets, machine learning, statistics and others) → analyst reviews output, interprets results and revises/refines queries → report findings and take action based on findings.

Data mining can
be used in retail stores to correlate the
purchasing of items and predict where
items should be located in the store.
For example, customers who buy beer also tend to purchase pretzels, buyers of infant formula tend to
need diapers, and buyers of vegetables
may also require salad dressing. What
other patterns exist? Data mining can
tell you.
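To show the kind of pattern such a system surfaces, here is a minimal Python sketch that measures “customers who buy X also buy Y” rules by support and confidence; the shopping baskets are invented.

from itertools import permutations

transactions = [
    {"beer", "pretzels", "chips"},
    {"beer", "pretzels"},
    {"infant formula", "diapers"},
    {"vegetables", "salad dressing", "beer"},
]

def rule_stats(x, y):
    """Support: share of baskets with both items. Confidence: share of x-baskets that also hold y."""
    with_x = [t for t in transactions if x in t]
    with_both = [t for t in with_x if y in t]
    support = len(with_both) / len(transactions)
    confidence = len(with_both) / len(with_x) if with_x else 0.0
    return support, confidence

for x, y in permutations(set().union(*transactions), 2):
    support, confidence = rule_stats(x, y)
    if support >= 0.25 and confidence >= 0.5:
        print(f"buy {x} -> also buy {y} (support {support:.2f}, confidence {confidence:.2f})")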
Another potential gold mine for data
mining is in direct marketing and catalog
sales. Companies that send catalogs
directly to your home can have mailing
lists numbering millions of addresses
but a budget for mailing to only a
fraction of the list – perhaps 50,000
addresses. Since the response to catalogs
is generally low, companies must determine which households are most likely
to respond and target those homes.
Marketminer, a data mining tool developed by MarketMiner, Inc., explores
mailing lists for past purchasing trends
to help companies create smaller, better-targeted lists that optimize the potential
return on a mailing.
The increased digitization of information
will ensure that data mining continues
to play an important part in customer-relations management. Years ago, shopkeepers had no trouble understanding
their customers and responding quickly
to their needs. It’s much more complex
for today’s businesses, which must deal
with more customers, stronger competition, and a quicker pace of business.
Data mining can help organizations
understand their customer base and
predict trends in their buying habits.
Still Needed: Common Sense
Despite the power of inferring systems,
they still fall short when it comes to
everyday common sense. It seems
counterintuitive, but the system that
beat the world champion in chess
doesn’t have the foggiest idea that dogs
bark or ovens can be hot. In fact, it
doesn’t have the common sense you
would expect of a two-year-old child.
That’s the main disadvantage of
inferring systems – their knowledge
is limited to the domain for which
they’re programmed.
Initially, researchers thought that the
hard part of building inferring systems
would be capturing the expertise. But
it turned out, as inventor Ray Kurzweil
put it in The Age of Spiritual Machines,
that the problems we thought would
be difficult were easy, and what we
thought would be easy was seemingly
impossible.
Domains like mathematical logic,
chemical compounds, geography and
demographics rely on a small set of
concepts and heuristics, making it
relatively easy to capture and represent
the domain knowledge. On the other
hand, capturing the thousands of simple,
heuristic and unwritten rules about the
world, its contents and its behavior –
what we call common sense – has been
a major challenge. The most serious
effort to date has been the work by AI
pioneer Doug Lenat on CYC.
The premise of CYC is that building
expert systems to perform specialized
tasks will always be an imperfect proposition because these systems lack common
sense. The analogy is that before someone
can be a brain surgeon, they have to
have completed medical school, which
was preceded by college, high school,
elementary school, and kindergarten.
Even by the time a child goes to kindergarten, he or she already knows a million
facts about the world.
When Lenat conceived of CYC in 1984,
he theorized that for computers to truly
possess artificial intelligence, they needed
the same foundation of basic facts about
the world. The CYC system seeks to
provide just that.
CYC is a massive common sense knowledge base containing over a million assertions about the world. CYC may someday augment our computers with deeper knowledge, amplify our minds with in-ear oracles, and animate our world with talking VCRs, coffeepots and cars. But even with 75 people working on CYC, no one expects these breakthroughs for at least 25 years.
Still, CYC has proven itself effective
in mundane tasks. E-CYC, the search
engine incarnation of CYC, is used to
make the Hotbot search engine more
precise by detecting ambiguity and
reducing the number of false hits
encountered during a Web search.
Other applications of CYC include
cross-checking spreadsheets, retrieving
images based on descriptions of them,
detecting network vulnerabilities and
improving call center operations.
As the complexity of the world we live
in continues to rise, we will rely on
inferring systems to help us make sense
of it all. More importantly, the field of
research that produced inferring systems
continues to create even more powerful
systems. Already there are systems that
go beyond solving problems to learning
from experience and training. While
inferring systems will always be important,
these learning systems can handle many
more tasks.
SMART Q&A AT THE IRS
Inferring systems may soon leave their mark on ordinary
taxpayers. The U.S. Internal Revenue Service, working
with CSC, is testing a common sense knowledge system
to help the agency respond more efficiently to the
hundreds of emailed questions it receives every day.
The goal of the research is to have the system, called
CYC, be able to discern whether a question has already
been answered, meaning a “canned” response can be
used. This frees IRS staff to work on new questions,
while delivering a speedy response to the taxpayer.
Currently the IRS uses 1,700 part-time tax advisors in
10 locations nationwide to field emailed questions. In
2000 the IRS received 326,000 emailed questions, up
21 percent from the 270,000 it received in 1999. With
no advertising, the question and answer service has
become very popular as more people become more
comfortable with computers. When the IRS launched
the service, called the Electronic Tax Law Assistance
Program (ETLA, nicknamed “Ask the IRS”), in the mid-nineties, it received 13,000 questions in its first year.
It takes a tax advisor anywhere from 10 minutes to
two hours to respond to a question. Anything that
can be done to reduce that time improves tax advisor
productivity and response time to the taxpayer.
Email-Based Customer Service
“The IRS ETLA project is just a small part of a rapidly
growing trend in business and government to provide
email-based customer service. Any product that can
analyze and in some sense ‘understand’ taxpayer emails
has tremendous industry-wide potential for productivity
savings,” said Tom Beers, project manager of ETLA.
“We are hopeful that CYC represents a significant step
toward realizing the goal of ‘automated understanding’
of customer emails.”
Here’s how ETLA works today: A taxpayer submits a
question to the IRS from its Web site. The question is
posted to a database in Austin, and a tax advisor picks
up the question to work on. Using key words, the tax
advisor checks a database of questions to see if the
question has already been answered. If it has, the
advisor reviews and emails the canned answer with a
personal note to the taxpayer. If it is a new question, the
advisor researches the question, develops a response,
and emails it to the taxpayer. The question and response
are also posted to the database of questions and
answers.
The IRS believes its database of questions and answers
is underutilized. The database is used to address
20 percent or fewer of all incoming questions; the
IRS thinks that figure could exceed 50 percent.
Understanding When Money is Income
That’s where CYC comes in. CYC, under development
for 15 years by Austin-based Cycorp, contains over a
million rules and assertions about common sense and
the world around us. For instance, CYC knows that birds
fly but that tables normally don’t. CYC knows that the
money you earn working is income, and that if you sell
something, that money can also qualify as income.
CYC is being evaluated as a tool to improve the ETLA
software. The goal is that CYC would eventually enable
ETLA to pick the right answer from the database of
canned answers without any human intervention.
The research, still in the early stages, currently focuses
on getting CYC to understand enough tax terms and
concepts to allow it to be used as a smart search tool
for the database of canned answers. When a tax advisor
picks up an emailed question, instead of using key
words to match the question to an existing question,
the advisor rephrases the question and types the new
question into CYC. CYC analyzes the question and,
based on its understanding, determines whether there is
a relevant canned answer(s). If there is, CYC generates
a response and includes the canned answer(s) as
justification. (For now, CYC is being applied only to
questions pertaining to tax filing.)
As before, the tax advisor appends a short message
to the answer and emails it to the taxpayer. But unlike
before, the entire process is much faster, and more
matches with existing questions can be found.
Training CYC
Although CYC has a lot of common sense, it is no whiz
kid, much less a tax expert. CYC has to be trained on
the nuances of tax vocabulary and lexicon. For instance,
it had to be told what a dependent is (in the tax sense),
as well as the many different ways to refer to a tax
return form: the taxes, the return, the IRS refund.
“People refer to tax forms in many ways; we have to
train CYC on this,” explained Roland Sanguino, CSC
consultant working on the IRS-CYC project.
Similarly, CYC has to be trained on the many subtleties
of the English language. For example, CYC is being
trained to understand these two questions:
Can my mother claim my daughter as a dependent?
Can my daughter claim my mother as a dependent?
Notice that both questions use exactly the same words
but have a different subject and object. Using its
common sense knowledge, CYC can understand subtle
differences like this, which can’t be picked up by a
regular text-matching tool.
Indeed, vocabulary and grammar are major challenges
for a system like CYC because they are governed by
very complex rules and are extremely error-prone.
“When the advisor rephrases a question, he or she is
acting like a grammarian and spell-checker for CYC.
However, as CYC’s natural language interface evolves,
CYC may be able to do more of this rephrasing itself,”
Sanguino said.
Initially there were loftier intentions for CYC: having it
generate answers to new questions on its own and being,
in effect, a tax expert. But that idea has been shelved
– at least for now – in favor of the more limited but
practical application of identifying repeat questions.
Pragmatism over flashiness may signal a trend.
“Artificial intelligence systems like CYC can bring a lot
of value if applied in the proper place,” said Sanguino.
“In the past there were many unreasonable expectations
about what AI could do, and results were disappointing
and abandoned. But with a tool like CYC, you can get
a lot of value by applying the tool very specifically to
applications that can benefit from machine-powered
common sense.”
4. Learning: Using Experience to Improve Performance
The ability to learn is a vital component
of intelligence. Indeed, the very notion
of an unchanging intellect is a contradiction in terms. Intelligent systems must be
able to improve through the course of
their interactions with the world, as well
as through the experience of their own
internal states and processing.
In order to learn, a system must be able to:
• Evaluate current behavior. This enables the system to distinguish between inefficient or incorrect behavior and successful behavior.
• Induce. Given a set of examples, the system is able to create a general concept of how to approach a problem.
• Modify internal knowledge. Based on the realization that its current behavior is incorrect, coupled with the new concept created from induction, the system can modify its knowledge or structure in a way that should produce better behavior in the future.
Learning is an important factor in implementing practical AI applications. In fact,
the major obstacle to the widespread
use of expert systems is the “knowledge
engineering bottleneck.” This bottleneck
is the cost and difficulty of building
expert systems using traditional knowledge
acquisition techniques. An elegant
solution would be to program the system
with a minimal amount of knowledge
and allow it to learn from examples,
high-level advice, or its own exploration
of the domain.
This is a critical goal of the CYC common
sense knowledge system. Initially, both
a subject matter expert (SME) and a
knowledge engineer were required to
handcraft and spoon-feed knowledge
into CYC. A current project known as
Rapid Knowledge Formation has created
tools that replace the knowledge engineer
in the knowledge acquisition process,
enabling SMEs to quickly build large
knowledge-based systems themselves.
Rapid Knowledge Formation allows the
system to read text, assimilate what it
read, and ask the reader if its interpretation is correct. (This form of assisted
learning is impressive, though the ultimate
goal has always been to enable CYC
to learn on its own, using automated-discovery methods guided by models
of the real world.)
Increasing Confidence:
Intelligent Agents
The ability to learn enables a program to
become much more than just an expert
system. It increases our confidence in
the system, allowing it to act as our
“intelligent agent.” While agents are not
the only manifestation of machine
learning, they are an important example
of smart programs and the potential
of machine learning.
An intelligent agent is a software program
capable of acting on behalf of its user.
The difference between an intelligent
agent and conventional software lies in
the agent’s capabilities and how it is
used. In addition to its ability to learn,
agents are also persistent, proactive,
and semi-autonomous. Persistence is the
ability of the agent to remain active over time and across programs, accumulating
knowledge as it goes. Being proactive
means that the agent does not wait to
be told what to do – rather, it seeks
opportunities to fulfill its goals. Semi-autonomous means that once the agent
finds an opportunity, it can act on our
behalf without waiting for an explicit
action on our part.
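A minimal Python sketch of these three traits, with an invented shopping goal and price feed, might look like the following: the agent persists (it accumulates price history), is proactive (it keeps watching for a bargain) and is semi-autonomous (it buys without being told to).

import random

class ShoppingAgent:
    def __init__(self, item, budget):
        self.item, self.budget = item, budget
        self.price_history = []                  # persistence: knowledge accumulates over time

    def observe(self, price):
        self.price_history.append(price)

    def act(self):
        """Proactive and semi-autonomous: buys on its own when a price looks like a bargain."""
        if not self.price_history:
            return None
        price = self.price_history[-1]
        typical = sum(self.price_history) / len(self.price_history)
        if price <= self.budget and price < 0.9 * typical:
            return f"bought {self.item} at {price:.2f}"
        return None

agent = ShoppingAgent("concert ticket", budget=60)
for _ in range(20):                              # stands in for a live price feed
    agent.observe(random.uniform(40, 80))
    decision = agent.act()
    if decision:
        print(decision)
        break
else:
    print("no bargain found this session")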
Because an agent exhibits more intelligent
behavior than its conventional counterpart, it is usually allowed to perform
more trusted tasks on our behalf. Agents
can be found at several e-commerce
Web sites bidding, buying, and selling
items for their users – while learning
their users’ likes, dislikes, and budgets
in the process. Agents are commonly
used to scan the Internet, searching for
relevant information and then filtering,
processing, and analyzing the data for
us. Agents are also used as “secretaries”
for mobile phones, answering calls
and alerting users to important events.
Perhaps the most natural use of agents
is as characters in interactive computer
games and military simulations.
Multiple agents are often required to
solve complex problems. Multi-agent
systems have many applications including
air-traffic control, network monitoring
and manufacturing. Sandia National
Laboratory developed a multi-agent
system to guard against computer
hackers; individual agents monitor
computer systems and alert other agents
of intruders. At the Xerox Palo Alto
Research Center, agents are being
developed that detect faults in network
printers. The agent can summon a
repairman, notify other printers on the
network and even detect trends that
might indicate network design flaws.
THALLIUM DIAGNOSTIC WORKSTATION – LEARNING TO DIAGNOSE CORONARY ARTERY DISEASE
The Thallium Diagnostic Workstation (TDW), developed by CSC
researcher Rin Saunders, learns to diagnose coronary artery disease
from a set of digitized heart images. TDW could save an estimated
15 fliers per year from undergoing unnecessary cardiac catheterization, an invasive procedure on the heart.
Physicians at the U.S. Air Force School of Aerospace Medicine
(USAFSAM) qualify fliers for aeromedical fitness. Significant coronary
artery disease (CAD), causing narrowing of the arteries supplying
blood to the heart, is grounds for disqualification. The Air Force
can disqualify a flier for the loss of 30 percent of the diameter of a
coronary artery. However, CAD often produces no severe symptoms
until close to 90 percent of the diameter of an artery is lost, making
the diagnosis of aeromedically significant CAD a harder problem than
if diagnosing CAD in a conventional setting.
If EKG or other test results raise the suspicion that deposits have
narrowed the flier’s coronary arteries, a cardiologist administers a
thallium test, injecting radioactive thallium into the flier’s bloodstream
during physiologic stress (the flier is run on a treadmill) and then
using a gamma camera to image the heart. If the image is considered
abnormal, the flier undergoes cardiac catheterization to obtain a
definitive diagnosis.
Interpreting thallium imagery is difficult and subjective. Physicians
at USAFSAM showed considerable variation in their thallium-reading
skills, which they acquire through on-the-job experience. Further,
physicians often leave after a three- to four-year assignment, their
expertise going out the door with them.
By learning rules for diagnosing thallium imagery, TDW enables every
physician to perform as well as the best available physician. Further,
TDW retains expertise despite physician turnover. The bottom line: TDW can improve thallium imagery interpretation overall, lowering patient risk by reducing unnecessary cardiac catheterizations.
TDW combines machine vision with symbolic induction techniques
to learn diagnostic rules for thallium imagery. Its custom-developed
machine-learning algorithm called METARULE learns rules for
diagnosing by making assertions about what makes a case normal
or abnormal. These assertions make up the inductive kernel. Rules
in the kernel reference aspects of the image, such as which feature
types are present and how many features there are. METARULE
selects combinations of these rules to produce a complete rule set.
The learned rules outperformed USAFSAM’s best diagnostician.
Because TDW’s expertise is objective, physicians can compare
and evaluate objective criteria for diagnosing thallium images. TDW
provides a diagnostic standard that is consistent and reproducible
across both physicians and patients.
An extension of the agent-based approach is the use of systems that rely on both human and machine agents. These
systems produce something known as
“mixed-initiative intelligence” because
the agent (human or machine) that has
the most information or strength for a
specific task in the reasoning effort seizes
the initiative in solving the problem.
This integrates human and automated
reasoning to take advantage of the
respective reasoning strengths of each.
The implication is clear – learning
improves performance and opens up
new opportunities for innovation. The
question then becomes, what is the best
approach to learning? Since there are
many approaches but no clear winner,
it is useful to examine three promising
approaches – case-based reasoning,
neural networks and genetic algorithms
– to see how we can enable a computer
to learn.
Learning by Experience:
Case-Based Reasoning
Case-based reasoning employs what
is perhaps the simplest approach to
learning: learning by experience. CBR
is the process of using solutions to
previous problems (cases) to analyze
and solve a new problem. By relying on
past cases, the quality and efficiency of the reasoning are increased through the derivation of shortcuts and the anticipation of problems in new situations.
Newly solved cases are added to a “case
base,” allowing the system to continually
improve performance through learning.
The case base not only stores successes
for reuse but also failures to avoid
repeating mistakes. Proponents contend
that CBR mimics a human’s approach
to reasoning and is a sensible method
of compiling past solutions to avoid
reinventing the wheel or repeating past
mistakes.
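A minimal Python sketch of this retrieve, reuse and retain cycle, using an invented help-desk case base and a simple overlap measure of similarity, shows how the case base grows with experience:

CASE_BASE = [
    # (symptoms seen before, solution that worked)
    ({"no power light", "cable unplugged"}, "plug in the power cable"),
    ({"paper jam light"}, "open the tray and clear the jam"),
    ({"prints blank pages"}, "replace the toner cartridge"),
]

def solve(symptoms):
    """Retrieve the most similar past case, reuse its solution, retain the new case."""
    best_symptoms, solution = max(CASE_BASE, key=lambda case: len(case[0] & symptoms))
    CASE_BASE.append((symptoms, solution))       # the case base grows with every problem solved
    return solution

print(solve({"no power light"}))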
CBR systems offer a very simple, yet
useful, method of capturing past experiences. This approach is best suited to
situations in which a set of cases exists
and it is difficult to specify appropriate
behavior using abstract rules. Most
successes have been in domains involving
classification (e.g., medical, legal) and
problem solving (e.g., help desks, design,
planning, diagnosis).
The Web site “CBR on the Web” lists
some of the businesses that are already
relying on CBR to handle common
problems and questions posed by their
customers. For example, the HP Printer
Helpdesk is an online troubleshooting
tool designed to guide you through
solving common problems related to
HP laser printers. 3Com uses a similar
on-line CBR helpdesk approach to
provide technical information to help
diagnose and solve installation, upgrade
and configuration problems with 3Com
products.
Digital Gray Matter: Neural Nets
Artificial neural networks are another
important approach to learning systems.
Neural nets are simplified computer
versions of the human brain. Whereas
expert systems rely on a human’s
description of how to accomplish a task,
a neural net is best suited for problems
in which we can’t accurately describe
how we do something. For example,
consider writing an algorithm for
recognizing handwritten characters.
Where would you start? Yet, this is a task
that we easily accomplish every day.
Neural nets are based on sophisticated
mathematical techniques that make
tasks such as handwriting recognition
relatively easy. The network is composed
of layers of “neurons” with values
assigned to them that determine how
they will react. The accuracy of the values
associated with a system will determine its
performance. But rather than programming the values, the system is “trained”
by providing it with samples of handwritten characters and the correct answer.
During the training phase, the network
will adjust its weights to improve its
performance. If the training cases were
selected well, the network will be able
to generalize from the training set and
recognize most people’s handwriting.
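A minimal Python sketch of training by weight adjustment follows. It uses a single artificial neuron rather than a multi-layer handwriting network, and the task, flagging points in the upper-right of the unit square, is invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 2))                         # training samples, two inputs each
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)      # the correct answers supplied during training

w, b = np.zeros(2), 0.0                          # the weights start out untrained
for _ in range(2000):                            # training phase: adjust weights to reduce error
    output = 1 / (1 + np.exp(-(X @ w + b)))      # the neuron's output for every sample
    error = output - y
    w -= 0.5 * (X.T @ error) / len(y)            # nudge each weight against its share of the error
    b -= 0.5 * error.mean()

test = np.array([[0.9, 0.8], [0.1, 0.2]])        # unseen points
print(1 / (1 + np.exp(-(test @ w + b))) > 0.5)   # generalizes: prints [ True False]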
Neural nets cannot do anything that
cannot be done in the “traditional” way
– i.e., by writing an algorithm for a
6
traditional computer. But neural nets
are very effective for solving problems
involving noisy, incomplete data, or tasks
for which we don’t have an algorithm
(e.g., vision, natural language understanding or pattern recognition).
Problems that fall in this category
include predicting behavior of complex
systems such as weather and classifying
problems such as how to diagnose a
health condition.
As suggested earlier, optical character
recognition (OCR) is one of the most
successful neural net applications.
Several commercial OCR packages are
based on neural nets and can recognize
both printed characters and hand-written
characters. They are incredibly fast,
even when the original is not crisp. A
related application is signature verification. Because neural nets can handle
noisy data, they can be trained to ignore
irrelevant variations in signatures made
by the same person and to concentrate
on the constant factors, such as width of
strokes.
⁶ The “proof” of this is that neural networks are in many cases implemented (or rather, simulated) on traditional computers. Some simple neural networks can even be implemented in a spreadsheet.
Neural nets have also been used to classify and diagnose. Neural nets are used
to inspect crops and classify fruits and
vegetables. The London Underground
subway system uses neural net systems to
detect faults. In manufacturing, neural
nets can recognize defective parts.
Neural nets have found a wealth of
applications in the financial industry,
where they are used to forecast stock
prices, analyze trends and manage funds.
Here the power of neural nets is used
to model a largely unknown and fuzzy
inter-dependence between market
parameters and stock prices. Several
marketing tools also contain neural net
technology for evaluating the impact of
direct marketing.
Other applications in the financial world
include fraud detection, buying patterns,
credit rating and risk evaluation. For
instance, when detecting fraud, the “training cases” for the neural net are the normal behavior of customers, without explicitly defining what “normal” behavior is. The neural net will then flag any pattern that deviates from the patterns it was trained on. Such deviating patterns
indicate different customer behavior,
possibly caused by fraud.
Fraud Solutions, a venture by Nortel
Networks, provides a neural network
called Cerebus that rapidly detects telephone fraud by monitoring subscriber
behavior. Mimicking a human analyst,
Cerebus creates individual profiles for
telephone subscribers and monitors
these profiles for anomalous activity
patterns.
HNC Software and Authorize.Net have
joined forces to use neural networks to
prevent fraud and establish the Internet
as a highly secure means of business-to-business (B2B), business-to-consumer
(B2C), and consumer-to-consumer (C2C)
commerce. As Authorize.Net’s merchants
process transactions through the company’s Internet payment gateway, they
have the option of using neural nets to
screen their transactions for fraud.
Neural nets have also been used to
develop gaming strategies. TD-Gammon,
a backgammon playing program,
advanced its playing skills by playing 1.5
million games against itself and evaluating
the strategies used. After many tens of
thousands of games, the level of TD-Gammon started to improve until it
reached the level of the best players in the
world. Other neural net games include
Go, chess, checkers, bridge and Othello.
Neural nets are genuine learning systems
because they can evaluate their performance through training and testing. They
can also change their internal “rules” by
adjusting their internal weights. However,
most neural nets used today in commercial applications are not continuously
learning. They are trained when they are
designed but become static once they
are inserted into a broader application.
Researchers are investigating neural
nets capable of continuous learning.
These neural nets can adapt to changing
conditions and become smarter as they
solve more problems. New generations
of applications containing continuously
learning neural nets will become smarter
as they are used.
The investigation into possible applications for neural nets has barely begun.
Early findings suggest that applications
involving behavior considered to be
unpredictable, such as weather, may be
solvable with neural nets. Tony Hall, a
meteorologist from the U.S. National
Weather Service in Fort Worth, Texas,
has developed a network that considers
19 variables to predict rainfall. The
results to date have been outstanding,
accurately predicting the amount of
rainfall 83% of the time.
Survival of the Fittest:
Genetic Algorithms
Genetic algorithms, another approach
to learning, are based on a biological
metaphor: survival of the fittest. Genetic
algorithms view learning in terms of
competition among a set of evolving
alternative concepts. In principle,
genetic algorithms can be applied to
any problem that a normal program
can solve, but they are best-suited for
problems that we don’t know how to
solve efficiently but for which we can
quantitatively evaluate a solution.
A good example is finding the shortest
path that visits all nodes in a network
(also known as the Traveling Salesman problem⁷). The appeal of genetic algorithms comes from their simplicity and
elegance as robust search algorithms,
as well as from their power to discover
good solutions rapidly for difficult
multi-dimensional problems.
Genetic algorithms are useful and
efficient when:
• The search space is large, complex
or poorly understood.
• Domain knowledge is scarce or
expert knowledge is difficult to
encode to narrow the search space.
• No mathematical analysis is available.
• Traditional search methods fail.
⁷ The Traveling Salesman Problem is a member of the class of problems known as Non-deterministic Polynomial Complete (NP-Complete). This class of problems includes tasks for which conventional problem-solving techniques cannot solve the general case problem in a reasonable amount of time (e.g., your life span). Examples include path planning, game playing and resource scheduling.
The basic concept behind genetic algorithms is to encode a potential solution
as a series of parameters. A single set of
parameter values is treated as a genome,
or the genetic material of an individual
solution. A large population of candidate
solutions is created (initially with random
parameter values). These candidates
are tested as solutions to the problem.
In an approach similar to Darwinism,
only the fittest of the solutions survive
based on their performance.
New candidates are created from the
survivors through a process analogous
to breeding. This process includes
crossover (combining characteristics of
two parent solutions) and mutation
(random changes to genetic material of
a single solution). The process continues
through several generations, with weak
solutions being replaced by new
candidates bred from the ever-stronger
population of solutions.
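A minimal Python sketch of this encode, evaluate, select, crossover and mutate loop follows; the fitness function, guessing a hidden bit string, is invented purely to keep the example small.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]          # stands in for "a good solution"

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))             # combine characteristics of two parents
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]   # random changes

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)    # survival of the fittest
    survivors = population[:10]
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(20)]
    population = survivors + offspring

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET), "bits correct")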
Genetic algorithms can be applied to a
range of problems including drug design,
financial modeling, network management, career planning and even music
creation. In the pharmaceuticals industry,
genetic algorithms are emerging as
computational aids for drug design and
for studies of molecular folding and
intermolecular interactions. This, coupled
with the fact that such approaches can
be run on desktop computers, makes
the use of genetic algorithms and other
AI tools a promising, cost-effective and
complementary approach to traditional
drug design, which uses a more statistical
approach to correlate known molecular
structures with functional information.
The first company to pioneer the genetic
algorithms approach for drug design is
CyberChemics, Inc.
Genetic algorithms are also well-suited
for financial modeling because they tap
into the “payoff-driven” nature of these
problems. Scenarios are easily evaluated
for fitness by the returns they provide.
In his book Genetic Algorithms and
Investment Strategies, Richard Bauer
claims, “Genetic algorithms hold the
key to forecasting price movements and
mastering market timing techniques.”
The Traveling Salesman problem has
practical applications for network management, transportation, manufacturing and
robotics. A genetic algorithm-based
system developed for KLM Royal Dutch
Airlines by Syllogic and IBM plots the
availability, training and career moves
of airline pilots. The system learns from
the behavior of pilots and predicts
changes in the careers of pilots.
Genetic algorithms can even create
music, although opinions of the quality
vary with the listener. GenJam is a
genetic algorithm-based model of a
novice jazz musician learning to improvise. GenJam maintains hierarchically
related populations of melodic ideas that
are mapped to specific notes through
scales suggested by the chord progression
being played. As GenJam plays its solos
over the accompaniment of a standard
rhythm section, a human mentor gives
real-time feedback, which is used to
derive fitness values for the individual
measures and phrases. GenJam then
applies various genetic operators to
the populations to breed improved
generations of ideas.
and the program
will work today.
Show the program
how to find and fix
a bug, and the
program will work
forever.”
— Oliver Selfridge
Early AI Pioneer
The three learning approaches discussed
– case-based reasoning, neural networks
and genetic algorithms – are only a few
that are currently being researched.
They demonstrate how we can create
smarter programs, such as intelligent
agents, that will allow us to put more
trust in our computers. Once they have
our trust, we can delegate increasing
responsibility to them. But this will
require creating systems with even more
sophisticated skills such as reasoning,
predicting and thinking ahead.
5. Anticipating: Thinking and Reasoning About What to Do Next
The culmination of smart is the anticipating system. An anticipating system can
reason about itself, its users and the
environment, predicting actions and
needs in advance and offering solutions
for current as well as unexpected
problems. In essence, an anticipating
system thinks ahead.
An anticipating system should realize
that you have a problem before you do.
For example, a group of civil engineers
is trying to determine how future traffic
congestion in Chicago might be alleviated
if they constructed a bridge in a specific
part of town. If they use an anticipating
system, one of the engineers might
come in the office on a Monday morning,
before there is an actual traffic problem,
and be greeted with the following
message from the anticipating system:
“Over the weekend I was comparing the
traffic flows in Chicago to traffic patterns
in Amsterdam and Paris. In Amsterdam
and Paris they use a bridge to reduce
congestion at a point that is similar to
an area in downtown Chicago. If we
construct a similar bridge, it would
reduce waiting time for commuters in
that area by 17% and allow for future
growth.”
Anticipating systems are not directed
to solve specific problems; they find
problems, recommend solutions and in
some cases fix them on their own.
Anticipating systems are not limited to
one domain but can apply their knowledge and reasoning across multiple
domains. In this sense, anticipating
systems come the closest to modeling
human intelligence. They draw on all
the SQs – adapting, sensing, inferring
and learning – to think ahead. The
goal: to give us more informed and timely
data to make smarter decisions more
quickly.
Today, mature anticipating systems exist
only in the minds and works of visionaries
and science fiction writers. The most
famous anticipating system is the HAL
9000 computer in the film “2001: A Space
Odyssey.” Although the story of HAL
does not end optimistically – HAL goes
crazy and starts to kill the crew – it gives
a realistic portrayal of the capabilities
of an anticipating system. HAL can
anticipate events and actions by actively
monitoring its environment and using its
knowledge to reason about the future
and about people’s intentions. HAL
even develops new strategies as it misleads
the crew and uses lip-reading to gather
information.
There is no doubt that the 21st century
will witness the appearance of HAL-like
devices in daily living, though these
devices will certainly not look like HAL,
who sported mainframe-like hardware.
Ray Kurzweil and Hans Moravec, along
with other technology visionaries, have
described a future of ever more powerful and smarter computers that will
ultimately equal or even surpass man’s
capabilities.
In the meantime, the seeds of anticipating systems are evident in consumer
products, planning systems, robots and
– the ultimate – artificial life.
Consumer Products
Although mature anticipating systems
have yet to evolve, the seeds of anticipating systems can be seen in several
consumer products.
When you start typing the address of a
Web site in your browser, the browser
will anticipate what site you want to visit
based on past sites you’ve visited and fill
in the address for you. This is a simple
example of using case-based reasoning
for anticipation. While it may not seem
that impressive, it does save keystrokes!
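A rough sketch of that history-based completion, with invented visit counts and addresses (real browsers rank candidates with considerably more care):

# Toy address-bar completion: suggest the most frequently visited URL
# that starts with whatever the user has typed so far.
visit_history = {
    "http://www.csc.com": 12,      # invented history entries
    "http://www.cnn.com": 7,
    "http://www.tivo.com": 3,
}

def suggest(typed_so_far):
    matches = [url for url in visit_history if url.startswith(typed_so_far)]
    return max(matches, key=visit_history.get) if matches else None

print(suggest("http://www.c"))     # -> http://www.csc.com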
Microsoft’s Office Assistant anticipates
that you need help and pops up on the
screen to offer it. Contrast this with the
previous help facility – the user had to
know he needed help in the first place.
Of course, the desktop assistant isn’t
perfected yet, and often offers the
wrong help.
TiVo, the digital personal video recorder
manufactured by Philips and Sony in
the United States and Thomson Scenium
in the U.K., learns about your TV
preferences over time and anticipates
what shows you will like. TiVo then
records these shows for you, providing
a customized selection of programming
geared to your tastes.
“The most interesting feature of TiVo is
the ‘Thumbs-up, thumbs-down’ feature,”
explains Jim Skinner of CSC, an AI
expert who has researched intelligent
agents for CSC’s Leading Edge Forum.
“TiVo records what it thinks you’ll like
based on how you rate shows. The more
input you provide, the better TiVo’s
suggestions are.”
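Purely as an illustration of rating-driven learning, and not TiVo's actual algorithm, the fragment below keeps a running score per category of show that each thumbs-up or thumbs-down adjusts; the shows and categories are made up.

from collections import defaultdict

category_score = defaultdict(int)   # learned preference per category

def rate(category, thumbs_up):
    # Every rating nudges the preference for that kind of show.
    category_score[category] += 1 if thumbs_up else -1

def recommend(schedule):
    # Rank upcoming shows by how well their category has been rated.
    return sorted(schedule, key=lambda show: category_score[show[1]], reverse=True)

rate("comedy", True)
rate("comedy", True)
rate("drama", False)

upcoming = [("The Simpsons", "comedy"), ("ER", "drama"), ("Nova", "science")]
print(recommend(upcoming))          # comedies float to the top of the list

The more ratings accumulate, the more the ordering reflects the viewer's tastes, which is the behavior Skinner describes.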
Planning Systems
Several systems under development for
cars and fighter jets try to anticipate the
driver’s or pilot’s actions and respond
accordingly. For example, information
in the jet’s heads-up display changes
depending on the pilot’s intent. In the
car, steering, braking or suspension
changes depending on what the driver’s
goals are anticipated to be.
Such goal-directed systems are sometimes
called planning systems. These systems
know what you want done in advance
and plan a course of action to achieve
this goal. These systems lend themselves
to complex planning tasks such as travel
and logistics.
In their simplest form, systems like
Mapquest and Travelquest on the Web
help people plan routes and trips. Not
only can these services provide maps,
but they can also optimize routes
and find alternatives (i.e., they can be
goal-directed).
Air traffic control is another important
planning activity. As air space becomes
more crowded and complex to manage,
the pressure for better planning mounts.
That’s why MITRE Corporation is developing Path Objects, a language that
enables controllers and pilots to easily
communicate changes to the intention of
an aircraft by changing just one parameter
of a shape at a time. For example, rather
than representing a circular path as a
series of coordinates along the circumference, a Path Object algorithm would
calculate the precise path based only on
the center and radius. With Path Objects,
pilots, controllers, and automation
systems can exchange information about
an aircraft’s intended paths reliably,
unambiguously, and efficiently. The
result: smarter air navigation systems
that can easily modify the aircraft’s route
without complicated instructions from
the ground. Ultimately, this means
more planes in the air safely and more
alternative routes for pilots.
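A small sketch conveys the parametric idea; it is an illustration of the concept rather than MITRE's actual Path Objects language. A circular holding path is exchanged as a center and a radius, and changing either parameter redefines the entire path.

import math
from dataclasses import dataclass

@dataclass
class CircularPath:
    # A holding pattern described by two parameters rather than a long
    # series of coordinates along the circumference.
    center_x: float
    center_y: float
    radius_nm: float            # radius in nautical miles (illustrative unit)

    def point_at(self, bearing_deg):
        # Any point on the path can be recomputed on demand from the parameters.
        x = self.center_x + self.radius_nm * math.cos(math.radians(bearing_deg))
        y = self.center_y + self.radius_nm * math.sin(math.radians(bearing_deg))
        return (x, y)

hold = CircularPath(center_x=120.0, center_y=45.0, radius_nm=5.0)
hold.radius_nm = 8.0            # one parameter change redefines the whole path
print(hold.point_at(90.0))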
“Adaptive robots will find jobs everywhere, and the hardware and software industry that supports them could become the largest on earth.”
— Hans Moravec
Technology Visionary
On the ground, planning plays an
important role in gate management.
Planning which gate an aircraft should be
stationed at is a daunting task involving
many variables: landing schedules, gate
availability, corridor capacity, facilities
at the gate, crew readiness, maintenance
requirements and timing constraints.
Increasingly, airports are using anticipating systems to manage gates faster and
more efficiently than human controllers
can. For airports, this means reduced
costs and down time. When an aircraft
is at the correct gate, its time there is
minimal because facilities and personnel
are ready. Planning in advance by taking
into account future traffic further
optimizes the process.
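A toy version of the task makes the variables concrete. The flights, gates and the single wide-body constraint below are invented for illustration; real gate-management systems weigh many more constraints and plan ahead for future traffic.

# Toy gate planner: give each arrival the first compatible gate that is free.
gates = [
    {"id": "A1", "wide_body": True,  "free_at": 0},
    {"id": "A2", "wide_body": False, "free_at": 0},
]
arrivals = [
    {"flight": "KL601", "eta": 10, "wide_body": True,  "turnaround": 60},
    {"flight": "KL433", "eta": 15, "wide_body": False, "turnaround": 45},
    {"flight": "KL755", "eta": 20, "wide_body": True,  "turnaround": 50},
]

plan = {}
for flight in sorted(arrivals, key=lambda f: f["eta"]):
    candidates = [g for g in gates
                  if g["free_at"] <= flight["eta"]
                  and (g["wide_body"] or not flight["wide_body"])]
    if candidates:
        gate = min(candidates, key=lambda g: g["free_at"])
        gate["free_at"] = flight["eta"] + flight["turnaround"]
        plan[flight["flight"]] = gate["id"]
    else:
        plan[flight["flight"]] = "hold"   # no suitable gate free yet

print(plan)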
Many corporate financial systems – and
even some personal financial management systems – include the ability to
do financial planning well in advance.
These smart programs can carefully
evaluate many parameters and produce
a goal-directed plan for financial
management.
Another area ripe for planning is logistics,
complex because it involves so many
inter-related variables. One of the most
intensive users of logistics planning
systems is the military; getting armies
of men and material in place quickly,
efficiently and without mistakes is
of vital importance. DARPA has made
a considerable investment in the
development of sophisticated logistics
planning systems.
Consider the logistics of the Gulf War
and, on a smaller scale, the Balkan
conflicts. Thousands of people and
pieces of equipment needed to be
moved rapidly, not to mention food,
lodging, consumables, maintenance
material, medical support and administrative support (so people could get
their mail, etc.). Managing all this
effectively requires smart systems that
can anticipate and plan.
Robots at Your Service
In addition to planning systems, robots are another kind of goal-directed system. Today’s
robots tend to resemble machines or
toys (rather than humans) and focus on
a specific goal or task. Their defining
characteristic is mobility: either the
entire robot, or a part, moves. In general,
to be mobile a robot must be able to
sense and respond to its environment,
learn by trial and error, plan a course
of action (movement), and anticipate
future situations.
Robotic toys are in many ways laying the
groundwork for commercial utilitarian
robots. At this year’s annual Toy Fair in
New York, robotic toys were everywhere.
Many can sense walls and other obstacles,
as well as hear and recognize simple
speech and respond to commands. Some
can see. All can move.
I-Cybie is the newest robotic toy dog on the block. Fully motorized, he starts out just like a puppy; it is up to you to train him. If you’re successful, i-Cybie can learn to speak, bark, walk, lie down, shake your hand, perform tricks, follow hand claps and obey.
I-Cybie, a sophisticated robotic toy dog
being developed by Tiger Electronics
(maker of Furby), can walk without
bumping into walls, wiggle its head, find
objects and even seek a recharging system
when its battery is low. The $200 dog
has been described as more real than
mechanical, even “soulful.” But development of the sensor-laden pooch has
been a challenge; the company missed
several important deadlines just to get
the dog to walk.
While robotic toys are fun, robots that
are utilitarian will doubtless have more
staying power. The beauty of utilitarian
robots is that they can be used to do
things man either doesn’t want to or
can’t – everything from vacuum cleaning
to laying cable in sewers to planetary
exploration.
Consider CareBot PCR 1.1, a wandering robot for the home. CareBot, controlled from a PC, builds its own navigation map to explore your house (avoiding obstacles) and can do the vacuum cleaning, its main function. CareBot was inspired by work done on autonomous robots by Rodney Brooks, director of the MIT Artificial Intelligence Laboratory. CareBot is a versatile robot; by changing its program, the robot can perform different functions. GeckoSystems, the maker of CareBot, plans to have CareBot care for the elderly, monitor children, run errands and control household appliances.
CareBot gives us a glimpse of what
domestic robots will be like in the near
future: flexible, expandable and capable
of simple household tasks. Variations
may guide visitors through museums and
exhibitions or assist people in restaurants
and shops. Although CareBot’s current
capabilities are limited, the product
demonstrates that autonomous robots
are technically and commercially feasible.
Far from the comfortable confines of
homes, museums and shops, robots are
also making their way into city sewer
systems, where they are being unleashed
to lay fiber-optic cable. In Albuquerque,
the Sewer Access Module, or SAM, works
in the dank recesses beneath the street,
where it is dirty, cramped and foul-smelling. The robot, manufactured by
KA-TE Systems of Switzerland and
already being used to lay fiber-optic
cable in Hamburg, Germany, is on a
mission to bring high-speed access to
homes and businesses without tearing up
pavement. SAM was originally designed
for sewer maintenance but has been
adapted for this higher purpose.
Robots are a natural for working in
difficult or hazardous environments,
including chemical and nuclear plants,
waste processing facilities, and deep-sea
exploration. NASA is particularly interested in autonomous robots for planetary
exploration. Because radio signals take many minutes or even hours to reach other planets, controlling robots from Earth in real time is impractical. Even the control of “semi-autonomous” simple robots like the Mars Pathfinder turns out to be tedious.
To be effective, a planet-exploring
robot must be able to function on its
own, independent of remote control.
Dante II, another robot developed by
NASA, explored the interior of Alaska’s
Mount Spurr volcano in July 1994, while
Nomad, developed by NASA and
Carnegie Mellon University, successfully
completed a 200-kilometer trip through
the Chilean Atacama desert. Nomad
performed even better in January 2000
during a mission to the remote Antarctic
region of Elephant Moraine. Without
any help, Nomad found and classified
five indigenous meteorites and dozens
of terrestrial rocks, illustrating the
capabilities of autonomous robots in
planetary exploration.
In the future, we can expect to see many
more autonomous robots. These robots
will determine their behavior by reasoning about their goals and planning how
to realize them. The basic capabilities
of these systems will include avoidance
and seeking of objects, maneuvering
through a space, and laying out a “plan
of the world” in memory.
Artificial Life
The ultimate in anticipating systems is
artificial life – systems that can evolve,
on their own, into a higher form of
intelligence.
In nature, systems organize by themselves, despite the second law of thermodynamics, which states that isolated systems become less and less organized. These naturally-organizing systems do so because they are open systems, communicating with their environment and expelling disorder to increase internal order.

Self-organization reflects natural intelligence, including the ability to anticipate. Thus it has attracted the attention of many AI researchers, spawning the research domain called artificial life (also called amorphous computing or DNA computing).
Artificial life applies the basic concepts
of natural self-organization to machines
and computers. The main challenge of
artificial life research is to understand
how intelligent behavior can originate
from simple rules. Ants are quite simple
animals, yet an ant colony is a highly
sophisticated and intelligent social
organization. The same concept can
be applied to micro- (or even nano-)
machines: a large colony of micromachines, each capable of simple
interaction, could together carry out
an intelligent task. If we could better
understand self-organization, we could
build embryonic systems and set them
off to evolve into a higher form of
organization and intelligence.
Artificial life eliminates the need to program expected system behavior explicitly.
Instead, simple behavior is programmed
into the components – a task that is
many times smaller. Overall smart
behavior results from the interaction of
the components. Thus highly distributed
systems, like networks, are early examples
of artificial life. It is not possible to define the overall behavior of a complex distributed network from the top down. Instead, designers work from the bottom up: they define how individual nodes should behave and then let them work together to dynamically
adjust to overall traffic loads.
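A tiny simulation, with an invented three-node topology and made-up loads, illustrates the bottom-up style: each node follows one local rule and knows nothing about the network as a whole, yet the total load drifts toward balance without any global program.

# Each node follows one local rule: if a neighbour carries noticeably less
# load, hand over a unit of work. No node sees the global traffic picture.
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
load = {"A": 9, "B": 3, "C": 0}

for _ in range(20):                      # let the local rule run for a while
    for node in load:
        for other in neighbours[node]:
            if load[node] > load[other] + 1:
                load[node] -= 1
                load[other] += 1

print(load)                              # loads end up roughly even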
Artificial life also manifests itself in more human-like systems such as humanoid
robots and cybercreatures. COG is the
humanoid robot project of MIT’s Brooks.
COG displays many aspects of human
behavior, such as head and limb movements and facial expressions, and the
robot is quite interactive with humans.
COG has learned some of its basic skills, such as pointing to objects, on its own.
Lucy Mathilda is a baby orangutan
robot from Cyberlife Research. She (it)
was “conceived” in May 2000. The robot
has a brain and a nervous system and
will be equipped with sensors and actuators to interact with the environment.
Lucy Mathilda is to be educated like an
infant in order to develop a mind of
her own. The idea is to create artificial
life by “educating” an intelligent system
and taking it through phases like babyhood, childhood and formative years.
The creators hope that Lucy Mathilda
will evolve into an intelligent system that
culminates in an anticipating system.
Lucy Mathilda is a second-generation
cybercreature created by Steve Grand, the founder of Cyberlife Research. In 1994 Grand co-founded Cyberlife Technology, now Creature Labs, the
creator of a series of software artificial
life forms called Creatures. Today
Creature Labs is marketing its third
generation of Creatures, mostly for
computer games.
The Creatures, dubbed Norns, have
digital DNA, so they can inherit characteristics from their digital ancestors and
pass them on to their offspring. New
Norns are created by combining DNA
from two parents into an “egg.” Norns
have a neural network for control and
learning. They possess elementary language skills and can communicate with
each other. With the ability to grow,
take care of themselves, move and converse, Norns show signs of simple
human behavior and intelligence.
Example of the world’s first robot completely designed
and fabricated by a robot, as part of the Golem project
(see http://golem03.cs-i.brandeis.edu/).
Moving from the virtual world of Norns
out into the physical world, robots at
Brandeis University have learned to
spawn other robots. The Genetically
Organized Life-like Electro-Mechanics
(Golem) project looks at life “as it could
be” and has resulted in creating robots
that can build better robots.
The robots are small and simple. Their
only task is to move themselves across a
desk. A genetic algorithm is used to
create the best design for accomplishing
this task. (The movement of the robot
is controlled by neural networks so the
robot can learn to move better as it goes
along.) After the design is complete, a
3D printer that makes plastic shapes is
used to build the body of the robot (this
printer was originally designed to create
prototypes of cell phones by dripping
plastic into shapes). Humans are needed
for only one step – snapping the motors
and wires into place on robots – but the
robot instructs them.
Anticipating systems, the ultimate smart systems, have a long way to go.
Nonetheless, their utility is palpable,
even in today’s nascent systems.
Anticipating systems, always a step
ahead, offer exciting promise for a
smart new world.
Smart New World
Smart technology will change the world
gradually but profoundly. The development, commercialization and acceptance
of smart technology will lead to a world
characterized by new levels of:
• Safety from continuous monitoring
• Efficiency from ubiquitous smarts
• Convenience from useful robots
• Speed from all things digital
• Profitability from business intelligence
• Well-being as homo superior
Safety from Continuous
Monitoring
The development and commercialization
of sensors and interactive systems will be
a major contributor to the smart world
of tomorrow. All devices and systems
will be monitored, from household
appliances to cars to chemical plants.
These systems will perform self-diagnostics and have the ability to report errors
and initiate repair.
In the near future, much of our environment will be constantly monitored by a
multitude of sophisticated sensors. In
and near plants and factories, air and
water will be closely monitored by
“electronic noses” detecting chemicals
and analyzing air and water composition.
In our homes, air quality and the environment will be constantly monitored and
adjusted. Security systems will monitor
the perimeter of our property and contact
us immediately when anomalies occur.
When someone rings your doorbell, you will be able to speak with the visitor by wireless videophone and see the person at the door on the phone screen.
Tracking and logging the whereabouts
of people will become commonplace. All
wireless devices will include positioning
devices, igniting a multitude of locating
services. Today a few techno-savvy parents
locate their children; tomorrow parents
and communities will routinely keep tabs
on children, the elderly and impaired.
Insurers will monitor the whereabouts
of the cars they insure. As more and
more people and devices become
“locatable,” police will be able to track
people and things – within the limits
of the law – easily.
It is not unimaginable that within a
decade everyone will carry a locator,
either embedded in a wireless phone or
implanted in the body. When locators
are combined with biosensors, constant
monitoring of the human condition will
be possible. At first, at-risk patients will
be monitored. As these systems become
cheaper, everyone will receive a personal
monitor. The actual and potential
use of location and body data will raise
fundamental ethical issues that will
never be fully resolved.
In the next few years, smart materials
will leave the labs and find applications
throughout industry. The strength,
reliability and performance of many
things, from pens to cars to bridges, will
increase dramatically. Microtechnology
and nanotechnology, the successors of
smart materials, will continue to evolve.
The simple microdevices of today
will pave the way for complex micro- and nano-machines with thousands
of applications such as self-cleaning
surfaces, smart drugs and self-assembling
constructions.
Efficiency from
Ubiquitous Smarts
Smart technology will be included in
just about every device and system being
developed. Even the simplest household
appliances will be connected to a network
and will interact naturally with users,
recognizing them and knowing their
preferences.
Personal devices such as phones, personal
computers and digital assistants will take
over many simple tasks like ordering
routine groceries, composing a personal
newspaper and making restaurant reservations. They will be able to relate different
tasks to each other, remember previous
problems they solved, and recognize
similar situations. You will be able to
instruct such a system to “book the
same trip as last New Year’s Eve for
next weekend.”
Homes, offices and public places will
naturally evolve into smart environments,
aware of users and occupants and
adjusting accordingly. Security will be
a built-in but largely invisible feature
of smart environments, detecting and
perhaps logging occupants and recognizing security hazards (e.g., identifying
criminals when they enter a bank or a
public place). Sometime in the next 10
years, a bank clerk will have all the
information about you on his or her
computer screen before you identify
yourself at the counter – even if the
clerk has never seen you before. The
smart environment will have recognized
you immediately upon entering the bank.
Interaction with smart environments will evolve from keyboard and screen technology to natural interaction using speech and gestures. Perhaps the keyboard will not disappear, but it will not be used much. Today’s video monitors will be replaced by wearable devices that project 3D images onto the retina, or by high-resolution wall projection systems that are less intrusive. Pointing to an object on a wall screen from the other side of the room will be sufficient to start a video playing.

Mobility is a key measure of any civilization. As society and technology advance, we will travel even more than today. Cars and roads will evolve in several stages from today’s chaotic systems into fully managed systems. In the first phase, ongoing today, smarts are fragmented and isolated. Cars are equipped with navigation and collision avoidance systems. Traffic lights adjust dynamically to changes in traffic load. Some highways feature variable speed limits and local access filtering.

In the second phase, basic interactions will emerge between cars and roads and between cars. Tracking cars on major roads and highways will be commonplace. This will enable new forms of vehicular management. Because all cars will be identified, tolls can be collected automatically, deducted from the bank account associated with the car’s owner. Speeders will be automatically fined, receiving their ticket in the mail. Road access restrictions will be easy to impose. For instance, access to residential areas could be blocked for non-residents’ cars unless special permission is given by a resident.
In the third phase, cars will be controlled
by a central computer and will “drive by
wire.” On highways, a central computer
will negotiate lane use and a common
speed for neighboring cars, taking full
control and permitting a small but safe
distance between cars until control is
returned to the driver to exit the highway.
Finally, in the last stage traffic systems
will be fully planned and centrally
managed. Drivers will gain access to the
system of main roads and highways by
submitting a trip plan. The traffic computer will then plan the trip, taking into
account traffic and capacity, and will
hand out a departure slot – a designated
time to enter the managed traffic system.
During the trip, car computers will
interact with traffic computers, following
the trip plan and using the designated
roads at the planned times.
The end result of smart traffic systems
will be more efficient road use, safer
highways and fewer delays.
Convenience from Useful Robots
The maturing of smart technology will
finally enable the development of useful
robots. These robots will not be the
humanoids portrayed in science fiction
but, rather, will resemble highly specialized machines.
We can envision the day when robots
will come out of the labs to perform
planetary explorations and highly
specialized microsurgery. The use of
robotics is already beginning to transform
the operating room as we know it into
the Intelligent OR, making it safer, more
productive and more cost effective.
“Computers and robotics will be the
enabling technology to create the next
generation of surgical instruments and
procedures,” says Dr. Richard Satava,
professor of surgery at Yale University
Medical Center.
Robots will also make inroads into agriculture and industry. Specialized robots
will work in fields, plowing and harvesting.
The intelligent operating room of the future uses the ZEUS™ Robotic Surgical System, which enables advanced endoscopic procedures by using robotic technology to enhance the surgeon’s natural dexterity and precision (see http://www.computermotion.com).
Warehouse movements, loading and
unloading trucks, and physical distribution will be taken over by robots
directly interacting with sophisticated
logistics systems, something that is
already implemented in some advanced
warehouses today.
And finally, as the cost of robot technologies drops, we can expect mass
adoption of specialized home robots,
doing simple but tedious physical jobs
such as gardening and cleaning house.
These robots will be smarter and more
useful than most of the experimental
home robots of the 1980s and 1990s.
Speed from All Things Digital
We are moving towards the day when
virtually all devices, services and businesses
will be accessible through a digital interface. This includes appliances, which will
come standard with a home network
interface; radio, TV, newspapers and
books, which will be available in digital
form; and mom-and-pop shops, which
will also operate electronic store fronts.
This widespread digital access is key for
full integration of intelligent systems.
When logistics systems can interface
with traffic management systems,
companies can plan delivery routes in
advance. There will not be a single
business without an e-business interface,
enabling customers to place orders,
trace them and look up service information on their own.
Moreover, most digital interfaces will
be standardized, documented and
predictable. This will make it easy to
create interfaces between any two systems,
doing away with the lengthy process of
interface specification and building. E-business interfaces will be standardized
around XML documents. Once a device
supports these standards, it can be used
to access any business for purchasing
goods or services. For example, you will
be able to book all aspects of a trip
from your PC or palm device because
the airline, hotel, car-rental company
and restaurants all have the same digital
interface.
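As a rough sketch of what such a shared digital interface could look like, a trip booking can be expressed as a small XML document that an airline, hotel and car-rental system could all parse. The element names and values are invented for the example and are not drawn from any published standard.

import xml.etree.ElementTree as ET

# Build a small, self-describing booking document. The tag names are
# illustrative; real e-business standards define their own vocabularies.
booking = ET.Element("tripBooking", customer="C-1042")
flight = ET.SubElement(booking, "flight", carrier="KL")
ET.SubElement(flight, "from").text = "AMS"
ET.SubElement(flight, "to").text = "ORD"
ET.SubElement(flight, "date").text = "2001-12-31"
ET.SubElement(booking, "hotel", nights="2").text = "Downtown Chicago"

print(ET.tostring(booking, encoding="unicode"))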
Virtually all processes related to business
will be electronic. A substantial amount
of buying and selling will be through
e-business, either direct (initiated by
people) or via agents that operate in
virtual marketplaces. All financial
operations will be fully integrated, both
internally as well as externally with
suppliers, customers, banks and other
parties. Even physical entities, like roads,
will be controlled by software, enabling
them to function smarter. Manufacturing
plants and warehouses will have a digital
interface to them that applies the
organization’s business intelligence
to optimize performance.
Perhaps the most important development
to support overall process integration is
the Business Process Modeling Initiative,
a broad industry initiative proposing a
common standard language, Business
Process Modeling Language (BPML), to
describe and manage business processes.
“Mobility, acute vision and the ability to carry out survival related tasks in a dynamic environment provide the necessary basis for the development of true intelligence.”
— Rodney Brooks
Director
MIT Artificial Intelligence Laboratory
“The initiatives undertaken by BPMI.org,
including BPML, will allow cross-enterprise networks to really happen,” says
Howard Smith, CSC’s European chief
technology officer and a key author of
the BPML specification. “We will have
the foundation on which to build agile,
collaborative networks that can evolve
as technology and business dictate.”
QUESTIONS YOU SHOULD BE ASKING
Following is a set of questions to consider regarding smart technology. Use these to stimulate discussion with colleagues and business partners.
1. What does “smart” mean to me and my organization?
2. Where are there opportunities to function smarter in my organization?
3. How could my organization use an adaptive network to configure itself and optimize the use of its resources?
4. What business processes are people- or time-intensive with fairly repetitive tasks? How could intelligent software agents be used to shift workload and reduce cycle time and costs?
5. What processes or procedures could be automated or improved using expert systems?
6. What products or services would have more value by adding smart technology to them?
7. How could my organization use sensors to observe or collect valuable data in places that are impossible or too dangerous for people to reach (e.g., deep in the ocean or inside machinery)?
8. How can decision-making in my organization be supported by expert systems, case-based reasoning or neural networks?
9. If I could run complex scenarios through advanced simulations to predict outcomes, how would that help my organization define new business models, marketing strategies or products?
10. What internal or external information (e.g., analyst reports, customer surveys) could be integrated into my organization’s knowledge store and mined to improve day-to-day operations, create truly personalized offerings, or enhance overall strategic planning?
11. How could my customers be better served if my organization could anticipate questions they might ask or needs they might have?
12. What are the downsides to smart systems and devices in my organization or home, and how do I safeguard against them?
Profitability from
Business Intelligence
Collaborative networks, both cross-enterprise and intra-enterprise, pave
the way for the next era of business
organization, where the focus will shift
from business processes to business
intelligence.
From strategic planners and business developers to customer service representatives and system architects, everyone in the organization will concentrate on tapping its knowledge and leveraging it to improve performance. For instance,
the factory of the future will rely on
information flows rather than labor to
make quality, customized products just
in time. Rapid communications and
instant data analysis will allow smart
machines to make smart products in
small quantities with high efficiency. IT,
which will play a crucial role, will no
longer stand for “information technology”
but rather “intelligence technology.”
Smart technology will become a major
enabler of the business intelligence era.
Expert systems, neural nets, machine
learning and, finally, anticipating systems
will be integrated into business software
to manage and manipulate business
knowledge. Organizations will use this
knowledge to better understand the
business, improve current performance,
predict how the business will evolve, and
act on those predictions to spur growth.
For instance, will changing market conditions require new sales channels? The
business intelligence system can send
electronic trading agents to new virtual
marketplaces, searching for new customers
and perhaps offering a new brand
or a different pricing scheme. Do new
customers demand faster delivery?
The business intelligence system can
contact new carriers electronically and
negotiate deals to provide express
delivery services.
Techniques like collaborative filtering
and data mining are harbingers of intelligent business. Several systems integrators
are developing “intelligence” layers on
top of enterprise resource planning
(ERP), smart logistics and customer relationship management systems to build
early business intelligence systems.
The era of business intelligence will come
upon us in the next decade. Cheap
instant data will cause massive changes
in businesses, which will have to adapt
to survive. Organizations, demanding
more for less, will look to business
intelligence to get the most out of their
operations, to uncover new growth
opportunities, and to maximize shareholder value. The change is inevitable.
Well-Being as Homo Superior
Finally, man himself will be affected by
the smart new world. Both physically and
mentally, his capabilities will flourish –
though at a price.
Implants and artificial aids will be used
on a large scale to restore lost capacities.
The physically impaired will be fitted
with artificial limbs. Many blind or deaf people will regain their senses through implants wired to their nervous systems. In the far future,
even limited brain damage may be
repaired by implanting intelligent silicon.
Healthy people may use the same technology to enhance their senses – seeing
at 400 percent or hearing frequencies
up to 100 kilohertz – or boost their
physical abilities.
Genome research promises applications in areas ranging from pharmaceuticals, biofuels and agriculture to forensics, anthropology, industrial processes and bioremediation. Source: U.S. Department of Energy Human Genome Program (see http://www.ornl.gov/hgmis).
Continuing advancements in genome
research promise radical innovation in
molecular medicine, waste control,
environmental cleanup, biotechnology,
energy sources and risk assessment, all
of which will enhance our well-being.
In the next decade, expert systems will
commonly assist us when dealing with
complex knowledge such as mathematics
or chemistry, or when repairing a jet
engine or even a TV. As we learn to
interact naturally with these systems, we
will consider them a natural extension
of our mental capabilities.
Farther into the future, learning assistants
will scan text books, the Internet and
technical literature, building knowledge
in a particular domain. When we start
using our learning assistant (probably
after having bought it at a hefty price),
it will teach us the basics of the domain
and assist us like an expert system in
more difficult problems. While the
learning assistant instructs us, it will
also keep abreast of the latest developments within the domain, regularly
digesting and storing new knowledge
for future reference. It will update us on
the most important new developments.
By using smart technology, man will
become stronger, smarter, more agile
and capable of solving more difficult
problems. However, these new capabilities come at a price. Every technology
has a dark side; smart technology raises
the specter of less (or no) privacy, information overload from 24-hour everything, more cyber attacks with more
devastating consequences (there will
be more to attack and it will all be
interconnected) and machines that are
smarter than man, potentially rendering
him passive.
But these fears can be addressed. We
may actually have more privacy because
in an interconnected world we can
control all of our data better. There may
be less information overload thanks to
collaborative filtering, smarter searches
and data mining. With more sophisticated
tools and techniques and heightened
awareness, cyber attacks may be easier
to detect, isolate and shut down.
While some envision that machines will
ultimately take over the world, either on their own or through a merging of man and machine, that is doubtful. Man has
already created devices that are more
powerful than he is without giving up
control. Cars travel faster and farther
than man. Airplanes fly. Ships cross the
high seas. Pocket calculators do math
faster and more accurately. And still,
man reigns supreme, the dominant
species on the planet. We should welcome
smart technology, not shun it, as a
positive extension of our capabilities.
However, we must be very careful about
how we harness smart technology and
how we educate our children to use it –
lest we lose more than we gain.
Indeed, we must bring sense and sensibility to bear. As Michael Dertouzos,
director of the MIT Laboratory for
Computer Science, writes: “To render
technology useful, we must blend it
with humanity. This process will serve
us best if, alongside our most promising
technologies, we bring our full humanity,
augmenting our rational powers with
our feelings, our actions and our faith.
We cannot do this by reason alone!”
In the end, it is unlikely that man and
machine will physically merge in the
next century. Rather, there will be a
close collaboration between man and
the smart machine that will make the
21st century man – in all his machine-enhanced humanity – truly superior to
his homo sapiens ancestor.
Appendix:
HANDY WEB SITES
Smart Systems: From Vision to Reality
Ray Kurzweil:
http://www.kurzweiltech.com/
Smart appliances:
LGE turbodrum:
http://www.lge.com/aboutus/news/pressroom/2000/2000_1012.html
Margherita2000 washing machine: http://www.margherita2000.com
Merloni: http://www.merloni.it
Thalia: http://www.thaliaproducts.com
Electrolux ScreenFridge: http://www.electrolux.com/screenfridge/start.htm
Adapting
Webpresence for people, places and things:
http://www.cooltown.hp.com/papers/webpres/WebPresence.htm
Jini:
http://www.sun.com/2000-0829/jini/
Jester:
http://shadow.ieor.berkeley.edu/humor/
Self-organizing Web sites:
http://www.thevines.com
http://themestream.com
Affective computing at MIT:
http://www.media.mit.edu/projects/affect/
Detecting driver stress:
http://www.media.mit.edu/affect/AC_research/projects/driver_stress.html
Semantic location: location as context:
http://www.cooltown.hp.com/papers/semantic/semantic.htm
Tracking systems:
http://www.eworldtrack.com/
Smart antenna:
http://www.iec.org/tutorials/smart_ant/
Self-organizing networks:
http://www.swiss.ai.mit.edu/projects/amorphous/Network/
Universal Plug and Play Forum:
http://www.upnp.org/
Nomadic computing by Leonard Kleinrock:
http://www.lk.cs.ucla.edu/LK/Bib/PS/paper185.pdf
Session Initialization Protocol at the IETF:
http://www.ietf.org/html.charters/sip-charter.html
Virginia cows with implanted GPS:
http://members.nbci.com/AceBoudreau/CowCam.htm
Sensing
OnStar:
http://www.onstar.com/
Egery:
http://www.egery.com
Discussion of Progressive’s “smart” insurance:
http://www.auto.com/industry/insure25_20010125.htm
MobilEye auto vision system:
http://www.mobileye.com/
Smart Trek traffic control in the Seattle area:
http://www.smarttrek.org/
Smart rooms:
http://vismod.www.media.mit.edu/vismod/demos/smartroom/ive.html
Face recognition system used at the Superbowl:
http://www.viisage.com
BodyMedia Sensewear monitor:
http://www.bodymedia.com/sec01_entry/01B1_sensewear.shtml
Artificial Silicon Retina:
http://www.optobionics.com
Smart pill:
http://www.eng.ohio-state.edu/nie/nie712/712_biosensors.html
Smart Integrated Lower Limbs project:
http://www.sandia.gov/media/NewsRel/NR2000/smartleg.htm
Sensor fish:
http://www.sciam.com/2000/0300issue/0300scicit1.html
Position sensing:
http://www.geodiscovery.com/home
ATCS speeding detection system:
http://www.atcs.nl/
Inferring
AMOS:
http://www.spacedaily.com/news/software-99c.html
Deep Blue:
http://www.research.ibm.com/deepblue/home/html/b.html
Java Expert System Shell:
http://herzberg.ca.sandia.gov/jess/
CLIPS download:
http://www.ghgcorp.com/clips/WhereCopy.html
Jess download:
http://herzberg.ca.sandia.gov/jess/
Fuzzy logic and anti-lock brake systems:
http://developer.intel.com/design/mcs96/designex/2351.htm#A2
On “fuzzy” being hazardous simply because of its name:
http://www.circellar.com/pastissues/articles/misc/88constantin.pdf
Data mining at Wal-Mart:
http://www.byte.com/art/9510/sec8/art2.htm
Data mining and understanding customers:
http://www3.shore.net/~kht/text/whexcerpt/whexcerpt.htm
Cycorp and Cyc:
http://www.cyc.com/
IRS:
http://www.irs.ustreas.gov
Learning
Case-based reasoning (CBR on the Web):
http://www.cbr-web.org/CBR-Web/
BrainMaker neural network, predicting rainfall:
http://www.calsci.com/Weather.html
SEP Brainware software for content analysis:
http://www.seruk.com/
Mixed initiative agents:
http://www2.csc.com/lef/programs/grants/finalpapers/skinner_mixed_initiative_agents.pdf
Applications of agents:
http://www2.csc.com/lef/programs/grants/finalpapers/sary_final.htm
Smart agents monitoring computer intruders:
http://www.sandia.gov/media/NewsRel/NR2000/agent.htm
Ward Systems Inc. neural net software:
http://www.wardsystems.com/
Cerebrus fraud detection software by Nortel Networks:
http://www.fraud-solutions.com/cerebrus/detection.html
CyberChemics:
http://www.cyberchemics.com
GenJam genetic algorithm that plays jazz solos:
http://www.it.rit.edu/~jab/GenJam.html
Thallium Diagnostic Workstation:
http://www.coiera.com/ailist/list-main.html#HDR26
Anticipating
Mapquest:
http://www.mapquest.com/
Path Objects:
http://www.caasd.org/proj/pathobjects/
Carebot robot:
http://www.geckosystems.com/
Human Genome project:
http://www.ornl.gov/hgmis/
Cog:
http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/
Golem:
http://golem03.cs-i.brandeis.edu/
Cyberlife creatures:
http://www.creaturelabs.com/
Creatures community:
http://www.creatures.co.uk
Lucy Mathilda:
http://www.cyberlife-research.com/Lucy/index.htm
Amorphous computing and self-organizing systems:
http://www.swiss.ai.mit.edu/projects/amorphous/
HAL’s legacy online:
http://mitpress.mit.edu/e-books/Hal/
TiVo:
http://www.tivo.com/
KA-TE Systems AG, which developed the Sewer Access Module:
http://www.ka-te-system.com/
Intelligent Robotics:
http://ic-www.arc.nasa.gov/intelligent-robotics/
TD-Gammon:
http://satirist.org/learn-game/systems/gammon/td-gammon.html
Smart New World
Michael Dertouzos on technology:
http://www.technologyreview.com/magazine/jan01/dertouzoskurzweil.asp
About the Author
Claude Doom is a senior technology consultant
in CSC’s Brussels office, focusing on all aspects of
networking, e-business architecture and information
technology strategy. He has helped numerous
organizations orient themselves to new information
technologies and implement modern infrastructure
and applications.
Claude was awarded an LEF technology grant in 1999
to research IP version 6, the next Internet protocol.
As a result of this work, he was named the LEF
Associate for 2000. In this role he researched smart
technology and intelligent systems. He investigated
a variety of subjects ranging from smart appliances and smart environments to
expert systems for businesses and problems of machine consciousness.
Before joining CSC in 1997, Claude worked with a major Belgian bank and with
Alcatel; he had previously been an astrophysicist for nine years. Claude researched
the evolution of massive stars and binary systems, as well as the structure and the
evolution of the sun.
Claude is a regular speaker at international seminars and writes frequently on networking technology and the strategic aspects of information technology and e-business.
Acknowledgments
I wish to thank those who contributed to the research, development and review of
this report:
Lisa de Araujo, Creature Labs
Jacques Auberson, CSC
John Barrer, MITRE
Tom Beers, IRS
Joost van Boeschoten, CSC
Lisa Braun, CSC
Cynthia Breazeal, MIT AI Lab
Peter Cochrane, ConceptLabs
Deborah Cross, CSC
Walt Davis, Motorola
Dick Dijk, CSC
Martin Evertse, CSC
Pierre-Joseph Gailly, CSC
Jean-Louis Gross, CSC
Paul Gustafson, CSC
Artie Kalemeris, CSC
Bill Koff, CSC
David Lasseel, CSC
Doug Lenat, Cycorp
Ed Luczak, CSC
Luc Mercier, CSC
Douglas Neal, CSC
Brad Nixon, CSC
Richard Pawson, CSC
Roger Payne, BT
Rosalind Picard, MIT Media Lab
Bruce Radloff, General Motors
Roland Sanguino, CSC
Charisse Sary, CSC
Rin Saunders, CSC
Jim Skinner, CSC
Howard Smith, CSC
Rich Stillman, CSC
Herman Vijverman, CSC
Computer Sciences Corporation
Worldwide CSC Headquarters
The Americas
2100 East Grand Avenue
El Segundo, California 90245
United States
+1.310.615.0311
Europe, Middle East, Africa
279 Farnborough Road
Farnborough
Hampshire GU14 7LS
United Kingdom
+44(0)1252.363000
Australia/New Zealand
460 Pacific Highway
St. Leonards NSW 2065
Australia
+61(0)2.9901.1111
Asia
139 Cecil Street
#08-00 Cecil House
Singapore 069539
Republic of Singapore
+65.221.9095
About CSC
Computer Sciences Corporation, one of the world’s leading information technology services providers, helps
organizations achieve business results through the adroit use of technology. Since its formation in 1959,
CSC has earned a customer-centric reputation for developing and managing solutions specifically tailored
to each client’s needs. No other company offers the same range of professional services and global reach
as CSC does in areas such as e-business strategies and technologies, management consulting, information
systems consulting and integration, application software, and IT and business process outsourcing.
www.csc.com
Copyright © 2001 Computer Sciences Corporation. All rights reserved.