Intelligent User Interfaces for Ubiquitous Computing
Prof. Dr. Rainer Malaka
Bremen University
Faculty 3, Computer Science
P.O. Box 33 04 40
D-28334 Bremen
Tel.: +49 (0)421 218-7745
Fax: +49 (0)421 218-8751
e-mail: [email protected]
To appear in: Handbook of Research on Ubiquitous Computing Technology for Real Time Enterprises
Edited by M. Mühlhäuser and I. Gurevych
INTELLIGENT USER INTERFACES FOR UBIQUITOUS COMPUTING
ABSTRACT
Designing user interfaces for ubiquitous computing applications is a challenging task. In
this chapter we discuss how to build intelligent interfaces. The foundations are usability
criteria that are valid for all computer products. There are a number of established methods
for the design process that can help to meet these goals. In particular, participatory and
iterative human-centered approaches are important for interfaces in ubiquitous
computing. The question of how to make interfaces more intelligent is not trivial, and there
are multiple approaches that enhance either the intelligence of the system or that of the user.
Novel interface approaches follow the idea of embodied interaction and put particular
emphasis on the situated use of a system and the mental models humans develop in their
real-world environment.
Keywords: usability engineering, user interface, interaction design, intelligent user
interfaces
User interfaces for computational devices can be challenging for both their users and their
designers. Even such simple things as VCRs or TV sets feature interfaces that many people
find too difficult to understand. Reviews and tests of consumer electronics often
weight bad usability even more heavily than technical aspects or the originally intended main
function of the device. Moreover, most modern appliances differ little in
their core functions. For instance, TV sets differ less in display and sound
quality and more in the way the user interacts with the device. This already
shows why user interface design is crucial for any successful product. However, we want to
extend the question of user interface design in two directions: the user interface should
become more intelligent and adaptive and we want more suitable interfaces for Ubiquitous
Computing scenarios.
The first aspect seems clear at first sight: Intelligent user interfaces are just what we
want, and nobody will deny the need for smart, clever, and intelligent technology. But it
becomes more difficult if we strip away the buzzwords and dig a bit deeper into the
question of what an intelligent user interface actually should do and how it would differ
from an ordinary interface. Would the standard interface then be a stupid one?
The second aspect introduces a new level of complexity: An interface is by definition a
clear boundary between two entities. A user interface resides between human and machine;
other interfaces mediate, for instance, between networks and computers. In Ubiquitous
Computing we have the problem that there might not be a clear boundary any more.
Computers are no longer visible and in the end, they can disappear from the user’s
conscious perception. We will, therefore, face the challenge of building an interface for
something that is rather shapeless.
In the following, we will go through these questions in more detail and introduce
some general approaches for designing user interfaces. We will see that we can learn from
good interface design for other – classical – devices, and that we can apply many of those
user interface design principles for Ubiquitous Computing as well. A central aspect will be
the design process that helps to find the right sequence of steps in building a good user
interface. After discussing these general aspects of user interface design, we will focus on
the specific needs for Ubiquitous Computing scenarios and finally on how to build
intelligent user interfaces – or to be less euphemistic: to avoid stupid interfaces.
BUILDING GOOD USER INTERFACES
The design of a good user interface is an art, which has been ignored for a long time in the
information and communication technology (ICT) business. Many software developers just
implemented whatever they found useful for themselves and assumed it would also be
beneficial for the respective users. However, most users are not software developers, and
their way of interacting with technology is very different. Sometimes, the result is technology
that is highly functional and useful for a small group of people, namely the developers of
the system, and highly inefficient, frustrating or even unusable for most other people. Some
striking examples of this dilemma can be found in the communication with the user when
something goes wrong: An error message notifying the user that “an error occurred, code 127”
might be of some use for the developer and help in his efforts in debugging the system, but
a user will hardly be able to understand what went wrong.
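To make this contrast concrete, the following minimal Python sketch shows the same failure reported once for the developer log and once for the user; all error codes, message texts and function names are invented for illustration.

```python
# Hypothetical sketch: translating developer-oriented error codes into
# task-oriented user messages. Codes and wording are illustrative only.
DEVELOPER_ERRORS = {
    127: "command not found",      # terse, useful for debugging
    13: "permission denied",
}

USER_MESSAGES = {
    127: "The requested function is currently unavailable. "
         "Please try again or contact support.",
    13: "You do not have access to this feature. "
        "Ask your administrator to enable it.",
}

def report_error(code: int, for_developer: bool = False) -> str:
    """Return an error message suited to the audience."""
    if for_developer:
        return f"Error {code}: {DEVELOPER_ERRORS.get(code, 'unknown error')}"
    # The user-facing fallback explains consequences, not internals.
    return USER_MESSAGES.get(code, "Something went wrong. Your work has been saved.")

print(report_error(127))                      # user-facing wording
print(report_error(127, for_developer=True))  # log/debug wording
```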
Today usability plays a much bigger role and many systems (including computer systems)
are now designed with more care for easy and safe usage. This is partly due to
legal constraints demanding accessibility, but also due to the fact that many systems do not
differ much in their technical details, so vendors have to differentiate their products solely
in terms of their “look and feel”. We now have a wealth of methods, tools, and guidelines,
which all help to develop a good user interface (Dix et al., 1998; Mayhew 1999). However,
there is not one single recipe whose application guarantees 100% success. The essence of
usability engineering is to work iteratively in order to achieve the goal of better usability.
Let us briefly go through these steps and summarize some of the most important issues of
usability engineering. For more detailed information, a number of textbooks and research
articles can be consulted (Dix et al., 1998; Nielsen, 1993; Shneiderman, 1997).
The first question of usability engineering is what goals we actually want to
achieve. The typical list of usability goals contains at least the following five (ISO 9241,
2006):
Safety and security
Good design should not harm users or other people affected by the use of a product. It
should also help to avoid errors made by humans in using the system.
Effectiveness
A good user interface supports a user in solving a task effectively, i.e., all aspects of a task
can be actually handled.
Efficiency and Functionality
A well designed and usable system should allow for quick and timely work.
Joy and fun
How enjoyable is it to work (or play) with the system? Is it fun or is it a pain to interact
with it?
Ease of learning and memorizing
How quickly can new users learn to interact with the system, and will they remember what they learned?
This list, of course, is not exhaustive and not all aspects can be fulfilled to the same (high)
degree, which is to say that there are classic trade-offs. Some aspects, therefore, might even
be in conflict with others and it is important to identify such conflicts and to decide which
aspect to optimize and to what extent. For instance, when designing an interactive game,
joy and fun might be more important and effectiveness is less important. In contrast, a
system for firemen has to be more efficient and can be less fun. Another typical trade-off
exists between the need for efficient work and for training. One solution can be to provide
two modes: an expert mode and a novice mode.
As a general rule, all efforts and goals of usability should be measurable in quantitative or
qualitative ways. And since most usability criteria depend on the actual use of a system,
there is a need to involve users in the design process. Of course, many human factors have
been studied and psychologists have theories about how people can perceive information
and how they can – in principle – react. But, in order to actually find out if the goals are
met, one must try things out with actual users. And the less familiar your application
terrain is, the more involvement of users is required. This is of particular importance for
Ubiquitous Computing because there is not yet a large body of experience, studies, and
guidelines at hand.
The design process that involves users has been named human-centered design (ISO 13407,
1999). Its principle is to develop an application iteratively with evaluations in every cycle.
Human-centered design is also regarded as the best approach when design goals are hard to
formalize in technical terms.
There have been multiple approaches for system design processes that involve the users.
Their roots are in the participatory design idea from Scandinavia that involves workers in
the definition and design of their working environment (Olson & Ives, 1981). In contrast
to the classical waterfall model in systems engineering (Royce, 1970) that segments the
design process into a linear order of clearly separable steps, these models iterate and
involve users and evaluations in each cycle. A number of models have been proposed
replacing the waterfall scheme by cycles or stars, i.e., the design process is open and
decisions can be revised depending on user feedback during development (Gould et al.,
1991; Hartson & Hix, 1989; Hix & Hartson, 1993). Since many usability goals are not
well-defined and cannot be formally specified beforehand, these models allow for a
continuous evolution of the usability of the system (Fig. 1).
Fig. 1: Star model for user-centered design (Hartson & Hix, 1989). Evaluation sits at the center of the star, connected to task analysis/functional analysis, requirements/specification, conceptual design/formal design, prototyping, and implementation.
The design steps in these models are the following:
Definition of the context
As a first step, designers should consider the context of their envisioned product. This
includes defining the way the system will be used, whether it will serve life-critical or entertainment
purposes, in home or office environments, as well as the market situation. The latter is
important because it tells something about the expectations of users and about who is going to
buy the product. In general, not only the target users are involved in deciding about the
success (i.e., sales) of a product. Decisions may be made by the users’ managers, and these
decisions can influence third parties such as the customers or clients of the users.
Description of the users
Based on the context definition, each group of directly or indirectly affected users must be
carefully analyzed. Their physical and cognitive abilities as well as their cultural and social
background may affect the way they interact with the system. Special needs may play a
role. Accessibility has become important for IT systems and is demanded by many legal
regulations, in particular in working environments.
Task analysis
Multiple techniques help to derive a rather formal description of the task users want to
solve from informal interviews and observations. Most importantly, designers should find
out how users actually solve their task currently (not how they think they do it) and how
they make use of tools at hand, how they communicate and how their context influences the
course of activities.
Requirements/Specification
This step would have been the first step of the classical software development process. For
user-centered design, it is now based on a better understanding of the users, their context
and their tasks. Moreover, the specifications can be changed in each iteration whenever a better
understanding of the system has been gained through evaluations.
Conceptual design/formal design
The requirements and specifications are translated into system components.
Prototyping
Instead of “doing it right the first time” we iteratively build prototypes of the system. A
prototype can be a very simple design sketch or an almost complete and working system
depending on the stage of iterations in the design process.
Evaluations
Evaluations are essential for assessing the progress of the design process and for deriving a
better understanding of the tasks, the requirements and thus a better specification and
implementation of the system (prototype).
Implementation, Tests, Maintenance
Once the iterations have reached a stage where the prototype sufficiently fulfills the
design goals, the final prototype (product) can be implemented. Of course, tests and
maintenance are as important as in classical system engineering. Moreover, they can help to
further improve the system and in particular user feedback after deployment can be used for
defining new development cycles.
These design steps are the building blocks for good user interface design. They are very
generic, and they are valid for basically every interactive system. Iterative development,
however, is indispensable for the design of human-computer interaction in Ubiquitous
Computing, as we enter a domain of interactive systems where we cannot derive system
requirements from interaction goals without user involvement. This is mainly due to the
fact that interaction in Ubiquitous Computing aims at intuitively usable pervasive IT
systems that assist users in their real-world endeavors. Without taking these aspects into
account, these systems are subject to failure. Many Ubiquitous Computing prototypes are
completely technology-driven. Their developers focus on smart new gadgets, networks and
infrastructure but they do not focus their design efforts on their users. Just for the sake of
plausibility, some usage scenarios and users are added to the design. Such systems will not
leave the research labs and they will fail to find their market.
EVALUATIONS, ALTERNATIVES AND PROTOTYPES
The importance of iterations in designing intelligent user interfaces for Ubiquitous
Computing has now been emphasized. However, how should that bootstrapping actually
happen? Where to start, how to proceed, and when to stop? If we need a full-fledged
prototype in each iteration along with an evaluation with a high number of users, the costs
for developing a better Ubiquitous Computing application will rapidly explode.
Fortunately, things can be done more efficiently and some techniques help to manage the
process:
Where to start?
In order to get a first impression of how to build a system that actually meets the usability
goals, e.g., being an understandable and enjoyable assistant for some user task, we do not
need any system but can make up a fake system without bothering with how to build a real
one. A number of methods can be used (Dix et al., 1998; Shneiderman, 1997):
Design sketches: Instead of actually building something that looks like a real system, users
or usability experts can evaluate early design ideas. First sketches on paper or on a
blackboard can already give an impression of the designer’s ideas, and feedback can
already help to avoid basic mistakes. Moreover, the discussion can facilitate the mutual
understanding of the users’ world and the prospective system.
Wizard of Oz experiments: If, however, the users should already get an impression of how
the interaction with the system might look, the system can also be simulated. A human
operator remote-controls all functions of the environment, and the test users are told they are
already interacting with the system. This technique has proven extremely fruitful
for systems that need data on the interaction in advance. For instance for systems that are
language controlled, Wizard of Oz experiments can be used to collect utterances and
language patterns that help to build speech recognizers and grammars for the real system.
Mock-Ups: A mock-up is a model of the system that already exposes the “look and feel”
but does not yet include the real functionality of the intended system. Early mock-ups for
graphical interfaces can, for instance, consist of a PowerPoint walkthrough of a
system or some Web sites emulating a system.
Prototypes: In contrast to the mock-up, the prototypes include actual functionalities of the
target system. They may iteratively evolve to the final system.
Since many applications for Ubiquitous Computing scenarios are embedded into real-world
tasks and many of them are also affected by or affect other objects in the users’
surroundings, Wizard of Oz experiments are a cheap and very beneficial first step in system
design. They can help to understand how people would interact in an environment that is
enhanced by Ubiquitous Computing technology. Moreover, the designers get data that help
to design interaction with the system. For most cases of more natural interaction such as
speech or gesture, such data is necessary anyway because the recognizers need training
data.
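As an illustration of this data-collection aspect, a Wizard of Oz setup can be as simple as logging every observed user utterance together with the wizard’s simulated system response, so that the resulting corpus can later train recognizers and grammars. The following Python sketch shows the idea; the file format, function names and example utterances are assumptions, not a prescribed tool.

```python
# Hypothetical Wizard of Oz logger: the "wizard" observes the test user,
# triggers a simulated system response, and each exchange is written to a
# timestamped corpus file for later recognizer/grammar training.
import json
import time

CORPUS_FILE = "woz_session.jsonl"

def log_exchange(user_utterance: str, wizard_action: str) -> None:
    """Append one user utterance and the wizard's simulated response."""
    record = {
        "timestamp": time.time(),
        "user": user_utterance,
        "system": wizard_action,
    }
    with open(CORPUS_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# During a session, the wizard might log exchanges like these:
log_exchange("turn the kitchen light on", "LIGHT_ON(kitchen)")
log_exchange("err, make it a bit warmer in here", "HEATING_UP(living_room)")
```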
How to proceed?
Evaluation is the core of the above-mentioned “star model”. Depending on the maturity of
the design, the budget and the nature of the system, a great variety of evaluation techniques
can be used. Evaluation methods can be classified according to the following dimensions:
• Qualitative vs. quantitative methods: In qualitative methods, feedback in the form of
comments, impressions and subjective ratings is collected in interviews or
questionnaires. Quantitative methods measure parameters such as error rates, task
completion times or movements of users in order to estimate the quality and efficiency
of an interface (a small computational sketch follows this list).
• Studies in the field or in the lab: Field studies are conducted under realistic conditions
where the systems are actually used, e.g., in the office or home of the users. They
usually need more effort than studies in the lab under simulated conditions, but they
yield more realistic results.
• User tests or expert evaluations: User studies involve real test users. They are more
expensive than expert evaluations, where a few experts judge the system based on their
experience of user behavior and the application domain. There are many well-known
techniques for both – such as cognitive walkthrough, discount evaluation, or thinking
aloud – and in some cases even combinations may be useful.
• System state (sketch, mock-up, prototype …): As discussed above, in early evaluations,
a system does not necessarily have to be fully functional but can rather be a sketch or a
mock-up.
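As announced above, quantitative measures such as error rates and task completion times can be computed mechanically from logged trials. The following Python sketch assumes each trial is recorded as (completed successfully, seconds to completion, error count); the data is purely illustrative.

```python
# Illustrative trial log: (completed_successfully, seconds, error_count).
trials = [
    (True, 74.2, 0),
    (True, 91.5, 2),
    (False, 180.0, 5),   # aborted trial
    (True, 63.8, 1),
]

completed = [t for t in trials if t[0]]
completion_rate = len(completed) / len(trials)
mean_time = sum(t[1] for t in completed) / len(completed)
errors_per_trial = sum(t[2] for t in trials) / len(trials)

print(f"task completion rate: {completion_rate:.0%}")
print(f"mean completion time: {mean_time:.1f} s (successful trials only)")
print(f"errors per trial:     {errors_per_trial:.1f}")
```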
It is beyond the scope of this chapter to go into all details of evaluation techniques. We will
focus rather on the most important aspects for Ubiquitous Computing interfaces.
Even though evaluation is crucial for the design of good interfaces, it should be noted that
evaluation techniques do not solve all problems and can even be misleading. One of the
main problems of evaluations is that they are always limited snapshot observations
restricted in the time of usage and the complexity of the context. This is important to note,
in particular, for Ubiquitous Computing systems interfaces. Take, for instance, the famous
Ubiquitous Computing scenario of an intelligent refrigerator that keeps track of its contents
and can alert a user when she is running out of milk. In an evaluation setting one could look
at users while they are at home or while they are in a supermarket and one could measure
how they react to notifications of the system. A questionnaire reveals whether the users like the
system and would like to buy it when it comes onto the market. In a realistic setting, a
study would observe some 10 to 20 users each over a time span of one to two hours of
interaction. All would be in the same representative supermarket and in some model
kitchen. The results would be definitely interesting and the study would even go beyond
many other evaluations of similar systems. However, it is too limited for multiple reasons:
• No long-term observation: Since users would interact with such a Ubiquitous
Computing system not only for a few hours but rather over months or years, the short
interaction of a novice user does not reveal much about the user’s future interaction.
• Limited frame of context: In order to gain comparable results, all users are set to the
same or a similar context. In everyday situations, however, contexts may differ a great
deal and users show a much higher degree of variation in their behavior.
• Additional tasks, people, and devices: As with most Ubiquitous Computing
applications, users may not be focused on just one task but may be doing many other
things concurrently. They could have other devices with them or be interacting with
their colleagues or family members.
These limitations of evaluation results make some of them questionable. However, by using
a good and careful evaluation design, some aspects can be counterbalanced. Moreover,
keeping the limitations in mind may help to focus on the right questions and avoid
overstating the results. And finally: even when evaluations only shed limited light on the
usability of a system, this is much better than working in complete darkness without
evaluations.
As a rule of thumb, it should be noted that evaluations for Ubiquitous Computing interfaces
should be made as realistic as possible. Thus field studies would be better than lab
conditions. Moreover, the designers should have a clear understanding of what they want to
achieve with their system in order to know what they want to prove using evaluations.
When to stop?
The development cycle should not be an endless loop. In general, the (re-)design–prototype–evaluation cycle can go on forever, leading to a continuous increase in usability. In practice,
either the number of cycles is fixed beforehand or certain measures define when the loop
has to be stopped and the final design is achieved. Typically, these measures would quantify
the usability goals listed at the beginning of this chapter. Such a goal could be “95% of the
test users rate the system as very convenient” or “the task completion rate within 30
minutes is 98%”. In some cases the stop-criterion is not bound to usability but to other
measures such as “we are out of budget” or “the deadline is next week”.
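Such quantified stop criteria can be checked mechanically after each evaluation round. The following Python sketch uses the two example thresholds from the text; the five-point rating scale and the mapping of “very convenient” to a rating of at least 4 are assumptions.

```python
# Hedged sketch of a quantified stop criterion for the design loop.
def design_goal_reached(ratings, completions_within_30min, total_users):
    """Stop iterating when at least 95% of test users rate the system as
    very convenient (assumed here: rating >= 4 on a 1-5 scale) and the
    30-minute task completion rate reaches at least 98%."""
    convenient = sum(1 for r in ratings if r >= 4) / len(ratings)
    completion = completions_within_30min / total_users
    return convenient >= 0.95 and completion >= 0.98

# One iteration's (illustrative) evaluation results:
print(design_goal_reached(ratings=[5, 4, 5, 3, 5],
                          completions_within_30min=4,
                          total_users=5))   # -> False, keep iterating
```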
SPECIFIC CHALLENGES OF USER INTERFACES FOR UBIQUITOUS COMPUTING
So far we have learned about how to design a good user interface. The principles we
discussed are rather generic and they apply – of course – for designing intelligent user
interfaces for Ubiquitous Computing, but they are also valid for other user interfaces such
as Web interfaces or interfaces of desktop applications. The general process of human-centered design could even be applied to non-IT products such as cars, coffee machines and
other objects of our daily life. It is no coincidence that we have arrived at such a generic
process. On the one hand, good usability is a generic property, and the design process
is fairly similar in multiple domains. On the other hand, Ubiquitous Computing is about
integrating computing into the objects of our normal life. Thus, owing to the very
nature of Ubiquitous Computing, usability here has much to do with the usability of everyday things.
Since the early days of Ubiquitous Computing, usability has been one of its central concerns. Mark
Weiser’s idea of Ubiquitous Computing encompasses “invisible” interfaces that are so
naturally usable that they literally become invisible to the user’s conscious perception
(Weiser 1999a, b). This notion goes back to the German philosophers Hans-Georg Gadamer and
Martin Heidegger, who describe things that we use without conscious
awareness as “ready-to-hand” or within our “horizon”. In this phenomenological
view, the meaning of things is actually derived from our interaction with them. Such a
view on interactive artifacts has become popular in Ubiquitous Computing and is closely
related to the notion of embodiment (Dourish, 2001). This is a fundamental shift from the
classical positivist approach in computer science, i.e., modeling the real world in simplistic
formal computer programs, to an embodied approach that takes the user in the real world
into account. This is relevant for Ubiquitous Computing for multiple reasons. On the one
hand, Ubiquitous Computing applications are to be used in complex real-world settings, and
their meaning (for the user) will, in fact, only evolve in the course of action. On the other hand, if
things should become natural extensions of our physical abilities, they must be designed
such that they do not need conscious interference from their users.
Given this notion of being “invisible”, we can see that this does not necessarily mean “not
there”, but rather present without requiring conscious interaction. The most basic examples of such
physical objects are our body parts. We do not have to think consciously about what we do
with our arms, but we just do the things we want. When we leave our house, we do not
have to remember: “let’s take the arm with us, we might need it today”. It is there and ready
for immediate use. When we throw a ball, we just throw it and we do not think and plan
how to make our hand grasp the ball and our arm swing around in order to accelerate the
ball. In this sense, our arm is invisible but also very present. Thus if we speak of
Ubiquitous Computing interfaces that are “invisible” or computers that are “disappearing”,
we actually speak of things that are present and “ready-to-hand”. However, the artifacts we
interact with might not be consciously realized as computers.
A good example of such a “ubiquitous” technology is present in our homes already:
electrical light. Whenever we enter a room that is dark, we just find a switch with our hands
next to the door and the light goes on. Without thinking we turn on the light. We do not
think of cables that conduct electrons. We do not have to consider how the light bulb works
or how they generate electricity at the power plant.
We have a very simplistic model of how the thing works and it is internalized to such a
degree that we do not have to think about it when we enter a room. These “mental models”
of how things work play an important role in designing good user interfaces as well as in
designing other everyday things (Norman, 1998). Donald Norman emphasizes that a good
design is about providing good mappings (Fig. 2):
• The design model must be mapped to the system image.
• Users must be able to map their understanding (mental model) to the system.
• The system must allow the user to map its image to the user’s model.
Fig. 2: Mappings of design model, mental model and system image (Norman, 1998). The designer holds the design model, the user holds the mental model, and both meet in the system image presented by the system.
The question now is how a system image can support an appropriate mental model in the user.
The answer – with our notion of embodiment in mind – is to bring the meaning of things
into the things themselves, so that a user can derive the meaning of something from the
interaction with it or from its mere appearance, which may signal properties indicating
how to use it. Such properties have been named affordances (Norman, 1998). The idea of
affordances is to bring knowledge into the world instead of having it in mind. Many highly
usable things that surround us just let us know by their physical appearance how we can use
them. A chair for instance does not need a label or instructions on how to sit on it. We just
see and know it is a chair and we know what to do with it.
Similarly, affordances have been defined as virtual affordances for computer interfaces and
many metaphors on our computer screens signal functionalities, e.g., mouse pointers and
scrollbars. With the advent of Ubiquitous Computing, the term affordance again refers,
more literally, to the physical properties of things. Many Ubiquitous
Computing objects include tactile interfaces or smart objects with physical and not just
virtual properties.
There are a number of consequences arising from this perspective of embodied interaction
for Ubiquitous Computing:
- Support mental models: Humans use mental models to understand and to predict
how things react to their actions. The system image should support such mental
models and be easy to understand.
- Respect cognitive economy: Humans re-use their mental models. If there are well-established mental models for similar things, they can be a good basis for easily
understanding a new artifact.
- Make things visible and transparent: In order to understand the state of an object, it
should be obvious what is going on. For instance, a container can indicate whether it is
loaded or not.
- Design for errors: Mappings between the user’s model and the system sometimes
fail. Most “human errors” are, in fact, mapping errors. Therefore, systems must
assist users in finding a solution for their task even if something has gone wrong.
There are a number of techniques for doing so, e.g., allowing undo actions or sanity
checks on user inputs (a small sketch follows this list).
- Internal and external consistency: Things within an application should work
consistently. For instance, pushing a red button always means “stop”. External
consistency refers to expectations users may have from the usage of other applications.
If we add some Ubiquitous Computing technology to a cup and turn it into a smart
cup, a user will still expect the cup to work as a cup.
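As a small illustration of the “design for errors” point above, the following Python sketch combines a sanity check on user input with a simple undo stack; all names and limits are invented for this example.

```python
# Illustrative "design for errors" techniques: a sanity check on user
# input plus a simple undo stack.
history = []

def set_temperature(state: dict, value: float) -> None:
    """Apply a change only after a sanity check, and remember the old
    value so the user can undo the action."""
    if not 5.0 <= value <= 35.0:            # sanity check on user input
        raise ValueError("Temperature must be between 5 and 35 °C.")
    history.append(("temperature", state.get("temperature")))
    state["temperature"] = value

def undo(state: dict) -> None:
    """Revert the most recent change, if any."""
    if history:
        key, old_value = history.pop()
        state[key] = old_value

room = {"temperature": 21.0}
set_temperature(room, 24.0)
undo(room)
print(room)   # -> {'temperature': 21.0}
```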
With these guidelines and the general design process considerations we are already well
prepared for building very good interfaces for Ubiquitous Computing applications.
However, there are a number of further practical considerations and human factors that play
a role for Ubiquitous Computing user interfaces. Some of these issues are related to the
very nature of these applications being “ubiquitous” and some are more related to technical
problems in mobile and ubiquitous scenarios. We will briefly highlight some of these
aspects. Due to the broad spectrum of possible applications, we cannot go into details of all
possible factors.
Human factors for Ubiquitous Computing
In classical human-computer interaction, we have a well-defined setting. In Ubiquitous
Computing, we do not know where the users are, what tasks they are currently doing, or which
other persons may be around. This makes it very hard to account for some human factors
that can greatly influence the interaction. Depending on time, concurrent tasks, etc., the
user’s cognitive load, stress level, patience, and mood may vary extremely. Thus an
interface that is well-suited in one situation may leave the user bored
or overloaded in another.
Another problem lies in spatial and temporal constraints. In many Ubiquitous Computing
applications, location and time play a crucial role. Users need the right information at the
right time and place. In a system that helps a user to navigate her vehicle through a city, the
information “turn right” only makes sense at a very well-defined point in space and time.
An information delay is not acceptable. Even though space and time are the most prominent
context factors in systems today, other context factors may also play a big role (cf. chapter
”Context Models and Context Awareness”). An interface can adapt to such context factors
and take into account what is going on. In particular, the user might not have the focus of
attention on the system but rather might be busy doing something else. And it is not only user-driven activities that can distract the user: other people and events are not the exception but the
normal case in many Ubiquitous Computing scenarios. This has a huge effect on the
interface and dialog design. While in desktop applications, the designer can assume that the
user is looking at the screen and a system message is (in most cases) likely to be read by the
user, in Ubiquitous Computing we must reckon with many signals from the system being
ignored by the user.
The interfaces can try to take the users’ tasks into account and thus adapt their strategy to
reach the user’s attention. For example, when the user is driving a car, the system might
interact in a different way than when the user is in a business meeting. However, when the
system is literally ubiquitous, the number of tasks and situations the user might be in can be
endless and it is not feasible to model each and every situation. The system interface might
then instead be adaptable to a few distinct modes of interaction.
Who is in Charge?
With adaptation and adaptivity, we get to a point where the system behaves
differently in different situations. This can be a good thing and can significantly increase
the ease of use. A mobile phone, for instance, that automatically adapts to the environment
and stays silent in a business meeting but rings in other situations is rather practical.
However, the tradeoff is a reduced predictability and as discussed above, many usability
goals can be in conflict with each other. The developers and (hopefully) the users have to
decide which goal is more important. It is important to know about these conflicts and to
decide explicitly how to deal with them.
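The following Python sketch illustrates one way of dealing with this trade-off: the system proposes a ring profile from context, but an explicit user setting always wins, so controllability and predictability are preserved. Context names and profiles are invented for illustration.

```python
# Illustrative sketch: context-dependent adaptation with the user in charge.
from typing import Optional

def ring_profile(context: str, user_override: Optional[str] = None) -> str:
    """Choose a ring profile; an explicit user override takes precedence
    over the system's context-based adaptation."""
    if user_override is not None:
        return user_override              # the user stays in control
    adaptive = {
        "business_meeting": "silent",
        "driving": "loud",
        "at_home": "normal",
    }
    return adaptive.get(context, "normal")  # predictable default

print(ring_profile("business_meeting"))          # -> silent
print(ring_profile("business_meeting", "loud"))  # -> loud (override wins)
```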
Usability goals in Ubiquitous Computing that typically come into conflict with others are:
• Controllability: Is it the system or the user who controls the situation?
• Support of mental models: How can a user still understand a very complex system?
• Predictability: Humans want to be able to predict the outcome of their actions. If a
system is too adaptive and autonomous, users get lost.
• Transparency: If the system adapts to all sorts of context factors, its state becomes less
transparent.
• Learnability: A system that learns and behaves differently in new situations can be hard
to understand.
The designers have to decide to what degree they want to achieve which level in each of
these dimensions and how other aspects such as autonomy or adaptivity may affect them. In
general, there are no rules or guidelines that can give clear directions. While in many other
IT domains such as Web-systems, some established standards may set the stage and good
guidelines exist, the designer of a Ubiquitous Computing system will have to derive his
own solution on the basis of the goals he wants to achieve. The only way to prove that the
solution actually fits these goals is, in turn, evaluation. Therefore, a user-centered design
approach is the only way to design Ubiquitous Computing systems that incorporate good
user interfaces.
INTELLIGENT AND DUMB INTERFACES FOR UBIQUITOUS COMPUTING
In the last part of this chapter, we want to focus on intelligent user interfaces. The term
“intelligent user interface” has been debated for a while, and it is not so clear what it means
or whether intelligent interfaces are beneficial at all. Even the term intelligence itself is
not well-defined and has been used (or misused) in multiple ways. Before going into
technical details, we should thus first discuss what the term means and then look at some
techniques that are used to realize such interfaces. We will finish with a discussion of how much
intelligence a good interface actually needs.
Fig. 3: Multiple views on intelligent user interfaces: a) classical interfaces; b) intelligent user interface (classical AI); c) embodied interaction; d) intelligent cooperative interface. Each view distributes the intelligence differently between system, interaction, and user.
What is an intelligent user interface?
So far we have presented a number of techniques for building good interfaces. We also saw
how the view of embodied interaction can be used as a paradigm for Ubiquitous
Computing. In general, a technical solution can be called “intelligent” for two reasons: (i)
there is some built-in intelligent computation that solves some otherwise unsolvable
problem; (ii) using the system, a user can solve an otherwise unsolvable problem, even
though the system itself does not actually do anything intelligent. Supposing that calculating
the logarithm of a number is a hard problem for a human, a calculator is a good
example of case (i) and an abacus would be an example of (ii). The calculator solves the
problem for the human, whereas the abacus empowers the user to solve the problem on her own.
The classical approach of artificial intelligence (AI) is a rationalist one. According to this
approach, a system should model the knowledge that human experts have and thus emulate
human intelligence. In this sense, the “intelligence” moves from the user to the system (Fig.
3a, 3b). This approach is valid for many cases, e.g., if expert knowledge is rare and non-experts should also be able to work with a system. As discussed above, the embodied
interaction view would rather try to make the interaction more intelligent (Fig. 3c). This fits
many new trends in AI where embodied intelligence is viewed as a property that
emerges from the interaction of an intelligent agent with the environment. In this view,
even simple and lightweight agents can exhibit intelligent behavior without full reflective
and conscious knowledge of the world. With respect to this definition, all of the above-mentioned material already describes how to build an intelligent interface. Because the
processes for designing human-centered systems are just the right techniques for designing
intelligent interactive systems, we have already defined to a great extent how to build intelligent
user interfaces.
Instead of leaving all the intelligence to the system, the user or the interaction, we can also
try to get the best of all worlds and combine these techniques into a cooperative system,
where both the system and the user contribute their knowledge to solving tasks,
supported by intelligent interaction techniques (Fig. 3d).
As discussed above, we can make the system more intelligent by enhancing the system, the
interaction or the user. Intelligent user interface techniques exist for all three aspects. We
will briefly list the key methods. Some details on them can be found in other chapters of
this volume.
Techniques for enhancing the system’s intelligence
A huge number of AI techniques can be used to put more knowledge and reasoning into the
system. Besides state-of-the-art IT methods such as databases, expert systems, heuristic
search and planning, a number of more recent developments have attracted a good deal of
interest by researchers and practitioners in the field:
• World knowledge and ontologies: semantic technologies and formal models of world
knowledge have enjoyed a great renaissance in the last couple of years. In the context of the
Semantic Web efforts, ontologies have been established as a standard method for
capturing complex relations of objects and events in the world. Ontologies (cf. chapter
”Ontologies for Scalable Services-Based Ubiquitous Computing”) can be successfully
used in user interfaces in order to give the system a better understanding of the domain
of an interaction. In particular for natural language interaction, ontologies provide
resources for better understanding and reasoning.
• User Adaptation: User models and techniques of user adaptation allow for
individualized interaction (cf. chapter ”Adapting to the User”). A number of methods
allow for autonomous and user-driven customization of systems and they are widely
used for intelligent user interfaces. In particular for Ubiquitous Computing, user
adapted systems play a big role since these systems often have to support a great variety
of use cases where a single standardized interface is not appropriate.
• Context adaptation: Context plays a crucial role for Ubiquitous Computing (cf. chapter
”Context Models and Context Awareness”). As discussed already, context-dependent
user interfaces can greatly enhance the usability of these systems. However, context can
also be challenging because it can depend on a huge number of parameters and it is hard
to formalize the meaning of contexts and to learn the relations between them
autonomously.
• Service federation: Integrating a variety of services and providing a single interface can
be a significant step towards intelligent user interfaces. If users do not have to interact
with all sorts of services separately but can use a single portal, they can work much
more efficiently with less cognitive load. However, service integration can be a hard
problem, in particular when multiple services that had not originally been designed to
be integrated have to be integrated semantically. An intelligent ubiquitous travel
assistant could, for instance, integrate maps, events, travel, sights and weather
information from different providers and offer the user an integrated trip plan (a small
sketch of this idea follows below).
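A minimal Python sketch of this travel-assistant idea might look as follows; all provider classes and their methods are hypothetical stand-ins for real, independently designed services.

```python
# Hypothetical service federation: one facade queries several providers
# and merges the results into a single, integrated trip plan.
class MapService:
    def route(self, origin: str, city: str) -> str:
        return f"route from {origin} to {city}"

class WeatherService:
    def forecast(self, city: str) -> str:
        return f"sunny in {city}"

class EventService:
    def events(self, city: str) -> list:
        return [f"concert in {city}"]

class TravelAssistant:
    """Single portal federating separately designed services."""
    def __init__(self) -> None:
        self.maps = MapService()
        self.weather = WeatherService()
        self.event_guide = EventService()

    def trip_plan(self, origin: str, city: str) -> dict:
        # The user asks once; the facade fans out to all providers.
        return {
            "route": self.maps.route(origin, city),
            "weather": self.weather.forecast(city),
            "events": self.event_guide.events(city),
        }

print(TravelAssistant().trip_plan("Bremen", "Hamburg"))
```

Note that this sketch sidesteps the hard part mentioned above: in practice, the semantic integration of services that were never designed to cooperate dominates the effort.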
Of course, many other techniques are available. In principle, all advanced semantic and AI-based methods related to user interaction can help to make systems smarter and better
at understanding what a user might need and want, using ubiquitous information sources.
Techniques for more intelligent interaction
As discussed above, the most important aspect of intelligent interaction is to provide good
and working mappings between the user’s models of the world and the system’s model. These
mappings depend highly on the semiotics, i.e., the meaning and perception of the signs and
signals established between user and system. These can be actively communicated
codes in the form of a language, but also passive features of the artifact that signal
affordances to the user. Both aspects can be supported through intelligent methods that aim at more
natural interaction, such that the interaction takes place on the premises of human
communication rather than machine languages.
• Multimodal interaction: Multimodal techniques make use of the human ability to
combine multiple input and output modalities for a semantically rich, robust and
efficient communication. In many Ubiquitous Computing systems, language, gestures,
graphics, and text are combined into multimodal systems (cf. chapters “Multimodal and
Federated Interaction” and ”Multimodal Software Engineering”). Multimodality is on
the one hand more natural and on the other hand it also allows for more flexible
adaptation in different usage situations.
• Cross-media adaptation: Ubiquitous Computing systems often use a number of different
media, devices and channels for communicating with the user. A user can, in one
situation, carry a PDA with a tiny display and, in another situation, interact with a wall-sized display in public or even use no display but just earphones. Intelligent interaction
can support media transcoding that presents content on different media adapted to the
situation (a small transcoding sketch follows this list).
• Direct interaction: Humans are very good at multimodal communication, but for many
tasks we are even better off using direct interaction. It is, for instance, much easier to drive a
car using a steering wheel than to tell the car to which degree it should steer to the left
or to the right. For many activities, direct interaction is superior to other forms of
human-computer interaction.
• Embodied conversational agents: Since humans are used to communicating with
humans (and not with machines), anthropomorphic interfaces presenting animated
characters can in some circumstances be very useful. In particular in entertainment
systems, so-called avatars have become quite popular. However, there is also some
debate about these interfaces and some people dislike this form of interaction.
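As announced in the cross-media item above, a small Python sketch can illustrate media transcoding: the same message is rendered differently depending on the output device at hand. Device names and rendering rules are illustrative assumptions.

```python
# Illustrative cross-media adaptation: one message, three renderings.
def render(content: str, device: str) -> str:
    """Transcode one message for the device currently available."""
    if device == "pda":                # tiny display: abbreviate
        return content[:40] + ("..." if len(content) > 40 else "")
    if device == "wall_display":       # large public display: glanceable
        return content.upper()
    if device == "earphones":          # no display: speak the message
        return f"[text-to-speech] {content}"
    return content                     # default: unmodified

message = "Your train to Hamburg leaves from platform 4 at 14:32."
for device in ("pda", "wall_display", "earphones"):
    print(device, "->", render(message, device))
```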
Again, the list of techniques could be much longer. Here we just highlight some of the most
important trends in the field. More ideas have been proposed and there will be more to
come.
Techniques for amplifying the user’s intelligence
In principle, we can try to build better interface techniques, but we will not be able to
change the user’s intelligence – leaving aside e-learning and tutorial systems that might
explicitly have teaching or training purposes. But even if we do not affect the user’s
intelligence, we can still do a lot about her chances to make use of it. In the scientific
community, there has been controversy over the last couple of years about the goal of
intelligent user interfaces and about where to put how much intelligence (Fig. 3). And even though it
might be counterintuitive, an intelligent interface can sometimes be the one that leaves the
intelligence on the part of the user rather than putting it into the system (Fig. 3a, 3b). In the
debate about these approaches, the slogan “Intelligence Amplification (IA) instead of
Artificial Intelligence (AI)” was coined. The idea is that a really intelligent interface leaves
intelligent decisions to the user and does not take away all intelligent work from the user by
modeling it in the system. The question is: how can the system support users in acting
intelligently? The answers have been given already when we discussed usability goals: A
system that is easy to learn, where users have control and understand what is going on,
where mental models are applicable is more likely to let people act intelligently. In contrast,
a system that only leaves minor steps to the user, that does not provide information about
its states and how it got there and for which the users do not have appropriate mental
models will in the long run bore its users and decrease their creativity and enthusiasm.
It should be noted that both types of systems can be packed with AI and be extremely smart
or very dumb things. It is rather the kind of interaction design that determines whether human
intelligence is facilitated or not.
HOW MUCH INTELLIGENCE?
Intelligent user interfaces for Ubiquitous Computing will be a necessity in the future.
However, there are multiple competing views and philosophies. In general, three things
could be intelligent: the user, the system, or the way in which they interact. Most
researchers focus on enhancing the system’s intelligence, and the assumption is that this will
lead to better usability. This is often the case, but not always. A different approach is to
say that users are the most intelligent agents and their intelligence should be enhanced
rather than replaced by artificial intelligence (IA instead of AI). In practice, however, we
should pursue all of these together in order to make the interaction as easy and efficient as possible. But
each decision should be made carefully, keeping in mind that the overall goal of an
intelligent user interface should still be defined by the usability goals. As with all
good things, “less is sometimes more”: simple things are often more enjoyable and
easier to understand than highly complex and automated devices.
CONCLUSIONS
This chapter introduced aspects of designing user interfaces for Ubiquitous Computing in
general and intelligent interfaces in particular. The basics for building intelligent interfaces
are techniques for building good interfaces. Consequently, we first presented an up-to-date
introduction to methods of human-centered design. A central aspect of this technique is to
iteratively design systems with repeated evaluations and user feedback. This approach is
especially important for Ubiquitous Computing systems, since they lack clear guidelines
and decades of experience and thus iterations are crucial in order to approach the desired
design goals.
Obviously, many of these basic techniques are also valid for many other systems. However,
Ubiquitous Computing introduces some more specific issues such as a high variety of
contexts, the lack of single dedicated interface devices and – by its very nature – ubiquitous
interaction at any time and location. Therefore, Ubiquitous Computing interfaces must
place even more emphasis on good mappings to mental models and provide good
affordances. The view of embodied interaction gives us a good theoretical idea of the way
we should think of and model interaction in such systems.
With these prerequisites, designers can build very good interfaces and can take many
usability aspects into consideration. However, so far the “intelligence” of the interfaces has
not been discussed. We did that in the last part of the chapter and presented a modern and
sometimes controversial view on intelligent user interfaces. There are different paradigms
that may contradict each other. The main question can be formulated as: “AI or IA –
Artificial Intelligence or Intelligence Amplification”. We discussed these design philosophies
and presented some ideas on how to combine the best of both worlds. We also presented a
number of current trends in the field that can be found in modern Ubiquitous Computing
systems.
FUTURE RESEARCH DIRECTIONS
As the field of Ubiquitous Computing matures, its user interface techniques will also
undergo an evolutionary process and some best practices will be established, making things
much easier. We currently see this happening for Web applications where developers can
choose from established interaction techniques that are well-known to the users and
guarantee efficient interaction.
However, Ubiquitous Computing might never reach that point, since its ultimate ambition of supporting
users in every situation, at any time and place, requires interfaces
rich enough to cope with the complexity of the users’ entire lives. This might be
good news for researchers in the field because they will stay busy searching for better and
more intelligent interfaces.
The main challenges for future research will lie in the problem of extensibility and
scalability of intelligent user interfaces. How can a system that has been designed for a user
A in situation S be extended to support thousands of users in a hundred different situations?
REFERENCES
Dix, A., Finlay, J., Abowd, G. & Beale, R. (1998). Human-computer interaction. Upper
Saddle River/NJ, USA: Prentice-Hall.
Dourish, P. (2001). Where the action is: the foundations of embodied interaction.
Massachusetts, USA: MIT Press.
Gould, J. D., Boies, S. J. & Lewis, C. (1991). Making usable, useful, productivity-enhancing computer applications. Communications of the ACM, 34 (1), 74–85.
Hartson, H. R. & Hix, D. (1989). Toward empirically derived methodologies and tools for
human–computer interface development. International Journal of Man-Machine Studies,
31 (4), 477–494.
Hix, D. & Hartson, H. R. (1993). Developing User Interfaces: Ensuring usability through
product and process. New York, USA: John Wiley and Sons.
ISO 13407 (1999). Human-centred design processes for interactive systems. International
Organization for Standardization.
ISO 9241 (2006). Ergonomics of human-system interaction. International Organization for
Standardization.
Mayhew, D. J. (1999). The usability engineering lifecycle. Burlington/MA, USA: Morgan
Kaufmann.
Nielsen, J. (1993). Usability engineering. Boston/MA, USA: Academic Press.
Norman, D. A. (1998). The Design of Everyday Things. Massachusetts, USA: MIT Press.
Olson, M. H. & Ives, B. (1981). User involvement in system design: an empirical test of
alternative approaches. New York, USA: Stern School of Business.
Royce, W. W. (1970). Managing the development of large software systems: concepts and
techniques. Technical Papers of Western Electronic Show and Convention (WesCon).
August 25-28, Los Angeles, USA.
Shneiderman, B. (1997). Designing the user interface: strategies for effective human-computer interaction. Boston/MA, USA: Addison Wesley.
Weiser, M. (1999a). The computer for the 21st century. ACM SIGMOBILE Mobile
Computing and Communications Review. 3 (3), 3-11. Special issue dedicated to Mark
Weiser.
Weiser, M. (1999b). Some computer science issues in ubiquitous computing. ACM
SIGMOBILE Mobile Computing and Communications Review, 3 (3), 12-21. Special issue
dedicated to Mark Weiser.
FURTHER READING
Cooper, A. & Reimann, R. M., (2003). About Face 2.0: The Essentials of Interaction
Design. New York/NY, USA: John Wiley.
Dautenhahn, K. (1996). Embodied cognition in animals and artifacts. In Embodied Action
and Cognition: Papers from the AAAI 1996 Fall Symposium in Boston (pp. 27-32).
Hornecker, E. & Buur, J. (2006). Getting a grip on tangible Interaction: a framework on
physical space and social interaction. Conference on Human Factors in Computing Systems
(pp. 437-446). New York/NY, USA: ACM Press.
Preece, J., Rogers, Y. & Sharp, H. (2002). Interaction Design. New York/NY, USA: John
Wiley.
Sharkey, N. & Ziemke, T. (2000). Life, mind and robots: the ins and outs of embodied
cognition. In S. Wermter & R. Sun (Eds.), Symbolic and Neural Net Hybrids.
Massachusetts, USA: MIT Press. URL: citeseer.ist.psu.edu/sharkey99life.html
Winograd, T. & Flores, F. (1987). Understanding computers and cognition: a new
foundation for design. Boston/MA, USA: Addison Wesley.