Situation, Activity and Goal Awareness in Ubiquitous Computing
¹Liming Chen, ²Parisa Rashidi
¹School of Computing and Mathematics, University of Ulster, UK
²Department of Information and Computer Science and Engineering, University of Florida, USA
Abstract
Purpose – This paper aims to provide a view on Situation, Activity and Goal awareness (SAGAware) in ubiquitous computing. It discusses these concepts, articulates their roles in the scalability and prevalence of pervasive applications, and speculates on future developments.
Design/methodology/approach – The paper is a visionary essay that introduces the background, elaborates the basic concepts and presents the authors' views and insights into SAGAware research in pervasive computing, based on the latest state of the art in pervasive computing and its applications.
Findings – SAGAware is closely related to the level of automation and the large-scale uptake of pervasive applications. It is an interdisciplinary research area requiring the collaboration and cooperation of multiple research communities.
Research implications – The paper highlights the importance of SAGAware research and further points out research trends and future research directions.
Practical implications – The paper helps stimulate research interest and draw the attention of relevant researchers to this promising research area.
Originality/value – The paper pioneers the investigation of an increasingly important research theme, and presents an in-depth view of its potential and research trends.
Keywords: Situation, Activity, Goal, Modeling, Representation, Reasoning, Recognition, Ubiquitous Computing,
SAGAware
Paper type: Viewpoint
Introduction
Ubiquitous / pervasive computing (Weiser, 2002; Saha and Mukherjee, 2003) is conceptualized as a new paradigm of human-centric computer interaction in which increasingly ubiquitous, connected computing devices are embedded in an environment to allow the seamless integration of everyday objects and activities with information processing. It aims to enable and support "anywhere, anytime" computing without the users' explicit awareness. Built upon this, artificial intelligence techniques try to make the environment sensitive and responsive to the presence of people by providing advanced ICT technologies and systems that support context awareness, personalization, adaptability and anticipation. This has given rise to a number of novel applications, for example, climate monitoring, smart environments, and disaster response and management. Among them, smart environments may be the most impressive, "killer application" exemplar of the ubiquitous paradigm and its technology infrastructure.
A smart environment is a physical space where miniaturized computing devices work in concert to support people in carrying out their daily working and living activities in an easy, natural and personalized way. A typical real-world example of such an environment is a "Smart Home" within which the daily activities of its inhabitants, usually the elderly or disabled, are monitored and analyzed so that personalized context-aware assistance can be provided. Other examples include intelligent meeting rooms, conference centres, research environments, hospitals and shopping malls, and intelligent cars, to name but a few.
Ubiquitous computing in general, and smart environments in particular, have been under rigorous investigation in various national, regional (EU) and international research initiatives and programmes. In particular, as the Smart Home based approach to technology-driven ambient assistive living becomes a viable mainstream solution to the ever-increasing ageing population and over-stretched healthcare, research on smart environments has gained significant momentum. While a whole raft of heterogeneous computing technologies providing fragments of the necessary functionality, e.g. sensor networks, data processing and activity recognition algorithms, has been developed, large-scale development, deployment and uptake of real-world applications in ubiquitous/smart environments has so far not been seen, due to the lack of reusability, scalability and applicability at both the technology and system levels.
It is commonly agreed that the collection, modeling, representation and interpretation of the States, Events and Behaviors (SEB) of both physical and software agents/devices situated in a ubiquitous environment are key to data fusion, joint interpretation and advanced application features. The level of abstraction in modeling and representation determines the level of automation. As a result, researchers have been working on SEB modeling, representation and inference at different levels of abstraction over the past several years.
Concepts and ongoing efforts
In the following we outline major research efforts in SEB modeling and representation in an attempt to draw valuable
insights that can inform and guide future research.
Context modeling, representation and reasoning
Context awareness has been at the core of ubiquitous computing and a center of research in the ubiquitous computing paradigm and its applications. While context has been defined in various ways in different disciplines, and even within context-aware computing alone, the widely accepted definition of context was provided by Dey (Dey, 2001): "Context is any information that can be used to characterize the situation of an entity." An entity is defined as "a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves". Dey also defines context awareness as follows: "A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task".
In order to make a system context-aware, it is necessary to 1) acquire contextual information from sensors, 2) model and represent contextual information, 3) store and manage contextual information, and 4) retrieve and interpret the context. These requirements have invoked considerable research on context modeling, representation and reasoning, as well as frameworks and middleware for context management, as can be seen in the latest survey (Bettini et al., 2010). A large number of context-aware applications based on various context models have been developed over the past few years for a variety of application domains. The experience obtained from research on context awareness and context-aware applications helps identify the requirements for context modelling and reasoning, and further influences research on context awareness. There is a trend to raise the level of abstraction of context modelling and representation from informal low-level contextual information towards formal, more expressive high-level context.
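The four requirements above can be mirrored in a minimal sketch. The class and rule below are our own illustrative assumptions, not part of any cited context framework:

```python
from dataclasses import dataclass
import time

@dataclass
class ContextItem:          # 2) model and represent contextual information
    entity: str             # e.g. "kitchen", a person, place or object
    attribute: str          # e.g. "temperature"
    value: object
    timestamp: float

class ContextStore:
    def __init__(self):
        self._items = []                           # 3) store and manage

    def acquire(self, entity, attribute, value):   # 1) acquire from a sensor
        self._items.append(ContextItem(entity, attribute, value, time.time()))

    def latest(self, entity, attribute):           # 4) retrieve the newest reading
        for item in reversed(self._items):
            if item.entity == entity and item.attribute == attribute:
                return item
        return None

    def interpret(self, entity):                   # 4) interpret: a toy rule
        temp = self.latest(entity, "temperature")
        if temp is not None and temp.value > 30:
            return "hot"
        return "normal"

store = ContextStore()
store.acquire("kitchen", "temperature", 34)
print(store.interpret("kitchen"))  # hot
```

In a real middleware (see Bettini et al., 2010) each step is far richer, e.g., ontology-based models rather than flat key-value tuples, but the division of labour across the four requirements is the same.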
Situation modeling, representation and reasoning
A situation is often defined as a snapshot of states at a specific time point in a physical or conceptual environment. Situation
awareness has been commonly referred to as “the perception of elements in the environment within a volume of time and
space, the comprehension of their meaning, and the projection of their status in the near future” (Endsley, 1995). It is a
cognitive process that consists of three operational functions. Firstly, it involves the sensing and recognition of different elements in the environment, including in particular their high-level characteristics and behaviors. Secondly, it requires the interpretation and comprehension of the significance of the perceived elements in the environment. Thirdly, it requires the ability to anticipate the actions of elements and predict future states of the environment. For entities, be they human beings, robots or software systems, operating in complex, dynamic and uncertain environments, situation awareness is a determinant of making informed decisions at the right time and in the right place.
Compared with context, which focuses on information characterizing the situation of an entity, situation places emphasis on 1) dynamic aspects, i.e., the previous and future situations of the current situation along a time line, 2) spatial aspects, i.e., the effect of events happening elsewhere on the concerned entity, and 3) the high-level interpretation of phenomena in an environment, e.g., pouring hot water into a cup is not just the action itself but part of a tea-making activity. While context-aware systems are mainly concerned with linking changes in the environment to software systems, situation-aware systems concentrate on the knowledge and understanding of the environment that is critical to decision making. Situation awareness pays particular attention to the mental model and cognitive processes from the system's perspective.
Situation modelling, representation and reasoning use a situation as the basic building and manipulation block. This can be viewed as a higher level of abstraction than context in sensor data modelling, representation, aggregation and reasoning.
Relevant research, e.g., Situation Calculus (SC) (Levesque et al., 1998) and Event Calculus (EC) (Kowalski and Sergot,
1986) has long been conducted in AI communities. Research on situation awareness in mobile and ubiquitous environments
has emerged recently (Smart et al., 2007; Häussermann et al., 2010). Nevertheless, it is still in its infancy, requiring further
attention.
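Treating a situation as a timestamped snapshot of states, Endsley's three functions can be sketched as follows. This is a deliberately naive illustration under our own assumptions (numeric states, a threshold rule for comprehension, linear extrapolation for projection), not a model from the cited literature:

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    t: float       # the specific time point of this snapshot
    states: dict   # element -> numeric state, e.g. {"temp": 21.0}

@dataclass
class SituationModel:
    history: list = field(default_factory=list)

    def perceive(self, snapshot):              # 1) perception of elements
        self.history.append(snapshot)

    def comprehend(self, element, threshold):  # 2) comprehension of their meaning
        current = self.history[-1].states[element]
        return "alert" if current > threshold else "ok"

    def project(self, element, dt):            # 3) projection of near-future status
        if len(self.history) < 2:
            return self.history[-1].states[element]
        a, b = self.history[-2], self.history[-1]
        rate = (b.states[element] - a.states[element]) / (b.t - a.t)
        return b.states[element] + rate * dt   # linear extrapolation

m = SituationModel()
m.perceive(Snapshot(0.0, {"temp": 20.0}))
m.perceive(Snapshot(1.0, {"temp": 22.0}))
print(m.comprehend("temp", 25))  # ok
print(m.project("temp", 2.0))    # 26.0
```

Formalisms such as the Situation Calculus and the Event Calculus replace this toy extrapolation with logical axioms over actions and fluents, which is what makes situation reasoning, rather than mere state tracking, possible.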
Activity modeling, representation and recognition
Ubiquitous systems need to recognise a user's behaviour dynamically in order to support situation-aware applications. For example, in a smart home based assistive living setting, only when an inhabitant's activity is recognised can the system decide on the required assistance. Activity modelling, representation and recognition have attracted increasing attention in recent years, in particular in ambient assistive living settings.
Approaches to activity modeling, representation and recognition can be generally classified into three categories. The first
category is based on the use of visual sensing facilities, e.g., camera-based surveillance systems, to monitor an actor’s
behavior and environmental changes (Moeslund et al., 2006). These approaches exploit computer vision techniques to
analyse visual observations for pattern recognition (Turaga et al., 2008). The second category relies on various mobile and wearable sensors to recognize indoor/outdoor physical activities, such as physical exercise (Parkka et al., 2006). In particular, these methods exploit a variety of sensors readily available on today's smart phones, such as accelerometers, proximity sensors and gyroscopes. The third category relies on emerging dense sensor network technologies for ambient activity monitoring. In
these approaches, various types of sensors are installed in the environment, and human-object interaction is monitored by
using low-power and low-cost ambient sensors such as passive infrared (PIR) motion sensors. In ambient settings, sometimes a sensor such as an RFID tag is also attached to the actor under observation, in order to easily identify the person who is interacting with the objects (Philipose et al., 2004; Lee and Mase, 2002).
With the dense sensing paradigm, activity modelling, representation and recognition can be performed in several strands. The
majority of ambient activity recognition algorithms are supervised, and rely on labelled data for training, including various
generative and discriminative methods. The generative approaches attempt to build a complete description of the input or
data space, usually with probabilistic analysis methods such as Markov models (Sanchez and Tentori, 2008), and Bayesian
networks (Bao and Intille, 2004). The major disadvantage with such methods is that the model is static and subjective in
terms of probabilistic variable configuration. Discriminative approaches provide a mapping from input data to output activity
labels, and include methods such as neural networks, linear or non-linear discriminant learning (Brdiczka and Crowley, 2009;
Hoey and Poupart, 2005). The generative and discriminative approaches require large datasets for training models, and thus suffer from data scarcity, the so-called "cold start" problem. It is also difficult to apply modelling and learning results from one person to another. To address this problem, logical approaches exploit logical formalisms, for example event calculus (Chen et al., 2008) and ontologies (Chen et al., 2012), to represent activity models and conduct activity explanation and prediction through deductive or abductive reasoning. Compared to the above two data-centric approaches, logical
approaches are semantically clear in modeling and representation and elegant in inference and reasoning. In addition to the
logical approaches, unsupervised machine learning approaches provide another alternative to the supervised approaches.
Unsupervised approaches, such as frequent activity pattern mining, do not look for predefined patterns of activity, but rather
try to discover interesting activity patterns in data (Rashidi, 2011). This is especially important in real world settings, as the
assumption of consistent pre-defined activities usually does not hold in reality due to physical, mental, cultural, and lifestyle
differences of individuals.
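To make the generative strand concrete, the sketch below trains a naive Bayes classifier, one of the simplest generative models, on labelled episodes of binary ambient sensor firings. The sensor and activity names are invented for illustration; real systems in the cited work use richer models such as hidden Markov models and dynamic Bayesian networks:

```python
from collections import defaultdict
import math

def train(labelled_episodes):
    """labelled_episodes: list of (activity, set_of_fired_sensors)."""
    counts = defaultdict(lambda: defaultdict(int))  # activity -> sensor -> firings
    totals = defaultdict(int)                       # activity -> episode count
    sensors = set()
    for activity, fired in labelled_episodes:
        totals[activity] += 1
        sensors |= fired
        for s in fired:
            counts[activity][s] += 1
    return counts, totals, sensors

def classify(episode, counts, totals, sensors):
    """Pick the activity maximising log P(activity) + sum log P(sensor|activity)."""
    n = sum(totals.values())
    best, best_lp = None, -math.inf
    for activity, total in totals.items():
        lp = math.log(total / n)                    # class prior
        for s in sensors:                           # Laplace-smoothed likelihoods
            p = (counts[activity][s] + 1) / (total + 2)
            lp += math.log(p if s in episode else 1 - p)
        if lp > best_lp:
            best, best_lp = activity, lp
    return best

data = [("make_tea", {"kettle", "cupboard"}),
        ("make_tea", {"kettle", "fridge"}),
        ("watch_tv", {"sofa", "remote"}),
        ("watch_tv", {"sofa"})]
counts, totals, sensors = train(data)
print(classify({"kettle"}, counts, totals, sensors))  # make_tea
```

The cold-start problem is visible even here: the model says nothing sensible until labelled episodes exist for every activity of interest, which is exactly what logical and unsupervised approaches try to avoid.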
In general, activity modelling, representation and recognition have been under vigorous investigation, and significant
progress has been made. Research has recently shifted from single-user single-activity scenarios towards interleaved,
concurrent or multi-user activity scenarios, and researchers are looking at scalable and reusable technologies for activity
recognition in real world applications.
Goal / intent modeling, representation and recognition
The current approach to ubiquitous computing is underpinned by a multi-step process, namely, to (1) monitor a user's behaviour using multiple multi-modal sensors, (2) collect and fuse sensor data in a form suitable for interpretation, (3) recognise the activities the user is performing, (4) identify the user's needs or anomalies, and (5) finally provide personalised context-aware interactions or assistance whenever and wherever needed. This is roughly a bottom-up approach, starting from low-level sensor data and discovering the activities and purposes of the user through increasingly higher-level processing. Given the diversity of activities, the different ways individuals perform them, the wide range of sensors used and the difficulty of acquiring personal data (e.g., privacy and ethical issues), this bottom-up approach faces various challenges, such as applicability, scalability and adaptability, for real-world deployment.
Rather than trying to infer the purpose of a user from sensor data, it is also possible to define goals for users, e.g., each goal is the objective realised by performing an activity. Goals are either explicitly specified manually, e.g., a care provider defines goals for a patient to achieve during a day, or learnt based on domain context. Activities are pre-defined in some flexible way and linked to specific goals. As such, once a goal is specified or identified, ubiquitous systems can instruct/remind users to perform the corresponding activity. The user has the freedom to decide how to perform the activity, i.e., not in a rigid single sequence of actions but in an arbitrary order according to his/her preferences and the location of the required items. The system monitors the unfolding of the activity, i.e., whether it is performed and whether it is performed in the right way. Based on these observations, the system provides timely guidance to help the inhabitant complete the activity. This can be referred to as the top-down approach to the realization of ubiquitous computing.
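A minimal sketch of this top-down loop, under our own simplifying assumption that an activity is a set of required actions performable in any order, might look like this (names are hypothetical):

```python
class Goal:
    """A goal linked to an activity defined as a set of required actions."""

    def __init__(self, name, required_actions):
        self.name = name
        self.required = set(required_actions)  # any order is acceptable
        self.observed = set()

    def observe(self, action):
        # The system monitors the unfolding of the activity.
        self.observed.add(action)

    def missing(self):
        # Steps not yet performed, regardless of order.
        return self.required - self.observed

    def guidance(self):
        # Timely prompt to help the inhabitant complete the activity.
        left = self.missing()
        if not left:
            return f"Goal '{self.name}' achieved."
        return f"Remaining steps for '{self.name}': {sorted(left)}"

goal = Goal("have_breakfast", ["boil_water", "make_toast", "take_medication"])
goal.observe("make_toast")
goal.observe("boil_water")
print(goal.guidance())  # Remaining steps for 'have_breakfast': ['take_medication']
```

A deployed system would of course add temporal constraints, partial orderings and error detection ("performed in the right way"), but the contrast with the bottom-up pipeline is clear: recognition here reduces to checking observations against an explicit goal structure rather than inferring intent from raw data.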
Goal or intent modeling, representation, recognition and manipulation are crucial elements of the aforementioned top-down approach. Research on cognitive computation, i.e., the modeling and representation of motivations, goals, intentions, beliefs and emotions, has been widely undertaken in various AI communities, in particular the intelligent agent research community. However, the transfer and application of such knowledge to ubiquitous computing has so far received little attention (Lin and Easterbrook, 2007).
Practice and future trends
The advancement and maturity of the supporting technologies for ubiquitous computing, especially low-power, low-cost,
high-capacity and miniaturized sensors, wired and wireless communication networks, and data processing techniques, have
already pushed the research emphasis of ubiquitous computing towards high-level information integration, context processing
and activity recognition and inference. Meanwhile, solutions to a number of real-world problems have become increasingly reliant on the joint interpretation of complex SEB at a higher level of automation. For example, surveillance and security applications rely on activity and intent recognition technologies to address terrorist threats. Ambient assisted living systems exploit activity monitoring, recognition and assistance to compensate for the cognitive decline of the ageing population, thus supporting independent living and ageing in place. Other emerging applications, such as intelligent meeting rooms and smart hospitals, also depend on the correct understanding of situations, activities and goals in order to provide multimodal interactions, proactive service provision, and context-aware personalized activity assistance. As a result of this technology push and application pull, research on situation, activity and goal awareness has become a regular topic in related areas, including ubiquitous computing, ambient intelligence, smart environments, assistive living and public security control, to name but a few.
Two research trends have emerged with regard to SAGAwareness in ubiquitous computing. One is that SEB modeling, representation, interpretation and usage have been constantly shifting from low-level raw observation data and their direct/hardwired usage, through raw data aggregation and fusion, to high-level formal context modeling and context-based computing. It has evolved from context awareness and situation awareness to activity recognition, and most recently towards goal and intent awareness. Research events and reported results at a number of mainstream international conferences (AAAI, CVPR, IJCAI, NIPS, Pervasive, UbiComp, PerCom, ISWC, ICAPS, IJAMI) have demonstrated the relevance of the research.
Another noticeable trend is that the modeling, representation and inference of situations, activities and goals have increasingly adopted artificial intelligence based approaches, utilising formal knowledge modeling, representation, reasoning and planning techniques. One major effort, in addition to making use of existing knowledge representation formalisms, is to develop an activity context representation language for modeling and representation. A number of workshops have been
dedicated to this research from different research angles and communities. For example, in 2011 alone eight workshops were
specifically devoted to research pertaining to situation, activity and goal or intent modeling, representation and reasoning
(IWFAR, HAU3D, SAGAware, PAIR, GAPRec, ACRP-TL, HBU, HAR). Many of these workshops continue in 2012. The
interest and enthusiasm for this thread of research is still increasing.
We believe that the above trends will continue in the years to come. We expect this research to bring together fundamental theoretical research, e.g., intelligent agent, cognitive and planning research in AI communities; applied technology research and development, e.g., pervasive computing and human-computer interaction; and real-world application domains, e.g., ambient assistive living, emergency management and environmental monitoring. We envision that researchers and practitioners from a diversity of research communities will motivate and inspire each other, help each other identify opportunities and challenges, and facilitate collaboration and synergy. The exchange of ideas and coordinated efforts are critical for this promising cross-community research.
REFERENCES
AAAI, The AAAI Conference on Artificial Intelligence, http://www.aaai.org/
ACRP-TL, Activity Context Representation: Techniques and Languages, AAAI 2011, http://activitycontext.org/
Bao, L. and Intille, S. (2004), "Activity recognition from user-annotated acceleration data", In Proc. Pervasive, LNCS 3001, pp.1-17.
Bettini, C., Brdiczka, O., Henricksen, K., Indulska, J., Nicklas, D., Ranganathan, A. and Riboni, D. (2010), "A survey of
context modelling and reasoning techniques," Pervasive and Mobile Computing, Vol. 6, pp.161-180.
Brdiczka, O. Crowley, J.L. (2009), “Learning situation models in a smart home”, IEEE Transactions on Systems, Man and
Cybernetics - Part B: Cybernetics, Vol.39 No.1, pp.56-63.
Chen, L. Nugent, C., Mulvenna, M., Finlay, D. and Hong, X. (2008), “A Logical Framework for Behaviour Reasoning and
Assistance in a Smart Home”, International Journal of Assistive Robotics and Mechatronics, Vol.9 No.4, pp.20-34.
Chen, L., Nugent, C.D. and Wang, H. (2012), “A Knowledge-Driven Approach to Activity Recognition in Smart Homes”,
IEEE Transactions on Knowledge and Data Engineering, Vol.24 No.6, pp.961-974.
CVPR, Computer Vision and Pattern Recognition, http://www.cvpr201x.org/
Dey, A. K. (2001), “Understanding and using context”, Personal and Ubiquitous Computing Journal, Vol.5 No.1, pp.4-7.
Endsley, M.R. (1995), “Toward a theory of situation awareness in dynamic systems”, Human factors, Vol.37, pp.32-64.
GAPRec, Goal, Activity and Plan Recognition, ICAPS 2011, http://icaps11.icaps-conference.org/workshops/gaprec.html
HAR, Workshop on robust machine learning techniques for human activity recognition, IEEE conf on Systems, Man and
Cybernetics 2011, http://www.opportunity-project.eu/workshopSMC
HAU3D, International Workshop on Human Activity Understanding from 3D Data, CVPR 2011,
http://ictr.uow.edu.au/hau3d11/index.html
Häussermann, K., Zweigle, O., Levi, P., Wieland, M., Leymann, F., Siemoneit, O. and Hubig, C. (2010), “Understanding and
Designing Situation-Aware Mobile and Ubiquitous Computing Systems”, World Academy of Science, Engineering and
Technology, Vol.14 No.1, pp.329-339.
HBU, Workshop on Human Behavior Understanding: Inducing Behavioral Change, AmI 2011, http://staff.science.uva.nl/~asalah/hbu2011/
Hoey, J. and Poupart, P. (2005), “Solving POMDPs with continuous or large discrete observation spaces”, In Proc. Int’l. Joint
Conf. on Artificial Intelligence, pp.1332–1338.
ICAPS, International Conference on Automated Planning and Scheduling, http://icaps11.icaps-conference.org/
IJAMI, International Joint Conference on Ambient Intelligence, http://www.ami-11.org/
IJCAI, International Joint Conference on Artificial Intelligence, http://ijcai.org/
ISWC, International Symposium on Wearable Computers, http://www.iswc.net/
IWFAR, International Workshop on Frontiers in Activity Recognition using Pervasive Sensing, Pervasive2011,
http://di.ncl.ac.uk/iwfar/
Kowalski, R. A. and Sergot, M. J. (1986), "A Logic-based Calculus of Events", New Generation Computing, Vol.4, pp.67-95.
Lee, S.W. and Mase, K. (2002), "Activity and location recognition using wearable sensors", IEEE Pervasive Computing,
Vol.1 No.3, pp.24-32.
Levesque, H., Pirri, F. and Reiter, R. (1998), “Foundations for the situation calculus”, Electronic Transactions on Artificial
Intelligence, Vol.2 No.3-4, pp.159-178.
Lin, M. and Easterbrook, S. (2007), "Evaluating User-centric Adaptation with Goal Models", First International Workshop on Software Engineering for Pervasive Computing Applications, Systems, and Environments (SEPCASE '07), pp.6-13.
Moeslund, T.B., Hilton, A. and Krüger, V. (2006), “A survey of advances in vision-based human motion capture and
analysis,” Comput. Vis. Image Understand., Vol.104 No.2, pp.90-126.
NIPS, The Neural Information Processing Systems, http://nips.cc/
PAIR, Workshop on Plan, Activity and Intent Recognition (PAIR), AAAI 2011, http://people.ict.usc.edu/~pynadath/PAIR2011/
Parkka, J., Ermes, M., Korpipaa, P., Mantyjarvi, J., Peltola, J. and Korhonen, I. (2006), “Activity classification using realistic
data from wearable sensors”, IEEE Transactions on Information Technology in Biomedicine, Vol.10 No.1, pp.119-128.
PerCom, IEEE international conference on pervasive computing and communications, http://www.percom.org/
Pervasive, Conference on Pervasive Computing, http://pervasiveconference.org/
Philipose, M., Fishkin, K.P., Perkowitz, M., Patterson, D.J., Fox, D., Kautz, H. and Hahnel, D. (2004), “Inferring activities
from interactions with objects”, IEEE Pervasive Computing, Vol.3 No.4, pp.50-57.
Rashidi, P., Cook, D.J., Holder, L.B. and Schmitter-Edgecombe, M. (2011), "Discovering Activities to Recognize and Track in a Smart Environment", IEEE Transactions on Knowledge and Data Engineering, Vol.23 No.4, pp.527-539.
SAGAware2011, International Workshop on Situation, Activity and Goal Awareness, http://scm.ulster.ac.uk/~lchen/SAGAware2011/.
Saha, D. and Mukherjee, A. (2003), "Pervasive computing: a paradigm for the 21st century", Computer, Vol.36 No.3, pp.25-31.
Sanchez, D. and Tentori, M. (2008), "Activity recognition for the smart hospital", IEEE Intelligent Systems, Vol.23 No.2,
pp.50-57.
Smart, P. R., Bahrami, A., Braines, D., McRae-Spencer, D., Yuan, J. and Shadbolt, N. R. (2007), “Semantic Technologies
and Enhanced Situation Awareness”, 1st Annual Conference of the International Technology Alliance (ACITA), Maryland,
USA.
Turaga, P., Chellappa, R., Subrahmanian, V.S. and Udrea, O. (2008), "Machine Recognition of Human Activities: A Survey", IEEE Transactions on Circuits and Systems for Video Technology, Vol.18 No.11, pp.1473-1488.
UbiComp, Conference on Ubiquitous Computing, http://ubicomp.org/
Weiser, M. (2002), “The computer for the 21st century (reprint)”, Pervasive Computing, Vol.1 No.1, pp.19-25.
Dr. Liming Chen is a lecturer at the School of Computing and Mathematics, University of Ulster, UK. He received his BSc
and MSc in Computing Engineering from Beijing Institute of Technology, China, and DPhil in Artificial Intelligence from
De Montfort University, U.K. His current research interests include semantic technologies, ontology enabled context
management, intelligent agents, information/knowledge fusion and reasoning, assistive technologies and their applications in
smart homes and intelligent environments. Liming Chen can be contacted at: [email protected].
Dr. Parisa Rashidi is a research faculty member at the Information and Computer Science and Engineering Department,
University of Florida, USA. She received her Ph.D. in Computer Science from Washington State University. Her research
interests include ambient intelligence, ambient assisted living systems and applying data mining and machine learning
techniques to various pervasive health care problems. Parisa Rashidi can be contacted at [email protected]