Mixed-Initiative Dialogue in Case-Based Reasoning
David McSherry
School of Information and Software Engineering
University of Ulster, Coleraine BT52 1SA, Northern Ireland
[email protected]
Abstract. The importance of support for mixed-initiative dialogue in case-based reasoning (CBR) is being increasingly recognised. We examine the
dialogue features that characterise the approach, such as the ability to select
questions that are in some sense most useful while allowing the user to
volunteer data at any stage, and the potential benefits in terms of acceptability
to users and problem-solving efficiency. We also briefly review progress made
in increasing the quality of problem-solving dialogues in CBR tools for fault
diagnosis and product recommendation.
1 Introduction
The importance of support for mixed-initiative dialogue is being increasingly
recognised by CBR researchers and practitioners (Aha et al., 2001; McSherry, 2001a;
Gupta, 2001; Shimazu et al., 2001; Aha & Gupta, 2002). Intelligent systems are
unlikely to be accepted if they insist on asking the questions and ignore the user’s
opinion (Patil et al., 1982; McSherry, 1986; Berry & Broadbent, 1987). On the other
hand, it is not unreasonable for users to expect an intelligent system, like a human
expert, to be capable of taking the initiative when the need arises. As well as being
more acceptable to users, mixed-initiative dialogue may help to improve problem-solving efficiency by enabling the system to adapt to individual users, ranging from
those who may have a positive contribution to make to those who need maximum
guidance and support (McSherry, 2001a; Aha & Gupta, 2002).
Of course, initiative can be shared in ways that need not involve user interaction,
for example between case-based and generative modules in mixed-initiative planning
(Muñoz-Avila et al., 2000). However, we focus here on the sharing of initiative
between system and user in tools for interactive CBR. In the following sections, we
examine the features that characterise mixed-initiative dialogue and briefly review
progress made in increasing the quality of problem-solving dialogues in CBR tools for
fault diagnosis and product recommendation.
2 Basic Dialogue Features
In Table 1, we identify dialogue features found in most intelligent systems that
support mixed-initiative dialogue, and reasons for their importance. Examples of CBR
tools that support these features include Adaptive Place Adviser (Göker & Thompson,
2000), CaseAdvisor (Yang & Wu, 2001), CBR Strategist (McSherry, 2001a),
ExpertGuide (Shimazu et al., 2001), and NaCoDAE (Aha et al., 2001). However, our
list of required features is intended to be representative rather than prescriptive. In a
system that has access to information from external sources, for example, it is easy to
imagine safety-critical situations in which it may be undesirable for the user to remain
in control.
Table 1. Characteristic features of mixed-initiative dialogue

Volunteering Data
The user can volunteer data without waiting to be asked.
Rationale:
- User may know which features were most useful in the solution of a previous similar problem
- Involves the user more closely in the problem-solving process

User in Control
The user can decide when the system has the initiative, and recapture the initiative at any stage.
Rationale:
- Helps to avoid frustration for the user

Intelligent Question Selection
When given the initiative, the system can select questions that are, in some sense, most useful.
Rationale:
- User may need guidance in the selection of relevant tests
- May have an important bearing on problem-solving efficiency
2.1 Volunteering Data
Allowing the user to volunteer data is likely to affect the problem-solving process in
ways that the system cannot ignore. In inductive retrieval, the system must avoid
asking a question that has already been answered, while in similarity-based retrieval,
the ranking of candidate cases in order of similarity must be updated. Such changes
may in turn affect the ranking of questions in order of usefulness. Another important
issue is the need to avoid asking questions whose answers can be inferred from data
volunteered by the user (Aha et al., 2001; Gupta, 2001; Aha & Gupta, 2002). A
related problem that appears to have received less attention is how to prevent the user
from volunteering inconsistent data (McSherry, 2001a).
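To make this bookkeeping concrete, the following sketch (written for this discussion and not taken from any of the cited systems) shows one way of recording volunteered data: questions that have already been answered, or whose answers follow from simple hypothetical inference rules, are dropped from consideration, and a value that contradicts an earlier answer is rejected. The rule format and attribute names are assumptions made for illustration.

def accept_volunteered(answers, question, value, inference_rules):
    """Record a volunteered (question, value) pair and propagate simple inferences."""
    if question in answers and answers[question] != value:
        raise ValueError(f"'{value}' contradicts the earlier answer '{answers[question]}'")
    answers[question] = value
    # One-step propagation: a rule is ((premise_q, premise_v), inferred_q, inferred_v)
    for premise, inferred_q, inferred_v in inference_rules:
        if premise == (question, value) and inferred_q not in answers:
            answers[inferred_q] = inferred_v
    return answers

def remaining_questions(all_questions, answers):
    """Questions the system may still usefully ask."""
    return [q for q in all_questions if q not in answers]

# Hypothetical example: volunteering that the printer is powered off also settles
# the status-light question, so only 'print-quality' remains to be asked.
rules = [(("printer-powered-on", "no"), "status-light", "off")]
answers = accept_volunteered({}, "printer-powered-on", "no", rules)
print(remaining_questions(["printer-powered-on", "status-light", "print-quality"], answers))
# -> ['print-quality']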
2.2 User in Control
There is considerable variation in the ways that sharing of initiative is managed in
different approaches. In CBR Strategist, the system alternates between asking the user
direct questions and allowing the user to volunteer data by selecting from an unranked
list of questions (McSherry, 2001a). At any stage, the user can switch dialogue modes
at the click of a command button. In Adaptive Place Adviser (Göker & Thompson,
2000), the dialogue more closely resembles a human conversation with the system
mainly asking the questions (e.g. ‘What type of cuisine would you like?’). Before
answering any question, the user can take the initiative to ask for clarification (e.g.
‘What types are there?’), volunteer data, or change her answer to a previous question.
NaCoDAE and other tools influenced by the ‘conversational’ approach to CBR
pioneered by Inference Corporation are distinctive in that they do not ask the user
direct questions (Aha et al., 2001). Instead, the user selects from a list of questions
ranked by the system in order of usefulness. The sharing of initiative is more subtle
here than in other approaches. Nevertheless, the user is effectively in control, with the
options of selecting the highest-ranked question, thereby ceding the initiative to the
system, or taking the initiative by selecting a different question.
An advantage of this style of interface is that there is no need to switch modes as
in systems that alternate between asking direct questions and allowing the user to
volunteer data with no intervention or ‘prompting’. Another advantage in comparison
with asking direct questions is that no effort is wasted when the user is unable (or
prefers not) to answer the question considered most useful by the system. Arguably, a
trade-off for these advantages is a greater cognitive burden for the user than when
faced with only one question at a time.
2.3 Intelligent Question Selection
Usually the objective of a question-selection strategy is to optimise some aspect of
retrieval performance such as precision, recall, accuracy, or the average length of
problem-solving dialogues (Doyle & Cunningham, 2000; Aha et al., 2001; Kohlmaier
et al., 2001; McSherry, 2001b; Shimazu et al., 2001; Yang & Wu, 2001). Another
possibly conflicting criterion is whether the system can explain the relevance of the
questions it asks, an issue we shall return to in Section 3.
A lazy or demand-driven approach to question selection is essential to support
some of the dialogue features that we discuss here and in the following section. In
NaCoDAE, questions are ranked in decreasing order of their frequency in the most
similar cases (Aha et al., 2001). An alternative approach to question selection in
similarity-based retrieval is to select the question that maximises the expected
variance in the similarity of candidate cases (Kohlmaier et al., 2001). The latter
approach is analogous to selecting the question that maximises information gain in
inductive retrieval.
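As a simplified illustration of the frequency-based ranking just described (not NaCoDAE's actual implementation), the sketch below ranks unanswered questions by how often they occur in the currently most similar cases; the case representation, a dictionary of question-value pairs per case, is an assumption made for the example.

from collections import Counter

def rank_questions_by_frequency(candidate_cases, asked):
    """Rank unanswered questions by how often they occur in the most similar cases."""
    counts = Counter(q for case in candidate_cases for q in case if q not in asked)
    return [q for q, _ in counts.most_common()]

# Hypothetical printer-diagnosis cases: 'print-quality' appears in both candidate
# cases, so it is ranked above 'paper-jam'.
cases = [{"printer-type": "laser", "print-quality": "faint"},
         {"print-quality": "faint", "paper-jam": "yes"}]
print(rank_questions_by_frequency(cases, asked={"printer-type"}))
# -> ['print-quality', 'paper-jam']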
In demand-driven inductive retrieval, an explicit decision tree is not constructed
(Smyth & Cunningham, 1994; McSherry, 2001a). Instead, questions are selected
dynamically at retrieval time, often on the basis of information gain, and the user’s
answers are used to construct a path in a virtual decision tree. Advantages of the
demand-driven approach include the ability to support more flexible dialogue. For
example, a well-known limitation of inductive retrieval based on static decision trees
is the possibility of retrieval failure if the user is unable (or declines) to answer any
question in the decision tree (Watson, 1997). In demand-driven induction of decision
trees, a simple solution to this problem is to select the next best question when the
value of the most useful attribute is unknown at problem-solving time (McSherry,
1995).
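The sketch below gives a simplified picture of demand-driven question selection by information gain, including the fall-back to the next best question when the user's answer is unknown. It is an illustration written for this discussion, not the algorithm of McSherry (1995); the case representation (dictionaries with an 'outcome' field) and the ask callback are assumptions.

import math
from collections import Counter

def entropy(cases):
    """Entropy of the outcome-class distribution in a set of cases."""
    counts = Counter(c["outcome"] for c in cases)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def information_gain(cases, question):
    """Reduction in entropy obtained by partitioning the cases on the question's value."""
    partitions = {}
    for c in cases:
        partitions.setdefault(c.get(question), []).append(c)
    remainder = sum(len(p) / len(cases) * entropy(p) for p in partitions.values())
    return entropy(cases) - remainder

def next_question(cases, unanswered, ask):
    """Try questions in decreasing order of gain; skip those the user cannot answer.

    `ask(question)` is assumed to return the user's answer, or None for 'unknown'.
    """
    for q in sorted(unanswered, key=lambda q: information_gain(cases, q), reverse=True):
        answer = ask(q)
        if answer is not None:
            return q, answer
    return None, None  # no answerable question remains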
An important advantage of information gain (Quinlan, 1986) is its tendency to
produce smaller decision trees than other splitting criteria (Mitchell, 1997; McSherry,
2001b), thus helping to reduce the average length of problem-solving dialogues.
Reasons for minimising the length of problem-solving dialogues include avoiding the
risks and costs of unnecessary tests, avoiding frustration for the user, reducing
network traffic, and simplifying explanations of how conclusions were reached
(Breslow & Aha, 1997; Doyle & Cunningham, 2000; McSherry, 2001b). Increasing
dialogue efficiency is also a motivating factor in conversational CBR approaches that
exploit taxonomic and causal relations between case features to eliminate redundant
or irrelevant questions (Gupta, 2001; Aha & Gupta, 2002).
3 Additional Dialogue Features
In Table 2, we identify dialogue features that are not essential for mixed-initiative
dialogue but have been used to increase the quality of problem-solving dialogues in
interactive CBR. One of the five features, query refinement, is specific to
recommender systems. All of the CBR tools mentioned in Section 2 support one or
more of the other features.
Table 2. Additional dialogue features associated with mixed-initiative dialogue

Tolerating Incomplete Data
The user can decline to answer any question.
Rationale:
- Providing the answer may involve a test that the user is incompetent or reluctant to perform
- In a recommender system, the user may be indifferent to the values of certain attributes

Updating Data
The user can change her answer to a previous question at any stage.
Rationale:
- User may have been uncertain about her previous answer
- New information may have come to light
- The user may wish to examine the effects of tests whose results are unknown on the outcome (sensitivity analysis)
- The system may be unable to offer a solution because the problem is over-constrained, or unable to discriminate between possible solutions because of insufficient information

Explanation of Reasoning
Before answering any question, the user can ask why it is relevant.
Rationale:
- User may like to know why an expensive or difficult test is necessary
- Users are likely to have more confidence in a system that can explain its reasoning

User-Specified Goals
The user can select a target diagnosis or outcome class to guide the selection of relevant questions.
Rationale:
- Involves the user more closely in the problem-solving process
- Problem-solving efficiency may benefit, as an experienced user may have a good idea about what is causing the problem

Query Refinement
The user can revise (or initiate) a query by proposing changes relative to an alternative suggested by the system.
Rationale:
- Users often find it easier to tweak a specific example than to formulate queries
3.1 Tolerating Incomplete Data
Often in fault diagnosis, the user is unable or disinclined to answer every question that
the system may ask, for example because providing the answer involves a test that the
user is incompetent or reluctant to perform (McSherry, 2001a). A similar problem
arises in a recommender system when the user is indifferent to the values of certain
attributes (McSherry, 2002a). Allowing the user to select a question other than the one
considered most useful by the system, as in NaCoDAE, is one way to ensure that
progress can be made in the absence of complete data (Aha et al., 2001). Systems
that ask direct questions usually allow the user to answer ‘unknown’ or ‘anything’ to
any question (Göker & Thompson, 2000; McSherry, 2001a; Shimazu et al., 2001). A
demand-driven approach to question selection is therefore essential to enable the
system to select the next best question when the answer to the most useful question is
unknown.
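As a toy illustration of how a recommender might tolerate indifference, the sketch below simply leaves attributes answered with 'anything' out of the similarity computation; the attribute names and the matching-based similarity measure are invented for the example and are not taken from any of the cited systems.

def similarity(query, case):
    """Fraction of constrained attributes (those not answered 'anything') that match."""
    constrained = {a: v for a, v in query.items() if v != "anything"}
    if not constrained:
        return 1.0
    return sum(1 for a, v in constrained.items() if case.get(a) == v) / len(constrained)

query = {"cuisine": "italian", "price": "anything", "location": "city centre"}
case = {"cuisine": "italian", "price": "expensive", "location": "city centre"}
print(similarity(query, case))  # -> 1.0, since 'price' is left unconstrained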
3.2 Updating Data
Allowing users to change their answers to previous questions is important for several
reasons, not least of which is support for sensitivity analysis, which may help to
increase the user’s confidence in the results (McSherry, 2001a). On regaining the
initiative, the system must respond appropriately to changes in the reported data;
again this highlights the importance of a demand-driven approach to question
selection. An interesting feature of Adaptive Place Adviser is that query adjustments
may be suggested at the initiative of the system rather than the user (Göker &
Thompson, 2000). For example, the system can suggest constraints to be relaxed
when there is no alternative that meets the requirements of the user.
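The following sketch, invented for illustration rather than taken from Adaptive Place Adviser, shows one simple way a system could take the initiative when a query is over-constrained: it suggests relaxing the constraint whose removal would admit the most cases.

def suggest_relaxation(query, cases):
    """Return the constraint whose removal leaves the largest set of matching cases."""
    def matches(case, constraints):
        return all(case.get(a) == v for a, v in constraints.items())
    if any(matches(c, query) for c in cases):
        return None  # the query is satisfiable; nothing needs relaxing
    return max(query, key=lambda a: sum(
        matches(c, {k: v for k, v in query.items() if k != a}) for c in cases))

# Hypothetical restaurant cases: no cheap Italian restaurant exists, and relaxing
# 'price' admits two cases while relaxing 'cuisine' admits only one.
cases = [{"cuisine": "italian", "price": "expensive"},
         {"cuisine": "italian", "price": "moderate"},
         {"cuisine": "indian", "price": "cheap"}]
print(suggest_relaxation({"cuisine": "italian", "price": "cheap"}, cases))  # -> 'price'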
3.3 Explanation of Reasoning
The importance of intelligent systems having the ability to explain their reasoning is
well recognised (Patil et al., 1982; McSherry, 1986; Southwick, 1991; Leake, 1996).
However, the absence of a specific goal in most CBR approaches to question
selection makes it difficult to explain the relevance of questions in terms that are
meaningful to users. To address this issue, question selection in CBR Strategist
(McSherry, 2001a) is based on the evidence-gathering strategies used by doctors, who
are known to rely on hypothetico-deductive reasoning in diagnosis (Elstein et al.,
1978; Kassirer & Kopelman, 1991). One advantage of the approach is that the
relevance of a test can be explained in terms of the purpose for which it was selected,
such as confirming a target diagnosis or eliminating a competing diagnosis. The
approach is best suited to diagnosis and classification tasks in which the number of
outcome classes is small. For example, it is natural in these circumstances to select the
likeliest outcome class as the target outcome class. How to select a target outcome
class is not so obvious when all outcome classes are equally likely, as is usually the case
in a recommender system (McSherry, 2001b).
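The sketch below conveys the flavour of this strategy-based selection in much-simplified form: each question is chosen for an explicit purpose (confirming the target diagnosis or eliminating its strongest competitor), and that purpose doubles as the explanation of the question's relevance. The scoring function, diagnoses, and evidence strengths are assumptions made for illustration; this is not CBR Strategist's actual algorithm.

def select_question(questions, target, competitor, support):
    """Pick a question and return it together with the purpose for which it was chosen.

    `support(question, diagnosis)` is an assumed estimate of how strongly an answer
    to the question is expected to bear on the diagnosis.
    """
    best_confirm = max(questions, key=lambda q: support(q, target))
    best_eliminate = max(questions, key=lambda q: support(q, competitor))
    if support(best_confirm, target) >= support(best_eliminate, competitor):
        return best_confirm, f"to confirm the target diagnosis '{target}'"
    return best_eliminate, f"to eliminate the competing diagnosis '{competitor}'"

# Invented evidence strengths for a toy fault-diagnosis example
strengths = {("fan-noise", "overheating"): 0.8, ("fan-noise", "software-fault"): 0.1,
             ("error-code", "overheating"): 0.3, ("error-code", "software-fault"): 0.7}
question, reason = select_question(["fan-noise", "error-code"], "overheating",
                                   "software-fault", lambda q, d: strengths[(q, d)])
print(question, "-", reason)  # -> fan-noise - to confirm the target diagnosis 'overheating'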
Recently we presented a new approach to question selection in which CBR
Strategist’s multiple-strategy approach is replaced by the single strategy of increasing
the probability of the target outcome class (McSherry, 2001c). While retaining the
ability to explain the relevance of questions in strategic terms, the new algorithm
tends to give better performance in terms of accuracy and problem-solving efficiency,
at least on binary classification tasks. We also described an alternative approach to
explanation of attribute relevance that can be used with any attribute-selection
strategy in top-down induction of decision trees. Regardless of how an attribute is
selected, its relevance can be explained in terms of its effects, such as confirming the
likeliest outcome class in the data set. Of course, this does not amount to an
explanation of why the attribute was selected.
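A toy version of this effect-based explanation is sketched below: whatever criterion selected the attribute, its relevance is reported in terms of how knowing its value could raise the probability of the likeliest outcome class in the data set. The data and wording are invented for the example.

from collections import Counter

def explain_effect(cases, attribute):
    """Explain an attribute's relevance by its possible effect on the likeliest class."""
    outcomes = Counter(c["outcome"] for c in cases)
    likeliest, count = outcomes.most_common(1)[0]
    prior = count / len(cases)
    posteriors = []
    for value in {c.get(attribute) for c in cases}:
        subset = [c for c in cases if c.get(attribute) == value]
        posteriors.append(sum(1 for c in subset if c["outcome"] == likeliest) / len(subset))
    return (f"Knowing '{attribute}' can raise the probability of '{likeliest}' "
            f"from {prior:.2f} to as much as {max(posteriors):.2f}")

cases = [{"engine-starts": "no", "outcome": "flat-battery"},
         {"engine-starts": "no", "outcome": "flat-battery"},
         {"engine-starts": "yes", "outcome": "empty-tank"}]
print(explain_effect(cases, "engine-starts"))
# -> Knowing 'engine-starts' can raise the probability of 'flat-battery' from 0.67 to as much as 1.00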
3.4 User-Specified Goals
In goal-driven approaches to question selection, the user can be given the option of
selecting a target diagnosis rather than leaving this task to the system. In this way, the
user can be more closely involved in the problem-solving process. Problem-solving
efficiency may also benefit, as a user with experience of fault diagnosis in the domain
may have a good idea about what is causing the problem (McSherry, 2001a).
3.5 Query Refinement
In product recommendation, Hammond et al.’s (1996) insight that users often find it
easier to critique an actual example than to formulate queries highlights the
importance of support for query revision based on adjustments (or tweaks) proposed
by the user relative to a suggested alternative. Entree is a recommender system for
restaurants in which, for example, the user can ask to see restaurants that are ‘like this
but cheaper’ or ‘like this but with a different type of cuisine’ (Burke, 2000).
4 Conclusions
We have examined some of the features associated with mixed-initiative dialogue in
intelligent systems and the potential benefits in terms of acceptability to users and
problem-solving efficiency. The techniques discussed are being increasingly used to
improve the quality of problem-solving dialogues in interactive CBR, and promising
progress has been made in addressing such issues as minimising dialogue length,
maintenance of consistency in dialogue, and explanation of reasoning. However, it is
important to recognise that the user-interface requirements of interactive CBR are
continually evolving with the development of new and more demanding applications.
For example, the emergence of product recommendation as a major application
domain has highlighted the limitations of techniques that were successfully used in
more traditional applications and changed the dynamics of problem-solving dialogues
in ways that remain to be fully investigated (Hammond et al., 1996; Göker &
Thompson, 2000; Bridge, 2001; Kohlmaier et al., 2001; Shimazu, 2001; McSherry,
2002b).
References
1. Aha, D.W., Breslow, L.A., Muñoz-Avila, H.: Conversational Case-Based Reasoning. Applied Intelligence 14 (2001) 9-32
2. Aha, D.W., Gupta, K.M.: Causal Query Elaboration in Conversational Case-Based Reasoning. Proceedings of the Fourteenth International Conference of the Florida Artificial Intelligence Research Society. AAAI Press, Florida (2002) 95-100
3. Berry, D.C., Broadbent, D.E.: Expert Systems and the Man-Machine Interface. Part Two: The User Interface. Expert Systems 4 (1987) 18-28
4. Breslow, L.A., Aha, D.W.: Simplifying Decision Trees: a Survey. Knowledge Engineering Review 12 (1997) 1-40
5. Bridge, D.: Product Recommendation: A New Direction. Proceedings of the Workshop Programme at the Fourth International Conference on Case-Based Reasoning (2001) 79-86
6. Burke, R.: A Case-Based Reasoning Approach to Collaborative Filtering. In: Blanzieri, E., Portinale, L. (eds) Advances in Case-Based Reasoning. LNAI, Vol. 1898. Springer-Verlag, Berlin Heidelberg (2000) 370-379
7. Doyle, M., Cunningham, P.: A Dynamic Approach to Reducing Dialog in On-Line Decision Guides. In: Blanzieri, E., Portinale, L. (eds) Advances in Case-Based Reasoning. LNAI, Vol. 1898. Springer-Verlag, Berlin Heidelberg (2000) 49-60
8. Elstein, A.S., Schulman, L.A., Sprafka, S.A.: Medical Problem Solving: an Analysis of Clinical Reasoning. Harvard University Press, Cambridge, Massachusetts (1978)
9. Göker, M.H., Thompson, C.A.: Personalized Conversational Case-Based Recommendation. In: Blanzieri, E., Portinale, L. (eds) Advances in Case-Based Reasoning. LNAI, Vol. 1898. Springer-Verlag, Berlin Heidelberg (2000) 99-111
10. Gupta, K.M.: Taxonomic Conversational Case-Based Reasoning. In: Aha, D.W., Watson, I. (eds) Case-Based Reasoning Research and Development. LNAI, Vol. 2080. Springer-Verlag, Berlin Heidelberg (2001) 219-233
11. Hammond, K.J., Burke, R., Schmitt, K.: A Case-Based Approach to Knowledge Navigation. In: Leake, D.B. (ed) Case-Based Reasoning: Experiences, Lessons & Future Directions. AAAI Press/MIT Press, Menlo Park, California (1996) 125-136
12. Kassirer, J.P., Kopelman, R.I.: Learning Clinical Reasoning. Williams and Wilkins, Baltimore, Maryland (1991)
13. Kohlmaier, A., Schmitt, S., Bergmann, R.: A Similarity-Based Approach to Attribute Selection in User-Adaptive Sales Dialogues. In: Aha, D.W., Watson, I. (eds) Case-Based Reasoning Research and Development. LNAI, Vol. 2080. Springer-Verlag, Berlin Heidelberg (2001) 306-320
14. Leake, D.B.: CBR in Context: the Present and Future. In: Leake, D.B. (ed) Case-Based Reasoning: Experiences, Lessons & Future Directions. AAAI Press/MIT Press, Menlo Park, California (1996) 3-30
15. McSherry, D.: Intelligent Dialogue Based on Statistical Models of Clinical Decision Making. Statistics in Medicine 5 (1986) 497-502
16. McSherry, D.: Integrating Machine Learning, Problem Solving and Explanation. In: Bramer, M., Nealon, J., Milne, R. (eds) Research and Development in Expert Systems XII. SGES Publications, Oxford (1995) 145-157
17. McSherry, D.: Interactive Case-Based Reasoning in Sequential Diagnosis. Applied Intelligence 14 (2001a) 65-76
18. McSherry, D.: Minimizing Dialog Length in Interactive Case-Based Reasoning. Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence (2001b) 993-998
19. McSherry, D.: Explanation of Attribute Relevance in Decision-Tree Induction. In: Bramer, M., Coenen, F., Preece, A. (eds) Research and Development in Intelligent Systems XVIII. Springer-Verlag, London (2001c) 39-52
20. McSherry, D.: The Inseparability Problem in Interactive Case-Based Reasoning. Knowledge-Based Systems 15 (2002a) 293-300
21. McSherry, D.: Recommendation Engineering. Proceedings of the Fifteenth European Conference on Artificial Intelligence. IOS Press (2002b) 86-90
22. Mitchell, T.M.: Machine Learning. McGraw-Hill (1997)
23. Muñoz-Avila, H., Aha, D.W., Breslow, L.A., Nau, D.S., Weber, R.: Integrating Conversational Case Retrieval with Generative Planning. In: Blanzieri, E., Portinale, L. (eds) Advances in Case-Based Reasoning. LNAI, Vol. 1898. Springer-Verlag, Berlin Heidelberg (2000) 210-221
24. Patil, R.S., Szolovits, P., Schwartz, W.B.: Modeling Knowledge of the Patient in Acid-Base and Electrolyte Disorders. In: Szolovits, P. (ed) Artificial Intelligence in Medicine. Westview Press, Boulder, Colorado (1982) 191-226
25. Quinlan, J.R.: Induction of Decision Trees. Machine Learning 1 (1986) 81-106
26. Shimazu, H.: ExpertClerk: Navigating Shoppers' Buying Process with the Combination of Asking and Proposing. Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence (2001) 1443-1448
27. Shimazu, H., Shibata, A., Nihei, K.: ExpertGuide: a Conversational Case-Based Reasoning Tool for Developing Mentors in Knowledge Spaces. Applied Intelligence 14 (2001) 33-48
28. Smyth, B., Cunningham, P.: A Comparison of Incremental Case-Based Reasoning and Inductive Learning. In: Haton, J-P., Keane, M., Manago, M. (eds) Advances in Case-Based Reasoning. LNAI, Vol. 984. Springer-Verlag, Berlin Heidelberg (1994) 151-164
29. Southwick, R.W.: Explaining Reasoning: an Overview of Explanation in Knowledge-Based Systems. Knowledge Engineering Review 6 (1991) 1-19
30. Watson, I.: Applying Case-Based Reasoning: Techniques for Enterprise Systems. Morgan Kaufmann, San Francisco (1997)
31. Yang, Q., Wu, J.: Enhancing the Effectiveness of Interactive Case-Based Reasoning with Clustering and Decision Forests. Applied Intelligence 14 (2001) 49-64