A History and Introduction to the Algebra of
Conditional Events and Probability Logic
Hung T. Nguyen and Elbert A. Walker
Department of Mathematical Sciences
New Mexico State University
Las Cruces, New Mexico 88003
Abstract
This article is meant to serve as an introduction to the following series of
papers on various aspects of conditional event algebra and probability logic. It
addresses the history of the problem and gives an overview of the development
of the subject and its impact on the investigation of problems within AI.
Contents

1 Introduction
2 Bayesian Methodology and AI Problems
3 Conditionals in Probability Theory
4 Logic and Probability
5 Algebraic Logic of Conditionals
6 Connections with Approximations
7 Role of Conditional Event Algebra in AI

1 Introduction
Modern technology has given hope of creating machines capable of intelligent behavior. There are many instances of machines that gather information, manipulate it,
reach some “decision”, and trigger some further action. To create machines capable
of ever more sophisticated tasks, and of exhibiting ever more human-like behavior, a
better understanding of the structure of human knowledge and the process of human
reasoning is needed. [IEEE Transactions on Systems, Man, and Cybernetics, Vol. 24,
No. 12, pp. 1671-1675, December 1994.] Knowledge representation and manipulation
are important ingredients in this endeavor. Knowledge, or information, is often
conditional in nature,
such as “if-then” rules in expert systems. This paper is about some recent work
on such conditional information modeling, viewed as an extension of unconditional
information modeling in probability theory. We explore the mathematical modeling
of conditional statements in natural language and the associated probability logics.
This is in line with the probabilistic approach to certain problems in AI. But uncertainty concepts are not restricted to randomness. Other numerical uncertainties
such as belief and fuzziness will be touched upon. The series of papers following this
introductory article will provide some details of this new area of investigation.
In Section 2, we use Bayesian statistical theory to illustrate the need for a more
general framework for decision making. In Section 3, conditioning in probability
theory is revisited. Section 4 deals with probability logic and surveys some work
of logicians in this area. Section 5 describes the path to a mathematical theory of
conditionals, which goes from Boole to Stone and may be viewed as a foundation
for algebraic logic of conditionals. In Section 6, we describe some approximation
problems related to conditionals. Finally, in Section 7, we discuss the role of this
theory in AI.
2 Bayesian Methodology and AI Problems
Decision making in the face of uncertainty is a common and important human activity. Some aspects of it have become a science. When the uncertainty involved
is attributed to randomness, we have available statistical decision theory, based on
probabilistic laws. There are two basic ingredients in the Bayesian approach to
decision making. The first is knowledge representation. If the problem is the location
of an unknown parameter, knowledge of that location is represented by a probability
distribution. “Objective” information about that parameter can be obtained through
sampling. But in the Bayesian procedure, one uses additional information, generally
more subjective in nature. This subjective information is also in the form of a probability distribution. It is this additional information that is often an issue for debate.
However, the Bayesian principle seems convincing. Any kind of information available
should be used in the decision making process. In any case, no additional mathematical tools are needed to process this added information. It concerns probability
distributions only, and the logic used is classical two-valued logic.
But in building expert systems or in improving existing ones, one quickly surmises
that the Bayesian methodology is too restrictive. Knowledge is present which cannot
be expressed in the form of probability distributions. Indeed, this knowledge is often
expressed in natural language which cannot always be put easily into mathematical
terms. This poses the problem of information modeling. If information in the form of
if-then rules, for example, expressed in natural language, is to be used, this information must be translated into mathematical terms for processing. In any case, unlike
the situation in Bayesian decision-making, data might not be represented in the form
of probability measures, and the combination of various types of information becomes
a technical problem. New mathematical tools are needed, including new logics for
reasoning, modeling of semantic information via, say, fuzzy set theory [37], and the
theory of evidence [33].
Here is a typical situation where, although the uncertainty analysis is entirely
within the theory of probability, new tools in this theory are needed. Let us consider
the problem of “reasoning with knowledge” in AI. The field of statistics concerns a
single and specific case of this general problem. In a statistical problem, for example
the estimation of an unknown parameter of a model, knowledge is probabilistic in
nature, namely the prior distribution of that parameter (if you are Bayesian), and
the observations of the random variable of interest. The logic need not be spelled out
since it is always the classical two-valued one.
Now consider probabilistic knowledge in the form of a collection of statements
together with their probabilities. (See, for example, [27].) The problem is how to
infer probabilities for other statements. Mathematically, this is an approximation
problem. More generally, viewing “knowledge” as probability measures on some given
space, what do we mean by “reasoning with probabilities”? The objects of interest are
a collection of probability measures, and to reason with them we need a logic on these
objects. This means that we need notions of and, or, and not, and rules of inference.
In the classical situation, the objects of interest are propositions, which are viewed
as elements of a Boolean algebra. Now a Boolean algebra is a lattice, with its partial
order given by a ≤ b if a ∧ b = a. Conversely, the operations ∨ and ∧ may be defined
in terms of ≤ by a ∨ b = sup{a, b} and a ∧ b = inf{a, b}. So a starting point for a
more general logic can be a lattice or, even more generally, a partial order. So in
reasoning with probabilities, one can start with a partial order in which P ≤ Q
captures the essence of “P entails Q”. From ≤ one hopes to obtain the various logical
connectives, including and and or. New concepts like P|Q arise. Thus new types of
problems give rise to new mathematical concepts and to the need for new mathematical
tools. (See [3].)
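The point that ∨ and ∧ can be recovered from the partial order alone can be made concrete with a small sketch (the universe below is hypothetical, and Python is used only for illustration): on the Boolean algebra of subsets of {0, 1, 2} ordered by inclusion, sup and inf computed purely from ≤ coincide with union and intersection.

```python
from itertools import combinations

# The Boolean algebra of all subsets of a small hypothetical universe.
universe = {0, 1, 2}
elements = [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def leq(a, b):          # the partial order: a <= b iff a ^ b = a
    return a <= b

def join(a, b):
    """a v b = sup{a, b}: the least element lying above both a and b."""
    uppers = [x for x in elements if leq(a, x) and leq(b, x)]
    return min(uppers, key=len)   # the least upper bound is the smallest upper

def meet(a, b):
    """a ^ b = inf{a, b}: the greatest element lying below both a and b."""
    lowers = [x for x in elements if leq(x, a) and leq(x, b)]
    return max(lowers, key=len)

a, b = frozenset({0, 1}), frozenset({1, 2})
assert join(a, b) == a | b    # sup recovers set union
assert meet(a, b) == a & b    # inf recovers set intersection
```

The same computation makes sense over any finite partial order, which is exactly why a partial order is a plausible starting point for a more general logic.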
Even if we restrict ourselves to the problem of building expert systems and are
in situations where probability is appropriate for uncertainty modeling, for example,
Bayesian networks [26], [31], it is well known that computation is the main problem
unless the networks have some special structures. From a logical viewpoint, this is
due to the fact that probability logic is non-truth functional. Also, when treating
probabilities as “truth values”, we need to introduce new conditioning operators in
logic.
Realistic AI problems will involve knowledge reasoning in its most general forms.
For data fusion or combination of evidence from different sources, it seems necessary
to be able to regard conditional statements (information) as mathematical entities.
3 Conditionals in Probability Theory
A probability space (Ω, A, P) is a model for describing randomness. Regardless of
how probabilities are assigned to events, there is a consensus on the mathematical
properties of the model, and in particular on the properties of P as a function.
“Unconditional” probabilities are given by a map P : A → [0, 1], and there is the
notion of “conditional probability” given by P(a|b) = P(ab)/P(b), where a and b
are two events. But there is no a priori meaning given to a|b itself. Thus in the
unconditional case, one defines events first and then assigns probabilities to them,
while in the conditional case, one does not even talk about “conditional events”, but
proceeds to assign a “probability” to a conditional situation. De Finetti [9] was
the first to mention “conditional events” outside the operator P. Within probability
theory itself, this situation is handled simply in the case of conditioning on b by
switching to the probability space (b, A ∧ b, P_b), where P_b(a) = P(a|b).
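The switch to the space (b, A ∧ b, P_b) can be sketched on a finite space (point masses below are hypothetical): restricting to b and renormalizing yields a genuine probability measure whose values agree with P(a|b) = P(ab)/P(b).

```python
# A hypothetical finite probability space: point masses on four sample points.
P = {'w1': 0.1, 'w2': 0.2, 'w3': 0.3, 'w4': 0.4}

def prob(event):                       # P on subsets of Omega
    return sum(P[w] for w in event)

def cond_prob(a, b):                   # P(a|b) = P(ab)/P(b)
    return prob(a & b) / prob(b)

b = {'w2', 'w3', 'w4'}
P_b = {w: P[w] / prob(b) for w in b}   # the renormalized measure P_b on b
assert abs(sum(P_b.values()) - 1.0) < 1e-12            # P_b is a probability
a = {'w1', 'w3'}
assert abs(cond_prob(a, b) - sum(P_b[w] for w in a & b)) < 1e-12
```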
However, the situation is different when we use probability to quantify production
rules in expert systems. Given an influence diagram, or a directed graph expressing
causal relationships between events, one can put weights on the arrows of the graph
by conditional probabilities. But the problem of combining conditional probabilities
with different antecedents, such as P(a|b) and P(c|d), is intractable unless more
assumptions, such as conditional independence, are made about the graph. An
alternative is to combine objects like (a|b) and (c|d), yielding another “conditional
event” (e|f). Then a value of (e|f) could be given, either by asking experts or by
calculation from previous information. Without an intrinsic concept of (a|b) and of
combining such conditional events, it is difficult to see how such quantities as the
probability of “(a|b) or (c|d)” can be derived through probability theory. Problems
in AI involving these expressions will be discussed in Section 7.
In the early 1940s, Koopman [23] and Copeland [8] investigated the concept of
“conditional event” with a view to providing a more complete foundation for
probability theory. Kolmogorov [22], in establishing the foundations of probability,
had not addressed this issue. Schay [34] took up the issue and produced a provocative
piece of work. But it came before the time of AI and other areas of possible
application, and so was ignored over the years. Yet, over and over again, the need
to speak of “conditional events” arose, even in probability theory proper,
particularly in the Bayesian framework [25].
4 Logic and Probability
Implication in logic is the most important operator for knowledge-based systems. If
the logic is two-valued, then the implication operator is material implication, given
by b → a = b′ ∨ a. But P(b′ ∨ a) ≠ P(a|b). Thus if conditional probabilities are to be
used as semantic evaluations of “conditionals”, then some other implication b ⇒ a
must be used. This is Stalnaker’s thesis. Further, there exists no such ⇒ with
P(b ⇒ a) = P(a|b) for all a and b. This simply means that b ⇒ a must be allowed
to lie outside the original Boolean algebra of events. The work of Schay confirmed
this: events, identified with indicator functions, were extended to generalized
indicator functions in the same spirit as for fuzzy sets.
In the logic camp, there is the work of Hailperin [20], reexamining and making
rigorous Boole’s work on “division of events”. On the other hand, Adams [1] took
conditionals as primitives. Following Schay [34], if we view indicator functions as
semantics, then conditionals lead to three-valued logic. But as a survey such as
Rescher [32] makes clear, three-valued logics lack syntax: while algebraic logic for
two-valued logic is thoroughly developed, there is no counterpart of Boolean algebras
in three-valued logic. That is, there is no algebraic structure theory of conditional
events. In Hailperin, the intuitive idea of viewing (a|b) as a coset in a Boolean
ring was on the right track, but there was no algebra of cosets for different ideals.
That is, there were no operations defined between elements of quotient rings with
respect to different ideals. Recently, this has changed, and in the next section we
outline some work on the algebra of conditionals.
5 Algebraic Logic of Conditionals
What is the algebraic counterpart of Boolean rings for representing conditional
statements? That is, what is the algebraic counterpart in three-valued logic of
Boolean rings in two-valued logic? To solve this problem, one first must decide on a
representation of conditional events (a|b) themselves. By Lewis’ triviality result [24],
conditional events cannot be represented by elements of the original Boolean algebra
R itself and be compatible with probability. That is, (a|b) cannot be made to
correspond to an element c of the Boolean algebra so that P(c) = P(a|b) for all
probability measures on the Boolean algebra. An axiomatic derivation in Goodman,
Nguyen, and Walker [18] results in a representation of (a|b) by the coset a + Rb′ in
the quotient ring R/Rb′. The ring R itself is represented by the elements of the
quotient ring R/R0, and the probability of an element is given by P(a + Rb′) = P(a|b).
Now any set in one-to-one correspondence with the union of all such quotient rings
can serve as a representation of the set of conditional events for the event space R.
Besides the set of all these cosets themselves, some representations of interest,
together with the embedding of R and the probabilities of the representing elements,
are
1. {(a, b) : a, b ∈ R, a ≤ b}; a → (a, 1); P(a, b) = P(a)/P(b);

2. {(a, b) : a, b ∈ R, ab = 0}; a → (a, a′); P(a, b) = P(a)/P(a ∨ b);

3. {(a, b) : a, b ∈ R, a ≤ b}; a → (a, a); P(a, b) = P(a)/P(a ∨ b′);

4. (A × A)/∼, where (a, b) ∼ (c, d) if ab = cd and b = d; a → the equivalence class
containing (a, 1); P(class(a, b)) = P(ab)/P(b).
Each of these representations has had its use. As mentioned above, in [18]
conditional events are represented by cosets. In [35], the elementary properties of
conditional events are developed from the point of view of representation 1.
Representation 2 is the point of view of Walker in [36], where an investigation is
carried out of various operations that can be put on conditional events. In [28],
Nguyen showed a connection between conditional events and rough sets using
representation 3. In [13], Gehrke and Walker were forced to alternate between
representations 1 and 3 in developing an iteration of conditionals. Calabrese seems
to prefer representation 4.
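The representations in the list above can be compared numerically on a toy example (the point masses are hypothetical, and the pair encodings (a|b) → (ab, b) for representation 1 and (a|b) → (ab, a′b) for representation 2 are spelled out here for illustration): both formulas return the same number P(a|b) = P(ab)/P(b).

```python
# Hypothetical finite example: two of the representations above agree on P(a|b).
P = {'w1': 0.1, 'w2': 0.2, 'w3': 0.3, 'w4': 0.4}
prob = lambda e: sum(P[w] for w in e)

a, b = {'w1', 'w3'}, {'w2', 'w3', 'w4'}

# Representation 1: (a|b) -> (ab, b), a pair with ab <= b, P(x, y) = P(x)/P(y).
x1, y1 = a & b, b
assert x1 <= y1
p1 = prob(x1) / prob(y1)

# Representation 2: (a|b) -> (ab, a'b), a pair with xy = 0, P(x, y) = P(x)/P(x v y).
x2, y2 = a & b, b - a
assert not (x2 & y2)
p2 = prob(x2) / prob(x2 | y2)

assert abs(p1 - p2) < 1e-12
assert abs(p1 - prob(a & b) / prob(b)) < 1e-12   # both equal P(ab)/P(b)
```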
In 1987, Calabrese [5] reactivated the investigation of “measure-free” conditionals
from a logical viewpoint. (See also Dubois and Prade [11].) The mathematical concept
of a conditional event was established. The next basic step for applications was to
understand the logic of conditionals. In 1975, Adams [1], while viewing conditionals
as primitives in natural language, proposed one such logic. But there are many logics
on conditionals extending the two-valued logic of events. One way to see this is
to view conditionals within a three-valued logic context, of which there are many
[32]. There will certainly be no agreement on a canonical choice among them. In
this regard, one can contrast the approaches in [5] and in [18]. Perhaps, as in the
case of multi-valued logics, each conditional logic is suitable for a particular field
of applications. Thus, a normative approach to the choice of a conditional logic
seems called for.
We will denote the set of conditional events, no matter how represented, by
R|R. A revealing representation of a conditional event (a|b) is that of the partition
(ab, a′b, b′), really equivalent to representation 2 in our list above. Any two
of the three pieces ab, a′b, b′ are disjoint, and their union is 1. While unconditional
propositions, or material implications, have only two truth values {0, 1}, conditionals
have three, corresponding to the three members ab, a′b, b′ of the partition. Thus,
from an algebraic logic point of view, conditionals form a syntax for three-valued
logics. In Rescher [32], several well-known semantics of three-valued logics are
discussed, for example those of Lukasiewicz, Bochvar, Kleene, Heyting, and Sobocinski.
Now a fundamental fact is that each connective in any of these three-valued logics
yields an operation on R|R, and conversely. This is discussed thoroughly in [18].
Thus a syntax for any three-valued logic can be provided by an algebraic system on
R|R. The syntax corresponding to Lukasiewicz’s logic is a Stone algebra, a well-studied
mathematical system. See, for example, [19]. It is this set of operations on R|R that
has been studied most, followed by the system given by the three-valued logic of
Sobocinski (Calabrese [5], Dubois and Prade [11]). The system in this case is not a
lattice.
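The three-valued semantics of a single conditional can be sketched directly (the sample points and events below are hypothetical): (a|b) evaluates to true on ab, false on a′b, and a third "undefined" value on b′, where the conditional does not apply.

```python
# Sketch of the three truth values of a conditional (a|b) on a toy Omega.
U = None   # the third truth value, "undefined"

def truth(a, b, w):
    """Truth value of the conditional (a|b) at the sample point w."""
    if w not in b:
        return U          # w in b': the conditional does not apply
    return w in a         # w in ab -> True, w in a'b -> False

Omega = {'w1', 'w2', 'w3'}
a, b = {'w1'}, {'w1', 'w2'}
# The three pieces ab, a'b, b' are pairwise disjoint and their union is Omega:
assert (a & b) | (b - a) | (Omega - b) == Omega
assert truth(a, b, 'w1') is True    # w1 in ab
assert truth(a, b, 'w2') is False   # w2 in a'b
assert truth(a, b, 'w3') is U       # w3 in b'
```

A connective of any of the three-valued logics mentioned above is then just a function on these three values, which is exactly why each such logic induces operations on R|R.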
In the unconditional case, with a Boolean algebra A for syntax, a probability
function P : A → [0, 1] gives a “multivalued logic”, the truth value of a proposition
being measured by the probability function P. We can do the same thing on R|R,
having P defined on it whenever we have a probability function P on R. Thus we
have “conditional probability logic”. (See, for example, [18].)
6 Connections with Approximations
In this section we point out some connections between mathematical conditionals
and approximation problems in AI. First, a special class of conditional events has
emerged as a new tool for reasoning with relational data bases. This new tool is
based on the “rough sets” of Pawlak [29], [30]. The situation is this. An information
table is specified by a set Ω of objects and a map f : Ω → ℝ^n (say), corresponding
to the collection of attributes in the table. Such an f induces an equivalence
relation ≃ on Ω by ω ≃ ω′ if f(ω) = f(ω′). Let P be the partition induced by this
equivalence relation, and A the complete Boolean algebra generated by P. This is
simply the subalgebra of the algebra of all subsets of Ω whose elements are arbitrary
unions of members of P. The elements of A play the role of concepts which are
“measurable” by f. For any subset A of Ω we have the subsets

A_* = ∪{X ∈ A : X ⊆ A};
A^* = ∩{X ∈ A : A ⊆ X}.

Clearly A_* ⊆ A ⊆ A^*. Two subsets A and B of Ω are defined to be equivalent if
A_* = B_* and A^* = B^*, and rough sets are the equivalence classes of this
equivalence relation. So a rough set is a collection of subsets of Ω, all of which
have the same lower and upper approximations by the information table.
Each rough set can of course be identified with the interval [A_*, A^*] in the
Boolean algebra of all subsets of Ω, where A is any member of the rough set. One
representation of conditional events on the collection of all subsets of Ω is the set
of all intervals [A, B] with A ⊆ B. The correspondence

(A|B) → [A, A ∪ B′],

where A ⊆ B, gives this representation, and in this representation the
Goodman-Nguyen operations become component-wise ∪ and ∩. Thus each rough set can be
identified with a conditional event. In fact, the rough sets form a sub-Stone algebra
of the Stone algebra of conditional events [28] under the operations

[A_*, A^*] ∪ [B_*, B^*] = [A_* ∪ B_*, A^* ∪ B^*];
[A_*, A^*] ∩ [B_*, B^*] = [A_* ∩ B_*, A^* ∩ B^*].

It is not at all obvious that the set of intervals [X_*, X^*] is closed under these
operations, but it is, and given that, the verification that it is a Stone algebra is
routine. So the set of rough sets is a sub-Stone algebra of the Stone algebra of
conditional events. Thus its semantics is a Lukasiewicz three-valued logic. This
strong connection between rough sets and conditional events is a surprising one. This
theory is developed further in [14].
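The approximations A_* and A^*, and the closure of the intervals [X_*, X^*] under the component-wise operations, can be spot-checked exhaustively on a tiny example (the universe and partition below are hypothetical):

```python
from itertools import combinations

# A hypothetical 4-point universe with a 3-block partition.
Omega = frozenset(range(4))
partition = [frozenset({0, 1}), frozenset({2}), frozenset({3})]

def lower(A):   # A_* : union of partition blocks contained in A
    return frozenset().union(*[B for B in partition if B <= A])

def upper(A):   # A^* : union of partition blocks meeting A
    return frozenset().union(*[B for B in partition if B & A])

subsets = [frozenset(c) for r in range(5) for c in combinations(Omega, r)]
intervals = {(lower(A), upper(A)) for A in subsets}

# Exhaustive check: the intervals are closed under component-wise union
# and intersection, as the text asserts.
for (al, au) in intervals:
    for (bl, bu) in intervals:
        assert (al | bl, au | bu) in intervals
        assert (al & bl, au & bu) in intervals
```

Note that `upper` here uses the equivalent description of A^* as the union of blocks meeting A, which coincides with the intersection of measurable supersets for a partition-generated algebra.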
While rough sets, as special conditional events, are approximations of events of
interest, the approximations of uncertainty measures of these events are usually
taken to be lower and upper probabilities. In some special situations, this numerical
approximation problem is related to the theory of evidence [33]. In that theory,
belief functions were introduced as a framework to handle incomplete probabilistic
knowledge in expert systems. It is well known that, besides computational problems,
the concept of conditioning in belief theory is not satisfactory. When restricted to
reasoning with probabilistic knowledge, the theory of belief functions is used as
follows. The “true” probability P_0 is only known to lie in a collection P of
probability measures on some measurable space (Ω, A). To infer P_0(A) for A ∈ A, one
passes to the lower and upper probabilities

P_*(A) = inf{P(A) : P ∈ P};
P^*(A) = sup{P(A) : P ∈ P}.

It suffices to look at the lower probability P_*.
Consider the following concrete situation in Bayesian statistics. Let {A_1, ..., A_k}
be a partition of a finite set Ω, and let α_i ≥ 0 with Σ_{i=1}^k α_i = 1. Let P be
the set of probability measures P on the power set A of Ω such that P(A_i) = α_i.
Then P_* is a belief function and P = {P : P_* ≤ P}. The proof of this brings out the
connection between rough sets, which are special conditionals, and belief-function
modeling at the numerical level.

For each A ∈ A, the map

P → [0, 1] : P ↦ P(A_*)

is constant on P, where A_* is the lower rough approximation of A induced by the
partition {A_1, ..., A_k}. Thus for any P ∈ P,

G : A → [0, 1] : G(A) = P(A_*)

is a belief function, since ∩(B_j)_* = (∩B_j)_* and ∪(B_j)_* ⊆ (∪B_j)_*. Moreover,
G = P_*, and G ≤ P if and only if P ∈ P.
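This partition situation admits a small numerical sketch (the blocks and masses below are hypothetical): the lower probability P_*(A) is the total mass of the blocks contained in A, and the infimum is attained by an explicit measure that pushes each straddling block's mass outside A.

```python
# Hypothetical partition {A_i} of a 5-point Omega with fixed masses alpha_i.
Omega = {0, 1, 2, 3, 4}
blocks = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4})]
alpha  = [0.5, 0.3, 0.2]          # P(A_i) = alpha_i for every P in the family

def lower_prob(A):
    """P_*(A) = P(A_*): total mass of the blocks A_i contained in A."""
    return sum(a for B, a in zip(blocks, alpha) if B <= A)

def minimizing_measure(A):
    """A measure in the family attaining the infimum: each block not contained
    in A has all of its mass placed on a point outside A."""
    m = {}
    for B, a in zip(blocks, alpha):
        target = min(B) if B <= A else min(B - A)
        m[target] = m.get(target, 0.0) + a
    return m

A = {0, 1, 2}
P_min = minimizing_measure(A)
assert abs(sum(P_min.values()) - 1.0) < 1e-12          # a genuine probability
assert abs(sum(v for w, v in P_min.items() if w in A) - lower_prob(A)) < 1e-12
```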
While conditional events were originally constructed from a probabilistic setting,
other non-additive measures of uncertainty, such as belief functions, can be defined
on these mathematical entities.
7 Role of Conditional Event Algebra in AI
Generally speaking, the concept of conditional events and the logical operations on
them are useful new tools in the management of probabilistic uncertainty in expert
systems. The need for these mathematical tools can be illustrated as follows.
First, consider the problem of calculating the probability of an event a after
learning a rule of the form “if b then c”. Let us symbolize this new rule, or
information, by (c|b) (not to be taken as the material implication b′ ∨ c). Then
formally, we face the expression P(a|(c|b)). (There is an interesting example in
[2].) If (c|b) is modeled so that P((c|b)) = P(c|b) = P(bc)/P(b), one cannot
interpret P(a|(c|b)) as P(a|b′ ∨ c). Formal Bayesian updating via conditioning
cannot be carried out in standard probability theory, since the object (c|b) is not
an event. Thus the need to specify (c|b) as a mathematical entity is apparent. It
turns out that (c|b) cannot reasonably be viewed as an event in the original event
algebra, and so must be considered as a mathematical entity outside that algebra.
So already we have the need for higher-order conditionals (a|(c|b)). (A theory of
iterated conditionals is developed in [13].)
More generally, suppose we have two rules, (c|b) and (e|f). Then we have to
evaluate the probability of a in light of (c|b) and (e|f), that is, the quantity
P(a|(c|b) and (e|f)). Thus, besides defining the quantities (c|b) and (e|f), one
still needs logical operations between them, such as the connective “and”. Such
logical operations are discussed at length in [1], [16], [5], [6], [7], [18], [34],
[35], and [36], for example.
Second, in probabilistic inference, rules of the form “if a then b” are quantified
as conditional probabilities P(b|a). When a and b are conditionals themselves, say
(a|e) and (b|f), then the rule is of the form “if (a|e) then (b|f)”. (See, for
example, [15].) In order to use probabilistic inference, one would like to be able
to quantify this rule by conditional probability, that is, by P((a|e) ⇒ (b|f)),
where ⇒ denotes a modeling of the connective “if-then” among conditionals. Now
entailment relations can be defined directly, as in the case of reasoning with
probability measures (probabilistic knowledge), where one proceeds directly by
defining a partial ordering among probability measures. (See [3].) Alternatively,
entailment relations can be derived from basic logical operations, as in the case
of two-valued logic. With regard to conditionals, ⇒ can be derived from the
algebraic structure of conditional events mentioned earlier [18].
Third, the need for considering the mathematical form of “if-then” rules is apparent
in Bayesian updating procedures as well as in the construction of belief functions
[33]. Specifically, let (Ω, A) be a measurable space, and P a prior probability
measure on it. When an event a ∈ A occurs, one updates P to P_a by P_a(b) = P(b|a).
This Bayesian updating procedure is a special case of the general updating of
knowledge in light of new information. If we stay with knowledge expressed as
probability measures, we still need to update them even if the evidence is not in
the form of realized events. A typical situation is when the additional knowledge
is expressed as a rule “if c then a”. Viewing this rule as a conditional event
(a|c), one has to update P to P_(a|c), which must be given meaning. One way to do
this is to construct a Boolean algebra B containing A|A, extend the probability
measure P to this larger Boolean algebra, and update to P_(a|c). This program has
been carried out in [17]. This “Booleanization” of conditionals is also useful in
constructing belief functions. Again, see [17] for details.
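Ordinary Bayesian updating on a realized event, the special case the paragraph above starts from, can be sketched as follows (point masses hypothetical); updating on a rule (a|c) instead is exactly what requires the extension beyond the event algebra carried out in [17].

```python
# A hypothetical prior P on a finite Omega.
prior = {'w1': 0.25, 'w2': 0.25, 'w3': 0.5}

def update(P, a):
    """P_a(b) = P(b|a): restrict the prior to the realized event a and
    renormalize (P(a) is assumed positive)."""
    pa = sum(P[w] for w in a)
    return {w: P[w] / pa for w in a}

P_a = update(prior, {'w2', 'w3'})
assert abs(sum(P_a.values()) - 1.0) < 1e-12    # P_a is again a probability
assert abs(P_a['w3'] - 0.5 / 0.75) < 1e-12     # P_a(w3) = P(w3)/P(a)
```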
Finally, note that one of the main uses of conditional events in AI is related to
non-monotonic reasoning tasks in a probabilistic setting. See [12], for example.
The following papers will provide details for many of the points discussed above,
and contain many new results. The research efforts of the authors of these papers
will no doubt put conditional reasoning in intelligent systems on a firmer
mathematical foundation, spur further interest in the topic, and provide a framework
in which significant practical applications can be made.
References
[1] Adams, E. (1975) The Logic of Conditionals. D. Reidel, Dordrecht, the Netherlands.
[2] Adams, E. (1992) Conditional Information from the Point of View of the Logic
of High Probability, preprint.
[3] Bennett, B. M., Hoffman, D. and Murthy, P. (1992) Lebesgue order in probabilities and some applications to perception. To appear in J. Math. Psychology.
[4] Boole, G. (1854) The Laws of Thought. Dover, 1958.
[5] Calabrese, P. (1987) An algebraic synthesis of the foundations of logic and probability. Inf. Sci. (42) 187-237.
[6] Calabrese, P. (1991) Deduction and inference using conditional logic and probability, Conditional Logic in Expert Systems, I. Goodman, M. Gupta, H. Nguyen,
and G. Rogers, editors, North-Holland, 71-98.
[7] Calabrese, P. (1993) A Theory of Conditional Information with Applications,
these proceedings.
[8] Copeland, A. N. (1941) Postulates for the theory of probability. Amer. J.
Math.(63). 741-762
[9] DeFinetti, B. (1974) Theory of Probability. Vol 1 and 2, J. Wiley.
[10] Dempster, A. (1967) Upper and lower probabilities induced by a multivalued
mapping. Ann. Math. Statist. (38), 325-339.
[11] Dubois, D. and Prade, H. (1987) Théorie des Possibilités. Masson, Paris.
[12] Dubois, D. and Prade, H. (1991) Conditioning, non-monotonic logic and nonstandard uncertainty models, Conditional Logic in Expert Systems, I. Goodman,
M. Gupta, H. Nguyen, and G. Rogers, editors, North-Holland, 115-158.
[13] Gehrke, M. and Walker, E. (1992) Iterated Conditionals and Symmetric Stone
Algebras, preprint.
[14] Gehrke, M. and Walker, E. (1992) The structure of rough sets, Bull. Acad. Sci.
Math. 40, 235-245.
[15] Goldszmidt, M. and Pearl, J. (1992) Reasoning with qualitative probabilities can
be tractable, Proceeding 8-th UAI Conference, Morgan Kaufmann, 112-120.
[16] Goodman, I. and Nguyen, H. (1988) Conditional objects and the modeling of
uncertainties, in Fuzzy Computing, Theory, Hardware, and Applications (M. M.
Gupta and T. Yamakawa, eds.), North-Holland, 119-138.
[17] Goodman, I. and Nguyen, H. (1993) A theory of conditional information for
probabilistic inference in intelligent systems, to appear in Information Sciences:
[18] Goodman, I., Nguyen, H., and Walker, E. (1991) Conditional Inference and Logic
for Intelligent Systems: A Theory of Measure-free Conditioning. North Holland.
[19] Gratzer, G. (1978) General Lattice Theory. Birkhauser, Basel.
[20] Hailperin, T. (1976) Boole’s Logic and Probability. North Holland, Amsterdam.
[21] Halmos, P. (1963) Lectures in Boolean Algebras. Springer, N. Y.
[22] Kolmogorov, A. (1950) Foundations of the Theory of Probability. Chelsea, N. Y.
[23] Koopman, B. O. (1941) The bases of probability, Amer. Math. Soc. Bull. (2)
(46) 763-774.
[24] Lewis, D. (1976) Probabilities of conditionals and conditional probabilities. Phil.
Rev. (85) 297-315.
[25] Lindley, D. V. (1982) Scoring rules and the inevitability of probability. Intern.
Statist. Review (50) 1-26.
[26] Neapolitan, R. E. (1990) Probabilistic Reasoning in Expert Systems: Theory and
Applications, J. Wiley, N. Y.
[27] Nilsson, N. (1986) Probability logic. Art. Intell (28) 71-87.
[28] Nguyen, H. T. (1992) Intervals in Boolean rings: Approximation and logic. To
appear in J. Foundations of Computing and Decision Sciences (vol 17, no. 3).
[29] Pawlak, Z. (1982) Rough sets. Inter. J. Inf. Comp. Sci. (11), 341-356.
[30] Pawlak, Z. (1991) Rough Sets: Theoretical Aspects of Reasoning about Data.
Kluwer Academic, Dordrecht.
[31] Pearl, J. (1988) Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.
[32] Rescher, N. (1969) Many-valued Logics. McGraw-Hill, NY.
[33] Shafer, G. (1976) A Mathematical Theory of Evidence. Princeton Univ. Press,
NJ.
[34] Schay, G. (1968) An algebra of conditional events. J. Math. Anal. Appl. (24)
334-344.
[35] Walker, E. (1991) A simple look at conditional events, Conditional Logic in
Expert Systems, I. Goodman, M. Gupta, H. Nguyen, and G. Rogers, editors,
North-Holland, Amsterdam, 101-114.
[36] Walker, E. (1992) Stone Algebras, Conditional Events, and Three-Valued Logic,
these proceedings.
[37] Zadeh, L. A. (1965) Fuzzy sets. Inf. and Control 8 (3), 338-353.
The authors:
1. Hung T. Nguyen. Dr Nguyen received his B. S. in mathematics in 1967,
and his M. S. in probability theory in 1968 at the University of Paris. He
received his Ph.D. in mathematics at the University of Lille, France, in 1975.
After spending several years at the University of California at Berkeley and the
University of Massachusetts at Amherst, he joined the faculty at New Mexico
State University where he is currently Professor of Mathematical Sciences. His
research interests include mathematical statistics and uncertainty analysis in
knowledge based systems. He is co-author of four books and co-editor of two.
Dr. Nguyen is an associate editor of the International Journal of Approximate
Reasoning, the Journal of Intelligent and Fuzzy Systems, the International Journal
of Fuzzy Sets and Systems, the International Journal of Uncertainty, Fuzziness and
Knowledge-Based Systems, and the IEEE Transactions on Fuzzy Systems. During 1992-93,
he held the LIFE chair of fuzzy theory at Tokyo Institute of Technology in Japan.
2. Elbert A. Walker. Dr. Walker received his B.A. in mathematics in 1950 and
M.A. in mathematics in 1951 from Sam Houston State University, his Ph.D. in
mathematics from the University of Kansas in 1955, and his M.A. in statistics
from Colorado State University in 1978. After spending one year at the National
Security Agency and one year at the University of Kansas, Dr. Walker came to
New Mexico State University in 1957, where he became Professor Emeritus in
1987. He spent 1987-89 as a program officer at the Division of Mathematical
Science at the National Science Foundation in Washington, D.C. His research
interests include Abelian group theory, ring theory, lattice theory, statistics,
mathematical linguistics, and uncertainty analysis in knowledge-based systems.