On Detection Networks and Iterated Influence Diagrams:
Application to a Parallel Distributed Structure
Haiying Tu, Satnam Singh, Krishna R. Pattipati, Peter Willett
Electrical and Computer Engineering Department
University of Connecticut
Storrs, CT 06269-2157, USA
860-486-5965
Email: [email protected]
Abstract— For two decades, detection networks of various structures have been used to study information fusion from multiple sensors and/or decision makers. On the other hand, influence diagrams are widely accepted as graphical representations for decision problems under uncertainty. In this paper, the similarities between these two modeling techniques, as well as their advantages and disadvantages, are discussed using a parallel network structure as an example paradigm. A framework, termed iterated influence diagrams, which combines influence diagrams and person-by-person optimization, is proposed to take advantage of the benefits of both representations. The key purpose of the iterated influence diagram is the relaxation of one of the major constraints of a regular influence diagram, viz., that the decision nodes must be ordered. As a consequence, influence diagrams can also be used to represent and solve distributed detection problems, i.e., to find the optimal decision policies for all the decision makers.
TABLE OF CONTENTS

1. INTRODUCTION
2. A PARALLEL DISTRIBUTED DETECTION NETWORK
3. MODEL AS INFLUENCE DIAGRAMS
4. ITERATED INFLUENCE DIAGRAM
5. CONCLUSIONS AND FUTURE WORK
1. INTRODUCTION
Detection networks are decision networks that perform distributed hypothesis testing (event detection) by generalizing signal detection theory to distributed settings [1, 2]. In
this paper, the two terms, detection networks and decision
networks, are used interchangeably. The problem scope and
complexity of event detection require that the information acquisition, processing, and decision making functions be distributed over a team of decision making units (agents, sensors, in general decision makers), arranged in the form of
a decision network [3]. The final decision, coupled by the individual and team processes, should be superior to that of any single decision maker. Oftentimes, detection networks are optimized based on the Bayes decision criterion [4] and/or the Neyman-Pearson criterion [5].

0-7803-9546-8/06/$20.00 (c) 2006 IEEE. IEEEAC paper #1335.
Influence diagrams were originally developed in the mid-1970s as a means to describe a decision problem under uncertainty [6, 7]. An influence diagram represents the probabilistic
dependencies and the information flow in a decision model in
the form of a directed graph [8]. It generalizes Bayesian Networks (BNs) by introducing decision nodes to depict the decision makers and utility nodes to model the payoff or reward
functions. The primary task of an influence diagram-based
inference system is the determination of the decision alternatives that maximize the expected utility [9]. Algorithms that
directly evaluate influence diagrams can be found in [10–13].
Stimulated by the strong relationship between influence diagrams and BNs, researchers also developed algorithms,
which can transform influence diagrams into BN inference
problems [9, 14, 15]. As a consequence, the influence diagrams can also be solved by BN inference algorithms, either exactly or approximately. The relationships between a
decision tree, a traditional decision analysis representation,
and influence diagrams are discussed in [16] and [17]. Smith
[16] combines elements of the decision tree and influence diagram, which can efficiently represent asymmetric decision
problems. Diehl [17] establishes the relationship between
multi-objective influence diagrams and multi-objective decision trees, thereby allowing a decision maker to utilize the
advantages of both representations.
In this paper, we seek to build a bridge between detection networks and influence diagrams, and utilize the benefits accrued
from both representations. With a very simple detection network, viz., a parallel distributed detection network, we will
show that the influence diagram representation is restricted by its limited ability to model continuous variables, and
by one of the key constraints of regular influence diagrams
[10], viz., the requirement of a directed path that contains all
of the decision nodes. In other words, the current influence
diagram framework is unable to solve for the optimal decision policies of a parallel detection network problem, which
has been resolved via person-by-person optimization [18] in
the context of detection networks. A combination of regular
influence diagram inference algorithms and person-by-person
optimization scheme is thus proposed and termed iterated influence diagram in the paper to relax Shachter’s constraint.
The rest of the paper is organized as follows. We first formulate the distributed detection problem in the context of a parallel detection network, and discuss a person-by-person optimization procedure to solve it. Then, the same network is re-formulated as an influence diagram, which cannot be solved directly. However, a slightly modified version of the influence diagram is shown to solve part of this detection problem, viz., the optimal policy of the fusion center (primary decision maker) for specified policies of the subordinate decision makers. Further, we use a person-by-person optimization procedure (the so-called single policy updating in the parlance of Lauritzen [19] for an influence diagram with relaxation of the no-forgetting assumption), which can be generalized to incorporate existing influence diagram inference methods to solve parallel detection network problems. Indeed, both the graphical representation and the numerical solution are of interest. The paper then concludes with a
summary and future research directions.
2. A PARALLEL DISTRIBUTED DETECTION NETWORK
Consider the parallel distributed detection network [20] in Figure 1. Three subordinate decision makers DMi, (i = 1, 2, 3) are faced with a binary hypothesis testing problem of deciding which of two possible events H1 (H = 1) and H0 (H = 0) has occurred. The prior probabilities of the two hypotheses H1 and H0 are P1 and P0 = 1 − P1, respectively. Each decision maker DMi, (i = 1, 2, 3) makes an individual local decision based on its own sensor measurement yi, (i = 1, 2, 3). Each measurement is assumed to be Gaussian distributed with mean at the true event and known variance σi^2, (i = 1, 2, 3) (for notational simplicity, we assume the variance under both hypotheses to be the same) for the three decision makers, respectively. Thus, the measurements are signals blurred by white noise. In other words, each measurement is a mixture of Gaussian random variables [21], which can be represented by:

f(yi) = P1 f1(yi) + P0 f0(yi)    (1)
with

f1(yi) ~ N(η1, σi^2) = N(1, σi^2)
f0(yi) ~ N(η0, σi^2) = N(0, σi^2)
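As a quick numerical check of the mixture density in (1), the marginal measurement density can be evaluated directly. The following is a minimal Python sketch; the helper names are ours, not from the paper:

```python
import math

def gauss_pdf(y, mean, sigma):
    """Density of N(mean, sigma^2) evaluated at y."""
    return math.exp(-((y - mean) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(y, p1=0.8, sigma=1.0):
    """Eq. (1): f(yi) = P1*f1(yi) + P0*f0(yi), with f1 ~ N(1, sigma^2)
    and f0 ~ N(0, sigma^2)."""
    return p1 * gauss_pdf(y, 1.0, sigma) + (1.0 - p1) * gauss_pdf(y, 0.0, sigma)
```

With equal priors (p1 = 0.5) the mixture is symmetric about y = 0.5, which provides a simple sanity check.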
Finally, a fusion center DM0 makes a team decision using a certain decision rule, by which we want to minimize the expected cost of the decision d0 given the true event H, i.e.,

E{C(d0, H)} = Σ_H Σ_{d0} C(d0, H) P(d0|H) P(H)    (2)

A reasonable assumption in (2) is that C(d0 = 1, H = 0) > C(d0 = 0, H = 0) and C(d0 = 0, H = 1) > C(d0 = 1, H = 1), meaning that the cost of an erroneous decision is greater than that of a correct one.

Figure 1. A Parallel Distributed Detection Network
The true expertise of a decision maker in the context of binary detection problems is a collection of probabilities of detection PD = P(d = 1|H = 1) and false alarm PF = P(d = 1|H = 0) for all possible preferences. The locus of (PD, PF), whose graphical representation is called a Relative Operating Characteristic (ROC) curve, represents the accuracy or reliability of a decision maker [22]. Figure 2 illustrates PD versus PF in the Gaussian mixture case. With the shifted-mean Gaussian assumption for the two hypotheses, the ROC curve is constructed via:
PD = Φ(Φ^{-1}(PF) − SNR)    (3)

with the signal-to-noise ratio SNR defined as:

SNR = (η1 − η0)/σ = 1/σ    (4)

and

Φ(x) = (1/√(2π)) ∫_x^∞ exp(−t^2/2) dt    (5)

is the tail probability (complementary cumulative distribution function) of a standard Gaussian random variable.
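The ROC construction of (3)-(5) can be checked numerically. The sketch below, assuming SciPy is available, uses norm.sf for the Gaussian tail probability Φ and norm.isf for its inverse; roc_pd is our own helper name:

```python
from scipy.stats import norm

def roc_pd(pf, snr):
    """Eq. (3): PD = Phi(Phi^{-1}(PF) - SNR), with Phi the Gaussian tail
    probability of eq. (5) (norm.sf) and Phi^{-1} its inverse (norm.isf)."""
    return norm.sf(norm.isf(pf) - snr)

# A few ROC points for a decision maker with sigma = 1 (SNR = 1):
for pf in (0.1, 0.3, 0.5):
    print(f"PF = {pf:.1f} -> PD = {roc_pd(pf, 1.0):.4f}")
```

For any positive SNR the curve lies above the chance diagonal (PD > PF), and a larger SNR yields a uniformly better ROC curve, consistent with Figure 6.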
We seek a team-optimal decision rule

d* = min_{d0,d1,d2,d3} E{C(H, d0)}    (6)

such that the decision rule di, (i = 1, 2, 3) corresponds to a team-optimal operating point on its own local ROC curve.
The solution of a similar network has been presented in [18], with the generalization that DM0 can have its own measurement. Here, d0* is an assignment, given the possible configurations of d1, d2, d3; di* is decided by a threshold τ in Figure 2, which is specified by an operating point (PD, PF) on the ROC curve. Following the process of a likelihood ratio test (for the fusion center) and person-by-person optimization (for the subordinate DMs), we can obtain the solution for (6) with the structure in Figure 1 as well. Note that the converged solution from person-by-person optimization may not be globally optimal; however, it is a Nash equilibrium for the subordinate DMs, since the expected cost cannot be lowered by changing the decision rule at any single DM, i.e.,

E{C|d0*, d1*, d2*, d3*} = min_{d1} E{C|d0*, d2*, d3*}    (7)

and the same equation holds for d2 and d3 as well. Computational algorithms for solving distributed detection problems are discussed in [18, 20].

Figure 2. Optimal Decision for Binary Events with Gaussian Noise

Figure 3. Original Influence Diagram for the Parallel Distributed Detection Network

3. MODEL AS INFLUENCE DIAGRAMS
How can the detection structure in Figure 1 be formalized as an influence diagram? Intuitively, we would like to obtain something similar to Figure 3, where we follow the typical graphical representation for influence diagrams, with ovals denoting chance nodes, rectangles representing decision nodes, and diamonds referring to utility nodes. The dashed arcs are the information arcs into decision nodes that indicate the information available at the time the decision nodes need to select options [15]. While the decision nodes for the decision makers and the chance nodes for the measurements and hypothesis retain the same state spaces, a utility node is defined as the negative of the cost in a detection network, i.e., instead of minimizing the expected cost, inference in an influence diagram is based on maximization of the expected utility (MEU). The optimal decision rule is rephrased as the "maximum policy" [19] in the influence diagram:

d* = max_{d0,d1,d2,d3} E{u(H, d0)}    (8)

with u(H, d0) = −C(H, d0).
However, the influence diagram in Figure 3 is unsolvable by any existing influence diagram tool, such as Hugin Expert (http://www.hugin.com), Netica (http://www.norsys.com), GeNIe (http://www.sis.pitt.edu/~genie), and IDEAL (http://www.ideal.com). The bottleneck stems from their limited ability to represent continuous variables, such as yi, i = 1, 2, 3, which are mixtures of Gaussians in our example. Specifically, the current inference algorithms (implemented in most influence diagram software tools), viz., Shachter's node absorption [10] and Cooper's policy evaluation via BN transformation [9], consider only discrete variables, for which they can use sequential summations and maximizations to find the maximum policy. By "sequential", we mean the constraint in Shachter's definition of a "regular influence diagram" [10], which is violated by the structure in Figure 3. Shachter's assumption is employed by all the influence diagram inference algorithms in the literature of which we are aware. That is, a regular influence diagram assumes the existence of a directed path that contains all of the decision nodes. Equivalently, we can view this as a single decision maker making sequential decisions at different decision epochs. Clearly, no parallel decisions can be made under this assumption. Netica allows a user to model continuous variables; however, it performs discretization before inference. Analytica (http://www.lumina.com) is a tool dealing
with uncertainty in risk and policy analysis [23]. This tool has
many nice features, such as general distributions, either continuous or discrete, statistical graphical outputs, support for
time evolution and loop representation, etc. However, Analytica is a simulation-based tool, and currently cannot prescribe
an optimal solution for the detection network, such as the
one considered in this paper. Although unavailable in software tools, influence diagrams with continuous variables can
be found in the literature, e.g., [21, 24, 25]. Again, none of these works is applicable to the parallel detection network structure.
Now consider a variation of Figure 3, where we remove the continuous variables, i.e., the measurements from the sensors, and transform the local decision makers into chance nodes. This yields Figure 4. An alternative model, equivalent to Figure 4, is shown in Figure 5. A similar structure was introduced by Heckerman [26] to analyze the value of information for diagnosis. In this configuration, the subordinate decision makers provide evidence or local decisions on the true state of event H.

Figure 4. An Influence Diagram with the Same Team Decision Rule

Figure 5. An Alternative but Equivalent Model

Figure 6. ROC Curves for the Three Subordinate DMs ("squares" denote the initial operating points; "stars" denote the optimal operating points)

In both cases, the primary decision maker DM0 becomes the only decision node in the influence diagram. The expertise of a subordinate decision maker is indeed the conditional probabilities P(di|H), (i = 1, 2, 3), which can be constructed from the operating point (PDi, PFi), (i = 1, 2, 3) on the ROC curve. The optimal decision of the primary decision maker involves a likelihood ratio test. From the knowledge of detection networks, we can improve the expected utility by moving the operating points of the subordinate decision makers. That is, by varying the conditional probability tables, with the constraint that they move along their individual ROC curves, the subordinate decision makers improve the team performance. In the next section, an iterated influence diagram, based on a person-by-person optimization scheme, is proposed to find the optimal decision rules for all the decision makers.

4. ITERATED INFLUENCE DIAGRAM

As illustrated in Figures 3 and 4, decision makers with a mixture of Gaussian measurements can be reformulated as chance nodes with their conditional probabilities constrained to lie along their individual ROC curves. The utility functions are therefore parameterized (in terms of the probabilities of false alarm, since the operating points are constrained to lie on the individual ROC curves), and we can adjust the false alarm probabilities to maximize the expected utility. The optimized parameters correspond to the maximum expected utility policies for the individual decision makers.
We propose an iterated influence diagram, which combines the regular influence diagram and person-by-person optimization, for decision problems with both serial (can be topologically ordered) and parallel (cannot be uniquely ordered) decision nodes, as follows:

(1) Randomly generate a set of policies for the parallel decision nodes. Set them as the current optimal policies di*, ∀i.

(2) Calculate the MEU with a regular influence diagram inference algorithm [9, 10]. For each group of parallel decision nodes, allow only one of them (e.g., d1) to be in the inference loop and fix the others at their current optimal policies (e.g., di = di*, ∀i > 1). With this assumption, Shachter's influence diagram constraint is satisfied. This step returns a new d1*, along with the optimal policies for the other, sequential decision nodes. Continue this process for all the parallel decision nodes, and update the optimal policies di*, ∀i.

(3) Repeat step (2) until the MEU from the regular influence diagram inference algorithm converges.
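Steps (1)-(3) can be sketched compactly when the MEU of the single-decision-node diagram (Figure 4) is computed by brute-force enumeration. The following Python sketch (assuming SciPy; all helper names are ours) substitutes a simple grid search over each DM's false alarm probability for the influence diagram inference call, and uses the ROC family of eq. (3) as printed:

```python
from itertools import product
from scipy.stats import norm

P1, P0 = 0.8, 0.2
SIGMA = [1.25, 1.0, 0.8]                      # sensor noise, so SNR_i = 1/sigma_i
U = {(0, 0): 0.0, (1, 1): 0.0, (1, 0): -1.0, (0, 1): -1.0}   # u(d0, H)

def roc_pd(pf, snr):
    """ROC family of eq. (3): PD = Phi(Phi^{-1}(PF) - SNR)."""
    return norm.sf(norm.isf(pf) - snr)

def meu(pfs):
    """MEU of the single-decision-node influence diagram when subordinate
    DMi operates at (PFi, PD(PFi)); evaluated by enumeration."""
    pds = [roc_pd(pf, 1.0 / s) for pf, s in zip(pfs, SIGMA)]
    total = 0.0
    for d in product((0, 1), repeat=3):       # all subordinate reports
        w1, w0 = P1, P0                       # joint weights under H=1, H=0
        for di, pd, pf in zip(d, pds, pfs):
            w1 *= pd if di else 1.0 - pd
            w0 *= pf if di else 1.0 - pf
        # DM0 picks the d0 that maximizes expected utility for this report.
        total += max(w1 * U[(d0, 1)] + w0 * U[(d0, 0)] for d0 in (0, 1))
    return total

grid = [i / 200.0 for i in range(1, 200)]     # candidate operating points
pfs = [0.1822, 0.4092, 0.4110]                # step (1): initial policies
for sweep in range(5):                        # steps (2)-(3)
    for i in range(3):                        # free one parallel node at a time
        pfs[i] = max(grid + [pfs[i]],
                     key=lambda pf: meu(pfs[:i] + [pf] + pfs[i + 1:]))
    print(f"sweep {sweep + 1}: MEU = {meu(pfs):.4f}")
```

Because each coordinate update can only improve (or retain) the current MEU, the sweeps produce a monotonically non-decreasing MEU sequence, mirroring the convergence behavior of Figure 8.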
As an example, let us assume that σ1 = 1.25, σ2 = 1 and σ3 = 0.8. The ROC curves for the three DMs can be generated from (3) and are shown in Figure 6. Intuitively, a DM with a less noisy measurement has better expertise. The utility functions are assumed to be u(d0 = 1, H = 0) = u(d0 = 0, H = 1) = −1 (in the sense of a penalty) and u(d0 = 0, H = 0) = u(d0 = 1, H = 1) = 0. The prior probabilities of the true events are P1 = 0.8 and P0 = 0.2. A randomly generated set of operating points for DM1, DM2 and DM3 is (PF1 = 0.1822, PD1 = 0.5888), (PF2 = 0.4092, PD2 = 0.8819) and (PF3 = 0.4110, PD3 = 0.9386), respectively, with an initial MEU of −0.127. These
points are depicted as squares in Figure 6. Two-by-two conditional probability tables (CPTs) for each DM can be easily constructed from these numbers. The initial policy for the fusion center, along with the expected utilities, is shown in Table 1.

Table 1. Original Policy
                 d1 = 1                            d1 = 0
         d2 = 1          d2 = 0            d2 = 1          d2 = 0
        d3 = 1  d3 = 0  d3 = 1  d3 = 0    d3 = 1  d3 = 0  d3 = 1  d3 = 0
d0 = 0  -0.9845 -0.7439 -0.8551 -0.2122   -0.9082 -0.3112 -0.4787 -0.0402
d0 = 1  -0.0155 -0.2561 -0.1449 -0.7878   -0.0918 -0.6888 -0.5213 -0.9598
This result can be easily checked with the likelihood ratio test. Define the likelihood ratio as:

Λ(d1, d2, d3) = Π_i P(di|H = 1) / Π_i P(di|H = 0)    (9)

The decision rule of DM0 is the likelihood ratio test:

Λ ≷ λ  (decide d0 = 1 if Λ ≥ λ, else d0 = 0)    (10)

where

λ = P(H = 0)[u(d0 = 0, H = 0) − u(d0 = 1, H = 0)] / (P(H = 1)[u(d0 = 1, H = 1) − u(d0 = 0, H = 1)]) = 0.25    (11)
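The threshold (11) and the likelihood ratios of Table 2 can be reproduced with a short script. The following Python sketch (helper names are ours) enumerates the eight report configurations using the initial operating points of the example:

```python
from itertools import product

P1, P0 = 0.8, 0.2                        # prior probabilities from the example
PD = [0.5888, 0.8819, 0.9386]            # initial P(di = 1 | H = 1)
PF = [0.1822, 0.4092, 0.4110]            # initial P(di = 1 | H = 0)

# Threshold of eq. (11) with the example's 0/-1 utilities.
lam = (P0 * (0 - (-1))) / (P1 * (0 - (-1)))   # = 0.25

def likelihood_ratio(d):
    """Eq. (9): Lambda(d1, d2, d3) = prod_i P(di|H=1) / prod_i P(di|H=0)."""
    num = den = 1.0
    for di, pd, pf in zip(d, PD, PF):
        num *= pd if di else 1.0 - pd
        den *= pf if di else 1.0 - pf
    return num / den

# Eq. (10): DM0 decides d0 = 1 iff Lambda >= lambda; sorting by Lambda
# reproduces the row ordering of Table 2.
for d in sorted(product((0, 1), repeat=3), key=likelihood_ratio):
    lr = likelihood_ratio(d)
    print(d, f"Lambda = {lr:8.4f}", "->", f"d0 = {int(lr >= lam)}")
```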
The decision policy of d0 for all the configurations of the subordinate decision makers (highlighted with bold font in Table 1) is listed in Table 2, along with the ordered likelihood ratios. We can see that the influence diagram achieves exactly the same result, i.e., the policy of DM0 is optimal for the specified decision policies of the subordinate DMs.
Table 2. Initial Policy for DM0

d1  d2  d3      Λ       d0
 0   0   0    0.0105    0
 1   0   0    0.0673    0
 0   1   0    0.1130    0
 0   0   1    0.2295    0
 1   1   0    0.7260    1
 1   0   1    1.4753    1
 0   1   1    2.4747    1
 1   1   1   15.9053    1
Now we fix the CPTs for DM2 and DM3, but allow DM1 to operate at any point on its ROC curve. The MEU corresponding to this maneuver of DM1 is plotted in Figure 7, and the optimal policy of DM1 is achieved at PD ≈ 0.09. We can see that the overall MEU curve is not concave with respect to PFi. Consequently, nonlinear optimization algorithms, such as Gauss-Seidel iterations, may be trapped at a local maximum [18].
Figure 7. MEU for DM1 (MEU changes with the operating point on the ROC; plotted versus probability of false alarm)
By the iterated influence diagram process, the MEU finally converges to around −0.1015. Figure 8 shows the convergence of the MEU over nine iterations (actually three complete iterations, where each complete iteration contains three person-by-person optimized MEUs). The MEU of the first point is −0.1193, which comes from the optimization of DM1 (note that the original MEU = −0.127).

The final MEU is achieved when DM1 operates at the point (PF1 = 0.06, PD1 = 0.336), DM2 operates at (PF2 = 0.2, PD2 = 0.7165) and DM3 operates at (PF3 = 0.12, PD3 = 0.7233). These optimal policies are shown as stars on the ROC curves in Figure 6. Accordingly, the fusion center changes its policy as in Table 3. Now, the optimal policy for DM0 is to make decision d0 = 0 only when all three subordinate decision makers agree on H = 0, i.e., di = 0, (i = 1, 2, 3). The corresponding expected utility is −0.2395, as shown in the last column of Table 3.
Table 3. Optimal Policy
                 d1 = 1                            d1 = 0
         d2 = 1          d2 = 0            d2 = 1          d2 = 0
        d3 = 1  d3 = 0  d3 = 1  d3 = 0    d3 = 1  d3 = 0  d3 = 1  d3 = 0
d0 = 0  -0.9979 -0.9619 -0.9795 -0.7140   -0.9839 -0.7609 -0.8579 -0.2395
d0 = 1  -0.0021 -0.0381 -0.0205 -0.2860   -0.0161 -0.2391 -0.1421 -0.7605

Figure 8. Convergence Curve of the MEU

5. CONCLUSIONS AND FUTURE WORK

The purpose of this paper is to seek graphical representations with unified inference algorithms, such as influence diagrams, for solving distributed decision network problems. We discussed the similarities and differences between detection networks and influence diagrams. Influence diagrams are restricted by their limited capability to represent continuous random variables and by the requirement of a directed path that contains all of the decision nodes. An extension to regular influence diagrams, termed iterated influence diagrams, is proposed to solve a parallel detection network problem.

More work needs to be done to generalize the framework of iterated influence diagrams, including extensions to other detection network structures, such as tandem networks (serial distributed detection networks), and generalized event structures. The person-by-person optimization used in the iterated influence diagram increases the complexity of inference, thus requiring efficient algorithms to find the optimal policies at each iteration.

ACKNOWLEDGMENT

This work is supported by Aptima Inc., Woburn, MA 01801.

REFERENCES

[1] R. Tenney and N. Sandell, "Detection with distributed sensors," IEEE Trans. on Aerospace and Electronic Systems, vol. 17, no. 4, pp. 501–510, 1981.
[2] J. Tsitsiklis and M. Athans, "On the complexity of decentralized decision making and detection problems," IEEE Trans. on Automatic Control, vol. 30, no. 5, pp. 440–446, 1985.
[3] A. Pete, K. Pattipati, Y. Levchuk, and D. Kleinman, "An overview of decision networks," IEEE Trans. on Systems, Man and Cybernetics - Part C: Applications, vol. 28, pp. 172–192, 1998.
[4] R. Duda and P. Hart, Pattern Classification and Scene Analysis. Wiley-Interscience, 1973.
[5] J. Neyman and E. Pearson, "On the problem of the most efficient tests of statistical hypotheses," Philosophical Transactions of the Royal Society, vol. A 231, pp. 289–337, 1933.
[6] R. Howard and J. Matheson, Influence Diagrams. Springer-Verlag, 1981.
[7] A. Miller, M. Merkhofer, R. Howard, J. Matheson, and T. Rice, "Development of automated computer aids for decision analysis," Technical Report 3309, SRI International, Menlo Park, CA, 1976.
[8] J. Tatman and R. Shachter, "Dynamic programming and influence diagrams," IEEE Trans. on Systems, Man and Cybernetics, vol. 20, no. 2, pp. 365–379, 1990.
[9] G. Cooper, "A method for using belief networks as influence diagrams," in Proceedings of the Workshop on Uncertainty in Artificial Intelligence, Minneapolis, MN, 1988.
[10] R. Shachter, "Evaluating influence diagrams," Operations Research, vol. 34, no. 6, pp. 871–882, 1986.
[11] P. Shenoy, "Valuation-based systems for Bayesian decision analysis," Operations Research, vol. 40, no. 3, pp. 463–484, 1992.
[12] P. Ndilikilikesha, "Potential influence diagrams," International Journal of Approximate Reasoning, vol. 11, pp. 251–285, 1994.
[13] F. Jensen, F. Jensen, and S. Dittmer, "From influence diagrams to junction trees," in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, Seattle, WA, 1994, pp. 367–373.
[14] R. Shachter and M. Peot, "Decision making using probabilistic inference methods," in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence, Stanford, CA, 1992, pp. 276–283.
[15] N. Zhang, "Probabilistic inference in influence diagrams," Computational Intelligence, vol. 14, pp. 475–497, 1998.
[16] J. Smith, S. Holtzman, and J. Matheson, "Structuring conditional relationships in influence diagrams," Operations Research, vol. 41, no. 2, pp. 280–297, 1993.
[17] M. Diehl and Y. Haimes, "Influence diagrams with multiple objectives and tradeoff analysis," IEEE Trans. on Systems, Man and Cybernetics - Part A: Systems and Humans, vol. 34, no. 3, pp. 293–304, 2004.
[18] Z. Tang, K. Pattipati, and D. Kleinman, "An algorithm for determining the decision thresholds in a distributed detection problem," IEEE Trans. on Systems, Man and Cybernetics, vol. 21, no. 1, pp. 231–237, 1991.
[19] S. Lauritzen and D. Nilsson, "Representing and solving decision problems with limited information," Management Science, vol. 47, no. 9, pp. 1235–1251, 2001.
[20] A. Pete, K. Pattipati, and D. Kleinman, "Team relative operating characteristic curve: A normative-descriptive model of team decisionmaking," IEEE Trans. on Systems, Man and Cybernetics, vol. 23, no. 6, pp. 1626–1648, 1993.
[21] W. Poland and R. Shachter, "Mixtures of Gaussians and minimum relative entropy techniques for modeling continuous uncertainties," in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence, Washington, DC, 1993, pp. 183–190.
[22] W. Peterson, T. Birdsall, and W. Fox, "The theory of signal detectability," Transactions of the IRE Professional Group on Information Theory, vol. 2-4, pp. 171–212, 1954.
[23] M. Morgan and M. Henrion, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, 1998.
[24] B. Cobb and P. Shenoy, "Hybrid influence diagrams using mixtures of truncated exponentials," in Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, Banff, Canada, 2004, pp. 85–93.
[25] R. Shachter and C. Kenley, "Gaussian influence diagrams," Management Science, vol. 35, no. 5, pp. 527–550, 1989.
[26] D. Heckerman, E. Horvitz, and B. Middleton, "An approximate nonmyopic computation for value of information," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 3, pp. 292–298, 1993.

Haiying Tu received the B.S. degree in automatic control from the Shanghai Institute of Railway Technology, Shanghai, China, in 1993 and the M.S. degree in transportation information engineering and control from Shanghai Tiedao University, Shanghai, China, in 1996. She is currently working toward the Ph.D. degree in electrical and computer engineering at the University of Connecticut (UConn), Storrs. Prior to joining UConn, she was a Lecturer at Tongji University, Shanghai, China, and also worked as an employee of the Computer Interlocking System Testing Center, which belongs to the Ministry of Railway of China. Her current research interests include organizational design, Bayesian analysis, fault diagnosis and decision making.

Satnam Singh received the M.S. degree in electrical engineering from the University of Wyoming. He is working toward the Ph.D. degree in electrical and computer engineering at the University of Connecticut, Storrs. His interests are in signal processing, communication, and optimization.
Krishna Pattipati received the B.S. degree in electrical engineering from Indian Institute of Technology, India, in
1975 and the Ph.D. degree in electrical engineering from the University of
Connecticut in 1980. He is a Professor of Electrical and Computer Engineering at the University of Connecticut, Storrs. His research has been primarily in the application of systems theory and optimization techniques to
complex systems. Prof. Pattipati is a Fellow of the IEEE.
He received the Centennial Key to the Future Award from
the IEEE Systems, Man and Cybernetics (SMC) Society in
1984, the Andrew P. Sage Award for the Best SMC Transactions Paper for 1999, the Barry Carlton Award for the Best
AES Transactions Paper for 2000, the 2002 NASA Space Act
Award, and the 2003 AAUP Research Excellence Award at
the University of Connecticut. He also won the Best Technical Paper Award at the 1985, 1990, 1994, 2002, 2004, and
2005 IEEE AUTOTEST Conferences, and at the 1997 and
2004 Command and Control Conferences. He served as the
Editor-in-Chief of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS - PART B: CYBERNETICS during 1998-2001.
Peter Willett received the B.S. degree in
engineering science from the University
of Toronto, Toronto, Canada, in 1982
and the Ph.D. degree in electrical engineering from Princeton University in
1986. He is a Professor of Electrical
and Computer Engineering at the University of Connecticut, Storrs. He has
written, among other topics, about the processing of signals
from volumetric arrays, decentralized detection, information
theory, code division multiple access (CDMA), learning from
data, target tracking, and transient detection. Dr. Willett is a
Fellow of the IEEE, a member of the Board of Governors of
IEEE’s AES society and the IEEE Signal Processing Society’s
SAM technical committee. He is an Associate Editor for both
the IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS and the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS. He was a Track Organizer for Remote Sensing at the IEEE Aerospace Conference
(2001-2003) and was co-chair of the Diagnostics, Prognosis, and System Health Management SPIE Conference in Orlando. He also served as Program Co-Chair for the 2003
IEEE Systems, Man and Cybernetics Conference in Washington, DC.