A Probabilistic Treatment of
Conflicting Expert Opinion
Luc Huyse and Ben H. Thacker
Reliability and Materials Integrity
[email protected], [email protected]
45th Structures, Structural Dynamics and Materials (SDM) Conference
19-22 April 2004
Palm Springs, CA
Southwest Research Institute, San Antonio, Texas
Motivation
• Avoid arbitrary choice of PDF
• Account for vague data
• Efficient computational tools
• Account for model uncertainty
Probabilistic Assessment
• Choice of PDF
  – Companion paper
• Dealing with (conflicting) expert opinion data
  – Use Bayesian estimation
• Efficient computation
  – Method must be amenable to MPP-based methods
• Epistemic uncertainty in the decision-making process
  – "Minimum-penalty" reliability level
Estimation with Interval Data
• Use Bayesian updating:

  f(\theta \mid y) = \frac{l(y \mid \theta)\, f(\theta)}{\int l(y \mid \theta)\, f(\theta)\, d\theta}

• The Bayesian updating equation for an interval observation [y_1, y_2] is:

  f(\theta \mid y_1, y_2) = \frac{f(\theta)\, \int_{y_1}^{y_2} f(y \mid \theta)\, dy}{\int f(\theta)\, \int_{y_1}^{y_2} f(y \mid \theta)\, dy\, d\theta}
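As a concrete illustration of the interval-updating equation above, here is a minimal numerical sketch in Python. The Poisson likelihood, the flat prior on a grid, and the interval [4, 6] are illustrative choices for this sketch (they anticipate the example used later in the slides), not a verbatim computation from the paper.

```python
import numpy as np
from scipy import stats

# Discretize the Poisson mean on a uniform grid and use a flat prior
# (illustrative choices; grid weights cancel in the normalized posterior).
theta = np.linspace(0.01, 20.0, 2000)
prior = np.ones_like(theta)

def interval_likelihood(y1, y2, theta):
    """P(y1 <= y <= y2 | theta) for a Poisson observation y."""
    return stats.poisson.cdf(y2, theta) - stats.poisson.cdf(y1 - 1, theta)

# f(theta | y1, y2) is proportional to f(theta) * P(y1 <= y <= y2 | theta)
y1, y2 = 4, 6
post = prior * interval_likelihood(y1, y2, theta)
post /= post.sum()

print("posterior mean of theta:", (theta * post).sum())
```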
Non-informative Priors and the
Uniform distribution
• The temptation is to assume a uniform distribution when nothing is known about a parameter
• Non-informative does NOT necessarily mean uniform
• Illustration:
  – Choose uniform for X because nothing is known
  – Choose uniform for X² because nothing is known
  – The rules of probability can be used to show that the PDF of X² is NOT uniform
  – Selecting a uniform because "nothing is known" is not justified
Transformation to Uniform
• A transformation t exists such that the random variable X can be transformed, t: X → Y, where Y has a uniform PDF:

  f_Y(y) = f_X(x) \left| \frac{dx}{dy} \right|

• The question is no longer whether a uniform PDF is an appropriate selection for a non-informative prior, but under which transformation t: X → Y the uniform is a reasonable choice for the non-informative distribution of Y.
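A short worked example of the change-of-variables formula above, assuming X ~ Uniform(0, 1) (an illustrative choice) and Y = X², which makes the point from the previous slide explicit:

  f_Y(y) = f_X(\sqrt{y}) \left| \frac{d\sqrt{y}}{dy} \right| = 1 \cdot \frac{1}{2\sqrt{y}}, \qquad 0 < y < 1

The PDF of X² grows without bound near y = 0 and is clearly not uniform, so assuming a uniform PDF for both X and X² at the same time is internally inconsistent.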
Data-translated Likelihood
• A uniform PDF is non-informative if the shape of the likelihood does not depend on the data
• Jeffreys' principle: a uniform PDF is appropriate in the space where the likelihood is data-translated

[Figure: likelihood of the Poisson density for y = 1, 5, and 10, plotted against the transformed parameter φ = λ^{1/2}; in this space the likelihood curves are (approximately) translated copies of each other.]
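A small Python sketch of the data-translated-likelihood idea for the Poisson case shown in the figure; the half-height width is just a convenient proxy for the "shape" of the likelihood, and the grid values are illustrative.

```python
import numpy as np
from scipy import stats

lam = np.linspace(0.01, 30.0, 30000)
phi = np.sqrt(lam)                      # transformed parameter, phi = sqrt(lambda)

def half_width(param, like):
    """Approximate width of the likelihood at half its maximum height."""
    region = param[like >= 0.5 * like.max()]
    return region.max() - region.min()

for y in (1, 5, 10):
    like = stats.poisson.pmf(y, lam)
    print(f"y = {y:2d}: half-height width in lambda = {half_width(lam, like):5.2f}, "
          f"in phi = {half_width(phi, like):4.2f}")
# The width in lambda grows with y, while in phi it stays nearly constant:
# the likelihood is (approximately) data-translated in phi = sqrt(lambda),
# which is Jeffreys' argument for taking the uniform prior in that space.
```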
Updating with Interval Info
• The variable y has a Poisson PDF; estimate the mean value of y
• A non-informative prior is used
• Consider six different updates for the mean: y = 5, y in [4,6], y in [3,7], y in [2,8], y in [1,9], and y in [0,10]
• The posterior variance decreases as the interval narrows
• The "weight" of an expert depends on the length of their interval estimate

[Figure: prior and posterior probability density functions of the mean value for the six updates above.]
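A sketch reproducing the kind of comparison described above, using the same interval-likelihood update as the earlier snippet with an illustrative flat prior on a grid (the slides' non-informative prior is not specified here):

```python
import numpy as np
from scipy import stats

theta = np.linspace(0.01, 20.0, 4000)
prior = np.ones_like(theta)            # flat prior on the grid, for illustration

def posterior(y1, y2):
    """Posterior over the Poisson mean given only that y fell in [y1, y2]."""
    like = stats.poisson.cdf(y2, theta) - stats.poisson.cdf(y1 - 1, theta)
    post = prior * like
    return post / post.sum()

for y1, y2 in [(5, 5), (4, 6), (3, 7), (2, 8), (1, 9), (0, 10)]:
    post = posterior(y1, y2)
    mean = (theta * post).sum()
    std = np.sqrt(((theta - mean) ** 2 * post).sum())
    print(f"y in [{y1},{y2}]: posterior mean {mean:5.2f}, std {std:4.2f}")
# The posterior standard deviation grows as the interval widens: a tighter
# expert interval carries more "weight" in the update.
```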
Combining Interval & Point Data
• The variable y has a Poisson PDF; estimate the mean value of y
• A non-informative prior is used
• Consider five updates for the mean: 5; 5 repeated twice; 5 repeated three times; 5 and [4,6]; 5 (twice) and [0,10]
• The posterior variance reduces with the successive addition of precise observations
• A narrow interval contains almost as much information as a point estimate
• A wide interval estimate still adds some information

[Figure: prior and posterior probability density functions of the mean value for the five updates above.]
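A sketch of how point and interval data can be combined in one update, assuming independent observations so that the likelihood is a product of Poisson pmf factors (precise data) and CDF-difference factors (interval data); the prior is again an illustrative flat prior rather than the one used in the slides.

```python
import numpy as np
from scipy import stats

theta = np.linspace(0.01, 20.0, 4000)
prior = np.ones_like(theta)            # flat prior on the grid, for illustration

def update(points=(), intervals=()):
    """Posterior over the Poisson mean from exact counts and interval counts."""
    like = np.ones_like(theta)
    for y in points:                   # precise observations: pmf factors
        like = like * stats.poisson.pmf(y, theta)
    for y1, y2 in intervals:           # interval observations: CDF differences
        like = like * (stats.poisson.cdf(y2, theta) - stats.poisson.cdf(y1 - 1, theta))
    post = prior * like
    return post / post.sum()

cases = [("5",              update(points=[5])),
         ("5 (2x)",         update(points=[5, 5])),
         ("5 (3x)",         update(points=[5, 5, 5])),
         ("5, [4,6]",       update(points=[5], intervals=[(4, 6)])),
         ("5 (2x), [0,10]", update(points=[5, 5], intervals=[(0, 10)]))]
for label, post in cases:
    mean = (theta * post).sum()
    std = np.sqrt(((theta - mean) ** 2 * post).sum())
    print(f"{label:15s} posterior mean {mean:5.2f}, std {std:4.2f}")
```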
Conflicting Expert Opinion
• Sources of conflicting expert opinion:
  – Elicitation questions not properly asked or understood (correct through an iterative expert elicitation process)
  – Each person is susceptible to differences in judgment
• "Weighting" of expert opinion data has been proposed
  – Difficult to determine who is "more" right
  – Adding weights to experts is therefore a matter of the analyst's judgment, and should be avoided
• Proposed approach:
  – Each expert opinion is treated as a random sample from a parent PDF describing all possible "expert opinions"
  – Weight is related to the width of the interval
  – Conflict is accounted for automatically in the updating process
Treatment of Model Uncertainty
• Separate inherent (X) and epistemic (θ) variables

[Figure: reliability vs. X. The computed CDF reflects inherent uncertainty; the bounds around it reflect epistemic uncertainty. As epistemic uncertainty is reduced, the bounds collapse onto the computed CDF.]
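A minimal sketch of this separation, under illustrative assumptions (X exponential with an uncertain mean, a handful of epistemic samples): each epistemic sample yields one conditional CDF of X, and the envelope of those curves gives the epistemic bounds shown in the figure.

```python
import numpy as np
from scipy import stats

x = np.linspace(0.0, 5.0, 200)

# A few epistemic samples of the uncertain mean of X (illustrative values only).
lam_samples = [0.6, 0.8, 1.0, 1.2, 1.4]

# One conditional CDF of X per epistemic sample (inherent uncertainty),
# stacked so an envelope can be taken over the epistemic samples.
cdfs = np.vstack([stats.expon.cdf(x, scale=lam) for lam in lam_samples])
lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)

i = np.searchsorted(x, 1.0)
print(f"epistemic bound width at x = 1.0: {upper[i] - lower[i]:.3f}")
# Shrinking the spread of lam_samples (reducing epistemic uncertainty)
# collapses the lower/upper envelopes onto a single computed CDF.
```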
Efficient Computation
• Because of model uncertainty θ, β (the safety index) is a random variable
• Interval estimates with a confidence level:
  – Compute the CDF of β
  – Exact confidence bounds are determined from the CDF
  – Usually requires a numerical tool → NESSUS
• First-Order Second-Moment (FOSM) approximation:
  – Requires only a single reliability computation using the mean values of the epistemic variables θ
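A sketch of a first-order second-moment approximation of β as a function of the epistemic variables; beta_of_theta below is a toy placeholder rather than the paper's model, and the finite-difference gradient stands in for the sensitivities that an MPP-based analysis (e.g., NESSUS) would return as a by-product of the single reliability computation.

```python
import numpy as np
from scipy.stats import norm

def beta_of_theta(theta):
    """Toy stand-in for a reliability analysis returning beta(theta)."""
    lam, model = theta                          # illustrative epistemic variables
    pf = 1.0 - np.exp(-(model / 100.0) / lam)   # placeholder limit state only
    return -norm.ppf(pf)

theta_mean = np.array([1.0, 1.0])   # mean values of the epistemic variables
theta_std = np.array([0.2, 0.3])    # their standard deviations (assumed independent)

# FOSM: evaluate beta at the epistemic means, then propagate the epistemic
# variances through a linearization (finite-difference gradient here).
beta_mean = beta_of_theta(theta_mean)
h = 1e-4
grad = np.array([(beta_of_theta(theta_mean + h * e) -
                  beta_of_theta(theta_mean - h * e)) / (2 * h) for e in np.eye(2)])
beta_std = np.sqrt(np.sum((grad * theta_std) ** 2))
print(f"beta approximately Normal({beta_mean:.2f}, {beta_std:.2f}) by FOSM")
```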
Analytical Example
• Limit state function: g = X − θ/100, with pf = Pr[g < 0]
• Assume X has an exponential PDF with uncertain mean value λ
• θ represents model uncertainty: assume Normal(1, σ) with σ = 0.3
• Estimate λ using the 5 interval data shown in the figure
• The reliability index β (related to pf) is a function of the epistemic parameters λ and θ

[Figure: probability density function of λ for the non-informative prior and after successively adding the interval estimates [.5, .8], [1, 1.2], [.7, 1.1], [.9, 1.4], and [.9, 1.5].]
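A sketch of the reliability index for this example, under the assumption that the limit state is g = X − θ/100 with X exponential with mean λ (so pf = Pr[X < θ/100]); this reconstruction is illustrative rather than a verbatim formula from the paper.

```python
import numpy as np
from scipy.stats import norm

def beta(lam, theta):
    """Reliability index for g = X - theta/100, X ~ Exponential(mean = lam)."""
    pf = 1.0 - np.exp(-(theta / 100.0) / lam)   # Pr[X < theta/100]
    return -norm.ppf(pf)

print(f"beta(lam=1, theta=1) = {beta(1.0, 1.0):.2f}")   # roughly 2.3
```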
Uncertain Reliability Index
[Figure: cumulative distribution functions of the reliability index after updating with 1, 2, 3, 4, and 5 expert opinions; the reliability index ranges from roughly 2 to 4.5.]
Confidence bounds shrink when more information is available
Decision Making with
Epistemic Uncertainty
• In a decision-making context, a penalty p(β − b) is associated with using the "wrong" reliability index b; the expected value of the total penalty is:

  E[p(B - b)] = \int_B p(\beta - b)\, f_B(\beta)\, d\beta

• The minimum-penalty reliability index minimizes the expected value of the total loss (Der Kiureghian, 1989):

  \beta_{mp} = \arg\min_{b} \int_B p(\beta - b)\, f_B(\beta)\, d\beta
Cost function and β_mp
• Linear penalty function:

  p(\beta) = \begin{cases} a\,(\beta - \beta_{target}), & \beta \ge \beta_{target} \\ k\,a\,(\beta_{target} - \beta), & \beta < \beta_{target} \end{cases}

• k is a measure of the asymmetry of the penalty (usually k > 1)
• Minimum-penalty reliability index (Der Kiureghian, 1989):

  \beta_{mp} = F_{\beta}^{-1}\!\left(\frac{1}{k+1}\right)

• Normal approximation:

  \beta_{mp,N} = \bar{\beta} - u\,\sigma_{\beta}, \qquad u = \Phi^{-1}\!\left(\frac{k}{k+1}\right)

[Figure: total cost vs. actual reliability index for k = 1, 5, and 20, with β_target marked.]
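A small sketch of both forms of the minimum-penalty index, assuming β has been characterized by samples (illustrative normal samples below): the exact value is the 1/(k+1) quantile of β, and the normal approximation uses the sample mean and standard deviation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
beta_samples = rng.normal(3.0, 0.4, 10_000)   # stand-in for the computed CDF of beta

for k in (1, 5, 20):
    exact = np.quantile(beta_samples, 1.0 / (k + 1))       # F_beta^{-1}(1/(k+1))
    u = norm.ppf(k / (k + 1))
    approx = beta_samples.mean() - u * beta_samples.std()  # normal approximation
    print(f"k = {k:2d}: beta_mp exact {exact:.2f}, normal approx {approx:.2f}")
# A larger k (heavier penalty for overstating reliability) pushes beta_mp to a
# lower, more conservative value.
```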
Minimum-Penalty Reliability Index
[Figure: minimum-penalty reliability index (exact and normal approximation) and the standard deviation of β vs. the number of expert opinions (1 to 5), for k = 1, 5, and 20.]
Minimum-Penalty Reliability Index
• β_mp is a "safe" reliability level
• This level strongly depends on the severity of the consequence (the value of k)
• β_mp increases with the number of experts
Summary
• The proposed method handles both precise and interval (expert opinion) data within a probabilistic framework
• Conflicting information is automatically accounted for
• The minimum-penalty reliability index can be estimated from a single reliability computation → highly efficient
  – Allows the effect of epistemic uncertainties to be determined
• A companion paper (tomorrow) will discuss the use of a distribution system, whereby the data can determine the shape of the distribution as well as any parameter
Future Work
• Amenable to MPP-based solution (future work)
• Link to pre-posterior analysis; compute the sensitivity of the design decision to epistemic uncertainty

[Figure: model uncertainty]
Thank You!
Luc Huyse & Ben Thacker
Southwest Research Institute
San Antonio, TX