The Value of Using Imprecise Probabilities in Engineering Design

Jason Matthew Aughenbaugh
e-mail: [email protected]
Christiaan J. J. Paredis
e-mail: [email protected]
Systems Realization Laboratory,
G. W. Woodruff School of Mechanical
Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332-0405
Engineering design decisions inherently are made under risk and uncertainty. The characterization of this uncertainty is an essential step in the decision process. In this paper,
we consider imprecise probabilities (e.g., intervals of probabilities) to express explicitly
the precision with which something is known. Imprecision can arise from fundamental
indeterminacy in the available evidence or from incomplete characterizations of the
available evidence and designer’s beliefs. The hypothesis is that, in engineering design
decisions, it is valuable to explicitly represent this imprecision by using imprecise probabilities. This hypothesis is supported with a computational experiment in which a pressure vessel is designed using two approaches, both variations of utility-based decision
making. In the first approach, the designer uses a purely probabilistic, precise best-fit
normal distribution to represent uncertainty. In the second approach, the designer explicitly expresses the imprecision in the available information using a probability box, or
p-box. When the imprecision is large, this p-box approach on average results in designs
with expected utilities that are greater than those for designs created with the purely
probabilistic approach, suggesting that there are design problems for which it is valuable
to use imprecise probabilities. [DOI: 10.1115/1.2204976]
Keywords: imprecision, imprecise probabilities, epistemic uncertainty,
uncertainty, engineering design, probability bounds analysis, PBA
Introduction
Of the many challenges in engineering design, among the greatest are accounting for uncertainty and reducing associated risks.
Engineers have developed or adopted various methods to support
design decision making under uncertainty and to mitigate risk,
such as safety factors [1], utility theory [2–5], robust design [6,7],
reliability-based design optimization [8–11], and probabilistic risk
assessments [12,13]. The risk that results from uncertainty has
three main components—what can go wrong, how likely it is, and
what its consequences are [13], with risk defined as the probability of a scenario times the consequences associated with that scenario. In this paper, we focus on quantification of probabilities.
We believe that current methods are still limited in their ability to
clearly and quantitatively reflect the uncertainty encountered in
engineering design because they do not adequately accommodate
imprecise characterizations of uncertainty.
Imprecision can result from fundamental indeterminacy in the
available evidence or from incomplete characterizations of the
available evidence and designer beliefs, as discussed subsequently
in this paper. Common practice, such as in expected utility maximization, is to ignore this imprecision and to represent uncertainty
using precise probabilities. Our hypothesis is that, in engineering
design decisions, it is valuable to explicitly represent the imprecision in the available characterization of uncertainties by using
imprecise probabilities (e.g., intervals of probabilities).
In the context of decision theory and utility theory, one design
method is preferred to another if on average it yields designs with
higher expected utilities. In this paper, we use a pressure vessel
design example and computational experiment that demonstrate
Contributed by the Design Automation Committee of ASME for publication in the
JOURNAL OF MECHANICAL DESIGN. Manuscript received September 13, 2005; final
manuscript received December 21, 2005. Review conducted by Shapour Azarm. Paper presented at the ASME 2005 Design Engineering Technical Conference and
Computers and Information in Engineering Conference (DETC2005), September 24–
28, 2005, Long Beach, CA.
the value of a design method that uses imprecise probabilities to
characterize uncertainty, in comparison to a method that uses precise probabilities.
Uncertainty in Engineering Design
Definitions of Uncertainty. We view uncertainty in the context
of decision theory, and following Nikolaidis [14], we define uncertainty indirectly from the definition of certainty. Nikolaidis defines certainty as the condition of knowing everything necessary
to choose the course of action whose outcome is most preferred.
We define a decision-maker’s uncertainty as the gap, shown in
Fig. 1(a), between certainty and the decision-maker’s present state
of information—the information the decision maker currently has
available for decision making, which is a slight refinement of
Nikolaidis’s definition. Note that the relative sizes of components
in Fig. 1 have no real meaning; the intention here is only to show
how the components are related to each other.
One component of uncertainty is irreducible uncertainty, as
shown in Fig. 1(b). This uncertainty commonly comes in the form
of a random process, and other authors use terms such as variability or aleatory uncertainty [15–17] to describe similar aspects of
uncertainty. The actual existence of such irreducible uncertainty is
a philosophical issue about which many people disagree. For example, people disagree about whether any process is truly random, with some claiming that if the process were fully understood, it would no longer appear random. Such philosophical
arguments about the existence of irreducible uncertainty are tangential to our main point. Most authors [15–17], including skeptics of a fundamental distinction between types of uncertainty
[18], are willing to admit that it is useful in practice to accept that
some uncertainties, such as machining errors, are the result of
truly random processes. In a sense, engineers are building a model
of the uncertainty. Even if the process is not random at the level of
fundamental physics, engineers may choose to assume it is for
practical reasons, much as they make other assumptions and simplifications when modeling real systems.
Copyright © 2006 by ASME
JULY 2006, Vol. 128 / 969
Fig. 1 Characteristics of uncertainty (adapted from Ref. [10])
Irreducible uncertainty accounts for the gap between certainty
and a state of precise information, defined as the state of having
acquired all information about a particular model of irreducible
uncertainty available at any price. Clearly, different assumptions
could be made about the irreducible uncertainty, but we focus on
how well the irreducible uncertainty is characterized. For example, even if a quantity is assumed to be random, the type of the
distribution (e.g., normal) and its parameters (e.g., mean and variance) still need to be determined. If the distribution type and
parameters are known perfectly, then the irreducible uncertainty is
known precisely.
The gap between the present state of information and a state of
precise information about the irreducible uncertainty, shown in
Fig. 1(b), is defined as imprecision. Other authors use terms such
as epistemic uncertainty, ignorance, or reducible uncertainty
[15–17] to describe similar aspects of uncertainty. We specifically
avoid the terms aleatory and epistemic because they have been
used primarily in an attempt to differentiate the inherent nature of
different uncertainties, rather than focusing on how human decision makers should manage uncertainty in engineering design.
Previous work has examined imprecision in preferences [19] and
pure interval data [20], while other work has examined the use of
possibility theory to capture uncertainty in design [21,22].
Decision theory has long differentiated between decision making with known probabilities (decision making under risk)
and decision making without knowledge of probabilities (decision
making under uncertainty) [23]. Since then, researchers have explored the middle ground of incomplete knowledge of probabilities [24], such as ordered probabilities [25] and linear constraints
on the probabilities [26]. Other literature has examined incomplete or partial information (see Ref. [27] for a review) in the
context of imprecisely characterized preferences [28–30] and unknown weights in multiattribute decision making [31].
These methods all share the goal of formalizing decision making with partial information, and most result in intervals of expected utility that must be resolved using additional decision
rules. In this paper, we focus on partial information about
probabilities—specifically, imprecise probabilities—in the context
of engineering design and risk. Imprecise probabilities are a generalization of the notion of probability with roots back to Keynes,
who first suggested that a single number is not sufficient to characterize probability [32]. Since Keynes, many researchers have
presented theories of uncertainty that start with the notion that
probabilities in general can only be bounded by upper and lower
(or imprecise) probabilities [33–37] or sets of probabilities
[38–40]. These generalizations of probability theory are more
broadly applicable than theories of ordered probabilities. In the
following sections, we motivate the representation of uncertainty
in engineering design using imprecise probabilities, as suggested
and formalized by Walley [36]. We begin with a discussion of
interpretations of probability.
Interpretations of Probability. Most engineers are familiar
with the mathematics of probability, but many have not been formally exposed to the competing interpretations of probability. The
philosophical arguments for or against different interpretations
can be quite passionate. The interpretations most commonly
adopted in engineering design are variations of the frequentist and
subjective interpretations [41]. We briefly introduce both and indicate how imprecise probabilities are relevant under either interpretation. For more complete discussions of interpretations of
probability, see, for example, Refs. [36,41–44].
The frequentist interpretation is based on the notion of relative
frequencies of outcomes. Under a frequentist interpretation, a
probability represents the ratio of times that one outcome occurs
compared to the total number of outcomes in a series of identical,
repeatable, and possibly random trials. In engineering design,
events are not always repeatable. Even assuming some events are
essentially repeatable and data can be collected, there is no guarantee that a particular sample is representative of the true relative
frequency. Although in theory the relative sample frequency approaches the true relative frequency as the sample size goes to
infinity, an infinite sample size is impossible to acquire in practice.
Consequently, engineers will always face imprecision in their
characterizations of the frequentist probabilities.
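The gap between a sample frequency and a true relative frequency can be made concrete with a small simulation. The following sketch is illustrative only (not part of the original study); the true probability 0.4 and the sample sizes are arbitrary choices:

```python
import random

def sample_frequency(p_true, n, seed=0):
    """Relative frequency of successes in n Bernoulli(p_true) trials."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(1 for _ in range(n) if rng.random() < p_true)
    return hits / n

# With a finite sample, the observed frequency differs from p_true;
# the discrepancy shrinks on average as n grows, but never vanishes.
small = sample_frequency(0.4, 10)
large = sample_frequency(0.4, 100_000)
print(small, large)
```

Even the large sample only approximates the true value of 0.4, which is why a purely frequentist characterization is always imprecise in practice.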
Proponents of a subjective interpretation of probability assert
that there is no such thing as a true or objective probability, but
rather probabilities are an expression of belief based on an individual’s willingness to bet 关18,42,45兴. One of the subjectivists’
primary arguments against a frequentist perspective is the absence
of truly repeatable events, especially in practical problems. For
example, the probability that team A beats team B in a basketball
game has no real meaning under a frequentist interpretation, because that event—that particular game—will occur exactly once.
In this context, the notion of a long term frequency, and even
random events, is meaningless [42]. However, many people are
willing to express their belief of who will win in terms of bets.
When framed appropriately, such bets can be taken as subjective
probabilities.
The process of eliciting and assessing an individual’s beliefs, or
willingness to bet, is resource intensive. Even assuming that precise beliefs—and, hence, precise probabilities—exist, it will often
be impractical to fully characterize them due to constraints such as
bounded rationality, time, and computational ability [27,36,46].
Consequently, only a partial—and therefore imprecise—
characterization of subjective probabilities is normally available.
We address additional motivations for the use of imprecise probabilities in the next section.
We prefer to adopt a loosely subjective interpretation of probability because true relative frequencies cannot be determined
with any finite number of data samples, and because a subjective
interpretation is applicable to a broader class of problems, as it is
not limited to repeatable events. Our interpretation is not as strict
as the traditional views 共see Lindley 关45兴 for a summary of the
strict subjective tradition兲, because we admit imprecisely known
subjective probabilities. The traditional school claims that by definition, subjective probabilities are known to a decision maker
共DM兲, because they are his or her beliefs. We prefer an interpretation that acknowledges the practical difficulties 关27,36,46兴 in
arriving at a precise characterization of such beliefs. Naturally,
subjective probabilities should be consistent with available information, including knowledge about observed relative frequencies
and the DMs actual beliefs; such probabilities can be considered
rationalist subjective probabilities 关36兴.
Transactions of the ASME
The Motivation for Imprecise Probabilities. The general motivation for imprecise probabilities is that the more evidence on
which a probability estimate is based, the more confidence a decision maker can have in it. Thus, the imprecision in the probabilities should be expressed explicitly in order to signal the appropriate level of confidence to ascribe to them.
Adapting an example from Walley [36], consider the toss of a
thumbtack. The goal of the exercise is to determine the probability
that the tack lands pin-up. Three experimenters perform this exercise, as follows:

— Experimenter A is in a hurry and does not even look at the
thumbtack. Experimenter A employs a noninformative
prior distribution, in this case using the principle of indifference or insufficient reason [32], and assumes that the probability
of the tack landing pin-up is equal to the probability
it lands pin-down, thus ascribing a probability of 0.5 to
both outcomes.
— Experimenter B tosses the thumbtack ten times and gets
six pin-ups. Experimenter B’s estimated precise probability of the tack landing pin-up is thus 0.6.
— Experimenter C tosses the thumbtack 1000 times and gets
400 pin-ups. Experimenter C’s estimated precise probability of the tack landing pin-up is thus 0.4.
If we think of the three experimenters as three analysts that
could provide us with information, which analyst would we prefer
to hire? Because experimenter C’s estimate was based on more
data, it is more precise than experimenter B’s estimate. Experimenter A’s estimate was based on no data, so it does not seem
reasonable to place much confidence in it. Nevertheless, the precise probability estimates of 0.5, 0.6, and 0.4 appear equally credible. By not expressing the imprecision in these estimates, one is
arbitrarily eliminating it by assuming precision that has no justification in the available evidence. This problem can be overcome
by allowing analysts to state imprecise probabilities.
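One standard way to express such imprecision is a confidence interval on the binomial proportion. As a sketch (the paper itself does not prescribe this particular interval), the Wilson score interval makes the contrast between experimenters B and C explicit:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval: an approximate 95% confidence interval
    (for z = 1.96) on a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Experimenter B: 6 pin-ups in 10 tosses -> a wide interval
lo_b, hi_b = wilson_interval(6, 10)
# Experimenter C: 400 pin-ups in 1000 tosses -> a much tighter interval
lo_c, hi_c = wilson_interval(400, 1000)
print((lo_b, hi_b), (lo_c, hi_c))
```

Experimenter C's interval is far narrower than experimenter B's, so an interval-valued statement conveys exactly the difference in evidential support that the precise estimates 0.6 and 0.4 hide.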
A full discussion of Walley’s formalization of imprecise probabilities [36] is well outside the scope of this paper, but we summarize a few essentials. Walley defines upper and lower probabilities, which are special cases of upper and lower previsions [47]. In
simple terms, the lower prevision is a price at which a DM is sure
he or she would buy a bet, and the upper prevision is a price at
which the DM is sure he or she would buy the opposite of the bet
共which is equivalent to selling the original bet兲. Previsions are
equivalent to probabilities if the stakes of the bet are one unit of
currency, such as one dollar. If the upper and lower previsions are
equal, then they jointly represent the DM’s fair price for the bet,
the price at which the DM is willing to take either side of the bet.
At a price between the upper and lower previsions, the DM is not
willing to enter the bet on either side, at least not without collecting more information and updating his or her previsions.
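The behavioral reading of upper and lower previsions can be sketched as a simple decision rule (an illustration of the buy/sell/abstain interpretation above; the numeric previsions are hypothetical):

```python
def betting_stance(price, lower, upper):
    """Behavioral reading of lower/upper previsions for a unit-stake bet:
    buy below the lower prevision, sell above the upper prevision,
    abstain in between (the zone of indeterminacy)."""
    if price < lower:
        return "buy"
    if price > upper:
        return "sell"
    return "abstain"

# Hypothetical previsions for the event "the tack lands pin-up"
lower, upper = 0.35, 0.45
print(betting_stance(0.30, lower, upper))  # buy
print(betting_stance(0.40, lower, upper))  # abstain
print(betting_stance(0.50, lower, upper))  # sell
```

When the two previsions coincide, the abstain zone vanishes and the rule reduces to the precise-probability fair price.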
One possible objection to the use of imprecise probabilities is
that they can lead to such indeterminacy of action during decision
making. That is, given imprecise probabilities, there may not be a
single, clear “best” solution according to standard decision theories. We counter this by arguing that if the available evidence does
not clearly suggest a particular course of action, then the representation of this evidence should not arbitrarily pretend that it
does. An approach that demands precise probabilities necessitates
an arbitrary resolution of the indeterminacy. Such an approach
does not differentiate well-grounded probabilities (such as experimenter C’s in the tack-tossing example) from arbitrary ones (such
as experimenter A’s). By admitting imprecise probabilities, one
can abstain from these arbitrary judgments during analysis and
support better decision making, as follows.
It is true that in order for a single solution to be chosen, any
indeterminacy will need to be resolved. However, the explicit representation of imprecision in the characterization of uncertainty
allows for the direct management of the imprecision in the context
of a decision. For example, if a decision is not sensitive to the
current level of imprecision, a robust decision can be made using
rules such as arbitrary choice [36], Γ-maximin [48], or the Hurwicz criterion [49]. On the other hand, if the decision is sensitive
to the existing imprecision, one can decide to collect more information (such as more tack tosses in the earlier example), perhaps
managing the set of alternatives according to policies of
E-admissibility [39] or maximality [36].
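Two of these decision rules can be sketched for alternatives whose expected utilities are known only as intervals. The designs and interval values below are hypothetical, chosen only to show how the rules can disagree:

```python
def gamma_maximin(alternatives):
    """Gamma-maximin: pick the alternative whose worst-case (lower)
    expected utility is largest."""
    return max(alternatives, key=lambda name: alternatives[name][0])

def hurwicz(alternatives, alpha=0.5):
    """Hurwicz criterion: maximize a weighted mix of the lower and
    upper expected utilities; alpha = 1 recovers Gamma-maximin."""
    return max(alternatives,
               key=lambda name: alpha * alternatives[name][0]
                              + (1 - alpha) * alternatives[name][1])

# Hypothetical designs with interval-valued expected utilities (lower, upper)
designs = {"A": (2.0, 9.0), "B": (4.0, 6.0)}
print(gamma_maximin(designs))       # "B": better worst case
print(hurwicz(designs, alpha=0.2))  # "A": optimism favors the high upper bound
```

That the two rules select different designs here illustrates why the choice of rule matters once imprecision is represented explicitly.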
Another frequently levied—but invalid—objection to imprecise
probabilities is the Dutch Book argument that they are irrational
[36,42,50]. The general idea of a Dutch Book is that if a DM’s
probabilities violate certain rules, a group of bets can be constructed, all of which the DM is willing to accept, but the combination of which results in a sure loss; the DM will lose money
under any outcome. This argument is often presented in favor of
precise probabilities and the axioms of Kolmogorov [51]. However, Walley [36] presents axioms of coherence for imprecise
probabilities that also avoid sure loss.
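A minimal Dutch Book can be sketched numerically. If a DM's fair prices for a bet on an event and a bet on its complement sum to more than the unit stake, a bookie who sells the DM both bets guarantees the DM a loss (the prices below are hypothetical):

```python
def dutch_book_loss(price_event, price_complement, stake=1.0):
    """If a DM's fair prices for a bet on A and a bet on not-A sum to
    more than the stake, buying both guarantees a loss: exactly one
    of the two bets pays out, whatever happens."""
    cost = price_event + price_complement
    payoff = stake  # one and only one of the two bets wins
    return cost - payoff  # positive -> sure loss for the buyer

# Incoherent prices P(A) = 0.7, P(not A) = 0.5: a sure loss of $0.20
print(dutch_book_loss(0.7, 0.5))
# Coherent prices P(A) = 0.6, P(not A) = 0.4: no sure loss
print(dutch_book_loss(0.6, 0.4))
```

Coherent imprecise probabilities avoid this trap in the same way: the DM simply abstains from prices inside the indeterminacy zone, so no combination of accepted bets can guarantee a loss.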
Once again, the details of Walley’s formalization of imprecise
probabilities are beyond the scope of this paper, but he begins
with the same fundamental notion of rationality as de Finetti
[42,47]—avoidance of a sure loss. His axioms of coherence assure
that if a DM’s imprecise probabilities satisfy them, then the DM is
not subject to a sure loss. His primary deviation from axioms of
precise probabilities such as de Finetti’s is the allowance for a
range of indeterminacy—prices at which the DM will not enter a
bet as either a buyer or a seller. Thus, if a DM’s imprecise probabilities are coherent, he or she will not enter into a situation in
which he or she suffers a sure loss. If a DM spends the time and
effort to collect complete evidence and fully elicit his or her beliefs, then the imprecise probabilities will collapse into precise
probabilities.
The only concrete way to determine the value of using imprecise probabilities in engineering design is in the context of decision theory. Informally, one method of representing uncertainty is
better than another if it allows designers to make better decisions.
With this in mind, we discuss decision theory in the following
section.
Decision-Making Under Uncertainty
Engineering design decisions are hampered by the uncertainty
inherent in predicting the outcomes, payoffs, and risks of different
actions. In this section, we discuss a safety-factor approach to
design, probabilistic risk assessment, and traditional statistical decision theory.
Design With Safety Factors. It may be that due to complex
interactions in a system or due to a lack of valid data, engineers
cannot characterize risk very well. One way engineers strive to
reduce risk is by overdesigning the product by a safety factor. In
this sense, a safety factor is an attempt to wrap all uncertainty into
one number. The simplicity of employing safety factors is their biggest advantage, but the unanswered question in this approach is
how large a safety factor is necessary to meet risk, reliability,
and performance requirements given the existing uncertainty. In
the presence of large data histories, safety factors can be linked to
probabilistic characterizations of uncertainty and reliability [1].
However, this link assumes precise probabilities are available.
In practice, particularly for novel design tasks, engineers do not
have precise probabilities and often must resort to an ad hoc
choice of a safety factor. Because engineers do not want structures
to fail, they usually choose safety factors that are much larger than
necessary. This frequently results in costly overdesign of the product. At the same time, the ad hoc method fails to provide any
guarantee of reliability. Consequently, engineers have pursued
more formal methods for dealing with uncertainty, such as probabilistic risk assessment and traditional statistical decision theory.
Probabilistic Risk Assessment. Since the 1970s, one widely
used approach to quantitative risk assessment has been probabilistic risk assessment (PRA) [13]. PRA, while serving somewhat as
an umbrella term, is essentially a process for assessing risk in a
rigorous and quantitative manner. PRA involves identifying scenarios, consequences, and probabilities. It is clear that poor decisions may result from a PRA if the numbers used in the quantification of risk are incorrect. However, the most important time to
use PRA is when there is little data available on which to base
estimates of probabilities [13].
PRA deals with this imprecision by beginning the characterization of probabilities with prior distributions and updating these
priors according to Bayes’ rule as evidence is collected. We believe there are several problems with this approach, including both
with interpretation and application. First, the probability distributions that result from this approach attempt to characterize uncertainty from both randomness and imprecision. This inherently
confounds two different aspects of the problem, and therefore
makes it difficult to draw useful insights from the resultant distributions [52]. Conversely, a method based on imprecise probabilities presents a quantitative way to handle imprecision independently from randomness. Second, there is no general agreement
on the selection of priors [36]. The choice of the prior distribution,
especially the statistical variance that is supposed to act as a proxy
for imprecision, is a very qualitative act.
Traditional Statistical Decision Theory. In traditional statistical decision theory [53], utility analysis, as originally proposed by
von Neumann and Morgenstern [54], is used for making decisions
under uncertainty. In general, utility expresses preference—more
preferred decision outcomes are assigned higher utility values.
Utility theory has been studied extensively by economists and
decision theorists, and there continues to be increasing interest
in applying utility methods to engineering problems, as in Refs. [2–5].
If chosen correctly, utilities reflect the decision-maker’s preferences, even under uncertainty. By applying the expected value
operator, the decision maker weights all possible outcomes according to their likelihood of occurring, and then chooses the
action that maximizes the expected utility.
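Expected utility maximization over a discrete set of outcomes can be sketched directly; the actions, probabilities, and utilities below are hypothetical, chosen only to illustrate the computation:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Choose the action with the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Hypothetical actions with precise outcome probabilities
actions = {
    "conservative": [(1.0, 5.0)],                # certain payoff
    "risky":        [(0.6, 10.0), (0.4, -2.0)],  # EU = 6.0 - 0.8 = 5.2
}
print(best_action(actions))  # "risky": 5.2 > 5.0
```

Note that the computation presumes each probability is a single precise number, which is exactly the assumption the rest of this paper relaxes.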
In order to calculate expectations, common practice in traditional statistical decision theory is to characterize uncertainty with
a precise probability density function. In order to use this representation, a designer is forced to either eliminate imprecision or
ignore it. Eliminating imprecision requires the designer to expend
resources to acquire more information, thus increasing the costs of
the design process. Ignoring imprecision involves overstating the
true current state of information by making assumptions. This is
equivalent to forcing a decision maker to make an exact statement
or choice, even if he or she has not yet reached an exact belief or
judgment [27]. Since both of these approaches have inherent problems, it appears that an extension to existing statistical decision
theory is needed. In the following sections of this paper, we investigate the hypothesis that by generalizing the statistical formalism to allow imprecise probabilities, better decisions can be
obtained.
Probability Bounds Analysis
Some alternatives to precise probabilities, such as fuzzy sets
[6,55], possibility theory [21,22,56], fuzzy measures [57], the
transferable belief model [58], and some interpretations of the
Dempster-Shafer evidence theory (see Ref. [59] for a discussion
of interpretations), abandon the notion of probability entirely. This
opens them up to significant criticism, at least in part due to their
unfamiliarity to practicing engineers and their lack of a clear operational interpretation [60]. Imprecise probabilities, on the other
hand, are an extension of traditional probability theory and therefore have a clear behavioral interpretation.
There are many ways to compute with imprecise probabilities,
including second-order [61] or joint Monte Carlo sampling [62].
However, these methods are computationally expensive. In order
to consider imprecision explicitly and distinctly from irreducible
uncertainty, we use a recently developed formalism that extends
traditional probability theory and incorporates imprecise
probabilities—called probability boxes, or p-boxes [63]. A p-box
Fig. 2 Dimensions of uncertainty
is a slightly simplified representation of imprecise probabilities
that allows for much more flexible and efficient computations, as
described in the Discussion and Future Work section. The analysis
of uncertainty that is represented in a p-box is called probability
bounds analysis (PBA).
PBA Basics. A p-box is a more expressive generalization of
both traditional probability distributions and interval representations, as is illustrated in Fig. 2. The p-box incorporates both imprecision and probabilistic characterizations by expressing interval bounds on the cumulative distribution function
(CDF) for a random variable. A general p-box explicitly expresses
both probability (represented by the shapes of the boundary
CDFs) and imprecision (represented by the separation between the
upper and lower bounds). More formally, the bounds on a p-box,
such as shown in Fig. 3(a), are given by two CDFs (F1 and F2)
that enclose a set of CDFs that are, under some interpretation,
consistent with the current state of information.
Constructing p-Boxes. There are several ways to construct
p-boxes [64,65], depending on the type of information available.
In this paper, we construct p-boxes based on 95% confidence intervals on the parameters of a known distribution type, as described in Ref. [66]. While the distribution type will not always be
known, in engineering applications it is common that some theoretical knowledge can guide the selection of a distribution type
[67]. However, such knowledge is not a requirement for using
PBA. Probability boxes also can be constructed based on
distribution-free methods that just use statistical data or moments.
An engineer could also assume different types of distribution,
construct p-boxes for each, and then take the envelope of all of
these to form the most general p-box.
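The envelope construction can be sketched concretely: at each point, take the pointwise minimum and maximum of the candidate CDFs. The candidate distributions below (normals with means spanning a hypothetical confidence interval) are illustrative, not taken from the paper:

```python
from statistics import NormalDist

def envelope_pbox(dists, xs):
    """Pointwise envelope of several candidate CDFs: at each x, the
    p-box bounds are the min and max of the candidates' CDF values."""
    lower = [min(d.cdf(x) for d in dists) for x in xs]
    upper = [max(d.cdf(x) for d in dists) for x in xs]
    return lower, upper

# Hypothetical candidates: unit-variance normals whose means span [0, 1]
candidates = [NormalDist(mu, 1.0) for mu in (0.0, 0.5, 1.0)]
lower, upper = envelope_pbox(candidates, xs=[0.0])
print(lower[0], upper[0])  # interval bounds on P(X <= 0)
```

Any CDF lying inside this envelope remains consistent with the assumed state of information; the envelope of several assumed distribution families would be formed the same way.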
In general, there is no rule for selecting the confidence level at
which to construct a p-box. However, a confidence level has a
clear interpretation; if something, such as a confidence interval on a
point estimate, is constructed at the 95% confidence level, it
Fig. 3 Example p-box and distributions
Transactions of the ASME
Fig. 4 Pressure vessel schematic and design variables
means that if the experiment were repeated many times and a 95%
confidence interval were constructed for each repetition, then 95% of
those intervals would contain the true value. An individual designer must
understand the consequences of confidence levels and choose one
that is appropriate for the problem and preferences at hand.
In this way, the selection of a confidence level has some of the
same limitations as the selection of a prior in Bayesian analysis.
However, we see major advantages to using confidence levels
and p-boxes instead of Bayesian analysis. The first is the separation of imprecision and irreducible uncertainty. The second is that
confidence levels have a clear meaning and are consistent, in that
a 99% confidence interval always contains the 95% confidence
interval, which always contains the 90% confidence interval, and so
on. Conversely, the use of different prior distributions in Bayesian
analysis can lead to very different results, yet it is not obvious in
the posterior distribution what the initial assumptions were.
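The nesting property of confidence intervals is easy to verify numerically. The sketch below uses a normal-theory interval for a mean with known standard deviation; the sample statistics are hypothetical:

```python
from statistics import NormalDist

def mean_ci(xbar, sigma, n, level):
    """Normal-theory confidence interval for a mean with known sigma."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # two-sided critical value
    half = z * sigma / n ** 0.5
    return xbar - half, xbar + half

# Nesting: the 99% interval contains the 95%, which contains the 90%
ci90 = mean_ci(10.0, 2.0, 25, 0.90)
ci95 = mean_ci(10.0, 2.0, 25, 0.95)
ci99 = mean_ci(10.0, 2.0, 25, 0.99)
print(ci90, ci95, ci99)
```

Because the critical value grows monotonically with the confidence level, the intervals nest for any data set, which is the consistency property claimed above.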
Interpreting a p-Box. For illustration, it is easiest to consider a
p-box as constructed at the 100% confidence level, though in any
practical problem such a p-box would be infinite. In this case, the
p-box expresses the range of all CDFs that are still deemed possible based on existing information. For example, assume that for
all practical purposes, X is a random variable, and engineers have
strong theoretical information that X is normally distributed with
known variance σ² = 1. However, the engineers can only characterize the mean imprecisely, bounding it in the interval μ = [0, 1].
Extending the notation of probability, we can write
X ~ N(μ, σ²) = N([0, 1], 1)    (1)

such that X is distributed normally with mean μ and variance σ².
The corresponding p-box is shown in Fig. 3(a). In this case, the
bounds on the p-box are defined by the two distributions, F1
~ N(0, 1) and F2 ~ N(1, 1). The true CDF is unknown, and any of
the infinite number of normal CDFs with σ² = 1 inside the p-box
could be the true one, such as those shown in Fig. 3(b). However,
any distribution that falls partially or entirely outside of the p-box
is inconsistent with the present state of information.
Vertical slices of the p-box yield intervals on the CDF for a
particular realization. For example, a vertical slice at zero yields
the interval [0.1587, 0.5] for the CDF. This means that the
probability that X is less than zero is between 0.1587 and 0.5, but
one does not have enough information to specify a precise probability within that interval. Horizontal slices of a p-box result in
intervals on the quantiles of the CDF. For example, a slice at the
median (CDF = 0.5) gives the interval [0, 1] for the median.
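Both slicing operations can be reproduced directly from the bounding CDFs of the N([0, 1], 1) p-box described above (a sketch; the bound assignment follows from the fact that the larger mean gives the lower CDF value at every x):

```python
from statistics import NormalDist

# Bounding CDFs of the p-box X ~ N([0, 1], 1)
F_upper = NormalDist(0.0, 1.0).cdf   # mean 0: upper CDF bound
F_lower = NormalDist(1.0, 1.0).cdf   # mean 1: lower CDF bound

# Vertical slice at x = 0: interval on P(X <= 0), about [0.1587, 0.5]
print(F_lower(0.0), F_upper(0.0))

# Horizontal slice at CDF = 0.5: interval on the median, [0, 1]
med_lo = NormalDist(0.0, 1.0).inv_cdf(0.5)
med_hi = NormalDist(1.0, 1.0).inv_cdf(0.5)
print(med_lo, med_hi)
```

The vertical slice recovers the [0.1587, 0.5] interval quoted in the text, and the horizontal slice recovers the [0, 1] interval on the median.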
Measuring the Value of Using Imprecise Probabilities
In this section, we use as an example two approaches for designing a pressure vessel under incomplete information to demonstrate the value of imprecise probabilities. Descriptions of the design scenario and the computational experiment follow.
Design Scenario. Assume one needs to design a pressure vessel that is to contain 0.15 m³ of gas under 7 MPa of pressure. Due to space limitations, certain maximum dimensions are imposed. The goal is to determine the dimensions (radius R, wall thickness t, and length L) of the vessel, shown in Fig. 4, for which the overall utility, defined in Eq. (2), is maximized. The vessel will be made
of a new type of steel for which the yield strength is not well
characterized. The extension of these methods to problems with
multiple uncertain variables is mentioned in the Discussion and
Future Work section. The material production process produces
variations in the material properties such that the material yield
strength is well modeled by a normally distributed random
variable.
Because the material is new and testing is relatively expensive, variations in yield strength have only been measured in a set Σ of n independent tension tests, where n is a relatively small number due to cost considerations. These tests can at best give an estimate of the true distribution, so in addition to inherent randomness (irreducible uncertainty), engineers also face imprecision: they cannot characterize the parameters of the random variable precisely. The number of samples n can be varied to explore different levels of imprecision, just as the number of tosses was varied in the thumbtack example.
Since the vessel will be used in a human-occupied location, the consequences of failure are significant. Designers account for the risk associated with vessel failure explicitly in a utility function based on payoff in dollars and shown in Eq. (2):

U(σ_y, DV) = P_selling − C_material · volume(DV) − C_failure · δ(σ_y, DV)   (2)

where

P_selling ≡ selling price = $200
C_material ≡ material cost per volume = $8500/m³
σ_y ≡ true yield strength of pressure vessel
DV ≡ design variables (radius, thickness, length)
C_failure ≡ cost incurred if vessel fails = $1,000,000
δ(σ_y, DV) ≡ failure indicator = { 0 if σ_y ≥ σ_max(DV); 1 otherwise }
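Eq. (2) can be transcribed almost directly into code. The constants are the paper's; the function and argument names are illustrative, and the maximum wall stress σ_max(DV) is treated as a given input rather than computed from the vessel geometry:

```python
# Constants from Eq. (2) of the paper.
P_SELLING = 200.0        # selling price, $
C_MATERIAL = 8500.0      # material cost per volume, $/m^3
C_FAILURE = 1_000_000.0  # cost incurred if the vessel fails, $

def utility(sigma_y, volume, sigma_max):
    """Payoff-based utility U(sigma_y, DV): selling price minus material
    cost, minus the failure cost when the true yield strength falls
    below the maximum stress in the vessel walls."""
    failed = 1.0 if sigma_y < sigma_max else 0.0
    return P_SELLING - C_MATERIAL * volume - C_FAILURE * failed

print(utility(sigma_y=180e6, volume=0.01, sigma_max=150e6))  # no failure: profit of about $115
```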
The explicit inclusion of risk is seen more clearly if we consider the expected utility

EU(σ_y, DV) = P_selling − C_material · volume(DV) − E[C_failure · δ(σ_y, DV)]   (3)

where

E[C_failure · δ(σ_y, DV)] = C_failure · P[σ_y < σ_max(DV)] = consequences × probability of failure ≡ risk

This formulation allows us to recognize that risk, and more specifically the pressure vessel's probability of failure (meaning the probability that the yield strength is less than the maximum stress in the walls of the pressure vessel), plays a very important part in design decisions. It thus seems important to characterize these probabilities appropriately.
The Computational Experiment. The goal of the computational experiment is to compare the utility of the design solutions that result when different approaches for representing uncertainty are applied to the same design problem. The comparison is made possible in this experiment because we assume that overseeing the experiment is a supervisor who is in a state of precise information about the steel's material properties. From the supervisor's perspective, only irreducible uncertainty exists: uncertainty about the yield strength of the material that is precisely characterized by a normal distribution with a mean of 180 MPa and a standard deviation of 15 MPa. The supervisor can therefore determine precisely the dimensions of the pressure vessel that result in the maximum expected utility. This optimal design under precise information is the benchmark for comparison of the other design approaches.

Journal of Mechanical Design, JULY 2006, Vol. 128 / 973

Fig. 5 A computational experiment for determining the value of using imprecise probabilities

The general layout of the experiment is shown in Fig. 5. The experiment consists of two designers: one using a single best-fit normal distribution (approach A), and the other (approach B) using a p-box to represent the uncertainty about the yield strength. The details of these approaches are explained in the sections following this general overview of the experiment.

Both approaches start with the same information about the uncertain yield strength. This information is a set Σ of n random samples (tension test results) from the true normal distribution

Σ = {σ_yi}_{i=1}^{n}   (4)

According to decision policies explained in the sections that follow, each designer then selects an optimal design, denoted as

a*_Σ = {R*_A, t*_A, L*_A}   (5)

and

b*_Σ = {R*_B, t*_B, L*_B}   (6)

respectively, for approach A and approach B. The supervisor compares each of the design solutions to determine the expected utility evaluated under precise information for each solution. For approach A this is written as

E_{σy|Σ}[U(σ_y, a*_Σ)]   (7)

and for approach B as

E_{σy|Σ}[U(σ_y, b*_Σ)]   (8)

In order to compare the value of the two approaches, the supervisor, who has access to precise information, computes the difference in expected utility. Using the fact that the two approaches start with the same information (sample Σ), the value of approach B over approach A can then be expressed as

V(B) = E_{σy|Σ}[U(σ_y, b*_Σ) − U(σ_y, a*_Σ)]   (9)

It is necessary to note that this value is for only one particular Σ, the set of yield strength measurements with which both designers start. Due to the randomness in Σ, one trial is not sufficient to judge the relative value of each approach; the supervisor needs to repeat the above experiment many times in order to determine which design approach performs best on average, over m different sample sets Σ. Mathematically, the expectation must be taken with respect to Σ in order to calculate the average expected value of approach B over A, written

E_Σ[V(B)] = E_Σ{E_{σy|Σ}[U(σ_y, b*_Σ) − U(σ_y, a*_Σ)]}   (10)

The addition of the word average emphasizes that this quantity is the expectation over the samples of the expected utility of particular design solutions.

In this section, we presented a method for comparing design approaches. The approaches that we compare are described in the following section.

Design Using Approach A: Precise Normal Fit. Designer A does not have access to precise information, but instead only has access to the set Σ of n data samples. Because the designer does not know the true distribution of σ_y, he or she must make an approximation, denoted σ̃_y(A, Σ). In this approximation, the representation of σ̃_y depends on both the approach, in this case A, and the observed random sample Σ. Designer A represents the uncertainty as a normal distribution, using the sample mean and sample variance as unbiased estimates of the true mean and true variance, respectively. This yields the probabilistic model

σ̃_y(A, Σ) ~ N( (1/n) ∑_{i=1}^{n} σ_yi , (1/(n−1)) ∑_{i=1}^{n} (σ_yi − (1/n) ∑_{j=1}^{n} σ_yj)² )   (11)
Designer A therefore chooses design variables

a_Σ = {R_A, t_A, L_A}   (12)

that maximize the estimated expected utility given his or her information about the randomness. The expected utility is only estimated because designer A does not have access to a precise characterization of the random variable σ_y. The expected utility maximization results in the optimal design using approach A given samples Σ, denoted

a*_Σ = arg max_{a_Σ} { E_{σ̃y|Σ}[U(σ̃_y(A, Σ), a_Σ)] }   (13)
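Approach A's expected utility for a candidate design can be sketched as follows. The cost constants are the paper's Eq. (2) values; the function name is illustrative, and σ_max(DV) is again treated as a given input. With a fitted normal, the failure probability of Eq. (3) is simply the fitted CDF evaluated at the stress limit:

```python
import statistics
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def expected_utility_A(samples, volume, sigma_max,
                       p_sell=200.0, c_mat=8500.0, c_fail=1e6):
    """Approach A: fit a single best-fit normal to the yield-strength
    samples (sample mean, unbiased sample variance) and evaluate the
    expected utility of Eq. (3) with the fitted failure probability."""
    mu_hat = statistics.mean(samples)
    s_hat = statistics.stdev(samples)  # uses the n-1 denominator
    p_fail = normal_cdf(sigma_max, mu_hat, s_hat)
    return p_sell - c_mat * volume - c_fail * p_fail
```

Designer A's decision of Eq. (13) is then a search over candidate designs for the one maximizing this quantity.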
Design Using Approach B: Imprecise Probabilities. Designer B takes a different approach for capturing the uncertainty in the yield strength. Specifically, designer B represents the uncertainty in σ_y by σ̃_y(B, Σ), where the nature of σ̃_y(B, Σ) is expressed as a p-box rather than a pure normal distribution. The p-box is constructed using 95% confidence intervals on the mean and variance of the yield strength [68], such that with α = 0.05

[μ̲, μ̄] = [ μ̂ − t_{α/2,n−1} s/√n , μ̂ + t_{α/2,n−1} s/√n ]   (14)
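The mean interval of Eq. (14) can be sketched in code. For brevity, a fixed normal quantile (z ≈ 1.96 at 95%) stands in for the Student-t critical value t_{α/2,n−1}; this substitution is only a reasonable approximation for moderately large n, and the function name is illustrative:

```python
import statistics
from math import sqrt

def mean_ci(samples, z=1.96):
    """Approximate confidence interval on the mean, in the spirit of
    Eq. (14). A normal quantile z replaces t_{alpha/2, n-1}, so the
    interval is slightly too narrow for small n."""
    n = len(samples)
    mu_hat = statistics.mean(samples)
    half_width = z * statistics.stdev(samples) / sqrt(n)
    return mu_hat - half_width, mu_hat + half_width
```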
and

[v̲ar, v̄ar] = [ (n−1)s² / χ²_{α/2,n−1} , (n−1)s² / χ²_{1−α/2,n−1} ]   (15)

Fig. 6 Forming bounds of the p-box

Written in terms of mean and standard deviation, these two intervals form four extreme combinations, such as shown in Fig. 6. These four distributions form the boundaries of the p-box shown in Fig. 7. All normal distributions with means and variances given by Eqs. (14) and (15) are contained inside this p-box.

The estimated expected utility under design approach B is defined as

E_{σ̃y|Σ}[U(σ̃_y(B, Σ), b_Σ)]   (16)

where

b_Σ = {R_B, t_B, L_B}   (17)

is the designer's chosen design action given sample set Σ. Because the p-box expresses a range of possible distributions, the expected utility is no longer a crisp number but rather an interval defined by lower bound E̲ and upper bound Ē, such that

E_{σ̃y|Σ}[U(σ̃_y(B, Σ), b_Σ)] = [E̲, Ē]   (18)

This interval can be found by a variety of methods [69–71]. The conceptually simplest (but mathematically complex) is to try all distributions that are inside the p-box, since they all correspond to a point in the interval. More practically, the p-box could be discretized into a very large number of distributions that are representative of the types of distributions and parameters. The expected utility for each of these could be computed, and then all the points aggregated into an approximate interval. In this example problem, the bounds on the expected utility interval result from the distributions bounding the p-box due to the monotonicity of the utility function with respect to the uncertain parameter, and thus the calculations were straightforward. For a more complex problem, the computations could be more involved, as described in the Discussion and Future Work section.

Because the expected utility is now an interval, the designer cannot choose design variables b_Σ that maximize the expected utility in the traditional sense. Instead, a new decision rule is required. While other approaches exist, as discussed in the Discussion and Future Work section, in this experiment a conservative best-worst case, or Γ-maximin [48], rule is used. Designer B therefore chooses the design action b_Σ that has the highest lower bound E̲ on the expected utility. This results in an optimal design decision using approach B given the observed samples Σ, denoted

b*_Σ = arg max_{b_Σ} (E̲) = arg max_{b_Σ} { E̲_{σ̃y|Σ}[U(σ̃_y(B, Σ), b_Σ)] }   (19)
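For this problem, the monotonicity argument above means the expected-utility interval of Eq. (18) can be obtained from the four corner normals of the p-box. The following sketch assumes the paper's cost constants; the parameter intervals, function names, and candidate-design encoding (a volume and a stress limit per design) are illustrative, not the paper's implementation:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def eu_interval(mu_int, var_int, volume, sigma_max,
                p_sell=200.0, c_mat=8500.0, c_fail=1e6):
    """Expected-utility interval [E_low, E_high] of Eq. (18), taken over
    the four corner normals of the p-box (Fig. 6); valid here because
    the utility is monotone in the failure probability."""
    corners = [(m, sqrt(v)) for m in mu_int for v in var_int]
    p_fails = [normal_cdf(sigma_max, m, s) for m, s in corners]
    base = p_sell - c_mat * volume
    return base - c_fail * max(p_fails), base - c_fail * min(p_fails)

def gamma_maximin_choice(designs, mu_int=(170.0, 190.0), var_int=(100.0, 225.0)):
    """Eq. (19): pick the design with the highest lower bound E_low.
    `designs` maps a label to its (volume, sigma_max) pair."""
    return max(designs, key=lambda d: eu_interval(mu_int, var_int, *designs[d])[0])
```

Under Γ-maximin, a thicker wall (more material, lower stress) tends to win because it raises the worst-case bound.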
Supervisor's Design Under Precise Information. In addition to approach A and approach B, the experiment's supervisor can create a design using the true distribution, since he or she is in a state of precise information. The supervisor therefore knows precisely that σ_y ~ N(180 MPa, (15 MPa)²). If we define this as approach K (not shown in Fig. 5), the supervisor then chooses the design variables

DV = k = {R_K, t_K, L_K}   (20)

such that the expected utility E_{σy}[U(σ_y, k)] is maximized. This leads to the optimal design under precise information

k* = arg max_k { E_{σy}[U(σ_y, k)] }   (21)

This optimal design, with expected utility denoted E[U(k*)] for brevity, serves as the baseline for comparison because no other approach can yield a higher average expected utility across many repetitions m.
Experimental Results
The computational experiment was repeated for many different initial sample set sizes, that is, many different n. For each level of n, the design process was repeated for m = 100,000 different initial sample sets Σ in order to determine the average performance of the two methods. We first present the results for the particular sample size n = 25, and then discuss the results over varying values of n, which represent different levels of imprecision.
Value of Using Imprecise Probabilities for n = 25 Samples of the True Yield Strength. We first conduct the experiment with a sample set Σ of size n = 25, meaning the designers are given the results of 25 independent yield stress tests. As measured by the supervisor using precise information and averaged over m = 100,000 initial sets, approach B on average yields designs with greater expected utility than approach A. Specifically, using standard statistical analysis, we find the 95% confidence interval (CI) on the value of approach B over A to be

95% CI on V(B) is [$22, $26]

Fig. 7 Resulting p-box

To put this result in perspective, the expected utility of the supervisor's design, which is the best possible because it is designed under a state of precise information, is E[U(k*)] = $104. Thus the CI on the expected value of approach B over A can also be expressed as [21%, 25%] of the optimal utility E[U(k*)]. This is a substantial deviation that suggests that there is value in using the p-box approach for this design problem. We also note the average expected utilities realized under approaches A and B, recalling that larger utility, like larger profit, is more desirable:

Approach A: E_Σ{E_{σy|Σ}[U(σ_y, a*_Σ)]} = $34
Approach B: E_Σ{E_{σy|Σ}[U(σ_y, b*_Σ)]} = $58

The total deviations from optimal ($70 for approach A and $46 for approach B), coupled with the relative value of approach B over A, indicate that B is a better approach at this level of imprecision. In the next section, we explore how these results differ at varying levels of imprecision.

Fig. 8 Variation of value with imprecision

Fig. 9 Histogram of value of p-box approach
Variation of Value With Level of Imprecision. The previous discussion dealt with a fixed sample set size of n = 25 material strength tests. While those results demonstrated that it was valuable to use the p-box approach in that case, a more general result is desirable. By varying the number of material strength tests n, we can vary the imprecision of the characterization. The supervisor's design yields the best possible expected utility E[U(k*)], and hence, the designs of designers A and B can at best equal it. In Fig. 8, we plot (in log-log scale) the percent deviation from this best-possible expected utility for approach A and approach B for different values of n. Because this is a deviation from the best possible, the smallest absolute value is desirable. Hence, examining Fig. 8, smaller is better.
When the imprecision is large, approach B performs significantly better than approach A. For example, at a sample size of 10, a 95% CI on the value of approach B over A is [520%, 570%] of E[U(k*)]. There is no doubt that this value is significant. Around sample size 40, the two approaches yield similar results. Past this point, approach A performs better, but not by much. For example, for a sample size of 100, the 95% CI on the value of approach B is [−5.5%, −5.3%] of E[U(k*)].
In summary, as the imprecision increases, the value of approach
B over approach A increases significantly. As the imprecision approaches zero, approach A becomes only slightly better than B.
For different design problems, the designer will not necessarily
know where the two curves cross. Thus, unless the designer is
sure a priori that the consequences of the imprecision are insignificant, the results of this computational experiment suggest that
it is valuable to explicitly represent the imprecision in the
available characterization of uncertainties by using imprecise
probabilities.
Explanation of Results. In this section, we provide some insight into the results of the previous two sections. The results for
100,000 different sample sets Σ of sizes n = 10, 25, and 100 are shown in histograms in Fig. 9. The x-axis of the histograms in Fig. 9 is the expected value of approach B over A, denoted as E_{σy|Σ}[U(σ_y, b*_Σ) − U(σ_y, a*_Σ)].
In many cases, approach B yields a design with a lower utility
than approach A due to the extra material costs of the more conservative design. However, in some cases, approach B yields a
design with a much higher expected utility. The tails of the distribution for n = 10 and n = 25 extend much farther to the right than
shown in the figure. For example, the maximum expected value of
approach B over A seen in any trial of n = 10 was $220,000. Results such as these skew the overall distribution such that on average, approach B yields a design with a higher expected utility
than approach A yields. As the imprecision decreases, both the
skewness of the distribution and the value of approach B over A
decrease.
The results can also be understood in terms of the expected utility curves used in the experiment. In Figs. 10 and 11, we illustrate the expected utility as a function of wall thickness t for two different samples, Σ₁ and Σ₂ respectively, both of size n = 25. The four curves shown in each figure are:

1. the estimated expected utility under approach A: E_{σ̃y|Σ}[U(σ̃_y(A, Σ), t)]
2. the estimated lower bound on expected utility under approach B: E̲_{σ̃y|Σ}[U(σ̃_y(B, Σ), t)]
3. the estimated upper bound on expected utility under approach B: Ē_{σ̃y|Σ}[U(σ̃_y(B, Σ), t)]
4. the true expected utility in a state of precise information: E_{σy}[U(σ_y, t)]

Fig. 10 Example expected utility functions, V(B) < 0

Fig. 11 Example expected utility functions, V(B) > 0

Fig. 12 Example expected utility functions, V(B) = 0

Designers A and B choose a thickness (t*_A and t*_B, respectively) that is optimal according to their estimated expected utilities. Their estimates are in general not equivalent to the true expected utilities. Therefore, the expected utility actually realized by a particular design is not reflected in their estimates, but rather in the true curve E_{σy}[U(σ_y, t)], which is known only to the supervisor.
In Fig. 10 (based on sample set Σ₁), the true curve is between the curve from approach A and the upper bound from approach B. By noting t*_A and t*_B, we can read off the true expected utility evaluated under precise information for each approach from the true curve E_{σy}[U(σ_y, t)]. For this particular sample set Σ₁, the expected utility realized from approach A is about $20 higher than the expected utility realized from approach B. This means that for sample Σ₁ the relative value of approach B is negative, V(B) = −$20, indicating approach A performs better.
In Fig. 11 (based on sample set Σ₂), the true curve is near the lower bound of approach B. For this particular sample set Σ₂, the expected utility from approach B is about $70 more than that from approach A, so the relative value of approach B for sample Σ₂ is V(B) = $70.
The preceding results are for two representative cases. The overall results of the experiment, discussed previously, indicate that, on average, the latter case dominates. Approach A is more likely to overestimate the true material strength, and when it does, the consequences are disastrous: a high probability of failure. Approach B is more conservative, resulting in higher material costs, but, on average, these material costs are offset by the reduced failure costs. As the sample size increases, the best-fit normal distribution of approach A becomes, on average, closer to the true distribution, such as for the example sample Σ₃ shown in Fig. 12. For this sample, the optimal designs of the three approaches converge and therefore yield similar utilities. Thus we can see that when the imprecision is small, the value of approach B over A is near zero.
Summary of Results. For this design problem, the experimental results indicate that when the imprecision is large, approach B (using imprecise probabilities) performs significantly better than approach A (using precise probabilities). When the imprecision is small, the difference between the two approaches is insignificant. This computational experiment has therefore demonstrated that there are scenarios in which it is valuable to explicitly represent the imprecision in the available characterization of uncertainties by using imprecise probabilities.
Discussion

In this paper, we present a specific and simplified design problem and computational experiment that demonstrate the value of using imprecise probabilities in engineering design. There are other issues to consider, such as computational cost and decision policies. Additionally, it is always useful to explore the results in more general problems.
Computational Costs. In this paper, we have demonstrated the potential value of a design method that uses imprecise probabilities. What we have heretofore referred to as value is really the gross value, or benefit. According to the principles of information economics [68], designers should really care about the gain, or net value, which is defined as the difference between gross value (benefit) and cost, including computational cost.
In this example, the computational costs were small due to the structure of the problem. In reality, even the best-fit probabilistic approach often will require Monte Carlo analysis or related methods in order to calculate expected values [72]. These methods have an inherently high computational cost, even when used in combination with surrogate modeling [73,74] to reduce the computational burden. An obvious way to compute with p-boxes is to perform a second-order Monte Carlo simulation, in which an outer loop is added around the traditional loop [61]. Such methods can be extremely expensive in terms of computation. However, PBA can be performed using algorithms with foundations in interval analysis that do not require second-order Monte Carlo techniques [70,71]. Ferson and Ginzburg [15] show that these methods are on average much less computationally expensive than second-order Monte Carlo methods.
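The second-order (nested) Monte Carlo idea, and why it is costly (n_outer × n_inner model evaluations), can be sketched as follows. The parameter intervals and function name are illustrative; this is not the interval-analysis alternative cited above:

```python
import random

def second_order_mc(mu_int, sd_int, sigma_max,
                    n_outer=200, n_inner=2000, seed=0):
    """Sketch of a second-order Monte Carlo loop over a p-box: the
    outer (epistemic) loop draws distribution parameters from their
    intervals, the inner (aleatory) loop estimates the failure
    probability for that candidate distribution. Returns crude
    sampling-based bounds on P(sigma_y < sigma_max)."""
    rng = random.Random(seed)
    p_fails = []
    for _ in range(n_outer):
        mu = rng.uniform(*mu_int)
        sd = rng.uniform(*sd_int)
        fails = sum(rng.gauss(mu, sd) < sigma_max for _ in range(n_inner))
        p_fails.append(fails / n_inner)
    return min(p_fails), max(p_fails)
```

Each outer iteration repeats an entire inner simulation, which is why interval-based PBA algorithms are attractive.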
An added benefit is the flexibility of these algorithms. In Monte Carlo analysis, engineers often assume specific dependencies between variables as a matter of convenience that enables the use of standard statistics. This may understate the true imprecision in the available characterizations of uncertainty and may have serious consequences. PBA provides methods [65] that can propagate uncertainty under various conditions of dependence and correlation, including the extremes of fully known and completely unknown dependencies.
Despite this promise, the application of these techniques has yet to be explored in complex engineering design problems. It is therefore unclear how well PBA can propagate uncertainty from many sources, or how well PBA can be integrated with more complex design tools and models, such as discrete-event simulations and optimization methods. These issues need to be resolved before PBA can be put to use in general engineering design problems.

Fig. 13 Variation in value with imprecision for midpoint policy

Fig. 14 Midpoint policy results
Decision Policies and Preferences. In this experiment, the relatively high consequences of vessel failure, in comparison to material costs, make failure avoidance a key driver in the design. The Γ-maximin decision policy used in this experiment is conservative compared to a normal-fit approach. Because it uses the lower bound on the expected utility, it is actually the most conservative policy that is consistent with the available evidence. Other decision policies (such as maximality [36], Γ-maximin [48], E-admissibility [39], or the Hurwicz criterion [49]) are both possible and rational, and may perform better in different circumstances. If instead of the lower bound on expected utility we use the midpoint between the upper and lower bounds (a Hurwicz policy with criterion of 0.5 [49]), the results change. For example, the two curves in Fig. 13 cross at 45 samples, as compared to 40 in Fig. 8 with the Γ-maximin policy.
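The two decision rules compared here can be stated compactly. The expected-utility intervals below are hypothetical, chosen only to show that Γ-maximin and a 0.5 Hurwicz (midpoint) policy can rank the same designs differently:

```python
def gamma_maximin(intervals):
    """Index of the action with the highest lower expected-utility bound."""
    return max(range(len(intervals)), key=lambda i: intervals[i][0])

def hurwicz(intervals, h=0.5):
    """Hurwicz policy: weight the lower bound by h and the upper bound
    by 1 - h; h = 0.5 gives the midpoint rule discussed in the text."""
    return max(range(len(intervals)),
               key=lambda i: h * intervals[i][0] + (1.0 - h) * intervals[i][1])

# Hypothetical expected-utility intervals [E_low, E_high] for three designs.
eu = [(-50.0, 120.0), (10.0, 40.0), (-200.0, 150.0)]
print(gamma_maximin(eu), hurwicz(eu))  # the two policies can disagree
```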
By using imprecise probabilities, the designer's preferences are a factor in the resolution of imprecision, whereas a best-fit approach ignores the nature of the decision-maker's preferences by arbitrarily assuming precision when modeling the uncertainty, rather than the decision. When the utility function is skewed due to a high cost of consequences in the risk term, such as in this example, the maximum of the midpoint of the expected utility bounds is much closer to the maximum of the lower bound than to the maximum of the best-fit solution, as shown in Fig. 14. In short, by explicitly accounting for imprecision, a more appropriate and quantitative performance/risk tradeoff can be performed.
An obvious question is: why is the midpoint solution not similar to the best-fit solution? The answer highlights an important advantage of a p-box approach over a best-fit approach. The midpoint is applied to the expected utility distributions, whereas the best fit is done on the parameters of the uncertainty model. In general, the best-fit approach leaves the decision maker to deal with imprecision qualitatively and outside the uncertainty formalism, while the p-box approach quantifies the imprecision explicitly in utility curves.
Summary

Although engineers inherently lack information during the engineering design process and therefore face uncertainty and risks, existing design methods do not explicitly represent the imprecision in the available characterizations of uncertainty. The pressure vessel design example in this paper demonstrates that when the designers only have access to a small set of sample data, a design approach that uses imprecise probabilities to model uncertainty in design decisions leads on average to better designs than a purely probabilistic approach that requires precise probabilities. We can then conclude that in some design problems, it is valuable to represent the imprecision in the available characterization of uncertainties explicitly by using imprecise probabilities.
Acknowledgment
Jason Aughenbaugh is supported under a National Science
Foundation Graduate Research Fellowship. This work was supported in part by NSF Grant No. DMI-0522116. This research is
further sponsored in part by NASA Ames Research Center under
cooperative Agreement No. NNA04CK40A.
References
关1兴 Elishakoff, I., 2004, Safety Factors and Reliability: Friends or Foes? Kluwer
Academic, Dordrecht.
关2兴 Hazelrigg, G. A., 1998, “A Framework for Decision-Based Design,” ASME J.
Mech. Des., 120共4兲, pp. 653–658.
关3兴 Fernández, M. C., Seepersad, C. C., Rosen, D. W., Allen, J. K., and Mistree,
F., 2001, “Utility-Based Decision Support for Selection in Engineering Design,” ASME DETC Design Automation Conference, Pittsburgh, PA, pp.
DETC2001/DAC-21106.
关4兴 Scott, M. J., 2004, “Utility Methods in Engineering Design,” Engineering
Design Reliability Handbook, CRC Press, New York.
关5兴 Thurston, D. L., 1990, “Multiattribute Utility Analysis in Design Management,” IEEE Trans. Eng. Manage., 37共4兲, pp. 296–301.
关6兴 Dai, Z., Scott, M. J., and Mourelatos, Z. P., 2003, “Incorporating Epistemic
Uncertainty in Robust Design,” 2003 ASME DETC, American Society of Mechanical Engineers, Chicago, IL, Vol. 2A, pp. 85–95.
关7兴 Taguchi, G., translated by Tung, L. W., 1987, System of Experimental Design:
Engineering Methods to Optimize Quality and Minimize Cost, UNIPUB/Kraus
International, Dearborn, MI.
关8兴 Gunawan, S., and Azarm, S., 2005, “A Feasibility Robust Optimization
Method Using Sensitivity Region Concept,” ASME J. Mech. Des., 127共5兲, p.
858.
关9兴 Youn, B. D., and Choi, K. K., 2004, “Selecting Probabilistic Approaches for
Realiability-Based Design Optimization,” AIAA J., 42共1兲, pp. 124–131.
关10兴 Nikolaidis, E., and Stroud, W. P. J., 1996, “Reliability-Based Optimization—A
Proposed Analytical-Experimental Study,” AIAA J., 34共10兲, pp. 2154–2161.
关11兴 Du, X., Sudjianto, A., and Huang, B., 2005, “Reliability-Based Design with
the Mixture of Random and Interval Variables,” ASME J. Mech. Des., 127共6兲,
p. 1068.
关12兴 Bedford, T., and Cooke, R. M., 2001, Probabilistic Risk Analysis: Foundations
and Methods, Cambridge University Press, New York.
关13兴 Stamatelatos, M., Apostolakis, G., Dezfuli, H., Everline, C., Guarro, S.,
Moieni, P., Mosleh, A., Paulos, T., and Youngblood, R., 2002, “Probabilistic
Risk Assessment Procedures Guide for Nasa Managers and Practitioners,” Office of Safety and Mission Assurance, NASA, Washington, DC.
关14兴 Nikolaidis, E., 2005, “Types of Uncertainty in Design Decision Making,” Engineering Design Reliability Handbook, E. Nikolaidis, D. M. Ghiocel, and S.
Singhal, eds., CRC Press, New York.
关15兴 Ferson, S., and Ginzburg, L. R., 1996, “Different Methods Are Needed to
Propagate Ignorance and Variability,” Reliab. Eng. Syst. Saf., 54共2–3兲, pp.
133–144.
关16兴 Hofer, E., 1996, “When to Separate Uncertainties and When Not to Separate,”
Transactions of the ASME
Reliab. Eng. Syst. Saf., 54共2–3兲, pp. 113–118.
关17兴 Parry, G. W., 1996, “The Characterization of Uncertainty in Probabilistic Risk
Assessment of Complex Systems,” Reliab. Eng. Syst. Saf., 54共2–3兲, pp. 119–
126.
关18兴 Winkler, R. L., 1996, “Uncertainty in Probabilistic Risk Assessment,” Reliab.
Eng. Syst. Saf., 54共2–3兲, pp. 127–132.
关19兴 Antonsson, E. K., and Otto, K. N., 1995, “Imprecision in Engineering Design,”
ASME J. Mech. Des., 117, pp. 25–32.
关20兴 Cao, L., and Rao, S. S., 2002, “Optimum Design of Mechanical Systems
Involving Interval Parameters,” ASME J. Mech. Des., 124共3兲, p. 465.
关21兴 Nikolaidis, E., Chen, S., Cudney, H., Haftka, R. T., and Rosca, R., 2004,
“Comparison of Probability and Possibility for Design Against Catastrophic
Failure Under Uncertainty,” ASME J. Mech. Des., 126共3兲, pp. 386–394.
关22兴 Mourelatos, Z. P., and Zhou, J., 2005, “Reliability Estimation and Design with
Insufficient Data Based on Possibility Theory,” AIAA J., 43共8兲, p. 1696.
关23兴 Knight, F. H., 1921, Risk, Uncertainty and Profit, Houghton Mifflin, Boston.
关24兴 Cannon, C. M., and Kmietowicz, Z. W., 1974, “Decision Theory and Incomplete Knowledge,” J. Manage. Stud. 共Oxford兲, 11共3兲, pp. 224–232.
关25兴 Fishburn, P., 1964, Decision and Value Theory, Wiley, New York.
关26兴 Kmietowicz, Z. W., and Pearman, A. D., 1984, “Decision Theory, Linear Partial Information and Statistical Dominance,” Omega, 12, pp. 391–399.
关27兴 Weber, M., 1987, “Decision Making with Incomplete Information,” Eur. J.
Oper. Res., 28共1兲, p. 44.
关28兴 Carnahan, J. V., Thurston, D. L., and Liu, T., 1994, “Fuzzy Ratings for Multiattribute Design Decision-Making,” ASME J. Mech. Des., 116共2兲, p. 511.
关29兴 Otto, K. N., and Antonsson, E. K., 1992, “The Method of Imprecision Compared to Utility Theory for Design Selection Problems,” ASME 1993 Design
Theory and Methodology Conference.
[30] Seidenfeld, T., Schervish, M. J., and Kadane, J. B., 1995, "A Representation of Partially Ordered Preferences," Ann. Stat., 23(9), pp. 2168–2217.
[31] Kirkwood, C. W., and Sarin, R. K., 1985, "Ranking with Partial Information: A Method and an Application," Oper. Res., 33, pp. 38–48.
[32] Keynes, J. M., 1962, A Treatise on Probability, Harper and Row, New York.
[33] Good, I. J., 1983, Good Thinking: The Foundations of Probability and Its Applications, University of Minnesota Press, Minneapolis.
[34] Kyburg, H. E., 1987, "Objective Probabilities," IJCAI-87, pp. 902–904.
[35] Sarin, R. K., 1978, "Elicitation of Subjective Probabilities in the Context of Decision-Making," Decision Sci., 9, pp. 37–48.
[36] Walley, P., 1991, Statistical Reasoning with Imprecise Probabilities, Chapman and Hall, New York.
[37] Weichselberger, K., 2000, "The Theory of Interval-Probability as a Unifying Concept for Uncertainty," Int. J. Approx. Reason., 24(2–3), p. 149.
[38] Hart, A. G., 1942, "Risk, Uncertainty and the Unprofitability of Compounding Probabilities," Studies in Mathematical Economics and Econometrics, O. Lange, F. McIntyre, and T. O. Yntema, eds., University of Chicago Press, Chicago.
[39] Levi, I., 1974, "On Indeterminate Probabilities," J. Philos., 71, pp. 391–418.
[40] Tintner, G., 1941, "The Theory of Choice under Subjective Risk and Uncertainty," Econometrica, 9, pp. 298–304.
[41] Joslyn, C. A., and Booker, J. M., 2005, "Generalized Information Theory for Engineering Modeling and Simulation," Engineering Design Reliability Handbook, E. Nikolaidis, D. M. Ghiocel, and S. Singhal, eds., CRC Press, New York, pp. 9.1–9.40.
[42] de Finetti, B., 1974, Theory of Probability, Volume 1: A Critical Introductory Treatment, Wiley, New York.
[43] Savage, L. J., 1972, The Foundations of Statistics, Dover, New York.
[44] Lindley, D. V., 2000, "The Philosophy of Statistics," J. R. Stat. Soc., Ser. D, 49(3), pp. 293–337.
[45] Lindley, D. V., 1982, "Subjectivist View of Decision Making," Eur. J. Oper. Res., 9(3), p. 213.
[46] Groen, F. J., and Mosleh, A., 2005, "Foundations of Probabilistic Inference with Uncertain Evidence," Int. J. Approx. Reason., 39(1), pp. 49–83.
[47] de Finetti, B., 1937, "La Prévision: Ses Lois Logiques, Ses Sources Subjectives," Ann. Inst. Henri Poincaré, 7, pp. 1–68; English translation, 1964, "Foresight: Its Logical Laws, Its Subjective Sources," Studies in Subjective Probability, H. E. Kyburg, and H. E. Smokler, eds., Wiley, New York.
[48] Berger, J. O., 1985, Statistical Decision Theory and Bayesian Analysis, 2nd ed., Springer, New York.
Journal of Mechanical Design
[49] Arrow, K., and Hurwicz, L., 1972, "An Optimality Criterion for Decision-Making under Uncertainty," Uncertainty and Expectation in Economics: Essays in Honour of G. L. S. Shackle, C. F. Carter and J. Ford, eds., Blackwell, Oxford, pp. 1–11.
[50] Lindley, D. V., 1982, "Scoring Rules and the Inevitability of Probability," Int. Stat. Rev., 50, pp. 1–16.
[51] Kolmogorov, A. N., 1956, Foundations of the Theory of Probability, N. Morrison, trans., Chelsea, New York.
[52] Helton, J. C., 1994, "Treatment of Uncertainty in Performance Assessments for Complex Systems," Risk Anal., 14, pp. 483–511.
[53] Pratt, J. W., Raiffa, H., and Schlaifer, R., 1995, Introduction to Statistical Decision Theory, MIT Press, Cambridge, MA.
[54] von Neumann, J., and Morgenstern, O., 1980, Theory of Games and Economic Behavior, 3rd ed., Princeton University Press, Princeton, NJ.
[55] Zadeh, L. A., 1965, "Fuzzy Sets," Inf. Control, 8, pp. 338–353.
[56] Zadeh, L. A., 1978, "Fuzzy Sets as a Basis for a Theory of Possibility," Fuzzy Sets Syst., 1, pp. 3–28.
[57] Sugeno, M., 1977, "Fuzzy Measures and Fuzzy Integrals: A Survey," Fuzzy Automata and Decision Processes, M. M. Gupta, G. N. Saridis, and B. R. Gaines, eds., North-Holland, New York, pp. 89–102.
[58] Smets, P., 1988, "Belief Functions," Non-Standard Logics for Automated Reasoning, P. Smets, A. Mamdani, D. Dubois, and H. Prade, eds., Academic, New York, pp. 252–286.
[59] Shafer, G., 1992, "Rejoinders to Comments on 'Perspectives on the Theory and Practice of Belief Functions'," Int. J. Approx. Reason., 6(3), pp. 445–480.
[60] Cooke, R., 2004, "The Anatomy of the Squizzel—The Role of Operational Definitions in Representing Uncertainty," Reliab. Eng. Syst. Saf., 85(1–3), p. 313.
[61] Vose, D., 2000, Risk Analysis: A Quantitative Guide, 2nd ed., Wiley, New York.
[62] Hofer, E., Kloos, M., Krzykacz-Hausmann, B., Peschke, J., and Woltereck, M., 2002, "An Approximate Epistemic Uncertainty Analysis Approach in the Presence of Epistemic and Aleatory Uncertainties," Reliab. Eng. Syst. Saf., 77(3), pp. 229–238.
[63] Ferson, S., and Donald, S., 1998, "Probability Bounds Analysis," International Conference on Probabilistic Safety Assessment and Management (PSAM4), Springer-Verlag, New York, pp. 1203–1208.
[64] Ferson, S., Hajagos, J., Myers, D. S., and Tucker, W. T., 2005, "Constructor: Synthesizing Information About Uncertain Variables," Report No. SAND2005-3769, Sandia National Laboratories, Albuquerque, NM.
[65] Ferson, S., Kreinovich, V., Ginzburg, L., Myers, D. S., and Sentz, K., 2002, "Constructing Probability Boxes and Dempster-Shafer Structures," Report No. SAND2002-4015, Sandia National Laboratories, Albuquerque, NM.
[66] Aughenbaugh, J. M., and Paredis, C. J. J., 2005, "The Value of Using Imprecise Probabilities in Engineering Design," ASME 2005 DETC Design Theory and Methodology Conference, Long Beach, CA, Paper No. DETC2005-85354.
[67] Utkin, L. V., and Augustin, T., 2003, "Decision Making with Imprecise Second-Order Probabilities," Proceedings of the Third International Symposium on Imprecise Probabilities and Their Applications, Lugano, Switzerland.
[68] Aughenbaugh, J. M., Ling, J. M., and Paredis, C. J. J., 2005, "Applying Information Economics and Imprecise Probabilities to Data Collection in Design," 2005 ASME International Mechanical Engineering Congress and Exposition, Orlando, FL, Paper No. IMECE2005-81181.
[69] Berleant, D., Xie, L., and Zhang, J., 2003, "Statool: A Tool for Distribution Envelope Determination (DEnv), an Interval-Based Algorithm for Arithmetic on Random Variables," Reliable Comput., 9(2), pp. 91–108.
[70] Ferson, S., 2002, RAMAS Risk Calc, 4th ed., Lewis, New York.
[71] Williamson, R. C., and Downs, T., 1990, "Probabilistic Arithmetic I: Numerical Methods for Calculating Convolutions and Dependency Bounds," Int. J. Approx. Reason., 4, pp. 89–158.
[72] Robert, C. P., and Casella, G., 1999, Monte Carlo Statistical Methods, Springer-Verlag, New York.
[73] Myers, R. H., and Montgomery, D. C., 1995, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, Wiley, New York.
[74] Simpson, T. W., Peplinski, J. D., Koch, P. N., and Allen, J. K., 2001, "Metamodels for Computer-Based Engineering Design," Eng. Comput., 17(2), pp. 129–150.
JULY 2006, Vol. 128 / 979