Chapter 13
Decision Analysis
Md. Abdullah Al Mahmud
Assistant Professor
Manarat International University
Chapter 13
Decision Analysis
• Decision Analysis
• Decision Making without Probabilities
• Decision Making with Probabilities
• Risk Analysis and Sensitivity Analysis
• Decision Analysis with Sample Information
• Computing Branch Probabilities
• Utility and Decision Making
Decision Analysis: Making Justifiable, Defensible Decisions
 Decision analysis is the discipline of evaluating
complex alternatives in terms of values and
uncertainty.
 Decision analysis provides insight into how the
defined alternatives differ from one another and then
generates suggestions for new and improved
alternatives.
Decision Analysis: Making Justifiable, Defensible Decisions
 Numbers quantify subjective values and uncertainties,
which enable us to understand the decision situation.
These numerical results then must be translated back
into words in order to generate qualitative insight.
 Humans can understand, compare, and manipulate
numbers. Therefore, in order to create a decision
analysis model, it is necessary to create the model
structure and assign probabilities and values to fill the
model for computation.
Decision Analysis: Making Justifiable, Defensible Decisions
 Complexity in the modern world, along with
information quantity, uncertainty, and risk, makes it
necessary to provide a rational decision-making
framework. The goal of decision analysis is to give
guidance, information, insight, and structure to the
decision-making process in order to make better, more
'rational' decisions.
Elements of Decision Analysis Models
 A sole individual is designated as the decision-
maker. For example, the CEO of a company, who is
accountable to the shareholders.
 A finite number of possible (future) events called the
'States of Nature' (a set of possible scenarios). They
are the circumstances under which a decision is made.
The states of nature are identified and grouped in set
"S"; its members are denoted by "s(j)". Set S is a
collection of mutually exclusive events meaning that
only one state of nature will occur.
Elements of Decision Analysis Models
 A finite number of possible decision alternatives (i.e.,
actions) is available to the decision-maker. Only one
action may be taken.
 Payoff is the return of a decision. Different
combinations of decisions and states of nature
(uncertainty) generate different payoffs. Payoffs are
usually shown in tables. In decision analysis payoff is
represented by positive (+) value for net revenue,
income, or profit and negative (-) value for expense,
cost or net loss.
Decision Categories
There are different types of decision models that help
to analyze the different scenarios. Depending on the
amount and degree of knowledge we have, the three
most widely used types are:
 Decision-making under pure uncertainty
 Decision-making under risk
 Decision-making by buying information (pushing the
problem towards the deterministic "pole")
Decision Making without Probabilities
 In decision-making under pure uncertainty, the
decision maker has absolutely no knowledge,
not even about the likelihood of occurrence for
any state of nature. In such situations, the
decision-maker's behavior is purely based on
his/her attitude toward the unknown. Some of
these behaviors are
 optimistic,
 pessimistic, and
 least regret.
Decision Making without Probabilities
Optimist: The glass is half-full.
Pessimist: The glass is half-empty.
Manager: The glass is twice as large as it needs to be.
 Or, as in the following metaphor of a captain in a
rough sea:
The pessimist complains about the wind;
the optimist expects it to change;
the realist adjusts the sails.
Decision Making without Probabilities
 Optimists are right; so are the pessimists. It is up to
you to choose which you will be.
 The optimist sees opportunity in every problem;
the pessimist sees problem in every opportunity.
 Both optimists and pessimists contribute to our
society.
Optimistic Approach
 The optimistic approach would be used by an
optimistic decision maker.
 The decision with the largest possible payoff is
chosen.
 If the payoff table was in terms of costs, the
decision with the lowest cost would be chosen.
Conservative Approach
 The conservative approach would be used by a
conservative decision maker.
 For each decision the minimum payoff is listed and
then the decision corresponding to the maximum
of these minimum payoffs is selected. (Hence, the
minimum possible payoff is maximized.)
 If the payoff was in terms of costs, the maximum
costs would be determined for each decision and
then the decision corresponding to the minimum
of these maximum costs is selected. (Hence, the
maximum possible cost is minimized.)
Minimax Regret Approach
 The minimax regret approach requires the
construction of a regret table or an opportunity loss
table.
 This is done by calculating for each state of nature
the difference between each payoff and the largest
payoff for that state of nature.
 Then, using this regret table, the maximum regret
for each possible decision is listed.
 The decision chosen is the one corresponding to
the minimum of the maximum regrets.
Example
Consider the following problem with three
decision alternatives and three states of nature with
the following payoff table representing profits:
                     States of Nature
                     s1    s2    s3
    Decisions  d1     4     4    -2
               d2     0     3    -1
               d3     1     5    -3
Example
 Optimistic Approach
An optimistic decision maker would use the
optimistic (maximax) approach. We choose the
decision that has the largest single value in the payoff
table.
    Decision    Maximum Payoff
    d1                4
    d2                3
    d3                5    <-- maximax decision (maximax payoff = 5)
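The maximax rule above can be sketched in Python (an illustrative snippet, not part of the original slides; the variable names are my own):

```python
# Maximax (optimistic) criterion applied to the example payoff table.
# Rows are the decisions d1..d3; columns are the states s1..s3.
payoffs = {
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

# Best payoff attainable under each decision.
max_payoff = {d: max(row) for d, row in payoffs.items()}

# Pick the decision whose best payoff is largest.
maximax_decision = max(max_payoff, key=max_payoff.get)

print(max_payoff)          # {'d1': 4, 'd2': 3, 'd3': 5}
print(maximax_decision)    # d3
```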
Example
 Conservative Approach
A conservative decision maker would use the
conservative (maximin) approach. List the minimum
payoff for each decision. Choose the decision with
the maximum of these minimum payoffs.
    Decision    Minimum Payoff
    d1               -2
    d2               -1    <-- maximin decision (maximin payoff = -1)
    d3               -3
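The maximin rule can be sketched the same way (illustrative code, not from the slides):

```python
# Maximin (conservative) criterion on the same example payoff table.
payoffs = {
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

# Worst payoff under each decision, then maximize that worst case.
min_payoff = {d: min(row) for d, row in payoffs.items()}
maximin_decision = max(min_payoff, key=min_payoff.get)

print(min_payoff)          # {'d1': -2, 'd2': -1, 'd3': -3}
print(maximin_decision)    # d2
```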
Example
 Minimax Regret Approach
For the minimax regret approach, first
compute a regret table by subtracting each payoff
in a column from the largest payoff in that column.
In this example, in the first column subtract 4, 0,
and 1 from 4; etc. The resulting regret table is:
          s1    s2    s3
    d1     0     1     1
    d2     4     2     0
    d3     3     0     2
Example
 Minimax Regret Approach (continued)
For each decision list the maximum regret.
Choose the decision with the minimum of these
values.
    Decision    Maximum Regret
    d1                1    <-- minimax regret decision
    d2                4
    d3                3
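The two-step regret construction can be sketched in Python (illustrative only; names are my own):

```python
# Minimax regret criterion: build the regret table, then minimize
# the maximum regret across decisions.
payoffs = {
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}
n_states = 3

# Regret = (best payoff in the column) - (this payoff).
col_best = [max(payoffs[d][j] for d in payoffs) for j in range(n_states)]
regret = {d: [col_best[j] - row[j] for j in range(n_states)]
          for d, row in payoffs.items()}

# List each decision's maximum regret, then choose the smallest.
max_regret = {d: max(r) for d, r in regret.items()}
minimax_regret_decision = min(max_regret, key=max_regret.get)

print(regret)                   # {'d1': [0, 1, 1], 'd2': [4, 2, 0], 'd3': [3, 0, 2]}
print(minimax_regret_decision)  # d1
```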
Decision Making with Probabilities
 Expected Value Approach
 If probabilistic information regarding the states of
nature is available, one may use the expected value
(EV) approach.
 Here the expected return for each decision is
calculated by summing the products of the payoff
under each state of nature and the probability of the
respective state of nature occurring.
 The decision yielding the best expected return is
chosen.
Expected Value of a Decision Alternative
 The expected value of a decision alternative is the
sum of weighted payoffs for the decision alternative.
 The expected value (EV) of decision alternative di is
defined as:
    EV(di) = Σ[j=1 to N] P(sj) Vij
where:
N = the number of states of nature
P(sj ) = the probability of state of nature sj
Vij = the payoff corresponding to decision
alternative di and state of nature sj
Example: Burger Prince
Burger Prince Restaurant is contemplating
opening a new restaurant on Main Street. It has
three different models, each with a different seating
capacity. Burger Prince estimates that the average
number of customers per hour will be 80, 100, or
120. The payoff table for the three models is on the
next slide.
Example: Burger Prince
 Payoff Table
              Average Number of Customers Per Hour
              s1 = 80     s2 = 100    s3 = 120
    Model A   $10,000     $15,000     $14,000
    Model B   $ 8,000     $18,000     $12,000
    Model C   $ 6,000     $16,000     $21,000
Example: Burger Prince
 Expected Value Approach
Calculate the expected value for each decision.
The decision tree on the next slide can assist in this
calculation. Here d1, d2, d3 represent the decision
alternatives of models A, B, C, and s1, s2, s3 represent
the states of nature of 80, 100, and 120.
Example: Burger Prince
 Decision Tree
    Decision node 1 leads to chance nodes 2, 3, and 4 (one per decision
    alternative); each chance node branches on the states of nature with
    the probabilities and payoffs shown:

    d1 (node 2):  s1 (.4) -> 10,000   s2 (.2) -> 15,000   s3 (.4) -> 14,000
    d2 (node 3):  s1 (.4) ->  8,000   s2 (.2) -> 18,000   s3 (.4) -> 12,000
    d3 (node 4):  s1 (.4) ->  6,000   s2 (.2) -> 16,000   s3 (.4) -> 21,000
Example: Burger Prince
 Expected Value for Each Decision
    Model A (d1):  EMV = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
    Model B (d2):  EMV = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
    Model C (d3):  EMV = .4(6,000) + .2(16,000) + .4(21,000) = $14,000

    Choose the model with the largest EV: Model C.
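The EV calculation for the three models can be sketched in Python (illustrative only; the dictionary layout is my own):

```python
# Expected value (EMV) for each Burger Prince model, using the prior
# probabilities P(80) = .4, P(100) = .2, P(120) = .4 from the decision tree.
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# EV(di) = sum over states of P(sj) * Vij.
ev = {m: sum(p * v for p, v in zip(probs, row)) for m, row in payoffs.items()}
best_model = max(ev, key=ev.get)

print({m: round(v) for m, v in ev.items()})
# {'Model A': 12600, 'Model B': 11600, 'Model C': 14000}
print(best_model)   # Model C
```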
Expected Value of Perfect Information
 Frequently information is available which can
improve the probability estimates for the states of
nature.
 The expected value of perfect information (EVPI) is
the increase in the expected profit that would result
if one knew with certainty which state of nature
would occur.
 The EVPI provides an upper bound on the expected
value of any sample or survey information.
Expected Value of Perfect Information
 EVPI Calculation
 Step 1:
Determine the optimal return corresponding to
each state of nature.
 Step 2:
Compute the expected value of these optimal
returns.
 Step 3:
Subtract the EV of the optimal decision from the
amount determined in step (2).
Example: Burger Prince
 Expected Value of Perfect Information
Calculate the expected value for the optimum
payoff for each state of nature and subtract the EV of
the optimal decision.
EVPI=.4(10,000)+ .2(18,000) + .4(21,000) - 14,000
=$2,000
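The three EVPI steps can be sketched in Python (an illustrative snippet; variable names are my own):

```python
# EVPI = EV(with perfect information) - EV(best decision without it).
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# Step 1: the optimal return under each state of nature.
best_per_state = [max(row[j] for row in payoffs.values()) for j in range(3)]
# -> [10000, 18000, 21000]

# Step 2: expected value of those optimal returns.
ev_perfect = sum(p * v for p, v in zip(probs, best_per_state))

# Step 3: subtract the EV of the optimal decision (Model C, $14,000).
ev_best = max(sum(p * v for p, v in zip(probs, row)) for row in payoffs.values())
evpi = ev_perfect - ev_best

print(round(evpi))   # 2000
```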
Risk Analysis
 Risk analysis helps the decision maker recognize the
difference between:
 the expected value of a decision alternative, and
 the payoff that might actually occur
 The risk profile for a decision alternative shows the
possible payoffs for the decision alternative along
with their associated probabilities.
Sensitivity Analysis
 Sensitivity analysis can be used to determine how
changes to the following inputs affect the
recommended decision alternative:
 probabilities for the states of nature
 values of the payoffs
 If a small change in the value of one of the inputs
causes a change in the recommended decision
alternative, extra effort and care should be taken in
estimating the input value.
Bayes’ Theorem and Posterior Probabilities
 Knowledge of sample or survey information can be
used to revise the probability estimates for the states
of nature.
 Prior to obtaining this information, the probability
estimates for the states of nature are called prior
probabilities.
 With knowledge of conditional probabilities for the
outcomes or indicators of the sample or survey
information, these prior probabilities can be revised
by employing Bayes' Theorem.
 The outcomes of this analysis are called posterior
probabilities or branch probabilities for decision
trees.
Computing Branch Probabilities
 Branch (Posterior) Probabilities Calculation
 Step 1:
For each state of nature, multiply the prior
probability by its conditional probability for the
indicator -- this gives the joint probabilities for the
states and indicator.
 Step 2:
Sum these joint probabilities over all states -- this
gives the marginal probability for the indicator.
 Step 3:
For each state, divide its joint probability by the
marginal probability for the indicator -- this gives the
posterior probability distribution.
Expected Value of Sample Information
 The expected value of sample information (EVSI) is
the additional expected profit possible through
knowledge of the sample or survey information.
Expected Value of Sample Information
 EVSI Calculation
 Step 1:
Determine the optimal decision and its expected
return for the possible outcomes of the sample using
the posterior probabilities for the states of nature.
 Step 2:
Compute the expected value of these optimal
returns.
 Step 3:
Subtract the EV of the optimal decision obtained
without using the sample information from the amount
determined in step (2).
Efficiency of Sample Information
 Efficiency of sample information is the ratio of EVSI
to EVPI.
 As the EVPI provides an upper bound for the EVSI,
efficiency is always a number between 0 and 1.
Example: Burger Prince
 Sample Information
Burger Prince must decide whether or not to
purchase a marketing survey from Stanton Marketing
for $1,000. The results of the survey are "favorable" or
"unfavorable". The conditional probabilities are:
P(favorable | 80 customers per hour) = .2
P(favorable | 100 customers per hour) = .5
P(favorable | 120 customers per hour) = .9
Should Burger Prince have the survey performed
by Stanton Marketing?
Example: Burger Prince
 Posterior Probabilities
Favorable
    State    Prior    Conditional    Joint    Posterior
     80       .4         .2           .08       .148
    100       .2         .5           .10       .185
    120       .4         .9           .36       .667
                               Total  .54      1.000
P(favorable) = .54
Example: Burger Prince
 Posterior Probabilities
Unfavorable
    State    Prior    Conditional    Joint    Posterior
     80       .4         .8           .32       .696
    100       .2         .5           .10       .217
    120       .4         .1           .04       .087
                               Total  .46      1.000
P(unfavorable) = .46
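The three-step Bayes revision behind both tables can be sketched in Python (illustrative code; the unfavorable conditionals are taken as the complements of the favorable ones, consistent with the table values):

```python
# Bayes' revision of the priors for the Burger Prince survey.
priors = {80: 0.4, 100: 0.2, 120: 0.4}
p_fav = {80: 0.2, 100: 0.5, 120: 0.9}   # P(favorable | state)

def revise(priors, conditionals):
    # Step 1: joint = prior * conditional.
    joint = {s: priors[s] * conditionals[s] for s in priors}
    # Step 2: marginal probability of the indicator.
    marginal = sum(joint.values())
    # Step 3: posterior = joint / marginal.
    posterior = {s: joint[s] / marginal for s in joint}
    return marginal, posterior

p_f, post_f = revise(priors, p_fav)
p_u, post_u = revise(priors, {s: 1 - p_fav[s] for s in priors})

print(round(p_f, 2), {s: round(p, 3) for s, p in post_f.items()})
# 0.54 {80: 0.148, 100: 0.185, 120: 0.667}
print(round(p_u, 2), {s: round(p, 3) for s, p in post_u.items()})
# 0.46 {80: 0.696, 100: 0.217, 120: 0.087}
```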
Example: Burger Prince
 Expected Value of Sample Information
If the outcome of the survey is "favorable", choose
Model C. If it is "unfavorable", choose Model A.
EVSI = .54($17,855) + .46($11,433) - $14,000 = $900.88
Since this is less than the cost of the survey, the
survey should not be purchased.
Example: Burger Prince
 Efficiency of Sample Information
The efficiency of the survey:
EVSI/EVPI = ($900.88)/($2000) = .4504
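The EVSI calculation can be sketched end to end in Python. Note this uses exact (unrounded) posteriors, which gives EVSI = $900.00 and efficiency .45; the slides' $900.88 and .4504 come from rounding the posteriors to three decimals first. The conclusion is unchanged: EVSI is below the $1,000 survey cost.

```python
# EVSI and its efficiency for the Burger Prince survey, using exact posteriors.
priors = {80: 0.4, 100: 0.2, 120: 0.4}
p_fav = {80: 0.2, 100: 0.5, 120: 0.9}   # P(favorable | state)
states = [80, 100, 120]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

def best_posterior_ev(conditionals):
    # Revise the priors, then return the indicator's marginal probability
    # and the best expected payoff under the posterior distribution.
    joint = {s: priors[s] * conditionals[s] for s in priors}
    marginal = sum(joint.values())
    post = [joint[s] / marginal for s in states]
    return marginal, max(sum(p * v for p, v in zip(post, row))
                         for row in payoffs.values())

p_f, ev_f = best_posterior_ev(p_fav)                              # Model C best
p_u, ev_u = best_posterior_ev({s: 1 - p_fav[s] for s in priors})  # Model A best

ev_no_info = 14_000          # best prior EV (Model C)
evsi = p_f * ev_f + p_u * ev_u - ev_no_info
efficiency = evsi / 2_000    # EVPI computed earlier

print(round(evsi, 2), round(efficiency, 3))   # 900.0 0.45
```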
Meaning of Utility
 Utilities are used when the decision criteria must be
based on more than just expected monetary values.
 Utility is a measure of the total worth of a particular
outcome, reflecting the decision maker’s attitude
towards a collection of factors.
 Some of these factors may be profit, loss, and risk.
 This analysis is particularly appropriate in cases where
payoffs can assume extremely high or extremely low
values.
Example: Weiss Advisors
Weiss Advisors have analyzed the profit potential
of five different investments. The probabilities of the
gains on $1000 are as follows:
                         Gain
    Investment    $0    $200    $500    $1000
        A         .9      0       0      .1
        B          0     .8      .2       0
        C        .05     .9       0     .05
        D          0     .8      .1      .1
        E         .6      0      .3      .1

One of Weiss' investors informs Weiss that he is
indifferent between investments A, B, and C.
Example: Weiss Advisors
 Developing Utilities for Payoffs
Assign a utility of 10 to a $1000 gain and a utility of
0 to a gain of $0. Let x = the utility of a $200 gain and
y = the utility of a $500 gain. The expected utility on
investment A is then .9(0) + .1(10) = 1.
Since the investor is indifferent between
investments A and C, this must mean the expected
utility of investment C = the expected utility of
investment A = 1. But the expected utility of
investment C = .05(0) +.90x + .05(10). Since this must
equal 1, solving for x, gives x = 5/9.
Example: Weiss Advisors
 Developing Utilities for Payoffs (continue)
Also since the investor is indifferent between A,
B, and C, the expected utility of investment B must be
1. Thus, 0(0) + .8(5/9) + .2y + 0(10) = 1. Solving for y,
gives y = 25/9.
Thus the utility values for gains of 0, 200, 500,
and 1000 are 0, 5/9, 25/9, and 10, respectively.
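The two indifference equations above can be solved exactly in Python with rational arithmetic (an illustrative check of the algebra, not part of the slides):

```python
# Solve for x = u($200) and y = u($500) from the indifference conditions.
from fractions import Fraction as F

u_0, u_1000 = F(0), F(10)                     # anchor utilities
eu_a = F(9, 10) * u_0 + F(1, 10) * u_1000     # EU(A) = 1

# EU(C) = .05 u(0) + .90 x + .05 u(1000) = EU(A)  =>  solve for x
x = (eu_a - F(5, 100) * u_0 - F(5, 100) * u_1000) / F(90, 100)

# EU(B) = .8 x + .2 y = EU(A)  =>  solve for y
y = (eu_a - F(8, 10) * x) / F(2, 10)

print(x, y)   # 5/9 25/9
```

Using `Fraction` avoids floating-point noise, so the results come out as the exact 5/9 and 25/9 stated in the text.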
Expected Utility Approach
 Once a utility function has been determined, the
optimal decision can be chosen using the expected
utility approach.
 Here, the expected utility for each decision alternative
is computed as:
    EU(di) = Σ[j=1 to N] P(sj) Uij
 The decision alternative with the highest expected
utility is chosen.
Example: Risk Avoider
Consider a three-state, three-decision problem
with the following payoff table in dollars:
The probabilities for the three states of nature are:
P(s1) = .1, P(s2) = .3, and P(s3) = .6.
Example: Risk Avoider
 Utility Table for Decision Maker
          s1    s2    s3
    d1   100    90     0
    d2    94    80    40
    d3    80    80    60
Example: Risk Avoider
 Expected Utility
                   s1    s2    s3    Expected Utility
    d1            100    90     0         37.0
    d2             94    80    40         57.4
    d3             80    80    60         68.0
    Probability    .1    .3    .6
Decision maker should choose decision d3.
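The expected-utility computation mirrors the EV computation, just with utilities in place of payoffs; a short Python sketch (illustrative only):

```python
# Expected utility for each decision in the risk-avoider example.
probs = [0.1, 0.3, 0.6]            # P(s1), P(s2), P(s3)
utilities = {
    "d1": [100, 90, 0],
    "d2": [94, 80, 40],
    "d3": [80, 80, 60],
}

# EU(di) = sum over states of P(sj) * Uij.
eu = {d: sum(p * u for p, u in zip(probs, row)) for d, row in utilities.items()}
best = max(eu, key=eu.get)

print({d: round(v, 1) for d, v in eu.items()})
# {'d1': 37.0, 'd2': 57.4, 'd3': 68.0}
print(best)   # d3
```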
End of Chapter 13