Incentives
Overview
 Misaligned Goals or Interests
 Hidden Action
   Moral Hazard Problems
   Solutions
   Risk-Reward Tradeoff
 Hidden Information
   Adverse Selection Problems
   Solutions
The Strategic Process
[Flow diagram: "Where Do We Want to Be? (Vision / Mission)" and "Where Are We Now?" feed into Strategic Options, which are screened for Feasibility to produce the Strategic Plan; Execution of the plan requires anticipating obstacles, leadership, and aligning the business system.]
Incentives in the Strategic Process
[The same flow diagram, annotated: existing incentives affect "Where Are We Now?" and the Feasibility of Strategic Options, and Execution can require changing incentives.]
Where Incentives Come into Play
 Executive compensation
   Done correctly, can encourage risk taking and doing unpleasant jobs (e.g., restructuring)
   Done incorrectly, can ruin the company or create problems
     International Harvester (Navistar)
     American Airlines
 Divisional incentives
   Done correctly, can encourage cooperation and development of synergies
   Done incorrectly, can lead to infighting & inefficiency (e.g., bad transfer pricing)
 Employee level
   Can encourage better performance
   But can be implemented poorly – typists at Lincoln Electric
An Organizational Chart
[Org chart for SP4U: Biff & Buffy Banyon (Haas '03), Co-Owners; reporting to them: Joe Flunky (Stanford '03), Peon; Ralph B. Kisser (HBS '03), Gopher; and Ima Loser (Kellogg '03), Toady.]
Principals and Agents
 A principal-agent relationship exists when one party, the principal, hires or employs another party, the agent, to do some set of tasks.
 Example: Biff & Buffy are the principals and Joe, Ralph, and Ima are the agents.
 Example: DoD hires a defense contractor.
 Example: Shareholders (P) and CEOs (A).
Agency Problems
 An agency problem exists when the goals or interests of the principal and agent are not in alignment.
   Example: the principal wishes employees promoted solely on merit, while the agent (manager) considers his friendships with employees as well as merit.
 A hidden-action problem is an agency problem in which some of the agent's actions are unobservable to the principal.
   Example: the agent may know how much effort (negative leisure) he is exerting on a problem, but the principal may be unable to observe or measure his effort directly.
   Also called a moral hazard problem.
The Role of Monitoring
 Agency problems in which the principal can fully and costlessly monitor the agent are easily dealt with through contracts that stipulate exactly what the agent is to do. More serious agency problems arise when the principal can neither fully nor costlessly monitor the agent.
 Recall that costly monitoring can result in mixed-strategy equilibria in which there is not full monitoring and, thus, undesired behavior.
 Hence, even when monitoring is feasible, if it is costly the principal may also wish to use incentive contracts.
Hidden Action Framework
[Flow diagram: the Principal's targets shape the incentive scheme, which consists of performance measures and a compensation function; the Agent's actions, together with exogenous factors, determine the performance measures, which the compensation function maps into the Agent's reward.]
A Theoretical Framework
 Principal's targets: what she wants the agent to do.
 Agent's actions: what he does in response to the incentive scheme.
 Exogenous factors: noise that prevents the principal from getting a perfect signal.
The Incentive Scheme
 Performance measures: that upon which the agent's performance is measured.
 Compensation function: the contractual relationship between the performance measures and the agent's compensation.
 Agent's reward: the agent's realized compensation.
Example: A Hidden-Action Problem
 The agent has a choice of two actions: work hard or be lazy.
 It "costs" the agent $10 to work hard rather than to be lazy.
 Can think of "hard" and "lazy" as metaphors:
   Pleasant vs. unpleasant actions (toughness on subordinates; implement new strategy; etc.)
   Pursue actions possibly at odds with career concerns (rock the boat; choose risky projects; etc.)
 If the agent works hard, then the probability that the firm does well is q.
 If he is lazy, then the probability that the firm does well is 0.
Example continued …
 If the firm does well, the principal earns $30.
 If the firm does poorly, she earns $0.
 Both principal and agent are risk neutral.
 Negative pay is not permitted.
 Let W = the agent's pay if the firm does well and let P = his pay if the firm does poorly.
Example continued …
[Decision tree: if the agent works hard, with probability q the firm does well and he gets W - 10, and with probability 1-q it does poorly and he gets P - 10; his expected payoff is qW + (1-q)P - 10. If he is lazy, the firm does poorly for sure and he gets P.]
Example continued …
 The agent works hard if qW + (1-q)P - 10 ≥ P; that is, if q(W - P) ≥ 10.
 The principal's problem is to minimize her expected wage bill subject to the constraints imposed by the problem; that is,

   min_{W,P} qW + (1-q)P subject to W ≥ 0, P ≥ 0, and q(W - P) ≥ 10.
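The minimization above is a small linear program, so the closed-form answer can be checked numerically. A minimal sketch in Python (not from the slides; q = 0.5 is an arbitrary illustrative value):

from scipy.optimize import linprog

# Principal's problem: minimize qW + (1-q)P
# subject to q(W - P) >= 10 and W, P >= 0.
q = 0.5  # illustrative probability the firm does well given hard work

c = [q, 1 - q]      # objective: expected wage bill over variables [W, P]
A_ub = [[-q, q]]    # q(W - P) >= 10 rewritten as -qW + qP <= -10
b_ub = [-10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
W, P = res.x
print(f"W = {W:.2f}, P = {P:.2f}")  # expect W = 10/q = 20.00, P = 0.00

The solver reproduces the solution reported on the next slide: P = 0 and W = 10/q.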
Example continued …
 The solution is P = 0 and W = 10/q.
 The difference W - P is the power of the incentives.
 Note that the power of the incentives increases as q falls.
 As q falls, there is less information in the firm doing poorly with respect to whether the agent worked hard or not.
 General conclusion: the less informative the performance measure, the more powerful the incentives must be.
Example continued …
Does the principal use incentives?
 Profit without incentives is $0.
 Expected profit with incentives is ($30 - $10/q)q + ($0 - $0)(1-q) = $30q - $10.
 So she uses incentives provided q ≥ 1/3.
 General conclusion: the less informative the performance measure, the less likely it is that the principal uses incentives.
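A quick numeric check of the adoption threshold (a sketch; the q values are chosen arbitrarily for illustration):

# Expected profit with incentives is (30 - 10/q)q = 30q - 10,
# versus 0 without incentives, so incentives pay off iff q >= 1/3.
for q in (0.2, 1/3, 0.6):
    profit_with_incentives = 30 * q - 10
    print(f"q = {q:.2f}: expected profit = {profit_with_incentives:6.2f}, "
          f"use incentives: {profit_with_incentives >= 0}")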
Example extended …
 It is realistic to expect in many situations that the agent is risk averse (e.g., top managers with incentive pay as a large percentage of income).
 Let the agent have utility function y^b - e, where 0 < b ≤ 1 and e = 0 or 10. The agent is less risk averse as b becomes larger.
Example extended …
[Decision tree: if the agent works hard, with probability q the firm does well and he gets W^b - 10, and with probability 1-q it does poorly and he gets P^b - 10; his expected payoff is qW^b + (1-q)P^b - 10. If he is lazy, the firm does poorly for sure and he gets P^b.]
Example extended …
The principal's problem:

   min_{W,P} qW + (1-q)P subject to W ≥ 0, P ≥ 0, and qW^b + (1-q)P^b - 10 ≥ P^b.

It is readily shown that P = 0 and W = (10/q)^(1/b). Hence, the expected wage bill is q(10/q)^(1/b).
Example extended … b = 1
[Figure: expected wage q(10/q)^(1/b) as a function of q for b = 1; the expected wage is constant at 10 for all q.]
Example extended … b = ¾
[Figure: expected wage q(10/q)^(1/b) as a function of q for b = ¾; the expected wage falls as q rises.]
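Both figures follow from the closed form for the expected wage bill; a minimal sketch that tabulates it on an illustrative grid of q values:

# Expected wage bill q * (10/q)**(1/b) under the optimal contract.
# For b = 1 it is constant at 10; for b = 3/4 it falls as q rises.
def expected_wage(q: float, b: float) -> float:
    return q * (10 / q) ** (1 / b)

for b in (1.0, 0.75):
    print(f"b = {b}:")
    for q in (0.2, 0.4, 0.6, 0.8, 1.0):
        print(f"  q = {q:.1f}: expected wage = {expected_wage(q, b):6.2f}")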
Risk & Incentives
 The lower is q, the more powerful the incentives; that is, the greater the difference between P and W.
 This means the risk is greater.
[Figure: variance of pay as a function of q when b = ¾; the variance falls as q rises toward 1.]
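The variance in the figure can be computed directly from the contract derived above: pay is W = (10/q)^(1/b) with probability q and 0 otherwise, so Var(pay) = q(1-q)W^2. A minimal sketch (same illustrative grid of q values; it reproduces the qualitative shape of the figure):

# Variance of pay under the optimal contract with a risk-averse agent.
b = 0.75
for q in (0.2, 0.4, 0.6, 0.8, 1.0):
    W = (10 / q) ** (1 / b)
    variance = q * (1 - q) * W**2  # two-point payoff: W w.p. q, 0 w.p. 1-q
    print(f"q = {q:.1f}: W = {W:7.2f}, Var(pay) = {variance:9.1f}")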
Risk & Incentives
 Because the agent is risk averse, he dislikes this risk and must be compensated for bearing the risk inherent in the incentive scheme.
 The greater the risk, the greater the compensation for risk; hence, the more expensive the incentive scheme.
 General conclusion: there is a trade-off between the power of incentives and their cost due to compensation for bearing risk.
Implications for Designing Incentive Schemes
 Want to limit noise in the performance measure; extract exogenous factors as much as possible.
 Example: performance relative to the industry rather than performance relative to the entire stock market.
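As a concrete illustration of the relative-performance idea, a bonus can be paid on the firm's return net of an industry benchmark so that common shocks cancel. A minimal sketch; the return series and the bonus_rate parameter are hypothetical, not from the slides:

# Relative-performance evaluation: strip common industry noise out of
# the performance measure before computing incentive pay.
firm_returns = [0.08, -0.02, 0.12, 0.05]       # hypothetical annual returns
industry_returns = [0.06, -0.05, 0.10, 0.07]   # hypothetical industry averages
bonus_rate = 100_000  # hypothetical dollars per unit of relative performance

for firm, industry in zip(firm_returns, industry_returns):
    relative = firm - industry  # industry-wide (exogenous) shocks cancel
    bonus = max(0.0, relative) * bonus_rate
    print(f"firm {firm:+.2f} vs industry {industry:+.2f}: bonus = ${bonus:,.0f}")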
Other Issues in the Design of Incentives
 Avoid performance measures that can be manipulated by the agent.
 Don't reward A, while hoping for B.
   Int'l Harvester
 Don't ratchet—stay committed to the scheme.
   Int'l Harvester
   A frequent problem with piece-rate systems
 Be careful of violating horizontal equity norms.
 Be careful of violating vertical equity norms.
   Remember Donald J. Carty
More on Agency Problems – Hidden Information
 A hidden-information problem is an agency problem in which the agent acquires information about the possible tasks that the principal does not possess.
   Example: the agent may know how difficult a task is, while the principal may not.
   Also called an adverse selection problem.
A Model of Task Difficulty
 The agent's utility function is y - x^2/(2t), where y is money, x is output, and t is type. Assume t = 2 (good type) or t = 1 (bad type).
 The price per unit of output is $1.
 Prob{t = 1} = h.
 If the agent quits, his utility is zero.
Timing
[Timeline: Principal hires Agent → Agent (only) learns t → Principal & Agent set output targets (or Agent quits) → Agent produces the output target.]
Benchmark: t is common knowledge
 Suppose, momentarily, that t is known by both principal and agent.
 The principal can order any x she wants provided that compensation, y, is adequate: y - x^2/(2t) ≥ 0.
 So y ≥ x^2/(2t).
Benchmark continued …
 The principal's profit is x - y = (2tx - x^2)/(2t).
 Maximizing her profit yields x(t) = t.
 So x(1) = 1 and x(2) = 2.
 It follows that y(1) = ½ and y(2) = 1.
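The benchmark solution can be verified symbolically; a minimal sketch using sympy:

import sympy as sp

# Benchmark: with t common knowledge, the principal pays y = x**2/(2t)
# (the participation constraint binds) and picks x to maximize profit x - y.
x, t = sp.symbols("x t", positive=True)
profit = x - x**2 / (2 * t)
x_star = sp.solve(sp.diff(profit, x), x)[0]  # first-order condition
y_star = x_star**2 / (2 * t)
print(x_star, y_star)  # x(t) = t and y(t) = t/2, so y(1) = 1/2, y(2) = 1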
Principal does not know t
 The problem is that one type may mimic the other; in particular, the good type may claim to be the bad type.
 Under the benchmark solution the good type gets 0 if he tells the truth (1 - ¼[2]^2 = 0); but he gets ¼ if he lies (½ - ¼[1]^2 = ¼).
 General problem: agents overstate the difficulty of tasks and the resources required.
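The payoffs in the second bullet are simple plug-ins to the good type's utility y - x^2/4; a quick check:

# Good type (t = 2) compares telling the truth with taking the bad type's
# benchmark contract: (y, x) pairs from the benchmark slide.
contracts = {"truth (y=1, x=2)": (1.0, 2.0), "lie (y=1/2, x=1)": (0.5, 1.0)}
for label, (y, x) in contracts.items():
    print(f"{label}: utility = {y - x**2 / 4:.2f}")  # 0.00 vs 0.25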
The principal's problem
The principal wants to maximize her expected profit subject to "truth-telling" constraints and "participation" constraints:

   max_{y1,y2,x1,x2} h(x1 - y1) + (1-h)(x2 - y2) subject to
   y1 - x1^2/2 ≥ y2 - x2^2/2 ;  y1 - x1^2/2 ≥ 0 ;
   y2 - x2^2/4 ≥ y1 - x1^2/4 ;  y2 - x2^2/4 ≥ 0.

(The first line of constraints gives the bad type's (t = 1) truth-telling and participation constraints; the second line gives the good type's (t = 2).)
Solution
 x2 = 2
 x1 = 2h/(h + 1) (note: the distortion increases as h falls)
 y1 = x1^2/2 = ½(2h/(h + 1))^2
 y2 = 1 + ¼(2h/(h + 1))^2 = 1 + ¼x1^2
 Expected profit = 1/(1 + h)
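The closed forms above can be checked by solving the relaxed program in which the bad type's participation constraint and the good type's truth-telling constraint bind; a minimal sketch with sympy:

import sympy as sp

# Binding constraints: y1 = x1**2/2 (bad type's participation) and
# y2 - x2**2/4 = y1 - x1**2/4 (good type's truth-telling).
x1, x2, h = sp.symbols("x1 x2 h", positive=True)
y1 = x1**2 / 2
y2 = x2**2 / 4 + y1 - x1**2 / 4
profit = h * (x1 - y1) + (1 - h) * (x2 - y2)

sol = sp.solve([sp.diff(profit, x1), sp.diff(profit, x2)], [x1, x2], dict=True)[0]
print(sp.simplify(sol[x1]), sol[x2])  # 2h/(h + 1) and 2
print(sp.simplify(profit.subs(sol)))  # 1/(h + 1)

This reproduces x1 = 2h/(h + 1), x2 = 2, and expected profit 1/(1 + h).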
Compared to the benchmark case
 The good type has the same target, but the bad type is given a lower target.
 The good type is paid more than before and more than is necessary to keep him from quitting, while the bad type is paid less than before and no more than is necessary to keep him from quitting.
Intuition
 When the principal does not know the agent's type (e.g., t), she must "bribe" a good-type agent to admit that he is the good type. This bribe or information rent is an additional cost of doing business.
 To reduce this cost somewhat, the principal lowers what she pays a bad-type agent, which makes it less desirable for a good-type agent to pretend to be the bad-type agent; this, in turn, reduces the information rent.
 Of course, if the principal lowers what she pays a bad-type agent, she must also lower his output target.
Applications
 Application: IBM used a scheme along these lines for setting commissions for sales agents:
   Type corresponds to how good the territory is.
   Adjust the compensation package to encourage salespeople with good territories to (i) take advantage of them, but (ii) not capture the entire value.
 These ideas are often used in the price regulation of utilities:
   Type is the utility's information about its cost structure.
   This has led to a move from cost-plus pricing to price caps and other reforms.
 These ideas are increasingly used in the design of transfer-pricing schemes:
   Type is the upstream division's information about cost.
   This has led to new managerial accounting (e.g., ABC, activity-based costing).
Applying the theory
 Unlikely to have such detailed information in the real world.
 Unlikely to have deterministic production in the real world.
 Hence, an exact solution like the one we just derived is usually not possible in the real world.
 The issues, however, still remain.
 The theoretical insights still apply.
Conclusions
 Incentives are critical to strategy:
   Existing incentives are part of "where you are" and help to determine the feasibility of strategies.
   Incentive problems are part of what determines the feasibility of strategy.
   Designing correct incentives can be critical to execution.
Conclusions continued …
 The design of incentives involves tradeoffs:
   Recall that monitoring often involves a tradeoff of cost against the frequency of undesired behavior.
   Resolution of moral hazard problems involves a tradeoff of the risk incentives impose against the requirement to compensate for risk.
   Resolution of adverse selection problems involves a tradeoff of the efficiency of actions against the desire to limit information rents.
   Because of these tradeoffs, most incentive systems are inherently imperfect; that is, second best.
Conclusions continued …
 Cost drivers in moral hazard problems:
   The degree of misalignment between the principal's and agent's objectives and goals.
   How informative performance measures are about the underlying actions taken.
   How risk averse the agent is.
 Cost drivers in adverse selection:
   The principal's uncertainty about the agent's type.