SCREENING UNDER ADVERSE SELECTION
Some Optimal Regulation Theory
• Principal-Agent Theory in the pure Adverse-Selection Case.
• Typical Principal/Agent pairs :

  Principal      Agent
  State          Public Firm
  Banker         Entrepreneur
  Firm           Worker
  Shareholder    Manager
• Pure Adverse Selection : hidden characteristics of the agent only.
• No hidden actions (no moral hazard).
• Important applications : Optimal Regulation and Optimal Taxation.
• We study a basic model of the Principal-Agent relationship.
• We look for the optimal contract between Principal and Agent.
• The Principal is uninformed.
Must Read :
• Jean-Jacques LAFFONT and David MARTIMORT (2002), The Theory of Incentives : The Principal-Agent Model, Princeton University Press, Chapter 2, pp. 28-81.
Other important references :
• David P. BARON and Roger B. MYERSON (1982), "Regulating a Monopolist with Unknown Costs", Econometrica, 50, pp. 911-930.
• Jean-Jacques LAFFONT and Jean TIROLE (1986), "Using Cost Observation to Regulate Firms", Journal of Political Economy, 94, pp. 614-641.
Basic Principal-Agent Model
• Agent : produces q ≥ 0.
• Surplus due to production is S(q).
• We assume that S is differentiable and S′ > 0, S″ < 0, S(0) = 0.
• Marginal cost of production is denoted θ ∈ {θ, θ̄} (two types), with θ < θ̄.
• Cost function : C(q, θ) = θq.
• So, θ̄ = "inefficient type", θ = "efficient type".
• Prior probability of types : Prob(θ = θ) = ν, with ν ∈ (0, 1) (i.e., the agent is efficient with probability ν).
• Denote Δθ = θ̄ − θ (the "spread").
Definition of a Contract
• A pair of functions θ ↦ (q(θ), t(θ))
• q(θ) = production of type θ
• t(θ) = transfer of money from Principal to Agent
• Utility of the Agent :
U = t − θq
• Utility of the Principal :
V = S(q) − t
• Social surplus :
W = U + V = S(q) − θq
• First-Best Allocation :
maximize S(q) − θq with respect to q, subject to q ≥ 0, for each θ ;
or, equivalently,
maximize S(q) − t with respect to (q, t)
subject to t − θq ≥ U₀(θ) (participation constraint)
and q ≥ 0, for each θ.
• First-Order Conditions for First-Best Optimality :
S′(q*) = θ
S′(q̄*) = θ̄
⇒ q* > q̄* (since S′ is decreasing and θ < θ̄)
• Transfers : to implement the first-best,
t* = θq* + U₀(θ)
t̄* = θ̄q̄* + U₀(θ̄)
⇒ U = U₀(θ) for each type (no rents)
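To make the first best concrete, here is a minimal numerical sketch in Python. The functional form S(q) = 2√q and all parameter values are illustrative assumptions, not part of the slides; this S satisfies S′ > 0, S″ < 0, S(0) = 0, and S′(q) = 1/√q gives closed-form solutions.

```python
import math

# Illustrative primitives (assumed): S(q) = 2*sqrt(q), so S'(q) = 1/sqrt(q).
def S(q):
    return 2.0 * math.sqrt(q)

theta_lo, theta_hi = 1.0, 1.5   # efficient type θ and inefficient type θ̄
U0 = 0.0                        # reservation utility U₀ (type-independent)

# First-best quantities: S'(q) = θ  <=>  1/sqrt(q) = θ  <=>  q = 1/θ².
q_star     = 1.0 / theta_lo**2
q_bar_star = 1.0 / theta_hi**2

# First-best transfers leave no rent: t = θ·q + U₀.
t_star     = theta_lo * q_star + U0
t_bar_star = theta_hi * q_bar_star + U0

print(f"efficient:   q* = {q_star:.4f},  t* = {t_star:.4f}")
print(f"inefficient: q̄* = {q_bar_star:.4f},  t̄* = {t_bar_star:.4f}")
assert q_star > q_bar_star   # the efficient type produces more
```

With these numbers q* = 1 and q̄* ≈ 0.444, and each type is paid exactly its cost, so no rent is left.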
• For simplicity, we assume that U₀(θ) = U₀ = 0
(reservation utilities do not depend on the type).
• The timing of the model :
t = 0 Agent discovers θ
t = 1 Principal offers contract (q, t)
t = 2 Agent accepts or refuses
t = 3 Contract is executed.
Incentive Compatibility and Feasibility :
• A contract is an array : {(t̄, q̄), (t, q)}
• The contract is Incentive Compatible if
t − θq ≥ t̄ − θq̄ (IC, for the efficient type θ)
and
t̄ − θ̄q̄ ≥ t − θ̄q (IC̄, for the inefficient type θ̄)
• The contract is Individually Rational if
t − θq ≥ 0 (IR, for the efficient type)
and
t̄ − θ̄q̄ ≥ 0 (IR̄, for the inefficient type)
• IC + IC̄ + IR + IR̄ = feasible contract
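The four constraints are easy to check mechanically. A small sketch (the function name and argument layout are mine, purely illustrative):

```python
def is_feasible(t_lo, q_lo, t_hi, q_hi, theta_lo, theta_hi):
    """Check feasibility: (t_lo, q_lo) is the offer meant for the efficient
    type θ, (t_hi, q_hi) the offer meant for the inefficient type θ̄."""
    ic_lo = t_lo - theta_lo * q_lo >= t_hi - theta_lo * q_hi   # (IC): θ does not mimic θ̄
    ic_hi = t_hi - theta_hi * q_hi >= t_lo - theta_hi * q_lo   # (IC̄): θ̄ does not mimic θ
    ir_lo = t_lo - theta_lo * q_lo >= 0                        # (IR)
    ir_hi = t_hi - theta_hi * q_hi >= 0                        # (IR̄)
    return ic_lo and ic_hi and ir_lo and ir_hi

# The first-best contract from the earlier sketch is NOT feasible here:
# the efficient type would rather take the inefficient type's offer.
print(is_feasible(1.0, 1.0, 2/3, 4/9, theta_lo=1.0, theta_hi=1.5))   # False
```

This failure of (IC) at the first best is exactly why the Principal must concede an informational rent, as the next slides show.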
Monotonicity Property :
• Adding IC and IC̄ yields
(θ̄ − θ)q ≥ (θ̄ − θ)q̄ ⇒ q ≥ q̄
• Conversely, if monotonicity holds, then there exist transfers (t̄, t) such that both incentive constraints hold :
θ(q − q̄) ≤ t − t̄ ≤ θ̄(q − q̄)
(the left inequality is IC, the right one IC̄).
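The double inequality above gives a recipe for building IC transfers from any monotone quantity pair. A sketch (the helper name and the 'weight' parameter are mine; any point of the admissible interval works):

```python
def transfers_from_quantities(q_lo, q_hi, theta_lo, theta_hi, weight=0.5):
    """Given monotone quantities q_lo >= q_hi, return transfers (t_lo, t_hi)
    satisfying both incentive constraints, by picking
    t_lo - t_hi inside [θ(q − q̄), θ̄(q − q̄)]."""
    assert q_lo >= q_hi, "monotonicity q >= q̄ is required"
    d_min = theta_lo * (q_lo - q_hi)
    d_max = theta_hi * (q_lo - q_hi)
    d = d_min + weight * (d_max - d_min)
    t_hi = theta_hi * q_hi          # leaves the inefficient type zero rent
    return t_hi + d, t_hi

t_lo, t_hi = transfers_from_quantities(1.0, 4/9, 1.0, 1.5)
assert t_lo - 1.0 * 1.0   >= t_hi - 1.0 * (4/9)    # (IC) holds
assert t_hi - 1.5 * (4/9) >= t_lo - 1.5 * 1.0      # (IC̄) holds
```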
INFORMATION RENTS :
• Denote U = t − θq and Ū = t̄ − θ̄q̄ (the rents of the two types).
• We have
t̄ − θq̄ = t̄ − θ̄q̄ + Δθq̄ = Ū + Δθq̄
• It follows that IC is equivalent to
U ≥ Ū + Δθq̄
• We also have
t − θ̄q = t − θq − Δθq = U − Δθq
• Hence, IC̄ is equivalent to
Ū ≥ U − Δθq
• We see that U > Ū whenever q̄ > 0 : the informational rent of the efficient type θ is Δθq̄.
The Principal's Problem :
(Second-best problem)
max ν(S(q) − t) + (1 − ν)(S(q̄) − t̄)
(t̄, q̄, t, q)
subject to IC, IC̄, IR, IR̄.
Use the change of variables : U = t − θq, Ū = t̄ − θ̄q̄.
The Principal's problem rewritten :
max {ν[S(q) − θq] + (1 − ν)[S(q̄) − θ̄q̄] − [νU + (1 − ν)Ū]}
(U, q, Ū, q̄)
subject to
U ≥ Ū + Δθq̄ (IC)
Ū ≥ U − Δθq (IC̄)
U ≥ 0 (IR)
Ū ≥ 0 (IR̄)
We consider contracts without "shutdown", i.e., q̄ > 0.
• If IC and IR̄ hold, then IR is automatically satisfied :
U ≥ Ū + Δθq̄ ≥ Δθq̄ > 0.
• Both IC and IR̄ must be binding at the second-best optimum.
• If IR̄ were slack, we would have Ū = ε > 0 ; decreasing both U and Ū by ε keeps all constraints satisfied while reducing the expected rent νU + (1 − ν)Ū. Contradiction.
• If IC were slack, we would have U = Δθq̄ + η with η > 0 ; decreasing U by η keeps all constraints satisfied and reduces the rent. Contradiction.
• We conclude that Ū = 0 and U = Δθq̄ at the second-best optimum.
• The Principal's Problem becomes :
max {ν[S(q) − θq] + (1 − ν)[S(q̄) − θ̄q̄] − νΔθq̄}
(q, q̄)
(substituting Ū = 0 and U = Δθq̄, and ignoring IC̄ for a while...)
• The First-Order Conditions are
S′(q**) = θ
(1 − ν)[S′(q̄**) − θ̄] = νΔθ
• Distortion : S′(q̄**) − θ̄ = νΔθ/(1 − ν) > 0
• Trade-off : EFFICIENCY vs. RENT EXTRACTION
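Continuing the illustrative S(q) = 2√q example (parameter values again assumed, not from the slides), the second-best conditions also solve in closed form, which makes the distortion and the rent visible side by side:

```python
# Illustrative parameters (assumptions, not from the slides).
theta_lo, theta_hi = 1.0, 1.5          # θ, θ̄
nu = 0.4                               # ν = Prob(θ = θ), the efficient type
d_theta = theta_hi - theta_lo          # Δθ

# With S(q) = 2*sqrt(q):  S'(q) = m  <=>  q = 1/m².
def q_from_marginal(m):
    return 1.0 / m**2

q_star, q_bar_star = q_from_marginal(theta_lo), q_from_marginal(theta_hi)

# Second best: no distortion at the top; "virtual cost" at the bottom.
q_ss     = q_from_marginal(theta_lo)                               # S'(q**) = θ
q_bar_ss = q_from_marginal(theta_hi + nu * d_theta / (1 - nu))     # S'(q̄**) = θ̄ + νΔθ/(1−ν)

rent = d_theta * q_bar_ss              # U** = Δθ·q̄**, conceded to the efficient type
print(f"q**  = {q_ss:.4f}  (= q* = {q_star:.4f}: no distortion at the top)")
print(f"q̄** = {q_bar_ss:.4f}  (< q̄* = {q_bar_star:.4f}: downward distortion)")
print(f"U**  = {rent:.4f}   (informational rent)")
```

Distorting q̄ downward is costly in efficiency terms but reduces the rent Δθq̄ that must be left to the efficient type; the weight ν/(1 − ν) measures how often that rent is paid (probability ν) relative to how often the distortion bites (probability 1 − ν).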
• We finally check that IC̄ also holds.
• From monotonicity : q̄** ≤ q**.
The omitted IC̄ constraint can be written
Ū** = 0 ≥ U** − Δθq** = Δθq̄** − Δθq**
⇔ q** ≥ q̄** (true).
• Note : q** = q* > q̄* > q̄**.
This is because
S′(q̄**) = θ̄ + [ν/(1 − ν)]Δθ > θ̄ = S′(q̄*)
and S′ is decreasing.
Proposition : The second-best optimal contract is such that
S′(q**) = θ ("no distortion at the top"),
S′(q̄**) = θ̄ + [ν/(1 − ν)]Δθ (downward distortion),
U** = Δθq̄** (informational rent for the efficient type),
Ū** = 0 (full rent extraction from the inefficient type).
• Discussion : the roles played by ν/(1 − ν) and Δθ (see the sketch below).
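As a sketch of this discussion, one can vary ν and Δθ in the same illustrative S(q) = 2√q example and watch the distortion and the rent move (the parameter grids are arbitrary):

```python
theta_lo = 1.0

for nu in (0.2, 0.5, 0.8):
    for d_theta in (0.25, 0.5):
        theta_hi = theta_lo + d_theta
        virtual_cost = theta_hi + nu * d_theta / (1 - nu)   # θ̄ + νΔθ/(1−ν)
        q_bar_ss = 1.0 / virtual_cost**2                    # S'(q̄**) = virtual cost
        rent = d_theta * q_bar_ss                           # U** = Δθ·q̄**
        print(f"ν = {nu:.1f}, Δθ = {d_theta:.2f}:  q̄** = {q_bar_ss:.3f},  U** = {rent:.3f}")
```

A higher ν or a larger Δθ raises the virtual cost and pushes q̄** further below q̄*; for ν large enough, excluding the inefficient type altogether (shutdown, q̄ = 0) can become optimal, which is why the analysis above assumed q̄ > 0.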
Some Mechanism Design.
Suppose we have one Principal and n agents indexed i = 1, ..., n.
Agent i's type θi is drawn from a set Θ. The Principal chooses a decision x ∈ X (for instance x = (q, t), a contract). The utility of agent i is U(x; θi).
Definition 1 (Mechanism)
A mechanism is a pair (M, g) where M is a message space and g : Mⁿ → X is an outcome function :
x = g(m1, m2, ..., mn)
• We suppose that θ = (θ1, ..., θn) is not observed by the Principal. Each θi is the private information of agent i.
Definition 2 (Direct Mechanism)
A mechanism is direct if M = Θ.
Definition 3 (Equilibrium in Dominant Strategies)
Let m*i : Θ → M be agent i's communication strategy.
Then (m*i(θi))i=1,...,n is an equilibrium in dominant strategies if, for all i, all θi, and all messages m−i ∈ Mⁿ⁻¹ of the other agents,
U[g(m*i(θi), m−i), θi] ≥ U[g(m̂i, m−i), θi] for all m̂i ∈ M.
• Notation : θ−i = (θj)j≠i and m(θ) = (m1(θ1), ..., mn(θn)).
Definition 4 (Revealing Mechanism)
A mechanism (M, g) is revealing (in dominant strategies) if truth-telling, m*i(θi) ≡ θi, is a dominant strategy for all i = 1, ..., n.
Note : (M, g) is a direct and revealing mechanism if M = Θ and m*i(θi) = θi is an equilibrium in dominant strategies.
REVELATION PRINCIPLE
Theorem : If mechanism (M, g) implements decision rule f : Θ → X in dominant strategies, that is, g(m*(θ)) = f(θ) for all θ, where m* is an n-tuple of dominant strategies, then (Θ, f) is revealing in dominant strategies.
Remark : If (M, g) chooses f(θ) for all θ, then there exists a revealing mechanism which does the same job, namely (Θ, f).
Proof of the Revelation Principle :
• By assumption, there exists an n-tuple of dominant strategies m* such that g[m*(θ)] = f(θ) for all θ.
• Dominance means that, for all i and θi,
U[g(m*i(θi), m−i), θi] ≥ U[g(m′i, m−i), θi]
for all m′i ∈ M and all m−i ∈ Mⁿ⁻¹.
• So, in particular, for all i and θi,
U[g(m*i(θi), m*−i(θ−i)), θi] ≥ U[g(m*i(θ′i), m*−i(θ−i)), θi]
for all θ′i and all θ−i.
• Since g[m*(θ)] = f(θ), this gives, for all i and all θi,
U[f(θ), θi] ≥ U[f(θ′i, θ−i), θi]
for all θ′i and θ−i.
• We conclude that f is truthfully implementable by the direct revealing mechanism (Θ, f).
• Agents report their types θi ∈ Θ directly.
• Agents have no incentive to make false reports (in a very strong sense).
• There is no loss of generality in constraining optimal contracts to be incentive compatible (i.e., revealing).
Other equivalent interpretation :
If g : M → X implements f but is neither revealing nor direct, then g̃ = g ∘ m* is a direct and revealing mechanism that also implements f :
Θ −m*→ M −g→ X
g̃(·) = g ∘ m*(·)
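A toy sketch of this composition with finite sets (the message space, the outcome function g, and all numbers are invented for illustration; a single agent, n = 1, keeps it short): starting from an indirect mechanism (M, g) with a dominant strategy m*, the composed mechanism g̃ = g ∘ m* is direct and makes truth-telling optimal.

```python
THETA = ("lo", "hi")                  # type space Θ
M = ("a", "b", "c")                   # message space of the indirect mechanism

def g(m):
    """Indirect outcome function g : M -> X, with outcomes x = (q, t)."""
    return {"a": (1.0, 1.3), "b": (0.44, 0.67), "c": (0.0, 0.0)}[m]

def u(x, theta):
    """Agent's utility U(x; θ) = t − θq."""
    q, t = x
    return t - {"lo": 1.0, "hi": 1.5}[theta] * q

# With one agent, the dominant strategy m* is simply each type's best message.
m_star = {th: max(M, key=lambda m: u(g(m), th)) for th in THETA}

def g_tilde(report):
    """Direct mechanism g̃ = g ∘ m* : Θ -> X."""
    return g(m_star[report])

# (Θ, g̃) is revealing: no type gains by reporting the other type.
for th in THETA:
    for report in THETA:
        assert u(g_tilde(th), th) >= u(g_tilde(report), th)
print("m* =", m_star, "-> truth-telling is optimal in (Θ, g̃)")
```

The final assertion is exactly the conclusion of the proof above: because m* was already optimal message by message, reporting one's true type into g̃ reproduces the dominant-strategy play of the original mechanism.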