Online Appendix to
“Optimal Retirement Policies”
Pei Cheng Yu∗
December 12, 2016
1 Model with Time-Consistent Agents
In this online appendix, I explore an economy where some agents are time-consistent. The presence of TC agents causes distortions, because TC agents can follow through with their consumption plan, choosing the imaginary allocation and avoiding the threat allocation. This restricts the ability of the government to provide insurance.
By Theorem 3, it is without loss of generality to focus on TI agents with the same temptation β. The government is uncertain whether agents are TC (β = 1) or TI (β < 1), with probability Pr(TI) = φ. The TC agents know their consistency (they are sophisticated), while TI agents could be non-sophisticated. I will relax this assumption and discuss the case of paranoid TC agents, TC agents who believe they may be TI, in the last subsection of this section. I assume the distribution of productivity is independent of the agents' consistency. I will focus on the case with κ = 1, so the government puts all of the welfare weight on the planner. Thus, the welfare of both TC and TI agents is measured using the same utility.
∗ University of New South Wales (e-mail: [email protected])
1.1 Threat Mechanism with Time-Consistent Agents
Suppose TI agents have a fixed sophistication of $\hat{\beta} \in [\beta, 1)$. Then the government can implement a conditional commitment mechanism. The government introduces the menu $C^{TC}_m = \{(c^P_m, y^P_m); (c^D_m, y^D_m)\}$ for $\theta_m$ TC agents and the menu $C^{TI}_m = \{(c^R_m, y^R_m); (c^T_m, y^T_m)\}$ for $\theta_m$ TI agents. The allocation $(c^P_m, y^P_m)$ will be referred to as the persistent allocation; it is the allocation the government implements for TC agents of type $\theta_m$. The allocation $(c^D_m, y^D_m)$ is referred to as the deterrent allocation; it is meant to deter the TI agents from misreporting as $\theta_m$ TC agents. The deterrent allocation has $c^D_{m,2} < c^P_{m,2}$ and $c^D_{m,1} > c^P_{m,1}$, which appeals to the doer's present bias. This deters the TI agent's planner from misreporting as TC. The deterrent allocation can also be constructed so that the TC agents never prefer it over the persistent allocation.
The government offers the following at date $t = 0$: $C = \{C^{TC}_m, C^{TI}_m\}_{\theta_m \in \Theta}$. The planner first chooses a menu $C^j_m$ from $C$, where $j \in \{TC, TI\}$, and then an allocation from $C^j_m$ is chosen by the doer. The mechanism is meant to separate agents along two dimensions: productivity and consistency. Let
$$C^{1|\hat{\beta}}_{m'|m} = \left\{ (c_{m'}, y_{m'}) \in C^{TC}_{m'} \,\middle|\, (c_{m'}, y_{m'}) \in \arg\max_{(c'_{m'}, y'_{m'}) \in C^{TC}_{m'}} W_0(c'_{m'}, y'_{m'}; \theta_m) \right\}.$$
Hence, $C^{1|\hat{\beta}}_{m'|m}$ denotes the set of allocations a TI agent with productivity $\theta_m$ and sophistication level $\hat{\beta}$ predicts his doer will choose after he misreports to be a TC agent with productivity $\theta_{m'}$. Let
$$C^1_m = \left\{ (c_m, y_m) \in C^{TC}_m \,\middle|\, (c_m, y_m) \in \arg\max_{(c'_m, y'_m) \in C^{TC}_m} U_0(c'_m, y'_m; \theta_m) \right\}.$$
Hence, $C^1_m$ denotes the set of allocations a truth-telling TC agent would choose. Let $C^{\hat{\beta}}_m$ and $C^{\hat{\beta}}_{m'|m}$ be defined as before. The following definition defines a direct conditional commitment mechanism with TC agents.
Definition 1 A direct conditional commitment mechanism with TC agents has $C = \{C^{TC}_m, C^{TI}_m\}_{\theta_m \in \Theta}$, with $C^{TC}_m = \{(c^P_m, y^P_m); (c^D_m, y^D_m)\}$ and $C^{TI}_m = \{(c^R_m, y^R_m); (c^T_m, y^T_m)\}$, satisfying: (i.) credible threat constraints: $(c^R_m, y^R_m) \in C^{\hat{\beta}}_m$ and $(c^T_{m'}, y^T_{m'}) \in C^{\hat{\beta}}_{m'|m}$, $\forall \theta_m, \theta_{m'} \in \Theta$; (ii.) deterrent constraints: $(c^P_m, y^P_m) \in C^1_m$ and $(c^D_{m'}, y^D_{m'}) \in C^{1|\hat{\beta}}_{m'|m}$, $\forall \theta_m, \theta_{m'} \in \Theta$.
To define the set of truthfully implementable allocations that can be implemented by a direct conditional commitment mechanism with TC agents, note that the incentive compatibility constraints for the TI agents are, $\forall \theta_m, \theta_{m'} \in \Theta$,
$$\max_{(c_m, y_m) \in C^{\hat{\beta}}_m} U_0(c_m, y_m; \theta_m) \ge \max\left\{ \max_{(c_{m'}, y_{m'}) \in C^{\hat{\beta}}_{m'|m}} U_0(c_{m'}, y_{m'}; \theta_m), \; \max_{(c_{m'}, y_{m'}) \in C^{1|\hat{\beta}}_{m'|m}} U_0(c_{m'}, y_{m'}; \theta_m) \right\}. \tag{1}$$
By (1), it is optimal for the TI agents to report truthfully about their productivity and consistency. Let
$$C^{\hat{\beta}|1}_{m'|m} = \left\{ (c_{m'}, y_{m'}) \in C^{TI}_{m'} \,\middle|\, (c_{m'}, y_{m'}) \in \arg\max_{(c'_{m'}, y'_{m'}) \in C^{TI}_{m'}} U_0(c'_{m'}, y'_{m'}; \theta_m) \right\},$$
$$C^1_{m'|m} = \left\{ (c_{m'}, y_{m'}) \in C^{TC}_{m'} \,\middle|\, (c_{m'}, y_{m'}) \in \arg\max_{(c'_{m'}, y'_{m'}) \in C^{TC}_{m'}} U_0(c'_{m'}, y'_{m'}; \theta_m) \right\}.$$
Hence, $C^{\hat{\beta}|1}_{m'|m}$ denotes the set of allocations a TC agent with productivity $\theta_m$ would select from a menu meant for TI agents with productivity $\theta_{m'}$, while $C^1_{m'|m}$ denotes the set of allocations a TC agent would pick if he misreports his productivity as $\theta_{m'}$ and is truthful about his consistency. The incentive compatibility constraints for the TC agents are, $\forall \theta_m, \theta_{m'} \in \Theta$,
$$\max_{(c_m, y_m) \in C^1_m} U_0(c_m, y_m; \theta_m) \ge \max\left\{ \max_{(c_{m'}, y_{m'}) \in C^1_{m'|m}} U_0(c_{m'}, y_{m'}; \theta_m), \; \max_{(c_{m'}, y_{m'}) \in C^{\hat{\beta}|1}_{m'|m}} U_0(c_{m'}, y_{m'}; \theta_m) \right\}. \tag{2}$$
Incentive compatibility constraints (2) discourage the TC agents from misreporting about their productivity and consistency. The executability constraints for the real allocation are defined by (8).
Definition 2 The allocation $\{(c^P_m, y^P_m), (c^R_m, y^R_m)\}_{\theta_m \in \Theta}$ is truthfully implementable by a direct conditional commitment mechanism if there exists $\{(c^D_m, y^D_m), (c^T_m, y^T_m)\}_{\theta_m \in \Theta}$ such that (i.) incentive compatibility and (ii.) executability are satisfied.
The government maximizes welfare
$$\sum_{\theta_m \in \Theta} \pi_m \left[ \phi U_0\!\left(c^R_m, y^R_m; \theta_m\right) + (1-\phi)\, U_0\!\left(c^P_m, y^P_m; \theta_m\right) \right] \tag{3}$$
subject to the incentive compatibility constraints ((1) and (2)), executability constraints (8), credible threat constraints, deterrent constraints, and the feasibility constraint
$$\sum_{\theta_m \in \Theta} \left\{ \phi \pi_m \left[ \sum_{t=0}^{2} \left( y^R_{m,t} - c^R_{m,t} \right) \right] + (1-\phi)\, \pi_m \left[ \sum_{t=0}^{2} \left( y^P_{m,t} - c^P_{m,t} \right) \right] \right\} = 0, \tag{4}$$
where $y_2 = 0$.
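As a numerical sketch of how (3) and (4) fit together, the following Python snippet evaluates the objective and the resource balance for a toy economy; the utility function `U0`, the tuple layout, and all numbers are illustrative assumptions rather than the paper's specification.

```python
# Sketch of the planner's objective (3) and feasibility (4).
# Each allocation entry is (cR, yR, cP, yP, theta); all values hypothetical.

def welfare(alloc, pi, phi, U0):
    # Equation (3): population-weighted mix of TI (real) and
    # TC (persistent) lifetime utilities.
    total = 0.0
    for m, (cR, yR, cP, yP, theta) in enumerate(alloc):
        total += pi[m] * (phi * U0(cR, yR, theta)
                          + (1 - phi) * U0(cP, yP, theta))
    return total

def resource_balance(alloc, pi, phi):
    # Left-hand side of equation (4): expected net output across real and
    # persistent allocations; feasibility requires this to equal zero.
    net = 0.0
    for m, (cR, yR, cP, yP, _theta) in enumerate(alloc):
        net += phi * pi[m] * sum(y - c for y, c in zip(yR, cR))
        net += (1 - phi) * pi[m] * sum(y - c for y, c in zip(yP, cP))
    return net
```

With $y_{m,2} = 0$ imposed, the date-2 entry of each output vector is simply zero.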
The threat and deterrent allocations can help relax (1). As a result, the
government only needs to deter misreporting from the TC agents. The following theorem shows how the government takes advantage of the TI agents.
Theorem 1 There exists $\bar{\theta} > \theta_1$ such that (i.) for $\theta_m \ge \bar{\theta}$, $c^P_m \ge c^R_m$ with $y^R_m > y^P_m$; (ii.) for $\theta_1 < \theta_m < \bar{\theta}$, $(c^P_m, y^P_m) < (c^R_m, y^R_m)$; (iii.) $(c^P_1, y^P_1) = (c^R_1, y^R_1)$.
By Theorem 1, full insurance is no longer incentive compatible when TC
agents are in the economy. The high productivity (θm ≥ θ̄) TC agents require
information rents, so they have lower marginal utilities from consumption and
lower disutility from effective labor y.
Since TI agents are manipulable, the government exploits the higher productivity TI agents by requiring them to work more and consume less, which
increases the resources available for redistribution. As a result, TC agents
with θm ≥ θ̄ would never misreport to be TI agents of the same productivity.
For agents with lower productivity (θm < θ̄), the government would exploit
the TI agents by requiring them to work more, but also compensate them with
more consumption for insurance. The consumption is limited by the incentive
compatibility constraint for the higher productivity agents. This increase in
production more than offsets the increase in consumption, so it also increases
the resources available for redistribution. The government refrains from exploiting the TI agents with the lowest productivity by bunching them with
the TC agents, because they are already receiving the least utility and any
exploitation would only lead to less insurance. The only binding incentive
compatibility constraints are the downward adjacent incentive compatibility
constraints for the TC agents. For θm > θ1 , TI agents have strictly lower
lifetime utility than TC agents of the same productivity.
For any given $\{(c^R_m, y^R_m)\}_{\theta_m \in \Theta}$ and $\{(c^P_m, y^P_m)\}_{\theta_m \in \Theta}$, to construct the deterrent allocation, let $(c^D, y^D_m)$ with $y^D_m = y^P_m$ be the deterrent allocation satisfying
$$\min_{\theta_m, \theta_{m'} \in \Theta} W_0\!\left(c^D, y^D_m\right) \ge \max_{\theta_m, \theta_{m'} \in \Theta} W_0\!\left(c^P_m, y^P_m\right), \tag{5}$$
and
$$\min_{\theta_m \in \Theta} U_0\!\left(c^R_m, y^R_m; \theta_m\right) \ge \max_{\theta_m, \theta_{m'} \in \Theta} U_0\!\left(c^D, y^D_m\right). \tag{6}$$
By inequality (5), any TI agent who misreports his consistency level would
select the deterrent allocation over the persistent allocation. Inequality (6)
guarantees the TI agents would prefer to report their consistency truthfully. If
(6) is satisfied, the TC agents would never choose the deterrent allocations over
the persistent allocations. If (5) and (6) hold, then TI agents of all productivity
types would never misreport to be TC agents. Finally, the deterrent allocations
can always be constructed such that (5) and (6) are satisfied.1
1 To see how, first choose $c^D$ so that (6) holds. Next, increase $c^D_0$ and decrease $c^D_1, c^D_2$ such that $U_0(c^D, y^D_m)$ remains unchanged; since $1 > \hat{\beta}$, it is then possible to find $c^D$ such that (5) holds.
Let the intertemporal wedge be defined as $\tau^C_t = 1 - \frac{u'_t(c_t)}{u'_{t+1}(c_{t+1})}$, and the labor wedge as $\tau^L_t = 1 + \frac{U_{t,y}\left(c, \frac{y}{\theta}\right)}{U_{t,c}\left(c, \frac{y}{\theta}\right)}$. The following theorem characterizes the optimal allocation and wedges in an environment with TC agents.
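Written out for a separable utility $u_t(c_t) - h_t(y_t/\theta)$, the two wedges can be computed directly; the log consumption utility and quadratic effort cost below are illustrative assumptions, not the paper's functional forms.

```python
# The wedges above under assumed functional forms:
# u(c) = log c (so u'(c) = 1/c) and h(l) = l^2 / 2 (so h'(l) = l).

def intertemporal_wedge(c_t, c_next):
    # tau^C_t = 1 - u'_t(c_t) / u'_{t+1}(c_{t+1})
    return 1.0 - (1.0 / c_t) / (1.0 / c_next)

def labor_wedge(c_t, y_t, theta):
    # tau^L_t = 1 + U_{t,y} / U_{t,c}, with
    # U_y = -h'(y/theta)/theta and U_c = u'(c) = 1/c
    return 1.0 + (-(y_t / theta) / theta) / (1.0 / c_t)

# Perfect smoothing (c_t = c_{t+1}) gives tau^C = 0, and an undistorted
# labor margin (u'(c) = h'(y/theta)/theta) gives tau^L = 0.
```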
Theorem 2 For a conditional commitment mechanism, if $\phi \in (0,1)$ and $\hat{\beta} \in (\beta, 1)$, the optimal allocation has the following properties: for any $t < 2$, (i.) $\tau^C_t = 0$ for all agents; (ii.) $\tau^L_t = 0$ for all TI agents with $\theta_m \ge \bar{\theta}$ and for $\theta_M$ TC agents; (iii.) $\tau^L_t \ge 0$ for all TI agents with $\theta_m < \bar{\theta}$, and $\tau^L_t > 0$ for all TC agents with $\theta_m < \theta_M$.
The usual trade-off between insurance and output efficiency is present in this economy. Theorem 2 demonstrates how the output of the less productive agents is distorted downwards. This is standard in Mirrlees taxation. The government is able to provide consumption smoothing for all agents.
The next corollary shows that as long as β̂ ∈ (β, 1), the conditional commitment mechanism can implement the same optimal allocation for all levels of sophistication. This follows from the fact that the optimal allocation in a conditional commitment mechanism does not depend on the sophistication level of the TI agents.
Corollary 1 In a conditional commitment mechanism, the optimal allocation
is the same for any sophistication level β̂ ∈ [β, 1).
The usual Mirrlees taxation with TC agents occurs when φ = 0, and the
constrained efficient optimum is achieved. This paper has shown that the full
information efficient optimum is attainable when the economy is populated
by TI agents (φ = 1). Let W T (φ, β̂) denote the welfare under a conditional
commitment mechanism with measure φ of TI agents with sophistication level
β̂ ∈ [β, 1). It is natural to presume that social welfare increases as the proportion of TI agents increases, as shown in Figure 4.
Theorem 3 W T (φ, β̂) increases continuously with φ from the constrained efficient optimum to the full information efficient optimum.
As the mass of TI agents increases, the first order effect is an increase
in the available resources for redistribution, which comes from the allocation
patterns in Theorem 1. The second order effect is that with fewer TC agents, the government can provide each TC agent more information rent using fewer
resources. This relaxes the incentive compatibility constraints (2). Hence, the
government is able to provide better insurance with more TI agents.
Also of note, Theorem 3 shows that continuity in welfare can be achieved
with respect to the proportion of TC agents. However, in an economy with
homogeneous temptation β, there is a discontinuity in welfare as β → 1. This
discontinuity is due to the assumption on utility ut . Since ut is unbounded
below and above, the government is always able to construct extreme threats
to deter TI agents.
1.2 Betting Mechanism with Time-Consistent Agents
The government can also implement a betting mechanism when $\hat{\beta} \in (\beta, 1)$. The government introduces the following menu for agents of productivity $\theta_m$: $C_m = \{C^{TC}_m, C^{TI}_m\}$, where $C^{TC}_m$ consists of the persistent and deterrent allocations and $C^{TI}_m$ consists of the real and imaginary allocations.
Definition 3 A direct betting mechanism with TC agents has $C = \{C^{TC}_m, C^{TI}_m\}_{\theta_m \in \Theta}$, with $C^{TC}_m = \{(c^P_m, y^P_m); (c^D_m, y^D_m)\}$ and $C^{TI}_m = \{(c^R_m, y^R_m); (c^I_m, y^I_m)\}$, satisfying: (i.) fooling constraints: $(c^I_m, y^I_m) \in C^{\hat{\beta}}_m$ and $(c^I_{m'}, y^I_{m'}) \in C^{\hat{\beta}}_{m'|m}$, $\forall \theta_m, \theta_{m'} \in \Theta$; (ii.) deterrent constraints: $(c^P_m, y^P_m) \in C^1_m$ and $(c^D_{m'}, y^D_{m'}) \in C^{1|\hat{\beta}}_{m'|m}$, $\forall \theta_m, \theta_{m'} \in \Theta$.
The allocations that are truthfully implementable by a direct betting mechanism with TC agents are bounded by the incentive compatibility constraints (1) and (2) and the executability constraints (8). The definition of truthfully implementable allocations in this mechanism is similar to Definition 2.
Definition 4 The allocation $\{(c^P_m, y^P_m), (c^R_m, y^R_m)\}_{\theta_m \in \Theta}$ is truthfully implementable by a direct betting mechanism if there exists $\{(c^D_m, y^D_m), (c^I_m, y^I_m)\}_{\theta_m \in \Theta}$ such that (i.) incentive compatibility and (ii.) executability are satisfied.
The following theorem shows that, for a given (φ, β̂), the government is indifferent between implementing a betting or a conditional commitment mechanism when agents are partially naïve.

Theorem 4 For β̂ ∈ (β, 1), the optimal allocation in a direct betting mechanism is equivalent to the optimal allocation in a direct conditional commitment mechanism.
Theorem 4 implies that the properties derived for the optimal allocation in a conditional commitment mechanism with TC agents also apply to the betting mechanism. Combined with Corollary 1, for a given φ, the optimal allocation and welfare are the same for any β̂ ∈ (β, 1), regardless of whether a conditional commitment or a betting mechanism is implemented. The caveat is that this result only holds when TI agents are partially naïve, β̂ ∈ (β, 1).
1.2.1 Fully Naïve Time-Inconsistent Agents
Separation in consistency is no longer possible when $\hat{\beta} = 1$. Here, I abstract away from life-cycle concerns and assume $u_t = u$ and $h_t = h$ to achieve a sharper characterization of the optimal allocations. When $\hat{\beta} = 1$, the government introduces the following menu at date $t = 0$: $C = \{C_m\}_{\theta_m \in \Theta}$, with $C_m = \{(c^R_m, y^R_m), (c^P_m, y^P_m)\}$. The government is unable to separate the TC agents from the TI agents at date $t = 0$, so it expects agents of the same productivity to 'mentally' pick the same menu. After preference reversal, the government expects the TI agents to choose the real allocations and the TC agents to choose the persistent allocations. Here, the persistent allocations serve a similar purpose for the TI agents as the imaginary allocations. TI agents evaluate their reporting strategy using the persistent allocations, but consume the real allocations instead.
The definition of a direct betting mechanism for full naiveté with TC agents
and the corresponding definition of truthful implementation is presented below.
Definition 5 A direct betting mechanism with TC agents and $\hat{\beta} = 1$ has $C = \{C_m\}_{\theta_m \in \Theta}$, with $C_m = \{(c^R_m, y^R_m), (c^P_m, y^P_m)\}$ satisfying the fooling constraints: $(c^P_m, y^P_m) \in C^{\hat{\beta}}_m$ and $(c^P_{m'}, y^P_{m'}) \in C^{\hat{\beta}}_{m'|m}$, $\forall \theta_m, \theta_{m'} \in \Theta$. The allocation $\{(c^R_m, y^R_m), (c^P_m, y^P_m)\}_{\theta_m \in \Theta}$ is truthfully implementable by a direct betting mechanism if the allocation satisfies the incentive compatibility constraints (7) and the executability constraints (8).
Note that since β̂ = 1, the incentive compatibility constraints (7) are the same for the TC and TI agents. The government maximizes welfare (3) subject to the fooling constraints, the incentive compatibility constraints (7), the executability constraints (8), and feasibility (4). The following theorem characterizes the intertemporal wedge in a fully naïve environment.
Theorem 5 If $\phi \in (0,1)$ and $\hat{\beta} = 1$, the intertemporal wedge has the following properties: (i.) $\tau^C_0 < 0$ for all TC agents with $\theta_m > \theta_1$ and $\tau^C_0 > 0$ for all TI agents with $\theta_m > \theta_1$; and (ii.) $\tau^C_t = 0$ for all agents at $t = 1$.
Theorem 5 shows how the government provides consumption smoothing in expectation for agents with θm > θ1. In essence, the best the government can do is to require the TC agents to over-save and the TI agents to under-save at t = 0, when the fooling occurs. This is true even for the most productive agents, so there is distortion at the top. This distortion is due to the binding executability constraints. If the government implemented the allocation described in Theorem 1, the executability constraints would be violated, since the TI agents would pick the persistent allocation. This stems from the government's inability to separate the TI agents from the TC agents, so the persistent allocation acts as a proxy for the imaginary allocation when TI agents are fully naïve.
1.3 Incentives for Self-Awareness
Comparing the optimal welfare for each sophistication level helps answer the question of whether the government wants to raise the TI agents' awareness of their present bias. It is clear from the setting where all agents were TI that the government is indifferent among the various sophistication levels of the agents. However, when TC agents are present, this is no longer the case.
Theorem 4 and Corollary 1 show that when agents are aware of their present bias, the government is indifferent between implementing a conditional commitment mechanism or a betting mechanism; both achieve the same optimal allocation and welfare regardless of the sophistication level of the TI agents. Thus, the question is whether it is in the best interest of the government to alert the fully naïve agents of their present bias.
For a given φ, let W(β̂ < 1) denote the social welfare when agents are sophisticated or partially naïve, and W(β̂ = 1) the welfare when agents are fully naïve. The following theorem shows that the government would want the TI agents to be at least partially naïve.
Theorem 6 If φ ∈ (0, 1), then W(β̂ < 1) > W(β̂ = 1).
The difference in welfare in Theorem 6 is due to the inability of the government to separate the fully naïve TI agents from the TC agents, while separation is possible if TI agents are at least partially naïve. The inability to separate along consistency causes two problems. First, the government is unable to design a deterrent allocation to dissuade TI agents from mimicking TC agents. Second, the persistent allocation serves as a proxy for the imaginary allocation, so the allocations used for fooling the TI agents are no longer empty promises and are not off the equilibrium path. As a result, when β̂ = 1, the optimal allocation distorts the intertemporal wedge, which is an additional distortion not present when TI agents are at least partially naïve.
Theorem 6 has new implications for education policy toward TI agents. Educating non-sophisticated TI agents is only weakly welfare increasing. This result contrasts with the policy recommendations from the literature.2
2 Previous literature has focused on the pitfalls of being non-sophisticated. For example, behavioral contract theory has focused on how limited awareness of self control problems makes consumers vulnerable to exploitation by firms.
1.4 Paranoia
The previous analysis assumed the TC agents were sophisticated (β̂ = β = 1). Sophisticated TC agents do not respond to threats or empty promises, and they require strictly positive information rents for truth-telling. Here, I briefly discuss how paranoia affects the behavior of TC agents, and consequently how government policy can be adjusted to exploit it.
A paranoid agent is a non-sophisticated TC agent who believes he is time-inconsistent. Paranoid agents respond to threats, and they can also be fooled. It is easy to see that a conditional commitment mechanism can extract all information rents from paranoid agents, since the government can construct the threat allocation using methods similar to those used for the TI agents. The betting mechanism is more subtle.
To see how a betting mechanism achieves the full information optimum in an economy with only paranoid agents, $\hat{\beta} < \beta = 1$, consider the example with two productivity types $\Theta = \{\theta_L, \theta_H\}$. Let $C_m = \{(c^*, y^*_m), (c^I_m, y^*_m)\}$ with $c^I_{m,0} = c^*_0$ and $c^I_H = c^*$. The allocations satisfy the fooling constraints
$$u_1\!\left(c^I_{m,1}\right) + \hat{\beta} u_2\!\left(c^I_{m,2}\right) \ge u_1\!\left(c^*_1\right) + \hat{\beta} u_2\!\left(c^*_2\right),$$
the executability constraints
$$u_1\!\left(c^*_1\right) + u_2\!\left(c^*_2\right) \ge u_1\!\left(c^I_{m,1}\right) + u_2\!\left(c^I_{m,2}\right),$$
and the incentive compatibility constraints, which imply the following:
$$\sum_{t=0}^{1}\left[h_t\!\left(\frac{y^*_{H,t}}{\theta_L}\right) - h_t\!\left(\frac{y^*_{L,t}}{\theta_L}\right)\right] \ge \left[u_1\!\left(c^I_{H,1}\right) + u_2\!\left(c^I_{H,2}\right)\right] - \left[u_1\!\left(c^I_{L,1}\right) + u_2\!\left(c^I_{L,2}\right)\right] \ge \sum_{t=0}^{1}\left[h_t\!\left(\frac{y^*_{H,t}}{\theta_H}\right) - h_t\!\left(\frac{y^*_{L,t}}{\theta_H}\right)\right].$$
The fooling and executability constraints imply $c^I_{m,1} > c^*_1$ and $c^I_{m,2} < c^*_2$. Combined with the incentive compatibility constraints, it must be that $c^I_{L,1} > c^I_{H,1}$ with $c^I_{L,2} < c^I_{H,2}$. In essence, the government fools the paranoid agents by choosing the imaginary allocations to exacerbate their fears. A paranoid agent would predict choosing the imaginary allocation even though he is strictly worse off by choosing it, because he does not think the real allocation is attainable. The government takes advantage of this by making the imaginary allocation for the $\theta_L$ agent even worse. Hence, the paranoid $\theta_H$ agent produces efficiently because he is afraid that by misreporting, he would have even less savings.
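The role of the fooling and executability constraints in this example can be illustrated numerically; the log utility, the sophistication level $\hat{\beta} = 0.6$, and the candidate allocations below are assumptions made for the sketch.

```python
# With u = log and beta_hat = 0.6 (both assumed), an imaginary allocation
# satisfying the fooling constraint while remaining executable must
# front-load consumption: c^I_1 > c*_1 and c^I_2 < c*_2.
import math

beta_hat = 0.6
c_star = (1.0, 1.0)

def fooling_ok(cI):
    # u1(c^I_1) + beta_hat * u2(c^I_2) >= u1(c*_1) + beta_hat * u2(c*_2)
    return (math.log(cI[0]) + beta_hat * math.log(cI[1])
            >= math.log(c_star[0]) + beta_hat * math.log(c_star[1]))

def executable_ok(cI):
    # u1(c*_1) + u2(c*_2) >= u1(c^I_1) + u2(c^I_2)
    return (math.log(c_star[0]) + math.log(c_star[1])
            >= math.log(cI[0]) + math.log(cI[1]))

front_loaded = (1.3, 0.7)
assert fooling_ok(front_loaded) and executable_ok(front_loaded)

back_loaded = (0.7, 1.3)   # cannot even fool the present-biased doer
assert not fooling_ok(back_loaded)
```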
The way the betting mechanism works for paranoid agents stands in stark contrast to the logic presented in the previous sections. Non-sophisticated TI agents were fooled by empty promises; paranoid agents are fooled by empty threats.3
If the economy has both paranoid and TI agents, then using a betting
mechanism could be problematic for the government. For example, imagine an economy with paranoid TC agents with incorrect belief β̂ < 1, which
corresponds to the belief of the non-sophisticated TI agents with temptation
β < β̂. If the government tries to fool the agents, then it is not possible for
the government to separate agents along consistency level. In this particular
case, depending on who the government chooses to fool, either the paranoid
TC agent would end up selecting the imaginary allocation used to fool the TI
agents or the TI agent would choose the imaginary allocation used to fool the
TC agents. The resulting welfare would be lower compared to when TC agents
are sophisticated. This is analogous to the case with sophisticated TC agents
and fully naı̈ve TI agents. However, such a problem does not arise when the
government uses a conditional commitment mechanism. This is because the
same threats for TI agents can also deter paranoid agents from misreporting.4
3 In the previous sections, it was sufficient to set $(c^I_L, k^I_L, y^I_L) = (c^*, k^*, y^*_L)$. For paranoid agents, however, it must be the case that $(c^I_L, k^I_L, y^I_L) \neq (c^*, k^*, y^*_L)$, but the government can set $(c^I_H, k^I_H, y^I_H) = (c^*, k^*, y^*_H)$.
4 This is true because the credible threat constraints and the incentive compatibility constraints are the same for both the TI and TC agents who share the same beliefs. Finally, the executability constraint is more relaxed for the TC agents than for the TI agents.
2 Proofs for Model with Time-Consistent Agents
Proof of Theorem 1:
Notice that the deterrent and threat allocations can deter TI agents of any productivity type from misreporting. Hence, only the incentive compatibility constraints for the TC agents need to be considered. The proof focuses on a relaxed problem where, for within-consistency deviations, only the local downward incentive compatibility constraints are considered, along with a monotonicity constraint on $y^P$: $y^P_m \le y^P_{m+1}$ for all $\theta_m \in \Theta$.
The incentive compatibility constraints are non-smooth. The problem can be reformulated by introducing a vector of new variables $x = (x_2, \ldots, x_M)$ so that the constraints are smooth. In essence, the incentive compatibility constraints are rewritten as, $\forall \theta_m, \theta_{m'} \in \Theta$,
$$U_0\!\left(c^P_m, y^P_m; \theta_m\right) \ge x_m, \tag{7}$$
$$x_m - U_0\!\left(c^P_{m-1}, y^P_{m-1}; \theta_m\right) \ge 0, \tag{8}$$
$$x_m - U_0\!\left(c^R_{m'}, y^R_{m'}; \theta_m\right) \ge 0. \tag{9}$$
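The reformulation in (7)-(9) is the standard epigraph trick: a non-smooth constraint "truthful payoff ≥ max of deviation payoffs" is replaced by an auxiliary variable bounded below by each deviation payoff. A minimal sketch:

```python
# Epigraph reformulation: U0_truth >= max(deviations) is equivalent to
# the smooth system U0_truth >= x_m, x_m - d >= 0 for each deviation d,
# for a suitable choice of x_m (here x_m = max(deviations)).

def nonsmooth_ic(u_truth, deviations):
    return u_truth >= max(deviations)

def smooth_ic(u_truth, deviations):
    x_m = max(deviations)   # the tightest feasible auxiliary variable
    return u_truth >= x_m and all(x_m - d >= 0 for d in deviations)

# The two formulations agree on every instance:
for u_truth, devs in [(1.0, [0.2, 0.9]), (0.5, [0.2, 0.9])]:
    assert nonsmooth_ic(u_truth, devs) == smooth_ic(u_truth, devs)
```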
Let $\lambda_m$ denote the Lagrange multiplier associated with inequality (7), $\lambda^{TC}_m$ the multiplier associated with inequality (8), and $\lambda^{TI}_{m'|m}$ the multiplier associated with inequality (9). Let $\mu$ be the multiplier associated with the feasibility constraint. Finally, denote by $\gamma_{m,t}$ the multiplier on the monotonicity constraint. The Lagrangian for the problem is
$$\begin{aligned}
\mathcal{L} = {} & \sum_{\theta_m} \pi_m \left[ \phi U_0\!\left(c^R_m, y^R_m; \theta_m\right) + (1-\phi)\, U_0\!\left(c^P_m, y^P_m; \theta_m\right) \right] \\
& + \sum_{\theta_m} \lambda_m \left[ U_0\!\left(c^P_m, y^P_m; \theta_m\right) - x_m \right] + \sum_{\theta_m} \lambda^{TC}_m \left[ x_m - U_0\!\left(c^P_{m-1}, y^P_{m-1}; \theta_m\right) \right] \\
& + \sum_{\theta_m} \sum_{\theta_{m'}} \lambda^{TI}_{m'|m} \left[ x_m - U_0\!\left(c^R_{m'}, y^R_{m'}; \theta_m\right) \right] + \sum_{t=0}^{1} \sum_{\theta_m} \gamma_{m,t} \left( y^P_{m,t} - y^P_{m-1,t} \right) \\
& + \mu \sum_{\theta_m} \left\{ \phi \pi_m \left[ \sum_{t=0}^{2} \left( y^R_{m,t} - c^R_{m,t} \right) \right] + (1-\phi)\, \pi_m \left[ \sum_{t=0}^{2} \left( y^P_{m,t} - c^P_{m,t} \right) \right] \right\}.
\end{aligned}$$
The first order conditions for the real allocations are
$$\left( \phi \pi_m - \sum_{\theta_{m'}} \lambda^{TI}_{m|m'} \right) u'_t\!\left(c^R_{m,t}\right) = \phi \pi_m \mu,$$
$$\phi \pi_m \frac{1}{\theta_m} h'_t\!\left(\frac{y^R_{m,t}}{\theta_m}\right) - \sum_{\theta_{m'}} \lambda^{TI}_{m|m'} \frac{1}{\theta_{m'}} h'_t\!\left(\frac{y^R_{m,t}}{\theta_{m'}}\right) = \phi \pi_m \mu.$$
The first order conditions for the persistent allocations are
$$\left[ (1-\phi)\pi_m + \lambda_m - \lambda^{TC}_{m+1} \right] u'_t\!\left(c^P_{m,t}\right) = (1-\phi)\pi_m \mu,$$
$$\left[ (1-\phi)\pi_m + \lambda_m \right] \frac{1}{\theta_m} h'_t\!\left(\frac{y^P_{m,t}}{\theta_m}\right) - \lambda^{TC}_{m+1} \frac{1}{\theta_{m+1}} h'_t\!\left(\frac{y^P_{m,t}}{\theta_{m+1}}\right) = (1-\phi)\pi_m \mu + \gamma_{m,t} - \gamma_{m+1,t}.$$
The first order condition on $x_m$ gives the relationship among the Lagrange multipliers
$$\lambda_m = \lambda^{TC}_m + \sum_{\theta_{m'}} \lambda^{TI}_{m'|m}.$$
The Kuhn-Tucker conditions for a solution are the first order conditions above and the complementary slackness conditions. Note that perfect consumption smoothing is implemented, so a consumption or output reversal is not optimal: for example, $c^R_{m,t} > c^P_{m,t}$ with $c^R_{m,t+1} < c^P_{m,t+1}$ cannot arise.
Suppose for some $\theta_m$, $(c^R_m, y^R_m) \neq (c^P_m, y^P_m)$; then there are five possible cases: (i.) $c^R_m < c^P_m$ and $y^R_m < y^P_m$; (ii.) $c^R_m > c^P_m$ and $y^R_m > y^P_m$; (iii.) $c^P_m > c^R_m$ and $y^R_m > y^P_m$; (iv.) $c^P_m = c^R_m$ and $y^R_m > y^P_m$; and (v.) $c^P_m > c^R_m$ and $y^R_m = y^P_m$. Note that $c^P_m \le c^R_m$ with $y^P_m \ge y^R_m$ is not possible because it violates incentive compatibility.
Before continuing on with the proof, the following lemmas help characterize
the constrained optimum.
Lemma 1 For all t < 2 and θm > θ1 , γm,t = 0.
Proof Suppose for some $t < 2$ and $\theta_m > \theta_1$, $\gamma_{m,t} > 0$. This implies $y^P_{m,t} = y^P_{m-1,t}$. By local downward incentive compatibility, $c^P_{m,t} > c^P_{m-1,t}$. However, with consumption smoothing, this violates local upward incentive compatibility for $\theta_{m-1}$.
Lemma 2 For all $\theta_m > \theta_1$, $U_0\!\left(c^P_m, y^P_m; \theta_m\right) = U_0\!\left(c^P_{m-1}, y^P_{m-1}; \theta_m\right)$.

Proof Suppose there exists $\theta_m > \theta_1$ such that the local downward incentive compatibility constraint is slack. This implies $\lambda^{TC}_m = 0$. By the first order conditions, $\mu \ge u'_t\!\left(c^P_{m-1,t}\right)$ for all $t$. By Lemma 1, it follows that $c^P_{m,t} > c^P_{m-1,t}$, which implies $\mu > u'_t\!\left(c^P_{m,t}\right)$ for all $t$. From the first order conditions, this means that $\sum_{\theta_{m'}} \lambda^{TI}_{m'|m} > \lambda^{TC}_{m+1}$. Hence, there exists $\theta_{m'}$ such that $\lambda^{TI}_{m'|m} > 0$ and $U_0\!\left(c^P_m, y^P_m; \theta_m\right) = U_0\!\left(c^R_{m'}, y^R_{m'}; \theta_m\right)$. By incentive compatibility, $U_0\!\left(c^P_{m-1}, y^P_{m-1}; \theta_{m-1}\right) = U_0\!\left(c^R_{m'}, y^R_{m'}; \theta_{m-1}\right)$. This implies $c^R_{m'} > c^P_{m-1}$ and $y^R_{m'} > y^P_{m-1}$. Therefore, for any $t$, $\mu > u'_t\!\left(c^R_{m',t}\right)$. However, when $\lambda^{TI}_{m'|m} > 0$, the first order condition implies $\mu < u'_t\!\left(c^R_{m',t}\right)$. This is a contradiction.
The next few lemmas help characterize the five cases.
Lemma 3 Cases (i.) and (v.) are never optimal.
Proof For $c^P_m > c^R_m$ and $y^P_m > y^R_m$, it must be true that $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} > 0$ and $\lambda^{TI}_{m|m'} = 0$ for all $\theta_{m'} > \theta_m$. For if $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} = 0$, then $c^P_m > c^R_m$ implies $y^P_m < y^R_m$. Also, if $\lambda^{TI}_{m|m'} > 0$ for some $\theta_{m'} > \theta_m$ while $c^P_m > c^R_m$ and $y^P_m > y^R_m$, then the type $\theta_m$ TC agent would strictly prefer to misreport as a $\theta_m$ TI agent. Finally, it must be the case that $\lambda^{TI}_{m|m} > 0$ and $\lambda^{TI}_{m|m'} = 0$ for all $\theta_{m'} < \theta_m$; for if $\lambda^{TI}_{m|m'} > 0$ for some $\theta_{m'} < \theta_m$, the government can raise welfare by increasing $y^R_m$ and $c^R_m$. Hence, the first order conditions for consumption imply
$$\frac{\lambda_m - \lambda^{TC}_{m+1}}{1-\phi} > -\frac{\lambda^{TI}_{m|m}}{\phi}.$$
From the first order conditions on output and the fact that $h_t(\cdot)$ is strictly convex with $\theta_{m+1} > \theta_m$, the following inequality must hold at the optimum for any $t < 2$:
$$\left( 1 - \frac{\lambda^{TI}_{m|m}}{\phi \pi_m} \right) \frac{1}{\theta_m} h'_t\!\left(\frac{y^R_{m,t}}{\theta_m}\right) > \left( 1 + \frac{\lambda_m - \lambda^{TC}_{m+1}}{(1-\phi)\pi_m} \right) \frac{1}{\theta_m} h'_t\!\left(\frac{y^P_{m,t}}{\theta_m}\right) + \frac{\gamma_{m+1,t} - \gamma_{m,t}}{(1-\phi)\pi_m}.$$
Thus, for $y^P_m > y^R_m$, it must be that $\gamma_{m,t} > \gamma_{m+1,t}$. By Lemma 1, this is not optimal, which rules out case (i.).
For $c^P_m > c^R_m$ and $y^R_m = y^P_m$, by Lemma 2, it must be the case that $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} = 0$. By the first order conditions on consumption, $\lambda_m > \lambda^{TC}_{m+1}$. This implies that $y^R_m > y^P_m$, which rules out case (v.).
Lemma 4 If for any $t$, $u'_t\!\left(c^P_{m,t}\right) \le \mu$, then $c^P_{m,t} \ge c^R_{m,t}$ and $y^R_{m,t} > y^P_{m,t}$.

Proof From the first order conditions on consumption, for any $t$, if $u'_t\!\left(c^P_{m,t}\right) \le \mu$, then $\lambda_m \ge \lambda^{TC}_{m+1}$. Since $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} \ge 0$, this implies that $c^P_{m,t} \ge c^R_{m,t}$, with strict inequality if $u'_t\!\left(c^P_{m,t}\right) < \mu$. From the first order conditions on output, the following holds:
$$\frac{1}{\theta_m} h'_t\!\left(\frac{y^R_{m,t}}{\theta_m}\right) > \left( 1 + \frac{\lambda_m - \lambda^{TC}_{m+1}}{(1-\phi)\pi_m} \right) \frac{1}{\theta_m} h'_t\!\left(\frac{y^P_{m,t}}{\theta_m}\right).$$
Since $\lambda_m \ge \lambda^{TC}_{m+1}$, it must be the case that $y^R_{m,t} > y^P_{m,t}$.
Lemma 4 shows that when the consumption of the $\theta_m$ TC agent is large, cases (iii.) and (iv.) occur at the optimum. Note that by Lemma 2, $\theta_M$ belongs to the third case.
Lemma 5 If for any $t$, $u'_t\!\left(c^P_{m,t}\right) > \mu$, then $c^P_{m,t} < c^R_{m,t}$ and $y^P_{m,t} < y^R_{m,t}$ for $\theta_m > \theta_1$, and $(c^R_1, y^R_1) = (c^P_1, y^P_1)$.
Proof Suppose $u'_t\!\left(c^P_{m,t}\right) > \mu$ and $c^P_{m,t} > c^R_{m,t}$; then $\lambda^{TC}_{m+1} > \lambda_m$ and
$$\frac{\lambda_m - \lambda^{TC}_{m+1}}{1-\phi} > -\frac{\sum_{\theta_{m'}} \lambda^{TI}_{m|m'}}{\phi}.$$
By Lemma 3, the output must be $y^P_m > y^R_m$. This implies $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} = 0$, so $\lambda_m > \lambda^{TC}_{m+1}$, which is a contradiction. Hence, if $u'_t\!\left(c^P_{m,t}\right) > \mu$, then $c^P_m \le c^R_m$, and due to incentive compatibility, $y^R_m \ge y^P_m$.
Next, notice that if for some $\theta_m > \theta_1$, $(c^R_m, y^R_m) = (c^P_m, y^P_m)$, then the government has a welfare-improving deviation for the real allocations. Consider the following small deviation in the real allocation: $\tilde{c}^R_{m,t} = c^P_{m,t} + \alpha$ and $\tilde{y}^R_{m,t} = y^P_{m,t} + \epsilon$, with $\alpha = \left[ u'_t\!\left(c^P_{m,t}\right) \right]^{-1} \frac{1}{\theta_{m+1}} h'_t\!\left(\frac{y^P_{m,t}}{\theta_{m+1}}\right) \epsilon$. This is still incentive compatible, and the gain in welfare from a higher net output more than offsets the loss in utility for the type $\theta_m$ TI agent. Notice that if this deviation is welfare improving for $\theta_1$ TI agents, then it is also welfare improving for $\theta_1$ TC agents, which means the persistent allocation is not optimal. This is due to the lack of binding downward incentive compatibility constraints. Hence, at the optimum, $(c^R_1, y^R_1) = (c^P_1, y^P_1)$. This completes the proof.
Finally, due to Lemma 1, it must be the case that only high types belong
to the third case, and lower types belong to the second case. This establishes
the existence of a cutoff θ̄.
Proof of Theorem 2:
Part (i) is immediate from the first order conditions.
For part (ii), if $\theta_m \ge \bar{\theta}$, then by Theorem 1 the TI agents are strictly worse off than any TC agents. Hence, from the first order conditions, $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} = 0$ and the result follows. For $\theta_M$ TC agents, $\lambda^{TC}_{M+1} = 0$, and the result follows.
For part (iii), if $\theta_m < \bar{\theta}$, then by Theorem 1 the TC agents are either separated from the TI agents or bunched with the TI agents of the same productivity. If the TI agents are separated from the TC agents, then from the first order conditions it must be that $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} \ge 0$, which implies $u'_t\!\left(c^R_{m,t}\right) \ge \frac{1}{\theta_m} h'_t\!\left(\frac{y^R_{m,t}}{\theta_m}\right)$. It is thus immediate that $\tau^L_t \ge 0$. If the TC agents are bunched with TI agents, then by Lemma 2 the standard downward distortion applies, $\tau^L_t > 0$, which also holds for any TC agent with $\theta_m < \theta_M$.
Proof of Corollary 1:
By Lemma 1, threats used to deter non-sophisticated agents from misreporting productivity can also be used to deter sophisticated agents. The
same also applies to the deterrent allocations. Hence, by adjusting the
deterrent allocation, the incentive compatibility constraints do not vary with
for β̂ ∈ [β, 1) .
Proof of Theorem 3: Let $\left\{ \left(c^R_m(\phi), y^R_m(\phi)\right), \left(c^P_m(\phi), y^P_m(\phi)\right) \right\}_{\theta_m \in \Theta}$ denote the constrained optimal allocations implemented when $\Pr(TI) = \phi$. Now consider an environment with $\Pr(TI) = \phi' > \phi$. First note that $\left\{ \left(c^R_m(\phi), y^R_m(\phi)\right), \left(c^P_m(\phi), y^P_m(\phi)\right) \right\}_{\theta_m \in \Theta}$ remains incentive compatible. Next consider the following stochastic mechanism: if an agent reports as $\theta_m$ TI, then on the equilibrium path, with probability $\frac{\phi'-\phi}{\phi'}$ the agent is assigned $\left(c^P_m(\phi), y^P_m(\phi)\right)$ and with probability $\frac{\phi}{\phi'}$ he is assigned $\left(c^R_m(\phi), y^R_m(\phi)\right)$. If an agent reports as $\theta_m$ TC, then he is assigned the menu for TC agents. The deterrent and threat allocations are unchanged.
Note that the resource constraint is satisfied, and the deterrent, threat
and executability constraints also hold. Also, the local downward incentive
compatibility constraints for the persistent allocations are trivially satisfied.
Finally, notice that by Theorem 1, for any θm , θm0 ∈ Θ the following holds
R
P
≥ U 0 cR
U0 cPm0 (φ) , ym
0 (φ) ; θm0
m (φ) , ym (φ) ; θm0 ,
P
P
(φ) ; θm0 .
U0 cPm0 (φ) , ym
≥ U0 cPm (φ) , ym
0 (φ) ; θm0
0 P
P
As a result, U0 cPm0 (φ) , ym
≥ φ φ−φ
U0 cPm (φ) , ym
(φ) ; θm0 +
0 (φ) ; θm0
0
φ
R
R
0 , the stochastic mechanism is also incentive compatiU
c
(φ)
,
y
(φ)
;
θ
0
m
0
m
m
φ
ble. Therefore, the stochastic mechanism guarantees a welfare at least W T (φ)
18
when Pr (T I) = φ0 .
Next, it is possible to construct a deterministic mechanism that delivers
the same utility for all agents while relaxing the resource constraint. For the
deterministic mechanism, the TC agents still receive the same menu, the deterrent and threat allocations remain the same, and the $\theta_m$ TI agents consume
$\left(\tilde c^R_m, \tilde y^R_m\right)$ on the equilibrium path, where for any $t$,
\[
u_t\!\left(\tilde c^R_{m,t}\right) = \frac{\phi'-\phi}{\phi'} u_t\!\left(c^P_{m,t}(\phi)\right) + \frac{\phi}{\phi'} u_t\!\left(c^R_{m,t}(\phi)\right),
\]
\[
h_t\!\left(\frac{\tilde y^R_{m,t}}{\theta_m}\right) = \frac{\phi'-\phi}{\phi'} h_t\!\left(\frac{y^P_{m,t}(\phi)}{\theta_m}\right) + \frac{\phi}{\phi'} h_t\!\left(\frac{y^R_{m,t}(\phi)}{\theta_m}\right).
\]
Since the utility for each agent remains unchanged, only the resource constraint needs to be checked. Notice by Jensen's
inequality, for any $\theta_m$ and $t$,
$\tilde c^R_{m,t} < \frac{\phi'-\phi}{\phi'} c^P_{m,t} + \frac{\phi}{\phi'} c^R_{m,t}$ and $\tilde y^R_{m,t} > \frac{\phi'-\phi}{\phi'} y^P_{m,t} + \frac{\phi}{\phi'} y^R_{m,t}$. Hence, the resource
constraint is relaxed while all the agents receive the same utility, which implies
that $W^T(\phi') > W^T(\phi)$.
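The resource gain from replacing the lottery by its certainty equivalents can be checked numerically. The sketch below is illustrative only: it assumes $u(c) = \log c$ and $h(n) = n^2/2$, and all parameter values and allocations are arbitrary choices of mine, not the paper's.

```python
import math

# Assumed primitives (not from the paper): u(c) = log(c), h(n) = n**2 / 2.
phi, phi_prime = 0.3, 0.5             # Pr(TI) in the two environments
lam = (phi_prime - phi) / phi_prime   # weight on the persistent allocation
theta = 2.0

# An arbitrary pair of distinct allocations (c^P, y^P) and (c^R, y^R).
cP, yP = 2.0, 3.0
cR, yR = 1.0, 5.0

# c-tilde delivers the mixed consumption utility with certainty.
c_tilde = math.exp(lam * math.log(cP) + (1 - lam) * math.log(cR))
# y-tilde delivers the mixed labor disutility with certainty.
n_tilde = math.sqrt(lam * (yP / theta) ** 2 + (1 - lam) * (yR / theta) ** 2)
y_tilde = theta * n_tilde

# Jensen: concave u pushes c-tilde below the mean; convex h pushes y-tilde above.
assert c_tilde < lam * cP + (1 - lam) * cR
assert y_tilde > lam * yP + (1 - lam) * yR
# Net resources y - c strictly increase, relaxing the resource constraint.
gain = (y_tilde - c_tilde) - (lam * (yP - cP) + (1 - lam) * (yR - cR))
assert gain > 0
```

Consumption falls below and output rises above the expected values of the lottery, so the deterministic mechanism strictly relaxes the resource constraint while holding every agent's utility fixed.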
Furthermore, by the maximum theorem, $W^T(\phi)$ is a continuous function
over $[0, 1]$. Thus, welfare increases continuously from the constrained efficient
level to the efficient level as $\phi$ increases.
Proof of Theorem 4:
The first part of the proof will establish the implementability of the optimal allocation from the conditional commitment mechanism. Set the persistent and real allocations to be the same as the optimal persistent and real
allocations from the conditional commitment mechanism. The deterrent allocations can be the same, or different as long as they are capable of deterring TI
agents from misreporting to be TC agents. For the imaginary allocations, for
all $\theta_m \in \Theta$ set $y^I_m = y^P_m$ and
\[
\sum_{t=0}^{2} u_t\!\left(c^I_{m,t}\right) = \sum_{t=0}^{2} u_t\!\left(c^P_{m,t}\right).
\]
In particular, set $c^I_{1,t} = c^P_{1,t} = c^R_{1,t}$ for any $t$, so the lowest type is not separated
along temptation. With this construction, all of the incentive compatibility
constraints are satisfied, since the persistent allocations from the conditional
commitment mechanism are incentive compatible. Next, choose $c^I_{m,t}$ such that
the fooling and executability constraints are satisfied: $\forall \theta_m \in \Theta$,
\[
W_0\!\left(c^I_m, y^I_m; \theta_m\right) \geq W_0\!\left(c^R_m, y^R_m; \theta_m\right); \quad V_0\!\left(c^R_m, y^R_m; \theta_m\right) \geq V_0\!\left(c^I_m, y^I_m; \theta_m\right).
\]
Fooling takes place at $t = 0$. By the construction of the imaginary allocations,
the fooling and executability constraints imply
\[
\frac{U_0\!\left(c^P_m, y^P_m; \theta_m\right) - U_0\!\left(c^R_m, y^R_m; \theta_m\right)}{1-\hat\beta}
\geq U^I_1 \geq
\frac{U_0\!\left(c^P_m, y^P_m; \theta_m\right) - U_0\!\left(c^R_m, y^R_m; \theta_m\right)}{1-\beta}.
\]
By the incentive compatibility of the persistent and real allocations and by
Theorem 1, $U_0\!\left(c^P_m, y^P_m; \theta_m\right) > U_0\!\left(c^R_m, y^R_m; \theta_m\right)$ for all $\theta_m > \theta_1$. Therefore, it is possible to find $c^I_m$ for all productivity types to satisfy the fooling
constraints when $1 > \hat\beta > \beta$.
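To see why $1 > \hat\beta > \beta$ leaves room for such a $c^I_m$, note that the two bounds above bracket a nonempty interval whenever the utility gap is positive. A quick numerical sketch, with $\beta$, $\hat\beta$ and the gap $X$ as arbitrary illustrative values of mine:

```python
# Illustrative values only: beta < beta_hat < 1 and a positive utility gap X.
beta, beta_hat = 0.6, 0.8
X = 1.0  # U_0(persistent) - U_0(real), strictly positive by Theorem 1

upper = X / (1 - beta_hat)   # left-hand bound in the displayed chain
lower = X / (1 - beta)       # right-hand bound in the displayed chain

# Since 1 - beta_hat < 1 - beta, the upper bound strictly exceeds the lower,
# so a continuation utility U_1^I (and hence c^I_m) can be found in between.
assert upper > lower
U1_I = (upper + lower) / 2   # one admissible choice
assert lower <= U1_I <= upper
```

The interval collapses as $\hat\beta \to \beta$ and widens as $\hat\beta \to 1$, which is why the construction needs $\hat\beta$ strictly between $\beta$ and $1$.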
The second part of the proof will argue the optimality of the optimal threat
allocation in a betting mechanism. Note that a direct truth-telling betting
mechanism has extra potentially binding constraints that a direct truth-telling
conditional commitment mechanism does not have: $\forall \theta_m, \theta_{m'} \in \Theta$,
\[
U_0\!\left(c^P_m, y^P_m; \theta_m\right) \geq U_0\!\left(c^I_{m'}, y^I_{m'}; \theta_m\right).
\]
For a conditional commitment mechanism, a TC agent would never choose
the threat allocations. However, in a betting mechanism, the government
needs to deter the TC agent from pretending to be a TI agent and choosing
the imaginary allocations. Hence, the government's problem in a betting
mechanism has more constraints than in a conditional commitment mechanism.
Since the optimal allocation from a direct conditional commitment mechanism
is implementable in a direct betting mechanism, the optimal allocations
in both mechanisms must be equivalent.
Proof of Theorem 5:
Similar to the proof of Theorem 1, the proof will focus on a relaxed problem. In essence, only the local downward incentive compatibility constraints
are considered, along with a monotonicity constraint on $y^P$. The incentive
compatibility constraints are (7), (8) and (9). Notice the fooling constraint is
included in (9) when $\theta_{m'} = \theta_m$. The executability constraints are
\[
V_0\!\left(c^R_m, y^R_m; \theta_m\right) \geq V_0\!\left(c^P_m, y^P_m; \theta_m\right).
\]
Fooling takes place at $t = 0$. Notice the fooling and executability constraints
imply that
\[
U_1\!\left(c^P_m, y^P_m; \theta_m\right) \geq U_1\!\left(c^R_m, y^R_m; \theta_m\right), \tag{10}
\]
and
\[
u\!\left(c^R_{m,0}\right) - h\!\left(\frac{y^R_{m,0}}{\theta_m}\right) \geq u\!\left(c^P_{m,0}\right) - h\!\left(\frac{y^P_{m,0}}{\theta_m}\right). \tag{11}
\]
Let $\omega_m$ be the Lagrange multiplier associated with the executability constraint for TC agents of type $\theta_m$. The rest of the Lagrange multipliers are the
same as in the proof for Theorem 1.
The first order conditions for the real allocations are
\[
\left(\phi\pi_m - \sum_{\theta_{m'}} \lambda^{TI}_{m|m'} + \omega_m\right) u'\!\left(c^R_{m,0}\right) = \phi\pi_m\mu,
\]
\[
\left(\phi\pi_m - \sum_{\theta_{m'}} \lambda^{TI}_{m|m'} + \beta\omega_m\right) u'\!\left(c^R_{m,t}\right) = \phi\pi_m\mu, \quad \forall t > 0,
\]
\[
\left(\phi\pi_m + \omega_m\right) \frac{1}{\theta_m} h'\!\left(\frac{y^R_{m,0}}{\theta_m}\right) - \sum_{\theta_{m'}} \lambda^{TI}_{m|m'} \frac{1}{\theta_{m'}} h'\!\left(\frac{y^R_{m,0}}{\theta_{m'}}\right) = \phi\pi_m\mu,
\]
\[
\left(\phi\pi_m + \beta\omega_m\right) \frac{1}{\theta_m} h'\!\left(\frac{y^R_{m,1}}{\theta_m}\right) - \sum_{\theta_{m'}} \lambda^{TI}_{m|m'} \frac{1}{\theta_{m'}} h'\!\left(\frac{y^R_{m,1}}{\theta_{m'}}\right) = \phi\pi_m\mu.
\]
The first order conditions for the persistent allocations are
\[
\left[(1-\phi)\pi_m + \lambda_m - \lambda^{TC}_{m+1} - \omega_m\right] u'\!\left(c^P_{m,0}\right) = (1-\phi)\pi_m\mu,
\]
\[
\left[(1-\phi)\pi_m + \lambda_m - \lambda^{TC}_{m+1} - \beta\omega_m\right] u'\!\left(c^P_{m,t}\right) = (1-\phi)\pi_m\mu, \quad \forall t > 0,
\]
\[
\left[(1-\phi)\pi_m + \lambda_m - \omega_m\right] \frac{1}{\theta_m} h'\!\left(\frac{y^P_{m,0}}{\theta_m}\right) - \lambda^{TC}_{m+1} \frac{1}{\theta_{m+1}} h'\!\left(\frac{y^P_{m,0}}{\theta_{m+1}}\right) = (1-\phi)\pi_m\mu + \gamma_{m,0} - \gamma_{m+1,0},
\]
\[
\left[(1-\phi)\pi_m + \lambda_m - \beta\omega_m\right] \frac{1}{\theta_m} h'\!\left(\frac{y^P_{m,1}}{\theta_m}\right) - \lambda^{TC}_{m+1} \frac{1}{\theta_{m+1}} h'\!\left(\frac{y^P_{m,1}}{\theta_{m+1}}\right) = (1-\phi)\pi_m\mu + \gamma_{m,1} - \gamma_{m+1,1}.
\]
The first order condition on $x_m$ gives the following relationship for the Lagrange
multipliers:
\[
\lambda_m = \lambda^{TC}_m + \sum_{\theta_{m'}} \lambda^{TI}_{m'|m}.
\]
By the Kuhn-Tucker Theorem, the solution is characterized by the first order
conditions above and the complementary slackness conditions. Notice that if
$\omega_m > 0$, then there will be intertemporal distortions for type $\theta_m$. The proof
will proceed to show that $\omega_m > 0$ for all $\theta_m > \theta_1$, thereby proving part (i).
Suppose there exists $\theta_m > \theta_1$ such that $\omega_m = 0$; then by the first order
conditions: $c^R_{m,t} = c^R_m$, $c^P_{m,t} = c^P_m$, $y^R_{m,t} = y^R_m$ and $y^P_{m,t} = y^P_m$. If (11) is satisfied,
then $c^P_m \geq c^R_m$ and $y^P_m \geq y^R_m$ for (10) to be satisfied. There are two cases:
$c^P_m > c^R_m$ and $c^P_m = c^R_m$.
If $c^P_m > c^R_m$, then by the first order conditions,
\[
1 + \frac{\lambda_m - \lambda^{TC}_{m+1}}{(1-\phi)\pi_m} > 1 - \frac{\sum_{\theta_{m'}} \lambda^{TI}_{m|m'}}{\phi\pi_m}.
\]
Thus by (11), it must be the case that $y^P_m > y^R_m$. If $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} = 0$, then
\[
\left(1 + \frac{\lambda_m - \lambda^{TC}_{m+1}}{(1-\phi)\pi_m}\right) \frac{1}{\theta_m} h'\!\left(\frac{y^P_m}{\theta_m}\right) + \frac{\gamma_{m+1,0} - \gamma_{m,0}}{(1-\phi)\pi_m} < \frac{1}{\theta_m} h'\!\left(\frac{y^R_m}{\theta_m}\right) \tag{12}
\]
is true. Thus, given $y^P_{m,0} > y^R_m$, it follows that $\gamma_{m,0} > 0$, which implies $y^P_{m,0} = y^P_{m-1,0}$,
with $c^P_{m,t} > c^P_{m-1,t}$ for all $t$ for local downward incentive compatibility to hold.
Hence, with $\sum_{\theta_{\hat m}} \lambda^{TI}_{m|\hat m} = 0$, a $\theta_{m-1}$ agent has an incentive to misreport to be a
$\theta_m$ agent, which is a contradiction, so $\sum_{\theta_{m'}} \lambda^{TI}_{m|m'} > 0$. If this is the case, then
it must be that $\lambda^{TI}_{m|m} > 0$ with $\lambda^{TI}_{m|m'} = 0$ for all $\theta_{m'} \in \Theta \setminus \theta_m$. This is because
if $\lambda^{TI}_{m|m'} > 0$ for $\theta_{m'} > \theta_m$, then the $\theta_m$ agent would strictly prefer the real over
the persistent allocation. If $\lambda^{TI}_{m|m'} > 0$ for $\theta_{m'} < \theta_m$, then there is a welfare
enhancing perturbation by increasing the output and consumption. However,
if $\lambda^{TI}_{m|m} > 0$, then
\[
\left(1 + \frac{\lambda_m - \lambda^{TC}_{m+1}}{(1-\phi)\pi_m}\right) \frac{1}{\theta_m} h'\!\left(\frac{y^P_m}{\theta_m}\right) + \frac{\gamma_{m+1,0} - \gamma_{m,0}}{(1-\phi)\pi_m} < \left(1 - \frac{\lambda^{TI}_{m|m}}{\phi\pi_m}\right) \frac{1}{\theta_m} h'\!\left(\frac{y^R_m}{\theta_m}\right)
\]
must be true, which creates a contradiction similar to the argument above.
Hence, if $\omega_m = 0$ for some $\theta_m > \theta_1$, then it cannot be the case that $c^P_m > c^R_m$.
Now, examine the case when $c^P_m = c^R_m$. By the fooling and executability constraints, this means that $y^R_m = y^P_m$. However, this is not
optimal. Consider the perturbation $\tilde c^R_m = c^R_m + \alpha\epsilon$ and $\tilde y^R_m = y^R_m + \epsilon$, with
$\alpha = u'\!\left(c^P_m\right)^{-1} \frac{1}{\theta_{m+1}} h'\!\left(\frac{y^P_m}{\theta_{m+1}}\right)$. As was demonstrated in the proof for Theorem
1, the gain in welfare from the higher net output more than offsets the loss in
utility for the type $\theta_m$ TI agent. Hence, for all $\theta_m > \theta_1$, it must be that $\omega_m > 0$.
This implies the TI agents under-save, and the TC agents over-save at $t = 0$.
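The final claim can be read off the consumption first order conditions: with $\omega_m > 0$, the implied ratios $u'(c_0)/u'(c_t)$ move in opposite directions for the real and persistent allocations. The sketch below uses arbitrary illustrative multiplier values of mine (chosen only so every bracketed coefficient is positive), not values derived from the model.

```python
# Arbitrary illustrative values (assumptions, not model-derived):
beta, omega, mu = 0.7, 0.4, 1.0
A = 2.0   # stands in for phi*pi_m minus the sum of lambda^TI multipliers
B = 2.0   # stands in for (1-phi)*pi_m + lambda_m - lambda^TC_{m+1}

# Real allocation: (A + omega) u'(c^R_0) and (A + beta*omega) u'(c^R_t)
# equal the same right-hand side, normalized here to mu.
uprime_R0 = mu / (A + omega)
uprime_Rt = mu / (A + beta * omega)
assert uprime_R0 < uprime_Rt   # c^R_0 > c^R_t: the TI agent under-saves at t = 0

# Persistent allocation: (B - omega) u'(c^P_0) and (B - beta*omega) u'(c^P_t)
# equal the same right-hand side.
uprime_P0 = mu / (B - omega)
uprime_Pt = mu / (B - beta * omega)
assert uprime_P0 > uprime_Pt   # c^P_0 < c^P_t: the TC agent over-saves at t = 0
```

Since $\beta < 1$, the $+\omega_m$ versus $+\beta\omega_m$ terms tilt the TI agent's marginal utility down at $t = 0$ (extra early consumption), while the $-\omega_m$ versus $-\beta\omega_m$ terms tilt the TC agent's marginal utility up (depressed early consumption), matching the last sentence of the proof.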
Proof of Theorem 6:
By Theorem 4, it is without loss of generality to focus on a betting mechanism for $W(\hat\beta < 1)$. Let $A(\hat\beta < 1)$ and $A(\hat\beta = 1)$ denote the sets of real and
persistent allocations that are implementable under a betting mechanism with
partially naïve and fully naïve agents, respectively.
First, I will show that $A(\hat\beta = 1) \subseteq A(\hat\beta < 1)$. Note that for any $a \in A(\hat\beta = 1)$, the persistent allocation is equivalent to the imaginary allocation and the
incentive compatibility constraints for all agents are trivially satisfied. Also,
since the executability constraints bind for $a$, the fooling constraints of the real
allocations hold. Hence, $a \in A(\hat\beta < 1)$, which implies $W(\hat\beta < 1) \geq W(\hat\beta = 1)$.
Finally, I will show that the optimal allocation $a^* \in A(\hat\beta < 1)$ is not
implementable when agents are fully naïve; in essence, $a^* \notin A(\hat\beta = 1)$.
To implement $a^*$ with fully naïve agents, the government has to set the
imaginary allocations equal to the persistent allocations. By Theorem 1 and
Theorem 4, $a^*$ has the following property for $\theta_M$: $c^P_M > c^R_M$ and $y^P_M < y^R_M$.
Hence, the executability constraint is violated. Therefore, the real allocation
in $a^*$ is not implementable when agents are fully naïve. This implies that
$W(\hat\beta < 1) > W(\hat\beta = 1)$.