Journal of Economic Theory 144 (2009) 1895–1920
www.elsevier.com/locate/jet
The communication cost of selfishness
Ronald Fadel a , Ilya Segal b,∗
a ABC Group, Lebanon
b Department of Economics, Stanford University, Stanford, CA 94305, United States
Received 26 January 2006; final version received 10 April 2007; accepted 13 September 2007
Available online 23 February 2009
Abstract
We consider how many bits need to be exchanged to implement a given decision rule when the mechanism
must be ex post or Bayesian incentive compatible. For ex post incentive compatibility, the communication
protocol must reveal enough information to calculate monetary transfers to the agents to motivate them to
be truthful (agents’ payoffs are assumed to be quasilinear in such transfers). For Bayesian incentive compatibility, the protocol may need to hide some information from the agents to prevent deviations contingent
on the information. In both settings with selfish agents, the communication cost can be higher than in the
case in which the agents are honest and can be relied upon to report truthfully. The increase is the “communication cost of selfishness.” We provide an exponential upper bound on the increase. We show that the
bound is tight in the Bayesian setting, but we do not know whether it is tight in the ex post setting. We describe some cases where the communication cost of selfishness proves to be very low.
© 2009 Elsevier Inc. All rights reserved.
JEL classification: D82; D83
Keywords: Communication complexity; Algorithmic mechanism design; Bayesian incentive compatibility; Ex post
incentive compatibility; Sequential and simultaneous communication protocols; Information sets
1. Introduction
This paper straddles two literatures on allocation mechanisms. One literature, known as
“mechanism design,” examines the agents’ incentives in the mechanism. Appealing to the “revelation principle,” the literature focuses on “direct revelation mechanisms” in which agents fully
describe their preferences, and checks their incentives to do so truthfully (e.g., [17, Chapter 23]).
However, full revelation of private information would be prohibitively costly in most practical
settings. For example, in a combinatorial auction with L objects, full revelation would require
describing the value of each of the 2^L − 1 non-empty bundles of objects, which with L = 30
would take more than 1 billion numbers. The other literature examines how much communication, measured with the number of bits or real variables, is required in order to compute the social
outcome, assuming that agents communicate truthfully (e.g., [15,21,24], and references therein).
However, in most practical settings we should expect agents to communicate strategically to
maximize their own benefit.
This paper considers how many bits need to be exchanged among agents to implement a
given decision rule when the agents are selfish, and compares it to the number of bits required
when they are honest (the latter is known as “communication complexity”). We will refer to the
difference as the communication cost of selfishness, or, for short, the overhead. In cases in which
the overhead is high, economic goals that could be achieved with selfish agents but extensive
communication or with honest agents but limited communication are not achievable when the
agents are selfish and communication is limited at the same time.1
For a simple illustration, consider the problem of allocating an indivisible object efficiently
between two agents, who have privately known values for the object (which is formally described
in Example 1). In a direct revelation mechanism, we can make truthtelling a dominant strategy
for each agent by having the winner of the object pay a price equal to the loser’s report (as in the
second-price sealed-bid auction). We can try to reduce the communication cost by first asking
Agent 1 to report his value and then asking Agent 2 to say whether his valuation exceeds Agent 1’s
report. (Since Agent 2 sends only one bit, the communication cost is reduced from full revelation
roughly by half if the two agents’ values have the same range, and possibly by more if they
have different ranges.) The problem with the new communication protocol, however, is that it
does not reveal enough information to construct payments to Agent 1 that make truthtelling a
dominant strategy or even an ex post best response for him. Intuitively, any such payments must
face Agent 1 with a price approximately equal to Agent 2’s valuation, but this valuation is not
revealed. We show that this mechanism cannot be incentivized with any payments, and that any
ex post incentive compatible mechanism that computes the efficient allocation in this example
requires sending more bits. Thus, the communication cost of ex post incentive compatibility is
strictly positive.
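To make the savings concrete, here is our minimal sketch of the two protocols’ message counts, assuming each value is an n-bit integer — an illustrative discretization, since the paper allows continuous value ranges.

```python
# Our sketch of the savings, assuming each value is an n-bit integer (an
# illustrative discretization; the paper allows continuous value ranges).

def bits_full_revelation(n: int) -> int:
    # Second-price sealed-bid auction: each agent reports his full value.
    return 2 * n

def bits_threshold_protocol(n: int) -> int:
    # Agent 1 reports his value; Agent 2 answers with a single bit saying
    # whether his own valuation exceeds Agent 1's report.
    return n + 1

for n in (8, 16, 32):
    print(n, bits_full_revelation(n), bits_threshold_protocol(n))
```

The roughly one-half saving comes entirely from Agent 2’s single bit; as the text explains, it is exactly this compression that destroys ex post incentives.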
In general, a mechanism designer can use two instruments to motivate agents to be honest:
First, along with computing the desired allocation, she could use the communication protocol to
compute transfers to the agents (as in the above example). Second, in the course of computing the
outcome, the designer may hide some information from the agents (i.e., create information sets),
thus reducing the set of contingent deviations available to them. Both the need to compute motivating transfers and the need to hide information from agents may increase the communication
cost relative to that of computing the allocation when the agents are honest.
This paper analyzes the communication cost of selfishness for two distinct equilibrium concepts: Ex Post Incentive Compatibility (EPIC), in which an agent should report honestly even
1 Of course, prohibitive communication within a mechanism is not the only possible reason why it may be impractical. For example, a mechanism may be difficult to explain to the agents, even if, once they understand it, it is easy
to execute. However, in situations in which the agents face the same type of problem and can use the same mechanism
again and again, the “fixed cost” of explaining the mechanism to the agents is amortized over a long horizon, and the
practicality of the mechanism is determined by the “variable cost” of communication within it.
if he somehow finds out other agents’ private information, and Bayesian–Nash Incentive Compatibility (BIC), in which an agent reports honestly given his beliefs about the other agents’
information. In both settings, we focus on the case in which agents have private and independently drawn valuations over outcomes, their utilities are quasilinear in monetary transfers, and
the communication cost is defined as the maximal number of bits sent during the execution of
the mechanism.
For both EPIC and BIC implementation, we show that the communication cost of selfishness
may be strictly positive. However, the reasons for the overhead differ between the two cases: For
EPIC, there is no need to hide information from agents, and the overhead comes entirely from the
need to compute motivating transfers. For BIC, in contrast, computing transfers is not a problem,
and the overhead comes from the need to hide information from the agents, to eliminate some of
their contingent deviations.
We begin our analysis by showing that, both for EPIC and BIC implementation, any
simultaneous-communication protocol computing an implementable decision rule can be incentivized with some transfers. Intuitively, in such a protocol, we fully observe all agents’ strategies,
and this proves sufficient to compute incentivizing transfers. However, minimizing communication cost typically requires sequential (extensive-form) communication (so that the information
reported at a given stage is contingent on what has been reported previously), and such communication does not fully reveal the agents’ contingent strategies.
Next we observe that starting with any sequential protocol, we can convert it into a simultaneous protocol computing the same decision rule — the “normal-form” game in which agents
announce their complete contingent strategies in the original protocol. The resulting simultaneous protocol can then be incentivized. If the original sequential protocol communicated no more
than b bits, then the number of bits announced in the normal-form game will be at most 2^b − 1.
This gives an upper bound on the communication cost of selfishness. However, the bound is
exponential, and we would like to know whether the bound is ever achieved.
For EPIC implementation, we do not know whether the exponential upper bound is ever
achieved — in fact, we do not have any examples where the overhead is large, and several examples in which it is fairly low. For example, the EPIC overhead is low for efficient allocation
rules (such as the example above), for which we can simply ask the agents to report their utilities
at the end of a protocol, and give each agent the sum of the other agents’ reports. On the other
hand, for the BIC case, we do provide an example with an exponential overhead. This example
(formally described in Example 2 below) can be interpreted as having an expert with private
knowledge and a private utility function, and a manager with a private goal that determines how
the expert’s knowledge is used. The expert will reveal his knowledge truthfully if he does not
know the manager’s goal, but this revelation requires long communication. Communication cost
could be reduced exponentially by having first the manager announce his goal and then the expert
say how to achieve it, but this communication would not be incentive-compatible — the expert
would manipulate the outcome to his advantage. We show that any communication in which the
expert’s incentives are satisfied would be exponentially longer — almost as long as full revelation
of the expert’s knowledge.
This example notwithstanding, we do find several cases in which the communication cost of
Bayesian incentives is low. In particular, it is zero for any EPIC-implementable rule (including,
e.g., efficient decision rules, such as in the allocation problem described above). Namely, we
show that any communication protocol that computes such a rule can be BIC incentivized by
computing transfers on the basis of its outcome, and without hiding any information from the
agents.
Finally, in Appendices A and B we consider several extensions of the model. In particular, we
consider the communication cost measured as the average-case rather than worst-case number of
bits sent, given a probability distribution over agents’ private information. We also consider cases
in which the agents’ valuations are interdependent or correlated. In all these cases, we provide
examples in which the communication cost of selfishness can grow without bound even as the
communication cost with honest agents is fixed at just a few bits.
2. Related literature
A number of papers have proposed incentive-compatible indirect communication mechanisms
in various special settings. The first paper we know of is Reichelstein [22], who considered incentive compatibility in non-deterministic real-valued mechanisms, and showed that the communication cost of selfishness in achieving efficiency is low. Lahaie and Parkes [16] characterized the
communication problem of finding Vickrey–Clarke–Groves (VCG) transfers as that of finding a
“universal price equilibrium,” but did not examine the communication complexity of finding such
an equilibrium, or the possibility of implementing efficiency using non-VCG transfers. Neither
paper examined the communication complexity of decision rules other than surplus maximization. For an analysis of the communication requirements of incentive-compatible mechanisms in
networks, see Feigenbaum et al. [9].
A few papers on incentive-compatible communication have considered a “dual” question:
instead of asking how much communication is needed to achieve a given goal, they ask how
to maximize a given objective function subject to a fixed communication constraint. In one literature, the objective is to maximize the profits of one of the agents subject to other agents’
participation constraints (see, e.g., [12,18], and the recent survey [19]). A similar question is
studied in [14], which instead focuses on the efficiency objective.
Finally, the literature on communication without commitment (“cheap talk”) has offered examples in which incentive-compatible communication requires a large number of stages (e.g.,
[10]). In contrast, our mechanism commits to an outcome as a function of messages, yet we find
the communication cost as measured in bits to be potentially high (even though it would be possible to
send all the bits in one stage — e.g., in a direct revelation mechanism).
3. Communication with honest agents: communication complexity
The concept of communication complexity, introduced by Yao [26] and surveyed in [15],
describes how many bits agents from a set I = {1, . . . , I} must exchange in order to compute the value of a function f : ∏_{i∈I} Ui → X when each agent i ∈ I privately knows its argument ui ∈ Ui, which we refer to as agent i’s “type.”2 Communication is modeled using the notion
of a protocol. In the language of game theory, a protocol is simply an extensive-form game
along with the agents’ strategies in it. Without loss of generality, the communication complexity
literature restricts attention to games of perfect information (i.e., each agent observes the history
of the game). Also, we restrict attention to protocols in which each agent has two possible moves
(messages, interpreted as sending a bit) at a decision node, since any message from a finite set
can be coded using a fixed number of bits. Formally,
2 With a slight abuse of notation, we will use the same letter to refer to a set and its cardinality.
Definition 1. A protocol P with the set of agents I over state space U = ∏_{i∈I} Ui and outcome space X is a tuple (N1, . . . , NI, L, r, c0, c1, x, σ1, . . . , σI), where:
• The sets N1, . . . , NI and L are pairwise disjoint. Ni represents the set of decision nodes of agent i ∈ I, while L denotes the set of terminal nodes (leaves). Let N = (∪_{i∈I} Ni) ∪ L — the set of all nodes of the protocol.
• The protocol forms a binary tree with root r ∈ N and the child relation described by c0, c1 : N\L → N\{r}. That is, c0(n) and c1(n) represent the two children of node n ∈ N\L. Each node n ∈ N\{r} has a unique “parent” n′ such that either n = c0(n′) or n = c1(n′).
• x : L → X, where x(l) ∈ X is the outcome implemented at leaf l ∈ L.
• For each agent i ∈ I, σi : Ui → {0, 1}^{Ni} is the agent’s strategy plan in the protocol, where σi(ui) ∈ {0, 1}^{Ni} specifies the strategy of the agent of type ui — the moves he makes at each of his decision nodes. That is, at each decision node n ∈ Ni, the agent moves to node c_{σi(ui)(n)}(n).3
For each strategy profile s = (s1, . . . , sI) ∈ ∏_{i∈I} {0, 1}^{Ni}, let g(s) ∈ L denote the leaf l that is reached when each agent i follows the strategy si. The function f : U → X computed by
protocol P is defined by f = x ◦ g ◦ σ , and denoted by Fun(P).
Given a protocol P, it is convenient to define for each node n ∈ N its “legal domain”
U (n) ⊂ U as the set of states in which node n is reached. For example, at the protocol’s root
r, U(r) = U. By forward induction on the tree, it is easy to see that the legal domain at each node n is a product set U(n) = ∏_{i∈I} Ui(n), using the fact that each agent’s prescribed move at
any node depends only on his own type. Without loss of generality, we consider only protocols
such that all the nodes n have a non-empty legal domain U (n).
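To make the formalism concrete, here is our illustrative Python encoding of a protocol in the sense of Definition 1 — the object-allocation protocol discussed in the Introduction (and formalized as Example 1 below) — together with its strategy plans, the induced map g, and the legal domains of its leaves. All names (decision, leaf, fun_p0) are ours, and the grid for Agent 2’s continuous type space is an assumption for illustration.

```python
# Illustrative encoding of Definition 1 (our names, not the paper's):
# a binary protocol tree, strategy plans, the map g, Fun(P), and the
# legal domains U(l) of its leaves on a discretized type grid.
from itertools import product

def decision(agent, c0, c1): return {"agent": agent, "children": (c0, c1)}
def leaf(outcome): return {"outcome": outcome}

# Protocol P0: Agent 1 reports u1 in {1,2,3,4} with 2 bits, then Agent 2
# sends 1 bit allocating the object efficiently.
def a2_node():
    return decision(2, leaf((0, 1)), leaf((1, 0)))   # move 1 = "Agent 1 wins"
root = decision(1, decision(1, a2_node(), a2_node()),
                   decision(1, a2_node(), a2_node()))

def sigma1(u1, bits_sent):                # Agent 1 encodes u1 - 1 in binary
    return ((u1 - 1) >> (1 - bits_sent)) & 1

def sigma2(u2, u1_report):                # Agent 2 allocates efficiently
    return 1 if u1_report > u2 else 0

def fun_p0(u1, u2):
    """Fun(P0) = x ∘ g ∘ σ: run the prescribed strategies to a leaf."""
    node, a1_bits = root, []
    while "outcome" not in node:
        if node["agent"] == 1:
            move = sigma1(u1, len(a1_bits)); a1_bits.append(move)
        else:
            move = sigma2(u2, 1 + 2 * a1_bits[0] + a1_bits[1])
        node = node["children"][move]
    return node["outcome"]

# Legal domain of each leaf: the (discretized) states that reach it.
U1, U2 = [1, 2, 3, 4], [0.5, 1.5, 2.5, 3.5, 4.5]
domains = {}
for u1, u2 in product(U1, U2):
    domains.setdefault((u1, fun_p0(u1, u2)), []).append((u1, u2))
print(fun_p0(3, 3.5), len(domains))       # (0, 1) and 8 reachable leaves
```

Note how each legal domain comes out as a product set, exactly as the forward-induction argument above predicts.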
The depth d(P) of a protocol P is the maximum possible number of moves between the root
and a leaf — i.e., the number of bits sent in the protocol in the worst case.4 The communication
complexity of a function is the minimal depth of a protocol computing it:
Definition 2. The communication complexity of a function f : U → X is CC(f) = inf_{P: Fun(P)=f} d(P).
4. Communication with selfish agents: binary dynamic mechanisms
4.1. The formalism
In our case, the function to be computed is the decision rule to be implemented. The protocol may also compute transfers to the agents. We now assume that each agent has preferences described by his type. With a slight abuse of notation, we identify each agent i’s type
3 It is customary in game theory to call the “strategy” of agent i the whole function σi, by interpreting the agent’s type ui ∈ Ui as a “move of nature” on which his strategy could be contingent. However, for our purposes it is convenient to reserve the term “strategy” to denote the agent’s behavior si ∈ {0, 1}^{Ni} in the protocol.
4 We consider average-case communication costs in Appendix B.1.
with his utility function ui : X → R over a set X of possible outcomes.5 We assume that the
agents’ payoffs are quasilinear in monetary transfers, hence the total payoff of agent i of type
ui ∈ Ui from outcome x ∈ X and transfer ti is ui (x) + ti . In particular, with such utilities,
we will often be concerned with implementing a decision rule f that is efficient, i.e., satisfies f(u) ∈ arg max_{x∈X} Σ_{i∈I} ui(x), ∀u ∈ U.
A protocol induces an extensive-form game, and prescribes a strategy in this game for each
agent. When agents are selfish, we need to consider their incentives to deviate to other strategies
in the game.6 For every agent i having type ui ∈ Ui , his incentive to follow the prescribed strategy
σi (ui ) depends on the monetary transfer that the protocol assigns to him along with the outcome.
In the Bayesian case, it also depends on how much the agent knows about the other agents’ types.
We formalize this with the notion of a binary dynamic mechanism:
Definition 3. A binary dynamic mechanism (BDM) is a triple P, H, t, where
• P is a protocol N1 , . . . , NI , L, r, c0 , c1 , x, σ1 , . . . , σI .
• H = (H1 , . . . , HI ), where Hi is the information partition of agent i — a partition of his
decision nodes Ni into information sets satisfying perfect recall.7
• t = (t1 , . . . , tI ), where ti : L → R is the transfer function for agent i. Thus, ti (l) is the transfer
made to agent i when leaf l ∈ L is reached.
• The agents’ strategies in protocol P are consistent with their information partitions. Namely,
say that agent i’s strategy si ∈ {0, 1}^{Ni} is consistent with his information partition if it is constant on any element of Hi, and let Si ⊂ {0, 1}^{Ni} denote the set of such strategies. We
require that for every agent i ∈ I and type ui ∈ Ui , σi (ui ) ∈ Si .
Note that when the information partition of agent i becomes coarser (and hence its cardinality
|Hi | is reduced), his strategy space shrinks. The communication cost (or just depth) d(B) of a
BDM B = P, H, t is the depth d(P) of its protocol P.
4.2. Two particular classes of BDMs
It is useful to define two extreme cases of information revelation to the agents in mechanisms.
On the one hand, we define a Perfect information BDM (PBDM) as a BDM P, H, t in which
the agents observe the complete history of messages — i.e., every information set h ∈ H is
a singleton. On the other hand, we define a Simultaneous communication BDM (SBDM) as a BDM in which an agent never learns anything other than his own previous moves, and so the game is strategically equivalent to one in which all agents send all messages simultaneously. Formally, an SBDM is a BDM P, H, t in which any two decision nodes n and n′ of an agent i that have the same history of moves for agent i are in the same information set.
5 Sometimes we may want to allow different types to have the same utility function over outcomes from X, which can
be done by adding a fictitious outcome that gives a different utility for each of the types.
Note also that this formalism assumes private values, i.e., that the utility function of an agent is determined by his
own type. This assumption will be relaxed in Appendix B.3.
6 Our restriction to agents following the prescribed strategies is without loss of generality, for we can prescribe any
possible strategy profile of the game.
7 The information sets Hi of agent i satisfy perfect recall if, for every information set h ∈ Hi and every two nodes n, n′ ∈ h, n and n′ have the same history of moves and information sets for agent i. See [11, p. 81] for more details.
In particular, a direct revelation BDM is an SBDM in which the mapping g ◦ σ from the state
space into leaves is one-to-one (note that this is only possible when the state space is finite).
That is, at the end of the execution of the BDM the state is completely revealed but no agent has
observed anything besides his own moves.
Note that some protocols cannot be used in an SBDM, since their strategy plan may not
be compatible with simultaneous communication. A simultaneous communication protocol is
defined as a protocol that can be part of an SBDM. The following lemma will prove useful:
Lemma 1. For any protocol P, there exists a simultaneous communication protocol P′ such that Fun(P) = Fun(P′) and d(P′) ≤ 2^{d(P)} − 1.
Proof. Construct protocol P′ by having each agent report his strategy in P, and prescribe the outcome that would obtain in P given the reported strategies. (Since we require that all nodes have a non-empty domain, the agents should be restricted to report only strategies in P that are used by some types.) Since each agent’s strategy in P′ is not contingent on the other agents’ moves, P′ is a simultaneous communication protocol. To count the number of possible strategies in P, note that each agent i will output at most one bit for each of his decision nodes n ∈ Ni in P, so the depth of P′ is at most Σ_{i∈I} |Ni| = |N \ L| ≤ 2^{d(P)} − 1. □
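The counting step can be checked mechanically; our sketch below (with hypothetical helper names) tallies the decision nodes of a depth-3 tree and confirms that announcing full contingent strategies costs at most 2^d − 1 bits.

```python
# Our sketch of the Lemma 1 count: announcing a full contingent strategy
# takes one bit per own decision node, so the simultaneous protocol P' has
# depth sum_i |N_i| = |N \ L| <= 2^{d(P)} - 1.
def decision(agent, c0, c1): return {"agent": agent, "children": (c0, c1)}
def leaf(x): return {"outcome": x}

a2 = lambda: decision(2, leaf((0, 1)), leaf((1, 0)))
root = decision(1, decision(1, a2(), a2()), decision(1, a2(), a2()))  # depth 3

def depth(node):
    return 0 if "outcome" in node else 1 + max(depth(c) for c in node["children"])

def decision_node_count(node):
    """Bits needed in P' = number of decision nodes of the tree."""
    return 0 if "outcome" in node else 1 + sum(decision_node_count(c)
                                               for c in node["children"])

d = depth(root)
print(d, decision_node_count(root), 2 ** d - 1)   # 3 7 7: the bound binds here
```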
4.3. Ex post incentive compatibility
The concept of ex post incentive compatibility means that for each input, the agents’ strategies
constitute a Nash equilibrium even if they know the complete input (i.e., each other’s types).
Definition 4. BDM P, H, t is Ex Post Incentive Compatible (EPIC) if in any state u ∈ U, the strategy profile s = (σ1(u1), . . . , σI(uI)) ∈ ∏_{i∈I} Si is an ex post Nash equilibrium of the induced game, i.e.,
∀i ∈ I, ∀si′ ∈ Si: ui(x(g(s))) + ti(g(s)) ≥ ui(x(g(si′, s−i))) + ti(g(si′, s−i)).
In words, for every state u ∈ U , σi (ui ) is an optimal strategy for agent i whatever the types of
the other agents are, as long as they follow their prescribed strategies.8 When the BDM is EPIC,
we say that it implements Fun(P) in EPIC. It turns out that to check if a BDM is EPIC, we only
need to consider the transfer function and the legal domains of the leaves. Formally:
Lemma 2. BDM P, H, t is EPIC if and only if for every agent i ∈ I and every two leaves l, l′ ∈ L:
U−i(l) ∩ U−i(l′) ≠ ∅ ⇒ ∀ui ∈ Ui(l): ui(x(l)) + ti(l) ≥ ui(x(l′)) + ti(l′). (1)
Proof. Suppose (1) holds. Then in each state, if the protocol should end at some leaf l, agent i can only get to a leaf in {l′ ∈ L: U−i(l′) ∩ U−i(l) ≠ ∅} by deviating, as it is the set of leaves
8 In our setting of private values, EPIC is equivalent to requiring that agent i’s strategy be optimal for him assuming that each agent j ≠ i follows a strategy prescribed for some type uj ∈ Uj. Note that we do not require the stronger property of Dominant strategy Incentive Compatibility (DIC), which would allow agent i to expect agents j ≠ i to use contingent strategies sj ∈ Sj that are inconsistent with any type uj, and which would be violated in even the simplest dynamic mechanisms. We discuss dominant strategy implementation in Appendix B.2.
that are attainable for him given that the types of the other agents are in U−i (l). Hence he will
never be able to increase his payoff by deviating from the prescribed strategy, and the BDM is
EPIC. Now suppose (1) is violated for some agent i, leaves l and l′, and type ui ∈ Ui(l). Then in all states in {ui} × (U−i(l) ∩ U−i(l′)) ≠ ∅, agent i would be strictly better off following the strategy σi(ui′) for any type ui′ ∈ Ui(l′), which would violate EPIC. Note that we used the crucial assumption that all leaves l have non-empty legal domains U(l). □
It immediately follows from Lemma 2 that the information partition is irrelevant when considering ex post implementation.
Corollary 1. For every two BDMs B = P, H, t and B′ = P, H′, t that differ only in their information partitions, B is EPIC if and only if B′ is EPIC.
Hence, we can restrict attention without loss to PBDMs when we are concerned with EPIC.
Note that, by the Revelation Principle, any decision rule f that is implementable in an EPIC
BDM with transfer rule t : U → R^I must be Dominant strategy Incentive Compatible (DIC) with this transfer rule, i.e., satisfy the following inequalities:
∀ui, ui′ ∈ Ui, ∀u−i ∈ U−i: ui(f(ui, u−i)) + ti(ui, u−i) ≥ ui(f(ui′, u−i)) + ti(ui′, u−i). (2)
Hence, all the EPIC-implementable rules we will consider in our private valuations setting are
necessarily DIC-implementable. In particular, using (2) for ui, ui′ ∈ Ui such that f(ui, u−i) = f(ui′, u−i), we see that ti(ui, u−i) = ti(ui′, u−i), and therefore the transfer to each agent i can be written in the form
ti(u) = τi(f(u), u−i) for some tariff τi : X × U−i → R ∪ {−∞}. (3)
4.4. Bayesian incentive-compatibility
The concept of Bayesian incentive-compatibility means that for each input, the agents’ strategies constitute an (interim) Bayesian Nash equilibrium given the probabilistic beliefs over the
states.
Definition 5. Given a probability distribution p over state space U, BDM P, H, t is Bayesian Incentive Compatible for p (or BIC(p) for short) if the strategies σ1, . . . , σI are measurable, and
∀i ∈ I, ∀ui ∈ Ui, ∀si′ ∈ Si:
E_{u−i}[ui(x(g(σ(u)))) + ti(g(σ(u))) | ui] ≥ E_{u−i}[ui(x(g(si′, σ−i(u−i)))) + ti(g(si′, σ−i(u−i))) | ui].
In words, for every agent i and every type ui ∈ Ui , σi (ui ) maximizes the expected utility
of the agent given the updated probability distribution p−i (.|ui ) over the other agents’ types
u−i ∈ U−i , as long as they follow their prescribed strategy plans. In the main text of the paper, we
focus on the case where the types of the agents are independently distributed, i.e., the probability
distribution p over states is a product of the individual probability distributions pi over each
agent i’s type, and so the expectations above need not be conditioned on ui . (This assumption
will be relaxed in Appendix B.4.)
By definition, Bayesian implementation is weaker than ex post implementation: if BDM B
is EPIC, then it is BIC(p) for every distribution p. When the BDM is BIC(p), we say that it
implements Fun(P) in BIC(p).
4.5. Incentivizability
In standard mechanism design, according to the revelation principle, a decision rule is implementable for some equilibrium concept if and only if the direct revelation protocol for this
decision rule can be incentivized with some transfers. Now we want to define incentivizability
for general protocols.
Definition 6. A protocol P with I agents and set of leaves L is EPIC-incentivizable if there is a transfer function t : L → R^I and an information partition H of N \ L such that P, H, t is an EPIC BDM.
A protocol P with I agents and set of leaves L is BIC(p)-incentivizable if there is a transfer function t : L → R^I and an information partition H of N \ L such that P, H, t is a BIC(p) BDM.
Consider first EPIC incentivizability. By Lemma 2, it is equivalent to checking that there exists a transfer rule that solves the system of linear inequalities (1), which can be done with standard linear programming techniques (and regarding the information partition, we can restrict attention to PBDMs). Note that by definition, a protocol is EPIC-incentivizable only if it computes an EPIC-implementable decision rule. However, the converse is not true: not every protocol computing an EPIC-implementable decision rule is EPIC-incentivizable:
Example 1. There are two agents and one indivisible object, which can be allocated to either agent (and so we can write X = {(1, 0), (0, 1)}). The two agents’ values (utilities from receiving the object) lie in type spaces U1 = {1, 2, 3, 4} and U2 = [0, 5] respectively (their utilities for not receiving the object are normalized to zero). The efficient decision rule f is EPIC-implementable (e.g., using the Vickrey transfer rule ti(ui, u−i) = −fi(ui, u−i)·u−i for each i). The efficient decision rule can be computed with the following protocol P0: Agent 1 sends his type u1 (using log2 4 = 2 bits), and then Agent 2 outputs allocation f(u1, u2) (using 1 bit). Suppose toward a contradiction that protocol P0 computes a transfer rule t1 : U1 × U2 → R that satisfies the ex post incentives of Agent 1. Given the information revealed in the protocol, t1(u1, u2) can depend only on u1 and f(u1, u2), so it can be written as t1(u1, u2) = t1′(u1, f(u1, u2)) for some t1′ : U1 × X → R. However, by (3), EPIC requires that t1 satisfy t1(u1, u2) = τ1(f(u1, u2), u2) for some τ1 : X × U2 → R. Hence t1(u1, u2) = t1∗(f(u1, u2)) for some t1∗ : X → R. But then if t1∗(0, 1) − t1∗(1, 0) ≤ 2.5, Agent 1 would want to deviate in state (u1, u2) = (3, 3.5) to announcing u1′ = 4 and getting the object, while if t1∗(0, 1) − t1∗(1, 0) > 2.5, Agent 1 would want to deviate in state (u1, u2) = (2, 1.5) to announcing u1′ = 1 and not getting the object. In fact, it can be shown in this example that no 3-bit protocol computing an efficient decision rule is incentivizable, hence the communication cost of selfishness is positive.9
9 To see this, we can check by exhaustion that there are only two 3-bit protocols computing an efficient decision rule: the one described in the example, and the one in which (i) Agent 1 first sends one bit about his valuation, (ii) Agent 2 says whether he already knows an efficient decision, (iii) if Agent 2 said yes, he announces the decision, and otherwise Agent 1 announces an efficient decision. In this protocol, the EPIC constraints of Agent 2 cannot be satisfied with any transfer, by an argument similar to that in Example 1.
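The linear-programming check described above can be carried out explicitly for protocol P0. The sketch below is our illustration, with Agent 2’s continuous type space replaced by a five-point grid (an assumption for this sketch); it encodes the Lemma 2 inequalities (1) for Agent 1 as an LP feasibility problem and finds it infeasible, confirming Example 1’s conclusion.

```python
# Our illustration of the LP check: is protocol P0 from Example 1
# EPIC-incentivizable? Leaf transfers for Agent 1 are LP variables, and the
# Lemma 2 condition (1) supplies the inequalities. Agent 2's continuous type
# space is replaced by a five-point grid (an assumption for this sketch);
# Agent 1's constraints alone already yield infeasibility.
from itertools import product
import numpy as np
from scipy.optimize import linprog

U1, U2 = [1, 2, 3, 4], [0.5, 1.5, 2.5, 3.5, 4.5]
WIN, LOSE = 1, 0   # allocation bit from Agent 1's point of view

# Leaves of P0 are pairs (u1 report, allocation); U2_of[l] is the projection
# of the leaf's legal domain onto Agent 2's types.
leaves = [(u1, a) for u1 in U1 for a in (WIN, LOSE)]
idx = {l: j for j, l in enumerate(leaves)}
U2_of = {(u1, a): {u2 for u2 in U2 if (u1 > u2) == (a == WIN)}
         for (u1, a) in leaves}

# Condition (1) for Agent 1: if U2(l) and U2(l') intersect, type u1 at leaf l
# must not gain from reaching l':  u1*a + t(l) >= u1*a' + t(l').
A, b = [], []
for l, lp in product(leaves, repeat=2):
    if l != lp and U2_of[l] & U2_of[lp]:
        (u1, a), (_, ap) = l, lp
        row = np.zeros(len(leaves))
        row[idx[lp]], row[idx[l]] = 1.0, -1.0    # t(l') - t(l) <= u1*(a - a')
        A.append(row)
        b.append(u1 * (a - ap))

res = linprog(np.zeros(len(leaves)), A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(None, None)] * len(leaves), method="highs")
print(res.status)   # 2 = infeasible: no transfers make P0 EPIC, as claimed
```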
Now we turn to BIC incentivizability. By definition, a protocol is BIC(p)-incentivizable only
if it computes a BIC(p)-implementable decision rule. However, the converse is not true: not every
protocol computing a BIC(p)-implementable decision rule is BIC(p)-incentivizable:
Example 2. There are two agents — Agent 1 is the Expert and Agent 2 is the Manager. In addition to the set of outcomes X, there is a set of consequences M, with |M| = |X| = k. The
Manager’s private type is his “desired” consequence m ∈ M. The Expert’s private type is a pair
(u, δ), where u : X → [0, 1] is his utility function over outcomes, and δ : M → X is a one-to-one mapping, where δ(m) specifies the outcome needed to achieve consequence m.10 For simplicity, we abstract from the Manager’s incentives by assuming that his utility over outcomes is identically zero. The decision rule f implements the outcome that yields the Manager’s desired consequence m according to the mapping δ, i.e., f(m, (u, δ)) = δ(m). The two agents’ types are
independently and uniformly distributed on their respective domains.
We assume that the realized consequence is not contractible (e.g., it is not observed by the
Manager until after the mechanism is finished), so the transfers cannot be based on it, but only on
the agents’ messages. With a selfish Expert, a direct revelation mechanism satisfies the Expert’s
Bayesian incentives without using transfers, since the Expert faces the same uniform distribution over outcomes regardless of the mapping δ he reports. Hence the decision rule is
BIC(p)-implementable. With an honest Expert, the outcome can instead be computed with the
following much simpler protocol P0 : the Manager reports the consequence m ∈ M he wants,
and the Expert reports the corresponding decision δ(m). However, if the Expert is selfish, he
will always report an outcome that maximizes his utility plus the transfer, and so no transfer rule
computed by P0 can induce him to be truthful for all possible utility functions u. Hence, P0 is
not BIC(p)-incentivizable. In fact, we will show in Section 6.3 that any BIC(p) mechanism must
take exponentially more communication than P0 and a comparable amount to a direct revelation
mechanism.
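A small simulation makes the contrast visible. The sketch below is our illustration (with k = 4 and our own variable names): under direct revelation the Expert’s expected utility is the same for every mapping he could report, whereas in P0 a selfish Expert simply names his favorite outcome.

```python
# Our small simulation of Example 2's incentive logic (illustrative, k = 4):
# under direct revelation the Expert's expected utility is the same for every
# mapping he could report, while in P0 a selfish Expert just names his
# favorite outcome.
import itertools
import random

k = 4
X = list(range(k))                 # outcomes; consequences M = range(k) too
random.seed(0)

u = [random.random() for _ in X]               # Expert's private utility on X
delta = list(range(k)); random.shuffle(delta)  # true one-to-one map M -> X

# Direct revelation: the Expert reports a mapping before learning m. For ANY
# one-to-one report d, the outcome d[m] with uniform m is uniform over X, so
# the Expert's expected utility does not depend on d: truth-telling is BIC.
for d in itertools.permutations(range(k)):
    assert abs(sum(u[d[m]] for m in range(k)) / k - sum(u) / k) < 1e-12

# Protocol P0: the Manager announces m first; a selfish Expert then reports
# whatever outcome he likes best, ignoring his knowledge delta.
m = random.randrange(k)
print(delta[m], max(X, key=lambda x: u[x]))    # honest vs selfish report
```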
4.6. Incentive communication complexity
As we just saw, the cheapest communication protocol that computes a given EPIC (BIC(p))-implementable decision rule may not be EPIC (respectively, BIC(p))-incentivizable. So the need
to satisfy incentive constraints may require an increase in the communication cost:
Definition 7. CC^{EPIC}(f), the ex post incentive communication complexity of a decision rule f, is the minimal depth d(B) of a BDM B that implements f in EPIC.
CC^{BIC}_p(f), the Bayesian incentive communication complexity of a decision rule f with state distribution p, is the minimal depth d(B) of a BDM B that implements f in BIC(p).
The communication cost of selfishness (overhead for short) can now be defined as the difference between CC(f) and CC^{EPIC}(f) in the ex post setting, or the difference between CC(f) and CC^{BIC}_p(f) in the Bayesian setting with state distribution p.
10 Even though δ does not affect the agent’s utility over X, this model still fits in our framework as explained in
footnote 5 above.
5. Overhead for ex post implementation
5.1. Characterization of the overhead
For ex post implementation, according to Corollary 1, we never need to hide information from
agents, and so a restriction to PBDMs is without loss of generality. Intuitively, this is because ex
post implementation requires that every agent follows his prescribed strategy even if he knows
the types of the other agents (i.e., the complete state). Hence the overhead comes only from the
need to reveal additional information to compute the right transfers for each agent. Therefore,
the incentive communication complexity of ex post implementation can be seen as the communication cost of the cheapest protocol among all protocols that compute an EPIC social choice
rule (i.e., decision rule + transfer rule) implementing our decision rule.
Formally, let T be the set of all possible transfer rules t : U → R^I that satisfy the ex post incentive constraints (2) for a given decision rule f, and let the function f_t : U → X × R^I be defined as f_t(u) = (f(u), t(u)). Then {f_t : t ∈ T} is the set of EPIC social choice rules implementing f, and CC^{EPIC}(f) is exactly min_{t∈T} CC(f_t). As such, it is radically different from the communication complexity of any given function or relation, as the set of acceptable transfers at one leaf depends on the transfers given at other leaves.
5.2. An exponential upper bound
We now show that the communication cost of selfishness in EPIC implementation can be
bounded by an (exponential) function of the communication complexity of the decision rule. We
do this by noting that for a given EPIC-implementable decision rule, full revelation of types,
while sufficient, may not be necessary to find incentivizing transfers. In fact, we can show that
if we are given a protocol that computes an implementable decision rule, it is sufficient to reveal
what each agent would do for all possible messages of the other agents. (That is, it suffices to
reveal each agent’s strategy in the protocol, with two strategies viewed as equivalent whenever
they prescribe the same move for all possible messages of the other agents.) In particular, in an
SBDM, an agent’s strategy is not conditioned on the other agents’ messages, and so we can state:
Proposition 1. Given an EPIC-implementable decision rule f , every simultaneous communication protocol that computes f is EPIC-incentivizable.
Proof. A simultaneous communication protocol P can be thought of as encompassing I single-agent protocols: For each agent i, let Li be the set of possible move sequences by agent i, and let gi(si) ∈ Li be the agent’s move sequence when he uses strategy si ∈ Si. The set of leaves of P can then be described as L = ∏_{i∈I} Li, and the outcome function as g(s) = (g1(s1), . . . , gI(sI)). The legal domain of leaf l = (l1, . . . , lI) takes the form
U(l) = ∏_{i∈I} Ui(li), where Ui(li) = {ui ∈ Ui : gi(σi(ui)) = li}. (4)
Now, for each agent i, fix a selection γi : Li → Ui such that γi(li) ∈ Ui(li) for all li ∈ Li. Let γ = (γ1, . . . , γI). Pick a transfer rule t : U → R^I that satisfies the incentive constraints (2), and define t̄ : L → R^I by t̄ = t ◦ γ.
We now argue that the SBDM P, H, t̄ (with any information partition H) is EPIC, using the characterization in Lemma 2. For this purpose, observe that under (4) the inequalities (1) amount to saying that for any l ∈ L, li′ ∈ Li, and any ui ∈ Ui(li), we must have
ui(x(l)) + t̄i(l) ≥ ui(x(li′, l−i)) + t̄i(li′, l−i).
Let u−i = γ−i(l−i) and ui′ = γi(li′). Since P implements f, we have x(l) = f(γi(li), u−i) = f(ui, u−i) and x(li′, l−i) = f(ui′, u−i). By (3) (which is implied by (2)), we must also have t̄i(l) = ti(γi(li), u−i) = ti(ui, u−i). Thus, the above inequality follows from (2). □
Note that the resulting SBDM also satisfies the stronger Dominant strategy Incentive Constraints (DIC) since every strategy in the strategy space is used by some type.
Corollary 2. If f is an EPIC-implementable decision rule, then: CC^{EPIC}(f) ≤ 2^{CC(f)} − 1.
Proof. For any protocol P that achieves the lower bound CC(f), by Lemma 1, there is a simultaneous communication protocol P′ computing the same decision rule such that d(P′) ≤ 2^{d(P)} − 1 = 2^{CC(f)} − 1. By Proposition 1, P′ is EPIC-incentivizable, which proves the result. □
This upper bound shows that the communication cost of selfishness is not unbounded but is at
most exponential. In particular, it implies that any EPIC implementable decision rule f that can
be computed with finite communication can also be EPIC implemented in a finite BDM (even if
the state space U is infinite).11
The upper bound of Corollary 2 can be improved upon by eliminating those strategies in the
simultaneous protocol that are not used by any type in the original protocol.
Example 3. Consider the efficient object allocation setting described in Example 1. Protocol P0 has depth 3, so by Corollary 2, there is a protocol of depth 2³ − 1 = 7 that is EPIC-incentivizable. But we can go further: Agent 1 needs only 4 strategies in P0 (one for each of his types) out of the 2³ = 8 possible strategies, and Agent 2 needs only 5 strategies out of 2⁴ = 16, each of the 5 strategies being described by a threshold of Agent 1’s announcement below which Agent 2 takes the object. Hence, full description of such strategies takes ⌈log2 4⌉ + ⌈log2 5⌉ = 5 bits. So there is a protocol of depth 5 that is EPIC-incentivizable.
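The strategy count can be verified mechanically; the following sketch is our illustration, with Agent 2’s continuous types replaced by a representative grid that realizes all five threshold behaviors.

```python
# Sketch: count the strategies actually used by some type in P0 (our
# illustration; Agent 2's continuous types are replaced by a representative
# grid that realizes all five threshold behaviors).
from math import ceil, log2

U1 = [1, 2, 3, 4]
U2 = [0.5, 1.5, 2.5, 3.5, 4.5]

# Agent 1: one two-bit report per type -> 4 used strategies (out of 2^3 = 8).
used_1 = {((u1 - 1) >> 1 & 1, (u1 - 1) & 1) for u1 in U1}

# Agent 2: "take the object iff Agent 1's report is below my value" -- only
# 5 distinct cutoff behaviors arise (out of 2^4 = 16 contingent strategies).
used_2 = {tuple(u1 < u2 for u1 in U1) for u2 in U2}

bits = ceil(log2(len(used_1))) + ceil(log2(len(used_2)))
print(len(used_1), len(used_2), bits)    # 4 5 5
```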
It is unknown, however, whether there is an EPIC-implementable decision rule f such that CC^{EPIC}(f) is even close to this exponential bound of 2^{CC(f)}. In particular, an open problem is to determine the highest attainable upper bound, and to determine if there are any instances in which the incentive communication complexity of decision rule f, CC^{EPIC}(f), is much higher than the communication complexity of f, CC(f).
5.3. Low overhead for efficient decision rules
It is well known that any efficient decision rule is EPIC-implementable (and implementable in
dominant strategies) by giving each agent a payment equal to the sum of the other agents’ utilities
from the computed outcome (as in the VCG mechanism). Following the same idea, starting with
any protocol computing an efficient decision rule f , we can satisfy EPIC by having the agents
report their utilities from the outcome computed by the protocol, and then transfer to each agent
11 Our restriction to private values and our use of the worst-case communication cost measure are both crucial for
this result. With interdependent valuations, or with the average-case communication cost, the EPIC overhead can be
arbitrarily high, as we show in Appendices B.3, B.1 respectively.
a payment equal to the sum of the other agents’ reported utilities. This approach dates back to
Reichelstein [22] and was more recently used in [2], where it is called “the Team Mechanism.”
This approach has a low communication overhead when the utility ranges of the agents have
low cardinality. For simplicity, we consider the case when agents’ utilities are given in discrete
multiples:
Definition 8. The utility function space U has discrete range with precision γ if ui(x) ∈ {k·2^{−γ}: k = 0, . . . , 2^γ − 1} for all x ∈ X, i ∈ I, ui ∈ Ui.
In this case, we can modify any efficient protocol to make it EPIC-incentivizable as follows: At each leaf l, let each agent report his utility vi = ui(x) using γ bits, and give each agent a transfer ti = Σ_{j≠i} vj. Thus, we have
Proposition 2. For a utility function space U with discrete precision-γ range, and an efficient decision rule f,
CC^{EPIC}(f) ≤ CC(f) + Iγ.
Thus, the communication cost of selfishness is at most Iγ bits.
Thus, the EPIC overhead of efficiency is bounded above by a number that does not depend
on the communication complexity of the decision rule, and it is low if the utility range has a low
cardinality.12 Other low-overhead cases are considered in Appendix A.
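A minimal sketch of this construction, with hypothetical names and toy grid utilities, is given below; agent i’s total payoff ui(x) + Σ_{j≠i} vj does not depend on his own report, which is why truthful reporting is ex post optimal.

```python
# A minimal sketch of the Proposition 2 construction (the "Team Mechanism"
# idea; names here are ours): after any protocol has computed the efficient
# outcome x, each agent appends a gamma-bit report v_i = u_i(x) and receives
# the sum of the OTHERS' reports as his transfer.

def team_mechanism(x, utilities, gamma):
    """utilities[i](x) lies on the grid {k * 2**-gamma}; returns the reports,
    the transfers t_i = sum_{j != i} v_j, and the extra bits sent."""
    reports = [u(x) for u in utilities]               # gamma bits per agent
    transfers = [sum(reports) - r for r in reports]
    return reports, transfers, gamma * len(utilities)  # the I*gamma overhead

# Agent i's total payoff is u_i(x) + sum_{j != i} v_j: it does not depend on
# his own report, so truthful reporting is ex post optimal, and the transfer
# aligns every agent with the total surplus from the computed outcome.
utils = [lambda x: 0.25, lambda x: 0.5]    # toy grid utilities, gamma = 4
print(team_mechanism("x*", utils, 4))      # ([0.25, 0.5], [0.5, 0.25], 8)
```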
6. Overhead for Bayesian implementation
6.1. Characterization of the overhead
Intuitively, with Bayesian implementation, agents might be better off lying if they have too
many contingent deviations, so the overhead comes from the need to hide information from the
agents, and the restriction to PBDM is not without loss of generality. Given a protocol, we can
minimize the set of deviations by having a maximally coarse information partition, subject to
perfect recall and to giving each agent enough information to compute his prescribed move at
each step. (E.g., the information partition in an SBDM is maximally coarse because each agent
never observes the others’ moves during the execution of the protocol.) However, in general even
the maximally coarse information partition may not ensure Bayesian incentives. In such cases,
the need to hide information from the agents will require more information to be communicated
simultaneously rather than sequentially, which may create a substantial BIC overhead.13
In contrast, computing incentivizing transfers is not a problem in itself with Bayesian implementation, as the following useful result suggests:
12 The result extends easily to decision rules f that are “affine maximizers” [4], which can be interpreted as efficient
upon rescaling the agents’ utilities and adding a fictitious agent with a known utility. [23] shows that all decision rules
that are DIC implementable on the universal domain are affine maximizers.
13 The idea of hiding as much information as possible to maximize incentives can be traced back to Myerson’s “communication equilibrium” [20] where a mediator receives the types of the agents and tells each of them only what his
prescribed move is. Likewise, in a protocol, if at each node n ∈ Ni agent i learns only what his prescribed move function
is (the function σi (.)(n) from types Ui to moves in {0, 1}), it yields maximally large information sets and minimizes the
set of his feasible deviations.
Lemma 3. Let P be a BIC(p)-incentivizable protocol computing decision rule f, for some product state distribution p. Then the protocol P′ obtained from P by stopping its execution as soon as the outcome is known is also BIC(p)-incentivizable.
Proof. It follows by induction using the following statement: Let P be a BIC(p)-incentivizable protocol computing decision rule f, and suppose the left and right children of some node n are leaves l and l′ that satisfy x(l) = x(l′) = x̄; then the protocol P′ obtained from P by removing l, l′ and making n a leaf with outcome x(n) = x̄ is also BIC(p)-incentivizable. To prove this statement, consider a BIC(p) BDM B = P, H, t and let h ∈ Hi be the information set that contains n. We construct from it a BIC(p) BDM B′ = P′, H′, t′ in the following way. First note that the incentives of all agents except agent i can be satisfied in B′ by giving them at leaf n the expected value of their transfers at l and l′, so we focus on satisfying the incentives of agent i. If h = {n}, agent i’s incentives in the original BDM require that ti(l) = ti(l′) (because both leaves have a non-empty domain), and so we define H′ = H \ {h}, ti′(n) = ti(l), with the same values as t everywhere else. Clearly, B′ satisfies the incentives of agent i, since it is exactly the same for him to be at n, l, or l′ in B. If instead h contains more nodes, let q ∈ (0, 1) be the probability for agent i that he is at node n given that he is at information set h. First note that BIC(p) is satisfied by the BDM P, H, t̃, where t̃ is the same as t except that t̃i(l) = t̃i(l′) = ti(l), and for every leaf l′′ that can be reached after a right move from a node in h \ {n}, t̃i(l′′) = ti(l′′) + (ti(l′) − ti(l))q/(1 − q). Hence, by defining ti′(n) = ti(l) and ti′ as t̃i everywhere else, the incentive constraints of agent i are satisfied in B′. □
The lemma means that if we have a BIC(p) BDM and if at some node n of this BDM we have
enough information to compute the outcome, we also have enough information to compute an
incentivizing transfer t (n), and so we do not need any more communication to satisfy the agents’
incentive constraints.14
In general, it is hard to check if a protocol is BIC(p)-incentivizable.15 However, a simple
sufficient condition for a BDM to be BIC(p) is that no information that an agent i could possibly
receive during the execution of the BDM (whatever strategy si ∈ Si he uses) could ever make his
prescribed strategy suboptimal. Formally:
Proposition 3. A protocol P computing decision rule f is BIC(p)-incentivizable with information partition H for some product distribution p if there is a transfer rule t : U → R^I such that, for every agent i ∈ I and each of his information sets h ∈ Hi that is reached with a positive probability, the decision rule f with transfer ti satisfies BIC(p^h) for agent i with the updated distribution p^h over U at h, i.e.,16
∀ui, ui′ ∈ Ui: E_{u−i}[ui(f(u)) + ti(u) | h] ≥ E_{u−i}[ui(f(ui′, u−i)) + ti(ui′, u−i) | h].
Proof. Let t̄i(l) = E_u[ti(u) | u ∈ U(l)], where the expectation uses distribution p. Then the BDM P, H, t̄ is BIC(p). Indeed, for any possible deviation, and whatever information he learns about
14 The assumed independence of types is crucial for this result. We will consider correlated types in Appendix B.4.
15 Verifying that a given BDM is BIC(p) could be done using the “one-step deviation principle” of dynamic program-
ming. Note, however, that we need to consider the strategy at every information set of every type of the agent, including
the types that are not “legal,” i.e., are not supposed to reach this information set given their equilibrium strategies.
16 Note that we use the interim definition of Bayesian incentive-compatibility: each agent i must be maximizing his expected utility at all information sets h, including those at which his type ui has probability p^h_i(ui) = 0.
the state, agent i will always (weakly) regret not having been truthful. Hence he cannot have a profitable deviation. □
Note that the transfer rule t used in the proposition need not be computed by the protocol —
we only need its existence. By the same logic as in Lemma 3, the protocol will then compute
some other transfer rule that will satisfy the incentives of the agents at each step. Also, note that
to apply Proposition 3, it suffices to check the BIC(p^h) inequalities for each agent i only at his information sets h at which his move may be his last one. This will immediately imply BIC(p^{h′}) at all the other information sets h′.
6.2. An exponential upper bound
In the same spirit as with ex post implementation, we obtain the following result:
Proposition 4. Given a BIC(p)-implementable decision rule f for some product distribution p,
any simultaneous communication protocol that computes f is BIC(p)-incentivizable.
Proof. Consider the information partition H of an SBDM with this protocol. With this information partition, since no agent learns anything about the other agents’ types during the execution
of the protocol, the result follows immediately from Proposition 3 and from the definition of
BIC(p)-implementability. □
Corollary 3. If f is a BIC(p)-implementable decision rule for some product distribution p, then CC^{BIC}_p(f) ≤ 2^{CC(f)} − 1.
Proof. Given the protocol P that achieves the lower bound CC(f), by Lemma 1, there is a simultaneous communication protocol P′ that computes the same decision rule such that d(P′) ≤ 2^{d(P)} − 1 = 2^{CC(f)} − 1. By Proposition 4, P′ is incentivizable, which proves the result. □
This exponential upper bound implies, in particular, that any BIC implementable decision
rule f that can be computed with finite communication can also be BIC implemented in a finite
BDM (even if the state space U is infinite).17
Example 4. Consider the Manager–Expert setting of Example 2. Protocol P0 has depth 2⌈log2 k⌉, so by Corollary 3 there exists a simultaneous communication protocol of depth 2^{2⌈log2 k⌉} − 1 ∼ k² that computes the same decision rule and that is BIC(p)-incentivizable. Note that the communication cost of this protocol is finite whereas full revelation would have required infinite communication (because of the continuous utility range [0, 1] of the Expert).
6.3. The upper bound is tight
Unlike in the ex post implementation setting, where we do not know if the upper bound is
tight, in the Bayesian case we have an example that achieves the exponential upper bound.
17 Our restriction to independent types and our use of the worst-case communication cost measure are both crucial for
this result. With correlated types, or with the average-case communication cost, the BIC overhead can be arbitrarily high,
as we show in Appendices B.4, B.1 respectively.
Proposition 5. For any integer k > 0, there exists a BIC(p)-implementable decision rule f : U → X such that CC(f) ≤ 2⌈log2 k⌉ but CC^{BIC}_p(f) ≥ 0.5 log2 C(k, k/4) with a uniform state distribution p, where C(k, k/4) denotes the binomial coefficient.
Proof. See Appendix C.1. □
By Stirling’s formula, 0.5 log2 C(k, k/4) is asymptotically equivalent to Ak (where A ≈ 0.41), which is exponentially higher than log2 k. This shows that our exponential upper bound is essentially tight.
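The asymptotics are easy to check numerically; the sketch below is our gloss on the Stirling computation, where the limiting constant is A = H(1/4)/2 with H the binary entropy function.

```python
# Our numerical gloss on the Stirling estimate: 0.5 * log2 C(k, k/4) / k
# approaches A = H(1/4)/2 ≈ 0.405, with H the binary entropy function.
from math import comb, log2

for k in (16, 64, 256, 1024, 4096):
    bound = 0.5 * log2(comb(k, k // 4))
    print(k, round(bound / k, 4))   # ratio tends to ~0.405, vs (log2 k)/k -> 0
```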
Proposition 5 is proven in Appendix C.1 by formalizing the following example:
Example 5. Consider the Manager–Expert setting of Example 2. The communication complexity of the decision rule is CC(f) ≤ d(P0) = 2⌈log2 k⌉. We have seen in Example 4 that there exists a BIC(p)-incentivizable protocol with a depth of about k². We can reduce the communication cost somewhat by using the following BIC(p) BDM: the Expert reports his entire mapping δ, and then the Manager announces the outcome δ(m). But the communication cost of the Expert reporting his mapping δ is of order log2 k! ∼ k log2 k, which is still exponentially higher than CC(f). We prove that any mechanism satisfying the Expert’s Bayesian incentive constraints cannot have a significantly lower communication cost — it must be at least of order k, which is still exponentially higher than CC(f). The intuition for the proof has two parts: (1) Transfers cannot be used effectively to counterbalance the Expert’s private utilities from the outcomes, since the utility range of the Expert is continuous, and so cannot be extracted with finite communication, and (2) Any mechanism with a lower communication cost than of order k must significantly reduce the information revelation by the Expert, which can be achieved only by giving the Expert too much information too soon about the desired consequence m, allowing him to infer how his knowledge will be used and to bias his report according to his preferences.
Proposition 5 demonstrates that the communication cost of selfishness can be prohibitive in
the Bayesian setting. After this negative result, we conclude with the following positive one.
6.4. No overhead for EPIC-implementable decision rules
In this case (which includes all efficient decision rules and all “affine maximizers” [4]), there
is no communication cost of selfishness.18
Proposition 6. If P is a protocol that computes an EPIC-implementable decision rule f , then P
is BIC(p)-incentivizable for every product state distribution p.
Proof. This follows from Proposition 3, as the decision rule f is EPIC-implementable with
some particular transfer rule t, and hence f is also BIC(p)-implementable for every distribution
p with the same transfer rule t. □
18 While the mechanism constructed in Proposition 6 need not be budget-balanced, it is in fact possible to construct BIC transfers that achieve budget balance (i.e., Σ_{i∈I} ti(u) = 0 for all u) along the lines suggested in [2, Proposition 2].
Example 6. Consider the efficient object allocation setting described in Example 1 with some product distribution p over U = U1 × U2. Recall that the efficient allocation rule f is EPIC-implementable, e.g., using the Vickrey transfers that charge each agent a price for the object
that is equal to the other’s reported valuation, and charge nothing when he does not get the
object. Recall also that protocol P0 , which computes f without revealing Agent 2’s value, is
not EPIC-incentivizable. We can still EPIC-incentivize Agent 2 in P0 by having him pay a price
for the object that is equal to Agent 1’s announced valuation. As for Agent 1, we cannot EPIC-incentivize him, but we can BIC(p)-incentivize him by having him pay a price for the object that is equal to the expected valuation of Agent 2 conditional on its being below Agent 1’s announcement.
Intuitively, this price makes him internalize the expected externality he imposes on Agent 2 with
his announcement (assuming that Agent 2 reports truthfully).
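For a uniform distribution of Agent 2’s value on [0, 5], this BIC price has a closed form; a one-line sketch (our illustration):

```python
# Our one-line illustration of Example 6's BIC price for Agent 1, assuming
# Agent 2's value is uniform on [0, 5]: E[u2 | u2 < u1] = u1 / 2.
def bic_price(u1: float) -> float:
    """Expected value of u2 ~ Uniform[0, 5] conditional on u2 < u1."""
    return u1 / 2.0

for u1 in (1, 2, 3, 4):
    print(u1, bic_price(u1))   # the expected externality Agent 1 imposes
```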
Proposition 6 also implies that there is no communication cost of selfishness with respect to
the average-case communication cost measure (defined in Appendix B.1) when we use Bayesian
implementation for an EPIC-implementable decision rule.
7. Conclusion
We have examined the communication cost of selfishness in the independent private valuations
settings. On the one hand, with ex post implementation, we have shown that the overhead comes
only from the need to compute a transfer rule that will satisfy the agents’ incentives. On the other
hand, with Bayesian implementation, we have shown that we never need additional information
to compute transfers, and the overhead comes only from the need to hide information from the
agents. Quantitatively, the communication cost of selfishness turns out to be at most exponential,
and this upper bound is tight for Bayesian implementation.
Also, we have considered some special cases in which the communication cost of selfishness
is low. In the ex post setting, it includes the case where the decision rule is efficient and the utility
ranges of the agents have low cardinality. In the Bayesian setting, it includes the case where the
decision rule is EPIC-implementable: the communication cost of selfishness in this case is zero.
Two other low-overhead cases for the ex post setting are considered in Appendix A.
Finally, in Appendix B we consider some extensions of our basic setting. In particular, we
show that the communication cost of selfishness is unbounded when we allow interdependent
valuations with ex post implementation or correlated types with Bayesian implementation, or
when we consider the average-case communication cost measure with either implementation
concept. A similar conclusion for the problem of implementing a decision correspondence rather than a decision rule is shown in [8].
The main open questions raised by the paper are the following:
1. How high is the communication cost of selfishness with ex post implementation?
2. From a practical point of view, how broad are the cases in which the communication cost of
selfishness is low?
3. Can the communication cost of selfishness be reduced substantially if agents’ utilities have
a given finite precision (or, equivalently, their incentive constraints need to be satisfied only
approximately)?
We hope that these questions will be addressed in future research.
Acknowledgments
The authors gratefully acknowledge the support of the National Science Foundation (grants
SES 0318447, ITR 0427770). Preliminary results from this work were reported in an extended
abstract [7]. The authors thank Kester Tong for excellent research assistance.
Appendix A. Low-overhead cases for ex post implementation
In this appendix, we consider two cases in which the communication cost of selfishness is
low. In the first section, we consider agents whose utilities are linear in a single privately known
parameter, and decision rules that are efficient or approximately efficient. In the second section,
we consider the case in which there are only two agents whose utilities are given with a fixed
precision, and arbitrary EPIC-implementable decision rules.
A.1. Efficient decision rule: single-parameter agents
Here we consider decision rules f that are ε-efficient for some ε ≥ 0, i.e., that always choose an outcome x that maximizes the sum of utilities $\sum_{i \in I} u_i(x)$ within ε. Furthermore, we assume that the agents are "single-parameter," i.e.
Definition 9. Agent i is a single-parameter agent if there exists some set $V_i \subset \mathbb{R}$ and some functions $a_i, b_i: X \to \mathbb{R}$ such that the set of agent i's possible utility functions on X takes the form

$U_i = \{v_i a_i + b_i : v_i \in V_i\}.$
It turns out that we can incentivize single-parameter agents while preserving ε-efficiency with an overhead that is at most linear in the communication complexity of the decision rule:
Proposition 7. Consider an environment with I single-parameter agents. Let f be an ε-efficient decision rule. Then there exists an ε-efficient decision rule f′ for which

$CC^{EPIC}(f') \le I \cdot (CC(f) + 1).$
Proof. Consider a protocol P computing f. For each node l of the protocol and each agent i, let

$\underline{v}_i(l) = \inf\{v_i \in V_i : v_i a_i + b_i \in U_i(l)\}, \qquad \overline{v}_i(l) = \sup\{v_i \in V_i : v_i a_i + b_i \in U_i(l)\}.$

Given that the agents' utilities are linear in the $v_i$'s, the outcome x(l) must be ε-efficient for any $(v_1, \ldots, v_I) \in \prod_{i \in I} [\underline{v}_i(l), \overline{v}_i(l)]$. For each agent i, the thresholds $\underline{v}_i(l), \overline{v}_i(l)$ for l ∈ L partition the set $V_i$ into at most 2|L| intervals. Consider now the simultaneous communication protocol P′ in which each agent i reports which of these intervals his $v_i$ lies in. This protocol allows us to find a leaf l ∈ L of P for which $(v_1, \ldots, v_I) \in \prod_{i \in I} [\underline{v}_i(l), \overline{v}_i(l)]$ and implement the outcome x(l). The new protocol P′ computes an ε-efficient outcome, with each agent sending at most $\lceil \log_2(2|L|) \rceil \le 1 + d(P)$ bits. Furthermore, since it is a simultaneous communication protocol, P′ is EPIC-incentivizable by Proposition 1. □
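To illustrate the construction in the proof, here is a toy Python sketch on a hypothetical three-leaf protocol; the leaf boxes, the outcome labels, and all helper names are invented for illustration only:

    import math
    from bisect import bisect_right

    # Each leaf l is abstracted by, for every agent i, the interval of parameters
    # v_i that can reach it, together with its (epsilon-efficient) outcome x(l).
    leaves = [
        ({1: (0.0, 0.5), 2: (0.0, 1.0)}, "x_A"),
        ({1: (0.5, 1.0), 2: (0.0, 0.5)}, "x_B"),
        ({1: (0.5, 1.0), 2: (0.5, 1.0)}, "x_C"),
    ]
    agents = (1, 2)

    # The leaf thresholds cut each V_i into at most 2|L| cells.
    cuts = {i: sorted({b for box, _ in leaves for b in box[i]}) for i in agents}

    def report(i, v_i):
        # Agent i's single simultaneous report: the cell of his partition
        # containing v_i, i.e. at most ceil(log2(2|L|)) <= 1 + d(P) bits.
        return min(bisect_right(cuts[i], v_i) - 1, len(cuts[i]) - 2)

    def decode(reports):
        # Pick a leaf whose box contains the reported cells (via cell midpoints).
        mid = {i: (cuts[i][reports[i]] + cuts[i][reports[i] + 1]) / 2 for i in agents}
        return next(x for box, x in leaves
                    if all(box[i][0] <= mid[i] <= box[i][1] for i in agents))

    print(decode({i: report(i, v) for i, v in [(1, 0.7), (2, 0.2)]}))  # "x_B"
    print(math.ceil(math.log2(2 * len(leaves))))                      # 3 bits per agent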
For example, in the problem of allocating one indivisible object among I agents without externalities, the agents are single-parameter agents, and the theorem implies that selfishness multiplies the communication complexity of achieving a given efficiency approximation by at most I.^19

^19 This result could also be derived using a theorem in [3] that shows that any sequential communication in this setting can be replaced with simultaneous communication while multiplying the complexity by at most I, using the fact that any simultaneous communication protocol computing an EPIC-implementable decision rule is EPIC-incentivizable.
A.2. Two agents with fixed utility precision
Recall from (3) that when decision rule f is EPIC-implementable with transfer rule t, the transfer $t_i$ to agent i can be written as $t_i(u) = \tau_i(f(u), u_{-i})$. Furthermore, if the utilities have discrete range, we can restrict attention to discrete transfers. With two agents, agent −i can output the transfer at the end of any protocol computing f(u), and so we obtain a BDM implementing f. This argument yields
Proposition 8. Suppose that I = 2 and that the utility function space U has discrete range with precision γ. Then for every EPIC-implementable decision rule f,

$CC^{EPIC}(f) \le CC(f) + 2(\gamma + 1).$
Proof. We can fix $\tau_i(x_0, u_{-i}) = 0$ for every $u_{-i}$, for an arbitrary fixed outcome $x_0 \in X$. Then EPIC implies that $|\tau_i(x, u_{-i})| \le 1 - 2^{-\gamma}$. Furthermore, since utilities have discrete range with precision γ, we can round down all transfers to multiples of $2^{-\gamma}$ while preserving EPIC. Then reporting such a transfer takes 1 + γ bits. □
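As a quick numeric sanity check of the bit count (with a hypothetical precision γ = 4):

    import math

    gamma = 4
    step = 2 ** -gamma
    # Transfers normalized by tau_i(x0, .) = 0 lie in [-(1 - step), 1 - step] and,
    # rounded down to multiples of step, take at most 2^(gamma+1) - 1 distinct values.
    values = [k * step for k in range(-(2 ** gamma - 1), 2 ** gamma)]
    assert len(values) == 2 ** (gamma + 1) - 1
    print(math.ceil(math.log2(len(values))))  # 1 + gamma = 5 bits suffice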
Appendix B. Extensions
In this section, we consider several extensions of our analysis, namely to average-case communication cost, dominant strategy implementation, interdependent valuations, and correlated
types. We indicate when our previous results continue to hold, and when they need to be modified.
B.1. Average-case communication cost
The communication cost measure used so far is the number of bits sent during the execution
of a protocol in the worst case. We may also be interested in the average-case communication
cost, given some probability distribution over the states:
Definition 10. If u ∈ U, let d(P, u) be the depth of the (unique) leaf l in protocol P such that u ∈ U(l). For each state probability distribution p over U, we define the average communication cost of P as $ACC_p(P) = E_{\tilde{u}}[d(P, \tilde{u})]$, where ũ is drawn from p. Furthermore, given a function f: U → X, we define the average communication complexity of f given state distribution p as $ACC_p(f) = \min_{P: Fun(P) = f} ACC_p(P)$.

$ACC_p^{EPIC}(f)$ and $ACC_p^{BIC}(f)$, the average ex post and Bayesian incentive communication complexity of a decision rule f with state distribution p, are defined as the minimal average communication cost $ACC_p(B)$ over all BDMs B that implement f in EPIC or BIC(p), respectively.
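As an illustration, the following Python fragment computes $ACC_p(P)$ for a hypothetical four-leaf protocol (the tree and the leaf probabilities are invented):

    from fractions import Fraction

    # (depth d(P, u) at the leaf, probability Pr{u in U(l)} of reaching it)
    toy_leaves = [
        (1, Fraction(1, 2)),
        (2, Fraction(1, 4)),
        (3, Fraction(1, 8)),
        (3, Fraction(1, 8)),
    ]

    def acc(leaves):
        # Average communication cost ACC_p(P) = E[d(P, u)].
        assert sum(p for _, p in leaves) == 1
        return sum(d * p for d, p in leaves)

    print(acc(toy_leaves))  # 7/4, while the worst-case cost CC(P) = max depth = 3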
The average-case communication cost of selfishness is the difference between $ACC_p(f)$ and $ACC_p^{EPIC}(f)$ with ex post implementation, and the difference between $ACC_p(f)$ and $ACC_p^{BIC}(f)$ with Bayesian implementation.

Note that for every protocol P and for every state distribution p, $ACC_p(P) \le CC(P)$. It follows immediately that for every decision rule f and every state distribution p: $ACC_p(f) \le CC(f)$, $ACC_p^{EPIC}(f) \le CC^{EPIC}(f)$, and $ACC_p^{BIC}(f) \le CC_p^{BIC}(f)$.
We now show that with ex post implementation, the communication cost of selfishness is
unbounded, even if we restrict attention to the case of two agents, an efficient decision rule, and
the uniform state distribution.
Proposition 9. For every α > 0 there exists an efficient decision rule f with two agents such that, given the uniform state distribution p: $ACC_p(f) < 4$ but $ACC_p^{EPIC}(f) > \alpha$.
Proof sketch. Consider the problem of allocating an indivisible object to one of the two agents, as in Example 1 above, but with the agents' types drawn independently from a uniform distribution over $U_1 = U_2 = \{k \cdot 2^{-\gamma}: k = 0, \ldots, 2^{\gamma} - 1\}$. Let f be the "efficient" decision rule. f can be computed using a bisection protocol with an average-case communication cost of at most 4 bits, whatever the precision γ. However, any BDM that implements f in EPIC must compute the exact valuation of at least one of the agents with a positive probability (say, at least 1/32). This will take communication that is of the order of γ bits. See Appendix C.2 for the complete proof. □
Our average-case analysis can be extended to infinite protocols. While we have not formally
defined such protocols, we can imagine that there are some such protocols whose average-case
cost is finite. E.g., we can use such a protocol to find an efficient allocation for an object between
two agents whose valuations are uniformly distributed on [0, 1] using only 4 bits on average.
However, no protocol having a finite average-case communication cost is EPIC-incentivizable in
this case.
The average-case communication cost of selfishness is also unbounded for Bayesian implementation, even with only two agents:
Proposition 10. For any α > 0 and ε > 0, there exists a BIC(p)-implementable decision rule f with two agents such that: $ACC_p(f) < 1 + \varepsilon$ but $ACC_p^{BIC}(f) > \alpha$.
Proof. Consider the rule f used to prove Proposition 5. The rule satisfies $ACC_p(f) \le CC(f) \le 2\lceil\log_2 k\rceil$, but also satisfies $ACC_p^{BIC}(f) \ge 0.5\log_2\binom{k}{k/4}$, as shown in Appendix C.1. Let us construct the following rule f′ from the rule f, by extending the type of Agent 2 to include a bit b that is equal to 1 with probability $0.5\varepsilon/\lceil\log_2 k\rceil$, and by adding an outcome $x_0$ that always gives utility 0 to every agent for every type. f′ dictates $x_0$ whenever b = 0 and dictates the same outcome as f whenever b = 1. We get by construction:

$ACC_p(f') \le 1 \cdot \bigl(1 - 0.5\varepsilon/\lceil\log_2 k\rceil\bigr) + 2\lceil\log_2 k\rceil \cdot 0.5\varepsilon/\lceil\log_2 k\rceil < 1 + \varepsilon.$

And we also get that $ACC_p^{BIC}(f') \ge (\varepsilon/4) \cdot \log_2\binom{k}{k/4} / \lceil\log_2 k\rceil$, which grows to infinity as k increases. Hence, by choosing k sufficiently large, we have constructed an example that satisfies the proposition. □
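A numeric sketch of the two bounds in this proof, for a hypothetical k and ε:

    from math import comb, ceil, log2

    k, eps = 2 ** 10, 0.1
    logk = ceil(log2(k))
    p_b = 0.5 * eps / logk                          # probability that b = 1
    acc_upper = 1 * (1 - p_b) + 2 * logk * p_b      # bound on ACC_p(f')
    acc_bic_lower = (eps / 4) * log2(comb(k, k // 4)) / logk
    print(acc_upper < 1 + eps, acc_bic_lower)       # True, and ~2.1 already at k = 1024

The second quantity grows without bound in k, exactly as the proof requires.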
To prove Proposition 10, we constructed an artificial decision rule where, with high probability, the communication cost is very low. However, Appendix C.1 shows that for a more natural decision rule (described in Example 5) with a uniform probability distribution over types, the communication cost of selfishness can be exponential for Bayesian implementation. Also, Proposition 9 above used the standard auction setting with a uniform probability distribution. These cases suggest that the average-case communication cost of selfishness can be severe even in simple and "standard" cases.
B.2. Dominant strategy implementation
Definition 11. BDM ⟨P, H, t⟩ is Dominant strategy Incentive Compatible (DIC) if in any state u ∈ U, the strategy $s_i^* = \sigma_i(u_i)$ maximizes the utility of agent i, regardless of the strategies of the other agents:

$\forall i \in I,\ \forall s \in S: \quad u_i(x(g(s_i^*, s_{-i}))) + t_i(g(s_i^*, s_{-i})) \ \ge\ u_i(x(g(s))) + t_i(g(s)).$
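As an illustration of this inequality, the following Python sketch brute-forces the DIC check for a toy second-price-style mechanism; the mechanism and all identifiers are hypothetical and not part of our formal model:

    from itertools import product

    types = [0, 1, 2]                  # each agent's type = his valuation

    def outcome(s):                    # g(s): higher report wins, ties to agent 0
        return 0 if s[0] >= s[1] else 1

    def transfer(i, s):                # winner pays the other's report
        return -s[1 - i] if outcome(s) == i else 0

    def payoff(i, u_i, s):             # u_i(x(g(s))) + t_i(g(s))
        return (u_i if outcome(s) == i else 0) + transfer(i, s)

    # DIC: the truthful report maximizes payoff against every report of the other.
    for i, u_i in product(range(2), types):
        for s_other in types:
            truth = (u_i, s_other) if i == 0 else (s_other, u_i)
            best = max(payoff(i, u_i, (d, s_other) if i == 0 else (s_other, d))
                       for d in types)
            assert payoff(i, u_i, truth) == best
    print("toy second-price mechanism passes the DIC check")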
Since DIC is stronger than EPIC, it is immediate that the average-case communication cost of
selfishness is also unbounded. Furthermore, we have shown in Section 5.2 that, as with Bayesian
and ex post implementations, any simultaneous communication protocol that computes a DIC-implementable decision rule is DIC-incentivizable (even with interdependent valuations). Hence
the exponential upper bound on the communication cost of selfishness holds with dominant
strategy implementation, and can be proved along the same lines as the proof for ex post implementation.
Note that, contrary to ex post implementation, the restriction to perfect information is not
without loss of generality for dominant strategy implementation. Intuitively, as in the Bayesian
case, we need to hide information from the agents to reduce the set of available strategies. Also,
as in the Bayesian case, the incentives of the agents can be maximized by using a maximally
coarse information partition. However, the reasons behind the need to reduce the number of deviations are different: in the Bayesian case, we need to reduce the number of strategies of an agent to satisfy the incentives of the agent himself, whereas in the dominant strategy case, we need to reduce the number of strategies available to an agent to satisfy the incentives of the other agents.
B.3. Interdependent values
With interdependent values, an agent’s utility function is determined not only by his type, but
by the types of the other agents as well. In this case, the overhead may be unbounded for ex post
implementation. We illustrate this with the following example.
Example 7. Consider the efficient object allocation setting with interdependent values. For example, the object is a used car that initially belongs to Agent 1. Agent 1's value for this car is his private type $v_1 \in \{k \cdot 2^{-\gamma}: k = 1, \ldots, 2^{\gamma}\}$. Agent 2's type is a bit c ∈ {1, 0} that describes whether he is a mechanic (c = 1) or not (c = 0). If c = 1, Agent 2 is able to repair the car, and his value for it is $v_2^1 = v_1 + 2^{-\gamma}$. However, if c = 0, Agent 2 cannot repair the car, and his valuation for it is $v_2^0 = v_1 - 2^{-\gamma}$. The efficient outcome is to give the car to Agent 2 if and only if he is a mechanic (c = 1), and with honest agents, this outcome can be computed with a fixed communication of just 1 bit: Agent 2 reports his type c. If agents are selfish, the rule is still EPIC-implementable
with full revelation and the following transfer rule: Agent 1 never pays or receives any transfer, but Agent 2 must pay $v_1$ (to a third party) if he gets the car (i.e., if he announces that he is a mechanic). However, any protocol that is EPIC-incentivizable needs to reveal Agent 1's value $v_1$ within $2^{-\gamma}$ to satisfy the ex post incentive constraints of Agent 2, which takes at least γ − 1 bits. (This can be made formal along the lines of the proof of Proposition 9.) Hence the communication cost of selfishness is arbitrarily high in this case.
As far as Bayesian implementation is concerned, however, we still have an exponential upper
bound on the overhead, provided that the types are independently distributed. Indeed, Propositions 3, 4 and Corollary 3 all hold for this case.
B.4. Correlated types
Our analysis of the overhead with ex post implementation need not be changed in this case,
since our results do not depend on the state distribution. As for Bayesian implementation, the
communication cost of selfishness with correlated types may be unbounded:
Example 8. There are two agents, and we consider the incentives of only Agent 1, who privately knows a string $w \in \{0, 1\}^m$. The desired outcome is the parity of the string, i.e., $f(w) = (\sum_{j=1}^{m} w(j)) \bmod 2$. Agent 1 gets a zero utility from outcome 0, and has utility that is either 1 or −1 for outcome 1, both with the same probability 1/2. Agent 2's type is an integer k between 1 and m, and the value of w(k). The distribution of k and w is uniform. The communication complexity of the rule is 1 bit (Agent 1 just outputs the outcome). Also, the direct revelation BDM satisfies BIC(p) with a high monetary punishment for Agent 1 "caught" lying, i.e., announcing a wrong value of w(k). However, any BIC(p)-incentivizable protocol must have depth at least $\log_2 m$, as otherwise Agent 1 would have fewer than $2^m$ strategies, and hence there would be two different types w and w′ that share the same prescribed strategy in the protocol. Note that they must be of the same parity, say 0. But in this case, we could construct a type w′′ that agrees on all the indexes where w and w′ agree but which has parity 1. There would be no way to prevent Agent 1, when he has type w′′ and prefers outcome 0, from choosing the strategy of types w and w′ (without preventing w or w′ from being truthful). Hence the communication cost of selfishness can be made arbitrarily high by choosing m large enough.
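The pooling argument in this example can be checked mechanically. Here is a small Python sketch with hypothetical strings w and w′ that are assumed to share a prescribed strategy:

    m = 6
    w  = [0, 1, 1, 0, 0, 0]    # parity 0
    w2 = [1, 0, 1, 0, 0, 0]    # parity 0; assumed to share w's strategy
    # Flip one coordinate where w and w2 disagree to obtain a parity-1 type:
    j = next(i for i in range(m) if w[i] != w2[i])
    w3 = w[:]
    w3[j] = 1 - w3[j]
    assert sum(w3) % 2 == 1
    # Agent 2 observes a single index k; w3 never contradicts both w and w2 at
    # once, so its deviation to their common strategy cannot be punished without
    # also punishing w or w2 for reporting truthfully.
    assert all(w3[k] in (w[k], w2[k]) for k in range(m))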
We can attribute this increase in the overhead to the failure of Lemma 3 with correlated types:
we cannot stop a BIC(p)-incentivizable protocol when the outcome is computed, and hence the
computation of the transfers may cause an increase in the communication requirements. Intuitively, this example offers one reason why surplus-extraction mechanisms for correlated types
proposed by Cremer and McLean [6] may not be practical: in some cases, such mechanisms may
require prohibitive communication.
Appendix C. Technical proofs
We begin with a simple lemma that is useful for bounding below the average-case communication complexity $ACC_p(P)$ of a protocol P (as defined in Appendix B.1):
Lemma 4. If protocol P has a subset L′ of its leaves whose aggregate probability is at least α and the probability of each leaf from L′ is at most β, then $ACC_p(P) \ge -\alpha\log_2\beta$.
Proof. We can consider the leaves L of P as the possible realizations of a random variable l̃, each leaf l ∈ L having probability p(l) = Pr{u ∈ U(l)}. Shannon's theory of information ([25], surveyed in [5]) implies that $ACC_p(P)$ is bounded below by the entropy of l̃, defined as $H(\tilde{l}) = -\sum_{l \in L} p(l)\log_2 p(l)$. Under our assumptions, this entropy is in turn bounded below as follows:

$H(\tilde{l}) \ \ge\ -\sum_{l \in L'} p(l)\log_2 p(l) \ \ge\ -\sum_{l \in L'} p(l)\log_2 \beta \ \ge\ -\alpha\log_2\beta. \qquad \square$
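A numeric illustration of the lemma on a hypothetical leaf distribution:

    from math import log2

    p = [0.5, 0.25, 0.125, 0.125]              # leaf probabilities of some protocol
    subset = [0.125, 0.125]                    # the leaves with p(l) <= beta
    alpha, beta = sum(subset), max(subset)
    entropy = -sum(q * log2(q) for q in p)     # lower-bounds ACC_p(P)
    assert entropy >= -alpha * log2(beta)      # 1.75 >= 0.75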
C.1. Proof of Proposition 5 (exponential overhead for Bayesian implementation)
Consider the Manager–Expert setting of Examples 2 and 5. Recall from Example 5 that $CC(f) \le 2\lceil\log_2 k\rceil$, and that f is BIC(p)-implementable. We now prove that $CC_p^{BIC}(f) \ge 0.5\log_2\binom{k}{k/4}$ by proving the stronger statement $ACC_p^{BIC}(f) \ge 0.5\log_2\binom{k}{k/4}$. (Here $ACC_p^{BIC}(f)$ is the average Bayesian incentive communication complexity, as defined in Appendix B.1.)
First we observe that in a finite BDM, the Expert can communicate "little information" about u. Formally, even if we execute the BDM for all possible values of δ and m, at the conclusion the set U′ ⊂ U of still possible values of u must have a positive measure with probability 1. For any such positive-measure set U′, we can restrict attention to the protocol's execution when we know that u ∈ U′ (in which case the protocol reveals no further information about u). Informally, this corresponds to a modified communication protocol in which the Expert announces upfront that u ∈ U′ (and this announcement is not counted towards the communication cost), and then the agents proceed to communicate information about δ and m. We will bound below the protocol's average communication complexity conditional on any positive-measure set U′, and this will also bound the unconditional average communication complexity.
Now, let us focus on the set H_1 of the Expert's infosets. For each infoset h ∈ H_1, let M_h ⊂ M denote the set of legal m's at h — i.e., those the Expert still considers possible at h. (Thus, M_h is the union of the legal m's at all nodes in h.) Let Δ_h denote the set of legal δ's at infoset h — i.e., those for which the Expert can arrive at h (which must be the same in every node in h, since the Expert cannot distinguish among them).^20 Since information about m is communicated by the Manager and information about δ is communicated by the Expert, the set of legal (m, δ) pairs at infoset h must be M_h × Δ_h.

^20 It is important to keep in mind that in our terminology the protocol does not include the "moves of nature" informing agents about their types, and therefore the Expert's information sets in the protocol only describe information revealed through the agents' moves and do not reveal the Expert's observation of his own type.
We will say that "at infoset h ∈ H_1, the Expert has revealed the image of set M′ ⊂ M" if δ(M′) has the same value for all δ ∈ Δ_h.
Claim 1. If B is a BIC(p) BDM implementing f, then at each Expert's information set h ∈ H_1 reached with a positive probability he has revealed the image of M_h.
Proof. Consider any infoset h ∈ H_1 at which the Expert has not revealed the image of M_h, i.e., there exist δ′, δ′′ ∈ Δ_h such that $\delta'(M_h) \ne \delta''(M_h)$. Starting from h, by reporting according to δ′ the Expert would induce a uniform probability distribution over outcomes from $\delta'(M_h)$, while by reporting according to δ′′ he would induce a uniform probability distribution over outcomes
from $\delta''(M_h)$. For the Expert to report truthfully regardless of whether his true type is (δ′, u) or (δ′′, u), the difference between his expected transfers in the two cases must equal

$F(u) \equiv \frac{1}{|M_h|}\Bigl[\sum_{x \in \delta'(M_h)} u(x) - \sum_{x \in \delta''(M_h)} u(x)\Bigr] = \frac{1}{|M_h|}\Bigl[\sum_{x \in \delta'(M_h) \setminus \delta''(M_h)} u(x) - \sum_{x \in \delta''(M_h) \setminus \delta'(M_h)} u(x)\Bigr].$

Since the difference between the two expected transfers must be known at the infoset, and the Expert should report truthfully for all u ∈ U′, F(u) must be constant on u ∈ U′. But since $\delta'(M_h) \ne \delta''(M_h)$, the sums in the last expression contain at least one term and so any set on which F(u) is constant must be a zero-measure set. □
Claim 2. In any BDM that implements f in BIC(p), the Expert must, with probability at least 1/2, eventually reveal the image of a set of size between k/4 and k/2.
Proof. We show that the probability is at least 1/2 conditional on any fixed δ = δ̂, which will imply that the unconditional probability is at least 1/2 as well. Construct a tree T(δ̂) (not necessarily binary) consisting of the Expert's infosets from H_1 that he could possibly visit when δ = δ̂ and he reports truthfully (i.e., those infosets h ∈ H_1 at which δ̂ ∈ Δ_h). For any given h ∈ T(δ̂), let child(h) ⊂ T(δ̂) denote the set of children of h in T(δ̂) (i.e., the set of infosets in T(δ̂) that can be visited by the Expert immediately after visiting h). (That this child relation must induce a tree follows from the Expert's perfect recall.)

Let us walk from the root of T(δ̂) down a path in the tree while always choosing the child h′ that has the largest |M_{h′}|. We continue until we get to a first node h all of whose children h′ have |M_{h′}| < k/2. By construction we must have |M_h| ≥ k/2, and so the protocol's execution will go through h with probability |M_h|/|M| ≥ 1/2 when δ = δ̂. Now, we should be able to select a subset H′ ⊂ child(h) such that the size of the set $M' \equiv \bigcup_{h' \in H'} M_{h'}$ is between k/4 and k/2. (Indeed, if there exists h′ ∈ child(h) with |M_{h′}| ≥ k/4 then we can take H′ = {h′}; otherwise we can keep adding elements of child(h) to H′ until |M′| first exceeds k/4, in which case it still falls short of k/2.)
Recall from Claim 1 that at any h′ ∈ child(h) the Expert must have revealed the image of M_{h′}. This revelation must have happened with the Expert's move at h. Formally, all infosets h′ ∈ child(h) have the same history of the Expert's moves (the same moves that the Expert of type δ̂ would make), hence they all have the same Δ_{h′}, and so Claim 1 implies that at each of them the Expert has revealed the image of M_{h′} for any h′ ∈ child(h), and therefore the image of M′ as well. Thus, the image of M′ is revealed whenever the execution of the protocol passes through infoset h, and we have already established that this occurs with probability at least 1/2 when δ = δ̂. □
Claim 3. Any protocol in which with probability at least 1/2 the Expert reveals the image of a set of size between k/4 and k/2 must have an average communication cost of at least $0.5\log_2\binom{k}{k/4}$.
Proof. If at an infoset h ∈ H_1 the Expert has revealed the image of set M′ ⊂ M, then $|\Delta_h| \le |M'|! \, |M \setminus M'|!$. If we know that k/4 ≤ |M′| ≤ k/2, then $|\Delta_h| \le (3k/4)! \, (k/4)!$. Since all the δ's are equiprobable, the probability of the protocol's passing through h is at most $|\Delta_h|/k! \le \binom{k}{k/4}^{-1}$,
which also bounds above the probability of arriving at any given leaf following h. Thus, with probability at least 1/2 the protocol must arrive at a leaf whose probability is at most $\binom{k}{k/4}^{-1}$; hence, by Lemma 4, the average communication cost of the protocol is at least $0.5\log_2\binom{k}{k/4}$. □
Thus, we have established a lower bound of $0.5\log_2\binom{k}{k/4}$ on the average communication complexity of any BDM that implements f in BIC(p). (Actually, this is a lower bound on the expected number of bits sent by the Expert regarding δ; the proof does not count the information that the Expert might give about his valuations u or the Manager's messages about m.) So $ACC_p^{BIC}(f) \ge 0.5\log_2\binom{k}{k/4}$, which also implies that $CC_p^{BIC}(f) \ge 0.5\log_2\binom{k}{k/4}$.
C.2. Proof of Proposition 9 (unbounded average-case overhead for ex post implementation)
Consider the problem of allocating an indivisible object to one of two agents, but with the agents' types drawn independently from a uniform distribution over $U_1 = U_2 = U = \{k \cdot 2^{-\gamma}: k = 0, \ldots, 2^{\gamma} - 1\}$. Let f be the "efficient" decision rule that allocates the object to the agent with the higher valuation, and gives it to Agent 1 in the case of a tie. f can be computed using the following bisection protocol suggested in [1] and [13]: At each round m = 1, . . . , γ, each agent i reports the mth bit in the binary expansion of his valuation $u_i$. The protocol stops as soon as the two agents report different bits, and then the object is given to the agent who reported 1 (he is proven to have the higher valuation). If the agents have not disagreed after γ steps, the object is allocated to Agent 1 (in this case the two agents are shown to have the same valuations, and either allocation would be efficient). At any given round, the probability that the protocol stops conditional on arriving there is 1/2. Therefore, the expected number of rounds is at most 2, and so the average-case communication complexity is at most 4, for any γ.
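A simulation sketch of this bisection protocol (with a hypothetical precision γ and number of draws) confirms the bound of 4 bits on average:

    import random

    def bisection_bits(k1, k2, gamma):
        # Valuations are k1/2^gamma and k2/2^gamma; the agents reveal one bit each
        # per round, most significant first, stopping at the first disagreement.
        for m in range(1, gamma + 1):
            b1 = (k1 >> (gamma - m)) & 1
            b2 = (k2 >> (gamma - m)) & 1
            if b1 != b2:
                return 2 * m          # object goes to whoever reported 1
        return 2 * gamma              # equal valuations: object goes to Agent 1

    random.seed(0)
    gamma, n = 24, 200_000
    avg = sum(bisection_bits(random.randrange(2 ** gamma),
                             random.randrange(2 ** gamma), gamma)
              for _ in range(n)) / n
    print(avg)   # just under 4, independently of gamma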
Now, consider an EPIC BDM that implements decision rule f (in fact, the argument below applies to any efficient decision rule). By (3), the transfer to Agent 2 can be written as $\tau_2(f(u_1, u_2), u_1)$. Furthermore, the EPIC inequalities imply that

$|\tau_2(2, u_1) - \tau_2(1, u_1) + u_1| \le 2^{-\gamma} \quad \text{for every } u_1 \in (0, 1 - 2^{-\gamma}), \qquad (5)$

for otherwise Agent 2 would prefer either to understate his valuation when $u_1 = u_2 - 2^{-\gamma}$ or to overstate it when $u_1 = u_2 + 2^{-\gamma}$.
Suppose that γ ≥ 3. Let us now run the EPIC BDM twice with 3 agents whose valuations are drawn independently from U. The first run is with Agent 1 and Agent 2, and the second run is with Agent 1 and Agent 3 taking the place of Agent 2. Clearly, the total average communication cost of the two runs is twice the average communication cost of the EPIC BDM.

In the event where Agent 2 has type $u_2 \ge 3/4$, Agent 3 has type $u_3 < 1/4$, and Agent 1 has type $u_1 \in [1/4, 3/4)$ (an event that has probability 1/4 · 1/4 · 1/2 = 1/32), the object goes to Agent 2 in the first run and to Agent 1 in the second run, and, by (5), the difference between Agent 2's and Agent 3's transfers pins down the realization of $u_1$ within $2^{-\gamma}$. Thus, in this event, each outcome of the pair of runs cannot have a probability of more than $3 \cdot 2^{-\gamma}$. Hence, by Lemma 4, the average communication complexity of the two runs is at least $1/32 \cdot \log_2(2^{\gamma}/3) > (\gamma - 2)/32$. The average-case communication cost of a single run of the EPIC BDM is then at least half this number, i.e., (γ − 2)/64. We can then choose γ to get an arbitrarily high average-case communication cost.
References
[1] K.J. Arrow, L. Pesotchinsky, M. Sobel, On partitioning a sample with binary-type questions in lieu of collecting
observations, J. Amer. Statistical Assoc. 76 (1981) 402–409.
[2] S. Athey, I. Segal, An efficient dynamic mechanism, Working paper, 2006.
[3] L. Blumrosen, N. Nisan, I. Segal, Auctions with severely bounded communication, J. Artif. Intell. Res. 28 (2007)
233–266.
[4] S. Bikhchandani, S. Chatterji, R. Lavi, A. Mu’alem, N. Nisan, A. Sen, Weak monotonicity characterizes deterministic dominant strategy implementation, Econometrica 74 (4) (2006) 1109–1132.
[5] T.M. Cover, J.A. Thomas, Elements of Information Theory, John Wiley & Sons, Inc., New York, 1991.
[6] J. Cremer, R.P. McLean, Full extraction of the surplus in Bayesian and dominant strategy auctions, Econometrica 56
(1988) 1247–1257.
[7] R. Fadel, I. Segal, The communication cost of selfishness: Ex post implementation, in: Proceedings of the 10th
Conference on Theoretical Aspects of Rationality and Knowledge, 2005.
[8] R. Fadel, A contribution to canonical hardness, in: The 7th International Workshop on Agent-Mediated Electronic
Commerce (AMEC), 2005.
[9] J. Feigenbaum, A. Krishnamurthy, R. Sami, S. Shenker, Hardness results for multicast cost sharing, Theoret. Comput. Sci. 304 (2003) 215–236.
[10] F. Forges, Equilibria with communication in a job market example, Quart. J. Econ. 105 (1990) 375–398.
[11] D. Fudenberg, J. Tirole, Game Theory, MIT Press, Cambridge, MA, 1991.
[12] J. Green, J.-J. Laffont, Limited communication and incentive compatibility, in: T. Groves, R. Radner, S. Reiter
(Eds.), Information, Incentives, and Economic Mechanisms: Essays in Honor of Leonid Hurwicz, University of
Minnesota Press, 1987.
[13] E. Grigorieva, P.J.-J. Herings, R. Muller, D. Vermeulen, The private value single item bisection auction, Econ.
Theory 30 (2007) 107–118.
[14] R. Johari, Efficiency loss in market mechanisms for resource allocation, PhD thesis, Massachusetts Institute of
Technology, 2004.
[15] E. Kushilevitz, N. Nisan, Communication Complexity, Cambridge University Press, 1997.
[16] S. Lahaie, D. Parkes, Applying learning algorithms to preference elicitation, in: Proceedings of the 5th ACM Conference on Electronic Commerce, 2004.
[17] A. Mas-Colell, M. Whinston, J.R. Green, Microeconomic Theory, Oxford University Press, New York, 1995.
[18] N. Melumad, D. Mookherjee, S. Reichelstein, A theory of responsibility centers, J. Acc. Econ. 15 (1992) 445–484.
[19] D. Mookherjee, Decentralization, hierarchies and incentives: A mechanism design perspective, J. Econ. Lit. 44
(2006) 367–390.
[20] R.B. Myerson, Game Theory: Analysis of Conflict, Harvard University Press, Cambridge, MA, 1991.
[21] N. Nisan, I. Segal, The communication requirements of efficient allocations and supporting prices, J. Econ. Theory 129 (2006) 192–224.
[22] S. Reichelstein, Incentive compatibility and informational requirements, J. Econ. Theory 34 (1984) 32–51.
[23] K. Roberts, The characterization of implementable choice rules, in: J.-J. Laffont (Ed.), Aggregation and Revelation
of Preferences, North-Holland, 1979.
[24] I. Segal, The communication requirements of social choice rules and supporting budget sets, J. Econ. Theory 136
(2007) 341–378.
[25] C.E. Shannon, A mathematical theory of communication, Bell System Tech. J. 27 (1948) 379–423, 623–656.
[26] A.C.-C. Yao, Some complexity questions related to distributive computing (preliminary report), in: Proceedings of
the 11th Annual ACM Symposium on Theory of Computing, ACM Press, 1979, pp. 209–213.