The Complexity of Agreement
A 100% Quantum-Free Talk
Scott Aaronson
MIT
People disagree. Why?
• Because some people are idiots?
• Because people care about things other than
truth (winning debates, not admitting error, etc.)?
• Because even given the same information,
people would reach different honest conclusions?
Can rational agents ever “agree to disagree”
solely because of differing information?
Aumann 1976: No.
Suppose Alice and Bob are Bayesians with a
common prior, but different knowledge.
Let EA be Alice’s estimate of (say) the chance of
rain tomorrow, conditioned on all her knowledge.
Let EB be Bob’s estimate, conditioned on all his
knowledge.
Suppose EA and EB are common knowledge
(Alice and Bob both know their values, both know
that they both know them, etc.)
Theorem: EA = EB
“Standard Protocol”
Alice and Bob take turns announcing their current expectations to each other:
Alice’s initial expectation EA,0
Bob’s new expectation EB,1
Alice’s new expectation EA,2
Bob’s new expectation EB,3
…
Geanakoplos & Polemarchakis 1982: Provided
the state space is finite, Alice and Bob will agree
after a finite number of rounds
What’s Going On?
We can represent an agent’s “knowledge” by a
partition of the state space…
[Figure: state space {00, 01, 10, 11}. Alice knows the first of the 2 bits; Bob knows their sum. A MESSAGE refines these partitions.]
If Alice and Bob don’t agree with certainty, then
one of them can learn something from the other’s
message, meaning that agent’s partition gets refined.
But the partitions can be refined only finitely
many times
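To make the refinement dynamic concrete, here is a minimal Python sketch (my own illustration, not code from the talk) of the standard protocol on the two-bit example above, reading “their sum” as the ordinary sum of the two bits; the values of f and the true state are made up.

```python
# A minimal sketch (my own, not from the talk) of the standard protocol on the
# two-bit example: Alice knows the first bit, Bob knows the sum of the two bits,
# and each agent's knowledge is a partition of the state space that gets refined
# by the other's announced expectation.  The payoff f and the true state are made up.
from fractions import Fraction

STATES = ["00", "01", "10", "11"]
PRIOR = {w: Fraction(1, 4) for w in STATES}          # common prior: uniform
F = {"00": 0, "01": 1, "10": 1, "11": 1}             # hypothetical 0/1 event ("rain")

alice_view = lambda w: w[0]                          # Alice sees the first bit
bob_view = lambda w: int(w[0]) + int(w[1])           # Bob sees the sum of the bits

def expectation(cell):
    """E[f | true state lies in cell], under the common prior."""
    return sum(PRIOR[w] * F[w] for w in cell) / sum(PRIOR[w] for w in cell)

def refine(partition, announcement):
    """Keep, in each cell, only the states where the sender would have announced the same value."""
    return {w: [v for v in partition[w] if announcement[v] == announcement[w]]
            for w in STATES}

# Initial partitions: for each state, the set of states the agent considers possible there.
alice = {w: [v for v in STATES if alice_view(v) == alice_view(w)] for w in STATES}
bob = {w: [v for v in STATES if bob_view(v) == bob_view(w)] for w in STATES}

true_state = "01"
for t in range(4):
    sender, receiver = (alice, bob) if t % 2 == 0 else (bob, alice)
    announcement = {w: expectation(sender[w]) for w in STATES}   # what the sender says in each state
    print(f"round {t}: sender announces {announcement[true_state]}")
    receiver.update(refine(receiver, announcement))              # the message refines the receiver's partition
    if expectation(alice[true_state]) == expectation(bob[true_state]):
        print("agreement:", expectation(alice[true_state]))
        break
```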
Problems
• Huge number of messages might be needed
before Alice and Bob agree
• Messages are real numbers
Conjecture
• For some function f of Alice’s input x ∈ {0,1}^n
and Bob’s input y ∈ {0,1}^n, and some
distribution over (x,y), Alice and Bob will need
to exchange Ω(n) bits even to probably
approximately agree about EX[f(x,y)]
Intuition comes from communication complexity
Main Result: Conjecture Is False
Say Alice and Bob “(ε,δ)-agree” if they agree
within ε with probability at least 1−δ over their prior
For all f: {0,1}^n × {0,1}^n → [0,1], and all prior
distributions over (x,y) pairs, Alice and Bob
can (ε,δ)-agree about the expectation of f, by
exchanging only O(1/(δε²)) bits.
Independent of n
Moral: “Agreeing to disagree” is problematic,
even for agents subject to realistic
communication constraints
Intuition
In the standard protocol, as long as Alice and
Bob disagree by ε, their expectations EA and
EB follow an unbiased random walk with step
size ε. But since EA, EB ∈ [0,1], such a walk hits
an absorbing barrier after ~1/ε² steps.
To prove this, use (EA)², (EB)² as progress measures
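A quick numerical sanity check of this intuition (my own, not from the talk): an unbiased walk on [0,1] with step size ε is absorbed after Θ(1/ε²) steps, about x(1−x)/ε² when started at x.

```python
# Monte Carlo check that an unbiased random walk on [0,1] with step size eps
# hits an absorbing barrier after Theta(1/eps^2) steps (expected x*(1-x)/eps^2
# when started at x), which is where the ~1/eps^2 message bound comes from.
import random

def steps_to_absorption(eps, start=0.5):
    k, K, t = round(start / eps), round(1 / eps), 0   # walk on the grid {0, eps, ..., 1}
    while 0 < k < K:
        k += 1 if random.random() < 0.5 else -1
        t += 1
    return t

for eps in [0.1, 0.05, 0.02]:
    avg = sum(steps_to_absorption(eps) for _ in range(2000)) / 2000
    print(f"eps = {eps}: average steps {avg:.0f}, compare 0.25/eps^2 = {0.25 / eps**2:.0f}")
```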
A bit more formally…
Let ={0,1}n{0,1}n be the state space, and let D
be the shared prior distribution
Let EA,t(), EB,t() be Alice and Bob’s
expectations at time t, assuming the true state
of the world is 
For any function F:[0,1], let
||F|| = EXD[F()2]
Lemma: Suppose Alice sends the tth message.
Then ||EB,t|| - ||EA,t-1|| = ||EB,t - EA,t-1||
Proof: Crucial observation: if Alice has just
sent Bob a message, then her expectation of
Bob’s expectation equals her expectation.
Hence
||EB,t − EA,t−1|| = ||EB,t|| + ||EA,t−1|| − 2 EX_D[EB,t · EA,t−1]
= ||EB,t|| + ||EA,t−1|| − 2 ||EA,t−1||
= ||EB,t|| − ||EA,t−1||
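The identity can be checked numerically; here is a small check (my own) on the two-bit example from earlier, with the same made-up f, after Alice’s first announcement.

```python
# Numerical check (my own) of the lemma on the earlier two-bit example:
# after Alice's first announcement, ||EB,1|| - ||EA,0|| = ||EB,1 - EA,0||,
# where ||G|| = EX_D[G(omega)^2].
from fractions import Fraction

STATES = ["00", "01", "10", "11"]
PRIOR = {w: Fraction(1, 4) for w in STATES}
F = {"00": 0, "01": 1, "10": 1, "11": 1}              # same hypothetical f as before

def cond_exp(w, view):
    cell = [v for v in STATES if view(v) == view(w)]
    return sum(PRIOR[v] * F[v] for v in cell) / sum(PRIOR[v] for v in cell)

def norm(G):                                          # ||G|| = EX_D[G(omega)^2]
    return sum(PRIOR[w] * G(w) ** 2 for w in STATES)

E_A0 = lambda w: cond_exp(w, lambda v: v[0])          # Alice conditions on her bit
# After Alice announces E_A0, Bob conditions on the sum of the bits AND the announcement.
E_B1 = lambda w: cond_exp(w, lambda v: (int(v[0]) + int(v[1]), E_A0(v)))

print(norm(E_B1) - norm(E_A0))                        # prints 1/8
print(norm(lambda w: E_B1(w) - E_A0(w)))              # also prints 1/8
```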
Theorem: The standard protocol causes Alice
and Bob to (ε,δ)-agree after 1/(δε²) messages
Proof: Suppose Alice sends the tth message,
and suppose Pr_D[ |EB,t − EA,t−1| ≥ ε ] ≥ δ.
Then ||EB,t|| = ||EA,t−1|| + ||EB,t − EA,t−1|| ≥ ||EA,t−1|| + δε².
Likewise, after Bob sends Alice the (t+1)st
message, ||EA,t+1|| ≥ ||EB,t|| + δε².
But max{||EA,t||,||EB,t||} is initially 0 and can
never exceed 1.
Weren’t we cheating, since messages in the
standard protocol are unbounded-precision real
numbers?
Discretized standard protocol: Let Charlie (C)
be a “monkey in the middle” who sees Alice and
Bob’s messages but doesn’t know their inputs. At
the tth step, Alice tells Bob whether EA,t > EC,t + ε/4,
EA,t < EC,t − ε/4, or neither. Bob does likewise.
Theorem: The discretized standard protocol
causes Alice and Bob to (,)-agree after at most
3072/(δε²) messages.
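Here is a sketch of how the discretized protocol might look on a toy finite example (my own construction; the inputs, prior, and f below are made up, and only the three-way ε/4 messages come from the slide).

```python
# A sketch (my own toy example, not code from the talk) of the discretized
# standard protocol: Charlie is an observer who sees only the transcript, and
# at each step the sender reports only whether their expectation is more than
# eps/4 above Charlie's, more than eps/4 below it, or neither.
from itertools import product

EPS = 0.2
STATES = list(product(range(8), range(8)))       # state = (x, y), toy inputs
PRIOR = {w: 1.0 / len(STATES) for w in STATES}
f = lambda x, y: ((3 * x + 5 * y) % 11) / 10.0   # hypothetical f with values in [0,1]

def exp_given(S):
    return sum(PRIOR[w] * f(*w) for w in S) / sum(PRIOR[w] for w in S)

def report(e_sender, e_charlie):
    if e_sender > e_charlie + EPS / 4: return "high"
    if e_sender < e_charlie - EPS / 4: return "low"
    return "neither"

true = (6, 1)
charlie = set(STATES)                            # states consistent with the transcript so far
for t in range(200):
    idx = t % 2                                  # Alice (x) sends on even steps, Bob (y) on odd
    # Each agent's knowledge = own input + the public transcript (i.e. Charlie's set).
    alice = {v for v in charlie if v[0] == true[0]}
    bob = {v for v in charlie if v[1] == true[1]}
    if abs(exp_given(alice) - exp_given(bob)) <= EPS:
        print(f"round {t}: expectations {exp_given(alice):.3f} and {exp_given(bob):.3f} agree within eps")
        break
    e_c = exp_given(charlie)
    sender = alice if idx == 0 else bob
    msg = report(exp_given(sender), e_c)
    # Everyone (the other agent and Charlie) rules out states in which the
    # sender would have reported something different.
    charlie = {w for w in charlie
               if report(exp_given({v for v in charlie if v[idx] == w[idx]}), e_c) == msg}
```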
Two interesting questions:
1. Does the standard protocol ever need ~1/ε²
messages?
2. Is there ever a protocol that does better?
Example answering both questions:
Let Alice’s input x = x1…xn and Bob’s input y = y1…yn
be uniform over {−1,1}^n. Then let
F(x, y) = 1/2 + (ε/2) Σi (yi−1 xi + xi yi), summing over i = 1, …, n
 F x, y  if F  x, y   0,1

f x, y   0 if F x, y   0
1 if F x, y   1

Theorem: The standard protocol requires
Ω( (1/ε²) / log(1/ε) )
messages before Alice and Bob’s expectations of
f agree within ε with constant probability.
On the other hand, there’s an “attenuated
protocol” that causes them to agree within ε with
constant probability after only O(1) messages
Three or More Agents
Could a weak-willed Charlie fail to bring Alice
and Bob into agreement with each other?
Theorem: On a strongly-connected graph with
N agents and diameter d, O( N²d / (δε²) ) messages
suffice for every pair of agents to “(ε,δ)-agree”
Computational Complexity
Fine, few messages suffice for agreement—but
what if it took Alice and Bob billions of years to
calculate their messages?
Result: Provided the agents can sample from
their initial partitions, they can simulate
Bayesian rationality using a number of samples
independent of n (but alas, exponential in 1/ε⁶)
Meaning: By examining their messages, you
couldn’t distinguish these “Bayesian wannabes”
from true Bayesians with non-negligible bias
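The talk doesn’t spell out the sampling procedure (and the full simulation is what costs exp(1/ε⁶)), so the sketch below only illustrates the basic primitive it relies on, under my own assumptions: an agent who can sample states from their current partition cell can estimate their expectation of f to within ±ε using a number of samples that depends on ε and δ but not on n.

```python
# Basic primitive (my own sketch, not the paper's full simulation): estimate
# E[f | agent's knowledge] by averaging f over samples drawn from the agent's
# partition cell.  By Hoeffding, O(log(1/delta)/eps^2) samples give accuracy
# eps with probability 1 - delta, independent of n.
import math
import random

def estimate_expectation(sample_from_cell, f, eps, delta):
    """Monte Carlo estimate of E[f | agent's knowledge], accurate to eps w.p. 1 - delta."""
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))   # Hoeffding bound for f in [0,1]
    return sum(f(sample_from_cell()) for _ in range(m)) / m

# Toy usage: Alice knows x; a sample from her cell is (x, fresh random y).
n = 1000                                                  # the sample count does not grow with n
x = [random.randint(0, 1) for _ in range(n)]
sample = lambda: (x, [random.randint(0, 1) for _ in range(n)])
f = lambda state: sum(a ^ b for a, b in zip(*state)) / n  # hypothetical f with values in [0,1]
print(estimate_expectation(sample, f, eps=0.05, delta=0.01))  # close to 0.5
```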
Computational Complexity
Problem: An agent’s expectation could lie on a
“knife-edge” between two messages
[Figure: Pr[message] as a function of the agent’s expectation, ranging from 0 to 1]
Solution: Have agents “smooth” their
messages by adding random noise to them
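The talk doesn’t specify the noise distribution; one natural reading (my sketch, not necessarily the paper’s scheme) is to jitter the estimated expectation before discretizing it, so that the probability of each message varies continuously with the expectation and a knife-edge value no longer determines the message.

```python
# One natural way (my own sketch) to smooth the discretized messages: add a small
# uniform jitter to the estimated expectation before comparing it to the eps/4
# thresholds.  The probability of each message then changes continuously (at rate
# ~1/width) as the expectation moves, so a knife-edge estimate is no longer fragile.
import random

def smoothed_report(e_self, e_charlie, eps, width=None):
    width = eps / 16 if width is None else width      # jitter scale: an assumed choice
    e = e_self + random.uniform(-width, width)
    if e > e_charlie + eps / 4:
        return "high"
    if e < e_charlie - eps / 4:
        return "low"
    return "neither"
```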
Open Problems
• Do Alice and Bob need to exchange Ω(1/ε²)
bits to agree within ε with high probability?
Best lower bound I can show is the trivial
Ω(log(1/ε))
• Is there a scenario where Alice and Bob must
exchange Ω(n) bits to (ε,0)-agree?
• Can “Bayesian wannabes” agree within ε with
high probability using poly(1/ε) samples, rather
than 2^poly(1/ε)?
Open Problems (con’t)
• Suppose Alice and Bob (ε,δ)-agree about the
expectation of f: Ω → [0,1]. Then do they also
have “approximate common knowledge”?
(Alice is pretty sure of Bob’s expectation,
Alice is pretty sure Bob’s pretty sure of her
expectation, Alice is pretty sure Bob’s pretty
sure she’s pretty sure of his expectation, etc.)