From Consensus to Social
Learning in Complex Networks
Ali Jadbabaie
Skirkanich Associate Professor of Innovation
Electrical & Systems Engineering
and GRASP Laboratory
University of Pennsylvania
With Alireza Tahbaz-Salehi and Victor Preciado
First Year Review, August 27, 2009
ONR MURI: NexGeNetSci
Collective behavior,
social aggregation
http://www.cis.upenn.edu/~ngns
Theory
• First principles
• Rigorous math
• Algorithms
• Proofs

Data Analysis
• Correct statistics
• Only as good as the underlying data

Numerical Experiments
• Simulation
• Synthetic, clean data

Lab Experiments
• Stylized
• Controlled
• Clean, real-world data

Field Exercises
• Semi-controlled
• Messy, real-world data

Real-World Operations
• Unpredictable
• After-action reports in lieu of data
Good news: Spectacular progress
• Consensus and information aggregation
• Random spectral graph theory
• Synchronization, virus spreading
• New abstractions beyond graphs:
– Understanding network topology
– Simplicial homology
– Computing homology groups
Consensus, Flocking and Synchronization
Flocking and opinion dynamics
• Bounded confidence opinion model (Krause, 2000)
– Nodes update their opinions as a weighted average of the opinions of their friends
– Friends are those whose opinions are already close
– When will opinions fragment, and when will they converge?
– The dynamics themselves change the topology (a minimal simulation sketch follows below)
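Below is a minimal simulation sketch of the bounded-confidence dynamics, assuming synchronous updates with uniform weights; the agent count, the confidence radius eps = 0.2, and the uniform initial opinions are illustrative choices, not values from the slides.

```python
import numpy as np

def bounded_confidence_step(x, eps):
    """One synchronous Hegselmann-Krause update: each agent moves to
    the average opinion of all agents (itself included) whose opinion
    lies within the confidence radius eps."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        friends = np.abs(x - x[i]) <= eps   # friends: opinions already close
        x_new[i] = x[friends].mean()
    return x_new

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=50)          # 50 agents, opinions on [0, 1]

for _ in range(100):                        # iterate until a fixed point
    x_next = bounded_confidence_step(x, eps=0.2)
    if np.allclose(x_next, x):
        break
    x = x_next

print(np.unique(np.round(x, 6)))            # surviving opinion clusters
```

Sweeping eps makes the slide's question concrete: small radii leave several fragmented clusters, while large radii drive all opinions to a single consensus value.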
Consensus in random networks
• Consider a network with n nodes and a vector of initial values, x(0)
• Consensus is run over a switching, directed graph Gn(t)
• In each time step, Gn(t) is a realization of a random graph in which each edge appears independently with probability $\Pr(a_{ij} = 1) = p$
Consensus dynamics
$x(k+1) = W_k\, x(k), \qquad W_k = (D_k + I_n)^{-1} (A_k + I_n)$
where $A_k$ is the adjacency matrix drawn from the random ensemble at step $k$ and $D_k$ is the corresponding diagonal degree matrix.
Stationary behavior
$x(k) = U_k\, x(0)$, with $U_k = W_{k-1} W_{k-2} \cdots W_0$,
$\lim_{k \to \infty} U_k = \mathbf{1} v^T$, where $v$ is a random vector,
$x^* = \lim_{k \to \infty} x_i(k)$ is a random variable.
Despite its simple formulation, very little is known about x* and v.
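To make the setup concrete, here is a small Monte Carlo sketch of these dynamics over i.i.d. Erdős–Rényi graphs, with W_k built as (D_k + I_n)^{-1}(A_k + I_n) as above; the values of n, p, the iteration budget, and the sample size are illustrative assumptions.

```python
import numpy as np

def consensus_value(n, p, x0, rng, iters=100):
    """Run x(k+1) = W_k x(k), drawing a fresh directed Erdos-Renyi
    adjacency matrix A_k each step; W_k = (D_k + I)^(-1) (A_k + I)
    reduces to dividing each row of (A_k + I) by that row's degree + 1."""
    x = x0.copy()
    for _ in range(iters):
        A = (rng.random((n, n)) < p).astype(float)
        np.fill_diagonal(A, 0.0)
        x = ((A + np.eye(n)) / (A.sum(axis=1, keepdims=True) + 1.0)) @ x
    return x.mean()    # after many steps all entries are nearly equal

rng = np.random.default_rng(1)
n, p = 6, 0.3
x0 = rng.uniform(0.0, 1.0, size=n)

# x* is random: resample the whole graph sequence to see its distribution
samples = np.array([consensus_value(n, p, x0, rng) for _ in range(1000)])
print("mean of x*:", samples.mean(), "  var of x*:", samples.var())
```

Repeating this over a grid of p and n values reproduces the kind of variance curves shown in the plots later in this deck.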
Random Networks
The graphs could even be correlated over time, as long as the sequence is stationary and ergodic.
What about the consensus value?
• A random graph sequence means that the consensus value is a random variable
• Question: What is its distribution?
• A relatively easy case:
– The distribution is degenerate (a Dirac) if and only if all matrices share the same left eigenvector with probability 1
• In general: $\mathbb{E}[x^*] = \bar{v}^T x(0)$,
where $\bar{v}$ is the left eigenvector of $\mathbb{E}[W_k]$ associated with the largest eigenvalue (the Perron vector); a numerical check follows below
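As a quick numerical sanity check of this claim (everything concrete here, including the heterogeneous edge-probability matrix P, the sample sizes, and the iteration counts, is an illustrative assumption): estimate E[W_k] by sampling, extract its left Perron eigenvector v̄, and compare v̄^T x(0) with a direct Monte Carlo estimate of E[x*].

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
P = rng.uniform(0.1, 0.9, size=(n, n))      # heterogeneous edge probabilities
np.fill_diagonal(P, 0.0)
x0 = rng.uniform(0.0, 1.0, size=n)

def sample_W():
    """One draw of W = (D + I)^(-1) (A + I) with independent edges."""
    A = (rng.random((n, n)) < P).astype(float)
    return (A + np.eye(n)) / (A.sum(axis=1, keepdims=True) + 1.0)

# Left Perron eigenvector of E[W], from a sample average of W
EW = sum(sample_W() for _ in range(20000)) / 20000
vals, vecs = np.linalg.eig(EW.T)
v_bar = np.real(vecs[:, np.argmax(np.real(vals))])
v_bar /= v_bar.sum()

def consensus_value(iters=100):
    x = x0.copy()
    for _ in range(iters):
        x = sample_W() @ x
    return x.mean()

Ex_star = np.mean([consensus_value() for _ in range(2000)])
print(Ex_star, v_bar @ x0)   # should agree up to Monte Carlo error
```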
Can we say more?
E[WkWk] for Erdos-Renyi graphs
Define:
Random Consensus
• For simplicity of exposition, we illustrate the structure of $\mathbb{E}[W_k \otimes W_k]$ using the case n = 4. Its entries have expressions in terms of q = 1 − p and H(p, n), a special function that can be written in terms of a hypergeometric function (the detailed expression is not relevant to our exposition).
Variance of consensus value for Erdős–Rényi graphs
• Defining a suitable parameter, we can write the left eigenvector of the expected Kronecker product in closed form.
• Substituting this eigenvector into the original expression for the variance (and after simple algebraic simplifications), we obtain a final expression for var(x*) as a function of p, n, and x(0).
Random Consensus (plots)
• var(x*) for initial conditions uniformly distributed on [0, 1], n ∈ {3, 6, 9, 12, 15}, and p varying over the range (0, 1]
[Plot: var(x*) versus p, one curve for each n ∈ {3, 6, 9, 12, 15}]
What about other random graphs?
Static Model with Prescribed Expected Degree Distribution
• Degree distributions are useful to the extent that they tell us something about spectral properties (at least for distributed computation/optimization)
• Generalized static models [Chung and Lu, 2003]:
– Random graph with a prescribed expected degree sequence
– We can impose an expected degree w_i on the i-th node
Eigenvalues of Chung-Lu Graph
• Numerical experiment: plot the histogram of eigenvalues for several realizations of this random graph
• What is the eigenvalue distribution of the adjacency matrix for very large Chung-Lu random networks?
[Histograms of adjacency eigenvalues for realizations with 100, 500, and 1000 nodes]
Limiting Spectral Density: An analytical expression is possible only in very particular cases.
Contribution: Estimation of the shape of the bulk for a given expected degree sequence (w1, …, wn).
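Here is a sketch of the same experiment, assuming the standard Chung-Lu edge rule $\Pr(i \sim j) = w_i w_j / \sum_k w_k$ (capped at 1); the degree sequence, graph size, number of realizations, and bin count are illustrative choices.

```python
import numpy as np

def chung_lu_adjacency(w, rng):
    """Chung-Lu random graph: edge {i, j} appears independently with
    probability w_i * w_j / sum(w), capped at 1, which gives node i
    an expected degree of roughly w_i."""
    n = len(w)
    prob = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    upper = np.triu((rng.random((n, n)) < prob).astype(float), k=1)
    return upper + upper.T                  # symmetric, zero diagonal

rng = np.random.default_rng(3)
w = rng.uniform(2.0, 20.0, size=500)        # prescribed expected degrees

# Pool adjacency eigenvalues over several realizations, as in the experiment
eigs = np.concatenate(
    [np.linalg.eigvalsh(chung_lu_adjacency(w, rng)) for _ in range(10)]
)
counts, bin_edges = np.histogram(eigs, bins=60)
print(counts)                               # the bulk of the spectrum
```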
Spectral moments of random graphs and degree distributions
• Degree distributions can reveal the moments of the spectra of graph Laplacians, which in turn
– determine synchronizability, and
– the speed of convergence of distributed algorithms
• Lower moments do not necessarily fix the support of the spectrum, but they do fix its shape
• Analysis of virus spreading (which depends on the spectral radius of the adjacency matrix)
• Non-conservative synchronization conditions for graphs with prescribed degree distributions
• Analytic expressions for spectral moments of random geometric graphs
(a small sketch relating degree and spectral moments follows below)
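As a small illustration of the degree-to-spectrum connection: the k-th spectral moment of a graph matrix is $(1/n)\,\mathrm{tr}(M^k)$, and for the adjacency matrix of any simple graph the second moment equals the average degree. The slides discuss Laplacian spectra; the adjacency matrix and the Erdős–Rényi test graph below are used only to keep the sketch short.

```python
import numpy as np

def spectral_moments(M, kmax=4):
    """Moments of the eigenvalue distribution of M, computed without an
    eigendecomposition: m_k = (1/n) * trace(M^k)."""
    n = M.shape[0]
    Mk = np.eye(n)
    moments = []
    for _ in range(kmax):
        Mk = Mk @ M
        moments.append(np.trace(Mk) / n)
    return moments

rng = np.random.default_rng(4)
upper = np.triu((rng.random((200, 200)) < 0.05).astype(float), k=1)
A = upper + upper.T                          # simple undirected test graph

m = spectral_moments(A)
# m_2 = (1/n) * trace(A^2) = (1/n) * sum of degrees = average degree,
# so the degree distribution already pins down the second moment.
print("m_2 =", m[1], "  average degree =", A.sum(axis=1).mean())
```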
Consensus and Naïve Social Learning
• When is consensus a good thing?
• We need to make sure the update converges to the correct value
Naïve vs. Bayesian:
– Naïve learning: just average with neighbors
– Bayesian learning: fuse information with Bayes' rule
Social learning
• There is a true state of the world, among countably many
• We start from a prior distribution and would like to update the belief over the true state as more observations arrive
• Ideally, we would use Bayes' rule to do the information aggregation (a single-agent sketch follows below)
• This works well when there is one agent (Blackwell and Dubins, 1962), but becomes impossible with more than two agents!
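Here is a minimal single-agent sketch of this Bayesian aggregation over a finite state space (the two states, the signal likelihoods, and the horizon below are illustrative assumptions):

```python
import numpy as np

def bayes_update(belief, likelihoods, signal):
    """One application of Bayes' rule: the posterior is proportional to
    the prior times the likelihood of the observed signal under each state."""
    posterior = belief * likelihoods[:, signal]
    return posterior / posterior.sum()

# P(signal | state): rows are states of the world, columns are signals
likelihoods = np.array([[0.7, 0.3],
                        [0.4, 0.6]])
rng = np.random.default_rng(5)
true_state = 0
belief = np.array([0.5, 0.5])                # prior over the two states

for _ in range(200):
    signal = rng.choice(2, p=likelihoods[true_state])
    belief = bayes_update(belief, likelihoods, signal)

print(belief)    # the belief concentrates on the true state
```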
Locally Rational, Globally Naïve:
Bayesian learning under peer pressure
Model Description
Belief Update Rule
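The body of this slide is not preserved in the transcript; as a hedged illustration only, the sketch below implements the kind of hybrid update studied in this line of work, in which each agent mixes its own one-step Bayesian posterior with a weighted average of its neighbors' current beliefs. The network weights, signal structures, and horizon are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_agents, n_states = 3, 2
true_state = 1

# P(signal | state) per agent: agent 0 can tell the states apart,
# agents 1 and 2 receive completely uninformative signals
likelihoods = np.array([[[0.8, 0.2], [0.2, 0.8]],
                        [[0.5, 0.5], [0.5, 0.5]],
                        [[0.5, 0.5], [0.5, 0.5]]])

# Row-stochastic listening weights on a strongly connected cycle
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.6, 0.4],
              [0.4, 0.0, 0.6]])

beliefs = np.full((n_agents, n_states), 1.0 / n_states)   # uniform priors

for _ in range(300):
    new = np.empty_like(beliefs)
    for i in range(n_agents):
        signal = rng.choice(n_states, p=likelihoods[i, true_state])
        post = beliefs[i] * likelihoods[i][:, signal]     # Bayes on own signal
        post /= post.sum()
        # self-weight on own posterior, remaining weight on neighbors
        new[i] = A[i, i] * post + sum(A[i, j] * beliefs[j]
                                      for j in range(n_agents) if j != i)
    beliefs = new

print(beliefs)   # every row concentrates on the true state
```

Even though only agent 0's signals are informative, strong connectivity spreads that information, so all three belief vectors converge to the truth, which is the point of the results summarized on the following slides.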
Why this update?
Eventually correct forecasts
Eventually-correct estimation of the output!
Why strong connectivity?
• No convergence if different people interpret signals differently
• Agent N is misled by listening to the less informed agent B
Example
One can actually learn from others
Learning from others
Information in the i-th signal is only good for distinguishing
Convergence of beliefs and consensus on the correct value!
Learning from others
Summary
Only one agent needs a positive prior on the true state!