AM8204
Winter 2017
Week 3 – Random Graphs
Dr. Anthony Bonato
Ryerson University
Random graphs
[photos: Paul Erdős and Alfred Rényi]
G(n,p) random graph model
(Erdős, Rényi, 1959)
• p = p(n) a real number in (0,1), n a positive
integer
• G(n,p): probability space on graphs with
nodes {1,…,n}, each pair of nodes joined
independently with probability p
[figure: an example graph on vertices 1, 2, 3, 4]
Formal definition
• n a positive integer, p a real number in [0,1]
• G(n,p) is a probability space on labelled graphs
with vertex set V = [n] = {1,2,…,n} such that
Pr(G) = p^{|E(G)|} (1 − p)^{\binom{n}{2} − |E(G)|}
• NB: p can be a function of n
– today, p is a constant
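As a quick illustration of the definition, here is a minimal Python sketch of sampling from G(n,p) (not from the slides; the function name and the edge-list representation are my own choices):

import random

def sample_gnp(n, p, seed=None):
    # Sample from G(n, p): each of the binom(n, 2) possible edges
    # on vertex set {1, ..., n} appears independently with
    # probability p.
    rng = random.Random(seed)
    return [(i, j) for i in range(1, n + 1)
                   for j in range(i + 1, n + 1)
                   if rng.random() < p]

print(len(sample_gnp(100, 0.5, seed=0)))   # about binom(100,2)/2 = 2475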
Properties of G(n,p)
• consider some graph G in G(n,p)
• the graph G could be any n-vertex graph, so not
much can be said about G with certainty
• some properties of G, however, are likely to
hold
• we are interested in properties that occur with
high probability when n is large
A.a.s.
• an event An happens asymptotically
almost surely (a.a.s.) in G(n,p) if it holds
there with probability tending to 1 as n→∞
Theorem 3.1. A.a.s. G in G(n,p) has diameter 2.
• just say: A.a.s. G(n,p) has diameter 2.
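A sketch of the standard union-bound argument, for constant p (not spelled out on the slide): for a fixed pair of vertices u, v, the probability that no other vertex is joined to both is (1 − p²)^{n−2}, so

Pr[some pair has no common neighbour] ≤ \binom{n}{2} (1 − p²)^{n−2} → 0.

Hence a.a.s. every pair of vertices is at distance at most 2; since a.a.s. G is not complete, the diameter is exactly 2.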
First moment method
• in G(n,p), all graph parameters:
|E(G)|, γ(G), ω(G), …
become random variables
• we focus on computing the averages, or
expectations, of these parameters
Discussion
Calculate the expected number of edges in
G(n,p).
• use of expectation when studying random
graphs is sometimes referred to as the first
moment method
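A worked answer to the discussion question, by linearity of expectation over the indicator variables of the \binom{n}{2} possible edges:

E[|E(G)|] = Σ_{i<j} Pr[ij ∈ E(G)] = \binom{n}{2} p.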
Degrees and diameter
Theorem 3.2: A.a.s. the degree of each vertex of
G in G(n,p) equals
pn + O(√(pn log n)) = (1 + o(1)) pn
• concentration: each degree is a binomially
distributed random variable Bin(n − 1, p)
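To see the concentration empirically, a small check in Python (my own sketch; n = 2000 and p = 0.5 are arbitrary choices): in one sample of G(n,p), the minimum and maximum degrees should both be close to pn = 1000.

import random

rng = random.Random(0)
n, p = 2000, 0.5
deg = [0] * n                    # degree of each vertex
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:     # edge ij is present
            deg[i] += 1
            deg[j] += 1
print(min(deg), p * n, max(deg)) # both extremes are (1 + o(1))pn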
Markov’s inequality
Theorem 3.3 (Markov’s inequality)
For any non-negative random variable X and
t > 0, we have that
Pr[X ≥ t] ≤ E[X]/t.
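The proof is one line: since X is non-negative,

E[X] ≥ E[X · 1_{X ≥ t}] ≥ t · Pr[X ≥ t],

and dividing by t gives the inequality.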
Chernoff bound
Theorem 3.4 (Chernoff bound)
Let X be a binomially distributed random
variable on G(n,p) with E[X] = np. Then for
ε ≤ 3/2 we have that
Pr[|X − E[X]| ≥ εE[X]] ≤ 2 exp(−ε²E[X]/3).
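A Monte Carlo sanity check of the bound (my own sketch; the parameter choices are arbitrary): the empirical frequency of an ε-deviation should fall below 2 exp(−ε²E[X]/3).

import math
import random

rng = random.Random(0)
n, p, eps, trials = 1000, 0.5, 0.1, 2000
mu = n * p                            # E[X] for X ~ Bin(n, p)
bad = 0
for _ in range(trials):
    x = sum(rng.random() < p for _ in range(n))   # one draw of X
    if abs(x - mu) >= eps * mu:                   # an eps-deviation
        bad += 1
print(bad / trials, "<=", 2 * math.exp(-eps**2 * mu / 3))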
Martingales
• let X and Y be random variables on the same probability
space
• the conditional mass function of X given Y = y is
defined by
fX|Y(x|y) = Pr[X = x | Y = y]
• note that for fixed y, fX|Y(x|y) is a function of x
• the conditional expectation of X given Y = y is
E[X | Y = y] = Σ_x x fX|Y(x|y)
• let g(y) = E[X | Y = y]; then g(Y) is the conditional
expectation of X given Y, written E[X|Y]
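A toy example may help (my own, not from the slides): roll two fair dice, let Y be the first die and X the sum of both; then E[X | Y = y] = y + 3.5, which exhaustive enumeration confirms.

from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 rolls
for y in range(1, 7):
    xs = [a + b for a, b in outcomes if a == y]   # condition on Y = y
    print(y, sum(xs) / len(xs))                   # prints y + 3.5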
Intuition
• E[X|Y] is the expected value of X
assuming Y is known
• note that E[X|Y] is a random variable
– precise value depends on the value of Y
Definition
• a martingale is a sequence (X0,X1,...,Xt) of
random variables over a given probability
space such that for all i > 0,
E[Xi| X0,X1,...,Xi-1] = Xi-1
Example
• a gambler starts with $100
• she flips a fair coin t times; when the coin
is heads, she wins $1; tails, she loses $1.
• let Xi denote the gambler's bankroll after i
flips
• then (X0,X1,...,Xt) is a martingale, since:
E[Xi | X0,X1,...,Xi-1] = 1/2(Xi-1+1)+1/2(Xi-1-1)
= Xi-1
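A short simulation of this example (a sketch; t and the number of trials are arbitrary choices): since (X0,X1,...,Xt) is a martingale, the expected bankroll after t flips is still the starting $100.

import random

rng = random.Random(0)
t, trials = 50, 10_000
total = 0
for _ in range(trials):
    x = 100                                   # starting bankroll
    for _ in range(t):
        x += 1 if rng.random() < 0.5 else -1  # one fair coin flip
    total += x
print(total / trials)                         # close to 100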
Doob martingales
• let A, Z1,..., Zt be random variables
• define X0 = E[A], Xi = E[A| Z1,..., Zi ] for 1 ≤ i ≤ t
• can be shown that (X0,X1,...,Xt) is a martingale;
called the Doob martingale
• Idea: A = f(Z1,..., Zt) for some function f, with
X0 = E[A] and Xt = A
• the Zi are revealed one at a time until everything,
and hence A, is known
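A concrete toy instance (my own choice of f, not from the slides): let A = Z1 + ... + Zt be the sum of t fair dice. Then Xi = E[A | Z1,...,Zi] = (Z1 + ... + Zi) + 3.5(t − i), so X0 = E[A] = 3.5t and Xt = A.

import random

rng = random.Random(1)
t = 10
z = [rng.randint(1, 6) for _ in range(t)]               # the Zi
X = [sum(z[:i]) + 3.5 * (t - i) for i in range(t + 1)]  # Doob martingale
print(X[0], X[-1], sum(z))   # X0 = 35.0; Xt equals A = sum(z)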
Azuma-Hoeffding inequality
Theorem 3.5 Let (X0,X1,...,Xt) be a
martingale such that |Xi+1 – Xi| ≤ c for all i
(c-Lipschitz condition).
Then for all λ > 0,
Pr[|Xt − X0| ≥ λ√t] ≤ 2 exp(−λ²/(2c²)).
• concentration inequality
Example: vertex colouring
• let A = χ(G(n,p)), and let Zi contain the
information on the presence/absence of
edges ij with j < i
• the Doob martingale here is called the vertex-exposure martingale
– reveal one vertex at a time
Concentration of chromatic number
Theorem 3.6 For G in G(n,p) and all real λ > 0,
Pr[|χ(G) − E[χ(G)]| ≥ λ√n] ≤ 2 exp(−λ²/2).
• hence, χ(G(n,p)) is concentrated around
its expectation; proved before anyone
knew E(χ(G(n,p)))!
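An empirical illustration (my own sketch): computing χ exactly is expensive, so the code below uses the number of colours taken by greedy colouring, an upper bound on χ, as a stand-in; across many samples of G(n,p) it is tightly concentrated around its mean.

import random

def gnp_adj(n, p, rng):
    # Adjacency sets of one sample of G(n, p).
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_colours(adj):
    # Number of colours used by greedy colouring in vertex order
    # (an upper bound on the chromatic number).
    colour = {}
    for v in range(len(adj)):
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return max(colour.values()) + 1

rng = random.Random(0)
samples = [greedy_colours(gnp_adj(100, 0.5, rng)) for _ in range(200)]
mean = sum(samples) / len(samples)
sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
print(round(mean, 2), round(sd, 2))   # the standard deviation is small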
Aside: evolution of G(n,p)
• think of G(n,p) as evolving from a co-clique to a clique as p
increases from 0 to 1
• at p=1/n, Erdős and Rényi observed something
interesting happens a.a.s.:
– with p = c/n, where c < 1, the graph is a.a.s. disconnected, all
components are trees, and the largest has order Θ(log(n))
– with p = c/n, where c > 1, a giant component of order Θ(n)
emerges (though the graph is not yet connected)
• Erdős and Rényi called this the double jump
• physicists call it the phase transition: it is similar to
phenomena like freezing or boiling
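A simulation of the double jump (a sketch; n and the two values of c are arbitrary choices): tracking the components of G(n, c/n) with union-find, the largest component stays tiny for c < 1 and has order n for c > 1.

import random
from collections import Counter

def largest_component(n, p, rng):
    parent = list(range(n))          # union-find over the n vertices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)   # merge the two components
    return max(Counter(find(v) for v in range(n)).values())

rng = random.Random(0)
n = 2000
for c in (0.5, 1.5):
    print(c, largest_component(n, c / n, rng))
# c = 0.5: the largest component has order about log n
# c = 1.5: a giant component of order Theta(n)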