Ramanujan Graphs
from Finite Free
Convolutions
Nikhil Srivastava
UC Berkeley
joint with Adam Marcus (Princeton) and
Daniel Spielman (Yale)
Expander Graphs
An expander is a sparse, well-connected graph:
- Every set of vertices has many neighbors.
- Random walks mix quickly.
- Large spectral gap.
- No bi-Lipschitz embedding into normed spaces.
- Etc.
[Figures: examples of expanders vs. non-expanders]
Spectral Expanders
Let G be a graph on vertices a, b, c, d, e and let A be its adjacency matrix, with eigenvalues λ1 ≥ λ2 ≥ ⋯ ≥ λn:

A =
[0 1 0 0 1]
[1 0 1 0 1]
[0 1 0 1 0]
[0 0 1 0 1]
[1 1 0 1 0]
Spectral Expanders
If G is d-regular, then A𝟏 = d𝟏, so λ1 = d. If G is also bipartite, the eigenvalues are symmetric about zero, so λn = −d. These are the "trivial" eigenvalues.
G is connected iff λ2 < d; the remaining, "interesting" eigenvalues measure expansion.
Spectral Expanders
Definition: G is a good expander if all non-trivial eigenvalues are ≪ d in absolute value.
(This implies all of the other connectivity and pseudorandomness properties.)
e.g. K_{d+1} and K_{d,d} have all non-trivial eigenvalues in {−1, 0}.
Goal: construct infinite families, d fixed, n → ∞.
Question: What are the best expanders, for fixed d?
Alon-Boppana'86: Can't beat [−2√(d−1), 2√(d−1)].
The meaning of 2√(d−1)
The infinite d-regular tree T:
λ(A_T) = [−2√(d−1), 2√(d−1)]
(For even d, T is the Cayley graph of the free group on d/2 generators.)
Alon-Boppana'86: so the infinite tree is the best possible spectral expander.
Ramanujan Graphs
Definition: G is Ramanujan if all non-trivial eigenvalues have absolute value at most 2√(d−1), i.e. lie in [−2√(d−1), 2√(d−1)] ⊂ [−d, d].
Margulis'88, Lubotzky-Phillips-Sarnak'88: Infinite sequences of d-regular Ramanujan graphs exist for d = p + 1, p prime.
(Cayley graphs; the analysis is based on number theory.)
Friedman'08: A random d-regular graph is almost Ramanujan: non-trivial eigenvalues at most 2√(d−1) + o(1).
New Theorems
[MSS'13]: Infinite families of bipartite Ramanujan graphs exist for every d ≥ 3.
[MSS'15]: For every even n and every d ≥ 3, there is a d-regular bipartite Ramanujan (multi)graph on n vertices.
This Talk: Suppose G is a union of d random perfect matchings on n vertices. Then, with nonzero probability,
λ2(A_G) ≤ 2√(d−1).
Random Graph Model
Let
A = P1 M P1ᵀ + P2 M P2ᵀ + ⋯ + Pd M Pdᵀ
for random permutation matrices P1, …, Pd ∈ Sym_n, where M is the adjacency matrix of a fixed perfect matching on n vertices. Each Pi M Piᵀ is then a uniformly random perfect matching.
Theorem: With nonzero probability,
λ2(A) < 2√(d−1).
[Figures: eigenvalue histograms of a random 5-regular graph]
Limiting Spectral Distribution: the Kesten-McKay Law
[McKay'81] Let A_n be a sequence of random d-regular graphs. Then the spectral distributions of the A_n converge weakly to the spectrum of the infinite tree T_d:
μ_KM(x) = d√(4(d−1) − x²) / (2π(d² − x²))
on [−2√(d−1), 2√(d−1)].
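A small numerical sanity check (my addition, not from the talk) that the stated formula is indeed a probability density on its support:

```python
import numpy as np

def km_density(x, d):
    """Kesten-McKay density: d*sqrt(4(d-1) - x^2) / (2*pi*(d^2 - x^2))."""
    return d * np.sqrt(np.maximum(4 * (d - 1) - x * x, 0.0)) / (2 * np.pi * (d * d - x * x))

# check that it integrates to 1 over [-2*sqrt(d-1), 2*sqrt(d-1)]
d = 5
edge = 2 * np.sqrt(d - 1)
x = np.linspace(-edge, edge, 1_000_001)
mass = km_density(x, d).sum() * (x[1] - x[0])   # Riemann sum
assert abs(mass - 1.0) < 1e-3
```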
[Diagram: the infinite tree "A∞", with spectral edge 2√(d−1) (McKay'81), above the finite graphs A_n.]
No control on λ2: convergence in distribution is oblivious to extreme eigenvalues.
The bridge between finite and infinite: the expected characteristic polynomial 𝔼χ(A_n) = 𝔼 det(xI − A_n).
Random Graph Model
Recall A = P1 M P1ᵀ + ⋯ + Pd M Pdᵀ for random permutations P1, …, Pd ∈ Sym_n, with M the adjacency matrix of a fixed perfect matching on n vertices.
Traditional approaches:
1. Moments of eigenvalues: 𝔼 Tr(A^p).
2. Quadratic forms: 𝔼 sup_x xᵀAx.
Instead, let χ(A) = det(xI − A) be the characteristic polynomial; we are interested in λ2(A) = λ2(χ(A)). Consider the expected characteristic polynomial:
𝔼χ(A) = 𝔼 det(xI − A) = Σ_k (−1)^k x^{n−k} 𝔼 e_k(A)
where e_k(A) is the sum of the k×k principal minors of A.
Outline of the Proof
1. Prove that 𝔼χ(A) has real roots.
2. Prove that the roots are Ramanujan: λ2(𝔼χ(A)) ≤ 2√(d−1).
3. Prove λ2(A) ≤ λ2(𝔼χ(A)) with nonzero probability.
1. Prove that 𝔼χ(A) has real roots.

Quadrature Theorem. For any doubly stochastic M:
𝔼_P χ(Σ_{i=1}^d Pi M Piᵀ) = 𝔼_Q χ(Σ_{i=1}^d Qi M Qiᵀ)
where Q1, …, Qd are independent random (Haar) orthogonal matrices conditioned on Qi 𝟏 = 𝟏.
(The determinant is a low-degree polynomial in the entries of Q.)

Restricting to the complement of the trivial eigenspace (write M̃ for M on the space orthogonal to 𝟏):
𝔼_P χ(Σ_{i=1}^d Pi M Piᵀ) = (x − d) 𝔼_Q χ(Σ_{i=1}^d Qi M̃ Qiᵀ)
For the rest of the talk I will ignore the trivial eigenspace 𝟏 and the trivial root/eigenvalue d, so λ1 now denotes the largest remaining root.

Note that the right-hand side only depends on χ(M)! <definition on board>

Linearization Formula:
𝔼_Q χ(Σ_{i=1}^d Qi M Qiᵀ) = χ(M) ⊞_n χ(M) ⊞_n ⋯ ⊞_n χ(M)
where ⊞_n is the Finite Free Convolution, and here χ(M) = (x − 1)^{n/2} (x + 1)^{n/2}.

[Walsh'22]: The operation ⊞_n preserves real-rootedness. ✓
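For concreteness (my addition, not on the slides): the finite free convolution has an explicit coefficient formula, stated in the MSS finite-free-convolutions paper. Writing p(x) = Σ_i x^{n−i}(−1)^i a_i and q(x) = Σ_j x^{n−j}(−1)^j b_j, the k-th coefficient of p ⊞_n q is Σ_{i+j=k} (n−i)!(n−j)!/(n!(n−k)!) a_i b_j. A minimal sketch, checked against the defining expectation on a 2×2 example:

```python
import math
import numpy as np

def finite_free_conv(p, q):
    """Finite free additive convolution p ⊞_n q of two monic degree-n
    polynomials, given as coefficient arrays [1, c_1, ..., c_n] (highest
    power first).  Uses the coefficient formula: with
    p = sum_i x^{n-i} (-1)^i a_i, the k-th coefficient of p ⊞_n q is
    sum_{i+j=k} (n-i)! (n-j)! / (n! (n-k)!) * a_i * b_j."""
    n = len(p) - 1
    a = [(-1) ** i * p[i] for i in range(n + 1)]
    b = [(-1) ** j * q[j] for j in range(n + 1)]
    c = []
    for k in range(n + 1):
        s = sum(
            math.factorial(n - i) * math.factorial(n - (k - i))
            / (math.factorial(n) * math.factorial(n - k)) * a[i] * b[k - i]
            for i in range(k + 1)
        )
        c.append((-1) ** k * s)
    return np.array(c)

# Check against the defining expectation E_Q det(xI - A - Q B Q^T):
# for A = B = diag(1, -1), a direct 2x2 computation gives x^2 - 2.
p = np.array([1.0, 0.0, -1.0])                 # chi = x^2 - 1, roots +1, -1
assert np.allclose(finite_free_conv(p, p), [1.0, 0.0, -2.0])
```

Note the real roots ±√2 of the convolution, consistent with Walsh's theorem.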
Outline of the Proof
1. Prove that 𝔼χ(A) has real roots. ✓ Also showed:
𝔼χ(Σ_{i=1}^d Pi M Piᵀ) = χ(M) ⊞_n ⋯ ⊞_n χ(M).
2. Prove that the roots are Ramanujan: λ2(𝔼χ(A)) ≤ 2√(d−1).
3. Prove λ2(A) ≤ λ2(𝔼χ(A)) with positive probability.
Outline of the Proof
2. Prove that the roots are Ramanujan:
λ2(χ(M) ⊞_n ⋯ ⊞_n χ(M)) ≤ 2√(d−1)
Easy bound: λ2(p ⊞_n q) ≤ λ2(p) + λ2(q). <proof on board>
But this only gives λ2(χ(M) ⊞_n ⋯ ⊞_n χ(M)) ≤ d, which is trivial.
Free Probability Detour
Sums of independent random matrices: for independent Hermitian random matrices A, B, what is the distribution of eig(A + B)? There is no general method; it depends on the eigenvectors of A and B.
Free Probability: for independent orthogonally invariant matrices A, B (meaning A and QAQᵀ have the same distribution), the eigenvectors are in `generic position', so maybe we can say something about A + B...
Free Probability
Independent orthogonally invariant matrices A, B.
Moment generating function (a.k.a. the Cauchy/Stieltjes transform):
α := C_A(z) = (1/n) 𝔼 Tr[(zI − A)^{−1}] = Σ_{k≥0} (1/n) 𝔼 Tr(A^k) / z^{k+1}
Cumulant generating function: the functional inverse, z = C_A^{(−1)}(α).
[Voiculescu'91] As n → ∞:
C_{A+B}^{(−1)}(α) = C_A^{(−1)}(α) + C_B^{(−1)}(α) − 1/α
Free Probability
[Voiculescu'91] As n → ∞: C_{A+B}^{(−1)}(α) = C_A^{(−1)}(α) + C_B^{(−1)}(α) − 1/α.
Summary: a precise description of the limiting spectral distribution of sums of random matrices in generic position. Beats the triangle inequality. Can be used, e.g., to compute the limiting spectrum of M_n + Q_n M_n Q_nᵀ...
...but we want extreme eigenvalues in finite dimensions.
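Voiculescu's rule can be sanity-checked in closed form for semicircular laws, whose Cauchy transform is explicit (illustration mine, not from the talk; the helper names are assumptions):

```python
import numpy as np

def cauchy_semicircle(z, var):
    # Cauchy transform of the semicircle law of variance var:
    # C(z) = (z - sqrt(z^2 - 4*var)) / (2*var)
    return (z - np.sqrt(z * z - 4 * var)) / (2 * var)

def cauchy_inv_semicircle(alpha, var):
    # its functional inverse: z = 1/alpha + var*alpha
    return 1 / alpha + var * alpha

alpha = 0.3
# Voiculescu: C^{-1}_{A+B}(a) = C^{-1}_A(a) + C^{-1}_B(a) - 1/a.
# For two free semicircles of variance 1 this gives 1/a + 2a,
# the inverse transform of a semicircle of variance 2.
lhs = cauchy_inv_semicircle(alpha, 1) + cauchy_inv_semicircle(alpha, 1) - 1 / alpha
assert np.isclose(lhs, cauchy_inv_semicircle(alpha, 2))
# and the variance-2 Cauchy transform inverts it back:
assert np.isclose(cauchy_semicircle(lhs, 2), alpha)
```

This is the free-probability analogue of "variances add" for sums of independent scalars.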
Back to the proof: Finite R-Transform Inequalities
Free Convolution for Polynomials
Main tool: the Stieltjes/Cauchy transform. For a real-rooted polynomial p of degree n with roots λ_i, consider:
C_p(x) = (1/n) p′(x)/p(x) = (1/n) Σ_i 1/(x − λ_i)
Define, for α > 0:
αmax(p) = max{x : C_p(x) = α}
Notice αmax(p) > λ1(p).
[Figure: graph of C_p(x) = (1/n) Σ_i 1/(x − λ_i); to the right of the largest root it decreases from +∞, and αmax(p) is the point where it crosses height α.]
For a single matching
χ(M) = (x + 1)^{n/2} (x − 1)^{n/2}
so
C_{χ(M)}(x) = x / (x² − 1)
and
αmax(χ(M)) = (1 + √(1 + 4α²)) / (2α)
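These closed forms are easy to confirm numerically (sketch mine, not from the slides; `alpha_max` is a hypothetical helper solving C_p(x) = α by bisection):

```python
import numpy as np

def alpha_max(roots, alpha, hi=1e6):
    """Solve C_p(x) = alpha to the right of the largest root by bisection,
    where C_p(x) = (1/n) sum_i 1/(x - lambda_i) is decreasing there."""
    roots = np.asarray(roots, dtype=float)
    lo = roots.max() + 1e-12          # C_p -> +infinity just right of lambda_1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.mean(1.0 / (mid - roots)) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# chi(M) = (x+1)^{n/2} (x-1)^{n/2}: check the closed form for alpha_max
alpha, n = 0.7, 10
roots = [1.0] * (n // 2) + [-1.0] * (n // 2)
closed_form = (1 + np.sqrt(1 + 4 * alpha ** 2)) / (2 * alpha)
assert abs(alpha_max(roots, alpha) - closed_form) < 1e-6
assert closed_form > max(roots)       # alpha_max(p) > lambda_1(p)
```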
For d matchings
Main inequality:
αmax(p ⊞_n q) ≤ αmax(p) + αmax(q) − 1/α
Meaning: as α → ∞ this is just the triangle inequality, but for finite α it can beat the triangle inequality when the roots are spread out. Proof by characterizing extremizers; relies on convexity.
Cf. Voiculescu:
αmax(μ ⊞ ν) = αmax(μ) + αmax(ν) − 1/α
(asymptotically, for large α).
Applying the inequality d − 1 times:
αmax(χ(M) ⊞_n ⋯ ⊞_n χ(M)) ≤ d · (1 + √(1 + 4α²)) / (2α) − (d − 1)/α
= 2√(d − 1) for α = √(d − 1)/(d − 2).
[Diagram: "A∞" (infinite) vs. A_n (finite), connected by the expected characteristic polynomial 𝔼χ(A_n) = 𝔼 det(xI − A_n): a finite analogue of free probability.]
Outline of the Proof
1. Prove that 𝔼χ(A) has real roots. ✓ Also showed 𝔼χ(Σ_{i=1}^d Pi M Piᵀ) = χ(M) ⊞_n ⋯ ⊞_n χ(M).
2. Prove that the roots are Ramanujan: λ2(𝔼χ(A)) ≤ 2√(d−1). ✓
3. Prove λ2(A) ≤ λ2(𝔼χ(A)) with positive probability.
Outline of the Proof
3. Prove: there is some setting of P1, …, Pd such that
λ2(χ(Σ_{i=1}^d Pi M Piᵀ)) ≤ λ2(𝔼χ(Σ_{i=1}^d Pi M Piᵀ)).
Averaging Polynomials
Basic Question: given polynomials p1, …, pm, when are the roots of the average Σ_i pi / m related to the roots of the pi?
Answer: certainly not always... but sometimes roots(avg) = avg(roots).
A Sufficient Condition
Answer: when the pi have a common interlacing.
Definition: q interlaces p if the roots of q separate the roots of p.
Theorem: if monic p1, …, pm have a common interlacing, then for every convex combination p = Σ_i μi pi, some pi has its largest root at most the largest root of p. <proof on board>
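A tiny numerical illustration (mine, not from the slides) of why a common interlacing helps, and of how averaging real-rooted polynomials fails without one:

```python
import numpy as np

# p1 = x(x-2) and p2 = (x-1)(x-3) are both interlaced by (x - 1.5),
# so they have a common interlacing.
p1 = np.array([1.0, -2.0, 0.0])    # x^2 - 2x, roots 0, 2
p2 = np.array([1.0, -4.0, 3.0])    # x^2 - 4x + 3, roots 1, 3
avg = (p1 + p2) / 2

max_root = lambda p: max(np.roots(p).real)
# With a common interlacing, the largest root of the average is bracketed
# by the largest roots of the summands -- in particular some p_i has
# largest root at most that of the average.
assert min(max_root(p1), max_root(p2)) <= max_root(avg) <= max(max_root(p1), max_root(p2))

# Without a common interlacing even real-rootedness can fail: averaging
# (x-1)^2 and (x+1)^2 gives x^2 + 1, whose roots are non-real.
bad = (np.array([1.0, -2.0, 1.0]) + np.array([1.0, 2.0, 1.0])) / 2
assert np.iscomplex(np.roots(bad)).all()
```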
Many Swaps
Generate the random permutations using swaps (elementary transpositions). Swaps generate polynomials with common interlacings, so the theorem can be applied inductively to
𝔼_{S1} 𝔼_{S2} ⋯ 𝔼_{SN} χ(A + SN ⋯ S1 B S1ᵀ ⋯ SNᵀ).
Conclusion: there is some setting of the swaps S1^i, …, SN^i (i = 1, …, d), hence of P1, …, Pd, such that
λ2(χ(Σ_{i=1}^d Pi M Piᵀ)) ≤ λ2(𝔼χ(Σ_{i=1}^d Pi M Piᵀ)).
Outline of the Proof
1. Prove that 𝔼χ(A) has real roots. Also showed 𝔼χ(Σ_{i=1}^d Pi M Piᵀ) = χ(M) ⊞_n ⋯ ⊞_n χ(M). ✓
2. Prove that the roots are Ramanujan: λ2(𝔼χ(A)) ≤ 2√(d−1). ✓
3. Prove λ2(A) ≤ λ2(𝔼χ(A)) with positive probability. ✓
Outline of the Proof
1. 𝔼χ(A) has real roots, and 𝔼χ(Σ_{i=1}^d Pi M Piᵀ) = χ(M) ⊞_n ⋯ ⊞_n χ(M). ✓ (Quadrature + Walsh'22)
2. The roots are Ramanujan: λ2(𝔼χ(A)) ≤ 2√(d−1). ✓ (finite analogue of Voiculescu'91)
3. λ2(A) ≤ λ2(𝔼χ(A)) with positive probability. ✓ (interlacing)
[Diagram: "A∞" (infinite), with Kesten-McKay density d√(4(d−1) − x²)/(2π(d² − x²)), linked to A_n (finite) via the expected characteristic polynomial 𝔼χ(A_n) = 𝔼 det(xI − A_n) (a finite analogue of free probability) and interlacing families.]
Is this a coincidence?
[Marcus'15]: If μ and ν are discrete measures, each uniform on n real points, with characteristic polynomials p and q, then
lim_{m→∞} C^{(−1)}_{p^m ⊞_{mn} q^m}(α) = C^{(−1)}_{μ ⊞ ν}(α)
for all sufficiently large real α, relating the finite convolution ⊞_{mn} to the Voiculescu convolution ⊞. (The exact statement is a bit more technical.)
Similar results hold for the S-transform. Also: a free CLT.
High-level question
Why do expected characteristic polynomials give sharp bounds?
(also: 2-covers, Kadison-Singer, thin trees [Anari, Oveis Gharan'15])
What are expected characteristic polynomials?
Heuristic observation + Marcus'15: finite approximations of asymptotic limits of finite random matrices.
Open questions
- More finite free probability notions: entropy
- Nonbipartite Ramanujan graphs
- Other random graph models
- Algorithms
- Bounds on the probability (conjectured to be 52% by Novikov, Miller, Sabelli'06)
- More connections between the infinite and finite theories
- Applications to free probability / von Neumann algebras?