Naples 2008
Free probability, random Vandermonde matrices, and applications
Øyvind Ryan
May 2008
Some important concepts from classical probability
- Random variables are functions (i.e. they commute w.r.t. multiplication) with a given p.d.f. (denoted f).
- Expectation (denoted E) is integration.
- Independence.
- Additive convolution (∗) and the logarithm of the Fourier transform.
- Multiplicative convolution.
- Central limit law, with the special role of the Gaussian law.
- Poisson distribution Pc: the limit of ((1 − c/n)δ(0) + (c/n)δ(1))^∗n as n → ∞.
- Divisibility: for a given a, find i.i.d. b1, ..., bn such that fa = f_{b1+···+bn}.
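The Poisson limit above can be checked numerically: the n-fold additive convolution of (1 − c/n)δ(0) + (c/n)δ(1) is the Binomial(n, c/n) distribution, which approaches Poisson(c). A minimal Python sketch (an illustration, not code from the talk):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    # pmf of the n-fold additive convolution of (1-p)*delta(0) + p*delta(1)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, c):
    return exp(-c) * c**k / factorial(k)

c, n = 2.0, 10000
for k in range(6):
    # the two pmfs agree up to an error of order 1/n
    assert abs(binom_pmf(k, n, c / n) - poisson_pmf(k, c)) < 1e-3
```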
Free probability
- A more general theory, where the random variables are matrices (or more generally, elements in a unital ∗-algebra (denoted A), typically B(H)), with their eigenvalue distribution (spectrum) taking the role of the p.d.f.
- The above mentioned concepts have their analogues in this theory. For instance, the expectation (denoted φ) is a normalized linear functional on A. The pair (A, φ) is called a noncommutative probability space.
- For (random) matrices, φ will be the (expected) trace:

  φ(A) = tr_n(A) = (1/n) Σ_{i=1}^n a_ii   (φ(A) = E(tr_n(A))).

- What should it mean that two random matrices are "free" (= analogue of independent, to be defined)? Think of two independent random matrices, where the eigenvectors of one point in all directions with equal probability (unitary invariance).
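As a quick numerical illustration of the pair (A, φ) for matrices (a Python/NumPy sketch, not from the slides), the normalized trace is indeed a normalized, tracial linear functional:

```python
import numpy as np

def phi(A):
    """Normalized trace tr_n(A) = (1/n) * sum_i a_ii."""
    return np.trace(A) / A.shape[0]

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

assert np.isclose(phi(np.eye(n)), 1.0)     # normalized: phi(identity) = 1
assert np.isclose(phi(A @ B), phi(B @ A))  # tracial: phi(AB) = phi(BA)
```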
The semicircle law
Free probability has a "Gaussian distribution counterpart":
[Plot: histogram of the eigenvalues of the matrix A generated by the code below; the eigenvalues follow the semicircle law.]
A = (1/sqrt(2000)) * (randn(1000,1000) + j*randn(1000,1000));
A = (sqrt(2)/2) * (A + A');
hist(eig(A), 40)
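A NumPy rendering of the same experiment (an illustration, not from the slides): for the semicircle law on [−2, 2] the even moments are the Catalan numbers (second moment 1, fourth moment 2), which the empirical eigenvalue moments approach.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# complex Gaussian matrix with entry variance 1/n, then its Hermitian part
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
A = (A + A.conj().T) / np.sqrt(2)
eig = np.linalg.eigvalsh(A)

m2 = np.mean(eig**2)   # close to 1 (first Catalan number)
m4 = np.mean(eig**4)   # close to 2 (second Catalan number)
assert abs(m2 - 1) < 0.1 and abs(m4 - 2) < 0.3
```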
Motivation for free probability
Assume that Xn, Yn are independent, Gaussian n × n matrices. One can show that the limits

φ(X^{i1} Y^{j1} · · · X^{il} Y^{jl}) := lim_{n→∞} tr_n(Xn^{i1} Yn^{j1} · · · Xn^{il} Yn^{jl})

exist. If we linearly extend the linear functional φ to all polynomials in X and Y, the following can be shown:

Theorem
If Pi, Qi are polynomials in X and Y respectively, with 1 ≤ i ≤ l, and φ(Pi(X)) = 0, φ(Qi(Y)) = 0 for all i, then

φ(P1(X) Q1(Y) · · · Pl(X) Ql(Y)) = 0.

This motivates the definition of freeness (= analogue of independence):
Definition of freeness

Definition
A family of unital ∗-subalgebras (Ai)_{i∈I} is called a free family if:

  whenever aj ∈ A_{ij}, with i1 ≠ i2, i2 ≠ i3, ..., i_{n−1} ≠ in,
  and φ(a1) = φ(a2) = · · · = φ(an) = 0,
  then φ(a1 · · · an) = 0.   (1)

A family of random variables ai is called a free family if the algebras they generate form a free family.

(1) is also called the freeness relation, and can be viewed as a rule for computing the mixed moments φ(a1 · · · an) of the ai from their individual moments φ(ai^m). In random matrix settings, it relates the moments E(tr_n(X^m)) of random matrices.
Recently, a theory called second order freeness has been developed, which also relates "higher order moments":
Second order freeness
For a random matrix ensemble X = {Xn}n, define the second order limit moments

α^X_{i,j} = lim_{n→∞} ( E[Tr(Xn^i) Tr(Xn^j)] − E[Tr(Xn^i)] E[Tr(Xn^j)] ).

- We say that {Xn}n has a second order limit distribution if the second order limit moments exist, and the higher order limit moments (not written down here) are all 0. This implies that tr(Xn^i) − E(tr(Xn^i)) is asymptotically Gaussian of order 1/n.
- It is known [1] that whenever An, Bn are independent, with second order limit distributions, and one of them is unitarily invariant, then An and Bn are asymptotically free of second order [1] (a relation which determines all α^{p(A,B)}_{i,j} from the individual α^A_{i,j}, α^B_{i,j}).
- Second order freeness is an effective machinery for calculating the α^{p(A,B)}_{i,j} from α^A_{i,j}, α^B_{i,j}. It takes a particularly nice form for computation of α^{A+B}_{i,j}.
- Gives alternative proofs of known results.
- More general than what we can do with the freeness relation, which only enables us to compute the (first order) limit moments

  α^X_i = lim_{n→∞} E(tr(Xn^i))

  from individual (first order) limit moments.
- Used in the literature:
  - Gaussian-type matrices have a second order limit distribution. In [2], the asymptotic Gaussianity of tr(Xn^i) − E(tr(Xn^i)) is exploited with maximum likelihood estimation.
  - Optimal weighting of moments [3].
Additive and multiplicative free convolution

- Additive/multiplicative free convolution (⊞/⊠) corresponds to summing/multiplying free random variables.
- Can also be viewed as operations on measures, by associating the moments with a probability measure.
- In random matrix settings, additive free convolution corresponds to estimating the eigenvalue distribution of the sum of two large, independent random matrices, where one is unitarily invariant.
- Alternative functional equations exist for computing additive/multiplicative free convolution, using the Stieltjes transform.
- One of the main questions in my papers: let X and Y be random matrices. How can we make a good prediction of the eigenvalue distribution of X when one has the eigenvalue distributions of XY and Y (i.e. the problem turned around into a deconvolution problem)? The simplest case is Y Gaussian.
Rectangular convolution
Recently, additive free convolution has been extended to handle addition of rectangular matrices, with eigenvalue distributions replaced by singular value distributions: assume Ak, Bk rectangular (n × N), with limiting singular laws µA, µB, and that lim_{k→∞} n/N = λ. In many cases (for instance when the matrices are independent and unitarily invariant [4]), the limiting singular law of Ak + Bk (µ_{A+B}) exists and can be computed from (and depends only on) µA, µB. We define

µA ⊞_λ µB = µ_{A+B}.

- ⊞_λ is called rectangular free convolution with ratio λ.
- ⊞_λ can be extended to the set of all symmetric probability measures [4].
- Expressible in terms of the free probability constructs additive (⊞) and multiplicative (⊠) free convolution [5].
Ways to compute free convolution
- Using the fact that the Stieltjes transforms of the measures µ1, µ2, µ1 ⊞ µ2, µ1 ⊠ µ2 satisfy certain functional equations: discretize these equations, and turn the computational problem into a convex optimization problem [6].
- When the Stieltjes transforms of µ1 and µ2 satisfy certain polynomial equations, one can show that the Stieltjes transform of µ1 ⊞ µ2 also satisfies a certain polynomial equation, and that the polynomial of µ1 ⊞ µ2 can be computed from those of µ1 and µ2. This in turn enables computing µ1 ⊞ µ2 itself. This method is called the polynomial method of random matrices [7].
- Perform free convolution solely in terms of moments. Fast algorithms exist, both for first and second order moments.
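A sketch of the moment-based approach in Python (hypothetical helper functions, not code from the talk). Up to order 3 the free cumulants coincide with the classical cumulants, since every partition of at most three elements is non-crossing; ⊞ then amounts to converting moments to free cumulants, adding them (the R-transform linearizes ⊞), and converting back. Higher orders need the full non-crossing partition machinery.

```python
def moments_to_free_cumulants(m1, m2, m3):
    # valid up to order 3, where non-crossing partitions = all partitions
    k1 = m1
    k2 = m2 - m1**2
    k3 = m3 - 3 * m1 * m2 + 2 * m1**3
    return k1, k2, k3

def free_cumulants_to_moments(k1, k2, k3):
    m1 = k1
    m2 = k2 + k1**2
    m3 = k3 + 3 * k1 * k2 + k1**3
    return m1, m2, m3

def free_add_convolve(ma, mb):
    # free cumulants are additive under free additive convolution
    ka = moments_to_free_cumulants(*ma)
    kb = moments_to_free_cumulants(*mb)
    return free_cumulants_to_moments(*(x + y for x, y in zip(ka, kb)))

# semicircle (moments 0, 1, 0) plus a free copy of itself:
# a semicircle with variance 2
assert free_add_convolve((0, 1, 0), (0, 1, 0)) == (0, 2, 0)
```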
Application of free probability to channel capacity estimation [8]

The capacity per receiving antenna of a channel with n × m channel matrix H and signal to noise ratio ρ = 1/σ² is given by

C = (1/n) log2 det(In + (1/(mσ²)) H H^H) = (1/n) Σ_{l=1}^n log2(1 + λl/σ²),   (2)

where λl are the eigenvalues of (1/m) H H^H. We would like to estimate C.

To estimate C, we will use free probability tools to estimate the eigenvalues of (1/m) H H^H based on some observations Ĥi.
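The two expressions in (2), the determinant form and the eigenvalue form, can be checked against each other directly (a NumPy sketch, purely illustrative):

```python
import numpy as np

def capacity_det(H, sigma2, m):
    n = H.shape[0]
    # C = (1/n) log2 det(I_n + H H^H / (m sigma^2))
    return np.log2(np.linalg.det(np.eye(n) + H @ H.conj().T / (m * sigma2))).real / n

def capacity_eig(H, sigma2, m):
    # C = (1/n) sum_l log2(1 + lambda_l / sigma^2), lambda_l eigenvalues of (1/m) H H^H
    lam = np.linalg.eigvalsh(H @ H.conj().T / m)
    return np.mean(np.log2(1 + lam / sigma2))

rng = np.random.default_rng(0)
n, m, sigma2 = 4, 6, 0.1
H = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
assert np.isclose(capacity_det(H, sigma2, m), capacity_eig(H, sigma2, m))
```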
Observation model
The following is a much used observation model:

Ĥi = H + σ Xi   (3)

where
- the matrices are n × m (n is the number of receiving antennas, m is the number of transmitting antennas),
- Ĥi is the measured MIMO matrix,
- Xi is the noise matrix with i.i.d. standard complex Gaussian entries.
Existing ways to estimate the channel capacity
Several channel capacity estimators have been used in the literature:

C1 = (1/(nL)) Σ_{i=1}^L log2 det(In + (1/(mσ²)) Ĥi Ĥi^H)
C2 = (1/n) log2 det(In + (1/(Lσ²m)) Σ_{i=1}^L Ĥi Ĥi^H)   (4)
C3 = (1/n) log2 det(In + (1/(σ²m)) ((1/L) Σ_{i=1}^L Ĥi)((1/L) Σ_{i=1}^L Ĥi)^H)

Why not try to formulate an estimator based on free probability instead?
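The estimators in (4) are straightforward to implement (a NumPy sketch, not code from the talk). In the hypothetical noise-free case Ĥi = H all three reduce to the true capacity, which makes a convenient sanity check:

```python
import numpy as np

def cap(M, sigma2, m):
    # (1/n) log2 det(I_n + M / (m sigma^2)) for an n x n Hermitian M
    n = M.shape[0]
    return np.log2(np.linalg.det(np.eye(n) + M / (m * sigma2))).real / n

def C1(Hhats, sigma2, m):
    return np.mean([cap(H @ H.conj().T, sigma2, m) for H in Hhats])

def C2(Hhats, sigma2, m):
    avg = np.mean([H @ H.conj().T for H in Hhats], axis=0)
    return cap(avg, sigma2, m)

def C3(Hhats, sigma2, m):
    Hbar = np.mean(Hhats, axis=0)
    return cap(Hbar @ Hbar.conj().T, sigma2, m)

rng = np.random.default_rng(0)
n, m, sigma2, L = 4, 4, 0.1, 5
H = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
obs = [H.copy() for _ in range(L)]   # noise-free observations for the sanity check
true_C = cap(H @ H.conj().T, sigma2, m)
assert np.allclose([C1(obs, sigma2, m), C2(obs, sigma2, m), C3(obs, sigma2, m)], true_C)
```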
[Plot: true capacity and the estimators C1, C2, C3 versus the number of observations (10-100).]
Comparison of the classical capacity estimators for various numbers of observations. σ² = 0.1, n = 10 receive antennas, m = 10 transmit antennas. The rank of H was 3.
Form the compound observation matrix
Ĥ_{1...L} = H_{1...L} + (σ/√L) X_{1...L}, where

Ĥ_{1...L} = (1/√L) [Ĥ1, Ĥ2, ..., ĤL],
H_{1...L} = (1/√L) [H, H, ..., H],
X_{1...L} = [X1, X2, ..., XL].

With free probability, moments of the observation matrix

(1/m) Ĥ_{1...L} Ĥ_{1...L}^H

can be related to the moments of the channel matrix

(1/m) H_{1...L} H_{1...L}^H = (1/m) H H^H

(one needs to perform additive free and multiplicative free convolution with the Marchenko-Pastur law µ_{n/(mL)}). From the moments we can estimate eigenvalues, and then the channel capacity.
Free probability based estimator for the moments of the channel matrix

Can also be written in the following way for the first four moments:

ĥ1 = h1 + σ²
ĥ2 = h2 + 2σ²(1 + c)h1 + σ⁴(1 + c)
ĥ3 = h3 + 3σ²(1 + c)h2 + 3σ²c h1² + 3σ⁴(c² + 3c + 1)h1 + σ⁶(c² + 3c + 1)
ĥ4 = h4 + 4σ²(1 + c)h3 + 8σ²c h2 h1 + σ⁴(6c² + 16c + 6)h2 + 14σ⁴c(1 + c)h1² + 4σ⁶(c³ + 6c² + 6c + 1)h1 + σ⁸(c³ + 6c² + 6c + 1),   (5)

where ĥi are the moments of the observation matrix (1/m) Ĥ_{1...L} Ĥ_{1...L}^H, and hi are the moments of (1/m) H H^H.
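Inverting (5) order by order gives moment estimates for the channel matrix. For the first two moments this looks as follows (a Python sketch with hypothetical numbers for h1, h2, σ², c; illustration only):

```python
from math import isclose

def observed_moments(h1, h2, sigma2, c):
    # forward direction of (5), first two moments
    hh1 = h1 + sigma2
    hh2 = h2 + 2 * sigma2 * (1 + c) * h1 + sigma2**2 * (1 + c)
    return hh1, hh2

def deconvolved_moments(hh1, hh2, sigma2, c):
    # solve (5) back for h1, h2, order by order
    h1 = hh1 - sigma2
    h2 = hh2 - 2 * sigma2 * (1 + c) * h1 - sigma2**2 * (1 + c)
    return h1, h2

h1, h2, sigma2, c = 0.7, 1.3, 0.1, 0.5   # hypothetical values
r1, r2 = deconvolved_moments(*observed_moments(h1, h2, sigma2, c), sigma2, c)
assert isclose(r1, h1) and isclose(r2, h2)   # round trip recovers the moments
```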
[Plot: true capacity, Cf and CG versus the number of observations (0-100).]
Comparison of Cf and CG for various numbers of observations. σ² = 0.1, n = 10 receive antennas, m = 10 transmit antennas. The rank of H was 3.
[Plot: true capacity and Cf versus the number of observations (0-100), for H of rank 3, 5 and 6.]
Cf for various numbers of observations. σ² = 0.1, n = 10 receive antennas, m = 10 transmit antennas. The rank of H was 3, 5 and 6.
Do other types of random matrices (i.e. non-unitarily invariant) fit into a framework similar to free probability?

We have investigated this for Vandermonde matrices [9, 10], which are widely used. They have the form

V = (x_l^{k−1})_{1≤k≤N, 1≤l≤L},

i.e. column l is (1, x_l, ..., x_l^{N−1})^T. It is straightforward to show that square Vandermonde matrices have determinant

det(V) = Π_{1≤k<l≤N} (x_l − x_k).

In particular, V is nonsingular if the x_k are different.
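The determinant formula is easy to verify numerically (a NumPy sketch, purely illustrative):

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 3.0, 4.0])          # distinct nodes
N = len(x)
# V has rows 1, x, ..., x^(N-1)
V = np.array([[xl**i for xl in x] for i in range(N)])

prod = 1.0
for k, l in combinations(range(N), 2):      # product over 1 <= k < l <= N
    prod *= x[l] - x[k]

assert np.isclose(np.linalg.det(V), prod)   # both equal 12 here
```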
Various results exist on the distribution of the determinant of Vandermonde matrices (Gaussian entries (Mehta), entries with B-distribution (Selberg)), but there are many open problems (below, V^H V is used since V is rectangular in general):

- How can we find the moments of the Vandermonde matrices (i.e. tr_L((V^H V)^k)), not the determinant itself?
- Deconvolution problem: how to estimate the moments of D from the mixed moments of D V^H V?
- Mixed moments of independent Vandermonde matrices?
- Asymptotic results? If X is an N × N standard, complex, Gaussian matrix, then

  lim_{N→∞} (1/N) log2 det(I_N + ρ (1/N) X X^H)
    = 2 log2(1 + ρ − (1/4)(√(4ρ + 1) − 1)²) − (log2 e)/(4ρ) · (√(4ρ + 1) − 1)²

  (which is the expression for the capacity). We are not aware of similar asymptotic expressions for the determinant/capacity of Vandermonde matrices.
Random Vandermonde matrices
We will consider Vandermonde matrices V of dimension N × L of the form

V = (1/√N) (e^{−j(k−1)ω_l})_{1≤k≤N, 1≤l≤L}   (6)

(i.e. we assume that the x_i lie on the unit circle). The ω_i are called phase distributions. We will limit the study of Vandermonde matrices to cases where

- the phase distributions are i.i.d.,
- the asymptotic case N, L → ∞ with lim_{N→∞} L/N = c. The normalizing factor 1/√N is included to ensure limiting asymptotic behaviour.
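Constructing a matrix of the form (6) is a one-liner (a NumPy sketch, not from the slides). Note that with the 1/√N normalization every diagonal entry of V^H V equals 1, so tr_L(V^H V) = 1 for every realization:

```python
import numpy as np

def random_vandermonde(N, L, rng):
    """N x L Vandermonde matrix of type (6) with i.i.d. uniform phases on [0, 2*pi)."""
    omega = rng.uniform(0, 2 * np.pi, L)
    rows = np.arange(N).reshape(-1, 1)       # exponents 0, ..., N-1
    return np.exp(-1j * rows * omega) / np.sqrt(N)

rng = np.random.default_rng(0)
V = random_vandermonde(256, 128, rng)
W = V.conj().T @ V
# each diagonal entry of V^H V is (1/N) * sum of N unit-modulus terms' squares = 1
assert np.allclose(np.diag(W).real, 1.0)
```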
Where can such Vandermonde matrices appear?
Consider a multi-path channel of the form

h(τ) = Σ_{i=1}^L α_i g(τ − τ_i)

where
- α_i are i.d. Gaussian random variables with power P_i,
- τ_i are uniformly distributed delays over [0, T],
- g is the low pass transmit filter,
- L is the number of paths.

In the frequency domain, the channel is given by

c(f) = Σ_{i=1}^L α_i G(f) e^{−j2πf τ_i}.

We suppose the transmit filter to be ideal (G(f) = 1).
Sampling the continuous frequency signal at f_i = iW/N (N is the number of frequency samples), where W is the bandwidth, our model becomes

r = V P^{1/2} (α1, ..., αL)^T + (n1, ..., nN)^T,   (7)

where V is a random Vandermonde matrix of the type (6), and
- P is the L × L diagonal power matrix,
- n_i is independent, additive, white, zero mean Gaussian noise of variance σ/√N.
Main result
Definition
Define

K_{ρ,ω,N} = (1/N^{n+1−|ρ|}) ∫_{(0,2π)^{|ρ|}} Π_{k=1}^n (1 − e^{jN(ω_{b(k−1)} − ω_{b(k)})}) / (1 − e^{j(ω_{b(k−1)} − ω_{b(k)})}) dω1 · · · dω_{|ρ|},   (8)

where ω_{ρ1}, ..., ω_{ρ|ρ|} are i.i.d. (indexed by the blocks of ρ), all with the same distribution as ω, and where b(k) is the block of ρ which contains k (where notation is cyclic, i.e. b(0) = b(n)). If the limit

K_{ρ,ω} = lim_{N→∞} K_{ρ,ω,N}

exists, then K_{ρ,ω} is called a Vandermonde mixed moment expansion coefficient.
Main result 2
Assume that
- {Dr(N)}_{1≤r≤n} are diagonal L × L matrices which have a joint limit distribution as N → ∞,
- L/N → c.

We would like to express the limits

Mn = lim_{N→∞} E[tr_L(D1(N) V^H V D2(N) V^H V · · · Dn(N) V^H V)].   (9)

It turns out that this is feasible when all Vandermonde mixed moment expansion coefficients K_{ρ,ω} exist.
For convenience, define

m_n = (cM)_n = c lim_{N→∞} E[tr_L((D(N) V^H V)^n)],
d_n = (cD)_n = c lim_{N→∞} tr_L(D(N)^n),   (10)

Theorem
Assume D1(N) = D2(N) = · · · = Dn(N). When ω = u (the uniform distribution),

m1 = d1
m2 = d2 + d1²
m3 = d3 + 3 d2 d1 + d1³
m4 = d4 + 4 d3 d1 + (8/3) d2² + 6 d2 d1² + d1⁴
m5 = d5 + 5 d4 d1 + (25/3) d3 d2 + 10 d3 d1² + (40/3) d2² d1 + 10 d2 d1³ + d1⁵
m6 = d6 + 6 d5 d1 + 12 d4 d2 + 15 d4 d1² + (151/20) d3² + 50 d3 d2 d1 + 20 d3 d1³ + 11 d2³ + 40 d2² d1² + 15 d2 d1⁴ + d1⁶
m7 = d7 + 7 d6 d1 + (49/3) d5 d2 + 21 d5 d1² + (497/20) d4 d3 + 84 d4 d2 d1 + 35 d4 d1³ + (1057/20) d3² d1 + (693/10) d3 d2² + 175 d3 d2 d1² + 35 d3 d1⁴ + 77 d2³ d1 + (280/3) d2² d1³ + 21 d2 d1⁵ + d1⁷.
Comparison
The Gaussian equivalent of this is

m1 = d1
m2 = d2 + d1²
m3 = d3 + 3 d2 d1 + d1³
m4 = d4 + 4 d3 d1 + 3 d2² + 6 d2 d1² + d1⁴
m5 = d5 + 5 d4 d1 + 5 d3 d2 + 10 d3 d1² + 10 d2² d1 + 10 d2 d1³ + d1⁵
m6 = d6 + 6 d5 d1 + 6 d4 d2 + 15 d4 d1² + 3 d3² + 30 d3 d2 d1 + 20 d3 d1³ + 5 d2³ + 10 d2² d1² + 15 d2 d1⁴ + d1⁶   (11)
m7 = d7 + 7 d6 d1 + 7 d5 d2 + 21 d5 d1² + 7 d4 d3 + 42 d4 d2 d1 + 35 d4 d1³ + 21 d3² d1 + 21 d3 d2² + 105 d3 d2 d1² + 35 d3 d1⁴ + 35 d2³ d1 + 70 d2² d1³ + 21 d2 d1⁵ + d1⁷,

when we replace V^H V with (1/N) X X^H, with X an L × N complex, standard, Gaussian matrix.
Sketch of proof
We can write

E[tr_L(D1(N) V^H V D2(N) V^H V · · · Dn(N) V^H V)]   (12)

as

(1/L) Σ_{i1,...,in, j1,...,jn} E( D1(N)(j1, j1) V^H(j1, i2) V(i2, j2) × D2(N)(j2, j2) V^H(j2, i3) V(i3, j3) × · · · × Dn(N)(jn, jn) V^H(jn, i1) V(i1, j1) ).   (13)

The (j1, ..., jn) give rise to a partition ρ of {1, ..., n}, where each block ρj consists of equal values, i.e.

ρj = {k | jk = j}.

This ρ will actually represent the ρ used in the definition of K_{ρ,ω,N}. The rest of the proof goes by carefully computing this limit quantity using much combinatorics.
Comparisons
Denote by

ν(λ, α) = lim_{n→∞} ((1 − λ/n) δ0 + (λ/n) δα)^{∗n}

the (classical) Poisson distribution of rate λ and jump size α. Denote also by

µ(λ, α) = lim_{n→∞} ((1 − λ/n) δ0 + (λ/n) δα)^{⊞n}

the free Poisson distribution of rate λ and jump size α (also called the Marchenko-Pastur law). Denote also µc = µ(1/c, c), νc = ν(c, 1).
Comparisons 2
Corollary
Assume that V has uniformly distributed phases. Then the limit moment

Mn = lim_{N→∞} E[tr_L((V^H V)^n)]

satisfies the inequality

φ(a1^n) ≤ Mn ≤ (1/c) E(a2^n),

where a1 ∼ µc, a2 ∼ νc. In particular, equality occurs for n = 1, 2, 3 and c = 1.
Comparisons 3
Figure: histogram of mean eigenvalue distributions. (a) V^H V, with V a 1600 × 800 Vandermonde matrix with uniformly distributed phases. (b) (1/N) X X^H, with X an 800 × 1600 complex, standard, Gaussian matrix.
Other results
- Mixed moments of (more than one) independent Vandermonde matrices.
- Generalized Vandermonde matrices: these have the form V = (e^{jα_k β_l})_{1≤k≤N, 1≤l≤L}. It is known that V is nonsingular iff all α_k are different and all β_l are different. The papers also contain results on the asymptotics of generalized Vandermonde matrices.
- Exact moments of lower order Vandermonde matrices. Reveals slower convergence.
- Computation of the asymptotic moments when the phase distribution is not uniform. Phase distributions with continuous density, and phase distributions with singularities.
Application 1: Estimation of the number of paths
Return to the multi-path channel model (7). For simplicity, set W = T = 1, so that the phase distribution of the Vandermonde matrix is uniform. We take K observations of (7) and form the observation matrix

Y = [r1 · · · rK] = V P^{1/2} (α_l^{(k)})_{1≤l≤L, 1≤k≤K} + (n_i^{(k)})_{1≤i≤N, 1≤k≤K},   (14)

where α_l^{(k)} and n_i^{(k)} are the weights and the noise from the k-th observation. It is now possible to combine the deconvolution result for Vandermonde matrices with known deconvolution results for Gaussian matrices to estimate L from a number of observations (assuming P is known). All values of L are tried, and the one which "best matches" the observed values is chosen:
Estimation of the number of paths 2
Proposition
Assume that V has uniformly distributed phases, let mPi be the moments of P, and let mR̂i = tr_N(R̂^i) be the moments of the sample covariance matrix

R̂ = (1/K) Y Y^H.

Define also c1 = N/K, c2 = L/N, and c3 = L/K. Then

E[mR̂1] = c2 mP1 + σ²
E[mR̂2] = c2 (1 − 1/N) mP2 + c2 (c2 + c3)(mP1)² + 2σ²(c2 + c3) mP1 + σ⁴(1 + c1)
E[mR̂3] = · · ·
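The first relation in the proposition inverts directly: given the observed first moment, mP1 ≈ (E[mR̂1] − σ²)/c2. A numeric Python sketch with hypothetical values (illustration only):

```python
def observed_first_moment(mP1, sigma2, c2):
    # E[mRhat_1] = c2 * mP1 + sigma^2, from the proposition
    return c2 * mP1 + sigma2

mP1, sigma2 = 0.8, 0.1            # hypothetical power moment and noise variance
c2 = 36 / 100.0                   # c2 = L/N with, say, L = 36 paths, N = 100 samples
mR1 = observed_first_moment(mP1, sigma2, c2)
estimate = (mR1 - sigma2) / c2    # deconvolve the first moment back out
assert abs(estimate - mP1) < 1e-9
```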
Estimation of the number of paths 3
[Plot: estimated and actual value of L versus the number of observations, for (a) K = 1 and (b) K = 10.]
Figure: estimate for the number of paths. The actual value of L is 36. Also, σ = √0.1, N = 100.
Application 2: Wireless capacity Analysis
For a general matrix W, the mean capacity is defined as

CN = (1/N) E[log2 det(IN + (1/σ²) W W^H)]
   = (1/N) Σ_{k=1}^N E[log2(1 + (1/σ²) λk(W W^H))] = ∫ log2(1 + t/σ²) µ(dt),   (15)

where µ is the mean empirical eigenvalue distribution of W W^H. Substituting the Taylor series log2(1 + t) = (1/ln 2) Σ_{k=1}^∞ (−1)^{k+1} t^k/k, we obtain

CN = (1/ln 2) Σ_{k=1}^∞ (−1)^{k+1} mk(µ) ρ^k / k,   (16)

where ρ is the SNR, and where

mk(µ) = ∫ t^k dµ(t) for k ∈ Z+.

However, many more moments are required for precise estimation of capacity than we can provide with the formulas for the first 7 moments.
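The truncation issue is visible even for a single eigenvalue: with µ = δ1, the series (16) is just the Mercator series for log2(1 + ρ), which converges slowly (and only for ρ ≤ 1). A Python sketch (illustration, not from the slides):

```python
from math import log, log2

def capacity_series(moments, rho):
    # truncated version of (16); moments = [m1, m2, ...]
    return sum((-1) ** (k + 1) * mk * rho**k / k
               for k, mk in enumerate(moments, start=1)) / log(2)

rho = 0.5
moments = [1.0] * 7                 # mu = delta_1: every moment equals 1
exact = log2(1 + rho)
approx = capacity_series(moments, rho)
assert abs(approx - exact) < 0.01   # 7 terms give only a few correct digits
assert approx > exact               # alternating series, last kept term positive
```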
Wireless capacity Analysis 2
[Plot: capacity versus ρ (0-10), several sample-capacity realizations together with the asymptotic capacity.]
Figure: several realizations of the capacity (1/N) log2 det(I + ρ (1/N) X X^H) when X is standard, complex, Gaussian. Matrices of size 36 × 36 were used. The known expression for the asymptotic capacity is also shown.
Wireless capacity Analysis 3

[Plot: capacity versus ρ (0-10) in two panels. (a) Realizations of (1/N) log2 det(I + ρ V V^H) when ω has uniform phase distribution. (b) Realizations of (1/N) log2 det(I + ρ V V^H) when ω has a certain non-uniform phase distribution.]
Figure: several realizations of the capacity for Vandermonde matrices for two different phase distributions. Matrices of size 36 × 36 were used.
- This talk is available at http://heim.ifi.uio.no/~oyvindry/talks.shtml.
- My publications are listed at http://heim.ifi.uio.no/~oyvindry/publications.shtml

THANK YOU!
[1] B. Collins, J. A. Mingo, P. Śniady, and R. Speicher, "Second order freeness and fluctuations of random matrices: III. Higher order freeness and free cumulants", Documenta Math., vol. 12, pp. 1-70, 2007.
[2] N. R. Rao, J. Mingo, R. Speicher, and A. Edelman, "Statistical eigen-inference from large Wishart matrices", 2007, arxiv.org/abs/math.ST/0701314.
[3] Ø. Ryan and M. Debbah, "Free deconvolution for signal processing applications", submitted to IEEE Trans. on Information Theory, 2007, http://arxiv.org/abs/cs.IT/0701025.
[4] F. Benaych-Georges, "Rectangular random matrices, related convolution", 2008, arxiv.org/abs/math.PR/0507336.
[5] Ø. Ryan and M. Debbah, "Multiplicative free convolution and information-plus-noise type matrices", 2007, http://arxiv.org/abs/math.PR/0702342.
[6] N. E. Karoui, "Spectrum estimation for large dimensional covariance matrices using random matrix theory", 2006, arxiv.org/abs/math/0609418.
[7] N. R. Rao and A. Edelman, "The polynomial method for random matrices", 2007, arxiv.org/abs/math.PR/0601389.
[8] Ø. Ryan and M. Debbah, "Channel capacity estimation using free probability theory", to appear in IEEE Trans. Signal Process., 2007, http://arxiv.org/abs/0707.3095.
[9] Ø. Ryan and M. Debbah, "Random Vandermonde matrices, part I: Fundamental results", submitted to IEEE Trans. on Information Theory, 2008.
[10] Ø. Ryan and M. Debbah, "Random Vandermonde matrices, part II: Applications", submitted to IEEE Trans. on Information Theory, 2008.