Overview of the Mathematics of Compressive Sensing
Simon Foucart
Reading Seminar on "Compressive Sensing, Extensions, and Applications"
Texas A&M University
15 October 2015
RIP for Random Matrices

Concentration Inequality

- Let $A \in \mathbb{R}^{m \times N}$ be a random matrix with entries
  $$a_{i,j} = \frac{g_{i,j}}{\sqrt{m}},$$
  where the $g_{i,j}$ are independent $\mathcal{N}(0,1)$.
- For a fixed $x \in \mathbb{R}^N$, note that $(Ax)_i = \sum_{j=1}^N a_{i,j} x_j$, hence
  $$\mathbb{E}\big[(Ax)_i^2\big] = \mathbb{V}\Big(\sum_{j=1}^N a_{i,j} x_j\Big) = \sum_{j=1}^N x_j^2\,\mathbb{V}(a_{i,j}) = \frac{\|x\|_2^2}{m},
  \qquad \text{so} \qquad \mathbb{E}\,\|Ax\|_2^2 = \|x\|_2^2.$$
- In fact, $\|Ax\|_2^2$ concentrates around its mean: for $t \in (0,1)$,
  $$\mathbb{P}\big(\big|\,\|Ax\|_2^2 - \|x\|_2^2\,\big| > t\,\|x\|_2^2\big) \le 2\exp(-c\,t^2 m). \tag{CI}$$
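
As a quick numerical illustration of (CI), not part of the original slides, the following minimal Python sketch draws many Gaussian matrices and estimates how often $\|Ax\|_2^2$ deviates from $\|x\|_2^2$ by more than $t\,\|x\|_2^2$; the dimensions, trial count, and value of $t$ are arbitrary choices.

```python
import numpy as np

# Empirical sanity check of the concentration inequality (CI) for a
# Gaussian matrix A with entries g_{i,j}/sqrt(m).  The dimensions,
# trial count, and threshold t below are illustrative assumptions.
rng = np.random.default_rng(0)
m, N, trials = 100, 400, 2000
x = rng.standard_normal(N)
x /= np.linalg.norm(x)                      # unit vector, so E||Ax||_2^2 = 1

sq_norms = np.empty(trials)
for k in range(trials):
    A = rng.standard_normal((m, N)) / np.sqrt(m)
    sq_norms[k] = np.linalg.norm(A @ x) ** 2

t = 0.2
print("mean of ||Ax||_2^2 :", sq_norms.mean())                   # close to 1
print("empirical P(deviation > t):", np.mean(np.abs(sq_norms - 1.0) > t))
```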

Covering Arguments

Suppose that the random matrix $A \in \mathbb{R}^{m \times N}$ satisfies (CI). Let $S \subseteq [N]$ with $\mathrm{card}(S) = s$. Then
$$\mathbb{P}\big(\|A_S^* A_S - \mathrm{Id}\|_{2 \to 2} > \delta\big) \le 2\exp(-c\,\delta^2 m)$$
provided
$$m \ge \frac{c'}{\delta^2}\, s.$$

The argument relies on the following fact: a subset $U$ of the unit ball of $\mathbb{R}^k$ relative to a norm $\|\cdot\|$ has covering and separating numbers satisfying
$$N(U, \|\cdot\|, \rho) \le S(U, \|\cdot\|, \rho) \le \Big(1 + \frac{2}{\rho}\Big)^k.$$
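
To make the quantity $\|A_S^* A_S - \mathrm{Id}\|_{2 \to 2}$ concrete, here is a minimal Python sketch computing it for one Gaussian matrix and one random support; the dimensions are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# One draw of ||A_S^* A_S - Id||_{2->2} for a Gaussian matrix and a
# random support S of cardinality s; dimensions are illustrative.
rng = np.random.default_rng(1)
m, N, s = 400, 2000, 20
A = rng.standard_normal((m, N)) / np.sqrt(m)

S = rng.choice(N, size=s, replace=False)    # a fixed support of size s
A_S = A[:, S]
G = A_S.T @ A_S - np.eye(s)                 # A_S^* A_S - Id (Gram defect)
print("||A_S^* A_S - Id||_{2->2} =", np.linalg.norm(G, 2))   # spectral norm
```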

Restricted Isometry Property

- Suppose that the random matrix $A \in \mathbb{R}^{m \times N}$ satisfies (CI). Then
  $$\mathbb{P}(\delta_s > \delta) \le 2\exp(-c\,\delta^2 m)$$
  provided
  $$m \ge \frac{c'}{\delta^2}\, s \ln(eN/s).$$
- The arguments are also valid for subgaussian matrices (e.g. Bernoulli matrices), since these satisfy (CI), too.
- For Gaussian matrices, more powerful techniques can provide an explicit value for $c'$.
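
For very small instances, the restricted isometry constant $\delta_s$ can be computed exactly by exhausting all supports of size $s$; this is only meant to make the definition concrete, since the search is exponential in $s$. A minimal Python sketch with illustrative dimensions:

```python
import numpy as np
from itertools import combinations

# Exact computation of the restricted isometry constant delta_s by
# exhausting all supports of size s -- feasible only for tiny N and s.
# All dimensions are illustrative assumptions.
rng = np.random.default_rng(2)
m, N, s = 40, 15, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)

delta_s = 0.0
for S in combinations(range(N), s):
    A_S = A[:, list(S)]
    eigs = np.linalg.eigvalsh(A_S.T @ A_S)   # spectrum of A_S^* A_S
    delta_s = max(delta_s, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
print("delta_s =", delta_s)
```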

Summary

The RI conditions for $s$-sparse recovery are of the type
$$\delta_{\kappa s} < \delta_*.$$
They guarantee stable and robust reconstructions in the form, say,
$$\|x - \Delta(Ax + e)\|_2 \le \frac{C}{\sqrt{s}}\,\sigma_s(x)_1 + D\,\|e\|_2 \tag{1}$$
for all $x$ and all $e$.
Random matrices fulfill the RI conditions with high probability as soon as
$$m \ge c\, s \ln(N/s). \tag{2}$$
Next, we will see that this number of measurements is optimal, in the sense that estimates of type (1) require (2) to hold. We will also examine the gain in replacing "for all $x$" in (1) by "for a fixed $x$".
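
The slides leave the decoder $\Delta$ abstract; one standard choice achieving guarantees of type (1) under an RI condition is $\ell_1$-minimization (basis pursuit), $\Delta(y) \in \operatorname{argmin}\{\|z\|_1 : Az = y\}$. Below is a minimal Python sketch of the noiseless case ($e = 0$), solved as a linear program via the split $z = u - v$ with $u, v \ge 0$; the dimensions and the use of scipy's linprog are illustrative choices, not from the slides.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit, Delta(y) = argmin ||z||_1 subject to Az = y, solved as
# a linear program via z = u - v with u, v >= 0.  A minimal noiseless
# sketch (e = 0); all dimensions are illustrative assumptions.
rng = np.random.default_rng(3)
m, N, s = 50, 120, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)

x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)  # s-sparse x
y = A @ x                                                         # measurements

# minimize 1^T u + 1^T v  subject to  A u - A v = y,  u >= 0,  v >= 0
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("recovery error ||x - x_hat||_2 =", np.linalg.norm(x - x_hat))
```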

Other types of random matrices

Under proper normalization of the matrices,

- Partial Fourier matrices with $m \ge c\,\delta^{-2} s \ln^3(N)$ rows selected at random satisfy the RIP with high probability (see the sketch after this list).
- Pregaussian (e.g. Laplace) random matrices with $m \ge c_\delta\, s \ln(eN/s)$ rows satisfy, with overwhelming probability,
  $$(1-\delta)\,|||z||| \le \|Az\|_1 \le (1+\delta)\,|||z||| \qquad \text{for all } s\text{-sparse } z \in \mathbb{R}^N,$$
  where the slanted norm $|||\cdot|||$ is comparable to the $\ell_2$-norm.
- Adjacency matrices of lossless expanders (which exist with nonzero probability) satisfy
  $$(1-\theta)\,\|z\|_1 \le \|Az\|_1 \le \|z\|_1 \qquad \text{for all } s\text{-sparse } z \in \mathbb{R}^N.$$
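
As a concrete instance of the first bullet, a normalized partial Fourier matrix can be built by selecting $m$ rows of the $N \times N$ unitary DFT matrix at random and rescaling by $\sqrt{N/m}$, which gives $\mathbb{E}\,\|Ax\|_2^2 = \|x\|_2^2$. A minimal Python sketch; the row-selection model and sizes are illustrative assumptions.

```python
import numpy as np

# A normalized partial Fourier matrix: m rows of the N x N unitary DFT
# matrix drawn at random, rescaled by sqrt(N/m) so that E||Ax||_2^2
# equals ||x||_2^2.  Row-selection model and sizes are illustrative.
rng = np.random.default_rng(4)
m, N = 64, 512

F = np.fft.fft(np.eye(N)) / np.sqrt(N)         # unitary DFT matrix
rows = rng.choice(N, size=m, replace=False)    # m rows selected at random
A = np.sqrt(N / m) * F[rows, :]

x = rng.standard_normal(N)
print("||Ax||_2^2 / ||x||_2^2 =",
      np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)  # close to 1 on average
```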