An Overview of Compressed Sensing
M. Vidyasagar FRS
Cecil & Ida Green Chair, The University of Texas at Dallas
[email protected], www.utdallas.edu/~m.vidyasagar
Distinguished Professor, IIT Hyderabad
[email protected]
Bhagavatula Rama Murti and Bhagavatula Sharadamba Memorial Lecture
University of Hyderabad, 16 March 2015
Outline
1. What is Compressed Sensing?
2. Main Results
3. Construction of Measurement Matrices
   Deterministic Approaches
   Probabilistic Approaches
   A Case Study
4. Some Topics Not Covered
Compressed Sensing: Basic Problem Formulation
Suppose x ∈ R^n is known to be k-sparse, where k ≪ n; that is, |supp(x)| ≤ k ≪ n, where supp denotes the support of a vector. However, it is not known which k components are nonzero.
Can we recover x exactly by taking m ≪ n linear measurements of x?
Precise Problem Statement: Basic
Define the set of k-sparse vectors
Σ_k = {x ∈ R^n : |supp(x)| ≤ k}.
Do there exist an integer m, a "measurement matrix" A ∈ R^{m×n}, and a "demodulation map" ∆ : R^m → R^n, such that
∆(Ax) = x, ∀x ∈ Σ_k?
Note:
Measurements are linear, but the demodulation map may be nonlinear.
The algorithm is universal – the same A and ∆ must work for every vector x – and nonadaptive – all m rows of A must be chosen at the outset.
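A minimal numerical sketch of this setup, assuming NumPy; the sizes and the Gaussian measurement matrix are illustrative choices, not part of the problem statement:

import numpy as np

rng = np.random.default_rng(0)
n, k, m = 1000, 5, 100                 # illustrative sizes with k << m << n

# A k-sparse vector: k randomly placed nonzero components.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# m linear measurements of x; the demodulation map must recover x from y.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x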
Further Considerations
What if:
The vector x is not exactly sparse, but only nearly sparse?
The measurement is corrupted by noise, and equals Ax + η?
The vector x is (nearly) sparse in some basis other than the canonical one (e.g., in frequency rather than time)?
Some Definitions
Given a norm ‖·‖ on R^n and an integer k < n, define
σ_k(x, ‖·‖) := min_{z ∈ Σ_k} ‖x − z‖,
the k-sparsity index of the vector x.
x ∈ Σ_k (x is k-sparse) if and only if σ_k(x, ‖·‖) = 0 for every norm.
If x ∉ Σ_k, then σ_k(x, ‖·‖) depends on the specific norm.
The k-sparsity index w.r.t. an ℓ_p-norm is easy to compute: let Λ_0 denote the index set of the k largest components of x by magnitude. Then
σ_k(x, ‖·‖_p) = ‖x_{Λ_0^c}‖_p.
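The tail formula above translates directly into code; a short sketch, assuming NumPy (the function name is mine):

import numpy as np

def sparsity_index(x, k, p=2):
    """k-sparsity index sigma_k(x, ||.||_p): the l_p norm of what is left
    of x after its k largest-magnitude components are removed."""
    idx = np.argsort(np.abs(x))            # indices in ascending magnitude
    tail = x[idx[:-k]] if k > 0 else x     # drop the k largest components
    return np.linalg.norm(tail, ord=p)

By construction, sparsity_index(x, k, p) is zero exactly when x is k-sparse.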
Precise Problem Statement: Final
Given integers n and k ≪ n, and a real number ε > 0, do there exist an integer m, a matrix A ∈ R^{m×n}, and a map ∆ : R^m → R^n such that
‖∆(Ax + η) − x‖₂ ≤ C₁ σ_k(x, ‖·‖_p) + C₂ε,
whenever η ∈ R^m satisfies ‖η‖₂ ≤ ε? Here C₁, C₂ are "universal" constants that do not depend on x or η.
If so, the pair (A, ∆) is said to achieve near-ideal signal recovery.
This formulation combines several desirable features into one.
Interpretation
Suppose x is k-sparse and y = Ax + η. If an "oracle" knows the support set of x, call it J, then the standard least-squares estimate is
x̂ = (A_J^t A_J)⁻¹ A_J^t y = x + (A_J^t A_J)⁻¹ A_J^t η,
so that
‖x̂ − x‖₂ = ‖(A_J^t A_J)⁻¹ A_J^t η‖₂ ≤ C₀‖η‖₂,
where C₀ is the induced norm of (A_J^t A_J)⁻¹ A_J^t. With a pair (A, ∆) that achieves near-ideal signal recovery, we have
‖x̂ − x‖₂ ≤ C₂‖η‖₂.
So, if (A, ∆) achieves near-ideal signal recovery, then the estimation error is a constant multiple of that achievable by an "oracle," but without knowing the support set of x.
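The oracle estimate is an ordinary least-squares fit on the known support; a sketch, assuming NumPy (the function name is mine):

import numpy as np

def oracle_estimate(A, y, J):
    """Least-squares estimate of x from y = Ax + eta, given oracle
    knowledge of the support set J: fit on the columns A_J, then embed
    the fitted coefficients back into R^n."""
    w, *_ = np.linalg.lstsq(A[:, J], y, rcond=None)
    x_hat = np.zeros(A.shape[1])
    x_hat[J] = w
    return x_hat

When A_J has full column rank, lstsq computes exactly (A_J^t A_J)⁻¹ A_J^t y.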
Interpretation (Cont’d)
If x is not sparse, but measurements are noise-free (η = 0), then the estimate x̂ = ∆(Ax) satisfies
‖x̂ − x‖₂ ≤ C₁ σ_k(x, ‖·‖_p).
So the estimate is within a "universal constant" times the k-sparsity index of x.
If x is k-sparse and measurements are noise-free, we get exact signal recovery.
Restricted Isometry Property
Definition: A matrix A ∈ R^{m×n} is said to satisfy the restricted isometry property (RIP) of order k with constant δ_k if
(1 − δ_k)‖u‖₂² ≤ ‖Au‖₂² ≤ (1 + δ_k)‖u‖₂², ∀u ∈ Σ_k.
Equivalently: if J ⊆ {1, …, n} and |J| ≤ k, then the spectrum of A_J^t A_J lies in the interval [1 − δ_k, 1 + δ_k].
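For small n and k the RIP constant can be computed by brute force directly from this spectral characterization; a sketch, assuming NumPy (exponential in n, so toy examples only):

import itertools
import numpy as np

def rip_constant(A, k):
    """Smallest delta_k such that A satisfies the RIP of order k,
    by checking the spectrum of A_J^t A_J over all supports |J| = k."""
    n = A.shape[1]
    delta = 0.0
    for J in itertools.combinations(range(n), k):
        eigs = np.linalg.eigvalsh(A[:, J].T @ A[:, J])
        delta = max(delta, 1.0 - eigs[0], eigs[-1] - 1.0)
    return delta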
Candès–Tao Result on ℓ₁-Norm Minimization
Theorem: Suppose A satisfies the RIP of order 2k with constant δ_2k < √2 − 1. Define the demodulation map ∆ by
∆(y) = x̂ := arg min_z ‖z‖₁ s.t. Az = y.
Then
∆(Ax) = x, ∀x ∈ Σ_k.
In plain English: any k-sparse vector can be recovered exactly by minimizing ‖z‖₁ over z ∈ A⁻¹(Ax). Note that this is a linear programming problem.
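Since the minimization is a linear program, it can be solved with any LP solver by splitting z into positive and negative parts; a sketch, assuming SciPy:

import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, y):
    """Basis pursuit: arg min ||z||_1 subject to Az = y, posed as an LP.
    Write z = u - v with u, v >= 0; then ||z||_1 = sum(u) + sum(v)."""
    n = A.shape[1]
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

Applied to y = Ax from the earlier sketch, this should recover the k-sparse x whenever A satisfies the stated RIP condition.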
A More General Result
Theorem: Suppose A ∈ R^{m×n} satisfies the RIP of order 2k with constant δ_2k < √2 − 1, and that y = Ax + η where ‖η‖₂ ≤ ε. Define
x̂ = arg min_z ‖z‖₁ s.t. ‖y − Az‖₂ ≤ ε.
Then
‖x̂ − x‖₂ ≤ C₀ σ_k(x, ‖·‖₁)/√k + C₂ε,
where
C₀ = 2 [1 + (√2 − 1)δ_2k] / [1 − (√2 + 1)δ_2k],   C₂ = 4√(1 + δ_2k) / [1 − (√2 + 1)δ_2k].
Some Observations
The bounds are valid for all vectors x ∈ R^n – no sparsity assumptions.
The smaller we can make the RIP constant δ, the tighter the bounds.
We suspect that making δ smaller requires larger values of m (more measurements). Indeed this is so, as we shall discover next.
Construction of Matrices with the RIP
Challenge: Given integers n (the dimension of the vector) and k (the desired level of sparsity), and a real number δ ∈ (0, √2 − 1), choose an integer m and a matrix A ∈ R^{m×n} such that A has the RIP of order 2k with constant δ.
Refresher: A matrix A ∈ R^{m×n} satisfies the RIP of order k with constant δ_k if
(1 − δ_k)‖u‖₂² ≤ ‖Au‖₂² ≤ (1 + δ_k)‖u‖₂², ∀u ∈ Σ_k.
There are two distinct classes of approaches: deterministic and probabilistic.
Coherence of a Matrix
Given a matrix A ∈ R^{m×n}, assume w.l.o.g. that it is column-normalized, i.e., each column has unit ℓ₂-norm.
Definition: The one-column coherence µ₁(A) is defined as
µ₁(A) := max_{i∈[n]} max_{j∈[n]\{i}} |⟨a_i, a_j⟩|.
The k-column coherence µ_k(A) is defined as
µ_k(A) := max_{i∈[n]} max_{S⊆[n]\{i}, |S|≤k} Σ_{j∈S} |⟨a_i, a_j⟩|.
Coherence quantifies how "nearly orthonormal" the columns are.
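Both coherences are straightforward to compute from the Gram matrix; a sketch, assuming NumPy:

import numpy as np

def coherences(A, k):
    """One-column coherence mu_1 and k-column coherence mu_k of a
    column-normalized matrix A. Since the summands are nonnegative,
    the worst set S consists of the k largest entries in each row."""
    G = np.abs(A.T @ A)             # absolute inner products of columns
    np.fill_diagonal(G, 0.0)        # exclude j = i
    mu1 = G.max()
    muk = np.sort(G, axis=1)[:, -k:].sum(axis=1).max()
    return mu1, muk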
Consequence of Low Coherence
An easy consequence of the Gershgorin circle theorem:
Lemma: Suppose A ∈ R^{m×n} is column-normalized, and let S ⊆ {1, …, n} with |S| ≤ k ≤ m. Then
spec(A_S^t A_S) ⊆ [1 − µ_{k−1}(A), 1 + µ_{k−1}(A)].
Consequently A satisfies the RIP of order k with constant δ_k = µ_{k−1}(A).
A Lower Bound on Coherence
The following result is known as the "Welch bound."
Lemma: Suppose A ∈ R^{m×n} is column-normalized. Then
µ₁(A) ≥ √[(n − m)/(m(n − 1))] ≈ 1/√m,
µ_k(A) ≥ k √[(n − m)/(m(n − 1))] ≈ k/√m.
In view of the Welch bound, any matrix with µ_k(A) ≈ k/√m can be thought of as having "optimal coherence."
A Typical Construction
Due to DeVore (2007). Let p be a prime, and let a be a polynomial over the finite field Z/(p). Define the p × p matrix M(a) by
[M(a)]_ij = 1 if j = a(i), and [M(a)]_ij = 0 if j ≠ a(i).
Note that each row of M(a) has a single 1, the rest 0. Define the p² × 1 column vector u_a by concatenating the p columns of M(a), and note that u_a has exactly p ones, the rest zeros.
Now define A₀ ∈ {0,1}^{p² × p^{r+1}} by lining up the vectors u_a as columns, as a varies over all p^{r+1} polynomials of degree r or less with coefficients in Z/(p). Finally, define A = A₀/√p.
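A direct implementation of this construction, as a sketch assuming NumPy; it is practical only for small p, since the matrix has p² rows and p^(r+1) columns:

import numpy as np
from itertools import product

def devore_matrix(p, r):
    """DeVore's p^2 x p^(r+1) matrix: one column per polynomial of
    degree <= r over Z/(p) (p should be prime), scaled by 1/sqrt(p)."""
    cols = []
    for coeffs in product(range(p), repeat=r + 1):
        M = np.zeros((p, p))
        for i in range(p):
            # a(i) = c_0 + c_1*i + ... + c_r*i^r mod p
            M[i, sum(c * pow(i, e, p) for e, c in enumerate(coeffs)) % p] = 1.0
        cols.append(M.flatten(order='F'))   # concatenate the columns of M(a)
    return np.column_stack(cols) / np.sqrt(p)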
A Typical Construction (Cont’d)
Theorem: The matrix A ∈ R^{p² × p^{r+1}} has coherence
µ_k(A) ≤ kr/p.
Note that p is the square root of the number of rows of A, so this construction is within a factor of r of an "optimally coherent" matrix.
The matrix A is very sparse: only a fraction 1/p of its elements are nonzero.
The nonzero elements all equal 1/√p, so the matrix is "multiplication-free."
Examples
Given n, k, δ, we need to choose a prime number p such that
(2k − 1)r/p ≤ δ and n ≤ p^{r+1}, i.e., p ≥ max{(2k − 1)r/δ, n^{1/(r+1)}}.
We can choose r as we wish. Then m = p².
Example 1: Let n = 10,000, k = 5, δ = 0.4. Choosing r = 3 gives p ≥ 67.5 ⟹ p = 71 and m = p² = 5,041. Choosing r = 2 gives p ≥ 45 ⟹ p = 47 and m = p² = 2,209.
Example 2: Let n = 10⁶, k = 10, δ = 0.4. Choosing r = 3 gives p ≥ 142.5 ⟹ p = 149 and m = p² = 22,201. Choosing r = 2 gives p ≥ 100 ⟹ p = 101 and m = p² = 10,201.
In general we get m ≈ n^{2/3} unless k is very large.
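The parameter choice reduces to finding the smallest prime above a bound; a sketch in Python that reproduces the two examples:

import math

def next_prime_at_least(x):
    """Smallest prime p with p >= x (trial division; fine for small p)."""
    p = max(2, math.ceil(x))
    while any(p % d == 0 for d in range(2, math.isqrt(p) + 1)):
        p += 1
    return p

def devore_parameters(n, k, delta, r):
    """Prime p and measurement count m = p^2 satisfying
    (2k - 1) r / p <= delta and n <= p^(r + 1)."""
    p = next_prime_at_least(max((2 * k - 1) * r / delta, n ** (1.0 / (r + 1))))
    return p, p * p

print(devore_parameters(10_000, 5, 0.4, 2))    # (47, 2209)
print(devore_parameters(10 ** 6, 10, 0.4, 3))  # (149, 22201)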
Probabilistic Approach
Let X′ be a random variable with zero mean and unit standard deviation. Define X = X′/√m, and define Φ ∈ R^{m×n} to consist of nm independent realizations of X. Then it is easy to see that
E[‖Φu‖₂²] = ‖u‖₂², ∀u ∈ R^n.
If the r.v. ‖Φu‖₂² is also "highly concentrated" around its expected value, then "with high probability" the matrix Φ satisfies the RIP.
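A quick Monte Carlo illustration of this concentration, as a sketch assuming NumPy; the sizes are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 400, 100, 2000

u = rng.standard_normal(n)
u /= np.linalg.norm(u)                    # unit vector, so E ||Phi u||^2 = 1

vals = [np.linalg.norm((rng.standard_normal((m, n)) / np.sqrt(m)) @ u) ** 2
        for _ in range(trials)]
print(np.mean(vals), np.std(vals))        # mean close to 1, small spread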
Sub-Gaussian Random Variables
A r.v. X is said to be sub-Gaussian if there exist constants α, β such that
Pr{|X| > t} ≤ α exp(−βt²), ∀t > 0.
A normal r.v. satisfies the above with α = 2, β = 0.5.
For later use, define
c = β²/(4α + 2β).
Main Result
Set-up: Given a constant δ, choose any ε < δ (preferably very close to δ), and choose any r such that
r ≤ 1 − √[(1 + ε)/(1 + δ)].
Theorem: Suppose X′ is sub-Gaussian with constants α, β, and define c = β²/(4α + 2β). Define Φ as nm independent realizations of X′/√m. Then Φ satisfies the RIP of order k with constant δ with probability at least 1 − ζ, where
ζ = 2 (en/k)^k (3/r)^k exp(−mcε²).
Implementation of the Approach
Suppose k, δ are given, and choose ε, r as above. To ensure that Φ satisfies the RIP of order k with constant δ with probability ≥ 1 − ζ, it suffices to take
m ≥ (1/(cε²)) [log(2/ζ) + k log(en/k) + k log(3/r)]
samples of X. Note that
m ≈ (k/(cδ²)) log n
plus other terms.
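As a sketch, assuming the reconstruction of the bound above; the slide's own numerical examples may rest on a slightly different choice of ε and r, so the outputs here are indicative only:

import math

def measurements_needed(n, k, delta, zeta, c, eps_frac=0.9):
    """Evaluate m >= (1 / (c * eps^2)) * [log(2 / zeta) + k log(e n / k)
    + k log(3 / r)], with eps = eps_frac * delta (any eps < delta works)
    and r = 1 - sqrt((1 + eps) / (1 + delta))."""
    eps = eps_frac * delta
    r = 1.0 - math.sqrt((1.0 + eps) / (1.0 + delta))
    m = (math.log(2.0 / zeta) + k * math.log(math.e * n / k)
         + k * math.log(3.0 / r)) / (c * eps ** 2)
    return math.ceil(m)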
Examples
Choose X to be a Gaussian, so that α = 2, β = 0.5, and c = β²/(4α + 2β) = 1/44. Let ζ = 10⁻⁶.
Example 1: Let n = 10,000, k = 5, δ = 0.4. Then m = 21,683 > n. Compare with m = 2,209 for the deterministic approach.
Example 2: Let n = 10⁶, k = 10, δ = 0.4. Then m = 49,863. Compare with m = 10,201 for the deterministic approach.
Some General Considerations
In the deterministic approach, m ≈ n^{2/(r+1)} for some integer r, whereas in the probabilistic approach, m = O(k log n). But the O symbol hides a huge constant!
In both of the examples above, the probabilistic approach requires more measurements than the deterministic approach. Moreover, the deterministic approach leads to a highly sparse matrix, whereas the probabilistic approach leads to a matrix in which, with probability one, every element is nonzero.
Open Problem: Find a deterministic approach that leads to m = O(k log n) measurements.
Time vs. Frequency Domain
Whether a signal is "sparse" depends on the basis used. For example, a vector x of time samples of a signal may not be sparse, but its discrete cosine transform (or discrete Fourier transform) may be sparse.
The use of the DFT requires measurement matrices with complex elements, but the theory works just the same.
In particular, suppose M is the n × n discrete cosine transform matrix, which is real and orthogonal, or the n × n discrete Fourier transform matrix, which is complex and unitary. Transposes of "randomly selected" rows of these matrices satisfy the RIP.
Signal Reconstruction from Random Samples
This example is due to Cleve Moler (2010). Suppose
x(t) = sin(1394πt) + sin(3296πt),
and we sample at 40 kHz for 0.2 seconds (8,000 samples).
The three frequencies involved (697 Hz, 1648 Hz, and the 40 kHz sampling rate) are not commensurate.
Discrete Cosine Transformation
This signal is not sparse in the time domain! However, it is sparse in the frequency domain. For this purpose we employ the discrete cosine transform (dct).
Given a vector x ∈ R^N, its discrete cosine transform (dct) y ∈ R^N is given by
y(k) = w(k) Σ_{n=1}^{N} x(n) cos[π(2n − 1)(k − 1)/(2N)], k ∈ [N],
where the weight vector w is defined by
w(k) = √(1/N) if k = 1, and w(k) = √(2/N) for k = 2, …, N.
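This is exactly the orthonormal DCT-II, so the formula can be checked against a library implementation; a sketch, assuming SciPy:

import numpy as np
from scipy.fft import dct

N = 16
rng = np.random.default_rng(0)
x = rng.standard_normal(N)

n = np.arange(1, N + 1)
w = np.where(np.arange(1, N + 1) == 1, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
y = w * np.array([x @ np.cos(np.pi * (2 * n - 1) * (k - 1) / (2 * N))
                  for k in range(1, N + 1)])

print(np.allclose(y, dct(x, norm='ortho')))   # True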
Inverse Discrete Cosine Transformation
Given a vector y ∈ R^N, its inverse discrete cosine transform (idct) is given by
x(n) = Σ_{k=1}^{N} w(k) y(k) cos[π(2n − 1)(k − 1)/(2N)], n ∈ [N],
where the weight vector w is as before.
Both the dct and the idct correspond to multiplying the vector by an orthogonal matrix.
Discrete Cosine Transform of Signal
The dct of x(t) = sin(1394πt) + sin(3296πt) is shown below. It is highly concentrated around two frequencies, as expected.
[Figure: "Spectra of Original and Approximated Signals" – dct coefficient vs. frequency, sharply concentrated at two frequencies.]
Reconstruction of Signal from Random Samples
There are 8,000 samples of the signal x(·). Now we will choose 500 samples at random from these 8,000 samples, and use those to reconstruct the signal.
Note: When we generate 500 integers at random between 1 and 8,000, only 485 distinct integers result.
The next slide shows some of the samples.
Sampling of Signal at Random Locations
The figure below shows the actual samples and some of the randomly chosen samples.
[Figure: "Original Signal and Random Samples" – signal value vs. time in seconds.]
Signal Reconstruction via ℓ₁-Norm Minimization
Let n = 8000, k = 485. Let S denote the index set of the randomly selected samples; note that |S| = 485 = k. Define D to be the dct of the n × n identity matrix; thus the j-th column of D is the dct of the j-th elementary vector. Define A ∈ R^{k×n} to consist of the rows of D corresponding to the randomly selected samples; that is, A equals the projection of D onto the rows in S. Finally, let b ∈ R^k denote the randomly selected samples.
We will reconstruct the original signal x ∈ R^n by setting
x̂ = arg min_z ‖z‖₁ s.t. Az = b.
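A reduced-size sketch of this experiment in Python, assuming SciPy. Here the matrix applied to the coefficient vector is the idct of the identity, so that Az gives time samples of the candidate signal; since the dct matrix is orthogonal, this is simply the transpose of the slide's D. Sizes are shrunk from 8,000/500 to keep the linear program small:

import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, n_kept, fs = 1000, 120, 40000.0

t = np.arange(n) / fs
x = np.sin(1394 * np.pi * t) + np.sin(3296 * np.pi * t)

S = np.sort(rng.choice(n, size=n_kept, replace=False))
b = x[S]                                 # the randomly selected samples

# Columns of B are inverse-dct basis vectors: B @ z is the time-domain
# signal whose dct coefficient vector is z. Keep only the sampled rows.
B = idct(np.eye(n), axis=0, norm='ortho')
A = B[S, :]

# Basis pursuit: min ||z||_1 s.t. A z = b, with z = u - v, u, v >= 0.
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None))
z_hat = res.x[:n] - res.x[n:]

x_hat = B @ z_hat                        # reconstructed time-domain signal
print(np.max(np.abs(x_hat - x)))         # small when recovery succeeds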
Reconstructed Signal in the Time Domain – 1
The figure below shows the original and reconstructed signal for small values of t.
[Figure: "Reconstruction of a Signal with 8000 Samples Using 500 Frequencies" – original and reconstructed signals vs. time, near the start of the record.]
Reconstructed Signal in the Time Domain – 2
The figure below shows the original and reconstructed signal for slightly larger values of t. The reconstruction is indistinguishable from the original signal.
[Figure: "Reconstruction of a Signal with 8000 Samples Using 500 Frequencies" – original and reconstructed signals vs. time, over a later interval.]
Reconstructed Signal in the Frequency Domain
The figure below shows the dcts of the original and reconstructed signals.
[Figure: "Spectra of Original and Approximated Signals" – dct coefficients of the original and reconstructed signals vs. frequency.]
Group Sparsity
Partition the index set {1, …, n} into g disjoint groups G₁, …, G_g.
A vector x ∈ R^n is "group k-sparse" if its support contains elements from very few groups, and |supp(x)| ≤ k.
"Group" analogs of all the previous results exist: e.g., the group RIP (GRIP), exact recovery of group k-sparse vectors, and approximate recovery of nearly group k-sparse vectors.
Alternatives to the ℓ₁-Norm
What is so special about the ℓ₁-norm?
It turns out: nothing!
There are infinitely many norms that permit exact recovery of sparse or group-sparse vectors.
One-Bit Compressed Sensing
In standard compressed sensing, the measurement vector is y = Ax, i.e., y_i = ⟨a_i, x⟩ for i = 1, …, m.
What if y_i = sign(⟨a_i, x⟩) for i = 1, …, m? This is called one-bit compressed sensing.
The subject is still in its infancy. Perhaps the problem can be effectively analyzed using probably approximately correct (PAC) learning theory.
Low Rank Matrix Recovery
Suppose M ∈ R^{l×s} has low rank, say ≤ k. By randomly sampling just m ≪ ls elements of M, is it possible to recover M exactly?
The results are similar to those for vector recovery: the norm of a vector is replaced by the "nuclear norm," which is the sum of the singular values of the matrix.
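The nuclear norm itself is a one-liner; a sketch, assuming NumPy:

import numpy as np

def nuclear_norm(M):
    """Sum of the singular values of M: the matrix analog of the l_1 norm,
    minimized (subject to the sampled entries) to recover low-rank M."""
    return np.linalg.svd(M, compute_uv=False).sum()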
Questions?