Dimensionality Reduction with
Singular Value Decomposition
& Non-Negative Matrix Factorization
Media Signal Processing, Presentation 3
Presented by: Jahanzeb Farooq, Michael Osadebey
Singular Value Decomposition
Definition
- A useful tool of linear algebra.
- Used to reduce a large matrix to a significantly smaller representation; it applies to any matrix, not only square, invertible ones.
Any m x n matrix A, with m > n, can be written using a singular value decomposition
A = U S V^T
where U is an m x n matrix with orthonormal columns, S is an n x n diagonal matrix of singular values (ranked from greatest to least), and V is an orthogonal n x n matrix, so that
U^T U = V^T V = I
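A minimal NumPy sketch of this factorization; the example matrix A below is an arbitrary stand-in, and the assertions check the reconstruction and the orthogonality relations:

```python
import numpy as np

# An arbitrary example matrix with m > n (here m = 4, n = 3).
A = np.array([[1.0,  2.0,  3.0],
              [4.0,  5.0,  6.0],
              [7.0,  8.0,  9.0],
              [10.0, 11.0, 12.0]])

# Thin SVD: U is m x n, s holds the n singular values
# (sorted from greatest to least), Vt is V^T (n x n).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A = U S V^T and verify the factorization.
S = np.diag(s)
assert np.allclose(A, U @ S @ Vt)

# Orthogonality relations: U^T U = V^T V = I.
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(3))
print("singular values:", s)
```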
SVD and Image Compression
Looking at it again,
A = U S V^T
- We can say that A can be factored/represented as a combination of 3 matrices.
SVD and Image Compression
- Images are represented as matrices,
- with element values describing the intensity of the color.
- Color images are actually a composite of matrices, each representing a different color; generally red, green, and blue, as the sketch below shows.
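A brief sketch of this representation, assuming the NumPy and Pillow libraries and a placeholder filename photo.png:

```python
import numpy as np
from PIL import Image

# Load a color image ("photo.png" is a placeholder filename)
# as an H x W x 3 array of intensities in 0..255.
img = np.asarray(Image.open("photo.png").convert("RGB"))

# A color image is a composite of one matrix per color channel.
red, green, blue = img[:, :, 0], img[:, :, 1], img[:, :, 2]

# A grayscale image is a single H x W matrix.
gray = np.asarray(Image.open("photo.png").convert("L"))
print(gray.shape, gray.dtype)  # e.g. (512, 512) uint8
```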
SVD and Image Compression
A = U S V^T
- Here S is a diagonal matrix of singular values ranked from greatest to least. These determine the 'rank' of the original matrix.
- Each singular value in S corresponds to a two-dimensional partial image built from a column of U and a row of V^T. The reconstructed image is the sum of each partial image scaled by the corresponding singular value in S, as the sketch below shows.
- The concept of SVD image compression is to use a smaller number of ranks to approximate the original matrix.
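A small sketch of this 'sum of partial images' idea, using a random matrix as a stand-in for an image:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 6))  # stand-in for an image matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Each term s[i] * outer(U[:, i], Vt[i, :]) is one rank-one
# "partial image"; summing all of them recovers A exactly.
partials = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s))]
assert np.allclose(A, sum(partials))

# Keeping only the first k terms gives a rank-k approximation.
k = 2
A_k = sum(partials[:k])
print("rank-2 error:", np.linalg.norm(A - A_k))
```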
SVD and Image Compression
Smallest Singular Values Can Be Ignored
- The singular values in S are representative of the clarity of the image.
- When some of these values are discarded, the image loses clarity.
- The smallest singular values and their corresponding partial images do not contribute significantly to the final image.
- By ignoring the smallest singular values, the original image can be accurately reconstructed from much smaller matrices than the original, as the sketch below illustrates.
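A short sketch, again on a random stand-in matrix, showing that the reconstruction error is exactly the norm of the discarded singular values, so dropping the smallest ones costs little:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((64, 64))
U, s, Vt = np.linalg.svd(A)

for k in (4, 16, 32):
    # Rank-k reconstruction from the k largest singular values.
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    err = np.linalg.norm(A - A_k)  # Frobenius norm
    # Eckart-Young: the error equals the norm of the discarded
    # singular values, so small trailing values barely matter.
    assert np.isclose(err, np.linalg.norm(s[k:]))
    print(f"k={k:2d}  relative error = {err / np.linalg.norm(A):.4f}")
```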
SVD and Image Compression
[Figure: the original image alongside rank-1, rank-2, rank-4, rank-8, rank-16, and rank-32 approximations. The number of retained singular values is referred to as the 'rank'.]
SVD and Image Compression
- There are 276 singular values for this image.
- The 'sigma' in the lower-left graph represents the singular values.
SVD and Image Compression
- The image is reconstructed using 5, 10, 15, and 30 singular values.
- 30 singular values reconstruct an image very close to the original.
- 30 singular values out of 276 is an excellent compression ratio.
SVD and Image Compression
- A 512 x 512 image.
- Here 'k' represents the number of singular values, i.e. the rank.
- The first image is the actual image, with the full number of ranks.
- Before compression the required storage is m^2 = 512 x 512 = 262144 values.
- After SVD, storage is 2mk + k values. Compression therefore requires 2mk + k < m^2, i.e. k < m^2/(2m + 1) ≈ 255.75, so k = 255 is the largest rank that still saves storage, as the worked check below shows.
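A worked check of this storage arithmetic, assuming one stored number per matrix entry:

```python
m = 512
original = m * m  # 262144 values for the raw image

# Rank-k SVD stores k columns of U (m*k values), k singular
# values, and k rows of V^T (m*k values): 2*m*k + k in total.
def compressed(k):
    return 2 * m * k + k

# Break-even: 2*m*k + k < m*m  =>  k < m*m / (2*m + 1) ~ 255.75
print(m * m // (2 * m + 1))  # 255, the largest useful rank
for k in (30, 255, 256):
    print(k, compressed(k), compressed(k) < original)
```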
Non-Negative Matrix Factorization
Definition
- A computational method used for dimensionality reduction by factoring the data matrix into low-rank, sparse, and non-negative factors.
- Also known as 'Positive Matrix Factorization'.
Principle
- Let V be a non-negative matrix of dimension n x m.
- The NMF algorithm decomposes or factors the matrix into low-rank, sparse, and non-negative factors such that the original data can be approximated as
V ≈ WH
Non-Negative Matrix Factorization
Where,
W and H have non-negative elements.
- W is of dimension n x r and is called the basis matrix, because its columns contain the set of basis vectors.
- H is of dimension r x m and is called the weight matrix, because its columns contain the coefficient sequences.
- The rank r of the factorization is chosen such that (n + m)r < nm.
- The columns of H are in one-to-one correspondence with the columns of V. Thus each column of the result WH can be interpreted as a weighted sum of the basis vectors in W, the weights being the corresponding column of H.
- The additive property resulting from the non-negativity constraints of NMF yields basis vectors that represent local components of the original data. Examples: doors for houses, eyes for faces, curves for letters, and notes in a chord. A code sketch of the factorization follows below.
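A minimal NMF sketch using scikit-learn's NMF class; the random non-negative matrix V and the rank r = 4 are arbitrary stand-ins:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((20, 15))  # non-negative n x m data matrix

r = 4  # chosen so that (n + m) * r < n * m, here 140 < 300
model = NMF(n_components=r, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)  # n x r basis matrix, W >= 0
H = model.components_       # r x m weight matrix, H >= 0

# Each column of V is approximated by a non-negative
# weighted sum of the r basis vectors (columns of W).
assert (W >= 0).all() and (H >= 0).all()
print("reconstruction error:", np.linalg.norm(V - W @ H))
```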
Non-Negative Matrix Factorization
Applications
- Digital image analysis, word counts in documents, and stock prices, since these are always non-negative numbers.
- Genetics, for the analysis of DNA data.
- Medicine, in formulating new drugs.
- Chemical spectral analysis.
NMF versus VQ & PCA
- In NMF the columns of the basis matrix can be visualised in the same manner as the columns of the original data matrix. In PCA and VQ the factors W and H can be positive or negative even if the input matrix is all positive; the basis vectors may contain negative components, which prevents a similar visualization (see the sketch below).
- The NMF algorithm results in a parts-based representation of the original data, whereas VQ and PCA result in holistic representations.
- The r representations stored in the columns of the basis matrix W can be summed to approximately reconstruct the original data. In PCA and VQ this is not possible.
- In the NMF algorithm there is optimal use of error estimates: the solution is computed by minimizing the least-squares fit weighted with the error estimates.
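A quick check of the sign claim, assuming scikit-learn: PCA components of an all-positive matrix typically contain negative entries, while NMF factors never do:

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(0)
X = rng.random((50, 10))  # all-positive input data

pca = PCA(n_components=3).fit(X)
nmf = NMF(n_components=3, init="random", random_state=0,
          max_iter=500).fit(X)

# PCA basis vectors mix signs even though X is non-negative ...
print("PCA has negatives:", (pca.components_ < 0).any())  # True
# ... whereas NMF's non-negativity constraint forbids them.
print("NMF has negatives:", (nmf.components_ < 0).any())  # False
```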