Face Recognition using Tensor Analysis
Presented by Prahlad R Enuganti
Face Recognition

Why is it necessary?
- Human Computer Interaction
- Authentication
- Surveillance

Problems include changes in:
- Illumination
- Expression
- Pose
- Aging
Existing Techniques

Resistance to variations in:

Technique                                        Pose        Illumination   Expression
EigenFaces [Turk et al., 1991]                   Average     Poor           Average
Support Vector Machines [Guo et al., 2001]       Good        Good           Good
Multiresolution analysis [Ekenel et al., 2005]   Good        Good           Very Good
TensorFaces [Vasilescu et al., 2004]             Very Good   Very Good      Very Good
Tensor Algebra
[Vasilescu et al., 2002]

- Tensors are a higher-order generalization of vectors and matrices. An Nth-order tensor is written A ∈ R^(I1 × I2 × … × IN), with each element denoted a_{i1 i2 … iN}.
- The mode-n vectors of a tensor are obtained by varying index i_n while keeping the other indices fixed. Stacking the mode-n vectors as columns gives the mode-n flattening of the tensor A, denoted A(n).
- Example of flattening a 3rd-order tensor (sketched in code below).
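A minimal NumPy sketch of mode-n flattening for a small 3rd-order tensor. The unfold helper and the tensor sizes are illustrative assumptions (the column ordering of the unfolding depends on convention; this sketch simply brings mode n to the front and reshapes).

import numpy as np

# Hypothetical 3rd-order tensor of size I1 x I2 x I3.
I1, I2, I3 = 2, 3, 4
A = np.arange(I1 * I2 * I3).reshape(I1, I2, I3)

def unfold(tensor, mode):
    # Mode-n flattening A(n): bring mode `mode` to the front, then reshape so
    # that rows are indexed by that mode and columns run over the remaining
    # modes (each column is a mode-n vector of the tensor).
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

print(unfold(A, 0).shape)  # (2, 12) -> A(1)
print(unfold(A, 1).shape)  # (3, 8)  -> A(2)
print(unfold(A, 2).shape)  # (4, 6)  -> A(3)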
Tensor Decomposition

- In the 2-D case, a matrix D can be decomposed using the SVD:
  D = U1 Σ U2^T
  where Σ is the diagonal matrix of singular values and U1 and U2 are the orthogonal column-space and row-space matrices, respectively.
- In terms of mode-n products, the decomposition can be rewritten as
  D = Σ ×1 U1 ×2 U2
  (see the sketch after this slide).
- For a tensor of order N, the N-mode SVD is expressed as
  D = Z ×1 U1 ×2 U2 … ×N UN
  where Z is known as the core tensor and is analogous to the diagonal singular value matrix in the 2-D SVD.
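A small NumPy sketch of the mode-n product, checking that the ordinary 2-D SVD D = U1 Σ U2^T can indeed be rewritten as Σ ×1 U1 ×2 U2. The mode_multiply helper and the matrix size are assumptions made for this example.

import numpy as np

def mode_multiply(tensor, matrix, mode):
    # Mode-n product T x_n M: multiply the mode-n unfolding of T by M on the
    # left, then fold the result back into a tensor.
    moved = np.moveaxis(tensor, mode, 0)
    shape = list(moved.shape)
    out = matrix @ moved.reshape(shape[0], -1)
    shape[0] = matrix.shape[0]
    return np.moveaxis(out.reshape(shape), 0, mode)

# 2-D check: D = U1 Sigma U2^T, rewritten as Sigma x1 U1 x2 U2.
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 3))
U1, s, U2t = np.linalg.svd(D, full_matrices=False)
Sigma = np.diag(s)
D_rebuilt = mode_multiply(mode_multiply(Sigma, U1, 0), U2t.T, 1)
print(np.allclose(D, D_rebuilt))  # True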
N-mode SVD Algorithm

- For n = 1, 2, …, N, compute the matrix Un by taking the SVD of the flattened matrix D(n) and setting Un to be the left (orthogonal) matrix of that SVD.
- The core tensor is then computed as
  Z = D ×1 U1^T ×2 U2^T … ×N UN^T
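A minimal NumPy sketch of this algorithm (also known as the higher-order SVD), reusing the hypothetical unfold and mode_multiply helpers from the earlier sketches; the sanity check reconstructs a random 3rd-order tensor from its core tensor and mode matrices.

import numpy as np

def unfold(tensor, mode):
    # Mode-n flattening D(n).
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_multiply(tensor, matrix, mode):
    # Mode-n product: multiply the mode-n unfolding by `matrix`, then fold back.
    moved = np.moveaxis(tensor, mode, 0)
    shape = list(moved.shape)
    out = matrix @ moved.reshape(shape[0], -1)
    shape[0] = matrix.shape[0]
    return np.moveaxis(out.reshape(shape), 0, mode)

def n_mode_svd(D):
    # Step 1: for each mode n, Un = left singular vectors of D(n).
    Us = [np.linalg.svd(unfold(D, n), full_matrices=False)[0] for n in range(D.ndim)]
    # Step 2: core tensor Z = D x1 U1^T x2 U2^T ... xN UN^T.
    Z = D
    for n, Un in enumerate(Us):
        Z = mode_multiply(Z, Un.T, n)
    return Z, Us

# Sanity check: D is recovered as Z x1 U1 x2 U2 x3 U3.
rng = np.random.default_rng(1)
D = rng.standard_normal((4, 5, 3))
Z, Us = n_mode_svd(D)
D_rebuilt = Z
for n, Un in enumerate(Us):
    D_rebuilt = mode_multiply(D_rebuilt, Un, n)
print(np.allclose(D, D_rebuilt))  # True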
TensorFaces

- Our data here consists of 5 variables: people, views (pose), illumination, expression and pixels.
- We therefore perform the N-mode decomposition of the 5th-order data tensor (sketched below) and obtain
  D = Z ×1 Upeople ×2 Uviews ×3 Uillum ×4 Uexpr ×5 Upixels
- The main advantage of tensor analysis is that it maps all images of a person, regardless of the other variables, to the same coefficient vector, giving zero intra-class scatter.
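A sketch of how such a data tensor might be organized and decomposed, using random stand-in data. The sizes (10 people, 5 views, 4 illuminations, 3 expressions, 16x16 = 256 pixels) and variable names are illustrative assumptions, not the dataset used in the original work; the helpers repeat the earlier sketches.

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    moved = np.moveaxis(T, mode, 0)
    shape = list(moved.shape)
    out = M @ moved.reshape(shape[0], -1)
    shape[0] = M.shape[0]
    return np.moveaxis(out.reshape(shape), 0, mode)

# Hypothetical sizes and random stand-in data.
n_people, n_views, n_illum, n_expr, n_pixels = 10, 5, 4, 3, 256
rng = np.random.default_rng(2)
D = rng.standard_normal((n_people, n_views, n_illum, n_expr, n_pixels))

# 5-mode SVD: Un = left singular vectors of the mode-n unfolding D(n),
# core tensor Z = D x1 U1^T ... x5 U5^T.
Us = [np.linalg.svd(unfold(D, n), full_matrices=False)[0] for n in range(D.ndim)]
Z = D
for n, Un in enumerate(Us):
    Z = mode_multiply(Z, Un.T, n)

# Row p of U_people is the coefficient vector for person p; it is the same
# for all of that person's images across views, illumination and expression.
U_people = Us[0]
print(U_people.shape)  # (10, 10)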
ISOMAP (Isometric Feature Mapping)
[Tenenbaum et al.]

- Finds a meaningful low-dimensional manifold underlying high-dimensional data by preserving the geodesic distances between points.
- Unlike PCA or MDS, ISOMAP is capable of discovering even the nonlinear degrees of freedom.
- It is guaranteed to converge asymptotically to the true structure of the manifold.
ISOMAP: How does it work?

- Builds a weighted neighborhood graph by connecting each point to its neighbors, using either the ε-neighborhood rule or the k-nearest-neighbor rule.
- Estimates the geodesic distances between all pairs of points on the low-dimensional manifold by computing shortest-path distances in the graph.
- Applies classical MDS to construct an embedding in the lower-dimensional space that best preserves the manifold's estimated intrinsic geometry (a minimal sketch of these three steps follows).
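A minimal Python sketch of these three steps using NumPy and SciPy. The isomap function, the neighborhood size k, and the toy spiral dataset are illustrative assumptions, not the authors' implementation; in practice a library routine such as scikit-learn's Isomap would normally be used.

import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def isomap(X, k=7, n_components=2):
    n = X.shape[0]

    # Step 1: weighted k-nearest-neighbor graph (np.inf marks "no edge").
    dists = cdist(X, X)
    graph = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k + 1]   # skip the point itself
        graph[i, nbrs] = dists[i, nbrs]
        graph[nbrs, i] = dists[nbrs, i]        # keep the graph symmetric

    # Step 2: geodesic distances = shortest-path distances in the graph.
    geo = shortest_path(graph, method="D", directed=False)

    # Step 3: classical MDS on the geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (geo ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

# Toy usage: a noisy spiral embedded in 3-D, mapped down to 2 dimensions.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 3.0 * np.pi, 300)
X = np.column_stack([t * np.cos(t), t * np.sin(t), 0.1 * rng.standard_normal(300)])
Y = isomap(X, k=10, n_components=2)
print(Y.shape)  # (300, 2)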