3D Geometry for
Computer Graphics
Class 2
The plan today

• Basic linear algebra review
• Eigenvalues and eigenvectors

Why??

• Manipulations of 3D objects require (linear) transformations
• Need to understand and analyze linear stuff
Motivation – Shape Matching

There are tagged feature points in both sets that are matched by the user.
What is the best transformation that aligns the unicorn with the lion?

Regard the shapes as sets of points and try to “match” these sets using a linear transformation.
The above is not a good alignment…
To find the best rotation we need to know SVD…
Principal Component Analysis

[Figure: a 2D point set with the original axes x, y and the fitted axes x’, y’]

Given a set of points, find the best line that approximates it.
When we learn PCA (Principal Component Analysis), we’ll know how to find these axes – the ones that minimize the sum of squared distances.
PCA and SVD

• PCA and SVD are important tools not only in graphics but also in statistics, computer vision and more.
• SVD: A = UΣVᵀ, where Σ is diagonal and contains the singular values of A.
• To learn about them, we first need to get familiar with eigenvalue decomposition.
• So, today we’ll start with a reminder of linear algebra basics.
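Not part of the original slides – just an illustrative NumPy sketch of the factorization A = UΣVᵀ on an arbitrary example matrix. np.linalg.svd returns the singular values as a 1-D array.

```python
import numpy as np

# A small example matrix (chosen arbitrarily for illustration).
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Reduced SVD: A = U @ diag(S) @ Vt, with orthonormal columns in U, rows in Vt.
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A from the factors to confirm the decomposition.
A_rebuilt = U @ np.diag(S) @ Vt
print("singular values:", S)
print("reconstruction error:", np.linalg.norm(A - A_rebuilt))
```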
Vector space

• Informal definition:
  • V is a non-empty set of vectors
  • v, w ∈ V ⇒ v + w ∈ V (closed under addition)
  • v ∈ V, α is a scalar ⇒ αv ∈ V (closed under multiplication by scalar)
• The formal definition includes axioms about associativity and distributivity of the + and · operators.
• 0 ∈ V always!
Subspace – example

• Let l be a 2D line through the origin
• L = {p – O | p ∈ l} is a linear subspace of R²

[Figure: a line l through the origin O in the xy-plane]

Subspace – example

• Let Π be a plane through the origin in 3D
• V = {p – O | p ∈ Π} is a linear subspace of R³

[Figure: a plane Π through the origin O in xyz-space]
Linear independence

• The vectors {v1, v2, …, vk} are a linearly independent set if:
  α1v1 + α2v2 + … + αkvk = 0 ⇒ αi = 0 ∀i
• It means that none of the vectors can be obtained as a linear combination of the others.

Linear independence – example

• Parallel vectors are always dependent:
  v = 2.4w ⇒ v + (−2.4)w = 0
• Orthogonal (non-zero) vectors are always independent.
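As a small numerical aside (my own example values, not from the slides): one way to test linear independence is to stack the vectors as columns and check the matrix rank.

```python
import numpy as np

v = np.array([1.0, 2.0, 0.0])
w = np.array([2.4, 4.8, 0.0])   # parallel to v (w = 2.4 * v)
u = np.array([0.0, 0.0, 1.0])   # orthogonal to both

# Columns are independent iff the rank equals the number of columns.
print(np.linalg.matrix_rank(np.column_stack([v, w])))   # 1 -> dependent
print(np.linalg.matrix_rank(np.column_stack([v, u])))   # 2 -> independent
```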
Basis of V

• {v1, v2, …, vn} are linearly independent
• {v1, v2, …, vn} span the whole vector space V:
  V = {α1v1 + α2v2 + … + αnvn | αi is scalar}
• Any vector in V is a unique linear combination of the basis.
• The number of basis vectors is called the dimension of V.

Basis – example

• The standard basis of R³ – three unit orthogonal vectors x, y, z (sometimes called i, j, k or e1, e2, e3)

[Figure: the x, y, z coordinate axes]
Basis – another example

• Grayscale N×M images:
  • Each pixel has a value between 0 (black) and 1 (white)
  • The image can be interpreted as a vector in R^(N·M)

The “standard” basis (4×4)

[Figure: the standard basis images of 4×4 grayscale images]

Linear combinations of the basis

[Figure: basis images combined with coefficients 1, 2/3 and 1/3]

There are also other bases!

• The cosine basis – used for JPEG encoding
Matrix representation

• Let {v1, v2, …, vn} be a basis of V
• Every v ∈ V has a unique representation
  v = α1v1 + α2v2 + … + αnvn
• Denote v by the column vector of its coefficients:
  v = (α1, …, αn)ᵀ
• The basis vectors are therefore denoted:
  v1 = (1, 0, …, 0)ᵀ,  v2 = (0, 1, …, 0)ᵀ,  …,  vn = (0, 0, …, 1)ᵀ
Linear operators

• A: V → W is called a linear operator if:
  • A(v + w) = A(v) + A(w)
  • A(αv) = α A(v)
• In particular, A(0) = 0
• Linear operators we know:
  • Scaling
  • Rotation, reflection
  • Translation is not linear – it moves the origin
Linear operators – illustration

Rotation is a linear operator:

[Figure: vectors v, w and v + w, together with their rotated images R(v), R(w) and R(v + w); the rotated parallelogram shows that the two constructions coincide]

R(v + w) = R(v) + R(w)
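A quick numerical illustration of the same fact (my own angle and vectors, not from the slides): applying a 2D rotation matrix to v + w gives the same result as rotating v and w separately and adding.

```python
import numpy as np

theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
w = np.array([0.5, 2.0])

# Linearity: R(v + w) == R(v) + R(w)  and  R(a*v) == a*R(v)
print(np.allclose(R @ (v + w), R @ v + R @ w))    # True
print(np.allclose(R @ (2.5 * v), 2.5 * (R @ v)))  # True
```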
Matrix representation of linear operators

• Look at A(v1), …, A(vn), where {v1, v2, …, vn} is a basis.
• For all other vectors: v = α1v1 + α2v2 + … + αnvn
  A(v) = α1A(v1) + α2A(v2) + … + αnA(vn)
• So, knowing what A does to the basis is enough.
• The matrix representing A is:
  M_A = [ A(v1) | A(v2) | … | A(vn) ]   (the images of the basis vectors are its columns)
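A small sketch of this idea (my own operator, not from the slides): the columns of the matrix representing a linear operator are the images of the standard basis vectors, so multiplying by e1, e2, … reads those columns back.

```python
import numpy as np

theta = np.deg2rad(90.0)

def A(v):
    """An example linear operator on R^2 -- rotation by 90 degrees."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ v

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# The columns of M_A are the images of the basis vectors.
M_A = np.column_stack([A(e1), A(e2)])
print(M_A)              # the 90-degree rotation matrix (up to rounding)
print(M_A @ e1, A(e1))  # multiplying by e1 reads back column 1
```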
Matrix representation of linear operators

[ A(v1) | A(v2) | … | A(vn) ] (1, 0, …, 0)ᵀ = A(v1)

[ A(v1) | A(v2) | … | A(vn) ] (0, 1, …, 0)ᵀ = A(v2)
Matrix operations

• Addition, subtraction, scalar multiplication – simple…
• Multiplication of a matrix A by a column vector b:
  (Ab)i = Σk aik bk = ⟨rowi(A), b⟩
Matrix by vector multiplication

• Sometimes a better way to look at it:
  Ab is a linear combination of A’s columns!
  [ a1 | a2 | … | an ] (b1, …, bn)ᵀ = b1·a1 + b2·a2 + … + bn·an
Matrix operations

• Transposition: the rows become the columns
  (Aᵀ)ij = aji
• (AB)ᵀ = BᵀAᵀ
Matrix operations

• The inner product can be written in matrix form:
  ⟨v, w⟩ = vᵀw = wᵀv
Matrix properties

• A matrix A (n×n) is non-singular if ∃B such that AB = BA = I
• B = A⁻¹ is called the inverse of A
• A is non-singular ⇔ det A ≠ 0
• If A is non-singular, then the equation Ax = b has a unique solution for each b.
• A is non-singular ⇔ the rows of A are linearly independent (and so are the columns).
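A minimal check with made-up numbers (not from the slides): for a non-singular A, det A ≠ 0 and Ax = b has a unique solution.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

print(np.linalg.det(A))       # about 5 (non-zero), so A is non-singular
x = np.linalg.solve(A, b)     # the unique solution of A x = b
print(np.allclose(A @ x, b))  # True
```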
Orthogonal matrices

• A matrix A (n×n) is orthogonal if A⁻¹ = Aᵀ
• Follows: AAᵀ = AᵀA = I
• The rows of A are orthonormal vectors (and so are the columns)!
• Proof: the (i, j) entry of AAᵀ = I is ⟨vi, vj⟩, where vi denotes the i-th row of A, so viᵀvj = δij:
  ⟨vi, vi⟩ = 1 ⇒ ||vi|| = 1;  ⟨vi, vj⟩ = 0 for i ≠ j
Orthogonal operators

• A is an orthogonal matrix ⇒ A represents a linear operator that preserves the inner product (i.e., preserves lengths and angles):
  ⟨Av, Aw⟩ = (Av)ᵀ(Aw) = vᵀAᵀAw = vᵀIw = vᵀw = ⟨v, w⟩
• Therefore, ||Av|| = ||v|| and ∠(Av, Aw) = ∠(v, w)
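To see this numerically, here is a hedged sketch (my own construction, not the slides'): an orthogonal matrix obtained from a QR factorization preserves inner products and norms.

```python
import numpy as np

rng = np.random.default_rng(0)
# The QR factorization of a random matrix gives an orthogonal Q.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

v = rng.standard_normal(3)
w = rng.standard_normal(3)

print(np.allclose(Q.T @ Q, np.eye(3)))                        # Q^T Q = I
print(np.allclose((Q @ v) @ (Q @ w), v @ w))                  # inner product preserved
print(np.allclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # length preserved
```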
Orthogonal operators – example

• Rotation by θ around the z-axis in R³:

  R_z(θ) = [ cos θ   −sin θ   0
             sin θ    cos θ   0
               0        0     1 ]

• In fact, any orthogonal 3×3 matrix represents a rotation around some axis and/or a reflection:
  • det A = +1: rotation only
  • det A = −1: with reflection
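A quick check of the rotation example (a sketch with an angle of my choosing): R_z(θ) is orthogonal and has determinant +1.

```python
import numpy as np

theta = np.deg2rad(40.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

print(np.allclose(Rz.T @ Rz, np.eye(3)))  # orthogonal: R^T R = I
print(np.linalg.det(Rz))                  # +1 -> rotation only (no reflection)
```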
Eigenvectors and eigenvalues

• Let A be a square n×n matrix
• v is an eigenvector of A if:
  • Av = λv (λ is a scalar)
  • v ≠ 0
• The scalar λ is called an eigenvalue
• Av = λv ⇒ A(αv) = λ(αv) ⇒ αv is also an eigenvector
• Av = λv, Aw = λw ⇒ A(v + w) = λ(v + w)
• Therefore, the eigenvectors of the same λ form a linear subspace.
Eigenvectors and eigenvalues

• An eigenvector spans an axis (a subspace of dimension 1) that is invariant to A – it remains the same under the transformation.
• Example – the axis of rotation:

[Figure: a rotating object; the rotation axis through O is an eigenvector of the rotation transformation]
Finding eigenvalues

• For which λ is there a non-zero solution to Ax = λx?
• Ax = λx ⇒ Ax – λx = 0 ⇒ Ax – λIx = 0 ⇒ (A – λI)x = 0
• So, a non-trivial solution exists ⇔ det(A – λI) = 0
• χA(λ) = det(A – λI) is a polynomial of degree n.
• It is called the characteristic polynomial of A.
• The roots of χA are the eigenvalues of A.
• Therefore, eigenvalues always exist – at least complex ones.
• If n is odd, there is at least one real eigenvalue.
Example of computing χA

  A = [  1   2   0
        −1   0  −1
         1   3   4 ]

  χA(λ) = det [ 1−λ    2     0
                 −1   −λ    −1
                  1    3   4−λ ]

        = (1−λ)(−λ(4−λ) + 3) + 2(3−λ)
        = (1−λ)²(3−λ) + 2(3−λ)
        = (3−λ)(λ² − 2λ + 3)

The quadratic factor cannot be factorized over R.
Over C: λ² − 2λ + 3 = (λ − 1 − i√2)(λ − 1 + i√2)
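A numerical cross-check of the example above (a NumPy sketch using the matrix shown there): the eigenvalues are the real root 3 and the complex pair 1 ± i√2.

```python
import numpy as np

A = np.array([[ 1.0, 2.0,  0.0],
              [-1.0, 0.0, -1.0],
              [ 1.0, 3.0,  4.0]])

# Eigenvalues = roots of the characteristic polynomial det(A - lambda*I).
print(np.linalg.eigvals(A))  # approximately 3, 1+1.414j, 1-1.414j (in some order)
```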
Computing eigenvectors

• Solve the equation (A – λI)x = 0 for each eigenvalue λ.
• We’ll get a subspace of solutions.
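In practice one rarely solves (A – λI)x = 0 by hand; a library routine returns eigenvalues and eigenvectors together. A minimal sketch with NumPy (my own matrix, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
lams, V = np.linalg.eig(A)
for lam, v in zip(lams, V.T):
    print(lam, v, np.allclose(A @ v, lam * v))  # checks A v = lambda v
```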
Spectra and diagonalization

• The set of all the eigenvalues of A is called the spectrum of A.
• A is diagonalizable if A has n independent eigenvectors. Then:
  Av1 = λ1v1,  Av2 = λ2v2,  …,  Avn = λnvn
  A [ v1 | v2 | … | vn ] = [ v1 | v2 | … | vn ] diag(λ1, λ2, …, λn)
  i.e. AV = VD
Spectra and diagonalization

• Therefore, A = VDV⁻¹, where D is diagonal.
• A represents a scaling along the eigenvector axes!
  A = [ v1 | v2 | … | vn ] diag(λ1, λ2, …, λn) [ v1 | v2 | … | vn ]⁻¹ = VDV⁻¹
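A short sketch (my own matrix, not from the slides) of the factorization A = VDV⁻¹ built from the eigendecomposition:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lams, V = np.linalg.eig(A)  # columns of V are eigenvectors
D = np.diag(lams)

# A = V D V^{-1}: a scaling by D along the eigenvector axes.
A_rebuilt = V @ D @ np.linalg.inv(V)
print(np.allclose(A, A_rebuilt))  # True
```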
Spectra and diagonalization

[Figure sequence: the action of A illustrated via V (a rotation/reflection between orthonormal bases) and D (a scaling); the final frame highlights the eigenbasis]
Spectra and diagonalization

• A is called normal if AAᵀ = AᵀA.
• Important example: symmetric matrices, A = Aᵀ.
• It can be proved that normal n×n matrices have exactly n linearly independent eigenvectors (over C).
• If A is symmetric, all eigenvalues of A are real, and it has n independent real orthonormal eigenvectors.
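For the symmetric case there is a dedicated routine; a minimal sketch (my own matrix) showing real eigenvalues and orthonormal eigenvectors:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])  # symmetric: A == A.T

# eigh is specialized for symmetric (Hermitian) matrices.
lams, V = np.linalg.eigh(A)
print(lams)                                     # all real
print(np.allclose(V.T @ V, np.eye(3)))          # eigenvectors are orthonormal
print(np.allclose(A, V @ np.diag(lams) @ V.T))  # A = V D V^T
```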
Why SVD…

• A diagonalizable matrix is essentially a scaling along its eigenvector axes.
• A general matrix is not just a scaling – it does other things along with scaling (such as rotation), and it need not even be diagonalizable.
• So, to understand how general matrices behave, eigenvalues alone are not enough.
• SVD tells us how general linear transformations behave, and other things…

See you next time