VECTORS:
Multivariate observations:
x = (x1, x2, ..., xp)^T is a multivariate observation.
x1, ..., xn is a sample of multivariate observations.
A sample can be represented by an n x p data matrix:
    [ x11 ... x1p ]
X = [ x21 ... x2p ]
    [ ...         ]
    [ xn1 ... xnp ]
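As a small illustration (not part of the original notes), a sample of n observations in R^p can be stored as an n x p numpy array; the sample values below are made up.

```python
import numpy as np

# Hypothetical sample: n = 4 observations, p = 3 variables per observation.
x1 = np.array([1.0, 3.0, 4.0])
x2 = np.array([2.0, 2.0, 1.0])
x3 = np.array([0.0, 1.0, 2.0])
x4 = np.array([3.0, 5.0, 4.0])

# Stack the observations as rows to form the n x p data matrix X.
X = np.vstack([x1, x2, x3, x4])
print(X.shape)  # (4, 3): one row per observation, one column per variable
```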
Vector addition, transpose and multiplication:
[1]   [2]   [3]
[3] + [2] = [5]
[4]   [1]   [5]

[1]T
[3]  = (1 3 4)
[4]

[1]T [2]
[3]  [2] = 1*2 + 3*2 + 4*1 = 12
[4]  [1]
Norm of a vector: |v| = (v^T v)^(1/2)
Unit vector: |v|=1
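A minimal numerical check of the worked examples above (addition, transpose, inner product) and of the norm formula |v| = (v^T v)^(1/2); the numpy calls are an illustration, not part of the original notes.

```python
import numpy as np

a = np.array([1.0, 3.0, 4.0])
b = np.array([2.0, 2.0, 1.0])

print(a + b)   # [3. 5. 5.]  -- vector addition, as in the example above
print(a @ b)   # 12.0 = 1*2 + 3*2 + 4*1  -- inner product a^T b
               # (a.T on a 1-D array is a itself; use a.reshape(-1, 1) for a column vector)

norm_a = np.sqrt(a @ a)                        # |a| = (a^T a)^(1/2)
print(np.isclose(norm_a, np.linalg.norm(a)))   # True
print(np.linalg.norm(a / norm_a))              # 1.0 -- a/|a| is a unit vector
```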
1   0 
0 
   
 
0  1 
0

Canonical basis {e1,e2,…,eP} = {
,
,...,  }
 ...   ... 
 ... 
   
 
0  0 
1 
Orthogonal basis, coordinate system.
Gram-Schmidt: given a linearly independent system of vectors, convert it into an orthogonal system (a sketch follows below).
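A sketch of the Gram-Schmidt procedure mentioned above (in its modified, numerically friendlier form); the function name and test vectors are my own, and in practice a QR factorization (np.linalg.qr) is usually preferred.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        # Remove the components along each basis vector found so far.
        for u in basis:
            w = w - (w @ u) * u
        basis.append(w / np.linalg.norm(w))  # normalize to unit length
    return basis

# Example with three linearly independent vectors in R^3 (chosen arbitrarily).
v1, v2, v3 = np.array([1, 1, 0]), np.array([1, 0, 1]), np.array([0, 1, 1])
e1, e2, e3 = gram_schmidt([v1, v2, v3])
print(np.isclose(e1 @ e2, 0), np.isclose(e1 @ e3, 0), np.isclose(e2 @ e3, 0))  # True True True
```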
1 
 
1
Unary vector: 1 =  
 ... 
 
1 
Matrix multiplication:
[1 2]   [2 3]   [1*2+2*2  1*3+2*1]   [ 6  5]
[3 2] * [2 1] = [3*2+2*2  3*3+2*1] = [10 11]
[4 1]           [4*2+1*2  4*3+1*1]   [10 13]
Determinant of a square matrix:
det [3 1] = 3*2 - 4*1 = 2
    [4 2]

    [1 2 0]        [2 1]        [3 1]        [3 2]
det [3 2 1] = 1*det[1 2] - 2*det[4 2] + 0*det[4 1] = 1*3 - 2*2 + 0 = -1
    [4 1 2]


Covariance matrix: Symmetric positive definite
Eigenvalues and Eigenvectors of a covariance or correlation matrix
Multivariate Normal distribution has constant density on ellipsoids:
Σ is a p x p covariance matrix; the ellipsoids are z^T Σ^(-1) z = c.
Ellipsoids are also characterized by the directions and lengths of their semi-axes.
Spectral decomposition of a covariance matrix:
Σ = sum_{i=1}^{p} λi vi vi^T, where λi (i = 1, ..., p) is called the i-th eigenvalue and vi (i = 1, ..., p) is called the i-th eigenvector.
Synonyms: eigenvalue == semi-axis length == variance == characteristic root == latent root;
eigenvector == semi-axis direction == principal component == characteristic vector == latent vector.
In order to find the eigenvalues and eigenvectors of Σ we use the equations
Σ vi = λi vi
(Σ - λi I) vi = 0
det(Σ - λi I) = 0
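A sketch of how the eigenvalue equation Σ vi = λi vi can be solved numerically; the 2 x 2 matrix below is invented for illustration, and np.linalg.eigh is used because Σ is symmetric.

```python
import numpy as np

Sigma = np.array([[4.0, 2.0],
                  [2.0, 3.0]])   # an example symmetric positive definite matrix

lam, V = np.linalg.eigh(Sigma)   # eigenvalues (ascending) and eigenvectors as columns of V

for i in range(len(lam)):
    v = V[:, i]
    # Check the defining equation: Sigma v_i = lambda_i v_i.
    print(np.allclose(Sigma @ v, lam[i] * v))   # True
```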
PROPERTIES
(i) Σ = V Λ V^T, where V is the matrix of eigenvectors and Λ the diagonal matrix of eigenvalues of Σ.
(ii) The eigenvectors describe the orthogonal rotation to the directions of maximum variance. This type of rotation is often referred to as principal-axes rotation.
(iii) The trace of the matrix Σ is the sum of its eigenvalues {λi}.
(iv) The determinant of the matrix Σ is the product of the λi.
(v) Two eigenvectors vi and vj associated with two distinct eigenvalues λi and λj of a symmetric matrix are mutually orthogonal, vi^T vj = 0.
(vi) Square root matrix: there are several matrices that are square roots of Σ, e.g. Σ^(1/2) = V Λ^(1/2) V^T, or the Cholesky factorization Σ = R^T R.
(vii) Given a set of variables X1, X2, ..., Xp with nonsingular covariance matrix Σ, we can always derive a set of uncorrelated variables Y1, Y2, ..., Yp by a set of linear transformations corresponding to the principal-axes rotation. The covariance matrix of this new set of variables is the diagonal matrix Λ = V^T Σ V.
(viii) Given a set of variables X1, X2, ..., Xp with nonsingular covariance matrix Σx, a new set of variables Y1, Y2, ..., Yp is defined by the transformation Y' = X'V, where V is an orthogonal matrix. If the covariance matrix of the Y's is Σy, then y^T Σy^(-1) y = x^T Σx^(-1) x; in other words, the quadratic form is invariant under rigid rotation.
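A short numerical check of properties (i) and (iii)-(vii) under an assumed covariance matrix; the matrix entries and dimensions are illustrative only.

```python
import numpy as np

Sigma = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])   # assumed nonsingular covariance matrix

lam, V = np.linalg.eigh(Sigma)        # eigenvalues and orthonormal eigenvectors

print(np.allclose(V @ np.diag(lam) @ V.T, Sigma))     # (i)   Sigma = V Lambda V^T
print(np.isclose(lam.sum(), np.trace(Sigma)))         # (iii) trace = sum of eigenvalues
print(np.isclose(lam.prod(), np.linalg.det(Sigma)))   # (iv)  determinant = product of eigenvalues
print(np.allclose(V.T @ V, np.eye(3)))                # (v)   eigenvectors are mutually orthogonal

sqrt_Sigma = V @ np.diag(np.sqrt(lam)) @ V.T          # (vi)  one square root matrix of Sigma
print(np.allclose(sqrt_Sigma @ sqrt_Sigma, Sigma))

# (vii) principal-axes rotation: Lambda = V^T Sigma V is diagonal.
Lam = V.T @ Sigma @ V
print(np.allclose(Lam, np.diag(np.diag(Lam))))
```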
Singular value decomposition:
X = U D V^T, where U is (n by p) with orthonormal columns, D is diagonal, V is (p by p) orthogonal, and X is centered.
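A minimal sketch of the SVD of a centered data matrix, showing its connection to the eigen-decomposition of the sample covariance matrix (λi = di^2 / (n - 1)); the data set below is randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))   # hypothetical data: n = 10 observations, p = 3 variables
Xc = X - X.mean(axis=0)        # center the columns, as the notes assume

U, d, Vt = np.linalg.svd(Xc, full_matrices=False)   # Xc = U diag(d) V^T; U is n x p, V^T is p x p

S = np.cov(Xc, rowvar=False)                        # sample covariance matrix (divides by n - 1)
lam = np.sort(np.linalg.eigvalsh(S))[::-1]          # its eigenvalues, sorted in decreasing order

print(np.allclose(U @ np.diag(d) @ Vt, Xc))         # the decomposition reproduces Xc
print(np.allclose(d**2 / (Xc.shape[0] - 1), lam))   # singular values relate to eigenvalues of S
```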