Dominant Eigenvalues & The Power Method
Eigenvalues & Eigenvectors
In linear algebra we learned that a scalar λ is an eigenvalue of a square n×n
matrix A if there is a non-zero vector w such that Aw = λw; we call the vector w an
eigenvector of A. For the vector w, multiplication by A acts like scalar
multiplication by λ. Eigenvalues are important for many applications in
mathematics, physics, engineering and other disciplines.
The Dominant Eigenvalue
A nn matrix A will have n eigenvalues (some may be repeated). By the dominant
eigenvalue we refer to the one that is biggest in terms of absolute value. This
would include any eigenvalues that are complex.
1 3  7
A  0  4 1 
0 0
2 
The matrix A to the right has as its eigenvalues the
set of numbers {1,-4,2} (i.e. it is upper triangular). In
absolute value this set is {|1|,|-4|,|2|}={1,4,2}. Since -4
is the largest in absolute value we say that -4 is the
dominant eigenvalue.
The problem we want to solve is that if we are given a matrix A can we estimate
the dominant eigenvalue?
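As a quick numerical check of this example, a library routine can list all the eigenvalues and we can pick out the dominant one; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Upper-triangular example matrix; its eigenvalues are the
# diagonal entries {1, -4, 2}.
A = np.array([[1.0,  3.0, -7.0],
              [0.0, -4.0,  1.0],
              [0.0,  0.0,  2.0]])

eigs = np.linalg.eigvals(A)                # all eigenvalues of A
dominant = eigs[np.argmax(np.abs(eigs))]   # the one largest in absolute value
print(dominant)  # approximately -4
```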
The Power Method
The Power Method works for matrices somewhat like how the fixed point method works
for functions, though the iteration step is a bit different. To explain how it works we
need to introduce a bit of terminology. The dominant term of a vector v is the term
that has the greatest absolute value (careful: it is the term itself, not the absolute
value of the term). If two terms have the same absolute value, you can
pick either one for our purposes.
7
v1   3
 6 
dominant term=7
5
 8
v2   
4
 
 8
dominant term=-8
The algorithm consists of the
following steps. Start with an initial
vector w0. Let the approximation for
dominant eigenvalue be z0 the
dominant term in w0. Use the
iteration to the right:
 6 
v3  


10


dominant term=-10
 5 
v 4   37 
 132 
dominant term=6.5
zk+1 = dominate term in Awk
wk+1 = (1/zk+1) Awk
To get the next approximation for the dominant eigenvalue multiply
the previous eigenvector by the matrix A and take that vectors
dominant term. To get the next approximation for the eigenvector
divide the product by it dominant term. The vector w0 given to the
right is often used as the initial vector.
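The terminology and iteration step above can be sketched in plain Python (the helper names `dominant_term` and `power_step` are just illustrative choices):

```python
def dominant_term(v):
    """Return the entry of v largest in absolute value
    (the entry itself, not its absolute value)."""
    return max(v, key=abs)

def power_step(A, w):
    """One power-method step: z = dominant term of A w,
    next w = (1/z) * A w."""
    Aw = [sum(a * x for a, x in zip(row, w)) for row in A]
    z = dominant_term(Aw)
    return z, [x / z for x in Aw]

# Dominant-term examples:
print(dominant_term([7, 3, -6]))     # 7
print(dominant_term([5, -8, 4, 8]))  # -8 (tie in absolute value; max keeps the first)
```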
Example:
Apply the power method for 3 iterations to find z3 and w3 for the matrix A below,
starting from the all-ones vector:

    [ 0  1  2 ]          [ 1 ]
A = [ 1  1  0 ],    w0 = [ 1 ],    z0 = 1
    [ 2  0  1 ]          [ 1 ]

Iteration 1:  Aw0 = (3, 2, 3)            z1 = 3            w1 = (1, 2/3, 1)
Iteration 2:  Aw1 = (8/3, 5/3, 3)        z2 = 3            w2 = (8/9, 5/9, 1)
Iteration 3:  Aw2 = (23/9, 13/9, 25/9)   z3 = 25/9 ≈ 2.78  w3 = (23/25, 13/25, 1)
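The three iterations above can be reproduced in Python; using exact fractions keeps intermediate values like 8/9 and 25/9 readable (a sketch, with an illustrative `power_method` helper):

```python
from fractions import Fraction

def power_method(A, w0, iterations):
    """Power method: repeatedly multiply by A and divide by
    the dominant term (the entry largest in absolute value)."""
    w = list(w0)
    z = max(w, key=abs)  # z0 = dominant term of w0
    for _ in range(iterations):
        Aw = [sum(a * x for a, x in zip(row, w)) for row in A]
        z = max(Aw, key=abs)       # zk+1 = dominant term of A wk
        w = [x / z for x in Aw]    # wk+1 = (1/zk+1) A wk
    return z, w

A = [[0, 1, 2], [1, 1, 0], [2, 0, 1]]
w0 = [Fraction(1), Fraction(1), Fraction(1)]
z3, w3 = power_method(A, w0, 3)
print(z3)  # 25/9
print(w3)  # [Fraction(23, 25), Fraction(13, 25), Fraction(1, 1)]
```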
Power Method Convergence
For a real matrix A, the power method will converge to the dominant eigenvalue
provided that the dominant eigenvalue is a real number. If the dominant eigenvalue
is a complex number, an initial vector with complex entries would need to be used.
If the dominant eigenvalue is repeated, the method will still find it.
This convergence can be seen as follows.
Given a nn matrix A with n eigenvalues 1,2,3,…,n with 1>2>3>…>n (i.e. 1
is the dominant eigenvalue) find a corresponding basis of eigenvectors
w1,w2,w3,…,wn. Let the initial vector w0 be a linear combination of the vectors
w1,w2,w3,…,wn.
w0
= a1w1+a2w2+a3w3+…+anwn
Aw0
= A(a1w1+a2w2+a3w3+…+anwn)
=a1Aw1+a2Aw2+a3Aw3+…+anAwn
(replace with eigenvalues)
=a11w1+a22w2+a33w3+…+annwn
Akw0
=a1(1)kw1+a2(2)kw2+…+an(n)kwn
Akw0/(1)k-1
=a1(1)k /(1)k-1 w1+ a2(2)k /(1)k-1 w2 +…+an(n)k /(1)k-1 wn
(repeat for powers of A)
Ak w0
1k 1
 
 a11w1  a2 2  2 
 1 
k 1
 
w 2  a33  3 
 1 
k 1
 
w 3    an n  n 
 1 
k 1
wn
For large values of k (i.e. as k goes to infinity) we get the following:
Ak w0
1k 1
 a11w1 since :


2
 1, 3  1,, n  1
1
1
1
At each stage of the process we divide by the dominant term of
the vector. If we write w1 = (c1, c2, …, cn) and consider what
happens between two consecutive estimates, we get:

A^k w0 / λ1^(k-1)     ≈ (a1 λ1 c1, a1 λ1 c2, …, a1 λ1 cn)    and
A^(k+1) w0 / λ1^(k-1) ≈ (a1 λ1^2 c1, a1 λ1^2 c2, …, a1 λ1^2 cn)

Each entry of the second vector is λ1 times the corresponding entry of the first,
so dividing by the dominant term gives something that is approximately λ1.
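This convergence can be observed numerically: running the iteration on the earlier upper-triangular matrix (eigenvalues {1, -4, 2}), the estimates zk settle at the dominant eigenvalue -4. A minimal sketch:

```python
def matvec(A, v):
    """Multiply matrix A by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Upper-triangular matrix from earlier; dominant eigenvalue is -4.
A = [[1, 3, -7], [0, -4, 1], [0, 0, 2]]
w = [1.0, 1.0, 1.0]
for k in range(30):
    Aw = matvec(A, w)
    z = max(Aw, key=abs)    # zk+1: tends to the dominant eigenvalue
    w = [x / z for x in Aw]
print(round(z, 6))  # -4.0
```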