Jim Lambers
MAT 610
Summer Session 2009-10
Lecture 12 Notes
These notes correspond to Sections 7.1 and 8.1 in the text.
The Eigenvalue Problem: Properties and Decompositions
The Unsymmetric Eigenvalue Problem
Let π΄ be an π × π matrix. A nonzero vector x is called an eigenvector of π΄ if there exists a scalar
π such that
π΄x = πx.
The scalar π is called an eigenvalue of π΄, and we say that x is an eigenvector of π΄ corresponding
to π. We see that an eigenvector of π΄ is a vector for which matrix-vector multiplication with π΄ is
equivalent to scalar multiplication by π.
We say that a nonzero vector y is a left eigenvector of π΄ if there exists a scalar π such that
πyπ» = yπ» π΄.
The superscript π» refers to the Hermitian transpose, which includes transposition and complex
conjugation. That is, for any matrix A, A^H is the transpose of the entrywise complex conjugate of A. An eigenvector of A, as defined above, is
sometimes called a right eigenvector of π΄, to distinguish from a left eigenvector. It can be seen
that if y is a left eigenvector of A with eigenvalue λ, then y is also a right eigenvector of A^H, with eigenvalue equal to the complex conjugate of λ.
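As a quick numerical illustration (not part of the original notes), the following Python sketch uses NumPy to check the defining relations for right and left eigenvectors on a small, arbitrarily chosen matrix; the matrix entries and the use of numpy.linalg.eig are assumptions made only for this example.

```python
import numpy as np

# Hypothetical 2x2 test matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Right eigenpairs: A @ V[:, i] = w[i] * V[:, i].
w, V = np.linalg.eig(A)
x = V[:, 0]
print(np.allclose(A @ x, w[0] * x))                        # True

# A left eigenvector of A is a right eigenvector of A^H; if A^H y = mu y,
# then y^H A = conj(mu) y^H.
mu_all, Y = np.linalg.eig(A.conj().T)
y, mu = Y[:, 0], mu_all[0]
print(np.allclose(y.conj() @ A, np.conj(mu) * y.conj()))   # True
```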
Because x is nonzero, it follows that if x is an eigenvector of π΄, then the matrix π΄ β ππΌ is
singular, where π is the corresponding eigenvalue. Therefore, π satisο¬es the equation
det(π΄ β ππΌ) = 0.
The expression det(π΄βππΌ) is a polynomial of degree π in π, and therefore is called the characteristic
polynomial of π΄ (eigenvalues are sometimes called characteristic values). It follows from the fact
that the eigenvalues of π΄ are the roots of the characteristic polynomial that π΄ has π eigenvalues,
which can repeat, and can also be complex, even if π΄ is real. However, if π΄ is real, any complex
eigenvalues must occur in complex-conjugate pairs.
The set of eigenvalues of π΄ is called the spectrum of π΄, and denoted by π(π΄). This terminology
explains why the largest of the magnitudes of the eigenvalues is called the spectral radius of A. The trace
of π΄, denoted by tr(π΄), is the sum of the diagonal elements of π΄. It is also equal to the sum of the
eigenvalues of π΄. Furthermore, det(π΄) is equal to the product of the eigenvalues of π΄.
Example A 2 × 2 matrix
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$
has trace tr(A) = a + d and determinant det(A) = ad − bc. Its characteristic polynomial is
$$\det(A - \lambda I) = \begin{vmatrix} a - \lambda & b \\ c & d - \lambda \end{vmatrix} = (\lambda - a)(\lambda - d) - bc = \lambda^2 - (a + d)\lambda + (ad - bc) = \lambda^2 - \operatorname{tr}(A)\lambda + \det(A).$$
From the quadratic formula, the eigenvalues are
$$\lambda_1 = \frac{a + d}{2} + \frac{\sqrt{(a - d)^2 + 4bc}}{2}, \qquad \lambda_2 = \frac{a + d}{2} - \frac{\sqrt{(a - d)^2 + 4bc}}{2}.$$
It can be veriο¬ed directly that the sum of these eigenvalues is equal to tr(π΄), and that their product
is equal to det(π΄). β‘
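A short NumPy check of these identities (an illustrative sketch, not from the notes; the entries a, b, c, d are arbitrary):

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0          # arbitrary entries for illustration
A = np.array([[a, b], [c, d]])

eigvals = np.linalg.eigvals(A)
disc = np.sqrt((a - d)**2 + 4*b*c)       # discriminant from the quadratic formula above
lam1, lam2 = (a + d)/2 + disc/2, (a + d)/2 - disc/2

print(np.allclose(sorted(eigvals.real), sorted([lam1, lam2])))   # True
print(np.isclose(eigvals.sum(), np.trace(A)))                    # sum of eigenvalues = trace
print(np.isclose(eigvals.prod(), np.linalg.det(A)))              # product of eigenvalues = det
```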
A subspace π of βπ is called an invariant subspace of π΄ if, for any vector x β π , π΄x β π .
Suppose that dim(π ) = π, and let π be an π × π matrix such that range(π) = π . Then, because
each column of π is a vector in π , each column of π΄π is also a vector in π , and therefore is a
linear combination of the columns of π. It follows that π΄π = ππ΅, where π΅ is a π × π matrix.
Now, suppose that y is an eigenvector of π΅, with eigenvalue π. It follows from π΅y = πy that
ππ΅y = π(π΅y) = π(πy) = ππy,
but we also have
ππ΅y = (ππ΅)y = π΄πy.
Therefore, we have
π΄(πy) = π(πy),
which implies that π is also an eigenvalue of π΄, with corresponding eigenvector πy. We conclude
that π(π΅) β π(π΄).
If π = π, then π is an π × π invertible matrix, and it follows that π΄ and π΅ have the same
eigenvalues. Furthermore, from π΄π = ππ΅, we now have π΅ = π β1 π΄π. We say that π΄ and π΅ are
similar matrices, and that π΅ is a similarity transformation of π΄.
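The following sketch (illustrative only; the random matrices and seed are arbitrary) verifies numerically that a similarity transformation preserves the spectrum, and that if y is an eigenvector of B = X^{-1} A X then X y is an eigenvector of A:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))          # almost surely invertible

B = np.linalg.solve(X, A @ X)            # B = X^{-1} A X, a similarity transformation of A

# Similar matrices have the same eigenvalues (up to ordering and roundoff).
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(B))))

# If B y = mu y, then A (X y) = mu (X y).
mu_all, Y = np.linalg.eig(B)
mu, y = mu_all[0], Y[:, 0]
print(np.allclose(A @ (X @ y), mu * (X @ y)))
```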
Similarity transformations are essential tools in algorithms for computing the eigenvalues of a
matrix π΄, since the basic idea is to apply a sequence of similarity transformations to π΄ in order to
obtain a new matrix π΅ whose eigenvalues are easily obtained. For example, suppose that π΅ has a
2 × 2 block structure
$$B = \begin{bmatrix} B_{11} & B_{12} \\ 0 & B_{22} \end{bmatrix},$$
where B11 is p × p and B22 is q × q. Let x = [x1^T  x2^T]^T be an eigenvector of B, where x1 ∈ ℂ^p and x2 ∈ ℂ^q. Then, for some scalar λ ∈ σ(B), we have
$$\begin{bmatrix} B_{11} & B_{12} \\ 0 & B_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \lambda \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$
If x2 β= 0, then π΅22 x2 = πx2 , and π β π(π΅22 ). But if x2 = 0, then π΅11 x1 = πx1 , and π β π(π΅11 ). It
follows that σ(B) ⊆ σ(B11) ∪ σ(B22). However, σ(B) and σ(B11) ∪ σ(B22) have the same number
of elements, so the two sets must be equal. Because π΄ and π΅ are similar, we conclude that
π(π΄) = π(π΅) = π(π΅11 ) βͺ π(π΅22 ).
Therefore, if we can use similarity transformations to reduce π΄ to such a block structure, the
problem of computing the eigenvalues of π΄ decouples into two smaller problems of computing the
eigenvalues of π΅ππ for π = 1, 2. Using an inductive argument, it can be shown that if π΄ is block
upper-triangular, then the eigenvalues of π΄ are equal to the union of the eigenvalues of the diagonal
blocks. If each diagonal block is 1 × 1, then it follows that the eigenvalues of any upper-triangular
matrix are the diagonal elements. The same is true of any lower-triangular matrix; in fact, it can
be shown that because det(π΄) = det(π΄π ), the eigenvalues of π΄π are the same as the eigenvalues of
π΄.
Example The matrix
$$A = \begin{bmatrix} 1 & -2 & 3 & -3 & 4 \\ 0 & 4 & -5 & 6 & -5 \\ 0 & 0 & 6 & -7 & 8 \\ 0 & 0 & 0 & 7 & 0 \\ 0 & 0 & 0 & -8 & 9 \end{bmatrix}$$
has eigenvalues 1, 4, 6, 7, and 9. This is because A has a block upper-triangular structure
$$A = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}, \qquad A_{11} = \begin{bmatrix} 1 & -2 & 3 \\ 0 & 4 & -5 \\ 0 & 0 & 6 \end{bmatrix}, \qquad A_{22} = \begin{bmatrix} 7 & 0 \\ -8 & 9 \end{bmatrix}.$$
Because both of these blocks are themselves triangular, their eigenvalues are equal to their diagonal
elements, and the spectrum of π΄ is the union of the spectra of these blocks. β‘
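This can be checked numerically; the sketch below (not part of the notes) compares the eigenvalues of the full matrix with those of its diagonal blocks using NumPy:

```python
import numpy as np

A = np.array([[1, -2,  3, -3,  4],
              [0,  4, -5,  6, -5],
              [0,  0,  6, -7,  8],
              [0,  0,  0,  7,  0],
              [0,  0,  0, -8,  9]], dtype=float)

A11, A22 = A[:3, :3], A[3:, 3:]            # the two diagonal blocks

eig_full   = np.sort(np.linalg.eigvals(A).real)
eig_blocks = np.sort(np.concatenate([np.linalg.eigvals(A11),
                                     np.linalg.eigvals(A22)]).real)

print(eig_full)                            # approximately [1. 4. 6. 7. 9.]
print(np.allclose(eig_full, eig_blocks))   # True: sigma(A) = sigma(A11) U sigma(A22)
```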
Suppose that x is a normalized eigenvector of π΄, with eigenvalue π. Furthermore, suppose that
π is a Householder reο¬ection such that π x = e1 . Because π is symmetric and orthogonal, π is its
own inverse, so π e1 = x. It follows that the matrix π π π΄π , which is a similarity transformation of
π΄, satisο¬es
$$P^T A P e_1 = P^T A x = \lambda P^T x = \lambda P x = \lambda e_1.$$
That is, e1 is an eigenvector of π π π΄π with eigenvalue π, and therefore π π π΄π has the block
structure
$$P^T A P = \begin{bmatrix} \lambda & v^H \\ 0 & B \end{bmatrix}.$$
Therefore, π(π΄) = {π} βͺ π(π΅), which means that we can now focus on the (π β 1) × (π β 1) matrix
π΅ to ο¬nd the rest of the eigenvalues of π΄. This process of reducing the eigenvalue problem for π΄
to that of π΅ is called deο¬ation.
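A minimal NumPy sketch of one deflation step is given below. It is illustrative only: a random real symmetric test matrix is used so that the selected eigenvector is real (which keeps the Householder reflection real), and the sign convention used to build the reflection is a standard choice to avoid cancellation; none of these specifics come from the notes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric test matrix -> real eigenvectors

# Pick one eigenpair and normalize the eigenvector.
w, V = np.linalg.eigh(A)
lam, x = w[0], V[:, 0] / np.linalg.norm(V[:, 0])

# Householder reflection P = I - 2 v v^T / (v^T v) with P x = -sign(x[0]) e1.
e1 = np.zeros(n); e1[0] = 1.0
s = np.sign(x[0]) if x[0] != 0 else 1.0
v = x + s * e1
P = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)

B = P.T @ A @ P                        # similar to A
print(np.round(B[:, 0], 10))           # first column is (lam, 0, ..., 0)^T
print(np.isclose(B[0, 0], lam))        # True; the trailing (n-1) x (n-1) block remains
```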
Continuing this process, we obtain the Schur Decomposition
$$A = Q T Q^H,$$
where π is an upper-triangular matrix whose diagonal elements are the eigenvalues of π΄, and π is
a unitary matrix, meaning that ππ» π = πΌ. That is, a unitary matrix is the generalization of a real
orthogonal matrix to complex matrices. Every square matrix has a Schur decomposition.
The columns of π are called Schur vectors. However, for a general matrix π΄, there is no relation
between Schur vectors of π΄ and eigenvectors of π΄, as each Schur vector qπ satisο¬es π΄qπ = π΄πeπ =
ππ eπ . That is, π΄qπ is a linear combination of q1 , . . . , qπ . It follows that for π = 1, 2, . . . , π, the
ο¬rst π Schur vectors q1 , q2 , . . . , qπ span an invariant subspace of π΄.
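SciPy's schur routine computes this decomposition; the sketch below (an illustration with an arbitrary random matrix, not taken from the notes) checks that T is upper triangular with the eigenvalues of A on its diagonal, and that the leading Schur vectors span an invariant subspace:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Complex Schur form: A = Q T Q^H with Q unitary and T upper triangular.
T, Q = schur(A, output='complex')

print(np.allclose(Q.conj().T @ Q, np.eye(4)))              # Q is unitary
print(np.allclose(np.tril(T, -1), 0))                      # T is upper triangular
print(np.allclose(np.sort_complex(np.diag(T)),
                  np.sort_complex(np.linalg.eigvals(A))))  # diag(T) = sigma(A)

# The first k Schur vectors span an invariant subspace of A.
k = 2
print(np.allclose(A @ Q[:, :k], Q[:, :k] @ T[:k, :k]))     # True
```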
The Schur vectors and eigenvectors of π΄ are the same when π΄ is a normal matrix, which means
that π΄π» π΄ = π΄π΄π» . Any symmetric or skew-symmetric matrix, for example, is normal. It can be
shown that in this case, the normalized eigenvectors of π΄ form an orthonormal basis for βπ . It
follows that if π1 , π2 , . . . , ππ are the eigenvalues of π΄, with corresponding (orthonormal) eigenvectors
q1 , q2 , . . . , qπ , then we have
$$AQ = QD, \qquad Q = \begin{bmatrix} q_1 & \cdots & q_n \end{bmatrix}, \qquad D = \operatorname{diag}(\lambda_1, \ldots, \lambda_n).$$
Because π is a unitary matrix, it follows that
$$Q^H A Q = Q^H Q D = D,$$
and π΄ is similar to a diagonal matrix. We say that π΄ is diagonalizable. Furthermore, because π·
can be obtained from π΄ by a similarity transformation involving a unitary matrix, we say that π΄
is unitarily diagonalizable.
Even if π΄ is not a normal matrix, it may be diagonalizable, meaning that there exists an
invertible matrix π such that π β1 π΄π = π·, where π· is a diagonal matrix. If this is the case, then,
because π΄π = π π·, the columns of π are eigenvectors of π΄, and the rows of π β1 are eigenvectors
of π΄π (as well as the left eigenvectors of π΄, if π is real).
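For instance (a hypothetical example, not from the notes), the non-normal matrix below is still diagonalizable because its eigenvalues are distinct; the sketch verifies P^{-1} A P = D and that the rows of P^{-1} behave as left eigenvectors:

```python
import numpy as np

A = np.array([[3.0, 1.0],        # upper triangular, hence not normal,
              [0.0, 1.0]])       # but with distinct eigenvalues 3 and 1

w, P = np.linalg.eig(A)          # columns of P are right eigenvectors
D = np.diag(w)
Pinv = np.linalg.inv(P)

print(np.allclose(Pinv @ A @ P, D))      # P^{-1} A P = D: A is diagonalizable

# Row i of P^{-1} satisfies r A = w[i] r, i.e., it is a left eigenvector of A
# (equivalently, its transpose is an eigenvector of A^T).
r = Pinv[0, :]
print(np.allclose(r @ A, w[0] * r))      # True
```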
By deο¬nition, an eigenvalue of π΄ corresponds to at least one eigenvector. Because any nonzero
scalar multiple of an eigenvector is also an eigenvector, corresponding to the same eigenvalue,
an eigenvalue actually corresponds to an eigenspace, which is the span of any set of eigenvectors
corresponding to the same eigenvalue, and this eigenspace must have a dimension of at least one.
Any invariant subspace of a diagonalizable matrix A is spanned by eigenvectors of A.
Now, suppose that π1 and π2 are distinct eigenvalues, with corresponding eigenvectors x1 and
x2 , respectively. Furthermore, suppose that x1 and x2 are linearly dependent. This means that
they must be parallel; that is, there exists a nonzero constant π such that x2 = πx1 . However, this
implies that Ax2 = λ2 x2 and Ax2 = cAx1 = cλ1 x1 = λ1 x2 . Since λ1 ≠ λ2 , this is a
contradiction. Therefore, x1 and x2 must be linearly independent.
More generally, it can be shown, using an inductive argument, that a set of π eigenvectors
corresponding to π distinct eigenvalues must be linearly independent. Suppose that x1 , . . . , xπ are
eigenvectors of π΄, with distinct eigenvalues π1 , . . . , ππ . Trivially, x1 is linearly independent. Using
induction, we assume that we have shown that x1 , . . . , xπβ1 are linearly independent, and show
that x1 , . . . , xπ must be linearly independent as well. If they are not, then there must be constants
π1 , . . . , ππβ1 , not all zero, such that
$$x_k = c_1 x_1 + c_2 x_2 + \cdots + c_{k-1} x_{k-1}.$$
Multiplying both sides by π΄ yields
$$A x_k = c_1 \lambda_1 x_1 + c_2 \lambda_2 x_2 + \cdots + c_{k-1} \lambda_{k-1} x_{k-1},$$
because Axi = λi xi for i = 1, 2, . . . , k − 1. However, because both sides of the original relation are equal to xk , and Axk = λk xk , we also have
$$A x_k = c_1 \lambda_k x_1 + c_2 \lambda_k x_2 + \cdots + c_{k-1} \lambda_k x_{k-1}.$$
It follows that
$$c_1 (\lambda_k - \lambda_1) x_1 + c_2 (\lambda_k - \lambda_2) x_2 + \cdots + c_{k-1} (\lambda_k - \lambda_{k-1}) x_{k-1} = 0.$$
However, because the eigenvalues π1 , . . . , ππ are distinct, and not all of the coeο¬cients π1 , . . . , ππβ1
are zero, this means that we have a nontrivial linear combination of linearly independent vectors being equal to the zero vector, which is a contradiction. We conclude that eigenvectors corresponding
to distinct eigenvalues are linearly independent.
It follows that if A has n distinct eigenvalues, then it has a set of n linearly independent eigenvectors. If X is a matrix whose columns are these eigenvectors, then AX = XD, where D is a diagonal matrix whose diagonal entries are the corresponding eigenvalues. Because the columns of X are linearly independent, X is invertible, and therefore X^{-1}AX = D, and A is diagonalizable.
Now, suppose that the eigenvalues of π΄ are not distinct; that is, the characteristic polynomial
has repeated roots. Then an eigenvalue with multiplicity π does not necessarily correspond to π
linearly independent eigenvectors. The algebraic multiplicity of an eigenvalue π is the number of
times that π occurs as a root of the characteristic polynomial. The geometric multiplicity of π is
the dimension of the eigenspace corresponding to π, which is equal to the maximal size of a set of
linearly independent eigenvectors corresponding to π. The geometric multiplicity of an eigenvalue
π is always less than or equal to the algebraic multiplicity. When it is strictly less, then we say
that the eigenvalue is defective. When both multiplicities are equal to one, then we say that the
eigenvalue is simple.
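As a small illustration (not from the notes), the 2 × 2 Jordan block below has the eigenvalue 2 with algebraic multiplicity 2 but geometric multiplicity 1, so that eigenvalue is defective; the geometric multiplicity is computed as the dimension of the null space of A − λI:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])       # characteristic polynomial (lambda - 2)^2

lam = 2.0
geometric_mult = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geometric_mult)            # 1, strictly less than the algebraic multiplicity 2
```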
The Jordan canonical form of an π × π matrix π΄ is a decomposition that yields information
about the eigenspaces of π΄. It has the form
$$A = X J X^{-1},$$
where π½ has the block diagonal structure
$$J = \begin{bmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & J_p \end{bmatrix}.$$
Each diagonal block π½π is a Jordan block that has the form
$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}, \qquad i = 1, 2, \ldots, p.$$
The number of Jordan blocks, π, is equal to the number of linearly independent eigenvectors of π΄.
The diagonal element of π½π , ππ , is an eigenvalue of π΄. The number of Jordan blocks associated with
ππ is equal to the geometric multiplicity of ππ . The sum of the sizes of these blocks is equal to the
algebraic multiplicity of ππ . If π΄ is diagonalizable, then each Jordan block is 1 × 1.
Example Consider a matrix with Jordan canonical form
$$J = \begin{bmatrix} 2 & 1 & 0 & & & \\ 0 & 2 & 1 & & & \\ 0 & 0 & 2 & & & \\ & & & 3 & 1 & \\ & & & 0 & 3 & \\ & & & & & 2 \end{bmatrix}.$$
The eigenvalues of this matrix are 2, with algebraic multiplicity 4, and 3, with algebraic multiplicity
2. The geometric multiplicity of the eigenvalue 2 is 2, because it is associated with two Jordan
blocks. The geometric multiplicity of the eigenvalue 3 is 1, because it is associated with only one
Jordan block. Therefore, there are a total of three linearly independent eigenvectors, and the matrix
is not diagonalizable. β‘
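Because the entries in this example are exact integers, a computer algebra system can reproduce these multiplicities; the SymPy sketch below (illustrative, using sympy.Matrix.jordan_form and eigenvects) works in exact arithmetic rather than floating point:

```python
import sympy as sp

J = sp.Matrix([[2, 1, 0, 0, 0, 0],
               [0, 2, 1, 0, 0, 0],
               [0, 0, 2, 0, 0, 0],
               [0, 0, 0, 3, 1, 0],
               [0, 0, 0, 0, 3, 0],
               [0, 0, 0, 0, 0, 2]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis) triples.
for lam, alg_mult, basis in J.eigenvects():
    print(lam, alg_mult, len(basis))     # e.g. "2 4 2" and "3 2 1"

P, Jform = J.jordan_form()               # J = P * Jform * P**(-1), computed exactly
print(Jform.is_upper)                    # True; the Jordan blocks lie along the diagonal
```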
The Jordan canonical form, while very informative about the eigensystem of π΄, is not practical
to compute using ο¬oating-point arithmetic. This is due to the fact that while the eigenvalues of a
matrix are continuous functions of its entries, the Jordan canonical form is not. If two computed
eigenvalues are nearly equal, and their computed corresponding eigenvectors are nearly parallel, we
do not know if they represent two distinct eigenvalues with linearly independent eigenvectors, or a
multiple eigenvalue that could be defective.
The Symmetric Eigenvalue Problem
The eigenvalue problem for a real, symmetric matrix π΄, or a complex, Hermitian matrix π΄, for which
π΄ = π΄π» , is a considerable simpliο¬cation of the eigenvalue problem for a general matrix. Consider
the Schur decomposition π΄ = ππ ππ» , where π is upper-triangular. Then, if π΄ is Hermitian, it
follows that π = π π» . But because π is upper-triangular, it follows that π must be diagonal. That
is, any symmetric real matrix, or Hermitian complex matrix, is unitarily diagonalizable, as stated
previously because π΄ is normal. Whatβs more, because the Hermitian transpose includes complex
conjugation, π must equal its complex conjugate, which implies that the eigenvalues of π΄ are real,
even if π΄ itself is complex.
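A quick NumPy check (with an arbitrarily chosen Hermitian matrix, not one from the notes):

```python
import numpy as np

A = np.array([[2.0,        1.0 + 1.0j],
              [1.0 - 1.0j, 3.0       ]])
print(np.allclose(A, A.conj().T))      # A is Hermitian

w = np.linalg.eigvals(A)
print(w)                               # the eigenvalues are 4 and 1 (in some order)
print(np.allclose(w.imag, 0.0))        # they are real even though A is complex
```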
Because the eigenvalues are real, we can order them. By convention, we prescribe that if π΄ is
an π × π symmetric matrix, then it has eigenvalues
$$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n.$$
Furthermore, by the Courant-Fischer Minimax Theorem, each of these eigenvalues has the following
characterization:
$$\lambda_k = \max_{\dim(S) = k} \; \min_{y \in S,\, y \neq 0} \frac{y^H A y}{y^H y}.$$
That is, the πth largest eigenvalue of π΄ is equal to the maximum, over all π-dimensional subspaces
of βπ , of the minimum value of the Rayleigh quotient
$$r(y) = \frac{y^H A y}{y^H y}, \qquad y \neq 0,$$
on each subspace. It follows that π1 , the largest eigenvalue, is the absolute maximum value of the
Rayleigh quotient on all of βπ , while ππ , the smallest eigenvalue, is the absolute minimum value.
In fact, by computing the gradient of π(y), it can be shown that every eigenvector of π΄ is a critical
point of π(y), with the corresponding eigenvalue being the value of π(y) at that critical point.
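The following NumPy sketch (illustrative; the random symmetric matrix and seed are arbitrary) samples the Rayleigh quotient and checks that its values lie between the smallest and largest eigenvalues, with the extremes attained at eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                              # real symmetric test matrix

w, V = np.linalg.eigh(A)                       # eigenvalues in ascending order
lam_min, lam_max = w[0], w[-1]

def rayleigh(y):
    return (y @ A @ y) / (y @ y)

# Sample at many random nonzero vectors; every value lies in [lam_min, lam_max]
# (a tiny tolerance absorbs floating-point roundoff).
samples = np.array([rayleigh(rng.standard_normal(n)) for _ in range(10000)])
print(np.all(samples >= lam_min - 1e-10) and np.all(samples <= lam_max + 1e-10))

# The extreme values are attained at the corresponding eigenvectors.
print(np.isclose(rayleigh(V[:, -1]), lam_max),
      np.isclose(rayleigh(V[:, 0]), lam_min))
```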