Jim Lambers
MAT 610
Summer Session 2009-10
Lecture 12 Notes
These notes correspond to Sections 7.1 and 8.1 in the text.
The Eigenvalue Problem: Properties and Decompositions
The Unsymmetric Eigenvalue Problem
Let π΄ be an π × π matrix. A nonzero vector x is called an eigenvector of π΄ if there exists a scalar
π such that
π΄x = πx.
The scalar π is called an eigenvalue of π΄, and we say that x is an eigenvector of π΄ corresponding
to π. We see that an eigenvector of π΄ is a vector for which matrix-vector multiplication with π΄ is
equivalent to scalar multiplication by π.
We say that a nonzero vector y is a left eigenvector of π΄ if there exists a scalar π such that
πyπ» = yπ» π΄.
The superscript π» refers to the Hermitian transpose, which includes transposition and complex
conjugation. That is, for any matrix π΄, π΄π» = π΄π . An eigenvector of π΄, as deο¬ned above, is
sometimes called a right eigenvector of π΄, to distinguish from a left eigenvector. It can be seen
that if y is a left eigenvector of π΄ with eigenvalue π, then y is also a right eigenvector of π΄π» , with
eigenvalue π.
Because x is nonzero, it follows that if x is an eigenvector of π΄, then the matrix π΄ β ππΌ is
singular, where π is the corresponding eigenvalue. Therefore, π satisο¬es the equation
det(π΄ β ππΌ) = 0.
The expression det(π΄βππΌ) is a polynomial of degree π in π, and therefore is called the characteristic
polynomial of π΄ (eigenvalues are sometimes called characteristic values). It follows from the fact
that the eigenvalues of π΄ are the roots of the characteristic polynomial that π΄ has π eigenvalues,
which can repeat, and can also be complex, even if π΄ is real. However, if π΄ is real, any complex
eigenvalues must occur in complex-conjugate pairs.
The set of eigenvalues of π΄ is called the spectrum of π΄, and denoted by π(π΄). This terminology
explains why the magnitude of the largest eigenvalues is called the spectral radius of π΄. The trace
of π΄, denoted by tr(π΄), is the sum of the diagonal elements of π΄. It is also equal to the sum of the
eigenvalues of π΄. Furthermore, det(π΄) is equal to the product of the eigenvalues of π΄.
Example A 2 × 2 matrix

    A = [ a  b
          c  d ]

has trace tr(A) = a + d and determinant det(A) = ad − bc. Its characteristic polynomial is

    det(A − λI) = (a − λ)(d − λ) − bc = λ^2 − (a + d)λ + (ad − bc) = λ^2 − tr(A)λ + det(A).

From the quadratic formula, the eigenvalues are

    λ_1 = (a + d)/2 + √((a − d)^2 + 4bc)/2,
    λ_2 = (a + d)/2 − √((a − d)^2 + 4bc)/2.

It can be verified directly that the sum of these eigenvalues is equal to tr(A), and that their product is equal to det(A). □
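As a quick numerical check of the example above (a sketch using numpy, which these notes do not otherwise assume), we can evaluate the quadratic-formula eigenvalues for a particular 2 × 2 matrix and confirm the trace and determinant identities:

```python
# Verify the 2x2 eigenvalue formulas and the trace/determinant identities.
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])

disc = np.sqrt((a - d)**2 + 4*b*c)
lam1 = (a + d)/2 + disc/2
lam2 = (a + d)/2 - disc/2

# Sum of eigenvalues equals the trace; product equals the determinant.
assert np.isclose(lam1 + lam2, np.trace(A))
assert np.isclose(lam1 * lam2, np.linalg.det(A))
# The formulas agree with numpy's eigenvalue solver.
eigs = np.sort(np.linalg.eigvals(A).real)
assert np.isclose(eigs[0], lam2) and np.isclose(eigs[1], lam1)
```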
A subspace S of ℝ^n is called an invariant subspace of A if, for any vector x ∈ S, Ax ∈ S. Suppose that dim(S) = k, and let X be an n × k matrix such that range(X) = S. Then, because each column of X is a vector in S, each column of AX is also a vector in S, and therefore is a linear combination of the columns of X. It follows that AX = XB, where B is a k × k matrix.

Now, suppose that y is an eigenvector of B, with eigenvalue λ. It follows from By = λy that

    XBy = X(By) = X(λy) = λXy,

but we also have

    XBy = (XB)y = AXy.

Therefore, we have

    A(Xy) = λ(Xy),

which implies that λ is also an eigenvalue of A, with corresponding eigenvector Xy. We conclude that λ(B) ⊆ λ(A).

If k = n, then X is an n × n invertible matrix, and it follows that A and B have the same eigenvalues. Furthermore, from AX = XB, we now have B = X^{-1}AX. We say that A and B are similar matrices, and that B is a similarity transformation of A.
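The claim that similar matrices share the same eigenvalues can be observed numerically; the following sketch (assuming numpy, with a random matrix standing in for a generic invertible X) compares the spectra of A and X^{-1}AX:

```python
# Check that a similarity transformation B = X^{-1} A X preserves the spectrum.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))   # a generic random X is invertible with probability 1
B = np.linalg.solve(X, A @ X)     # B = X^{-1} A X without forming the inverse explicitly

eigs_A = np.sort_complex(np.linalg.eigvals(A))
eigs_B = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eigs_A, eigs_B)
```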
Similarity transformations are essential tools in algorithms for computing the eigenvalues of a matrix A, since the basic idea is to apply a sequence of similarity transformations to A in order to obtain a new matrix B whose eigenvalues are easily obtained. For example, suppose that B has the 2 × 2 block structure

    B = [ B_11  B_12
          0     B_22 ],

where B_11 is p × p and B_22 is q × q.
Let x = [ x_1^T  x_2^T ]^T be an eigenvector of B, where x_1 ∈ ℝ^p and x_2 ∈ ℝ^q. Then, for some scalar λ ∈ λ(B), we have

    [ B_11  B_12 ] [ x_1 ]     [ x_1 ]
    [ 0     B_22 ] [ x_2 ] = λ [ x_2 ].

If x_2 ≠ 0, then B_22 x_2 = λx_2, and λ ∈ λ(B_22). But if x_2 = 0, then B_11 x_1 = λx_1, and λ ∈ λ(B_11). It follows that λ(B) ⊆ λ(B_11) ∪ λ(B_22). However, λ(B) and λ(B_11) ∪ λ(B_22) have the same number of elements, so the two sets must be equal. Because A and B are similar, we conclude that

    λ(A) = λ(B) = λ(B_11) ∪ λ(B_22).

Therefore, if we can use similarity transformations to reduce A to such a block structure, the problem of computing the eigenvalues of A decouples into two smaller problems of computing the eigenvalues of B_ii for i = 1, 2. Using an inductive argument, it can be shown that if A is block upper-triangular, then the eigenvalues of A are equal to the union of the eigenvalues of the diagonal blocks. If each diagonal block is 1 × 1, then it follows that the eigenvalues of any upper-triangular matrix are its diagonal elements. The same is true of any lower-triangular matrix; in fact, it can be shown that because det(A) = det(A^T), the eigenvalues of A^T are the same as the eigenvalues of A.
Example The matrix

    A = [ 1  -2   3  -3   4
          0   4  -5   6  -5
          0   0   6  -7   8
          0   0   0   7   0
          0   0   0  -8   9 ]

has eigenvalues 1, 4, 6, 7, and 9. This is because A has the block upper-triangular structure

    A = [ A_11  A_12
          0     A_22 ],

where

    A_11 = [ 1  -2   3
             0   4  -5
             0   0   6 ],    A_22 = [  7  0
                                      -8  9 ].

Because both of these blocks are themselves triangular, their eigenvalues are equal to their diagonal elements, and the spectrum of A is the union of the spectra of these blocks. □
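This example can be confirmed numerically; the numpy sketch below builds the matrix above and checks that its spectrum is exactly the set of diagonal entries:

```python
# Confirm that the block upper-triangular matrix from the example has the
# eigenvalues of its (triangular) diagonal blocks, i.e. its diagonal entries.
import numpy as np

A = np.array([[1, -2,  3, -3,  4],
              [0,  4, -5,  6, -5],
              [0,  0,  6, -7,  8],
              [0,  0,  0,  7,  0],
              [0,  0,  0, -8,  9]], dtype=float)

eigs = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigs, [1, 4, 6, 7, 9])
```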
Suppose that x is a normalized eigenvector of A, with eigenvalue λ. Furthermore, suppose that P is a Householder reflection such that Px = e_1. Because P is symmetric and orthogonal, P is its own inverse, so Pe_1 = x. It follows that the matrix P^T AP, which is a similarity transformation of A, satisfies

    P^T AP e_1 = P^T Ax = λP^T x = λPx = λe_1.

That is, e_1 is an eigenvector of P^T AP with eigenvalue λ, and therefore P^T AP has the block structure

    P^T AP = [ λ  v^T
               0  B   ].

Therefore, λ(A) = {λ} ∪ λ(B), which means that we can now focus on the (n − 1) × (n − 1) matrix B to find the rest of the eigenvalues of A. This process of reducing the eigenvalue problem for A to that of B is called deflation.
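One deflation step can be sketched in numpy as follows. The `householder` helper is a hypothetical name, not from these notes; with the usual sign convention it maps a unit vector x to ±e_1 rather than exactly e_1, which does not affect the block structure. A symmetric A is used so that the eigenpair is real:

```python
# One deflation step: build a Householder reflection from a computed unit
# eigenvector and observe the block structure of P^T A P described above.
import numpy as np

def householder(x):
    """Hypothetical helper: symmetric orthogonal P with P @ x = s*e1, s = +/-1 for unit x."""
    v = x.astype(float).copy()
    v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
    v /= np.linalg.norm(v)
    return np.eye(len(x)) - 2.0 * np.outer(v, v)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A = A + A.T                          # symmetric: real eigenvalues and eigenvectors
lam_all, V = np.linalg.eigh(A)
lam, x = lam_all[0], V[:, 0]         # one eigenpair; x is a unit vector

P = householder(x)
S = P.T @ A @ P                      # similarity transformation of A
assert np.allclose(S[1:, 0], 0, atol=1e-10)   # first column is lam * e1: deflated
assert np.isclose(S[0, 0], lam)
B = S[1:, 1:]                        # remaining (n-1) x (n-1) eigenvalue problem
assert np.allclose(np.sort(np.linalg.eigvalsh(B)), lam_all[1:])
```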
Continuing this process, we obtain the Schur decomposition

    A = QTQ^H,

where T is an upper-triangular matrix whose diagonal elements are the eigenvalues of A, and Q is a unitary matrix, meaning that Q^H Q = I. That is, a unitary matrix is the generalization of a real orthogonal matrix to complex matrices. Every square matrix has a Schur decomposition.

The columns of Q are called Schur vectors. However, for a general matrix A, there is no relation between the Schur vectors of A and the eigenvectors of A, as each Schur vector q_j satisfies Aq_j = AQe_j = QTe_j. That is, Aq_j is a linear combination of q_1, ..., q_j. It follows that for j = 1, 2, ..., n, the first j Schur vectors q_1, q_2, ..., q_j span an invariant subspace of A.
The Schur vectors and eigenvectors of A are the same when A is a normal matrix, which means that A^H A = AA^H. Any real symmetric or skew-symmetric matrix, for example, is normal. It can be shown that in this case, the normalized eigenvectors of A form an orthonormal basis for ℂ^n (or for ℝ^n, if A is real and symmetric). It follows that if λ_1, λ_2, ..., λ_n are the eigenvalues of A, with corresponding (orthonormal) eigenvectors q_1, q_2, ..., q_n, then we have

    AQ = QD,   Q = [ q_1 ⋯ q_n ],   D = diag(λ_1, ..., λ_n).

Because Q is a unitary matrix, it follows that

    Q^H AQ = Q^H QD = D,

and A is similar to a diagonal matrix. We say that A is diagonalizable. Furthermore, because D can be obtained from A by a similarity transformation involving a unitary matrix, we say that A is unitarily diagonalizable.

Even if A is not a normal matrix, it may be diagonalizable, meaning that there exists an invertible matrix P such that P^{-1}AP = D, where D is a diagonal matrix. If this is the case, then, because AP = PD, the columns of P are eigenvectors of A, and the rows of P^{-1} are eigenvectors of A^T (as well as the left eigenvectors of A, if P is real).
By definition, an eigenvalue of A corresponds to at least one eigenvector. Because any nonzero scalar multiple of an eigenvector is also an eigenvector, corresponding to the same eigenvalue, an eigenvalue actually corresponds to an eigenspace, which is the span of any set of eigenvectors corresponding to the same eigenvalue, and this eigenspace must have a dimension of at least one. Any invariant subspace of a diagonalizable matrix A is a union of eigenspaces.

Now, suppose that λ_1 and λ_2 are distinct eigenvalues, with corresponding eigenvectors x_1 and x_2, respectively. Furthermore, suppose that x_1 and x_2 are linearly dependent. This means that they must be parallel; that is, there exists a nonzero constant c such that x_2 = cx_1. However, this implies that Ax_2 = λ_2 x_2 and Ax_2 = cAx_1 = cλ_1 x_1 = λ_1 x_2. Because λ_1 ≠ λ_2, this is a contradiction. Therefore, x_1 and x_2 must be linearly independent.

More generally, it can be shown, using an inductive argument, that a set of k eigenvectors corresponding to k distinct eigenvalues must be linearly independent. Suppose that x_1, ..., x_k are eigenvectors of A, with distinct eigenvalues λ_1, ..., λ_k. Trivially, x_1 is linearly independent. Using induction, we assume that we have shown that x_1, ..., x_{k-1} are linearly independent, and show that x_1, ..., x_k must be linearly independent as well. If they are not, then there must be constants c_1, ..., c_{k-1}, not all zero, such that

    x_k = c_1 x_1 + c_2 x_2 + ⋯ + c_{k-1} x_{k-1}.

Multiplying both sides by A yields

    Ax_k = c_1 λ_1 x_1 + c_2 λ_2 x_2 + ⋯ + c_{k-1} λ_{k-1} x_{k-1},

because Ax_i = λ_i x_i for i = 1, 2, ..., k − 1. However, because Ax_k = λ_k x_k, multiplying both sides of the original relation by λ_k also yields

    Ax_k = c_1 λ_k x_1 + c_2 λ_k x_2 + ⋯ + c_{k-1} λ_k x_{k-1}.

Subtracting, it follows that

    c_1(λ_k − λ_1)x_1 + c_2(λ_k − λ_2)x_2 + ⋯ + c_{k-1}(λ_k − λ_{k-1})x_{k-1} = 0.

However, because the eigenvalues λ_1, ..., λ_k are distinct, and not all of the coefficients c_1, ..., c_{k-1} are zero, this means that we have a nontrivial linear combination of linearly independent vectors being equal to the zero vector, which is a contradiction. We conclude that eigenvectors corresponding to distinct eigenvalues are linearly independent.

It follows that if A has n distinct eigenvalues, then it has a set of n linearly independent eigenvectors. If X is a matrix whose columns are these eigenvectors, then AX = XD, where D is a diagonal matrix of the eigenvalues, and because the columns of X are linearly independent, X is invertible. Therefore X^{-1}AX = D, and A is diagonalizable.
Now, suppose that the eigenvalues of A are not distinct; that is, the characteristic polynomial has repeated roots. Then an eigenvalue with multiplicity m does not necessarily correspond to m linearly independent eigenvectors. The algebraic multiplicity of an eigenvalue λ is the number of times that λ occurs as a root of the characteristic polynomial. The geometric multiplicity of λ is the dimension of the eigenspace corresponding to λ, which is equal to the maximal size of a set of linearly independent eigenvectors corresponding to λ. The geometric multiplicity of an eigenvalue λ is always less than or equal to its algebraic multiplicity. When it is strictly less, we say that the eigenvalue is defective. When both multiplicities are equal to one, we say that the eigenvalue is simple.
The Jordan canonical form of an n × n matrix A is a decomposition that yields information about the eigenspaces of A. It has the form

    A = XJX^{-1},

where J has the block diagonal structure

    J = [ J_1  0    ⋯   0
          0    J_2  ⋱   ⋮
          ⋮    ⋱    ⋱   0
          0    ⋯    0   J_p ].

Each diagonal block J_i is a Jordan block of the form

    J_i = [ λ_i  1
                 λ_i  ⋱
                      ⋱   1
                          λ_i ],   i = 1, 2, ..., p.

The number of Jordan blocks, p, is equal to the number of linearly independent eigenvectors of A. The diagonal element of J_i, λ_i, is an eigenvalue of A. The number of Jordan blocks associated with λ_i is equal to the geometric multiplicity of λ_i. The sum of the sizes of these blocks is equal to the algebraic multiplicity of λ_i. If A is diagonalizable, then each Jordan block is 1 × 1.
Example Consider a matrix with Jordan canonical form

    J = [ 2  1  0
          0  2  1
          0  0  2
                   3  1
                   0  3
                         2 ],

where all entries outside the three diagonal blocks are zero. The eigenvalues of this matrix are 2, with algebraic multiplicity 4, and 3, with algebraic multiplicity 2. The geometric multiplicity of the eigenvalue 2 is 2, because it is associated with two Jordan blocks. The geometric multiplicity of the eigenvalue 3 is 1, because it is associated with only one Jordan block. Therefore, there are a total of three linearly independent eigenvectors, and the matrix is not diagonalizable. □
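The multiplicities in this example can be checked numerically: the geometric multiplicity of λ is dim null(J − λI) = n − rank(J − λI). A numpy sketch, with all off-block entries zero as in the example:

```python
# Recover the geometric multiplicity of each eigenvalue of the 6x6 Jordan
# matrix from the example as n - rank(J - lambda*I).
import numpy as np

J = np.zeros((6, 6))
J[0:3, 0:3] = [[2, 1, 0], [0, 2, 1], [0, 0, 2]]   # 3x3 Jordan block for eigenvalue 2
J[3:5, 3:5] = [[3, 1], [0, 3]]                    # 2x2 Jordan block for eigenvalue 3
J[5, 5] = 2                                       # 1x1 Jordan block for eigenvalue 2

n = J.shape[0]
geo = lambda lam: n - np.linalg.matrix_rank(J - lam * np.eye(n))
assert geo(2.0) == 2      # two Jordan blocks for eigenvalue 2
assert geo(3.0) == 1      # one Jordan block for eigenvalue 3
```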
The Jordan canonical form, while very informative about the eigensystem of A, is not practical to compute using floating-point arithmetic. This is due to the fact that while the eigenvalues of a matrix are continuous functions of its entries, the Jordan canonical form is not. If two computed eigenvalues are nearly equal, and their computed corresponding eigenvectors are nearly parallel, we do not know if they represent two distinct eigenvalues with linearly independent eigenvectors, or a multiple eigenvalue that could be defective.
The Symmetric Eigenvalue Problem
The eigenvalue problem for a real, symmetric matrix A, or a complex, Hermitian matrix A, for which A = A^H, is a considerable simplification of the eigenvalue problem for a general matrix. Consider the Schur decomposition A = QTQ^H, where T is upper-triangular. Then, if A is Hermitian, it follows that T = T^H. But because T is upper-triangular, it follows that T must be diagonal. That is, any symmetric real matrix, or Hermitian complex matrix, is unitarily diagonalizable, as stated previously, because A is normal. What's more, because the Hermitian transpose includes complex conjugation, T must equal its complex conjugate, which implies that the eigenvalues of A are real, even if A itself is complex.

Because the eigenvalues are real, we can order them. By convention, we prescribe that if A is an n × n symmetric matrix, then it has eigenvalues

    λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n.

Furthermore, by the Courant-Fischer Minimax Theorem, each of these eigenvalues has the following characterization:

    λ_k =    max        min     (y^H A y)/(y^H y).
          dim(S)=k   y∈S, y≠0

That is, the kth largest eigenvalue of A is equal to the maximum, over all k-dimensional subspaces S of ℝ^n, of the minimum value of the Rayleigh quotient

    r(y) = (y^H A y)/(y^H y),   y ≠ 0,

on each subspace. It follows that λ_1, the largest eigenvalue, is the absolute maximum value of the Rayleigh quotient on all of ℝ^n, while λ_n, the smallest eigenvalue, is the absolute minimum value. In fact, by computing the gradient of r(y), it can be shown that every eigenvector of A is a critical point of r(y), with the corresponding eigenvalue being the value of r(y) at that critical point.