Chapter 6: Eigenvalues
Example In a certain town, 30 percent of the married
women get divorced each year and 20 percent of the single
women get married each year. There are 8000 married
women and 2000 single women, and the total population
remains constant.
Let us investigate the long-range prospects if these percentages of marriage and divorce continue indefinitely into the future.
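To see where the example is heading, here is a minimal numerical sketch (added for illustration, using NumPy) that simply iterates the year-to-year transition described above; the matrix entries restate the given percentages.

```python
import numpy as np

# One-year transition: 70% of married women stay married, 30% get divorced;
# 20% of single women get married, 80% stay single.
A = np.array([[0.7, 0.2],
              [0.3, 0.8]])

x = np.array([8000.0, 2000.0])   # initial numbers of (married, single) women
for _ in range(50):              # apply the transition for many years
    x = A @ x

print(x)   # approaches the steady state [4000, 6000]
```

The limiting vector is an eigenvector of the transition matrix belonging to the eigenvalue 1, which is exactly the kind of question this chapter studies.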
1 Eigenvalues and Eigenvectors
Definition
Let A be an n×n matrix. A scalar $\lambda$ is said to be an eigenvalue or a characteristic value of A if there exists a nonzero vector x such that $Ax = \lambda x$. The vector x is said to be an eigenvector or a characteristic vector belonging to $\lambda$.
Example Let
$$A = \begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix} \qquad\text{and}\qquad x = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$$
The subspace N(A- I) is called the eigenspace corresponding
to the eigenvalue
.
The polynomial p( )  det( A  I ) is called the characteristic
polynomial, and equation det( A  I )  0 is called the
characteristic equation for the matrix A.
Let A be an n×n matrix and $\lambda$ be a scalar. The following statements are equivalent:
(a) $\lambda$ is an eigenvalue of A.
(b) $(A - \lambda I)x = 0$ has a nontrivial solution.
(c) $N(A - \lambda I) \neq \{0\}$
(d) $A - \lambda I$ is singular.
(e) $\det(A - \lambda I) = 0$
Example Find the eigenvalues and the corresponding
eigenvectors of the matrix
3 2 

A  
3  2
2  3 1


Example Let A   1  2 1 
 1  3 2


Find the eigenvalues and the corresponding eigenspaces.
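As a numerical cross-check (a minimal NumPy sketch, added for illustration), the eigenvalues and eigenvectors of a matrix such as this one can be computed directly; hand computation, of course, proceeds through the characteristic equation $\det(A - \lambda I) = 0$.

```python
import numpy as np

A = np.array([[2.0, -3.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, -3.0, 2.0]])

# np.linalg.eig returns the eigenvalues together with a matrix whose
# columns are corresponding (unit-length) eigenvectors.
vals, vecs = np.linalg.eig(A)
print(vals)                              # approximately [0, 1, 1]
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))   # True for each eigenvalue/eigenvector pair
```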
The Product and Sum of the Eigenvalues
$$p(\lambda) = \det(A - \lambda I) = \begin{vmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \end{vmatrix}$$
Expanding along the first column, we get
$$\det(A - \lambda I) = (a_{11} - \lambda)\det(M_{11}) + \sum_{i=2}^{n} a_{i1}(-1)^{i+1}\det(M_{i1})$$
Since $p(\lambda) = (\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)$, setting $\lambda = 0$ gives
$$\lambda_1\lambda_2\cdots\lambda_n = p(0) = \det(A)$$
Comparing the coefficients of $\lambda^{n-1}$ in the two expressions for $p(\lambda)$ gives
$$\sum_{i=1}^{n} \lambda_i = \sum_{i=1}^{n} a_{ii}$$
The sum of the diagonal elements of A is called the trace of A
and is denoted by tr(A).
Example If
$$A = \begin{pmatrix} 5 & -18 \\ 1 & -1 \end{pmatrix},$$
compute the eigenvalues of A and verify that their product equals det(A) and their sum equals tr(A).
Some Properties of the Eigenvalues:
1. Let A be a nonsingular matrix and let $\lambda$ be an eigenvalue of A; then $1/\lambda$ is an eigenvalue of $A^{-1}$.
2. Let $\lambda$ be an eigenvalue of A and let x be an eigenvector belonging to $\lambda$; then $\lambda^m$ is an eigenvalue of $A^m$ and x is an eigenvector of $A^m$ belonging to $\lambda^m$, for m = 1, 2, ….
3. Let $f(x) = a_0x^m + a_1x^{m-1} + \cdots + a_{m-1}x + a_m$, and let $\lambda$ be an eigenvalue of A; then $f(\lambda)$ is an eigenvalue of $f(A)$.
Example If the eigenvalues of a matrix A are 2, 1, and −1, find the eigenvalues of the following matrices:
(a) $A^2 + A + I$
(b) $A + A^{-1}$
Similar Matrices
Theorem 6.1.1 Let A and B be n×n matrices. If B is similar to
A, then the two matrices both have the same characteristic
polynomial and consequently both have the same eigenvalues.
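The proof is a one-line determinant computation: if $B = X^{-1}AX$ for some nonsingular matrix X, then
$$\det(B - \lambda I) = \det\!\big(X^{-1}(A - \lambda I)X\big) = \det(X^{-1})\det(A - \lambda I)\det(X) = \det(A - \lambda I),$$
so A and B have the same characteristic polynomial.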
3 Diagonalization
Theorem 6.3.1 If $\lambda_1, \lambda_2, \ldots, \lambda_k$ are distinct eigenvalues of an n×n matrix A with corresponding eigenvectors $x_1, x_2, \ldots, x_k$, then $x_1, \ldots, x_k$ are linearly independent.
Definition
An n×n matrix A is said to be diagonalizable if there exists a
nonsingular matrix X and a diagonal matrix D such that
$X^{-1}AX = D$
We say that X diagonalizes A.
Theorem 6.3.2 An n×n matrix A is diagonalizable if and
only if A has n linearly independent eigenvectors.
Remarks
1. If A is diagonalizable, then the column vectors of the
diagonalizing matrix X are eigenvectors of A, and the diagonal
elements of D are the corresponding eigenvalues of A.
2. The diagonalizing matrix X is not unique. Reordering the
columns of a given diagonalizing matrix X or multiplying them
by nonzero scalars will produce a new diagonalizing matrix.
3. If A is an n×n matrix and A has n distinct eigenvalues, then
A is diagonalizable. If the eigenvalues are not distinct, then A
may or may not be diagonalizable depending on whether A has
n linearly independent eigenvectors.
4. If A is diagonalizable, then A can be factored into a product $XDX^{-1}$, and consequently
$$A^k = XD^kX^{-1} = X\begin{pmatrix} \lambda_1^k & & & \\ & \lambda_2^k & & \\ & & \ddots & \\ & & & \lambda_n^k \end{pmatrix}X^{-1}$$
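As a rough numerical illustration of this factorization (a minimal NumPy sketch, reusing the 2×2 matrix from the first example of the chapter):

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [1.0,  1.0]])        # eigenvalues 3 and 2, so A is diagonalizable

vals, X = np.linalg.eig(A)          # columns of X are eigenvectors of A
k = 5
Ak_via_diag = X @ np.diag(vals**k) @ np.linalg.inv(X)   # X D^k X^{-1}
Ak_direct = np.linalg.matrix_power(A, k)

print(np.allclose(Ak_via_diag, Ak_direct))   # True
```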
Example Let
 2  3

A  
 2  5
Determine whether the matrix is diagonalizable or not.
Example Let
 3 1  2 


A   2 0  2
 2 1 1 


Determine whether the matrix is diagonalizable or not.
Definition
If an n×n matrix A has fewer than n linearly independent
eigenvectors, we say that A is defective.
Theorem 6.3.3
If A is an n×n matrix and $\lambda_1, \lambda_2, \ldots, \lambda_s$ are s distinct eigenvalues of A, let $x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(n-r_i)}$ be a basis of $N(\lambda_i I - A)$, where $r_i = \operatorname{rank}(\lambda_i I - A)$ for $i = 1, \ldots, s$. Then the combined set
$$x_1^{(1)}, x_1^{(2)}, \ldots, x_1^{(n-r_1)},\; x_2^{(1)}, x_2^{(2)}, \ldots, x_2^{(n-r_2)},\; \ldots,\; x_s^{(1)}, x_s^{(2)}, \ldots, x_s^{(n-r_s)}$$
is linearly independent.
Example Let
 2 0 0


A   0 4 0
 1 0 2


 2 0 0


and B    1 4 0 
  3 6 2


Determine whether the two matrices are diagonalizable or not.
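A minimal NumPy sketch of the underlying check (entries of A and B as reconstructed above): for each repeated eigenvalue, compare the geometric multiplicity $n - \operatorname{rank}(A - \lambda I)$ with its algebraic multiplicity.

```python
import numpy as np

def geometric_multiplicity(M, lam):
    """Dimension of the eigenspace N(M - lam*I)."""
    n = M.shape[0]
    return n - np.linalg.matrix_rank(M - lam * np.eye(n))

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 4.0, 0.0],
              [1.0, 0.0, 2.0]])
B = np.array([[ 2.0, 0.0, 0.0],
              [-1.0, 4.0, 0.0],
              [-3.0, 6.0, 2.0]])

# Both matrices have eigenvalues 2, 2, 4; the repeated eigenvalue is 2.
print(geometric_multiplicity(A, 2.0))   # 1 -> only one independent eigenvector for 2: A is defective
print(geometric_multiplicity(B, 2.0))   # 2 -> B has three independent eigenvectors in total
```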
Some Results for Real Symmetric Matrices:
1. The eigenvalues of a real symmetric matrix are all real.
2. If $\lambda_1, \lambda_2, \ldots, \lambda_k$ are distinct eigenvalues of an n×n real symmetric matrix A with corresponding eigenvectors $x_1, x_2, \ldots, x_k$, then $x_1, \ldots, x_k$ are orthogonal.
3. If A is a real symmetric matrix, then there is an orthogonal matrix U that diagonalizes A; that is, $U^{-1}AU = U^TAU = D$, where D is diagonal.
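Result 3 is easy to see numerically (a minimal sketch; NumPy's eigh routine is specialized for symmetric matrices and returns exactly such an orthogonal U; the matrix here is the one from the first example below, with entries as reconstructed):

```python
import numpy as np

A = np.array([[1.0,  2.0,  0.0],
              [2.0,  2.0, -2.0],
              [0.0, -2.0,  3.0]])

vals, U = np.linalg.eigh(A)    # real eigenvalues, orthonormal eigenvectors as columns

print(vals)                                     # approximately [-1, 2, 5]
print(np.allclose(U.T @ U, np.eye(3)))          # True: U is orthogonal
print(np.allclose(U.T @ A @ U, np.diag(vals)))  # True: U^T A U = D
```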
1 2 0 
Example Let A   2 2  2 
0  2 3 


Find an orthogonal matrix U that diagonalizes A.
 2 2  2
Example Let A   2 5  4 
 2  4 5 


Find an orthogonal matrix U that diagonalizes A.
6 Quadratic Forms
Definition
A quadratic equation in two variables x and y is an equation
of the form
$$ax^2 + 2bxy + cy^2 + dx + ey + f = 0 \qquad (1)$$
Equation (1) may be rewritten in the form
$$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} a & b \\ b & c \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} d & e \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + f = 0 \qquad (2)$$
Let
$$\mathbf{x} = \begin{pmatrix} x \\ y \end{pmatrix} \qquad\text{and}\qquad A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$$
The term
$$\mathbf{x}^TA\mathbf{x} = ax^2 + 2bxy + cy^2$$
is called the quadratic form associated with (1).
Conic Sections
The graph of an equation of the form (1) is called a conic section.
A conic section is said to be in standard position if its equation
can be put into one of these four standard forms:
(1) $x^2 + y^2 = r^2$ (circle)
(2) $\dfrac{x^2}{\alpha^2} + \dfrac{y^2}{\beta^2} = 1$ (ellipse)
(3) $\dfrac{x^2}{\alpha^2} - \dfrac{y^2}{\beta^2} = 1$ or $\dfrac{y^2}{\beta^2} - \dfrac{x^2}{\alpha^2} = 1$ (hyperbola)
(4) $x^2 = \alpha y$ or $y^2 = \alpha x$ (parabola)
Example Consider the conic section
$$3x^2 + 2xy + 3y^2 - 8 = 0$$
This equation can be written in the form
$$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = 8$$
The matrix
$$\begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$$
has eigenvalues $\lambda = 2$ and $\lambda = 4$, with corresponding unit eigenvectors
$$\begin{pmatrix} \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} \tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} \end{pmatrix}$$
Let
$$Q = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} \cos 45^\circ & \sin 45^\circ \\ -\sin 45^\circ & \cos 45^\circ \end{pmatrix}$$
and set
$$\begin{pmatrix} x \\ y \end{pmatrix} = Q\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{pmatrix}\begin{pmatrix} x' \\ y' \end{pmatrix}$$
Thus
$$Q^TAQ = \begin{pmatrix} 2 & 0 \\ 0 & 4 \end{pmatrix}$$
and the equation of the conic becomes
$$\frac{(x')^2}{4} + \frac{(y')^2}{2} = 1$$
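Spelling out the substitution step: with $\mathbf{x} = Q\mathbf{x}'$,
$$\mathbf{x}^TA\mathbf{x} = (Q\mathbf{x}')^TA(Q\mathbf{x}') = \mathbf{x}'^T(Q^TAQ)\mathbf{x}' = 2(x')^2 + 4(y')^2 = 8,$$
and dividing by 8 gives the ellipse equation above, whose axes lie along the rotated $x'$- and $y'$-directions.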
Quadratic Surfaces
(1) $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} + \dfrac{z^2}{c^2} = 1$ (ellipsoid)
(2) $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} - \dfrac{z^2}{c^2} = 0$ (cone)
(3) $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} - \dfrac{z^2}{c^2} = 1$ (hyperboloid of one sheet)
(4) $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 2z$ (elliptic paraboloid)
(5) $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} - \dfrac{z^2}{c^2} = -1$ (hyperboloid of two sheets)
(6) $\dfrac{x^2}{a^2} - \dfrac{y^2}{b^2} = 2z$ (hyperbolic paraboloid)
A quadratic form in n variables is
$$\begin{aligned} f(x_1, x_2, \ldots, x_n) &= a_{11}x_1^2 + 2a_{12}x_1x_2 + 2a_{13}x_1x_3 + \cdots + 2a_{1n}x_1x_n \\ &\quad + a_{22}x_2^2 + 2a_{23}x_2x_3 + \cdots + 2a_{2n}x_2x_n \\ &\quad + \cdots + a_{nn}x_n^2 \\ &= \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix}\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{12} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = X^TAX \end{aligned}$$
Theorem 6.3.5
For any quadratic form $X^TAX$, we can find an orthogonal transformation $X = CY$ such that $X^TAX = Y^TBY$ is in standard form, where $B = C^TAC$ is diagonal.
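A minimal NumPy sketch of what the theorem promises (the 3-variable form used here is the conic-section example below, reading all cross terms with plus signs as in the source): the columns of C are orthonormal eigenvectors of A, and the coefficients of the standard form are the eigenvalues.

```python
import numpy as np

# Symmetric matrix of the form f = 2*x1*x2 + 2*x1*x3 + 2*x2*x3
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

vals, C = np.linalg.eigh(A)          # C is orthogonal; its columns are eigenvectors of A
print(vals)                                      # approximately [-1, -1, 2]
print(np.allclose(C.T @ A @ C, np.diag(vals)))   # True: coefficients of the standard form

# Check at a random point: f(X) equals the standard form evaluated at Y = C^T X
X = np.random.rand(3)
Y = C.T @ X
print(np.isclose(X @ A @ X, np.sum(vals * Y**2)))   # True
```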
Example Let
f  x1 x2 x3 x4   2 x1 x2  2 x1 x3  2 x1 x4  2 x2 x3  2 x2 x4  2 x3 x4
Find an orthogonal transformation X=CY such that YTBY is in
Standard form.
Example For the conic section
$$2x_1x_2 + 2x_1x_3 + 2x_2x_3 - 1 = 0$$
Find an orthogonal transformation X = CY such that $Y^TBY$ is in standard form.
Definition
A quadratic form $f(x) = x^TAx$ is said to be definite if it takes on only one sign as x varies over all nonzero vectors in $R^n$. The form is positive definite if $x^TAx > 0$ for all nonzero x in $R^n$ and negative definite if $x^TAx < 0$ for all nonzero x in $R^n$.
A quadratic form is said to be indefinite if it takes on values that differ in sign.
If $f(x) = x^TAx \ge 0$ and assumes the value 0 for some $x \ne 0$, then $f(x)$ is said to be positive semidefinite.
If $f(x) \le 0$ and assumes the value 0 for some $x \ne 0$, then $f(x)$ is said to be negative semidefinite.
Definition
A real symmetric matrix A is said to be
I. Positive definite if $x^TAx > 0$ for all nonzero x in $R^n$.
II. Negative definite if $x^TAx < 0$ for all nonzero x in $R^n$.
III. Positive semidefinite if $x^TAx \ge 0$ for all nonzero x in $R^n$.
IV. Negative semidefinite if $x^TAx \le 0$ for all nonzero x in $R^n$.
V. Indefinite if $x^TAx$ takes on values that differ in sign.
Theorem 6.6.2
Let A be a real symmetric n×n matrix. Then A is positive definite if and only if all its eigenvalues are positive.
Theorem 6.6.3
Let A be a real symmetric n×n matrix. Then A is positive definite if and only if the leading principal submatrices $A_1, A_2, \ldots, A_n$ of A all have positive determinants.
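A small sketch of this determinant test (a generic NumPy helper, added for illustration; the matrix is the one associated with the quadratic form in the next example, with signs as reconstructed there):

```python
import numpy as np

def is_positive_definite(A):
    """Test a symmetric matrix via its leading principal minors."""
    n = A.shape[0]
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    return all(m > 0 for m in minors), minors

A = np.array([[ 5.0, -2.0,  0.0],
              [-2.0,  6.0, -2.0],
              [ 0.0, -2.0,  4.0]])

ok, minors = is_positive_definite(A)
print(minors)   # approximately [5.0, 26.0, 84.0]
print(ok)       # True
```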
Example Determine whether the quadratic form
$$f(x_1, x_2, x_3) = 5x_1^2 + 6x_2^2 + 4x_3^2 - 4x_1x_2 - 4x_2x_3$$
is positive definite.
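Writing $f = x^TAx$ with
$$A = \begin{pmatrix} 5 & -2 & 0 \\ -2 & 6 & -2 \\ 0 & -2 & 4 \end{pmatrix}$$
(the cross terms are taken as $-4x_1x_2$ and $-4x_2x_3$; with plus signs the leading principal minors come out the same), the leading principal minors are $5$, $5\cdot 6 - 4 = 26$, and $\det A = 84$, all positive. By Theorem 6.6.3 the form is positive definite.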