Chapter 3 Linear Algebra
February 29 Matrices
3.2 Matrices; Row reduction
Standard form of a set of linear equations:
$$\begin{cases} 2x + 5y + z = 2 \\ x + y + 2z = 1 \\ x + 5z = 3 \end{cases}$$
Matrix of coefficients:
$$M = \begin{pmatrix} 2 & 5 & 1 \\ 1 & 1 & 2 \\ 1 & 0 & 5 \end{pmatrix}$$
Augmented matrix:
$$A = \begin{pmatrix} 2 & 5 & 1 & 2 \\ 1 & 1 & 2 & 1 \\ 1 & 0 & 5 & 3 \end{pmatrix}$$
Elementary row operations:
1) Row switching.
2) Row multiplication by a nonzero number.
3) Adding a multiple of a row to another row.
These operations are reversible.
Solving a set of linear equations by row reduction:
 * * * *
*
Method :

 As close as possible 
*
*
*
*

     0
 * * * *
0



1

 0
0

* * *  *
 
* * *   0
* * *  0
* * *  1
 
1 * *   0
0 1 *  0
* * *

* * *
Row reduced matrix
0 * *
* * *  1 0 0 *
 

1 0 *   0 1 0 *
0 1 *  0 0 1 *
Example:
$$\begin{pmatrix} 2 & 5 & 1 & 2 \\ 1 & 1 & 2 & 1 \\ 1 & 0 & 5 & 3 \end{pmatrix}
\xrightarrow{(1)\leftrightarrow(2)}
\begin{pmatrix} 1 & 1 & 2 & 1 \\ 2 & 5 & 1 & 2 \\ 1 & 0 & 5 & 3 \end{pmatrix}
\xrightarrow[(3)-(1)]{(2)-2(1)}
\begin{pmatrix} 1 & 1 & 2 & 1 \\ 0 & 3 & -3 & 0 \\ 0 & -1 & 3 & 2 \end{pmatrix}
\xrightarrow{(2)/3}
\begin{pmatrix} 1 & 1 & 2 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 3 & 2 \end{pmatrix}$$
$$\xrightarrow{(3)+(2)}
\begin{pmatrix} 1 & 1 & 2 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 2 & 2 \end{pmatrix}
\xrightarrow{(3)/2}
\begin{pmatrix} 1 & 1 & 2 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}
\xrightarrow[(1)-2(3)]{(2)+(3)}
\begin{pmatrix} 1 & 1 & 0 & -1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}
\xrightarrow{(1)-(2)}
\begin{pmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}
\Rightarrow \begin{cases} x = -2 \\ y = 1 \\ z = 1 \end{cases}$$
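As a quick machine check of this example (a minimal sketch; SymPy's Matrix.rref row-reduces exactly, though any CAS would do):

```python
from sympy import Matrix

# Augmented matrix of the example system above
A = Matrix([[2, 5, 1, 2],
            [1, 1, 2, 1],
            [1, 0, 5, 3]])

R, pivots = A.rref()   # row-reduced form and pivot columns
print(R)               # Matrix([[1, 0, 0, -2], [0, 1, 0, 1], [0, 0, 1, 1]])
```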
Rank of a matrix:
The number of nonzero rows in the row-reduced matrix. It is the maximal number
of linearly independent row (or column) vectors of the matrix.
Rank of A = Rank of $A^T$ (where $(A^T)_{ij} = (A)_{ji}$ is the transpose matrix).
Possible cases of the solution of a set of linear equations:
Solving m equations with n unknowns:
1) If Rank M < Rank A, the equations are inconsistent and there is no solution.
 * * * *


 0 * * *
 0 0 0 *


2) If Rank M = Rank A = n, there is one solution.
 * * * *


0
*
*
*


 0 0 * *


3) If Rank M = Rank A = R < n, then R unknowns can be expressed in terms of the
remaining n − R unknowns.
 * * * *


 0 * * *
 0 0 0 0


3
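These rank conditions are easy to test numerically; a minimal sketch with NumPy's matrix_rank, using the example system from above:

```python
import numpy as np

M = np.array([[2, 5, 1],
              [1, 1, 2],
              [1, 0, 5]])
A = np.column_stack([M, [2, 1, 3]])   # augmented matrix

# Rank M = Rank A = 3 = n, so the system has exactly one solution
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(A))   # 3 3
```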
Read: Chapter 3: 1-2
Homework: 3.2.4,8,10,12,15.
Due: March 11
March 7 Determinants
3.3 Determinants; Cramer's rule
Determinant of an $n\times n$ matrix:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix},\qquad \det A = |A| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$
Minor: the minor $M_{ij}$ of the element $a_{ij}$ of
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$
is the determinant of the submatrix obtained by deleting row $i$ and column $j$ of $A$.
Cofactor: $C_{ij} = (-1)^{i+j}M_{ij}$
Determinant of a $1\times 1$ matrix: $|a| = a$
Definition of the determinant of an $n\times n$ matrix: $|A| = \sum_j a_{1j}C_{1j} = \sum_j (-1)^{1+j}a_{1j}M_{1j}$
Equivalent methods:
1) A determinant can be expanded along any row or any column: $|A| = \sum_j a_{ij}C_{ij}$ (any fixed row $i$) $= \sum_i a_{ij}C_{ij}$ (any fixed column $j$).
2) $|A| = |A^T|$.
Example p90.1
Triple product of 3 vectors:
$$\mathbf{A}\cdot(\mathbf{B}\times\mathbf{C}) = \mathbf{C}\cdot(\mathbf{A}\times\mathbf{B}) = \mathbf{B}\cdot(\mathbf{C}\times\mathbf{A}) = \begin{vmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{vmatrix}$$
$=$ (signed) volume of the parallelepiped formed by the 3 vectors.
Useful properties of determinants:
1) A common factor in a row (column) may be factored out.
2) Interchanging two rows (columns) changes the sign of the determinant.
3) A multiple of one row (column) can be added to another row (column) without
changing the determinant.
4) The determinant is zero if two rows (columns) are identical or proportional.
Example p91.2.
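The expansion by cofactors translates directly into a short recursive routine; a sketch (not efficient for large n), checked against NumPy's determinant:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    if A.shape[0] == 1:
        return A[0, 0]
    return sum((-1) ** j * A[0, j] *
               det_cofactor(np.delete(np.delete(A, 0, axis=0), j, axis=1))
               for j in range(A.shape[1]))

M = [[2, 5, 1], [1, 1, 2], [1, 0, 5]]
print(det_cofactor(M), np.linalg.det(M))   # both -6.0 (up to rounding)
```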
Cramer's rule in solving a set of linear equations:
$$\left.\begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= c_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= c_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= c_3 \end{aligned}\right\}\ \Rightarrow$$
$$x_1\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
= \begin{vmatrix} a_{11}x_1 & a_{12} & a_{13} \\ a_{21}x_1 & a_{22} & a_{23} \\ a_{31}x_1 & a_{32} & a_{33} \end{vmatrix}
= \begin{vmatrix} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 & a_{12} & a_{13} \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 & a_{22} & a_{23} \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 & a_{32} & a_{33} \end{vmatrix}
= \begin{vmatrix} c_1 & a_{12} & a_{13} \\ c_2 & a_{22} & a_{23} \\ c_3 & a_{32} & a_{33} \end{vmatrix}$$
(multiplying the first column by $x_1$, then adding $x_2$ times column 2 and $x_3$ times column 3 to it). Hence
$$x_1 = \begin{vmatrix} c_1 & a_{12} & a_{13} \\ c_2 & a_{22} & a_{23} \\ c_3 & a_{32} & a_{33} \end{vmatrix} \Big/ \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix},$$
and similar solutions for $x_2$ and $x_3$.
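A sketch of Cramer's rule in code, applied to the system from Section 3.2 (column $i$ of M is replaced by the constants, exactly as in the derivation above):

```python
import numpy as np

def cramer(M, c):
    """Solve M x = c by Cramer's rule (square M, det M != 0)."""
    M, c = np.asarray(M, dtype=float), np.asarray(c, dtype=float)
    d = np.linalg.det(M)
    x = np.empty(len(c))
    for i in range(len(c)):
        Mi = M.copy()
        Mi[:, i] = c                  # replace column i by the constants
        x[i] = np.linalg.det(Mi) / d
    return x

M, c = [[2, 5, 1], [1, 1, 2], [1, 0, 5]], [2, 1, 3]
print(cramer(M, c))                   # [-2.  1.  1.]
print(np.linalg.solve(M, c))          # same answer
```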
Theorem:
For homogeneous linear equations, the determinant of the coefficient matrix must be zero
for a nontrivial solution to exist.
$$\left.\begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= 0 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= 0 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= 0 \end{aligned}\right\}\quad
\text{If } \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \neq 0,\ \text{then}\quad
x_1 = \begin{vmatrix} 0 & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{vmatrix} \Big/ \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = 0$$
(and likewise $x_2 = x_3 = 0$). So a nontrivial solution requires
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = 0.$$
Read: Chapter 3: 3
Homework: 3.3.1,10,15,17.
(No computer work is needed.)
Due: March 25
March 11 Vectors
3.4 Vectors
Vector: A quantity that has both a magnitude and a direction.
Geometrical representation of a vector: an arrow with a length and a direction.
Addition and subtraction:
Vector addition is commutative and associative.
Algebraic representation of a vector:
$$\mathbf{A} = (A_x, A_y, A_z) = A_x\hat{\mathbf{i}} + A_y\hat{\mathbf{j}} + A_z\hat{\mathbf{k}}$$
$$\mathbf{A} + \mathbf{B} = (A_x + B_x)\hat{\mathbf{i}} + (A_y + B_y)\hat{\mathbf{j}} + (A_z + B_z)\hat{\mathbf{k}}$$
$$c\mathbf{A} = cA_x\hat{\mathbf{i}} + cA_y\hat{\mathbf{j}} + cA_z\hat{\mathbf{k}}$$
Magnitude of a vector:
$$A = |\mathbf{A}| = \sqrt{A_x^2 + A_y^2 + A_z^2}$$
Note: $\mathbf{A}$ is a vector and $A$ is its length; they should be distinguished.
Scalar or dot product:
$$\mathbf{A}\cdot\mathbf{B} = AB\cos\theta = A_xB_x + A_yB_y + A_zB_z \quad\text{(proof)}$$
Vector or cross product:
$$|\mathbf{A}\times\mathbf{B}| = AB\sin\theta,$$
with its direction determined by the right-hand rule.
Cross product in determinant form:
$$\mathbf{A}\times\mathbf{B} = \begin{vmatrix} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\ A_x & A_y & A_z \\ B_x & B_y & B_z \end{vmatrix} \quad\text{(proof)}$$
Parallel and perpendicular vectors:
$$\mathbf{A} \parallel \mathbf{B} \iff \frac{A_x}{B_x} = \frac{A_y}{B_y} = \frac{A_z}{B_z} \iff \mathbf{A}\times\mathbf{B} = \mathbf{0}.$$
$$\mathbf{A} \perp \mathbf{B} \iff A_xB_x + A_yB_y + A_zB_z = 0 \iff \mathbf{A}\cdot\mathbf{B} = 0.$$
Relations between the basis vectors: $\hat{\mathbf{i}}\cdot\hat{\mathbf{i}} = 1$, $\hat{\mathbf{i}}\cdot\hat{\mathbf{j}} = 0$, $\hat{\mathbf{i}}\times\hat{\mathbf{i}} = \mathbf{0}$, $\hat{\mathbf{i}}\times\hat{\mathbf{j}} = \hat{\mathbf{k}}$.
Examples p102.3, p105.4.
Problems 4.5, 26.
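A minimal NumPy check of these formulas (the two vectors here are arbitrary illustrations):

```python
import numpy as np

A = np.array([1.0, 2.0, 2.0])
B = np.array([2.0, -1.0, 0.0])

print(np.dot(A, B))              # 0.0: A and B are perpendicular
print(np.cross(A, B))            # [ 2.  4. -5.]
# |A x B| = |A||B| sin(theta); with theta = 90 degrees the two sides agree
print(np.linalg.norm(np.cross(A, B)),
      np.linalg.norm(A) * np.linalg.norm(B))
```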
Read: Chapter 3: 4
Homework: 3.4.5,7,12,18,26.
Due: March 25
March 21 Lines and planes
3.5 Lines and planes
Equations for a straight line through $(x_0, y_0, z_0)$ along the direction $\mathbf{A} = (a, b, c)$:
$$\mathbf{r} = \mathbf{r}_0 + \mathbf{A}t \iff \begin{cases} x = x_0 + at \\ y = y_0 + bt \\ z = z_0 + ct \end{cases} \iff \frac{x - x_0}{a} = \frac{y - y_0}{b} = \frac{z - z_0}{c}$$
Equations for a plane through $(x_0, y_0, z_0)$ with normal $\mathbf{N} = (a, b, c)$, where $\mathbf{r} - \mathbf{r}_0$ joins $(x_0, y_0, z_0)$ to a point $(x, y, z)$ in the plane:
$$\mathbf{N}\cdot(\mathbf{r} - \mathbf{r}_0) = 0 \iff a(x - x_0) + b(y - y_0) + c(z - z_0) = 0 \iff ax + by + cz = d\ (= ax_0 + by_0 + cz_0)$$
Examples p109.1, 2.
Distance from a point to a plane: let $P$ be the point, $R$ the foot of the perpendicular from $P$ to the plane, $Q$ any point in the plane, and $\hat{\mathbf{n}} = \mathbf{N}/|\mathbf{N}|$ the unit normal; then
$$PR = PQ\,|\cos\theta| = \left|\overrightarrow{PQ}\cdot\frac{\mathbf{N}}{|\mathbf{N}|}\right| = |\overrightarrow{PQ}\cdot\hat{\mathbf{n}}|$$
Example p110.3.
Distance from a point to a line: let $Q$ be any point on the line and $\mathbf{A}$ its direction; then
$$PR = PQ\,\sin\theta = \left|\overrightarrow{PQ}\times\frac{\mathbf{A}}{|\mathbf{A}|}\right| = |\overrightarrow{PQ}\times\hat{\mathbf{u}}|$$
Example p110.4.
Distance between two skew lines: let $FG$ be the shortest distance; then it must be perpendicular to both lines (proof). With $P$, $Q$ points on the two lines whose directions are $\mathbf{A}$ and $\mathbf{B}$,
$$FG = |\overrightarrow{PQ}\cdot\hat{\mathbf{n}}| = \left|\overrightarrow{PQ}\cdot\frac{\mathbf{A}\times\mathbf{B}}{|\mathbf{A}\times\mathbf{B}|}\right|$$
Examples p110.5, 6.
Problems 5.12, 18, 42.
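The three distance formulas in code; a minimal sketch, with the plane example made up for illustration:

```python
import numpy as np

def point_plane_dist(P, Q, N):
    """|PQ . n|: Q any point in the plane, N its normal."""
    return abs(np.dot(np.subtract(Q, P), N) / np.linalg.norm(N))

def point_line_dist(P, Q, A):
    """|PQ x A| / |A|: Q any point on the line, A its direction."""
    return np.linalg.norm(np.cross(np.subtract(Q, P), A)) / np.linalg.norm(A)

def skew_lines_dist(P, A, Q, B):
    """|PQ . n| with n along A x B: P, Q on the two lines."""
    n = np.cross(A, B)
    return abs(np.dot(np.subtract(Q, P), n) / np.linalg.norm(n))

# Distance from the origin to the plane x + y + z = 3 is sqrt(3)
print(point_plane_dist([0, 0, 0], [1, 1, 1], [1.0, 1.0, 1.0]))
```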
Read: Chapter 3: 5
Homework: 3.5.7,12,18,20,26,32,37,42.
Due: April 1
March 23,25 Matrix operations
3.6 Matrix operations
Matrix equation:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix} \iff a = 2,\ b = 3,\ c = 4,\ d = 5.$$
Multiplication by a number:
$$k\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} ka & kb \\ kc & kd \end{pmatrix},\quad\text{while}\quad k\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \begin{vmatrix} ka & kb \\ c & d \end{vmatrix} = \begin{vmatrix} ka & b \\ kc & d \end{vmatrix}.$$
Matrix addition:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} + \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} a+e & b+f \\ c+g & d+h \end{pmatrix}$$
Matrix multiplication: $(AB)_{ij} = \sum_k A_{ik}B_{kj}$.
Note:
1) The element on the ith row and jth column of AB is equal to the dot product between
the ith row of A and the jth column of B.
2) The number of columns in A is equal to the number of rows in B.
Example:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}$$
More about matrix multiplication:
1) The product is associative: A(BC)=(AB)C
2) The product is distributive: A(B+C)=AB+AC
3) In general the product is not commutative: $AB \neq BA$.
[A,B]=AB−BA is called the commutator.
Unit matrix:
$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Zero matrix:
$$0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$
Product theorem: $\det(AB) = \det A \cdot \det B$
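A small numerical illustration of these rules (the matrices are chosen arbitrarily):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

print(A @ B)                    # matrix product
print(A @ B - B @ A)            # commutator [A, B]: nonzero, so AB != BA
print(np.linalg.det(A @ B),     # product theorem:
      np.linalg.det(A) * np.linalg.det(B))   # det(AB) = det A * det B
```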
Solving a set of linear equations by matrix operations:
$$\begin{cases} 2x + 5y + z = 2 \\ x + y + 2z = 1 \\ x + 5z = 3 \end{cases} \iff \begin{pmatrix} 2 & 5 & 1 \\ 1 & 1 & 2 \\ 1 & 0 & 5 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix}$$
$$M\mathbf{r} = \mathbf{k} \Rightarrow \mathbf{r} = M^{-1}\mathbf{k}.$$
Matrix inversion:
1) $M^{-1}$ is the inverse of M if $MM^{-1} = M^{-1}M = I = 1$.
2) Only square matrices can be inverted.
3) $\det(MM^{-1}) = \det M \det M^{-1} = \det I = 1$, so $\det M \neq 0$ is necessary for M to be invertible.
Calculating the inverse matrix:
$$M^{-1} = \frac{C^T}{\det M},\quad\text{where the } C_{ij} \text{ are the cofactors of } M.$$
Proof:
$$(MC^T)_{ij} = \sum_k M_{ik}(C^T)_{kj} = \sum_k M_{ik}C_{jk} = \begin{cases} \det M, & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases} \quad\text{(p3.8)}$$
$$\Rightarrow MC^T = (\det M)\,I \Rightarrow M^{-1} = \frac{C^T}{\det M}.$$
Example p120.3.
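The cofactor formula $M^{-1} = C^T/\det M$ as a sketch in code (fine for small matrices; numerically one would use a solver instead):

```python
import numpy as np

def inverse_via_cofactors(M):
    """M^{-1} = C^T / det M, with C_{ij} the cofactors of M."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(M)

M = [[2, 5, 1], [1, 1, 2], [1, 0, 5]]
print(np.allclose(inverse_via_cofactors(M), np.linalg.inv(M)))   # True
```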
Three equivalent ways of solving a set of linear equations:
1) Row reduction
2) Cramer's rule
3) Inverse matrix: $\mathbf{r} = M^{-1}\mathbf{k}$
Equivalence between $\mathbf{r} = M^{-1}\mathbf{k}$ and Cramer's rule:
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} k_1 \\ k_2 \\ k_3 \end{pmatrix} \iff M\mathbf{r} = \mathbf{k} \Rightarrow \mathbf{r} = M^{-1}\mathbf{k} = \frac{C^T}{\det M}\mathbf{k}$$
$$x = \frac{(C^T)_{11}k_1 + (C^T)_{12}k_2 + (C^T)_{13}k_3}{\det M} = \frac{k_1C_{11} + k_2C_{21} + k_3C_{31}}{\det M} = \begin{vmatrix} k_1 & a_{12} & a_{13} \\ k_2 & a_{22} & a_{23} \\ k_3 & a_{32} & a_{33} \end{vmatrix} \Big/ \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix},$$
and similar solutions for $y$ and $z$.
Cramer's rule is an explicit realization of $\mathbf{r} = M^{-1}\mathbf{k}$.
Gauss-Jordan method of matrix inversion:
Let $(M_{L_p}M_{L_{p-1}}\cdots M_{L_2}M_{L_1})M = M_LM = I$ be the result of a series of elementary
row operations on M; then $(M_{L_p}M_{L_{p-1}}\cdots M_{L_2}M_{L_1})I = M_LI = M^{-1}$.
That is, $M^{-1}(M\;I) = (I\;M^{-1})$: row-reducing the augmented block $(M\;I)$ produces $(I\;M^{-1})$.
 1 2 1 0  ( 2 )(1)3  1 2 1 0  (1)( 2 )  1 0  2 1
  
 

Example : 



 0  2  3 1
3
4
0
1
0

2

3
1






1 
1 0  2

.
 

 0 1 3 / 2  1/ 2
1   1 0
 1 2   2
Test : 

  
.
3
4
3
/
2

1
/
2
0
1


 

( 2 )( 1 / 2 )
Equivalence between row reduction and $\mathbf{r} = M^{-1}\mathbf{k}$:
$$(M_{L_p}M_{L_{p-1}}\cdots M_{L_2}M_{L_1})M = M^{-1}M = I \Rightarrow (M_{L_p}M_{L_{p-1}}\cdots M_{L_2}M_{L_1})(M, \mathbf{k}) = M^{-1}(M, \mathbf{k}) = (I, \mathbf{r}).$$
Row reduction decomposes the $M^{-1}$ in $\mathbf{r} = M^{-1}\mathbf{k}$ into many elementary steps.
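The Gauss-Jordan procedure, row-reducing (M | I) to (I | M⁻¹), as a minimal sketch (no pivoting, so it assumes the pivots stay nonzero):

```python
import numpy as np

def gauss_jordan_inverse(M):
    """Row-reduce the augmented block (M | I) to (I | M^{-1})."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    aug = np.hstack([M, np.eye(n)])
    for i in range(n):
        aug[i] /= aug[i, i]                   # scale the pivot row
        for j in range(n):
            if j != i:
                aug[j] -= aug[j, i] * aug[i]  # clear column i in other rows
    return aug[:, n:]

print(gauss_jordan_inverse([[1, 2], [3, 4]]))
# [[-2.   1. ]
#  [ 1.5 -0.5]]
```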
Rotation matrices: rotation of vectors, $(x, y) \to (X, Y)$ by angle $\theta$:
$$\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix},\quad\text{or } \mathbf{R} = M\mathbf{r} \text{ with } M = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
Successive rotations multiply, and the angles add:
$$M_2M_1\mathbf{r} = \begin{pmatrix} \cos\theta_2 & -\sin\theta_2 \\ \sin\theta_2 & \cos\theta_2 \end{pmatrix}\begin{pmatrix} \cos\theta_1 & -\sin\theta_1 \\ \sin\theta_1 & \cos\theta_1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos(\theta_1+\theta_2) & -\sin(\theta_1+\theta_2) \\ \sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2) \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
Functions of matrices:
An expansion in a power series is implied.
Examples:
$$(A + B)^2 = A^2 + AB + BA + B^2$$
$$\exp(kA) = 1 + kA + \frac{k^2A^2}{2!} + \frac{k^3A^3}{3!} + \cdots$$
$$\frac{1}{1 - A} = 1 + A + A^2 + \cdots$$
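The power-series definition can be checked directly; a sketch comparing a truncated series with SciPy's expm:

```python
import numpy as np
from scipy.linalg import expm

def exp_series(A, terms=30):
    """exp(A) from its power series (adequate for small matrices)."""
    result, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n          # builds A^n / n!
        result = result + term
    return result

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])           # generator of 2D rotations
print(np.allclose(exp_series(A), expm(A)))   # True
```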
Read: Chapter 3: 6
Homework: 3.6.9,13,15,18,21
Due: April 1
March 28, 30 Linear operators
3.7 Linear combinations, linear functions, linear operators
Linear combination: aA + bB
Linear function $f(\mathbf{r})$:
$$f(\mathbf{r}_1 + \mathbf{r}_2) = f(\mathbf{r}_1) + f(\mathbf{r}_2),\qquad f(a\mathbf{r}) = a\,f(\mathbf{r})$$
Examples: $f(\mathbf{r}) = \mathbf{A}\cdot\mathbf{r}$, $\mathbf{F}(\mathbf{r}) = b\mathbf{r}$.
Linear operator $O$:
$$O(f + g) = O(f) + O(g),\qquad O(af) = a\,O(f)$$
Here $f$ and $g$ can be numbers, functions, vectors, matrices, etc.
Example: $\dfrac{d}{dx}$ acting on functions $f$.
Example p125.1; Problem 7.15.
Linear transformation:
$$\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix},\text{ or } \mathbf{R} = M\mathbf{r}.\qquad M(\mathbf{r}_1 + \mathbf{r}_2) = M\mathbf{r}_1 + M\mathbf{r}_2,\quad M(k\mathbf{r}) = k(M\mathbf{r}).$$
The matrix M is a linear operator representing a linear transformation.
Orthogonal transformation:
An orthogonal transformation preserves the length of a vector.
Orthogonal matrix:
The matrix for an orthogonal transformation is an orthogonal matrix.
Theorem: M is an orthogonal matrix if and only if $M^T = M^{-1}$.
Proof:
$$(M\mathbf{r})^T(M\mathbf{r}) = \mathbf{r}^T\mathbf{r} \Rightarrow \mathbf{r}^TM^TM\mathbf{r} = \mathbf{r}^T\mathbf{r} \Rightarrow M^TM = I \Rightarrow M^T = M^{-1}.$$
Theorem: $\det M = \pm 1$ if M is orthogonal.
Proof:
$$\det I = \det(MM^{-1}) = \det(MM^T) = \det M \det M^T = [\det M]^2 = 1 \Rightarrow \det M = \pm 1.$$
2×2 orthogonal matrix:
$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix},\qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} a & c \\ b & d \end{pmatrix} = I \Rightarrow
\begin{cases} a^2 + b^2 = 1 \Rightarrow a = \cos\theta,\ b = -\sin\theta \\ c^2 + d^2 = 1 \Rightarrow c = \sin\phi,\ d = \cos\phi \\ ac + bd = 0 \Rightarrow \sin\phi\cos\theta - \cos\phi\sin\theta = \sin(\phi - \theta) = 0 \Rightarrow \phi = \theta \text{ or } \phi = \theta \pm \pi \end{cases}$$
1) $\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ for $\phi = \theta$, or $\det M = 1$.
$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} r\cos\alpha \\ r\sin\alpha \end{pmatrix} = \begin{pmatrix} r\cos(\alpha+\theta) \\ r\sin(\alpha+\theta) \end{pmatrix}.\quad M \text{ is a rotation.}$$
2) $\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ -\sin\theta & -\cos\theta \end{pmatrix}$ for $\phi = \theta \pm \pi$, or $\det M = -1$. Changing the sign of $\theta$, write it as $\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}$:
$$\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}\begin{pmatrix} r\cos\alpha \\ r\sin\alpha \end{pmatrix} = \begin{pmatrix} r\cos(\theta-\alpha) \\ r\sin(\theta-\alpha) \end{pmatrix}.\quad M \text{ is a reflection with respect to the line at angle } \theta/2.$$
Especially,
$$\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}\begin{pmatrix} r\cos(\theta/2) \\ r\sin(\theta/2) \end{pmatrix} = \begin{pmatrix} r\cos(\theta/2) \\ r\sin(\theta/2) \end{pmatrix}.$$
Conclusion: A 2×2 orthogonal matrix corresponds to either a rotation (with det M = 1)
or a reflection (with det M = −1).
Two-dimensional rotation:
Rotating the vector (active transformation):
$$\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$$
Rotating the axes (change of basis):
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$$
Two-dimensional reflection:
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$$
is a reflection with respect to the line at angle $\theta/2$, i.e. $y = x\tan(\theta/2)$.
Example p128.3. Note that
$$A = \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix} = \begin{pmatrix} \cos\frac{4\pi}{3} & -\sin\frac{4\pi}{3} \\ -\sin\frac{4\pi}{3} & -\cos\frac{4\pi}{3} \end{pmatrix}$$
is a $\det = -1$ orthogonal matrix: a reflection (here with respect to the line at angle $\pi/3$).
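A numerical check of the reflection form (with $\theta = 2\pi/3$, so the invariant line is at 60°; this reproduces the matrix A above):

```python
import numpy as np

theta = 2 * np.pi / 3
R = np.array([[np.cos(theta),  np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])

v = np.array([1.0, np.sqrt(3.0)])   # along the 60-degree line
print(R @ v)                        # unchanged: [1.  1.732...]
print(np.linalg.det(R))             # -1, as for any reflection
```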
Read: Chapter 3: 7
Homework: 3.7.9,15,22,26.
Due: April 8
April 1 Linear dependence and independence
3.8 Linear dependence and independence
Linear dependence of vectors: A set of vectors are linearly dependent if some linear
combination of them is zero, with not all the coefficients equal to zero.
1. If a set of vectors are linearly dependent, then at least one of the vectors can be
written as a linear combination of the others:
$$a\mathbf{A} + b\mathbf{B} + c\mathbf{C} = 0,\ a \neq 0 \Rightarrow \mathbf{A} = -\frac{b}{a}\mathbf{B} - \frac{c}{a}\mathbf{C}$$
2. If a set of vectors are linearly dependent, then at least one row in the row-reduced
matrix of these vectors equals zero. The rank of the matrix is then less than the
number of rows.
$$a\mathbf{A} + b\mathbf{B} + c\mathbf{C} = 0,\ a \neq 0 \Rightarrow \mathbf{A} + \frac{b}{a}\mathbf{B} + \frac{c}{a}\mathbf{C} = 0.$$
Example: Any three vectors in the x-y plane are linearly dependent. E.g. (1,2), (3,4), (5,6).
Linear independence of vectors: A set of vectors are linearly independent if no linear
combination of them is zero except the one with all coefficients equal to zero.
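Both statements are easy to check numerically; for the plane example above, the rank of the stacked vectors falls short of the number of rows:

```python
import numpy as np

V = np.array([[1, 2],
              [3, 4],
              [5, 6]])   # the three plane vectors, one per row

# Rank 2 < 3 rows: the three vectors are linearly dependent
print(np.linalg.matrix_rank(V))   # 2
```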
Linear dependence of functions:
A set of functions are linearly dependent if some linear combination of them is always
zero, with not all the coefficients equal to zero.
Examples:
$\sin x$ and $\cos x$ are linearly independent.
$1$, $x$, and $x^2$ are linearly independent.
$\sin^2 x$, $\cos^2 x$, and $1$ are linearly dependent.
Theorem: If the Wronskian of a set of functions
$$W = \begin{vmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix} \neq 0,$$
then the functions are linearly independent.
Proof:
If $f_1(x), f_2(x), \ldots, f_n(x)$ are linearly dependent, then there exist $k_i$, not all zero (suppose $k_1 \neq 0$), such that $\sum_{j=1}^n k_jf_j(x) = 0$. Differentiating, also $\sum_{j=1}^n k_jf_j'(x) = 0, \ldots, \sum_{j=1}^n k_jf_j^{(n-1)}(x) = 0$. Adding $k_2, \ldots, k_n$ times columns $2, \ldots, n$ to $k_1$ times the first column,
$$k_1W = \begin{vmatrix} \sum_j k_jf_j(x) & f_2(x) & \cdots & f_n(x) \\ \sum_j k_jf_j'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ \sum_j k_jf_j^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix} = \begin{vmatrix} 0 & f_2(x) & \cdots & f_n(x) \\ 0 & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ 0 & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix} = 0.$$
Since $k_1 \neq 0$, $W = 0$; hence $W \neq 0$ implies linear independence.
Examples p133.1,2.
Note: W = 0 does not always imply that the functions are linearly dependent; e.g., $x^2$ and $x|x|$
around $x = 0$. However, when the functions are analytic (infinitely differentiable), which
is the common case, W = 0 does imply linear dependence.
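A sketch of the Wronskian test with SymPy; the matrix $W_{ij} = f_j^{(i)}$ is built directly from the definition above:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian(funcs):
    """Determinant of the matrix whose i-th row holds the i-th derivatives."""
    n = len(funcs)
    W = sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], x, i))
    return sp.simplify(W.det())

print(wronskian([sp.sin(x), sp.cos(x)]))            # -1: independent
print(wronskian([sp.sin(x)**2, sp.cos(x)**2, 1]))   # 0 (dependent functions)
```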
Homogeneous equations:
$$\begin{pmatrix} * & * & * & 0 \\ * & * & * & 0 \\ * & * & * & 0 \end{pmatrix} \xrightarrow{\text{row reduction}} \begin{pmatrix} * & 0 & 0 & 0 \\ 0 & * & 0 & 0 \\ 0 & 0 & * & 0 \end{pmatrix} \text{ or } \begin{pmatrix} * & * & * & 0 \\ 0 & * & * & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
1. Homogeneous equations always have the trivial solution (all unknowns=0).
2. If Rank M=Number of unknowns, the trivial solution is the only solution.
3. If Rank M< Number of unknowns, there are infinitely many solutions.
Theorem:
A set of n homogeneous equations with n unknowns has nontrivial solutions if and only if
the determinant of the coefficients is zero.
Proof:
1. $\det M \neq 0 \Rightarrow \mathbf{r} = M^{-1}\mathbf{0} = \mathbf{0}$, so only the trivial solution exists.
2. Only the trivial solution exists $\Rightarrow$ the columns of M are linearly independent $\Rightarrow \det M \neq 0$.
Examples p135.4.
Read: Chapter 3: 8
Homework: 3.8.7,10,13,17,24.
Due: April 8
April 4 Special matrices
3.9 Special matrices and formulas
Transpose matrix $A^T$ of $A$: $(A^T)_{ij} = A_{ji}$
Complex conjugate matrix $A^*$ of $A$: $(A^*)_{ij} = (A_{ij})^*$
Adjoint (transpose conjugate) matrix $A^\dagger$ of $A$: $A^\dagger = (A^T)^* = (A^*)^T$, $(A^\dagger)_{ij} = (A_{ji})^*$
Inverse matrix $A^{-1}$ of $A$: $A^{-1}A = AA^{-1} = 1$
Symmetric matrix: $A = A^T$ ($A$ real)
Orthogonal matrix: $A^{-1} = A^T$ ($A$ real)
Hermitian matrix: $A = A^\dagger$
Unitary matrix: $A^{-1} = A^\dagger$
Normal matrix: $AA^\dagger = A^\dagger A$, or $[A, A^\dagger] = 0$
Index notation for matrix multiplication: $(AB)_{ij} = \sum_k A_{ik}B_{kj}$
Kronecker $\delta$ symbol: $\delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases}$
Exercises on index notations:
Associative law for matrix multiplication: $A(BC) = (AB)C$
Proof:
$$(A(BC))_{ij} = \sum_k A_{ik}(BC)_{kj} = \sum_{k,l} A_{ik}B_{kl}C_{lj} = \sum_l (AB)_{il}C_{lj} = ((AB)C)_{ij}$$
Transpose of a product: $(AB)^T = B^TA^T$
Proof:
$$((AB)^T)_{ij} = (AB)_{ji} = \sum_k A_{jk}B_{ki} = \sum_k (A^T)_{kj}(B^T)_{ik} = \sum_k (B^T)_{ik}(A^T)_{kj} = (B^TA^T)_{ij}$$
Corollary: $(ABC)^T = C^TB^TA^T$
Inverse of a product: $(AB)^{-1} = B^{-1}A^{-1}$
Proof:
$$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = 1 \Rightarrow (AB)^{-1} = B^{-1}A^{-1}.$$
Corollary: $(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$
Trace of a matrix: $\operatorname{Tr} A = \sum_i A_{ii}$
Trace of a product: $\operatorname{Tr}(AB) = \operatorname{Tr}(BA)$
Proof:
$$\operatorname{Tr}(AB) = \sum_i (AB)_{ii} = \sum_{i,j} A_{ij}B_{ji} = \sum_{i,j} B_{ji}A_{ij} = \operatorname{Tr}(BA)$$
Corollary: $\operatorname{Tr}(ABC) = \operatorname{Tr}(BCA) = \operatorname{Tr}(CAB)$
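Index gymnastics like these map directly onto einsum; a small numerical check of the identities above, on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3, 3))    # two random 3x3 matrices

AB = np.einsum('ik,kj->ij', A, B)        # (AB)_ij = sum_k A_ik B_kj
print(np.allclose(AB, A @ B))                        # True
print(np.allclose((A @ B).T, B.T @ A.T))             # (AB)^T = B^T A^T
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # Tr(AB) = Tr(BA)
```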
Read: Chapter 3: 9
Homework: 3.9.2,4,5,23,24.
Due: April 15
April 6 Linear vector spaces
3.10 Linear vector spaces
n-dimensional vectors: $\mathbf{A} = (A_1, A_2, \ldots, A_n)$
Linear vector space: a set of vectors, together with all linear combinations of them, forms a vector space.
Subspace: A plane is a subspace of 3-dimensional space.
Span: A set of vectors spans the vector space if any vector in the space can be written as
a linear combination of the spanning set.
Basis: A set of linearly independent vectors that spans a vector space.
Dimension of a vector space: The number of basis vectors that span the vector space.
Examples p143.1; p144.2.
Inner product of two n-dimensional vectors: $\mathbf{A}\cdot\mathbf{B} = \sum_{i=1}^n A_iB_i$
Length of an n-dimensional vector: $A = |\mathbf{A}| = \sqrt{\mathbf{A}\cdot\mathbf{A}} = \sqrt{\sum_{i=1}^n A_i^2}$
Two n-dimensional vectors are orthogonal if $\mathbf{A}\cdot\mathbf{B} = \sum_{i=1}^n A_iB_i = 0$.
Schwarz inequality: $|\mathbf{A}\cdot\mathbf{B}| \le AB$; that is,
$$\left|\sum_{i=1}^n A_iB_i\right| \le \sqrt{\sum_{i=1}^n A_i^2}\,\sqrt{\sum_{i=1}^n B_i^2}.$$
Proof:
Suppose $\mathbf{A} \neq 0$ and $\mathbf{B} \neq 0$. Let $\mathbf{e}_1 = \mathbf{A}/A$ and $\mathbf{e}_2 = \mathbf{B}/B$, the unit vectors along $\mathbf{A}$ and $\mathbf{B}$. To prove $|\mathbf{A}\cdot\mathbf{B}| \le AB$, we then need to prove $|\mathbf{e}_1\cdot\mathbf{e}_2| \le 1$.
Construct $\mathbf{C} = \mathbf{e}_1 - (\mathbf{e}_1\cdot\mathbf{e}_2)\mathbf{e}_2$, the component of $\mathbf{e}_1$ orthogonal to $\mathbf{e}_2$. Then
$$\mathbf{C}\cdot\mathbf{C} = 1 - 2(\mathbf{e}_1\cdot\mathbf{e}_2)^2 + (\mathbf{e}_1\cdot\mathbf{e}_2)^2 = 1 - (\mathbf{e}_1\cdot\mathbf{e}_2)^2 = C^2 \ge 0 \Rightarrow |\mathbf{e}_1\cdot\mathbf{e}_2| \le 1.$$
The equal sign holds only when $\mathbf{A} \parallel \mathbf{B}$.
Orthonormal basis: A set of vectors form an orthonormal basis if 1) they are mutually
orthogonal and 2) each vector is normalized.
Gram-Schmidt orthonormalization:
Starting from n linearly independent vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$, we can construct an orthonormal
basis set $\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_n$:
$$\mathbf{w}_1 = \frac{\mathbf{v}_1}{|\mathbf{v}_1|},\qquad
\mathbf{w}_2 = \frac{\mathbf{v}_2 - (\mathbf{v}_2\cdot\mathbf{w}_1)\mathbf{w}_1}{|\mathbf{v}_2 - (\mathbf{v}_2\cdot\mathbf{w}_1)\mathbf{w}_1|},\qquad
\mathbf{w}_3 = \frac{\mathbf{v}_3 - (\mathbf{v}_3\cdot\mathbf{w}_1)\mathbf{w}_1 - (\mathbf{v}_3\cdot\mathbf{w}_2)\mathbf{w}_2}{|\mathbf{v}_3 - (\mathbf{v}_3\cdot\mathbf{w}_1)\mathbf{w}_1 - (\mathbf{v}_3\cdot\mathbf{w}_2)\mathbf{w}_2|},\qquad\ldots$$
$$\mathbf{w}_k = \frac{\mathbf{v}_k - \sum_{i=1}^{k-1}(\mathbf{v}_k\cdot\mathbf{w}_i)\mathbf{w}_i}{\left|\mathbf{v}_k - \sum_{i=1}^{k-1}(\mathbf{v}_k\cdot\mathbf{w}_i)\mathbf{w}_i\right|}.$$
Example p146.4.
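The recursion above in code; a minimal sketch of the classical procedure, run on an arbitrary independent set:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors, one at a time."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for w in basis:
            u -= np.dot(u, w) * w        # remove the component along w
        basis.append(u / np.linalg.norm(u))
    return np.array(basis)

W = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
print(np.round(W @ W.T, 10))             # identity: rows are orthonormal
```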
Bra-ket notation of vectors:
$$|A\rangle = \mathbf{A} = \begin{pmatrix} A_1 \\ A_2 \\ \vdots \end{pmatrix},\qquad \langle A| = \mathbf{A}^\dagger = (A_1^*\ A_2^*\ \cdots),\qquad \langle A|B\rangle = \mathbf{A}^\dagger\mathbf{B} = \sum_{i=1}^n A_i^*B_i$$
Complex Euclidean space:
Inner product: $\langle A|B\rangle = \sum_{i=1}^n A_i^*B_i$
Length: $A = \sqrt{\langle A|A\rangle} = \sqrt{\sum_{i=1}^n A_i^*A_i}$
Orthogonal vectors: $\langle A|B\rangle = \sum_{i=1}^n A_i^*B_i = 0$
Schwarz inequality: $\left|\sum_{i=1}^n A_i^*B_i\right| \le \sqrt{\sum_{i=1}^n A_i^*A_i}\,\sqrt{\sum_{i=1}^n B_i^*B_i}$
Example p146.5.
Read: Chapter 3: 10
Homework: 3.10.1,10.
Due: April 15
April 8, 11 Eigenvalues and eigenvectors
3.11 Eigenvalues and eigenvectors; Diagonalizing matrices
Eigenvalues and eigenvectors:
For a matrix M, if there is a nonzero vector $\mathbf{r}$ and a scalar $\lambda$ such that $M\mathbf{r} = \lambda\mathbf{r}$, then $\mathbf{r}$ is
called an eigenvector of M, and $\lambda$ is called the corresponding eigenvalue.
M only changes the "length" of its eigenvector $\mathbf{r}$ by a factor of the eigenvalue $\lambda$, without
affecting its "direction".
$$M\mathbf{r} = \lambda\mathbf{r} \iff (M - \lambda I)\mathbf{r} = 0.$$
For nontrivial solutions of this homogeneous equation, we need
$$\det(M - \lambda I) = \begin{vmatrix} M_{11}-\lambda & M_{12} & \cdots & M_{1n} \\ M_{21} & M_{22}-\lambda & \cdots & M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ M_{n1} & M_{n2} & \cdots & M_{nn}-\lambda \end{vmatrix} = 0.$$
This is called the secular equation, or characteristic equation.
 4  3
.
Example: Calculate the eigenvalues and eigenvectors of M  
 2 5 
4l 3
det( M  l ) 
 0  l1  2, l2  7.
2 5l
 4  l1  3  x 
 3

   0  2 x  3 y  0  r1   
 2
  2 5  l1  y 
 3  x 
 4  l2
1

   0  x  y  0  r2   
  1
  2 5  l2  y 
 5  2
.
Example: Calculate the eigenvalues and eigenvectors of M  
 2 2 
5l 2
det( M  l ) 
 0  l1  1, l2  6.
2 2l
 5  l1  2  x 
1

   0  2 x  y  0  r1   
 2
  2 2  l1  y 
 2  x 
 5  l2
  2

   0  x  2 y  0  r2   
 1 
  2 2  l2  y 
Similarity transformation:
Let operator M actively change (rotate, stretch, etc.) a vector. The matrix representation
of the operator depends on the choice of basis vectors:
In the old basis: $\mathbf{R} = M\mathbf{r}$; in the new basis: $\mathbf{R}' = M'\mathbf{r}'$.
Let matrix C change the basis (coordinate transformation): $\mathbf{r}' = C\mathbf{r}$, $\mathbf{R}' = C\mathbf{R}$.
Question: $M' = f(M, C) = {}$?
$$\left.\begin{aligned} \mathbf{R}' &= M'\mathbf{r}' = M'C\mathbf{r} \\ \mathbf{R}' &= C\mathbf{R} = CM\mathbf{r} \end{aligned}\right\} \Rightarrow M'C = CM \Rightarrow M' = CMC^{-1}.$$
$M' = CMC^{-1}$ is called a similarity transformation of M.
M' and M are called similar matrices. They are the same operator represented in
different bases that are related by the transformation matrix C.
That is: if $\mathbf{r}' = C\mathbf{r}$, then $M' = CMC^{-1}$.
(Diagram: $\mathbf{r} \xrightarrow{M} \mathbf{R}$; $\mathbf{r} \xrightarrow{C} \mathbf{r}' \xrightarrow{M'} \mathbf{R}'$; $\mathbf{R} \xrightarrow{C} \mathbf{R}'$.)
Theorem: A similarity transformation does not change the determinant or trace of a matrix:
$$\det M' = \det M,\qquad \operatorname{Tr} M' = \operatorname{Tr} M.$$
Diagonalization of a matrix:
For a 2×2 matrix M, suppose $M\mathbf{r}_1 = \lambda_1\mathbf{r}_1$ and $M\mathbf{r}_2 = \lambda_2\mathbf{r}_2$; then
$$M\begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix} = \begin{pmatrix} \lambda_1x_1 & \lambda_2x_2 \\ \lambda_1y_1 & \lambda_2y_2 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.$$
Let $C = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}$ (columns of eigenvectors) and $D = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$ (diagonal of eigenvalues); then
$$MC = CD \Rightarrow C^{-1}MC = D.$$
Theorem: A matrix M may be diagonalized by a similarity transformation $C^{-1}MC = D$,
where C consists of the column eigenvectors of M, and the diagonalized matrix D
consists of the corresponding eigenvalues. That is, the diagonalization equation $C^{-1}MC = D$
just summarizes the eigenvalues and eigenvectors of M.
An $n\times n$ matrix M can be diagonalized by $C^{-1}MC = D$, where $M\mathbf{r}_i = \lambda_i\mathbf{r}_i$, and
$$C = \begin{pmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \cdots & \mathbf{r}_n \end{pmatrix},\qquad D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$
 4  3
 3
1
. l1  2, r1   ; l2  7, r2   .
Example 1 : M  
 2 5 
 2
  1
 3 1  1 1  1 1 
 2 0
, C  
, D  
.
C  
2

1
2

3
0
7
5






1  1 1  4  3  3 1   2 0 



.
C MC  D  
5  2  3   2 5  2  1  0 7 
1
 5  2
1 1
1   2
. l1  1, r1 
 ; l2  6, r2 
 .
Example 2 : M  

2
2
2
5
5


 
 1 
1 0
1  1  2  1 1  1 2 

, C 

, D  
.
C
5 2 1 
5  2 1
0 6
C 1MC  D 
1  1 2  5  2  1  1  2   1 0 




  
.

2
1

2
2
2
1
0
6
5

 5
 

More examples : L  I ω, D   E.
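Verifying Example 1 numerically: with the eigenvector columns in C, $C^{-1}MC$ comes out diagonal:

```python
import numpy as np

M = np.array([[4.0, -3.0],
              [-2.0, 5.0]])
C = np.array([[3.0, 1.0],
              [2.0, -1.0]])    # columns are the eigenvectors r1, r2

D = np.linalg.inv(C) @ M @ C
print(np.round(D, 12))          # [[2. 0.] [0. 7.]]
```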
More about the diagonalization of a matrix $C^{-1}MC = D$ (2×2 matrix as an example):
1. D describes in the (x', y') system the same operation as M describes in the (x, y)
system.
2. The new x', y' axes are along the eigenvectors of M.
Proof: the new basis vector $\mathbf{e}_{x'}$ has coordinates $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ in the new system; its coordinates in the old system are
$$C\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \mathbf{r}_1,$$
the first eigenvector (and similarly $\mathbf{e}_{y'} \to \mathbf{r}_2$).
3. The operation is clearer in the new system:
$$D\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \lambda_1x' \\ \lambda_2y' \end{pmatrix}.$$
Diagonalization of Hermitian matrices:
1. The eigenvalues of a Hermitian matrix are always real.
2. The eigenvectors corresponding to different eigenvalues of a Hermitian matrix are
orthogonal.
Proof:
$$H|i\rangle = \lambda_i|i\rangle \Rightarrow \langle j|H|i\rangle = \lambda_i\langle j|i\rangle;\qquad
H = H^\dagger \Rightarrow \langle j|H|i\rangle = \langle i|H|j\rangle^* = \lambda_j^*\langle j|i\rangle$$
$$\Rightarrow (\lambda_i - \lambda_j^*)\langle j|i\rangle = 0 \Rightarrow
\begin{cases} \text{if } i = j, & \lambda_i = \lambda_i^*; \\ \text{if } i \neq j \text{ (and } \lambda_i \neq \lambda_j\text{)}, & \langle j|i\rangle = 0. \end{cases}$$
3. {A matrix has real eigenvalues and can be diagonalized by a unitary similarity
transformation} if and only if {it is Hermitian}.
Proof:
1) Suppose $U^{-1}MU = D$, with $U^{-1} = U^\dagger$ and diagonal $D = D^*$. Then
$$(U^{-1}MU)^\dagger = U^\dagger M^\dagger(U^{-1})^\dagger = U^{-1}M^\dagger U = D^\dagger = D \Rightarrow M^\dagger = UDU^{-1} = M.$$
2) Suppose $H = H^\dagger$ and $H\mathbf{r}_i = \lambda_i\mathbf{r}_i$; then $\lambda_i = \lambda_i^*$ and the eigenvectors can be orthonormalized: $\mathbf{r}_j^\dagger\mathbf{r}_i = \sum_k r_{jk}^*r_{ik} = \delta_{ij}$.
Construct matrices C and D so that $C_{ij} = r_{ji}$ (columns of eigenvectors) and $D_{ij} = \lambda_i\delta_{ij}$; then $C^{-1}HC = D$ with $D = D^*$, and
$$\sum_k r_{jk}^*r_{ik} = \delta_{ij} \Rightarrow \sum_k C_{kj}^*C_{ki} = \sum_k (C^\dagger)_{jk}C_{ki} = \delta_{ij} \Rightarrow C^\dagger C = 1 \Rightarrow C^\dagger = C^{-1}.$$
Example p155.2.
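A sketch with np.linalg.eigh, which is designed for the Hermitian eigenproblem (the matrix here is an arbitrary 2×2 Hermitian choice):

```python
import numpy as np

H = np.array([[2.0, 1j],
              [-1j, 3.0]])
assert np.allclose(H, H.conj().T)        # H is Hermitian

evals, U = np.linalg.eigh(H)             # eigh: Hermitian eigenproblem
print(evals)                             # real eigenvalues
print(np.allclose(U.conj().T @ H @ U, np.diag(evals)))   # True: U is unitary
```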
Corollary: {A matrix has real eigenvalues and can be diagonalized by an orthogonal
similarity transformation} if and only if {it is symmetric}.
Proof:
1) Suppose $O^{-1}MO = D$, with $O^{-1} = O^T$ and diagonal $D = D^*$. Then
$$(O^{-1}MO)^T = O^TM^T(O^{-1})^T = O^{-1}M^TO = D^T = D \Rightarrow M^T = ODO^{-1} = M.$$
2) Suppose $S = S^T$ and $S\mathbf{r}_i = \lambda_i\mathbf{r}_i$; then $\lambda_i = \lambda_i^*$ and $\mathbf{r}_j\cdot\mathbf{r}_i = \sum_k r_{jk}r_{ik} = \delta_{ij}$.
Construct matrices C and D so that $C_{ij} = r_{ji}$ and $D_{ij} = \lambda_i\delta_{ij}$; then $C^{-1}SC = D$ with $D = D^*$, and
$$\sum_k r_{jk}r_{ik} = \delta_{ij} \Rightarrow \sum_k C_{kj}C_{ki} = \sum_k (C^T)_{jk}C_{ki} = \delta_{ij} \Rightarrow C^TC = 1 \Rightarrow C^T = C^{-1}.$$
Example: $M = \begin{pmatrix} 5 & -2 \\ -2 & 2 \end{pmatrix}$. $\lambda_1 = 1$, $\mathbf{r}_1 = \frac{1}{\sqrt5}\begin{pmatrix} 1 \\ 2 \end{pmatrix}$; $\lambda_2 = 6$, $\mathbf{r}_2 = \frac{1}{\sqrt5}\begin{pmatrix} -2 \\ 1 \end{pmatrix}$.
$$C = \frac{1}{\sqrt5}\begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix},\qquad C^{-1} = \frac{1}{\sqrt5}\begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix},\qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 6 \end{pmatrix},$$
$$C^{-1}MC = \frac{1}{\sqrt5}\begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} 5 & -2 \\ -2 & 2 \end{pmatrix}\frac{1}{\sqrt5}\begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 6 \end{pmatrix} = D.$$
Notice that $M = M^T \Rightarrow \lambda_i = \lambda_i^*$ and $C^T = C^{-1}$.
Read: Chapter 3: 11
Homework: 3.11.3,13,14,19,32,33,42.
Due: April 22