Download PPT - Jung Y. Huang

Chapter 3 Vector Spaces







3.1 Vectors in Rn
3.2 Vector Spaces
3.3 Subspaces of Vector Spaces
3.4 Spanning Sets and Linear Independence
3.5 Basis and Dimension
3.6 Rank of a Matrix and Systems of Linear Equations
3.7 Coordinates and Change of Basis
The idea of vectors dates back to the early 1800s, but the generality of the
concept waited until Peano's work in 1888. It took many years to understand the
importance and extent of the ideas involved.
The underlying idea can be used to describe the forces and accelerations in
Newtonian mechanics, the potential functions of electromagnetism, the states
of systems in quantum mechanics, the least-squares fitting of experimental
data, and much more.
3.1 Vectors in Rn
The idea of a vector is far more general than the picture of a line with an
arrowhead attached to its end.
A short answer is “A vector is an element of a vector space”.
A vector in Rn is denoted as an ordered n-tuple (x1, x2, ..., xn), a sequence of n real numbers.
n-space: Rn is defined to be the set of all ordered n-tuples.
(1) An n-tuple (x1, x2, ..., xn) can be viewed as a point in Rn with the xi's as its coordinates.
(2) An n-tuple x = (x1, x2, ..., xn) can be viewed as a vector in Rn with the xi's as its components.

Ex: In R², the pair (x1, x2) can be viewed as a point with coordinates x1 and x2, or as a vector from the origin (0, 0) to (x1, x2).
Note:
A vector space is some set of things for which the operation of addition
and the operation of multiplication by a scalar are defined.
You don’t necessarily have to be able to multiply two vectors by each
other or even to be able to define the length of a vector, though those are
very useful operations.
The common example of directed line segments (arrows) in 2D or 3D
fits this idea, because you can add such arrows by the parallelogram law
and you can multiply them by numbers, changing their length (and
reversing direction for negative numbers).
A complete definition of a vector space requires pinning
down these properties of the operators and making the
concept of vector space less vague.
A vector space is a set whose elements are called “vectors”
and such that there are two operations defined on them:
you can add vectors to each other and you can multiply them
by scalars (numbers). These operations must obey certain
simple rules, the axioms for a vector space.
Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two vectors in Rn.

Equal:
u = v if and only if u1 = v1, u2 = v2, ..., un = vn

Vector addition (the sum of u and v):
u + v = (u1 + v1, u2 + v2, ..., un + vn)

Scalar multiplication (the scalar multiple of u by c):
cu = (cu1, cu2, ..., cun)

Negative:
-u = (-1)u = (-u1, -u2, -u3, ..., -un)

Difference:
u - v = u + (-1)v = (u1 - v1, u2 - v2, u3 - v3, ..., un - vn)

Zero vector:
0 = (0, 0, ..., 0)

Notes:
(1) The zero vector 0 in Rn is called the additive identity in Rn.
(2) The vector -v is called the additive inverse of v.

Thm 3.1: (the axioms for a vector space)
Let u, v, and w be vectors in Rn, and let c and d be scalars.
(1) u + v is a vector in Rn (closure under addition)
(2) u + v = v + u (commutative property)
(3) (u + v) + w = u + (v + w) (associative property)
(4) u + 0 = u (additive identity)
(5) u + (-u) = 0 (additive inverse)
(6) cu is a vector in Rn (closure under scalar multiplication)
(7) c(u + v) = cu + cv (distributive property)
(8) (c + d)u = cu + du (distributive property)
(9) c(du) = (cd)u
(10) 1(u) = u (multiplicative identity)

Ex: (Vector operations in R4)
Let u = (2, -1, 5, 0), v = (4, 3, 1, -1), and w = (-6, 2, 0, 3) be vectors in R4.
Solve for x in 3(x + w) = 2u - v + x.
Sol:
3(x + w) = 2u - v + x
3x + 3w = 2u - v + x
3x - x = 2u - v - 3w
2x = 2u - v - 3w
x = u - (1/2)v - (3/2)w
 = (2, -1, 5, 0) - (2, 3/2, 1/2, -1/2) - (-9, 3, 0, 9/2)
 = (9, -11/2, 9/2, -4)
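The arithmetic above can be spot-checked numerically; here is a minimal sketch in plain Python, using `fractions.Fraction` for exact arithmetic (the helper names `add` and `scale` are ours, not part of any library):

```python
from fractions import Fraction

def add(u, v):
    """Componentwise sum of two vectors in R^n."""
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    """Scalar multiple c*u."""
    return [c * a for a in u]

u = [Fraction(t) for t in (2, -1, 5, 0)]
v = [Fraction(t) for t in (4, 3, 1, -1)]
w = [Fraction(t) for t in (-6, 2, 0, 3)]

# x = u - (1/2)v - (3/2)w, obtained from 2x = 2u - v - 3w
x = add(u, add(scale(Fraction(-1, 2), v), scale(Fraction(-3, 2), w)))
assert x == [Fraction(9), Fraction(-11, 2), Fraction(9, 2), Fraction(-4)]

# verify the original equation 3(x + w) = 2u - v + x
lhs = scale(3, add(x, w))
rhs = add(add(scale(2, u), scale(-1, v)), x)
assert lhs == rhs
```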

Thm 3.2: (Properties of additive identity and additive inverse)
Let v be a vector in Rn and c be a scalar. Then the following are true.
(1) The additive identity is unique. That is, if u + v = v, then u = 0.
(2) The additive inverse of v is unique. That is, if v + u = 0, then u = -v.

Thm 3.3: (Properties of scalar multiplication)
Let v be any element of a vector space V, and let c be any scalar. Then the following properties are true.
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0.
(4) (-1)v = -v and -(-v) = v
Notes:
A vector u = (u1, u2, ..., un) in Rn can be viewed as:
a 1×n row matrix (row vector): u = [u1, u2, ..., un]
or
an n×1 column matrix (column vector): u = [u1, u2, ..., un]ᵀ
(The matrix operations of addition and scalar multiplication give the same results as the corresponding vector operations.)

Vector addition:
u + v = (u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn)
Scalar multiplication:
cu = c(u1, u2, ..., un) = (cu1, cu2, ..., cun)

Matrix algebra:
row form: u + v = [u1, ..., un] + [v1, ..., vn] = [u1 + v1, ..., un + vn]; cu = c[u1, ..., un] = [cu1, ..., cun]
column form: u + v = [u1, ..., un]ᵀ + [v1, ..., vn]ᵀ = [u1 + v1, ..., un + vn]ᵀ; cu = [cu1, ..., cun]ᵀ
Notes:
(1) A vector space consists of four entities: a set of vectors, a set of scalars, and two operations.
V: nonempty set of vectors
c: scalar
(u, v) → u + v: vector addition
(c, u) → cu: scalar multiplication
(V, +, ·) is called a vector space.
(2) V = {0}: the zero vector space, containing only the additive identity.

Examples of vector spaces:
(1) n-tuple space: Rn
vector addition: (u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn)
scalar multiplication: k(u1, u2, ..., un) = (ku1, ku2, ..., kun)
(2) Matrix space: V = Mm×n (the set of all m×n matrices with real entries)
Ex (m = n = 2):
vector addition:
[ u11 u12 ]   [ v11 v12 ]   [ u11+v11 u12+v12 ]
[ u21 u22 ] + [ v21 v22 ] = [ u21+v21 u22+v22 ]
scalar multiplication:
  [ u11 u12 ]   [ cu11 cu12 ]
c [ u21 u22 ] = [ cu21 cu22 ]
(3) n-th degree polynomial space: V = Pn(x)
(the set of all real polynomials of degree n or less)
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + ... + (an + bn)x^n
cp(x) = ca0 + ca1x + ... + can x^n
(4) Function space: the set of square-integrable real-valued functions of a real variable on the domain [a, b].
That is, those functions f and g with ∫_a^b |f(x)|² dx < ∞ and ∫_a^b |g(x)|² dx < ∞.
To check closure under addition, simply note the combination |f(x) + g(x)|² ≤ 2|f(x)|² + 2|g(x)|², so f + g is also square-integrable.
So axiom 1 is satisfied. You can verify that the rest of the 9 axioms are also satisfied.

Function Spaces:
Is this a vector space? How can a function be a vector?
This comes down to your understanding of the word
“function.” Is f(x) a function or is f(x) a number?
Answer: It’s a number. This is a confusion caused by the
conventional notation for functions. We routinely call f(x) a
function, but it is really the result of feeding the particular
value, x, to the function f in order to get the number f(x).
Think of the function f as the whole graph relating input to output; the pair
{x, f(x)} is then just one point on the graph. Adding two functions is adding
their graphs.
Notes: To show that a set is not a vector space, you need only find one axiom that is not satisfied.
Ex 1: The set of all integers is not a vector space.
Pf:
1 ∈ V and the scalar 1/2 ∈ R, but (1/2)(1) = 1/2 ∉ V
(a scalar times an integer need not be an integer, so the set is not closed under scalar multiplication)
Ex 2: The set of all second-degree polynomials is not a vector space.
Pf:
Let p(x) = x² and q(x) = -x² + x + 1.
Then p(x) + q(x) = x + 1 ∉ V
(it is not closed under vector addition)
3.3 Subspaces of Vector Spaces

Subspace:
Let (V, +, ·) be a vector space and W a nonempty subset of V (W ≠ ∅, W ⊆ V).
If (W, +, ·) is itself a vector space (under the operations of addition and scalar multiplication defined in V), then W is a subspace of V.

Trivial subspaces:
Every vector space V has at least two subspaces.
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of V.

Thm 3.4: (Test for a subspace)
If W is a nonempty subset of a vector space V, then W is a subspace of V if and only if the following conditions hold.
(1) If u and v are in W, then u + v is in W. (closure under addition)
(2) If u is in W and c is any scalar, then cu is in W. (closure under scalar multiplication)

Ex: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that W is a subspace of the vector space M2×2, with the standard operations of matrix addition and scalar multiplication.
Sol:
W ⊆ M2×2, and M2×2 is a vector space.
Let A1, A2 ∈ W (so A1ᵀ = A1 and A2ᵀ = A2).
(A1 + A2)ᵀ = A1ᵀ + A2ᵀ = A1 + A2, so A1 + A2 ∈ W.
For k ∈ R and A ∈ W, (kA)ᵀ = kAᵀ = kA, so kA ∈ W.
Therefore W is a subspace of M2×2.

Ex: (Determining subspaces of R3)
Which of the following subsets are subspaces of R³?
(a) W = {(x1, x2, 1) : x1, x2 ∈ R}
(b) W = {(x1, x1 + x3, x3) : x1, x3 ∈ R}
Sol:
(a) Let v = (0, 0, 1) ∈ W. Then (-1)v = (0, 0, -1) ∉ W,
so W is not a subspace of R³.
(b) Let v = (v1, v1 + v3, v3) ∈ W and u = (u1, u1 + u3, u3) ∈ W.
v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3) ∈ W
kv = (kv1, kv1 + kv3, kv3) ∈ W
So W is a subspace of R³.

Thm 3.5: (The intersection of two subspaces is a subspace)
If V and W are both subspaces of a vector space U, then the intersection of V and W (denoted V ∩ W) is also a subspace of U.
Proof: Follows directly from Thm 3.4.
3.4 Spanning Sets and Linear Independence

Linear combination:
A vector v in a vector space V is called a linear combination of the vectors u1, u2, ..., uk in V if v can be written in the form
v = c1u1 + c2u2 + ... + ckuk,  where c1, c2, ..., ck are scalars.
Ex:
Given v = (-1, -2, -2), u1 = (0, 1, 4), u2 = (-1, 1, 2), and u3 = (3, 1, 2) in R³, find a, b, and c such that v = au1 + bu2 + cu3.
Sol: Comparing components gives the system
 -b + 3c = -1
 a + b + c = -2
 4a + 2b + 2c = -2
⇒ a = 1, b = -2, c = -1
Thus v = u1 - 2u2 - u3.
Ex: (Finding a linear combination)
v1 = (1, 2, 3), v2 = (0, 1, 2), v3 = (-1, 0, 1)
Show that w = (1, 1, 1) is a linear combination of v1, v2, v3.
Sol:
w = c1v1 + c2v2 + c3v3
(1, 1, 1) = c1(1, 2, 3) + c2(0, 1, 2) + c3(-1, 0, 1)
 = (c1 - c3, 2c1 + c2, 3c1 + 2c2 + c3)
 c1 - c3 = 1
 2c1 + c2 = 1
 3c1 + 2c2 + c3 = 1

[ 1 0 -1 | 1 ]                 [ 1 0 -1 |  1 ]
[ 2 1  0 | 1 ]  —Gauss-Jordan→ [ 0 1  2 | -1 ]
[ 3 2  1 | 1 ]                 [ 0 0  0 |  0 ]

⇒ c1 = 1 + t, c2 = -1 - 2t, c3 = t
(this system has infinitely many solutions)
Taking t = 1:
⇒ w = 2v1 - 3v2 + v3
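Every value of the free parameter t gives a valid combination; a quick plain-Python spot check of the one-parameter family (`combo` is our own helper for forming c1v1 + c2v2 + c3v3):

```python
from fractions import Fraction

v1, v2, v3 = (1, 2, 3), (0, 1, 2), (-1, 0, 1)

def combo(c1, c2, c3):
    """Return c1*v1 + c2*v2 + c3*v3, computed componentwise."""
    return tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))

# the general solution c1 = 1 + t, c2 = -1 - 2t, c3 = t:
# every choice of t reproduces w = (1, 1, 1)
for t in [Fraction(0), Fraction(1), Fraction(-3), Fraction(1, 2)]:
    assert combo(1 + t, -1 - 2 * t, t) == (1, 1, 1)
```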

the span of a set: span(S)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S:
span(S) = {c1v1 + c2v2 + ... + ckvk : ci ∈ R}

a spanning set of a vector space:
If every vector in a given vector space U can be written as a linear combination of vectors in a given set S, then S is called a spanning set of the vector space U.
 Notes:
span(S) = V
⇔ S spans (generates) V
⇔ V is spanned (generated) by S
⇔ S is a spanning set of V

Notes:
(1) span(∅) = {0}
(2) S ⊆ span(S)
(3) If S1, S2 ⊆ V and S1 ⊆ S2, then span(S1) ⊆ span(S2)
 Ex: (A spanning set for R3)
Show that the set S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} spans R³.
Sol:
We must determine whether an arbitrary vector u = (u1, u2, u3) in R³ can be written as a linear combination of v1, v2, and v3.
u ∈ R³ ⇒ u = c1v1 + c2v2 + c3v3
 c1 - 2c3 = u1
 2c1 + c2 = u2
 3c1 + 2c2 + c3 = u3
The problem thus reduces to determining whether this system is consistent for all values of u1, u2, and u3.
Since
     | 1 0 -2 |
|A| = | 2 1  0 | ≠ 0,
     | 3 2  1 |
Ac = u has exactly one solution c for every u.
⇒ span(S) = R³

Thm 3.6: (span(S) is a subspace of V)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then
(a) span(S) is a subspace of V.
(b) span(S) is the smallest subspace of V that contains S,
i.e.,
every other subspace of V that contains S must contain span(S).

Linear independence (L.I.) and linear dependence (L.D.):
Let S = {v1, v2, ..., vk} be a set of vectors in a vector space V.
For the equation c1v1 + c2v2 + ... + ckvk = 0:
(1) If the equation has only the trivial solution (c1 = c2 = ... = ck = 0), then S is called linearly independent.
(2) If the equation has a nontrivial solution (i.e., not all ci are zero), then S is called linearly dependent.

Notes:
(1) ∅ is linearly independent.
(2) If 0 ∈ S, then S is linearly dependent.
(3) If v ≠ 0, then the single-vector set {v} is linearly independent.
(4) If S1 ⊆ S2:
 S1 linearly dependent ⇒ S2 linearly dependent
 S2 linearly independent ⇒ S1 linearly independent
 Ex: (Testing for linearly independent)
Determine whether the following set of vectors in R³ is L.I. or L.D.
S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}
Sol:
c1v1 + c2v2 + c3v3 = 0 ⇒
 c1 - 2c3 = 0
 2c1 + c2 = 0
 3c1 + 2c2 + c3 = 0

[ 1 0 -2 | 0 ]                 [ 1 0 0 | 0 ]
[ 2 1  0 | 0 ]  —Gauss-Jordan→ [ 0 1 0 | 0 ]
[ 3 2  1 | 0 ]                 [ 0 0 1 | 0 ]

⇒ c1 = c2 = c3 = 0 (only the trivial solution)
⇒ S is linearly independent.
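The same conclusion can be reached by checking that the determinant of the matrix whose rows are the vectors of S is nonzero; a minimal sketch in plain Python (`det3` is our own cofactor-expansion helper, not a library routine):

```python
def det3(m):
    """Determinant of a 3x3 matrix (nested lists), by cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# rows are the vectors of S; a nonzero determinant means the homogeneous
# system has only the trivial solution, so S is linearly independent
S = [[1, 2, 3], [0, 1, 2], [-2, 0, 1]]
assert det3(S) != 0
```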

Ex: (Testing for linear independence)
Determine whether the following set of vectors in P2 is L.I. or L.D.
S = {1 + x - 2x², 2 + 5x - x², x + x²} = {v1, v2, v3}
Sol:
c1v1 + c2v2 + c3v3 = 0
i.e. c1(1 + x - 2x²) + c2(2 + 5x - x²) + c3(x + x²) = 0 + 0x + 0x²
 c1 + 2c2 = 0
 c1 + 5c2 + c3 = 0
 -2c1 - c2 + c3 = 0

[  1  2 0 | 0 ]         [ 1 0 -2/3 | 0 ]
[  1  5 1 | 0 ]  —G.J.→ [ 0 1  1/3 | 0 ]
[ -2 -1 1 | 0 ]         [ 0 0  0   | 0 ]

⇒ This system has infinitely many solutions.
(i.e., this system has nontrivial solutions.)
⇒ S is linearly dependent. (Ex: c1 = 2, c2 = -1, c3 = 3)
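The nontrivial solution found above can be verified directly by treating each polynomial as its coefficient tuple (constant, x, x²):

```python
# coefficient tuples (constant, x, x^2) for the three polynomials
p1 = (1, 1, -2)   # 1 + x - 2x^2
p2 = (2, 5, -1)   # 2 + 5x - x^2
p3 = (0, 1, 1)    # x + x^2

c1, c2, c3 = 2, -1, 3   # the nontrivial solution from the example
combo = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(p1, p2, p3))
assert combo == (0, 0, 0)   # 2*p1 - p2 + 3*p3 is the zero polynomial
```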

Ex: (Testing for linear independence)
Determine whether the following set of vectors in the 2×2 matrix space is L.I. or L.D.
S = { [2 1; 0 1], [3 0; 2 1], [1 0; 2 0] } = {v1, v2, v3}
Sol:
c1v1 + c2v2 + c3v3 = 0

   [ 2 1 ]      [ 3 0 ]      [ 1 0 ]   [ 0 0 ]
c1 [ 0 1 ] + c2 [ 2 1 ] + c3 [ 2 0 ] = [ 0 0 ]

Comparing entries:
 2c1 + 3c2 + c3 = 0
 c1 = 0
 2c2 + 2c3 = 0
 c1 + c2 = 0

[ 2 3 1 | 0 ]                 [ 1 0 0 | 0 ]
[ 1 0 0 | 0 ]  —Gauss-Jordan→ [ 0 1 0 | 0 ]
[ 0 2 2 | 0 ]                 [ 0 0 1 | 0 ]
[ 1 1 0 | 0 ]                 [ 0 0 0 | 0 ]

⇒ c1 = c2 = c3 = 0 (this system has only the trivial solution)
⇒ S is linearly independent.

Thm 3.7: (A property of linearly dependent sets)
A set S = {v1, v2, ..., vk}, k ≥ 2, is linearly dependent if and only if at least one of the vectors vj in S can be written as a linear combination of the other vectors in S.
Pf: (⇒)
Since S is linearly dependent, c1v1 + c2v2 + ... + ckvk = 0 with ci ≠ 0 for some i.
⇒ vi = -(c1/ci)v1 - ... - (c(i-1)/ci)v(i-1) - (c(i+1)/ci)v(i+1) - ... - (ck/ci)vk
(⇐)
Let vi = d1v1 + ... + d(i-1)v(i-1) + d(i+1)v(i+1) + ... + dkvk.
⇒ d1v1 + ... + d(i-1)v(i-1) - vi + d(i+1)v(i+1) + ... + dkvk = 0
⇒ c1 = d1, ..., ci = -1, ..., ck = dk (a nontrivial solution)
⇒ S is linearly dependent.

Corollary to Theorem 3.7:
Two vectors u and v in a vector space V are linearly dependent if and only if one is a scalar multiple of the other.
3.5 Basis and Dimension

Basis:
V: a vector space, S = {v1, v2, ..., vn} ⊆ V
(a) S spans V (i.e., span(S) = V)
(b) S is linearly independent
⇒ S is called a basis for V
(Bases are exactly the sets that are both spanning sets and linearly independent sets.)

Bases and dimension:
A basis for a vector space V is a linearly independent spanning set of the vector space V,
i.e.,
any vector in the space can be written as a linear combination of elements of this set. The dimension of the space is the number of elements in this basis.
Note:
Beginning with the most elementary problems in physics and mathematics, it is
clear that the choice of an appropriate coordinate system can provide great
computational advantages.
For example:
1. For the usual two- and three-dimensional vectors, it is useful to express an arbitrary vector as a sum of unit vectors.
2. Similarly, the use of Fourier series for the analysis of functions is a very powerful tool in analysis.
These two ideas are essentially the same thing when you look at them as aspects of vector spaces.
 Notes:
(1) Ø is a basis for {0}
(2) The standard basis for R3: {i, j, k}, i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
(3) The standard basis for Rn: {e1, e2, ..., en}, e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1)
 Ex: R4: {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
(4) The standard basis for the m×n matrix space: { Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n }, where Eij has a 1 in entry (i, j) and 0 elsewhere.
 Ex: 2×2 matrix space: { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }
(5) The standard basis for Pn(x): {1, x, x², ..., xⁿ}
 Ex: P3(x): {1, x, x², x³}

Thm 3.8: (Uniqueness of basis representation)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector in V can be written as a linear combination of vectors in S in one and only one way.
Pf:
Note S is a basis ⇒ (1) span(S) = V and (2) S is linearly independent.
Since span(S) = V, suppose v = c1v1 + c2v2 + ... + cnvn
and v = b1v1 + b2v2 + ... + bnvn.
⇒ 0 = (c1 - b1)v1 + (c2 - b2)v2 + ... + (cn - bn)vn
Since S is linearly independent,
c1 = b1, c2 = b2, ..., cn = bn (i.e., uniqueness).

Thm 3.9: (Bases and linear dependence)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every set containing more than n vectors in V is linearly dependent.
Pf:
Let S1 = {u1, u2, ..., um}, m > n.
Since span(S) = V and each ui ∈ V:
 u1 = c11v1 + c21v2 + ... + cn1vn
 u2 = c12v1 + c22v2 + ... + cn2vn
 ...
 um = c1mv1 + c2mv2 + ... + cnmvn
Let k1u1 + k2u2 + ... + kmum = 0.
⇒ d1v1 + d2v2 + ... + dnvn = 0 with di = ci1k1 + ci2k2 + ... + cimkm
Since S is L.I., di = 0 for all i, i.e.
 c11k1 + c12k2 + ... + c1mkm = 0
 c21k1 + c22k2 + ... + c2mkm = 0
 ...
 cn1k1 + cn2k2 + ... + cnmkm = 0
A homogeneous system with fewer equations than variables (n < m) has infinitely many solutions.
⇒ k1u1 + k2u2 + ... + kmum = 0 has a nontrivial solution
⇒ S1 is linearly dependent.
 Notes:
Let dim(V) = n.
(1) dim({0}) = 0 = #(Ø)
(2) For S ⊆ V:
 S a spanning set ⇒ #(S) ≥ n
 S a L.I. set ⇒ #(S) ≤ n
 S a basis ⇒ #(S) = n
(3) If W is a subspace of V, then dim(W) ≤ n.

Thm 3.10: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then every basis for V has n vectors. (That is, all bases for a finite-dimensional vector space have the same number of vectors.)
Pf: Let S = {v1, v2, ..., vn} and S' = {u1, u2, ..., um} be two bases for a vector space V.
 S is a basis and S' is L.I. ⇒ n ≥ m
 S is L.I. and S' is a basis ⇒ n ≤ m
⇒ n = m
Finite-dimensional:
A vector space V is called finite-dimensional if it has a basis consisting of a finite number of elements.
Infinite-dimensional:
If a vector space V is not finite-dimensional, then it is called infinite-dimensional.

Dimension:
The dimension of a finite-dimensional vector space V is defined to be the number of vectors in a basis for V.
V: a vector space, S: a basis for V
⇒ dim(V) = #(S) (the number of vectors in S)

Ex: (Finding the dimension of a subspace)
(a) W1 = {(d, c - d, c): c and d are real numbers}
(b) W2 = {(2b, b, 0): b is a real number}
Sol: Find a set of L.I. vectors that spans the subspace.
(a) (d, c - d, c) = c(0, 1, 1) + d(1, -1, 0)
⇒ S = {(0, 1, 1), (1, -1, 0)} (S is L.I. and S spans W1)
⇒ S is a basis for W1
⇒ dim(W1) = #(S) = 2
(b) (2b, b, 0) = b(2, 1, 0)
⇒ S = {(2, 1, 0)} spans W2 and S is L.I.
⇒ S is a basis for W2
⇒ dim(W2) = #(S) = 1

Ex: (Finding the dimension of a subspace)
Let W be the subspace of all symmetric matrices in M2×2. What is the dimension of W?
Sol:
W = { [a b; b c] : a, b, c ∈ R }

[ a b ]     [ 1 0 ]     [ 0 1 ]     [ 0 0 ]
[ b c ] = a [ 0 0 ] + b [ 1 0 ] + c [ 0 1 ]

⇒ S = { [1 0; 0 0], [0 1; 1 0], [0 0; 0 1] } spans W and S is L.I.
⇒ S is a basis for W
⇒ dim(W) = #(S) = 3

Thm 3.11: (Basis tests in an n-dimensional space)
Let V be a vector space of dimension n.
(1) If S = {v1, v2, ..., vn} is a linearly independent set of vectors in V, then S is a basis for V.
(2) If S = {v1, v2, ..., vn} spans V, then S is a basis for V.
(In an n-dimensional space, a set of exactly n vectors is a basis as soon as it is either linearly independent or spanning.)
3.6 Rank of a Matrix and Systems of Linear Equations

Row vectors:
For an m×n matrix

A = [ a11 a12 ... a1n
      a21 a22 ... a2n
      ...
      am1 am2 ... amn ]

the row vectors of A are
A(1) = (a11, a12, ..., a1n)
A(2) = (a21, a22, ..., a2n)
...
A(m) = (am1, am2, ..., amn)

Column vectors:
The column vectors of A are
A^(1) = (a11, a21, ..., am1)ᵀ, A^(2) = (a12, a22, ..., am2)ᵀ, ..., A^(n) = (a1n, a2n, ..., amn)ᵀ
Let A be an m×n matrix.
Row space:
The row space of A is the subspace of Rn spanned by the m row vectors of A.
RS(A) = { α1A(1) + α2A(2) + ... + αmA(m) | α1, α2, ..., αm ∈ R }
Column space:
The column space of A is the subspace of Rm spanned by the n column vectors of A.
CS(A) = { β1A^(1) + β2A^(2) + ... + βnA^(n) | β1, β2, ..., βn ∈ R }
Null space:
The null space of A is the set of all solutions of Ax = 0, and it is a subspace of Rn.
NS(A) = { x ∈ Rn | Ax = 0 }

Thm 3.12: (Row-equivalent matrices have the same row space)
If an m×n matrix A is row equivalent to an m×n matrix B, then the row space of A is equal to the row space of B.

Notes:
(1) The row space of a matrix is not changed by elementary row operations: RS(r(A)) = RS(A), where r is any elementary row operation.
(2) However, elementary row operations can change the column space.

Thm 3.13: (Basis for the row space of a matrix)
If a matrix A is row equivalent to a matrix B in row-echelon form, then the nonzero row vectors of B form a basis for the row space of A.
Ex: (Finding a basis for a row space)
Find a basis for the row space of

A = [  1  3  1  3
       0  1  1  0
      -3  0  6 -1
       3  4 -2  1
       2  0 -4 -2 ]

Sol: Gaussian elimination gives

B = [ 1  3  1  3    <- w1
      0  1  1  0    <- w2
      0  0  0  1    <- w3
      0  0  0  0
      0  0  0  0 ]

A basis for RS(A) = {the nonzero row vectors of B} (Thm 3.13)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}

Notes (with a1, ..., a4 the columns of A and b1, ..., b4 the columns of B):
(1) b3 = -2b1 + b2, and correspondingly a3 = -2a1 + a2
(2) {b1, b2, b4} is L.I. ⇒ {a1, a2, a4} is L.I.

Ex: (Finding a basis for the column space of a matrix)
Find a basis for the column space of the same matrix A, with columns c1, c2, c3, c4.
Sol. 1: Since CS(A) = RS(Aᵀ), transpose and row-reduce:

Aᵀ = [ 1 0 -3  3  2
       3 1  0  4  0
       1 1  6 -2 -4
       3 0 -1  1 -2 ]

G.E. → B = [ 1 0 -3  3  2    <- w1
             0 1  9 -5 -6    <- w2
             0 0  1 -1 -1    <- w3
             0 0  0  0  0 ]

A basis for CS(A) = a basis for RS(Aᵀ)
= {the nonzero row vectors of B} = {w1, w2, w3}
= { (1, 0, -3, 3, 2)ᵀ, (0, 1, 9, -5, -6)ᵀ, (0, 0, 1, -1, -1)ᵀ }
(a basis for the column space of A)
Note: This basis is not a subset of {c1, c2, c3, c4}.

Sol. 2: Row-reduce A directly:

A = [c1 c2 c3 c4]  —G.E.→  B = [ 1 3 1 3
                                 0 1 1 0
                                 0 0 0 1
                                 0 0 0 0
                                 0 0 0 0 ] = [v1 v2 v3 v4]

The leading 1's occur in columns 1, 2, and 4, so
{v1, v2, v4} is a basis for CS(B)
{c1, c2, c4} is a basis for CS(A)
Notes: (1) This basis is a subset of {c1, c2, c3, c4}.
(2) v3 = -2v1 + v2, thus c3 = -2c1 + c2.
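Note (2) can be confirmed numerically; a short plain-Python check on the matrix of this example (`col` is our own helper for extracting a column):

```python
A = [
    [ 1, 3,  1,  3],
    [ 0, 1,  1,  0],
    [-3, 0,  6, -1],
    [ 3, 4, -2,  1],
    [ 2, 0, -4, -2],
]

def col(M, j):
    """Extract column j of matrix M as a list."""
    return [row[j] for row in M]

c1, c2, c3 = col(A, 0), col(A, 1), col(A, 2)
# the dependency among the columns of B carries over to the columns of A
assert c3 == [-2 * x + y for x, y in zip(c1, c2)]   # c3 = -2*c1 + c2
```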

Thm 3.14: (Solutions of a homogeneous system)
If A is an m×n matrix, then the set of all solutions of Ax = 0 is a subspace of Rn called the nullspace of A.
Proof: NS(A) ⊆ Rn.
Since A0 = 0, NS(A) ≠ ∅.
Let x1, x2 ∈ NS(A) (i.e., Ax1 = 0 and Ax2 = 0). Then
(1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0 (closure under addition)
(2) A(cx1) = c(Ax1) = c(0) = 0 (closure under scalar multiplication)
Thus NS(A) is a subspace of Rn.
Notes: The nullspace of A is also called the solution space of the homogeneous system Ax = 0.
 Ex: Find the solution space of a homogeneous system Ax = 0.
1 2  2 1
A  3 6  5 4
1 2 0 3
Sol: The nullspace of A is the solution space of Ax = 0. 
1 2  2 1
1 2 0 3
. J .E
A  3 6  5 4 G
0 0 1 1
1 2 0 3
0 0 0 0
 x1 = –2s – 3t, x2 = s, x3 = –t, x4 = t
 x1   2s  3t   2  3
 x   s   1  0
  s    t    sv1  tv 2
 x   2  
 x3    t   0   1
  
    
x
t
  0  1
 4 
 NS ( A)  {sv1  tv 2 | s, t  R}
4 - 61
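The result can be spot-checked by multiplying A against the two basis vectors of the nullspace (`matvec` is our own helper, not a library routine):

```python
A = [
    [1, 2, -2, 1],
    [3, 6, -5, 4],
    [1, 2,  0, 3],
]

def matvec(M, x):
    """Matrix-vector product M @ x with plain lists."""
    return [sum(a * b for a, b in zip(row, x)) for row in M]

v1 = [-2, 1,  0, 0]
v2 = [-3, 0, -1, 1]

# both basis vectors, and hence every combination s*v1 + t*v2,
# are mapped to the zero vector
assert matvec(A, v1) == [0, 0, 0]
assert matvec(A, v2) == [0, 0, 0]
s, t = 4, -7
x = [s * a + t * b for a, b in zip(v1, v2)]
assert matvec(A, x) == [0, 0, 0]
```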

Thm 3.15: (Row and column space have equal dimensions)
If A is an m×n matrix, then the row space and the column space of A have the same dimension:
dim(RS(A)) = dim(CS(A))

Rank:
The dimension of the row (or column) space of a matrix A is called the rank of A:
rank(A) = dim(RS(A)) = dim(CS(A))

Nullity:
The dimension of the nullspace of A is called the nullity of A:
nullity(A) = dim(NS(A))

Notes: rank(Aᵀ) = dim(RS(Aᵀ)) = dim(CS(A)) = rank(A); therefore rank(Aᵀ) = rank(A).

Thm 3.16: (Dimension of the solution space)
If A is an m×n matrix of rank r, then the dimension of the solution space of Ax = 0 is n - r. That is,
nullity(A) = n - rank(A) = n - r
n = rank(A) + nullity(A)

Notes: (n = #variables = #leading variables + #nonleading variables)
(1) rank(A): the number of leading variables in the solution of Ax = 0
(i.e., the number of nonzero rows in the row-echelon form of A)
(2) nullity(A): the number of free (nonleading) variables in the solution of Ax = 0.

Notes:
If A is an m×n matrix and rank(A) = r, then:

Fundamental space    Dimension
RS(A) = CS(Aᵀ)       r
CS(A) = RS(Aᵀ)       r
NS(A)                n - r
NS(Aᵀ)               m - r

Ex: (Rank and nullity of a matrix)
Let the column vectors of the matrix A be denoted by a1, a2, a3, a4, and a5.

A = [ 1  0 -2  1   0
      0 -1 -3  1   3
     -2 -1  1 -1   3
      0  3  9  0 -12 ]
     a1 a2 a3 a4  a5

(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis for the column space of A.
Sol: Let B be the reduced row-echelon form of A:

B = [ 1  0 -2  0  1
      0  1  3  0 -4
      0  0  0  1 -1
      0  0  0  0  0 ]
     b1 b2 b3 b4 b5

(a) rank(A) = 3 (the number of nonzero rows in B)
nullity(A) = n - rank(A) = 5 - 3 = 2
(b) The leading 1's occur in columns 1, 2, and 4, so
{b1, b2, b4} is a basis for CS(B), and {a1, a2, a4} is a basis for CS(A):
a1 = (1, 0, -2, 0)ᵀ, a2 = (0, -1, -1, 3)ᵀ, a4 = (1, 1, -1, 0)ᵀ
(c) b3 = -2b1 + 3b2, and correspondingly a3 = -2a1 + 3a2.
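The rank computation can be reproduced with a small exact row reduction; this is a sketch under our own naming (`rref_rank` is a hand-rolled helper built on `fractions.Fraction`, not a library routine):

```python
from fractions import Fraction

def rref_rank(M):
    """Row-reduce a copy of M with exact Fraction arithmetic and
    return its rank (the number of nonzero rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    rank = 0
    for j in range(cols):
        # find a pivot in column j at or below row `rank`
        pivot = next((i for i in range(rank, rows) if M[i][j] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        M[rank] = [x / M[rank][j] for x in M[rank]]
        # clear column j in every other row
        for i in range(rows):
            if i != rank and M[i][j] != 0:
                M[i] = [a - M[i][j] * b for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

A = [
    [ 1,  0, -2,  1,   0],
    [ 0, -1, -3,  1,   3],
    [-2, -1,  1, -1,   3],
    [ 0,  3,  9,  0, -12],
]
r = rref_rank(A)
n = len(A[0])
assert r == 3 and n - r == 2   # rank(A) = 3, nullity(A) = 5 - 3 = 2
```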
 Thm 3.17: (Solutions of an inhomogeneous linear system)
If xp is a particular solution of the inhomogeneous system Ax = b, then every solution of this system can be written in the form x = xp + xh, where xh is a solution of the corresponding homogeneous system Ax = 0.
Pf:
Let x be any solution of Ax = b.
⇒ A(x - xp) = Ax - Axp = b - b = 0
⇒ x - xp is a solution of Ax = 0
Letting xh = x - xp gives x = xp + xh.

Ex: (Finding the solution set of an inhomogeneous system)
Find the set of all solution vectors of the system of linear equations
 x1 - 2x3 + x4 = 5
 3x1 + x2 - 5x3 = 8
 x1 + 2x2 - 5x4 = -9
Sol:

[ 1 0 -2  1 |  5 ]           [ 1 0 -2  1 |  5 ]
[ 3 1 -5  0 |  8 ]  —G.E.→   [ 0 1  1 -3 | -7 ]
[ 1 2  0 -5 | -9 ]           [ 0 0  0  0 |  0 ]

Letting x3 = s and x4 = t:
x = (x1, x2, x3, x4)
 = (2s - t + 5, -s + 3t - 7, s, t)
 = s(2, -1, 1, 0) + t(-1, 3, 0, 1) + (5, -7, 0, 0)
 = su1 + tu2 + xp
i.e. xp = (5, -7, 0, 0)ᵀ is a particular solution vector of Ax = b,
and xh = su1 + tu2 is a solution of Ax = 0.
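The decomposition x = xp + xh can be verified directly (`matvec` is our own helper):

```python
A = [
    [1, 0, -2,  1],
    [3, 1, -5,  0],
    [1, 2,  0, -5],
]
b = [5, 8, -9]

def matvec(M, x):
    """Matrix-vector product M @ x with plain lists."""
    return [sum(p * q for p, q in zip(row, x)) for row in M]

xp = [5, -7, 0, 0]          # particular solution
u1 = [2, -1, 1, 0]          # homogeneous solutions
u2 = [-1, 3, 0, 1]

assert matvec(A, xp) == b
assert matvec(A, u1) == [0, 0, 0]
assert matvec(A, u2) == [0, 0, 0]

# hence xp + s*u1 + t*u2 solves Ax = b for any s, t
s, t = 3, -2
x = [p + s * a + t * c for p, a, c in zip(xp, u1, u2)]
assert matvec(A, x) == b
```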

Thm 3.18: (Solution of a system of linear equations)
The system of linear equations Ax = b is consistent if and only if b is in the column space of A (i.e., b ∈ CS(A)).
Pf:
Let A = [aij] (m×n), x = (x1, x2, ..., xn)ᵀ, and b = (b1, b2, ..., bm)ᵀ
be the coefficient matrix, the column matrix of unknowns, and the right-hand side, respectively, of the system Ax = b. Then

Ax = [ a11x1 + a12x2 + ... + a1nxn
       a21x1 + a22x2 + ... + a2nxn
       ...
       am1x1 + am2x2 + ... + amnxn ]
   = x1 A^(1) + x2 A^(2) + ... + xn A^(n),

where A^(j) denotes the j-th column vector of A. Hence, Ax = b is consistent if and only if b is a linear combination of the columns of A. That is, the system is consistent if and only if b is in the subspace of Rm spanned by the columns of A.
 Notes:
If rank([A|b]) = rank(A), then the system Ax = b is consistent (Thm 3.18).

Ex: (Consistency of a system of linear equations)
 x1 + x2 - x3 = -1
 x1 + x3 = 3
 3x1 + 2x2 - x3 = 1
Sol:

A = [ 1 1 -1           [ 1 0  1
      1 0  1 ]  —G.E.→   0 1 -2
      3 2 -1 ]           0 0  0 ]

[A|b] = [ 1 1 -1 -1           [ 1 0  1  3
          1 0  1  3 ]  —G.E.→   0 1 -2 -4
          3 2 -1  1 ]           0 0  0  0 ]
         c1 c2 c3  b           w1 w2 w3  v

v = 3w1 - 4w2 (note: w3 is not a leading-1 column vector)
⇒ b = 3c1 - 4c2 + 0c3
(b is in the column space of A)
⇒ The system of linear equations is consistent.
Check: rank(A) = rank([A|b]) = 2
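The conclusion b = 3c1 - 4c2 can be verified componentwise in plain Python:

```python
c1 = [1, 1, 3]    # columns of A
c2 = [1, 0, 2]
c3 = [-1, 1, -1]
b  = [-1, 3, 1]

# b = 3*c1 - 4*c2 + 0*c3, so b lies in CS(A) and the system is consistent
combo = [3 * p - 4 * q + 0 * r for p, q, r in zip(c1, c2, c3)]
assert combo == b
```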

Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent.
(1) A is invertible.
(2) Ax = b has a unique solution for any n×1 matrix b.
(3) Ax = 0 has only the trivial solution.
(4) A is row-equivalent to In.
(5) |A| ≠ 0
(6) rank(A) = n
(7) The n row vectors of A are linearly independent.
(8) The n column vectors of A are linearly independent.
3.7 Coordinates and Change of Basis

Coordinate representation relative to a basis:
Let B = {v1, v2, ..., vn} be an ordered basis for a vector space V and let x be a vector in V such that
x = c1v1 + c2v2 + ... + cnvn.
The scalars c1, c2, ..., cn are called the coordinates of x relative to the basis B. The coordinate matrix (or coordinate vector) of x relative to B is the column matrix in Rn whose components are the coordinates of x:
[x]B = (c1, c2, ..., cn)ᵀ
Ex: (Coordinates and components in Rn)
Find the coordinate matrix of x = (-2, 1, 3) in R³ relative to the standard basis
S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
Sol:
x = (-2, 1, 3) = -2(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1)
⇒ [x]S = (-2, 1, 3)ᵀ

Ex: (Finding a coordinate matrix relative to a nonstandard basis)
Find the coordinate matrix of x = (1, 2, -1) in R³ relative to the (nonstandard) basis
B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.
Sol:
x = c1u1 + c2u2 + c3u3
(1, 2, -1) = c1(1, 0, 1) + c2(0, -1, 2) + c3(2, 3, -5)
 c1 + 2c3 = 1
 -c2 + 3c3 = 2
 c1 + 2c2 - 5c3 = -1
In matrix form:

[ 1  0  2 ] [c1]   [  1 ]
[ 0 -1  3 ] [c2] = [  2 ]
[ 1  2 -5 ] [c3]   [ -1 ]

Solving gives [x]B' = (c1, c2, c3)ᵀ = (5, -8, -2)ᵀ.

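The coordinates relative to the nonstandard basis can be verified by recombining the basis vectors:

```python
u1, u2, u3 = (1, 0, 1), (0, -1, 2), (2, 3, -5)
c1, c2, c3 = 5, -8, -2   # the coordinates found in the example

# recombine: c1*u1 + c2*u2 + c3*u3 should reproduce x = (1, 2, -1)
x = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(u1, u2, u3))
assert x == (1, 2, -1)
```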
Change of basis:
You were given the coordinates of a vector relative to one basis B and were asked to find the coordinates relative to another basis B'.

Ex: (Change of basis)
Consider two bases for a vector space V: B = {u1, u2} and B' = {u1', u2'}.
If [u1']B = (a, b)ᵀ and [u2']B = (c, d)ᵀ,
i.e., u1' = au1 + bu2 and u2' = cu1 + du2,
then for v ∈ V with [v]B' = (k1, k2)ᵀ:
v = k1u1' + k2u2'
 = k1(au1 + bu2) + k2(cu1 + du2)
 = (k1a + k2c)u1 + (k1b + k2d)u2

⇒ [v]B = [ k1a + k2c ]   [ a c ] [ k1 ]
         [ k1b + k2d ] = [ b d ] [ k2 ] = [ [u1']B [u2']B ] [v]B'

Transition matrix from B' to B:
Let B = {u1, u2, ..., un} and B' = {u1', u2', ..., un'} be two bases for a vector space V.
If [v]B is the coordinate matrix of v relative to B and [v]B' is the coordinate matrix of v relative to B', then
[v]B = P[v]B'
where
P = [ [u1']B, [u2']B, ..., [un']B ]
is called the transition matrix from B' to B.

Thm 3.19: (The inverse of a transition matrix)
If P is the transition matrix from a basis B' to a basis B in Rn, then
(1) P is invertible
(2) The transition matrix from B to B' is P⁻¹

Notes:
B = {u1, u2, ..., un}, B' = {u1', u2', ..., un'}
[v]B = [ [u1']B, [u2']B, ..., [un']B ] [v]B' = P[v]B'
[v]B' = [ [u1]B', [u2]B', ..., [un]B' ] [v]B = P⁻¹[v]B

Thm 3.20: (Transition matrix from B to B')
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two bases for Rn. Then the transition matrix P⁻¹ from B to B' can be found by using Gauss-Jordan elimination on the n×2n matrix [B' ⋮ B] as follows:
[B' ⋮ B] → [In ⋮ P⁻¹]
 Ex: (Finding a transition matrix)
B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)} are two bases for R².
(a) Find the transition matrix from B' to B.
(b) Let [v]B' = (1, 2)ᵀ; find [v]B.
(c) Find the transition matrix from B to B'.
Sol:
(a) Row-reduce [B ⋮ B']:

[ -3  4 ⋮ -1  2 ]             [ 1 0 ⋮ 3 -2 ]
[  2 -2 ⋮  2 -2 ]  —G.J.E.→   [ 0 1 ⋮ 2 -1 ]
     B      B'                    I     P

P = [ 3 -2
      2 -1 ]  (the transition matrix from B' to B)

(b) [v]B = P[v]B' = [ 3 -2 ; 2 -1 ] (1, 2)ᵀ = (-1, 0)ᵀ

(c) Row-reduce [B' ⋮ B]:

[ -1  2 ⋮ -3  4 ]             [ 1 0 ⋮ -1  2 ]
[  2 -2 ⋮  2 -2 ]  —G.J.E.→   [ 0 1 ⋮ -2  3 ]
     B'     B                     I     P⁻¹

P⁻¹ = [ -1  2
        -2  3 ]  (the transition matrix from B to B')

Check:
PP⁻¹ = [ 3 -2 ; 2 -1 ] [ -1 2 ; -2 3 ] = [ 1 0 ; 0 1 ] = I2
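Both parts can be spot-checked with small hand-rolled 2×2 matrix helpers (`matmul2` and `matvec2` are ours, not library functions):

```python
def matmul2(P, Q):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec2(P, v):
    """2x2 matrix times a 2-vector."""
    return [sum(P[i][k] * v[k] for k in range(2)) for i in range(2)]

P     = [[3, -2], [2, -1]]    # transition matrix from B' to B
P_inv = [[-1, 2], [-2, 3]]    # transition matrix from B to B'

assert matmul2(P, P_inv) == [[1, 0], [0, 1]]   # P * P^{-1} = I2
assert matvec2(P, [1, 2]) == [-1, 0]           # [v]_B = P [v]_{B'}
```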
 Ex: (Coordinate representation in P3(x))
Find the coordinate matrix of p = 3x³ - 2x² + 4 relative to the basis S = {1, 1 + x, 1 + x², 1 + x³} in P3(x).
Sol:
p = 3(1) + 0(1 + x) + (-2)(1 + x²) + 3(1 + x³)
⇒ [p]S = (3, 0, -2, 3)ᵀ
 Ex: (Coordinate representation in M2x2)
Find the coordinate matrix of x = [5 6; 7 8] relative to the standard basis of M2×2:
B = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }
Sol:

[ 5 6 ]     [ 1 0 ]     [ 0 1 ]     [ 0 0 ]     [ 0 0 ]
[ 7 8 ] = 5 [ 0 0 ] + 6 [ 0 0 ] + 7 [ 1 0 ] + 8 [ 0 1 ]

⇒ [x]B = (5, 6, 7, 8)ᵀ