Linear Algebra II
Chapter 2: Vector Spaces
2.1 Vectors in R^n
2.2 Vector Spaces
2.3 Subspaces of Vector Spaces
2.4 Spanning Sets and Linear Independence
2.5 Basis and Dimension
2.6 Rank of a Matrix and Systems of Linear Equations
2.7 Coordinates and Change of Basis

2.1 Vectors in R^n
An ordered n-tuple: a sequence of n real numbers (x1, x2, ..., xn).

n-space R^n: the set of all ordered n-tuples.

Ex:
n = 1: R^1 = 1-space = the set of all real numbers
n = 2: R^2 = 2-space = the set of all ordered pairs of real numbers (x1, x2)
n = 3: R^3 = 3-space = the set of all ordered triples of real numbers (x1, x2, x3)
n = 4: R^4 = 4-space = the set of all ordered quadruples of real numbers (x1, x2, x3, x4)
Notes:
(1) An n-tuple (x1, x2, ..., xn) can be viewed as a point in R^n with the xi's as its coordinates.
(2) An n-tuple (x1, x2, ..., xn) can be viewed as a vector x = (x1, x2, ..., xn) in R^n with the xi's as its components.

Ex: (x1, x2) can be pictured either as a point or as a vector drawn from the origin (0, 0).
Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two vectors in R^n.

Equal: u = v if and only if u1 = v1, u2 = v2, ..., un = vn.

Vector addition (the sum of u and v): u + v = (u1 + v1, u2 + v2, ..., un + vn)

Scalar multiplication (the scalar multiple of u by c): cu = (cu1, cu2, ..., cun)

Notes: The sum of two vectors and the scalar multiple of a vector in R^n are called the standard operations in R^n.

Negative: -u = (-u1, -u2, -u3, ..., -un)

Difference: u - v = (u1 - v1, u2 - v2, u3 - v3, ..., un - vn)

Zero vector: 0 = (0, 0, ..., 0)

Notes:
(1) The zero vector 0 in R^n is called the additive identity in R^n.
(2) The vector -v is called the additive inverse of v.
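A quick numerical illustration of the standard operations in R^n (not part of the original slides; a minimal sketch using NumPy, with the vectors chosen arbitrarily):

    import numpy as np

    u = np.array([1.0, -2.0, 3.0])
    v = np.array([0.5, 4.0, -1.0])
    c = 2.0

    print(u + v)        # vector addition: (u1+v1, ..., un+vn)
    print(c * u)        # scalar multiplication: (c*u1, ..., c*un)
    print(-u)           # negative (additive inverse)
    print(u - v)        # difference
    print(np.zeros(3))  # zero vector (additive identity)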
Thm 4.2: (Properties of vector addition and scalar multiplication)
Let u, v, and w be vectors in R^n, and let c and d be scalars.
(1) u + v is a vector in R^n
(2) u + v = v + u
(3) (u + v) + w = u + (v + w)
(4) u + 0 = u
(5) u + (-u) = 0
(6) cu is a vector in R^n
(7) c(u + v) = cu + cv
(8) (c + d)u = cu + du
(9) c(du) = (cd)u
(10) 1(u) = u
Ex 5: (Vector operations in R^4)
Let u = (2, -1, 5, 0), v = (4, 3, 1, -1), and w = (-6, 2, 0, 3) be vectors in R^4. Solve for x in each of the following.
(a) x = 2u - (v + 3w)
(b) 3(x + w) = 2u - v + x
Sol:
(a) x = 2u - (v + 3w)
      = 2u - v - 3w
      = (4, -2, 10, 0) - (4, 3, 1, -1) - (-18, 6, 0, 9)
      = (4 - 4 + 18, -2 - 3 - 6, 10 - 1 - 0, 0 + 1 - 9)
      = (18, -11, 9, -8)
(b) 3(x + w) = 2u - v + x
    3x + 3w = 2u - v + x
    3x - x = 2u - v - 3w
    2x = 2u - v - 3w
    x = u - (1/2)v - (3/2)w
      = (2, -1, 5, 0) - (2, 3/2, 1/2, -1/2) - (-9, 3, 0, 9/2)
      = (9, -11/2, 9/2, -4)
Thm 4.3: (Properties of additive identity and additive inverse)
Let v be a vector in R^n and let c be a scalar. Then the following properties are true.
(1) The additive identity is unique. That is, if u + v = v, then u = 0.
(2) The additive inverse of v is unique. That is, if v + u = 0, then u = -v.
(3) 0v = 0
(4) c0 = 0
(5) If cv = 0, then c = 0 or v = 0.
(6) -(-v) = v
Linear combination:
The vector x is called a linear combination of v1, v2, ..., vn if it can be expressed in the form
x = c1 v1 + c2 v2 + ... + cn vn,   where c1, c2, ..., cn are scalars.

Ex 6:
Given x = (-1, -2, -2), u = (0, 1, 4), v = (-1, 1, 2), and w = (3, 1, 2) in R^3, find a, b, and c such that x = au + bv + cw.
Sol: Comparing components gives the system
       -  b + 3c = -1
   a +  b +   c = -2
  4a + 2b + 2c = -2
=> a = 1, b = -2, c = -1
Thus x = u - 2v - w.
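Finding the coefficients a, b, c in Ex 6 amounts to solving a 3x3 linear system whose columns are u, v, w (not part of the original slides; a minimal sketch using NumPy):

    import numpy as np

    u = np.array([0, 1, 4])
    v = np.array([-1, 1, 2])
    w = np.array([3, 1, 2])
    x = np.array([-1, -2, -2])

    A = np.column_stack([u, v, w])      # columns are u, v, w
    a, b, c = np.linalg.solve(A, x)     # solve A [a b c]^T = x
    print(a, b, c)                      # 1.0 -2.0 -1.0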
Notes:
A vector u = (u1, u2, ..., un) in R^n can be viewed as
  a 1×n row matrix (row vector):   u = [u1, u2, ..., un]
or
  an n×1 column matrix (column vector):   u = [u1; u2; ...; un]
(The matrix operations of addition and scalar multiplication give the same results as the corresponding vector operations.)

Vector addition:        u + v = (u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn)
Scalar multiplication:  cu = c(u1, u2, ..., un) = (cu1, cu2, ..., cun)

As row matrices:        u + v = [u1, u2, ..., un] + [v1, v2, ..., vn] = [u1 + v1, u2 + v2, ..., un + vn]
                        cu = c[u1, u2, ..., un] = [cu1, cu2, ..., cun]

As column matrices:     u + v = [u1; u2; ...; un] + [v1; v2; ...; vn] = [u1 + v1; u2 + v2; ...; un + vn]
                        cu = c[u1; u2; ...; un] = [cu1; cu2; ...; cun]
Keywords in Section 2.1:
  ordered n-tuple
  n-space
  equal
  vector addition
  scalar multiplication
  negative
  difference
  zero vector
  additive identity
  additive inverse
2.2 Vector Spaces

Vector spaces:
Let V be a set on which two operations (vector addition and scalar multiplication) are defined. If the following axioms are satisfied for every u, v, and w in V and every scalar (real number) c and d, then V is called a vector space.

Addition:
(1) u + v is in V
(2) u + v = v + u
(3) u + (v + w) = (u + v) + w
(4) V has a zero vector 0 such that for every u in V, u + 0 = u
(5) For every u in V, there is a vector in V denoted by -u such that u + (-u) = 0

Scalar multiplication:
(6) cu is in V
(7) c(u + v) = cu + cv
(8) (c + d)u = cu + du
(9) c(du) = (cd)u
(10) 1(u) = u

Notes:
(1) A vector space consists of four entities: a set of vectors, a set of scalars, and two operations.
    V: nonempty set,  c: scalar
    (u, v) -> u + v: vector addition
    (c, u) -> cu: scalar multiplication
    (V, +, .) is then called a vector space.
(2) V = {0} is the zero vector space.
(1) n-tuple space: Rn
(u1 , u2 ,, un )  (v1 , v2 ,, vn )  (u1  v1 , u2  v2 ,, un  vn ) vector addition
scalar multiplication
k (u1 , u2 ,, un )  (ku1 , ku2 ,, kun )
(2) Matrix space: V  M mn (the set of all m×n matrices with real values)
Ex: :(m = n = 2)
u11 u12  v11 v12   u11  v11 u12  v12 
u u   v v   u  v u  v 
 21 22   21 22   21 21 22 22 
u11 u12   ku11 ku12 
k



u
u
ku
ku
22 
 21 22   21
vector addition
scalar multiplication
(3) n-th degree polynomial space: V  Pn (x)
(the set of all real polynomials of degree n or less)
p( x)  q( x)  (a0  b0 )  (a1  b1 ) x    (an  bn ) x n
kp( x)  ka0  ka1 x    kan x n
(4) Function space: V  c(, ) (the set of all real-valued
continuous functions defined on the entire real line.)
( f  g )( x)  f ( x)  g ( x)
(kf )( x)  kf ( x)
 Theorem
Theorem 2.4: (Properties of scalar multiplication)
Let v be any element of a vector space V, and let c be any scalar. Then the following properties are true.
(1) 0v = 0
(2) c0 = 0
(3) If cv = 0, then c = 0 or v = 0
(4) (-1)v = -v
Notes: To show that a set is not a vector space, you need only find one axiom that is not satisfied.

Ex 6: The set of all integers is not a vector space.
Pf: 1 ∈ V and the scalar 1/2 ∈ R, but (1/2)(1) = 1/2 ∉ V (it is not an integer),
so V is not closed under scalar multiplication.

Ex 7: The set of all second-degree polynomials is not a vector space.
Pf: Let p(x) = x^2 and q(x) = -x^2 + x + 1.
Then p(x) + q(x) = x + 1 ∉ V, so V is not closed under vector addition.

Ex 8:
V = R^2, the set of all ordered pairs of real numbers, with
vector addition:        (u1, u2) + (v1, v2) = (u1 + v1, u2 + v2)
scalar multiplication:  c(u1, u2) = (cu1, 0)
Verify that V is not a vector space.
Sol:
1(1, 1) = (1, 0) ≠ (1, 1), so axiom (10) fails.
=> The set (together with the two given operations) is not a vector space.
Keywords in Section 2.2:
  vector space
  n-space
  matrix space
  polynomial space
  function space
2.3 Subspaces of Vector Spaces

Subspace:
Let (V, +, .) be a vector space and let W be a nonempty subset of V (W ≠ ∅, W ⊆ V).
If (W, +, .) is itself a vector space under the operations of addition and scalar multiplication defined in V, then W is a subspace of V.

Trivial subspaces:
Every vector space V has at least two subspaces.
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of V.

Theorem 2.5: (Test for a subspace)
If W is a nonempty subset of a vector space V, then W is a subspace of V if and only if the following conditions hold.
(1) If u and v are in W, then u + v is in W.
(2) If u is in W and c is any scalar, then cu is in W.
Ex: Subspaces of R^2
(1) {0}, where 0 = (0, 0)
(2) Lines through the origin
(3) R^2

Ex: Subspaces of R^3
(1) {0}, where 0 = (0, 0, 0)
(2) Lines through the origin
(3) Planes through the origin
(4) R^3
Ex 2: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that W is a subspace of the vector space M2×2, with the standard operations of matrix addition and scalar multiplication.
Sol:
W ⊆ M2×2, and M2×2 is a vector space.
Let A1, A2 ∈ W (so A1^T = A1 and A2^T = A2).
  (A1 + A2)^T = A1^T + A2^T = A1 + A2, so A1 + A2 ∈ W.
  For k ∈ R and A ∈ W, (kA)^T = kA^T = kA, so kA ∈ W.
Therefore W is a subspace of M2×2.
Ex 3: (The set of singular matrices is not a subspace of M2×2)
Let W be the set of singular matrices of order 2. Show that W is not a subspace of M2×2 with the standard operations.
Sol:
A = [1 0; 0 0] ∈ W  and  B = [0 0; 0 1] ∈ W,
but A + B = [1 0; 0 1] = I2 ∉ W (it is nonsingular).
Therefore W is not a subspace of M2×2.
Ex 4: (The set of first-quadrant vectors is not a subspace of R^2)
Show that W = {(x1, x2) : x1 ≥ 0 and x2 ≥ 0}, with the standard operations, is not a subspace of R^2.
Sol:
Let u = (1, 1) ∈ W.
(-1)u = (-1)(1, 1) = (-1, -1) ∉ W  (not closed under scalar multiplication)
Therefore W is not a subspace of R^2.
Ex 6: (Determining subspaces of R^2)
Which of the following two subsets is a subspace of R^2?
(a) The set of points on the line given by x + 2y = 0.
(b) The set of points on the line given by x + 2y = 1.
Sol:
(a) W = {(x, y) | x + 2y = 0} = {(-2t, t) | t ∈ R}
    Let v1 = (-2t1, t1) ∈ W and v2 = (-2t2, t2) ∈ W.
    v1 + v2 = (-2(t1 + t2), t1 + t2) ∈ W       (closed under addition)
    kv1 = (-2(kt1), kt1) ∈ W                   (closed under scalar multiplication)
    => W is a subspace of R^2.
(b) W = {(x, y) | x + 2y = 1}
    (Note: the zero vector (0, 0) is not on the line.)
    Let v = (1, 0) ∈ W. Then (-1)v = (-1, 0) ∉ W.
    => W is not a subspace of R^2.
Ex 8: (Determining subspaces of R^3)
Which of the following subsets is a subspace of R^3?
(a) W = {(x1, x2, 1) | x1, x2 ∈ R}
(b) W = {(x1, x1 + x3, x3) | x1, x3 ∈ R}
Sol:
(a) Let v = (0, 0, 1) ∈ W. Then (-1)v = (0, 0, -1) ∉ W.
    => W is not a subspace of R^3.
(b) Let v = (v1, v1 + v3, v3) ∈ W and u = (u1, u1 + u3, u3) ∈ W. Then
    v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3) ∈ W
    kv = (kv1, kv1 + kv3, kv3) ∈ W
    => W is a subspace of R^3.
Thm 4.6: (The intersection of two subspaces is a subspace)
If V and W are both subspaces of a vector space U, then the intersection of V and W (denoted by V ∩ W) is also a subspace of U.
Keywords in Section 2.3:
  subspace
  trivial subspace
2.4 Spanning Sets and Linear Independence

Linear combination:
A vector v in a vector space V is called a linear combination of the vectors u1, u2, ..., uk in V if v can be written in the form
v = c1 u1 + c2 u2 + ... + ck uk,   where c1, c2, ..., ck are scalars.
Ex 2-3: (Finding a linear combination)
v1 = (1, 2, 3),  v2 = (0, 1, 2),  v3 = (-1, 0, 1)
Prove that (a) w = (1, 1, 1) is a linear combination of v1, v2, v3
           (b) w = (1, -2, 2) is not a linear combination of v1, v2, v3
Sol:
(a) w = c1 v1 + c2 v2 + c3 v3
(1, 1, 1) = c1(1, 2, 3) + c2(0, 1, 2) + c3(-1, 0, 1) = (c1 - c3, 2c1 + c2, 3c1 + 2c2 + c3)

   c1        -  c3 = 1
  2c1 +  c2        = 1
  3c1 + 2c2 +  c3 = 1

  [ 1 0 -1 | 1 ]                          [ 1 0 -1 |  1 ]
  [ 2 1  0 | 1 ]   --Gauss-Jordan-->   [ 0 1  2 | -1 ]
  [ 3 2  1 | 1 ]                          [ 0 0  0 |  0 ]

=> c1 = 1 + t, c2 = -1 - 2t, c3 = t   (this system has infinitely many solutions)
Taking t = 1:  w = 2v1 - 3v2 + v3

(b) The equation w = c1 v1 + c2 v2 + c3 v3 leads to

  [ 1 0 -1 |  1 ]                         [ 1 0 -1 |  1 ]
  [ 2 1  0 | -2 ]   --Gauss-Jordan-->  [ 0 1  2 | -4 ]
  [ 3 2  1 |  2 ]                         [ 0 0  0 |  7 ]

=> this system has no solution (0 ≠ 7)
=> w cannot be written as c1 v1 + c2 v2 + c3 v3.
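A consistency check like the one in Ex 2-3 can be automated by comparing rank(A) with rank([A | w]) (not part of the original slides; a minimal sketch using NumPy):

    import numpy as np

    A = np.column_stack([[1, 2, 3], [0, 1, 2], [-1, 0, 1]])   # columns v1, v2, v3
    for w in ([1, 1, 1], [1, -2, 2]):
        augmented = np.column_stack([A, w])
        consistent = np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)
        print(w, "is a linear combination of v1, v2, v3:", consistent)
    # [1, 1, 1] -> True,   [1, -2, 2] -> False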
The span of a set: span(S)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S:
span(S) = {c1 v1 + c2 v2 + ... + ck vk | ci ∈ R}

A spanning set of a vector space:
If every vector in a given vector space can be written as a linear combination of vectors in a given set S, then S is called a spanning set of the vector space.

Notes:
span(S) = V
<=> S spans (generates) V
<=> V is spanned (generated) by S
<=> S is a spanning set of V

Notes:
(1) span(∅) = {0}
(2) S ⊆ span(S)
(3) If S1, S2 ⊆ V and S1 ⊆ S2, then span(S1) ⊆ span(S2).
Ex 5: (A spanning set for R^3)
Show that the set S = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} spans R^3.
Sol:
We must determine whether an arbitrary vector u = (u1, u2, u3) in R^3 can be written as a linear combination of v1, v2, and v3.
u ∈ R^3  =>  u = c1 v1 + c2 v2 + c3 v3, i.e.
   c1        - 2c3 = u1
  2c1 +  c2        = u2
  3c1 + 2c2 +  c3 = u3
The problem thus reduces to determining whether this system is consistent for all values of u1, u2, and u3. The coefficient matrix

  A = [ 1 0 -2 ]
      [ 2 1  0 ]
      [ 3 2  1 ]

has det(A) ≠ 0, so Ax = b has exactly one solution for every u.
=> span(S) = R^3
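The spanning test in Ex 5 reduces to showing that the coefficient matrix is invertible; this is easy to check numerically (not from the original slides; a minimal sketch using NumPy):

    import numpy as np

    A = np.array([[1, 0, -2],
                  [2, 1,  0],
                  [3, 2,  1]])            # coefficients of c1, c2, c3
    print(np.linalg.det(A))               # -1.0 (nonzero)
    print(np.linalg.matrix_rank(A))       # 3, so the three vectors span R^3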
Thm 4.7: (Span(S) is a subspace of V)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then
(a) span(S) is a subspace of V.
(b) span(S) is the smallest subspace of V that contains S.
    (Every other subspace of V that contains S must contain span(S).)
Linearly Independent (L.I.) and Linearly Dependent (L.D.):
Let S = {v1, v2, ..., vk} be a set of vectors in a vector space V, and consider the equation
c1 v1 + c2 v2 + ... + ck vk = 0
(1) If the equation has only the trivial solution (c1 = c2 = ... = ck = 0), then S is called linearly independent.
(2) If the equation has a nontrivial solution (i.e., not all ci are zero), then S is called linearly dependent.

Notes:
(1) ∅ is linearly independent.
(2) If 0 ∈ S, then S is linearly dependent.
(3) If v ≠ 0, then {v} is linearly independent.
(4) If S1 ⊆ S2, then:
    S1 is linearly dependent  =>  S2 is linearly dependent
    S2 is linearly independent  =>  S1 is linearly independent
Ex 8: (Testing for linear independence)
Determine whether the following set of vectors in R^3 is L.I. or L.D.
S = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} = {v1, v2, v3}
Sol:
c1 v1 + c2 v2 + c3 v3 = 0  =>
   c1        - 2c3 = 0
  2c1 +  c2        = 0
  3c1 + 2c2 +  c3 = 0

  [ 1 0 -2 | 0 ]                          [ 1 0 0 | 0 ]
  [ 2 1  0 | 0 ]   --Gauss-Jordan-->   [ 0 1 0 | 0 ]
  [ 3 2  1 | 0 ]                          [ 0 0 1 | 0 ]

=> c1 = c2 = c3 = 0 (only the trivial solution)
=> S is linearly independent.
Ex 9: (Testing for linear independence)
Determine whether the following set of vectors in P2 is L.I. or L.D.
S = {1 + x - 2x^2, 2 + 5x - x^2, x + x^2} = {v1, v2, v3}
Sol:
c1 v1 + c2 v2 + c3 v3 = 0
i.e. c1(1 + x - 2x^2) + c2(2 + 5x - x^2) + c3(x + x^2) = 0 + 0x + 0x^2

   c1 + 2c2       = 0         [  1  2 0 | 0 ]                          [ 1 2  0  | 0 ]
   c1 + 5c2 + c3 = 0    =>  [  1  5 1 | 0 ]   --Gauss-Jordan-->   [ 0 1 1/3 | 0 ]
 -2c1 -  c2 + c3 = 0         [ -2 -1 1 | 0 ]                          [ 0 0  0  | 0 ]

=> This system has infinitely many solutions, i.e. it has nontrivial solutions.
=> S is linearly dependent.  (For example, c1 = 2, c2 = -1, c3 = 3.)
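The test in Ex 9 can be phrased as a rank computation on the coefficient vectors of the polynomials (not part of the original slides; a minimal sketch using NumPy):

    import numpy as np

    # coefficient vectors (constant, x, x^2) of 1+x-2x^2, 2+5x-x^2, x+x^2
    V = np.array([[1, 1, -2],
                  [2, 5, -1],
                  [0, 1,  1]])
    print(np.linalg.matrix_rank(V))   # 2 < 3, so the set is linearly dependent

    # one nontrivial relation from the slides: 2*v1 - 1*v2 + 3*v3 = 0
    print(2*V[0] - V[1] + 3*V[2])     # [0 0 0]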
Ex 10: (Testing for linear independence)
Determine whether the following set of vectors in the 2×2 matrix space is L.I. or L.D.
S = { [2 1; 0 1], [3 0; 2 1], [1 0; 2 0] } = {v1, v2, v3}
Sol:
c1 v1 + c2 v2 + c3 v3 = 0
c1[2 1; 0 1] + c2[3 0; 2 1] + c3[1 0; 2 0] = [0 0; 0 0]

  2c1 + 3c2 +  c3 = 0
   c1             = 0
        2c2 + 2c3 = 0
   c1 +  c2       = 0

  [ 2 3 1 | 0 ]                          [ 1 0 0 | 0 ]
  [ 1 0 0 | 0 ]   --Gauss-Jordan-->   [ 0 1 0 | 0 ]
  [ 0 2 2 | 0 ]                          [ 0 0 1 | 0 ]
  [ 1 1 0 | 0 ]                          [ 0 0 0 | 0 ]

=> c1 = c2 = c3 = 0 (This system has only the trivial solution.)
=> S is linearly independent.
Thm 4.8: (A property of linearly dependent sets)
A set S = {v1, v2, ..., vk}, k ≥ 2, is linearly dependent if and only if at least one of the vectors vj in S can be written as a linear combination of the other vectors in S.
Pf:
(=>) If S is linearly dependent, then c1 v1 + c2 v2 + ... + ck vk = 0 with ci ≠ 0 for some i, so
     vi = -(c1/ci) v1 - ... - (c(i-1)/ci) v(i-1) - (c(i+1)/ci) v(i+1) - ... - (ck/ci) vk.
(<=) Let vi = d1 v1 + ... + d(i-1) v(i-1) + d(i+1) v(i+1) + ... + dk vk. Then
     d1 v1 + ... + d(i-1) v(i-1) - vi + d(i+1) v(i+1) + ... + dk vk = 0,
     so c1 = d1, ..., c(i-1) = d(i-1), ci = -1, c(i+1) = d(i+1), ..., ck = dk is a nontrivial solution,
     and S is linearly dependent.

Corollary to Theorem 4.8:
Two vectors u and v in a vector space V are linearly dependent if and only if one is a scalar multiple of the other.
Keywords in Section 2.4:
  linear combination
  spanning set
  trivial solution
  linearly independent
  linearly dependent

2.5 Basis and Dimension
Basis:
Let V be a vector space and S = {v1, v2, ..., vn} ⊆ V.
If
(a) S spans V (i.e., span(S) = V), and
(b) S is linearly independent,
then S is called a basis for V.
[Diagram: bases are exactly the sets that are both generating (spanning) sets and linearly independent sets.]

Notes:
(1) ∅ is a basis for {0}.
(2) The standard basis for R^3 is {i, j, k}, where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1).
(3) The standard basis for R^n is {e1, e2, ..., en}, where e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1).
    Ex: R^4: {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}
(4) The standard basis for the m×n matrix space is {Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
    Ex: 2×2 matrix space: { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }
(5) The standard basis for Pn(x) is {1, x, x^2, ..., x^n}.
    Ex: P3(x): {1, x, x^2, x^3}
Thm 4.9: (Uniqueness of basis representation)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector in V can be written in one and only one way as a linear combination of vectors in S.
Pf:
S is a basis  =>  (1) span(S) = V and (2) S is linearly independent.
span(S) = V: every v in V can be written as v = c1 v1 + c2 v2 + ... + cn vn.
Suppose also v = b1 v1 + b2 v2 + ... + bn vn. Then
  0 = (c1 - b1) v1 + (c2 - b2) v2 + ... + (cn - bn) vn.
Since S is linearly independent, c1 = b1, c2 = b2, ..., cn = bn (i.e., the representation is unique).
Thm 4.10: (Bases and linear dependence)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every set containing more than n vectors in V is linearly dependent.
Pf:
Let S1 = {u1, u2, ..., um}, m > n. Since span(S) = V, each ui ∈ V can be written as
  u1 = c11 v1 + c21 v2 + ... + cn1 vn
  u2 = c12 v1 + c22 v2 + ... + cn2 vn
  ...
  um = c1m v1 + c2m v2 + ... + cnm vn
Let k1 u1 + k2 u2 + ... + km um = 0. Substituting gives
  d1 v1 + d2 v2 + ... + dn vn = 0,   where di = ci1 k1 + ci2 k2 + ... + cim km.
Since S is L.I., every di = 0, i.e.
  c11 k1 + c12 k2 + ... + c1m km = 0
  c21 k1 + c22 k2 + ... + c2m km = 0
  ...
  cn1 k1 + cn2 k2 + ... + cnm km = 0
By Thm 1.1, a homogeneous system with fewer equations than variables has infinitely many solutions.
Since m > n, the equation k1 u1 + k2 u2 + ... + km um = 0 has a nontrivial solution.
=> S1 is linearly dependent.
Thm 4.11: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then every basis for V has n vectors. (All bases for a finite-dimensional vector space have the same number of vectors.)
Pf:
Let S = {v1, v2, ..., vn} and S' = {u1, u2, ..., um} be two bases for V.
  S is a basis and S' is L.I.  =>  (Thm 4.10)  m ≤ n
  S' is a basis and S is L.I.  =>  (Thm 4.10)  n ≤ m
Therefore n = m.
Finite dimensional:
A vector space V is called finite dimensional if it has a basis consisting of a finite number of elements.

Infinite dimensional:
If a vector space V is not finite dimensional, then it is called infinite dimensional.

Dimension:
The dimension of a finite dimensional vector space V is defined to be the number of vectors in a basis for V.
V: a vector space,  S: a basis for V  =>  dim(V) = #(S)  (the number of vectors in S)
Notes:
(1) dim({0}) = 0 = #(∅)
(2) If dim(V) = n and S ⊆ V, then
    S is a generating set  =>  #(S) ≥ n
    S is a L.I. set        =>  #(S) ≤ n
    S is a basis           =>  #(S) = n
    [Diagram: generating sets have #(S) ≥ n, linearly independent sets have #(S) ≤ n, and bases have #(S) = n.]
(3) If dim(V) = n and W is a subspace of V, then dim(W) ≤ n.
Ex:
(1) Vector space R^n:    basis {e1, e2, ..., en}              =>  dim(R^n) = n
(2) Vector space Mmxn:   basis {Eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n}   =>  dim(Mmxn) = mn
(3) Vector space Pn(x):  basis {1, x, x^2, ..., x^n}          =>  dim(Pn(x)) = n + 1
(4) Vector space P(x):   basis {1, x, x^2, ...}               =>  dim(P(x)) = ∞
Ex 9: (Finding the dimension of a subspace)
(a) W = {(d, c - d, c): c and d are real numbers}
(b) W = {(2b, b, 0): b is a real number}
Sol: (Note: find a set of L.I. vectors that spans the subspace.)
(a) (d, c - d, c) = c(0, 1, 1) + d(1, -1, 0)
    S = {(0, 1, 1), (1, -1, 0)}  (S is L.I. and S spans W)
    => S is a basis for W  =>  dim(W) = #(S) = 2
(b) (2b, b, 0) = b(2, 1, 0)
    S = {(2, 1, 0)} spans W and S is L.I.
    => S is a basis for W  =>  dim(W) = #(S) = 1
Ex 11: (Finding the dimension of a subspace)
Let W be the subspace of all symmetric matrices in M2×2. What is the dimension of W?
Sol:
W = { [a b; b c] : a, b, c ∈ R }
[a b; b c] = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1]
=> S = { [1 0; 0 0], [0 1; 1 0], [0 0; 0 1] } spans W and S is L.I.
=> S is a basis for W
=> dim(W) = #(S) = 3
Thm 4.12: (Basis tests in an n-dimensional space)
Let V be a vector space of dimension n.
(1) If S = {v1, v2, ..., vn} is a linearly independent set of vectors in V, then S is a basis for V.
(2) If S = {v1, v2, ..., vn} spans V, then S is a basis for V.
[Diagram: when dim(V) = n, generating sets have #(S) ≥ n, linearly independent sets have #(S) ≤ n, and bases have #(S) = n.]
Keywords in Section 2.5:
  basis
  dimension
  finite dimensional
  infinite dimensional
2.6 Rank of a Matrix and Systems of Linear Equations

Let A be an m×n matrix.

Row space:
The row space of A is the subspace of R^n spanned by the row vectors of A:
RS(A) = {α1 A(1) + α2 A(2) + ... + αm A(m) | α1, α2, ..., αm ∈ R}

Column space:
The column space of A is the subspace of R^m spanned by the column vectors of A:
CS(A) = {β1 A^(1) + β2 A^(2) + ... + βn A^(n) | β1, β2, ..., βn ∈ R}

Null space:
The null space of A is the set of all solutions of Ax = 0, and it is a subspace of R^n:
NS(A) = {x ∈ R^n | Ax = 0}
Row vectors of A:
For A = [aij] (m×n), the row vectors of A are
  A(1) = [a11, a12, ..., a1n]
  A(2) = [a21, a22, ..., a2n]
  ...
  A(m) = [am1, am2, ..., amn]

Column vectors of A:
A = [A^(1) A^(2) ... A^(n)], where
  A^(1) = [a11; a21; ...; am1],  A^(2) = [a12; a22; ...; am2],  ...,  A^(n) = [a1n; a2n; ...; amn]
Thm 4.13: (Row-equivalent matrices have the same row space)
If an m×n matrix A is row equivalent to an m×n matrix B, then the row space of A is equal to the row space of B.

Notes:
(1) The row space of a matrix is not changed by elementary row operations:
    RS(r(A)) = RS(A), where r is any elementary row operation.
(2) Elementary row operations can change the column space.

Thm 4.14: (Basis for the row space of a matrix)
If a matrix A is row equivalent to a matrix B in row-echelon form, then the nonzero row vectors of B form a basis for the row space of A.
Ex 2: (Finding a basis for a row space)
Find a basis for the row space of

  A = [  1  3  1  3 ]
      [  0  1  1  0 ]
      [ -3  0  6 -1 ]
      [  3  4 -2  1 ]
      [  2  0 -4 -2 ]
        a1 a2  a3 a4

Sol: Gaussian elimination gives

  A   --G.E.-->   B = [ 1 3 1 3 ]  w1
                      [ 0 1 1 0 ]  w2
                      [ 0 0 0 1 ]  w3
                      [ 0 0 0 0 ]
                      [ 0 0 0 0 ]
                       b1 b2 b3 b4

A basis for RS(A) = {the nonzero row vectors of B}   (Thm 4.14)
                  = {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}

Notes:
(1) b3 = -2b1 + b2, and correspondingly a3 = -2a1 + a2.
(2) {b1, b2, b4} is L.I., and correspondingly {a1, a2, a4} is L.I.
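A row-space basis such as the one in Ex 2 can be obtained from a computer algebra system by row reducing A (not part of the original slides; a minimal sketch using SymPy):

    from sympy import Matrix

    A = Matrix([[ 1, 3,  1,  3],
                [ 0, 1,  1,  0],
                [-3, 0,  6, -1],
                [ 3, 4, -2,  1],
                [ 2, 0, -4, -2]])

    R, pivots = A.rref()                             # reduced row-echelon form and pivot columns
    basis = [R.row(i) for i in range(len(pivots))]   # nonzero rows of R
    print(basis)
    # these rows may differ from the slide's row-echelon form, but they span
    # the same row space as A (Thm 4.13 / Thm 4.14)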
Ex 3: (Finding a basis for a subspace)
Find a basis for the subspace of R^3 spanned by
S = {(-1, 2, 5), (3, 0, 3), (5, 1, 8)} = {v1, v2, v3}
Sol:
  A = [ -1 2 5 ]  v1                    [ 1 -2 -5 ]  w1
      [  3 0 3 ]  v2   --G.E.-->   B = [ 0  1  3 ]  w2
      [  5 1 8 ]  v3                    [ 0  0  0 ]

A basis for span({v1, v2, v3})
  = a basis for RS(A)
  = {the nonzero row vectors of B}   (Thm 4.14)
  = {w1, w2} = {(1, -2, -5), (0, 1, 3)}
Ex 4: (Finding a basis for the column space of a matrix)
Find a basis for the column space of the matrix A given in Ex 2.
Sol. 1: Work with A^T, since CS(A) = RS(A^T).

  A^T = [ 1 0 -3  3  2 ]                      [ 1 0 -3  3  2 ]  w1
        [ 3 1  0  4  0 ]   --G.E.-->    B = [ 0 1  9 -5 -6 ]  w2
        [ 1 1  6 -2 -4 ]                      [ 0 0  1 -1 -1 ]  w3
        [ 3 0 -1  1 -2 ]                      [ 0 0  0  0  0 ]

A basis for CS(A) = a basis for RS(A^T)
  = {the nonzero row vectors of B} = {w1, w2, w3}
  = { [1; 0; -3; 3; 2], [0; 1; 9; -5; -6], [0; 0; 1; -1; -1] }   (written as column vectors)

Note: This basis is not a subset of the set of column vectors {c1, c2, c3, c4} of A.
Sol. 2: Use the row-echelon form B of A from Ex 2 (columns c1, c2, c3, c4 of A and v1, v2, v3, v4 of B):

  A = [  1  3  1  3 ]                      [ 1 3 1 3 ]
      [  0  1  1  0 ]                      [ 0 1 1 0 ]
      [ -3  0  6 -1 ]   --G.E.-->    B = [ 0 0 0 1 ]
      [  3  4 -2  1 ]                      [ 0 0 0 0 ]
      [  2  0 -4 -2 ]                      [ 0 0 0 0 ]
        c1 c2  c3 c4                        v1 v2 v3 v4

The leading 1's occur in columns 1, 2, and 4.
=> {v1, v2, v4} is a basis for CS(B)
=> {c1, c2, c4} is a basis for CS(A)

Notes:
(1) This basis is a subset of {c1, c2, c3, c4}.
(2) v3 = -2v1 + v2, and thus c3 = -2c1 + c2.
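The pivot-column method of Sol. 2 is also easy to carry out with a computer algebra system (not part of the original slides; a minimal sketch using SymPy):

    from sympy import Matrix

    A = Matrix([[ 1, 3,  1,  3],
                [ 0, 1,  1,  0],
                [-3, 0,  6, -1],
                [ 3, 4, -2,  1],
                [ 2, 0, -4, -2]])

    _, pivots = A.rref()
    print(pivots)                          # (0, 1, 3): leading 1's in columns 1, 2, 4
    print([A.col(j) for j in pivots])      # {c1, c2, c4}, a basis for CS(A)
    # A.columnspace() returns these same pivot columns of A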
Thm 4.16: (Solutions of a homogeneous system)
If A is an m×n matrix, then the set of all solutions of the homogeneous system of linear equations Ax = 0 is a subspace of R^n called the nullspace of A.
Pf:
NS(A) = {x ∈ R^n | Ax = 0} ⊆ R^n
NS(A) ≠ ∅, since A0 = 0.
Let x1, x2 ∈ NS(A) (i.e., Ax1 = 0 and Ax2 = 0). Then
(1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0      (closed under addition)
(2) A(cx1) = c(Ax1) = c(0) = 0              (closed under scalar multiplication)
Thus NS(A) is a subspace of R^n.

Note: The nullspace of A is also called the solution space of the homogeneous system Ax = 0.
1 2  2 1
Find the nullspace of the matrix A.
A  3 6  5 4
1 2 0 3
Sol: The nullspace of A is the solution space of Ax = 0.
1 2  2 1
1 2 0 3
. J .E
A  3 6  5 4 G
0 0 1 1
1 2 0 3
0 0 0 0
 x1 = –2s – 3t, x2 = s, x3 = –t, x4 = t
 x1   2s  3t   2  3
 x   s   1  0
  s    t    sv1  tv 2
 x   2  
 x3    t   0   1
  
    
x
t
  0  1
 4 
 NS ( A)  {sv1  tv 2 | s, t  R}
Elementary Linear Algebra: Section 4.6, p.240
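The parametric solution in Ex 6 matches what a nullspace routine returns (not part of the original slides; a minimal sketch using SymPy):

    from sympy import Matrix

    A = Matrix([[1, 2, -2, 1],
                [3, 6, -5, 4],
                [1, 2,  0, 3]])

    for v in A.nullspace():       # basis vectors of NS(A)
        print(v.T, (A * v).T)     # each satisfies A v = 0
    # the two basis vectors correspond to v1 = (-2, 1, 0, 0) and v2 = (-3, 0, -1, 1)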
Thm 4.15: (Row and column space have equal dimensions)
If A is an m×n matrix, then the row space and the column space of A have the same dimension:
dim(RS(A)) = dim(CS(A))

Rank:
The dimension of the row (or column) space of a matrix A is called the rank of A and is denoted by rank(A):
rank(A) = dim(RS(A)) = dim(CS(A))

Nullity:
The dimension of the nullspace of A is called the nullity of A:
nullity(A) = dim(NS(A))

Note: rank(A^T) = rank(A)
Pf: rank(A^T) = dim(RS(A^T)) = dim(CS(A)) = rank(A)
Thm 4.17: (Dimension of the solution space)
If A is an m×n matrix of rank r, then the dimension of the solution space of Ax = 0 is n - r. That is,
n = rank(A) + nullity(A)

Notes:
(1) rank(A): the number of leading variables in the solution of Ax = 0
    (the number of nonzero rows in the row-echelon form of A).
(2) nullity(A): the number of free variables in the solution of Ax = 0.

Notes: If A is an m×n matrix and rank(A) = r, then

  Fundamental space     Dimension
  RS(A) = CS(A^T)       r
  CS(A) = RS(A^T)       r
  NS(A)                 n - r
  NS(A^T)               m - r
Ex 7: (Rank and nullity of a matrix)
Let the column vectors of the matrix A be denoted by a1, a2, a3, a4, and a5.

  A = [  1  0 -2  1   0 ]
      [  0 -1 -3  1   3 ]
      [ -2 -1  1 -1   3 ]
      [  0  3  9  0 -12 ]
        a1 a2  a3 a4  a5

(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis for the column space of A.
(c) If possible, write the third column of A as a linear combination of the first two columns.

Sol: Let B be the reduced row-echelon form of A:

  A   --Gauss-Jordan-->   B = [ 1 0 -2 0  1 ]
                              [ 0 1  3 0 -4 ]
                              [ 0 0  0 1 -1 ]
                              [ 0 0  0 0  0 ]
                               b1 b2 b3 b4 b5

(a) rank(A) = 3  (the number of nonzero rows in B)
    nullity(A) = n - rank(A) = 5 - 3 = 2

(b) The leading 1's occur in columns 1, 2, and 4, so {b1, b2, b4} is a basis for CS(B), and
    {a1, a2, a4} is a basis for CS(A):
    a1 = [1; 0; -2; 0],  a2 = [0; -1; -1; 3],  a4 = [1; 1; -1; 0]

(c) b3 = -2b1 + 3b2, so a3 = -2a1 + 3a2.
Thm 4.18: (Solutions of a nonhomogeneous linear system)
If xp is a particular solution of the nonhomogeneous system Ax = b, then every solution of this system can be written in the form x = xp + xh, where xh is a solution of the corresponding homogeneous system Ax = 0.
Pf:
Let x be any solution of Ax = b. Then
  A(x - xp) = Ax - Axp = b - b = 0,
so (x - xp) is a solution of Ax = 0. Letting xh = x - xp gives x = xp + xh.
Ex 8: (Finding the solution set of a nonhomogeneous system)
Find the set of all solution vectors of the system of linear equations
   x1        - 2x3 +  x4 =  5
  3x1 +  x2 - 5x3        =  8
   x1 + 2x2        - 5x4 = -9
Sol:
  [ 1 0 -2  1 |  5 ]                          [ 1 0 -2  1 |  5 ]
  [ 3 1 -5  0 |  8 ]   --Gauss-Jordan-->   [ 0 1  1 -3 | -7 ]
  [ 1 2  0 -5 | -9 ]                          [ 0 0  0  0 |  0 ]

Let x3 = s and x4 = t. Then
  [x1]   [ 2s - t + 5 ]       [ 2]       [-1]     [ 5]
  [x2] = [-s + 3t - 7 ]  = s [-1]  + t [ 3]  + [-7]  = s u1 + t u2 + xp
  [x3]   [      s     ]       [ 1]       [ 0]     [ 0]
  [x4]   [      t     ]       [ 0]       [ 1]     [ 0]

i.e. xp = [5; -7; 0; 0] is a particular solution vector of Ax = b,
and xh = s u1 + t u2 is a solution of Ax = 0.
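The xp + xh structure of Ex 8 / Thm 4.18 can be reproduced symbolically (not from the original slides; a minimal sketch using SymPy):

    from sympy import Matrix, symbols, linsolve

    A = Matrix([[1, 0, -2,  1],
                [3, 1, -5,  0],
                [1, 2,  0, -5]])
    b = Matrix([5, 8, -9])

    x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
    print(linsolve((A, b), [x1, x2, x3, x4]))
    # {(2*x3 - x4 + 5, -x3 + 3*x4 - 7, x3, x4)}: setting x3 = x4 = 0 gives the
    # particular solution xp = (5, -7, 0, 0); the x3 and x4 terms make up xh.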
Thm 4.19: (Solutions of a system of linear equations)
The system of linear equations Ax = b is consistent if and only if b is in the column space of A.
Pf:
Let A = [aij] (m×n), x = [x1; x2; ...; xn], and b = [b1; b2; ...; bm] be the coefficient matrix, the column matrix of unknowns, and the right-hand side, respectively, of the system Ax = b. Then

  Ax = [ a11 x1 + a12 x2 + ... + a1n xn ]
       [ a21 x1 + a22 x2 + ... + a2n xn ]
       [ ...                            ]
       [ am1 x1 + am2 x2 + ... + amn xn ]

     = x1 [a11; a21; ...; am1] + x2 [a12; a22; ...; am2] + ... + xn [a1n; a2n; ...; amn].

Hence, Ax = b is consistent if and only if b is a linear combination of the columns of A. That is, the system is consistent if and only if b is in the subspace of R^m spanned by the columns of A.
Note:
If rank([A|b]) = rank(A), then the system Ax = b is consistent.
Ex 9: (Consistency of a system of linear equations)
Determine whether the system
   x1 +  x2 -  x3 = -1
   x1        +  x3 =  3
  3x1 + 2x2 -  x3 =  1
is consistent.
Sol:
  A = [ 1 1 -1 ]                          [ 1 0  1 ]
      [ 1 0  1 ]   --Gauss-Jordan-->   [ 0 1 -2 ]
      [ 3 2 -1 ]                          [ 0 0  0 ]

  [A | b] = [ 1 1 -1 | -1 ]                          [ 1 0  1 |  3 ]
            [ 1 0  1 |  3 ]   --Gauss-Jordan-->   [ 0 1 -2 | -4 ]
            [ 3 2 -1 |  1 ]                          [ 0 0  0 |  0 ]
             c1 c2 c3   b                             w1 w2 w3   v

  v = 3w1 - 4w2
=> b = 3c1 - 4c2 + 0c3   (b is in the column space of A)
=> The system of linear equations is consistent.

Check: rank(A) = rank([A | b]) = 2
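The rank check at the end of Ex 9 generalizes to any system (not part of the original slides; a minimal sketch using NumPy):

    import numpy as np

    A = np.array([[1, 1, -1],
                  [1, 0,  1],
                  [3, 2, -1]])
    b = np.array([-1, 3, 1])

    rA  = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    print(rA, rAb, "consistent" if rA == rAb else "inconsistent")   # 2 2 consistent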
Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent.
(1) A is invertible.
(2) Ax = b has a unique solution for any n×1 matrix b.
(3) Ax = 0 has only the trivial solution.
(4) A is row-equivalent to In.
(5) |A| ≠ 0
(6) rank(A) = n
(7) The n row vectors of A are linearly independent.
(8) The n column vectors of A are linearly independent.
Keywords in Section 2.6:
  row space
  column space
  null space
  solution space
  rank
  nullity

2.7 Coordinates and Change of Basis
Coordinate representation relative to a basis:
Let B = {v1, v2, ..., vn} be an ordered basis for a vector space V and let x be a vector in V such that
  x = c1 v1 + c2 v2 + ... + cn vn.
The scalars c1, c2, ..., cn are called the coordinates of x relative to the basis B. The coordinate matrix (or coordinate vector) of x relative to B is the column matrix in R^n whose components are the coordinates of x:
  [x]B = [c1; c2; ...; cn]
Ex 1: (Coordinates and components in R^n)
Find the coordinate matrix of x = (-2, 1, 3) in R^3 relative to the standard basis
S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
Sol:
x = (-2, 1, 3) = -2(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1),
so [x]S = [-2; 1; 3].
Ex 3: (Finding a coordinate matrix relative to a nonstandard basis)
Find the coordinate matrix of x = (1, 2, -1) in R^3 relative to the (nonstandard) basis
B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.
Sol:
x = c1 u1 + c2 u2 + c3 u3
(1, 2, -1) = c1(1, 0, 1) + c2(0, -1, 2) + c3(2, 3, -5)
   c1          + 2c3 =  1
        -  c2 + 3c3 =  2
   c1 + 2c2 - 5c3 = -1

i.e.  [ 1  0  2 ] [c1]   [ 1]
      [ 0 -1  3 ] [c2] = [ 2]
      [ 1  2 -5 ] [c3]   [-1]

  [ 1  0  2 |  1 ]                          [ 1 0 0 |  5 ]
  [ 0 -1  3 |  2 ]   --Gauss-Jordan-->   [ 0 1 0 | -8 ]
  [ 1  2 -5 | -1 ]                          [ 0 0 1 | -2 ]

=> [x]B' = [5; -8; -2]
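Computing a coordinate matrix such as [x]B' is just solving a linear system whose columns are the basis vectors (not part of the original slides; a minimal sketch using NumPy):

    import numpy as np

    u1, u2, u3 = [1, 0, 1], [0, -1, 2], [2, 3, -5]
    x = np.array([1, 2, -1])

    U = np.column_stack([u1, u2, u3])     # basis vectors as columns
    coords = np.linalg.solve(U, x)        # c with U c = x
    print(coords)                         # [ 5. -8. -2.]  =  [x]B'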
Change of basis problem:
You were given the coordinates of a vector relative to one basis B and were asked to find the coordinates relative to another basis B'.

Ex: (Change of basis)
Consider two bases for a vector space V: B = {u1, u2} and B' = {u1', u2'}.
If [u1']B = [a; b] and [u2']B = [c; d], i.e.
  u1' = a u1 + b u2   and   u2' = c u1 + d u2,
then for v ∈ V with [v]B' = [k1; k2]:
  v = k1 u1' + k2 u2'
    = k1(a u1 + b u2) + k2(c u1 + d u2)
    = (k1 a + k2 c) u1 + (k1 b + k2 d) u2
=> [v]B = [k1 a + k2 c; k1 b + k2 d] = [a c; b d][k1; k2] = [ [u1']B  [u2']B ] [v]B'
Transition matrix from B' to B:
Let B = {u1, u2, ..., un} and B' = {u1', u2', ..., un'} be two bases for a vector space V.
If [v]B is the coordinate matrix of v relative to B and [v]B' is the coordinate matrix of v relative to B', then
  [v]B = P [v]B' = [ [u1']B  [u2']B  ...  [un']B ] [v]B',
where
  P = [ [u1']B  [u2']B  ...  [un']B ]
is called the transition matrix from B' to B.

Thm 4.20: (The inverse of a transition matrix)
If P is the transition matrix from a basis B' to a basis B in R^n, then
(1) P is invertible, and
(2) the transition matrix from B to B' is P^(-1).

Notes:
For B = {u1, u2, ..., un} and B' = {u1', u2', ..., un'}:
  [v]B  = [ [u1']B  [u2']B  ...  [un']B ] [v]B'  =  P [v]B'
  [v]B' = [ [u1]B'  [u2]B'  ...  [un]B' ] [v]B   =  P^(-1) [v]B
Thm 4.21: (Transition matrix from B to B')
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two bases for R^n. Then the transition matrix P^(-1) from B to B' can be found by applying Gauss-Jordan elimination to the n×2n matrix [B' | B]:
  [B' | B]   -->   [In | P^(-1)]
Ex 5: (Finding a transition matrix)
B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)} are two bases for R^2.
(a) Find the transition matrix from B' to B.
(b) Let [v]B' = [1; 2]; find [v]B.
(c) Find the transition matrix from B to B'.
Sol:
(a) Apply Gauss-Jordan elimination to [B | B']:

  [ -3  4 | -1  2 ]   --Gauss-Jordan-->   [ 1 0 | 3 -2 ]
  [  2 -2 |  2 -2 ]                          [ 0 1 | 2 -1 ]
        B       B'                                I      P

  P = [ 3 -2 ]
      [ 2 -1 ]       (the transition matrix from B' to B)

(b) [v]B' = [1; 2]  =>  [v]B = P [v]B' = [3 -2; 2 -1][1; 2] = [-1; 0]

  Check:
  [v]B' = [1; 2]   =>  v = (1)(-1, 2) + (2)(2, -2) = (3, -2)
  [v]B  = [-1; 0]  =>  v = (-1)(-3, 2) + (0)(4, -2) = (3, -2)

(c) Apply Gauss-Jordan elimination to [B' | B]:

  [ -1  2 | -3  4 ]   --Gauss-Jordan-->   [ 1 0 | -1 2 ]
  [  2 -2 |  2 -2 ]                          [ 0 1 | -2 3 ]
        B'      B                                 I     P^(-1)

  P^(-1) = [ -1 2 ]
           [ -2 3 ]   (the transition matrix from B to B')

  Check:
  P P^(-1) = [ 3 -2 ][ -1 2 ] = [ 1 0 ] = I2
             [ 2 -1 ][ -2 3 ]   [ 0 1 ]
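The transition matrices in Ex 5 can also be computed directly by solving for the coordinate matrices of the basis vectors (not part of the original slides; a minimal sketch using NumPy):

    import numpy as np

    B      = np.column_stack([[-3, 2], [4, -2]])   # basis B as columns
    Bprime = np.column_stack([[-1, 2], [2, -2]])   # basis B' as columns

    P     = np.linalg.solve(B, Bprime)    # transition matrix from B' to B
    P_inv = np.linalg.solve(Bprime, B)    # transition matrix from B to B'
    print(P)        # [[ 3. -2.] [ 2. -1.]]
    print(P_inv)    # [[-1.  2.] [-2.  3.]]

    v_Bp = np.array([1, 2])
    print(P @ v_Bp)                       # [-1.  0.]  =  [v]B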
 Ex 6: (Coordinate representation in P3(x))
(a) Find the coordinate matrix of p = 3x^3 - 2x^2 + 4 relative to the standard basis S = {1, x, x^2, x^3} in P3(x).
(b) Find the coordinate matrix of p = 3x^3 - 2x^2 + 4 relative to the basis S' = {1, 1 + x, 1 + x^2, 1 + x^3} in P3(x).
Sol:
(a) p = (4)(1) + (0)(x) + (-2)(x^2) + (3)(x^3)              =>  [p]S  = [4; 0; -2; 3]
(b) p = (3)(1) + (0)(1 + x) + (-2)(1 + x^2) + (3)(1 + x^3)  =>  [p]S' = [3; 0; -2; 3]
 Ex: (Coordinate representation in M2x2)
Find the coordinate matrix of x = 5 6 relative to
7 8 
the standard basis in M2x2.
B = 1 0, 0 1, 0 0, 0 0 

 0 0 1 0 0 1 
0
0





Sol:
5 6
1 0 0 1 0 0 0 0
x
 5
 6
 7
 8





7 8 
0 0 0 0 1 0 0 1
5 
6 
 x B   
7 
 
8 
Elementary Linear Algebra: Section 5.7, Addition
Keywords in Section 2.7:
  coordinates of x relative to B
  coordinate matrix
  coordinate vector
  change of basis problem
  transition matrix from B' to B