Chap. 4
Vector Spaces
4.1 Vectors in Rn
4.2 Vector Spaces
4.3 Subspaces of Vector Spaces
4.4 Spanning Sets & Linear Independence
4.5 Basis & Dimension
4.6 Rank of a Matrix & Systems of Linear Equations
4.7 Coordinates & Change of Basis
4.8 Applications of Vector Spaces
4.1 Vectors in Rn

A vector in the plane (R2) is represented geometrically by a directed line segment whose initial point is the origin and whose terminal point is the point x = (x1, y1); x1 and y1 are the components of x.
[Figure: a directed line segment from the initial point (the origin) to the terminal point (x1, y1)]
Let u = (u1, u2), v = (v1, v2), and let c be a scalar.
Equal: u = v iff u1 = v1 and u2 = v2
Scalar multiplication: cv = c(v1, v2) = (cv1, cv2)
Vector in the Plane

Given u = (u1, u2), v = (v1, v2), and a scalar c.
Scalar multiplication: cu = (cu1, cu2)
[Figure: cu points in the same direction as u when c > 0 and in the opposite direction when c < 0]
Zero vector: 0 = (0, 0)
Negative of v: -v = (-1)v
Vector Addition

Given u = (u1, u2) and v = (v1, v2).
Vector addition: u + v = (u1, u2) + (v1, v2) = (u1 + v1, u2 + v2)
Difference of u and v: u - v = u + (-v)
[Figure: parallelogram showing u, v, and u + v, with components u1, u2, v1, v2 marked on the axes]
Example 3

Given v = (2, 5) and u = (3, 4)
(a) ½v
=(½(2), ½(5))
= (1, 5/2)
(2, 5)
(b) u  v
v
=(3(2), 45)
(1, 52 )
= (5, 1)
1
(c) ½v + u
2 v
= (1, 5/2) + (3, 4)
= (2, 13/2)
Ming-Feng Yeh
Chapter 4
y
(2, 132 )
0.5v + u
(3, 4)
u
x
v-u
(5,  1)
4-5
Section 4-1
Theorem 4.1

Let u, v, and w be vectors in the plane, and let c and d be scalars.
Vector Addition
1. u + v is a vector in the plane (closure under addition)
2. u + v = v + u (commutative property)
3. (u + v) + w = u + (v + w) (associative property)
4. u + 0 = u (additive identity)
5. u + (-u) = 0 (additive inverse)
Scalar Multiplication
6. cu is a vector in the plane (closure under scalar multiplication)
7. c(u + v) = cu + cv (left distributive property)
8. (c + d)u = cu + du (right distributive property)
9. c(du) = (cd)u (associative property)
10. 1(u) = u
Proof of Theorem 4.1
3. (u + v) + w = [(u1, u2) + (v1, v2)] + (w1, w2)
   = (u1 + v1, u2 + v2) + (w1, w2)
   = ((u1 + v1) + w1, (u2 + v2) + w2)
   = (u1 + (v1 + w1), u2 + (v2 + w2))
   = (u1, u2) + [(v1, v2) + (w1, w2)]
   = u + (v + w)
8. (c + d)u = (c + d)(u1, u2)
   = ((c + d)u1, (c + d)u2)
   = (cu1 + du1, cu2 + du2)
   = (cu1, cu2) + (du1, du2)
   = c(u1, u2) + d(u1, u2)
   = cu + du
Vectors in Rn
A vector in n-space is represented by an ordered n-tuple.
x = (x1, x2, x3, …, xn)
The set of all n-tuples is called n-space (Rn)
R1 = 1-space = set of all real numbers
R2 = 2-space = set of all ordered pairs of real numbers
R3 = 3-space = set of all ordered triples of real numbers
R4 = 4-space = set of all ordered quadruples of real numbers
...
Rn = n-space = set of all ordered n-tuples of real numbers
Vector Addition & Scalar Multiplication

Let u = (u1, u2, u3, ..., un) and v = (v1, v2, v3, ..., vn) be vectors in Rn and let c be a real number.
Then the sum of u and v is defined to be the vector
u + v = (u1+v1, u2+v2, u3+v3, ..., un+vn),
and the scalar multiple of u by c is defined to be
cu = (cu1, cu2, cu3, ..., cun).
The negative of u is defined as
-u = (-u1, -u2, -u3, ..., -un).
The difference of u and v is defined as
u - v = (u1-v1, u2-v2, u3-v3, ..., un-vn).
The zero vector in Rn is given by 0 = (0, 0, ..., 0).
Theorem 4.2

Let u, v, and w be vectors in Rn, and let c and d be scalars.
Vector Addition
1. u + v is a vector in Rn (closure under addition)
2. u + v = v + u (commutative property)
3. (u + v) + w = u + (v + w) (associative property)
4. u + 0 = u (additive identity)
5. u + (-u) = 0 (additive inverse)
Scalar Multiplication
6. cu is a vector in Rn (closure under scalar multiplication)
7. c(u + v) = cu + cv (left distributive property)
8. (c + d)u = cu + du (right distributive property)
9. c(du) = (cd)u (associative property)
10. 1(u) = u
Example 5



Let u = (2, 1, 5, 0), v = (4, 3, 1, 1), and w = (6, 2, 0, 3)
be vectors in R4. Solve for x in each of the following.
x = 2u  (v + 3w)
x = 2u  v  3w
= 2(2, 1, 5, 0)  (4, 3, 1, 1)  3(6, 2, 0, 3)
= (18, 11, 9, 8)
3(x + w) = 2u  v + x
 3x + 3w = 2u  v + x
 x = ½(2u  v  3w) = (9, 11/2, 9/2, 4)
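The arithmetic in Example 5 is easy to check numerically. Below is a minimal NumPy sketch (NumPy and the variable names are my own additions, not part of the original slides) that reproduces both answers.

    import numpy as np

    # Vectors from Example 5 (in R^4)
    u = np.array([2, -1, 5, 0])
    v = np.array([4, 3, 1, -1])
    w = np.array([-6, 2, 0, 3])

    # (a) x = 2u - (v + 3w)
    print(2*u - (v + 3*w))        # [ 18 -11   9  -8]

    # (b) 3(x + w) = 2u - v + x  =>  2x = 2u - v - 3w
    print((2*u - v - 3*w) / 2)    # [ 9.  -5.5  4.5 -4. ]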
Additive Identity & Additive Inverse

The zero vector 0 in Rn is called the additive identity in Rn.
The vector -v is called the additive inverse of v.
Theorem 4.3: Let v be a vector in Rn, and let c be a scalar. Then the following properties are true.
1. The additive identity is unique. That is, if v + u = v, then u = 0.
2. The additive inverse of v is unique. That is, if v + u = 0, then u = -v.
3. 0v = 0.
4. c0 = 0.
5. If cv = 0, then c = 0 or v = 0.
6. -(-v) = v.
Linear Combination & Ex. 6


The vector x is called a linear combination of the vectors v1, v2, ..., vn if x = c1v1 + c2v2 + ... + cnvn.
Example 6: Given x = (-1, -2, -2), u = (0, 1, 4), v = (-1, 1, 2), and w = (3, 1, 2) in R3, find scalars a, b, and c such that x = au + bv + cw.
Sol: (-1, -2, -2) = a(0, 1, 4) + b(-1, 1, 2) + c(3, 1, 2)
 => (-1, -2, -2) = (-b + 3c, a + b + c, 4a + 2b + 2c)
 => a = 1, b = -2, c = -1
 => x = u - 2v - w
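Example 6 amounts to solving a 3 × 3 linear system for the scalars a, b, and c. A short NumPy sketch of that computation (assuming the vectors of Example 6; np.linalg.solve handles the square system):

    import numpy as np

    # Columns are u, v, w from Example 6; the unknowns are (a, b, c).
    M = np.column_stack([[0, 1, 4],    # u
                         [-1, 1, 2],   # v
                         [3, 1, 2]])   # w
    x = np.array([-1, -2, -2])

    coeffs = np.linalg.solve(M, x)
    print(coeffs)                      # [ 1. -2. -1.]  ->  x = u - 2v - w
    print(np.allclose(M @ coeffs, x))  # True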
Row Vector & Column Vector

Represent a vector u = (u1, u2, u3, ..., un) in Rn as
a 1 × n row matrix (row vector): u = [u1 u2 ... un],
or as an n × 1 column matrix (column vector): u = [u1; u2; ...; un] (the same entries stacked vertically).
4.2 Vector Spaces

Let V be a set on which two operations (vector addition and scalar multiplication) are defined. If the following axioms are satisfied for every u, v, and w in V and for every scalar c and d, then V is called a vector space.
Vector Addition
1. u + v is in V. (closure under addition)
2. u + v = v + u (commutative property)
3. (u + v) + w = u + (v + w) (associative property)
4. V has a zero vector 0 such that for every u in V, u + 0 = u. (additive identity)
5. For every u in V, there is a vector in V denoted by -u such that u + (-u) = 0. (additive inverse)
Scalar Multiplication
6. cu is in V. (closure under scalar multiplication)
7. c(u + v) = cu + cv (left distributive property)
8. (c + d)u = cu + du (right distributive property)
9. c(du) = (cd)u (associative property)
10. 1(u) = u (scalar identity)
Examples 1 ~ 3
Example 1: R2 with the standard operations is a vector space (see Theorem 4.1).
Example 2: Rn with the standard operations is a vector space (see Theorem 4.2).
Example 3: Show that the set of all 2×3 matrices with the operations of matrix addition and scalar multiplication is a vector space.
pf: If A and B are 2×3 matrices and c is a scalar, then A + B and cA are also 2×3 matrices. Hence, the set is closed under matrix addition and scalar multiplication. Moreover, the other eight vector space axioms follow from Theorems 2.1 and 2.2. Thus, the set is a vector space.
Example 4
The Vector Space of All Polynomials of Degree 2 or Less
Let P2 be the set of all polynomials of the form
p(x) = a2x2 + a1x + a0; q(x) = b2x2 + b1x + b0
where a0(b0), a1(b1), and a2(b2) are real numbers.
The sum of two polynomials p(x) and q(x) is defined by
p(x) + q(x) = (a2+b2)x2 + (a1+b1)x + (a0+b0),
and the scalar multiple of p(x) by the scalar c is defined by
cp(x) = ca2x2 + ca1x + ca0
Show that P2 is a vector space.
proof: omitted
Important Vector Spaces
R = set of all real numbers
R2 = set of all ordered pairs
R3 = set of all ordered triples
Rn = set of all n-tuples
C(-∞, ∞) = set of all continuous functions defined on the real line
C[a, b] = set of all continuous functions defined on a closed interval [a, b]
P = set of all polynomials
Pn = set of all polynomials of degree ≤ n
Mm,n = set of all m×n matrices
Mn,n = set of all n×n square matrices
Theorem 4.4
Properties of Scalar Multiplication
Let v be any element of a vector space V, and let c be any scalar. Then the following properties are true.
1. 0v = 0.
2. c0 = 0.
3. If cv = 0, then c = 0 or v = 0.
4. (-1)v = -v.
pf (of 3): Suppose that cv = 0. To show that this implies either c = 0 or v = 0, assume that c ≠ 0.
Because c ≠ 0, we can use the reciprocal 1/c to show that v = 0, as follows:
v = 1v = ((1/c)c)v = (1/c)(cv) = (1/c)0 = 0.
Example 6 ~ 7
Example 6
The set of integers is NOT a vector space.
The set of all integers (with the standard operations) does not form a vector space because it is not closed under scalar multiplication. For example, (1/2)(1) = 1/2 is not an integer.
Example 7
The set of second-degree polynomials is NOT a vector space.
The set of all 2nd-degree polynomials is not a vector space because it is not closed under addition. For example, with
p(x) = x^2 and q(x) = -x^2 + x + 1, the sum p(x) + q(x) = x + 1 is a 1st-degree polynomial.
Example 8
Let V = R2, the set of all ordered pairs of real numbers, with the standard addition and the following nonstandard definition of scalar multiplication: c(x1, x2) = (cx1, 0).
Show that V is not a vector space.
pf: This example satisfies the first nine axioms of the definition of a vector space. For example, let u = (1, 1), v = (3, 4), and c = 2; then
c(u + v) = 2(1+3, 1+4) = (8, 0), cu = (2, 0), cv = (6, 0),
so c(u + v) = cu + cv. However, when c = 1,
1(1, 1) = (1, 0) ≠ (1, 1), so the tenth axiom fails.
Hence, the set (together with the two given operations) is not a vector space.
4.3 Subspaces of Vector Spaces

Definition of Subspace of a Vector Space
A nonempty subset W of a vector space V is called
a subspace of V if W is itself a vector space under
the operations of addition and scalar
multiplication defined in V.

Remark: If W is a subspace of V, it must be closed
under the operations inherited from V.
Example 1
A Subspace of R3
Show that the set W = {(x1, 0, x3): x1, x3 ∈ R} is a subspace of R3 with the standard operations.
The set W can be interpreted as simply the xz-plane.
pf: The set W is nonempty because it contains the zero vector (0, 0, 0).
The set W is closed under addition because the sum of any two vectors in the xz-plane must also lie in the xz-plane. That is, if (x1, 0, x3) and (y1, 0, y3) are in W, then their sum (x1 + y1, 0, x3 + y3) is also in W.
Example 1 (cont.)
To see that W is closed under scalar multiplication, let (x1, 0, x3) be in W and let c be a scalar.
Then c(x1, 0, x3) = (cx1, 0, cx3) has zero as its second component and must therefore be in W.
The other eight vector space axioms can be verified as well, and these verifications are left to you.
Theorem 4.5

Test for a Subspace
If W is a nonempty subset of a vector space V, then W is a
subspace of V if and only if the following closure
conditions hold.
1. If u and v are in W, then u + v is in W.
2. If u is in W and c is any scalar, then cu is in W.

To establish that a set W is a vector space, you must verify all ten
vector space properties. However, if W is a subset of a larger vector
space V (and the operations defined on W are the same as those defined
on V), then most of the ten properties are inherited from the larger
space and need no verification.
Zero Subspace
The simplest subspace of a vector space is the one consisting of only the zero vector, W = {0}.
This subspace is called the zero subspace.
If W is a subspace of a vector space V, then both W and V must have the same zero vector 0.
Another obvious subspace of V is V itself.
Every vector space contains two trivial subspaces (the zero subspace and the vector space itself), and subspaces other than these two are called proper (or nontrivial) subspaces.
Example 2
A Subspace of M2,2
Let W be the set of all 2×2 symmetric matrices. Show that W is a subspace of the vector space M2,2, with the standard matrix addition and scalar multiplication.
pf: Because M2,2 is a vector space, we need only show that W satisfies the conditions of Theorem 4.5.
1. W is nonempty. (why?)
2. If A1 = A1^T and A2 = A2^T, then (A1 + A2)^T = A1^T + A2^T = A1 + A2;
   i.e., if A1 and A2 are symmetric matrices of order 2, then so is A1 + A2.
3. If A = A^T, then (cA)^T = cA^T = cA;
   i.e., if A is a symmetric matrix of order 2, then so is cA.
Example 3
The Set of Singular Matrices is NOT a Subspace of Mn,n
Let W be the set of singular matrices of order 2. Show that W is NOT a subspace of the vector space M2,2, with the standard operations.
pf: W is nonempty and closed under scalar multiplication, but it is NOT closed under addition. For example, let
A = [1 0; 0 0] and B = [0 0; 0 1], so that A + B = [1 0; 0 1]
(here [a b; c d] denotes the 2×2 matrix with rows (a, b) and (c, d)).
A and B are both singular, but their sum A + B is nonsingular. Hence, W is not closed under addition, and it is not a subspace of M2,2.
Example 4
The Set of First-Quadrant Vectors is NOT a Subspace of R2
Show that W = {(x1, x2): x1 ≥ 0 and x2 ≥ 0}, with the standard operations, is NOT a subspace of R2.
pf: W is nonempty and closed under addition. It is NOT, however, closed under scalar multiplication.
Note that (1, 1) is in W, but the scalar multiple
(-1)(1, 1) = (-1, -1)
is not in W. Therefore, W is not a subspace of R2.
Intersection of Two Subspaces


If U, V, and W are vector spaces such that W is a subspace of V and V is a subspace of U, then W is also a subspace of U.
Theorem 4.6
If V and W are both subspaces of a vector space U, then the intersection of V and W (denoted by V∩W) is also a subspace of U.
Proof of Theorem 4.6
1. V and W are both subspaces of U, so both contain the zero vector. Hence V∩W is nonempty.
2. Let v1 and v2 be any vectors in V∩W.
   V and W are both subspaces of U, so both are closed under addition.
   v1 and v2 are both in V, so v1 + v2 must be in V.
   Similarly, v1 + v2 is in W (v1 and v2 are both in W).
   Thus v1 + v2 is in V∩W, and it follows that V∩W is closed under addition.
3. V∩W is closed under scalar multiplication. (left to you)
Subspaces of R2
If W is a subset of R2 (with the standard operations), then W is a subspace of R2 if and only if one of the following is true.
1. W consists of the single point (0, 0).
2. W consists of all points on a line that passes through the origin.
3. W consists of all of R2.
[Figures: the three types of subspaces of R2, namely the origin, a line through the origin, and the whole plane]
Example 6
Which of these two subsets is a subspace of R2?
(a) The set of points on the line given by x + 2y = 0
Sol: A point in R2 is on the line x + 2y = 0 if and only if it has the form (-2t, t), t ∈ R.
1. The set is nonempty since it contains the origin (0, 0).
2. Let v1 = (-2t1, t1) and v2 = (-2t2, t2) be any two points on the line. Then v1 + v2 = (-2(t1 + t2), t1 + t2) = (-2t3, t3).
   Thus v1 + v2 is on the line, and the set is closed under addition.
3. Similarly, you can show that the set is closed under scalar multiplication.
Therefore, this set is a subspace of R2.
Example 6 (cont.)
Which of these two subsets is a subspace of R2?
(b) The set of points on the line given by x + 2y = 1
Sol: This subset of R2 is not a subspace of R2
because every subspace must contain the zero vector,
and the zero vector (0, 0) is not on the line.

Example 7
Show that the subset of R2 that consists of all points on the unit circle x^2 + y^2 = 1 is not a subspace.
Sol: [Method I] This subset of R2 is not a subspace of R2 because every subspace must contain the zero vector, and the zero vector (0, 0) is not on the circle.
[Method II] This subset of R2 is not a subspace of R2 because the points (1, 0) and (0, 1) are in the subset, but their sum (1, 1) is not. So this subset is not closed under addition.
Subspaces of R3
If W is a subset of R3 (with the standard operations), then W is a subspace of R3 if and only if one of the following is true.
1. W consists of the single point (0, 0, 0).
2. W consists of all points on a line that passes through the origin.
3. W consists of all points on a plane that passes through the origin.
4. W consists of all of R3.
Example 8
Let W = {(x1, x1 + x3, x3): x1 and x3 are real numbers}.
Show that W is a subspace of R3.
pf: Let v = (v1, v1 + v3, v3) and u = (u1, u1 + u3, u3) be two vectors in W, and let c be any real number.
1. W is nonempty because it contains the zero vector.
2. v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3), which has the form (x1, x1 + x3, x3).
   Hence, v + u is in W (W is closed under addition).
3. cv = (cv1, c(v1 + v3), cv3) = (cv1, cv1 + cv3, cv3), which also has the form (x1, x1 + x3, x3).
   Hence, cv is in W.
We can conclude that W is a subspace of R3.
4.4 Spanning Sets & Linear Independence

Linear Combination
A vector v in a vector space V is called a linear combination of the vectors u1, u2, ..., uk in V if
v = c1u1 + c2u2 + ... + ckuk,
where c1, c2, ..., ck are scalars.
Example 1.a
S = {v1, v2, v3} = {(1, 3, 1), (0, 1, 2), (1, 0, -5)} in R3.
v1 is a linear combination of v2 and v3 because
v1 = 3v2 + v3 = 3(0, 1, 2) + (1, 0, -5) = (1, 3, 1).
Example 1.b

For the set of vectors in M2,2,
S = {v1, v2, v3, v4} = { [0 8; 2 1], [0 2; 1 0], [-1 3; 1 2], [-2 0; 1 3] },
v1 is a linear combination of v2, v3, and v4 because
v1 = v2 + 2v3 - v4.
Example 2
Write the vector w = (1, 1, 1) as a linear combination of vectors in the set S.
S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-1, 0, 1)}
Sol: w = c1v1 + c2v2 + c3v3
 => (1, 1, 1) = c1(1, 2, 3) + c2(0, 1, 2) + c3(-1, 0, 1)
              = (c1 - c3, 2c1 + c2, 3c1 + 2c2 + c3)
 => c1 - c3 = 1, 2c1 + c2 = 1, 3c1 + 2c2 + c3 = 1
 => c1 = 1 + t, c2 = -1 - 2t, c3 = t, t ∈ R
 => w = 2v1 - 3v2 + v3 (let t = 1)

Example 3
If possible, write the vector w = (1, -2, 2) as a linear combination of vectors in the set S given in Example 2.
Sol: c1 - c3 = 1, 2c1 + c2 = -2, 3c1 + 2c2 + c3 = 2
The augmented matrix row reduces to
[ 1  0  -1 |  1 ]
[ 0  1   2 | -4 ]
[ 0  0   0 |  7 ]
The system of equations is inconsistent, and therefore there is no solution.
Consequently, w CANNOT be written as a linear combination of v1, v2, and v3.

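Examples 2 and 3 differ only in the right-hand side, so both can be handed to the same symbolic solver. A SymPy sketch (the matrix rows are the three component equations; sympy.linsolve returns a parametrized solution set when the system is consistent and the empty set when it is not):

    import sympy as sp

    c1, c2, c3 = sp.symbols('c1 c2 c3')
    # Columns of A are v1 = (1,2,3), v2 = (0,1,2), v3 = (-1,0,1).
    A = sp.Matrix([[1, 0, -1],
                   [2, 1, 0],
                   [3, 2, 1]])

    # Example 2: w = (1, 1, 1) -> infinitely many solutions (one free parameter)
    print(sp.linsolve((A, sp.Matrix([1, 1, 1])), c1, c2, c3))
    # {(c3 + 1, -2*c3 - 1, c3)}

    # Example 3: w = (1, -2, 2) -> no solution (inconsistent)
    print(sp.linsolve((A, sp.Matrix([1, -2, 2])), c1, c2, c3))
    # EmptySet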
Spanning Sets


Definition: Let S = {v1, v2, …, vk} be a subset of a vector
space V. The set S is called a spanning set of V if every
vector in V can be written as a linear combination of
vectors in S. In such cases it is said that S spans V.
Example 4: Standard Spanning Sets
a. The set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} spans R3
because any vector u = (u1, u2, u3) in R3 can be written
as u = u1(1, 0, 0) + u2(0, 1, 0) + u3(0, 0, 1) = (u1, u2, u3)
b. The set S = {1, x, x2} spans P2.
p(x) = a(1) + b(x) + c(x2) = a + bx + cx2
Example 5
A Spanning Set for R3
Show that the set S = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} spans R3.
pf: Let u = (u1, u2, u3) be any vector in R3. Then
(u1, u2, u3) = c1(1, 2, 3) + c2(0, 1, 2) + c3(-2, 0, 1)
             = (c1 - 2c3, 2c1 + c2, 3c1 + 2c2 + c3),
which gives the system
c1 - 2c3 = u1
2c1 + c2 = u2
3c1 + 2c2 + c3 = u3.
The coefficient matrix has a nonzero determinant, so the system has a unique solution. Any vector in R3 can be written as a linear combination of the vectors in S, and we can conclude that the set S spans R3.
Example 6
A Set That Does Not Span R3
Show that the set S = {(1, 2, 3), (0, 1, 2), (-1, 0, 1)} does not span R3.
pf: We can see from Example 3 that the vector w = (1, -2, 2) is in R3 and cannot be expressed as a linear combination of the vectors in S.
Therefore, the set S does not span R3.
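A quick numerical check of Examples 5 and 6: three vectors span R3 exactly when the 3 × 3 matrix having them as rows has rank 3 (equivalently, nonzero determinant). A NumPy sketch with the two sets:

    import numpy as np

    S1 = np.array([[1, 2, 3], [0, 1, 2], [-2, 0, 1]])   # Example 5
    S2 = np.array([[1, 2, 3], [0, 1, 2], [-1, 0, 1]])   # Example 6

    print(np.linalg.det(S1), np.linalg.matrix_rank(S1))  # -1.0 3 -> spans R^3
    print(np.linalg.det(S2), np.linalg.matrix_rank(S2))  #  0.0 2 -> does not span R^3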
Example 5 vs Example 6


S1 = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}: the vectors in S1 do not lie in a common plane.
S2 = {(1, 2, 3), (0, 1, 2), (-1, 0, 1)}: the vectors in S2 lie in a common plane.
[Figures: the vectors of S1 spanning R3; the vectors of S2 lying in a plane through the origin]
The Span of a Set

Definition: If S = {v1, v2, ..., vk} is a set of vectors in V, then the span of S is the set of all linear combinations of the vectors in S,
span(S) = {c1v1 + c2v2 + ... + ckvk : c1, c2, ..., ck ∈ R}.
The span of S is denoted by span(S) or span{v1, v2, ..., vk}.
If span(S) = V, then V is spanned by {v1, v2, ..., vk}, or S spans V.
Theorem 4.7
If S = {v1, v2, …, vk} is a set of vectors in V, then span(S)
is a subspace of V. Moreover, span(S) is the smallest
subspace of V that contains S in the sense that every other
subspace of V that contains S must contain span(S).
Pf: Suppose that u and v are any two vectors in span(S)
u = c1v1+c2v2+…+ckvk; v = d1v1+d2v2+…+dkvk. Then,
u + v = (c1+d1)v1+(c2+d2)v2+…+(ck+dk)vk
cu = cc1v1+ cc2v2 +…+ cckvk
which means that u + v and cu are also in span(S).
Therefore, span(S) is a subspace of V.

Linear Dependence &
Linear Independence

A set of vectors S = {v1, v2, …, vk} in a vector
space V is called linearly independent if the
vector equation
c1v1 + c2v2 + … + ckvk = 0
has only the trivial solution, c1= 0, c2= 0, …, ck= 0.
If there are also nontrivial solutions, then S is
called linearly dependent.
Example 7
Linearly Dependent Sets
The set S = {(1, 2), (2, 4)} in R2 is linearly dependent because -2(1, 2) + 1(2, 4) = (0, 0).
The set S = {(1, 0), (0, 1), (2, 5)} in R2 is linearly dependent because -2(1, 0) - 5(0, 1) + 1(2, 5) = (0, 0).
The set S = {(0, 0), (1, 2)} in R2 is linearly dependent because 1(0, 0) + 0(1, 2) = (0, 0).
Example 8
Testing for Linear Independence
Determine whether the set of vectors in R3 is linearly dependent or linearly independent:
S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}
Sol: c1v1 + c2v2 + c3v3 = 0
 => c1(1, 2, 3) + c2(0, 1, 2) + c3(-2, 0, 1) = (0, 0, 0)
 => (c1 - 2c3, 2c1 + c2, 3c1 + 2c2 + c3) = (0, 0, 0)
 => c1 = c2 = c3 = 0 is the only solution.
Therefore, S is linearly independent.

Example 9
Testing for Linear Independence
Determine whether the set of vectors in P2 is linearly dependent or linearly independent:
S = {1 + x - 2x^2, 2 + 5x - x^2, x + x^2}
Sol: c1v1 + c2v2 + c3v3 = 0
 => c1(1 + x - 2x^2) + c2(2 + 5x - x^2) + c3(x + x^2) = 0 + 0x + 0x^2
 => (c1 + 2c2) + (c1 + 5c2 + c3)x + (-2c1 - c2 + c3)x^2 = 0 + 0x + 0x^2
This system has an infinite number of solutions, c1 = 2t, c2 = -t, c3 = 3t, t ∈ R. Therefore, the system has nontrivial solutions, and we can conclude that the set S is linearly dependent.

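The dependence found in Example 9 can be computed mechanically: write each polynomial as its coefficient vector (constant, x, x^2 entries) and take the nullspace of the matrix whose columns are those vectors. A SymPy sketch:

    import sympy as sp

    # Columns: coefficient vectors of 1 + x - 2x^2, 2 + 5x - x^2, x + x^2
    A = sp.Matrix([[1, 2, 0],
                   [1, 5, 1],
                   [-2, -1, 1]])

    print(A.nullspace())
    # [Matrix([[2/3], [-1/3], [1]])]  -> scaling by 3: 2*v1 - v2 + 3*v3 = 0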
Example 10
Testing for Linear Independence
Determine whether the set of vectors in M2,2 is linearly dependent or linearly independent:
S = {v1, v2, v3} = { [2 1; 1 1], [3 0; 2 1], [1 0; 2 0] }
Sol: c1v1 + c2v2 + c3v3 = 0, i.e.,
c1[2 1; 1 1] + c2[3 0; 2 1] + c3[1 0; 2 0] = [0 0; 0 0].
The resulting system has only the trivial solution c1 = c2 = c3 = 0. Hence the set S is linearly independent.

Example 11

Testing for Linear Independence
Determine whether the set of vectors in M4,1 is linearly dependent or linearly independent:
S = { [1; 0; 1; 0], [1; 1; 0; 2], [0; 3; 1; 2], [0; 1; 1; 2] }
(each vector written as a 4×1 column).
Sol: The equation c1v1 + c2v2 + c3v3 + c4v4 = 0 has only the trivial solution, so S is linearly independent.
Theorem 4.8
A Property of Linearly Dependent Sets
A set S = {v1, v2, ..., vk}, k ≥ 2, is linearly dependent if and only if at least one of the vectors vj can be written as a linear combination of the other vectors in S.
Pf: (=>) Assume that S is a linearly dependent set. Then there exist scalars c1, c2, ..., ck (not all zero) such that
c1v1 + c2v2 + ... + ckvk = 0.
Assume that c1 ≠ 0. Then v1 = -(c2/c1)v2 - (c3/c1)v3 - ... - (ck/c1)vk.
That is, v1 is a linear combination of the other vectors.
(<=) Suppose v1 in S is a linear combination of the other vectors, i.e., v1 = c2v2 + c3v3 + ... + ckvk. Then the equation
-v1 + c2v2 + c3v3 + ... + ckvk = 0 has at least one coefficient, -1, that is nonzero. Hence, S is linearly dependent.
Example 12
In Example 9, you determined that the set
S = {1 + x - 2x^2, 2 + 5x - x^2, x + x^2}
is linearly dependent. Show that one of the vectors in this set can be written as a linear combination of the other two.
Sol: One nontrivial solution is c1 = 2, c2 = -1, and c3 = 3, which yields
2(1 + x - 2x^2) + (-1)(2 + 5x - x^2) + 3(x + x^2) = 0.
Therefore, v2 = 2v1 + 3v3; that is, v2 can be written as a linear combination of v1 and v3.

Corollary 4.8 & Example 13


Corollary 4.8
Two vectors u and v in a vector space V are linearly dependent if and only if one is a scalar multiple of the other.
Example 13
(a) S = {(1, 2, 0), (2, 2, 1)}: neither vector is a scalar multiple of the other, so S is linearly independent.
(b) S = {(4, 4, 2), (2, 2, 1)}: v1 = 2v2, so S is linearly dependent.
[Figure: the vectors of (a) and (b) plotted in R3]
4.5 Basis & Dimension



A set of vectors S = {v1, v2, ..., vn} in a vector space V is called a basis if the following conditions are true.
1. S spans V.  2. S is linearly independent.
If a vector space V has a basis consisting of a finite number of vectors, then V is finite dimensional. Otherwise, V is called infinite dimensional.
The vector space V = {0} is finite dimensional.
Example 1
The Standard Basis for R3
Show that the set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis for R3.
pf: 1. Example 4(a) in Section 4.4 showed that S spans R3.
2. S is linearly independent:
   c1(1, 0, 0) + c2(0, 1, 0) + c3(0, 0, 1) = (0, 0, 0)
   has only the trivial solution c1 = c2 = c3 = 0.
Hence S is a basis for R3.
The Standard Basis for Rn:
e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), e3 = (0, 0, 1, ..., 0), ..., en = (0, 0, 0, ..., 0, 1)
Example 2

A Nonstandard Basis for R2
Show that the set S = {v1, v2} = {(1, 1), (1, -1)} is a basis for R2.
pf: 1. Let x = (x1, x2) represent an arbitrary vector in R2.
Consider the linear combination c1v1 + c2v2 = x, i.e.,
(c1 + c2, c1 - c2) = (x1, x2), or
c1 + c2 = x1
c1 - c2 = x2.
The coefficient matrix has a nonzero determinant, so the system has a unique solution. Hence S spans R2.
2. S is linearly independent (verify it).
Therefore S is a basis for R2.
Example 4
A Basis for Polynomials
Show that the vector space P3 has the following basis: S = {1, x, x^2, x^3}.
pf: 1. The span of S consists of all polynomials of the form a0 + a1x + a2x^2 + a3x^3, with a0, a1, a2, a3 ∈ R, so S spans P3.
2. Consider the linear combination a0 + a1x + a2x^2 + a3x^3 = 0 for all x.
   The only solution is a0 = a1 = a2 = a3 = 0. That is, S is linearly independent.
Therefore S is a basis for P3.

Some Standard Bases




Standard basis for P3: {1, x, x^2, x^3}
Standard basis for Pn: {1, x, x^2, x^3, ..., x^n}
Standard basis for M2,2: { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }
Standard basis for Mm,n: the set that consists of the mn distinct m × n matrices having a single 1 and all other entries equal to zero.
Theorem 4.9
Uniqueness of Basis Representation
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector in V can be written in one and only one way as a linear combination of vectors in S.
Pf: Because S spans V, an arbitrary vector u in V can be expressed as
u = c1v1 + c2v2 + ... + cnvn.   (1)
Suppose that u has another representation
u = b1v1 + b2v2 + ... + bnvn.   (2)
Subtracting (2) from (1) gives
(c1 - b1)v1 + (c2 - b2)v2 + ... + (cn - bn)vn = 0.   (3)
Because S is linearly independent, the only solution to (3) is
c1 - b1 = 0, c2 - b2 = 0, ..., cn - bn = 0, i.e., ci = bi for all i.
Hence, u has only one representation with respect to S.
Example 6
Uniqueness of Basis Representation
Let u = (u1, u2, u3) ∈ R3. Show that u = c1v1 + c2v2 + c3v3 has a unique solution for the basis S = {v1, v2, v3} = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)}.
Sol: (u1, u2, u3) = c1(1, 2, 3) + c2(0, 1, 2) + c3(-2, 0, 1), i.e.,
[ 1  0 -2 ] [c1]   [u1]
[ 2  1  0 ] [c2] = [u2]   (Ac = u)
[ 3  2  1 ] [c3]   [u3]
Since det(A) ≠ 0, c = A^(-1)u; that is, u has a unique representation with respect to the basis S.
Theorem 4.10
Bases & Linear Dependence
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every set containing more than n vectors in V is linearly dependent.
Pf: Let S1 = {u1, u2, ..., um} ⊂ V, with m > n.
Consider the equation k1u1 + k2u2 + ... + kmum = 0.   (*)
If k1, k2, ..., km are not all zero, then S1 is linearly dependent.
Because S is a basis for V, each ui is a linear combination of vectors in S, i.e.,
ui = c1i v1 + c2i v2 + ... + cni vn.
Substituting each of these representations of ui into (*) gives
d1v1 + d2v2 + ... + dnvn = 0, where di = ci1 k1 + ci2 k2 + ... + cim km.
Theorem 4.10 (cont.)
Since d1v1 + d2v2 + ... + dnvn = 0 and the vi form a linearly independent set (S is a basis), each di = 0:
c11 k1 + c12 k2 + ... + c1m km = 0
c21 k1 + c22 k2 + ... + c2m km = 0
...
cn1 k1 + cn2 k2 + ... + cnm km = 0
This homogeneous system has fewer equations than variables k1, k2, ..., km (n < m). Hence it must have nontrivial solutions. That is, S1 is linearly dependent.
Theorem 4.11
Number of Vectors in a Basis
If a vector space V has one basis with n vectors, then every basis for V has n vectors.
Pf: Let S1 = {v1, v2, ..., vn} be a given basis for V, and let S2 = {u1, u2, ..., um} be another basis for V.
Because S1 is a basis and S2 is linearly independent, Theorem 4.10 implies that m ≤ n.
Similarly, n ≤ m because S1 is linearly independent and S2 is a basis.
Consequently, n = m.
Example 8
Spanning Sets & Bases
Explain why each of the following statements is true.
(a) The set S1 = {(3, 2, 1), (7, 1, 4)} is NOT a basis for R3.
    The standard basis for R3 has 3 vectors, and S1 has only 2. Hence, S1 cannot be a basis for R3.
(b) The set S2 = {x + 2, x^2, x^3 - 1, 3x + 1, x^2 - 2x + 3} is NOT a basis for P3.
    The standard basis for P3 has 4 vectors. The set S2 has too many elements to be a basis for P3.
Dimension of a Vector Space



Definition: If a vector space V has a basis consisting of n vectors, then the number n is called the dimension of V, denoted by dim(V) = n.
If V consists of the zero vector alone, the dimension of V is defined as zero, i.e., dim({0}) = 0.
dim(Rn) = n
dim(Pn) = n + 1
dim(Mm,n) = mn
If W is a subspace of an n-dimensional vector space, then dim(W) ≤ n.
Example 9

Finding the Dimension of a Subspace
(To find the dimension, find a set of linearly independent vectors that spans the subspace.)
Determine the dimension of each subspace of R3.
(a) W = {(d, c - d, c): c, d ∈ R}
Sol: (d, c - d, c) = c(0, 1, 1) + d(1, -1, 0), so W is spanned by the set S = {(0, 1, 1), (1, -1, 0)}.
S is linearly independent; hence, dim(W) = 2.
(b) W = {(2b, b, 0): b ∈ R}
Sol: W is spanned by the set S = {(2, 1, 0)}.
Hence, dim(W) = 1.
Example 10
Finding the Dimension of a Subspace
Find the dimension of the subspace W of R4 spanned by
S = {v1, v2, v3} = {(-1, 2, 5, 0), (3, 0, 1, -2), (-5, 4, 9, 2)}.
Sol: Although W is spanned by the set S, S is NOT a basis for W because S is a linearly dependent set:
v3 = 2v1 - v2.
This means that W is spanned by the set S1 = {v1, v2}.
Moreover, S1 is linearly independent.
Hence, dim(W) = 2.

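The dimension found in Example 10 is just the rank of the matrix whose rows are the spanning vectors, so it can be checked in one line of NumPy (vectors as in Example 10):

    import numpy as np

    S = np.array([[-1, 2, 5, 0],
                  [3, 0, 1, -2],
                  [-5, 4, 9, 2]])

    print(np.linalg.matrix_rank(S))            # 2 -> dim(W) = 2
    print(np.allclose(S[2], 2*S[0] - S[1]))    # True: v3 = 2*v1 - v2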
Example 11
Finding the Dimension of a Subspace
Let W be the subspace of all symmetric matrices in M2,2. What is the dimension of W?
Sol: Every 2×2 symmetric matrix can be written as
[a b; b c] = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1],
so W is spanned by S = { [1 0; 0 0], [0 1; 1 0], [0 0; 0 1] }.
S is linearly independent and S spans W.
Hence, dim(W) = 3.
Theorem 4.12 & Example 12
Theorem 4.12: Basis Test in an n-dimensional Space
Let V be a vector space of dimension n.
1. If S = {v1, v2, ..., vn} is a linearly independent set of vectors in V, then S is a basis for V.
2. If S = {v1, v2, ..., vn} spans V, then S is a basis for V.
Example 12: Show the set S is a basis for M5,1.
S = { [1; 2; 1; 3; 4], [0; 1; 3; 2; 3], [0; 0; 2; 1; 5], [0; 0; 0; 2; 3], [0; 0; 0; 0; 2] }
(each written as a 5×1 column). S is a linearly independent set and dim(M5,1) = 5, so S is a basis for M5,1.
4.6 Rank of a Matrix & Systems
of Linear Equations
Let A be an m × n matrix:
A = [ a11 a12 ... a1n
      a21 a22 ... a2n
      ...
      am1 am2 ... amn ]
Its row vectors are 1 × n and its column vectors are m × 1.
Definition
Let A be an m × n matrix.
1. The row space of A is the subspace of Rn spanned by the row vectors of A.
2. The column space of A is the subspace of Rm spanned by the column vectors of A.
Theorems 4.13 & 4.14

Theorem 4.13
Row-Equivalent Matrices Have the Same Row Space
If an m × n matrix A is row-equivalent to an m × n matrix B, then the row space of A is equal to the row space of B.
Theorem 4.14
Basis for the Row Space of a Matrix
If a matrix A is row-equivalent to a matrix B in row-echelon form, then the nonzero row vectors of B form a basis for the row space of A.
Example 2
Find a basis for the row space of
A = [  1  3  1  3
       0  1  1  0
      -3  0  6 -1
       3  4 -2  1
       2  0 -4 -2 ].
Sol: Row reduction gives the row-echelon form
B = [ 1  3  1  3     (w1)
      0  1  1  0     (w2)
      0  0  0  1     (w3)
      0  0  0  0
      0  0  0  0 ].
w1 = (1, 3, 1, 3), w2 = (0, 1, 1, 0), and w3 = (0, 0, 0, 1) form a basis for the row space of A.
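The row reduction of Example 2 can be reproduced with SymPy's rref. The reduced row-echelon form differs from B only by having zeros above the leading 1's, and its nonzero rows are an equally valid basis for the row space:

    import sympy as sp

    A = sp.Matrix([[1, 3, 1, 3],
                   [0, 1, 1, 0],
                   [-3, 0, 6, -1],
                   [3, 4, -2, 1],
                   [2, 0, -4, -2]])

    R, pivots = A.rref()
    print(R)        # the nonzero rows form a basis for the row space of A
    print(pivots)   # (0, 1, 3) -> three pivot columns, so rank(A) = 3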
Example 3
Find a basis for the subspace of R3 spanned by
S = {v1, v2, v3} = {(-1, 2, 5), (3, 0, 3), (5, 1, 8)}.
Sol: Form the matrix whose rows are the vectors of S and row reduce:
A = [ -1  2  5     (v1)        B = [ 1 -2 -5     (w1)
       3  0  3     (v2)   ->         0  1  3     (w2)
       5  1  8 ]   (v3)              0  0  0 ].
w1 = (1, -2, -5) and w2 = (0, 1, 3) form a basis for the row space of A.
That is, they form a basis for the subspace spanned by S.
Column Vectors of
Row-Equivalent Matrices
 1
 0

A   3

 3
 2
a1


3
1
1
1
0
6
4 2
0 4
a 2 a3
3
1
0
0

 1  B  0


1
0
0
 2
a4
b1
3 1 3
1 1 0
0 0 1

0 0 0
0 0 0
b2 b3 b4
b3 = 2b1 + b2  a3 = 2a1 + a2
The column vectors b1, b2, and b4 of matrix B are linearly
independent, and so are the corresponding columns of A.
Example 4
Find a basis for the column space of the matrix A from Example 2.
Sol: [Method 1] Transpose A and row reduce:
A^T = [ 1  0 -3  3  2          [ 1  0 -3  3  2     (w1)
        3  1  0  4  0     ->     0  1  9 -5 -6     (w2)
        1  1  6 -2 -4            0  0  1 -1 -1     (w3)
        3  0 -1  1 -2 ]          0  0  0  0  0 ].
w1 = (1, 0, -3, 3, 2), w2 = (0, 1, 9, -5, -6), and w3 = (0, 0, 1, -1, -1) form a basis for the row space of A^T.
That is equivalent to saying that w1^T, w2^T, and w3^T form a basis for the column space of A.
Example 4 (cont.)
Sol: [Method 2] Row reduce A itself:
A = [  1  3  1  3          B = [ 1  3  1  3
       0  1  1  0                0  1  1  0
      -3  0  6 -1    ->          0  0  0  1
       3  4 -2  1                0  0  0  0
       2  0 -4 -2 ]              0  0  0  0 ]
     a1  a2  a3 a4
The leading 1's of B occur in columns 1, 2, and 4, so a1, a2, and a4 form a basis for the column space of A.
Thm 4.15 & Rank of a Matrix



Theorem 4.15
Row and Column Spaces Have Equal Dimensions
If A is an m × n matrix, then the row space and column space of A have the same dimension.
Definition
The dimension of the row (or column) space of a matrix A is called the rank of A and is denoted by rank(A).
Example 5
A = [ 1 -2 0  1          B = [ 1 -2 0  1
      2  1 5 -3    ->          0  1 1 -1
      0  1 3  5 ]              0  0 1  3 ],
so rank(A) = 3.
Thm 4.16: Nullspace of a Matrix
Solutions of a Homogeneous System
If A is an m × n matrix, then the set of all solutions of the homogeneous system of linear equations Ax = 0 is a subspace of Rn called the nullspace of A and is denoted by N(A). So, N(A) = {x ∈ Rn : Ax = 0}.
The dimension of the nullspace of A is called the nullity of A: nullity(A) = dim(N(A)).
Pf: 1. Nonempty: A0 = 0.
2. Addition: if Ax1 = 0 and Ax2 = 0, then A(x1 + x2) = Ax1 + Ax2 = 0.
3. Scalar multiplication: A(cx1) = c(Ax1) = 0.
Example 6
The nullspace of A is also called the solution space of the system Ax = 0.
Finding the Solution Space of a Homogeneous System
Find the nullspace of A.
Sol:
A = [ 1 2 -2 1          [ 1 2 0 3
      3 6 -5 4    ->      0 0 1 1
      1 2  0 3 ]          0 0 0 0 ]
x1 = -2s - 3t, x2 = s, x3 = -t, x4 = t, so
x = (-2s - 3t, s, -t, t) = s(-2, 1, 0, 0) + t(-3, 0, -1, 1), s, t ∈ R.
N(A) = {x : x = s v1 + t v2, s, t ∈ R}, where v1 = (-2, 1, 0, 0) and v2 = (-3, 0, -1, 1).
Remark of Example 6



 2  3
1 0
A basis for N(A) is { ,   }
 0    1
   
0 1
because all solutions of Ax = 0 are linear combinations of
these two vectors.
When homogeneous systems are solved from the reduced
row-echelon form, the spanning set is linear independent.
A is a 3  4 matrix, rank(A) = 2, nullity(A) = 2
 rank(A) + nullity(A) = 4
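Example 6 and the remark above can be verified directly: SymPy's nullspace method returns a basis for N(A), and the rank can be read off at the same time (so rank plus nullity equals n, as stated in Theorem 4.17 below):

    import sympy as sp

    A = sp.Matrix([[1, 2, -2, 1],
                   [3, 6, -5, 4],
                   [1, 2, 0, 3]])

    basis = A.nullspace()
    print(basis)
    # [Matrix([[-2], [1], [0], [0]]), Matrix([[-3], [0], [-1], [1]])]
    print(A.rank(), len(basis), A.cols)   # 2 2 4 -> rank(A) + nullity(A) = n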
Theorem 4.17
Dimension of the Solution Space
If A is an m × n matrix of rank r, then the dimension of the solution space of Ax = 0 is n - r. That is,
rank(A) + nullity(A) = n.
Example 7
Rank and Nullity of a Matrix
Let the column vectors of the matrix A be denoted by a1, a2, ..., a5.
(a) Find the rank and nullity of A.
A = [  1  0 -2  1   0          B = [ 1 0 -2 0  1
       0 -1 -3  1   3                0 1  3 0 -4
      -2 -1  1 -1   3    ->          0 0  0 1 -1
       0  3  9  0 -12 ]              0 0  0 0  0 ]
B has three nonzero rows, so rank(A) = 3. With n = 5, nullity(A) = n - rank(A) = 5 - 3 = 2.
Example 7 (cont.)
(b) Find a subset of the column vectors of A that forms a basis for the column space of A.
The leading 1's of B occur in columns 1, 2, and 4, so the corresponding columns of A,
a1 = (1, 0, -2, 0), a2 = (0, -1, -1, 3), a4 = (1, 1, -1, 0),
form a basis for the column space of A.
Example 7 (cont.)
(c) If possible, write the third column of A as a linear combination of the first two columns.
From B, b3 = -2b1 + 3b2, and the same relation holds among the columns of A:
a3 = -2a1 + 3a2.
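All three parts of Example 7 follow from a single reduced row-echelon computation. A SymPy sketch (matrix entries as reconstructed above) that prints the rank, the nullity, the pivot columns, and the dependence of the third column on the first two:

    import sympy as sp

    A = sp.Matrix([[1, 0, -2, 1, 0],
                   [0, -1, -3, 1, 3],
                   [-2, -1, 1, -1, 3],
                   [0, 3, 9, 0, -12]])

    R, pivots = A.rref()
    rank = len(pivots)
    print(rank, A.cols - rank)   # (a) rank(A) = 3, nullity(A) = 2
    print(pivots)                # (b) pivot columns (0, 1, 3) -> a1, a2, a4 form a basis
    print(list(R[:, 2]))         # (c) [-2, 3, 0, 0] -> a3 = -2*a1 + 3*a2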
Solutions of Linear Systems


The set of all solution vectors of Ax = 0 is a subspace.
Is the set of all solution vectors of Ax = b (b ≠ 0) also a subspace?
The answer is "no," because the zero vector is never a solution of Ax = b.
Theorem 4.18
Solutions of a Nonhomogeneous Linear System
If xp is a particular solution of Ax = b (b ≠ 0), then every solution of this system can be written in the form
x = xp + xh,
where xh is a solution of Ax = 0.
Pf: Let x be any solution of Ax = b.
Then (x - xp) is a solution of Ax = 0, because A(x - xp) = Ax - Axp = b - b = 0.
Let xh = x - xp; then x = xp + xh.
Example 8

Find the set of all solution vectors of the following system:
x1        - 2x3 +  x4 =  5
3x1 +  x2 - 5x3        =  8
x1  + 2x2        - 5x4 = -9
The augmented matrix row reduces as
[ 1 0 -2  1 |  5          [ 1 0 -2  1 |  5
  3 1 -5  0 |  8    ->      0 1  1 -3 | -7
  1 2  0 -5 | -9 ]          0 0  0  0 |  0 ].
With x3 = s and x4 = t,
x = (x1, x2, x3, x4) = (2s - t + 5, -s + 3t - 7, s, t)
  = s(2, -1, 1, 0) + t(-1, 3, 0, 1) + (5, -7, 0, 0), s, t ∈ R,
where xh = s(2, -1, 1, 0) + t(-1, 3, 0, 1) and xp = (5, -7, 0, 0).
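The particular-plus-homogeneous structure of Theorem 4.18 appears directly when the system of Example 8 is solved symbolically. A SymPy sketch (the free symbols x3 and x4 play the roles of s and t):

    import sympy as sp

    x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
    A = sp.Matrix([[1, 0, -2, 1],
                   [3, 1, -5, 0],
                   [1, 2, 0, -5]])
    b = sp.Matrix([5, 8, -9])

    print(sp.linsolve((A, b), x1, x2, x3, x4))
    # {(2*x3 - x4 + 5, -x3 + 3*x4 - 7, x3, x4)}
    # x3 = x4 = 0 gives xp = (5, -7, 0, 0); the x3 and x4 parts give xh.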
Theorem 4.19


Solutions of a System of Linear Equations
The system of linear equations Ax = b is consistent if and only if b is in the column space of A.
(Equivalently, Ax = b is consistent iff b is a linear combination of the columns of A.)
Example 9

 1 1  1  x1   1
1 0
  x    3
1
Consider 
 2   
3 2  1  x3   1
1
 1 1  1  1 0
A   1 0
1  0 1  2.  rank ( A)  2
3 2  1 0 0
0
1
3
1 1  1  1  1 0
[ Ab]  1 0
1 3  0 1  2  4. rank ([ Ab])  2
3 2  1 1 0 0
0
0
The rank of A is equal to the rank of [A  b]. Hence, b is in the
column space of A, and the system is consistent.
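Theorem 4.19 turns into a two-line rank test: the system is consistent exactly when appending b to A does not increase the rank. A NumPy sketch for the system of Example 9:

    import numpy as np

    A = np.array([[1, 1, -1],
                  [1, 0, 1],
                  [3, 2, -1]])
    b = np.array([-1, 3, 1])

    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    print(rank_A, rank_Ab, rank_A == rank_Ab)   # 2 2 True -> consistent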
Equivalent Conditions
If A is an n × n matrix, then the following conditions are equivalent.
1. A is invertible.
2. Ax = b has a unique solution for any n × 1 matrix b.
3. Ax = 0 has only the trivial solution.
4. A is row-equivalent to In.
5. det(A) ≠ 0.
6. rank(A) = n.
7. The n row (column) vectors of A are linearly independent.
4.7 Coordinates and
Change of Basis
Coordinate Representation Relative to a Basis


Let B = {v1, v2, ..., vn} be an ordered basis for a vector space V and let x be a vector in V such that
x = c1v1 + c2v2 + ... + cnvn.
The scalars c1, c2, ..., cn are called the coordinates of x relative to the basis B.
The coordinate matrix (or coordinate vector) of x relative to B is the column matrix in Rn whose components are the coordinates of x:
[x]B = [c1; c2; ...; cn].
Example 2

Finding a Coordinate Matrix Relative to a Standard Basis
The coordinate matrix of x in R2 relative to the (nonstandard) ordered basis B' = {v1, v2} = {(1, 0), (1, 2)} is [x]B' = [3; 2].
Find the coordinates of x relative to the standard basis B = {u1, u2} = {(1, 0), (0, 1)}.
Sol: x = 3v1 + 2v2 = 3(1, 0) + 2(1, 2) = (5, 4) = 5(1, 0) + 4(0, 1) = 5u1 + 4u2,
so [x]B = [5; 4].
[Figure: x expressed in terms of {v1, v2} and in terms of {u1, u2}]
Example 3

Finding a Coordinate Matrix Relative to a Nonstandard Basis
Find the coordinate matrix of x = (1, 2, -1) in R3 relative to the (nonstandard) basis
B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.
Sol: x = c1u1 + c2u2 + c3u3, i.e.,
(1, 2, -1) = c1(1, 0, 1) + c2(0, -1, 2) + c3(2, 3, -5), or
[ 1  0  2 ] [c1]   [ 1]
[ 0 -1  3 ] [c2] = [ 2].
[ 1  2 -5 ] [c3]   [-1]
Solving gives [x]B' = [5; -8; -2].
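Finding a coordinate matrix relative to a nonstandard basis is just solving the linear system whose coefficient matrix has the basis vectors as columns. A NumPy sketch for Example 3:

    import numpy as np

    # Columns are the basis vectors u1, u2, u3 of B'
    U = np.column_stack([[1, 0, 1], [0, -1, 2], [2, 3, -5]])
    x = np.array([1, 2, -1])

    print(np.linalg.solve(U, x))   # [ 5. -8. -2.]  ->  [x]B' = (5, -8, -2)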
Change of Basis in Rn
Let B be the standard basis and B' = {u1, u2, u3} a nonstandard basis of R3. The transition matrix from B' to B is P = [u1 u2 u3], the matrix whose columns are the vectors of B'. For the basis of Example 3,
P = [ 1  0  2          P^(-1) = [ -1  4  2
      0 -1  3                      3 -7 -3
      1  2 -5 ],                   1 -2 -1 ].
Change of basis from B' to B: [x]B = P[x]B'.
Change of basis from B to B': [x]B' = P^(-1)[x]B, so P^(-1) is the transition matrix from B to B'.
For example, P^(-1)[1; 2; -1] = [5; -8; -2] = [x]B', as in Example 3.
Theorems 4.20 & 4.21


Theorem 4.20
The Inverse of a Transition Matrix
If P is the transition matrix from a basis B' to a basis B in Rn, then P is invertible and the transition matrix from B to B' is given by P^(-1).
Theorem 4.21
Transition Matrix from B to B'
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two bases for Rn. Then the transition matrix P^(-1) from B to B' can be found by using Gauss-Jordan elimination on the n × 2n matrix [B' | B], as follows:
[B' | B]  ->  [In | P^(-1)].
Example 4
Finding a Transition Matrix
Find the transition matrix from B to B' for the following bases in R3:
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} and B' = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.
Sol:
[B' | B] = [ 1  0  2 | 1 0 0          [ 1 0 0 | -1  4  2
             0 -1  3 | 0 1 0    ->      0 1 0 |  3 -7 -3    =  [I3 | P^(-1)].
             1  2 -5 | 0 0 1 ]          0 0 1 |  1 -2 -1 ]
Example 5
Finding a Transition Matrix
Find the transition matrix from B to B' for the following bases in R2:
B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)}.
Sol: Use Gauss-Jordan elimination on [B' | B]:
[B' | B] = [ -1  2 | -3  4 ]        [ 1  0 | -1  2 ]
           [  2 -2 |  2 -2 ]   ->   [ 0  1 | -2  3 ]  =  [I2 | P^(-1)],
so P^(-1) = [ -1 2; -2 3 ].
Similarly, reducing [B | B'] gives the transition matrix from B' to B:
[B | B'] = [ -3  4 | -1  2 ]        [ 1  0 |  3 -2 ]
           [  2 -2 |  2 -2 ]   ->   [ 0  1 |  2 -1 ]  =  [I2 | P],
so P = [ 3 -2; 2 -1 ].
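The Gauss-Jordan computation [B' | B] -> [In | P^(-1)] is equivalent to solving B'X = B for X, where B' and B here denote the matrices whose columns are the respective basis vectors. A NumPy sketch for the bases of Example 5:

    import numpy as np

    B  = np.column_stack([[-3, 2], [4, -2]])   # basis B
    Bp = np.column_stack([[-1, 2], [2, -2]])   # basis B'

    P_inv = np.linalg.solve(Bp, B)   # transition matrix from B to B'
    P     = np.linalg.solve(B, Bp)   # transition matrix from B' to B
    print(P_inv)                     # [[-1.  2.] [-2.  3.]]
    print(P)                         # [[ 3. -2.] [ 2. -1.]]
    print(np.allclose(P @ P_inv, np.eye(2)))   # True (Theorem 4.20)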
Examples 6 & 7
Ex. 6: Coordinate Representation in P3
Find the coordinate matrix of p = 3x^3 - 2x^2 + 4 relative to the standard basis S = {1, x, x^2, x^3} in P3.
Sol: p = 4(1) + 0(x) + (-2)(x^2) + 3(x^3), so [p]S = [4; 0; -2; 3].
Ex. 7: Coordinate Representation in M3,1
Find the coordinate matrix of X = [-1; 4; 3] (a 3×1 column) relative to the standard basis in M3,1.
Sol: X = (-1)[1; 0; 0] + 4[0; 1; 0] + 3[0; 0; 1], so [X]S = [-1; 4; 3].