Criteria for Determining If A Subset is a Subspace
These notes closely follow the presentation of the material given in David C. Lay’s
textbook Linear Algebra and its Applications (3rd edition). These notes are intended
primarily for in-class presentation and should not be regarded as a substitute for
thoroughly reading the textbook itself and working through the exercises therein.
Criteria for Determining If A Subset is a Subspace
Recall that if V is a vector space and W is a subset of V, then W is said to be a
subspace of V if W is itself a vector space (meaning that all ten of the vector space
axioms are true for W). The axioms that assert algebraic identities (commutativity and
associativity of addition, the two distributive laws, c(du) = (cd)u, and 1u = u) automatically
hold for the vectors in W since they hold for all vectors in V. The only axioms that really need to be checked
are the closure axioms, the existence of a zero vector, and the existence of additive
inverses. As we are about to see, in fact, the only axioms that really need to be
checked are the closure axioms.
Lemma If V is a vector space, then V has exactly one zero vector.
Proof
Suppose that 0_1 and 0_2 are zero vectors in V.
Since 0_1 is a zero vector, we know that 0_2 + 0_1 = 0_2.
Since 0_2 is a zero vector, we know that 0_1 + 0_2 = 0_1.
Since 0_1 + 0_2 = 0_2 + 0_1 (by the commutativity of vector addition), we can conclude (from what was stated above) that
0_1 = 0_2.
Thus, V has only one zero vector (which we can simply call 0).
Lemma If V is a vector space and u is any vector in V, then 0u = 0 (here the 0 on the left is the scalar zero and the 0 on the right is the zero vector of V).
Proof Suppose that u is a given vector in V. Then
0u = (0 + 0)u = 0u + 0u.
Thus,
0u = 0u + 0u.
Since the vector 0u has an additive inverse, call it v, we can add v to both sides
of the above equation to obtain
0u + v = (0u + 0u) + v.
Using the associative property of vector addition on the right hand side, we now
have
0u + v = 0u + (0u + v).
Next, we use the fact that v is an additive inverse of 0u to obtain
0 = 0u + 0.
This gives us 0 = 0u.
Lemma If V is a vector space and u is any vector in V, then the vector (−1)u is an
additive inverse of u and, in fact, the vector (−1)u is the only additive inverse of
u.
Proof
Let u be some given vector in V. Since V is closed under scalar multiplication, we know that the vector
(−1)u is also in V. We will show that (−1)u is an additive inverse of u and that this
is the only vector that serves as an additive inverse of u.
First, note that
u + (−1)u = 1u + (−1)u
= (1 + (−1))u
= 0u
= 0.
This shows that (−1)u serves as an additive inverse for u.
Now, suppose that v is an additive inverse of u. Then
u + v = 0
which means that
u + v = u + (−1)u.
Adding v to both sides of the above equation gives us
(u + v) + v = (u + (−1)u) + v
and using the commutative and associative properties of vector addition then gives us
(u + v) + v = (−1)u + (u + v)
which leads to
0 + v = (−1)u + 0
and thus
v = (−1)u.
This shows that (−1)u is the only vector in V that serves as an additive inverse of
u. (Since u has only one additive inverse, we can give it the name −u. What we
have proved here is that −u = (−1)u.)
Theorem If V is a vector space and W is a non–empty subset of V that is closed under
vector addition and scalar multiplication, then W is a subspace of V.
Proof
We need only to check that W contains the zero vector of V and that each
vector in W has an additive inverse in W:
To see why W must contain the zero vector of V, let u be any vector in W.
(We know that W contains at least one vector because we are assuming that
W ≠ ∅.) Now note that
0u = 0
and, since W is closed under scalar multiplication, we can conclude that 0 ∈ W.
Now, let u be any vector in W. Since W is closed under scalar
multiplication, we know that the vector (−1)u is also in W. As was proved
in one of the above lemmas, (−1)u is the additive inverse of u. Thus, the additive
inverse of every vector in W is also in W.
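As a quick illustration of the theorem (an example added here, not taken from the notes), consider the subset W = { (a, b, 0) | a, b ∈ ℝ } of ℝ^3. W is non-empty since it contains (0, 0, 0). If u = (a_1, b_1, 0) and v = (a_2, b_2, 0) belong to W and c is any scalar, then
u + v = (a_1 + a_2, b_1 + b_2, 0) and cu = (ca_1, cb_1, 0)
both have third coordinate 0 and hence belong to W. By the theorem, W is a subspace of ℝ^3.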
The Null Space of a Matrix
Definition The nullspace of an m × n matrix, A, denoted by Nul A, is the set of all
solutions of the homogeneous equation Ax = 0.
Example Describe the nullspace of the matrix
A = [  1  −3  −2 ]
    [ −5   9   1 ].
Theorem The nullspace of any m × n matrix, A, is a subspace of ℝ^n.
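A sketch of the proof (filled in here; it is not written out in the transcript): Nul A is non-empty because A0 = 0, so 0 ∈ Nul A. If u and v are in Nul A and c is any scalar, then
A(u + v) = Au + Av = 0 + 0 = 0 and A(cu) = c(Au) = c0 = 0,
so Nul A is closed under vector addition and scalar multiplication. By the theorem on subspace criteria proved above, Nul A is a subspace of ℝ^n.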
Example Find a set of vectors that spans the nullspace of the matrix
A = [  1  −3  −2 ]
    [ −5   9   1 ].
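For readers who want to check the two nullspace examples by machine, here is a brief SymPy sketch (not part of the original notes; the signs of the matrix entries are assumed from the reconstruction given above):

```python
# A sketch (not from the notes): describing Nul A and finding a spanning set with SymPy.
from sympy import Matrix

A = Matrix([[ 1, -3, -2],
            [-5,  9,  1]])

# rref() returns the reduced row echelon form and the pivot columns; the free
# variable of the system Ax = 0 gives the parametric description of Nul A.
R, pivots = A.rref()
print(R, pivots)

# nullspace() returns a list of column vectors that span Nul A.
for v in A.nullspace():
    print(v)
    assert A * v == Matrix([0, 0])   # each spanning vector satisfies Av = 0
```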
The Column Space of a Matrix
Definition The column space of an m × n matrix, A, denoted by Col A, is the set of all
linear combinations of the vectors that make up the columns of A. In other words,
if
A = [ a_1  a_2  ⋯  a_n ],
then
Col A = Span{a_1, a_2, … , a_n}.
Example Describe the column space of the matrix
A = [  1  −3  −2 ]
    [ −5   9   1 ].
Theorem The column space of any m × n matrix, A, is a subspace of ℝ^m.
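The column space can be explored with the same tools; the following SymPy sketch (again not part of the original notes, and again assuming the reconstructed matrix entries) extracts a basis for Col A from the pivot columns of A:

```python
# A sketch (not from the notes): finding a basis for Col A with SymPy.
from sympy import Matrix

A = Matrix([[ 1, -3, -2],
            [-5,  9,  1]])

# columnspace() returns the pivot columns of A, which form a basis for Col A.
basis = A.columnspace()
print(basis)

# This A has two pivot columns, so Col A is all of R^2 (a subspace of R^m with m = 2).
print(len(basis))
```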
Kernel and Range of a Linear Transformation
Definition If V and W are vector spaces and T : V → W, then T is said to be a linear
transformation (with domain V and codomain W) if
1. T(u + v) = T(u) + T(v) for all u and v in V.
2. T(cu) = cT(u) for all u in V and for all scalars c.
Example Recall that C^1(ℝ), the set of all functions f : ℝ → ℝ that are continuously
differentiable on ℝ, is a vector space. Also, recall that C^0(ℝ), the set of all
functions with domain ℝ that are continuous on ℝ, is a vector space. The
differentiation operator, D : C^1(ℝ) → C^0(ℝ), defined by
D(f) = f′
is a mapping with domain C^1(ℝ) and codomain C^0(ℝ).
Explain why D is in fact a linear transformation (by calling on facts known
from Calculus).
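One possible answer (sketched here; the notes leave this as an in-class exercise): by the sum rule and the constant-multiple rule for derivatives,
D(f + g) = (f + g)′ = f′ + g′ = D(f) + D(g) and D(cf) = (cf)′ = cf′ = cD(f),
so D satisfies both conditions in the definition of a linear transformation.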
Definition The kernel of a linear transformation T : V → W, denoted by ker(T), is the set
of all vectors v ∈ V such that T(v) = 0_W (where 0_W denotes the zero vector in the
vector space W). In other words,
ker(T) = { v ∈ V | T(v) = 0_W }.
The range of a linear transformation T : V → W, denoted by ran(T), is the set of
all vectors w ∈ W that are images under T of at least one vector in V. In other
words,
ran(T) = { w ∈ W | w = T(v) for at least one v ∈ V }.
Example Let D : C^1(ℝ) → C^0(ℝ) be the differentiation operator. Describe the kernel
and the range of D. Is D a one–to–one linear transformation? Does D map C^1(ℝ)
onto C^0(ℝ)? Explain.
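A possible solution sketch (added here, not part of the original transcript): ker(D) is the set of constant functions, since f′(x) = 0 for every x ∈ ℝ exactly when f is constant; because distinct functions that differ by a constant have the same derivative, D is not one-to-one. On the other hand, every g ∈ C^0(ℝ) has an antiderivative G(x) = ∫_0^x g(t) dt in C^1(ℝ) with D(G) = g (by the Fundamental Theorem of Calculus), so ran(D) = C^0(ℝ) and D does map C^1(ℝ) onto C^0(ℝ).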
Theorem If V and W are vector spaces and T : V → W is a linear transformation, then
ker(T) is a subspace of V and ran(T) is a subspace of W.
Example Recall that C^0([0, 1]), the set of all functions f : [0, 1] → ℝ that are continuous
on [0, 1], is a vector space. The integration operator, I : C^0([0, 1]) → ℝ, defined
by
I(f) = ∫_0^1 f(x) dx
is a mapping with domain C^0([0, 1]) and codomain ℝ.
Explain why I is in fact a linear transformation (by calling on facts known from
Calculus). Also, describe the kernel and range of I. Is I a one–to–one linear
transformation? Does I map C^0([0, 1]) onto ℝ? Explain.
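A possible solution sketch (added here, not part of the original transcript): I is linear because
∫_0^1 (f(x) + g(x)) dx = ∫_0^1 f(x) dx + ∫_0^1 g(x) dx and ∫_0^1 cf(x) dx = c ∫_0^1 f(x) dx.
The kernel of I consists of the continuous functions on [0, 1] whose integral over [0, 1] is zero (for example, f(x) = x − 1/2), so I is not one-to-one. For any real number r, the constant function f(x) = r satisfies I(f) = r, so ran(I) = ℝ and I maps C^0([0, 1]) onto ℝ.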