Chapter 22. Subspaces, linear maps and the Kernel-Image theorem
A subset U of a vector space V is called a subspace of V if the following two conditions hold:
S1) If u, u′ are any two vectors in U then u + u′ ∈ U.
S2) If u ∈ U and c ∈ R, then cu ∈ U.
In brief, U is a subspace if it is closed under addition and scalar multiplication.
Examples:
• A hyperplane in Rn defined by an equation a1 x1 + · · · + an xn = 0 (where the ai are given
constants, not all zero) is a subspace of Rn. An intersection of such hyperplanes is also a subspace of Rn.
• The set of polynomials f ∈ Pn such that f (1) = 0 is a subspace of Pn .
• The set of solutions to the differential equation f′′ + af′ + bf = 0 (where a and b are given
constants) is a subspace of C∞(R).
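To see conditions S1) and S2) in action, here is a minimal numerical sketch of the first example, the hyperplane x1 + x2 + x3 + x4 = 0 in R4. It uses Python with numpy (an outside aid, not part of the text), and the particular vectors are illustrative choices.

```python
# Check S1) and S2) numerically for U = {x in R^4 : x1 + x2 + x3 + x4 = 0}.
# The coefficient vector a = (1, 1, 1, 1) and the sample vectors are arbitrary choices.
import numpy as np

a = np.ones(4)                       # defines the hyperplane a . x = 0

def in_U(x, tol=1e-12):
    """True if x satisfies the defining equation of the hyperplane."""
    return abs(a @ x) < tol

u  = np.array([1.0, -2.0,  0.5,  0.5])   # a . u  = 0, so u  lies in U
u2 = np.array([3.0,  0.0, -1.0, -2.0])   # a . u2 = 0, so u2 lies in U

print(in_U(u), in_U(u2))   # True True
print(in_U(u + u2))        # True: U is closed under addition         (S1)
print(in_U(2.5 * u))       # True: U is closed under scalar multiples (S2)
```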
A subspace is itself a vector space.
Proposition 1: If dim V = n, then any subspace U ⊂ V is also finite-dimensional and dim U ≤ n.
Moreover, if dim U = n, then U = V.
Proof: Since dim V = n, no subset of V with more than n elements can be linearly independent, by
the Goldilocks Principle in chap. 21.
Now suppose U is a subspace of V that is not finite dimensional. That means U has no finite spanning
set. Let u1 be a nonzero vector in U. Since {u1 } cannot span U, there is another vector u2 ∈ U which
is not a scalar multiple of u1 . Now {u1 , u2 } cannot span U so there is u3 ∈ U which is not in the span
of {u1 , u2 }. Then there is u4 ∈ U not in the span of {u1 , u2 , u3 }, etc. Continuing like this we arrive
at a subset {u1 , . . . , un+1 } of U such that no uj is in the span of {u1 , . . . , uj−1 }. This set of vectors is
linearly independent. For suppose c1 , . . . , cn+1 are scalars such that
c1 u1 + · · · + cn+1 un+1 = 0.
Since un+1 is not in the span of {u1 , . . . , un }, we must have cn+1 = 0. Now
c1 u1 + · · · + cn un = 0.
Since un is not in the span of {u1 , . . . , un−1 }, we must have cn = 0, etc. So we find that all ci = 0.
We now have a linearly independent subset of V with more than n elements, which is a contradiction.
Hence U is finite-dimensional, as claimed.
Any basis of U would be a linearly independent set in V, so cannot have more than n elements.
Therefore dim U ≤ n.
Finally, suppose dim U = n. By Prop. 2 in chapter 21, any basis B of U is contained in a basis B′ of
V. But since dim U = n = dim V, we must have B = B′, so B is also a basis of V. Now if v is any
element of V, we can write v as a linear combination of the vectors in B, and this combination lies in U. Therefore
V ⊂ U, so U = V. A concrete check of this proposition in coordinates is sketched below.
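The following sketch uses Python with sympy (an outside tool, not used in the text); the subspace U is the span of three arbitrarily chosen vectors in R4, and its dimension is the rank of the matrix having them as columns.

```python
# Proposition 1 in coordinates: a subspace U of R^4 has dim U <= 4,
# and four linearly independent vectors in R^4 span all of R^4.
# All vectors below are arbitrary illustrative choices.
from sympy import Matrix

cols = [Matrix([1, 0, 2, -1]),
        Matrix([0, 1, 1,  1]),
        Matrix([1, 1, 3,  0])]        # third column = first + second
U = Matrix.hstack(*cols)

print(U.rank())                       # 2, so dim U = 2 <= 4 = dim R^4

M = Matrix([[1, 0, 0, 0],
            [1, 1, 0, 0],
            [1, 1, 1, 0],
            [1, 1, 1, 1]])            # four independent columns
print(M.rank())                       # 4, so their span has dimension 4
print(M.solve(Matrix([0, 0, 1, 0])))  # e3 is in the span: M*x = e3 has a solution
```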
As a corollary, we find that the various subspaces of V are stratified according to their dimension. More precisely:
Corollary: Let U ⊂ U′ be finite-dimensional subspaces of a vector space V. Then
dim U = dim U′
if and only if U = U′.
Proof: If U = U′ it is obvious that dim U = dim U′. Conversely, if dim U = dim U′, then applying Prop. 1 to
the pair U ⊂ U′ (with U′ playing the role of V), we get U = U′.

Subspaces arise from linear functions between vector spaces. Suppose we have two vector spaces V
and W. A function
L : V −→ W
is called linear if the following two conditions hold:
L1) If v, v′ are any two vectors in V then L(v + v′) = L(v) + L(v′).
L2) If v ∈ V and c ∈ R, then L(cv) = cL(v).
In brief, L is a linear function if it preserves the addition and scalar multiplication in V and W.
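As a quick symbolic illustration of L1) and L2), the following sketch (again an outside aid, using sympy; the generic cubic polynomials are arbitrary) checks both conditions for the differentiation map D(f) = df/dx on polynomials, which also appears among the examples below.

```python
# Verify L1) additivity and L2) homogeneity for D(f) = df/dx on generic cubics.
from sympy import symbols, diff, expand

x, c = symbols('x c')
a0, a1, a2, a3, b0, b1, b2, b3 = symbols('a0:4 b0:4')

f = a0 + a1*x + a2*x**2 + a3*x**3
g = b0 + b1*x + b2*x**2 + b3*x**3

D = lambda p: diff(p, x)

print(expand(D(f + g) - (D(f) + D(g))))   # 0, so L1) holds for these polynomials
print(expand(D(c*f) - c*D(f)))            # 0, so L2) holds
```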
There are two subspaces associated with a linear function L : V → W, namely the kernel of L:
ker L = {v ∈ V : L(v) = 0},
and the image of L:
im L = {L(v) : v ∈ V} = {w ∈ W : w = L(v) for some v ∈ V}.
Conditions L1) and L2) imply that ker L satisfies S1) and S2), so ker L is a subspace of V; similarly,
im L is a subspace of W. (See the exercises.)
Examples:
• The function L : Rn → R given by
L(x1 , . . . , xn ) = x1 + · · · + xn
is linear. So ker L is the hyperplane in Rn defined by the equation x1 + · · · + xn = 0, and im L = R.
• The function D : Pn → Pn given by
D(f) = df/dx
is linear. So ker D consists of the constant polynomials and im D = Pn−1.
• The function L : C∞(R) −→ C∞(R) given by L(f) = f′′ + af′ + bf is linear (where a, b are
given constants). So ker L is the set of solutions to the differential equation f′′ + af′ + bf = 0.
It is a fact from differential equations that for every g ∈ C∞(R) there exists f ∈ C∞(R) such
that g = f′′ + af′ + bf. This means that im L = C∞(R).
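In coordinates these kernels and images can be computed mechanically. The sketch below (sympy again; identifying the first example with the 1×4 matrix [1 1 1 1] is our own choice of coordinates, not made in the text) computes ker L and im L for L(x1, . . . , x4) = x1 + · · · + x4.

```python
# Kernel and image of L(x1,...,x4) = x1 + x2 + x3 + x4, viewed as a 1x4 matrix.
from sympy import Matrix

A = Matrix([[1, 1, 1, 1]])

print(A.nullspace())      # three vectors spanning the hyperplane x1+x2+x3+x4 = 0
print(A.columnspace())    # [Matrix([[1]])]: the image is all of R, as stated above
print(len(A.nullspace()), A.rank())   # 3 and 1: dim ker L = 3, dim im L = 1
```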
We can also describe ker L and im L in terms of solutions of equations:
Proposition 2: Let L : V → W be a linear function.
1. A vector w ∈ W lies in im L iff the equation L(v) = w has a solution v ∈ V.
2. The difference between any two solutions of L(v) = w lies in ker L.
Proof: Part 1 is immediate from the definition. The proof of part 2 is in the exercises.

Example: Let us find all solutions of the differential equation
f′′ + f = x².
Here the linear map is L : C∞(R) → C∞(R), given by L(f) = f′′ + f. We have seen that ker L has
basis {cos x, sin x}. One solution of L(f) = x² is f = x² − 2, since (x² − 2)′′ + (x² − 2) = 2 + x² − 2 = x².
From Prop. 2 part 2 it follows that every solution of L(f) = x² is of the form
f(x) = c1 cos x + c2 sin x + x² − 2,
for some constants c1, c2. A symbolic check of this computation is sketched below.
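The check relies on sympy's dsolve (an outside tool, not part of the chapter) and should reproduce the general solution found by hand, with the arbitrary constants printed as C1 and C2.

```python
# Solve f'' + f = x^2 symbolically and compare with c1*cos x + c2*sin x + x^2 - 2.
from sympy import Function, dsolve, Eq, symbols

x = symbols('x')
f = Function('f')

sol = dsolve(Eq(f(x).diff(x, 2) + f(x), x**2), f(x))
print(sol)    # Eq(f(x), C1*sin(x) + C2*cos(x) + x**2 - 2)
```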
The kernel and image of a linear function are related by the Kernel-Image Theorem, which we discussed informally in chapter 19 in terms of “conservation of information”. Here is the precise statement.

Kernel-Image Theorem: Let L : V → W be a linear function between two vector spaces V and W.
Assume that V is finite-dimensional. Then ker L and im L are both finite dimensional and we have
dim(ker L) + dim(im L) = dim V.
Proof: By Prop. 1, ker L is finite dimensional because it is a subspace of the finite-dimensional vector
space V. To prove that im L is finite dimensional we must find a finite spanning set for im L. Let
{y1 , . . . , yn } be a basis of V. Every vector in im L is of the form L(v) for some v ∈ V. We can write
v as a linear combination: v = c1 y1 + · · · + cn yn and
L(v) = c1 L(y1 ) + · · · + cn L(yn ).
Thus, {L(y1 ), . . . , L(yn )} is a finite spanning set for im L, as desired. So we have proved that ker L
and im L are both finite dimensional. Let
k = dim(ker L), m = dim(im L),
and let {v1 , . . . , vk } be a basis of ker L and let {w1 , . . . , wm } be a basis of im L.
Since each wi lies in im L, we can choose (arbitrarily) some ui ∈ V with wi = L(ui); these ui are not unique. Let
B = {v1, . . . , vk, u1, . . . , um}
be the set of vectors obtained in this way. I claim that B is a basis of V. (In general, B will not be the same as the basis {y1, . . . , yn} used in the first part of the proof.)
To show spanning, let v ∈ V. Then L(v) ∈ im L, which is spanned by the wi ’s, so there are scalars
ck+1 , . . . , ck+m such that
L(v) = ck+1 w1 + · · · + ck+m wm .
Since each wi = L(ui ), we have
L(v) = ck+1 L(u1) + · · · + ck+m L(um) = L(ck+1 u1 + · · · + ck+m um).
It follows that
L(v − ck+1 u1 − · · · − ck+m um ) = 0,
which means that
v − ck+1 u1 − · · · − ck+m um ∈ ker L.
Since ker L is spanned by the vi's, there are scalars c1, . . . , ck such that
v − ck+1 u1 − · · · − ck+m um = c1 v1 + · · · + ck vk .
Therefore,
v = c1 v1 + · · · + ck vk + ck+1 u1 + · · · + ck+m um ,
so we have proved that B spans V.
To show linear independence, suppose c1 , . . . , ck+m are scalars such that
c1 v1 + · · · + ck vk + ck+1 u1 + · · · + ck+m um = 0.
Then
0 = L(0) = L(c1 v1 + · · · + ck vk + ck+1 u1 + · · · + ck+m um )
= c1 L(v1 ) + · · · + ck L(vk ) + ck+1 L(u1 ) + · · · + ck+m L(um )
= ck+1 w1 + · · · + ck+m wm ,
since each L(vi ) = 0 and each L(ui ) = wi . Since the wi ’s are linearly independent, we must have
ck+1 = · · · = ck+m = 0. But now
c1 v1 + · · · + ck vk = 0,
and since the vi's are linearly independent we must have c1 = · · · = ck = 0. We have proved that B is
linearly independent, so B is a basis of V. Since dim V is the number of elements in any basis of V, we
have
dim V = k + m = dim(ker L) + dim(im L),
and the theorem is proved.
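Before turning to the exercises, here is a small coordinate check of the theorem (sympy, with an arbitrarily chosen 3×5 matrix standing in for L : R5 → R3): the dimensions of the kernel and the image should add up to 5, the dimension of the domain.

```python
# Kernel-Image (rank-nullity) check: dim(ker L) + dim(im L) = dim of the domain.
from sympy import Matrix

L = Matrix([[1, 2, 0, 1, -1],
            [0, 1, 1, 0,  2],
            [1, 3, 1, 1,  1]])   # third row = first row + second row

k = len(L.nullspace())           # dim(ker L)
m = L.rank()                     # dim(im L), the column rank
print(k, m, k + m)               # 3 2 5, and dim R^5 = 5 as the theorem predicts
```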
Exercise 22.1 Suppose L : V → W is a linear function. Prove that ker L is a subspace of V, and
that im L is a subspace of W.
Exercise 22.2 Let L : Pn → R be the function given by
L(f) = ∫₀¹ f(x) dx.
(a) Show that L is a linear function.
(b) Determine the kernel and image of L.
(c) Determine the dimension of the subspace {f ∈ Pn : ∫₀¹ f(x) dx = 0}.
Exercise 22.3 Let D : Pn → Pn be the function given by
D(f) = df/dx + f.
(a) Show that D is a linear function.
(b) Show that im D = Pn .
(c) Use the Kernel-Image Theorem to compute ker D.
(d) The function f = e⁻ˣ satisfies the equation f′ + f = 0. Does this contradict your result in (c)?
Exercise 22.4 Compute the kernel and image of the matrix that you used to project the hypercube into R2.
Exercise 22.5 Use the Kernel-Image Theorem to compute the dimension of the vector space
V = {f ∈ Pn : f (0) = f (1) = 0}.
[Hint: Find a linear map L : Pn → R2 such that V = ker L.]
Exercise 22.6 A function L : V → W is called one-to-one if
L(v) = L(v′) implies v = v′.
Assume that L is linear. Prove that L is one-to-one if and only if ker L = {0}.
Exercise 22.7 (Part 2 of Prop. 2) Let L : V → W be a linear function, suppose w ∈ im L, and that
L(v) = w. Suppose also that L(v′) = w. Prove that v′ = v + u for some vector u ∈ ker L.