Thus one way of shrinking a given spanning set is to start eliminating linearly dependent elements. Before we describe this process more formally, however, it will be useful
to introduce some terminology and prove a couple of simple facts.
Suppose W is a subspace of some vector space V . We say that a set of vectors in W
is a basis for W if it spans W and is linearly independent. Thus a basis is meant to be a
spanning set of minimal size. One natural question that arises, however, is whether this
‘size’ is uniquely determined. It is.
Proposition 2.3.4. If {v1 , . . . , vn } and {w1 , . . . , wm } are both bases for the subspace W in V ,
then m = n.
Proof. Suppose for contradiction that m > n. Since the set {v1 , . . . , vn } already spans
W , so does the set {w1 , v1 , . . . , vn }. Since w1 is a linear combination of the v j , this
set must be linearly dependent. By the third notion of linear dependence above, some
vector in this set is equal to a linear combination of the preceding vectors. This cannot be w1 , since it has no preceding vectors. Thus there is some v j equal to a linear combination of the preceding vectors. Deleting this particular v j produces the set
{w1 , v1 , . . . , v j−1, v j+1 , . . . , vn } which still spans W .
Now consider the set {w2, w1, v1, . . . , vj−1, vj+1, . . . , vn}. This set must also be linearly dependent, so that some vector is equal to a linear combination of the preceding
ones. This vector cannot be w1 or w2, because the wi are linearly independent. Thus
some vk is equal to a linear combination of the preceding vectors, where k ≠ j.
We delete vk , and repeat the process over and over, each time deleting some v and
inserting some w, each time producing a set that still spans W . Because there are more
w’s than v’s, we eventually discover that the set {w1 , . . . , wn } spans W . But this means
that the original set of w vectors must be linearly dependent, contrary to assumption.
We deduce that m ≤ n. A similar argument shows that n ≤ m. Thus m = n.
We now have a basic process for shrinking spanning sets down to bases. Suppose W
is a subspace of V spanned by {w1 , w2 , . . . , wn }. If this set is linearly independent, it is a
basis already. If it is linearly dependent, then one of them is equal to a linear combination
of the others. This means we can exclude this one vector, and the remaining set will have
the same span. Now if this smaller set is linearly independent, then it is a basis for W. If
it is not, then we repeat the procedure, removing some vector that is equal to a linear
combination of the others. The crucial point is that removing such vectors does not
diminish the span at all. All these sets will still span all of W . Clearly this process
cannot go on forever, as eventually we will run out of vectors. We have thus proven the
following.
Theorem 2.3.5. Any finite set of vectors contains a basis for the subspace that it spans.
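This shrinking procedure is mechanical enough to automate. The following is a minimal computational sketch, assuming numpy is available; the name extract_basis and the rank-based independence test are our own illustrative choices, not notation from the text. It realizes one concrete version of the procedure: scan the vectors left to right and keep each one only if it is independent of those already kept, which amounts to always deleting a vector that depends on the others.

```python
import numpy as np

def extract_basis(vectors, tol=1e-10):
    """Shrink a finite spanning set to a basis for its span.

    Keeps each vector only if it is linearly independent of those
    already kept; equivalently, deletes vectors that are linear
    combinations of the remaining ones, as in the text.
    """
    kept = []
    for v in vectors:
        candidate = np.array(kept + [list(v)], dtype=float)
        # Appending v raises the rank of the kept set exactly when
        # v is independent of it, and only then do we keep v.
        if np.linalg.matrix_rank(candidate, tol=tol) > len(kept):
            kept.append(list(v))
    return kept
```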
Example. Describe the subspace spanned by the three vectors (1, 3), (2, 1) and (−1, 1) in R2 ,
and find a basis for this subspace.
We check first for linear independence by trying to solve the equation a(1, 3) +
b(2, 1) + c(−1, 1) = (0, 0). This leads to the system of equations
a + 2b − c = 0
3a + b + c = 0
Solving the last one for c, we see that c = −3a − b. Substituting into the first we find
that 4a + 3b = 0, so that b = −(4/3)a. Thus for any choice of a, if we set b = −(4/3)a
and c = −3a − b, we will have a solution. Taking a = 3 gives b = −4 and c = −5.
In other words 3(1, 3) − 4(2, 1) − 5(−1, 1) = (0, 0). Therefore these vectors are linearly
dependent.
On the other hand, this set does indeed span all of R2 . To show this, we write
a(1, 3) + b(2, 1) + c(−1, 1) = ( x, y) and see if this always has a solution, regardless of
choices for x and y. We are led to the system
a + 2b − c = x
3a + b + c = y
Solving as before for c in the second equation, we find that c = y − 3a − b. Plugging this
into the first equation produces 4a + 3b − y = x, so that b = (1/3)(x + y − 4a). Thus we can
choose a arbitrarily and then, for any given x and y, we will find appropriate b and c
values from these two equations. Thus we will always find a solution, so these vectors
span all of R2 .
To find a basis inside this spanning set, we need to discard a vector that is a linear
combination of the others. Looking at the dependency equation above, it is clear that we can solve it for
any one of the three vectors. Thus we can eliminate any one of the three. The remaining
two are then easily checked to be linearly independent, so that any two of these form a
basis for R2.
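As a quick numerical cross-check of this example, one might run the extract_basis sketch from above together with a rank computation (again just an illustration, using numpy):

```python
import numpy as np

vectors = [(1, 3), (2, 1), (-1, 1)]
A = np.array(vectors, dtype=float)

# The dependency found by hand: 3(1,3) - 4(2,1) - 5(-1,1) = (0,0).
assert np.allclose(np.array([3, -4, -5]) @ A, 0)

# Rank 2 means the three vectors span all of R^2 ...
print(np.linalg.matrix_rank(A))  # 2
# ... and the shrinking procedure keeps the first two as a basis.
print(extract_basis(vectors))    # [[1, 3], [2, 1]]
```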
2.3.3 Dimension
If a subspace W of a vector space V admits a finite spanning set, it is called finite dimensional. Otherwise it is called infinite dimensional. We saw in the last section that any two
finite bases for a finite dimensional subspace must contain the same number of vectors.
Thus if W is finite dimensional, we say that the dimension of W is equal to the number
of vectors in any basis for W .
Example. Rn is n–dimensional. So is Cn .
Example. The kernel of the linear map considered above is 1–dimensional. The image is
2–dimensional.
Just as every spanning set contains a basis, every linearly independent set can be
extended to a basis.
Proposition 2.3.6. Suppose {v1 , . . . , vm } are linearly independent vectors in a finite dimensional vector space V . Then there is a basis for V of the form {v1 , . . . , vm , u1 , . . . , uk } where the
dimension of V is m + k.
Proof. Let {u1 , . . . , un } be a basis for the n–dimensional space V . Then the set
{v1, . . . , vm, u1, . . . , un}
spans V , and so contains a basis. The process for finding a basis inside a spanning set
consists of removing, one at a time, vectors that are dependent on the other remaining
ones. At any given stage we might have several vectors to choose from for removal. We
claim that if we insist on always choosing the rightmost of the available options, then
we will never remove any of the v j . To see this, note that any dependency equation with
nontrivial solution must contain at least one of the ui , because the v j themselves are
independent. It follows that the basis thus produced will be of the form we desire.
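The recipe in this proof, append a known basis and then shrink while protecting the leading vectors, translates directly into a short sketch built on extract_basis above (our own illustration, under the same assumptions):

```python
import numpy as np

def extend_to_basis(independent, ambient_basis):
    """Extend a linearly independent list to a basis of the whole
    space: append a known basis, then shrink. Because extract_basis
    scans left to right, the independent prefix is always kept and
    only appended vectors can be discarded, mirroring the proof.
    """
    return extract_basis(list(independent) + list(ambient_basis))

# Extend {(1, 1, 0)} to a basis of R^3 using the standard basis:
print(extend_to_basis([(1, 1, 0)], np.eye(3)))
# [[1, 1, 0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
```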
This proposition also has immediate consequences for dimension, most importantly
the following.
Corollary 2.3.7. If W is a subspace of the finite-dimensional vector space V , then dim W ≤
dim V .
Proof. Any basis for W is a collection of linearly independent vectors in V , which therefore extends to a basis for V . The result follows.
We also note two more immediate corollaries.
Corollary 2.3.8. If {v1 , . . . , vm } is a linearly independent set of vectors in an n–dimensional
vector space V , then m ≤ n. Equivalently, any set with more than n vectors must be linearly
dependent.
Corollary 2.3.9. If {v1 , . . . , vm } spans an n–dimensional vector space V , then m ≥ n. Equivalently, any set with fewer than n vectors cannot span.
Example. We look back at the last example, and show how an understanding of dimension makes the problem much easier. Consider the three vectors (1, 3), (2, 1), and
(−1, 1). The fact that the dimension of R2 is 2 implies that this set is automatically linearly dependent. To find a basis inside, however, we still need to solve the dependency
equation. We do this as before, discovering that we may eliminate any one of the three
from the set. In any case the remaining two are clearly independent (because they are
not multiples of one another), and so are a basis for the subspace they span. Thus this
subspace is 2–dimensional, and hence must be all of R2 . Note that having an understanding of dimension allowed us to (i) know the answer to the dependency problem
before we started, and (ii) know that the vectors span all of R2 without checking the
general equation.
Example. Consider the vectors (1, 2, 3, 4), (2, 3, 1, 5), (−1, 0, 7, 2), and (0, 1, 0, 1). Dimension considerations here do not tell us immediately whether this set spans or is linearly
independent. We check the dependency equation
a(1, 2, 3, 4) + b(2, 3, 1, 5) + c(−1, 0, 7, 2) + d(0, 1, 0, 1) = (0, 0, 0, 0)
and discover (after some work) that a is arbitrary, b = −(2/3)a, c = −(1/3)a, and d = 0.
Thus we have an equation where we can solve for the first, second, or third vector. In
particular, setting a = −3 gives (−1, 0, 7, 2) = 3(1, 2, 3, 4) − 2(2, 3, 1, 5). Since every nontrivial solution to the dependency equation involves (−1, 0, 7, 2) nontrivially, eliminating
it produces a linearly independent set. Thus the subspace spanned by the four vectors
is 3–dimensional, and a basis for it is {(1, 2, 3, 4), (2, 3, 1, 5), (0, 1, 0, 1)}.
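A rank computation confirms this bookkeeping (a numpy sketch; np.delete simply drops the third row):

```python
import numpy as np

vs = np.array([(1, 2, 3, 4), (2, 3, 1, 5), (-1, 0, 7, 2), (0, 1, 0, 1)],
              dtype=float)

# The dependency found above: (-1,0,7,2) = 3(1,2,3,4) - 2(2,3,1,5).
assert np.allclose(vs[2], 3 * vs[0] - 2 * vs[1])

# The span is 3-dimensional, and the rank is still 3 after deleting
# the third vector, so the remaining three vectors form a basis.
print(np.linalg.matrix_rank(vs))                        # 3
print(np.linalg.matrix_rank(np.delete(vs, 2, axis=0)))  # 3
```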
Example. Consider the vectors (1, 0, 1), (3, 0, 3), (1, 1, 0), and (2, 1, 1). Solving the dependency equation
a(1, 0, 1) + b(3, 0, 3) + c(1, 1, 0) + d(2, 1, 1) = (0, 0, 0)
as we have done before, we find that a and b are arbitrary, while c = a + 3b and d =
− a − 3b. So by setting b = 1, for instance, we find a solution to the dependency equation
that involves the vector (3, 0, 3) nontrivially. Thus it is dependent on the others and can
be removed.
But is the remaining set of three vectors linearly independent? Since we have two
unknowns that can be chosen freely, we can choose b = 0 and a = 1 expressing the first
vector in terms only of the third and fourth, not involving the second at all. Thus even
when the second vector is removed, the remaining set is still dependent, and we can
remove (for instance) the first.
The remaining two vectors, (1, 1, 0) and (2, 1, 1), are now seen to be linearly independent. This is clear because one is not a multiple of the other, but also because there are no
choices of solution to the dependency equation that produce an equation involving only
these two vectors.
In sum, these four vectors span a 2–dimensional subspace of R3, a basis for which is
{(1, 1, 0), (2, 1, 1)}.
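Once more, a quick rank check (same numpy sketch style as above) confirms the conclusion:

```python
import numpy as np

vs = np.array([(1, 0, 1), (3, 0, 3), (1, 1, 0), (2, 1, 1)], dtype=float)

# The four vectors span only a 2-dimensional subspace of R^3 ...
print(np.linalg.matrix_rank(vs))      # 2
# ... and the last two rows alone already have rank 2, so
# {(1, 1, 0), (2, 1, 1)} is a basis for that subspace.
print(np.linalg.matrix_rank(vs[2:]))  # 2
```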