Linear Independence
A consistent system of linear equations with matrix equation Ax = b, where A is an m × n matrix, has a solution set whose graph in R^n is a "linear" object; that is, it has one of only n + 1 possible shapes: a point (a copy of R^0), a line (a copy of R^1), a plane (a copy of R^2), a 3-space (a copy of R^3), …, all of R^n.
We have seen how, given one particular solution x = p, every other solution is a translate x = p + v_h of p by a solution v_h to the associated homogeneous system of equations with matrix form Ax = 0. That is, the solution set for Ax = b has the same "shape" as, and is parallel to, the solution set for the homogeneous system Ax = 0.
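We can see this translation structure concretely with a short computation; the following is a minimal sketch, assuming the sympy library is available, and the system itself is an arbitrary illustration:

```python
from sympy import Matrix, linsolve, symbols

# A minimal sketch with an arbitrary example system. The general solution of
# Ax = b is one particular solution p translated by the solutions v_h of the
# homogeneous system Ax = 0.
A = Matrix([[1, 2],
            [2, 4]])
b = Matrix([3, 6])

x1, x2 = symbols('x1 x2')
print(linsolve((A, b), x1, x2))  # {(3 - 2*x2, x2)}: p = (3, 0) translated
print(A.nullspace())             # [Matrix([[-2], [1]])]: the direction v_h
```

Here the solution set of Ax = b is the line through p = (3, 0) parallel to the line of homogeneous solutions spanned by v_h = (−2, 1).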
So the shape of the solution set depends only on A and not on b. In fact, the process of row reducing A shows that the solutions v_h to the associated homogeneous system of equations have the form v_h = x_1 v_1 + … + x_f v_f, where x_1, x_2, …, x_f are the free variables that arise (each corresponding to a column of A that does not contain a pivot entry). Therefore, the shape of the solution set to Ax = b is determined entirely by the number of free variables that appear in the row reduction procedure.
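As a sketch of this bookkeeping, sympy's rref lets us read off the pivot columns and count the free variables; the matrix below is an arbitrary example of my own:

```python
from sympy import Matrix

# Row reduce A, read off the pivot columns, and count the free variables,
# which determine the shape of the solution set of Ax = 0.
A = Matrix([[1, 2, 1, 0],
            [0, 0, 1, 3],
            [1, 2, 2, 3]])

R, pivot_cols = A.rref()               # reduced echelon form, pivot columns
free_vars = A.cols - len(pivot_cols)

print(pivot_cols)                      # (0, 2): columns 1 and 3 have pivots
print(free_vars)                       # 2: the solutions of Ax = 0 form a
                                       # plane (a copy of R^2) inside R^4
```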
The fundamental case occurs when there are no free variables, that is, when the reduced echelon form of A has a pivot in every column. In this case, by expressing Ax = 0 as a vector equation of the form x_1 a_1 + … + x_n a_n = 0, where A = [a_1 a_2 … a_n], we have a situation in which the only solution to the system is the trivial solution x = v_h = 0.
This leads to the following very important definition: a set of vectors a_1, a_2, …, a_n is said to be linearly independent when the vector equation x_1 a_1 + … + x_n a_n = 0 has only the trivial solution x_1 = x_2 = … = x_n = 0.
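Computationally, this condition amounts to comparing the rank of A with the number of its columns; a minimal NumPy sketch, with an example matrix of my own (and assuming well-conditioned data, since the rank test uses floating-point tolerances):

```python
import numpy as np

# The columns of A are linearly independent exactly when rank(A) equals the
# number of columns, i.e. when there are no free variables.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])

n = A.shape[1]
print(np.linalg.matrix_rank(A) == n)   # False: a3 = 2*a1 + a2, so dependent
```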
Otherwise, there is a nontrivial solution (that is, a solution in which at least one of the x's is nonzero), and we say that the a's are linearly dependent; the equation is then called a linear dependence relation amongst the a's. Alternatively, to say that the a's are linearly dependent is to say that the zero vector 0 can be expressed as a nontrivial linear combination of the a's.
Determining whether a set of vectors a_1, a_2, …, a_n is linearly independent is easy when one of the vectors is 0: if, say, a_1 = 0, then we have a simple solution to x_1 a_1 + … + x_n a_n = 0 given by choosing x_1 to be any nonzero value we please and putting all the other x's equal to 0. Consequently, a set of vectors that contains the zero vector must always be linearly dependent; equivalently, a set of linearly independent vectors can never contain the zero vector.
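This observation is trivial to verify numerically; in the tiny sketch below, a2 is an arbitrary vector of my own choosing:

```python
import numpy as np

# With a1 = 0, taking x1 = 1 and x2 = 0 gives a nontrivial solution of
# x1*a1 + x2*a2 = 0, so the set {a1, a2} is linearly dependent.
a1 = np.zeros(3)
a2 = np.array([1.0, 2.0, 3.0])

print(1.0 * a1 + 0.0 * a2)   # [0. 0. 0.], even though x1 = 1 is nonzero
```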
Another situation in which it is easy to determine linear independence is when there are more vectors in the set than entries in the vectors. If n > m, then the n vectors a_1, a_2, …, a_n in R^m are the columns of an m × n matrix A. The vector equation x_1 a_1 + … + x_n a_n = 0 is equivalent to the matrix equation Ax = 0, whose corresponding linear system has more variables than equations. Thus there must be at least one free variable in the solution, meaning that there are nontrivial solutions to x_1 a_1 + … + x_n a_n = 0: if n > m, then the set {a_1, a_2, …, a_n} of vectors in R^m must be linearly dependent.
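A short sympy sketch of this situation, with n = 3 vectors in R^2 (m = 2) chosen arbitrarily:

```python
from sympy import Matrix

# With more columns than rows, Ax = 0 must have a free variable, so the
# nullspace is nontrivial and the columns are linearly dependent.
A = Matrix([[1, 0, 4],
            [0, 1, 5]])

print(A.nullspace())   # [Matrix([[-4], [-5], [1]])]: -4*a1 - 5*a2 + a3 = 0
```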
When n is small we have a clear geometric picture of the relation amongst linearly independent vectors. For instance, the case n = 1 produces the equation x_1 a_1 = 0, and as long as a_1 ≠ 0, we only have the trivial solution x_1 = 0. A single nonzero vector always forms a linearly independent set.
When n = 2, the equation takes the form x_1 a_1 + x_2 a_2 = 0. If this were a linear dependence relation, then one of the x's, say x_1, would have to be nonzero. Then we could solve the equation for a_1 and obtain a relation indicating that a_1 is a scalar multiple of a_2. Conversely, if one of the vectors is a scalar multiple of the other, we can express this in the form x_1 a_1 + x_2 a_2 = 0 with a nonzero coefficient. Thus, a set of two nonzero vectors is linearly dependent if and only if they are scalar multiples of each other.
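For two vectors in the plane, this test reduces to a 2 × 2 determinant; a quick sketch, with vectors of my own choosing:

```python
import numpy as np

# Two vectors in R^2 are linearly dependent exactly when the determinant of
# the 2 x 2 matrix [a1 a2] vanishes, i.e. when one is a scalar multiple of
# the other.
a1 = np.array([2.0, 4.0])
a2 = np.array([3.0, 6.0])          # a2 = 1.5 * a1

det = np.linalg.det(np.column_stack([a1, a2]))
print(np.isclose(det, 0.0))        # True: the pair is linearly dependent
```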
More generally, we can prove the following:

Theorem A set {a_1, a_2, …, a_n} of vectors is linearly dependent if and only if at least one of the vectors a_i is a linear combination of the others. In fact, if {a_1, a_2, …, a_n} is a linearly dependent set and a_1 ≠ 0, then there must be some a_j (with j > 1) which is a linear combination of the preceding vectors a_1, a_2, …, a_{j−1}.
Proof If {a_1, a_2, …, a_n} is a linearly dependent set, then there are values of the x's, not all 0, that make x_1 a_1 + … + x_n a_n = 0 true. If we choose an index i corresponding to some nonzero x_i and solve the vector equation for a_i, this shows that a_i is a linear combination of the other vectors in the set.

Conversely, if a_i can be written as a linear combination of the remaining a's, then moving a_i to the other side of this equation expresses 0 as a nontrivial linear combination of all the a's (the coefficient of a_i is −1), so {a_1, a_2, …, a_n} is a linearly dependent set.
Furthermore, if {a_1, a_2, …, a_n} is a linearly dependent set and a_1 ≠ 0, then there is a nontrivial solution to x_1 a_1 + … + x_n a_n = 0. Let j be the largest subscript whose coefficient x_j in this equation is nonzero; then in fact x_1 a_1 + … + x_j a_j = 0, and we can solve this equation for a_j, thereby expressing a_j as a linear combination of the preceding vectors a_1, a_2, …, a_{j−1}. (Note that j > 1: if j = 1, the relation would reduce to x_1 a_1 = 0 with x_1 ≠ 0, forcing a_1 = 0, contrary to assumption.) //
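The proof's recipe is constructive, and we can trace it on a small dependent set; the matrix below is my own example:

```python
from sympy import Matrix

# Take a nontrivial solution x of Ax = 0, let j be its largest nonzero index,
# and solve x_1 a_1 + ... + x_j a_j = 0 for a_j.
A = Matrix([[1, 0, 2],
            [0, 1, 3],
            [1, 1, 5]])            # columns satisfy a3 = 2*a1 + 3*a2

x = A.nullspace()[0]               # a nontrivial solution, here (-2, -3, 1)
j = max(i for i, xi in enumerate(x) if xi != 0)

coeffs = [-x[i] / x[j] for i in range(j)]
print(coeffs)                      # [2, 3]: a_3 = 2*a_1 + 3*a_2
```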