Math 60 – Linear Algebra
Solutions to Midterm 1
(1) Consider the set R2 with the operations of addition and scalar multiplication redefined
as follows:
(x1 , y1 ) + (x2 , y2 ) = (x1 + x2 + 1, y1 + y2 − 3)
r(x, y) = (rx + r − 1, ry − 3r + 3)
(a) Find the additive identity and the additive inverse corresponding to these operations, justifying your choices.
Suppose (x, y) is an arbitrary vector in R2 . If (a, b) is the zero vector (additive
identity), then we must have (x, y) + (a, b) = (x, y). In other words, using
addition as defined here, we must have:
(x + a + 1, y + b − 3) = (x, y)
Matching coordinates gives a + 1 = 0 and b − 3 = 0, and so the zero vector is (−1, 3). Note that this is the same vector we would
get by computing 0(x, y) using the given definition of scalar multiplication.
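To verify: 0(x, y) = (0 · x + 0 − 1, 0 · y − 3 · 0 + 3) = (−1, 3), matching the zero vector found above.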
If (c, d) is the additive inverse of (x, y), then (x, y) + (c, d) must equal the zero
vector (−1, 3). That is
(x + c + 1, y + d − 3) = (−1, 3)
Solving x + c + 1 = −1 and y + d − 3 = 3, the additive inverse of (x, y) is (−x − 2, −y + 6). Note that this is the same
vector we would have gotten by computing (−1)(x, y) using the given definition
of scalar multiplication.
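To verify: (−1)(x, y) = ((−1)x + (−1) − 1, (−1)y − 3(−1) + 3) = (−x − 2, −y + 6), matching the additive inverse found above.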
(b) Under these operations, is scalar multiplication distributive over vector addition?
We wish to check whether r[(x1 , y1 ) + (x2 , y2 )] = r(x1 , y1 ) + r(x2 , y2 ) for all
scalars r and all vectors (x1 , y1 ) and (x2 , y2 ). Using the addition and scalar
multiplication operations given, we compute the left-hand side:
r[(x1 , y1 ) + (x2 , y2 )] = r(x1 + x2 + 1, y1 + y2 − 3)
= (r(x1 + x2 + 1) + r − 1, r(y1 + y2 − 3) − 3r + 3)
= (rx1 + rx2 + 2r − 1, ry1 + ry2 − 6r + 3)
and the right-hand side:
r(x1 , y1 ) + r(x2 , y2 ) = (rx1 + r − 1, ry1 − 3r + 3) + (rx2 + r − 1, ry2 − 3r + 3)
= ((rx1 + r − 1) + (rx2 + r − 1) + 1 ,
(ry1 − 3r + 3) + (ry2 − 3r + 3) − 3)
= (rx1 + rx2 + 2r − 1, ry1 + ry2 − 6r + 3)
The two sides are equal, proving that scalar multiplication is distributive over
vector addition. In fact, you can verify, using similar computations, that all 8
axioms hold, so that R2 is a vector space using these operations.
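For instance, the identity axiom 1(x, y) = (x, y) holds, since 1(x, y) = (1 · x + 1 − 1, 1 · y − 3 · 1 + 3) = (x, y).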
(2) Determine which of the following subsets are subspaces of the corresponding vector
spaces.
(a) V = {(x, y, z) ∈ R3 | |x| = 2|y| = 3|z|}.
The main impediment to V being a vector subspace is that the absolute value is
not additive. For a counterexample, we observe that both vectors (6, 3, 2) and
(−6, 3, 2) are in V (since |6| = 2|3| = 3|2| and | − 6| = 2|3| = 3|2|), but their
sum
(6, 3, 2) + (−6, 3, 2) = (0, 6, 4)
is not, since |0| ≠ 2|6|. So the subset V is not a vector subspace of R3 .
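Note that scalar multiplication causes no such problem: if |x| = 2|y| = 3|z|, then |rx| = |r||x| = 2|r||y| = 2|ry| and likewise |rx| = 3|rz|, so V is closed under scalar multiplication; it is only closure under addition that fails.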
(b) W = {f (x) ∈ F(R) | f (−x) = f (x)}.
Suppose f (x) and g(x) are two arbitrary vectors in W , and r ∈ R is any scalar.
We wish to show that the vector h(x) = f (x) + g(x) is in W . To do this, it is
enough to check whether h(x) = h(−x). But h(x) = f (x) + g(x) = f (−x) +
g(−x) = h(−x) (since we picked f (x) and g(x) ∈ W , and so f (x) = f (−x) and
g(x) = g(−x)). So W is closed under addition.
We also wish to check that the vector k(x) = rf (x) is in W . To do this, it is
enough to check whether k(x) = k(−x). But k(x) = rf (x) = rf (−x) = k(−x)
(again using f (x) = f (−x)). So W is closed under scalar multiplication. And
therefore W is a vector subspace of F(R).
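The vectors in W are exactly the even functions: for example, f (x) = cos x and g(x) = |x| lie in W , and by the argument above so does any combination such as 2 cos x + |x|.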
(3) Find all values of a for which the following system of equations has
(a) No solutions.
(b) Only one solution.
(c) Infinitely many solutions, and find these solutions.
x + y − 3z = 0
−x      + az = 2
     ay − 2z = 2
To determine whether a system of linear equations has no, one, or many solutions, we must reduce the system to its echelon form:
[  1  1  −3 | 0 ]
[ −1  0   a | 2 ]
[  0  a  −2 | 2 ]
Adding row 1 to row 2:
[ 1  1     −3 | 0 ]
[ 0  1  a − 3 | 2 ]
[ 0  a     −2 | 2 ]
Subtracting a times row 2 from row 3:
[ 1  1             −3 | 0      ]
[ 0  1          a − 3 | 2      ]
[ 0  0  −2 − a(a − 3) | 2 − 2a ]
Since −2 − a(a − 3) = −(a − 1)(a − 2) and 2 − 2a = 2(1 − a), this is
[ 1  1               −3 | 0        ]
[ 0  1            a − 3 | 2        ]
[ 0  0  −(a − 1)(a − 2) | 2(1 − a) ]
Now the system is in echelon form. The only way for a system of equations to have
no solutions is when (in echelon form) a row of zeros on the left equals a nonzero
number on the right. In this case, when −(a − 1)(a − 2) = 0 and 2 − 2a ≠ 0. That
is, when a = 2.
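To see this concretely: with a = 2 the last row of the echelon form reads 0x + 0y + 0z = 2(1 − 2) = −2, which no choice of (x, y, z) can satisfy.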

For the system of equations to have one unique solution, there must be at least one
solution, and there must be no free variables. In our system, we see that both x and
y are leading variables. For z to be a leading variable as well (rather than a free
variable), we must have −(a − 1)(a − 2) ≠ 0. Note that this condition also guarantees
at least one solution by eliminating the possibility of a row of zeros. So the system
will have one unique solution if a ≠ 1 and a ≠ 2.
Finally, for the system to have infinitely many solutions, one of its variables must
be free, and there must be at least one solution. Combining the reasoning of the two
parts above, we see that z is free if −(a − 1)(a − 2) = 0, in other words, if a = 1 or
a = 2. For there to be at least one solution, the resulting row of zeros on the left
must be countered by a zero on the right. So 2 − 2a = 0, and so a = 1. So if a = 1
we can continue with the Gaussian elimination to find all solutions. With a = 1, the
echelon form becomes
[ 1  1  −3 | 0 ]
[ 0  1  −2 | 2 ]
[ 0  0   0 | 0 ]
and subtracting row 2 from row 1 gives
[ 1  0  −1 | −2 ]
[ 0  1  −2 |  2 ]
[ 0  0   0 |  0 ]
Setting the free variable z = r, the first two rows give x = −2 + r and y = 2 + 2r, and so
[ x ]   [ −2 + r ]   [ −2 ]     [ 1 ]
[ y ] = [ 2 + 2r ] = [  2 ] + r [ 2 ]
[ z ]   [    r   ]   [  0 ]     [ 1 ]
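As a quick check (with a = 1), substituting the general solution back into the original system:
x + y − 3z = (−2 + r) + (2 + 2r) − 3r = 0
−x + z = −(−2 + r) + r = 2
y − 2z = (2 + 2r) − 2r = 2
so every vector of this form is indeed a solution.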
(4) Suppose V is an n-dimensional vector space, and that {~v1 , · · · , ~vn } are n vectors in
V . Prove one of the following statements, clearly stating which one you are proving.
• If {~v1 , · · · , ~vn } are linearly independent, then V = Span{~v1 , · · · , ~vn }.
We proved in class that if a set of vectors is linearly independent, then we
can add select vectors to that set to get a (possibly bigger) set that still is
linearly independent, but also spans the corresponding vector space. (This is
what the book calls the Expansion Theorem). So we can add some vectors
~w1 , · · · , ~wr to the set and obtain a bigger set {~v1 , · · · , ~vn , ~w1 , · · · , ~wr } which is
linearly independent, but also spans V . In other words, this set would form a
basis for V . We are also given that dim(V ) = n; but our constructed basis has
n + r elements, and so dim(V ) = n + r. The dimension of a vector space cannot
be two different numbers (see for instance Theorem 3.10), so r must equal 0,
and hence {~v1 , · · · , ~vn } must have spanned V in the first place.
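As a concrete illustration, take V = R2 (so n = 2) and the linearly independent pair {(1, 0), (1, 1)}: any (x, y) can be written as (x − y)(1, 0) + y(1, 1), so the pair already spans R2 , and no vectors ~wi are needed.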
• If V = Span{~v1 , · · · , ~vn }, then {~v1 , · · · , ~vn } are linearly independent.
We also proved in class that if a given set of vectors spans a vector space, then
that set can be reduced (by throwing away select vectors) to obtain a (possibly)
smaller set of vectors which is linearly independent, but still spans the vector
space. (This is what the book calls the Contraction Theorem). So we can remove
r of the ~vi ’s and obtain a set of n − r vectors which are linearly independent and
still span V ; in other words, this set forms a basis for V , and so dim(V ) = n − r.
But we are also given that dim(V ) = n, and so r = 0. Therefore, {~v1 , · · · , ~vn }
must have been linearly independent in the first place.
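As a concrete illustration, take V = R2 again: the pair {(1, 0), (0, 1)} spans R2 and is indeed independent, whereas a dependent pair such as {(1, 0), (2, 0)} spans only a line, so it could not have spanned all of R2 in the first place.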