Math 54. Selected Solutions for Week 2
Section 1.4 (Page 42)
13.
Let ~u = \begin{pmatrix} 0 \\ 4 \\ 4 \end{pmatrix} and A = \begin{pmatrix} 3 & -5 \\ -2 & 6 \\ 1 & 1 \end{pmatrix} . Is ~u in the plane in R3 spanned by the columns of A ? (See the figure [omitted].) Why or why not?
First of all, the plane in R3 is just the set Span\left\{ \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix} , \begin{pmatrix} -5 \\ 6 \\ 1 \end{pmatrix} \right\} , so the question is asking whether or not ~u lies in that set.
As in previous exercises, this leads to a linear system whose augmented matrix is
reduced to echelon form as:
\begin{pmatrix} 3 & -5 & 0 \\ -2 & 6 & 4 \\ 1 & 1 & 4 \end{pmatrix} \sim \begin{pmatrix} 3 & -5 & 0 \\ 0 & 8/3 & 4 \\ 0 & 8/3 & 4 \end{pmatrix} \sim \begin{pmatrix} 3 & -5 & 0 \\ 0 & 8/3 & 4 \\ 0 & 0 & 0 \end{pmatrix} .
The system is consistent (since the last column is not a pivot column), so ~u does lie in
the given plane.
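(If you want to let a computer replay a row reduction like this, here is a short sketch in Python with the sympy library; it assumes sympy is installed, and the variable names are ours. sympy carries the reduction all the way to reduced echelon form, but the pivot positions are what matter:)

    from sympy import Matrix

    A = Matrix([[3, -5], [-2, 6], [1, 1]])
    u = Matrix([0, 4, 4])

    # Row reduce the augmented matrix [ A u ]; u lies in the span of the
    # columns of A exactly when the last column is not a pivot column.
    rref, pivots = A.row_join(u).rref()
    print(rref)         # Matrix([[1, 0, 5/2], [0, 1, 3/2], [0, 0, 0]])
    print(2 in pivots)  # False, so the system is consistent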
33.
Suppose A is a 4 × 3 matrix and ~b is a vector in R4 with the property that A~x = ~b
has a unique solution. What can you say about the reduced echelon form of A ? Justify
your answer.
Since A~x = ~b has a unique solution, the associated linear system has no free
variables, and therefore all columns of A are pivot columns. So the reduced echelon
form of A must be
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} .
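(To illustrate with a concrete matrix of our own choosing, here is a sympy sketch, assuming the library is available; any 4 × 3 matrix whose columns are all pivot columns reduces to exactly this form:)

    from sympy import Matrix, eye, zeros

    # A sample 4 x 3 matrix whose three columns are all pivot columns.
    A = Matrix([[1, 2, 3], [0, 1, 4], [2, 5, 1], [1, 1, 1]])
    rref, pivots = A.rref()
    print(pivots)                                # (0, 1, 2)
    print(rref == eye(3).col_join(zeros(1, 3)))  # True: I_3 on top of a zero row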
36.
Suppose A is a 4 × 4 matrix and ~b is a vector in R4 with the property that A~x = ~b
has a unique solution. Explain why the columns of A must span R4 .
As in the previous exercise, since A~x = ~b has a unique solution, all columns of
A must be pivot columns. Therefore there are four pivot columns, hence four pivot
elements. These must all lie in different rows, so since there are four rows, all rows
must contain a pivot element. By Theorem 4 on page 39, it follows that the columns
of A span R4 .
Section 1.5 (Page 49)
21.
Let ~p = \begin{pmatrix} 3 \\ -3 \end{pmatrix} and ~q = \begin{pmatrix} 4 \\ 1 \end{pmatrix} . Find a parametric equation of the line M through ~p and ~q . [Hint: M is parallel to the vector ~q − ~p . See the figure [omitted].]
We have ~q − ~p = \begin{pmatrix} 1 \\ 4 \end{pmatrix} . The line containing this vector is Span\{ ~q − ~p \} , and is given in parametric form as
~x = t \begin{pmatrix} 1 \\ 4 \end{pmatrix}   ( t in R ) .
Therefore (as on page 47) the line through ~p and ~q is obtained by translating that line by ~p ; it is given in parametric form as
~x = \begin{pmatrix} 3 \\ -3 \end{pmatrix} + t \begin{pmatrix} 1 \\ 4 \end{pmatrix}   ( t in R ) .
(You could also use ~q in place of ~p .)
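(A tiny sympy check, assuming the library is available: at t = 0 the parametric formula gives ~p and at t = 1 it gives ~q , so the line really does pass through both points.)

    from sympy import Matrix

    p = Matrix([3, -3])
    q = Matrix([4, 1])
    d = q - p                          # the direction vector (1, 4)

    line = lambda t: p + t * d         # parametric form of the line M
    print(line(0) == p, line(1) == q)  # True True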
37.
Construct a 2 × 2 matrix A such that the solution set of the equation A~x = ~0 is the
line in R2 through (4, 1) and the origin. Then, find a vector ~b in R2 such that the
solution set of A~x = ~b is not a line in R2 parallel to the solution set of A~x = ~0 . Why
does this not contradict Theorem 6?
We can find a homogeneous linear equation in (x1 , x2 ) that has solution x1 = 4 ,
x2 = 1 ; it is x1 − 4x2 = 0 (or any nonzero scalar multiple of this equation). We need
a linear system with two such equations, so we can just use this equation twice. The
coefficient matrix of this linear system is our matrix A :
A = \begin{pmatrix} 1 & -4 \\ 1 & -4 \end{pmatrix} .
For any vector ~x in R2 , the two entries of the product A~x must be the same. So, let
~b = \begin{pmatrix} 0 \\ 1 \end{pmatrix} .
Then the matrix equation A~x = ~b is inconsistent, because when you row reduce the augmented matrix [ A ~b ] you find that the last column is a pivot column. The solution set of this matrix equation is empty, so it is not a line in R2 parallel to the solution set of A~x = ~0 .
This does not contradict Theorem 6, because Theorem 6 applies only to consistent
equations, and this system is not consistent.
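(Both claims are easy to verify with sympy; this sketch assumes the library is available:)

    from sympy import Matrix

    A = Matrix([[1, -4], [1, -4]])
    b = Matrix([0, 1])

    # The solution set of Ax = 0 is the line through (4, 1) and the origin:
    print(A.nullspace())                 # [Matrix([[4], [1]])]

    # Ax = b is inconsistent: the last column of [ A b ] is a pivot column.
    rref, pivots = A.row_join(b).rref()
    print(pivots)                        # (0, 2)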
38.
Let A be an m × n matrix and let ~w be a vector in Rn that satisfies the equation A~x = ~0 . Show that for any scalar c , the vector c~w also satisfies A~x = ~0 . [That is, show that A(c~w) = ~0 .]
By Theorem 5(b) (page 41) and the fact that ~w satisfies A~x = ~0 ,
A(c~w) = c(A~w) = c~0 = ~0 .
Therefore c~w satisfies the equation.
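(You can also watch this identity hold with c kept symbolic; a sympy sketch with a sample matrix of our own, assuming the library is available:)

    from sympy import randMatrix, symbols

    c = symbols('c')
    A = randMatrix(3, 4)   # a sample m x n matrix (here 3 x 4)
    w = A.nullspace()[0]   # a vector with A*w = 0 (one exists since n > m)

    # A(c*w) = c*(A*w) = c*0 = 0, with the scalar c left symbolic:
    print(A * (c * w))     # the zero vector, whatever c is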
Section 1.7 (Page 62)
8.
Determine if the columns of the matrix
\begin{pmatrix} 1 & -2 & 3 & 2 \\ -2 & 4 & -6 & 2 \\ 0 & 1 & -1 & 3 \end{pmatrix}
form a linearly independent set. Justify your answer.
They are linearly dependent, by Theorem 8 on page 61 (the matrix has more
columns than rows).
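(sympy will even exhibit an explicit dependence relation; a sketch assuming the library is available:)

    from sympy import Matrix

    A = Matrix([[1, -2, 3, 2], [-2, 4, -6, 2], [0, 1, -1, 3]])
    # Four columns in R^3 force a nontrivial dependence relation; the
    # weights of one such relation span the null space:
    print(A.nullspace())   # [Matrix([[-1], [1], [1], [0]])], i.e. -a1 + a2 + a3 = 0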
13.
Find the value(s) of h for which the vectors
\begin{pmatrix} 1 \\ 5 \\ -3 \end{pmatrix} , \begin{pmatrix} -2 \\ -9 \\ 6 \end{pmatrix} , \begin{pmatrix} 3 \\ h \\ -9 \end{pmatrix}
are linearly dependent. Justify your answer.
Let us row reduce the matrix whose columns are these vectors:
\begin{pmatrix} 1 & -2 & 3 \\ 5 & -9 & h \\ -3 & 6 & -9 \end{pmatrix} \sim \begin{pmatrix} 1 & -2 & 3 \\ 0 & 1 & h - 15 \\ 0 & 0 & 0 \end{pmatrix} .
The vectors are linearly dependent for all values of h , because the third column of the
above matrix is never a pivot column.
(This is similar to Example 2 on page 59, except that we left out the last column of
the augmented matrix, since it is always zero and therefore does not affect the process
of row reduction (it is never a pivot column).)
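(Determinants appear only later in the course, but they give a quick independent check: a 3 × 3 matrix has linearly dependent columns exactly when its determinant is zero, and here the determinant vanishes for every h . A sympy sketch, assuming the library is available:)

    from sympy import Matrix, symbols

    h = symbols('h')
    M = Matrix([[1, -2, 3], [5, -9, h], [-3, 6, -9]])
    # The determinant comes out identically zero, so the columns are
    # linearly dependent no matter what h is:
    print(M.det())   # 0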
36.
The following statement is either true (in all cases) or false (for at least one example).
If false, construct a specific example to show that the statement is not always true.
Such an example is called a counterexample to the statement. If the statement is true,
give a justification. (One specific example cannot explain why a statement is always
true.)
If ~v1 , ~v2 , ~v3 are in R3 and ~v3 is not a linear combination of ~v1 and ~v2 , then
{~v1 , ~v2 , ~v3 } is linearly independent.
Take ~v1 = ~0 , ~v2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} , and ~v3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} . This (indexed) set is linearly dependent (because the first vector is zero), but one cannot write ~v3 as a linear combination of ~v1 and ~v2 , because the first coordinates of ~v1 and ~v2 are zero, but the first coordinate of ~v3 is not zero.
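(Both halves of the counterexample can be checked with sympy; a sketch assuming the library is available:)

    from sympy import Matrix, linsolve, symbols

    v1 = Matrix([0, 0, 0])
    v2 = Matrix([0, 1, 0])
    v3 = Matrix([1, 0, 0])

    # Dependent: the nontrivial weights (1, 0, 0) give 1*v1 + 0*v2 + 0*v3 = 0.
    print(Matrix.hstack(v1, v2, v3).nullspace())   # [Matrix([[1], [0], [0]])]

    # Yet v3 is not a combination of v1 and v2: that system is inconsistent.
    c1, c2 = symbols('c1 c2')
    print(linsolve((Matrix.hstack(v1, v2), v3), c1, c2))   # EmptySet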
40.
Suppose an m × n matrix A has n pivot columns. Explain why for each ~b in Rm
the equation A~x = ~b has at most one solution. [Hint: Explain why A~x = ~b cannot have infinitely many solutions.]
The matrix A has n pivot columns, which is equal to its number of columns. Therefore every column of A is a pivot column. Therefore, in an augmented matrix [ A ~b ] , all columns except possibly the last one will be pivot columns, since the pivots of the “ A part” of this matrix are the same as the pivots of A . So there are no free variables, and the equation A~x = ~b cannot have infinitely many solutions (regardless of what ~b is). Since a linear system has no solution, exactly one solution, or infinitely many solutions, A~x = ~b has at most one solution.
Section 1.8 (Page 70)
10.
Find all ~x in R4 that are mapped into the zero vector by the transformation ~x ↦ A~x for the matrix
A = \begin{pmatrix} 3 & 2 & 10 & -6 \\ 1 & 0 & 2 & -4 \\ 0 & 1 & 2 & 3 \\ 1 & 4 & 10 & 8 \end{pmatrix} .
The question amounts to solving the matrix equation A~x = ~0 , so we row reduce
its augmented matrix to reduced echelon form:
\begin{pmatrix} 3 & 2 & 10 & -6 & 0 \\ 1 & 0 & 2 & -4 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 1 & 4 & 10 & 8 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2 & -4 & 0 \\ 3 & 2 & 10 & -6 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 1 & 4 & 10 & 8 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2 & -4 & 0 \\ 0 & 2 & 4 & 6 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 0 & 4 & 8 & 12 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2 & -4 & 0 \\ 0 & 1 & 2 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} .
In parametric vector form, the solutions are therefore
~x = x3 \begin{pmatrix} -2 \\ -2 \\ 1 \\ 0 \end{pmatrix} + x4 \begin{pmatrix} 4 \\ -3 \\ 0 \\ 1 \end{pmatrix} .
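(sympy's nullspace method packages this whole computation; a sketch assuming the library is available:)

    from sympy import Matrix

    A = Matrix([[3, 2, 10, -6],
                [1, 0,  2, -4],
                [0, 1,  2,  3],
                [1, 4, 10,  8]])
    # A basis for the solution set of Ax = 0:
    for v in A.nullspace():
        print(v.T)   # Matrix([[-2, -2, 1, 0]]) and Matrix([[4, -3, 0, 1]])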
12.
Let ~b = \begin{pmatrix} -1 \\ 3 \\ -1 \\ 4 \end{pmatrix} , and let A be the matrix in Exercise 10. Is ~b in the range of the linear transformation ~x ↦ A~x ? Why or why not?
This leads to a linear system whose augmented matrix is (partially) row reduced as follows:
\begin{pmatrix} 3 & 2 & 10 & -6 & -1 \\ 1 & 0 & 2 & -4 & 3 \\ 0 & 1 & 2 & 3 & -1 \\ 1 & 4 & 10 & 8 & 4 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2 & -4 & 3 \\ 3 & 2 & 10 & -6 & -1 \\ 0 & 1 & 2 & 3 & -1 \\ 1 & 4 & 10 & 8 & 4 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2 & -4 & 3 \\ 0 & 2 & 4 & 6 & -10 \\ 0 & 1 & 2 & 3 & -1 \\ 0 & 4 & 8 & 12 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2 & -4 & 3 \\ 0 & 2 & 4 & 6 & -10 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 & 21 \end{pmatrix} .
At this point we can stop, because it is clear that the last column is a pivot column, so the linear system is inconsistent. Therefore ~b is not in the range of ~x ↦ A~x .
(Another way to see this: the row reduction shows that the pivot columns of A are its first two columns, so the range of ~x ↦ A~x is spanned by ~a1 = (3, 1, 0, 1) and ~a2 = (2, 0, 1, 4) . If ~b were equal to c1~a1 + c2~a2 , the third entries would force c2 = −1 and the second entries would force c1 = 3 ; but then the first entry of c1~a1 + c2~a2 would be 7 , which is not the first entry of ~b .)
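(A quick machine check of the inconsistency, assuming sympy is available:)

    from sympy import Matrix

    A = Matrix([[3, 2, 10, -6],
                [1, 0,  2, -4],
                [0, 1,  2,  3],
                [1, 4, 10,  8]])
    b = Matrix([-1, 3, -1, 4])

    # b is in the range exactly when [ A b ] has no pivot in its last column:
    rref, pivots = A.row_join(b).rref()
    print(4 in pivots)   # True: inconsistent, so b is not in the range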
20.
Let ~x = \begin{pmatrix} x1 \\ x2 \end{pmatrix} , ~v1 = \begin{pmatrix} -3 \\ 5 \end{pmatrix} , and ~v2 = \begin{pmatrix} 7 \\ -2 \end{pmatrix} , and let T : R2 → R2 be a linear transformation that maps ~x into x1~v1 + x2~v2 . Find a matrix A such that T(~x) is A~x for each ~x .
We have
A = [ T(~e1) T(~e2) ] = [ ~v1 ~v2 ] = \begin{pmatrix} -3 & 7 \\ 5 & -2 \end{pmatrix} .
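(A symbolic check that this A works, assuming sympy is available:)

    from sympy import Matrix, symbols

    v1 = Matrix([-3, 5])
    v2 = Matrix([7, -2])
    A = Matrix.hstack(v1, v2)   # columns are T(e1) = v1 and T(e2) = v2

    x1, x2 = symbols('x1 x2')
    x = Matrix([x1, x2])
    print(A * x)                # Matrix([[-3*x1 + 7*x2], [5*x1 - 2*x2]])
    print(x1 * v1 + x2 * v2)    # the same vector, so T(x) = A x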
32.
Show that the transformation T defined by T(x1, x2) = (x1 − 2|x2|, x1 − 4x2) is not linear. [In this exercise, column vectors are written as rows; for example, ~x = (x1, x2) and T(~x) is written as T(x1, x2) .]
We have
T (0, 1) + T (0, −1) = (−2, −4) + (−2, 4) = (−4, 0) ,
but
T ((0, 1) + (0, −1)) = T (0, 0) = (0, 0) .
These are not equal, so T does not preserve vector addition. Therefore it is not a linear
transformation.
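(Plain Python is enough to watch additivity fail; the function name T below is ours:)

    def T(x1, x2):
        # the transformation of the exercise, with vectors written as tuples
        return (x1 - 2 * abs(x2), x1 - 4 * x2)

    sum_of_images = tuple(a + b for a, b in zip(T(0, 1), T(0, -1)))
    print(sum_of_images)       # (-4, 0)
    print(T(0 + 0, 1 + (-1)))  # (0, 0), so T(u + v) != T(u) + T(v)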
Section 1.9 (Page 80)
10.
Find the standard matrix of T , where T : R2 → R2 first reflects points through the
horizontal x1 -axis and then reflects points through the line x2 = x1 .
We have
T : \begin{pmatrix} x1 \\ x2 \end{pmatrix} ↦ \begin{pmatrix} x1 \\ -x2 \end{pmatrix} ↦ \begin{pmatrix} -x2 \\ x1 \end{pmatrix} .
Therefore the standard matrix of T is
[ T(~e1) T(~e2) ] = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} .
You can also see this using the standard matrices given in the first and third rows
of the table on page 75:
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} .
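(The matrix product can be checked mechanically; a sympy sketch, assuming the library is available, with descriptive names of our own:)

    from sympy import Matrix

    reflect_x1_axis  = Matrix([[1, 0], [0, -1]])  # reflection through the x1-axis
    reflect_diagonal = Matrix([[0, 1], [1, 0]])   # reflection through x2 = x1

    # "First reflect through the axis, then through the line" composes
    # right to left as a matrix product:
    print(reflect_diagonal * reflect_x1_axis)     # Matrix([[0, -1], [1, 0]])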
12.
Show that the transformation in Exercise 10 is merely a rotation about the origin.
What is the angle of the rotation?
The standard matrix obtained in Exercise 10 coincides with the matrix of Example
3 on page 74 when φ = π/2 . So the transformation is rotation counterclockwise about
the origin by π/2 radians.
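(Substituting φ = π/2 into the rotation matrix of Example 3, with sympy doing the exact trigonometry, assuming the library is available:)

    from sympy import Matrix, cos, sin, pi

    phi = pi / 2
    rotation = Matrix([[cos(phi), -sin(phi)],
                       [sin(phi),  cos(phi)]])
    print(rotation)   # Matrix([[0, -1], [1, 0]]), the matrix from Exercise 10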
34.
Let S : Rp → Rn and T : Rn → Rm be linear transformations. Show that the mapping ~x ↦ T(S(~x)) is a linear transformation (from Rp to Rm ). [Hint: Compute
T (S(c~u + d~v )) for ~u , ~v in Rp and scalars c and d . Justify each step of the computation, and explain why this computation gives the desired conclusion.]
We have
T(S(c~u + d~v)) = T(S(c~u) + S(d~v))    since S preserves addition
              = T(cS(~u) + dS(~v))      since S preserves scalar multiplication
              = T(cS(~u)) + T(dS(~v))   since T preserves addition
              = cT(S(~u)) + dT(S(~v))   since T preserves scalar multiplication.
Taking c = d = 1 gives T(S(~u + ~v)) = T(S(~u)) + T(S(~v)) , and taking d = 0 , ~v = ~0 gives T(S(c~u)) = cT(S(~u)) . These are the two properties required for ~x ↦ T(S(~x)) to be a linear transformation.
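(The computation can be replayed with concrete matrices standing in for S and T ; a sympy sketch, assuming the library is available. randMatrix draws small random integer matrices, so this checks the identity on a random example:)

    from sympy import randMatrix, symbols

    c, d = symbols('c d')
    S = randMatrix(3, 2)   # stands in for a linear map S : R^2 -> R^3
    T = randMatrix(4, 3)   # stands in for a linear map T : R^3 -> R^4
    u = randMatrix(2, 1)
    v = randMatrix(2, 1)

    # T(S(c*u + d*v)) agrees with c*T(S(u)) + d*T(S(v)), as the proof predicts:
    lhs = T * (S * (c * u + d * v))
    rhs = c * (T * (S * u)) + d * (T * (S * v))
    print(lhs - rhs)       # the zero vector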