Homework 9 - Solutions
Exercise 1 (1 pt.)
Consider the transformation T : R4 → R4 given by the matrix
\[
A = \begin{pmatrix} -1 & 1 & 3 & -3 \\ 0 & -2 & 0 & 4 \\ 1 & 1 & -3 & -1 \\ 2 & 2 & -6 & -2 \end{pmatrix}.
\]
In Exercise 1 of Homework 7 you found bases for the kernel and the image of T . Find the dimensions
of ker T and im T and explain how these fit in the Rank-Nullity Theorem.
Solution
The bases for ker T and im T we found are
  

\[
\left\{ \begin{pmatrix} 3 \\ 0 \\ 1 \\ 0 \end{pmatrix},\; \begin{pmatrix} -1 \\ 2 \\ 0 \\ 1 \end{pmatrix} \right\}
\quad\text{and}\quad
\left\{ \begin{pmatrix} -1 \\ 0 \\ 1 \\ 2 \end{pmatrix},\; \begin{pmatrix} 1 \\ -2 \\ 1 \\ 2 \end{pmatrix} \right\},
\]
respectively. The dimension of a subspace is the number of vectors in any basis for it. Therefore
both the kernel and the image have dimension 2.
The Rank-Nullity Theorem states that for any linear transformation T : Rn → Rm ,
dim(ker T ) + dim(im T ) = dim Rn .
In this case dim(ker T ) = 2, dim(im T ) = 2, and dim R4 = 4, which make the above equation
2 + 2 = 4.
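These dimensions can be double-checked numerically. The sketch below is our own verification, not part of the assigned solution; it assumes NumPy is available and uses `matrix_rank` for dim(im T), recovering dim(ker T) from the Rank-Nullity Theorem:

```python
import numpy as np

# The matrix A of the transformation T from the exercise.
A = np.array([[-1,  1,  3, -3],
              [ 0, -2,  0,  4],
              [ 1,  1, -3, -1],
              [ 2,  2, -6, -2]])

rank = np.linalg.matrix_rank(A)   # dim(im T): number of independent columns
nullity = A.shape[1] - rank       # dim(ker T), by Rank-Nullity
print(rank, nullity)              # 2 2, and indeed 2 + 2 = 4 = dim R^4
```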
Exercise 2 (1 pt.)
Consider the subspace V of R3 with basis B = {~v1 , ~v2 }, where


 
\[
\vec v_1 = \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} \quad\text{and}\quad \vec v_2 = \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix}.
\]
(a) Write the vectors ~x1 = [ 2 2 1 ]t and ~x2 = [ 7 −2 2 ]t in V in terms of the basis B, i.e. find
[~x1 ]B and [~x2 ]B (you do not need to justify that ~x1 and ~x2 are in V ).
(b) Find the vectors ~y1 , ~y2 in R3 given that [~y1 ]B = [ 3 −1 ]t and [~y2 ]B = [ −4 0 ]t .
Solution
(a) Writing ~x1 = a1~v1 + a2~v2 we get the system of equations


   
\[
a_1 \vec v_1 + a_2 \vec v_2 = \vec x_1 \;\Leftrightarrow\; a_1 \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} + a_2 \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix} \;\Leftrightarrow\;
\begin{cases} a_1 + 3a_2 = 2 \\ -2a_1 = 2 \\ a_2 = 1 \end{cases}
\]
The second and third equations imply a1 = −1 and a2 = 1 (and these values satisfy the first
equation too). So (a1 , a2 ) = (−1, 1) is the unique solution of the system. Hence
\[
\vec x_1 = -\vec v_1 + \vec v_2, \quad\text{and so}\quad [\vec x_1]_B = \begin{pmatrix} -1 \\ 1 \end{pmatrix}.
\]
Similarly, writing ~x2 = b1~v1 + b2~v2 we get the system of equations


  

\[
b_1 \vec v_1 + b_2 \vec v_2 = \vec x_2 \;\Leftrightarrow\; b_1 \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} + b_2 \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 7 \\ -2 \\ 2 \end{pmatrix} \;\Leftrightarrow\;
\begin{cases} b_1 + 3b_2 = 7 \\ -2b_1 = -2 \\ b_2 = 2 \end{cases}
\]
The second and third equations imply b1 = 1 and b2 = 2. Thus
\[
\vec x_2 = \vec v_1 + 2\vec v_2, \quad\text{and}\quad [\vec x_2]_B = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
\]
(b) Since [~y1 ]B = [ 3 −1 ]t and [~y2 ]B = [ −4 0 ]t , we have

   


 

\[
\vec y_1 = 3\vec v_1 - \vec v_2 = 3\begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} - \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -6 \\ -1 \end{pmatrix}
\quad\text{and}\quad
\vec y_2 = -4\vec v_1 = -4\begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} = \begin{pmatrix} -4 \\ 8 \\ 0 \end{pmatrix}.
\]
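Both parts can be checked numerically: stacking ~v1 and ~v2 as the columns of a matrix S, part (a) solves S c = x for the coordinates, and part (b) is just the product S [~y]B. A sketch with NumPy (our own check, not part of the solution; `lstsq` is used to solve the overdetermined 3×2 system, which is exact when x lies in V):

```python
import numpy as np

# Columns of S are the basis vectors v1, v2 of V.
S = np.array([[ 1, 3],
              [-2, 0],
              [ 0, 1]], dtype=float)

def coords(x):
    # Least-squares solution of S c = x; exact when x lies in V.
    c, *_ = np.linalg.lstsq(S, x, rcond=None)
    return c

print(coords(np.array([2.0, 2.0, 1.0])))   # [x1]_B ≈ (-1, 1)
print(coords(np.array([7.0, -2.0, 2.0])))  # [x2]_B ≈ (1, 2)
print(S @ np.array([3.0, -1.0]))           # y1 ≈ (0, -6, -1)
print(S @ np.array([-4.0, 0.0]))           # y2 ≈ (-4, 8, 0)
```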
Exercise 3 (1 pt.)
Find a basis for the subspace
\[
V = \left\{ [\, x_1 \;\; x_2 \;\; x_3 \,]^t \in \mathbb{R}^3 : x_1 + x_2 + x_3 = 0 \right\}
\]
of R3
(you do not need to justify that V is a subspace).
Solution
Consider the vectors in R3


\[
\vec v_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \quad\text{and}\quad \vec v_2 = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
\]
Both of them are in the subspace V since the sums of their entries are zero. We claim that
B = {~v1 , ~v2 } is a basis for V . The vectors ~v1 , ~v2 are linearly independent. Indeed, if c1~v1 + c2~v2 = 0
is a linear relation, then



 

\[
c_1 \vec v_1 + c_2 \vec v_2 = c_1 \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \\ -c_1 - c_2 \end{pmatrix}
\]
is zero, and so c1 = c2 = 0.
There are two ways to show that B = {~v1 , ~v2 } is a basis. One way is to use the definition of
a basis, i.e. show that V = Span(~v1 , ~v2 ). An alternative way is to show that dim V = 2, in which
case any two linearly independent vectors in V , for instance ~v1 and ~v2 , form a basis. We will show
it both ways.
To show that ~v1 and ~v2 span V , let ~x = [ x1 x2 x3 ]t be an arbitrary vector in V . Since it is in
V , x1 + x2 + x3 = 0 or equivalently x3 = −x1 − x2 . We have
  





\[
\vec x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ -x_1 - x_2 \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} + x_2 \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = x_1 \vec v_1 + x_2 \vec v_2.
\]
We see that any vector ~x ∈ V is a linear combination of ~v1 , ~v2 , so that V = Span(~v1 , ~v2 ).
We have seen that an m-dimensional subspace (of Rn ) has at most m linearly independent
vectors. Since there are 2 linearly independent vectors in V (which is a subspace of R3 ), it must
have dimension at least 2, i.e. dim V ≥ 2. Therefore the dimension of V is either 2 or 3, as it
is a subspace of R3 . If it is 3, then V = R3 . But this is not the case: for instance, the vector
~e1 = [ 1 0 0 ]t is not in V (its entries add to 1, not 0), and so V is not all of R3 . Hence dim V = 2
and B = {~v1 , ~v2 } is a basis for V .
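The dimension argument can also be verified numerically: V is the kernel of the 1×3 matrix [1 1 1], so Rank-Nullity gives dim V = 3 − 1 = 2 directly. A short NumPy check (ours, not part of the solution):

```python
import numpy as np

M = np.array([[1.0, 1.0, 1.0]])        # V = ker M
v1 = np.array([1.0, 0.0, -1.0])
v2 = np.array([0.0, 1.0, -1.0])

# Both candidate basis vectors lie in V, and they are independent.
in_V = np.allclose(M @ v1, 0) and np.allclose(M @ v2, 0)
independent = np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
dim_V = 3 - np.linalg.matrix_rank(M)   # Rank-Nullity: nullity = n - rank

print(in_V, independent, dim_V)        # True True 2
```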
Exercise 4 (2 pt.)
Consider the basis B = { ~v1 , ~v2 } of R2 , where ~v1 , ~v2 are the vectors:
\[
\vec v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \vec v_2 = \begin{pmatrix} 1 \\ 4 \end{pmatrix}
\]
(you do not need to justify that this is a basis).
(a) Find the matrix S such that ~x = S [~x]B for any ~x ∈ R2 . Also find its inverse S −1 , which is
the matrix such that [~x]B = S −1 ~x for any ~x ∈ R2 .
(b) Consider the linear transformation T : R2 → R2 given by the matrix
\[
A = \begin{pmatrix} 2 & -1 \\ 4 & -3 \end{pmatrix}.
\]
Find [T (~v1 )]B and [T (~v2 )]B . Use your answer to find directly the matrix B of the linear
transformation T with respect to the basis B. Then find B using A and the matrices S and
S −1 from part (a). Compare your results.
(c) We denote by T n the composition
\[
\underbrace{T \circ \cdots \circ T}_{n \text{ times}}.
\]
Find the matrix associated to T 100 . (Your expression should not involve powers of matrices,
but may involve powers of numbers).
Solution
(a) The matrix S is the one whose columns are the vectors ~v1 , ~v2 , that is
\[
S = \begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix}.
\]
The determinant of S is det S = 1 · 4 − 1 · 1 = 3. Therefore, the inverse of S is
\[
S^{-1} = \frac{1}{3} \begin{pmatrix} 4 & -1 \\ -1 & 1 \end{pmatrix}.
\]
(b) We can calculate T (~v1 ) and T (~v2 ) using the matrix A. We have
\[
T(\vec v_1) = A\vec v_1 = \begin{pmatrix} 2 & -1 \\ 4 & -3 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\qquad
T(\vec v_2) = A\vec v_2 = \begin{pmatrix} 2 & -1 \\ 4 & -3 \end{pmatrix} \begin{pmatrix} 1 \\ 4 \end{pmatrix} = \begin{pmatrix} -2 \\ -8 \end{pmatrix}.
\]
It is obvious that T (~v1 ) = ~v1 and T (~v2 ) = −2~v2 . Hence
\[
[T(\vec v_1)]_B = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad\text{and}\quad [T(\vec v_2)]_B = \begin{pmatrix} 0 \\ -2 \end{pmatrix}.
\]
Then the matrix of the linear transformation T in terms of the basis B is


\[
B = \Big(\, [T(\vec v_1)]_B \;\; [T(\vec v_2)]_B \,\Big) = \begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix}.
\]
Now we calculate B using A, S, S −1 . The matrices A and B are related by the equation
B = S −1 AS. We have
\[
B = S^{-1}AS = \frac{1}{3} \begin{pmatrix} 4 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 2 & -1 \\ 4 & -3 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix}
= \frac{1}{3} \begin{pmatrix} 4 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & -2 \\ 1 & -8 \end{pmatrix}
= \frac{1}{3} \begin{pmatrix} 3 & 0 \\ 0 & -6 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix},
\]
which agrees with the result above.
(c) Since the matrix of the composition of two linear transformations is equal to the product of
their matrices, it follows that the matrix associated to T 100 is A100 . Note that
\[
A^{100} = (SBS^{-1})^{100} = \underbrace{(SBS^{-1}) \cdots (SBS^{-1})}_{100 \text{ times}} = SB(S^{-1}S) \cdots (S^{-1}S)BS^{-1} = SB^{100}S^{-1}.
\]
It is easy to see that raising a diagonal matrix to a power is equivalent to raising its entries
to the same power, so that
\[
B^{100} = \begin{pmatrix} 1 & 0 \\ 0 & (-2)^{100} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 2^{100} \end{pmatrix}.
\]
Therefore
\[
A^{100} = SB^{100}S^{-1}
= \begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 2^{100} \end{pmatrix} \cdot \frac{1}{3} \begin{pmatrix} 4 & -1 \\ -1 & 1 \end{pmatrix}
= \frac{1}{3} \begin{pmatrix} 1 & 2^{100} \\ 1 & 2^{102} \end{pmatrix} \begin{pmatrix} 4 & -1 \\ -1 & 1 \end{pmatrix}
= \frac{1}{3} \begin{pmatrix} 4 - 2^{100} & 2^{100} - 1 \\ 4 - 2^{102} & 2^{102} - 1 \end{pmatrix}.
\]
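The change of basis and the closed form for powers are easy to sanity-check in code. The sketch below is our own check, not part of the solution; it keeps everything in integers by working with 3S^{-1}, confirms B = S^{-1}AS = diag(1, −2), and compares the closed form against repeated multiplication for the exact power n = 10 (n = 100 works the same way, but the entries no longer fit in 64-bit integers):

```python
import numpy as np

A = np.array([[2, -1],
              [4, -3]])
S = np.array([[1, 1],
              [1, 4]])
S_inv_x3 = np.array([[ 4, -1],
                     [-1,  1]])           # 3 * S^{-1}, kept integral

# B = S^{-1} A S: every entry of (3 S^{-1}) A S is divisible by 3.
B = (S_inv_x3 @ A @ S) // 3
print(B)                                  # [[1 0], [0 -2]]

# Closed form A^n = S diag(1, (-2)^n) S^{-1}, checked here for n = 10.
n = 10
An = (S @ np.diag([1, (-2) ** n]) @ S_inv_x3) // 3
print(np.array_equal(An, np.linalg.matrix_power(A, n)))  # True
```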
Exercise 5 (1 pt.)
Let A, B, C ∈ Mn (R). Show the following:
(i) A is similar to itself.
(ii) If B is similar to A, then A is similar to B.
(iii) If C is similar to B, and B is similar to A, then C is similar to A.
(iv) If A, B are invertible and B is similar to A, then B^{-1} is similar to A^{-1}.
Solution
(i) The inverse of the identity matrix I is itself, i.e. I^{-1} = I. Clearly A = I^{-1}AI, and so A is
similar to itself.
(ii) Write B = S^{-1}AS for some (invertible) matrix S. Multiplying this equation on the left by S
and on the right by S^{-1} we get
\[
SBS^{-1} = S(S^{-1}AS)S^{-1} = (SS^{-1})A(SS^{-1}) = IAI = A.
\]
Since A = SBS^{-1} = (S^{-1})^{-1}BS^{-1}, A is similar to B.
(iii) Write C = T^{-1}BT and B = S^{-1}AS for invertible matrices S, T . Then
\[
C = T^{-1}BT = T^{-1}(S^{-1}AS)T = (T^{-1}S^{-1})A(ST) = (ST)^{-1}A(ST).
\]
Therefore C is similar to A.
(iv) Write B = S^{-1}AS for some invertible matrix S. Taking inverses in this equation gives
\[
B^{-1} = (S^{-1}AS)^{-1} = S^{-1}A^{-1}(S^{-1})^{-1} = S^{-1}A^{-1}S.
\]
Thus B^{-1} is similar to A^{-1}.
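Property (iv), for instance, can be illustrated numerically with any concrete invertible pair A, S; the matrices below are our own throwaway examples, chosen only to be invertible:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

B = np.linalg.inv(S) @ A @ S              # B is similar to A

# (iv): B^{-1} = S^{-1} A^{-1} S, i.e. B^{-1} is similar to A^{-1}.
lhs = np.linalg.inv(B)
rhs = np.linalg.inv(S) @ np.linalg.inv(A) @ S
print(np.allclose(lhs, rhs))              # True
```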
Exercise 6 (1 pt.)
Let B = {~v1 , . . . , ~vm } be a basis for a subspace V of Rn . Show the following:
(i) [~x + ~y ]B = [~x]B + [~y ]B for any ~x, ~y ∈ V , and
(ii) [c~x]B = c[~x]B for any ~x ∈ V , c ∈ R.
Solution
(i) Let ~x, ~y ∈ V , and write
~x = a1~v1 + · · · + am~vm
and ~y = b1~v1 + · · · + bm~vm
for scalars a1 , . . . , am , b1 , . . . , bm ∈ R. In particular, the coordinate vectors of ~x and ~y with
respect to the basis B are [~x]B = [ a1 . . . am ]t and [~y ]B = [ b1 . . . bm ]t . Furthermore,
~x + ~y = (a1~v1 + · · · + am~vm ) + (b1~v1 + · · · + bm~vm ) = (a1 + b1 )~v1 + · · · + (am + bm )~vm ,
so that



\[
[\vec x + \vec y\,]_B = \begin{pmatrix} a_1 + b_1 \\ a_2 + b_2 \\ \vdots \\ a_m + b_m \end{pmatrix}
= \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}
= [\vec x]_B + [\vec y\,]_B.
\]
(ii) Let ~x ∈ V , and write ~x = a1~v1 + · · · + am~vm for scalars a1 , . . . , am ∈ R, as above. In particular
[~x]B = [ a1 . . . am ]t . Then
c~x = c(a1~v1 + · · · + am~vm ) = (ca1 )~v1 + · · · + (cam )~vm ,
so that



\[
[c\vec x]_B = \begin{pmatrix} ca_1 \\ ca_2 \\ \vdots \\ ca_m \end{pmatrix}
= c \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix}
= c[\vec x]_B.
\]
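Linearity of the coordinate map can also be observed numerically. The sketch below is our own check; it reuses the basis of Exercise 2 as a concrete plane V, recovers B-coordinates with `lstsq`, and confirms both identities up to floating-point error:

```python
import numpy as np

# Basis vectors of a plane V in R^3 as columns (the basis from Exercise 2).
S = np.array([[ 1, 3],
              [-2, 0],
              [ 0, 1]], dtype=float)

def coords(x):
    # B-coordinates of x, assuming x lies in V = im S.
    c, *_ = np.linalg.lstsq(S, x, rcond=None)
    return c

x = S @ np.array([2.0, -1.0])    # a vector of V with [x]_B = (2, -1)
y = S @ np.array([0.5, 3.0])     # a vector of V with [y]_B = (0.5, 3)

print(np.allclose(coords(x + y), coords(x) + coords(y)))  # (i)  True
print(np.allclose(coords(4 * x), 4 * coords(x)))          # (ii) True
```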

Exercise 7 (1 pt.)
Let T : Rn → Rm be a linear transformation. Show that:
(i) if n < m then the image of T cannot be equal to the whole space Rm , and
(ii) if n > m then the kernel of T cannot be trivial, i.e. ker T ≠ {0}.
Solution
From the Rank-Nullity Theorem for Linear Transformations we know that
dim(ker T ) + dim(im T ) = n.
(i) Assume that the image of T is im T = Rm , so that dim(im T ) = m. Then the above equation
becomes:
dim(ker T ) + m = n ⇒ dim(ker T ) = n − m.
The term n − m is negative since n < m, but the dimension of the kernel (as of any subspace
of Rn ) is a non-negative number. So the case im T = Rm is impossible, i.e. the image of T
cannot be equal to the whole space Rm .
(ii) Assume that the kernel of T is trivial, i.e. ker T = {0}, so that dim(ker T ) = 0. Then the
above equation becomes:
dim(im T ) = n.
The image of T is a subspace of Rm , and so its dimension is less than or equal to the dimension
of Rm , which is m. Thus dim(im T ) ≤ m, which contradicts dim(im T ) = n, since n > m. We
conclude that the case ker T = {0} is impossible, i.e. the kernel of T cannot be trivial.
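Both statements are visible numerically for any concrete matrix of the right shape. The check below is our own illustration with fixed example matrices; `matrix_rank` bounds the image dimension, and Rank-Nullity bounds the kernel dimension:

```python
import numpy as np

# (i) n = 2 < m = 3: a 3x2 matrix has rank at most 2, so im T != R^3.
T1 = np.array([[1, 2],
               [3, 4],
               [5, 6]])
print(np.linalg.matrix_rank(T1) < 3)       # True: the image is a proper subspace

# (ii) n = 3 > m = 2: nullity = n - rank >= 3 - 2 = 1, so ker T != {0}.
T2 = np.array([[1, 2, 3],
               [4, 5, 6]])
nullity = 3 - np.linalg.matrix_rank(T2)
print(nullity >= 1)                        # True: the kernel is nontrivial
```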