Exam #2 Solutions
1. (5 points each) State whether the following are true or false. If true, say why. If false,
give reasons or give a counterexample.
a. The kernel of a linear transformation is a vector space.
Solution:
True. The kernel of a linear transformation T: V → W is a subspace of V (it contains 0 and is closed under addition and scalar multiplication), and since subspaces are vector spaces themselves, it is a vector space.
b. If v1 and v2 are linearly independent eigenvectors, then they correspond to distinct
eigenvalues.
Solution:
2 0
1 
0 
False. Let A = 
. Then, obviously, the eigenvalue of A is 2. But   and   are

0 2
0 
1 
both eigenvectors and they are linearly independent.
c. Every 2×2 invertible matrix is diagonalizable.
Solution:
1 1
False. Let A = 
 . This matrix is invertible, because its eigenvalue is 1, which is
0 1
non-zero. But when we find a basis for the eigenspace of 1, we find that we only get a 1dimensional space, which means that we cannot find 2 linearly independent eigenvectors
to make up a basis. Thus, by the Diagonalization Theorem, A is not diagonalizable.
d. The columns of the change-of-coordinates matrix P_{C←B} are linearly independent.
Solution:
True. P_{C←B} = [ [b1]_C … [bn]_C ], and since B = {b1, …, bn} is a linearly independent set and the coordinate mapping [·]_C is an isomorphism, {[b1]_C, …, [bn]_C} is a linearly independent set.
2. Show that “similar to” is an equivalence relation. That is, if A, B, and C are n×n
matrices, show that:
a. (10 points) A is similar to A.
Solution:
We want to find an invertible matrix P such that A = P^-1 A P.
The identity I is an invertible matrix, and we have A = I^-1 A I. Thus A is similar to A.
b. (10 points) If A is similar to B, then B is similar to A.
Solution:
Since A is similar to B, we have that B = P^-1 A P for some invertible matrix P.
Then, by multiplying both sides on the left by P and on the right by P^-1, we have that
P B P^-1 = A. So letting Q = P^-1, we have that A = Q^-1 B Q. Thus B is similar to A.
c. (10 points) If A is similar to B and B is similar to C, then A is similar to C.
Solution:
Since A is similar to B, we have that B = P^-1 A P for some invertible matrix P. And since
B is similar to C, we have that C = Q^-1 B Q for some invertible matrix Q.
Therefore, substituting B = P^-1 A P into the second equation C = Q^-1 B Q, we get:
C = Q^-1 (P^-1 A P) Q
  = (Q^-1 P^-1) A (P Q)    (by associativity of matrix multiplication)
  = (P Q)^-1 A (P Q)       (by properties of inverses of matrices)
Thus, letting R = PQ, we have C = R^-1 A R, and so A is similar to C.
3. a. (20 points) Let V be the set of vectors in R^2 that lie in the first quadrant, meaning
V = {(a1, a2) : a1 > 0 and a2 > 0, with a1, a2 ∈ R}.
If (a1, a2), (b1, b2) ∈ V and c ∈ R, define the operations on V to be:
(a1, a2) + (b1, b2) = (a1 + b1, a2 + b2)
c(a1, a2) = (ca1, ca2)
Is V a subspace of R^2 under these operations?
Solution:
No, V is not a subspace of R^2.
Two answers will work:
First, the zero vector 0 of V should be (0, 0), but it is not an element of V.
Second, if c ≤ 0, then ca1 ≤ 0 for a1 > 0, which implies that (ca1, ca2) ∉ V.
b. (25 points) Let W = {(a1, a2) : a1 > 0 and a2 > 0, with a1, a2 ∈ R}.
Now if (a1, a2), (b1, b2) ∈ W and c ∈ R, define the operations on W to be:
(a1, a2) + (b1, b2) = (a1b1, a2b2)
c(a1, a2) = (a1^c, a2^c)
Is W a vector space under these operations? (Caution: What is the zero element, 0?
Remember the definition…) [Use the back of the page if you need it.]
Solution:
We check the 10 properties of a vector space: Let u = (a1, a2), v = (b1, b2), w = (c1, c2) be
elements of W, and let c, d ∈ R.
(i) Since a1, a2, b1, and b2 are all greater than zero, a1b1 and a2b2 are greater than zero.
Thus u + v ∈ W.
(ii) u + v = (a1, a2) + (b1, b2) = (a1b1, a2b2) = (b1a1, b2a2) = (b1, b2) + (a1, a2) = v + u.
(iii) (u + v) + w = [(a1, a2) + (b1, b2)] + (c1, c2) = (a1b1, a2b2) + (c1, c2) = ([a1b1]c1, [a2b2]c2)
= (a1[b1c1], a2[b2c2]) = (a1, a2) + (b1c1, b2c2) = (a1, a2) + [(b1, b2) + (c1, c2)]
= u + (v + w)
(iv) The zero vector in this space is (1, 1), since (a1, a2) + (1, 1) = (a1·1, a2·1) = (a1, a2).
And (1, 1) ∈ W since 1 > 0.
(v) For u = (a1, a2), -u = (1/a1, 1/a2), since
u + (-u) = (a1, a2) + (1/a1, 1/a2) = (a1/a1, a2/a2) = (1, 1).
(vi) For c ∈ R, a1^c and a2^c are still both greater than zero, since raising a positive
number to any real power (even a negative one) keeps it positive. Thus cu = c(a1, a2) = (a1^c, a2^c) ∈ W.
(vii) c(u + v) = c[(a1, a2) + (b1, b2)] = c(a1b1, a2b2) = ([a1b1]^c, [a2b2]^c)
= (a1^c b1^c, a2^c b2^c) = (a1^c, a2^c) + (b1^c, b2^c) = c(a1, a2) + c(b1, b2) = cu + cv.
(viii) (c + d)u = (a1^(c+d), a2^(c+d)) = (a1^c a1^d, a2^c a2^d) = (a1^c, a2^c) + (a1^d, a2^d)
= c(a1, a2) + d(a1, a2) = cu + du.
(ix) c(du) = c(a1^d, a2^d) = ([a1^d]^c, [a2^d]^c) = (a1^(dc), a2^(dc)) = (a1^(cd), a2^(cd)) = (cd)u.
(x) 1u = 1(a1, a2) = (a1^1, a2^1) = (a1, a2) = u.
Therefore, yes, W is a vector space.
 2  0 
3 2
4. Take V = R2 with bases B = {  ,   } and C = {  ,   }.
0 1
1 4
4
Also take x  R2 with x =   .
3
a. (10 points) Find [x]_B and [x]_C.
Solution:
For [x]_B: [2 0 | 4; 0 1 | 3] ~ [1 0 | 2; 0 1 | 3], so [x]_B = [2; 3].
For [x]_C: [3 2 | 4; 1 4 | 3] ~ [1 4 | 3; 3 2 | 4] ~ [1 4 | 3; 0 -10 | -5] ~ [1 4 | 3; 0 1 | 1/2] ~ [1 0 | 1; 0 1 | 1/2], so [x]_C = [1; 1/2].
b. (20 points) Find the change-of-coordinates matrices P_{C←B} and P_{B←C}.
Solution:
P_{C←B} = [ [b1]_C [b2]_C ], so we have to find [b1]_C and [b2]_C.
For [b1]_C: [3 2 | 2; 1 4 | 0] ~ [1 4 | 0; 3 2 | 2] ~ [1 4 | 0; 0 -10 | 2] ~ [1 4 | 0; 0 1 | -1/5] ~ [1 0 | 4/5; 0 1 | -1/5]
For [b2]_C: [3 2 | 0; 1 4 | 1] ~ [1 4 | 1; 3 2 | 0] ~ [1 4 | 1; 0 -10 | -3] ~ [1 4 | 1; 0 1 | 3/10] ~ [1 0 | -1/5; 0 1 | 3/10]
Thus
P_{C←B} = [4/5 -1/5; -1/5 3/10].
Now, P_{B←C} = (P_{C←B})^-1. First, det P_{C←B} = (4/5)(3/10) - (-1/5)(-1/5) = 6/25 - 1/25 = 1/5.
Thus, P_{B←C} = 5 [3/10 1/5; 1/5 4/5] = [3/2 1; 1 4].
c. (20 points) Now let T: R^2 → R^2 be a linear transformation such that
T(a, b) = (a + b, a – b)
Take B as the basis for the domain and C as the basis for the codomain. Find the matrix
of T relative to B and C.
Solution:
The matrix of T relative to B and C is M = [ [T(b1)]_C [T(b2)]_C ].
T(b1) = T(2, 0) = (2, 2) and T(b2) = T(0, 1) = (1, -1).
For [T(b1)]_C: [3 2 | 2; 1 4 | 2] ~ [1 4 | 2; 3 2 | 2] ~ [1 4 | 2; 0 -10 | -4] ~ [1 4 | 2; 0 1 | 2/5] ~ [1 0 | 2/5; 0 1 | 2/5]
For [T(b2)]_C: [3 2 | 1; 1 4 | -1] ~ [1 4 | -1; 3 2 | 1] ~ [1 4 | -1; 0 -10 | 4] ~ [1 4 | -1; 0 1 | -2/5] ~ [1 0 | 3/5; 0 1 | -2/5]
Thus M = [2/5 3/5; 2/5 -2/5].
d. (5 points) Write a matrix equation that can be used to find [T(x)]_C.
Solution:
[T(x)]_C = M [x]_B
e. (10 points) What is [T(x)]_C?
[T(x)]_C = M [x]_B = [2/5 3/5; 2/5 -2/5][2; 3] = [4/5 + 9/5; 4/5 - 6/5] = [13/5; -2/5].
5. Let T: V→ W be a linear transformation between finite-dimensional vector spaces V
and W, and let H be a nonzero subspace of the vector space V.
a. (20 points) If T is one-to-one, show that dim T(H) = dim H, where
T(H) = {T(h) : h ∈ H}.
Solution:
Since V is finite-dimensional and H is a subspace of V, we have that H is finite-dimensional.
Let dim H = n and let {b1, …, bn} be a basis for H.
Claim: {T(b1), …, T(bn)} is a basis for T(H).
First we show that {T(b1), …, T(bn)} spans T(H) (i.e., that any vector in T(H) can be
written as a linear combination of T(b1), …, T(bn)).
Take x ∈ T(H). Then, by definition of T(H), x = T(h) for some h ∈ H.
Since {b1, …, bn} is a basis for H, we can write h = c1b1 + …+ cnbn.
Thus: x = T(h) = T(c1b1 + …+ cnbn)
= c1T(b1) + …+ cnT(bn)
(since T is a linear transformation)
Therefore x can be written as a linear combination of T(b1), …, T(bn) and we have
that {T(b1), …, T(bn)} spans T(H).
Now we show that {T(b1), …, T(bn)} is a linearly independent set.
This was actually a homework problem (§4.3 #32). I think the best way to tackle proving
this is by contrapositive: Assume that {T(b1), …, T(bn)} is a linearly dependent set and
show that {b1, …, bn} must be linearly dependent, given that T is a one-to-one linear
transformation.
Since {T(b1), …, T(bn)} is linearly dependent, there exist real numbers c1, …, cn,
not all zero, such that c1T(b1) + …+ cnT(bn) = 0. Thus, since T is a linear
transformation, this implies that T(c1b1 + …+ cnbn) = 0.
Now, since T is one-to-one, if T(x) = T(y), then x = y.
We always have that T(0) = 0; and now we have that T(c1b1 + …+ cnbn) = 0.
Therefore, c1b1 + …+ cnbn = 0, which implies that {b1, …, bn} is linearly
dependent since the c1, …, cn are not all zero.
Hence, we have proved that {T(b1), …, T(bn)} is a linearly independent spanning set of
T(H), which implies that it is a basis. And since T is one-to-one and the b1, …, bn are all
distinct (otherwise we would have a dependence), the T(b1), …, T(bn) are all distinct, so
this basis has exactly n elements. Thus dim T(H) = n = dim H.
b. (15 points) If T is one-to-one and onto, what order relation (<, >, =, ≤, ≥) goes in the
blank and why?
dim V ____ dim W
Solution:
dim V = dim W
Part (a) implies that since T is one-to-one, we have dim V = dim T(V).
Also, since T is onto, we have that W = T(V).
Thus, dim V = dim W.
2 3
6. Let A = 
.
4 1
a. (10 points) What are the characteristic equation and the characteristic polynomial of A?
Solution:
Characteristic equation: det(A – λI) = 0. For this matrix it is: λ^2 – 3λ – 10 = 0.
Characteristic polynomial: det(A – λI). For this matrix it is: λ^2 – 3λ – 10.
b. (10 points) What are the eigenvalues of A?
Solution:
The eigenvalues are the roots of the characteristic polynomial:
λ^2 – 3λ – 10 = (λ – 5)(λ + 2) ⇒ the eigenvalues are λ = 5 and λ = -2.
c. (15 points) What are the bases for the eigenspaces of A?
Solution:
The eigenspace of each eigenvalue λ is the null space of (A – λI).
For λ = 5: A – 5I = [-3 3; 4 -4] ~ [-3 3; 0 0] ~ [1 -1; 0 0]
⇒ a basis for this eigenspace is {[1; 1]}.
For λ = -2: A + 2I = [4 3; 4 3] ~ [4 3; 0 0] ~ [1 3/4; 0 0]
⇒ a basis for this eigenspace is {[-3/4; 1]} OR {[-3; 4]}.
d. (15 points) Is A diagonalizable? If so, what are an invertible matrix P and a diagonal
matrix D such that A = PDP-1?
Solution:
Yes, A is diagonalizable because A is a 2×2 matrix and has 2 distinct eigenvalues.
And
P = [1 -3; 1 4] (matrix whose columns are linearly independent eigenvectors of A),
D = [5 0; 0 -2] (diagonal matrix with the eigenvalues on the diagonal, in the same order
that the eigenvectors appear in P).
7. (Bonus problem, 10 points)
Find a basis for the kernel and a basis for the range of the linear transformation
T: P3 → P3 defined by T(a1 t^3 + a2 t^2 + a3 t + a4) = (a1 – a2) t^3 + (a3 – a4) t.
Solution:
We can read off a basis of the range of T from the equation above: {t^3, t}. These
polynomials are clearly linearly independent in P3, and they span Range(T), since every
image under T is a linear combination of t^3 and t (by the way T is defined above).
For the basis of the kernel:
The kernel of T is the set of all polynomials in P3 that are sent to the zero polynomial by
T. Let’s take some general polynomial in P3 and see what its coefficients have to be for
it to go to the zero polynomial:
If T(a1 t^3 + a2 t^2 + a3 t + a4) = 0t^3 + 0t^2 + 0t + 0, then, by definition of T, we must have
(a1 – a2) t^3 + (a3 – a4) t = 0t^3 + 0t^2 + 0t + 0. This implies that a1 – a2 = 0 and a3 – a4 = 0,
i.e., a1 = a2 and a3 = a4.
So, every polynomial in the kernel is of the form:
a1 t^3 + a1 t^2 + a3 t + a3 = a1(t^3 + t^2) + a3(t + 1)
Thus, a basis of the kernel of T is {t^3 + t^2, t + 1}. Again, this is clearly a linearly
independent set, and these polynomials span ker(T) by the argument above.