MATH 113: PRACTICE FINAL SOLUTIONS
Note: The final is in Room T175 of Herrin Hall at 7pm on Wednesday,
December 12th.
There are 9 problems; attempt all of them. Problem 9 (i) is a regular problem, but
9(ii)-(iii) are bonus problems, and they are not part of your regular score. So do them only
if you have completed all other problems, as well as 9(i).
This is a closed book, closed notes exam, with no calculators allowed (they shouldn’t be
useful anyway). You may use any theorem, proposition, etc., proved in class or in the book,
provided that you quote it precisely, and provided you are not asked explicitly to prove it.
Make sure that you justify your answer to each question, including the verification that all
assumptions of any theorem you quote hold. Try to be brief though.
If on a problem you cannot do part (i) or (ii), you may assume its result for the subsequent
parts.
All unspecified vector spaces can be assumed finite dimensional unless stated otherwise.
Allotted time: 3 hours. Total regular score: 160 points.
Problem 1. (20 points)
(i) Suppose V is finite dimensional, S, T ∈ L(V ). Show that ST = I if and only if
T S = I.
(ii) Suppose V, W are finite dimensional, S ∈ L(W, V ), T ∈ L(V, W ), ST = I. Does
this imply that T S = I? Prove this, or give a counterexample.
(iii) Suppose that S, T ∈ L(V ) and ST is nilpotent. Show that T S is nilpotent.
Solution 1.
(i) We prove ST = I implies T S = I; the converse follows by reversing
the role of S and T . So suppose ST = I. Then S is surjective (since for v ∈ V ,
v = S(T v)) and T is injective (for T v = 0 implies 0 = S(T v) = (ST )v = v), so as V
is finite dimensional, they are both invertible. Thus, S has an inverse R, so RS = I
in particular. But T = IT = (RS)T = R(ST ) = RI = R, so T S = I as claimed.
(ii) Suppose V = {0}, W = F, T 0 = 0 (V has only one element!), Sw = 0 for all
w ∈ W . Then ST 0 = 0, so ST = I, but T Sw = 0 for all w ∈ W , and as W is not
the trivial vector space, T S ≠ I. (While this is the lowest dimensional possibility,
with dim V = 0, dim W = 1, it is easy to add dimensions: e.g. V = F, W = F², 
T v = (v, 0), S(w1, w2) = w1, so ST v = v for v ∈ F, T S(w1, w2) = (w1, 0), which is
not the identity map.)
(iii) Suppose (ST )k = 0. Then (T S)k+1 = T (ST )k S = T 0S = 0.
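The two cases above can be checked numerically; the following is a quick sanity check (assuming NumPy is available; it is not part of the exam material), using the explicit maps T v = (v, 0), S(w1, w2) = w1 from part (ii) as matrices.

```python
import numpy as np

# Part (i): in the square (V = W, finite-dimensional) case, ST = I forces
# TS = I.  Build an invertible T and take S = T^{-1}.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
S = np.linalg.inv(T)
assert np.allclose(S @ T, np.eye(4))
assert np.allclose(T @ S, np.eye(4))

# Part (ii): with V = R^1, W = R^2, T v = (v, 0) and S(w1, w2) = w1,
# ST = I on V, but TS(w1, w2) = (w1, 0) is not the identity on W.
T = np.array([[1.0], [0.0]])     # 2x1 matrix of T : R^1 -> R^2
S = np.array([[1.0, 0.0]])       # 1x2 matrix of S : R^2 -> R^1
assert np.allclose(S @ T, np.eye(1))
assert not np.allclose(T @ S, np.eye(2))
```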
Problem 2. (15 points) Suppose that T ∈ L(V, W ) is injective and (v1 , . . . , vn ) is linearly
independent in V . Show that (T v1 , . . . , T vn ) is linearly independent in W .
Solution 2. Suppose that ∑_{j=1}^n aj T vj = 0. Then T(∑_{j=1}^n aj vj) = ∑_{j=1}^n aj T vj = 0. As T
is injective, this implies ∑_{j=1}^n aj vj = 0. Since (v1, . . . , vn) is linearly independent in V, all
aj are 0. Thus, (T v1, . . . , T vn) is linearly independent in W.
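A numerical illustration of this fact (assuming NumPy; the matrices below are arbitrary test data): an injective T corresponds to a matrix of full column rank, and applying it to an independent tuple preserves the rank.

```python
import numpy as np

# T injective (full column rank) sends a linearly independent tuple
# (v1, v2, v3) to a linearly independent tuple (T v1, T v2, T v3).
rng = np.random.default_rng(1)
T = rng.standard_normal((5, 3))        # generically injective: rank 3
V = rng.standard_normal((3, 3))        # columns v1, v2, v3, generically independent
assert np.linalg.matrix_rank(T) == 3   # T is injective
assert np.linalg.matrix_rank(V) == 3   # (v1, v2, v3) independent
TV = T @ V                             # columns are T v1, T v2, T v3
assert np.linalg.matrix_rank(TV) == 3  # the images remain independent
```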
Problem 3. (20 points) Consider the following map ⟨·, ·⟩ : R² × R² → R:
⟨x, y⟩ = (x1 + x2)(y1 + y2) + (2x1 + x2)(2y1 + y2); x = (x1, x2), y = (y1, y2).
(i) Show that ⟨·, ·⟩ is an inner product on R².
(ii) Find an orthonormal basis of R² with respect to this inner product.
(iii) Let φ(x) = x1, x = (x1, x2). Find y ∈ R² such that φ(x) = ⟨x, y⟩ for all x ∈ R².
Solution 3.
(i)
⟨x, x⟩ = (x1 + x2)² + (2x1 + x2)² ≥ 0,
and is equal to 0 if and only if x1 + x2 = 0 and 2x1 + x2 = 0, so x1 = x2 = 0. Thus, ⟨·, ·⟩
is positive definite. As the defining formula is symmetric in x and y, symmetry is clear,
and
⟨ax, y⟩ = (ax1 + ax2)(y1 + y2) + (2ax1 + ax2)(2y1 + y2)
= a((x1 + x2)(y1 + y2) + (2x1 + x2)(2y1 + y2)) = a⟨x, y⟩,
with a similar argument for ⟨x + x′, y⟩ = ⟨x, y⟩ + ⟨x′, y⟩.
(ii) One can apply Gram–Schmidt to the basis ((1, 0), (0, 1)). As ‖(1, 0)‖² = 1² + 2² = 5,
e1 = (1/√5)(1, 0). Then ⟨(1, 0), (0, 1)⟩ = 1 · 1 + 2 · 1 = 3, so
(0, 1) − ⟨(0, 1), e1⟩e1 = (0, 1) − (3/5)(1, 0) = (−3/5, 1).
Normalizing: ‖(−3/5, 1)‖² = (2/5)² + (−1/5)² = 1/5, so e2 = √5(−3/5, 1).
Then (e1, e2) is an orthonormal basis.
(iii) If (e1, e2) is an orthonormal basis, then y = φ(e1)e1 + φ(e2)e2. Using the basis from
the previous part gives
y = (1/√5) e1 + √5(−3/5) e2 = (2, −3).
We check that ⟨(x1, x2), (2, −3)⟩ = (x1 + x2)(−1) + (2x1 + x2) · 1 = x1.
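The computations in (ii)–(iii) can be verified numerically (a sketch assuming NumPy; the Gram-matrix encoding of the inner product is the only ingredient added here):

```python
import numpy as np

# <x, y> = (x1+x2)(y1+y2) + (2x1+x2)(2y1+y2) = (Gx) . (Gy) = x^T (G^T G) y,
# where the rows of G are the coefficient vectors (1, 1) and (2, 1).
G = np.array([[1.0, 1.0], [2.0, 1.0]])
M = G.T @ G                                  # Gram matrix of the inner product

def ip(x, y):
    return x @ M @ y

# Orthonormal basis from the solution:
e1 = np.array([1.0, 0.0]) / np.sqrt(5)
e2 = np.sqrt(5) * np.array([-3.0 / 5, 1.0])
assert np.isclose(ip(e1, e1), 1) and np.isclose(ip(e2, e2), 1)
assert np.isclose(ip(e1, e2), 0)

# Riesz vector for phi(x) = x1 is y = (2, -3):
y = np.array([2.0, -3.0])
for x in (np.array([1.0, 0.0]), np.array([0.7, -2.3])):
    assert np.isclose(ip(x, y), x[0])
```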
Problem 4. (15 points) Suppose V is a complex inner product space, T ∈ L(V), and let
(e1, e2, . . . , en) be an orthonormal basis of V. Show that
trace(T∗T) = ∑_{j=1}^n ‖T ej‖².
Solution 4. If A ∈ L(V), then the matrix entries of A with respect to the basis (e1, . . . , en)
are ⟨Aej, ei⟩, since
Aej = ⟨Aej, e1⟩e1 + . . . + ⟨Aej, en⟩en
as (e1, . . . , en) is orthonormal. Thus,
trace A = trace M(A, (e1, . . . , en)) = ∑_{j=1}^n ⟨Aej, ej⟩.
Applying this with A = T∗T:
trace(T∗T) = ∑_{j=1}^n ⟨T∗T ej, ej⟩ = ∑_{j=1}^n ⟨T ej, T ej⟩ = ∑_{j=1}^n ‖T ej‖².
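A quick numerical check of this identity (assuming NumPy; not part of the exam material): in the standard basis of Cⁿ, the j-th column of the matrix of T is T ej.

```python
import numpy as np

# trace(T* T) equals the sum of the squared norms of the columns of T.
rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
lhs = np.trace(T.conj().T @ T).real
rhs = sum(np.linalg.norm(T[:, j]) ** 2 for j in range(4))
assert np.isclose(lhs, rhs)
```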
Problem 5. (20 points) Find p ∈ P3(R) such that p(0) = 0, p′(0) = 0, and
∫₀¹ |1 − p(x)|² dx
is as small as possible.
Solution 5. Let U be the subspace of P3(R) consisting of polynomials p with p(0) = 0 and
p′(0) = 0. These are exactly the polynomials ax² + bx³, a, b ∈ R. Let ⟨p, q⟩ be the inner
product ⟨p, q⟩ = ∫₀¹ p(x) q(x) dx on P3(R). As the constant function 1 ∈ P3(R), the p ∈ U
that minimizes ‖1 − p‖² (which is what we want) is the orthogonal projection P_U 1 of 1 onto
U, so we just need to find this.
There are different ways one can proceed. One can find an orthonormal basis (e1, e2) of
U using Gram–Schmidt applied to the basis (x², x³), and then
P_U 1 = ⟨1, e1⟩e1 + ⟨1, e2⟩e2.
Explicitly, as ∫₀¹ (x²)² dx = 1/5, one has e1 = √5 x². Then, as ∫₀¹ x³ · x² dx = 1/6,
x³ − ⟨x³, e1⟩e1 = x³ − (5/6)x²,
and finally
∫₀¹ (x³ − (5/6)x²)² dx = 1/(7 · 36),
so e2 = 6√7 (x³ − (5/6)x²). Thus, using
∫₀¹ 1 · x² dx = 1/3,   ∫₀¹ 1 · (x³ − (5/6)x²) dx = 1/4 − 5/18 = −1/36,
we deduce that
P_U 1 = (5/3)x² − 7(x³ − (5/6)x²).
A different way of proceeding is to work without orthonormal bases, using instead the
characterization of P_U 1 as the unique element of U with 1 − P_U 1 orthogonal to all vectors in
U. So suppose P_U 1 = ax² + bx³. Then we must have ⟨1 − P_U 1, x²⟩ = 0 and ⟨1 − P_U 1, x³⟩ = 0,
which give
0 = ∫₀¹ (1 − ax² − bx³)x² dx = 1/3 − a/5 − b/6,
0 = ∫₀¹ (1 − ax² − bx³)x³ dx = 1/4 − a/6 − b/7,
so
10 = 6a + 5b,   21/2 = 7a + 6b.
Taking 7 times the first minus 6 times the second gives
70 − 63 = −b,
so b = −7, while taking 6 times the first minus 5 times the second gives
60 − (5 · 21)/2 = a,
so a = (120 − 105)/2 = 15/2.
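Both routes can be confirmed numerically (a sketch assuming NumPy; the function J below is the expanded objective ∫₀¹ (1 − ax² − bx³)² dx, introduced here only for the check):

```python
import numpy as np

# Normal equations from the solution: 1/3 = a/5 + b/6, 1/4 = a/6 + b/7.
A = np.array([[1/5, 1/6], [1/6, 1/7]])
rhs = np.array([1/3, 1/4])
a, b = np.linalg.solve(A, rhs)
assert np.isclose(a, 15/2) and np.isclose(b, -7)

def J(a, b):
    # Integral of (1 - a x^2 - b x^3)^2 over [0, 1], expanded termwise.
    return 1 - 2*a/3 - 2*b/4 + a*a/5 + 2*a*b/6 + b*b/7

# The solution is a strict minimum: nearby perturbations do worse.
assert J(a, b) < J(a + 0.01, b)
assert J(a, b) < J(a, b + 0.01)
```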
Problem 6. (20 points) Suppose that S, T ∈ L(V), with S invertible.
(i) Prove that if p ∈ P(F) is a polynomial then p(STS⁻¹) = S p(T) S⁻¹.
(ii) Show that the generalized eigenspaces of STS⁻¹ are given by
null(STS⁻¹ − λI)^{dim V} = {Sv : v ∈ null(T − λI)^{dim V}}.
(iii) Show that trace(STS⁻¹) = trace(T), where trace is the operator trace (sum of
eigenvalues, with multiplicity). (Assume that F = C in this part.)
Solution 6.
(i) Observe that (STS⁻¹)^k = S T^k S⁻¹ for all non-negative integers k.
Indeed, this is immediate for k = 0, 1. In general, it is easy to prove this by
induction: if it holds for some k, then
(STS⁻¹)^{k+1} = (STS⁻¹)^k STS⁻¹ = S T^k S⁻¹ STS⁻¹ = S T^{k+1} S⁻¹,
providing the inductive step and proving the claim.
Now, if p ∈ P(F) then p = ∑_{j=0}^m aj z^j, aj ∈ F, and
p(STS⁻¹) = ∑_{j=0}^m aj (STS⁻¹)^j = ∑_{j=0}^m aj S T^j S⁻¹ = S(∑_{j=0}^m aj T^j)S⁻¹ = S p(T) S⁻¹
as claimed.
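A numerical check of (i), together with the trace identity of (iii) (assuming NumPy; the test polynomial p(z) = 2 + 3z + z³ is an arbitrary choice):

```python
import numpy as np

# p(S T S^{-1}) = S p(T) S^{-1} for a sample polynomial and random matrices.
rng = np.random.default_rng(3)
n = 4
T = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))      # generically invertible
Sinv = np.linalg.inv(S)

def p(A):
    I = np.eye(n)
    return 2 * I + 3 * A + np.linalg.matrix_power(A, 3)

assert np.allclose(p(S @ T @ Sinv), S @ p(T) @ Sinv)

# Part (iii) as a corollary: similar matrices have equal trace.
assert np.isclose(np.trace(S @ T @ Sinv), np.trace(T))
```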
(ii) We first show
{Sv : v ∈ null(T − λI)^{dim V}} ⊂ null(STS⁻¹ − λI)^{dim V}.
Indeed, let p(z) = (z − λ)^{dim V}. If v ∈ null(T − λI)^{dim V}, i.e. p(T)v = 0, then
0 = S p(T) v = S p(T) S⁻¹ Sv = p(STS⁻¹) Sv, so Sv ∈ null(STS⁻¹ − λI)^{dim V} as
claimed.
As T = S⁻¹(STS⁻¹)S, this also gives
{S⁻¹w : w ∈ null(STS⁻¹ − λI)^{dim V}} ⊂ null(T − λI)^{dim V}.
As S(S⁻¹w) = w, we deduce that for w ∈ null(STS⁻¹ − λI)^{dim V} there is in fact
v ∈ null(T − λI)^{dim V} with Sv = w, namely v = S⁻¹w, so in summary
{Sv : v ∈ null(T − λI)^{dim V}} = null(STS⁻¹ − λI)^{dim V}.
(iii) Let λj, j = 1, . . . , k be the distinct eigenvalues of T with multiplicities m1, . . . , mk,
respectively. That is, dim null(T − λj I)^{dim V} = mj. By what we have shown, the
distinct eigenvalues of STS⁻¹ are also exactly the λj, the generalized eigenspaces
are {Sv : v ∈ null(T − λj I)^{dim V}}, and as
S|_{null(T − λj I)^{dim V}} : null(T − λj I)^{dim V} → null(STS⁻¹ − λj I)^{dim V}
is a bijection, hence an isomorphism, we deduce that these generalized eigenspaces
also have dimension mj. Thus,
trace T = ∑_{j=1}^k mj λj = trace(STS⁻¹).
Problem 7. (20 points) Suppose that V is a complex inner product space.
(i) Suppose T ∈ L(V) and there is a constant C ≥ 0 such that ‖Tv‖ ≤ C‖v‖ for all
v ∈ V. Show that all eigenvalues λ of T satisfy |λ| ≤ C.
(ii) Suppose now that T ∈ L(V) is normal, and all eigenvalues satisfy |λ| ≤ C. Show
that ‖Tv‖ ≤ C‖v‖ for all v ∈ V.
(iii) Give an example of an operator T ∈ L(C²) for which there is a constant C ≥ 0
such that |λ| ≤ C for all eigenvalues λ of T, but there is a vector v ∈ C² with
‖Tv‖ > C‖v‖.
Solution 7.
(i) Suppose that λ is an eigenvalue of T. Then there is v ∈ V, v ≠ 0,
such that Tv = λv. Thus, taking norms,
C‖v‖ ≥ ‖Tv‖ = ‖λv‖ = |λ| ‖v‖.
Dividing by ‖v‖ ≠ 0 gives the conclusion.
(ii) Since T is normal, V has an orthonormal basis (e1, . . . , en) consisting of eigenvectors
of T, with eigenvalues λ1, . . . , λn. Recall that for any v ∈ V, v = ∑_{j=1}^n ⟨v, ej⟩ej, and
that for any aj ∈ C, ‖∑_{j=1}^n aj ej‖² = ∑_{j=1}^n |aj|².
Now we compute for v ∈ V:
‖Tv‖² = ‖T(∑_{j=1}^n ⟨v, ej⟩ej)‖² = ‖∑_{j=1}^n ⟨v, ej⟩T ej‖² = ‖∑_{j=1}^n ⟨v, ej⟩λj ej‖²
= ∑_{j=1}^n |⟨v, ej⟩|² |λj|² ≤ C² ∑_{j=1}^n |⟨v, ej⟩|² = C²‖v‖².
Taking positive square roots gives the conclusion.
(iii) Take any non-zero nilpotent operator: the eigenvalues are all 0, so one can take
C = 0, but the operator is not the zero operator, so there is some v ≠ 0 such that
Tv ≠ 0, hence ‖Tv‖ > 0 = 0‖v‖ = C‖v‖. For example, take T ∈ L(C²) given by
T(z1, z2) = (z2, 0), so T² = 0, hence T is nilpotent, but T(0, 1) = (1, 0) ≠ 0.
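Parts (ii) and (iii) can be illustrated numerically (assuming NumPy; a real symmetric matrix is used as a convenient example of a normal operator):

```python
import numpy as np

# Normal case: a symmetric matrix obeys ||Hv|| <= max|lambda| ||v||.
rng = np.random.default_rng(4)
H = rng.standard_normal((3, 3))
H = H + H.T                            # symmetric, hence normal
C = max(abs(np.linalg.eigvalsh(H)))    # largest |eigenvalue|
v = rng.standard_normal(3)
assert np.linalg.norm(H @ v) <= C * np.linalg.norm(v) + 1e-12

# Nilpotent counterexample from the solution: T(z1, z2) = (z2, 0).
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(N @ N, 0)                           # T^2 = 0, eigenvalues all 0
assert np.linalg.norm(N @ np.array([0.0, 1.0])) > 0    # but T(0, 1) = (1, 0) != 0
```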
Problem 8. (20 points)
(i) Suppose that V, W are inner product spaces. Show that for all T ∈ L(V, W ),
dim range T = dim range T ∗ , dim null T ∗ = dim null T + dim W − dim V.
(ii) Consider the map T : Rn → Rn+1 , T (x1 , . . . , xn ) = (x1 , . . . , xn , 0). Find T ∗ explicitly. What is null T ∗ ?
Solution 8.
(i) As range T is the orthocomplement of null T∗ in W,
dim range T + dim null T ∗ = dim W.
As range T ∗ is the orthocomplement of null T in V ,
dim range T ∗ + dim null T = dim V.
By the rank-nullity theorem applied to T , one has
dim range T + dim null T = dim V.
Subtracting the first of the three displayed equations from the third, we get
dim null T − dim null T ∗ = dim V − dim W,
which proves the claim regarding nullspaces, while subtracting the second displayed
equation from the third gives
dim range T − dim range T ∗ = 0,
which proves the conclusion regarding ranges.
(ii) Recall that unless otherwise specified, we have the standard inner product on R^p,
p = n, n + 1 here. For y = (y1, . . . , y_{n+1}) ∈ R^{n+1}, T∗y ∈ R^n is defined by the
property that for all x ∈ R^n
⟨x, T∗y⟩_{R^n} = ⟨Tx, y⟩_{R^{n+1}} = ∑_{j=1}^{n+1} (Tx)_j yj = ∑_{j=1}^n xj yj = ⟨x, (y1, . . . , yn)⟩.
Thus, T∗y = (y1, . . . , yn). Correspondingly,
null T∗ = {(0, . . . , 0, y_{n+1}) : y_{n+1} ∈ R}.
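A quick numerical check (assuming NumPy): the matrix of T is the (n+1) × n inclusion, and its transpose, the matrix of T∗, truncates the last coordinate.

```python
import numpy as np

# T : R^3 -> R^4, T x = (x, 0); its adjoint is the matrix transpose.
n = 3
T = np.vstack([np.eye(n), np.zeros((1, n))])   # 4x3 inclusion matrix
y = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(T.T @ y, y[:n])             # T* y = (y1, ..., yn)

# null T* is spanned by (0, ..., 0, 1):
assert np.allclose(T.T @ np.array([0.0, 0.0, 0.0, 1.0]), 0)
```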
Problem 9. (Bonus problem: do it only if you have solved all other problems.) If V, W, Z
are vector spaces over a field F, and F : V × W → Z, we say that F is bilinear if
F (v1 + v2 , w) = F (v1 , w) + F (v2 , w), F (cv, w) = cF (v, w)
F (v, w1 + w2 ) = F (v, w1 ) + F (v, w2 ), F (v, cw) = cF (v, w)
for all v, v1 , v2 ∈ V , w, w1 , w2 ∈ W , c ∈ F, i.e. if F is linear in both slots.
(i) (10 points) Show that the set B(V, W ; Z) of bilinear maps from V × W to Z is a
vector space over F under the usual addition and multiplication by scalars:
(F + G)(v, w) = F (v, w) + G(v, w), (λF )(v, w) = λF (v, w),
(v ∈ V , w ∈ W , λ ∈ F) – make sure to check that F + G and λF are indeed bilinear.
(ii) (Bonus problem: do it only if you have solved all other problems.) Recall that
V ∗ = L(V, F) is the dual of V . For f ∈ V ∗ , g ∈ W ∗ , let f ⊗ g : V × W → F be
the map (f ⊗ g)(v, w) = f (v)g(w) ∈ F. Show that f ⊗ g ∈ B(V, W ; F), and the map
j : V ∗ × W ∗ → B(V, W ; F), j(f, g) = f ⊗ g, is bilinear.
(iii) (Bonus problem: do it only if you have solved all other problems.) If V, W are finite
dimensional with bases (v1, . . . , vm), (w1, . . . , wn), (f1, . . . , fm) is the basis of V∗
dual to (v1, . . . , vm), and (g1, . . . , gn) is the basis of W∗ dual to (w1, . . . , wn), let
eij = fi ⊗ gj, so eij(vr, ws) = 0 if r ≠ i or s ≠ j, and eij(vr, ws) = 1 if r = i and
s = j. Show that
{eij : i = 1, . . . , m, j = 1, . . . , n}
is a basis for B(V, W; F). In particular, conclude that
dim B(V, W; F) = (dim V)(dim W).
Remark: One could now define the tensor product of V∗ and W∗ as V∗ ⊗ W∗ = B(V, W; F),
or the tensor product of V and W, using (V∗)∗ = V, as
V ⊗ W = B(V∗, W∗; F).
Solution 9.
(i) First, if F is bilinear and λ ∈ F, then
(λF )(cv, w) = λF (cv, w) = λ(cF (v, w)) = c(λF (v, w)) = c(λF )(v, w),
(λF )(v1 + v2 , w) = λF (v1 + v2 , w) = λ(F (v1 , w) + F (v2 , w))
= λF (v1 , w) + λF (v2 , w) = (λF )(v1 , w) + (λF )(v2 , w),
so λF is linear in the first slot. Here the first and last equalities in both chains are
the definition of λF , the second is that F is linear in the first slot, while the third
is using the vector space structure in Z, namely that λ(cz) = c(λz) for all z ∈ Z,
respectively the distributive law. The calculation that λF is linear in the second
slot is identical, with the role of v’s and w’s interchanged.
A different way of phrasing this is the following: for each fixed w ∈ W , let
Fw♯ (v) = F (v, w), and for each fixed v ∈ V let Fv♭ (w) = F (v, w). Then the statement
that F is bilinear is equivalent to Fw♯ : V → Z being linear for each w ∈ W and
Fv♭ : W → Z being linear for each v ∈ V . Since
(λF )♯w (v) = (λF )(v, w) = λF (v, w) = λFw♯ (v),
(λF )♯w = λFw♯ , so the fact that (λF )♯w is linear follows from the fact that λFw♯ is –
and the latter we have already shown (a scalar multiple of a linear map is linear).
A similar calculation for Fv♭ shows that λF is bilinear as claimed.
Similarly, F + G is bilinear if F and G are because (F + G)♯w = Fw♯ + G♯w , Fw♯
and G♯w are linear, with analogous formulae for (F + G)♭v .
Now, if U is any set (and Z is a vector space as above), the set M(U ; Z) of
all maps from U to Z is a vector space with the usual addition and multiplication
by scalars: (λF )(u) = λF (u), (F + G)(u) = F (u) + G(u). Now, B(V, W ; Z) is a
subset of M(V × W ; Z), and we have shown that it is closed under addition and
multiplication by scalars. As the 0-map is bilinear, we deduce that B(V, W ; Z) is a
subspace of M(V × W ; Z), so in particular is a vector space itself.
(ii) (f ⊗ g)♯w (v) = (f ⊗ g)(v, w) = f (v)g(w) = g(w)f (v), so (f ⊗ g)♯w = g(w)f . As f is
linear, g(w) ∈ F, we deduce that (f ⊗ g)♯w is linear. The proof that (f ⊗ g)♭v is linear
is similar, and we deduce that f ⊗ g is bilinear, so f ⊗ g ∈ B(V, W ; F).
Now consider the map j : V∗ × W∗ → B(V, W; F), j(f, g) = f ⊗ g. Then
j(λf, g)(v, w) = ((λf) ⊗ g)(v, w) = (λf)(v)g(w) = λf(v)g(w)
= λ(f ⊗ g)(v, w) = λ j(f, g)(v, w),
with a similar calculation for homogeneity in the second slot. Additivity is also
similar:
j(f1 + f2, g)(v, w) = ((f1 + f2) ⊗ g)(v, w) = (f1 + f2)(v)g(w) = (f1(v) + f2(v))g(w)
= f1(v)g(w) + f2(v)g(w) = (f1 ⊗ g)(v, w) + (f2 ⊗ g)(v, w)
= (j(f1, g) + j(f2, g))(v, w),
with a similar calculation in the second slot. Thus, j is indeed bilinear.
(iii) First, ∑_{i,j} aij eij = 0 means that ∑_{i,j} aij eij(v, w) = 0 for all v ∈ V, w ∈ W. Letting
v = vr, w = ws we deduce that ars = 0 for all r, s, so {eij : i = 1, . . . , m, j =
1, . . . , n} is linearly independent.
To see that it spans, suppose that F is a bilinear map. For v ∈ V, w ∈ W, write
v = ∑_i bi vi, w = ∑_j cj wj. Then
F(v, w) = F(∑_i bi vi, ∑_j cj wj) = ∑_{i,j} bi cj F(vi, wj)
= ∑_{i,j} F(vi, wj) eij(∑_r br vr, ∑_s cs ws) = ∑_{i,j} F(vi, wj) eij(v, w).
Thus F = ∑_{i,j} F(vi, wj) eij, proving that {eij : i = 1, . . . , m, j = 1, . . . , n} spans.
Combined with the already shown linear independence, we deduce that it is a basis,
so dim B(V, W; F) = mn = (dim V)(dim W), as claimed.
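The basis claim of (iii) can be illustrated numerically (a sketch assuming NumPy; here F(v, w) = vᵀAw for an arbitrary matrix A, and eij is implemented via the dual bases of the standard bases):

```python
import numpy as np

# A bilinear form on R^m x R^n is determined by the matrix A[i, j] = F(v_i, w_j)
# in the standard bases; expanding F in {e_ij} reproduces exactly these entries.
rng = np.random.default_rng(5)
m, n = 3, 2
A = rng.standard_normal((m, n))

def F(v, w):
    return v @ A @ w

def e(i, j, v, w):
    # e_ij = f_i (tensor) g_j for the dual bases of the standard bases
    return v[i] * w[j]

v = rng.standard_normal(m)
w = rng.standard_normal(n)
expansion = sum(F(np.eye(m)[i], np.eye(n)[j]) * e(i, j, v, w)
                for i in range(m) for j in range(n))
assert np.isclose(F(v, w), expansion)
```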