M341 Linear Algebra, Spring 2014, Travis Schedler
Review Sheet
Don’t panic! I will make the exam easier than this sheet (and obviously much much
shorter), but I think this will be good practice for you if you can get through all of these
problems carefully and write solutions. I reserve the right to use any of these on the exam
(or maybe easier versions).
Recall in what follows that F can denote either R or C, and that ⟨v, w⟩ = v · w̄ on F^n. If you
want, you can take F = R, but when you have time you should also allow F = C. (Most
of the arguments are the same!) Note that on the exam you will sometimes have to allow
F = C, or you will lose points.
(1) Compute the complex eigenvectors and eigenvalues of the following matrices:
\[
\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix},\quad
\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},\quad
\begin{pmatrix} 2 & 0 \\ -3 & 2 \end{pmatrix},\quad
\begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix},\quad
\begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix},\quad
\begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2+i & 0 \\ 0 & 0 & 0 & 3-i \end{pmatrix},\quad
\begin{pmatrix} 1 & 2 & 3 \\ 0 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}.
\]
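If you want to check your hand computations, a short NumPy sketch along these lines works as a numerical sanity check (it is not the method the problem asks for); the 2-by-2 rotation matrix from the list is used here, and the other matrices can be checked the same way:

    import numpy as np

    # Numerical sanity check only: the exam wants the characteristic-polynomial
    # computation done by hand.
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    vals, vecs = np.linalg.eig(A)      # columns of vecs are eigenvectors
    print(vals)                        # expect i and -i for this rotation matrix
    for lam, v in zip(vals, vecs.T):
        print(np.allclose(A @ v, lam * v))   # A v = lambda v, up to rounding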
(2) Using row operations (NOT cofactor expansion), compute the determinant of the
following matrix:
\[
\begin{pmatrix} 1 & 2 & 2 & 1 \\ 3 & 7 & 6 & 2 \\ -3 & 2 & -6 & -2 \\ 2 & 13 & 0 & 2 \end{pmatrix}.
\]
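Here is one possible sketch of the row-operation method itself (each row swap flips the sign of the determinant, and adding a multiple of one row to another leaves it unchanged), with np.linalg.det as an independent check. The entries follow the matrix as printed above, so double-check them against your copy of the sheet:

    import numpy as np

    def det_by_row_ops(M):
        # Eliminate below each pivot; track sign changes from row swaps.
        A = np.array(M, dtype=float)
        n, sign = len(A), 1.0
        for j in range(n):
            p = j + np.argmax(np.abs(A[j:, j]))     # pick a pivot row
            if np.isclose(A[p, j], 0.0):
                return 0.0                          # no pivot: determinant is 0
            if p != j:
                A[[j, p]] = A[[p, j]]               # row swap flips the sign
                sign = -sign
            for i in range(j + 1, n):
                A[i] -= (A[i, j] / A[j, j]) * A[j]  # does not change the determinant
        return sign * np.prod(np.diag(A))           # product of the pivots

    M = [[1, 2, 2, 1], [3, 7, 6, 2], [-3, 2, -6, -2], [2, 13, 0, 2]]
    print(det_by_row_ops(M), np.linalg.det(M))      # the two values should agree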
(3) (a) Verify that [1, 1, −1], [0, 1, 1], [−2, 1, −1] is an orthogonal basis (your argument
should first show why it is orthogonal, then why it is a basis).
(b) Compute the coefficients a, b, c such that [3, 4, 5] = a[1, 1, −1] + b[0, 1, 1] +
c[−2, 1, −1] using the dot product.
(c) Now, recompute the coefficients a, b, c in the previous part without using orthogonality or the dot product, but instead by performing Gaussian elimination on an
augmented matrix whose columns are the four vectors above. (Hint: the columns to
the left of the bar should be the basis vectors, and the column to the right of
the bar should be the vector [3, 4, 5] whose coefficients in the basis you want
to compute.)
Is it significantly more work?
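Both (b) and (c) are easy to check numerically; in the sketch below (names like coeffs_dot are just for illustration) the two computations should give the same answer:

    import numpy as np

    v1, v2, v3 = np.array([1, 1, -1]), np.array([0, 1, 1]), np.array([-2, 1, -1])
    u = np.array([3, 4, 5])

    # (b): with an orthogonal basis, each coefficient is <u, vi> / <vi, vi>
    coeffs_dot = [u @ w / (w @ w) for w in (v1, v2, v3)]

    # (c): solve the augmented system (here via a linear solve instead of by hand)
    coeffs_solve = np.linalg.solve(np.column_stack([v1, v2, v3]), u)

    print(coeffs_dot, coeffs_solve)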
(4) Define the following (note: if you have difficulty finding it in the book or in class
notes, you can also google some of these):
Dot product, angles between real vectors, inner product ⟨v, w⟩ on C^n or R^n,
projection onto vectors and onto vector subspaces (for vectors, this is
proj_w v = (⟨v, w⟩/⟨w, w⟩) w, i.e., you just replace dot products with ⟨−, −⟩; for
vector subspaces, this is the formula from class which sums the projections to an
orthonormal basis, or alternatively the characterization that proj_V : F^n → V is the
map which is the identity on V and zero on V^⊥).
Matrices: Hermitian matrices, symmetric matrices, unitary matrices, orthogonal
matrices, diagonalizable matrices, nilpotent matrices, inverse of a matrix, invertible
matrices. The matrix of a linear transformation T : V → W, denoted by [T]_{BC}, where
B is a basis of V and C is a basis of W. The change-of-basis matrix [I]_{BC} (where B
and C are bases of a vector space V). Eigenvalues and eigenvectors of a matrix. Row-echelon
and reduced row-echelon form matrices. Characteristic polynomial. Algebraic
and geometric multiplicity of eigenvalues.
Vector spaces and linear transformations: Vector spaces, vector subspaces, linear
transformations, domain and codomain of a transformation, kernel of a transformation, image (or range) of a transformation.
(5) Using the linear independence of eigenvectors with distinct eigenvalues from class,
show that the functions sin(x), sin(2x), sin(3x), . . . are linearly independent (hint:
they are eigenvectors of the operator T(f) = f″). That is, for all k ≥ 1 and all
A1, . . . , Ak ∈ R with Ak ≠ 0, the functions
A1 sin(x) + A2 sin(2x) + · · · + Ak sin(kx)
are pairwise distinct over all such choices of k and A1, . . . , Ak. (Note the same proof
works for A1, . . . , Ak ∈ C, considering functions R → C!)
(6) Describe the following algorithms: Gaussian and Gauss-Jordan elimination, Gram-Schmidt
orthogonalization.
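For Gram-Schmidt in particular, the whole algorithm is "subtract from each new vector its projections onto the vectors already produced"; a possible sketch, without the optional normalization step (the function name and the sample input are just for illustration):

    import numpy as np

    def gram_schmidt(vectors):
        # Returns an orthogonal family spanning the same space (no normalization).
        basis = []
        for v in vectors:
            w = v.astype(complex)
            for u in basis:
                w = w - (np.vdot(u, w) / np.vdot(u, u)) * u   # subtract proj_u(w)
            if not np.allclose(w, 0):
                basis.append(w)       # drop w if it was already in the span
        return basis

    out = gram_schmidt([np.array([1, 1, 0]), np.array([1, 0, 1]), np.array([0, 1, 1])])
    print(np.vdot(out[0], out[1]), np.vdot(out[0], out[2]))   # both should be ~0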
(7) Give the statements of the following theorems: the dimension theorem (relating three
of the four: domain, codomain, kernel, and image); the Jordan Normal Form theorem for complex n by n matrices; the spectral theorem for symmetric real matrices;
the spectral theorem for complex Hermitian matrices; the polar decomposition theorem for real square matrices; the polar decomposition theorem for complex square
matrices.
(8) Suppose that v and w are orthogonal vectors. Show that ‖v + w‖² = ‖v‖² + ‖w‖².
Hint: use the definition of length, ‖u‖² = ⟨u, u⟩, and the rules for inner products found
at the beginning of Section 7.1 of the textbook, which we discussed.
(9) Let V ⊆ F^n be a vector subspace and u ∈ F^n.
(a) Show that, if u = v + w for v ∈ V and w ∈ V^⊥, then ‖u‖² = ‖v‖² + ‖w‖². Conclude
that ‖u‖² = ‖proj_V(u)‖² + ‖u − proj_V(u)‖². (Hint for the latter: this is just the
definition of proj_V(u), which implies that v = proj_V(u).)
(b) Now let v′ ∈ V be any vector and let v, w be as in (a) (i.e., v = proj_V(u) and
w = u − v). Show using (a) that ‖u − v′‖² ≥ ‖w‖². (Hint: Show that ‖u − v′‖² =
‖v − v′‖² + ‖w‖²; this is an application of (a).)
(c) Show that equality holds in (b) if and only if v′ = v. That is, v = proj_V(u) is
the closest vector in V to u ∈ F^n. This should not be difficult if you used the
hint before.
(10) Give an example of a four-by-four matrix which has eigenvalues 1 and 2, such that
dim E_1 = 1 and dim E_2 = 2 (i.e., the geometric multiplicity of 1 is one and the
geometric multiplicity of 2 is two), and such that the algebraic multiplicity of both 1
and 2 is two, i.e., the characteristic polynomial is (x − 1)²(x − 2)².
(11) Prove that a linear map T : V → W is injective if and only if ker(T) = 0.
(12) Prove that, if V is finite-dimensional, then a linear map T : V → V is injective if and
only if it is surjective. Give a counterexample if we replace T merely by a function
f : V → V.
(13) Prove by contrapositive: ⟨v, w⟩ = 0 for all w implies that v = 0. Give also a direct
proof (is it similar?) and a proof by contradiction.
(14) Let V ⊆ F^n be a two-dimensional vector space with orthogonal basis (v1, v2). Prove:
proj_V(v) = proj_{v1} v + proj_{v2} v,
where proj_w v := (⟨v, w⟩/⟨w, w⟩) w. Conclude that the RHS of the above does not depend
on the choice of v1 and v2 (as long as (v1, v2) is an orthogonal basis of V).
Hint: recall the formula for proj_V : F^n → V from our Gram-Schmidt orthogonalization,
which we also defined to be the unique linear map such that proj_V|_V = I|_V
and proj_V|_{V^⊥} = 0|_{V^⊥}.
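A numerical illustration of this problem (and of the "closest vector" characterization from problem (9)): the sum of the two vector projections should agree with the least-squares projection onto V. The helper name and the sample vectors are just for illustration; swapping in a non-orthogonal pair (v1, v2) is a quick way to hunt for the counterexample asked for in the next problem.

    import numpy as np

    def proj_onto_vector(v, w):
        # proj_w(v) = (<v, w> / <w, w>) w, with <v, w> = v . conj(w)
        return (np.vdot(w, v) / np.vdot(w, w)) * w

    v1, v2 = np.array([1.0, 1.0, -1.0]), np.array([0.0, 1.0, 1.0])   # orthogonal pair
    v = np.array([3.0, 4.0, 5.0])

    p_sum = proj_onto_vector(v, v1) + proj_onto_vector(v, v2)        # RHS of (14)
    coeffs, *_ = np.linalg.lstsq(np.column_stack([v1, v2]), v, rcond=None)
    p_lsq = np.column_stack([v1, v2]) @ coeffs                       # closest point in V
    print(np.allclose(p_sum, p_lsq))   # True when (v1, v2) is orthogonal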
(15) Give a counterexample to the previous problem when (v1, v2) is not orthogonal; take
n = 2 and give an explicit choice of v, v1, and v2 such that proj_V(v) ≠ proj_{v1} v +
proj_{v2} v. Hint: For n = 2 we must have V = F², so proj_V(v) = proj_{F²}(v) = I(v) = v.
So you just need to show that the RHS is not v in your example.
(16) Prove that the set of functions Z → R is a vector space, but that it is infinite-dimensional.
(For the infinite-dimensionality there are many ways to proceed. One
way is to produce infinitely many functions which are linearly independent.)
(17) Prove that the vector space of continuous functions R → R is infinite-dimensional.
(Hint: Recall that the vector space of polynomials is infinite-dimensional. Recall also
that the map from polynomials to the corresponding functions is an injective linear
map. Note that polynomial functions are indeed continuous. Finally, recall that an
injective linear map cannot decrease dimension; in particular, if T : V → W
is injective and V is infinite-dimensional, then W must also be infinite-dimensional.)
(18) Compute the Fourier transform of v = [2, 1, 2, 1]: recall this is the vector [â0, â1, â2, â3]
such that v = â0[1, 1, 1, 1] + â1[1, i, −1, −i] + â2[1, −1, 1, −1] + â3[1, −i, −1, i]. Hint:
Use that the Fourier basis is orthogonal (but not orthonormal!), so that â_k is the
projection coefficient ⟨v, w_k⟩/⟨w_k, w_k⟩, where w_k = [1, i^k, i^{2k}, i^{3k}].
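Since the basis here is orthogonal but not orthonormal, each coefficient comes out of the projection formula with a division by ⟨w_k, w_k⟩ = 4; remember that with ⟨v, w⟩ = v · w̄ it is the basis vector that gets conjugated. A possible NumPy check (the same check, with the ±1 wavelet vectors and the 1/4 factors, works for the next problem):

    import numpy as np

    v = np.array([2, 1, 2, 1], dtype=complex)
    basis = [np.array([1j**(j * k) for j in range(4)]) for k in range(4)]

    # a_hat_k = <v, w_k> / <w_k, w_k>; np.vdot conjugates its first argument
    a_hat = [np.vdot(w, v) / np.vdot(w, w) for w in basis]
    print(np.round(a_hat, 10))

    # reconstruction check: v should equal the sum of a_hat_k * w_k
    print(np.allclose(v, sum(a * w for a, w in zip(a_hat, basis))))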
(19) Compute the wavelet transform of v = [2, 1, 2, 1], which we define to be the vector
[b̂0, b̂1, b̂2, b̂3] such that v = (1/4)b̂0[1, 1, 1, 1] + (1/4)b̂1[1, 1, −1, −1] + (1/4)b̂2[1, −1, 1, −1] +
(1/4)b̂3[1, −1, −1, 1]. Again, use that the wavelet basis
([1, 1, 1, 1], [1, 1, −1, −1], [1, −1, 1, −1], [1, −1, −1, 1])
is orthogonal (and note that the 1/4 factors are here in order to agree with the guest
lecture and the homework).
(20) Find the angle between the vectors [1, 2, 3, 4, 5] and [1, −1, 2, 0, 1].
(21) Solve the system of equations Ax = b, where
\[
A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 1 & 1 & -2 \end{pmatrix}
\]
and:
(a) b = [1, 1, 0];
(b) b = [1, 1, 1].
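To check your row reduction (or to detect an inconsistent system), one option is SymPy's exact rref of the augmented matrix; a pivot in the last column signals that there is no solution. The entries follow the matrix as printed above:

    from sympy import Matrix

    A = Matrix([[1, 2, 0], [0, 1, 2], [1, 1, -2]])
    for b in ([1, 1, 0], [1, 1, 1]):
        aug = A.row_join(Matrix(b))           # augmented matrix [A | b]
        rref, pivots = aug.rref()
        print(rref, pivots)                   # a pivot in column 3 means inconsistent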
(22) Prove using the dimension theorem that the space of polynomials of degree ≤ 10 such
that P(2) = 0, P″(1) + P′(0) = 0, and P′(1) = 0 has dimension 8.
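One way to see what the dimension theorem is doing here: each of the three conditions is a linear functional on the 11-dimensional space of coefficients (c0, . . . , c10) of P, so the space in question is the kernel of a linear map to F^3, and its dimension is 11 minus the rank of the 3-by-11 constraint matrix. A possible SymPy sketch of that count:

    from sympy import Matrix

    # Rows are the three conditions applied to P(x) = c0 + c1 x + ... + c10 x^10.
    rows = [
        [2**j for j in range(11)],                              # P(2) = 0
        [j*(j - 1) + (1 if j == 1 else 0) for j in range(11)],  # P''(1) + P'(0) = 0
        [j for j in range(11)],                                 # P'(1) = 0
    ]
    C = Matrix(rows)
    print(11 - C.rank())    # should match the dimension claimed in the problem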
(23) (a) Show that the set of polynomials of degree ≤ 5 satisfying P′(3) = 1 is not a
vector space.
(b) Nonetheless, explain how this set of polynomials can be described as {x + Q |
Q ∈ V}, where V is a vector space. (Hint: this vector space should be the kernel
of a linear map P_{≤5} → F.)
(24) Let u, v, w be eigenvectors of the same matrix and suppose that u + 2v + 3w = 0. Prove
that at least two of the eigenvectors have the same eigenvalue.
(25) Reprove from the homework: the dimension of Hom(V, W) is the product of the
dimension of V and that of W. (Hint: fixing a basis B of V and a basis C of W, we proved
there is an isomorphism Hom(V, W) ≅ Mat_{mn}(F), T ↦ [T]_{BC}, where n = dim V and
m = dim W. Then the dimension of the space of m by n matrices is easily seen to
be mn.)
(26) Now we see from the previous problem that dim Hom(V, V) = n², where n = dim V.
Show that the ordered set (1, T, T², . . . , T^{n²}) is linearly dependent (hint: its length is
n² + 1 > n²). Conclude that there is some polynomial a0 + a1 T + · · · + a_{n²} T^{n²}, with not
all ai equal to zero, which equals zero.
Remark: The Cayley-Hamilton theorem sharpens this considerably: it proves that
χ_T(T) = 0, where χ_T(x) is the characteristic polynomial of T. This has degree only
n, much less than n².
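The Cayley-Hamilton statement is easy to check for any particular matrix; one possible SymPy sketch (exact arithmetic, Horner evaluation of χ_A at A):

    from sympy import Matrix, symbols, eye, zeros

    x = symbols('x')
    A = Matrix([[1, 2], [-3, 2]])            # any square matrix will do here
    chi = (x * eye(2) - A).det()             # characteristic polynomial chi_A(x)
    coeffs = chi.as_poly(x).all_coeffs()     # leading coefficient first

    result = zeros(2, 2)
    for c in coeffs:
        result = result * A + c * eye(2)     # Horner evaluation of chi_A(A)
    print(result)                            # should be the zero matrix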
(27) Prove that, if V is a vector space and V1 and V2 are subspaces, then V1 + V2 :=
{v + w | v ∈ V1 , w ∈ V2 } is also a vector space. Moreover, prove that dim(V1 + V2 ) ≤
dim V1 + dim V2 (hint: recall that the dimension is the minimum size of a spanning
set, and produce a spanning set of V1 + V2 of size dim V1 + dim V2 ).
(28) Prove that, if V1 ∩ V2 = {0}, then dim(V1 + V2 ) = dim V1 + dim V2 . (Hint: prove that
bases for the two combine to form a basis).
(29) (a) Prove directly (using summations) the associativity property for matrices, A(BC) =
(AB)C.
(b) Now prove the same property, this time using linear transformations, as follows.
For simplicity we assume the matrices are all square, n × n. Then, if
A, B, C are matrices, let T_A, T_B, T_C : F^n → F^n be the corresponding linear
transformations T_A(v) = Av, T_B(v) = Bv, and T_C(v) = Cv. Recall first that
[T_A]_{SS} = A, [T_B]_{SS} = B, [T_C]_{SS} = C, with S the standard basis. Next recall that,
since T_A, T_B, and T_C are all functions, we have (T_A ◦ T_B) ◦ T_C = T_A ◦ (T_B ◦ T_C).
Finally, recall the theorem from class that [T_1 ◦ T_2]_{SS} = [T_1]_{SS}[T_2]_{SS} for any
linear transformations T_1 and T_2. Put all this together to prove (AB)C = A(BC),
without using any summations.