1. (5 points each) State whether the following are true or false, and justify all of your
answers.
a. Let A be a 3×3 matrix with eigenvectors $v_1 = \begin{bmatrix} 0 \\ 1 \\ 5 \end{bmatrix}$, $v_2 = \begin{bmatrix} 1 \\ 2 \\ 8 \end{bmatrix}$, $v_3 = \begin{bmatrix} 4 \\ 1 \\ 0 \end{bmatrix}$. A is diagonalizable.
Solution:
TRUE. These vectors are linearly independent since:
$\begin{bmatrix} 0 & 1 & 4 \\ 1 & 2 & 1 \\ 5 & 8 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 4 \\ 0 & -2 & -5 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 4 \\ 0 & 0 & 3 \end{bmatrix}$

There is a pivot in every column, so the columns are linearly independent. Thus we have 3 linearly independent eigenvectors for a 3×3 matrix, which implies that A is diagonalizable by the Diagonalization Theorem.
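As a quick numerical sanity check (a sketch, not part of the original solution), the matrix with these eigenvectors as columns should have full rank:

```python
import numpy as np

# Columns are the three given eigenvectors v1, v2, v3.
V = np.array([[0, 1, 4],
              [1, 2, 1],
              [5, 8, 0]])

# Rank 3 means the eigenvectors are linearly independent, so a 3x3
# matrix having them as eigenvectors has an eigenvector basis.
print(np.linalg.matrix_rank(V))  # 3
```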
b. $\det(A^TA) \ge 0$.
Solution:
TRUE.
$\det(A^TA) = \det(A^T)\det(A)$, since $\det(AB) = (\det A)(\det B)$
$= (\det A)(\det A)$, since $\det A = \det(A^T)$
$= (\det A)^2 \ge 0$, since $\det A \in \mathbb{R}$.
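The identity is easy to spot-check numerically; a minimal sketch with a randomly generated matrix (seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # any real square matrix

# det(A^T A) equals (det A)^2, hence is nonnegative.
print(np.isclose(np.linalg.det(A.T @ A), np.linalg.det(A) ** 2))  # True
print(np.linalg.det(A.T @ A) >= 0)                                # True
```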
c. It is possible for a 3×3 matrix with real entries to have the eigenvalues 2, -3, and i.
Solution:
FALSE.
Complex eigenvalues of a real-valued matrix come in conjugate pairs, so -i would also
need to be an eigenvalue.
d. If J is the Jordan canonical form of a matrix A, then A is similar to J.
Solution:
TRUE.
If J is the Jordan canonical form of a matrix A, then $A = PJP^{-1}$ for some invertible matrix P, which implies that A is similar to J.
e. Let $A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 7 \\ 3 & 6 & 9 \end{bmatrix}$. The vector $v = \begin{bmatrix} 10 \\ -1 \\ -2 \end{bmatrix}$ is in Nul(A).
Solution:
FALSE.
A vector v is in Nul(A) if Av = 0.
Here, $Av = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 7 \\ 3 & 6 & 9 \end{bmatrix}\begin{bmatrix} 10 \\ -1 \\ -2 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \\ 6 \end{bmatrix} \ne 0$. Thus $v \notin \mathrm{Nul}(A)$.
f. Each eigenvalue of A is also an eigenvalue of A2.
Solution:
FALSE.
If $A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$, then the eigenvalues of A are 1 and 2. But $A^2 = \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix}$, and its eigenvalues are 1 and 4. Thus the eigenvalue 2 of A is not an eigenvalue of $A^2$.
In general, if $\lambda$ is an eigenvalue of A, then $\lambda^2$ is an eigenvalue of $A^2$.
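A short NumPy check of this counterexample (a sketch, not part of the exam):

```python
import numpy as np

A = np.diag([1.0, 2.0])
print(np.linalg.eigvals(A))      # [1. 2.]
print(np.linalg.eigvals(A @ A))  # [1. 4.] -- 2 is not an eigenvalue of A^2
```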
2. Let $u = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$, $v = \begin{bmatrix} 0 \\ -4 \\ 2 \end{bmatrix}$, $w = \begin{bmatrix} 5 \\ 6 \\ 7 \end{bmatrix}$ be vectors in R3.
a. (5 points) What is the distance from u to w?
Solution:
$\mathrm{dist}(u, w) = \lVert u - w \rVert = \left\lVert \begin{bmatrix} -4 \\ -4 \\ -4 \end{bmatrix} \right\rVert = \sqrt{16 + 16 + 16} = \sqrt{48} = 4\sqrt{3}$
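A one-line numerical check (a sketch, not part of the original):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
w = np.array([5.0, 6.0, 7.0])
print(np.linalg.norm(u - w), 4 * np.sqrt(3))  # both 6.9282...
```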
b. (10 points) Find the component of u orthogonal to v.
Solution:
Let L = Span{v}.
$u - \mathrm{proj}_L u = u - \dfrac{u \cdot v}{v \cdot v}\,v = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} - \dfrac{-2}{20}\begin{bmatrix} 0 \\ -4 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1.6 \\ 3.2 \end{bmatrix}$
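The same computation in NumPy, as a sanity check (a sketch, not part of the original):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, -4.0, 2.0])

# Component of u orthogonal to v: subtract the projection onto v.
perp = u - (u @ v) / (v @ v) * v
print(perp)      # [1.  1.6 3.2]
print(perp @ v)  # 0.0, confirming orthogonality to v
```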
c. (10 points) Find the projection of u onto the subspace of R3 spanned by v and w.
[NOTE: To use the nice dot product formula, what kind of set/basis must you have??]
Solution:
We must have an orthogonal basis for Span{v, w} – we use Gram-Schmidt!
Let $v_1 = v$; $v_2 = w - \dfrac{w \cdot v}{v \cdot v}\,v = \begin{bmatrix} 5 \\ 6 \\ 7 \end{bmatrix} - \dfrac{-10}{20}\begin{bmatrix} 0 \\ -4 \\ 2 \end{bmatrix} = \begin{bmatrix} 5 \\ 4 \\ 8 \end{bmatrix}$.

Then, if W = Span{v, w}:

$\mathrm{proj}_W u = \dfrac{u \cdot v_1}{v_1 \cdot v_1}\,v_1 + \dfrac{u \cdot v_2}{v_2 \cdot v_2}\,v_2 = \dfrac{-2}{20}\begin{bmatrix} 0 \\ -4 \\ 2 \end{bmatrix} + \dfrac{37}{105}\begin{bmatrix} 5 \\ 4 \\ 8 \end{bmatrix} = \begin{bmatrix} 37/21 \\ 190/105 \\ 275/105 \end{bmatrix} = \begin{bmatrix} 37/21 \\ 38/21 \\ 55/21 \end{bmatrix}$
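A NumPy sketch of the same Gram-Schmidt step and projection (not part of the original solution):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, -4.0, 2.0])
w = np.array([5.0, 6.0, 7.0])

# Gram-Schmidt on {v, w} to get an orthogonal basis for W = Span{v, w}.
v1 = v
v2 = w - (w @ v1) / (v1 @ v1) * v1  # [5. 4. 8.]

# Projection of u onto W using the orthogonal-basis formula.
proj = (u @ v1) / (v1 @ v1) * v1 + (u @ v2) / (v2 @ v2) * v2
print(proj)                         # [1.7619... 1.8095... 2.6190...]
print(np.array([37, 38, 55]) / 21)  # same values
```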
d. (10 points) Find the distance from u to the subspace of R3 spanned by v and w.
Solution:
Let W = Span{v, w}.
$\mathrm{dist}(u, W) = \lVert u - \mathrm{proj}_W u \rVert = \left\lVert \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} - \begin{bmatrix} 37/21 \\ 38/21 \\ 55/21 \end{bmatrix} \right\rVert = \left\lVert \begin{bmatrix} -16/21 \\ 4/21 \\ 8/21 \end{bmatrix} \right\rVert = \dfrac{\sqrt{16^2 + 4^2 + 8^2}}{21} = \dfrac{\sqrt{336}}{21} = \dfrac{4\sqrt{21}}{21}$
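And the corresponding numerical check (again a sketch, not part of the original):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
proj = np.array([37.0, 38.0, 55.0]) / 21

print(np.linalg.norm(u - proj))  # 0.8728...
print(4 * np.sqrt(21) / 21)      # same value
```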
3. (20 points) Let u1, …, up be an orthogonal basis for a subspace W of Rn, and let
T: Rn→ Rn be defined by T(x) = projW x. Show that T is a linear transformation.
Solution:
Since u1, …, up is an orthogonal basis, we can write:
$\mathrm{proj}_W x = \dfrac{x \cdot u_1}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{x \cdot u_p}{u_p \cdot u_p}\,u_p$

To show that T is a linear transformation, we must show 2 things:
(i) T(x + y) = T(x) + T(y):
$T(x+y) = \dfrac{(x+y) \cdot u_1}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{(x+y) \cdot u_p}{u_p \cdot u_p}\,u_p$

$= \dfrac{x \cdot u_1 + y \cdot u_1}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{x \cdot u_p + y \cdot u_p}{u_p \cdot u_p}\,u_p$ (prop’s of dot product)

$= \left(\dfrac{x \cdot u_1}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{x \cdot u_p}{u_p \cdot u_p}\,u_p\right) + \left(\dfrac{y \cdot u_1}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{y \cdot u_p}{u_p \cdot u_p}\,u_p\right)$ (prop’s of vectors)

$= T(x) + T(y)$
(ii) T(cx) = cT(x):
$T(cx) = \dfrac{(cx) \cdot u_1}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{(cx) \cdot u_p}{u_p \cdot u_p}\,u_p$

$= \dfrac{c(x \cdot u_1)}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{c(x \cdot u_p)}{u_p \cdot u_p}\,u_p$ (prop’s of dot product)

$= c\left(\dfrac{x \cdot u_1}{u_1 \cdot u_1}\,u_1 + \cdots + \dfrac{x \cdot u_p}{u_p \cdot u_p}\,u_p\right)$ (prop’s of vectors)

$= cT(x)$

Thus, T is a linear transformation.
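Because the projection formula is completely explicit, both linearity properties can be spot-checked numerically. A minimal NumPy sketch (the dimension, subspace, and test vectors are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# An orthogonal basis {u1, u2} for a random 2-dimensional subspace W of R^4.
u1 = rng.standard_normal(4)
u2 = rng.standard_normal(4)
u2 -= (u2 @ u1) / (u1 @ u1) * u1  # one Gram-Schmidt step

def T(x):
    """Orthogonal projection of x onto W = Span{u1, u2}."""
    return (x @ u1) / (u1 @ u1) * u1 + (x @ u2) / (u2 @ u2) * u2

x, y, c = rng.standard_normal(4), rng.standard_normal(4), 2.5
print(np.allclose(T(x + y), T(x) + T(y)))  # True: additivity
print(np.allclose(T(c * x), c * T(x)))     # True: homogeneity
```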
4. Let a subspace U of R5 be defined by: $U = \{(x_1, x_2, x_3, x_4, x_5) : x_1 = 3x_2,\ x_3 = 7x_4\}$
a. (15 points) Find a basis for U.
Solution:
Vectors in U look like:

$\begin{bmatrix} 3x_2 \\ x_2 \\ 7x_4 \\ x_4 \\ x_5 \end{bmatrix} = x_2 \begin{bmatrix} 3 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} 0 \\ 0 \\ 7 \\ 1 \\ 0 \end{bmatrix} + x_5 \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$. Therefore a basis is:

$\left\{ \begin{bmatrix} 3 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 7 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \right\}$
b. (5 points) What is the dimension of U?
Solution:
The dimension of U is the number of elements in the basis, which is 3.
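A quick NumPy check that the claimed basis vectors satisfy the defining equations and are independent (a sketch, not part of the original):

```python
import numpy as np

# The three claimed basis vectors for U, as columns of B.
B = np.array([[3, 0, 0],
              [1, 0, 0],
              [0, 7, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)

# Row 0 holds the x1-entries, row 1 the x2-entries, etc., so these
# check x1 = 3*x2 and x3 = 7*x4 for every column at once.
print(np.allclose(B[0], 3 * B[1]), np.allclose(B[2], 7 * B[3]))  # True True
print(np.linalg.matrix_rank(B))  # 3, so dim U = 3
```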
5. Given the quadratic form on R2: $Q(x) = x_1^2 + 25x_2^2 - 10x_1x_2$
a. (5 points) What is the matrix of this quadratic form?
Solution:
$A = \begin{bmatrix} 1 & -5 \\ -5 & 25 \end{bmatrix}$
b. (15 points) Make a change of variable that transforms this quadratic form into one with no cross-product terms. Tell me what the P matrix is, and write what the new quadratic form looks like in the new variable.
Solution:
To make a change of variable to satisfy the conditions that we want, we find a P such that x = Py, and with $y^T(P^TAP)y$ a quadratic form with no cross-product terms (i.e. with $P^TAP$ a diagonal matrix).
So, we want to orthogonally diagonalize A! Meaning we find an orthonormal basis for R2 such that $P^TAP$ is diagonal, where P is the matrix whose columns are the orthonormal basis vectors.
The eigenvalues of A are the roots of the polynomial:
$(1 - \lambda)(25 - \lambda) - 25 = \lambda^2 - 26\lambda = \lambda(\lambda - 26)$
⇒ the eigenvalues are 0 and 26.
Now we find bases for the eigenspaces:
For $\lambda = 0$: $A - 0I = \begin{bmatrix} 1 & -5 \\ -5 & 25 \end{bmatrix} \sim \begin{bmatrix} 1 & -5 \\ 0 & 0 \end{bmatrix}$ ⇒ basis is $\left\{ \begin{bmatrix} 5 \\ 1 \end{bmatrix} \right\}$

For $\lambda = 26$: $A - 26I = \begin{bmatrix} -25 & -5 \\ -5 & -1 \end{bmatrix} \sim \begin{bmatrix} 5 & 1 \\ 0 & 0 \end{bmatrix}$ ⇒ basis is $\left\{ \begin{bmatrix} 1 \\ -5 \end{bmatrix} \right\}$
Since these eigenvectors correspond to different eigenvalues, they are orthogonal. So to
get an orthonormal basis we just normalize.
$u_1 = \begin{bmatrix} 5/\sqrt{26} \\ 1/\sqrt{26} \end{bmatrix}$, $u_2 = \begin{bmatrix} 1/\sqrt{26} \\ -5/\sqrt{26} \end{bmatrix}$

$P = \begin{bmatrix} 5/\sqrt{26} & 1/\sqrt{26} \\ 1/\sqrt{26} & -5/\sqrt{26} \end{bmatrix}$

and the new quadratic form is $0y_1^2 + 26y_2^2$.
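A NumPy verification that this P does the job (a sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[ 1.0, -5.0],
              [-5.0, 25.0]])
P = np.array([[5.0,  1.0],
              [1.0, -5.0]]) / np.sqrt(26)

print(np.allclose(P.T @ P, np.eye(2)))  # True: P is orthogonal
print(np.round(P.T @ A @ P, 10))        # diag(0, 26): no cross terms
```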
6. (20 points) Suppose that $T_1, \ldots, T_n$ are injective linear transformations such that $T_1 \circ \cdots \circ T_n$ makes sense (i.e. the codomain of $T_{i+1}$ is the domain of $T_i$). Prove that $T_1 \circ \cdots \circ T_n$ is injective.
[Recall: $T_1 \circ \cdots \circ T_n(x) = T_1(\cdots(T_n(x))\cdots)$ = composition of functions]
Solution:
We can prove this in a few ways… I’ll do 2:
(i) A transformation T is one-to-one if and only if ker(T) = {0}.
Since $T_1, \ldots, T_n$ are injective, $\ker(T_1) = \{0\}, \ldots, \ker(T_n) = \{0\}$.
So now, what is in the kernel of $T_1 \circ \cdots \circ T_n$?
Assume $T_1 \circ \cdots \circ T_n(x) = T_1(\cdots(T_n(x))\cdots) = 0$. We show that x = 0.
$T_1$ injective ⇒ $T_2(\cdots(T_n(x))\cdots) = 0$.
$T_2$ injective ⇒ $T_3(\cdots(T_n(x))\cdots) = 0$.
⋮
$T_{n-1}$ injective ⇒ $T_n(x) = 0$.
$T_n$ injective ⇒ $x = 0$.
Thus $T_1 \circ \cdots \circ T_n(x) = 0$ ⇒ $x = 0$, which means that $\ker(T_1 \circ \cdots \circ T_n) = \{0\}$, and $T_1 \circ \cdots \circ T_n$ is injective.
(ii) A transformation T is one-to-one if and only if [$T(x) = T(y)$ ⇒ $x = y$].
So $T_i(x) = T_i(y)$ ⇒ $x = y$ for all $i = 1, \ldots, n$.
Assume $T_1 \circ \cdots \circ T_n(x) = T_1 \circ \cdots \circ T_n(y)$. We show that x = y.
$T_1$ injective ⇒ $T_2(\cdots(T_n(x))\cdots) = T_2(\cdots(T_n(y))\cdots)$
$T_2$ injective ⇒ $T_3(\cdots(T_n(x))\cdots) = T_3(\cdots(T_n(y))\cdots)$
⋮
$T_{n-1}$ injective ⇒ $T_n(x) = T_n(y)$
$T_n$ injective ⇒ $x = y$
Thus $T_1 \circ \cdots \circ T_n(x) = T_1 \circ \cdots \circ T_n(y)$ ⇒ $x = y$, which means that $T_1 \circ \cdots \circ T_n$ is injective.
7.
a. (5 points) What is the definition of orthogonally diagonalizable?
Solution:
A matrix A is orthogonally diagonalizable if there exist an orthogonal matrix P and a diagonal matrix D such that $A = PDP^{-1} = PDP^T$.
b. (20 points) Orthogonally diagonalize the matrix $A = \begin{bmatrix} 3 & -2 & 4 \\ -2 & 6 & 2 \\ 4 & 2 & 3 \end{bmatrix}$, given that its eigenvalues are 7 and −2.
Solution:
We want to find an orthonormal basis for R3.
Since we have the eigenvalues already, we find bases for the eigenspaces:
For $\lambda = 7$: $A - 7I = \begin{bmatrix} -4 & -2 & 4 \\ -2 & -1 & 2 \\ 4 & 2 & -4 \end{bmatrix} \sim \begin{bmatrix} -4 & -2 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 1/2 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ ⇒ basis is $\left\{ \begin{bmatrix} 1 \\ -2 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \right\}$
For $\lambda = -2$: $A + 2I = \begin{bmatrix} 5 & -2 & 4 \\ -2 & 8 & 2 \\ 4 & 2 & 5 \end{bmatrix} \sim \begin{bmatrix} 1 & -4 & -1 \\ -2 & 8 & 2 \\ 4 & 2 & 5 \end{bmatrix} \sim \begin{bmatrix} 1 & -4 & -1 \\ 0 & 1 & 1/2 \\ 0 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1/2 \\ 0 & 0 & 0 \end{bmatrix}$ ⇒ basis is $\left\{ \begin{bmatrix} -1 \\ -1/2 \\ 1 \end{bmatrix} \right\}$ OR $\left\{ \begin{bmatrix} -2 \\ -1 \\ 2 \end{bmatrix} \right\}$
The basis that we found for the eigenspace of $\lambda = 7$ is not orthogonal, so we use Gram-Schmidt to make it orthogonal:
$v_1 = \begin{bmatrix} 1 \\ -2 \\ 0 \end{bmatrix}$; $v_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} - \dfrac{1}{5}\begin{bmatrix} 1 \\ -2 \\ 0 \end{bmatrix} = \begin{bmatrix} 4/5 \\ 2/5 \\ 1 \end{bmatrix}$
So $\{v_1, v_2\}$ is an orthogonal basis for the eigenspace of $\lambda = 7$, and $\left\{ v_1, v_2, \begin{bmatrix} -2 \\ -1 \\ 2 \end{bmatrix} \right\}$ is an orthogonal basis for R3, since $\begin{bmatrix} -2 \\ -1 \\ 2 \end{bmatrix}$ corresponds to a different eigenvalue than v1 and v2, which implies that it is orthogonal to both v1 and v2.
Now we just have to normalize:
$u_1 = \begin{bmatrix} 1/\sqrt{5} \\ -2/\sqrt{5} \\ 0 \end{bmatrix}$, $u_2 = \begin{bmatrix} 4/\sqrt{45} \\ 2/\sqrt{45} \\ 5/\sqrt{45} \end{bmatrix}$, $u_3 = \begin{bmatrix} -2/3 \\ -1/3 \\ 2/3 \end{bmatrix}$.
Thus $A = PDP^T$ for

$P = \begin{bmatrix} 1/\sqrt{5} & 4/\sqrt{45} & -2/3 \\ -2/\sqrt{5} & 2/\sqrt{45} & -1/3 \\ 0 & 5/\sqrt{45} & 2/3 \end{bmatrix}$ and $D = \begin{bmatrix} 7 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & -2 \end{bmatrix}$.
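A numerical verification of the factorization (a sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[ 3.0, -2.0, 4.0],
              [-2.0,  6.0, 2.0],
              [ 4.0,  2.0, 3.0]])

u1 = np.array([1.0, -2.0, 0.0]) / np.sqrt(5)
u2 = np.array([4.0,  2.0, 5.0]) / np.sqrt(45)
u3 = np.array([-2.0, -1.0, 2.0]) / 3
P = np.column_stack([u1, u2, u3])
D = np.diag([7.0, 7.0, -2.0])

print(np.allclose(P.T @ P, np.eye(3)))  # True: P is orthogonal
print(np.allclose(P @ D @ P.T, A))      # True: A = P D P^T
```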
8. (15 points) Show that two vectors u and v of an inner product space V are orthogonal if and only if $\lVert u - v \rVert^2 = \lVert u \rVert^2 + \lVert v \rVert^2$.
Solution:
First, we note that:
$\lVert u - v \rVert^2 = \langle u - v, u - v \rangle$ (by the def of $\lVert \cdot \rVert^2$)
$= \langle u, u \rangle - \langle u, v \rangle - \langle v, u \rangle + \langle v, v \rangle$ (by prop’s of inner product)
$= \langle u, u \rangle - 2\langle u, v \rangle + \langle v, v \rangle$ (by prop’s of inner product)
$= \lVert u \rVert^2 - 2\langle u, v \rangle + \lVert v \rVert^2$ (by the def of $\lVert \cdot \rVert^2$)

Then $\lVert u - v \rVert^2 = \lVert u \rVert^2 + \lVert v \rVert^2$ ⟺ $-2\langle u, v \rangle = 0$ ⟺ $\langle u, v \rangle = 0$ ⟺ u and v are orthogonal.
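A quick numerical instance with the standard dot product on R3 (an illustration, not part of the original proof):

```python
import numpy as np

# An orthogonal pair: u . v = 2 - 2 + 0 = 0.
u = np.array([1.0, -2.0, 0.0])
v = np.array([2.0,  1.0, 3.0])
print(u @ v)  # 0.0

lhs = np.linalg.norm(u - v) ** 2
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2
print(np.isclose(lhs, rhs))  # True
```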
9.
a. (5 points) State the Cauchy-Schwarz Inequality.
Solution:
For all u, v in an inner product space V, $|\langle u, v \rangle| \le \lVert u \rVert \, \lVert v \rVert$.
b. (5 points) Circle one: “Cauchy” is pronounced cow-shee coh-shee cow-chee
c. (10 points) Let $u = \begin{bmatrix} a \\ b \end{bmatrix}$ and $v = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Show that $\dfrac{a+b}{2} \le \sqrt{\dfrac{a^2+b^2}{2}}$.
Solution:
We use Cauchy-Schwarz!
$\lVert u \rVert = \sqrt{a^2 + b^2}$, $\lVert v \rVert = \sqrt{1^2 + 1^2} = \sqrt{2}$, $\langle u, v \rangle = a + b$
Then Cauchy-Schwarz ⇒ $a + b \le \sqrt{a^2 + b^2}\,\sqrt{2}$
⇒ $(a + b)^2 \le (a^2 + b^2)(2)$
⇒ $\dfrac{(a+b)^2}{2^2} \le \dfrac{a^2+b^2}{2}$
⇒ $\dfrac{a+b}{2} \le \sqrt{\dfrac{a^2+b^2}{2}}$
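The inequality (the two-number arithmetic-quadratic mean inequality) is easy to spot-check numerically (a sketch, not part of the original):

```python
import numpy as np

rng = np.random.default_rng(2)
for a, b in rng.standard_normal((5, 2)):
    # (a + b)/2 <= sqrt((a^2 + b^2)/2) for all real a, b
    print((a + b) / 2 <= np.sqrt((a**2 + b**2) / 2))  # True each time
```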