Exam Review 1 Solutions
Spring 16, 21-241: Matrices and Linear Transformations
February 12, 2016

Abstract. [TOPICS COVERED] The fundamental problem in linear algebra. Solving systems of linear equations. Matrices and Gaussian elimination. Logic. Proof techniques. Vector spaces. Geometry of linear equations. Linear transformations. Matrix multiplication. Matrix algebra.

1 Some problems

1. (a) Fill in the blanks.

$$
\begin{array}{cc|ccccc}
A & B & \neg A & A \wedge B & A \vee B & A \implies B & A \iff B \\
\hline
T & T & F & T & T & T & T \\
T & F & F & F & T & F & F \\
F & T & T & F & T & T & F \\
F & F & T & F & F & T & T
\end{array}
$$

(b) State the definition of even and odd used in class.

Solution. We say that $x$ is even if there exists $y \in \mathbb{Z}$ such that $x = 2y$. We say that $x$ is odd if there exists $y \in \mathbb{Z}$ such that $x = 2y + 1$.

(c) In each of the following statements, circle the terms (e.g. $x$, $y$) that the bolded quantifiers refer to.

i. $\forall x \in \mathbb{Z},\ \exists y \in \mathbb{Z},\ P(x, y) \implies Q(x)$
ii. $\forall x \in \mathbb{Z},\ \exists y \in \mathbb{Z},\ P(x, y) \vee \exists y \in \mathbb{Z},\ Q(x, y)$

Solution. (underlined)

i. $\forall x \in \mathbb{Z},\ \exists y \in \mathbb{Z},\ P(x, y) \implies Q(x)$
ii. $\forall x \in \mathbb{Z},\ \exists y \in \mathbb{Z},\ P(x, y) \vee \exists y \in \mathbb{Z},\ Q(x, y)$

(d) Negate the following statements:

i. $\forall x,\ P(x)$
ii. $\exists x,\ P(x)$
iii. $P \wedge R$
iv. $P \vee R$
v. $P \implies R$
vi. $P \iff R$

Solution.

i. $\exists x,\ \neg P(x)$
ii. $\forall x,\ \neg P(x)$
iii. $\neg P \vee \neg R$
iv. $\neg P \wedge \neg R$
v. $P \wedge \neg R$
vi. $(P \wedge \neg R) \vee (\neg P \wedge R)$

(e) Consider the statement:
$$\forall x \in \mathbb{Z},\ \big(\exists y \in \mathbb{Z},\ x = 2y + 1\big) \implies \big(\exists y \in \mathbb{Z},\ 5x^2 = 4y + 1\big)$$

i. If we denote the statement as $\forall x \in \mathbb{Z},\ P(x)$, write down what $P(x)$ is.
ii. If we denote $P(x)$ as $Q(x) \implies R(x)$, write down what $Q(x)$ and $R(x)$ are.
iii. What is the negation of $R(x)$?
iv. Using the above parts (or not), negate the original statement. Be sure to show all your working.

Solution.

i. $P(x):\ \big(\exists y \in \mathbb{Z},\ x = 2y + 1\big) \implies \big(\exists y \in \mathbb{Z},\ 5x^2 = 4y + 1\big)$
ii. $Q(x):\ \exists y \in \mathbb{Z},\ x = 2y + 1$; $\quad R(x):\ \exists y \in \mathbb{Z},\ 5x^2 = 4y + 1$
iii.
$$\neg R(x) = \neg\big(\exists y \in \mathbb{Z},\ 5x^2 = 4y + 1\big) = \forall y \in \mathbb{Z},\ \neg\big(5x^2 = 4y + 1\big) = \forall y \in \mathbb{Z},\ 5x^2 \neq 4y + 1$$
iv.
$$
\begin{aligned}
&\neg\Big(\forall x \in \mathbb{Z},\ \big(\exists y \in \mathbb{Z},\ x = 2y + 1\big) \implies \big(\exists y \in \mathbb{Z},\ 5x^2 = 4y + 1\big)\Big) \\
&= \exists x \in \mathbb{Z},\ \neg\Big(\big(\exists y \in \mathbb{Z},\ x = 2y + 1\big) \implies \big(\exists y \in \mathbb{Z},\ 5x^2 = 4y + 1\big)\Big) \\
&= \exists x \in \mathbb{Z},\ \big(\exists y \in \mathbb{Z},\ x = 2y + 1\big) \wedge \neg\big(\exists y \in \mathbb{Z},\ 5x^2 = 4y + 1\big) \\
&= \exists x \in \mathbb{Z},\ \big(\exists y \in \mathbb{Z},\ x = 2y + 1\big) \wedge \big(\forall y \in \mathbb{Z},\ 5x^2 \neq 4y + 1\big)
\end{aligned}
$$

(f) Prove or disprove the statement in 1(e).

Solution. We shall prove the statement. Let $x \in \mathbb{Z}$ such that there exists $y \in \mathbb{Z}$ with $x = 2y + 1$. Then
$$5x^2 = 5(2y + 1)^2 = 5(4y^2 + 4y + 1) = 20y^2 + 20y + 5 = 4(5y^2 + 5y + 1) + 1.$$
Since $y \in \mathbb{Z}$, $z = 5y^2 + 5y + 1 \in \mathbb{Z}$. Thus $5x^2 = 4z + 1$, proving the statement.

2. The following augmented matrices are derived from solving a system of linear equations in $\mathbb{R}^4$, with the columns corresponding to $x_1, x_2, x_3, x_4$ respectively. Determine the set of all solutions for each system.

(a)
$$\left(\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 2 \\ 0 & 0 & 1 & 0 & 4 \\ 0 & 0 & 0 & 1 & 8 \end{array}\right)$$

Solution. (working omitted) $(x_1, x_2, x_3, x_4) = (0, 2, 4, 8)$

(b)
$$\left(\begin{array}{cccc|c} 0 & 1 & 0 & 1 & 4 \\ 0 & 0 & 1 & 2 & 4 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right)$$

Solution. (working omitted) $(x_1, x_2, x_3, x_4) = (s,\ 4 - t,\ 4 - 2t,\ t)$, $s, t \in \mathbb{R}$

(c)
$$\left(\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 3 \\ 0 & 0 & 1 & 0 & 5 \end{array}\right)$$

Solution. (working omitted) $(x_1, x_2, x_3, x_4) = (3,\ s,\ 5,\ t)$, $s, t \in \mathbb{R}$

(d)
$$\left(\begin{array}{cccc|c} 0 & 1 & 0 & 2 & 3 \\ 0 & 0 & 0 & 0 & 1 \end{array}\right)$$

Solution. (working omitted) No solution. There is a contradiction in the second row.

(e)
$$\left(\begin{array}{cccc|c} 1 & 0 & 2 & 0 & 2 \\ 0 & 1 & 3 & 0 & 3 \\ 0 & 0 & 0 & 1 & 5 \end{array}\right)$$

Solution. (working omitted) $(x_1, x_2, x_3, x_4) = (2 - 2s,\ 3 - 3s,\ s,\ 5)$, $s \in \mathbb{R}$

(f)
$$\left(\begin{array}{cccc|c} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right)$$

Solution. (working omitted) $(x_1, x_2, x_3, x_4) = (s, t, u, v)$, $s, t, u, v \in \mathbb{R}$

(g)
$$\left(\begin{array}{cccc|c} 0 & 1 & 2 & 3 & 4 \end{array}\right)$$

Solution. (working omitted) $(x_1, x_2, x_3, x_4) = (s,\ 4 - 2t - 3u,\ t,\ u)$, $s, t, u \in \mathbb{R}$
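As an optional aid for Problem 2 (the solutions above do not rely on it), the row reductions can be checked by machine. The sketch below assumes SymPy is installed and uses `Matrix.rref()` on the augmented matrix from part (b), reporting the reduced matrix and its pivot columns.

```python
# Illustrative check for Problem 2(b); not part of the original review sheet.
# Matrix.rref() returns (reduced matrix, tuple of pivot column indices).
from sympy import Matrix

# Augmented matrix from 2(b); the last column is the right-hand side.
aug = Matrix([
    [0, 1, 0, 1, 4],
    [0, 0, 1, 2, 4],
    [0, 0, 0, 0, 0],
])

rref, pivots = aug.rref()
print(rref)    # this matrix is already in reduced row echelon form
print(pivots)  # (1, 2): x2 and x3 are pivot variables, x1 and x4 are free
```

Reading off the free variables $x_1 = s$ and $x_4 = t$ from the reduced matrix reproduces $(s,\ 4 - t,\ 4 - 2t,\ t)$.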
3. (a) Solve the following linear systems by Gaussian Elimination.

$$
\begin{array}{rcl} a + 2b + 3c &=& 1 \\ b + 4c &=& 0 \\ 5a + 6b &=& 0 \end{array}
\qquad
\begin{array}{rcl} d + 2e + 3f &=& 0 \\ e + 4f &=& 1 \\ 5d + 6e &=& 0 \end{array}
\qquad
\begin{array}{rcl} g + 2h + 3j &=& 0 \\ h + 4j &=& 0 \\ 5g + 6h &=& 1 \end{array}
$$

(Hint: Solve them all together.)

(b) Compute
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix} \begin{pmatrix} a & d & g \\ b & e & h \\ c & f & j \end{pmatrix}.$$

Solution. (working omitted)
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

(c) Compute
$$\begin{pmatrix} a & d & g \\ b & e & h \\ c & f & j \end{pmatrix} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}.$$

Solution. (working omitted)
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

4. Consider the following system of linear equations:
$$3x + 2y = 9$$
$$2x - y = -1$$

(a) Solve the system.

Solution. (working omitted) $(x, y) = (1, 3)$

(b) Express your solution in a row picture.

Solution omitted.

(c) Express your solution in a column picture.

Solution omitted.

5. Let $M$ be the set of all $2 \times 2$ square matrices. Define the following operations:
$$\oplus : M \times M \to M \quad \text{defined by} \quad A \oplus B = A + B$$
and
$$\odot : \mathbb{R} \times M \to M \quad \text{defined by} \quad c \odot A = cA.$$

(a) We claim that $\{M, \mathbb{R}, \oplus, \odot\}$ (i.e. the vector space with vectors in $M$ and scalars in $\mathbb{R}$, with addition $\oplus$ and scalar multiplication $\odot$) is a vector space. In order to prove this, we would need to verify all the axioms, which we shall not do here. Instead, verify only the following axioms:

i. $(c + d)v = cv + dv$
ii. $(cd)v = c(dv)$

Solution. (justification for steps omitted)

i. $(c + d) \odot M = (c + d)M = cM + dM = cM \oplus dM = (c \odot M) \oplus (d \odot M)$
ii. $(cd) \odot M = (cd)M = c(dM) = c \odot dM = c \odot (d \odot M)$

(b) Express $\begin{pmatrix} 13 \\ 9 \\ 22 \end{pmatrix}$ as a linear combination of $\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}$, and $\begin{pmatrix} 4 \\ 3 \\ 5 \end{pmatrix}$.

Solution. (working omitted)
$$\frac{13}{3}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} - \frac{41}{9}\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} + \frac{40}{9}\begin{pmatrix} 4 \\ 3 \\ 5 \end{pmatrix} = \begin{pmatrix} 13 \\ 9 \\ 22 \end{pmatrix}$$

(c) Let $T : \mathbb{R}^3 \to M$ be a linear transformation such that
$$T\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}, \quad T\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 1 & 3 \end{pmatrix}, \quad T\begin{pmatrix} 4 \\ 3 \\ 5 \end{pmatrix} = \begin{pmatrix} 4 & -3 \\ 5 & 9 \end{pmatrix}.$$
Find $T\begin{pmatrix} 13 \\ 9 \\ 22 \end{pmatrix}$.

Solution. (working omitted)
$$\begin{pmatrix} 13 & -9 \\ 22 & 35 \end{pmatrix}$$

(d) You may have observed that $T : \mathbb{R}^3 \to M$ is actually defined by
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} x & -y \\ z & x + z \end{pmatrix}.$$
Prove that this is indeed a linear transformation.

Solution.

Step 1:
$$
T\left(\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}\right)
= T\begin{pmatrix} x_1 + x_2 \\ y_1 + y_2 \\ z_1 + z_2 \end{pmatrix}
= \begin{pmatrix} x_1 + x_2 & -y_1 - y_2 \\ z_1 + z_2 & x_1 + x_2 + z_1 + z_2 \end{pmatrix}
= \begin{pmatrix} x_1 & -y_1 \\ z_1 & x_1 + z_1 \end{pmatrix} + \begin{pmatrix} x_2 & -y_2 \\ z_2 & x_2 + z_2 \end{pmatrix}
= T\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + T\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
$$

Step 2:
$$
T\left(c\begin{pmatrix} x \\ y \\ z \end{pmatrix}\right)
= T\begin{pmatrix} cx \\ cy \\ cz \end{pmatrix}
= \begin{pmatrix} cx & -cy \\ cz & cx + cz \end{pmatrix}
= c\begin{pmatrix} x & -y \\ z & x + z \end{pmatrix}
= cT\begin{pmatrix} x \\ y \\ z \end{pmatrix}
$$

Thus $T$ is a linear transformation.

6. Consider the polynomial space $P_2$. Let $p(x) = a + bx + cx^2 \in P_2$.

(a) What is the zero vector?

Solution. (working omitted) $0$

(b) What is the inverse of $p(x)$?

Solution. (working omitted) $-a - bx - cx^2$

(c) Is $p(x + 1)$ in $P_2$?

Solution. Yes. $a + b(x + 1) + c(x + 1)^2 = (a + b + c) + (b + 2c)x + cx^2 \in P_2$

(d) Is $p(5)$ in $P_2$?

Solution. Yes. $p(5) = a + 5b + 25c \in P_2$

(e) When is $p(x^2)$ in $P_2$?

Solution. (working omitted) When $c = 0$.

7. In this problem, $x, y, z, \mathbf{0}$ (the zero vector) are $n \times 1$ vectors, $Q$ is an $n \times n$ matrix, and $c$ is a scalar. Circle the following expressions that should never appear in any of your working.

(a) $x \cdot y = x^T y$
(b) $(x \cdot y)z = (x \cdot z)y$
(c) $cx = x[\,c\,]$
(d) $(x \cdot y)z = (x^T y)z$
(e) $(x \cdot y)z = (zx^T)y$
(f) $\dfrac{y \cdot x}{z \cdot x} = \dfrac{y}{z}$
(g) $\mathbf{0} = 0$

Solution.
(a) - distinguish between a scalar and a $1 \times 1$ matrix.
(b) - not true in general.
(d) - RHS matrix multiplication is not valid: $x^T y$ is a $1 \times 1$ matrix, while $z$ is $n \times 1$.
(f) - RHS is not valid. You cannot divide a vector by another.
(g) - distinguish between the zero vector and the zero scalar.

8. Compute the following product of matrices:
$$\begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} \begin{pmatrix} 0 & 2 & 4 & 6 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \\ 5 \\ 7 \end{pmatrix}$$

Solution. (working omitted)
$$\begin{pmatrix} 68 \\ 136 \\ 204 \\ 272 \end{pmatrix}$$
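As an optional check of Problem 8 (assuming NumPy is available; the hand-out does not require it), the sketch below computes the product both ways that associativity allows, collapsing the inner row-times-column to the $1 \times 1$ matrix $(68)$ first.

```python
# Illustrative check for Problem 8; not part of the original review sheet.
import numpy as np

u = np.array([[1], [2], [3], [4]])      # 4x1 column
v = np.array([[0, 2, 4, 6]])            # 1x4 row
w = np.array([[1], [3], [5], [7]])      # 4x1 column

# Multiplying left to right builds a 4x4 intermediate (u @ v) first...
left_to_right = (u @ v) @ w
# ...while associativity lets us collapse v @ w to the 1x1 matrix [[68]] first.
right_to_left = u @ (v @ w)

print(left_to_right.ravel())                          # [ 68 136 204 272]
print(np.array_equal(left_to_right, right_to_left))   # True
```

Every entry of the answer is a multiple of the scalar $68 = (0, 2, 4, 6) \cdot (1, 3, 5, 7)$, which is exactly what the shortcut exploits.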
9. Let $\{A_k\}_{k \in \mathbb{N}}$ be a sequence of $m \times n$ matrices. Using induction, prove that for all $\ell \in \mathbb{N}$,
$$-\left(\sum_{i=1}^{\ell} A_i\right) = \sum_{i=1}^{\ell} (-A_i).$$

Solution.

Lemma 1. Let $A, B$ be $m \times n$ matrices. Then $-(A + B) = (-A) + (-B)$.

Proof. Fact: if $X = [x_{ij}]_{i,j}$, then $-X = [-x_{ij}]_{i,j}$. Thus
$$-(A + B) = -\big([a_{ij}]_{i,j} + [b_{ij}]_{i,j}\big) = -[a_{ij} + b_{ij}]_{i,j} = [-(a_{ij} + b_{ij})]_{i,j} = [(-a_{ij}) + (-b_{ij})]_{i,j} = [-a_{ij}]_{i,j} + [-b_{ij}]_{i,j} = (-A) + (-B).$$

Base case: When $\ell = 1$, $-(A_1) = (-A_1)$ indeed.

Induction hypothesis: Let $\ell \in \mathbb{N}$. Assume
$$-\left(\sum_{i=1}^{\ell} A_i\right) = \sum_{i=1}^{\ell} (-A_i).$$

Inductive step: We want to show that
$$-\left(\sum_{i=1}^{\ell+1} A_i\right) = \sum_{i=1}^{\ell+1} (-A_i).$$
We have
$$
\begin{aligned}
-\left(\sum_{i=1}^{\ell+1} A_i\right) &= -\left(\sum_{i=1}^{\ell} A_i + A_{\ell+1}\right) \\
&= \left(-\sum_{i=1}^{\ell} A_i\right) + (-A_{\ell+1}) && \text{by lemma} \\
&= \sum_{i=1}^{\ell} (-A_i) + (-A_{\ell+1}) && \text{by induction hypothesis} \\
&= \sum_{i=1}^{\ell+1} (-A_i).
\end{aligned}
$$

Conclusion: By the principle of mathematical induction, we have proven that
$$-\left(\sum_{i=1}^{\ell} A_i\right) = \sum_{i=1}^{\ell} (-A_i)$$
for all $\ell \in \mathbb{N}$.

10. Let $A$ and $B$ be $n \times n$ matrices. Express each column of $AB$ as a linear combination of the columns of $A$.

Solution. Let
$$A = \begin{pmatrix} c_1 & \cdots & c_n \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} d_1 & \cdots & d_n \end{pmatrix}.$$
Using the matrix-column representation, we get
$$AB = A\begin{pmatrix} d_1 & \cdots & d_n \end{pmatrix} = \begin{pmatrix} Ad_1 & \cdots & Ad_n \end{pmatrix}.$$
Observe the $i$-th column of $AB$, which will be $Ad_i$. Now, we can express the $i$-th column of $B$ in terms of its individual elements:
$$d_i = \begin{pmatrix} d_{1i} \\ \vdots \\ d_{ni} \end{pmatrix}.$$
Using the column-row representation, we get
$$Ad_i = \begin{pmatrix} c_1 & \cdots & c_n \end{pmatrix}\begin{pmatrix} d_{1i} \\ \vdots \\ d_{ni} \end{pmatrix} = d_{1i}c_1 + \cdots + d_{ni}c_n.$$

2 Some harder problems

11. Find values of $x$ and $y$ for which
$$\left(\begin{array}{ccc|c} 2 & 2 & 4 & 2 \\ 3 & x & -1 & y \\ -1 & 0 & 5 & 1 \end{array}\right)$$
has

(a) a unique solution

Solution. (working omitted) $x \neq 2$

(b) infinitely many solutions

Solution. (working omitted) $x = 2$, $y = 1$

(c) no solution.

Solution. (working omitted) $x = 2$, $y \neq 1$

12. Consider the following plane in $\mathbb{R}^3$:
$$P_1:\quad \mathbf{x} = \begin{pmatrix} 2 \\ 4 \\ 1 \end{pmatrix} + s\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix} + t\begin{pmatrix} 3 \\ 4 \\ -2 \end{pmatrix}, \quad s, t \in \mathbb{R}$$

(a) Is $P_1$ expressed in general form, normal form, vector form, or parametric form?

Solution. Vector form.

(b) Express $P_1$ in general form.

Solution. (working omitted) $2x - y + z = 1$

(c) Find the equation of the line $\ell$ that passes through $\begin{pmatrix} 3 \\ 2 \\ 5 \end{pmatrix}$ and is perpendicular to the normal of the plane $P_1$ and to the direction $\begin{pmatrix} 2 \\ 4 \\ 1 \end{pmatrix}$. Express your final answer in parametric form.

Solution. (working omitted)
$$\ell:\quad x = 3 + t, \quad y = 2, \quad z = 5 - 2t, \qquad t \in \mathbb{R}$$
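As an optional check of Problem 12(b) (assuming NumPy; this is illustrative only), the normal of the plane is the cross product of the two direction vectors, and the constant term comes from the point on the plane.

```python
# Illustrative check for Problem 12(b); not part of the original review sheet.
import numpy as np

p = np.array([2, 4, 1])     # point on the plane
u = np.array([1, 1, -1])    # first direction vector
v = np.array([3, 4, -2])    # second direction vector

n = np.cross(u, v)          # normal vector to the plane
d = n @ p                   # right-hand side of n . x = d

print(n, d)                 # [ 2 -1  1] 1  ->  2x - y + z = 1
```

The printed normal $(2, -1, 1)$ and constant $1$ give exactly the general form $2x - y + z = 1$.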
13. Prove that in a vector space, the following properties hold:

(a) $0v = \mathbf{0}$.

Solution.
$$
\begin{aligned}
0v = (0 + 0)v &= 0v + 0v && \because (c + d)x = cx + dx \\
\implies 0v + (-(0v)) &= 0v + 0v + (-(0v)) && \because x + (-x) = \mathbf{0} \\
\implies \mathbf{0} &= 0v + \mathbf{0} = 0v && \because x + \mathbf{0} = x
\end{aligned}
$$

(b) $(-1)v = -v$.

Solution.
$$
\begin{aligned}
\mathbf{0} = 0v = (1 + (-1))v &= 1v + (-1)v && \because (c + d)x = cx + dx \\
&= v + (-1)v && \because 1x = x \\
\implies -v = -v + v + (-1)v &= \mathbf{0} + (-1)v && \because -x + x = x + (-x) = \mathbf{0} \\
&= (-1)v && \because \mathbf{0} + x = x + \mathbf{0} = x
\end{aligned}
$$

(c) The zero vector is unique.

Solution. Let $\mathbf{0}_1$ and $\mathbf{0}_2$ be zero vectors. Then
$$\mathbf{0}_1 = \mathbf{0}_1 + \mathbf{0}_2 = \mathbf{0}_2 + \mathbf{0}_1 = \mathbf{0}_2$$
by the axioms on commutativity and the zero vector.

(d) The additive inverse is unique.

Solution. Let $x$ have inverses $(-x)_1$ and $(-x)_2$. Then
$$(-x)_1 + x + (-x)_2 = \mathbf{0} + (-x)_2 = (-x)_2 + \mathbf{0} = (-x)_2$$
by the axioms on commutativity, the zero vector, and inverses. Also,
$$(-x)_1 + x + (-x)_2 = (-x)_1 + \mathbf{0} = (-x)_1$$
by the axioms on the zero vector and inverses. Thus $(-x)_1 = (-x)_2$.

14. Let $P$ be the powerset of $\mathbb{N}$, the set of natural numbers. Consider the vector space $\{P, F, \triangle, \odot\}$ where $F$ consists of $\{0, 1\}$ with addition defined modulo 2, $\triangle$ is the symmetric set difference, and $\odot$ is such that $1 \odot S = S$ and $0 \odot S$ is the zero vector.

(a) Let $S \in P$. What must $1 \odot S$ be?

Solution. It must be $S$, by axiom.

(b) What is the zero vector?

Solution. First of all note that $A \triangle B = (A \cup B) \setminus (A \cap B)$. For all $S \in P$, we must have
$$S \triangle \mathbf{0} = S.$$
In particular, $\mathbf{0} \triangle \mathbf{0} = \mathbf{0}$. But $\mathbf{0} \triangle \mathbf{0} = \emptyset$. Thus $\mathbf{0} = \emptyset$.

(c) Let $S \in P$. What is $-S$?

Solution. We must have $S \triangle (-S) = \mathbf{0} = \emptyset$. Therefore it must be that $S \cap (-S) \supseteq S \cup (-S)$ by the definition of $\triangle$. Then
$$S \subseteq S \cup (-S) \subseteq S \cap (-S) \subseteq S,$$
so $S = S \cup (-S)$. Also,
$$-S \subseteq S \cup (-S) \subseteq S \cap (-S) \subseteq -S,$$
so $-S = S \cup (-S)$. Thus $S = -S$.

(d) Prove that the following axiom
$$c(u + v) = cu + cv$$
is satisfied.

Solution. Note: This problem was badly crafted. Since we are given that $\{P, F, \triangle, \odot\}$ is a vector space, of course the axiom holds! What is there to prove? However, the intent of this problem was to verify that the axiom indeed holds. Let $S, T \in P$. If $c = 0$, then
$$0 \odot (S \triangle T) = \emptyset.$$
Since $0 \odot S = \emptyset$ and $0 \odot T = \emptyset$,
$$(0 \odot S) \triangle (0 \odot T) = \emptyset \triangle \emptyset = \emptyset$$
indeed. If $c = 1$, then
$$1 \odot (S \triangle T) = S \triangle T = (1 \odot S) \triangle (1 \odot T)$$
indeed. Thus we have verified that the axiom is satisfied.

15. Let $V$ and $W$ be vector spaces. We call a function $f : V \to W$ a LT if $f$ satisfies the condition
$$f(cu + v) = cf(u) + f(v)$$
for every scalar $c$ and all vectors $u$ and $v$. Prove that $f$ is a LT if and only if it is a linear transformation.

Solution.

Step 1: Suppose $f$ is a LT. Then in particular, $f(cu + v) = cf(u) + f(v)$ for $c = 1$, so
$$f(u + v) = f(u) + f(v).$$
The condition must also hold for $c = d - 1$ for all $d$ and for $u = v$, so
$$f((d - 1)u + u) = (d - 1)f(u) + f(u) \implies f(du) = df(u).$$
Thus $f$ is indeed a linear transformation.

Step 2: Suppose $f$ is a linear transformation. Then
$$f(cu + v) = f(cu) + f(v) = cf(u) + f(v)$$
for all $c$, $u$, $v$. Thus $f$ is a LT.

Thus $f$ is a LT iff it is a linear transformation.

16. Consider the function $T : F \to F$ defined by
$$T(f(x)) = f(x^2).$$
Is $T$ a linear transformation? Prove or disprove.

Solution. Yes. There are two things to check.

Step 1: Let $k$ be a constant. Then
$$T(kf(x)) = T((kf)(x)) = (kf)(x^2) = kf(x^2) = kT(f(x)).$$

Step 2: Let $f$ and $g$ be functions. Then
$$T(f(x) + g(x)) = T((f + g)(x)) = (f + g)(x^2) = f(x^2) + g(x^2) = T(f(x)) + T(g(x)).$$

Thus $T$ is a linear transformation.

17. In this problem, we are interested in properties of matrix transposition and symmetric matrices. A matrix is said to be symmetric if and only if $A = A^T$.

(a) Let $C = \begin{pmatrix} 4 & 3 \\ 2 & 1 \end{pmatrix}$ be a $2 \times 2$ matrix. If $D = [c_{ji} + j]_{ij}$, write out the matrix $D$.

Solution. (working omitted)
$$D = \begin{pmatrix} 5 & 4 \\ 4 & 3 \end{pmatrix}$$

(b) Using the $[\ \cdot\ ]_{ij}$ notation, prove that $(A^T)^T = A$.

Solution.
$$(A^T)^T = \big([a_{ij}]_{i,j}^T\big)^T = \big([a_{ji}]_{i,j}\big)^T = [a_{ij}]_{i,j} = A$$

(c) Using the $[\ \cdot\ ]_{ij}$ notation, prove that $(AB)^T = B^T A^T$. (Hint: Let $A = [a_{ij}]_{ij}$. Then what is $A^T$?)

Solution. Let $A$ be of size $m \times n$ and $B$ be of size $n \times p$. Let $a_{ij}$ denote the $ij$-th entry of $A$, $b_{ij}$ that of $B$, $a^*_{ij}$ that of $A^T$, and $b^*_{ij}$ that of $B^T$.
$$
\begin{aligned}
(AB)^T &= \big([a_{ij}]_{i,j}[b_{ij}]_{i,j}\big)^T \\
&= \left[\sum_{k=1}^{n} a_{ik}b_{kj}\right]_{i,j}^T \\
&= \left[\sum_{k=1}^{n} a_{jk}b_{ki}\right]_{i,j} \\
&= \left[\sum_{k=1}^{n} a^*_{kj}b^*_{ik}\right]_{i,j} \\
&= \left[\sum_{k=1}^{n} b^*_{ik}a^*_{kj}\right]_{i,j} \\
&= [b^*_{ij}]_{i,j}[a^*_{ij}]_{i,j} \\
&= B^T A^T
\end{aligned}
$$

(d) Using what you have done above, show that $AA^T$ and $A^T A$ are symmetric matrices.

Solution. $(AA^T)^T = (A^T)^T A^T = AA^T$ and $(A^T A)^T = A^T (A^T)^T = A^T A$. Since their transposes are themselves, they are symmetric.

(e) Prove or disprove: $AA^T = A^T A$.

Solution. Not true. Counterexample:
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Then
$$AA^T = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{but} \quad A^T A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$

(f) State the most general condition(s) under which $A + A^T$ is symmetric. Then prove that $A + A^T$ is symmetric with this condition.

Solution. The most general condition is that $A$ is square; then $A + A^T$ is defined. We have
$$(A + A^T)^T = A^T + (A^T)^T = A^T + A = A + A^T.$$
Since the transpose is itself, $A + A^T$ is symmetric.
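As an optional spot-check of Problem 17 (assuming NumPy; the particular matrices $A$ and $B$ below are arbitrary illustrations, not taken from the hand-out), the identities can be tested numerically.

```python
# Illustrative spot-check for Problem 17; not part of the original review sheet.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4))   # a non-square matrix
B = rng.integers(-5, 5, size=(4, 2))

# (AB)^T = B^T A^T
print(np.array_equal((A @ B).T, B.T @ A.T))    # True

# A A^T and A^T A are symmetric (each equals its own transpose)
print(np.array_equal(A @ A.T, (A @ A.T).T))    # True
print(np.array_equal(A.T @ A, (A.T @ A).T))    # True

# ...but they need not be equal to each other; 17(e)'s counterexample:
C = np.array([[0, 1], [0, 0]])
print(C @ C.T)   # [[1 0], [0 0]]
print(C.T @ C)   # [[0 0], [0 1]]
```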
18. A matrix $[a_{ij}]_{ij}$ is called a diagonal matrix if it is of the form
$$a_{ij} = \begin{cases} \lambda_i & \text{if } i = j \\ 0 & \text{otherwise.} \end{cases}$$
Suppose $A$ is a diagonal square matrix. Prove by induction that $A^k = [a_{ij}^k]_{ij}$ for $k \in \mathbb{N}$.

Solution. We induct on $k$.

Base case: $k = 1$. Then $A^1 = A$ and $[a_{ij}^1]_{i,j} = [a_{ij}]_{i,j} = A$. Thus $A^k = [a_{ij}^k]_{i,j}$ for $k = 1$.

Induction hypothesis: Suppose $A^k = [a_{ij}^k]_{i,j}$ for some $k \in \mathbb{N}$.

Inductive step: We want to show that $A^{k+1} = [a_{ij}^{k+1}]_{i,j}$. We have
$$
\begin{aligned}
A^{k+1} &= A^k A \\
&= [a_{ij}^k]_{i,j}\,[a_{ij}]_{i,j} \\
&= \left[\sum_{m=1}^{n} a_{im}^k a_{mj}\right]_{i,j} \\
&= [a_{ii}^k a_{ij}]_{i,j} \\
&= \left[\begin{cases} \lambda_i^k \lambda_i & i = j \\ 0 & i \neq j \end{cases}\right]_{i,j} \\
&= \left[\begin{cases} \lambda_i^{k+1} & i = j \\ 0 & i \neq j \end{cases}\right]_{i,j} \\
&= [a_{ij}^{k+1}]_{i,j}.
\end{aligned}
$$

Conclusion: By the principle of mathematical induction, $A^k = [a_{ij}^k]_{ij}$ for all $k \in \mathbb{N}$.

3 Just for fun

19. A museum has a main gallery and a north wing, south wing, east wing, and west wing. Each wing is only connected to the main gallery (i.e. the museum is a cross shape). The lighting in each room takes on only two states: ON or OFF. There is one light switch per room. Because the engineers are not from CMU, they completely messed up the wiring, so that flipping a switch in one room switches the states of the lights in that room and in the adjacent rooms. An example of this is shown in the following diagram:

[Diagram: the ON/OFF states of the five rooms, before and after the switch in the north wing is flipped.]

The president is on his way to visit the museum and the curator wants to turn on all the lights to make the museum look presentable. Alas, he doesn't know how. However, he spots you admiring a great piece of art. Knowing you are from CMU, he places his faith in you and enlists your help.

Whatever the state of the lights in the various rooms, can you always turn them all on? Prove or disprove.

GOOD LUCK ON THE EXAM!
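For anyone who wants to experiment with Problem 19 before proving anything, here is a small brute-force sketch (an illustrative addition, with a room encoding chosen here, not part of the hand-out): each lighting configuration is a 0/1 vector over the five rooms, and flipping a switch adds a fixed 0/1 vector modulo 2.

```python
# Exploratory brute force for Problem 19; not part of the original review sheet.
# Rooms are ordered (main, north, south, east, west); 1 means the light is ON.
from itertools import product

ROOMS = ["main", "north", "south", "east", "west"]

# Flipping a switch toggles that room and every adjacent room.
# Each wing is adjacent only to the main gallery.
TOGGLE = {
    "main":  (1, 1, 1, 1, 1),
    "north": (1, 1, 0, 0, 0),
    "south": (1, 0, 1, 0, 0),
    "east":  (1, 0, 0, 1, 0),
    "west":  (1, 0, 0, 0, 1),
}

def press(state, switch):
    """Return the lighting state after flipping one switch (addition mod 2)."""
    return tuple((s + t) % 2 for s, t in zip(state, TOGGLE[switch]))

all_on = (1, 1, 1, 1, 1)
solvable_count = 0
for start in product((0, 1), repeat=5):
    # Pressing a switch twice cancels out, so it is enough to try each
    # subset of switches once; there are 2**5 = 32 subsets.
    for subset in product((0, 1), repeat=5):
        state = start
        for room, pressed in zip(ROOMS, subset):
            if pressed:
                state = press(state, room)
        if state == all_on:
            solvable_count += 1
            break

print(solvable_count, "of 32 starting configurations can reach all-ON")
```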