Main topics for the Final
The final will cover Chapters 1, 2, 3, 4, and 6, as well as Sections 5.1-5.4 and 7.1-7.4 from Chapters 5 and 7.
This is essentially all material covered this term.
Watch out for these common mistakes:
• When writing a system of equations as an augmented matrix, remember that you must first put all of the
variables on one side of the equation.
• Make sure you can correctly find the solution to a system of equations, once it's in reduced row echelon form.
Don’t forget about any of the equations.
• If you have a linear transformation from Rm to Rn , does it correspond to an n × m matrix or an m × n matrix?
How can you remember?
• Remember that the entries of the matrix of some linear transformation cannot depend on any of the variables.
If you have a specific function from Rm to Rn , you should end up with a specific matrix.
• If you are composing the linear transformations T (~x) = A~x and S(~x) = B~x, does the composition S ◦ T
correspond to the matrix AB or to BA? How can you remember?
• If you are asked to find bases for the image and kernel of an n × m matrix (n rows, m columns), what type
of vectors should be in the image, n dimensional vectors or m dimensional vectors? What about the kernel?
• If you are trying to find a basis for the image of A by Gauss-Jordan, remember that your answer should be
columns of the original matrix.
• We know that the rank and nullity of a matrix sum to either the number of rows or the number of columns.
Which one is it? How can you remember?
• When you are working with arbitrary vector spaces, make sure you remember what your coordinates are,
and what your vectors are. For instance, for a space of polynomials, things like x or x2 are your vectors, and
so you should treat them in the same way you would treat vectors. To see if something is a linear space, or a
linear transformation, you really just need to understand what's happening to the coordinates. If everything
is linear in those coordinates, then you have something linear.
• When you are trying to find the matrix of some linear transformation, pay attention to the basis. You can
do this in the exact same way you would for a linear function from Rm to Rn , but you need to remember to
always use the given basis, instead of the standard basis.
• When finding a basis for a vector space, V , your answer should always be a list of elements that are in V .
Something like the column vector (1, 2, 3)T in R3 isn't in P2 .
• Don’t just memorize the formula for Gram-Schmidt, you will forget it. Focus on understanding it. Why are
we subtracting off the things that we are? If you understand this, you should be able to come up with the
formula on your own, if you ever forget it.
• When dealing with orthonormal bases and orthogonal projections, don’t get confused about AT A and AAT .
Both of these show up in different situations. How can you tell which one is the right one to use?
• While the formula ~x∗ = (AT A)−1 AT ~b technically works for finding a least squares solution, it's usually a
mistake to try using it. Just solve the system of equations AT A~x∗ = AT ~b normally (a worked instance
follows this list). We know ways to solve systems of equations that are much easier than finding the inverse
of a matrix.
• When counting the number of inversions in a pattern, remember that you need to think about all possible
pairs of entries.
• If you are finding the determinant of a matrix by Gauss-Jordan, remember that factoring something out of
a row will change the determinant (in what way?).
• It can be tempting to use ‘fancy’ techniques like Cramer’s rule or the adjoint matrix to do things like solving
systems of equations, or finding inverses of matrices, but this is usually a bad idea. Unless you have a very
good reason not to, you should probably use more basic techniques like Gauss-Jordan. This is usually faster,
easier to remember, and harder to mess up.
• Remember that λ = 0 can be an eigenvalue of a matrix. Really, there’s nothing special about λ = 0 when
you are talking about eigenvalues.
• If A is an n × n matrix, then its characteristic polynomial must have degree n. If it doesn’t, you’ve done
something wrong.
• When finding eigenvectors, remember that ~v = ~0 is NOT an eigenvector. If ~v = ~0 is the only solution to
the system A~v = λ~v , then λ is NOT an eigenvalue of A — you must have made a mistake in finding the
eigenvalues.
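For instance, here is the least squares point from the list above, illustrated (the numbers are invented for this sketch): form the normal equations and row-reduce, rather than inverting AT A.

```latex
% Least squares for the inconsistent system A\vec{x} = \vec{b}, with
A = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{bmatrix}, \qquad
\vec{b} = \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix}
% Normal equations A^T A \vec{x}^{\,*} = A^T \vec{b}:
A^T A = \begin{bmatrix} 3 & 3 \\ 3 & 5 \end{bmatrix}, \qquad
A^T \vec{b} = \begin{bmatrix} 4 \\ 7 \end{bmatrix}
% Row-reduce the augmented matrix instead of computing (A^T A)^{-1}:
\left[\begin{array}{cc|c} 3 & 3 & 4 \\ 3 & 5 & 7 \end{array}\right]
\rightsquigarrow
\left[\begin{array}{cc|c} 1 & 0 & -1/6 \\ 0 & 1 & 3/2 \end{array}\right],
\qquad
\vec{x}^{\,*} = \begin{bmatrix} -1/6 \\ 3/2 \end{bmatrix}
```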
Make sure you are comfortable with the following:
Systems of Linear Equations (1.1):
• Know how to use elimination or substitution to solve “simple” systems of linear equations.
• Know how to recognize when a system has infinitely many solutions, or no solutions. If a system has infinitely
many solutions, how would you go about finding all of them?
• Understand solutions to systems of equations geometrically, in terms of intersecting lines/planes.
• Know how to recognize situations when you must solve a system of linear equations.
Gauss-Jordan Elimination (1.2):
• Understand how a system of linear equations can be written as an augmented matrix.
– What are the dimensions of this matrix? If you have m variables and n equations, how many rows and
columns does the augmented matrix have?
– Do you lose any information by switching to the augmented matrix? Can you always get back to
the original system?
• Know the elementary row operations. What do they represent in terms of the original system of equations?
Why doesn’t applying an elementary row operation change the set of solutions to the system of equations?
• Know what it means for a matrix to be in reduced row echelon form.
• Know the Gauss-Jordan algorithm. That is, know how to turn any matrix into a matrix in reduced row
echelon form (a worked example follows this list).
• Know how to “read off” the solution to a system of equations, once the augmented matrix has been written
in reduced row echelon form.
– What does the final augmented matrix look like if the system has only one solution? If the system is
inconsistent?
– What if the system has infinitely many solutions? How do you recognize this? And how can you find
all solutions in this case?
• Know what leading variables and free variables are. What do they mean in terms of finding solutions to
systems of equations? What does the number of free variables mean in terms of the set of solutions? If there
is only 1 solution, how many free variables are there? What if the set of solutions forms a line? A plane?
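Here is the worked example promised above (an invented system, chosen so that it has infinitely many solutions):

```latex
% The system x + 2y + z = 3, \; 2x + 4y + 3z = 8, as an augmented matrix:
\left[\begin{array}{ccc|c} 1 & 2 & 1 & 3 \\ 2 & 4 & 3 & 8 \end{array}\right]
\rightsquigarrow
\left[\begin{array}{ccc|c} 1 & 2 & 1 & 3 \\ 0 & 0 & 1 & 2 \end{array}\right]
\rightsquigarrow
\left[\begin{array}{ccc|c} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 2 \end{array}\right]
% Leading variables x and z, free variable y = t. Read off:
x = 1 - 2t, \qquad y = t, \qquad z = 2
% One free variable, so the solution set is a line.
```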
Rank of a Matrix (1.3):
• Know what the rank of a matrix is, how to find it, and why it is important.
• If A is an n × m matrix, what is the largest possible value of rank(A)?
• If the n × m matrix A is the coefficient matrix of a system of linear equations (i.e. the system is A~x = ~b)
under what conditions on m, n and rank(A) will:
– The system always have at least one solution? (i.e. never be inconsistent)
– Never have more than one solution?
– Always have exactly one solution?
• Can you interpret the above in terms of a linear transformation being injective, surjective, or both?
• If the system A~x = ~b has a solution, how can you find the dimension of the set of solutions? Does this
quantity depend on ~b? (See the formula after this list.)
• If an n × n matrix has rank n, what is its reduced row echelon form? What if an n × m matrix has maximal
possible rank? What can you say about its rref?
• If a system of equations has n equations and m unknowns with n < m, is it possible for the system to have
exactly one solution? What about no solutions?
• Remember, intuitively you can think of the rank of a system of equations as being the “actual” number of
equations. You can always rewrite the system A~x = ~b as a system with exactly rank(A) equations. In this
situation, adding each equation really does decrease the dimension of the set of solutions by one.
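The dimension count referenced above, stated as a formula (this is just rank-nullity, in the conventions of this sheet):

```latex
% For an n \times m coefficient matrix A (n equations, m unknowns),
% if A\vec{x} = \vec{b} is consistent, the solution set has dimension
m - \operatorname{rank}(A)
% (the number of free variables). This depends on A alone, not on \vec{b}.
```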
Linear Transformations (2.1):
• Understand what it means for a function T : Rm → Rn to be linear. Know how this is different from
saying that T is affine. (Before taking this course, when you used the term linear, you probably meant affine.
Remember the distinction.)
• Understand why you can completely describe a linear transformation T : Rm → Rn by simply giving the
matrix of coefficients. What are the dimensions of this matrix?
– Make sure you understand this! The relationship between a matrix and a linear transformation is
the single most important thing we will learn this term. Everything else we do will be based on this.
• Understand how a system of linear equations can be written as A~x = ~b for some matrix A, and thus can be
interpreted as T (~x) = ~b for a linear transformation T . This means that understanding linear transformations
is the same thing as understanding systems of linear equations.
• Given an n × m matrix, know how to use this to define a linear transformation Rm → Rn . Remember to
pay attention to the dimensions. How would you calculate the image of a specific vector ~v ∈ Rm ?
• Know how to find the matrix corresponding to an explicitly given linear transformation. For instance, what
matrix corresponds to the map T that sends (a, b, c) to (a + 2b − 3c, b − c)? Be sure to get the dimensions
right, and remember that the entries of the final matrix can't depend on the inputs a, b and c. (This example
is completed after this list.)
• Know what the standard basis vectors e1 , e2 , . . . , em of Rm are, and know how any vector ~v ∈ Rm can be
written as a sum ~v = x1 e1 + x2 e2 + · · · + xm em (this is called a linear combination of e1 , e2 , . . . , em ).
• Know why a function T : Rm → Rn that satisfies the two properties T (~x + ~y ) = T (~x) + T (~y ) and T (k~x) =
kT (~x) must be linear. Understand why in this case, the vectors T (e1 ), T (e2 ), . . . , T (em ) completely determine
the function T , and know how to easily find the matrix representing T from these vectors.
• Know how to find the matrix representing simple linear transformations like T (~x) = ~0, T (~x) = ~x or T (~x) =
k~x.
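Completing the example from the list above: compute T (~e1 ), T (~e2 ), T (~e3 ) and take them as the columns.

```latex
T(a, b, c) = (a + 2b - 3c,\; b - c)
% Images of the standard basis vectors become the columns:
T(\vec{e}_1) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad
T(\vec{e}_2) = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \quad
T(\vec{e}_3) = \begin{bmatrix} -3 \\ -1 \end{bmatrix}
\quad\Longrightarrow\quad
A = \begin{bmatrix} 1 & 2 & -3 \\ 0 & 1 & -1 \end{bmatrix}
% A 2 x 3 matrix, since T maps R^3 to R^2; no entry depends on a, b, or c.
```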
Geometric Transformations (2.2):
• Understand why most common geometric transformations are linear. (This is because addition and scalar
multiplication of vectors can be defined geometrically, so any transformation that preserves the pictures
defining addition/multiplication should be linear.)
• Know how to find the 2 × 2 matrix representing a rotation of the plane about the origin, through an angle
of θ (written out after this list).
• Know how to find matrices representing other transformations, such as reflection, orthogonal projection,
scaling, or shears.
• Also know how to find matrices of geometrical transformations in higher dimensions (e.g. rotations or
reflections in three dimensions).
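The rotation matrix mentioned above, written out (a standard formula):

```latex
% Counterclockwise rotation of the plane about the origin through angle \theta:
R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
% Its columns are T(\vec{e}_1) and T(\vec{e}_2), i.e. where the standard
% basis vectors land; that is how to reconstruct it if you forget it.
```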
Composition of linear transformations (2.3):
• If S, T : Rm → Rn are linear transformations, understand why (S + T ) is also a linear transformation. What
about kT , where k ∈ R? How do you find the matrices corresponding to these, in terms of the matrices of
S and T ?
• Now assume that S : Rm → Rp and T : Rp → Rn are linear transformations. Understand why the
composition (T ◦ S)(~x) = T (S(~x)) is also a linear transformation.
• If A is the matrix corresponding to S and B is the matrix corresponding to T , know how to find the matrix
corresponding to T ◦ S in terms of A and B. This is denoted by BA, and is called the matrix product. Make
sure you know how to calculate it.
• Make sure you understand why the matrix product is defined the way that it is. It is not arbitrary; it is
defined in exactly the correct way to make it agree with composition of functions.
• Understand why we generally don't have AB = BA, and why it's possible to have AB = 0 with A, B ≠ 0 (or
even An = 0, but A ≠ 0). These might seem strange if you are used to multiplication of real numbers, but
if you think of matrix multiplication as function composition, they may seem much more natural. (A small
example follows this list.)
• Remember that the product AB will not even be defined for some choices of matrices A and B that you
pick. Under what conditions will it exist? What does this mean in terms of function composition?
• Know how to use the function interpretation of matrix multiplication to show that matrix multiplication is
associative (i.e. A(BC) = (AB)C) without any extra calculations.
• Know what the identity matrix In is, and why it is significant. What is In A, or AIm (where A is n × m)?
• Know how to use matrix multiplication to find the matrices corresponding to complicated linear transformations, by writing them in terms of simpler ones (such as writing orthogonal projection onto a line as a
composition of two rotations and a simpler orthogonal projection).
– When you are doing this, be sure to pay attention to the order in which you are applying the transformations. The first one you apply should be on the right (for the same reason that f (g(x)) means
applying g first, and then f ).
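The small example promised above (two invented 2 × 2 matrices showing that order matters):

```latex
A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}:
\qquad
AB = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}
\;\neq\;
BA = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}
% T \circ S means "apply S first," so its matrix is BA: the matrix of
% the first transformation applied goes on the right.
```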
Inverses (2.4):
• Know what it means for a function f : X → Y to be injective, surjective or bijective. What do these mean
in terms of the equation f (x) = b?
• Understand why a bijective function f : X → Y must have an inverse f −1 : Y → X.
• If T is a linear function which is bijective, understand why T −1 must also be linear. If A and A−1 are the
matrices corresponding to these linear transformations, then what are AA−1 and A−1 A? What is (A−1 )−1 ?
• Know how to tell if an n × m matrix A has an inverse. What must be true of n, m and rank(A)?
• If A is invertible, understand why the system of equations A~x = ~b can be rewritten as ~x = A−1~b. This gives
us a very easy way to solve any arbitrary linear equation involving A.
• Know how to find the inverse of a matrix. If A is an n × n invertible matrix, we must find some X with
AX = In . Doing this gives us n systems of linear equations (one for each column) for the entries of X.
Understand why this can be written as a single augmented matrix [A|In ]. What do you get when you write
this augmented matrix in rref?
• What would happen in the above procedure if A was not invertible? Would you be able to finish the process
and get an (incorrect) answer for A−1 ? Is it necessary to make sure that A is invertible before trying to
calculate the inverse, or will you figure out that it isn't invertible in the process of trying to find A−1 ?
• Understand why knowing that AB = BA = In automatically implies that B = A−1 . What if we only had
BA = In ?
• Know how to tell when a 2 × 2 matrix A = [ a b ; c d ] is invertible. If it is invertible, what is the inverse, in
terms of a, b, c and d? In Chapter 6, we will do the same thing for n × n matrices. (The 2 × 2 formula is
recalled after this list.)
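The 2 × 2 formula referred to above (standard, and worth verifying once by multiplying it out):

```latex
A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
\text{ is invertible} \iff ad - bc \neq 0, \text{ and then}
\quad
A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}
% Check: AA^{-1} = \frac{1}{ad-bc}\begin{bmatrix} ad-bc & 0 \\ 0 & ad-bc \end{bmatrix} = I_2.
```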
Image and Kernel (3.1):
• Know what the image of a function f : X → Y is, and how to find it.
• If T is a linear transformation, know how to find the image of T .
• Know what the span, span(v~1 , v~2 , . . . , v~m ) of a set of vectors v~1 , v~2 , . . . , v~m is. What does this mean geometrically? What is the span of a single vector? Two non-parallel vectors?
• Understand why the image of a matrix A is just the span of its columns.
• Understand what the kernel of a linear transformation is. What does it mean in terms of systems of equations,
and why should we care about it?
• Know how to use Gauss-Jordan elimination to find the kernel of a matrix. You should be able to express
the kernel as the span of a set of vectors. How many vectors do you get, and what do they correspond to?
(A small example follows this list.)
• If you know the kernel of a matrix A, what does that tell you about the solutions to a system of equations
like A~x = ~b? If you have one solution, how do you find the others? Geometrically, what does the set of
solutions look like, and how does it relate to ker(A)?
• If ker(T ) = {~0}, where T is a linear transformation, what can you conclude about T ?
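The small kernel example promised above (an invented matrix):

```latex
A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{bmatrix}
\rightsquigarrow
\operatorname{rref}(A) = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{bmatrix}
% Free variables x_2 = s, x_3 = t force x_1 = -2s - 3t, so
\ker(A) = \operatorname{span}\!\left(
  \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix},
  \begin{bmatrix} -3 \\ 0 \\ 1 \end{bmatrix}
\right)
% One spanning vector per free variable: \dim\ker(A) = 2 = 3 - \operatorname{rank}(A).
```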
Subspaces of Rn (3.2):
• Understand what a subspace of Rn is. This is just a subset of Rn which satisfies a few simple properties
(namely that you can’t get out of it by adding two vectors, or by taking the scalar multiple of a vector).
• Know what these look like in a low number of dimensions. What is a 1-dimensional subspace? A 2-dimensional subspace? What do the subspaces of R2 or R3 look like?
• Know how to check if something is a subspace (this just amounts to checking that the properties hold).
• Know why ker(T ) and im(T ) are subspaces, and more generally, span(v~1 , v~2 , . . . , v~m ) is a subspace of Rn for
any v~1 , v~2 , . . . , v~m ∈ Rn .
• Understand why having v~1 , v~2 , . . . , v~m ∈ W , for W a subspace of Rn , implies that span(v~1 , v~2 , . . . , v~m ) ⊆ W .
Bases and Dimension (3.2,3.3):
• Given a list of vectors v~1 , v~2 , . . . , v~m , understand what it means for some of them to be redundant, and
understand why deleting the redundant vectors doesn’t change the span.
• Know why a set of vectors v~1 , v~2 , . . . , v~m is linearly independent (i.e. has no redundant vectors) if and only
if it has no nontrivial relations in the form c1 v~1 + c2 v~2 + · · · + cm v~m = 0.
• If the columns, v~1 , v~2 , . . . , v~n of an n × m matrix A are linearly independent, then what is ker(A)?
• In general, how do relations, c1 v~1 + c2 v~2 + · · · + cm v~m = 0, between the columns of a matrix A relate to
elements of ker(A)?
• Know what it means for a set of vectors v~1 , v~2 , . . . , v~m ∈ W to be a basis for W . Why does this mean that
any ~w ∈ W can be written uniquely as ~w = c1 v~1 + c2 v~2 + · · · + cm v~m ?
• Know why any two bases of a subspace W must have the same number of elements (called the dimension of
W ).
• Understand why this definition of dimension lines up with your intuitive understanding of dimension. What
is the dimension of a line? Of a plane? Of Rn ?
• Know how to use Gauss-Jordan elimination to find bases for ker(A) and im(A). How do the dimensions of
these spaces relate to the rank of A?
• What is dim(ker(A)) + dim(im(A))?
Linear spaces (4.1):
• Know what it means for a set to be a linear space (vector space), and know how to recognize if something
is a linear space.
• Be familiar with the common examples of linear spaces (Pn , Rn×m , C, F (R, R), C∞ , etc.). Know how to
construct other linear spaces as subspaces of these (and how to recognize if a given subset of one of these is
linear).
– When you are doing this, it's very important to keep track of what the elements of your space are.
For instance, in P2 the elements are polynomials, a + bx + cx2 . This means that things like x or x2
should be treated like vectors, not numbers (and so there is no ‘value’ of x; these are functions), and the
coefficients a, b and c should be treated as coordinates. In particular, you should never have x appearing
in the entries of a vector or matrix. Something like [ 1 x ; x x2 ] is just as meaningless as [ ~e1 ~e2 ; ~e2 ~e3 ].
• Understand how pretty much everything we learned about Rn in chapters 2 and 3 can be done for any linear
space.
– Once you understand and internalize this idea, the material from chapter 4 will start to seem much
easier. There is very little ‘new’ material in this chapter. Almost everything we learn here is just
something you already learned earlier in the term, just phrased in a slightly more general way.
– If you are stuck on a problem about a linear space, it is a good idea to think about what the equivalent
problem about Rn would be. If you know how to solve that problem, you should be able to solve the
original one in essentially the same way.
• In particular, understand how the concepts of linear independence, span, bases and dimensions generalize to
linear spaces.
• Know how to find a basis for a vector space, and use this to determine the dimension.
– Generally, this amounts to writing out what the elements of V look like. For instance, elements of P2
look like a + bx + cx2 for arbitrary a, b and c. Describing the elements of P2 in this form is exactly the
same thing as writing the elements of P2 as a linear combination of the vectors 1, x and x2 (where the
coefficients are your choice of coordinates, a, b and c). If there are no relations between your chosen
coordinates (i.e. any choices of a, b and c give you an element of P2 ) then this set 1, x, x2 is a basis.
– For example, if V is the set of polynomials in P2 with f 0 (1) = 0, then an arbitrary element of V can be
written as f (x) = a + bx + cx2 , with b = −2c. This is equivalent to saying f (x) = a − 2cx + cx2 , where
there is now no restriction on a and c. Thus V has basis 1, x2 − 2x.
• What are the dimensions of the spaces Pn , Rn×m and C? What is a simple choice of basis for each space?
• If B = (f1 , . . . , fn ) is a basis for V , understand how B allows you to think of V as being the space Rn .
Given some f ∈ V and a basis B for V , what do you need to do to find the vector [f ]B ∈ Rn corresponding
to f ? Remember that this depends on your choice of basis B. (A small example follows this list.)
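The small example promised above (an invented polynomial, using the standard basis of P2):

```latex
% In V = P_2 with B = (1, x, x^2), take f(x) = 4 - x + 3x^2:
f = 4 \cdot 1 + (-1) \cdot x + 3 \cdot x^2
\quad\Longrightarrow\quad
[f]_B = \begin{bmatrix} 4 \\ -1 \\ 3 \end{bmatrix} \in \mathbb{R}^3
% A different basis for P_2 would give the same f a different
% coordinate vector, so always record which basis you used.
```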
Linear Transformations and Isomorphisms (4.2):
• Understand what it means for a function T : V → W to be a linear transformation, and know how to
recognize when a function is linear.
– To figure out if T is linear, you should think about what it does to the coordinates of your vectors. If
you understand why a linear space is just the same thing as Rn , determining whether T is linear is just
the same thing as determining whether a map Rm → Rn is linear, which is likely something you can do.
• Be familiar with simple examples of linear transformations. For instance:
– Derivatives or integrals.
– The map T (f ) = f (c) from Pn to R, where c is a constant.
– The map T (~x) = ~v · ~x from Rn to R, where ~v ∈ Rn is a constant vector.
– The maps T (X) = AX or T (X) = XB (or T (X) = AXB), where X is in Rn×m , and A and B are
constant matrices of the right dimension.
• Know how concepts such as the image and kernel, or rank-nullity, generalize to this context.
• Again, make sure you understand why these aren't anything ‘new’; they are exactly the same things we
considered in chapters 2 and 3, just in a slightly different context. If you know how to work with linear
transformations from Rm to Rn , then general linear transformations shouldn't be any harder.
• Know what it means for a linear transformation T : V → W to be an isomorphism, and how we determine
if T is one (if dim V = dim W , there’s no need to check that it is both injective and surjective).
• Understand what it means when we say that two isomorphic vector spaces are ‘the same.’
• If B = (f1 , . . . , fn ) is a basis for V , understand how this gives an isomorphism from V to Rn , and so we can
usually think of V as just being the same as Rn . (But do keep in mind that this depends on the choice of
B, so sometimes it's better to think of V and Rn as being different things.)
The matrix of a linear transformation (3.4,4.3):
• If T : V → W is a linear transformation, and A = (f1 , . . . , fm ) and B = (g1 , . . . , gn ) are bases for V and
W , understand how T can be thought of as a map from Rm to Rn , and thus as corresponding to a matrix,
[T ]A,B (and remember that this depends on the choice of A and B).
• Understand why [T ]A,B [f ]A = [T (f )]B , for any f ∈ V .
• Know how to find the matrix [T ]A,B .
– This is another situation where understanding the case for maps T : Rm → Rn helps a lot. To find
the matrix corresponding to T : Rm → Rn , one simply computes T (~e1 ), . . . , T (~em ) and takes these to
be the columns of the matrix. To find the matrix of a map T : V → W one does essentially the same
thing, except with the bases A = (f1 , . . . , fm ) and B = (g1 , . . . , gn ) instead of the standard bases for
Rm and Rn . Namely, one computes T (f1 ), T (f2 ), . . . , T (fm ), and finds the coordinate vector of each
one, with respect to the basis B (which essentially amounts to writing each T (fi ) as a linear combination
of g1 , . . . , gn ). A worked example of this recipe follows this list.
– Again, remember that this depends on the basis. If the basis B is not the standard basis of your space,
then make sure you do NOT use the standard basis when you write T (fi ) as a coordinate vector.
– Also, remember that everything you know from chapter 2 still applies. If dim(V ) = m and dim(W ) = n,
what are the dimensions of [T ]A,B ?
• If A and B are two different bases for the same space V , know how to find the change of basis matrix
S = SA→B satisfying S[f ]A = [f ]B , and know how to use this to find [f ]B , given [f ]A .
• Understand why SB→A = (SA→B )−1 .
• If T : V → V is a linear transformation, understand how to compute [T ]B given [T ]A and S = SB→A .
• Know what it means for two matrices, A and B, to be similar, and why we sometimes say this means that
A and B represent the same function.
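The worked example promised above: the derivative map on P2 (a standard example), done by the recipe just described.

```latex
% T = d/dx : P_2 \to P_2, using B = (1, x, x^2) on both sides:
T(1) = 0, \qquad T(x) = 1, \qquad T(x^2) = 2x
% Their coordinate vectors with respect to B are the columns:
[T]_B = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}
% Check: for f = a + bx + cx^2, [T]_B [f]_B = (b, 2c, 0)^T = [f']_B,
% since f' = b + 2cx.
```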
Orthogonality (5.1):
• Know what the dot product of two vectors ~v , ~w ∈ Rn is, and know how to use this to compute the lengths
of vectors, and angles between two vectors. In particular, understand why ~v and ~w are perpendicular
(orthogonal) if and only if ~v · ~w = 0.
• Know what it means for a set of vectors ~u1 , . . . , ~um ∈ Rn to be orthonormal.
• Understand why an orthonormal set of vectors is automatically linearly independent, and why a set of n
orthonormal vectors in Rn is automatically a basis.
• Understand why ~e1 , . . . , ~en is an orthonormal basis for Rn . Are there other orthonormal bases? Will a basis
of Rn ‘usually’ be orthonormal?
• If ~u1 , . . . , ~un is an orthonormal basis of Rn , and ~x ∈ Rn , know how to easily find the coordinates of ~x with
respect to the basis ~u1 , . . . , ~un (that is, find c1 , . . . , cn such that ~x = c1 ~u1 + · · · + cn ~un ).
• If V is a subspace of Rn , understand what the orthogonal projection, projV (~x) of ~x ∈ Rn onto V is.
• Know how to compute projV (~x), if you are given an orthonormal basis ~u1 , . . . , ~um for V (the formula is
recalled below).
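The formula in question, recalled for reference: if ~u1 , . . . , ~um is an orthonormal basis for V , then

```latex
\operatorname{proj}_V(\vec{x})
  = (\vec{u}_1 \cdot \vec{x})\,\vec{u}_1 + \cdots + (\vec{u}_m \cdot \vec{x})\,\vec{u}_m
% This only works because the \vec{u}_i are orthonormal; for a general
% basis you would have to solve a system of equations instead.
```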
The Gram-Schmidt Process (5.2):
• Understand why it is often important to find an orthonormal basis for a subspace of Rn .
• If ~v is a nonzero vector, understand how to find a unit vector ~u parallel to ~v .
• If ~u is a unit vector and ~w is any other vector, know how to find the constant k for which ~w⊥ = ~w − k~u is
perpendicular to ~u. If (~u, ~w) is a basis for a subspace V , why is (~u, ~w⊥ ) also a basis for V ?
• If two vectors ~v and ~w form a basis for V , know how to use the above two bullet points to find an orthonormal
basis for V .
• In general, if ~v1 , . . . , ~vm is a basis for V , know how to use the Gram-Schmidt process to find an orthonormal
basis for V (a short worked run follows this list).
• Make sure you really understand how to do this process. If you simply try to memorize the formulas without
understanding them, you will almost certainly get something wrong. Focus on understanding why the formula
is what it is. For instance, ask yourself:
– Why do we only turn the first vector into a unit vector at the start?
– When we want to find ~vj⊥ we need to subtract off multiples of some other vectors. Which vectors are
we subtracting, and why?
– How do we figure out what multiples of these vectors to use?
– How do we know that ~vj⊥ is perpendicular to ~u1 , . . . , ~uj−1 ?
– At which point do we turn each vector into a unit vector?
– How do you know that ~u1 , . . . , ~um is still a basis for the same space as ~v1 , . . . , ~vm ?
• Make sure that you can actually do these computations. Don’t just learn the formulas and think you’ll be
able to use them correctly on the test. Practice them!
• What would you do if I asked you to find an orthonormal basis for some subspace V of Rn , but didn’t give
you a basis to start with?
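The short worked run promised above (two invented vectors in R3, chosen to keep the arithmetic clean):

```latex
% Gram-Schmidt on \vec{v}_1 = (3, 4, 0)^T and \vec{v}_2 = (1, 0, 0)^T.
\vec{u}_1 = \frac{\vec{v}_1}{\lVert \vec{v}_1 \rVert}
  = \begin{bmatrix} 3/5 \\ 4/5 \\ 0 \end{bmatrix}
% Subtract off the component of \vec{v}_2 along \vec{u}_1:
\vec{v}_2^{\,\perp} = \vec{v}_2 - (\vec{u}_1 \cdot \vec{v}_2)\,\vec{u}_1
  = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
    - \frac{3}{5} \begin{bmatrix} 3/5 \\ 4/5 \\ 0 \end{bmatrix}
  = \begin{bmatrix} 16/25 \\ -12/25 \\ 0 \end{bmatrix}
\vec{u}_2 = \frac{\vec{v}_2^{\,\perp}}{\lVert \vec{v}_2^{\,\perp} \rVert}
  = \begin{bmatrix} 4/5 \\ -3/5 \\ 0 \end{bmatrix}
% Check: \vec{u}_1 \cdot \vec{u}_2 = 12/25 - 12/25 = 0, and both are unit vectors.
```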
Orthogonal Transformations (5.3):
• Understand what the transpose of a matrix is. If A is an n × m matrix, what are the dimensions of AT ?
How would you find AT if you knew A?
• If A and B are matrices (of the appropriate dimensions), what is (A+B)T ? What is (kA)T ? (AB)T ? (AT )T ?
• Understand why the dot product ~v · ~w can be thought of as the matrix product ~v T ~w. What does this mean
about (A~v ) · ~w, or (A~v ) · (A~w), where A is an n × n matrix?
• Know what it means for a linear transformation T : Rn → Rn to be orthogonal. If A is the matrix representing
T , what must be true about A?
• If A is an orthogonal matrix (i.e. the matrix of an orthogonal transformation) what must be true about the
columns of A? In terms of linear transformations, what must be true about the vectors T (~e1 ), T (~e2 ), . . . , T (~en )
in order for T to be orthogonal?
• If V is a subspace of Rn , understand why the map T : Rn → Rn given by T (~x) = projV (~x) is linear.
• If ~u1 , . . . , ~um is an orthonormal basis for V , know how to find the matrix representing T (~x) = projV (~x).
Remember that this should be an n × n matrix (see the sketch after this list).
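The sketch promised above: if Q denotes the n × m matrix whose columns are the orthonormal basis ~u1 , . . . , ~um of V (notation introduced here just for this sketch), then

```latex
% Matrix of T(\vec{x}) = \operatorname{proj}_V(\vec{x}):
P = QQ^T \qquad (n \times n)
% By contrast, Q^T Q = I_m (m \times m), because the columns of Q are
% orthonormal. This is one place the AA^T vs. A^T A distinction matters.
```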
Least Squares (5.4):
• If V is a subspace of Rn , what is the space V ⊥ ? How does this relate to projV ? How does dim V ⊥ relate to
dim V ?
• If A is an n × m matrix, then A represents a linear transformation Rm → Rn . AT also represents a linear
transformation. Which spaces does it map between?
• Understand why ker(AT ) = (im A)⊥ . How can you use this observation to determine whether a vector ~v ∈ Rn
is perpendicular to im A? If ~x ∈ Rn and ~x∗ = projim A (~x), what can you say about AT (~x − ~x∗ )?
• What is the relationship between rank(A) and rank(AT )?
• Understand why ker(A) = ker(AT A). If A : Rm → Rn is injective, what can you say about AT A?
• Understand why, in real life, it is often unreasonable to assume that the systems of equations you consider
will be consistent. If a system of equations does not literally have a solution, what should you try to do?
• Understand what the least squares solution to a system of equations A~x = ~b is. How is this different from
simply asking for the solution to the equation?
– Remember, if you are trying to find the least squares solution to A~x = ~b, then you are not actually
trying to solve this equation. Therefore you CANNOT use techniques like Gauss-Jordan, or any of our
other tricks, to solve it. DO NOT think of these as actual systems of equations.
• If ~x∗ is a least squares solution to A~x = ~b, understand why ~x∗ is an actual solution to AT A~x = AT ~b. Know
how to use this to find least squares solutions.
– When solving problems like this, make sure you remember the difference between AT A and AAT , these
are very different matrices. This can be a little confusing, as both of these products do show up, in
different contexts (one for finding least squares solutions, the other for finding matrices of orthogonal
projections). If you get confused, try to think whether you want an m × m matrix or an n × n matrix.
Determinants (6.1):
"
#
a b
• Understand why the determinant of a 2 × 2 matrix A =
, det A = ad − bc, determines whether A is
c d
invertible. What does this mean geometrically.
• If A is a 3 × 3 matrix with columns ~u, ~v and ~w, understand why it is reasonable to define det A to be ~u · (~v × ~w).
Why is this nonzero if and only if A is invertible? What does this mean geometrically?
• Know how to compute the determinant of a 3 × 3 matrix. The formula for this is quite complicated, but
there is an easy way to remember it.
• Know how to generalize the 3 × 3 case to an n × n matrix.
– Computing the determinant of an n × n matrix involves taking a bunch of products of n entries. How
can you tell which products to include, and which not to include?
– For each such product, how do you figure out if you should add or subtract it?
• If a matrix has a lot of zeros, do you necessarily consider all patterns when computing the determinant?
How do you figure out which ones you need to consider?
• How do you find the determinant of an upper triangular matrix? (An example follows this list.)
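The upper triangular case from the list above, illustrated (an invented matrix): only the diagonal pattern avoids all the zeros below the diagonal.

```latex
\det \begin{bmatrix} 2 & 7 & 1 \\ 0 & 3 & 5 \\ 0 & 0 & 4 \end{bmatrix}
  = 2 \cdot 3 \cdot 4 = 24
% The diagonal pattern has no inversions, so it carries a + sign;
% every other pattern includes a factor of 0 from below the diagonal.
```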
Properties of determinants (6.2):
• Understand what it means to say that the determinant is linear in each of its rows and columns. Why is
this true?
• Know what happens to the determinant of a matrix A when you:
– Switch two rows of A.
– Multiply the ith row of A by k.
– Add k times the j th row to the ith row.
• Understand how the above three properties allow you to find the determinant of any matrix by using Gauss-Jordan. Usually, this will be the fastest way to find a determinant.
• If two rows of a matrix A are equal, what can you say about det A?
• Understand what effect applying a matrix A : Rn → Rn has on the ‘volume’ of some object in Rn . What
does this have to do with the relation det(AB) = (det A)(det B)?
• What does the fact that det(AB) = det(A) det(B) imply about det(A−1 )? det(Am )? det(S −1 AS)? (The
last one is worked out after this list.)
• Understand how we can define the determinant of a linear transformation T : V → V . Why does this not
depend on the choice of basis?
• Understand why det(AT ) = det(A). What does this imply about the determinant of an orthogonal matrix?
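The last item above, worked out in one line (using det(AB) = det(A) det(B) and det(A−1 ) = 1/ det A):

```latex
\det(S^{-1} A S) = \det(S^{-1}) \det(A) \det(S)
  = \frac{1}{\det S} \, \det(A) \, \det(S)
  = \det(A)
% Similar matrices have equal determinants, which is why the determinant
% of a linear transformation does not depend on the choice of basis.
```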
Cramer’s Rule (6.3):
• Know how to explicitly solve the system of equations a11 x1 + a12 x2 = b1 , a21 x1 + a22 x2 = b2 (with
a11 a22 − a12 a21 ≠ 0) in terms of the a’s and b’s. How can you express this in terms of determinants? What
does this mean geometrically, in terms of areas of parallelograms? (The determinant form is written out
after this list.)
• Know how to generalize this to systems of n equations in n variables (of rank n), i.e. how to find an explicit
formula for the solution in terms of the coefficients.
• Try to focus on remembering why this is true; it's very easy to forget this formula if you don't understand it.
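The determinant form promised above, for the 2 × 2 case:

```latex
x_1 = \frac{\det \begin{bmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{bmatrix}}
           {\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}},
\qquad
x_2 = \frac{\det \begin{bmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{bmatrix}}
           {\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}}
% Replace the i-th column of A by \vec{b}, take determinants, divide.
% The common denominator is a_{11}a_{22} - a_{12}a_{21}, assumed nonzero.
```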
Laplace Expansion (6.2,6.3):
• Know how to express a 3 × 3 determinant as a sum/difference of multiples of 2 × 2 determinants.
• Understand what this means in terms of choosing patterns in a matrix. Given some row (or column) you
can classify all patterns based on which element they include from this row.
• Understand how this allows you to express an n × n determinant in terms of n different (n − 1) × (n − 1)
determinants. How do these (n − 1) × (n − 1) matrices relate to the original matrix? (The expansion is
written out after this list.)
• Know how to figure out which sign you use for each minor using Laplace expansion. Is there an easy way to
remember?
• Usually it will still be easier to compute determinants using Gauss-Jordan. In what situations will Laplace
expansion be easier?
• If an entire row (or column) of a matrix A consists entirely of 0’s, what can you say about det A?
• Know how to determine the inverse of A by solving the systems of equations A~x = ~e1 , A~x = ~e2 , . . . , A~x = ~en .
• Know how to use Laplace expansion to simplify applying Cramer’s rule in this situation. How can you use
this to get an explicit formula for A−1 , using det A and the minors of A?
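The expansion promised above, written along the first row (here A1j denotes the (n − 1) × (n − 1) matrix obtained by deleting row 1 and column j of A):

```latex
\det A = \sum_{j=1}^{n} (-1)^{1+j} \, a_{1j} \det(A_{1j})
% The signs (-1)^{i+j} form a checkerboard starting with + in the
% top-left corner; expanding along a row or column with many zeros
% kills most of the terms.
```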
Eigenvalues and Eigenvectors (7.1):
• Understand why the standard basis is often not the best basis to use when working with some matrix. For
instance, is it (usually) easy to find A1000 ? Would it be easier if A was diagonal?
• Know what it means to say that ~v is an eigenvector of A, and that λ is the corresponding eigenvalue.
• If ~v is an eigenvector with eigenvalue 0, what is A~v ?
• Know what an eigenbasis for A is.
• If B is an eigenbasis for A, understand why the matrix [A]B is diagonal.
• Understand why A has an eigenbasis if and only if it is diagonalizable, that is, if there is some invertible
matrix S for which S −1 AS is diagonal. How does S relate to the eigenbasis? What are the entries on the
diagonal of S −1 AS?
• Know how to simplify (S −1 AS)t , and how to use this to compute At , when A is diagonalizable (but not
necessarily diagonal). (This computation is sketched after this list.)
"
#
"
#
1 1
0 −1
• Know why some matrices, such as
or
are NOT diagonalizable.
0 1
1 0
• Know how to find the eigenvalues and eigenvectors of a geometrical transformation. What are the eigenvalues/vectors of a reflection? An orthogonal projection? A rotation?
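The computation sketched, as promised above: if S −1 AS = D is diagonal, then A = SDS −1 and the inner factors telescope.

```latex
A^t = (S D S^{-1})^t
    = S D (S^{-1} S) D (S^{-1} S) \cdots D S^{-1}
    = S D^t S^{-1}
% Each S^{-1}S cancels to the identity, and D^t just raises each
% diagonal entry (eigenvalue) to the t-th power.
```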
Computing Eigenvalues (7.2):
• Understand why λ is an eigenvalue of A if and only if ker(λIn − A) ≠ {~0}.
• Understand why this means that λ is an eigenvalue of A if and only if det(λIn − A) = 0. Know how to use
this to find all of the eigenvalues of a matrix, by just finding the roots of a polynomial.
• Know how to find the characteristic polynomial of a matrix. This is just finding a determinant, but it may be
difficult to use Gauss-Jordan here (why?). This is a situation where you would likely want to use a different
method of finding a determinant (such as our explicit formulas for n = 2 or n = 3, or Laplace expansion for
larger matrices). (A 2 × 2 shortcut is recalled after this list.)
• Know what the algebraic multiplicity of an eigenvalue is, and how to find it.
• Understand why an n × n matrix can have at most n eigenvalues.
• If A is upper triangular, how can you easily find all of the eigenvalues, together with their algebraic multiplicities?
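The 2 × 2 shortcut promised above (a standard formula, with an invented example):

```latex
p_A(\lambda) = \det(\lambda I_2 - A)
  = \lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A)
% Example: A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} has
% p_A(\lambda) = \lambda^2 - 2\lambda - 3 = (\lambda - 3)(\lambda + 1),
% so its eigenvalues are 3 and -1.
```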
Finding Eigenvectors (7.3):
• If λ is an eigenvalue of a matrix A, know how to find all eigenvectors of A corresponding to λ. This is
essentially just solving a system of equations (which system?). How do you know that there has to be a
nonzero solution to that system?
• Know what the eigenspace Eλ of a matrix A is. How does this relate to the eigenvectors of A?
• What is the geometric multiplicity of an eigenvalue? How do these relate to finding an eigenbasis for A?
• Understand how you can find the geometric multiplicity of an eigenvalue by finding the rank of a matrix
(stated as a formula after this list).
• Understand why ge. mu.(λ) ≤ al. mu.(λ) for any λ. If pA (λ) has n real roots (counted with multiplicity),
what must be true about al. mu.(λ) and ge. mu.(λ) in order for A to be diagonalizable?
• If al. mu.(λ) = 1, is it necessary to determine ge. mu.(λ)?
• In particular, if pA (λ) has n distinct real roots, why must A be diagonalizable?
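The formula promised above, for an n × n matrix A with eigenvalue λ:

```latex
\text{ge.mu.}(\lambda) = \dim E_\lambda
  = \dim \ker(\lambda I_n - A)
  = n - \operatorname{rank}(\lambda I_n - A)
% A is diagonalizable exactly when the geometric multiplicities add up
% to n, i.e. when ge.mu.(\lambda) = al.mu.(\lambda) for every eigenvalue.
```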
Eigenvalues and Eigenvectors of Linear Transformations (7.2,7.3,7.4):
• Understand why the matrices A and S −1 AS (for any S) have the same eigenvalues, characteristic polynomial
and algebraic and geometric multiplicities. Do they have the same eigenspaces?
• Understand why we say that this means that the above quantities depend only on the linear transformation, not the choice of basis.
• Understand why the product of the eigenvalues of a matrix is equal to its determinant. Why is this obvious
for a diagonal matrix? Why does that imply it must also be true for a diagonalizable matrix?
• Know what the trace of a matrix is, and understand why it is equal to the sum of the eigenvalues. Why does
this mean that the trace of A doesn’t depend on the choice of basis?
• If T : V → V is a linear transformation from a vector space V to itself, know what an eigenvalue or
eigenvector of T is. How would you go about finding these?
• How would you figure out if T was diagonalizable? If it was, how would you find an eigenbasis?