Math 100 Final Exam Formulas
If A and B are n by n matrices with inverses, $(AB)^{-1} = B^{-1}A^{-1}$
If A and B are n by n matrices, $(AB)^T = B^TA^T$
$(A^T)^{-1} = (A^{-1})^T$
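All three identities are easy to sanity-check numerically. A minimal NumPy sketch (the random matrices are illustrative and almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))

# (AB)^T = B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))

# (A^T)^-1 = (A^-1)^T
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))
```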
The determinant function det(A) for an n by n matrix A is $\det(A) = \sum \pm\, a_{1j_1}a_{2j_2}\cdots a_{nj_n}$, where the sum is taken over all permutations $(j_1, j_2, \ldots, j_n)$ of $(1, 2, \ldots, n)$ and + or − is chosen according to the permutation being even or odd.
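A sketch of this permutation-sum definition, checked against np.linalg.det (exponential time, so only for small n; the test matrix is an arbitrary example):

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """Sum of +/- a_{1,j1} a_{2,j2} ... a_{n,jn} over all permutations."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sign is +1 for an even permutation, -1 for an odd one (count inversions)
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for row, col in enumerate(perm):
            prod *= A[row, col]
        total += sign * prod
    return total

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
print(det_by_permutations(A), np.linalg.det(A))  # both 8.0
```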
If A is a triangular matrix (upper, lower, or diagonal), det(A)=product of the diagonal
elements
If A has a row or column of zeros, det(A)=0
$\det(A) = \det(A^T)$
$\det(A^{-1}) = 1/\det(A)$
$\|u\| = \sqrt{u_1^2 + u_2^2 + u_3^2 + \cdots + u_n^2}$
$\|ku\| = |k|\,\|u\|$
If u and v are vectors in 2-space or 3-space and θ is the angle between them, then the dot product or inner product u·v is defined by
$u \cdot v = \begin{cases} \|u\|\,\|v\|\cos\theta & \text{if } u \neq 0 \text{ and } v \neq 0 \\ 0 & \text{if } u = 0 \text{ or } v = 0 \end{cases}$
$u \cdot v = u_1v_1 + u_2v_2 + u_3v_3 + \cdots + u_nv_n$
$\mathrm{proj}_a u = \dfrac{u \cdot a}{\|a\|^2}\, a$ [vector component of u along a]
$u - \mathrm{proj}_a u = u - \dfrac{u \cdot a}{\|a\|^2}\, a$ [vector component of u orthogonal to a]
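These two pieces reconstruct u; a minimal NumPy sketch (the vectors are arbitrary examples):

```python
import numpy as np

u = np.array([3.0, 4.0, 0.0])
a = np.array([1.0, 1.0, 1.0])

# proj_a u = (u . a / ||a||^2) a  -- component of u along a
proj = (u @ a) / (a @ a) * a
orth = u - proj                       # component of u orthogonal to a

print(proj + orth)                    # recovers u
print(np.isclose(orth @ a, 0.0))      # the orthogonal part is perpendicular to a
```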
Distance from a point to a line in 2-space: $D = \dfrac{|ax_0 + by_0 + c|}{\sqrt{a^2 + b^2}}$
If $u = \{u_1, u_2, u_3\}$ and $v = \{v_1, v_2, v_3\}$ are vectors in 3-space, then the cross product is the vector $u \times v = \{u_2v_3 - u_3v_2,\ u_3v_1 - u_1v_3,\ u_1v_2 - u_2v_1\}$ or in determinant notation
$u \times v = \left( \begin{vmatrix} u_2 & u_3 \\ v_2 & v_3 \end{vmatrix},\ -\begin{vmatrix} u_1 & u_3 \\ v_1 & v_3 \end{vmatrix},\ \begin{vmatrix} u_1 & u_2 \\ v_1 & v_2 \end{vmatrix} \right)$
$\|u \times v\| = \|u\|\,\|v\|\sin\theta$ where θ is the angle between u and v
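Both cross-product facts can be checked numerically; a sketch with example vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

w = np.cross(u, v)
# u x v is orthogonal to both factors
print(np.isclose(w @ u, 0.0), np.isclose(w @ v, 0.0))

# ||u x v|| = ||u|| ||v|| sin(theta), with theta taken from the dot product
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_t = np.sqrt(1.0 - cos_t**2)
print(np.isclose(np.linalg.norm(w),
                 np.linalg.norm(u) * np.linalg.norm(v) * sin_t))
```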
The absolute value of the determinant $\begin{vmatrix} u_1 & u_2 \\ v_1 & v_2 \end{vmatrix}$ is equal to the area of the parallelogram in 2-space determined by the vectors $u = \{u_1, u_2\}$ and $v = \{v_1, v_2\}$
The absolute value of the determinant $\begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix}$ is equal to the volume of the parallelepiped in 3-space determined by the vectors $u = \{u_1, u_2, u_3\}$, $v = \{v_1, v_2, v_3\}$, and $w = \{w_1, w_2, w_3\}$
Volume of the parallelepiped determined by vectors u, v, and w is given by $|u \cdot (v \times w)|$
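The triple-product and determinant forms of the volume agree; a quick sketch (the vectors are arbitrary examples):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
w = np.array([1.0, 1.0, 3.0])

vol_triple = abs(u @ np.cross(v, w))               # |u . (v x w)|
vol_det = abs(np.linalg.det(np.array([u, v, w])))  # |det| with rows u, v, w
print(vol_triple, vol_det)                         # both 6.0
```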
Point normal form of the equation of a plane: $a(x - x_0) + b(y - y_0) + c(z - z_0) = 0$
Equation of a plane with {a,b,c} as a normal vector: ax+by+cz+d=0
Parametric equations of a line: $x = x_0 + ta$, $y = y_0 + tb$, $z = z_0 + tc$ where $-\infty < t < \infty$
Distance from a point to a plane in 3-space: $D = \dfrac{|ax_0 + by_0 + cz_0 + d|}{\sqrt{a^2 + b^2 + c^2}}$
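For example, the distance from the point (1, 2, 3) to the plane 2x + y − 2z + 4 = 0 (numbers chosen purely for illustration):

```python
import numpy as np

# plane ax + by + cz + d = 0 and a point (x0, y0, z0)
a, b, c, d = 2.0, 1.0, -2.0, 4.0
x0, y0, z0 = 1.0, 2.0, 3.0

D = abs(a * x0 + b * y0 + c * z0 + d) / np.sqrt(a**2 + b**2 + c**2)
print(D)  # |2 + 2 - 6 + 4| / 3 = 2/3
```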
If vectors are written as column matrices, $u \cdot v = v^T u$
Matrix notation for a linear transformation from n-space to m-space:
$\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_m \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$
 1 0


Standard Matrix for Reflection about y-axis in 2-space:  0 1 
1 0 


Standard Matrix for Reflection about x-axis in 2-space: 0 1
0 1 


Standard Matrix for Reflection about the line x=y in 2-space: 1 0
1 0 0 
0 1 0 


 0 0 1
Standard Matrix for Reflection about the xy-plane in 3-space:
1 0 0 
 0 1 0 


 0 0 1 
Standard Matrix for Reflection about the xz-plane in 3-space:
 1 0 0 
 0 1 0


 0 0 1 
Standard Matrix for Reflection about the yz-plane in 3-space:
1 0 


Standard Matrix for Orthogonal projection on the x-axis in 2-space: 0 0
0 0


Standard Matrix for Orthogonal projection on the y-axis in 2-space: 0 1 
1 0 0 
0 1 0 


0 0 0 
Standard Matrix for the Orthogonal projection on the xy-plane in 3-space:
1 0 0 
0 0 0 


0 0 1 

Standard Matrix for the Orthogonal projection on the xz-plane in 3-space:
0 0 0 
0 1 0 


0 0 1 
Standard Matrix for the Orthogonal projection on the yz-plane in 3-space:
Standard Matrix for the rotation through angle θ in 2-space: $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$
Standard Matrix for the rotation through angle θ about the positive x-axis in 3-space: $\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}$
Standard Matrix for the rotation through angle θ about the positive y-axis in 3-space: $\begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}$
Standard Matrix for the rotation through angle θ about the positive z-axis in 3-space: $\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$
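A quick check of these rotation matrices in NumPy: a rotation is orthogonal (inverse = transpose) and has determinant 1 (the angle is an arbitrary example):

```python
import numpy as np

theta = np.pi / 3  # example angle
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

x = np.array([1.0, 0.0, 0.0])
print(Rz @ x)                              # rotated vector: (cos 60, sin 60, 0)
print(np.allclose(Rz.T @ Rz, np.eye(3)))   # rotation matrices are orthogonal
print(np.isclose(np.linalg.det(Rz), 1.0))  # and preserve orientation and volume
```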
Standard Matrix for Contraction or dilation by factor k in 2-space: $\begin{bmatrix} k & 0 \\ 0 & k \end{bmatrix}$
Standard Matrix for Contraction or dilation by factor k in 3-space: $\begin{bmatrix} k & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & k \end{bmatrix}$
A linear transformation $T: R^n \to R^m$ is one-to-one if T maps distinct vectors (points) in $R^n$ into distinct vectors (points) in $R^m$
Equivalent statements: If A is an n by n matrix and $T_A: R^n \to R^n$ is multiplication by A, then the following statements are equivalent: (a) A is invertible; (b) The range of $T_A$ is $R^n$; (c) $T_A$ is one-to-one
A transformation $T: R^n \to R^m$ is linear if and only if the following relationships hold for all vectors u and v in $R^n$ and every scalar c: (a) T(u+v)=T(u)+T(v) and (b) T(cu)=cT(u)
Definition: A set of objects, V, forms a vector space if the following are satisfied.
(1) If u and v are in V, then u+v is in V
(2) u+v=v+u
(3) u+(v+w)=(u+v)+w
(4) There is a zero vector 0 such that 0+v=v+0=v for every v in V
(5) There is a vector –u such that u+(–u)=0 for every u in V
(6) If k is any scalar and v is in V, then kv is in V
(7) k(u+v)=ku+kv
(8) (k+l)u=ku+lu
(9) k(lu)=(kl)u
(10) 1u=u
Theorem: If W is a set of vectors from vector space V, then W is a subspace of V if and
only if the following are satisfied: (1) If u and v are in W, then u+v is in W and (2) if k is
any scalar and u is any vector in W, then ku is in W.
Definition: If S={v1, v2,…, vr} is a nonempty set of vectors, then the vector equation k1v1+k2v2+…+krvr=0 has at least one solution, namely k1=k2=…=kr=0. If this is the only solution, S is a linearly independent set; otherwise, S is linearly dependent.
Theorem: Let S={v1, v2,…, vr} be a set of vectors in Rn. If r>n, then S is linearly dependent.
Definition: If V is any vector space and S={v1, v2,…, vn} is a set of vectors in V, then S is called a basis for V if the following two conditions hold: (1) S is linearly independent, and (2) S spans V.
Theorem: Let V be a finite-dimensional vector space and {v1, v2,…, vn} any basis. (1) If a set has more than n vectors, it is linearly dependent. (2) If a set has fewer than n vectors, then it does not span V.
Definition: The dimension of a finite-dimensional vector space V, denoted by dim(V), is defined to be the number of vectors in a basis for V. In addition, the zero vector space is defined to have dimension zero.
Theorem: If V is an n-dimensional vector space, and if S is a set in V with exactly n vectors, then S is a basis for V if either S spans V or S is linearly independent.
Definition: If A is an m by n matrix, then the subspace of Rn spanned by the row vectors
of A is called the row space of A, and the subspace of Rm spanned by the column vectors
is called the column space of A. The solution space of the homogeneous system of
equations Ax=0, which is a subspace of Rn, is called the nullspace of A.
Theorem: If x0 denotes any single solution of a consistent linear system Ax=b, and if v1,
v2,…, vk form a basis for the nullspace of A, then every solution of Ax=b can be
expressed in the form x=x0+c1v1+ c2v2+…+ ckvk and conversely.
Theorem: If A and B are row equivalent matrices, then: (1) A given set of column vectors
of A is linearly independent if and only if the corresponding column vectors of B are
linearly independent and (2) A given set of column vectors of A forms a basis for the
column space of A if and only if the corresponding column vectors of B form a basis for
the column space of B.
Theorem: If A is any matrix, the row space and column space of A have the same
dimension.
Definition: The common dimension of the row space and column space of a matrix A is
called the rank of A and is denoted by rank(A). The dimension of the nullspace of A is
called the nullity of A and is denoted by nullity(A).
Theorem: If A is a matrix with n columns, rank(A)+nullity(A)=n
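A sketch of this rank–nullity count (the matrix is an illustrative example with one independent row):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # second row is a multiple of the first

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank       # rank(A) + nullity(A) = n (here n = 3)
print(rank, nullity)              # 1 2
```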
Inner Product Properties:
(1) <u,v>=<v,u>
(2) <u+v,w>=<u,w>+<v,w>
(3) <ku,v>=k<u,v>
(4) <v,v>≥0 and <v,v>=0 if and only if v=0
Weighted inner product: <u,v>=w1u1v1+ w2u2v2+… +wnunvn where the w’s are positive
real numbers
$\langle u, v\rangle = Au \cdot Av = v^T A^T A u$
|<u,v>| ≤ ||u|| ||v||
$\cos\theta = \dfrac{\langle u, v\rangle}{\|u\|\,\|v\|}$
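Both the Cauchy–Schwarz bound and the induced angle can be checked numerically; a sketch with the Euclidean inner product and example vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

lhs = abs(u @ v)
rhs = np.linalg.norm(u) * np.linalg.norm(v)
print(lhs <= rhs)                  # Cauchy-Schwarz always holds

theta = np.arccos((u @ v) / rhs)   # well-defined because |cos theta| <= 1
print(np.degrees(theta))
```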
If W is a subspace of a vector space V, W┴ is the set of all vectors in V that are
orthogonal to W
If A is an m by n matrix then:
(a) The nullspace of A and the row space of A are orthogonal to each other in Rn with
respect to the Euclidean inner product.
(b) The nullspace of AT and the column space of A are orthogonal to each other in Rm
with respect to the Euclidean inner product.
If S={v1,v2,…,vn} is an orthonormal basis for an inner product space V and u is any vector in V, then u=<u,v1>v1+<u,v2>v2+…+<u,vn>vn
If W is a finite-dimensional subspace of an inner product space V, then every vector u in
V can be expressed in exactly one way as u=w1+w2 where w1 is in W and w2 is in W┴.
If {v1,v2,…,vn} is an orthonormal basis for a finite-dimensional subspace W of a vector space V with inner product denoted by <u,v> and u is any vector in V, then projWu=<u,v1>v1+<u,v2>v2+…+<u,vn>vn
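A sketch of this projection formula, taking W to be the xy-plane in R3 with its standard orthonormal basis (an assumed example subspace):

```python
import numpy as np

# Orthonormal basis for W = the xy-plane in R^3 (example subspace)
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

u = np.array([2.0, 3.0, 5.0])

# proj_W u = <u, v1> v1 + <u, v2> v2   (Euclidean inner product)
proj = (u @ v1) * v1 + (u @ v2) * v2
print(proj)                                # [2. 3. 0.]
print(np.isclose((u - proj) @ v1, 0.0),
      np.isclose((u - proj) @ v2, 0.0))    # u - proj_W u lies in W-perp
```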
If W is a finite-dimensional subspace of an inner product space V, and if u is a vector in V, then projWu is the best approximation to u from W in the sense that ||u-projWu||<||u-w|| for every vector w in W that is different from projWu
If A is an m by n matrix with linearly independent column vectors, then for every m by 1 matrix b the linear system Ax=b has a unique least squares solution. This solution is given by $x = (A^TA)^{-1}A^Tb$, and the projection of b onto the column space W of A is $\mathrm{proj}_W b = Ax = A(A^TA)^{-1}A^Tb$
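A minimal least-squares sketch; the normal-equation formula above agrees with np.linalg.lstsq (the data points are illustrative):

```python
import numpy as np

# Fit y = c0 + c1 t to three points; columns of A are [1, t]
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

x_normal = np.linalg.inv(A.T @ A) @ A.T @ b      # x = (A^T A)^-1 A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # library least squares
print(np.allclose(x_normal, x_lstsq))

proj = A @ x_normal   # projection of b onto the column space of A
print(proj)
```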
A square matrix A with the property that $A^{-1} = A^T$ is said to be orthogonal.
For an n by n matrix the following are equivalent: (1) A is orthogonal, (2) The row
vectors of A form an orthonormal set with respect to the Euclidean inner product, (3) The
column vectors of A form an orthonormal set with respect to the Euclidean inner product.
If you change the basis for a vector space V from some old basis B={u1,u2,…,un} to some new basis B′={v1,v2,…,vn}, then the old coordinate matrix [v]B is related to the new coordinate matrix [v]B′ by the equation [v]B = P [v]B′, where the columns of P are the coordinate matrices of the new basis vectors relative to the old basis.
The matrix P in the last statement is invertible, and the inverse of P is the transition matrix from B to B′.
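A small sketch of the transition matrix in R2 (the bases are chosen purely for illustration):

```python
import numpy as np

# Old basis B = standard basis; new basis B' = {(1, 1), (1, -1)}
v1_new = np.array([1.0, 1.0])
v2_new = np.array([1.0, -1.0])

# Columns of P are the new basis vectors written in old-basis coordinates
P = np.column_stack([v1_new, v2_new])

coords_new = np.array([2.0, 3.0])   # [v]_B' -- coordinates relative to B'
coords_old = P @ coords_new         # [v]_B = P [v]_B'
print(coords_old)                   # [5. -1.] = 2*(1,1) + 3*(1,-1)

# P is invertible; P^-1 is the transition matrix from B to B'
print(np.allclose(np.linalg.inv(P) @ coords_old, coords_new))
```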
If P is the transition matrix from one orthonormal basis to another orthonormal basis for
an inner product space, then P is an orthogonal matrix.