Formulas
Trigonometric Identities
sin(x+y) = sin x cos y + cos x sin y
cos(x+y) = cos x cos y - sin x sin y
sin x sin y = ½ [ cos(x-y) - cos(x+y) ]
cos x cos y = ½ [ cos(x-y) + cos(x+y) ]
sin x cos y = ½ [ sin(x+y) + sin(x-y) ]
sin²x = ½ [ 1 - cos(2x) ]
cos²x = ½ [ 1 + cos(2x) ]
Geometry
Circles:     Area = πr²,  Circumference = 2πr
Ellipses:    x²/a² + y²/b² = 1  (semi-axes a and b)
Hyperbolas:  x²/a² - y²/b² = 1  (asymptotes y = ± (b/a)x)
Spheres:     Volume = (4/3)πr³,  Surface Area = 4πr²
Cylinders:   Volume = (Area of base) × (Height)
Cones:       Volume = (1/3) × (Area of base) × (Height)

R = [ cos θ  -sin θ ]
    [ sin θ   cos θ ]  = matrix for a counter-clockwise rotation through an angle θ.
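
A minimal numeric illustration of the rotation matrix in Python/NumPy (the angle θ = π/6 and the point are arbitrary example values):

import numpy as np

theta = np.pi / 6                      # 30-degree rotation (example value)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([2.0, 0.0])               # a sample point on the x-axis
q = R @ p                              # counter-clockwise rotation of p

print(q)                                       # approx [1.732, 1.0]
print(np.linalg.norm(p), np.linalg.norm(q))    # lengths agree: rotation preserves length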
Summation Formulas
1 + 2 + 3 + … + n = n(n+1)/2
1³ + 2³ + 3³ + … + n³ = n²(n+1)²/4
1² + 2² + 3² + … + n² = n(n+1)(2n+1)/6
1 + x + x² + … + xⁿ = (xⁿ⁺¹ - 1)/(x - 1)
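
These closed forms can be spot-checked numerically; a minimal sketch in Python, with n and x as arbitrary example values:

# Check each summation formula for a sample n and x.
n, x = 10, 3.0

assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(k**2 for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(k**3 for k in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
assert abs(sum(x**k for k in range(n + 1)) - (x**(n + 1) - 1) / (x - 1)) < 1e-9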
Electric Circuits
Kirchhoff's current law:  At any junction, the sum of the currents going into the junction is equal to the sum of the
currents going out of the junction, i.e. Σ ins = Σ outs.
Kirchhoff's voltage law:  In any loop, the sum of the voltage increases is equal to the sum of the voltage decreases,
i.e. Σ ups = Σ downs.
Ohm's law:  For any resistor, the voltage decrease V is equal to the current I times the resistance R, i.e. V = IR.
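
Kirchhoff's laws plus Ohm's law turn a circuit into a system of linear equations. A minimal Python/NumPy sketch for a hypothetical two-loop circuit (the 12 V source and the 2 Ω, 3 Ω, 4 Ω resistors are made-up example values):

import numpy as np

# Two loops sharing a 4-ohm resistor; a 12 V source drives loop 1.
# Loop 1 (KVL): 12 = 2*I1 + 4*(I1 - I2)  ->  6*I1 - 4*I2 = 12
# Loop 2 (KVL):  0 = 3*I2 + 4*(I2 - I1)  -> -4*I1 + 7*I2 = 0
A = np.array([[ 6.0, -4.0],
              [-4.0,  7.0]])
b = np.array([12.0, 0.0])

I1, I2 = np.linalg.solve(A, b)
print(I1, I2)        # approx 3.23 A and 1.85 A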
Real Numbers - Properties
Basic Properties
R1.  u + v = v + u
R2.  (u + v) + w = u + (v + w)
R3.  u + 0 = u
R4.  u + (-u) = 0
R5.  uv = vu
R6.  u(vw) = (uv)w
R7.  u(v + w) = uv + uw
R8.  1u = u
R9.  uu⁻¹ = 1  (u ≠ 0)
Definitions
R10. u - v = u + (-v)
R11. u/v = uv⁻¹  (v ≠ 0)
Derived Properties (Can be derived from R1 – R11)
R12. 0u = 0
R13. -u = (-1)u
R14. u - v = u + (-1)v
Vector Space Axioms
A vector space is a collection V of objects, called vectors, such that the following hold for all u, v and w in V and all
numbers c and d.
1.  u + v is in V.
2.  u + v = v + u
3.  (u + v) + w = u + (v + w)
4.  u + 0 = u
5.  u + (-u) = 0
6.  cu is in V.
7.  c(u + v) = cu + cv
8.  (c + d)u = cu + du
9.  c(du) = (cd)u
10. 1u = u
Linear Combinations (Superpositions)
A linear combination (or superposition) of u1, …, un is a sum of scalar multiples of the vectors, i.e. a vector v of the
form v = c1u1 + … + cnun. Another way of writing this is v = Tc, where T is the matrix whose columns are the
uj and c is the column vector whose entries are the cj.
To write v as a linear combination of u1, …, un is to find c1, …, cn such that v = c1u1 + … + cnun. In other words, to
solve the system of linear equations v = Tc.
u1, …, un span the set of all vectors (with the same number of components as the uj) if every vector v can be written as
a linear combination of u1, …, un. This is true precisely if the system of linear equations v = Tc has a solution c
for every vector v. This is true if T can be transformed by a sequence of elementary row operations to an echelon
form so that no row consists entirely of zeros.
More generally, u1, …, un span a subspace if every vector v in the subspace can be written as a linear combination of
u1, …, un.
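
As a concrete illustration, writing v as a linear combination amounts to solving v = Tc; a minimal Python/NumPy sketch with hypothetical vectors u1, u2, u3 and v:

import numpy as np

# Hypothetical vectors and target (example values only).
u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, 1.0])
u3 = np.array([1.0, 1.0, 0.0])
v  = np.array([3.0, 2.0, 5.0])

T = np.column_stack([u1, u2, u3])   # matrix whose columns are the uj
c = np.linalg.solve(T, v)           # solve v = Tc for the coefficients cj

print(c)                            # [2. 1. 1.]
print(np.allclose(T @ c, v))        # True: v is this linear combination of u1, u2, u3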
Linear independence and linear dependence
A set of vectors u1, …, un is linearly dependent if one can find c1, …, cn that are not all zero and such that
c1u1 + … + cnun = 0. This occurs precisely if one can write one of the uj as a linear combination of the others. Another way
of saying this is that there is a nonzero vector c such that Tc = 0, where T is the matrix whose columns are the uj,
i.e. the null space of T consists of more than just the zero vector.
A set of vectors u1, …, un is linearly independent if c1u1 + … + cnun = 0 implies all the cj are zero. This occurs
precisely if the null space of T consists of just the zero vector. This is true if T can be transformed by a sequence
of elementary row operations to an echelon form in which every column contains a leading one.
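
In practice one can test independence by checking whether the rank of T equals the number of vectors (equivalently, whether the null space of T is just the zero vector); a minimal Python/NumPy sketch with hypothetical vectors:

import numpy as np

# Hypothetical example: u3 = u1 + u2, so the set is linearly dependent.
u1 = np.array([1.0, 2.0, 0.0])
u2 = np.array([0.0, 1.0, 1.0])
u3 = np.array([1.0, 3.0, 1.0])

T = np.column_stack([u1, u2, u3])
rank = np.linalg.matrix_rank(T)

# Independent exactly when the rank equals the number of vectors.
print(rank, T.shape[1], rank == T.shape[1])   # 2 3 False -> dependent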
Determinants
minor of an element = matrix obtained by deleting row and column of the element.
cofactor of an element = ± det(minor), where we use + or – depending on whether the sum of the row and column indices of the
element is even or odd.
det(A) = sum of elements in any row or column of A times their cofactors.
det(A) = area, volume or hypervolume of the parallelogram, parallelepiped, or higher-dimensional parallelepiped whose edges
are the columns of A.
det(A) = sum of ± a1,j1 a2,j2 … an,jn where we sum over all rearrangements (permutations) j1, j2, …, jn of the numbers 1,
2, …, n and we use + or – depending on whether there are an even or odd number
of pairs jp, jq that are out of order, i.e. jp > jq, but p < q.
det(A) = 0 if all the elements of any row or column of A are 0.
det(A) = product of diagonal elements of A if A is upper or lower triangular.
det(B) = c det(A) if B is obtained from A by multiplying any one row or column of A by c.
det(B) = - det(A) if B is obtained from A by interchanging any two rows or columns of A.
det(A) = 0 if two rows or columns of A are equal or constant multiples of each other.
det(A) = det(B) + det(C) if we split the elements of one row or column of A as sums, letting B equal A except that this
row or column contains the first summands, and C equal A except that this row or column contains
the second summands.
det(B) = det(A) if B is obtained from A by adding a multiple of one row of A to another (or a multiple of one column to
another).
det(AB) = det(A) det(B)
det(A⁻¹) = 1/det(A)
det(Aᵀ) = det(A)
xj = det(B)/det(A) if Ax = b and B is obtained by replacing the jth column of A by b.
A⁻¹ = Bᵀ/det(A) where B is the matrix of cofactors of A.
[ a  b ]⁻¹       1      [  d  -b ]
[ c  d ]    =  -------  [ -c   a ]
               ad - bc
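
Several of these determinant facts can be verified numerically; a minimal Python/NumPy sketch using arbitrary example matrices:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])
B = np.array([[2.0, 0.0], [1.0, 4.0]])

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # det(AB) = det(A)det(B)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # det(A^T) = det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))      # det(A^-1) = 1/det(A)

# 2x2 inverse formula: (1/(ad - bc)) * [[d, -b], [-c, a]]
a, b, c, d = A.ravel()
A_inv = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
print(np.allclose(A_inv, np.linalg.inv(A)))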
Subspaces and Bases
A subspace is a collection S of vectors such that if u and v are in S and c is a number, then u + v and cu are in S.
A set of vectors u1, …, un is a basis for the set of all vectors (with the same number of components as the uj) if it is
linearly independent and it spans the set of all vectors. This occurs precisely if T is invertible, where T is the matrix
whose columns are the uj.
u1, …, un is a basis for a subspace if it is linearly independent and it spans the subspace.
Eigenvalues and Eigenvectors
To find the eigenvalues λ of A, solve det(A - λI) = 0.
To find the eigenvector(s) u of A for an eigenvalue λ, solve (A - λI)u = 0.
A = TDT⁻¹
T = matrix whose columns are the eigenvectors of A,
D = diagonal matrix with the eigenvalues of A on the diagonal.
Aⁿ = TDⁿT⁻¹
Dⁿ = diagonal matrix with the powers of the eigenvalues on the diagonal.
A = rTRT -1
A = 22 matrix with complex eigenvalues  = r ( cos  i sin ),
T = matrix whose columns are the real and imaginary parts of an eigenvector corresponding to -,
R = matrix for a rotation by an angle .
An = rnTRnT -1
Orthogonal Sets, Orthogonal Projection, and Least Squares
A set of vectors u1, …, un is orthogonal if each vector in the set is orthogonal (perpendicular) to every other vector in
the set, i.e. uj . uk = 0 whenever j ≠ k.
A set of vectors u1, …, un is orthonormal if the set is orthogonal and each vector has length one. Another way of
saying this last condition is uj . uj = 1 for all j.
Problem: Find the superposition c1u1 + … + cnun of u1, …, un closest to v. This is equivalent to finding the cj so that
v – (c1u1 + … + cnun) is orthogonal to each of the uj. This vector is called the orthogonal projection of v on the
subspace spanned by the uj. The cj can be computed by solving the equations SᵀSc = Sᵀv, where S is the matrix
whose columns are the uj and c is the column vector whose entries are the cj. If there is only one uj, then
c = (u . v)/(u . u).
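
A minimal Python/NumPy sketch of the projection computation, using hypothetical vectors (one direction u first, then two columns in S):

import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 1.0, 0.0])

c = np.dot(u, v) / np.dot(u, u)               # c = (u.v)/(u.u)
proj = c * u                                  # orthogonal projection of v onto u
print(np.isclose(np.dot(v - proj, u), 0.0))   # residual is orthogonal to u

# Several vectors u1, ..., un: put them in the columns of S and solve S^T S c = S^T v.
S = np.column_stack([np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])])
c = np.linalg.solve(S.T @ S, S.T @ v)
print(np.allclose(S.T @ (v - S @ c), 0.0))    # residual orthogonal to every column of S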
Problem: Given data points (x1, y1), …, (xn, yn) and functions f1(x), …, fm(x), find the linear
combination y = c1f1(x) + … + cmfm(x) of the functions that fits the data points best in the
sense of least squares. This means choosing the cj so as to minimize the sum of squares of
the differences between the predicted and actual y values, i.e. to minimize
S = [ y1 – ( c1f1(x1) + … + cmfm(x1) ) ]² + … + [ yn – ( c1f1(xn) + … + cmfm(xn) ) ]²,
i.e. find the superposition of u1, …, um closest to
v = [ y1 ]                [ fj(x1) ]
    [ ⋮  ] ,  where uj =  [   ⋮    ]
    [ yn ]                [ fj(xn) ]
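
A minimal Python/NumPy sketch of a least-squares straight-line fit via the normal equations, using made-up data points and the basis functions f1(x) = 1, f2(x) = x:

import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])           # hypothetical data (example values)
ys = np.array([1.1, 1.9, 3.2, 3.8])

S = np.column_stack([np.ones_like(xs), xs])   # column j holds fj evaluated at the data points
c = np.linalg.solve(S.T @ S, S.T @ ys)        # normal equations S^T S c = S^T y

print(c)                                      # intercept and slope of the best-fit line
print(np.allclose(c, np.linalg.lstsq(S, ys, rcond=None)[0]))  # same answer from lstsq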
Symmetric and Orthogonal Matrices
A is symmetric if Aᵀ = A. For a symmetric matrix one has
i.   (Au) . v = u . (Av) for all u, v.
ii.  The eigenvalues of A are all real.
iii. Eigenvectors of A corresponding to different eigenvalues are orthogonal.
iv.  If an eigenvalue of A is repeated k times, then there are k linearly independent eigenvectors of A for that eigenvalue.
v.   There is an orthogonal matrix whose columns are eigenvectors of A.
S is orthogonal if S⁻¹ = Sᵀ. For an orthogonal matrix one has
i.   The columns are an orthonormal set. The rows are also.
ii.  | Su | = | u | for all u. The mapping associated with S is length preserving.
iii. det(S) = ±1. The mapping associated with S is area or volume preserving.
iv.  In two or three dimensions S is a rotation possibly followed by a reflection.
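
A minimal Python/NumPy sketch checking these properties on a hypothetical symmetric matrix and on a rotation matrix:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                # example symmetric matrix
evals, T = np.linalg.eigh(A)              # eigh: eigen-solver for symmetric matrices
print(evals)                              # real eigenvalues (here 1 and 3)
print(np.allclose(T.T @ T, np.eye(2)))    # eigenvector matrix is orthogonal: T^T T = I

th = np.pi / 6                            # an orthogonal matrix: rotation by 30 degrees
S = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
u = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(S @ u), np.linalg.norm(u)))   # |Su| = |u|
print(np.isclose(abs(np.linalg.det(S)), 1.0))                 # det(S) = +-1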
Quadratic Forms
ax² + bxy + cy² = (x, y) A [ x ]  = λ1r² + λ2s²
                           [ y ]
where A = [  a   b/2 ]
          [ b/2   c  ] ,
λ1 and λ2 are the eigenvalues of A, and
r and s are coordinates with respect to axes through the eigenvectors of A.
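
A minimal Python/NumPy sketch diagonalizing a hypothetical quadratic form (a = 5, b = 4, c = 5 are example values):

import numpy as np

a, b, c = 5.0, 4.0, 5.0                   # quadratic form 5x^2 + 4xy + 5y^2
A = np.array([[a, b / 2],
              [b / 2, c]])

evals, T = np.linalg.eigh(A)              # eigenvalues lambda1, lambda2 and orthonormal eigenvectors
print(evals)                              # [3. 7.] for this example

# Check at a sample point: ax^2 + bxy + cy^2 = lambda1*r^2 + lambda2*s^2,
# where (r, s) are the coordinates of (x, y) with respect to the eigenvector axes.
x, y = 1.0, 2.0
r, s = T.T @ np.array([x, y])
print(np.isclose(a*x*x + b*x*y + c*y*y, evals[0]*r*r + evals[1]*s*s))   # True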