Formulas
Trigonometric Identities
sin(x+y) = sin x cos y + cos x sin y
cos(x+y) = cos x cos y - sin x sin y
sin x sin y = ½ [ cos(x-y) - cos(x+y) ]
cos x cos y = ½ [ cos(x-y) + cos(x+y) ]
sin x cos y = ½ [ sin(x+y) + sin(x-y) ]
sin²x = ½ [ 1 - cos(2x) ]
cos²x = ½ [ 1 + cos(2x) ]
R = [ cos θ  -sin θ ; sin θ  cos θ ] = matrix for a counter-clockwise rotation of the plane through an angle θ.
Electric Circuits
Kirchhoff's current law:
At any junction, the sum of the currents going into the junction is equal to the
sum of the currents going out of the junction, i.e. Σ ins = Σ outs.
Kirchhoff's voltage law:
In any loop, the sum of the voltage changes is zero. In other words, the sum
of the voltage increases is equal to the sum of the voltage decreases, i.e.
Σ ups = Σ downs.
Resistors:
Let VR be the voltage decrease from one end of a resistor to the other, going in
the direction of the current. We shall assume VR depends only on the current,
i.e. VR = f(i). A resistor is linear if VR is a linear function of the current, i.e.
VR = iR, where R is a constant called the resistance of the resistor; this
relationship is called Ohm's law.
Inductors:
Let VL be the voltage decrease from one end of an inductor to the other, going
in the direction of the current. Usually one may assume VL = L di/dt, where L is
a constant called the inductance of the inductor.
Capacitors:
Let q be the charge on the plate of the capacitor that the current is flowing into
(call this the top plate and the other plate the bottom plate). Then i = dq/dt. Let
VC be the voltage decrease from the top plate to the bottom plate. Usually
one may assume VC = q/C, where C is a constant called the capacitance of the
capacitor.
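Putting the three component laws into Kirchhoff's voltage law for a single series loop gives L di/dt + Ri + q/C = E(t) together with dq/dt = i. The sketch below integrates such a loop numerically; the driving voltage E(t), the component values, and the use of scipy.integrate.solve_ivp are illustrative choices, not part of the formula sheet.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative series RLC loop: E(t) = L di/dt + R i + q/C (KVL), with dq/dt = i
R, L, C = 2.0, 1.0, 0.25          # resistance, inductance, capacitance (assumed values)
E = lambda t: np.sin(3*t)         # assumed driving voltage

def rhs(t, state):
    q, i = state                  # charge on the capacitor, loop current
    dq = i                        # i = dq/dt
    di = (E(t) - R*i - q/C) / L   # solve KVL for di/dt
    return [dq, di]

sol = solve_ivp(rhs, (0, 20), [0.0, 0.0], dense_output=True)
print("charge and current at t = 20:", sol.y[:, -1])
```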
Eigenvalues and Eigenvectors
λ is an eigenvalue of A ⇔ Au = λu for some u ≠ 0 ⇔ det(A - λI) = 0. Let λ1, λ2, …, λp be the
eigenvalues of A.
X is an eigenvector of A corresponding to the eigenvalue λ ⇔ AX = λX ⇔ (A - λI)X = 0. Let X1,
X2, …, Xp be eigenvectors of A corresponding to λ1, λ2, …, λp.
X is a generalized eigenvector of A corresponding to the eigenvalue λ ⇔ (A - λI)^k X = 0 for some
positive integer k.
A = TDT^(-1)
T = matrix whose columns are the eigenvectors of A,
D = diagonal matrix with the eigenvalues of A on the diagonal.
A^n = TD^nT^(-1)
D^n = diagonal matrix with the powers of the eigenvalues on the diagonal.
e^(tA) = Te^(tD)T^(-1)
e^(tD) = diagonal matrix whose diagonal entries are e^(λj t).
cos(√A t) = T cos(√D t) T^(-1)
cos(√D t) = diagonal matrix whose diagonal entries are cos(√λj t).
Note: cos(iθ) = cosh(θ).
A^(-1/2) sin(√A t) = T D^(-1/2) sin(√D t) T^(-1)
D^(-1/2) sin(√D t) = diagonal matrix whose diagonal entries are sin(√λj t)/√λj.
Note: sin(iθ)/i = sinh(θ).
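For a diagonalizable matrix these formulas turn any function of A into the same function applied to the eigenvalues. A minimal sketch, assuming an arbitrary 2×2 example matrix with real eigenvalues: it builds e^(tA) as T e^(tD) T^(-1) and checks the result against scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [2.0, 1.0]])            # example matrix with eigenvalues 2 and -1
t = 0.5

evals, T = np.linalg.eig(A)           # columns of T are eigenvectors of A
etD = np.diag(np.exp(t * evals))      # e^(tD): exponentials of eigenvalues on the diagonal
etA = T @ etD @ np.linalg.inv(T)      # e^(tA) = T e^(tD) T^(-1)

assert np.allclose(etA, expm(t * A))  # agrees with SciPy's matrix exponential
print(etA)
```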
A = rTRT -1
A = 22 matrix with complex eigenvalues  =   i = r ( cos  i sin ),
T = matrix whose columns are the imaginary and real parts of an eigenvector
corresponding to +,
R = matrix for a rotation by an angle .
An = rnTRnT -1
etA = etTRtT -1
A = TJT^(-1)
A^n = TJ^nT^(-1)
e^(tA) = Te^(tJ)T^(-1)
A = 2×2 matrix with repeated eigenvalue λ and single eigenvector X (up to
constant multiples),
T = matrix whose columns are X and Y where (A - λI)Y = X,
J = [ λ  1 ; 0  λ ],
J^n = [ λ^n  nλ^(n-1) ; 0  λ^n ],
e^(tJ) = [ e^(λt)  te^(λt) ; 0  e^(λt) ].
Difference Equations
u_(n+1) = Au_n ⇒
u_n = A^n u0
u_n = c1 λ1^n X1 + … + cp λp^n Xp
u_n = c1 r^n (cos(nθ)Y - sin(nθ)Z) + c2 r^n (cos(nθ)Z + sin(nθ)Y)
if A is a 2×2 matrix with complex eigenvalue r(cos θ + i sin θ) whose
eigenvector is Y + iZ,
u_n = c1 λ^n X + c2 (λ^n Y + nλ^(n-1) X) if A is a 2×2 matrix with repeated eigenvalue λ and single
eigenvector X and vector Y satisfying (A - λI)Y = X
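A small numerical sketch, assuming an arbitrary 2×2 matrix with real distinct eigenvalues: it finds c1, c2 from u0 = c1 X1 + c2 X2 and checks that c1 λ1^n X1 + c2 λ2^n X2 reproduces direct iteration of u_(n+1) = Au_n.

```python
import numpy as np

A = np.array([[0.5, 0.4],
              [0.3, 0.6]])                  # example matrix (eigenvalues 0.9 and 0.2)
u0 = np.array([1.0, 2.0])

evals, X = np.linalg.eig(A)                 # columns of X are eigenvectors X1, X2
c = np.linalg.solve(X, u0)                  # u0 = c1*X1 + c2*X2

n = 12
u_formula = X @ (c * evals**n)              # c1*lam1^n*X1 + c2*lam2^n*X2

u_iter = u0.copy()
for _ in range(n):                          # direct iteration u_(k+1) = A u_k
    u_iter = A @ u_iter

assert np.allclose(u_formula, u_iter)
print("u_n =", u_iter)
```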
Differential Equations
du/dt = Au ⇒
u(t) = e^(tA)u(0)
u(t) = c1 e^(λ1 t) X1 + … + cp e^(λp t) Xp
u(t) = c1 e^(αt)(cos(βt)Y - sin(βt)Z) + c2 e^(αt)(cos(βt)Z + sin(βt)Y) if A is a 2×2 matrix with
complex eigenvalue α ± iβ whose eigenvector is Y + iZ,
u(t) = c1 e^(λt) X + c2 e^(λt)(Y + tX) if A is a 2×2 matrix with repeated eigenvalue λ and single
eigenvector X and vector Y satisfying (A - λI)Y = X
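A minimal check of the eigenvalue formula, assuming an arbitrary diagonalizable example matrix: the sum c1 e^(λ1 t) X1 + c2 e^(λ2 t) X2 is compared with u(t) = e^(tA)u(0) computed by scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-3.0, 1.0],
              [ 1.0, -3.0]])              # example matrix, eigenvalues -2 and -4
u0 = np.array([1.0, 0.0])
t = 0.75

evals, X = np.linalg.eig(A)
c = np.linalg.solve(X, u0)                # u(0) = c1*X1 + c2*X2

u_formula = X @ (c * np.exp(evals * t))   # c1*e^(lam1 t)*X1 + c2*e^(lam2 t)*X2
u_expm = expm(t * A) @ u0                 # u(t) = e^(tA) u(0)

assert np.allclose(u_formula, u_expm)
print("u(t) =", u_expm)
```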
du/dt = Au + f(t) ⇒
u(t) = up(t) + uh(t)
where up(t) is a solution to du/dt = Au + f(t) and uh(t) is the general
solution to du/dt = Au
up(t) = e^(tA) ∫ e^(-tA) f(t) dt
Special case: If f(t) = e^(γt)Y then try up(t) = e^(γt)X, plug into the equation and solve for X. This
works if γ is not an eigenvalue of A, but needs
modification if γ is an eigenvalue of A.
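A quick check of the special case, with an assumed example matrix, forcing e^(γt)Y, and γ not an eigenvalue: substituting up(t) = e^(γt)X into du/dt = Au + f(t) gives (γI - A)X = Y, so X comes from one linear solve.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])             # example matrix, eigenvalues -1 and -2
Y = np.array([1.0, 1.0])
gamma = 0.5                              # not an eigenvalue of A

# Substituting up(t) = e^(gamma t) X into du/dt = Au + e^(gamma t) Y gives (gamma I - A) X = Y.
X = np.linalg.solve(gamma*np.eye(2) - A, Y)

# Check that up(t) = e^(gamma t) X really solves the equation at a sample time.
t = 1.7
up = np.exp(gamma*t) * X
dup = gamma * np.exp(gamma*t) * X        # d/dt of up(t)
assert np.allclose(dup, A @ up + np.exp(gamma*t) * Y)
print("particular solution coefficient X =", X)
```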
d²u/dt² = -Au ⇒
u(t) = cos(√A t) u(0) + A^(-1/2) sin(√A t) (du/dt)(0)
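A numerical sketch of this formula, assuming a symmetric positive-definite example A: cos(√A t) and A^(-1/2) sin(√A t) are built through the eigendecomposition (as in the matrix-function formulas above) and compared with a direct integration of d²u/dt² = -Au.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])               # assumed positive-definite matrix (eigenvalues 1 and 3)
u0  = np.array([1.0, 0.0])                # u(0)
du0 = np.array([0.0, 1.0])                # (du/dt)(0)
t_end = 2.0

evals, T = np.linalg.eig(A)
Tinv = np.linalg.inv(T)
w = np.sqrt(evals)                        # square roots of the eigenvalues

def u_formula(t):
    cos_sqrtA  = T @ np.diag(np.cos(w*t)) @ Tinv       # cos(sqrt(A) t)
    sinc_sqrtA = T @ np.diag(np.sin(w*t)/w) @ Tinv      # A^(-1/2) sin(sqrt(A) t)
    return cos_sqrtA @ u0 + sinc_sqrtA @ du0

# Direct integration of d^2u/dt^2 = -A u as a first-order system in (u, du/dt)
def rhs(t, s):
    u, v = s[:2], s[2:]
    return np.concatenate([v, -A @ u])

sol = solve_ivp(rhs, (0, t_end), np.concatenate([u0, du0]), rtol=1e-10, atol=1e-12)
assert np.allclose(u_formula(t_end), sol.y[:2, -1], atol=1e-6)
print("u(2) =", u_formula(t_end))
```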
Nonlinear Systems of Differential Equations
dx/dt = f(x,y)
dy/dt = g(x,y)
Suppose f(x,y) and g(x,y) and their first partial derivatives are continuous in an open set D of the
xy-plane. The set D will be referred to below.
The phase plane is the plane of the dependent variables. In this case it is the xy-plane.
If (x(t), y(t)) is a solution to the system then the corresponding trajectory is the directed curve in
the phase plane defined by (x, y) = (x(t), y(t)) as t varies.
The x-nullclines are the curves in the phase plane defined by dx/dt = 0, i.e. f(x,y) = 0.
The y-nullclines are the curves in the phase plane defined by dy/dt = 0, i.e. g(x,y) = 0.
(x*, y*) is an equilibrium point
⇔ (x(t), y(t)) = (x*, y*) for all t is a solution to the system.
⇔ f(x*, y*) = 0 and g(x*, y*) = 0
⇔ (x*, y*) is a point of intersection of an x-nullcline and a y-nullcline.
Linearization: Let (x*, y*) be an equilibrium point and A = [ ∂f/∂x  ∂f/∂y ; ∂g/∂x  ∂g/∂y ], where the
partial derivatives are evaluated at (x*, y*). The phase portrait near (x*, y*) is similar to the phase
portrait of the linear system du/dt = Au near the origin provided all the eigenvalues of A have
non-zero real part.
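As an illustration (the competing-species system below is an assumed example, not from the sheet), the sketch finds an equilibrium point as an intersection of nullclines and then linearizes there, checking the eigenvalues of the Jacobian matrix A.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed example: dx/dt = f(x,y) = x(3 - x - 2y),  dy/dt = g(x,y) = y(2 - x - y)
def F(p):
    x, y = p
    return [x*(3 - x - 2*y), y*(2 - x - y)]

# Equilibrium = intersection of the nullclines f = 0 and g = 0
eq = fsolve(F, [1.2, 0.8])               # converges to (1, 1)
x, y = eq

# Jacobian A = [ df/dx  df/dy ; dg/dx  dg/dy ] evaluated at the equilibrium
A = np.array([[3 - 2*x - 2*y, -2*x],
              [-y,            2 - x - 2*y]])
print("equilibrium:", eq)
print("eigenvalues of A:", np.linalg.eigvals(A))
# One positive and one negative eigenvalue, so (1, 1) is a saddle; since both real parts
# are non-zero, the linearization describes the local phase portrait.
```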
Conservation Laws: A conservation law for the system is a function V(x, y) defined in D that is
constant along trajectories, i.e. for every solution (x(t), y(t)) of the system there is a constant C
such that V(x(t), y(t)) = C for all t. This will occur if
d V(x(t), y(t)) / dt = 0 for any solution (x(t), y(t)).
In this case the trajectories lie on the level curves of V(x, y), i.e. the curves defined by
V(x, y) = C for various values of C. To help draw these trajectories, here are some properties of
these level curves.
i. Suppose V(x, y) = p(x) + q(y) where q(y) decreases from ∞ to 0 as y increases from -∞ to 0
and increases from 0 to ∞ as y increases from 0 to ∞. Let y = q⁻¹₊(z) and y = q⁻¹₋(z) be the
inverse functions of z = q(y). Then the level curve V(x, y) = C can be made by taking the
portion of the curve z = C - p(x) that lies above the x axis and applying y = q⁻¹₊(z) and
y = q⁻¹₋(z). In the first case this gives a curve similar to the part of z = C - p(x) lying above
the x axis, and in the second it gives a curve similar to the reflection of the part of
z = C - p(x) lying above the x axis across the x axis.
ii. Suppose ∂V/∂x = 0 and ∂V/∂y = 0 at a point (x*, y*). Let A = [ ∂²V/∂x²  ∂²V/∂x∂y ; ∂²V/∂x∂y  ∂²V/∂y² ]
be the matrix of second derivatives evaluated at (x*, y*). If the eigenvalues of A are both
strictly positive or both strictly negative, then, near (x*, y*), the level curves of V(x, y) are
closed curves about (x*, y*). If one eigenvalue of A is strictly positive and the other is
strictly negative, then, near (x*, y*), the level curves of V(x, y) have a saddle point structure.
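For a concrete illustration (an assumed example, the undamped pendulum x' = y, y' = -sin x), V(x, y) = y²/2 - cos x has the form p(x) + q(y) above, and d V/dt = y·y' + sin(x)·x' = 0, so trajectories follow its level curves. The sketch checks this numerically along one solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: undamped pendulum  dx/dt = y,  dy/dt = -sin(x)
def rhs(t, s):
    x, y = s
    return [y, -np.sin(x)]

V = lambda x, y: 0.5*y**2 - np.cos(x)    # conservation law of the form V = p(x) + q(y)

sol = solve_ivp(rhs, (0, 20), [1.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0, 20, 200)
vals = V(*sol.sol(ts))
print("max |V - V(0)| along the trajectory:", np.max(np.abs(vals - vals[0])))  # ~ 0
```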
Positively Invariant Sets: A set P in the phase plane is positively invariant if P is contained in D
and any trajectory that enters P remains in P thereafter.
Liapunov Functions: A Liapunov function for the system is a function L(x, y) defined in D that is
non-increasing along trajectories. This will occur if
d L(x(t), y(t)) / dt ≤ 0 for any solution (x(t), y(t)).
If L(x, y) is a Liapunov function then the following is true.
a. Liapunov's Theorem (basic version): Suppose the following are true.
i. L(x, y) is a Liapunov function for the system.
ii. C is any number.
iii. P is a connected component of the set { (x, y): (x, y) is in D and L(x, y) ≤ C }.
iv. P is closed and bounded.
v. The only trajectories in P on which L(x, y) is constant are equilibrium points.
vi. There are only a finite number of equilibrium points in P.
Then any trajectory that enters P approaches an equilibrium point as t → ∞.
b. Liapunov's Theorem (extended version): Suppose the following are true.
i. L(x, y) is a Liapunov function for the system.
ii. P is a positively invariant set.
iii. P is closed and bounded.
iv. The only trajectories in P on which L(x, y) is constant are equilibrium points.
v. There are only a finite number of equilibrium points in P.
Then any trajectory that enters P approaches an equilibrium point as t → ∞.
c. If L(x, y) is a Liapunov function then any connected component of the set
{ (x, y): (x, y) is in D and L(x, y) ≤ C } is positively invariant.
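As an assumed example, for the damped pendulum x' = y, y' = -sin x - y the energy L(x, y) = y²/2 - cos x is a Liapunov function: dL/dt = y·y' + sin(x)·x' = -y² ≤ 0. The sketch confirms numerically that L is non-increasing along a trajectory and that the solution approaches the equilibrium (0, 0), as the theorem predicts.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: damped pendulum  dx/dt = y,  dy/dt = -sin(x) - y
def rhs(t, s):
    x, y = s
    return [y, -np.sin(x) - y]

L = lambda x, y: 0.5*y**2 - np.cos(x)     # Liapunov function: dL/dt = -y^2 <= 0

sol = solve_ivp(rhs, (0, 40), [1.0, 0.0], dense_output=True, rtol=1e-9, atol=1e-12)
ts = np.linspace(0, 40, 400)
Lvals = L(*sol.sol(ts))

assert np.all(np.diff(Lvals) <= 1e-6)     # L is (numerically) non-increasing along the trajectory
print("final state:", sol.y[:, -1])       # approaches the equilibrium (0, 0)
```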
Periodic Solutions: A solution (x(t), y(t)) is periodic (with period T) if
(x(t), y(t)) = (x(t+T), y(t+T)) for all t. This occurs if and only if the trajectory corresponding to
the solution is a closed curve.
Poincaré-Bendixson Theorem: Suppose a solution (x(t), y(t)) remains in D and is
bounded as t → ∞. Then one of the following is true.
i. (x(t), y(t)) is an equilibrium solution.
ii. (x(t), y(t)) approaches an equilibrium solution as t → ∞.
iii. (x(t), y(t)) is a periodic solution.
iv. (x(t), y(t)) approaches a periodic solution as t → ∞.
v. (x(t), y(t)) approaches the boundary of D as t → ∞.
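A standard illustration (assumed example, not from the sheet) is the van der Pol system x' = y, y' = -x + (1 - x²)y, whose bounded solutions fall under case iv: the sketch integrates one trajectory and shows it settling onto a periodic orbit. The period value used is an approximation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: van der Pol oscillator  dx/dt = y,  dy/dt = -x + (1 - x^2) y
def rhs(t, s):
    x, y = s
    return [y, -x + (1 - x**2)*y]

sol = solve_ivp(rhs, (0, 100), [0.1, 0.0], dense_output=True, rtol=1e-9, atol=1e-12)

# After the transient, the trajectory repeats itself: compare two late times roughly one
# period apart (the limit-cycle period here is approximately 6.66).
T_approx = 6.66
a, b = sol.sol(90.0), sol.sol(90.0 + T_approx)
print("state at t = 90:        ", a)
print("state one period later: ", b)   # nearly the same point, i.e. the solution approaches a periodic orbit
```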