Solutions of First Order Linear Systems
An n-dimensional first order linear system is a list of n first order linear differential equations
in n unknown functions of one variable. It may be written as the matrix/vector equation
x′ = Ax + f
where x is an n-dimensional vector containing the unknown functions and A is the n×n coefficient
matrix. For example, the 3-dimensional linear system
x1′ = −x1 + 5x2 + 3x3 + e^t
x2′ = x2 + x3
x3′ = −2x2 − 2x3 − t^2

has

x = [ x1 ] ,  A = [ −1   5   3 ]  and  f = [  e^t ]
    [ x2 ]        [  0   1   1 ]           [   0  ]
    [ x3 ]        [  0  −2  −2 ]           [ −t^2 ]
Note that the book uses y’s instead of x’s in some sections, so you should be comfortable using
either letter. We will almost always use t for the variable.

(1) Homogeneous Solution: Solve the homogeneous equation
x′ = Ax
to find n linearly independent vector solutions x1 , x2 , . . . , xn , and hence the homogeneous
solution xh = c1 x1 + c2 x2 + · · · + cn xn . You can check whether x1 , x2 , . . . , xn are linearly
independent by calculating the Wronskian
W(t0) = det( x1(t0)  x2(t0)  . . .  xn(t0) )
at any point t0 in the domain of x1 , x2 , . . . , xn , and showing that it is non-zero. Note that
the Wronskian is formed by first placing the column vector solutions next to each other
to form a square matrix, substituting t0 and then taking the determinant of the matrix.
To find the n linearly independent vector solutions, you first need to find the eigenvalues
λ of A, by solving the characteristic equation det(A − λI) = 0 for λ, and then find the
corresponding eigenvectors v, by solving the vector equation (A − λI)v = 0 for v. If there
are repeated eigenvalues you may need to also find generalized eigenvectors - this case will
be explained below.
(a) Distinct Real Eigenvalues: If λ is a real eigenvalue which appears once as a root
of the characteristic equation, and v is the corresponding eigenvector, then x = e^{λt} v
is a vector solution of the homogeneous equation.
(b) Distinct Complex Eigenvalues: If λ = α + iβ is a complex eigenvalue then so is
its complex conjugate λ̄ = α − iβ. Each of these eigenvalues will provide one vector
solution, but since the eigenvalues are closely related, it is probably easier to use
λ = α + iβ to find both solutions. Find the corresponding eigenvector v and split it
using vector addition as v = v1 + iv2 , where v1 and v2 have real entries. Then the
two vector solutions are
x1 = e^{αt} ( cos(βt) v1 − sin(βt) v2 )
and
x2 = e^{αt} ( sin(βt) v1 + cos(βt) v2 )
Example:

x′ = [  1  1 ] x
     [ −1  1 ]

The eigenvalues are 1 + i and 1 − i. Use the eigenvalue 1 + i to find the eigenvector
by solving

[ −i   1 ] v = 0
[ −1  −i ]

Then

v = [ 1 ] = [ 1 ] + i [ 0 ] = v1 + i v2
    [ i ]   [ 0 ]     [ 1 ]

and so the two corresponding vector solutions are

x1 = e^t ( cos(t) [ 1 ] − sin(t) [ 0 ] )
                  [ 0 ]          [ 1 ]

and

x2 = e^t ( sin(t) [ 1 ] + cos(t) [ 0 ] )
                  [ 0 ]          [ 1 ]
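As a sanity check, these two real solutions can be verified numerically. A minimal NumPy sketch (the derivatives below are written out by hand using the product rule, so no symbolic algebra is needed):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

def x1(t):
    # x1 = e^t (cos(t) v1 - sin(t) v2) with v1 = (1, 0), v2 = (0, 1)
    return np.exp(t) * np.array([np.cos(t), -np.sin(t)])

def x1_prime(t):
    # derivative of x1, computed by hand with the product rule
    return np.exp(t) * np.array([np.cos(t) - np.sin(t),
                                 -np.sin(t) - np.cos(t)])

def x2(t):
    # x2 = e^t (sin(t) v1 + cos(t) v2)
    return np.exp(t) * np.array([np.sin(t), np.cos(t)])

def x2_prime(t):
    return np.exp(t) * np.array([np.sin(t) + np.cos(t),
                                 np.cos(t) - np.sin(t)])

# Both solutions satisfy x' = A x at every sampled point.
for t in np.linspace(-2.0, 2.0, 9):
    assert np.allclose(x1_prime(t), A @ x1(t))
    assert np.allclose(x2_prime(t), A @ x2(t))
```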
(c) Repeated Eigenvalues: If an eigenvalue is repeated we need to analyse the matrix
A more carefully to find the corresponding vector solutions.
Definition 1. The Algebraic Multiplicity (AM) of an eigenvalue λ is the number of
times it appears as a root of the characteristic equation det(A − λI) = 0.
For each eigenvalue λ, we need to find the same number of vector solutions as its
Algebraic Multiplicity. If AM = 1, then we are in one of the situations described
above so the more difficult (and interesting) case is when AM > 1.
Definition 2. The Geometric Multiplicity (GM) of an eigenvalue λ is the number of linearly independent eigenvectors corresponding to λ (or the dimension of the
eigenspace). An equivalent (and probably more helpful) definition is that the Geometric Multiplicity is the number of degrees of freedom in the eigenvector equation
(A − λI)v = 0.
These two numbers always have the property that 1 ≤ GM ≤ AM. In the case
where GM = AM, we have "enough" eigenvectors to produce solutions of the form
x = e^{λt} v corresponding to λ and we proceed as above.
Thus the most interesting case is when GM < AM. To see what happens in general,
you are referred to the textbook; however, to avoid being cast adrift in a sea of
confusing abstraction and notation we shall only consider a special case here.
Definition 3. If λ is an eigenvalue for A, then v is a corresponding generalized
eigenvector if (A − λI)^d v = 0 for some positive integer d.
Note that an eigenvector is also a generalized eigenvector (take d = 1) but as d
increases there will be more solutions which are not eigenvectors.
Consider a real eigenvalue λ for which AM = 2 and GM = 1, so that we are looking
for two vector solutions corresponding to λ but we only have one linearly independent
eigenvector v1 , and hence only one vector solution x1 = e^{λt} v1 .
Solve the equation (A − λI)^2 v = 0, which will have two degrees of freedom, so there
will be two linearly independent generalized eigenvectors. It is strongly recommended
(though not necessary) that you choose the eigenvector v1 as one of the generalized
eigenvectors, since this gives us the solution x1 = e^{λt} v1 which we already know. If
v2 is the other generalized eigenvector, then the second solution corresponding to λ
is
x2 = e^{λt} ( I + t(A − λI) ) v2 .
For more details on how this solution arises, see the Appendix on matrix exponentials.
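To see this formula in action on the smallest possible case, here is a NumPy sketch using an illustrative 2 × 2 defective matrix (chosen purely for demonstration; it is not from the notes). The derivative is approximated with a central finite difference:

```python
import numpy as np

# Illustrative defective matrix: lambda = 1 has AM = 2 but GM = 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
I = np.eye(2)

# v2 = (0, 1) is a generalized eigenvector:
# (A - I)^2 v2 = 0 although (A - I) v2 != 0.
v2 = np.array([0.0, 1.0])
assert np.allclose(np.linalg.matrix_power(A - lam * I, 2) @ v2, 0.0)

def x2(t):
    # x2(t) = e^{lambda t} (I + t (A - lambda I)) v2
    return np.exp(lam * t) * (I + t * (A - lam * I)) @ v2

def x2_prime(t, h=1e-6):
    # central finite difference approximation of the derivative
    return (x2(t + h) - x2(t - h)) / (2 * h)

# x2 really does solve the homogeneous equation x' = A x.
for t in np.linspace(0.0, 2.0, 5):
    assert np.allclose(x2_prime(t), A @ x2(t), atol=1e-4)
```

Note that x2(0) = v2, so this solution starts at the generalized eigenvector itself.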
(2) Particular Solution: To find the particular solution xp for an inhomogeneous first order
linear system
x0 = Ax + f
we use the Variation of Parameters method.
Definition 4. The Fundamental Matrix
Y = ( x1  x2  . . .  xn )
is the n × n matrix formed by writing the n column vector solutions of the homogeneous
equation next to each other.
It is easy to check that Y′ = AY and that Y is invertible because its determinant is the
Wronskian and hence non-zero. It can then be shown that the particular vector solution
is
xp = Y ∫ Y^{−1} f dt
You should compute xp in the following order:
(a) Find Y^{−1}, which is an n × n matrix.
(b) Find Y^{−1} f, which is a vector.
(c) Find ∫ Y^{−1} f dt, which is a vector. You compute the integral of a vector by finding an
anti-derivative of each entry, and this is a case where you can choose the constants
of integration to be 0 because we are only looking for one particular solution.
(d) Find xp = Y ∫ Y^{−1} f dt, which is a vector.
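The four steps can be sketched numerically. The following NumPy example uses a simple diagonal system chosen purely for illustration (it is not from the notes); steps (a)–(c) are carried out by hand in the comments:

```python
import numpy as np

# Illustrative system (assumed for demonstration): x' = A x + f
# with A = diag(1, 2) and constant forcing f = (1, 1).
A = np.diag([1.0, 2.0])
f = np.array([1.0, 1.0])

def Y(t):
    # fundamental matrix: columns are the solutions e^t e1 and e^{2t} e2
    return np.diag([np.exp(t), np.exp(2.0 * t)])

def int_Yinv_f(t):
    # steps (a)-(c) done by hand: Y^{-1} f = (e^{-t}, e^{-2t}), so an
    # antiderivative (constants of integration chosen to be 0) is:
    return np.array([-np.exp(-t), -0.5 * np.exp(-2.0 * t)])

def xp(t):
    # step (d): xp = Y * integral(Y^{-1} f dt)
    return Y(t) @ int_Yinv_f(t)

for t in np.linspace(0.0, 1.0, 5):
    # Here xp is the constant vector (-1, -1/2), so xp' = 0 = A xp + f.
    assert np.allclose(xp(t), [-1.0, -0.5])
    assert np.allclose(A @ xp(t) + f, 0.0)
```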
(3) General Solution: The general solution is the vector
x = xh + xp = c1 x1 + c2 x2 + · · · + cn xn + xp
(4) Initial Value Problems: An IVP will have the form
x′ = Ax + f ,  x(t0) = x0
In this case you can solve for the constants c1 , c2 , . . . , cn . Substitute the initial condition
into the general solution, and this will give you n equations in the n unknown constants.
Examples: See if you can verify the solutions to the following problems.
(1) In this example we have a repeated eigenvalue but there are sufficient eigenvectors.

x′ = [ 2   2  −4 ] x
     [ 2  −1  −2 ]
     [ 4   2  −6 ]

The characteristic equation for the matrix is
−(λ + 1)(λ + 2)^2 = 0
so the eigenvalues are −1 and −2. First consider the eigenvalue λ = −2, which has
Algebraic Multiplicity 2, and for which the eigenvector equation is

[ 4  2  −4 ]
[ 2  1  −2 ] v = 0.
[ 4  2  −4 ]

This equation has two degrees of freedom so the eigenvalue λ = −2 has Geometric
Multiplicity 2 and thus it has two linearly independent eigenvectors. For example we could choose

v1 = [  1 ]   and   v2 = [ 1 ]
     [ −2 ]              [ 0 ]
     [  0 ]              [ 1 ]

The eigenvalue λ = −1 has Algebraic Multiplicity 1 and a corresponding eigenvector

v3 = [ 2 ]
     [ 1 ] .
     [ 2 ]

The general solution of the system is

x = c1 e^{−2t} [  1 ]  + c2 e^{−2t} [ 1 ]  + c3 e^{−t} [ 2 ]
               [ −2 ]               [ 0 ]              [ 1 ]
               [  0 ]               [ 1 ]              [ 2 ]
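These eigenpairs are easy to verify numerically; a short NumPy check:

```python
import numpy as np

A = np.array([[2.0, 2.0, -4.0],
              [2.0, -1.0, -2.0],
              [4.0, 2.0, -6.0]])

# The eigenvalues are -2 (with multiplicity 2) and -1.
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [-2.0, -2.0, -1.0])

# Each chosen eigenvector satisfies (A - lambda I) v = 0, i.e. A v = lambda v.
v1 = np.array([1.0, -2.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])
v3 = np.array([2.0, 1.0, 2.0])
assert np.allclose(A @ v1, -2.0 * v1)
assert np.allclose(A @ v2, -2.0 * v2)
assert np.allclose(A @ v3, -1.0 * v3)
```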
(2) In this example there is a repeated eigenvalue and we need to find a generalized eigenvector.

x′ = [ −1   2   1 ] x
     [  0  −1   0 ]
     [ −1  −3  −3 ]

The characteristic equation for the matrix is (as in the previous example)
−(λ + 1)(λ + 2)^2 = 0
and so the eigenvalues are −1 and −2. First consider the eigenvalue λ = −2, which has
Algebraic Multiplicity 2, and for which the eigenvector equation is

[  1   2   1 ]
[  0   1   0 ] v = 0
[ −1  −3  −1 ]

which only has one degree of freedom. We can choose the eigenvector

v1 = [  1 ]
     [  0 ]
     [ −1 ]

but now we need to also find a generalized eigenvector. Compute

(A − λI)^2 = [  1   2   1 ]^2  = [ 0   1  0 ]
             [  0   1   0 ]      [ 0   1  0 ]
             [ −1  −3  −1 ]      [ 0  −2  0 ]

and then solve the equation

[ 0   1  0 ]
[ 0   1  0 ] v = 0
[ 0  −2  0 ]

which has two degrees of freedom. As always, we should choose the eigenvector v1 as
one solution. The other solution v2 is a generalized eigenvector and must be linearly
independent of v1, so for example we can choose

v2 = [ 1 ]
     [ 0 ]
     [ 0 ]

The two vector solutions of the system are then

x1 = e^{−2t} [  1 ]
             [  0 ]
             [ −1 ]

and

x2 = e^{−2t} ( I + t [  1   2   1 ] ) [ 1 ]  = e^{−2t} [ 1 + t ]
                     [  0   1   0 ]   [ 0 ]            [   0   ]
                     [ −1  −3  −1 ]   [ 0 ]            [  −t   ]

The eigenvalue λ = −1 has Algebraic Multiplicity 1 and a corresponding eigenvector

v3 = [  1 ]
     [  1 ]
     [ −2 ]

and so the general solution of the system is

x = c1 e^{−2t} [  1 ]  + c2 e^{−2t} [ 1 + t ]  + c3 e^{−t} [  1 ]
               [  0 ]               [   0   ]              [  1 ]
               [ −1 ]               [  −t   ]              [ −2 ]
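The generalized-eigenvector computation above can be confirmed numerically with NumPy, again using a central finite difference to check that x2 solves the system:

```python
import numpy as np

A = np.array([[-1.0, 2.0, 1.0],
              [0.0, -1.0, 0.0],
              [-1.0, -3.0, -3.0]])
lam = -2.0
I = np.eye(3)
B = A - lam * I   # this is A + 2I

# v2 = (1, 0, 0) is a genuine generalized eigenvector for lambda = -2:
# (A + 2I)^2 v2 = 0 although (A + 2I) v2 != 0.
v2 = np.array([1.0, 0.0, 0.0])
assert not np.allclose(B @ v2, 0.0)
assert np.allclose(np.linalg.matrix_power(B, 2) @ v2, 0.0)

def x2(t):
    # x2(t) = e^{-2t} (I + t(A + 2I)) v2 = e^{-2t} (1 + t, 0, -t)
    return np.exp(lam * t) * (I + t * B) @ v2

def x2_prime(t, h=1e-6):
    return (x2(t + h) - x2(t - h)) / (2 * h)

# x2 satisfies x' = A x at every sampled point.
for t in np.linspace(0.0, 2.0, 5):
    assert np.allclose(x2_prime(t), A @ x2(t), atol=1e-4)
```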
(3) Solve the initial value problem for the following system of equations
x′ = −x + y + t ,  y′ = x − y − t ,  x(0) = 1 ,  y(0) = 1
to find formulas for x(t) and y(t).
First put the system of equations into matrix/vector form x′ = Ax + f where

A = [ −1   1 ]   and   f = [  t ]
    [  1  −1 ]             [ −t ]

and then find the eigenvalues of A, which are 0 and −2. The corresponding eigenvectors
are

v1 = [ 1 ]   and   v2 = [ −1 ]
     [ 1 ]              [  1 ]

so a Fundamental Matrix for the homogeneous system is

Y = [ 1  −e^{−2t} ]
    [ 1   e^{−2t} ] .

Now we compute the particular solution xp:

(a) Y^{−1} = (1/2) [    1        1    ]
                   [ −e^{2t}   e^{2t} ]

(b) Y^{−1} f = [      0      ]
               [ −e^{2t} t   ]

(c) ∫ Y^{−1} f dt = [                0                 ]
                    [ −(1/2) e^{2t} t + (1/4) e^{2t}   ]

(d) xp = Y ∫ Y^{−1} f dt = [  t/2 − 1/4 ]
                           [ −t/2 + 1/4 ]

The general solution is

x = xh + xp = c1 [ 1 ]  + c2 e^{−2t} [ −1 ]  + [  t/2 − 1/4 ]
                 [ 1 ]               [  1 ]    [ −t/2 + 1/4 ]

Finally, remember that x = [ x ]
                           [ y ]
and substitute the initial values to find c1 = 1 and c2 = −1/4. The solutions are

x = 3/4 + (1/4) e^{−2t} + t/2
y = 5/4 − (1/4) e^{−2t} − t/2
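You can check that these formulas satisfy both the differential equations and the initial conditions; a short NumPy verification using a central finite difference for the derivatives:

```python
import numpy as np

def x(t):
    return 0.75 + 0.25 * np.exp(-2.0 * t) + 0.5 * t

def y(t):
    return 1.25 - 0.25 * np.exp(-2.0 * t) - 0.5 * t

# Initial conditions: x(0) = y(0) = 1.
assert np.isclose(x(0.0), 1.0)
assert np.isclose(y(0.0), 1.0)

def deriv(f, t, h=1e-6):
    # central finite difference approximation of f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

for t in np.linspace(0.0, 3.0, 7):
    # x' = -x + y + t and y' = x - y - t
    assert np.isclose(deriv(x, t), -x(t) + y(t) + t, atol=1e-4)
    assert np.isclose(deriv(y, t), x(t) - y(t) - t, atol=1e-4)
```

Notice that x′ + y′ = 0, so x(t) + y(t) stays equal to its initial value 2 for all t.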
Appendix: Matrix Exponentials
Definition 5. Let A be a square matrix and let t be a variable. Then
e^{tA} = I + tA + (t^2/2!) A^2 + (t^3/3!) A^3 + · · ·
This is the definition but we NEVER (well almost never) use it to actually find etA because
computing an infinite series of matrices is too hard. Usually we want to compute etA v since this
is a solution of x0 = Ax for any vector v.
Let λ be an eigenvalue of A, and note that tA = t(λI + A − λI). Then, because tλI is a diagonal
matrix, we can use the usual law of exponents and so
e^{tA} = e^{tλI} e^{t(A−λI)}
       = e^{λt} ( I + t(A − λI) + (t^2/2!)(A − λI)^2 + (t^3/3!)(A − λI)^3 + · · · )
Now if v1 is an eigenvector, then (A − λI)v1 = 0, so
e^{tA} v1 = e^{λt} I v1 = e^{λt} v1
and if v2 is a generalized eigenvector, then (A − λI)^d v2 = 0, so
e^{tA} v2 = e^{λt} ( I + t(A − λI) + (t^2/2!)(A − λI)^2 + · · · + (t^{d−1}/(d−1)!)(A − λI)^{d−1} ) v2
Finally, if you ever need to compute a matrix exponential, find a Fundamental Matrix Y for the
equation x′ = Ax and then use the formula
e^{tA} = Y(t) · Y(0)^{−1}
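Assuming SciPy is available, this last formula can be checked numerically against `scipy.linalg.expm` for the 2 × 2 system from the IVP example above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [1.0, -1.0]])

def Y(t):
    # fundamental matrix built from the eigenpairs
    # (lambda = 0, v = (1, 1)) and (lambda = -2, v = (-1, 1))
    return np.array([[1.0, -np.exp(-2.0 * t)],
                     [1.0, np.exp(-2.0 * t)]])

Y0_inv = np.linalg.inv(Y(0.0))

# e^{tA} = Y(t) Y(0)^{-1} at every sampled time.
for t in [0.0, 0.5, 1.0, 2.0]:
    assert np.allclose(expm(t * A), Y(t) @ Y0_inv)
```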