Introduction to Systems and General Solutions to Systems

1 Introduction to Systems
We now turn our attention to systems of first order linear equations. We will be
reviewing some linear algebra as we go, as much of the theory of these systems
depends on the theory of matrices:
1. Systems of Differential Equations
2. Linear Algebra Review
3. Systems as Matrix Equations
4. Higher Order Equations as Systems
2 Systems of Differential Equations
Recall that in algebra we are sometimes interested in solving a system of n equations in n unknowns, of the following form:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\,\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
\]
Here, the a_{ij} and b_i are constant coefficients. The solution to this system is a set of n values x_1, x_2, ..., x_n which satisfy all the equations simultaneously.
Similarly, a system of n first order linear equations will be a set of n equations involving n functions x_1(t) through x_n(t), as follows:
\[
\begin{aligned}
x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t) \\
x_2' &= p_{21}(t)x_1 + p_{22}(t)x_2 + \cdots + p_{2n}(t)x_n + g_2(t) \\
&\;\,\vdots \\
x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t)
\end{aligned}
\]
In this case, the p_{ij}(t) and g_i(t) are coefficient functions, and the solution is a set of n functions x_1 to x_n that satisfy all the equations simultaneously.
As we would expect, we say that the system is homogeneous if all of the g_i(t) are identically zero, and non-homogeneous if some of the g_i(t) are non-zero.
3 Linear Algebra Review
More details are contained in §4.1 of the text, but here is a brief review of basic properties of matrices and vectors (a short computational sketch follows the list):
1. Addition, Subtraction, and Scalar Multiplication:
Matrices and vectors of the same size can be added or subtracted by adding
or subtracting the individual components. Multiplication by a scalar simply
multiplies each element in a matrix by that scalar.
Example:
Let
\[
A = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 0 & 7 \end{pmatrix}
\quad\text{and}\quad
B = \begin{pmatrix} 1 & 0 & -2 \\ -2 & 1 & 3 \end{pmatrix}.
\]
Then
\[
A - 2B = \begin{pmatrix} 2 & -1 & 4 \\ 3 & -2 & 1 \end{pmatrix}.
\]
2. Transposes:
The transpose A^T of a matrix A is the matrix obtained by switching the rows and columns of A.
Example:
Let
\[
A = \begin{pmatrix} 3+i & -1 \\ i & 2 \\ 1-2i & 0 \end{pmatrix}.
\]
Then
\[
A^T = \begin{pmatrix} 3+i & i & 1-2i \\ -1 & 2 & 0 \end{pmatrix}.
\]
3. Multiplication:
If A is an l × m matrix and B is an m × n matrix, then the product C = AB is defined and is an l × n matrix. We find the entry in row i and column j by multiplying the entries in row i of A with the corresponding entries in column j of B and adding:
\[
c_{ij} = \sum_{k=1}^{m} a_{ik}\,b_{kj}
\]
Example:
Let
\[
A = \begin{pmatrix} 2 & 0 \\ -1 & 3 \end{pmatrix}, \quad
B = \begin{pmatrix} -1 & 1 \\ 4 & 2 \end{pmatrix}, \quad\text{and}\quad
\mathbf{x} = \begin{pmatrix} 2 \\ 3 \end{pmatrix}.
\]
Find:
\[
AB = \begin{pmatrix} -2 & 2 \\ 11 & 5 \end{pmatrix}, \qquad
\mathbf{x}^T\mathbf{x} = 13, \qquad
\mathbf{x}\mathbf{x}^T = \begin{pmatrix} 4 & 6 \\ 6 & 9 \end{pmatrix}.
\]
4. Determinants:
The determinant of a 2 × 2 matrix is straightforward:
\[
\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc
\]
Determinants of larger matrices are defined recursively, by cofactor expansion. We expand along any row or column of a matrix, multiplying the entry (-1)^{i+j} a_{ij} by the determinant of the submatrix formed by deleting the ith row and jth column. We sum the results to get the determinant.
For example,
\[
\begin{vmatrix} 1 & 3 & -2 \\ 4 & 0 & 1 \\ 1 & 0 & 0 \end{vmatrix}
= (1)\begin{vmatrix} 0 & 1 \\ 0 & 0 \end{vmatrix}
- (3)\begin{vmatrix} 4 & 1 \\ 1 & 0 \end{vmatrix}
+ (-2)\begin{vmatrix} 4 & 0 \\ 1 & 0 \end{vmatrix}
= 0 + 3 + 0 = 3
\]
An interesting application of determinants is that the system Ax = 0 has a non-zero solution vector x exactly when the determinant |A| = 0.
5. Row Reduction and Inverses:
An inverse of a matrix A is a matrix A^{-1} such that A^{-1}A = AA^{-1} = I, where I is the identity matrix. (The identity matrix has ones on the main diagonal and zeros elsewhere; when the multiplication is defined, AI = IA = A for any matrix A.)
To find A^{-1}, put the identity matrix to the right of A like so:
\[
\left(\begin{array}{cc|cc} a_{11} & a_{12} & 1 & 0 \\ a_{21} & a_{22} & 0 & 1 \end{array}\right)
\]
and then perform row operations on both halves until the left matrix is the identity. The right half will then contain the inverse A^{-1}.
The legal row operations are:
(a) Interchange two rows.
(b) Multiply an entire row by a constant.
(c) Add a multiple of one row to another row.
Example:
Find the inverse of
\[
\begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}: \qquad
\left(\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 0 & 2 & 0 & 1 \end{array}\right)
\to
\left(\begin{array}{cc|cc} 1 & 0 & 1/2 & -1/4 \\ 0 & 1 & 0 & 1/2 \end{array}\right)
\]
so the inverse is
\[
\begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}^{-1}
= \begin{pmatrix} 1/2 & -1/4 \\ 0 & 1/2 \end{pmatrix}.
\]
Alternatively, we can compute the inverse of 2 × 2 matrices using the formula
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1}
= \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
\]
Note that we start with the reciprocal of the determinant. It turns out that A is invertible if and only if |A| ≠ 0.
6. Derivatives and Antiderivatives:
If we have a matrix containing functions, such as
\[
A = \begin{pmatrix} e^{2t} & 2t \\ \cos(t) & 3 \end{pmatrix}
\]
then we define the derivative of the matrix to be the derivative of the component functions:
\[
\frac{dA}{dt} = \begin{pmatrix} 2e^{2t} & 2 \\ -\sin(t) & 0 \end{pmatrix}
\]
Example:
Find the derivative:
\[
\frac{d}{dt} \begin{pmatrix} t^2 & t+2 \\ 7\sin(t) & 5e^t \end{pmatrix}
= \begin{pmatrix} 2t & 1 \\ 7\cos(t) & 5e^t \end{pmatrix}
\]
Of course this means we can find antiderivatives of matrix functions just by taking antiderivatives of each component function, and then adding a matrix of arbitrary constants.
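In practice, all of the operations reviewed above can be checked by machine. Here is a minimal sketch using NumPy and SymPy (our choice of tools, not part of the original notes), applied to the matrices from the examples above.

```python
import numpy as np
import sympy as sp

# Products from the multiplication example
A = np.array([[2, 0], [-1, 3]])
B = np.array([[-1, 1], [4, 2]])
x = np.array([[2], [3]])

print(A @ B)    # [[-2  2], [11  5]]
print(x.T @ x)  # [[13]]  (a 1x1 matrix)
print(x @ x.T)  # [[4 6], [6 9]]

# Determinant and inverse of the 2x2 matrix from the inverse example
M = np.array([[2.0, 1.0], [0.0, 2.0]])
print(np.linalg.det(M))  # 4.0
print(np.linalg.inv(M))  # [[0.5 -0.25], [0. 0.5]]

# Entrywise derivative of a matrix of functions
t = sp.symbols('t')
F = sp.Matrix([[sp.exp(2*t), 2*t], [sp.cos(t), 3]])
print(F.diff(t))  # Matrix([[2*exp(2*t), 2], [-sin(t), 0]])
```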
4 Systems as Matrix Equations
With the conventions we have adopted above, we can rewrite a system of first order linear differential equations
\[
\begin{aligned}
x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t) \\
x_2' &= p_{21}(t)x_1 + p_{22}(t)x_2 + \cdots + p_{2n}(t)x_n + g_2(t) \\
&\;\,\vdots \\
x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t)
\end{aligned}
\]
as a single differential equation using matrices:
\[
\mathbf{x}' = P(t)\mathbf{x} + \mathbf{g}(t)
\]
Here we have
\[
\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad
P(t) = \begin{pmatrix}
p_{11}(t) & p_{12}(t) & \cdots & p_{1n}(t) \\
p_{21}(t) & p_{22}(t) & \cdots & p_{2n}(t) \\
\vdots & \vdots & \ddots & \vdots \\
p_{n1}(t) & p_{n2}(t) & \cdots & p_{nn}(t)
\end{pmatrix}, \quad
\mathbf{g}(t) = \begin{pmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{pmatrix}
\]
Example:
We can easily confirm that
\[
\mathbf{x}(t) = \begin{pmatrix} e^{3t} \\ 2e^{3t} \end{pmatrix}
\]
is a solution to the homogeneous equation
\[
\mathbf{x}' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix} \mathbf{x},
\]
since
\[
\mathbf{x}' = \begin{pmatrix} 3e^{3t} \\ 6e^{3t} \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}
\begin{pmatrix} e^{3t} \\ 2e^{3t} \end{pmatrix}
= \begin{pmatrix} 3e^{3t} \\ 6e^{3t} \end{pmatrix}.
\]
However, it may not yet be clear how to come up with such a solution.
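As an aside not in the original notes, a check like this is easy to automate. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Matrix([[1, 1], [4, 1]])
x = sp.Matrix([sp.exp(3*t), 2*sp.exp(3*t)])

# x is a solution exactly when x' - P x is identically zero
print(sp.simplify(x.diff(t) - P * x))  # Matrix([[0], [0]])
```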
Of course, if we have n first order equations, we should be able to solve for n initial conditions, x_1(t_0), x_2(t_0), ..., x_n(t_0). This suggests that we need to have n constants c_1 to c_n in our general solution.
Example:
As in the previous example, it is not hard to confirm that
\[
\mathbf{x}(t) = c_1 \begin{pmatrix} 2e^{3t} \\ e^{3t} \end{pmatrix}
+ c_2 \begin{pmatrix} e^{2t} \\ e^{2t} \end{pmatrix}
= \begin{pmatrix} 2c_1 e^{3t} + c_2 e^{2t} \\ c_1 e^{3t} + c_2 e^{2t} \end{pmatrix}
\]
is a solution to the homogeneous equation
\[
\mathbf{x}' = \begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix} \mathbf{x},
\]
since
\[
\mathbf{x}' = \begin{pmatrix} 6c_1 e^{3t} + 2c_2 e^{2t} \\ 3c_1 e^{3t} + 2c_2 e^{2t} \end{pmatrix}
\]
and we also have
\[
\begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} 2c_1 e^{3t} + c_2 e^{2t} \\ c_1 e^{3t} + c_2 e^{2t} \end{pmatrix}
= \begin{pmatrix} 6c_1 e^{3t} + 2c_2 e^{2t} \\ 3c_1 e^{3t} + 2c_2 e^{2t} \end{pmatrix}
\]
(Check!)
Can we then require x_1(0) = 3 and x_2(0) = 1, for example? Plugging in t = 0 gives
\[
\begin{aligned}
2c_1 + c_2 &= 3 \\
c_1 + c_2 &= 1
\end{aligned}
\]
This is the same as the matrix equation
\[
\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \begin{pmatrix} 3 \\ 1 \end{pmatrix}
\]
or the augmented matrix
\[
\left(\begin{array}{cc|c} 2 & 1 & 3 \\ 1 & 1 & 1 \end{array}\right).
\]
Solving this system gives our needed c_1 and c_2: subtracting the second equation from the first gives c_1 = 2, and then c_2 = 1 - c_1 = -1.
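Numerically, solving this small linear system is a one-liner. A sketch with NumPy (our tooling choice, not from the notes):

```python
import numpy as np

# Psi(0) c = x(0): coefficient matrix and initial data from the example
Psi0 = np.array([[2.0, 1.0], [1.0, 1.0]])
x0 = np.array([3.0, 1.0])

c = np.linalg.solve(Psi0, x0)
print(c)  # [ 2. -1.]  i.e. c1 = 2, c2 = -1
```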
4.1 Existence and Uniqueness
We mention the existence and uniqueness result for first order linear systems of equations:

Theorem 4.1: Consider the initial value problem
\[
\mathbf{y}'(t) = P(t)\mathbf{y}(t) + \mathbf{g}(t), \qquad \mathbf{y}(t_0) = \mathbf{y}_0,
\]
where y and g are n × 1 vector functions, and P(t) is an n × n matrix function. Let the n² components of P(t) be continuous on the interval (a, b), and let t_0 be in (a, b). Then the initial value problem has a unique solution that exists on the entire interval a < t < b.

Our goal therefore must be to find this unique solution. We will begin establishing what we need during the next class.
5 Higher Order Equations as Systems
We will consider how higher order equations can be rewritten as first order systems. One reason why we are interested in systems of first order equations is that any nth order linear ordinary differential equation can be transformed into a system of n first order linear equations. If you have an nth order equation in the variable y, simply set x_1 = y(t), x_2 = y'(t), ..., x_n = y^{(n-1)}(t) and use the fact that x_1' = x_2, x_2' = x_3, and so forth to set up your system.
Example:
Convert y'' + y' = y + t into a system of two first order equations.

We set x_1 = y and x_2 = y'. Then we have x_1' = y' = x_2, and since y'' + y' = y + t, we have y'' = -y' + y + t, or x_2' = x_1 - x_2 + t. So we have the system of equations
\[
\begin{aligned}
x_1' &= x_2 \\
x_2' &= x_1 - x_2 + t
\end{aligned}
\]
We see that this system is linear, but non-homogeneous.
We can of course write the system as a matrix equation:
\[
\begin{pmatrix} x_1'(t) \\ x_2'(t) \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
+ \begin{pmatrix} 0 \\ t \end{pmatrix}
\]
This means that the theory of higher order differential equations can be rewritten in terms of systems of first order equations. One reason this is useful is that we can adapt numerical methods such as Euler's method for use on systems of first order equations. Thus, if we can rewrite higher order equations as systems, we will be able to use numerical methods to approximate their solutions.
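To make this concrete, here is a minimal sketch of Euler's method applied to a system x' = P(t)x + g(t). The function name, step size, and sample initial data are our own illustrative choices, not from the text.

```python
import numpy as np

def euler_system(P, g, t0, x0, h, n_steps):
    """Euler's method for the system x' = P(t) x + g(t).

    P: function returning an (n x n) matrix; g: function returning an
    n-vector; x0: initial condition at t0; h: step size.
    """
    t, x = t0, np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + h * (P(t) @ x + g(t))  # one Euler step
        t = t + h
    return t, x

# The system obtained from y'' + y' = y + t, with made-up initial data
P = lambda t: np.array([[0.0, 1.0], [1.0, -1.0]])
g = lambda t: np.array([0.0, t])
print(euler_system(P, g, 0.0, [1.0, 0.0], 0.01, 100))
```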
Let’s try one more example.
Example:
Convert y''' - y'' + 2y = 3t, y(0) = 4, y'(0) = -1, y''(0) = 2 into a first order system with appropriate initial conditions.
\[
\begin{aligned}
y_1(t) &= y(t), & y_1'(t) &= y_2(t), & y_1(0) &= 4 \\
y_2(t) &= y'(t), & y_2'(t) &= y_3(t), & y_2(0) &= -1 \\
y_3(t) &= y''(t), & y_3'(t) &= y_3(t) - 2y_1(t) + 3t, & y_3(0) &= 2
\end{aligned}
\]
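Assuming the euler_system sketch given earlier, this converted system could be integrated directly (again illustrative, not from the text):

```python
# y''' - y'' + 2y = 3t as the first order system y1' = y2, y2' = y3,
# y3' = y3 - 2*y1 + 3t, with y1(0) = 4, y2(0) = -1, y3(0) = 2.
P = lambda t: np.array([[0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0],
                        [-2.0, 0.0, 1.0]])
g = lambda t: np.array([0.0, 0.0, 3.0 * t])
print(euler_system(P, g, 0.0, [4.0, -1.0, 2.0], 0.01, 100))
```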
6 General Solutions to Systems
We continue with systems of equations. We will consider the form of the general solution and introduce the Wronskian for systems:
1. General Solutions
2. The Wronskian
3. Linear Independence
4. Fundamental Matrices
7 General Solutions
We saw last time that we could use a column vector v as a solution to a system of first order differential equations. It is easy to see that if v_1 and v_2 are both solutions to the (matrix) equation
\[
\mathbf{y}' = A\mathbf{y},
\]
then so is any linear combination y = c_1 v_1 + c_2 v_2, since
\[
\mathbf{y}' = c_1 \mathbf{v}_1' + c_2 \mathbf{v}_2'
\]
and
\[
A\mathbf{y} = A(c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2) = c_1 A\mathbf{v}_1 + c_2 A\mathbf{v}_2 = c_1 \mathbf{v}_1' + c_2 \mathbf{v}_2'
\]
from basic properties of matrices and the assumption that v_1 and v_2 are both solutions to the system. Thus, we have determined that the principle of superposition holds for solutions to homogeneous systems of first order equations. (See Theorem 4.2 in the text.)
This leaves us looking for a general solution to a system of equations. We know of
course that if we have n first order equations, we ought to be able to solve for n initial
conditions. Generalizing from our experience with second order and higher equations,
we will attempt to find n solutions y1 , . . . , yn to the system, and we will guess that
c1 y1 + · · · + cn yn might be the general solution.
Once we have found n solutions y_1, ..., y_n, it will prove useful to construct a matrix Ψ(t) which has columns given by the solutions. In other words, if we have solutions
\[
\mathbf{y}_1 = \begin{pmatrix} y_{1,1} \\ y_{2,1} \\ \vdots \\ y_{n,1} \end{pmatrix}, \quad
\mathbf{y}_2 = \begin{pmatrix} y_{1,2} \\ y_{2,2} \\ \vdots \\ y_{n,2} \end{pmatrix}, \quad \ldots, \quad
\mathbf{y}_n = \begin{pmatrix} y_{1,n} \\ y_{2,n} \\ \vdots \\ y_{n,n} \end{pmatrix}
\]
we will then form the matrix
\[
\Psi(t) = (\mathbf{y}_1\; \mathbf{y}_2\; \ldots\; \mathbf{y}_n)
= \begin{pmatrix}
y_{1,1} & y_{1,2} & \cdots & y_{1,n} \\
y_{2,1} & y_{2,2} & \cdots & y_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
y_{n,1} & y_{n,2} & \cdots & y_{n,n}
\end{pmatrix}.
\]
Then any linear combination of these solutions can be written easily as
\[
\Psi(t)\mathbf{c}
= \begin{pmatrix}
y_{1,1} & y_{1,2} & \cdots & y_{1,n} \\
y_{2,1} & y_{2,2} & \cdots & y_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
y_{n,1} & y_{n,2} & \cdots & y_{n,n}
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
= c_1 \mathbf{y}_1 + c_2 \mathbf{y}_2 + \cdots + c_n \mathbf{y}_n.
\]
(You can verify the equality if you wish. It represents a special form of matrix multiplication.)
Example:
Consider the system
\[
\mathbf{y}' = \begin{pmatrix} 9 & -4 \\ 15 & -7 \end{pmatrix} \mathbf{y}.
\]
Confirm that both of the following are solutions:
\[
\mathbf{y}_1 = \begin{pmatrix} 2e^{3t} \\ 3e^{3t} \end{pmatrix}: \quad
\mathbf{y}_1' = \begin{pmatrix} 6e^{3t} \\ 9e^{3t} \end{pmatrix}
= \begin{pmatrix} 9 & -4 \\ 15 & -7 \end{pmatrix}
\begin{pmatrix} 2e^{3t} \\ 3e^{3t} \end{pmatrix}
\]
\[
\mathbf{y}_2 = \begin{pmatrix} 2e^{-t} \\ 5e^{-t} \end{pmatrix}: \quad
\mathbf{y}_2' = \begin{pmatrix} -2e^{-t} \\ -5e^{-t} \end{pmatrix}
= \begin{pmatrix} 9 & -4 \\ 15 & -7 \end{pmatrix}
\begin{pmatrix} 2e^{-t} \\ 5e^{-t} \end{pmatrix}
\]
So this means any linear combination of y_1 and y_2 is a solution. We form our solution matrix
\[
\Psi(t) = \begin{pmatrix} 2e^{3t} & 2e^{-t} \\ 3e^{3t} & 5e^{-t} \end{pmatrix}
\]
and we know we have solutions of the form c_1 y_1(t) + c_2 y_2(t), or Ψ(t)c where c is a 2 × 1 column vector of constants.
Can we then solve the system above together with the initial condition
\[
\mathbf{y}(0) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}?
\]
We must attempt to solve the equation Ψ(0)c = y(0), or
\[
\begin{pmatrix} 2 & 2 \\ 3 & 5 \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
This is just a system of equations, which we can solve using row reduction on the augmented matrix
\[
\left(\begin{array}{cc|c} 2 & 2 & 1 \\ 3 & 5 & 1 \end{array}\right)
\to
\left(\begin{array}{cc|c} 1 & 0 & 3/4 \\ 0 & 1 & -1/4 \end{array}\right)
\]
So we get c_1 = 3/4 and c_2 = -1/4, and our solution is
\[
\mathbf{y}(t) = [\mathbf{y}_1\; \mathbf{y}_2] \begin{pmatrix} 3/4 \\ -1/4 \end{pmatrix}
= \frac{3}{4} \begin{pmatrix} 2e^{3t} \\ 3e^{3t} \end{pmatrix}
- \frac{1}{4} \begin{pmatrix} 2e^{-t} \\ 5e^{-t} \end{pmatrix}.
\]
(It is not difficult to confirm both that this satisfies the system and that it satisfies the initial conditions.)
We are of course somewhat cautious. We know from solving second order (and higher) equations that it is not generally enough just to have the right number of solutions to combine; the solutions need to be "really different" from each other, so that we may satisfy whatever initial conditions we wish. In the above example, we were able to satisfy our initial conditions. How can we determine in general if a linear combination of a set of solutions will be able to satisfy any initial conditions, and therefore form a fundamental set of solutions?
8 The Wronskian
We have seen above that if we find n solutions y_1, ..., y_n to a system y' = P(t)y, then we can attempt to write all solutions in the form c_1 y_1(t) + · · · + c_n y_n(t), or equivalently Ψ(t)c, where Ψ(t) = [y_1 ... y_n] and c is an n × 1 column vector of constants.
Then to be able to solve for an initial condition y(t_0) = y_0 is to be able to solve the matrix equation
\[
\Psi(t_0)\mathbf{c} = \mathbf{y}_0.
\]
It is possible to solve this uniquely exactly when the matrix Ψ(t_0) is non-singular, which happens when the determinant |Ψ(t_0)| ≠ 0.
Therefore, we define the Wronskian of the vector functions y_1, ..., y_n at a point t to be the determinant
\[
W(\mathbf{y}_1, \ldots, \mathbf{y}_n)(t) = |\Psi(t)| =
\begin{vmatrix}
y_{1,1}(t) & y_{1,2}(t) & \cdots & y_{1,n}(t) \\
y_{2,1}(t) & y_{2,2}(t) & \cdots & y_{2,n}(t) \\
\vdots & \vdots & \ddots & \vdots \\
y_{n,1}(t) & y_{n,2}(t) & \cdots & y_{n,n}(t)
\end{vmatrix}.
\]
Then it is probably not surprising that the Wronskian identifies fundamental sets of solutions:

Theorem 4.3: (p. 231) Let y_1(t), y_2(t), ..., y_n(t) be a set of n solutions to the (order n) system
\[
\mathbf{y}' = P(t)\mathbf{y}, \qquad a < t < b,
\]
where the matrix function P(t) is continuous on (a, b). Let W(t) represent the Wronskian of this set of solutions. If there is a point t_0 in (a, b) where W(t_0) ≠ 0, then y_1(t), y_2(t), ..., y_n(t) form a fundamental set of solutions to the equation.
Example:
In the previous example we showed that
\[
\begin{pmatrix} 2e^{3t} \\ 3e^{3t} \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 2e^{-t} \\ 5e^{-t} \end{pmatrix}
\]
were solutions to the given differential equation. Show that they form a fundamental set of solutions. We check the Wronskian:
\[
W(t) = \begin{vmatrix} 2e^{3t} & 2e^{-t} \\ 3e^{3t} & 5e^{-t} \end{vmatrix}
= 10e^{2t} - 6e^{2t} = 4e^{2t} \neq 0
\]
We see that W(t) ≠ 0 for any value of t, and so these form a fundamental set of solutions.
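For larger systems this determinant is tedious by hand. A minimal sketch of the same check in SymPy (our tooling choice, not from the notes):

```python
import sympy as sp

t = sp.symbols('t')
Psi = sp.Matrix([[2*sp.exp(3*t), 2*sp.exp(-t)],
                 [3*sp.exp(3*t), 5*sp.exp(-t)]])
W = sp.simplify(Psi.det())
print(W)  # 4*exp(2*t), which is never zero
```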
Notice in the above example, our Wronskian was never zero. This is not unusual. In parallel with the case for second order and higher equations, the Wronskian of a set of n solutions to an order n equation is either always zero or always non-zero on the interval where a unique solution is guaranteed to exist.
Finally, we note that, as we saw in the example above, solving for initial conditions once we have a fundamental set of solutions is fairly straightforward: just solve the system
\[
\Psi(t_0)\mathbf{c} = \mathbf{y}_0
\]
for the constants c_1, c_2, ..., c_n.
We continue with systems of equations. We will discuss linear independence and fundamental matrices.
9 Linear Independence
Recall that we say that a set of vectors x_1, x_2, ..., x_n is linearly independent if there is no collection of n constants c_1, c_2, ..., c_n (not all zero) for which
\[
c_1 \mathbf{x}_1 + c_2 \mathbf{x}_2 + \cdots + c_n \mathbf{x}_n = \mathbf{0}
\]
If such a collection of constants exists, then we say the x_i are linearly dependent.
How can we tell if a set of vectors is linearly independent?
If we have n vectors x_1, x_2, ..., x_n of length n, they are linearly independent if and only if the only solution to the system
\[
X\mathbf{c} = (\mathbf{x}_1\; \mathbf{x}_2\; \cdots\; \mathbf{x}_n)
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
= \begin{pmatrix}
x_{1,1} & x_{1,2} & \ldots & x_{1,n} \\
x_{2,1} & x_{2,2} & \ldots & x_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \ldots & x_{n,n}
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\]
is the zero vector c = 0. Therefore, if the matrix X above is non-singular, the vectors are linearly independent. If not, they are dependent.
Example:
Determine whether the vectors
\[
\mathbf{x}_1 = \begin{pmatrix} 0 \\ -2 \\ 2 \end{pmatrix}, \quad
\mathbf{x}_2 = \begin{pmatrix} 1 \\ 0 \\ 3 \end{pmatrix}, \quad\text{and}\quad
\mathbf{x}_3 = \begin{pmatrix} 3 \\ 2 \\ 7 \end{pmatrix}
\]
are linearly independent or not.
We see quickly that they are linearly dependent, since the determinant of X = (x_1 x_2 x_3) is zero:
\[
\begin{vmatrix} 0 & 1 & 3 \\ -2 & 0 & 2 \\ 2 & 3 & 7 \end{vmatrix}
= 0 - (1)\begin{vmatrix} -2 & 2 \\ 2 & 7 \end{vmatrix}
+ (3)\begin{vmatrix} -2 & 0 \\ 2 & 3 \end{vmatrix}
= 18 - 18 = 0
\]
This means that there are constants c_1, c_2, and c_3 (not all zero) such that c_1 x_1 + c_2 x_2 + c_3 x_3 = 0. (If we wished to find the constants, it would not be difficult; just solve the system Xc = 0 using row reduction, as in the sketch below.)
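If we did want the constants, here is a sketch of that computation using SymPy's null space routine (our choice of tool, not from the notes):

```python
import sympy as sp

X = sp.Matrix([[0, 1, 3],
               [-2, 0, 2],
               [2, 3, 7]])
print(X.det())        # 0, so the columns are linearly dependent
print(X.nullspace())  # basis for the solutions of X c = 0;
                      # here [Matrix([[1], [-3], [1]])], i.e.
                      # x1 - 3*x2 + x3 = 0
```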
Of course, all of this means that a non-zero Wronskian assures us that we have a linearly independent set of solutions to a differential equation. (Such a set is of course required to get a general solution.)
10 The Fundamental Matrix
Given a set of n solutions, y_1, ..., y_n, to a linear system of n first order equations, we have already seen how to form a solution matrix
\[
\Psi = (\mathbf{y}_1, \ldots, \mathbf{y}_n)
= \begin{pmatrix}
y_{1,1} & \ldots & y_{1,n} \\
\vdots & \ddots & \vdots \\
y_{n,1} & \ldots & y_{n,n}
\end{pmatrix}
\]
If the n solutions form a fundamental set of solutions (in other words, if the y_i are linearly independent solutions), then we call Ψ a fundamental matrix for the system. We have already seen that any solution to the system y' = P(t)y must have the form Ψ(t)c, where Ψ(t) is our fundamental matrix and c is a column vector of constants:

Theorem 4.5: (p. 234) Let y_1(t), y_2(t), ..., y_n(t) be a fundamental set of solutions of
\[
\mathbf{y}' = P(t)\mathbf{y}, \qquad a < t < b,
\]
where the n × n matrix function P(t) is continuous on (a, b). Let
\[
\Psi(t) = [\mathbf{y}_1(t), \mathbf{y}_2(t), \ldots, \mathbf{y}_n(t)]
\]
denote the n × n matrix function formed from the fundamental set. Let ŷ_1(t), ŷ_2(t), ..., ŷ_n(t) be any other set of n solutions of the differential equation, and let
\[
\hat\Psi(t) = [\hat{\mathbf{y}}_1(t), \hat{\mathbf{y}}_2(t), \ldots, \hat{\mathbf{y}}_n(t)]
\]
denote the (n × n) matrix formed from this other set of solutions. Then
1. There is a unique (n × n) constant matrix C such that
\[
\hat\Psi(t) = \Psi(t)C, \qquad a < t < b.
\]
2. Moreover, ŷ_1(t), ŷ_2(t), ..., ŷ_n(t) is also a fundamental set of solutions if and only if the determinant of C is non-zero.

It is interesting to note and not difficult to prove (see p. 234 in the text) that the fundamental matrix is itself a solution to the matrix differential equation
\[
\Psi'(t) = P(t)\Psi(t).
\]
Example:
We saw before that the two functions
\[
\begin{pmatrix} 2e^{3t} \\ 3e^{3t} \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 2e^{-t} \\ 5e^{-t} \end{pmatrix}
\]
were solutions to
\[
\mathbf{y}' = \begin{pmatrix} 9 & -4 \\ 15 & -7 \end{pmatrix} \mathbf{y}.
\]
We also proved (by checking the Wronskian) that these functions formed a fundamental set of solutions. So a fundamental matrix for the system above is
\[
\Psi(t) = \begin{pmatrix} 2e^{3t} & 2e^{-t} \\ 3e^{3t} & 5e^{-t} \end{pmatrix}
\]
We note that Ψ satisfies the matrix equation Ψ'(t) = P(t)Ψ(t), since
\[
\Psi'(t) = \begin{pmatrix} 6e^{3t} & -2e^{-t} \\ 9e^{3t} & -5e^{-t} \end{pmatrix}
\]
while
\[
P(t)\Psi(t) = \begin{pmatrix} 9 & -4 \\ 15 & -7 \end{pmatrix}
\begin{pmatrix} 2e^{3t} & 2e^{-t} \\ 3e^{3t} & 5e^{-t} \end{pmatrix}
= \begin{pmatrix} 6e^{3t} & -2e^{-t} \\ 9e^{3t} & -5e^{-t} \end{pmatrix}
\]
also.
Find the map from the fundamental matrix for these solutions to the alternative solution matrix
\[
\hat\Psi = \begin{pmatrix} e^{-t} & e^{3t} \\ \frac{5}{2}e^{-t} & \frac{3}{2}e^{3t} \end{pmatrix}
\]
We need to solve
\[
\begin{pmatrix} 2e^{3t} & 2e^{-t} \\ 3e^{3t} & 5e^{-t} \end{pmatrix}
\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}
= \begin{pmatrix} e^{-t} & e^{3t} \\ \frac{5}{2}e^{-t} & \frac{3}{2}e^{3t} \end{pmatrix}
\]
We see for example that we need 2e^{3t} c_{11} + 2e^{-t} c_{21} = e^{-t}, so we will require c_{11} = 0 and c_{21} = 1/2. Continuing in this way, we determine that our matrix C is
\[
C = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}
\]
Theorem 4.5 therefore assures us that the new solution matrix Ψ̂ is also a fundamental matrix, since |C| = -1/4 ≠ 0.
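One can confirm the factorization Ψ̂ = ΨC symbolically; a minimal sketch, again assuming SymPy:

```python
import sympy as sp

t = sp.symbols('t')
Psi = sp.Matrix([[2*sp.exp(3*t), 2*sp.exp(-t)],
                 [3*sp.exp(3*t), 5*sp.exp(-t)]])
C = sp.Matrix([[0, sp.Rational(1, 2)],
               [sp.Rational(1, 2), 0]])

# Psi * C should reproduce the alternative solution matrix
print(sp.simplify(Psi * C))  # Matrix([[exp(-t), exp(3*t)],
                             #         [5*exp(-t)/2, 3*exp(3*t)/2]])
print(C.det())               # -1/4, non-zero as required
```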