13 Solving nonhomogeneous equations: Variation of the constants method
We are still solving
Ly = f,    (1)
where L is a linear differential operator with constant coefficients and f is a given function. Equation
(1) is a linear nonhomogeneous ODE with constant coefficients, whose general solution is, of course,
y(t) = yh (t) + yp (t),
where yh (t) is a general solution to the homogeneous equation Ly = 0 and yp (t) is a particular (any)
solution to (1). In the last lecture we saw how to guess yp (t) by looking at the expression for f (t).
The major drawback is, of course, that the admissible choices of f (t) were quite limited. Here we will see how to deal
with this problem in the general case.
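For those who like to double-check such claims with a computer algebra system, here is a minimal
Python/sympy sketch of the decomposition y = yh + yp; the sample equation y′′ − 3y′ + 2y = e^{3t} and
the use of sympy are my own illustrative choices, not part of the lecture.

    # Illustrative check of the structure y = yh + yp on a sample equation (sympy sketch).
    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    # Sample nonhomogeneous constant-coefficient equation: y'' - 3y' + 2y = exp(3t).
    ode = sp.Eq(y(t).diff(t, 2) - 3*y(t).diff(t) + 2*y(t), sp.exp(3*t))
    print(sp.dsolve(ode, y(t)))
    # prints something like Eq(y(t), C1*exp(t) + C2*exp(2*t) + exp(3*t)/2)
    # homogeneous part: C1*exp(t) + C2*exp(2*t); particular part: exp(3*t)/2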
13.1 Variation of the constants for the second order ODE
Here I would like to start with the second order ODE
y′′ + p(t)y′ + q(t)y = f(t).    (2)
The general solution to the homogeneous equation is given by (as we know from Lecture 11)
yh (t) = C1 y1 (t) + C2 y2 (t),
where y1 (t) and y2 (t) are linearly independent solutions to the homogeneous equation. Now the crucial
assumption: Assume that C1 and C2 are not constants, but functions of t. That means that we are
looking for a solution to (2) in the form
y(t) = C1 (t)y1 (t) + C2 (t)y2 (t).
From now on I will suppress the dependence on t to simplify the arithmetic, but you should remember
that C1 , C2 , y1 , y2 are functions of t. We have:
y ′ = C1′ y1 + C2′ y2 + C1 y1′ + C2 y2′ .
Here we make a mysterious and somewhat arbitrary assumption that
C1′ y1 + C2′ y2 = 0.
(Just a few words: I will explain this mystery in the third part of our course, and how one can come up
with this assumption. For now a vague explanation can be as follows: We actually need two conditions
to determine two functions C1 (t) and C2 (t), and one ODE will provide us with only one condition,
hence we are free to choose another condition, which can be taken as above.)
Using the above, we have
y ′′ = C1′ y1′ + C2′ y2′ + C1 y1′′ + C2 y2′′ .
Now we plug y ′ and y ′′ into the original equation:
C1′ y1′ + C2′ y2′ + C1 y1′′ + C2 y2′′ + p(C1 y1′ + C2 y2′) + q(C1 y1 + C2 y2) = f =⇒
C1 (y1′′ + py1′ + qy1 ) + C2 (y2′′ + py2′ + qy2 ) + C1′ y1′ + C2′ y2′ = f =⇒
C1′ y1′ + C2′ y2′ = f.
Finally, we have
C1′ y1 + C2′ y2 = 0,
C1′ y1′ + C2′ y2′ = f,
for two unknown functions C1 (t), C2 (t). Note that the determinant of the system matrix is simply the
Wronskian
W(t) = y1 y2′ − y2 y1′ ≠ 0,
because y1 and y2 are linearly independent. After some algebra we can find (fill in the gaps):
C1′ = −W^{−1}(t) y2 f,
C2′ = W^{−1}(t) y1 f,
from where by integration we will find C1 and C2 . Therefore, the general solution for (2) is given by
y(t) = A1 y1(t) + A2 y2(t) − y1(t) ∫ W^{−1}(τ) y2(τ) f(τ) dτ + y2(t) ∫ W^{−1}(τ) y1(τ) f(τ) dτ,
where the integrals are evaluated without arbitrary constants, and A1 , A2 are new arbitrary constants.
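If you want to experiment with this formula, here is a small Python/sympy sketch that performs exactly
these steps (Wronskian, the two integrals without integration constants, the resulting particular
solution); the helper name particular_solution is mine.

    # Variation of parameters for y'' + p(t)y' + q(t)y = f(t), given two linearly
    # independent homogeneous solutions y1(t) and y2(t) (sympy sketch).
    import sympy as sp

    t = sp.symbols('t', positive=True)

    def particular_solution(y1, y2, f):
        W = sp.simplify(y1*sp.diff(y2, t) - y2*sp.diff(y1, t))   # Wronskian W(t)
        C1 = sp.integrate(sp.simplify(-y2*f/W), t)               # C1(t), integration constant dropped
        C2 = sp.integrate(sp.simplify(y1*f/W), t)                # C2(t), integration constant dropped
        return sp.simplify(C1*y1 + C2*y2)

    # Example 1 below: y'' - 2y' + y = exp(t)/t with y1 = exp(t), y2 = t*exp(t).
    print(particular_solution(sp.exp(t), t*sp.exp(t), sp.exp(t)/t))
    # -> t*exp(t)*log(t) - t*exp(t) (possibly printed in a factored form), as in Example 1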
Example 1. Find the general solution to
y′′ − 2y′ + y = e^t/t,    t > 0.
For the homogeneous counterpart we’ll find that
yh(t) = C1 e^t + C2 te^t,
and y1 = e^t, y2 = te^t. We'll find the Wronskian
W(t) = det [ e^t, te^t ; e^t, e^t + te^t ] = e^{2t} ≠ 0.
Hence
C1′ = −e^{−2t} · te^t · (e^t/t) = −1 =⇒ C1 = −t,
and
C2′ = e^{−2t} · e^t · (e^t/t) = 1/t =⇒ C2 = ln |t| = ln t,
therefore, the general solution is
y(t) = C1 e^t + C2 te^t − te^t + te^t ln t,
where I again used the conventional letters C1 and C2 to denote arbitrary constants.
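A quick machine verification of this answer, again with sympy (optional; it just substitutes the
formula back into the equation):

    # Substitute the general solution of Example 1 back into y'' - 2y' + y = exp(t)/t.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    C1, C2 = sp.symbols('C1 C2')
    y = C1*sp.exp(t) + C2*t*sp.exp(t) - t*sp.exp(t) + t*sp.exp(t)*sp.log(t)

    residual = sp.diff(y, t, 2) - 2*sp.diff(y, t) + y - sp.exp(t)/t
    print(sp.simplify(residual))   # 0, so the formula indeed solves the equation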
The main question, of course, is why we bothered to learn the method of an educated guess if we have
such a nice and general method. The problem here is that very often for the variation of parameters
the integrals are quite difficult or even impossible to evaluate in closed form (see your homework problems for nice
examples).
13.2 Variation of the constants for the n-th order ODE
There is a quite straightforward generalization of the variation of parameters method for the case of the
n-th order equation
Ly = f.
In this case we have that
yh = C1 y1 + . . . + Cn yn ,
where {y1 , . . . , yn } is a fundamental set of solutions. Assuming that each Ci is a function of t one can
arrive at
C1′ y1 + . . . + Cn′ yn = 0,
C1′ y1′ + . . . + Cn′ yn′ = 0,
. . .
C1′ y1^(n−2) + . . . + Cn′ yn^(n−2) = 0,
C1′ y1^(n−1) + . . . + Cn′ yn^(n−1) = f,
which is a system of linear algebraic equations with respect to unknowns C1′ , . . . , Cn′ . Note that the
determinant of the matrix of the system is the Wronskian and hence not zero. The solution can be
found using any of the many available methods (Cramer's rule, Gaussian elimination, etc.).
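For concreteness, here is a small sympy sketch that builds and solves this system for a third order
example; the equation y′′′ − y′ = f(t) with fundamental set {1, e^t, e^{−t}} is my own illustrative choice.

    # Solve for C1', ..., Cn' in the n-th order variation of parameters (here n = 3).
    # Illustrative equation: y''' - y' = f(t), fundamental set {1, exp(t), exp(-t)}.
    import sympy as sp

    t = sp.symbols('t')
    f = sp.Function('f')(t)
    ys = [sp.Integer(1), sp.exp(t), sp.exp(-t)]

    n = len(ys)
    M = sp.Matrix([[sp.diff(yj, t, k) for yj in ys] for k in range(n)])  # row k holds y_j^(k)
    rhs = sp.Matrix([0]*(n - 1) + [f])                                   # only the last equation has f

    Cprime = M.LUsolve(rhs).applyfunc(sp.simplify)
    print(list(Cprime))   # C1' = -f(t), C2' = f(t)*exp(-t)/2, C3' = f(t)*exp(t)/2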
13.3 Solving linear ODE with nonconstant coefficients
This section does not directly belong to this lecture and discusses some additional points, which were
included in the homework problems. If you did not have any questions while solving the recent homework, you
can safely skip this section.
Note that in the discussion of the linear ODE we had two cases: The general theory was presented
for the general linear equations with nonconstant coefficients, whereas the actual solution methods
were discussed only for the equations with constant coefficients. A natural question to ask is whether
this fact is a simple coincidence or has a deeper reason. And the answer is simple: in the
realm of ODEs, analytical solutions (i.e., solutions that can be written down with a formula)
are a very rare species. In the vast majority of cases there are no regular methods to obtain a closed
form solution (which does not mean, of course, that there are no methods to tackle the problem).
The only remarkable exception is the linear ODE with constant coefficients, for which the full theory
exists. This is the primary reason we concentrate mostly on the linear ODE with constant coefficients.
However, in some cases a smart substitution can reduce the problem at hand to a solvable one, as
you should be aware by now from your homework problems. Here is a basic example, treated in most
ODE textbooks.
Example 2 (Cauchy–Euler equation). The Cauchy–Euler ODE is a linear ODE with nonconstant coefficients of the form
t^n y^(n) + t^{n−1} a_{n−1} y^(n−1) + . . . + t a_1 y′ + a_0 y = 0.
This equation can be fully solved, which amounts to finding a fundamental solution set {y1 , . . . , yn }.
To show how it can be done, let me start with the second order Cauchy–Euler equation
t^2 y′′ + pty′ + qy = 0.
Now I make a change of the independent variable
t = e^x,
hence our unknown function y(t) becomes a new function u(x) = y(e^x). We need to determine how the
derivatives will change. For this recall the chain rule:
(d/dx) f(g(x)) = (d/dt) f(t) · (d/dx) g(x),
where t = g(x).
We also have that y(t) = u(log t). Hence,
(d/dt) y(t) = (d/dt) u(log t) = (d/dx) u(x) · (dx/dt) = u′ · (1/t) = u′ e^{−x}.
Similarly,
(d^2/dt^2) y(t) = (d/dt) [u′(log t)/t] = (d/dx) (u′(x) e^{−x}) · (dx/dt) = (u′′ − u′)/t^2.
From this we’ll find that
ty′ = u′,    t^2 y′′ = u′′ − u′,
where the prime on y denotes the derivative with respect to t and the prime on u the derivative with
respect to x. For u(x) we finally obtain
u′′ + (p − 1)u′ + qu = 0,
which is a linear ODE with constant coefficients of the kind we have solved multiple times. Similarly, it can be
shown that the n-th order Cauchy–Euler equation can be reduced by the same change of variables to
the case of a constant coefficient ODE.
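Here is a short sympy sketch that carries out the substitution t = e^x symbolically and recovers the
constant-coefficient form obtained above (p and q are kept as symbols; this only checks the computation,
it is not a solver):

    # Check that t = exp(x) turns t^2 y'' + p t y' + q y into u'' + (p - 1) u' + q u.
    import sympy as sp

    x, p, q = sp.symbols('x p q')
    u = sp.Function('u')

    t = sp.exp(x)
    dy_dt = sp.diff(u(x), x) / t          # chain rule: d/dt = (1/t) d/dx
    d2y_dt2 = sp.diff(dy_dt, x) / t       # apply the same rule once more

    lhs = sp.simplify(sp.expand(t**2*d2y_dt2 + p*t*dy_dt + q*u(x)))
    print(lhs)   # equivalent to u'' + (p - 1)*u' + q*u, in sympy's Derivative notation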
Example 3. Consider a concrete example:
t^3 y′′ − 2ty = 6 ln t,    t > 0.
This equation after dividing by t takes the form of the Cauchy–Euler equation
t^2 y′′ − 2y = (6 ln t)/t.
The change of variables t = e^x, u(x) = y(e^x), brings this equation into the form
u′′ − u′ − 2u = 6xe^{−x}.
We have
uh(x) = C1 e^{2x} + C2 e^{−x},
therefore a particular solution should be sought in the form
up(x) = (Ax + B)xe^{−x}.
Plugging this expression into the equation, we’ll find (fill in the details)
2A − 3B = 0,   −6A = 6   =⇒   A = −1,   B = −2/3.
Therefore,
up(x) = −(x + 2/3)xe^{−x},
and
u(x) = C1 e^{2x} + C2 e^{−x} − (x + 2/3)xe^{−x}.
Returning to the original variable yields
y(t) = u(ln t) = C1 t^2 + C2/t − (ln t + 2/3)(ln t)/t.
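And a final substitution check of this answer in sympy (optional):

    # Substitute the answer of Example 3 back into t^3 y'' - 2 t y = 6 ln t.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    C1, C2 = sp.symbols('C1 C2')
    y = C1*t**2 + C2/t - (sp.log(t) + sp.Rational(2, 3))*sp.log(t)/t

    residual = t**3*sp.diff(y, t, 2) - 2*t*y - 6*sp.log(t)
    print(sp.simplify(residual))   # 0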
Finally, I would like to mention another important fact. If we know one nonzero solution to
y′′ + p(t)y′ + q(t)y = 0,
we can reduce its order by one. Indeed, let y1(t) ≠ 0 be such a solution. Make the substitution
y(t) = y1 (t)u(t),
where u(t) is a new unknown function. Then we get
y1′′ u + 2y1′ u′ + y1 u′′ + p(t)(y1′ u + y1 u′ ) + q(t)y1 u = 0,
or
y1 u′′ + (2y1′ + p(t)y1) u′ + (y1′′ + p(t)y1′ + q(t)y1) u = 0.
Therefore, using the fact that y1 is a solution and another change v = u′ , we obtain
y1 v′ + (2y1′ + p(t)y1) v = 0,
which is a first order equation.
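To see the reduction of order at work, here is a short sympy sketch on a sample equation; the equation
y′′ − (2/t^2)y = 0 with the known solution y1 = t^2 (the homogeneous counterpart of Example 3) is my own
illustrative choice.

    # Reduction of order for y'' - (2/t^2) y = 0, given the known solution y1 = t^2.
    import sympy as sp

    t = sp.symbols('t', positive=True)
    y1 = t**2
    p = sp.Integer(0)                        # coefficient of y' in this sample equation

    # v = u' satisfies y1 v' + (2 y1' + p y1) v = 0, which is separable:
    v = sp.simplify(sp.exp(-sp.integrate((2*sp.diff(y1, t) + p*y1)/y1, t)))  # one nonzero solution
    u = sp.integrate(v, t)                   # integrate back, since u' = v
    y2 = sp.simplify(y1*u)
    print(y2)   # -1/(3*t): a second, linearly independent solution (any nonzero multiple works)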