MATH34032: Green’s Functions, Integral Equations and the Calculus of Variations
Section 3
Integral Equations
Integral Operators and Linear Integral Equations
As we saw in Section 1 on operator notation, we work with functions defined in some suitable function space. For example, f(x), g(x) may live in the space of continuous real-valued functions on [a, b], i.e. C[a, b]. We also saw that it is possible to define integral as well as differential operators acting on functions. Theorem 2.8 is an example of an integral operator:

u(x) = \int_a^b G(x, y) f(y) \, dy,

where G(x, y) is a Green's function.
Definition 3.1: An integral operator, K, is an integral of the form

Kf(x) = \int_a^b K(x, y) f(y) \, dy,

where K(x, y) is a given (real-valued) function of two variables, called the kernel.
The equation Kf = g defines a map from a function f to a new function g on the interval [a, b], e.g.

K : C[a, b] → C[a, b],   K : f ↦ g.
Theorem 3.2: The integral operator K is linear
K (αf + βg) = αKf + βKg.
Example 1: The Laplace Transform is an integral operator

Lf = \int_0^∞ \exp(-sx) f(x) \, dx = \bar{f}(s),

with kernel K(x, s) = K(s, x) = exp(−sx). Let us recap its important properties.
Theorem 3.3:

(i) L[df/dx] = s \bar{f}(s) − f(0),

(ii) L[d²f/dx²] = s² \bar{f}(s) − s f(0) − f′(0),

(iii) L[\int_0^x f(y) \, dy] = \frac{1}{s} \bar{f}(s).

(iv) Convolution: let

f(x) ∗ g(x) = \int_0^x f(y) g(x − y) \, dy,

then

L(f(x) ∗ g(x)) = \bar{f}(s) \bar{g}(s).
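The convolution property (iv) can be checked numerically. This is a sketch, not part of the notes: f(x) = e^{−x} and g(x) = x are illustrative choices, and the infinite Laplace integral is truncated at x = 50, which is harmless for s > 0 since the integrand decays exponentially.

```python
# Numerical check of Theorem 3.3(iv): L(f * g)(s) = f̄(s) ḡ(s).
# f and g below are illustrative choices, not taken from the notes.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)
g = lambda x: x

def laplace(h, s):
    # Truncated Laplace transform; the tail beyond x = 50 is negligible for s > 0.
    val, _ = quad(lambda x: np.exp(-s * x) * h(x), 0.0, 50.0)
    return val

def convolution(x):
    # (f * g)(x) = ∫_0^x f(y) g(x - y) dy
    val, _ = quad(lambda y: f(y) * g(x - y), 0.0, x)
    return val

s = 2.0
lhs = laplace(convolution, s)        # L(f * g)(s)
rhs = laplace(f, s) * laplace(g, s)  # f̄(s) ḡ(s)
print(lhs, rhs)  # the two values agree
```

For these choices f̄(s) = 1/(s + 1) and ḡ(s) = 1/s², so both sides equal 1/12 at s = 2.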
Recall that a differential equation is an equation containing an unknown function under a differential operator. The analogous notion exists for integral operators.
Definition 3.4: An integral equation is an equation containing an unknown function
under an integral operator.
Definition 3.5: (a) A linear Fredholm integral equation of the first kind has the form

Kf = g,   \int_a^b K(x, y) f(y) \, dy = g(x);

(b) A linear Fredholm integral equation of the second kind has the form

f − λKf = g,   f(x) − λ \int_a^b K(x, y) f(y) \, dy = g(x),

where the kernel K(x, y) and forcing (or inhomogeneous) term g(x) are known functions and f(x) is the unknown function. Also, λ (∈ R or C) is a parameter.
Definition 3.6: If g (x) ≡ 0 the integral equation is called homogeneous and a value of
λ for which
λKφ = φ
possesses a non-trivial solution φ is called an eigenvalue of K corresponding to the eigenfunction φ.
Note: λ is the reciprocal of what we have previously called an eigenvalue.
Definition 3.7: Volterra integral equations of the first and second kind take the forms

(a) \int_a^x K(x, y) f(y) \, dy = g(x),

(b) f(x) − λ \int_a^x K(x, y) f(y) \, dy = g(x),

respectively.
Note: Volterra equations may be considered a special case of Fredholm equations. Suppose a ≤ y ≤ b and put

K_1(x, y) = K(x, y) for a ≤ y ≤ x,   K_1(x, y) = 0 for x < y ≤ b,

then

\int_a^b K_1(x, y) f(y) \, dy = \int_a^x K_1(x, y) f(y) \, dy + \int_x^b K_1(x, y) f(y) \, dy = \int_a^x K(x, y) f(y) \, dy + 0.
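The truncated-kernel identity above can be confirmed by quadrature. This sketch uses an arbitrary kernel and integrand of my own choosing, not ones from the notes:

```python
# With K1(x, y) = K(x, y) for y <= x and 0 for y > x, the Fredholm
# integral over [a, b] reduces to the Volterra integral over [a, x].
# K and f below are illustrative choices.
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
K = lambda x, y: np.exp(x - y)
f = lambda y: np.cos(y)

def K1(x, y):
    return K(x, y) if a <= y <= x else 0.0

x = 0.7
# points=[x] tells quad where the integrand's kink is
fredholm, _ = quad(lambda y: K1(x, y) * f(y), a, b, points=[x])
volterra, _ = quad(lambda y: K(x, y) * f(y), a, x)
print(fredholm, volterra)  # equal
```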
Conversion of ODEs to integral equations
Example 2: Boundary value problems
Let
Lu (x) = λρ (x) u (x) + g (x)
where L is a linear ODE operator as in Section 1 (of at least second order), ρ (x) > 0 is
continuous, g (x) is piecewise continuous and the unknown u (x) is subject to homogeneous
boundary conditions at x = a, b.
Suppose L has a known Green’s function under the given boundary conditions, G (x, y)
say, then applying Theorem 2.10,
u(x) = \int_a^b G(x, y) [λρ(y) u(y) + g(y)] \, dy
     = λ \int_a^b G(x, y) ρ(y) u(y) \, dy + \int_a^b G(x, y) g(y) \, dy
     = λ \int_a^b G(x, y) ρ(y) u(y) \, dy + h(x).

The latter is a Fredholm integral equation of the second kind with kernel G(x, y) ρ(y) and forcing term

h(x) = \int_a^b G(x, y) g(y) \, dy.
The ODE and integral equation are equivalent: solving the ODE subject to the BCs is
equivalent to solving the integral equation.
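A numerical sketch of this equivalence (not from the notes): take L = d²/dx² on [0, 1] with u(0) = u(1) = 0, whose Green's function is the standard G(x, y) = x(y − 1) for x ≤ y and y(x − 1) for x > y, set ρ = 1, λ = 1, and choose g so that u(x) = sin(πx) is the exact solution. The Fredholm equation u = λ∫Gρu dy + h is then discretized by Nyström (trapezoidal) quadrature and solved as a linear system.

```python
# BVP-to-Fredholm conversion, solved by Nystrom quadrature and compared
# with the known exact solution u(x) = sin(pi x).  The Green's function
# for u'' with Dirichlet conditions on [0, 1] is an assumed standard result.
import numpy as np

N = 201
x = np.linspace(0.0, 1.0, N)
w = np.full(N, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5                                   # trapezoid weights

X, Y = np.meshgrid(x, x, indexing="ij")
G = np.where(X <= Y, X * (Y - 1.0), Y * (X - 1.0))   # Green's function G(x, y)

lam = 1.0
g = -(np.pi**2 + 1.0) * np.sin(np.pi * x)      # forcing chosen so u = sin(pi x)
h = G @ (w * g)                                # h(x) = integral of G g

# Discretized second-kind Fredholm equation: (I - lam * G W) u = h
u = np.linalg.solve(np.eye(N) - lam * G * w, h)

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)  # small discretization error
```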
Example 3: Initial value problems
Let
u′′ (x) + a (x) u′ (x) + b (x) u (x) = g (x) ,
where u (0) = α and u′ (0) = β and a (x) , b (x) and g (x) are known functions.
Change the dummy variable from x to y and then integrate with respect to y from 0 to z:

\int_0^z u′′(y) \, dy + \int_0^z a(y) u′(y) \, dy + \int_0^z b(y) u(y) \, dy = \int_0^z g(y) \, dy

and using integration by parts on the second term on the left hand side (differentiating a(y) and integrating u′(y)),

[u′(y)]_0^z + [a(y) u(y)]_0^z − \int_0^z a′(y) u(y) \, dy + \int_0^z b(y) u(y) \, dy = \int_0^z g(y) \, dy,

yields

u′(z) − β + a(z) u(z) − a(0) α − \int_0^z [a′(y) − b(y)] u(y) \, dy = \int_0^z g(y) \, dy.

Integrating now with respect to z over 0 to x gives

[u(z)]_0^x − βx + \int_0^x a(z) u(z) \, dz − a(0) αx − \int_0^x \int_0^z [a′(y) − b(y)] u(y) \, dy \, dz = \int_0^x \int_0^z g(y) \, dy \, dz.   (3.1)
This can be simplified by appealing to the following theorem.
Theorem 3.8: For any f(x),

\int_0^x \int_0^z f(y) \, dy \, dz = \int_0^x (x − y) f(y) \, dy.

Proof: Interchanging the order of integration over the triangular domain in the yz-plane reveals that the integral equals

\int_0^x \int_0^z f(y) \, dy \, dz = \int_0^x f(y) \int_y^x dz \, dy

which is trivially integrated over z to give

\int_0^x (x − y) f(y) \, dy

as required. Alternatively, integrating the repeated integral by parts with u(z) = \int_0^z f(y) \, dy, v′(z) = 1, i.e. u′(z) = f(z), v(z) = z, gives

\int_0^x \int_0^z f(y) \, dy \, dz = \left[ z \int_0^z f(y) \, dy \right]_{z=0}^{z=x} − \int_0^x z f(z) \, dz
= x \int_0^x f(y) \, dy − \int_0^x z f(z) \, dz
= \int_0^x x f(y) \, dy − \int_0^x y f(y) \, dy
= \int_0^x (x − y) f(y) \, dy.
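Theorem 3.8 is easy to check by quadrature. In this sketch f(y) = cos 3y and x = 1.3 are arbitrary choices of mine; both sides should also equal the closed form (1 − cos 3x)/9 for this f:

```python
# Numerical check of Theorem 3.8: the repeated integral of f over the
# triangle 0 <= y <= z <= x equals the single integral of (x - y) f(y).
import numpy as np
from scipy.integrate import quad, dblquad

f = lambda y: np.cos(3.0 * y)
x = 1.3

# dblquad: outer variable z in [0, x], inner variable y in [0, z]
double, _ = dblquad(lambda y, z: f(y), 0.0, x, 0.0, lambda z: z)
single, _ = quad(lambda y: (x - y) * f(y), 0.0, x)
print(double, single)  # equal
```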
Returning to equation (3.1), it may now be written as

u(x) − α − βx − a(0) αx + \int_0^x a(y) u(y) \, dy − \int_0^x (x − y) [a′(y) − b(y)] u(y) \, dy = \int_0^x (x − y) g(y) \, dy.

Thus,

u(x) + \int_0^x \{a(y) − (x − y) [a′(y) − b(y)]\} u(y) \, dy = \int_0^x (x − y) g(y) \, dy + [β + a(0) α] x + α.
This is a Volterra integral equation of the second kind.
Notes:
(1) BVPs correspond to Fredholm equations.
(2) Initial value problems (IVPs) correspond to Volterra equations.
(3) The integral equation formulation implicitly contains the boundary/initial conditions;
in the ODE they are imposed separately.
Example 4
Write the Initial Value Problem
u′′ (y) + yu′(y) + 2u(y) = 0
subject to u(0) = α, u′(0) = β as a Volterra integral equation.
Integrate with respect to y between 0 and z:

u′(z) − β + \int_0^z y u′(y) \, dy + 2 \int_0^z u(y) \, dy = 0.

Integrate the second term by parts:

\int_0^z y u′(y) \, dy = [y u(y)]_0^z − \int_0^z u(y) \, dy = z u(z) − \int_0^z u(y) \, dy

and substitute back to get

u′(z) − β + z u(z) + \int_0^z u(y) \, dy = 0.

Integrate again, with respect to z between 0 and x:

u(x) − α − βx + \int_0^x z u(z) \, dz + \int_0^x \int_0^z u(y) \, dy \, dz = 0.

Change the dummy variable in the first integral term to y and use Theorem 3.8 in the last term to get

u(x) − α − βx + \int_0^x y u(y) \, dy + \int_0^x (x − y) u(y) \, dy = 0.

Simplification gives

u(x) + x \int_0^x u(y) \, dy = α + βx.
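The derived Volterra equation can be verified against a direct numerical solution of the IVP. This is a sketch with illustrative values α = 1, β = 2: solve the ODE with `solve_ivp`, then check that u(x) + x∫_0^x u(y) dy − (α + βx) vanishes along the trajectory.

```python
# Check of Example 4: solve u'' + x u' + 2u = 0, u(0) = alpha,
# u'(0) = beta, then verify the Volterra equation
#   u(x) + x * integral_0^x u(y) dy = alpha + beta x.
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

alpha, beta = 1.0, 2.0

def rhs(x, state):
    u, up = state
    return [up, -x * up - 2.0 * u]

xs = np.linspace(0.0, 2.0, 2001)
sol = solve_ivp(rhs, (0.0, 2.0), [alpha, beta], t_eval=xs,
                rtol=1e-10, atol=1e-12)
u = sol.y[0]

integral = cumulative_trapezoid(u, xs, initial=0.0)  # running integral of u
lhs_vals = u + xs * integral
rhs_vals = alpha + beta * xs
err = np.max(np.abs(lhs_vals - rhs_vals))
print(err)  # close to zero
```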
Fredholm Equations with Degenerate Kernels
We have seen that a Fredholm integral equation of the second kind is defined as

f(x) = λ \int_a^b K(x, y) f(y) \, dy + g(x).   (3.2)

Definition 3.9: The kernel K(x, y) is said to be degenerate (separable) if it can be written as a sum of terms, each being a product of a function of x and a function of y. Thus,

K(x, y) = \sum_{j=1}^n u_j(x) v_j(y) = u^T(x) v(y) = u(x) · v(y) = ⟨u, v⟩,   (3.3)

where the latter notation is the inner product for finite vector spaces (i.e. the dot product).
Equation (3.2) may be solved by reduction to a set of simultaneous linear algebraic equations as we shall now show. Substituting (3.3) into (3.2) gives
#
Z b "X
n
f (x) = λ
uj (x) vj (y) f (y) dy + g (x)
a
=λ
n X
j=1
uj (x)
j=1
and letting
cj =
then
Z
Z
b
vi (y) f (y) dy + g (x)
a
b
vj (y) f (y) dy = hvj , f i,
(3.4)
a
f (x) = λ
n
X
cj uj (x) + g (x) .
(3.5)
j=1
For this class of kernel, it is sufficient to find the cj in order to obtain the solution to the
integral equation. Eliminating f between equations (3.4) and (3.5) (i.e. take inner product
of both sides with vi ) gives
" n
#
Z b
X
ci =
vi (y) λ
cj uj (y) + g (y) dy,
a
j=1
or interchanging the summation and integration,
ci = λ
n
X
j=1
cj
Z
b
vi (y) uj (y) dy +
a
Z
b
vi (y) g (y) dy.
(3.6)
a
Writing
aij =
Z
b
vi (y) uj (y) dy = hvi , uj i,
(3.7)
a
and
gi =
Z
b
vi (y) g (y) dy = hvi , gi,
a
(3.8)
then (3.6) becomes

c_i = λ \sum_{j=1}^n a_{ij} c_j + g_i.   (3.9)

By defining the matrices

A = (a_{ij}),   c = (c_1, c_2, \ldots, c_n)^T,   g = (g_1, g_2, \ldots, g_n)^T,

this equation may be written in matrix notation as

c = λAc + g,   i.e.   (I − λA) c = g   (3.10)
where I is the identity. This is just a simple linear system of equations for c. We
therefore need to understand how we solve the canonical system Ax = b where A is a
given matrix, b is the given forcing vector and x is the vector to be determined. Let’s
state an important theorem from Linear Algebra:
Theorem 3.10: (Fredholm Alternative)
Consider the linear system

Ax = b   (3.11)

where A is an n × n matrix, x is an unknown n × 1 column vector, and b is a specified n × 1 column vector.
We also introduce the related (adjoint) homogeneous problem

A^T x̂ = 0   (3.12)

with p = n − rank(A) non-trivial linearly independent solutions x̂_1, x̂_2, \ldots, x̂_p.
[Reminder: rank(A) is the number of linearly independent rows (or columns) of the matrix A.]
Then the following alternatives hold: either

(i) det A ≠ 0, so that there exists a unique solution to (3.11), given by x = A^{−1} b, for each given b. (And b = 0 ⇒ x = 0.)

or

(ii) det A = 0, and then

(a) if b is such that ⟨b, x̂_j⟩ = 0 for all j, there are infinitely many solutions to equation (3.11);

(b) if b is such that ⟨b, x̂_j⟩ ≠ 0 for some j, there is no solution to equation (3.11).
MATH34032: Green’s Functions, Integral Equations and the Calculus of Variations
8
In the case of (ii)(a), there are infinitely many solutions because the theorem states that we can find a particular solution x_{PS} and, furthermore, the homogeneous system

Ax = 0   (3.13)

has p = n − rank(A) > 0 non-trivial linearly independent solutions x_1, x_2, \ldots, x_p, so that we can write

x = x_{PS} + \sum_{j=1}^p α_j x_j

where the α_j are arbitrary constants (and hence there are infinitely many solutions).
No proof of this theorem is given.
To illustrate this theorem consider the following simple 2 × 2 matrix example:
Example 5
Determine the solution structure of the linear system Ax = b when

(I) A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix},   (II) A = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix},   (3.14)

and in the case of (II) when

b = \begin{pmatrix} 1 \\ 2 \end{pmatrix},   b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.   (3.15)

(I) Since det(A) = 1 ≠ 0 the solution exists for any b, given by x = A^{−1} b.

(II) Here det(A) = 0, so we have to consider solutions to the adjoint homogeneous system, i.e.

A^T x̂ = 0,   (3.16)

i.e.

\begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix} x̂ = 0.   (3.17)

This has the one non-trivial linearly independent solution x̂_1 = (2, −1)^T. It is clear that there should be one such solution, since p = n − rank(A) = 2 − 1 = 1.
Note also that the homogeneous system

Ax = 0,   (3.18)

i.e.

\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix} x = 0,   (3.19)

has the one non-trivial linearly independent solution x_1 = (1, −1)^T. If solutions do exist they will therefore have the form x = x_{PS} + α_1 x_1.
A solution to the problem Ax = b will exist if x̂_1 · b = 0. This condition does hold for b = (1, 2)^T and so the theorem predicts that a solution will exist. Indeed it does: note that x_{PS} = (1/2, 1/2)^T, and so x = x_{PS} + α_1 x_1 is the infinite set of solutions.
The orthogonality condition does not hold for b = (1, 1)^T and so the theorem predicts that a solution will not exist. This is clear from looking at the system.
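The quantities in Example 5 can be rechecked mechanically with numpy; this sketch just recomputes the determinants, null vectors, and orthogonality conditions quoted above:

```python
# Example 5 rechecked numerically: determinants, null vectors, and the
# orthogonality conditions of the Fredholm Alternative.
import numpy as np

A1 = np.array([[2.0, 1.0], [1.0, 1.0]])
det1 = np.linalg.det(A1)             # = 1, case (I): unique solution

A2 = np.array([[1.0, 1.0], [2.0, 2.0]])
det2 = np.linalg.det(A2)             # = 0, case (II)

xhat1 = np.array([2.0, -1.0])        # solves A2^T xhat = 0
x1 = np.array([1.0, -1.0])           # solves A2 x = 0

b_ok = np.array([1.0, 2.0])
b_bad = np.array([1.0, 1.0])
dot_ok = b_ok @ xhat1                # 0  -> infinitely many solutions
dot_bad = b_bad @ xhat1              # 1 != 0 -> no solution

xps = np.array([0.5, 0.5])           # particular solution for b_ok
print(det1, det2, dot_ok, dot_bad, A2 @ xps)
```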
Now let us apply the Fredholm Alternative theorem to equation (3.10) in order to solve
the problem of degenerate kernels in general.
Case (i): if

det(I − λA) ≠ 0   (3.20)

then the Fredholm Alternative theorem tells us that (3.10) has a unique solution for c:

c = (I − λA)^{−1} g.   (3.21)

Hence (3.2), with degenerate kernel (3.3), has the solution (3.5):

f(x) = λ \sum_{i=1}^n c_i u_i(x) + g(x) = λ (u(x))^T c + g(x)

or, from (3.21),

f(x) = λ (u(x))^T (I − λA)^{−1} g + g(x),

which may be expressed, from (3.8), as

f(x) = λ \int_a^b (u(x))^T (I − λA)^{−1} v(y) g(y) \, dy + g(x).

Definition 3.11: The resolvent kernel R(λ, x, y) is such that the integral representation for the solution

f(x) = λ \int_a^b R(λ, x, y) g(y) \, dy + g(x)

holds.

Theorem 3.12: For a degenerate kernel, the resolvent kernel is given by

R(λ, x, y) = (u(x))^T (I − λA)^{−1} v(y).
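The whole degenerate-kernel recipe (3.4)-(3.10) fits in a few lines of code. In this sketch the kernel K(x, y) = u_1(x)v_1(y) + u_2(x)v_2(y), with u = (x, x²), v = (1, y), g(x) = e^x, λ = 1/2 on [0, 1], is an illustrative choice of mine; the matrix entries a_{ij} = ⟨v_i, u_j⟩ and g_i = ⟨v_i, g⟩ are computed by quadrature, c solves (I − λA)c = g, and the reconstructed f is verified to satisfy the integral equation.

```python
# Degenerate-kernel solver: build A and g by quadrature, solve
# (I - lam*A) c = g, reconstruct f = lam * u^T c + g, then check the
# residual of the original second-kind Fredholm equation at a point.
import numpy as np
from scipy.integrate import quad

a, b, lam = 0.0, 1.0, 0.5
us = [lambda x: x, lambda x: x**2]     # u_j(x), illustrative
vs = [lambda y: 1.0, lambda y: y]      # v_i(y), illustrative
g = lambda x: np.exp(x)

n = 2
A = np.array([[quad(lambda y: vs[i](y) * us[j](y), a, b)[0]
               for j in range(n)] for i in range(n)])       # a_ij = <v_i, u_j>
gv = np.array([quad(lambda y: vs[i](y) * g(y), a, b)[0]
               for i in range(n)])                          # g_i = <v_i, g>

c = np.linalg.solve(np.eye(n) - lam * A, gv)                # (I - lam A) c = g

def f(x):
    return lam * sum(c[j] * us[j](x) for j in range(n)) + g(x)

K = lambda x, y: us[0](x) * vs[0](y) + us[1](x) * vs[1](y)
x0 = 0.3
residual = f(x0) - lam * quad(lambda y: K(x0, y) * f(y), a, b)[0] - g(x0)
print(residual)  # ~ 0
```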
Case (i) covered the simple situation in which there is a unique solution. Let us now concern ourselves with the case when the determinant of the matrix on the left hand side of the linear system is zero.
Case (ii): suppose

det(I − λA) = 0,   (3.22)

and that the homogeneous equation

(I − λA) c = 0   (3.23)

has p non-trivial linearly independent solutions c^1, c^2, \ldots, c^p. Then, the homogeneous form of the integral equation (3.2), i.e.

f(x) = λ \int_a^b K(x, y) f(y) \, dy,   (3.24)

with degenerate kernel (3.3), has p solutions, from (3.5):

f^j(x) = λ \sum_{i=1}^n c_i^j u_i(x)   (3.25)

with j = 1, 2, \ldots, p.
Turning to the inhomogeneous equation, (3.10), it has a solution if and only if the forcing term g is orthogonal to every solution of

(I − λA)^T h = 0,   i.e.   h^T g = 0,   (3.26)

or

\sum_{i=1}^n h_i g_i = 0.

Hence (3.8) yields

\sum_{i=1}^n h_i \int_a^b v_i(y) g(y) \, dy = 0,

which is equivalent to

\int_a^b \left( \sum_{i=1}^n h_i v_i(y) \right) g(y) \, dy = 0.

Thus, writing

h(y) = \sum_{i=1}^n h_i v_i(y),   (3.27)

then

\int_a^b h(y) g(y) \, dy = 0,

which means that g(x) must be orthogonal to h(x) on [a, b].
Let us explore the function h(x) a little; we start by expressing (3.26) as

h_i − λ \sum_{j=1}^n a_{ji} h_j = 0.

Without loss of generality assume that all the v_i(x) in (3.3) are linearly independent (since if one is dependent on the others, eliminate it and obtain a separable kernel with n replaced by n − 1). Multiply the ith equation in (3.26) by v_i(x) and sum over all i from 1 to n:

\sum_{i=1}^n h_i v_i(x) − λ \sum_{i=1}^n \sum_{j=1}^n a_{ji} h_j v_i(x) = 0,

i.e. from (3.7)

\sum_{i=1}^n h_i v_i(x) − λ \int_a^b \sum_{i=1}^n \sum_{j=1}^n h_j v_j(y) u_i(y) v_i(x) \, dy = 0.

Using (3.27) and (3.3) we see that this reduces to the integral equation

h(x) − λ \int_a^b K(y, x) h(y) \, dy = 0.   (3.28)
Note that this is the homogeneous form of the transpose of the Fredholm integral equation
(3.2), i.e. it has no forcing term on the right hand side and the kernel is written K(y, x)
rather than K(x, y).
In conclusion for Case (ii), the integral equation (3.2) with a separable kernel of the
form (3.3) has a solution if and only if g (x) is orthogonal to every solution h (x) of the
homogeneous equation (3.28). The general solution is then
f(x) = f^{(0)}(x) + \sum_{j=1}^p α_j f^{(j)}(x)   (3.29)

where f^{(0)}(x) is a particular solution of (3.2), (3.3) and the α_j are arbitrary constants.
Example 6:
Consider the integral equation

f(x) = λ \int_0^π \sin(x − y) f(y) \, dy + g(x).   (3.30)

Find

(i) the values of λ for which it has a unique solution,
(ii) the solution in this case,
(iii) the resolvent kernel.

For those values of λ for which the solution is not unique, find

(iv) a condition which g(x) must satisfy in order for a solution to exist,
(v) the general solution in this case.

The solution proceeds as follows. Expand the kernel:

f(x) = λ \int_0^π \sin(x − y) f(y) \, dy + g(x) = λ \int_0^π (\sin x \cos y − \cos x \sin y) f(y) \, dy + g(x)

and hence it is clear that

K(x, y) = \sin x \cos y − \cos x \sin y

is separable. Thus

f(x) = λ \left[ \sin x \int_0^π f(y) \cos y \, dy − \cos x \int_0^π f(y) \sin y \, dy \right] + g(x),

and so write

c_1 = \int_0^π f(y) \cos y \, dy,   (3.31)

c_2 = \int_0^π f(y) \sin y \, dy,   (3.32)

which gives

f(x) = λ [c_1 \sin x − c_2 \cos x] + g(x).   (3.33)

Substituting this value of f(x) into (3.31) gives

c_1 = \int_0^π \{λ [c_1 \sin y − c_2 \cos y] + g(y)\} \cos y \, dy
= λ c_1 \int_0^π \sin y \cos y \, dy − λ c_2 \int_0^π \cos^2 y \, dy + \int_0^π g(y) \cos y \, dy.

Defining

g_1 = \int_0^π g(y) \cos y \, dy,

and noting the values of the integrals

\int_0^π \sin y \cos y \, dy = \frac{1}{2} \int_0^π \sin 2y \, dy = \left[ −\frac{1}{4} \cos 2y \right]_0^π = 0,

\int_0^π \cos^2 y \, dy = \frac{1}{2} \int_0^π (1 + \cos 2y) \, dy = \frac{1}{2} \left[ y + \frac{1}{2} \sin 2y \right]_0^π = \frac{1}{2} π,

yields

c_1 = −\frac{1}{2} πλ c_2 + g_1.

Repeating this procedure, putting f(x) from (3.33) into (3.32), gives

c_2 = \int_0^π \{λ [c_1 \sin y − c_2 \cos y] + g(y)\} \sin y \, dy = λ c_1 \int_0^π \sin^2 y \, dy − λ c_2 \int_0^π \cos y \sin y \, dy + \int_0^π g(y) \sin y \, dy.

Observing that

\int_0^π \sin^2 y \, dy = \frac{1}{2} \int_0^π (1 − \cos 2y) \, dy = \frac{1}{2} \left[ y − \frac{1}{2} \sin 2y \right]_0^π = \frac{1}{2} π,

and writing

g_2 = \int_0^π g(y) \sin y \, dy,

then we obtain

c_2 = \frac{1}{2} πλ c_1 + g_2.

Thus, there is a pair of simultaneous equations for c_1, c_2:

c_1 + \frac{1}{2} πλ c_2 = g_1,
−\frac{1}{2} πλ c_1 + c_2 = g_2,

or in matrix notation

\begin{pmatrix} 1 & \frac{1}{2} πλ \\ −\frac{1}{2} πλ & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}.   (3.34)

Case (i): These equations have a unique solution provided

det \begin{pmatrix} 1 & \frac{1}{2} πλ \\ −\frac{1}{2} πλ & 1 \end{pmatrix} ≠ 0,

i.e.

1 + \frac{1}{4} π^2 λ^2 ≠ 0,

or

λ ≠ ± \frac{2i}{π}.
In this case

\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 & \frac{1}{2} πλ \\ −\frac{1}{2} πλ & 1 \end{pmatrix}^{−1} \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}

or

\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \frac{1}{1 + \frac{1}{4} π^2 λ^2} \begin{pmatrix} 1 & −\frac{1}{2} πλ \\ \frac{1}{2} πλ & 1 \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} = \frac{1}{1 + \frac{1}{4} π^2 λ^2} \begin{pmatrix} g_1 − \frac{1}{2} πλ g_2 \\ \frac{1}{2} πλ g_1 + g_2 \end{pmatrix}

= \frac{1}{1 + \frac{1}{4} π^2 λ^2} \begin{pmatrix} \int_0^π g(y) \cos y \, dy − \frac{1}{2} πλ \int_0^π g(y) \sin y \, dy \\ \frac{1}{2} πλ \int_0^π g(y) \cos y \, dy + \int_0^π g(y) \sin y \, dy \end{pmatrix}

= \frac{1}{1 + \frac{1}{4} π^2 λ^2} \int_0^π \begin{pmatrix} \cos y − \frac{1}{2} πλ \sin y \\ \frac{1}{2} πλ \cos y + \sin y \end{pmatrix} g(y) \, dy.

Hence,

f(x) = λ [c_1 \sin x − c_2 \cos x] + g(x)
= \frac{λ}{1 + \frac{1}{4} π^2 λ^2} \int_0^π \left[ \sin x \left( \cos y − \frac{1}{2} πλ \sin y \right) − \cos x \left( \frac{1}{2} πλ \cos y + \sin y \right) \right] g(y) \, dy + g(x)

or

f(x) = \frac{λ}{1 + \frac{1}{4} π^2 λ^2} \int_0^π \left[ \sin(x − y) − \frac{1}{2} πλ \cos(x − y) \right] g(y) \, dy + g(x).

This is the required solution, and we can observe that the resolvent kernel is

R(x, y, λ) = \frac{\sin(x − y) − \frac{1}{2} πλ \cos(x − y)}{1 + \frac{1}{4} π^2 λ^2}.
Case (ii): If

det \begin{pmatrix} 1 & \frac{1}{2} πλ \\ −\frac{1}{2} πλ & 1 \end{pmatrix} = 0,

i.e.

λ = + \frac{2i}{π}   or   − \frac{2i}{π},

then there is either no solution or infinitely many solutions. With this, (3.34) becomes

\begin{pmatrix} 1 & ±i \\ ∓i & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}.

Solving these equations using row operations, R2 ↦ R2 ± iR1, gives

\begin{pmatrix} 1 & ±i \\ 0 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 ± i g_1 \end{pmatrix}

or

c_1 ± i c_2 = g_1,   0 = g_2 ± i g_1.
The second equation places a restriction on g(x), which by definition of the g_i is

\int_0^π (\sin y ± i \cos y) g(y) \, dy = 0.   (3.35)

This is the condition that g(x) must satisfy for the integral equation to be soluble, i.e. if g(x) does not satisfy this, then (3.30) does not have a solution. Suppose this condition holds; then we can set c_2 to take any arbitrary constant value, c_2 = α, say. Thus,

c_1 = ∓iα + g_1,

and hence from (3.33), the solution of (3.30) is, when λ = ±2i/π,

f(x) = ± \frac{2i}{π} [(∓iα + g_1) \sin x − α \cos x] + g(x)
= \frac{2α}{π} (\sin x ∓ i \cos x) ± \frac{2i}{π} g_1 \sin x + g(x)

for arbitrary α, with constraint g_2 = ∓i g_1, or equivalently (3.35).
We arrived at the above conclusions via simple row operations; Fredholm's theorem would have told us the same information regarding the constraints on g_1 and g_2.
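The Case (i) resolvent from Example 6 can be sanity-checked numerically. In this sketch λ = 1 and g(x) = x are illustrative choices: f is built from the resolvent kernel and then substituted back into (3.30) to confirm the residual vanishes.

```python
# Verify Example 6's resolvent solution: build f from
#   R(x, y, lam) = (sin(x-y) - (pi*lam/2) cos(x-y)) / (1 + pi^2 lam^2 / 4)
# and check that f satisfies f(x) = lam * int_0^pi sin(x-y) f(y) dy + g(x).
import numpy as np
from scipy.integrate import quad

lam = 1.0
g = lambda x: x   # illustrative forcing term

def R(x, y):
    return ((np.sin(x - y) - 0.5 * np.pi * lam * np.cos(x - y))
            / (1.0 + 0.25 * np.pi**2 * lam**2))

def f(x):
    return lam * quad(lambda y: R(x, y) * g(y), 0.0, np.pi)[0] + g(x)

x0 = 1.0
residual = (f(x0)
            - lam * quad(lambda y: np.sin(x0 - y) * f(y), 0.0, np.pi)[0]
            - g(x0))
print(residual)  # ~ 0
```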