8th Tutorial
Tiago Salvador
Department of Mathematics and Statistics, McGill University
[email protected]
November 2, 2015
Abstract
In this tutorial we discuss Richardson’s extrapolation and several numerical methods for solving initial value problems.
Contents
1 Richardson’s extrapolation
2 1-step methods for initial value problems
3 Local truncation error and convergence for 1-step methods
1 Richardson’s extrapolation
Richardson’s extrapolation is a technique that allows us to obtain highly accurate results from low-order approximation formulas. We can apply it whenever the approximation error has a known expansion in a parameter, usually the step size h.
Suppose that for each h ≠ 0 we have a formula N_1(h) that approximates an unknown constant M such that

    M = N_1(h) + k_1 h + k_2 h^2 + k_3 h^3 + ...    (1)

for some constants k_1, k_2, k_3, .... If we replace h by h/2 in equation (1), we get

    M = N_1(h/2) + k_1 h/2 + k_2 h^2/4 + k_3 h^3/8 + ...    (2)

Subtracting equation (1) from twice equation (2), we eliminate the term involving k_1 and get

    M = 2 N_1(h/2) - N_1(h) + k_2 (h^2/2 - h^2) + k_3 (h^3/4 - h^3) + ...

Set

    N_2(h) := 2 N_1(h/2) - N_1(h).

Hence

    M = N_2(h) - (k_2/2) h^2 - (3 k_3/4) h^3 - ...

Therefore N_2(h) is an O(h^2) approximation of M. We can now apply the same procedure to N_2(h) to get an O(h^3) approximation. In general we can obtain approximations of arbitrarily high order.
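The repeated elimination above can be organized as a table. The following sketch illustrates this; the test case (the forward difference quotient for sin at x_0 = 1, with h = 0.1) is our own choice, and the code assumes an error expansion in integer powers of h, as in equation (1).

```python
import math

def richardson(N1, h, levels=3):
    """Richardson extrapolation table for an approximation N1(h) whose error
    expands as M = N1(h) + k1*h + k2*h**2 + ...
    Column j (0-based) holds O(h**(j+1)) approximations."""
    table = [[N1(h / 2**i) for i in range(levels)]]
    for j in range(1, levels):
        prev = table[-1]
        # eliminate the h**j error term:
        # N_{j+1}(h) = (2**j * N_j(h/2) - N_j(h)) / (2**j - 1)
        table.append([(2**j * prev[i + 1] - prev[i]) / (2**j - 1)
                      for i in range(len(prev) - 1)])
    return table

# illustration (our choice of test): extrapolate the forward difference
# quotient for the derivative of sin at x0 = 1; the exact value is cos(1)
f, x0 = math.sin, 1.0
table = richardson(lambda h: (f(x0 + h) - f(x0)) / h, h=0.1)
```

The entry `table[-1][0]` is the O(h^3) approximation; with h = 0.1 it is already several orders of magnitude closer to cos(1) than the raw difference quotient `table[0][0]`.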
Exercise 1.1. Consider the most basic approximation of the first derivative,

    f'(x_0) ≈ (f(x_0 + h) - f(x_0))/h,

and use Richardson’s extrapolation to derive a second-order approximation.

Solution: We have

    N_1(h) := (f(x_0 + h) - f(x_0))/h.

Hence

    N_2(h) = 2 N_1(h/2) - N_1(h)
           = (4 f(x_0 + h/2) - 4 f(x_0) - f(x_0 + h) + f(x_0))/h
           = (-3 f(x_0) + 4 f(x_0 + h/2) - f(x_0 + h))/h.

Note that if we replace h by 2h we recover the three-point endpoint formula.
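As a quick numerical sanity check (a sketch; the test function exp and the point x_0 = 0 are our own choices), halving h should roughly halve the error of N_1 but quarter the error of N_2:

```python
import math

def N1(f, x0, h):
    # basic forward difference: first-order accurate
    return (f(x0 + h) - f(x0)) / h

def N2(f, x0, h):
    # extrapolated formula: (-3f(x0) + 4f(x0 + h/2) - f(x0 + h)) / h
    return (-3*f(x0) + 4*f(x0 + h/2) - f(x0 + h)) / h

f, x0, exact = math.exp, 0.0, 1.0   # d/dx e^x at x = 0 is 1
e1 = [abs(N1(f, x0, h) - exact) for h in (0.1, 0.05)]
e2 = [abs(N2(f, x0, h) - exact) for h in (0.1, 0.05)]
r1 = e1[0] / e1[1]   # about 2: the O(h) formula
r2 = e2[0] / e2[1]   # about 4: the O(h^2) formula
```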
Exercise 1.2. In calculus, we learn that e = lim_{h→0} (1 + h)^{1/h}.

a) Determine approximations to e corresponding to h = 0.04, 0.02 and 0.01.

b) Use Richardson’s extrapolation on these approximations, assuming that constants k_1, k_2, ... exist such that

    e = (1 + h)^{1/h} + k_1 h + k_2 h^2 + k_3 h^3 + ...,

to produce an O(h^3) approximation to e, where h = 0.04.

c) Do you think that the assumption in part b) is correct?
Solution: Let N_1(h) = (1 + h)^{1/h}. We know that e ≈ 2.718281828459045.

a) We use N_1(h) with h = 0.04, 0.02 and 0.01. For h = 0.04, we have

    e ≈ (1 + 0.04)^{1/0.04} ≈ 2.66584.

For h = 0.02, we have

    e ≈ (1 + 0.02)^{1/0.02} ≈ 2.69159.

For h = 0.01, we have

    e ≈ (1 + 0.01)^{1/0.01} ≈ 2.70481.
b) We already know that

    e = N_2(h) - (k_2/2) h^2 - (3 k_3/4) h^3 - ...    (3)

with N_2(h) = 2 N_1(h/2) - N_1(h). Replacing h by h/2, we have

    e = N_2(h/2) - (k_2/8) h^2 - (3 k_3/32) h^3 - ...    (4)
Subtracting equation (3) from 4 times equation (4) leads to

    3e = 4 N_2(h/2) - N_2(h) + (3 k_3/8) h^3 + ...,

which we can write as

    e = (4 N_2(h/2) - N_2(h))/3 + (k_3/8) h^3 + ...

Therefore an O(h^3) approximation to e is given by

    e ≈ N_3(h) := (4 N_2(h/2) - N_2(h))/3.

For h = 0.04, we have

    e ≈ N_3(0.04) = 2.718272931...
c) We look at the quotients (e - N_1(h))/h. If they are approximately the same constant, then the assumption in b) seems correct. We have

    (e - N_1(0.04))/0.04 ≈ 1.31114,
    (e - N_1(0.02))/0.02 ≈ 1.33469,
    (e - N_1(0.01))/0.01 ≈ 1.3468,

and so the assumption in b) indeed seems correct.
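The numbers in parts a) and b) can be reproduced by evaluating the formulas above directly; a minimal sketch:

```python
def N1(h):
    # basic approximation of e from the limit definition
    return (1 + h) ** (1 / h)

def N2(h):
    # first extrapolation: eliminates the O(h) error term
    return 2 * N1(h / 2) - N1(h)

def N3(h):
    # second extrapolation: an O(h^3) approximation to e
    return (4 * N2(h / 2) - N2(h)) / 3

vals = {h: N1(h) for h in (0.04, 0.02, 0.01)}
best = N3(0.04)
```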
2 1-step methods for initial value problems
In the following we are interested in solving the initial value problem

    y'(t) = f(t, y(t)),  t ∈ [a, b],
    y(a) = y_0,    (IVP)

where f and the initial value y_0 are both given. We are interested in finding the function y that solves (IVP). Here we will focus on obtaining approximations of y at various values, called mesh points or time steps, in the interval [a, b], instead of obtaining a continuous approximation to the solution y. Once we know the approximate solution at the mesh points, we can use interpolation to get an approximation of the solution on the entire interval.

Usually the mesh points are equally spaced in the interval. Fixing the number of mesh points as N + 1, they are given by

    t_n = a + n h,  n = 0, 1, ..., N,

where h = (b - a)/N = t_{n+1} - t_n is called the step size. We will denote by y_n the approximation of y(t_n). To solve (IVP) numerically we "march in time": at each time step t_n we solve a discrete set of equations for y_n, which we will refer to as a discretization.

To generate the discrete set of equations to be solved, we can approximate y' by a difference quotient D_h y. We now give some examples.
Example 2.1 (Forward Euler). Taking D_h y to be the forward difference, i.e.,

    D_h y(t_n) = (y_{n+1} - y_n)/h,

we get the discrete equation

    (y_{n+1} - y_n)/h = f(t_n, y_n)

and so

    y_{n+1} = y_n + h f(t_n, y_n),  n = 0, ..., N - 1.

This is an explicit 1-step method.
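A minimal sketch of the method (the test problem y' = -y, y(0) = 1 is our own choice; its exact solution is e^{-t}). Halving h should roughly halve the error at t = 1, consistent with a first-order method:

```python
import math

def forward_euler(f, a, b, y0, N):
    """Forward Euler: y_{n+1} = y_n + h*f(t_n, y_n), n = 0, ..., N-1."""
    h = (b - a) / N
    ys = [y0]
    for n in range(N):
        t_n = a + n * h
        ys.append(ys[-1] + h * f(t_n, ys[-1]))
    return ys

# test problem (our choice): y' = -y, y(0) = 1, exact solution e^{-t}
f = lambda t, y: -y
err100 = abs(forward_euler(f, 0.0, 1.0, 1.0, 100)[-1] - math.exp(-1.0))
err200 = abs(forward_euler(f, 0.0, 1.0, 1.0, 200)[-1] - math.exp(-1.0))
ratio = err100 / err200   # about 2: halving h halves the error (order 1)
```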
Example 2.2 (Backward Euler). Taking D_h y to be the backward difference, i.e.,

    D_h y(t_n) = (y_n - y_{n-1})/h,

we get

    y_n = y_{n-1} + h f(t_n, y_n),  n = 1, ..., N.

This is an implicit 1-step method.
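Since f is evaluated at the unknown y_n, each step requires solving an implicit equation. A sketch (the test problem and the use of fixed-point iteration are our own choices; Newton's method is the more common choice in practice):

```python
import math

def backward_euler(f, a, b, y0, N, iters=50):
    """Backward Euler: at each step solve y_n = y_{n-1} + h*f(t_n, y_n).
    The implicit equation is solved here by fixed-point iteration started
    from the previous value, which converges for small enough h."""
    h = (b - a) / N
    ys = [y0]
    for n in range(1, N + 1):
        t_n = a + n * h
        y = ys[-1]
        for _ in range(iters):
            y = ys[-1] + h * f(t_n, y)   # fixed-point update
        ys.append(y)
    return ys

# test problem (our choice): y' = -y, y(0) = 1 on [0, 1]
ys = backward_euler(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
err = abs(ys[-1] - math.exp(-1.0))
```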
Example 2.3 (Midpoint method). Taking D_h y to be the central difference, i.e.,

    D_h y(t_n) = (y_{n+1} - y_{n-1})/(2h),

we get

    y_{n+1} = y_{n-1} + 2 h f(t_n, y_n),  n = 1, ..., N - 1.

This is an explicit 2-step method.
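Because the update uses both y_{n-1} and y_n, the method needs a starting value y_1 in addition to y_0. A sketch (the forward Euler starter and the test problem y' = -y are our own choices):

```python
import math

def midpoint_method(f, a, b, y0, N):
    """Explicit 2-step midpoint method: y_{n+1} = y_{n-1} + 2h*f(t_n, y_n).
    A 2-step method needs both y_0 and y_1 to start; here y_1 is produced
    by a single forward Euler step (one common choice of starter)."""
    h = (b - a) / N
    ys = [y0, y0 + h * f(a, y0)]
    for n in range(1, N):
        t_n = a + n * h
        ys.append(ys[n - 1] + 2 * h * f(t_n, ys[n]))
    return ys

# test problem (our choice): y' = -y, y(0) = 1 on [0, 1]
ys = midpoint_method(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
err = abs(ys[-1] - math.exp(-1.0))
```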
Recall that by the Fundamental Theorem of Calculus we can write

    y(t_{n+1}) = y(t_n) + ∫_{t_n}^{t_{n+1}} f(s, y(s)) ds.

Hence, we can also use numerical quadrature to obtain the discrete set of equations.
Example 2.4 (Trapezoid method). The trapezoid rule tells us that

    ∫_{t_n}^{t_{n+1}} f(s, y(s)) ds ≈ (h/2) (f(t_n, y_n) + f(t_{n+1}, y_{n+1}))

and therefore we get

    y_{n+1} = y_n + (h/2) (f(t_n, y_n) + f(t_{n+1}, y_{n+1})),  n = 0, ..., N - 1.

This is an implicit 1-step method.
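For a linear right-hand side f(t, y) = λy (our own choice of test problem), the implicit equation can be solved for y_{n+1} in closed form, which gives a compact illustration:

```python
import math

def trapezoid_linear(lam, a, b, y0, N):
    """Trapezoid method for the linear problem y' = lam*y, where
        y_{n+1} = y_n + (h/2)*(lam*y_n + lam*y_{n+1})
    can be solved for y_{n+1} in closed form:
        y_{n+1} = y_n * (1 + h*lam/2) / (1 - h*lam/2)."""
    h = (b - a) / N
    growth = (1 + h * lam / 2) / (1 - h * lam / 2)
    ys = [y0]
    for _ in range(N):
        ys.append(ys[-1] * growth)
    return ys

ys = trapezoid_linear(-1.0, 0.0, 1.0, 1.0, 100)
err = abs(ys[-1] - math.exp(-1.0))   # far smaller than Euler's error at the same h
```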
Definition 2.1. A discretization is called explicit if the next y_n can be computed explicitly from the previous ones. Otherwise, we call it implicit.
Remark 2.2. In general, implicit methods are more accurate and stable (a notion we will make precise later), but in order to apply them we need to solve the implicit equation. This is not always possible in closed form, and so implicit methods usually require the use of a root-finding method, thus becoming more expensive.
3 Local truncation error and convergence for 1-step methods
To analyze the error of each of the discretizations, we need a new concept of error.
Definition 3.1. Given a discretization Φ_h(t_n, y_n) = 0 and the exact solution y(·) of (IVP), the local truncation error at t_n is given by

    τ_h(t_n) = Φ_h(t_n, y(t_n)).
Remark 3.2. This is a local error since it measures the accuracy of the method at a specific step, assuming we know the exact solution at the previous step. In other words, it measures the amount by which the exact solution fails to satisfy the discretization.
Definition 3.3. A discretization is of order p if

    τ_h := max_{n=0,...,N-1} |τ_h(t_n)| = O(h^p).
Example 3.1. The Forward Euler method gives us

    Φ_h(t_n, y_n) = y_{n+1} - y_n - h f(t_n, y_n).

Therefore it has a local truncation error at the nth step given by

    τ_h(t_n) = y(t_{n+1}) - y(t_n) - h f(t_n, y(t_n))

for each n = 0, ..., N - 1. If we Taylor expand y(t) about t = t_n and evaluate at t = t_{n+1}, we get

    y(t_{n+1}) = y(t_n) + h y'(t_n) + (h^2/2) y''(ξ_n)

where ξ_n ∈ (t_n, t_{n+1}). Hence, since f(t_n, y(t_n)) = y'(t_n), we get

    τ_h(t_n) = (h^2/2) y''(ξ_n).

When y''(t) is known to be bounded by a constant M on [a, b], we have

    |τ_h(t_n)| ≤ (h^2/2) M

and so the local truncation error of the Forward Euler method is O(h^2).
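The O(h^2) bound can be observed numerically by plugging a known exact solution into the discretization (the test problem y' = -y with solution y(t) = e^{-t} is our own choice):

```python
import math

# Plug the exact solution y(t) = e^{-t} of y' = -y into Forward Euler and
# measure tau_h(t_n) = y(t_n + h) - y(t_n) - h*f(t_n, y(t_n)).
def tau_forward_euler(t, h):
    y = math.exp(-t)
    return math.exp(-(t + h)) - y - h * (-y)

taus = [abs(tau_forward_euler(0.5, h)) for h in (0.1, 0.05)]
ratio = taus[0] / taus[1]   # about 4: halving h quarters an O(h^2) error
```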
Exercise 3.1. Show that the local truncation error of the trapezoid method is O(h^3).

Solution: The trapezoid method is given by

    y_{n+1} = y_n + (h/2)(f(t_n, y_n) + f(t_{n+1}, y_{n+1})),  n = 0, ..., N - 1.

The local truncation error is then given by

    τ_h(t_n) = y(t_{n+1}) - y(t_n) - (h/2)(f(t_n, y(t_n)) + f(t_{n+1}, y(t_{n+1})))

for n = 0, 1, ..., N - 1. Since y(·) is a solution of (IVP), we can rewrite it as

    τ_h(t_n) = y(t_{n+1}) - y(t_n) - (h/2)(y'(t_n) + y'(t_{n+1})).

We will use Taylor expansions to simplify this expression. We have

    y(t_{n+1}) = y(t_n) + h y'(t_n) + (h^2/2) y''(t_n) + O(h^3)

and

    y'(t_{n+1}) = y'(t_n) + h y''(t_n) + O(h^2).

Hence

    τ_h(t_n) = y(t_n) + h y'(t_n) + (h^2/2) y''(t_n) + O(h^3) - y(t_n) - (h/2)(y'(t_n) + y'(t_n) + h y''(t_n) + O(h^2))
             = O(h^3).
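The same numerical check as before, applied to the trapezoid method (again with our own test problem y' = -y, exact solution y(t) = e^{-t}), shows the O(h^3) behaviour:

```python
import math

# Plug the exact solution y(t) = e^{-t} of y' = -y into the trapezoid
# discretization:
#   tau_h(t_n) = y(t_n + h) - y(t_n) - (h/2)*(y'(t_n) + y'(t_n + h)).
def tau_trapezoid(t, h):
    y0, y1 = math.exp(-t), math.exp(-(t + h))
    return y1 - y0 - (h / 2) * (-y0 - y1)

taus = [abs(tau_trapezoid(0.5, h)) for h in (0.1, 0.05)]
ratio = taus[0] / taus[1]   # about 8: halving h divides an O(h^3) error by 8
```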
Definition 3.4. A discretization is consistent if lim_{h→0} τ_h = 0.
Definition 3.5. Let e_n := y_n - y(t_n). A method converges if

    lim_{h→0} max_{n=0,...,N-1} |e_n| = 0.
Theorem 3.6. If f in (IVP) is continuous and there exists a constant M such that

    |∂f/∂z (x, z)| ≤ M

for all x, z ∈ R, then Euler’s method is convergent.
Remark 3.7. The above result also holds for other 1-step methods. However, for k-step methods consistency will in general not be enough to guarantee convergence; we will also need the methods to be stable (a concept we will introduce later).
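The theorem can be illustrated numerically on a problem satisfying its hypotheses (our choice: f(t, y) = -y, for which |∂f/∂z| = 1): the maximal error over the mesh shrinks roughly linearly in h as h → 0:

```python
import math

def euler_max_error(N):
    """max_n |y_n - y(t_n)| for forward Euler applied to y' = -y, y(0) = 1
    on [0, 1] (our test problem; exact solution y(t) = e^{-t})."""
    h = 1.0 / N
    y, worst = 1.0, 0.0
    for n in range(1, N + 1):
        y = y + h * (-y)
        worst = max(worst, abs(y - math.exp(-n * h)))
    return worst

# the maximal errors decrease, roughly halving each time h is halved
errs = [euler_max_error(N) for N in (100, 200, 400)]
```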