MATH10232: SOLUTION SHEET I
1. Classifying ODEs:
(a) The equation u′(x) + u²(x) = cos(x) is:
First order because the highest derivative is u′(x) – a first derivative.
Nonlinear because of the presence of the term u²(x) on the left-hand side.
Non-autonomous because the independent variable, x, appears explicitly (on the right-hand side).
(b) The equation d⁴u(t)/dt⁴ + [2 + cos(t)] u(t) = 3 is:
Fourth order because the highest derivative is d⁴u/dt⁴ – a fourth derivative.
Linear because the unknown function, u, and its derivatives only appear in linear combinations.
Non-autonomous because the independent variable, t, appears explicitly (in the term multiplying u(t) on the left-hand side).
(c) The equation θ̈ + Ω2 sin θ = 0 is:
Second order because the highest derivative is θ̈ – a second derivative.
Nonlinear because of the presence of the sin θ term.
(Note that in the excluded case Ω = 0 the equation is linear.)
Autonomous because the independent variable, t, does not appear
explicitly.
2. Properties of ODEs:
(a) False. (dφ/ds)² = 2sφ is a first order ODE. The highest derivative is dφ/ds – a first derivative. The derivative is squared in the equation, but that does not change its order.
(b) False. u′′ and u′ are evaluated at different values of the independent variable, namely at x and at x − 1, respectively, so u′′(x) + u′(x − 1) = 1 is not an ODE. (It’s a delay differential equation – a different beast altogether.)
(c) False. The independent variable is y and x is the dependent variable, so dx/dy + 5y²x = 0 is a linear ODE for x(y). The point of this example is to remind you that the symbol chosen for each variable can be anything, provided that it is unambiguously defined somewhere.
(d) False. A real solution of (y′)² + y² = 0 is y(x) = 0.
(e) False. The independent variable is z and the dependent variable is t; t² d²t/dz² + 2t dt/dz + 2t = 0 is an ODE for t(z), and the independent variable, z, does not occur explicitly.
3. Solutions of ODEs:
(a) The general solution can be found by integrating twice:

    y′′(x) = x²  ⇒  y′ = x³/3 + C  ⇒  y = x⁴/12 + Cx + D.

The two constants of integration are required for it to be the general solution.
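As a quick sanity check (not part of the original sheet), the general solution can be verified numerically against the ODE using a central second difference; this is a pure-Python sketch, with arbitrary illustrative values for the constants C and D:

```python
# Numerical check that y(x) = x**4/12 + C*x + D satisfies y'' = x**2.
# The constants C and D are arbitrary illustrative choices.
def y(x, C=2.0, D=-1.0):
    return x**4 / 12 + C * x + D

h = 1e-4  # step size for the central second difference
for x in [0.3, 1.0, 2.5]:
    second_deriv = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    assert abs(second_deriv - x**2) < 1e-5
print("y'' = x^2 confirmed at the sample points")
```

Note that C and D drop out of the second difference, as they must: they only affect the linear part of y.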
(b) The equation is already separated, so the general solution can be found by integrating once:

    ∫ yy′ dx = ∫ cos x dx  ⇒  ∫ y dy = ∫ cos x dx  ⇒  y²/2 = sin x + C.

We have found the solution in implicit form and there is no need to simplify further.
(c) The required identity is cos² x = (1 + cos 2x)/2, and so the ODE becomes

    y′(x)² = cos² x  ⇒  y′(x) = ± cos x.

Integrating once gives

    y(x) = ± sin(x) + C,

which is the general solution.
(d) The solution can be found by integrating once:

    y′(x) = x e⁵ˣ  ⇒  y = ∫ x e⁵ˣ dx.

Integrating by parts gives

    y = (x/5) e⁵ˣ − ∫ (1/5) e⁵ˣ dx = (1/5)(x − 1/5) e⁵ˣ + C.
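This antiderivative can also be spot-checked numerically (again, not part of the original sheet): a pure-Python sketch using a central first difference, with C = 0 since the constant drops out of any difference:

```python
import math

# Numerical spot-check that y(x) = (1/5)(x - 1/5)e^(5x) + C has
# derivative x*e^(5x); C drops out of the difference, so take C = 0.
def y(x, C=0.0):
    return (x - 0.2) * math.exp(5 * x) / 5 + C

h = 1e-6  # step size for the central first difference
for x in [0.0, 0.5, 1.0]:
    deriv = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(deriv - x * math.exp(5 * x)) < 1e-4
print("antiderivative confirmed at the sample points")
```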
4. Computer exercises:
The commands are quite simple, once you are familiar with the MATLAB
argot.
(a) >> x = 0:0.01:1;
(b) >> y = x.*x;
Note the use of the element-by-element .* operator.
(c) The required loop is given in the question, simply type it in:
for i=1:100
dydx(i) = (y(i+1) - y(i))/0.01;
end
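For readers working outside MATLAB, the same computation can be sketched in pure Python (standard library only; the variable names mirror the MATLAB ones but are otherwise my own):

```python
# Pure-Python version of parts (a)-(c): sample y = x^2 on [0, 1] with
# spacing 0.01, then form the forward-difference estimate of dy/dx.
h = 0.01
x = [i * h for i in range(101)]                      # x = 0:0.01:1
y = [xi * xi for xi in x]                            # y = x.*x
dydx = [(y[i + 1] - y[i]) / h for i in range(100)]   # the for-loop in (c)
print(len(dydx))  # 100 gradient estimates, one per interval
```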
(d) To create the plot we must answer the question “what is the value of
x that corresponds to dydx(i)?” The estimate for the gradient should
be most accurate at the mid-point of the interval, so let’s take 100
values of x equally spaced between 0.005 and 0.995.
>> x2 = 0.005:0.01:0.995;
The graph can now be created by the command
>> plot(x2,dydx);
[Figure: plot of dydx against x2 – a straight line rising from 0 at x = 0 to 2 at x = 1.]
(e) The exact gradient is, of course, dy/dx = 2x and the error can be
assessed by comparing the difference between the exact and estimated
gradients.
>> e = 2*x2 - dydx;
The maximum absolute error is given by the command
>> max(abs(e))
and I found the answer to be 1.5099e−14, so it appears that this approximation is exact (to machine precision).
If, instead of the mid-point of each interval, we take the x values to be the left-hand end, then the maximum absolute error is found by
>> e2 = 2*x(1:100) - dydx;
>> max(abs(e2))
where the MATLAB notation x(1:100) means “select the first 100 entries in the array x”. The result is 0.01, which is also what we obtain if we take the right-hand end of each interval to be the x values.
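The mid-point versus end-point comparison can likewise be reproduced in pure Python (a sketch assuming the same grid spacing h = 0.01 as in the MATLAB commands):

```python
# Compare the maximum absolute error of the forward-difference slope of
# y = x^2 when it is attributed to the mid-point versus the left end.
h = 0.01
x = [i * h for i in range(101)]
y = [xi * xi for xi in x]
dydx = [(y[i + 1] - y[i]) / h for i in range(100)]

mid_err = max(abs(2 * (x[i] + h / 2) - dydx[i]) for i in range(100))
left_err = max(abs(2 * x[i] - dydx[i]) for i in range(100))
print(mid_err)   # tiny: floating-point rounding only
print(left_err)  # close to h = 0.01
```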
The fact that the error seems to be equal to the size of the interval if we assume that we are approximating the value of the derivative at the end points, but zero if we approximate at the mid-point, should strike you as being “interesting”. In fact, we can prove these results by the following argument:
Let the size of each interval be h and the (i+1)-th interval be x ∈ [hi, h(i+1)]. Our approximation to the derivative in that interval is

    (dy/dx)_approx = [y(h(i+1)) − y(hi)] / h.        (1)

For the function y = x², equation (1) becomes

    (dy/dx)_approx = [h²(i+1)² − h²i²] / h = h[(i+1)² − i²] = h[i² + 2i + 1 − i²] = h(2i+1).

The exact derivative in the (i+1)-th interval is

    dy/dx = 2x = 2hi                            at the left end of the (i+1)-th interval,
                = 2h[i + (i+1)]/2 = h(2i+1)     at the mid-point of the (i+1)-th interval,   (2)
                = 2h(i+1)                       at the right end of the (i+1)-th interval.
Thus, we obtain the error in the (i+1)-th interval:

    e = dy/dx − (dy/dx)_approx = −h   at the left end of the (i+1)-th interval,
                               =  0   at the mid-point of the (i+1)-th interval,
                               =  h   at the right end of the (i+1)-th interval.
The above argument formalises the guess that we get the most accurate result when we
take the x value to be the mid-point of the interval.
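The argument can be confirmed in exact rational arithmetic; this is a Python sketch using the standard-library `fractions` module (not part of the original sheet):

```python
from fractions import Fraction

# Exact rational-arithmetic check: for y = x^2 the forward-difference
# slope over [h*i, h*(i+1)] is h*(2i+1), and the error (exact minus
# approximate) is -h, 0 and +h at the left end, mid-point and right
# end of the interval, respectively.
h = Fraction(1, 100)
for i in range(100):
    approx = ((h * (i + 1))**2 - (h * i)**2) / h   # difference quotient
    assert approx == h * (2 * i + 1)
    assert 2 * h * i - approx == -h                     # left end
    assert 2 * h * (i + Fraction(1, 2)) - approx == 0   # mid-point
    assert 2 * h * (i + 1) - approx == h                # right end
print("errors are exactly -h, 0, +h")
```

Working with Fraction rather than floats removes rounding error entirely, so the equalities here are exact rather than approximate.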
(f) The method is exactly the same when approximating y = x3 . The full
set of commands to find the error when using the “best” (mid-point)
approximation is:
>> x = 0:0.01:1;
>> y = x.^3;
>> for i=1:100
dydx(i) = (y(i+1) - y(i))/0.01;
end
>> x2 = 0.005:0.01:0.995;
>> plot(x2,dydx);
>> e = 3*x2.*x2 - dydx;
>> max(abs(e))
ans =
2.5000e-05
The error is small, but not close to numerical zero.
A similar argument to the one above shows that the error is exactly h²/4. In fact, the proposed method is only “exact” when approximating gradients that are linear; otherwise we always get an error of O(h²), although proving that is rather tricky.
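The h²/4 claim can also be checked in exact arithmetic (a Python sketch with the standard-library `fractions` module, not part of the original sheet):

```python
from fractions import Fraction

# Exact-arithmetic check that for y = x^3 the mid-point error of the
# forward-difference slope is exactly h^2/4 in every interval.
h = Fraction(1, 100)
for i in range(100):
    approx = ((h * (i + 1))**3 - (h * i)**3) / h   # difference quotient
    exact_mid = 3 * (h * (i + Fraction(1, 2)))**2  # 3x^2 at the mid-point
    assert abs(exact_mid - approx) == h**2 / 4
print("mid-point error is exactly h^2/4 =", float(h**2 / 4))
```

With h = 0.01, h²/4 = 2.5 × 10⁻⁵, which matches the MATLAB answer found in part (f).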