Computational Economics and Finance
Part I:
Elementary Concepts of Numerical Analysis
Spring 2016
Outline
• Computer arithmetic
• Error analysis:
– Sources of error
– Error propagation
– Controlling the error
∗ Rates of convergence
– Compute and verify
Computer Arithmetic
Unlike pure mathematics, computer arithmetic has finite precision and is limited by time and space.
Real numbers are represented as floating-point numbers of the form

±d_0.d_1d_2…d_{p−1} × β^e.

d_0.d_1d_2…d_{p−1} is called the significand (old: mantissa) with d_j ∈ {0, 1, …, β − 1}; it has p digits.
β is called the base.
e ∈ {e_min, e_min + 1, …, e_max} is the exponent.
Example
Consider the decimal number 0.1.
If β = 10 and p = 3, then 1.00 × 10^{−1} is exact.
If β = 2 and p = 24, then 1.10011001100110011001101 × 2^{−4} is not exact.
In fact, with β = 2 the number 0.1 lies strictly between two floating-point numbers and is not exactly representable by either of them.
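The 0.1 example is easy to reproduce in any IEEE 754 environment. The slides use Matlab; a minimal Python sketch (Python floats are binary64) shows the same effect:

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so the stored double is only close to 0.1.
# Decimal(0.1) prints the exact value of the binary64 number actually stored.
stored = Decimal(0.1)
print(stored)  # slightly above 0.1

# Accumulated round-off: ten copies of the stored value do not sum to 1.0.
total = sum([0.1] * 10)
print(total == 1.0)  # False
```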
Double Precision
Most widely-used standard for floating-point computation:
IEEE Standard for Floating-Point Arithmetic (IEEE 754)
IEEE: Institute of Electrical and Electronics Engineers
Followed by many hardware (central processing units, CPUs, and floating-point units, FPUs) and software implementations
Current version is IEEE 754–2008, published in August 2008
Perhaps most widely used: IEEE 754 double-precision binary
floating-point format: binary64
binary64
Base β = 2, exponent and significand written in binary form
total of 64 bits
1 bit for +/- sign, 11 bits for exponent, 52 bits for significand
Normalized such that the most significant bit d_0 = 1 for all numbers
Exponent is biased by 1023
The number

(−1)^sign (1.d_1d_2…d_52) × 2^{e−1023}

has the value

(−1)^sign (1 + Σ_{i=1}^{52} d_i 2^{−i}) × 2^{e−1023}
Machine Epsilon
Smallest quantity ε such that 1 − ε and 1 + ε are both different from one; smallest possible difference in the significand between two numbers
Double precision has at most 16 decimal digits of accuracy
2^{−52} ≈ 2.220446049 · 10^{−16}
Matlab: eps = 2.2204e-016
Mathematica: $MachineEpsilon = 2.22045 × 10^{−16}
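These constants can be checked directly; a small Python sketch (Python floats are binary64, so `sys.float_info.epsilon` corresponds to Matlab's `eps`):

```python
import sys

# Machine epsilon for binary64 is 2^-52: the gap between 1.0 and the next float.
eps = sys.float_info.epsilon
print(eps == 2.0**-52)  # True

# Adding half an ulp to 1.0 rounds back to 1.0 (round-to-nearest-even),
# while adding a full ulp yields a number distinguishable from 1.0.
print(1.0 + 2.0**-53 == 1.0)  # True
print(1.0 + 2.0**-52 > 1.0)   # True
```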
Machine Infinity
Largest quantity that can be represented; overflow error occurs
if an operation produces a larger quantity.
Double precision has maximal exponent

2^10 = 1024 = (2^0 + 2^1 + 2^2 + … + 2^10) − 1023

and 2^{1024} ≈ 1.797693135 · 10^{308}
Bias in representation: 1023
Largest number (2 − eps) · 2^{1023} ≈ 1.797693135 · 10^{308}
Matlab: realmax = 1.7977e+308
Mathematica: $MaxMachineNumber = 1.79769 × 10^{308}
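A quick check of the largest-number formula and of overflow behavior, again in Python as an illustration:

```python
import sys

# Largest finite binary64 number: (2 - 2^-52) * 2^1023, i.e. Matlab's realmax.
realmax = sys.float_info.max
print(realmax == (2.0 - 2.0**-52) * 2.0**1023)  # True

# Overflow: exceeding realmax produces infinity rather than a finite result.
overflowed = realmax * 2.0
print(overflowed)  # inf
```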
Machine Zero
Any quantity that cannot be distinguished from zero.
Underflow error occurs if an operation on nonzero quantities
produces a smaller quantity.
Double precision has smallest exponent −1023 and 2^{−1023} ≈ 1.112536929 · 10^{−308}
By convention this number represents 0 since normalization requires d_0 = 1.
Smallest positive normalized number: 2^{−1022} ≈ 2.225073859 · 10^{−308}
Matlab: realmin = 2.2251e-308
Mathematica: $MinMachineNumber = 2.22507 × 10^{−308}
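A check of these constants in Python. Note that IEEE 754 additionally defines subnormal numbers below realmin (outside the normalized scheme on this slide), so underflow to zero in practice happens gradually:

```python
import sys

# Smallest positive normalized binary64 number is 2^-1022 (Matlab's realmin).
realmin = sys.float_info.min
print(realmin == 2.0**-1022)  # True

# Below realmin, IEEE 754 provides subnormal numbers (gradual underflow) ...
sub = realmin / 2.0
print(sub > 0.0)  # True: 2^-1023 survives as a subnormal

# ... until even the subnormal range is exhausted and the result is zero.
tiny = 5e-324 / 2.0  # 5e-324 is the smallest positive subnormal, 2^-1074
print(tiny == 0.0)   # True: underflow to machine zero
```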
Extended Precision
Often desirable and occasionally necessary to increase precision
Some software packages provide arbitrary-precision arithmetic
Mathematica:
$MinNumber = 1.887662394852453 × 10^{−323228468}
$MaxNumber = 5.297557459040040 × 10^{323228467}
Computer Arithmetic
A computer can only execute the basic arithmetic operations of
addition, subtraction, multiplication, and division. Everything
else is approximated.
Relative speeds, old values (Exercise 2.7):

operation        speed relative to addition
subtraction      1.03
multiplication   1.03
division         1.06
exponentiation   5.09
sine function    4.20
Computer Arithmetic
Efficient evaluation of polynomials (Horner’s method)
a_0 + a_1 x + a_2 x^2 + a_3 x^3 = a_0 + x(a_1 + x(a_2 + x a_3))
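Horner's rule replaces the explicit powers of x with nested multiply–adds (n multiplications and n additions for a degree-n polynomial). A Python sketch:

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x^n by Horner's method.

    coeffs = [a0, a1, ..., an]; the nested form a0 + x*(a1 + x*(a2 + ...))
    needs only n multiplications and n additions.
    """
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# 1 + 2x + 3x^2 + 4x^3 at x = 2: 1 + 4 + 12 + 32 = 49
print(horner([1.0, 2.0, 3.0, 4.0], 2.0))  # 49.0
```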
Efficient computation of derivatives (automatic differentiation)
Consider f(x, y, z) = (x^α + y^α + z^α)^β. Then

∂f/∂x = (x^α + y^α + z^α)^{β−1} βα x^{α−1} = αβ · (x^α/x) · f(x, y, z)/(x^α + y^α + z^α)

so the already computed value f(x, y, z) can be reused in the derivative.
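A numerical sanity check of the rearranged derivative, comparing both algebraic forms against a central finite difference (the values of α, β, x, y, z below are arbitrary test inputs, not from the slides):

```python
# Verify df/dx for f(x,y,z) = (x^a + y^a + z^a)^b in two algebraic forms
# and against a central finite difference.
a, b = 2.5, 3.0          # arbitrary test exponents (alpha, beta)
x, y, z = 1.5, 2.0, 0.5  # arbitrary test point

def f(x, y, z):
    return (x**a + y**a + z**a)**b

s = x**a + y**a + z**a
direct = b * s**(b - 1) * a * x**(a - 1)      # textbook chain rule
reused = a * b * (x**a / x) * f(x, y, z) / s  # form reusing the value f(x,y,z)

h = 1e-6
numeric = (f(x + h, y, z) - f(x - h, y, z)) / (2 * h)

print(abs(direct - reused) < 1e-9 * abs(direct))   # True: algebraically equal
print(abs(direct - numeric) < 1e-5 * abs(direct))  # True: matches finite diff
```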
Error Analysis: Sources of Error
Model error: an economic model is only an approximation of a
“real” phenomenon
Data error: parameters of the model have to be estimated, forecasted, simulated or approximated; data may be missing; available data may not well reflect the true but unknown process
Numerical errors: solving a model on a computer typically results
in an approximation of the solution; such approximations are the
essence of numerical analysis and involve two types of numerical
errors, round-off errors and truncation errors
reality −→ model −→ numerical solution
Numerical Analysis: Sources of Error
Numbers are represented by a finite number of bits.
Real numbers with a significand longer than the number of bits available have to be shortened.
Examples: irrational numbers, finite numbers that are too long, finite numbers in decimal form that have no finite exact representation in binary form
Round-off error: chopping off extra digits or rounding
Example: 2/3 stored as 0.66666 (chopping) or as 0.66667 (rounding)
Round-off Errors
If β = 2 and p = 24, then the binary floating-point representation of the decimal number 0.1,

1.10011001100110011001101 × 2^{−4},

(in single precision) is not exact.
Round-off errors are likely to occur
• when the numbers involved in calculations differ significantly
in their magnitude, or
• when two numbers that are nearly identical are subtracted
from each other.
Example in Matlab
Solve the quadratic equation
x2 − 100.0001x + 0.01 = 0
using the quadratic formula

x = (−b ± √(b^2 − 4ac)) / (2a)

Exact solutions: x1 = 100 and x2 = 0.0001
Round-off Errors in Matlab
format long;
a = 1; b = -100.0001; c = 0.01;
RootDis = sqrt(b^2 - 4*a*c)
RootDis = 99.999899999999997
x1 = (-b + RootDis)/(2*a)
x1 = 100
x2 = (-b - RootDis)/(2*a)
x2 = 1.000000000033197e-004
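The small root can be recovered accurately by avoiding the subtraction of nearly equal numbers. A Python sketch of the same computation, plus the standard fix via Vieta's formula x1 · x2 = c/a (variable names mirror the Matlab snippet above):

```python
import math

a, b, c = 1.0, -100.0001, 0.01
root_dis = math.sqrt(b * b - 4.0 * a * c)

x1 = (-b + root_dis) / (2.0 * a)        # large root: addition, no cancellation
x2_naive = (-b - root_dis) / (2.0 * a)  # small root: -b and root_dis nearly cancel
x2_stable = c / (a * x1)                # Vieta: x1 * x2 = c/a, no subtraction

print(x1)                    # 100 up to rounding
print(abs(x2_naive - 1e-4))  # noticeable error from catastrophic cancellation
print(abs(x2_stable - 1e-4)) # error at the level of rounding
```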
Truncation Errors
Truncation errors occur when numerical methods used for solving a mathematical problem use an approximate mathematical procedure.

Example: The infinite sum e^x = Σ_{n=0}^{∞} x^n/n! becomes Σ_{n=0}^{N} x^n/n! for some finite N.

Truncation error is independent of round-off error and occurs even when the mathematical operations are exact.
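The truncated series is easy to experiment with; a Python sketch of the partial sums Σ_{n=0}^{N} x^n/n!:

```python
import math

def exp_partial(x, N):
    """Partial sum sum_{n=0}^{N} x^n / n!, approximating e^x."""
    term, total = 1.0, 1.0  # n = 0 term
    for n in range(1, N + 1):
        term *= x / n       # x^n / n! built incrementally
        total += term
    return total

# Truncation error shrinks as N grows, independently of round-off.
for N in (5, 10, 20):
    print(N, abs(exp_partial(1.0, N) - math.e))
```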
Error Analysis: Error Propagation
Catastrophic cancellation occurs when subtracting rounded quantities.
Benign cancellation occurs when subtracting exact quantities.
Example: The area of a triangle with sides a, b, and c is

A = √(s(s − a)(s − b)(s − c)),  where s = (a + b + c)/2

(Heron of Alexandria). Suppose a = 9 and b = c = 4.53. The correct answer is s = 9.03 and A = 2.342…
If β = 10 and p = 3, then s = 9.05 and A = 3.04.
Rewriting as

A = √((a + (b + c))(c − (a − b))(c + (a − b))(a + (b − c))) / 4,  a ≥ b ≥ c,

yields A = 2.35.
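In full double precision both formulas handle this triangle, since the failure above was shown for p = 3 decimal digits; the rearranged (Kahan-style) formula is the robust choice when precision is scarce or the triangle is needle-like. A Python sketch of the two versions on the slide's triangle:

```python
import math

def area_heron(a, b, c):
    """Classical Heron formula: sqrt(s(s-a)(s-b)(s-c)) with s = (a+b+c)/2."""
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_stable(a, b, c):
    """Rearranged formula; requires a >= b >= c."""
    return math.sqrt((a + (b + c)) * (c - (a - b))
                     * (c + (a - b)) * (a + (b - c))) / 4.0

# Slide example: a = 9, b = c = 4.53 -> both agree in double precision.
print(area_heron(9.0, 4.53, 4.53))   # ~2.342
print(area_stable(9.0, 4.53, 4.53))  # ~2.342
```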
Error Analysis: Controlling Rounding Error
Rules of thumb:
• Avoid unnecessary subtractions of numbers of similar magnitude.
• First add the smaller numbers and then add the result to the
larger numbers.
Example: Rounding Error
Exercise 2.3: Consider the system of linear equations
64919121x − 159018721y = 1
41869520.5x − 102558961y = 0
Exact solution: x = 205117922 and y = 83739041
However, double-precision arithmetic yields x = 1.02559e+008 and y = 4.18695e+007 due to catastrophic cancellation:

x = −102558961 / (64919121 · (−102558961) − 41869520.5 · (−159018721))
  = −102558961 / (−6658037598793281 + 6658037598793280.5)

Matlab: 102558961
Mathematica: 1.02559 × 10^8
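The determinant's cancellation can be replayed exactly; a Python sketch comparing double precision with exact rational arithmetic via `fractions.Fraction` (41869520.5 = 83739041/2 is a binary fraction, so it converts exactly):

```python
from fractions import Fraction

# The two products agree in roughly their first 16 decimal digits, so in
# double precision the true determinant -1/2 is destroyed by rounding.
det_float = 64919121.0 * (-102558961.0) - 41869520.5 * (-159018721.0)

# The same determinant in exact rational arithmetic.
det_exact = (Fraction(64919121) * Fraction(-102558961)
             - Fraction(83739041, 2) * Fraction(-159018721))

# Cramer's rule for x: the numerator is 1*(-102558961) - (-159018721)*0.
x_exact = Fraction(-102558961) / det_exact

print(det_float)  # -1.0: both products rounded to the nearest double
print(det_exact)  # -1/2
print(x_exact)    # 205117922
```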
Solving the Equations in Matlab
A = [ 64919121 -159018721 ; 41869520.5 -102558961 ];
b = [ 1 ; 0 ];
A\b
ans =
1.0e+008 *
1.0602
0.4328
Solving the Equations in Mathematica
Clear[x, y];
Solve[{64919121 x - 159018721 y == 1,
41869520.5 x - 102558961 y == 0}, {x, y}]
{}
According to Mathematica the system has no solution.
Clear[x, y];
Solve[{64919121 x - 159018721 y == 1,
83739041/2 x - 102558961 y == 0}, {x, y}]
{{x -> 205117922, y -> 83739041}}
Now Mathematica finds the exact solution correctly.
Error Analysis: Controlling Rounding Error
Exercise 2.5: Compute

83521y^8 + 578x^2y^4 − 2x^4 + 2x^6 − x^8

for x = 9478657 and y = 2298912
Exact answer: −179689877047297
However, double-precision arithmetic yields −1.0889e+040 (depending on ordering)
Individual terms:

83521y^8       =  6.5159e+055
578x^2y^4      =  1.45048e+042
−2x^4          = −1.61442e+028
2x^6           =  1.45048e+042
−x^8           = −6.5159e+055
83521y^8 − x^8 = −2.9074e+042
Exercise 2.5 in Matlab
x = 9478657;
y = 2298912;
83521*y^8 + 578*x^2*y^4 - 2*x^4 + 2*x^6 - x^8
ans =
-1.0889e+040
Exercise 2.5 in Mathematica
x = 9478657;
y = 2298912;
83521*y^8 + 578*x^2*y^4 - 2*x^4 + 2*x^6 - x^8
-179689877047297
Mathematica finds the correct solution.
x = 9478657.;
y = 2298912;
83521*y^8 + 578*x^2*y^4 - 2*x^4 + 2*x^6 - x^8
0
Mathematica states that the solution is zero!
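The same experiment in Python: integer arithmetic is exact (Python integers have arbitrary precision, like Mathematica's), while float evaluation loses everything to cancellation among terms near 6.5 · 10^55. (The float result depends on evaluation order; left-to-right is assumed here.)

```python
x, y = 9478657, 2298912

# Exact: arbitrary-precision integer arithmetic.
exact = 83521*y**8 + 578*x**2*y**4 - 2*x**4 + 2*x**6 - x**8

# Double precision: the answer (~ -1.8e14) drowns in terms of size ~6.5e55.
xf, yf = float(x), float(y)
approx = 83521*yf**8 + 578*xf**2*yf**4 - 2*xf**4 + 2*xf**6 - xf**8

print(exact)            # -179689877047297
print(approx == exact)  # False: the float result is pure round-off
```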
Error Analysis: Controlling Truncation Error
Truncation error occurs in the application of many numerical
methods.
Example: iterative method x^(k+1) = g^(k+1)(x^(k), x^(k−1), …)
Need stopping rules to stop the sequence {x^(k)} when we are close to the unknown solution x∗.
Unless the sequence x^(k) converges for small k, a stopping rule leads to truncation error.
Stopping Rules
Stop when the sequence is “not changing much” anymore.
Stop when ||x^(k+1) − x^(k)|| is small relative to ||x^(k)||:

||x^(k+1) − x^(k)|| / ||x^(k)|| ≤ ε for small ε.

This rule may never stop the sequence if x^(k) converges to zero.
General stopping rule: stop and accept x^(k+1) if

||x^(k+1) − x^(k)|| ≤ ε(1 + ||x^(k)||)
Failure of General Stopping Rule
Consider the sequence

x_k = Σ_{j=1}^{k} 1/j

This sequence diverges, but x_k tends to infinity very slowly, e.g. x_10000 = 9.78761.
For ε = 0.0001 the general stopping rule would stop the sequence at k = 1159 with x_1159 = 7.63296.
The general stopping rule is not reliable.
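This failure is easy to reproduce; a Python sketch of the general stopping rule applied to the harmonic partial sums:

```python
def general_rule_stop(eps=1e-4):
    """Run x_k = sum_{j=1}^k 1/j until ||x_{k+1} - x_k|| <= eps(1 + ||x_k||)."""
    k, x = 1, 1.0
    while True:
        nxt = x + 1.0 / (k + 1)
        if abs(nxt - x) <= eps * (1.0 + abs(x)):
            return k + 1, nxt  # stop and accept x_{k+1}
        k, x = k + 1, nxt

k, x = general_rule_stop()
print(k, x)  # stops at k = 1159 with x ~ 7.63296, although the sum diverges
```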
Rates of Convergence
Key measure for the performance of an algorithm
Suppose the sequence {x^(k)} with x^(k) ∈ R^n converges to x∗.
{x^(k)} converges at rate q > 1 to x∗ if

||x^(k+1) − x∗|| / ||x^(k) − x∗||^q ≤ M < ∞

for all k sufficiently large.
Quadratic convergence: q = 2
Example: 1 + 2^{−2^k} converges at rate q = 2 to 1
Linear Convergence
{x^(k)} converges linearly to x∗ at rate β if

||x^(k+1) − x∗|| / ||x^(k) − x∗|| ≤ β < 1

for all k sufficiently large.
Example: 1 + 2^{−k} converges linearly to 1 at rate β = 0.5.
Superlinear convergence:

lim_{k→∞} ||x^(k+1) − x∗|| / ||x^(k) − x∗|| = 0

Example: 1 + k^{−k} converges superlinearly to 1.
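The three example sequences can be checked numerically by forming the error ratios straight from the definitions; a Python sketch:

```python
# Errors e_k = |x_k - 1| for the three example sequences converging to 1.
lin  = [2.0**-k      for k in range(1, 30)]  # 1 + 2^-k      (linear)
sup  = [float(k)**-k for k in range(1, 12)]  # 1 + k^-k      (superlinear)
quad = [2.0**-(2**k) for k in range(1, 6)]   # 1 + 2^(-2^k)  (quadratic)

lin_ratios  = [lin[i+1] / lin[i]       for i in range(len(lin) - 1)]
sup_ratios  = [sup[i+1] / sup[i]       for i in range(len(sup) - 1)]
quad_ratios = [quad[i+1] / quad[i]**2  for i in range(len(quad) - 1)]  # q = 2

print(lin_ratios[-1])   # 0.5: linear at rate beta = 0.5
print(sup_ratios[-1])   # tends to 0: superlinear
print(quad_ratios[-1])  # bounded by M: quadratic, here exactly 1.0
```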
Error Analysis: Controlling Truncation Error
Adaptive stopping rule: Suppose the sequence {x^(k)} converges linearly at rate β to x∗, i.e. ||x^(k+1) − x∗|| ≤ β||x^(k) − x∗||. Then

||x^(k+1) − x∗|| ≤ ||x^(k+1) − x^(k)|| / (1 − β)

Stop and accept x^(k+1) if ||x^(k+1) − x^(k)|| ≤ ε(1 − β).
Estimate β as the maximum over

||x^(k−l) − x^(k+1)|| / ||x^(k−l−1) − x^(k+1)||  for l = 0, 1, 2, …
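A sketch of the β estimate on the linearly convergent example 1 + 2^{−k}, where the true rate is β = 0.5; the latest iterate x^(k+1) stands in for the unknown limit x∗:

```python
def estimate_beta(xs):
    """beta-hat: max over l of ||x_(k-l) - x_(k+1)|| / ||x_(k-l-1) - x_(k+1)||,
    using the last iterate xs[-1] as a stand-in for the unknown limit."""
    last = xs[-1]
    ratios = [abs(xs[i + 1] - last) / abs(xs[i] - last)
              for i in range(len(xs) - 2)]
    return max(ratios)

xs = [1.0 + 2.0**-k for k in range(0, 40)]
beta_hat = estimate_beta(xs)
print(beta_hat)  # just below the true rate beta = 0.5
```

Taking the maximum over l matters: ratios formed from the most recent iterates underestimate β here (they approach 1/3), while ratios from early iterates, for which x^(k+1) is a comparatively accurate proxy for x∗, approach 0.5.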
Error Analysis: Controlling Truncation Error
Exercise 2.11a: Consider the sequence x_k = Σ_{n=1}^{k} 3^n/n!. Note that lim_{k→∞} x_k = e^3 − 1 = 19.08553…

General stopping rule
Adaptive stopping rule:

β̂ = max_{l=0,1,2,…} ||x^(k−l) − x^(k+1)|| / ||x^(k−l−1) − x^(k+1)||

β̃ = ||x^(k) − x^(k+1)|| / ||x^(k−1) − x^(k+1)||
Compute and Verify
First, compute an approximate solution to your problem.
Second, verify that it is an acceptable approximation according
to economically meaningful criteria.
Example: Consider the problem of solving f (x) = 0. The exact
solution is x∗, our approximate solution is x̂.
Forward error analysis: How far is x̂ from x∗?
Backward error analysis: Construct a similar problem f̂ such that f̂(x̂) = 0. How far is f̂ from f?
Compute and verify: How far is f (x̂) from its target value of 0?
Compute and Verify
Example: f(x) = x^2 − 2 = 0
Approximate solution x̂ = 1.41
f(1.41) = 1.9881 − 2 < 0, f(1.42) = 2.0164 − 2 > 0
Bound on forward error: ||x̂ − x∗|| < 0.01
f̂(x) = x^2 − 1.9881 satisfies f̂(x̂) = 0
Backward error: ||f̂(x) − f(x)|| = 0.0119
f(x̂) = −0.0119
How large or important is this error?
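The three error notions for this example can be computed directly; a small Python sketch:

```python
import math

def f(x):
    return x * x - 2.0

x_hat = 1.41

# Forward error: distance from the true root sqrt(2).
forward = abs(x_hat - math.sqrt(2.0))

# Backward error: f_hat(x) = x^2 - 1.9881 satisfies f_hat(x_hat) = 0 exactly,
# and ||f_hat - f|| = |2 - 1.9881| = 0.0119.
backward = abs(2.0 - x_hat * x_hat)

# Compute and verify: the residual f(x_hat), its distance from the target 0.
residual = f(x_hat)

print(forward < 0.01)  # True: the sign change on [1.41, 1.42] brackets the root
print(backward)        # ~0.0119
print(residual)        # ~-0.0119
```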
Compute and Verify
Quantify the importance of the error in economically meaningful
terms.
Example: Excess demand function E(p) = D(p) − S(p)
What does E(p̂) = 0.01 mean? Not much.
What does E(p̂)/D(p̂) = 0.01 mean? A lot.
Interpretation of the relative error in this example:
• Leakage between demand and supply due to market frictions.
• Optimization error of “boundedly rational” agents.
Compute and Verify
Relative errors in economically meaningful terms
Advantage: Generally applicable (unlike forward error analysis).
Disadvantage: More than one solution may be deemed acceptable (like backward error analysis).
Summary
• Computer arithmetic
• Error analysis:
– Sources of error
– Error propagation
– Controlling the error
∗ Rates of convergence
– Compute and verify