Numerical Analysis
CC413
Propagation of Errors
In numerical methods, the calculations
are not made with exact numbers. How
do these inaccuracies propagate through
the calculations?
Underflow and Overflow
• Numbers occurring in calculations whose magnitude is smaller than the smallest representable floating-point value result in underflow and are generally set to zero. In IEEE 754 double precision, the smallest positive normalized number is 2^(−1022) ≈ 2.2 × 10^(−308).
• Numbers whose magnitude exceeds the largest representable value, (2 − 2^(−52)) · 2^(1023) ≈ 1.8 × 10^(308), result in overflow.
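These limits are easy to observe directly. A minimal Python sketch (not part of the slides; IEEE 754 double precision assumed):

```python
import sys

# Smallest positive normalized double and largest finite double (IEEE 754).
print(sys.float_info.min)   # about 2.2e-308
print(sys.float_info.max)   # about 1.8e308

# Overflow: exceeding the largest finite double yields infinity.
overflowed = sys.float_info.max * 2
print(overflowed)           # inf

# Underflow: halving past the subnormal range flushes to zero.
underflowed = sys.float_info.min
for _ in range(100):
    underflowed /= 2
print(underflowed)          # 0.0
```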
Significant Figures

The number of significant figures indicates precision. Significant digits of a number are those that can be used with confidence, i.e., the number of certain digits plus one estimated digit.
53,800 — how many significant figures?

5.38 × 10^4 → 3
5.380 × 10^4 → 4
5.3800 × 10^4 → 5
Zeros used only to locate the decimal point are not significant figures:

0.00001753 → 4
0.0001753 → 4
0.001753 → 4
Error Definitions

Numerical error: the use of approximations to represent exact mathematical operations and quantities.

true value = approximation + error

• True error: e_t = true value − approximation
• The subscript t denotes the true error.
• Shortcoming: the true error gives no sense of the magnitude of the quantities involved.
• Normalize by the true value to obtain the true relative error.
Example 1:
Find the bounds for the propagation in adding two numbers. For example if one is
calculating X +Y where
X = 1.5 ± 0.05
Y = 3.4 ± 0.04
Solution
Maximum possible value of X = 1.55 and Y = 3.44
Maximum possible value of X + Y = 1.55 + 3.44 = 4.99
Minimum possible value of X = 1.45 and Y = 3.36.
Minimum possible value of X + Y = 1.45 + 3.36 = 4.81
Hence
4.81 ≤ X + Y ≤ 4.99.
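The worst-case bound in Example 1 can be checked with a few lines of Python (an illustrative sketch, not from the slides):

```python
def add_intervals(x, dx, y, dy):
    """Worst-case bounds when adding x ± dx and y ± dy."""
    return (x - dx) + (y - dy), (x + dx) + (y + dy)

lo, hi = add_intervals(1.5, 0.05, 3.4, 0.04)
print(f"{lo:.2f} <= X + Y <= {hi:.2f}")   # 4.81 <= X + Y <= 4.99
```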
Example 2:
The strain in an axial member of square cross-section is given by

ε = F / (h² E)

Given
F = 72 ± 0.9 N
h = 4 ± 0.1 mm
E = 70 ± 1.5 GPa

Find the maximum possible error in the measured strain.
Example 2:
Solution

ε = 72 / ((4 × 10⁻³)² (70 × 10⁹))
  = 64.286 × 10⁻⁶
  = 64.286 μ
http://numericalmethods.eng.usf.edu
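The maximum possible error asked for in Example 2 can be found numerically by evaluating the strain at the extremes of each tolerance range. A Python sketch (not from the slides; it assumes, correctly for F/(h²E), that the strain is monotone in each variable so the extremes occur at the corners of the tolerance box):

```python
from itertools import product

# Nominal values and tolerances from Example 2 (h converted to metres).
F, dF = 72.0, 0.9        # axial force, N
h, dh = 4e-3, 0.1e-3     # cross-section side, m
E, dE = 70e9, 1.5e9      # Young's modulus, Pa

def strain(F, h, E):
    return F / (h**2 * E)

nominal = strain(F, h, E)

# Evaluate all 8 corners of the (F, h, E) tolerance box.
corners = [strain(F + sF * dF, h + sh * dh, E + sE * dE)
           for sF, sh, sE in product((-1.0, 1.0), repeat=3)]
max_error = max(abs(v - nominal) for v in corners)

print(f"nominal strain = {nominal * 1e6:.3f} microstrain")   # 64.286
print(f"maximum possible error ~ {max_error * 1e6:.2f} microstrain")
```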
Error definitions cont.

True relative percent error:

e_t = (true error / true value) × 100%
Example
Consider a problem where the true answer is
7.91712. If you report the value as 7.92, answer
the following questions.
1. What is the true error?
2. What is the relative error?
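The two questions can be answered directly from the definitions above; a short Python check (illustrative, not from the slides):

```python
true_value = 7.91712
approximation = 7.92

true_error = true_value - approximation
relative_error_pct = true_error / true_value * 100

print(f"true error = {true_error:.5f}")                # -0.00288
print(f"relative error = {relative_error_pct:.4f}%")   # -0.0364%
```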
Example

Determine the absolute and relative errors when approximating p by p∗.

Solution
This example shows that the same relative error, 0.3333 × 10⁻¹, can occur for widely varying absolute errors. The absolute error can therefore be misleading, while the relative error is more meaningful because it takes into account the size of the value.
Error Definitions cont.

• Round-off error (symmetric rounding): originates from the fact that computers retain only a fixed number of significant figures. For example, y = 0.73248261 rounds to 0.7325.
Error = y − round(y) = −0.00001739, and relative error = error / y ≈ −0.000024.

• Truncation (chopping) errors: result from using an approximation in place of an exact mathematical procedure. For example, y = 0.73248261 chopped to 0.7324.
Error = 0.00008261, and relative error = error / y ≈ 0.000113.
Example
Determine the five-digit (a) chopping and (b) rounding values of the irrational number π.

Solution: The number π has an infinite decimal expansion of the form

π = 3.14159265...

Written in normalized decimal form, we have

π = 0.314159265... × 10¹.

(a) The floating-point form of π using five-digit chopping is

fl(π) = 0.31415 × 10¹ = 3.1415.

(b) The sixth digit of the decimal expansion of π is a 9, so the floating-point form of π using five-digit rounding is

fl(π) = (0.31415 + 0.00001) × 10¹ = 3.1416.
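Five-digit chopping and rounding can be sketched in Python as follows (an illustrative helper, not from the slides; the function name `fl` mirrors the notation above):

```python
import math

def fl(x, n, mode="round"):
    """n-digit normalized decimal floating-point form of x (a sketch)."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x))) + 1   # exponent so mantissa is in [0.1, 1)
    m = x / 10**e                            # normalized mantissa
    if mode == "chop":
        m = math.trunc(m * 10**n) / 10**n    # drop digits beyond the nth
    else:
        m = round(m * 10**n) / 10**n         # round to the nth digit
    return m * 10**e

print(f"{fl(math.pi, 5, 'chop'):.4f}")    # 3.1415
print(f"{fl(math.pi, 5, 'round'):.4f}")   # 3.1416
```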



• Use absolute value.
• Computations are repeated until the stopping criterion is satisfied:

|e_a| < e_s

where e_s is a pre-specified % tolerance based on the knowledge of your solution. If the criterion

e_s = (0.5 × 10^(2−n)) %

is met, you can be sure that the result is correct to at least n significant figures.
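The criterion above can be applied to an iterative computation. A Python sketch (not from the slides) that sums the Maclaurin series for e^x until the approximate percent error falls below e_s for n = 4 significant figures:

```python
import math

n = 4                              # desired significant figures
es = 0.5 * 10 ** (2 - n)           # stopping tolerance, in percent

# Maclaurin series for e^x at x = 1, adding terms until |e_a| < e_s.
x, total, term, k = 1.0, 1.0, 1.0, 0
ea = 100.0
while ea >= es:
    k += 1
    term *= x / k                                     # next series term
    new_total = total + term
    ea = abs((new_total - total) / new_total) * 100   # approximate % error
    total = new_total

print(f"e^1 ~ {total:.6f} after {k + 1} terms (true value {math.e:.6f})")
```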
Example

Suppose that x = 5/7 and y = 1/3. Use five-digit chopping for calculating x + y, x − y, x × y, and x ÷ y.

Solution: Note that

x = 5/7 = 0.714285..., so fl(x) = 0.71428 × 10⁰
y = 1/3 = 0.333333..., so fl(y) = 0.33333 × 10⁰
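The four operations can be carried out programmatically. A Python sketch (illustrative, not from the slides; `chop5` is a hypothetical helper implementing five-digit decimal chopping):

```python
import math

def chop5(x):
    """Five-digit decimal chopping (illustrative sketch)."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x))) + 1   # normalize mantissa to [0.1, 1)
    m = math.trunc(x / 10**e * 1e5) / 1e5    # keep five decimal digits
    return m * 10**e

x, y = 5 / 7, 1 / 3
fx, fy = chop5(x), chop5(y)
print(fx, fy)                      # 0.71428 0.33333

results = []
for name, exact, approx in [
    ("x + y", x + y, chop5(fx + fy)),
    ("x - y", x - y, chop5(fx - fy)),
    ("x * y", x * y, chop5(fx * fy)),
    ("x / y", x / y, chop5(fx / fy)),
]:
    rel = abs((exact - approx) / exact)      # relative error vs exact value
    results.append(rel)
    print(f"{name}: chopped = {approx:.5g}, relative error = {rel:.2e}")
```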
Chopping Errors (Error Bounds Analysis)

Suppose the mantissa can only support n digits. Then

z = (0.a_1 a_2 … a_n a_{n+1} a_{n+2} …) × β^e,  a_1 ≠ 0
fl(z) = (0.a_1 a_2 … a_n) × β^e

Thus the absolute and relative chopping errors are

z − fl(z) = (0.00…0 a_{n+1} a_{n+2} …) × β^e   (n zeros)

(z − fl(z)) / z = ((0.00…0 a_{n+1} a_{n+2} …) × β^e) / ((0.a_1 a_2 … a_n a_{n+1} a_{n+2} …) × β^e)

Suppose β = 10 (base 10); what are the values of a_i such that the errors are the largest?
Chopping Errors (Error Bounds Analysis)

Because 0.a_{n+1} a_{n+2} a_{n+3} … < 1,

|z − fl(z)| = (0.a_{n+1} a_{n+2} …) × β^(e−n) < β^(e−n)

For the relative error,

|z − fl(z)| / |z| = ((0.00…0 a_{n+1} a_{n+2} …) × β^e) / ((0.a_1 a_2 … a_n a_{n+1} a_{n+2} …) × β^e)
                 ≤ β^(e−n) / ((0.100…0) × β^e)
                 = β^(e−n) / β^(e−1)
                 = β^(1−n)

Hence

|z − fl(z)| / |z| ≤ β^(1−n)
Round-off Errors (Error Bounds Analysis)

z = ±(0.a_1 a_2 … a_n a_{n+1} …) × β^e,  a_1 ≠ 0

where ± is the sign, β the base, and e the exponent.

fl(z) = ±(0.a_1 a_2 … a_n) × β^e                    if 0 ≤ a_{n+1} < β/2   (round down)
fl(z) = ±[(0.a_1 a_2 … a_n) + (0.00…01)] × β^e      if β/2 ≤ a_{n+1} < β   (round up)

fl(z) is the rounded value of z.
Round-off Errors (Error Bounds Analysis)

Absolute error of fl(z). When rounding down,

|z − fl(z)| = (0.000…0 a_{n+1} a_{n+2} a_{n+3} …) × β^e
            = (0.a_{n+1} a_{n+2} a_{n+3} …) × β^(e−n)
            ≤ (1/2) × β^(e−n)

because a_{n+1} < β/2. Similarly, when rounding up (i.e., when β/2 ≤ a_{n+1} < β),

|z − fl(z)| ≤ (1/2) × β^(e−n)
Round-off Errors (Error Bounds Analysis)

Relative error of fl(z). From the absolute bound,

|z − fl(z)| ≤ (1/2) × β^(e−n)

|z − fl(z)| / |z| ≤ ((1/2) × β^(e−n)) / ((0.a_1 a_2 …) × β^e)    because z = (0.a_1 a_2 …) × β^e
                 ≤ ((1/2) × β^(e−n)) / ((0.1)_β × β^e)           because (0.a_1 …)_β ≥ (0.1)_β
                 = (1/2) × β^(1−n)

Hence

|z − fl(z)| / |z| ≤ (1/2) × β^(1−n)
Summary of Error Bounds Analysis

            Chopping errors                 Round-off errors
Absolute    |z − fl(z)| ≤ β^(e−n)           |z − fl(z)| ≤ (1/2) β^(e−n)
Relative    |z − fl(z)| / |z| ≤ β^(1−n)     |z − fl(z)| / |z| ≤ (1/2) β^(1−n)

β: base; n: number of significant digits (digits in the mantissa); e: exponent.

Regardless of whether chopping or round-off is used, the absolute errors may increase as the numbers grow in magnitude, but the relative errors are bounded by the same magnitude.
Numerical Stability

• Rounding errors may accumulate and propagate unstably in a bad algorithm.
• It can be proven that for Gaussian elimination the accumulated error is bounded.
Ill-conditioned system of equations

A small deviation in the entries of the A matrix causes a large deviation in the solution.

[ 1     2    ] [x1]   [ 3    ]         [x1]   [1]
[ 0.48  0.99 ] [x2] = [ 1.47 ]    →    [x2] = [1]

[ 1     2    ] [x1]   [ 3    ]         [x1]   [3]
[ 0.49  0.99 ] [x2] = [ 1.47 ]    →    [x2] = [0]
Machine Epsilon

Relative chopping error:   |z − fl(z)| / |z| ≤ β^(1−n) = eps (chopping)
Relative round-off error:  |z − fl(z)| / |z| ≤ (1/2) β^(1−n) = eps (round-off)

eps is known as the machine epsilon — the smallest number such that 1 + eps > 1.

Algorithm to compute machine epsilon:

epsilon = 1;
while (1 + epsilon > 1)
    epsilon = epsilon / 2;
epsilon = epsilon * 2;
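The algorithm above runs unchanged in Python, and its result can be compared against the value the language reports (a direct transcription, assuming IEEE 754 double precision):

```python
import sys

# Halve epsilon until 1 + epsilon is indistinguishable from 1,
# then back up one step.
epsilon = 1.0
while 1.0 + epsilon > 1.0:
    epsilon /= 2
epsilon *= 2

print(epsilon)                             # 2.220446049250313e-16
print(epsilon == sys.float_info.epsilon)   # True
```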
Exercise
Discuss to what extent
(a + b)c = ac + bc
is violated in machine arithmetic.
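As a starting point for the exercise, one can measure the discrepancy empirically. A Python sketch (illustrative, not a full answer) that samples random operands and records the worst relative disagreement between the two sides:

```python
import random

random.seed(1)
worst = 0.0
for _ in range(10000):
    a = random.uniform(-1e8, 1e8)
    b = random.uniform(-1e8, 1e8)
    c = random.uniform(-1e8, 1e8)
    lhs = (a + b) * c
    rhs = a * c + b * c          # mathematically equal, but rounded differently
    if lhs != 0:
        worst = max(worst, abs(lhs - rhs) / abs(lhs))

print(f"largest relative discrepancy observed: {worst:.2e}")
```

The discrepancy grows when a + b suffers cancellation, since the rounding errors of a·c and b·c no longer cancel against a small (a + b)·c.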
How does a CPU compute the following functions for a specific x value?

cos(x), e^x, sin(x), log(x), etc.

• Functions that cannot be evaluated with elementary arithmetic alone, such as trigonometric and exponential functions, are expressed in an approximate fashion using Taylor series when their values, derivatives, and integrals are computed.

• The Taylor series provides a means to predict the value of a function at one point in terms of the function value and its derivatives at another point.
Taylor Series (nth-order approximation):

f(xᵢ₊₁) = f(xᵢ) + f′(xᵢ)/1! (xᵢ₊₁ − xᵢ) + f″(xᵢ)/2! (xᵢ₊₁ − xᵢ)² + … + f⁽ⁿ⁾(xᵢ)/n! (xᵢ₊₁ − xᵢ)ⁿ + Rₙ

The remainder term, Rₙ, accounts for all terms from (n+1) to infinity:

Rₙ = f⁽ⁿ⁺¹⁾(ξ)/(n+1)! · h⁽ⁿ⁺¹⁾

Defining the step size as h = (xᵢ₊₁ − xᵢ), the series becomes:

f(xᵢ₊₁) = f(xᵢ) + f′(xᵢ)/1! h + f″(xᵢ)/2! h² + … + f⁽ⁿ⁾(xᵢ)/n! hⁿ + Rₙ
Any smooth function can be approximated as a polynomial. Take x = xᵢ₊₁.

Zero-order approximation:   f(x) ≈ f(xᵢ)
First-order approximation:  f(x) ≈ f(xᵢ) + f′(xᵢ)(x − xᵢ)
Second-order approximation: f(x) ≈ f(xᵢ) + f′(xᵢ)/1! (x − xᵢ) + f″(xᵢ)/2! (x − xᵢ)²
nth-order approximation:    f(x) ≈ f(xᵢ) + f′(xᵢ)/1! (x − xᵢ) + f″(xᵢ)/2! (x − xᵢ)² + … + f⁽ⁿ⁾(xᵢ)/n! (x − xᵢ)ⁿ + Rₙ

• Each additional term contributes some improvement to the approximation; only if an infinite number of terms is added will the series yield an exact result.
• In most cases, only a few terms are needed for an approximation that is close enough to the true value for practical purposes.
Taylor Series Expansion

f(xᵢ₊₁) = f(xᵢ) + f′(xᵢ) h + f″(xᵢ)/2! h² + f‴(xᵢ)/3! h³ + … + f⁽ⁿ⁾(xᵢ)/n! hⁿ + Rₙ

where h = step size = xᵢ₊₁ − xᵢ

Rₙ = f⁽ⁿ⁺¹⁾(ξ)/(n+1)! · h⁽ⁿ⁺¹⁾,  xᵢ ≤ ξ ≤ xᵢ₊₁
Example
Use zero- through fourth-order Taylor series expansion to approximate f(1), given f(0) = 1.2 (i.e., h = 1), for

f(x) = −0.1x⁴ − 0.15x³ − 0.5x² − 0.25x + 1.2

Note: f(1) = 0.2

[Figure: plot of f(x) for 0 ≤ x ≤ 1, showing f(0) = 1.2 and f(1) = 0.2]
Solution

n = 0:
• f(1) ≈ 1.2
• et = |(0.2 − 1.2)/0.2| × 100 = 500%

n = 1:
• f′(x) = −0.4x³ − 0.45x² − x − 0.25
• f′(0) = −0.25
• f(1) ≈ 1.2 − 0.25h = 0.95
• et = 375%
Solution

n = 2:
• f″(x) = −1.2x² − 0.9x − 1
• f″(0) = −1
• f(1) ≈ 0.45
• et = 125%

n = 3:
• f‴(x) = −2.4x − 0.9
• f‴(0) = −0.9
• f(1) ≈ 0.3
• et = 50%
Solution

n = 4:
• f⁗(0) = −2.4
• f(1) = 0.2 — EXACT

Why does the fourth-order term give an exact solution? Because the 5th derivative is zero. In general, for an nth-order polynomial, we get an exact solution with an nth-order Taylor series.
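The whole table of approximations can be reproduced in a short loop. A Python sketch (illustrative, not from the slides) using the derivatives at x = 0 computed above:

```python
from math import factorial

# f and its derivatives at x = 0, matching the worked example.
def f(x):
    return -0.1 * x**4 - 0.15 * x**3 - 0.5 * x**2 - 0.25 * x + 1.2

derivs_at_0 = [1.2, -0.25, -1.0, -0.9, -2.4]   # f, f', f'', f''', f''''

true = f(1.0)                                  # 0.2
h = 1.0
approx = 0.0
for n, d in enumerate(derivs_at_0):
    approx += d * h**n / factorial(n)          # add the nth-order term
    et = abs((true - approx) / true) * 100
    print(f"order {n}: f(1) ~ {approx:.4f}, et = {et:.0f}%")
```

The printed percent errors reproduce the 500%, 375%, 125%, 50%, 0% sequence of the worked solution.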
Example
Approximate the function f(x) = 1.2 − 0.25x − 0.5x² − 0.15x³ − 0.1x⁴ from xᵢ = 0 with h = 1, and predict f(x) at xᵢ₊₁ = 1.
Taylor Series Problem
Use zero- through fourth-order Taylor series expansions to predict f(4) for f(x) = ln x using a base point at x = 2. Compute the percent relative error et for each approximation.

[Figure: plot of y = ln x for 0 ≤ x ≤ 5]
Example:
computing f(x) = eˣ using Taylor series expansion

f(xᵢ₊₁) = f(xᵢ) + f′(xᵢ)/1! (xᵢ₊₁ − xᵢ) + f″(xᵢ)/2! (xᵢ₊₁ − xᵢ)² + … + f⁽ⁿ⁾(xᵢ)/n! (xᵢ₊₁ − xᵢ)ⁿ + Rₙ

Choose x = xᵢ₊₁ and xᵢ = 0. Then f(xᵢ₊₁) = f(x) and (xᵢ₊₁ − xᵢ) = x.

Since the first derivative of eˣ is also eˣ:
(2.) (eˣ)″ = eˣ
(3.) (eˣ)‴ = eˣ, …
(nth.) (eˣ)⁽ⁿ⁾ = eˣ

As a result we get:

eˣ = 1 + x + x²/2! + x³/3! + … + xⁿ/n!

Looks familiar? This is the Maclaurin series for eˣ.
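The truncated series is straightforward to evaluate term by term. A Python sketch (illustrative, not from the slides):

```python
import math

def exp_taylor(x, n_terms):
    """Maclaurin series for e^x truncated after n_terms terms."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)      # x^k/k! -> x^(k+1)/(k+1)!
    return total

print(exp_taylor(1.0, 10))   # close to math.e
print(math.exp(1.0))
```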
Yet another example:
computing f(x) = cos(x) using Taylor series expansion

Choose x = xᵢ₊₁ and xᵢ = 0. Then f(xᵢ₊₁) = f(x) and (xᵢ₊₁ − xᵢ) = x.

Derivatives of cos(x):
(1.) (cos x)′ = −sin x
(2.) (cos x)″ = −cos x
(3.) (cos x)‴ = sin x
(4.) (cos x)⁗ = cos x
……

As a result we get:

cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + …
Error Propagation

Let x_fl refer to the floating-point representation of the real number x. Since the computer has a fixed word length, there is a difference between x and x_fl (round-off error), and we would like to estimate the error in the calculation of f(x):

Δf(x_fl) = |f(x) − f(x_fl)|

• Both x and f(x) are unknown.
• If x_fl is close to x, then we can use a first-order Taylor expansion and compute:

f(x) ≈ f(x_fl) + f′(x_fl)(x − x_fl)

so that

Δf(x_fl) ≈ |f′(x_fl)| Δx

Result: if f′(x_fl) and Δx are known, then we can estimate the error using this formula.
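The first-order estimate can be compared against the actual perturbation. A Python sketch (the values below are assumed for illustration, not taken from the slides):

```python
def propagated_error(fprime, x_fl, dx):
    """First-order estimate of the error in f caused by an error dx in x."""
    return abs(fprime(x_fl)) * dx

# Illustrative (assumed) values: f(x) = x^2 near x = 2.5 with
# a representation error of 0.01 in x.
estimate = propagated_error(lambda x: 2 * x, 2.5, 0.01)
actual = abs((2.5 + 0.01)**2 - 2.5**2)

print(f"{estimate:.4f} {actual:.4f}")   # 0.0500 0.0501
```

The estimate 2·(2.5)·0.01 = 0.05 agrees with the actual change to first order; the small gap is the neglected second-order term.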