Can You Trust Your Computer?
CS365 – Mathematics of Computer Science, Spring 2007
David Uhrig, Tiffany Sharrard

Outline
- Introduction
- User's Perception of Computers
- Real vs. Floating Point Numbers
- Rounding and Chopping
- Overflow and Underflow
- Machine Constants
- Error Propagation and Analysis
- New Ideas and Solutions
- Conclusion
- Questions

User's Perception of Computers
- How are computers perceived by users? A computer is seen as a tool that will give you an exact answer.
- What a computer may actually do: produce nothing but garbage, because of how the computer handles real numbers.
- Example: 10^48 + 914 - 10^48 + 10^32 + 615 - 10^32
- The exact answer is 1529, but most digital computers would return zero. Why?

Real vs. Floating Point
Real number system:
- Can be written in decimal notation
- Can have infinitely many digits
- Includes all positive and negative integers, fractions, and irrational numbers

Floating point number system:
- A t-digit, base-b floating-point number has the form ± .d1d2d3…dt × b^e, where d1d2d3…dt is the mantissa, b is the base of the number system, and e is the exponent
- The exponent is an integer between two fixed integer bounds e1 and e2, with e1 ≤ 0 ≤ e2
- A normalized floating point system is determined by: the base b, the length of the mantissa t, and the bounds for the exponent, e1 and e2

Rounding vs. Chopping
Chopping:
- A number is chopped to t digits when all the digits past t are discarded
- Example: t = 5, x = 2.5873892874, result = 2.5873
Rounding:
- A number x is rounded to t digits when x is replaced by a t-digit number that approximates x with minimum error
- Example: t = 5, x = 2.5873892874, result = 2.5874

Overflow vs. Underflow
Overflow:
- Occurs when the result of a floating point operation is larger than the largest floating point number in the given floating point number system
- When this occurs, almost all computers will signal an error message
Underflow:
- Occurs when the result of a computation is smaller than the smallest quantity the computer can store
- Some computers never report this as an error, because the machine silently sets the number to zero

Machine Constants
- The amount of round-off depends on the floating-point format your computer uses
- Before the error can be corrected, the machine constants need to be identified; the constants vary greatly by hardware
- IEEE 754 is the Standard for Binary Floating-Point Arithmetic

Machine constants for several systems (R/C: rounds or chops; β: base; t: mantissa digits; L, U: exponent bounds; ε: round-off unit):

Computer             R/C  β   t   L        U       ε
CDC CYBER 170        R    2   48  -976     1,071   3.55 × 10^-15
CDC CYBER 205        C    2   47  -28,626  28,718  1.42 × 10^-14
Cray-1               C    2   48  -8,192   8,191   7.11 × 10^-15
DEC VAX (single)     R    2   24  -127     127     5.96 × 10^-8
DEC VAX (double)     R    2   56  -1,023   1,023   1.11 × 10^-16
HP-11C, 15C          R    10  10  -99      99      5.00 × 10^-10
IBM 3033 (single)    C    16  6   -64      63      9.54 × 10^-7
IBM 3033 (double)    C    16  14  -64      63      2.22 × 10^-16
IBM/PC (single)      R    2   24  -126     127     5.96 × 10^-8
IBM/PC (double)      R    2   53  -1,022   1,023   1.11 × 10^-16
PRIME 850 (single)   C    2   23  -128     127     2.38 × 10^-7
PRIME 850 (double)   C    2   47  -32,896  32,639  1.42 × 10^-14

(The IBM/PC formats are those of the IEEE 754 Standard.)

Machine Epsilon
- To quantify the amount of round-off error, a round-off unit is specified: ε, the machine epsilon, or machine precision
- This is the fractional accuracy of a floating point number, characterized by fl(1 + ε) > 1, where ε is the smallest positive floating point number for which the computed sum still exceeds 1
Computing ε – C Code

The following program (borrowed from the Wikipedia entry on machine epsilon, http://en.wikipedia.org/wiki/Machine_epsilon) halves a candidate epsilon until 1 + ε becomes indistinguishable from 1:

```c
#include <stdio.h>

int main(int argc, char **argv)
{
    float machEps = 1.0f;

    printf("current Epsilon, 1 + current Epsilon\n");
    while (1) {
        printf("%G\t%.20f\n", machEps, (1.0f + machEps));
        machEps /= 2.0f;
        /* If the next epsilon yields 1, then break, because the
           current epsilon is the machine epsilon. */
        if ((float)(1.0 + (machEps / 2.0)) == 1.0)
            break;
    }
    printf("\nCalculated Machine epsilon: %G\n", machEps);
    return 0;
}
```

Program output:

```
david@david-laptop:~$ ./findepsilon
current Epsilon, 1 + current Epsilon
1	2.00000000000000000000
0.5	1.50000000000000000000
0.25	1.25000000000000000000
0.125	1.12500000000000000000
0.0625	1.06250000000000000000
0.03125	1.03125000000000000000
0.015625	1.01562500000000000000
0.0078125	1.00781250000000000000
0.00390625	1.00390625000000000000
0.00195312	1.00195312500000000000
0.000976562	1.00097656250000000000
0.000488281	1.00048828125000000000
0.000244141	1.00024414062500000000
0.00012207	1.00012207031250000000
6.10352E-05	1.00006103515625000000
3.05176E-05	1.00003051757812500000
1.52588E-05	1.00001525878906250000
7.62939E-06	1.00000762939453125000
3.8147E-06	1.00000381469726562500
1.90735E-06	1.00000190734863281250
9.53674E-07	1.00000095367431640625
4.76837E-07	1.00000047683715820312
2.38419E-07	1.00000023841857910156

Calculated Machine epsilon: 1.19209E-07
david@david-laptop:~$
```

Error Propagation
- An optimistic estimate of the round-off accumulated in performing N arithmetic operations is roughly √N · ε
- It could be N · ε, or even larger!
- Example, subtractive cancellation in 4-digit base-10 arithmetic:
  fl[(10000 + 1) - 10000] = 0, but (10000 + 1) - 10000 = 1

Error Analysis
Two primary techniques of error analysis:
- Forward error analysis: the floating-point representation of the error is subjected to the same mathematical operations as the data itself
- This yields an equation for the error itself
- Backward error analysis: attempts to regenerate the original mathematical problem from previously computed solutions, which minimizes error generation and propagation

Testing for Error Propagation
- Substitute the computed solution back into the original problem
- Use double or extended precision rather than single precision
- Rerun the problem with slightly modified (incorrect) data and compare the results

New Ideas
- Increased RAM and processor speeds allow for more intricate solutions and alternatives to floating point errors
- Non-floating-point arithmetic implementations:
  - Rational arithmetic
  - Multiple or full precision arithmetic
  - Scalar and dot products of vectors

Conclusion
- User's Perception of Computers
- Real vs. Floating Point Numbers
- Rounding and Chopping
- Overflow and Underflow
- Machine Constants
- Error Propagation and Analysis
- New Ideas and Solutions

...Questions?