ERRORS AND TREATMENT OF ANALYTICAL DATA
Introduction:
Every physical measurement is subject to a degree of uncertainty that can never be
completely eliminated but, at best, can only be reduced to an acceptable level. The
amount of uncertainty is often difficult to quantify and requires additional effort and good
judgement on the part of the chemist. Nevertheless, it is a task that cannot be neglected
because an analysis of totally unknown reliability is worthless.
Thus the task of the analytical chemist goes beyond that of correctly performing
the manipulations and taking the readings required in a procedure. To obtain meaningful
results the following steps must also be carried out:
1. Results of each analysis must be properly recorded and calculated.
2. Since analyses are done in replicate (usually two to five times), the analyst must
determine the best value to report. Although the best value is often the arithmetic
mean, or average, of the individual results, there is often the question of whether to
include a result that seems out of line with the others.
3. Finally, the analyst must evaluate the results obtained and establish the probable limits
of error that can be placed on the final result.
Unfortunately there exists no simple, generally applicable means by which the quality of an
experimental result can be assessed with absolute certainty. In this unit we will consider
the types of errors encountered in analyses, methods for their recognition, and techniques
for estimating and reporting their magnitude.
Definition of Terms:
Mean and Median:
Individual results for a set of replicate measurements will seldom be identical and
it is thus necessary to select a central, "best" value for the set. The central value of the
set ought to be more reliable than any of the individual results and the variations among
the results ought to provide some measure of reliability of the chosen "best" value.
Either of two quantities, the mean or the median may be chosen as the central
value of a set of measurements.
The mean, arithmetic mean, and average ( X̄ ) are synonymous terms for the numerical
value obtained by dividing the sum of a set of replicate measurements by the number of
individual results in the set:

X̄ = (X₁ + X₂ + ... + Xₙ)/n = (Σ Xᵢ)/n
The mean of n results is √n times as reliable as any one of the individual results. Thus
there is a diminishing return for accumulating more and more replicate measurements.
The mean of 4 results is twice as reliable as one result; the mean of 9 results is 3 times as
reliable; the mean of 25 results is 5 times as reliable, etc.
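This √n behaviour can be illustrated with a short simulation (a sketch added for illustration, not part of the original handout; the true value and scatter chosen here are arbitrary):

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 10.00   # arbitrary "true" value for the simulation
SIGMA = 0.05         # arbitrary scatter of a single measurement

def scatter_of_means(n, trials=2000):
    """Standard deviation of the means of n replicate measurements."""
    means = [statistics.mean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (1, 4, 9, 25):
    # the observed scatter shrinks roughly as SIGMA / sqrt(n)
    print(n, round(scatter_of_means(n), 4))
```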
The median (Md) of an odd number of results is simply the middle value when the results
are listed in increasing or decreasing order; for an even number of results, the median is
the average of the two middle results. In general, the mean is a better measure of the
central value, but in certain cases, such as with a wide spread of a small set of results, the
median may give better results.
Example 1:
Calculate the mean and median for 10.06, 10.20, 10.08, and 10.10
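A quick check of Example 1 with Python's standard library (a sketch; the data are the four results given above):

```python
import statistics

data = [10.06, 10.20, 10.08, 10.10]

mean = statistics.mean(data)      # (10.06 + 10.20 + 10.08 + 10.10) / 4 = 10.11
median = statistics.median(data)  # average of the two middle values = 10.09

print(f"mean = {mean:.2f}, median = {median:.2f}")
```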
Accuracy and Precision:
Whereas accuracy refers to how close a measured result is to the true value,
precision refers to agreement (repeatability) among a group of experimental results; it
says nothing about how close the results are to the true value.
A group of measurements may be very precise (agree closely) but may still be inaccurate.
Ideally, all measurements should be both precise and accurate.
[Figure: three targets illustrating results that are precise but not accurate, accurate and precise, and neither accurate nor precise.]
Accuracy is the nearness of a result ( Xᵢ ) or the mean ( X̄ ) of a set of results to the true
value (μ). Accuracy is usually described in terms of absolute error, E, which is the
difference between the measured and accepted value:

E = Xᵢ − μ   or   E = X̄ − μ
Example 2:
Calculate the mean % Cl- and the absolute error of the mean for a sample, given that the
true value is 24.36% and three measured results were: 24.39, 24.19, and 24.36%.
Often a more useful quantity than the absolute error is the relative error, which is
expressed as a percentage (pph) or as parts per thousand (ppt) of the accepted value:
pph = (Xᵢ − μ)/μ × 100   and   ppt = (Xᵢ − μ)/μ × 1000
Calculate the relative error in pph and ppt for the mean % Cl- determined in Example 2.
Calculate the relative error in pph and ppt and the absolute error for Example 1, given
that the true value is 10.18 .
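The error formulas above translate directly into code; a minimal sketch (here applied to the Example 2 data, with 24.36 % as the accepted value):

```python
def absolute_error(x, true_value):
    return x - true_value

def relative_error_pph(x, true_value):
    return (x - true_value) / true_value * 100     # parts per hundred (%)

def relative_error_ppt(x, true_value):
    return (x - true_value) / true_value * 1000    # parts per thousand

mean_cl = (24.39 + 24.19 + 24.36) / 3              # mean % Cl- from Example 2
print(absolute_error(mean_cl, 24.36))              # about -0.05 %
print(relative_error_pph(mean_cl, 24.36))          # about -0.19 pph
print(relative_error_ppt(mean_cl, 24.36))          # about -1.9 ppt
```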
Just as accuracy is reported in terms of its inverse, error, likewise, precision is reported in
terms of its inverse, deviation (meaning scatter, spread, dispersion, variation, etc. of data).
Both absolute deviation and relative deviation are used.
Measures of absolute deviation include range and standard deviation.
Relative deviation is reported as relative standard deviation.
Range, (R), in a set of data is the absolute value of the numerical difference between the
highest and lowest result. In Example 1, the range is:
10.20 - 10.06 = 0.14
Calculate the range for the data in Example 2:
Standard deviation (σ) is a more valid and useful measure of precision (absolute
deviation). The standard deviation of an infinitely large set of experimental data is the
square root of the average of the squares of the individual deviations from the mean (μ):

σ = √[ Σ(Xᵢ − μ)² / N ] , where N = number of results and approaches infinity.
This equation is valid for N > 30 samples, but has limited use for everyday analytical
analyses where typically only 2 to 10 replicates are analysed. When the number of data is
small, the standard deviation of the samples (s) rather than the standard deviation of an
infinite population (σ) is calculated using the following formula:

s = √[ Σ(Xᵢ − X̄)² / (N − 1) ]

where the true population average (μ) is replaced by the sample mean ( X̄ ) and the
denominator becomes N − 1 rather than N.
Example 3:
A student calibrated a 10mL pipet by repeatedly weighing the amount of water delivered
by the pipet and converting this to mL using a density at 20 ºC of 0.99820 g/mL. The
following data were obtained: 9.9720, 9.9670, 9.9550, 9.9620, and 9.9640 grams.
Calculate for the volume in mL:
a) the median
b) the mean
c) the absolute error of the mean
d) the relative error of the mean
e) the range
f) the standard deviation (use the stats mode of your calculator)
The relative precision, called the relative standard deviation (rsd) or the coefficient of
variation (cv), is calculated as follows:

cv = (s / X̄) × 100
Calculate the cv for Example 3.
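The quantities asked for in Example 3 can be generated with the standard library. This sketch converts each mass to volume with the density given in the example; treating the nominal 10.00 mL as the accepted value for the error terms is an assumption:

```python
import statistics

masses_g = [9.9720, 9.9670, 9.9550, 9.9620, 9.9640]
DENSITY = 0.99820        # g/mL at 20 C, from Example 3
NOMINAL = 10.00          # mL; assumed accepted value for the error calculations

volumes = [m / DENSITY for m in masses_g]

mean = statistics.mean(volumes)
median = statistics.median(volumes)
data_range = max(volumes) - min(volumes)
s = statistics.stdev(volumes)              # sample standard deviation (N - 1)
cv = s / mean * 100                        # coefficient of variation, %

abs_error = mean - NOMINAL
rel_error_ppt = abs_error / NOMINAL * 1000

print(mean, median, data_range, s, cv, abs_error, rel_error_ppt)
```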
Example 4:
The normality of a solution is determined by four separate titrations, the results being
0.2041, 0.2049, 0.2030, and 0.2043 . Assuming that the true normality is 0.2047,
calculate the median, mean, absolute error of the mean, relative error of the mean in pph
and ppt, range, standard deviation, and cv.
SOURCES, TYPES, EFFECTS OF, DETECTION AND REDUCTION OF
ERRORS AND DEVIATIONS:
SOURCES
1. instruments
2. methods
3. operations

DETERMINATE ERROR                            INDETERMINATE ERROR
- constant or proportional                   - random
- unidirectional (biased)                    - unbiased
- assignable                                 - not assignable
- measurable, systematic                     - not measurable
→ REDUCED ACCURACY, INCREASED ERROR          → REDUCED PRECISION, INCREASED VARIATION
Types of Error:
Both errors and deviations, collectively called uncertainties, are classified into two broad
categories, determinate errors and indeterminate errors.
Determinate errors are those that, as the name implies, are determinable and that usually
can be avoided or corrected. A determinate error is ordinarily unidirectional or biased,
that is, it will cause all replicate analyses to be either high or low. They may be constant,
as in the case of an out-of-calibration balance, or proportional, as in the case of an
impure reagent: the more reagent used, the greater the amount of impurity present. The
significance of a constant error generally decreases as the size of the sample increases.
Compare the relative error (in ppt) of a constant balance error of -0.0010g on a 0.5000g
sample versus a 15.0000g sample.
Compare the relative error (in ppt) of a proportional error of +0.20% moisture in a
supposedly dry reagent when 0.5000g sample and 15.0000g sample are weighed out.
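A sketch of the first comparison (the constant -0.0010 g balance error expressed in ppt of each sample mass):

```python
def relative_error_ppt(error, sample_mass):
    return error / sample_mass * 1000

for mass_g in (0.5000, 15.0000):
    # the same constant error matters far more for the small sample
    print(mass_g, "g :", relative_error_ppt(-0.0010, mass_g), "ppt")
```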
Indeterminate errors:
If a measurement is sufficiently coarse, repetition will yield exactly the same result
each time. For example, in weighing a 50g object to the nearest gram on a top loading
electronic two-place balance, only by extreme negligence could a person obtain different
values for a set of replicate weighings.
On the other hand, most measurements can be refined to the point where it is mere
coincidence that replicates agree to the last recorded digit. Eventually, the point is
approached where unpredictable and imperceptible factors introduce what appear to be
random fluctuations in the measured quantity. These are called indeterminate errors.
Instruments that may be at or beyond their performance limits fluctuate due to noise and
drift in an electronic circuit, temperature variations, and vibrations. Often, indeterminate
errors arise from the inability of the eye to detect slight differences in reading a buret,
filling a pipet, or reading the dial of an analog instrument.
Indeterminate errors are revealed by small differences in successive measurements
made by the same analyst under virtually identical conditions, and they cannot be
predicted or estimated. These accidental errors follow a random or normal distribution;
therefore, mathematical laws of probability can be applied.
The relative frequency and magnitude of indeterminate errors are described by the
normal distribution or Gaussian curve or bell curve which is shown. It is apparent that
there should be very few large errors, many small errors, and, since the curve is
symmetrical, there should be an equal number of positive and negative errors.
[Figure: normal (Gaussian) distribution curve. Vertical axis: frequency; horizontal axis: quantity measured, X, marked at μ − 3σ, μ − 2σ, μ − σ, μ, μ + σ, μ + 2σ, and μ + 3σ. 68.3% of results fall within ±1σ of the mean, 95.5% within ±2σ, and 99.74% within ±3σ.]
Sources of Errors:
Both determinate errors and indeterminate errors (deviations) have three sources;
instruments, methods, and operations. We will discuss determinate errors first
1. Sources of determinate instrument errors:
All measuring devices are potential sources of determinate errors. For example,
pipets, burets, and volumetric flasks frequently deliver or contain volumes slightly
different from those indicated by their graduations. These differences arise from such
sources as their use at temperatures that differ significantly from the calibration
temperature, distortions in the container walls due to heating while drying, errors in the
original calibration, or contaminants on the inner surfaces of the containers. Many
determinate errors of this type are readily eliminated by calibration.
Measuring devices powered by electricity are commonly subject to determinate
errors. Examples include decreases in the voltage of battery-operated power supplies with
use, increased resistance in circuits because of dirty electrical contacts, vibration, and
currents induced from 110-V power lines. Again, these errors are usually detectable and
correctable; most are unidirectional.
2. Sources of determinate method errors:
These errors are often introduced from nonideal chemical or physical behavior of
the reagents and reactions upon which an analysis is based. Sources include slowness or
incompleteness of some reactions, instability of some species, the nonspecificity of most
reagents, and occurrence of side reactions which interfere with the desired reaction. For
example, coprecipitation yields positive errors in gravimetry as does insufficient washing
of a precipitate, while overwashing a precipitate yields negative errors. Visual indicators
which change colour before or after the equivalence point will cause determinate method
errors. In many cases this can be corrected by running a reagent blank.
Determinate method errors are the most serious errors for the analyst. They are the
most likely to remain undetected and require changes in the procedure to be corrected.
3. Sources of determinate operational errors:
These include personal errors and can be reduced by experience and care of the
analyst in the physical manipulations involved. Operations in which these errors may
occur include transfer of solutions, effervescence and “bumping” during sample
dissolution, incomplete drying of samples, parallax error in reading a buret, tilting a pipet
while it drains, colour blindness or poor colour judgement in finding a titration end point.
Other personal errors include gross mistakes in calculations, transposing numbers while
recording data, reading a scale incorrectly, reversing a sign, and prejudice in estimating
measurements.
Effects of determinate errors:
Determinate errors are often constant or proportional. In either case, they are
unidirectional and bias the results, creating a negative or positive error and thus reducing
the accuracy. Occasionally, determinate errors may be variable, as in the case of a buret
which is out of calibration by different amounts throughout its length. In such infrequent
cases, precision decreases and deviation increases.
Correction of determinate errors:
Instrumental errors are usually found and corrected by calibration procedures.
Most personal errors can be minimized by care and self-discipline. Good chemists
develop the habit of always rechecking instrument readings, notebook entries, and
calculations. Method errors are particularly hard to detect. Identification and
compensation for systematic errors of this type require one or more of the following:
a) Analysis of standard samples either purchased or prepared in house
b) Independent analysis by outside accredited laboratories
c) Blank determinations are run, in which all steps are performed with no sample present
and the result is subtracted from each real analysis.
d) Variation in sample size can detect constant errors as already discussed.
Sources of indeterminate error:
Indeterminate errors are generally chance or random. Examples include visual
judgement in reading the etch mark of a pipet, the mercury level in a thermometer, and
the position of an indicator on a scale. Other sources include variable pipet drainage
time, variable temperatures of solutions and glassware, variable hot plate temperature,
etc. All of these assume that no personal error is involved, but rather just the natural
limitations of the chemist and his/her equipment, and methods.
Effects of indeterminate error:
Indeterminate errors cause random fluctuation or scatter in results which is another
name for decreased precision and increased deviation. Increased scatter of results will
usually also decrease accuracy as well.
Correction of indeterminate errors:
Indeterminate errors cannot be eliminated. Careful work and instrument servicing
can only reduce the magnitude of indeterminate error. Replicate analyses followed by
statistical analysis of the data allow indeterminate error to be estimated and reported so as
to produce realistic results.
CONFIDENCE INTERVAL AND CONFIDENCE LIMITS
Frequently a chemist must make use of a new method to analyse a sample.
Limitations of time prohibit numerous replicate analyses to determine a good value of the
population mean (μ) and the population standard deviation (σ). As indicated earlier, the
sample mean ( X̄ ) and the sample standard deviation (s) can and should be calculated in
these cases; however, these parameters are, at best, only estimates. The use of confidence
limits is probably the most meaningful way to report a result when uncertainty due to few
replicates is present.
In 1908, an English chemist, W. S. Gosset, writing under the pen name of
“Student” (to avoid chastisement by his boss) developed a sound statistical method for
estimating the true mean with only X̄ and s based on limited data.
The numerical range (confidence interval) within which the true mean will be
found is given by the following formula:
Confidence limits of μ = X̄ ± (t·s)/√n
The quantity t (called Student’s t) is a statistical factor (obtained from a table) which is
dependent upon the number of degrees of freedom (the number of individual results
less one, i.e., n − 1) and the confidence level or probability level in %.
Values for t at various probability levels in %

# of observations (n)   # of degrees of freedom (n-1)    80%     90%     95%     99%
         2                           1                   3.08    6.31    12.7    63.7
         3                           2                   1.89    2.92    4.30    9.92
         4                           3                   1.64    2.35    3.18    5.84
         5                           4                   1.53    2.13    2.78    4.60
         6                           5                   1.48    2.02    2.57    4.03
         7                           6                   1.44    1.94    2.45    3.71
         8                           7                   1.42    1.90    2.36    3.50
         9                           8                   1.40    1.86    2.31    3.36
        10                           9                   1.38    1.83    2.26    3.25
        11                          10                   1.37    1.81    2.23    3.17
        21                          20                   1.33    1.73    2.09    2.85
         ∞                           ∞                    1.29    1.65    1.96    2.58
Example 5:
A soda ash sample is analysed in the lab by titration with standard HCl. The analysis is
performed in triplicate with the following results: 93.50, 93.58, and 93.43% Na2CO3.
Within what range are you a) 90%, b) 95%, and c) 99% confident that the true value lies?
Note that in order to obtain higher confidence, much larger confidence limits are
obtained.
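A sketch of the Example 5 calculation, using the t factors from the table above for n − 1 = 2 degrees of freedom:

```python
import statistics
from math import sqrt

results = [93.50, 93.58, 93.43]              # % Na2CO3
t_factors = {90: 2.92, 95: 4.30, 99: 9.92}   # from the t table, 2 degrees of freedom

mean = statistics.mean(results)
s = statistics.stdev(results)
n = len(results)

for level, t in t_factors.items():
    half_width = t * s / sqrt(n)             # confidence limits of mu = mean +/- t*s/sqrt(n)
    print(f"{level}% confidence: {mean:.2f} +/- {half_width:.2f}")
```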
Example 6:
A chemist determined the percentage of iron in an ore, obtaining the following
results: X = 15.30, s = 0.10, n = 4
Calculate the 90% and 99% confidence interval of the mean.
Example 7:
Suppose 10 results have a mean of 56.06%, and the cv = 0.375. Calculate the 90%
confidence limits for the true mean.
Note that as the number of measurements increases, both t and s/√n decrease, with the
result that the confidence interval is narrowed. So the more measurements you make, the
more the range is narrowed within which the true value will probably lie. A 90%
confidence limit means that the stated limits of X̄ will contain the true value of the mean
9 times out of 10.
REJECTION OF A RESULT
When a set of data contains an outlying result that appears to differ excessively
from the mean, the decision must be made to accept it or reject it. If a valid result is
rejected, the mean becomes biased; however, by retaining a spurious result, the mean is
biased in the other direction. In either case, accuracy is lost. Unfortunately, there is no
sound statistical rule that always shows whether the outlier was caused by an error or by
chance variation. The only reliable basis for rejecting a result is knowing that an error
occurred during analysis, in which case that result is always rejected.
The Q-test or “Rejection Quotient” is an easy method of testing for flyers in small data
sets. The results are arranged in decreasing or increasing order. The difference between the
suspect value and its nearest neighbour is divided by the range of all the results, giving a
fraction. This is compared to tabulated values of Q for a 90% confidence limit. If it is equal to
or greater than the tabulated value, the suspect value is rejected.
Q = d / range

where d is the difference between the suspect value and its nearest neighbour.
Example 8:
# of results (n)     Q (90% confidence)
       3                   0.94
       4                   0.76
       5                   0.64
       6                   0.56
       7                   0.51
       8                   0.47
       9                   0.44
      10                   0.41
Use the Q test to test the seven results in
this sample: 5.12, 6.82, 6.32, 6.22, 6.02, 6.32,
and 6.12 .
Arrange the results in increasing order and test
both the highest and lowest results.
The lowest value is tested first. If it is not
rejected, then the largest value is tested.
If the smallest value is rejected, the range is
recalculated without the rejected value, and the
largest value is then tested; and so forth.
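A sketch of this procedure in code, using the 90% Q values from the table and the Example 8 data (the repeated low-then-high testing follows the description above):

```python
Q_90 = {3: 0.94, 4: 0.76, 5: 0.64, 6: 0.56, 7: 0.51,
        8: 0.47, 9: 0.44, 10: 0.41}

def q_test(values):
    """Test the lowest value first, then the highest; return the retained values."""
    data = sorted(values)
    while len(data) >= 3:
        data_range = data[-1] - data[0]
        q_low = (data[1] - data[0]) / data_range      # gap to nearest neighbour / range
        q_high = (data[-1] - data[-2]) / data_range
        q_crit = Q_90[len(data)]
        if q_low >= q_crit:
            data.pop(0)        # reject the lowest value, recalculate, and retest
        elif q_high >= q_crit:
            data.pop()         # reject the highest value, recalculate, and retest
        else:
            break
    return data

print(q_test([5.12, 6.82, 6.32, 6.22, 6.02, 6.32, 6.12]))
```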
For the common situation of 3 results, the calculated Q (quotient) is compared with
0.94. To reject a questionable result, the value of 0.94 makes it necessary that the
questionable value deviate quite widely from two values which agree quite closely.
For example, intuition would reject 6.00% from a set of results such as 5.00%,
5.07%, and 6.00%. Yet the Q test would not permit it to be rejected. In cases like this it
is recommended that 1 or 2 more results be obtained and the Q test be applied again.
If the questionable result still cannot be rejected, then the median is reported
rather than the mean, since the median is less influenced by an outlier. For 4 results, this
case involves averaging the two middle results (the “interior average”).
Example 9:
Four results obtained for the normality of a solution are 0.1014, 0.1012, 0.1026,
and 0.1015 . Calculate the best central value after applying the Q test.
Despite the validity of the Q-test, it must be applied with good judgement as well.
For example, consider three results obtained by titration 96.00, 95.01, and 95.00%. The
calculated Q is 0.99, and so 96.00% would be rejected by the Q test. However, this
leaves only two results, which are virtually identical. The relative deviation of these
values is given by:
(95.01 − 95.00)/95 × 1000 = 0.1 ppt, and this is unrealistically low for titrations.
It is clear that 95.01 and 95.00% are “lucky” results in that they are accidentally so close
together. The average of these two results would not necessarily be close to the true
value. In such a case it would be wise to run one or two additional analyses and
reevaluate the data.
Example 10:
Analysis of a calcite sample yielded CaO percentages of 55.95, 56.00, 56.04,
56.08, and 56.23 . The last value appears anomalous. Calculate the best central value for
the material.
Example 11:
A student obtained the following values for the normality of a solution:
0.0990, 0.0991, 0.0992, and 0.0998 .
a) Can any result be rejected by the Q test?
b) What should the normality be reported as?
c) A fifth result was run and a value of 0.0991 was obtained. Can 0.0998 now be
discarded? Explain.
d) Calculate the 90% confidence interval of the mean in c).
SIGNIFICANT FIGURES AND COMPUTATION RULES
When a computation is made from experimental data, the error or uncertainty in
the final result is often calculated using computation rules for significant figures. The
principal advantage of this method is that it is easy. The principal disadvantage is that
only a rough estimate of uncertainty is obtained. Despite its limitations, we will review
this procedure along with a better method for determining how uncertainty of
measurements affects results.
Significant Figures:
Most scientists define significant figures in a measurement as “all digits that are
known for certain plus one estimated digit”. The following exercise will illustrate.
The length of a desk top is measured with 3 different meter sticks (A, B, and C) as shown
below. For each meter stick, record the length of the desk top in common notation, the
number of sig figs, and the length in scientific notation.
[Figure: a desk top measured with three meter sticks, A, B, and C. Meter stick A is graduated only in whole meters; meter stick B is graduated in 0.1 m divisions; meter stick C is graduated in 0.01 m divisions.]

         common notation     number of sig figs     scientific notation
A        __________          __________             __________
B        __________          __________             __________
C        __________          __________             __________
How does the sensitivity of a measuring instrument affect the number of significant
figures obtainable in a measurement?
Note that by using ruler A we are able to exactly measure the object's length to the
nearest meter and estimate the length to the nearest 0.1 meters (2 sig figs).
Note that by using ruler B we are able to exactly measure the object's length in
meters and in 0.1 meters and estimate the length to the nearest 0.01 meters (3 sig figs).
Note that by using ruler C we are able to exactly measure the object's length in
meters, 0.1 meters, and 0.01 meters and estimate the length to the nearest 0.001 meters (4
sig figs).
For graduated devices in general (e.g., thermometers, Mohr pipets, burets,
graduated cylinders, etc.) the smallest unit we can exactly measure is the smallest
division on the measuring scale. We may then estimate 1 digit more. All of these (and
only these) digits are significant digits (sig figs).
For instruments with digital readouts, the final digit on the display is, in fact, an
estimate as determined by the instrument. The uncertainty of the last digit, unless
otherwise known, is taken to be  1. Likewise for data in literature, the last digit reported
is assumed to have an absolute uncertainty of  1, unless otherwise stated or known.
Exercise: Complete the following table of sig figs, tolerances, and absolute uncertainty
for some common lab measuring devices. Calculate the relative uncertainties in ppt.
Note that tolerance is the manufacturer’s guaranteed limit of uncertainty and will
usually produce a constant bias (determinate error) of either high or low readings.
Uncertainty in reading these devices is due to limitations of the analyst’s eye or
limitations of the instrument in reading itself (in the case of electronic balances). These
uncertainties produce indeterminate, random, unbiased error.
device                      example        tolerance    absolute       # of        relative
                                                        uncertainty    sig figs    uncertainty
analytical balance          100.0000 g     ______       ± 0.0001       ______      ______
top loader balance          100.00 g       ______       ± 0.01         ______      ______
class A volumetric pipet    10.00 mL       ± 0.02       ______         ______      ______
                            25.00 mL       ± 0.03       ______         ______      ______
class A 50 mL buret         41.00 mL       ± 0.1        ± 0.02 × 2     ______      ______
class A volumetric flask    50.00 mL       ± 0.05       ______         ______      ______
                            250.00 mL      ± 0.12       ______         ______      ______
                            1000.00 mL     ± 0.30       ______         ______      ______
50 mL graduated cylinder    29.7 mL        ± 1          ± 0.3          ______      ______
Rules for Significant Digits in Numbers
A mass of 4.56 g weighed to 3 sig figs on a top loader balance can be reported as
4,560 mg or 4,560,000 µg; however, changing its units cannot affect its number of sig figs.
Scientists must follow conventions for unambiguously reporting and interpreting
numerical data. One method is to report all data in scientific notation, which lists only
the digits that are significant.
When common notation is used, the following rules for sig figs are followed.
Learn these rules for determining sig figs in common notation!
1. Nonzero digits: 1, 2, 3, 4, 5, 6, 7, 8, and 9 are always significant.
6.2        two significant digits
16.2       three significant digits
16.25      four significant digits
2. Leading zeros: zeros that appear at the start of a number, are never significant because they
act only to fix the position of the decimal point in a number less than 1.
0.564      three significant digits
0.0564     three significant digits
3. Confined zeros: zeros that appear between nonzero digits are always significant.
104        three significant digits
1004       four significant digits
4. Trailing zeros: zeros at the end of a number are significant only if:
a) the number contains a decimal point or
b) the number contains an overbar
15400      three significant digits
1540.0     five significant digits
15.4000    six significant digits
5,600 (overbar over the final zero)        four significant digits
5,600 (overbar over the tens-place zero)   three significant digits
Practice: Determine the number of significant digits in the following numbers:
a)  345        _______
b)  32000      _______
c)  0.0078     _______
d)  9.0068     _______
e)  5.0200     _______
f)  10700      _______
g)  122.0      _______
h)  10.04      _______
i)  20         _______
j)  0.604      _______
Rounding off Numbers
If the digit following the last significant digit is greater than 5, the number is rounded up
to the next higher digit:
9.47 to 2 significant digits = 9.5
If the digit following the last significant digit is less than 5, the number is rounded to the
present value of the last significant digit:
9.43 to 2 significant digits = 9.4
If the digit following the last significant digit is exactly 5, the number is rounded to the nearest even digit:
8.65 to 2 significant digits = 8.6
8.75 to 2 significant digits = 8.8
8.55 to 2 significant digits = 8.6
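Python's decimal module applies the same round-half-to-even rule; a sketch of rounding to a chosen number of significant figures (the helper name is made up for illustration):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig_figs(value, sig_figs):
    """Round a number (passed as a string to avoid binary-float artifacts)
    to sig_figs significant digits using the round-half-to-even rule."""
    d = Decimal(value)
    if d == 0:
        return d
    exponent = d.adjusted() - (sig_figs - 1)   # exponent of the last kept digit
    return d.quantize(Decimal(1).scaleb(exponent), rounding=ROUND_HALF_EVEN)

for x in ("9.47", "9.43", "8.65", "8.75", "8.55"):
    print(x, "->", round_sig_figs(x, 2))   # 9.5, 9.4, 8.6, 8.8, 8.6
```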
Practice: Round the following numbers to 2 sig figs.
a)  2249        _____
b)  2250        _____
c)  2251        _____
d)  2250.001    _____
e)  0.07750     _____
f)  0.07749     _____
Uncertainty in Measurement
In scientific work, we recognize two kinds of numbers:
Exact numbers
- values known exactly
- numbers which are defined or counted
  defined: 12 donuts in a dozen; 1000 grams in a kilogram
  counted: 25 students in a classroom
- exact numbers are assigned an infinite number of significant figures for calculations

Inexact numbers
- values are uncertain
- numbers which are measured, e.g. the speed of an automobile, the temperature of a cup
  of soup, the volume of water in a beaker
- inexact numbers have a number of significant figures equal to the number of digits
  known for certain plus one more
Significant Figures in Calculations - a rough method for estimating
error/precision of results
One additional digit (one more than the number of sig figs) is usually carried on all
values throughout a calculation and the answer is rounded to the correct number of sig
figs.
In calculations involving multiplication, division, roots, and powers: round off the
answer so that it has only as many significant figures as the value in the calculation with
the fewest significant digits.
e.g.  (35.63 × 0.5481 × 0.0530 / 1.1689) × 100% = 88.5470578% ≈ 88.5%
The answer only contains 3 significant figures since this is the least number of sig figs
among the values in the calculation. 100% is an exact number having an infinite number
of significant digits.
In addition and subtraction: the answer is only as precise as the least precise
measurement.
e.g. Suppose we have 3 measurements of length to be added, i.e.,
6.6 m
18.74 m
0.766 m
26.106 m → 26.1 m
Note that the least precise measurement, 6.6 m, was only reported to the nearest
0.1 m; therefore the answer cannot be reported with greater precision.
Note that the correct answer has more significant digits than one of the
measurements. Remember that the answer of additions or subtractions cannot have more
precision than the least precise measurement. For example, if approx. 1 L of Pepsi is
divided equally between 3 people, each person will not get 0.333 L, but rather 0.3L.
Sig Figs in Logarithms and Antilogarithms:
The pH of a 2.0 × 10⁻³ M HCl solution is calculated as follows:
pH = −log(2.0 × 10⁻³) = −(log 2.0 + log 10⁻³) = −(+0.30 − 3) = 2.70
Although the logarithm appears to have one more sig fig than the original number, it only
has 2 sig figs as well. The number in front of the decimal place (the “characteristic”) only
indicates the power of 10 in the original number.
The log of 12.1 is 1.083 and the antilogarithm of 0.072 is the number 1.18.
All the digits in the “mantissa” of a log (digits after the decimal) are significant.
Calculating Uncertainty of Final Results (a better method)
Estimate the uncertainty of each piece of data (measurement). All uncertainties, both
determinate and indeterminate, are handled mathematically as determinate errors.
For calculations involving addition and subtraction the absolute uncertainty of each
measurement is added together.
For calculations involving multiplication and division the relative uncertainty of each
measurement is added together.
Procedure for addition and subtraction:
Retain as many decimal places in the result as in the least precise reading (same as
before). To determine the limits of uncertainty in the result, add the absolute
uncertainties of each value.
Examples:
For each of the following, add the numbers and report the answer to the correct
number of decimal places and report the absolute uncertainty of the answer.
a) 50.1 (± 0.1), 1.36 (± 0.02), 0.5182 (± 0.0001), and 6.453 (± 0.003)
answer 58.4 (± 0.1)
b) 14.23 + 8.145 − 3.6750 + 120.4
answer 139.1 (± 0.1)
c) 0.50 (± 0.02) + 4.10 (± 0.03) − 2.63 (± 0.05)
answer 1.97 (± 0.10)
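A sketch of this addition/subtraction rule, checked against example (a) above:

```python
def add_with_uncertainty(terms):
    """terms: (value, absolute_uncertainty) pairs; sum the values and
    add the absolute uncertainties."""
    total = sum(value for value, _ in terms)
    uncertainty = sum(u for _, u in terms)
    return total, uncertainty

terms = [(50.1, 0.1), (1.36, 0.02), (0.5182, 0.0001), (6.453, 0.003)]
total, u = add_with_uncertainty(terms)
print(total, u)   # 58.4312 and 0.1231 -> report 58.4 (+/- 0.1)
```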
Procedure for multiplication and division:
Convert the absolute uncertainty of each value to its relative uncertainty (in ppt)
and add all these together to obtain the total relative uncertainty of the final result.
Convert the relative uncertainty of the result to absolute uncertainty. Keep only as many
decimal places in the result as the first occupied decimal place in the absolute uncertainty.
Examples:
For each of the following, carry out the indicated operations and report the answer
to the correct number of decimal places and report the absolute uncertainty of the answer.
a) (10.00 ± 0.02) × (5.000 ± 0.001) / (2.50 ± 0.01)
answer 20.0 (± 0.1)
b) The percentage Cr is calculated from a titration as follows:
%Cr = [40.64 mL × 0.1027 mmol/mL × 51.996 mg/mmol / 3] / 346.4 mg × 100
answer 20.88 (± 0.05)
c) Compare these with the method of only using sig figs:
(0.98 ± 0.01) × (1.07 ± 0.01) = 1.05 ± 0.02
(1.02 ± 0.01) × (1.03 ± 0.01) = 1.05 ± 0.02
Since the uncertainties in these numbers are the same, both results should have the same
number of decimal places. The sig fig method would not work here nor in the next
example.
d) 24 × 0.452 / 100.0
answer 0.108 (± 0.005)
e) [14.3 (± 0.1) − 11.6 (± 0.1)] × 0.050 (± 0.001) / [820 (± 10) + 1030 (± 5) − 42.3 (± 0.1)]
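A sketch of the multiplication/division rule, checked against example (a) of this section:

```python
def multiply_divide_with_uncertainty(numerators, denominators):
    """Each argument is a list of (value, absolute_uncertainty) pairs.
    Relative uncertainties (in ppt) are summed, then converted back to
    an absolute uncertainty of the result."""
    result = 1.0
    total_rel_ppt = 0.0
    for value, u in numerators:
        result *= value
        total_rel_ppt += u / value * 1000
    for value, u in denominators:
        result /= value
        total_rel_ppt += u / value * 1000
    return result, result * total_rel_ppt / 1000

res, u = multiply_divide_with_uncertainty(
    numerators=[(10.00, 0.02), (5.000, 0.001)],
    denominators=[(2.50, 0.01)])
print(res, u)   # 20.0 and about 0.12 -> report 20.0 (+/- 0.1)
```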