Download chapter 2 - Purdue Engineering

CHAPTER 2
STATIC CALIBRATION
A measurement system is made up of many components. At the input is the quantity that you
wish to measure; let's denote this by I for input. I may be changing with time. Examples of the
input could be temperature, acceleration, flow rate, particle size, or displacement. The
measurement system output will usually be a voltage; let's denote this by O for output. Figure 1
shows a block diagram of the components that may be part of a measurement system. In
some systems not all components will be present. The three building blocks are:
(1) The Transducer
This will produce some electrical quantity such as voltage, current, charge, resistance,
inductance, or capacitance that is related to the input, I. Examples include an RTD,
which converts temperature to resistance; a piezoelectric force transducer, which changes
force to charge; a microphone, which converts pressure fluctuations to capacitance
fluctuations; a thermocouple, which changes temperature differences into voltage
differences; and a strain gage, which converts strain into a change in resistance.
(2) The Signal Conditioner
This stage can often be split into further sub components. One component may convert
resistance to a voltage change as in a bridge circuit (see Chapter 9). Another component
may be a demodulation circuit as in an LVDT (see Appendix of Beam Lab). Amplifiers
are often part of the signal-conditioning block, and are used to raise the signal level
above the noise level. Filters are also commonly used in signal conditioning to remove
noise. This stage conditions the signal so that it is a voltage within the input range of
the data acquisition and display device. This input range is part of the acquisition
device specifications and will be provided by the manufacturer.
(3) The Data Acquisition and Display Device
This could be an analog to digital converter attached to a computer on which the data
can be graphed and analyzed. This could be a digital multimeter (DMM), which will
display RMS voltage in a digital display, or it could be an ammeter, or an oscilloscope.
These components may be connected by leads or, when measurements are being made at a
remote site, there may be transmitters or receivers between these components, or between their
subcomponents.
Input, I
Output, O
TRANSDUCER
SIGNAL
CONDITIONER
DATA ACQUISITION
AND DISPLAY
Figure 1: Components of a measurement system.
The Calibration Procedure and Some Definitions
It is important that we determine what a suitable operating range is for the transducer. We are
ultimately concerned with frequency response characteristics and the static calibration
characteristics. The frequency response characteristics will be discussed in the System
Identification Chapter. In this chapter we will focus on the relationship between the output (O)
and the input (I) as the level of the input is slowly increased or decreased. Usually known
constant values of input are put into the measurement system, the system is allowed to come to
steady state, and the level of the constant output is recorded. A plot of measured output level
versus known input level is known as a calibration curve. An example of such a curve is shown
in Figure 2. Note that only the data points are plotted; the points are not joined by lines.
Figure 2: An example of a calibration curve for a proximity probe.
In an ideal measurement system the data would lie on a straight line through the origin at 45
degrees to the axes. That is, the voltage output of the measurement system would numerically
equal the quantity we wished to measure, e.g., temperature in degrees C. More typically we pick
a range over which the relationship is approximately linear, draw a best fit line (see Chapter 4)
and determine the gradient (sensitivity, K) and intercept (bias, B).
Estimated Output = K I + B.
If we are storing measurements on a computer, we can calculate the input that generated the
given output voltage by computing:
Estimated Input = O/K – B/K.
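As a sketch of how this calculation looks in software (the sensitivity and bias values below are the illustrative ones from Figure 2, not from a real calibration):

```python
# Apply a straight-line calibration, O = K*I + B, inverted to recover the input.
# K and B are illustrative values matching Figure 2 (10 V/mm sensitivity, 1 V bias).
K = 10.0  # sensitivity, Volts per mm
B = 1.0   # bias, Volts

def estimated_input(O):
    """Estimate the input (mm) that produced a measured output voltage O."""
    return O / K - B / K  # equivalently (O - B) / K

print(estimated_input(21.0))  # a 21 V output corresponds to a 2 mm input
```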
While this is generally referred to as static calibration, there is no reason why we could not do
the same type of test at a constant frequency (input a sine wave to the system) and report
amplitude of output as a function of amplitude of input. In this way we could generate many
calibration curves, one for each frequency at which we examine the relationship between the
amplitude of the input and the amplitude of the output. Some systems do not have an output at
DC (constant input, zero frequency) and so the so-called static calibration has to be performed at
some frequency that is thought to be typical of what would be seen at frequencies in the
operating range of the system. An example of this is a microphone, which is typically calibrated
at 1000 Hz.
Nonlinearity
Usually there are nonlinear effects present in the system. We need some way to quantify how
nonlinear the system is, and also specify the operating range over which we may consider the
system linear. If the system is nonlinear but the calibration curve is monotonic (always
increasing or always decreasing, and hence only one input could cause a particular output value),
then we can linearize the curve on the computer by solving for the input I in terms of the output
O. An example of a nonlinear, monotonic calibration curve is shown in Figure 3(a). Here the
functional relationship between O and I is:
Estimated Output = 4I² + 2
If I is only positive, then for every output we can identify the unique input that could have caused
it.
Estimated Input = +√[(O − 2)/4]
This would not be the case if I were both positive and negative. An example of a nonmonotonic
functional relationship between input and output is illustrated in Figure 3(b).
If the calibration curve is nonlinear then the sensitivity (gradient) changes with increasing I,
and is equal to the derivative of the output with respect to I. So for the example shown in Figure
3(a) the sensitivity is a function of I, so we denote it by K(I).
K(I) = 8 I
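This linearization can be sketched in a few lines, using the example curve O = 4I² + 2 and restricting I to non-negative values:

```python
import math

def output(I):
    """Example nonlinear, monotonic calibration curve: O = 4*I**2 + 2 (for I >= 0)."""
    return 4 * I**2 + 2

def estimated_input(O):
    """Invert the curve; only valid because the curve is monotonic for I >= 0."""
    return math.sqrt((O - 2) / 4)

def sensitivity(I):
    """Local sensitivity K(I) = dO/dI = 8*I, which varies with the input level."""
    return 8 * I

print(estimated_input(output(1.5)))  # recovers the input, 1.5
print(sensitivity(1.5))              # 12.0 output units per input unit
```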
In this course, we will pick an input range: Imin to Imax and measure the output at certain inputs
within this range. The range of output values will be: Omin to Omax. Note that Imin may not cause
Omin. The span is the difference between the maximum and minimum values. We usually report
the system output span and input span. Within this range of values we will find a best-fit line to
the linear portion of the data. Initially we will do this by eye. After studying statistics we will
have a mathematical formula for determining the gradient and intercept of the best-fit line. Many
software packages, e.g., MATLAB and EXCEL, have these statistical quantities already
programmed and will calculate the gradient and intercept for you. I trust that you have an interest
in what the calculator or computer is doing when it makes these calculations! (More on that in
Chapter 4.) Note that a calibration curve is a plot of the output versus the known input (not vice
versa). Since you probably think in terms of x- and y-axes, the x-axis is the known input (I) and
the y-axis is the output (O).
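As a sketch of what such a software package computes, here is the least-squares fit in Python with NumPy; the data values are made up for illustration and happen to lie near the Figure 2 line O = 10I + 1:

```python
import numpy as np

# Known input levels (x-axis) and measured outputs (y-axis). These numbers are
# made up for illustration and lie near the Figure 2 line O = 10*I + 1.
I = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # e.g. displacement in mm
O = np.array([1.1, 5.9, 11.0, 16.1, 20.9])   # measured output in Volts

# A degree-1 polynomial fit returns [gradient, intercept] of the best-fit line.
K, B = np.polyfit(I, O, 1)
print(f"sensitivity K = {K:.2f} V/mm, bias B = {B:.2f} V")  # K = 9.96, B = 1.04
```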
Figure 3: Nonlinear calibration curves
(a) Monotonic within operating range, (b) Nonmonotonic within operating range
So having collected the data, plotted the data points, and estimated the best-fit straight line to
the linear portion of the data, we are now in a position to do some calculations.
Sensitivity
Gradient of the straight line. Units are output units per input unit. In the
example shown in Figure 2, the sensitivity is 10 Volts/mm.
Bias
Intercept of the best-fit straight line (where it intersects the O-axis).
Units are output units, often Volts. In Figure 2 the bias is 1
Volt; you can see that this is where the straight line intercepts the O-axis.
In the example shown in Figure 4, the bias is 5 Volts, which happens to
coincide with a data point.
Nonlinearity
Difference between the straight line and the output data. Units are the
output units.
Max. Nonlinearity
Maximum difference between straight line and data, often expressed as a
percentage of what is termed "full scale deflection" or f.s.d., which is the
difference between the maximum and minimum of the measured output
data.
max nonlin (%fsd) = 100 × max | estimated O − measured O | / (Omax − Omin)
                  = 100 × max | K I + B − O | / (Omax − Omin)
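The maximum nonlinearity calculation can be sketched as follows (the data is made up for illustration, and K and B are assumed to come from a prior best-fit line):

```python
import numpy as np

# Illustrative calibration data; K and B are the gradient and bias of a
# best-fit line assumed to have been computed beforehand.
I = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
O = np.array([1.1, 5.9, 11.0, 16.1, 20.9])    # measured outputs, Volts
K, B = 9.96, 1.04                             # best-fit line for this data

estimated_O = K * I + B
fsd = O.max() - O.min()                       # full scale deflection, Omax - Omin
max_nonlin_pct = 100 * np.max(np.abs(estimated_O - O)) / fsd
print(f"max nonlinearity = {max_nonlin_pct:.2f} % fsd")  # about 0.61 % fsd
```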
Another form of nonlinearity is hysteresis. This is illustrated in Figure 4. The input is changed
in steps from Imin to Imax, and the output recorded after steady state has been reached. The input is
then decreased through the same values from Imax to Imin, the output is recorded again. When
there is hysteresis in the system, the increasing and decreasing paths follow different curves.
Hysteresis is the difference between the output values for increasing and decreasing I, at each
value of I. As with nonlinearity, we usually report maximum hysteresis as a function of f.s.d.
Max. Hysteresis
max hysteresis (%fsd) = 100 × max | O(decreasing I) − O(increasing I) | / (Omax − Omin)
Figure 4: Calibration of a system exhibiting hysteresis.
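A minimal sketch of the hysteresis calculation, using made-up increasing and decreasing sweeps recorded at the same input values:

```python
import numpy as np

# Outputs recorded at the same set of input values, once while stepping the
# input up (Imin to Imax) and once while stepping it back down. Made-up values.
O_increasing = np.array([1.0, 5.5, 10.5, 16.0, 21.0])   # Volts
O_decreasing = np.array([1.5, 6.5, 11.5, 16.8, 21.0])   # Volts

O_all = np.concatenate([O_increasing, O_decreasing])
fsd = O_all.max() - O_all.min()               # full scale deflection
max_hyst_pct = 100 * np.max(np.abs(O_decreasing - O_increasing)) / fsd
print(f"max hysteresis = {max_hyst_pct:.1f} % fsd")  # 5.0 % fsd
```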
Resolution
Every measurement device will have a resolution limitation. So you will only be able to
measure the output, which is often a voltage, down to a particular accuracy. This limits how
finely you can measure the quantity that you are interested in, i.e., the input, e.g., temperature or
acceleration. If you are using a spirit thermometer, you use the markings on the glass to
determine what the temperature is; you can probably only read this accurately to within half of
each marked division, i.e., 1/2 °C. If you are using a digital multimeter, the last decimal place on
the display is the smallest division you can see. What this represents will be a function of which
switch settings you are using; it could be 0.001 Volts or it may be 0.1 Volts. When you measure
a signal using a computer with analog to digital conversion (ADC) boards, the smallest
differences between voltages that you can discern is a function of the number of bits and the
input range of the ADC (see Chapter 3). This is often of the order of millivolts. We use the
known output device resolution and the sensitivity to determine the resolution of the
measurement system.
Resolution
This is the finest change in the INPUT that you can measure. To determine
the input resolution, you take the resolution of the output device and
convert it into a change in input.
Resolution, ΔI = Output Device Resolution / Sensitivity = ΔO / K.
Note: ΔO is not the difference between consecutively measured output values in your calibration
test, nor is it the markings on your graph paper! You probably will not be doing calibration at all
the possible settings of the input in the range you are interested in. You will select 5-10 values
evenly spread across the input range of interest. The calculation of resolution is illustrated in
Figure 5.
Figure 5: Calculating resolution (ΔI) from output device resolution (ΔO) and
the straight line fit to the linear region of the calibration data.
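The conversion can be sketched in two lines; the values are illustrative (a DMM resolving 0.001 V and the Figure 2 sensitivity):

```python
# Convert the output device's resolution into an input resolution using the
# sensitivity. Illustrative numbers: a DMM reading to 0.001 V on its current
# range, and the 10 V/mm sensitivity from Figure 2.
output_resolution = 0.001   # Volts, smallest output change the DMM can display
K = 10.0                    # sensitivity, Volts per mm

input_resolution = output_resolution / K   # delta_I = delta_O / K
print(f"input resolution = {input_resolution:.4f} mm")  # 0.0001 mm
```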
When Should I Calibrate?
As measurement system components warm up, as humidity changes, or as the power supply
changes (a battery getting low), the sensitivity and bias may change. So it is important to repeat
calibration at intervals during your experiment to ensure that your calibration values are still
accurate. Time varying bias, in particular, is a problem with amplifiers; the phenomenon is
known as drift.
Instruments of the same type will not have exactly the same calibration statistics, so do not,
for example, interchange two separate accelerometers of the same type and expect the input-output relationship to be the same; they will be of the same order of magnitude, but they will not
be equal. Whenever you do an experiment the first task should be to record the make, type
and serial number of each component you use, so that you can check the exact instrument if
you find strange results. Calibration is also a good way to check that your measurement rig is
still in working order and no one has been playing with your meticulously set dials. It is also
easier to identify faulty components close to when they start to malfunction, and hence you know
which data sets are good and which data may be poor.
While these rules are more critical when you are doing a long series of tests involving the
measurement of many signals, it is a good idea to form these good measurement practices early
in your career as an experimentalist. As we said at the end of Chapter 1, calibrate at the start,
calibrate at the end and calibrate often.
What is the True Input?
A key component to calibration is knowing the true input. In general we will have to measure the
input too and hence there will be inaccuracies. We often use a standard instrument to do this. A
standard instrument is usually more expensive than the one you are calibrating, or more time
consuming and less convenient to use (else why not use the standard instrument) and hence
probably not available to be part of your measurement rig. Procedures for using the standard
instrument are often specified in International and National Standards, so that measurements in
one Laboratory can be compared with the same measurements taken in other Laboratories.
Sometimes, as with microphone calibration, it is possible to create known inputs. In acoustics a
pistonphone, which creates a known sound pressure under certain conditions, is used to calibrate
microphones.
Standard sources and Laboratory instruments must also be calibrated. This is usually done by
their manufacturers, who have their own set of more accurate standard instruments for the
calibration of their products. Their standard instruments will in turn be calibrated using
techniques specified in National Standards (NIST) or International Standards (ISO). As we
progress upwards through this hierarchy of standards, the procedures for measurement become
more time consuming and more complicated, involving the use of more expensive equipment. It
is impractical to calibrate all instruments with these exacting standards, hence the evolution of
this hierarchy of calibration instrumentation. What you need to be careful of in your Laboratory
is standard instrumentation that has not been regularly sent back to the manufacturer for
recalibration.
Random Fluctuations in Data
In this chapter we have introduced the procedure to do static calibration, whereby you are
examining the relationship between the amplitude of a known input and the amplitude of the
output of a measurement system. Calibration involves fitting a model, usually a straight line, to
the data. From this fit, the sensitivity, bias, nonlinearity, hysteresis, and resolution can be calculated. In
reality, there will also be random fluctuations in the measurements and a repetition of the same
calibration procedure will not yield exactly the same results. We have not discussed how to deal
with this randomness; a detailed description is given in Chapter 4.
One problem when both the random and nonlinear effects are present is distinguishing
between the two phenomena. If the randomness in the output is large the nonlinearity may be
obscured. One way to deal with the randomness is to repeat the calibration many times and
average the output at each value of I, treating increasing and decreasing input tests separately. If
the fluctuations are large, you will have to average the results of many tests before the random
fluctuations become insignificant. Having done this, any nonlinearity in the system should
become apparent.
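A sketch of this averaging, using simulated noisy runs over a made-up nonlinear curve:

```python
import numpy as np

# Simulate repeated calibration runs: each run measures a (made-up) nonlinear
# curve plus random fluctuations. Averaging across runs suppresses the noise.
rng = np.random.default_rng(0)
true_O = np.array([2.0, 3.0, 6.0, 11.0, 18.0])        # underlying curved response
runs = true_O + rng.normal(0.0, 0.5, size=(50, 5))    # 50 noisy repetitions

averaged = runs.mean(axis=0)   # noise std shrinks roughly as 1/sqrt(50)
print(averaged)                # close to true_O, so the curvature is visible
```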
The variation in the data from calibration run to calibration run is a measure of repeatability
that you can expect when you use your measurement system. Hopefully, in the calibration you
are able to control the input sufficiently and the randomness is due to the measurement system
and not due to fluctuations in the poorly controlled input. If the level of input fluctuation is
known, techniques described in Chapter 4 can be used to predict what level of fluctuation in the
output can be expected due to this input fluctuation. Any
differences between the predicted and the measured output variation can be attributed to
randomness introduced by the measurement system.
Summary
Calibration is done to check over which range the measurement system is behaving linearly, and
also to calculate the sensitivity, bias and resolution. The straight-line equation can be used with
future measurements to estimate the input, the quantity we wish to measure (see the Estimated
Input equation earlier in this chapter). The resolution tells us how finely (precisely) a quantity can be measured.
Having completed the calibration, recall why you have done it, i.e., check whether the
measurement device is linear over the range of inputs that you wish to measure and is
sufficiently precise. If not, then the system must be redesigned using different transducers,
conditioners etc. and the calibration repeated.
Also of importance when designing a measurement system, is how the measurement system
behaves when signals of different frequencies are input. Low frequency signals fluctuate slowly
and high frequency signals fluctuate quickly. Over which range of frequencies does the system
treat all frequencies in the same way? Signals are usually made up of many frequency
components (sine waves); this is discussed in the Spectrum Analysis Chapter. For the shape of
the output signal to match the shape of the signal we wish to measure, all frequency components
in the input signal must be treated in the same way: the same sensitivity, bias and time delay. To
find out the region of frequencies where this happens, we measure the frequency response of the
system; this is described in the System Identification Chapter.
So in this Chapter, we have looked at Static Calibration: an examination of what happens to
the output of a measurement system as we increase or decrease the input amplitude. To
completely understand whether the system is suitable for the measurements that we wish to take,
knowledge of the frequency response of the system is also important, as is the ability to quantify
and deal with random fluctuations in the data. These subjects will be discussed in the coming
chapters.