CHAPTER 6: MEASUREMENT AND STANDARDS
I. MEASUREMENT PRINCIPLES
1. MEASUREMENT SYSTEMS
Measurement is the process of associating a number with a quantity by comparing the
quantity to a standard. When it is necessary to measure objectively a quantity such as temperature,
rotational speed, or sound level from an industrial process or a scientific experiment, a
measurement system is required.
Although measurements can be made strictly by mechanical or other means, electronic
measuring systems offer substantial advantages for many measurements. These include very high
speed, high reliability, capability to take data in the range of picoseconds to years, very large
dynamic range, and ability to send, process, and store information by electronic means.
There are a great variety of measurement systems. Electronic measuring systems can be
very complex but usually contain the common elements in the simplified measuring system shown
in Figure 1-1. These elements are a transducer to convert the physical quantity into an electrical
signal, a transmission path, signal-conditioning elements, electronic processing to convert the
signal into a suitable form, and recording or display devices for the user. The term instrument
generally refers to all of these elements, but it is also common practice to refer to a specific
electronic device within the measuring system (such as an oscilloscope or digital voltmeter) as an
instrument.
The measurand is the physical quantity to be measured. It can be a quantity such as
temperature, light intensity, or nuclear radiation. Transducers convert the measurand into an
electrical signal. Transducers can include thermocouples, strain gauges, photocells, phonograph
cartridges, and microphones, among others. Often, the desired signal from the transducer or sensor
is a very low-level signal; for these cases, a means of separating the signal from noise or
interference is important. When the signal delivered by a transducer and transmission line is a low-level signal, it generally requires amplification or other conditioning prior to processing. This can
include filtering, such as in high-pass cable compensation networks, amplification, buffering,
linearization, decoding, or detecting the signal. The signal is then processed electronically and
may be converted to digital form (A/D conversion) for further processing. Finally, it is converted
to a form useful to the users.
The user can be a human observer, a control mechanism, a computer, or a recording system
(such as a tape recorder to log a performance of a system). The user can also be part of an
automatic control system, in which the data is acted on in some manner to control a process. For
human observers, the output from the instrument is normally made into some form of visual
display but occasionally is made into an audible form (usually for warnings for certain conditions).
Visual displays include a number or a set of numbers on a CRT (cathode-ray tube), a pointer on a
scale, a graph or graphical display, a pictorial representation (such as an X ray), or printed
information.
Electronic telemetry systems are sometimes used to send data from remote sensors to the
instrument or user. Data is acquired at the remote location and sent via the telemetry system. The
basic measurement system may include the telemetry system illustrated in Figure below. The
desired signal is acquired by the transducer and encoded into a signal suitable for the transmitter.
It can then be used to modulate a high-frequency radio signal, which is fed to the transmission
system. Typically, telemetry systems are designed to handle multiple channels of information.
Such a system requires that the signals be multiplexed; that is, several signals are sent over the
same link to the receiver and measuring instrument. The transmission link can be very high
frequency (VHF), microwave radio, satellite, fiber optics, infrared laser, or simply a conducting
path such as a telephone link, twisted-pair wires, a coaxial cable, or an electric power line to a
receiver. The receiver decodes the original signal and converts it into an appropriate format for the
users.
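To make the idea of multiplexing concrete, the following is a minimal time-division multiplexing sketch in Python, with made-up sensor values; the chapter does not specify which multiplexing scheme a given telemetry link uses, so this is only an illustration of interleaving several channels onto one path.

```python
# Minimal time-division multiplexing sketch (illustrative only; real telemetry
# encoding depends on the system). Samples from several channels are
# interleaved onto one link and separated again at the receiver.

def multiplex(channels):
    """Interleave equal-length sample lists from several channels into one stream."""
    return [sample for frame in zip(*channels) for sample in frame]

def demultiplex(stream, num_channels):
    """Recover the individual channels from the interleaved stream."""
    return [stream[i::num_channels] for i in range(num_channels)]

temperature = [20.1, 20.3, 20.2]     # hypothetical sensor readings
pressure = [101.2, 101.1, 101.3]     # hypothetical sensor readings
link = multiplex([temperature, pressure])
print(link)                          # [20.1, 101.2, 20.3, 101.1, 20.2, 101.3]
print(demultiplex(link, 2))          # [[20.1, 20.3, 20.2], [101.2, 101.1, 101.3]]
```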
2. SYSTEM RESPONSE
Each component of a measuring system contributes to the overall response of the system.
The observer needs to be aware of how the measurement system affects the observations. One way
to characterize the system response is by the system transfer function. The transfer function of a
system or subsystem is the ratio of the output signal to the input signal. It always includes the
magnitude and may include the phase of the signals. If the input and output signals are voltages, as
in a linear amplifier, then the transfer function is the ratio of the output voltage to the input
voltage. Actually, the transfer function can represent the ratio of any two physical quantities. For
example, the transfer function of a digital temperature-measuring system is the displayed reading
divided by the actual input temperature. If the input and output quantities are measured in the
same units, the transfer function is a dimensionless quantity. In addition to the frequency response,
a finite time delay is associated with the transfer function, causing a lag in the output response. This
delay can be important in high-speed measurements.
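As a small numerical illustration (the voltages below are assumed, not from the text), the transfer function of a hypothetical linear amplifier is simply the ratio of output to input, which can also be expressed in decibels:

```python
import math

# Transfer function as the ratio of output to input (hypothetical values).
v_in = 0.050                        # input voltage, V
v_out = 2.5                         # output voltage, V

gain = v_out / v_in                 # dimensionless transfer function (same units in and out)
gain_db = 20 * math.log10(gain)     # the same ratio expressed in decibels

print(f"Transfer function = {gain:.0f} ({gain_db:.1f} dB)")   # 50 (34.0 dB)
```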
Another way to characterize a system is by transient response. Transient response is the
response time of a system to an abrupt input change. For example, if you could obtain an ideal
square wave from a generator, it would rise in zero time. Of course, no signal generator can
produce an ideal waveform like this. Neither can any real system respond instantaneously to this
signal.
All systems are limited to certain minimum and maximum values that can be recorded
accurately. The input threshold of a system is the smallest detectable value of the measured
quantity starting near the zero value of the variable. The input threshold is limited by the system's
sensitivity, which is the ratio of the output response to the input cause under static conditions. For
example, the sensitivity of a meter can be stated in units of microamps required for a stated
deflection of the meter. The sensitivity of measurement systems is generally limited by noise,
which is inherent in all measuring systems. Noise is any signal that does not convey useful
information and can include frequencies ranging from power-line to radio frequencies. The source
of noise is typically electrical in nature and includes several effects. Other factors can contribute to
the minimum observable signal, including the sensitivity of the transducer or measuring
instrument, the stability and reproducibility of the system, and even the data-reduction process.
The maximum signal is limited by the transducer response or limitation of the instrument,
including saturation or breakdown. The difference between the maximum and minimum signals to
which a given system can respond with a stated degree of accuracy is the dynamic range of that
system. Note that dynamic range is dependent on the frequency response that limits the maximum
signal outside the passband.
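As a numerical illustration with assumed values, the dynamic range can be expressed as the ratio of the largest to the smallest signal the system handles accurately, usually stated in decibels:

```python
import math

# Dynamic range = ratio of maximum to minimum accurately measurable signal (assumed values).
v_min = 10e-6      # input threshold, V
v_max = 10.0       # maximum signal before saturation, V

dynamic_range_db = 20 * math.log10(v_max / v_min)
print(f"Dynamic range = {dynamic_range_db:.0f} dB")   # 120 dB
```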
Another important parameter that characterizes measurement systems is resolution.
Resolution is the minimum discernible change in the measurand that can be detected. The
definition of resolution does not require the property to begin near the threshold level. For
example, the resolution of a time-varying quantity is determined by the ability of the system to
resolve time; it may also be set by the ability of the system to detect a small change in the
amplitude of the measured quantity. The resolution of a system is affected by all the parts of the
system but is ultimately determined by the weakest link in the system. It is important that the user
is aware of such limitations in any measuring system.
As previously mentioned, noise limits measurement systems. Noise can be induced in
measurement systems because of electric or magnetic fields or mechanical coupling, or it can
occur inside the system because of thermal noise due to random vibrations of electrons and other
sources. Noise affects both the input threshold of the system and the resolution. Chapter 12
discusses the sources of noise and methods for dealing with noise problems in measurement
systems.
3. CHARACTERISTICS OF SIGNALS
In a general sense, a signal is any physical quantity that varies with time (or other
independent variable) and carries information. By this definition, a signal can be represented by
audio, visual, or other physical means; however, in electronics work, it usually refers to
information that is carried by electrical waves, either in a conductor or as an electromagnetic field.
Signals can be classified as either continuous or discrete. A continuous signal changes
smoothly, without interruption. A discrete signal changes in definite steps, or in a quantized
manner. The terms continuous and discrete can be applied either to the value (amplitude) or to the
time characteristics of a signal. In nature, most signals can take on a continuous range of values
within certain limits (although the limits can extend to infinity). We refer to such signals as analog
signals. Signals that take on a limited number of discrete values are referred to as digital signals;
they are discontinuous.
Signals can also be classified as either continuous-time or discrete-time. A continuous-time
signal exists at all instances in time. A discrete-time signal is defined only at selected instants of
time, frequently at equally spaced intervals. Many instruments sample the signals at their
input terminals only at certain times, in effect producing discrete-time signals. Others (frequency
counters, for example) accumulate the signal for an interval of time and then show a discrete-time
signal. An analog signal is converted to a digital signal by two steps: (1) sampling, in which the
signal is converted to a discrete-time signal, and (2) quantizing, in which the samples are given a
numeric value.
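The two steps can be sketched as follows. This is a simplified illustration with an assumed 3-bit converter and an assumed full-scale range, not a model of any particular instrument: sampling picks the signal value at discrete instants, and quantizing maps each sample to the nearest of a limited set of levels.

```python
import math

# Step 1: sampling -- evaluate a continuous-time signal only at discrete instants.
fs = 8.0                                  # sampling rate, samples per second (assumed)
samples = [math.sin(2 * math.pi * 1.0 * n / fs) for n in range(8)]  # one cycle of a 1 Hz sine

# Step 2: quantizing -- map each sample to the nearest code of a 3-bit converter (assumed),
# with an assumed full-scale range of -1 V to +1 V.
bits = 3
levels = 2 ** bits
step = 2.0 / (levels - 1)
digital = [round((s + 1.0) / step) for s in samples]   # integer codes 0..7

print(samples)    # continuous amplitudes at the sample instants
print(digital)    # the same samples after quantizing
```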
Repeating waveforms are said to be periodic. The period T represents the time for a
periodic wave to complete one cycle. A cycle is the complete sequence of values that a waveform
exhibits before an identical pattern occurs. The period can be measured between any two
corresponding points on successive cycles.
The most basic and important periodic waveform is the sinusoidal wave. Sinusoidal waves
can be defined in terms of either the sine function or the cosine function. Both the sine and cosine
functions have the shape of a sinusoidal wave (see Figure below). The term sine wave usually
implies the trigonometric function, whereas the term sinusoidal wave means a waveform with the
shape of a sine wave. A graph of simple harmonic motion as a function of time produces a
sinusoidal wave. It is generated as the natural waveform from ac generators, lasers, and radio
waves. The sinusoidal curve can also be generated by plotting the projection of the endpoint of a
rotating vector that is turning with uniform circular motion, as illustrated in Figure below.
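A short sketch of that last idea, with an assumed amplitude and rotation rate: the projection of a uniformly rotating vector onto one axis traces out a sinusoidal wave.

```python
import math

# Projection of a uniformly rotating vector (phasor) onto the vertical axis
# produces a sinusoidal wave (amplitude and frequency are assumed values).
amplitude = 1.0          # length of the rotating vector
frequency = 50.0         # rotation rate, Hz

for n in range(5):
    t = n * 0.001                               # time, s
    angle = 2 * math.pi * frequency * t         # angle swept by the vector at time t
    projection = amplitude * math.sin(angle)    # vertical projection = instantaneous value
    print(f"t = {t:.3f} s  value = {projection:+.3f}")
```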
4. TRANSDUCERS
A transducer is a device that receives energy from a measurand and responds to the data by
converting it into some usable form for a measuring system. The simple mercury thermometer is a transducer
that converts temperature into a displacement of the liquid mercury.
The output signal from a transducer used in electronic measuring systems is usually
electrical for direct use by the instrument. There are an enormous variety of transducers found in
all major industries and all the sciences for measuring quantities such as temperature, pressure,
altitude, moisture content, humidity, velocity, liquid level, flow rate, and a host of other things.
The output of a transducer depends on the type of transducer and its sensitivity. For electronic
measuring systems, the output is typically dc or ac voltage, resistance, current, inductance,
capacitance, or a change in frequency. One simple resistive transducer is the common
potentiometer. The resistance changes as a function of the rotational position of the control knob.
The sensitivity of a transducer is the ratio of the output signal to the measured input parameter. A
higher sensitivity implies greater response of the transducer (volts, current, resistance, etc.) to the
parameter being measured (temperature, radiation, light intensity).
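For example, with assumed numbers for a hypothetical thermocouple, sensitivity is just the ratio of the change in output to the change in the measured input:

```python
# Sensitivity = output signal change / measured input change (hypothetical thermocouple).
delta_output_uV = 410.0      # change in output, microvolts (assumed)
delta_input_C = 10.0         # change in temperature, degrees Celsius (assumed)

sensitivity = delta_output_uV / delta_input_C
print(f"Sensitivity = {sensitivity:.0f} uV per degree C")    # 41 uV per degree C
```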
5. TRANSMISSION
The basic purpose of a transmission line is to transfer electrical signals from one point to
another. In an instrumentation system, a transmission line is typically used between the transducer
and the signal-conditioning circuits. At low frequencies and short distances, the transmission line
does not present any particular problems; however, when low-level signals are transmitted over a
distance, particularly at high frequencies, a number of adverse effects may be introduced. These
include:
1. Attenuation of the signal
2. Decrease in the high-frequency response of the system
3. Noise pickup
The electrical properties of transmission lines are determined by their physical
characteristics, which also relate to the capacitance and inductance per unit length of the line.
These characteristics are distributed along the length of the transmission line and are called
distributed parameters. The distributed parameters of a transmission line include its series
resistance and inductance, its parallel capacitance, and the conductance of the dielectric. (The
dielectric is the insulating material between the conductors.) These parameters contribute to the
attenuation of the line specified by the manufacturer in units of decibels per 100 ft at a specified
frequency. At frequencies above about 1 MHz, the inductance and capacitance dominate the
distributed parameters and determine the characteristic impedance of the line, an ac quantity that
is basically independent of the signal frequency or the length of the line. The characteristic
impedance is the complex ratio of the voltage to current of a signal traveling away from a
generator or source of voltage and represents the opposition to the flow of current by the
transmission line. For a transmission line, the characteristic impedance is a purely resistive
quantity given by
Z0 = √(L/C)
where Z0 = characteristic impedance of the transmission line, Ω
L = inductance per unit length of the line, H
C = capacitance per unit length of the line, F
Another important parameter for transmission lines is the propagation delay time. The
propagation delay time depends on the velocity of electromagnetic waves in the line and its length.
For cable, the propagation delay time per unit length is given by
TD = √(LC)
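A short numerical check of both formulas, using representative per-meter values for a 50 Ω coaxial cable (the L and C figures are assumed, not taken from the text):

```python
import math

# Representative distributed parameters for a 50-ohm coaxial cable (assumed values).
L = 250e-9     # inductance per unit length, H/m
C = 100e-12    # capacitance per unit length, F/m

Z0 = math.sqrt(L / C)          # characteristic impedance, ohms
td = math.sqrt(L * C)          # propagation delay per unit length, s/m

print(f"Z0 = {Z0:.0f} ohms")             # 50 ohms
print(f"Delay = {td * 1e9:.1f} ns/m")    # 5.0 ns/m
```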
Types of transmission lines:
1. Coaxial cable
Coaxial cable is one of the most popular transmission methods. It is a broad-band
transmission line with a concentric center and outer conductors separated by an air or foam
dielectric. The outer conductor forms a grounded shield for the inner signal-carrying conductor.
Coax offers important advantages because of its uniform characteristics, high-frequency response,
and grounded shields, which tend to reduce noise pickup. It is useful in limiting cross talk.
Depending on its size and length, coax is useful for frequencies up to about 1 GHz.
The characteristic impedance of coaxial cable is independent of its length and depends only
on the geometry of the conductors and the dielectric core material. Typically, coax is
manufactured with 50, 75, 93, or 150 Ω characteristic impedance and is available with solid or
braided outer conductor. Fifty-Ohm cable is standard for test equipment and high-frequency work
(over 30 MHz). This cable is easy for manufacturers to construct with a stable mechanical
structure and uniform impedance. Seventy-five ohm cable is widely used for video and broadcast
work because it has minimum attenuation.
Many times the uniformity of the impedance is more important than the actual impedance
itself. The uniformity of a coaxial cable can be affected by connectors or by physical damage to
the cable itself. An improperly installed connector appears as a sharp discontinuity in the path. In
addition, a sharp bend in the cable can also affect the impedance, providing a point for reflections
and erratic performance of the line.
The characteristic impedance for a coaxial cable can be computed from its geometry and
dielectric constant of the dielectric using the equation
Z0 = (138/√εr) log(R/r)
where εr = dielectric constant
R and r = outer and inner radii of the cable
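A quick check of the geometric formula with assumed dimensions (a polyethylene dielectric with εr of about 2.3 and an outer-to-inner radius ratio of about 3.5; neither figure comes from the text):

```python
import math

# Characteristic impedance of a coaxial cable from its geometry (assumed dimensions).
epsilon_r = 2.3     # dielectric constant of polyethylene (approximate)
R = 1.75e-3         # inner radius of the outer conductor, m (assumed)
r = 0.5e-3          # radius of the inner conductor, m (assumed)

Z0 = (138 / math.sqrt(epsilon_r)) * math.log10(R / r)
print(f"Z0 = {Z0:.1f} ohms")    # about 50 ohms
```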
2. Twisted-pair wiring
For lower frequencies, twisted-pair wiring is widely used. Twisted-pair wiring normally
consists of AWG-22 or AWG-24 wires, twisted about 30 times per foot. It is less expensive than
coax and is available in a variety of configurations and sizes, including shielding, high-temperature insulation, multiple-conductor paired cable, and so forth. It can help reduce cross talk
and, when used with a differential amplifier, can reduce common-mode interference. The
propagation velocity of twisted-pair wiring is approximately 50% of coax. The rate that data can
be sent over twisted-pair wiring varies inversely with the distance: as the distance increases, the
data rate is reduced due to the limited bandwidth of longer wires.
3. Flat cable
Flat cable is available as ribbons of side-by-side conductors. One advantage of ribbon
cable is that it can easily be fitted with connectors using mass production methods. For cross talk
control, flat cable can be connected with signal grounds used in every other position. For even
higher protection, flat cable is available as ribbon coaxial cable. The impedance of flat cable
ranges from 50 Ω to 200 Ω.
There are a number of other special-purpose cables available for instrumentation and other
applications. Examples include cables particularly designed for special instrumentation and
control applications, computers, thermocouples, telephone systems, and TV cameras, to name a
few.
4. Fiber Optic
In the last two decades, many advances have occurred in optical-fiber technology, making
this an increasingly popular type of transmission line for both telecommunications systems and for
shorter distances required in instrumentation applications. A fiber-optic cable is made from thin
strands of plastic or glass that carry light instead of electricity. Although they are generally more
expensive than conductive lines, fiber-optic transmission lines offer significant advantages.
Advantages include very high bit rates (greater than 1 Gb/s) and immunity from electromagnetic pickup and cross talk, as well as low attenuation for signals. Other advantages are gained
when data needs to be sent through severe environments such as nuclear radiation, chemical or
explosive vapors, electrical noise, salt water, high humidity and dust, and so forth.
Fiber-optic links are often selected in instrumentation and control systems because they
can solve electrical isolation and interference problems typical in industrial environments.
Applications include object-sensing, counting, and positioning systems, connections between
controllers and production equipment, electrical speed-control links, and connections from
computer-control systems and the units they control. Many off-the-shelf systems are available
with a complete link, including the transmitter, cable assembly, and receiver with electronic inputs
and outputs that are compatible with logic systems.
6. INSTRUMENTS
Instruments can be considered as either stimulus instruments or measuring (response)
instruments. A measuring instrument is a device that converts a quantity to be measured into
some usable form for interpretation. Measuring instruments receive signals from the signal
conditioner or cable and provide an output in some form for the user. Examples of measuring
instruments include digital multimeters (DMMs), frequency counters, and oscilloscopes. Stimulus
instruments are sources for test signals. They can be function generators, power supplies,
oscillators, or pulse generators, for example. Stimulus instruments are used in measuring systems
for testing, troubleshooting, or supplying power or signals needed by a transducer. Some
instruments have been combined into multifunctional instruments, which can simplify a system
and provide accuracy while at the same time reducing the power requirement compared to a
collection of separate instruments. Plug-in modular instruments offer versatility by allowing the
implementation of custom modules.
Electronic instruments, as contrasted with other types of instruments, offer many
advantages for measuring physical phenomena. Among these are high speed, operation over large
distances, internal signal processing, sensitivity, versatility, and reliability. All measurements,
regardless of type, require a source of energy. For electronic measurements, the energy can be
supplied from a circuit under test, an instrument, or from both. Despite the great variety of
electronic measuring instruments, they can be divided into three categories, depending on the
source of energy for the measurement:
1. Passive instruments, such as voltmeters, absorb energy from the circuit under test.
When a passive instrument is connected to a circuit, it changes the circuit impedances.
This effect is called loading the circuit.
2. Active instruments, such as ohmmeters, supply energy to the circuit under test. These
instruments can, in some cases, supply too much current and destroy sensitive
components such as meter movements. Active instruments are frequently used with
the power removed from the circuit under test.
3. Balancing instruments, such as certain bridge circuits and potentiometer circuits,
neither absorb nor supply energy to a circuit under test; instead they operate by
detecting a null condition between the circuit and the instrument. This type of
instrument does not violate the premise that all measurements require a source of
energy because the energy for the measurement is supplied by the circuit under test.
An advantage of these instruments is that the balanced condition results in almost no
loading effect.
Many measurement and stimulus instruments contain internal microprocessors to make
possible functions such as data analysis or automated measurements. Some instruments can be
programmed for a series of automatic setups that can be recalled later. Instruments with built-in
microprocessors that have the ability to perform one or more functions that would otherwise be
performed by a person have been marketed by manufacturers as intelligent instruments. This
definition includes instruments of limited "intelligence," such as instruments that can perform self-diagnostics (error checking) and self-calibration when they are turned on. Instruments with more
intelligence can be programmed for decision making, such as comparing incoming signals with a
reference and taking some action based on the comparison or performing a series of tests under
control of a computer and sending the processed data back to the computer for analysis or storage.
These instruments are frequently interconnected to form a complete test system. Many
intelligent instruments have software-driven controls that have replaced a multitude of hardware
front-panel controls. Software-driven keys (sometimes called soft keys) change function depending
on a menu of options selected by the user to simplify instrument setup. Some intelligent
instruments allow the user to store and recall various front-panel setups to allow quick cycling
through a series of measurements or perform certain measurements automatically. Even power
supplies are available that are endowed with some intelligence, allowing them to be programmed
remotely, and that can read back the actual output current or voltage as well as perform other functions.
7. THE USERS
Measurement instruments need to detect the measurand and provide an output. The output
of an instrument can go to any of several users. The user can be a human, in which case the output
is usually a visual display such as a meter scale or CRT; it can be audio, as in speech or an alarm.
The output can be used directly in automatic control or be sent to a computer for processing; in
either case the output is usually a digital signal. The advent of microprocessors has increased the
applications of digital control systems, where the system itself is the end user of the data; one
example is radar tracking and control of a missile. Other possible end users of an instrument are
recording devices such as chart recorders or magnetic tape, which store the information for use at
some later time.
8. BLOCK DIAGRAMS
Circuits and measurement systems can be depicted with various levels of abstraction. The
simplest representation is that of the block diagram, drawn as a series of interconnected boxes.
Block diagrams are used to show the flow of information and the basic functional relationships in
a system. A block can represent a complete device (such as a digital multimeter) or it can represent
a specific functional unit that occurs in a system (such as a mixer). Connecting lines on block
diagrams are shown with arrows to indicate the signal paths.
The basic symbol used for block diagrams is a rectangle; however, other shapes and
symbols are sometimes used. A triangle is frequently used to represent an amplifier. Other
schematic symbols, such as those for resistors, capacitors, and antennas, may be used when they
clarify the overall system or the signal flow. Each block is labeled with its function, or the
function of the block may be shown symbolically.
When it is necessary to show a portion of signal removed from some point, a pick-off point is
drawn, represented by a dot at the junction. Conversely, when two signals are either added or
subtracted at some point, they can be brought into a common block, or a summing point can be
shown. A summing point is drawn as a circle. If a variable goes through the summing point
without a change in sign, a plus (+) sign is shown with the variable; if it is reversed, a minus (-)
sign is placed beside the variable. Examples of block diagrams for several systems are illustrated
in the following Figure.
II. STANDARDS AND CALIBRATION OF INSTRUMENTS
The National Institute of Standards and Technology (NIST), formerly known as the
National Bureau of Standards (NBS), was established in 1901, replacing the Office of Standard
Weights and Measures. At the time, state laws for standard weights and measures were
antiquated. Some laws were virtually unchanged since colonial times when states adopted
standards of their own choosing. The NIST took over custody of basic standards for weights and
measures but - more importantly - began a broad program of physical research to determine
precise physical standards and construct new standards when necessary. Early in its history, it
developed instruments to measure electrical current and other properties of electricity. It also
contributed data in many areas other than fundamental measurement standards, including
physical constants, properties of materials, and tests and procedures, and has contributed
to national codes such as the National Electrical Safety Code.
1. Units of measurement
The measurement of a quantity is done with specified measurement units. Measurement
units are the reference quantities that have been selected as a basis for comparison for most
measurable quantities. The definitions for various units are modified and improved from time to
time as necessary. The original "old" metric system was proposed by the French Academy of
Science in 1790. In 1875, at a conference called by the French, the Convention du Mètre was
signed by 18 countries to set up and maintain a body of weights and measures in Paris. Through a
governing authority, the Conférence Générale des Poids et Mesures (CGPM), delegates from
member countries adopted necessary procedures for the improvement of the metric system. By
international agreement, metric units have been adopted for use in all scientific and engineering
work. The dominant system in use today is called the International System of Units (Le Système
International d'Unités, abbreviated SI), which was defined at the Eleventh Session of the CGPM
in 1960. The SI system was simplified from the original system and was based on six
fundamental units (sometimes called base units) and two supplementary units.
In 1971, the General Conference added a standard of quantity called the mole to the
fundamental units. This addition was made because the number of atoms or molecules in a
substance is frequently more relevant to chemists than the mass of substance. A mole is defined
as the number of atoms in 12 grams of carbon 12. All other SI units can be formed by combining
the fundamental and supplementary units and so are called derived units. Table 3-1 lists the
fundamental units and Table 3-2 lists the supplementary units. Definitions of the fundamental
units are given in Table 3-3.
In addition to the fundamental and supplementary units, 27 derived units were adopted by
the CGPM. Table 3-4 lists these 27 derived units and some additional useful electrical units. An
important advantage of the derived units was that of coherence. Coherence means that derived
units can be expressed directly in terms of products or quotients of fundamental units without
numerical factors.
• 6 fundamental units
• 2 supplementary units
• 27 derived units
To express very large and very small quantities, a prefix can be used in conjunction
with the measurement unit. An important advantage of the metric system is that metric prefixes are
based on the decimal system and stand for powers of ten. The names and abbreviations of the
metric prefixes, together with the powers of ten they represent, are shown in Table 3-5. The
prefixes can be used in conjunction with all measurement units; however, with electrical
measurements, prefixes with an exponent that is a multiple of three are preferred. Numbers
expressed with an exponent or metric prefix that is a multiple of three are said to be expressed in
engineering notation. Engineering notation is used for marking many electronic components and
is also an aid to computations.
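A small sketch of that idea follows; the helper function and prefix table are illustrative only, not part of the text. It expresses a value in engineering notation by shifting the exponent in steps of three and attaching the matching metric prefix.

```python
# Express a value in engineering notation: the exponent is a multiple of three,
# matching the preferred metric prefixes (illustrative helper, assumed names).
PREFIXES = {-12: "p", -9: "n", -6: "u", -3: "m", 0: "", 3: "k", 6: "M", 9: "G", 12: "T"}

def engineering(value, unit):
    exponent = 0
    while abs(value) >= 1000 and exponent < 12:
        value /= 1000.0
        exponent += 3
    while 0 < abs(value) < 1 and exponent > -12:
        value *= 1000.0
        exponent -= 3
    return f"{value:g} {PREFIXES[exponent]}{unit}"

print(engineering(0.000047, "F"))    # 47 uF
print(engineering(2200000, "Hz"))    # 2.2 MHz
```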
For historical reasons, the kilogram is the only fundamental unit defined with a metric
prefix. Names of decimal multiples and submultiples of the kilogram are formed by attaching
prefixes to the word gram; for example, the correct term is milligram, not microkilogram.
Table 3-5. Names and abbreviations of the metric prefixes
2. Standard organizations
All measurements are based on comparison to some known quantity or reference. The
reference is called a standard. Standards require uniformity of measurement, which calls for international
cooperation.
The need for a clear structure for international cooperation to assure the uniformity of
measurements has long been recognized. The exchange of scientific information as well as
international trade, communication, navigation, and a host of other activities require a high
degree of cooperation between nations. Currently, the CGPM consists of nearly 50 member
nations, including all the major industrialized nations, which have mutually agreed to accept
the standards adopted by the CGPM. Every four years, delegates from the member nations
meet to discuss and adopt improvements of the SI units. The general organization of the
CGPM is shown in Figure 3-1.
Eighteen delegates from the CGPM are elected to serve on an overseeing group called the
Comité International des Poids et Mesures (CIPM) that meets annually to take up matters that
have been referred to it. CIPM members serve as chairpersons for various committees composed
of international metrology experts. These committees, occasionally helped by working groups,
examine advances in the field of physics and metrology that have a direct influence on standards.
CIPM members also direct the activities of the laboratory arm of the CGPM known as the Bureau
International des Poids et Mesures (BIPM). The BIPM maintains permanent laboratories in
Sèvres, France.
The search for more accurate ways to measure physical quantities is a never-ending
pursuit. Standards continue to be redefined through scientific advances in the field of metrology,
particularly by the BIPM and the various national laboratories. The goal for standardization is to
find standards based on constants in nature, preferably related to atomic behavior, that can be
reproduced in technical facilities anywhere in the world. All the fundamental SI units have been
defined in such terms except for the unit for mass. The kilogram is still defined by a prototype
quantity of platinum-iridium known as the international kilogram. Work at the BIPM and national
laboratories has led to various improvements in the accepted definitions of the fundamental units.
For example, the definition of the meter was changed from a prototype standard to
a definition based on a certain orange spectral line of light in 1960. Then, at the seventeenth
conference of the CGPM in October 1983, the definition of the meter was redefined as the length
of path traveled by light in a vacuum during a time interval of 1/(299,792,458) s. This definition is
clear and scientifically sound. More importantly, it can be reproduced with a high degree of
precision by any of several methods in laboratories throughout the world.
Within each country, national laboratories provide a central authority for uniform
measurement standards within their respective countries. In the United States, that authority is
the National Institute of Standards and Technology (NIST), formerly named the National Bureau
of Standards (NBS). The legislation that created the NBS in 1901 described its intended function
as the custodian of national physical standards and assigned it the mission of determining
physical constants and properties of materials as well as measurements that cannot be obtained to
sufficient accuracy elsewhere. The need for ever-increasing accuracy has required a thorough
understanding of the underlying physical principles of measurement and implementation of new
technology. The mission of the NIST has been expanded to include a considerable amount of
related scientific research in areas such as energy conservation, environmental protection,
utilization of materials, development of test methods, and coordination of activities such as
product standards. Along with this expanded mission, the NIST publishes a significant amount of
literature covering a range of topics, including product standards, mathematics studies, technical
notes, and consumer information.
In addition to research, the NIST offers physical measurement services (for a fee) to
organizations and individuals. These services include calibrations of the highest order available,
special tests, and measurement assurance programs for industry.
3. Types of standards
Standards are divided into four categories: international standards, primary standards,
secondary standards, and working standards. All standards need to be rigorously traceable through
documentation to accepted international standards or definitions representing the fundamental
units. Traceability implies that local reference standards have been compared to higher-echelon
standards through an unbroken chain of comparisons to national standards maintained by the
NIST. Other important attributes of all standards include long-term stability, accuracy, and
insensitivity to environmental conditions.
a. International standards
Except for the kilogram, all the fundamental units are defined in terms of nature. Intrinsic
standards are those that can be realized directly from their definitions rather than comparing
them to an artifact maintained at a laboratory. The advantage to intrinsic standards is that they can
be constituted in specialized laboratories throughout the world, thus ensuring that the units remain
stable, reproducible, and indestructible. Intrinsic (or "absolute") standards are tied to the basic
laws of physics rather than an artifact. Rules for specifying the practical realization of various
intrinsic standards were adopted by the CIPM in 1983. Progress has recently been made in realizing
standards from their physical definitions, particularly for electrical standards, and this will
ultimately improve measurement accuracy at all levels.
When a physical object is used as a standard, as with the kilogram, it is called a prototype
standard. Although it would be useful to have a mass standard that is defined by nature, no one
has been able to propose a standard with the required precision. It is conceivable that, in the
future, actual counting of atoms in a specified crystalline structure could be employed as a
reproducible mass standard. The international kilogram is maintained by the BIPM along with a
series of working prototype standards of the kilogram that are used for comparison with national
prototype standards. The surest method of protecting the invariability of the international
prototype kilogram is to make the least possible use of the international kilogram. The BIPM has
only rarely compared the international kilogram with its own working prototype standards; a
series of measurements begun in 1988 is only the third use of the international prototype in nearly
100 years.
The BIPM maintains a number of other standards for fundamental and derived units that
can be used as calibration sources for the national laboratories. These standards are used for
routine calibrations as needed by national laboratories for international comparisons. They are not
available for use by the public.
b. National primary standards
A primary standard is one that does not require any other reference for calibration.
National Reference Standards are primary standards that are maintained and controlled by a
national measurements laboratory such as the National Institute of Standards and Technology for
the United States or the National Physical Laboratory for England. For example, a primary
standard for time is generated from a cesium atomic beam-controlled oscillator. NIST uses such a
device as the national reference standard.
The U.S. copy of the international kilogram (number 20) is kept in a vault at the NIST
laboratory at Gaithersburg, Maryland. It is considered a primary standard for the United States but
in fact is a very accurate copy of the international kilogram (number 1) retained by the BIPM in
Sèvres, France. The U.S. prototype is used no more than once a year to compare with high-precision copies that are accurate to 1 part in 100 million. Other primary reference standards of the
fundamental units can be reproduced from their definition in nature.
c. Secondary standards
The difference between a primary standard and a secondary standard is that a primary
standard does not require any other reference for calibration, but a secondary standard must be
compared periodically to a primary standard, depending on the accuracy required. Secondary
standards are often used as transfer standards, to move a unit of measurement from a primary
standard to one that can be used in a laboratory as a working standard. Secondary standards, with
calibrations directly traceable to NIST, are retained by many laboratories and industries. An
example of a secondary standard is the rubidium frequency standard. Although it is typically more
than 100 times more accurate than an ordinary quartz crystal oscillator, the rubidium gas cell is
subject to very small degrees of drift and must be periodically tested against a primary standard.
d. Working standards
Typical electronic measuring equipment cannot approach the precision of primary
standards. Except for the most exacting scientific work, it is seldom necessary to calibrate
electronic equipment to these standards. A working standard is a device used to maintain a unit
of measurement for routine calibration and certification work on test equipment. Working
standards require periodic calibration and certification to assure that they are accurate. They are
normally not used for routine measurements in an organization but instead are reserved for
calibration service. Calibration standards must have the attributes of accuracy and stability. They
must be used only for the measurement range for which they are intended.
Working standards form the basis of the quality of measurement work for an organization.
A record should be kept for all working standards. This record includes calibrations with higher-echelon standards, comparisons made with other working standards, adjustments or repairs that
could give an indication of possible deterioration of the standard, and other pertinent information.
The calibration interval depends on the delicacy of the standard, required accuracy, frequency of
use, and the experience level of personnel who use the standard.
4. Calibrations
Calibration refers to the comparison of a measurement instrument to a standard or other
instrument of known accuracy in order to bring the instrument into substantial agreement with an
established standard. The established standard normally has at least four times the accuracy of
the instrument being calibrated; if the four-times criterion is not feasible, the average of multiple
standards may be used. To say that an instrument is calibrated means that it indicates measurements within
specified limits of error for that instrument.
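A minimal sketch of the four-times criterion described above, with assumed uncertainty figures for the instrument and the reference standard:

```python
# Check the four-times accuracy criterion for a calibration (assumed figures).
instrument_uncertainty = 0.1      # % of reading, instrument being calibrated
standard_uncertainty = 0.02       # % of reading, reference standard

ratio = instrument_uncertainty / standard_uncertainty
if ratio >= 4:
    print(f"Standard is {ratio:.0f} times more accurate: acceptable for calibration")
else:
    print(f"Ratio is only {ratio:.1f}:1; use a better standard or average several standards")
```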
To calibrate an instrument, the person doing the calibration should consider the limits of
accuracy of the instrument to be calibrated, the methods that will be used to ensure that the
required accuracy can be obtained, and the accuracy of the standard or instrument used as a
reference. The accuracy of an instrument is dependent on three major contributing factors:
1. The time since it was last calibrated
2. The difference between the calibration temperature and the operating temperature
3. The uncertainty of the standard used for calibration
In most organizations, calibration is the responsibility of a separate group. That group is
responsible for maintaining working standards, keeping records of instruments for periodic
calibration and certification, and being knowledgeable of the latest calibration procedures. The
calibration facility is usually located in a separate standards laboratory, which is equipped to make
the highest-accuracy measurements needed by the organization. The standards laboratory should
have environmental controls to assure consistency because environmental factors can affect the
accuracy of certain standards and the performance of instruments. Environmental factors that are
controlled usually include temperature, humidity, vibration, dust, radio-frequency interference,
grounding, and line voltage regulation.
In general, instruments should be recalibrated at specified intervals or after any repair work
to ensure they are within accuracy specifications. Many instruments have accuracy specifications
based on hours of operation, typically 1000 to 2000 hours. The recalibration interval is dependent
on various factors, including the accuracy and confidence level required, the average daily use of
the instrument, and the predicted or actual long-term drift of the instrument. Depending on these
factors, the recalibration interval may be as short as 30 days for stringent requirements; more
typically it is 6 months to one year or more. Other instruments, such as passive resistance boxes,
may have longer recalibration intervals, frequently up to two years.
The long-term drift of instruments depends on the class of instrument (oscilloscope, digital
instruments, meters), the specific model, and the type of use it receives. Advances in the
manufacture of precision components having very low long-term drift and improved stable
reference standards make it possible to predict instrument long-term drift, allowing extended
recalibration intervals. Some instruments rarely require adjustment, only verification. The
recommended calibration interval can be extended on instruments with demonstrated long-term
stability; however, the documentation required to extend the interval may make it economically
infeasible. A performance check of such instruments may be all that is required to assure that they
are still within specifications.
The calibration procedure and instrument performance checks are generally specified by
the instrument manufacturer in the manuals that are provided with the instrument. Sometimes
calibration consists only of a performance check of the instrument to assure that it is operating
within specified accuracy limits. Normally, if an instrument is operating well within its specified
accuracy limits, adjustments to optimize its performance are unnecessary and should not be made.
Frequent unnecessary adjustments may also interfere with the records for an instrument and
accelerate progressive drift in values.
An exception to this procedure can be made if the parameter is very near the tolerance limit
and there is reason to believe it may be out of tolerance before the next calibration. The decision to
adjust instruments that are close to the specified limits depends on the type of instrument and on
the operating practice and philosophy of the laboratory doing the recalibration. The key factor is
that the calibration laboratory is, in effect, guaranteeing that the instrument is calibrated and will
remain calibrated until the next calibration due date (barring a breakdown). The technician making
optimizing adjustments to an instrument needs to be aware that the adjustment of one parameter
often has an interaction effect on another parameter. For example, if the power supply is
"tweaked," it can have an effect on all other parameters and require a complete calibration check
of the instrument.
Calibration records:
After an instrument's performance has been checked or it has been recalibrated, certain
records should be completed. A label or other coding is attached to the instrument to indicate the
instrument identification, date, person who performed the calibration, and the date when the
instrument is to be rechecked. Different labels are used to indicate certain specific information
about the calibration. A label may indicate conditions that affect the use of the instrument, such as
a performance limitation, calibration deviation, that the instrument is traceable to NIST, or that no
calibration is necessary.
In addition to the instrument label, a file record should be kept for each instrument
belonging to an organization, listed by model and serial number. It should include the previously
cited information plus any service work performed on the instrument. This record is valuable
when troubleshooting an instrument, since it may point out likely problem locations. A similar
record is often kept in a computer database to keep track of parts used on a particular instrument
model, technician time, accounting charges, and the like. The computer record also simplifies the
procedure for assuring that all instruments requiring calibration are recalled by the standards
laboratory at the proper time interval. Careful monitoring of the calibration history of a model or
class of instruments, including records on those that are out of tolerance and the repair history, can
point to the need for corrective action, such as changing the recalibration interval, changing the
recalibration procedure, derating the calibration, changing the specifications, or even removing an
instrument from service.
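As a minimal sketch of the kind of per-instrument record described above (the field names and values are assumptions for illustration, not a prescribed format):

```python
from dataclasses import dataclass, field
from datetime import date

# A simple calibration record of the kind kept for each instrument
# (field names are illustrative, not a prescribed format).
@dataclass
class CalibrationRecord:
    model: str
    serial_number: str
    calibrated_on: date
    calibrated_by: str
    recheck_due: date
    standard_used: str
    within_tolerance: bool
    notes: list = field(default_factory=list)

record = CalibrationRecord(
    model="DMM-1234",                  # hypothetical instrument
    serial_number="SN-0042",
    calibrated_on=date(2024, 3, 1),
    calibrated_by="J. Technician",
    recheck_due=date(2025, 3, 1),
    standard_used="Working voltage standard WS-7",
    within_tolerance=True,
    notes=["Input divider adjusted; see service log."],
)
print(record.model, "next calibration due", record.recheck_due)
```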