Special Report

Recommendations for Standardization and Specifications in Automated Electrocardiography: Bandwidth and Digital Signal Processing

A Report for Health Professionals by an Ad Hoc Writing Group of the Committee on Electrocardiography and Cardiac Electrophysiology of the Council on Clinical Cardiology, American Heart Association

James J. Bailey, MD; Alan S. Berson, PhD; Arthur Garson Jr., MD; Leo G. Horan, MD; Peter W. Macfarlane, PhD; David W. Mortara, PhD; Christoph Zywietz, MSc

Approved by the SAC/Steering Review Committee of the American Heart Association in October 1989. Single reprints are free; ask for 00-7501. Correspondence should be addressed to the American Heart Association, Office of Scientific Affairs, 7320 Greenville Avenue, Dallas, TX 75231.

Recent technologic advances and published studies point to the need to reevaluate requirements for the recording and analysis of electrocardiographic (ECG) data. Several issues have arisen from recent developments.

First, approximately 52 million ECGs were processed by computer in 1987 (up from 4 million in 1975), and more than 15,000 digital ECG systems are now in use (up from 85 in 1975).1 In addition, computer-assisted electrocardiography is spreading rapidly from academic medical centers to private hospitals, clinics, and physicians' offices. A major factor in this diffusion of technology is the introduction of inexpensive (less than $5,000) stand-alone interpretative carts using microchip technology.1

Second, a large number of ECG systems, particularly stand-alone carts, convert data into digital form immediately after amplification and process them digitally. This type of processing, combined with preamplifiers with lower noise characteristics than those available earlier, permits higher resolutions and signal-to-noise ratios than conventional linear analog systems.

Third, high-frequency components and low-amplitude signals in the ECG previously known primarily to researchers are now regarded with wider interest. Distinguishing between the needs of researchers and the requirements for routine ECG recording has become more difficult.

Fourth, increased recognition of the value of serial analysis of ECGs requires storage and retrieval of records and, often, transmission of raw ECG data over telephone lines. Data compression techniques for efficient handling of large volumes of data will probably become widespread over the next several years. There are a variety of methods for describing the features of these often complex systems, and some guidelines for specifying performance would be useful.

Fifth, a consortium of European investigators seeking common standards in quantitative electrocardiography (CSE) has found a variety of systematic biases and inaccuracies in measurements made by 16 different computer programs for ECG analysis.2-5 As they point out, investigators cannot exchange diagnostic criteria until measurements by the various programs are uniform. Thus, criteria cannot be standardized without standardization of metrology, morphology, and terminology among all programs.
The recommendations of the American Heart Association (AHA) have previously focused on conventional ECG strip chart recording, although in 1975 the AHA issued guidelines for tape recording and analog-to-digital conversion.6,7 In 1983, the American National Standards Institute (ANSI) published a report on analog specifications.8 With the exception of bandwidth, the Committee on Electrocardiography and Cardiac Electrophysiology finds the analog standards recommended in these reports still acceptable.

This report is primarily concerned with digital signal processing, which has become an integral function in many ECG systems. Digital processing of the ECG signal requires analog-to-digital conversion and includes filtering, data compression, teletransmission, storage onto and retrieval from digital media, and measurement of waveforms. In every instance, the result of digital processing is judged by the fidelity with which it represents the original ECG signal. This fidelity underlies the accuracy and uniformity of the metrology (i.e., waveform measurement) needed for final analysis and diagnostic interpretation of the ECG. This is the central theme of the discussions that follow.

Signal Fidelity After Digital Processing

Fidelity may be defined as a measure of how closely the result of digital processing represents the "true" input signal; that is, the greater the error of a digital processing method, the lower the fidelity. Output data after digital transmission can be compared bit-wise with input data. After digital storage of raw data, the retrieved data can be similarly tested against the input data. After data compression, the data can be reconstructed and compared with the original data. After analog-to-digital conversion, however, the test of fidelity is how well the analog signal is represented by the digital data; there are several ways to perform such a test, but their details and relative merits are beyond the scope of this discussion. Testing the effect of digital filtering is a more complicated process. One approach is to use an extremely "clean" ECG signal and arbitrarily add various types of noise (muscle, baseline wander, mains hum, etc.) at various amplitudes before filtering; the output data after filtering can then be compared with the original ECG data.9,10

Requirements for fidelity may vary, depending on the capabilities of the digital processing method. Many systems for data transmission or storage of raw data are capable of complete duplication of input data down to the least significant bit, a standard that is unrealistic for filtering and inappropriate for analog-to-digital conversion. In these cases, the requirement for fidelity will depend on how the data are to be used. For example, if the purpose is to visually display the ECG for routine manual reading as currently practiced, then previous recommendations7,8 are relevant. In its 1975 report (section A1), the AHA recommends a limit of 0.25 mm error (25 µV) for direct-writers at 10 mm/mV sensitivity for peak-to-peak amplitudes of less than 5 mm; otherwise, error should be limited to ±5%.7 At half standard (5 mm/mV), this means the greater of ±5% or ±50 µV. The ANSI standard (section 3.2.7.1) suggests the greater of ±10% or ±50 µV.8 However, the AHA's specification (the greater of ±5% or ±25 µV) is readily achievable by current digital technology.
If, on the other hand, the purpose is to extract measurements from digital data and classify waveforms diagnostically, fidelity must be improved. (The high fidelity required for surface recording of His-Purkinje activity and late potentials is not considered here.) The AHA report on stand-alone amplifiers (section B) provides one possible guideline: an error limit of ±2% or ±10 µV, whichever is greater, for peak-to-peak deflections.7

Several error parameters can be used to quantify the differences between input and output data: absolute error at any point or over any interval, root mean square (RMS) error, maximum peak relative error (MPE), timing errors, and waveform detection errors. The fidelity criteria met by an automated ECG system should be publicly stated. The following examples of fidelity criteria are supported by either theoretical investigations or experimental studies reported in the literature (a computational sketch of several of them follows the list):

Fidelity Criteria

For Routine Visual Reading
F1: Deviation of the recorded output from an exact linear representation of the input signal may not exceed 25 µV or 5%, whichever is greater. (See AHA report, section A1.7)

For Morphological Diagnosis by Digital Program
F2: RMS error over the PQRST complex may not exceed 10 µV.11
F3: Error in peak-to-peak deflections may not exceed 10 µV or 2%, whichever is greater. (See AHA report, section B.7)
F4: Mean squared error divided by the mean squared amplitude of a deflection may not exceed 1%.12
F5: QRS deflections with ≥20 µV amplitude and ≥6 msec duration should be detected (threshold defined by CSE4,13).
F6: MPE may not exceed 10% over the interval of any QRS deflection ≥20 µV and ≥12 msec.14,15

For Digital Transmission or Storage/Retrieval of Data
F7: Corresponding samples of input data and reconstructed data should differ by less than 10 µV (half of the CSE threshold; see F5 above).
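Several of these criteria reduce to simple computations on matched input/output sample arrays. The sketch below, a minimal illustration in Python (NumPy assumed; the function names are ours, not from this report), evaluates the RMS-error criterion F2, the peak-to-peak criterion F3, and the sample-wise bound F7.

```python
import numpy as np

def rms_error(original_uv, processed_uv):
    """F2: RMS error over the PQRST complex, in microvolts."""
    d = np.asarray(processed_uv, float) - np.asarray(original_uv, float)
    return np.sqrt(np.mean(d ** 2))

def peak_to_peak_error(original_uv, processed_uv):
    """F3: error in the peak-to-peak deflection, in microvolts."""
    ptp_in = np.ptp(original_uv)
    return abs(np.ptp(processed_uv) - ptp_in), ptp_in

def meets_f2_f3_f7(original_uv, processed_uv):
    """Check criteria F2, F3, and F7 on matched sample arrays (in uV)."""
    f2 = rms_error(original_uv, processed_uv) <= 10.0
    err, ptp_in = peak_to_peak_error(original_uv, processed_uv)
    f3 = err <= max(10.0, 0.02 * ptp_in)      # greater of 10 uV or 2%
    diffs = np.abs(np.subtract(processed_uv, original_uv, dtype=float))
    f7 = np.max(diffs) < 10.0                 # half of the CSE threshold
    return f2, f3, f7
```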
Data Compression, Storage, and Retrieval

Operational and economic considerations related to storage, retrieval, and teletransmission of ECG data have motivated a search for efficient data compression techniques. The long-recognized importance of serial ECG analysis has made storage and retrieval an integral part of computerized ECG systems. Efforts to improve ECG fidelity through higher sampling rates and higher resolution will only increase the need for efficient data compression techniques.

Data compression can be achieved in many different ways, depending on the purpose of the data subsequently retrieved. At one extreme, the data compression scheme may be designed to permit reconstruction of the entire cardiac cycle to the least significant bit or with a correspondingly small error. At the other extreme, data can be stored with less accuracy if, for example, only a display of the ECG for rhythm determination is required. Various intermediate forms of data compression can also be visualized.

One method for serial comparison16 stores only certain amplitude values used by a particular comparison program. Such values may be derived statistically from several cycles or from a cardiac complex that can be a single, selected cycle with very low noise. Another method uses algorithms for prediction or interpolation17 so that the difference between an actual value and a predicted or interpolated value can be used in subsequent storage/retrieval techniques. Its advantage is that the difference has a much smaller amplitude, and thus a smaller data storage requirement, than the actual value itself. It is possible to go further and code the difference values (or residuals) so that the most frequently occurring residuals are represented by the shortest codes. This is one form of Huffman coding.18 With this approach, care must be taken that errors due to truncation are not introduced; this is discussed elsewhere.19 Still another approach is to form the average cardiac cycle and subtract it from each PQRST cycle in the record after suitable time alignment. The residual signal should then have low amplitude and can be treated by the Huffman coding methods.

Use of orthogonal functions to effect data compression is advocated by some. Recent studies have been based on the Karhunen-Loeve (K-L) series,20-22 which allows ECG-like waveforms to be used as basis functions whose linear combination represents the waveform to be compressed. The advantage of this approach is that normally no more than 20 coefficients need be stored for each waveform to be compressed; the basis functions need be stored only once.

Data compression may also be implemented with techniques called "fan" algorithms, which are based on first-order interpolation with two degrees of freedom. The fan method does not require the storage of all actual data points between the last transmitted point and the present point during program execution. Blanchard and Barr23 compared five data compression techniques, including a fan method, for cardiac waveforms. The fan method produced reconstructed waveforms with errors comparable to or better than those of the other methods.

Regardless of the method used to achieve compression, the principal concern is that the user be fully informed about the technique.

Recommendations
1. A complete description of the following is necessary:
* data that are compressed/stored
* what can be retrieved, that is, entire ECGs, selected cycles only, or measurements only
* how retrieved data may reasonably be used, for example, rhythm analysis, visual interpretation, serial comparison, criteria/algorithm development, etc.
2. The fidelity with which data can be reproduced should be specified, using the criteria in the section on signal fidelity above.
3. The compression ratio should be stated, with reference to the original sampling details, that is, sampling rate and bit resolution (number of bits per sample). For example, if data from a system with 500 Hz, 10-bit, equal-interval sampling are compressed with a technique that uses 1,000 bits/sec, the compression ratio may be characterized as 5:1, assuming the sampling details are clearly stated. A preferable alternative is to express the data reduction as average bits/sample; in the example above, the average length of the reduced data is 2 bits/sample (see the sketch following these recommendations).
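As a rough illustration of the predictor-residual idea and of the recommended bookkeeping, the sketch below (assumptions are ours: a synthetic signal and a first-difference predictor standing in for the prediction/interpolation schemes cited above) estimates the entropy of the residuals in bits/sample, a lower bound that an ideal Huffman-style coder approaches, and the resulting compression ratio relative to 10-bit raw storage.

```python
import numpy as np

# Synthetic, loosely ECG-like signal sampled at 500 Hz, quantized to 10 uV steps
fs = 500
t = np.arange(2 * fs) / fs
signal_uv = 1000 * np.exp(-((t % 0.8 - 0.3) / 0.02) ** 2)  # crude QRS spikes
quantized = np.round(signal_uv / 10).astype(int)           # 10 uV quantization

# First-difference predictor: residual = sample - previous sample (lossless,
# since the signal can be rebuilt by cumulative summation from the first sample)
residuals = np.diff(quantized, prepend=quantized[0])

def entropy_bits_per_sample(values):
    """Shannon entropy of the residual distribution, in bits/sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

bits = entropy_bits_per_sample(residuals)
print(f"approx. {bits:.2f} bits/sample vs. 10 bits raw "
      f"-> compression ratio about {10 / bits:.1f}:1")
```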
Low Frequency, Filters, and Phase

The lowest frequency components in the ECG correspond to the longest RR interval. Thus, assuming a lower bound for heart rate of 30 beats/min, the low-frequency cutoff of an ECG should be approximately 0.5 Hz. Such a cutoff will suppress still lower frequency baseline wander due to respiratory variation, etc. However, an amplifier with this cutoff frequency distorts ECG signals considerably, particularly T waves and ST segments, since linear systems with a nonuniform amplitude versus frequency response exhibit phase nonlinearities in the frequency ranges where the amplitude versus frequency response changes rapidly.

In 1975, the AHA7 recommended a low-frequency cutoff of 0.05 Hz, based on previously published studies.24 An analog, single-pole amplifying system with this cutoff frequency will exhibit less than 6° of phase shift for frequency components around 0.5 Hz, causing no important distortions in the ST segment, T wave, or QT interval. For example, such a system would respond with a displacement of approximately 60 µV after a 200 µV-sec impulse input (2 mV × 100 msec; see Figure 1). A more realistic model for a QRS might be a triangular waveform with 3 mV amplitude and 100 msec duration. This 150 µV-sec impulse would be followed by an offset of approximately 45 µV if a linear system with a 0.05 Hz, single-pole filter were used (Figure 2). Since QRS fundamental frequencies are usually less than 30 Hz and T wave fundamental frequencies are usually greater than 1.0 Hz, the most sensitive frequency range with respect to phase shifts is 1.0-30 Hz. A system with a phase lag of 6° at 1.0 Hz and zero phase lag at frequencies above 10.0 Hz produces an error in QT interval of less than 12 msec. This is well within the tolerances suggested by the CSE studies.5 However, a frequency cutoff at 0.05 Hz does not sufficiently suppress baseline wander, and ECGs with higher cutoffs have introduced unacceptable phase nonlinearities with consequent distortions in ST segments.25-32

[Figure 1. Top panel: solid line, input signal, a square impulse with amplitude of 2 mV and duration of 100 msec (a 200 µV-sec impulse); broken line, output after the signal has passed through a 0.05 Hz, single-pole filter. Bottom panel: error (output minus input) in µV over time; the maximum depression of the segment after the impulse is about 60 µV.]

[Figure 2. Top panel: solid line, input signal, a triangular impulse with amplitude of 3 mV and duration of 100 msec (a 150 µV-sec impulse); broken line, output after the 0.05 Hz, single-pole filter. Bottom panel: error (output minus input) in µV over time; the maximum depression of the segment after the impulse is about 45 µV.]

Figure 3 illustrates the relation of two step functions (positive followed by negative) to an impulse function. The first and second panels of Figure 3 depict the relation between the two unfiltered step functions and the corresponding impulse function. The third and fourth panels of Figure 3 show the effect on the same step and impulse functions of a high-pass, single-pole, 0.5 Hz filter. Note the 500 µV depression of the segment immediately after the impulse. This model mimics the problem of artifactual displacement of the ST segment resulting from the phase nonlinearities mentioned above.
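These single-pole responses are easy to reproduce numerically. The sketch below (a minimal model of ours, not from this report) passes the square and triangular impulses of Figures 1-3 through a discrete version of a first-order high-pass filter and prints the post-impulse displacement, which comes out near the approximately 60 µV, 45 µV, and 500 µV values cited above.

```python
import numpy as np

def single_pole_highpass(x, fc, fs):
    """Discrete model of an analog single-pole high-pass stage:
    y[n] = a*(y[n-1] + x[n] - x[n-1]), with a = exp(-2*pi*fc/fs)."""
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros(len(x))
    xprev = yprev = 0.0
    for n, xn in enumerate(x):
        yprev = a * (yprev + xn - xprev)
        xprev = xn
        y[n] = yprev
    return y

fs = 1000.0                                   # 1 kHz simulation grid
t = np.arange(int(0.8 * fs)) / fs             # 800 msec window
square = np.where((t >= 0.1) & (t < 0.2), 2000.0, 0.0)    # 2 mV x 100 msec
tri = np.interp(t, [0.1, 0.15, 0.2], [0.0, 3000.0, 0.0])  # 3 mV x 100 msec

for name, x, fc in [("Fig 1", square, 0.05),
                    ("Fig 2", tri, 0.05),
                    ("Fig 3", square, 0.5)]:
    offset = single_pole_highpass(x, fc, fs)[int(0.2 * fs)]  # just past impulse
    print(f"{name}: post-impulse displacement ~ {offset:.0f} uV")
```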
In 1975, the AHA made no recommendations about phase distortion, since it was assumed that a system with extended bandwidth (down to at least 0.05 Hz) would not introduce unacceptable phase response above 0.5 Hz. Requirements for phase response were also omitted from the ANSI standard.8

[Figure 3. Top panel: unfiltered step functions; a positive step function followed 0.100 seconds later by a negative step function. Second panel: unfiltered impulse function composed of the step functions in the top panel. Third panel: effect of a high-pass, single-pole, 0.5 Hz filter on the step functions in the top panel. Bottom panel: effect of the 0.5 Hz filter on the impulse function in the second panel. Note the 500 µV depression of the segment immediately following the impulse; this model mimics the artifactual displacement of the ST segment resulting from phase nonlinearities (see text).]

Digital technology now permits a variety of methods for increasing the cutoff frequency while maintaining phase linearity and minimizing ST distortion. A corrected baseline preceding each QRS may be created with several types of filters. In the presence of certain abnormalities, such as an interpolated premature ventricular complex, the baseline correction method may distort the preceding ST-T wave. On the other hand, many programs use an averaged complex for measurement; if baseline wander is not minimized before averaging, the final, averaged complex may show artifactual shifts in the ST segment, which is particularly important in stress ECGs. Thus, caution must be exercised when implementing such methods to guard against unexpected and unintended distortions of the low-frequency signals, namely, the P and ST-T waves.

At least two alternative methods are improvements on the traditional 0.05 Hz, single-pole system. In one, after capture of a significant time segment, the filter used during acquisition is applied again, but in reverse time. This method is most easily implemented with digital technology and results in null phase shift at all frequencies and time-symmetric step and impulse responses.33 A real-time alternative is to construct a filter with a step response that is initially flat.34 To the extent that the flat response is maintained over the duration of an input impulse such as a QRS, displacement in the ST segment is minimized. However, two caveats must be observed: 1) All systems with initially flat step responses will have gain above unity at some frequency. Care must be taken that this gain does not occur within the range of ECG frequencies; in the range of common noise frequencies (for example, in the presence of baseline wander due to respiratory variation), the gain should be limited to 20%. 2) If the duration of the flat step response extends beyond the impulse duration with a subsequent rapid decay, then an inverted image of the impulse can occur after the original input. If a triangular impulse (3 mV by 100 msec) is used as a model for a QRS, then slopes outside the region of such an impulse should not exceed 150 µV/sec.
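Of these two alternatives, the forward-backward (reverse-time) method is simple to demonstrate off line. The sketch below, ours and merely illustrative, uses SciPy's filtfilt, which applies a filter forward and then in reverse time, to remove simulated baseline wander with zero net phase shift; the filter order and cutoff are assumptions, not values from this report.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                   # Hz, sampling rate
t = np.arange(int(10 * fs)) / fs
ecg = 1000.0 * np.exp(-(((t % 1.0) - 0.4) / 0.02) ** 2)  # crude 60 bpm "QRS" train, uV
wander = 400.0 * np.sin(2 * np.pi * 0.25 * t)            # respiratory baseline wander

# Single-pole (first-order) high-pass; filtfilt runs it forward and then in
# reverse time, so the cascade has zero net phase shift at every frequency.
b, a = butter(1, 0.5 / (fs / 2), btype="highpass")
cleaned = filtfilt(b, a, ecg + wander)

print(f"worst-case deviation from the wander-free signal: "
      f"{np.max(np.abs(cleaned - ecg)):.0f} uV")
```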
The remaining issue concerns minimum amplitude versus frequency response criteria. It is necessary to consider a lower bound for average heart rate and the degree of heart rate variability to be expected. Simonson's studies35,36 and data from the Framingham Heart Study noted by Robert J. Garrison, MA, and Daniel Levy, MD (personal communication, June 26, 1989), indicate that over 99% of adults have a mean resting heart rate greater than 44 beats/min. Simonson's work implies that in 99% of resting adults, the intraindividual RR interval variation is less than 0.126 seconds. Hence, a heart rate of 40 beats/min forms a lower bound for over 99% of resting adults, 99% of the time. A low-frequency amplitude response down to 0.67 Hz is necessary to reproduce heart rate frequencies in such a range.

The following recommendations do not include a specific reference to phase linearity, because a system meeting the recommended amplitude and impulse response criteria is appropriate for accurate ECG reproduction regardless of phase characteristics. However, a system that meets the amplitude response criterion and also has phase linearity at least equal to that of the linear 0.05 Hz, single-pole system is likely to meet the impulse response criteria.

Recommendations
1. Amplitude response should be flat to within ±6% (0.5 dB) over the range 1.0-30 Hz; the 3 dB points should be less than or equal to 0.67 Hz and greater than or equal to 150 Hz. (Compare with the AHA's recommendations, section A8.7) A numerical check of this criterion is sketched after these recommendations.
2. A 1 mV-sec impulse input should not produce a displacement greater than 0.3 mV after the impulse.
3. For a 1 mV-sec impulse input, the slope of the response outside the region of the impulse should nowhere exceed 1 mV/sec. (Note: Criteria 2 and 3 are met by an analog 0.05 Hz, single-pole filter.)
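One way to verify recommendation 1 numerically is sketched below. The analog model, a single-pole 0.05 Hz high-pass cascaded with a two-pole 150 Hz Butterworth low-pass, is our own assumption chosen only to exercise the test; any candidate response could be substituted.

```python
import numpy as np
from scipy.signal import butter, freqs

# Assumed analog model: 0.05 Hz single-pole high-pass cascaded with a
# 150 Hz two-pole Butterworth low-pass (cutoffs given in rad/s).
b_hp, a_hp = butter(1, 2 * np.pi * 0.05, btype="highpass", analog=True)
b_lp, a_lp = butter(2, 2 * np.pi * 150.0, btype="lowpass", analog=True)

f = np.logspace(-2, 3, 2000)                 # 0.01 Hz to 1 kHz
w = 2 * np.pi * f
h = freqs(b_hp, a_hp, w)[1] * freqs(b_lp, a_lp, w)[1]   # cascade response
gain_db = 20 * np.log10(np.abs(h))

band = (f >= 1.0) & (f <= 30.0)
print("flat within 0.5 dB over 1-30 Hz:", bool(np.all(np.abs(gain_db[band]) <= 0.5)))
print("gain at 0.67 Hz: %.2f dB (must be above -3 dB)" % np.interp(0.67, f, gain_db))
```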
High Frequency and Analog-to-Digital Conversion: Sampling Rate and Quantization

In the following discussion, the conventional definition of bandwidth as the frequency range between the low- and high-frequency cutoffs (3 dB down) is used. The frequency content of ECG signals has been a subject of debate for many years. For routine recording, most electrocardiographers agree that visual diagnostic accuracy can be maintained with a high-frequency specification between 50 and 100 Hz. The AHA recommended the upper figure of 100 Hz for direct-writing electrocardiographs in 19676 and again in 1975,7 based, in part, on technological limitations at that time. The ECG committees of the AHA based their recommendations on studies conducted in the 1960s and 1970s. Although the medical and industrial communities in the United States have since accepted the need for 100 Hz bandwidth, controversy still exists worldwide. The international standard37 for ECGs has been "on hold" for several years for a number of reasons, including the negative vote by the United States on the 75 Hz bandwidth favored by other participating countries. In the meantime, the 1983 ANSI standard8 restates the same recommendation of an upper frequency cutoff of 100 Hz. However, many electrocardiographs on the current market already have bandwidths of up to 125 Hz, a standard proposed in 1976 by an instrumentation subcommittee of the joint advisory committee of the Canadian Cardiovascular Society and the Canadian Heart Foundation.38 Furthermore, for stand-alone amplifiers, the AHA in 1975 recommended "a frequency response which is flat to within 6% from 0.14 to 1000 Hz."7

High-frequency components in the ECG have been studied extensively. Langner and Geselowitz39 demonstrated low-amplitude notches at high frequencies (greater than 100 Hz), and alterations in such high-frequency notches in the context of coronary artery disease have been studied by many investigators.40-48 However, the diagnostic significance of low-amplitude, short-duration QRS notches has not been uniformly accepted.49 Other studies have focused on the errors introduced in the principal Q, R, or S waves as the bandwidth was reduced.50-52 Clearly, a bandwidth of at least 500 Hz is necessary to reproduce all measurable high-frequency components53; below this, amplitude errors occur more frequently. A judgment must be made about the level of error that is acceptable. Previously, 25 µV/50 µV (0.025 mV/0.050 mV) was considered allowable, based on human visual ability to resolve this amplitude (0.25 mm) when viewing a tracing on standard 1 mm ruled paper at full or half standard, respectively.

However, automated systems are capable of exceeding this resolution. Wolf54 reported a program that resolves waveforms down to 25 µV in amplitude and 20 msec in duration if noise is less than 60 µV root mean square (RMS). The European CSE group13,55 tested nine computer programs. Seven consistently detected 12 msec Q waves with amplitude greater than or equal to 30 µV or 12 msec initial R waves with amplitude greater than or equal to 40 µV. Furthermore, based on their library of routine ECGs, the CSE group recommended defining a QRS deflection as "present" if it has an amplitude greater than or equal to 20 µV and a duration greater than or equal to 6 msec (see fidelity criterion F5).

Typical ECG analysis systems have 10- to 12-bit analog-to-digital conversion, which translates to 9.76 to 2.44 µV resolution, respectively, for a full 10 mV scale. The equivalent input (preamplifier) noise is in the range of 5-10 µV RMS. Amplitude differences of 10-20 µV may occur, resulting in significant changes in diagnostic statements in programs that use decision-tree logic. In such systems there is good reason for an increased system bandwidth that keeps amplitude errors below 20 µV for low-amplitude deflections. Using 100 Hz bandwidth for recording adult ECGs and 150 Hz for infant ECGs, Berson et al50-52 showed that amplitude errors greater than 50 µV can occur in over 10% of recordings. Clearly, a bandwidth exceeding 100 Hz for adult ECGs and 150 Hz for infant ECGs will be required to reduce amplitude errors well below 50 µV and to determine small R or Q waves with reliability. Exactly how high the bandwidth must extend has yet to be determined.

A significant increase in bandwidth will, of course, affect the digital sampling rate. Theoretical considerations guide determination of the minimum sampling rate necessary to allow reconstruction of the analog signal without loss of information. Nyquist's theorem asserts that the minimum sampling rate should be twice the highest frequency component contained in the original analog signal. However, this theorem becomes valid only in the limit of an infinite sampling period, and there are additional considerations for the finite epochs of ECG recording. First, the signal of interest may be mixed with undesirable high-frequency components, in which case it is necessary to choose a sampling rate based on those components, even though the signal of interest may not contain them. If a lower sampling rate is chosen, errors will occur in the reconstructed waveform as these high-frequency components superimpose on and mix with the low-frequency components. An alternative is to filter out the undesirable high-frequency components before sampling; then a lower sampling rate may be adequate.
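A quick numerical illustration of this aliasing hazard (ours; the frequencies are chosen arbitrarily): a 150 Hz component sampled at 250 Hz is indistinguishable from a 100 Hz component, so it folds into the diagnostic band unless it is removed before sampling.

```python
import numpy as np

fs = 250.0                       # sampling rate, Hz (Nyquist limit 125 Hz)
t = np.arange(0, 1, 1 / fs)      # 1 second of sample instants
x_150 = np.cos(2 * np.pi * 150.0 * t)   # out-of-band component
x_100 = np.cos(2 * np.pi * 100.0 * t)   # in-band component

# 150 Hz exceeds the 125 Hz Nyquist limit and folds to |150 - 250| = 100 Hz,
# so the two sampled sequences are identical to within rounding error:
print("max sample-wise difference:", np.max(np.abs(x_150 - x_100)))
```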
Based on 100 Hz bandwidth, the AHA in 19757 recommended a minimum sampling rate of 500 Hz for equal-interval sampling, recognizing the need for a sampling rate two to three times the theoretical minimum. Barr and Spach12 tested different sampling rates, concluding that 500 Hz was the minimum sampling rate required for adult ECGs if the reconstructed waveform was to be within 1% mean error of the original waveform. They recommended a rate of 834 Hz for waveforms from infants to eliminate visible errors, even when the mean error level was below 1%. Using a triangular shape to model the R and S waves, Mortara56 estimated that 250 Hz sampling with a 100 Hz frequency response would result in 25-30% attenuation of R and S amplitudes. Garson57 compared digital electrocardiographs (100 Hz frequency response, 250 Hz sampling) with an analog electrocardiograph (100 Hz) and found losses in RS voltage of 15-25% in pediatric cases. Yamamoto58 examined pediatric ECGs recorded with 800 Hz frequency response and 4,000 Hz sampling. Reducing these data to 100 Hz frequency response and 250 Hz sampling lowered the number of visible QRS notches (greater than or equal to 40 µV) in 20 of 35 normal children and 29 of 51 subjects with heart disease; a reduction of more than 100 µV in peak-to-peak QRS voltage was found in nearly 30% of the cases. Berson et al59 showed that for a 100 Hz bandwidth, a 250 Hz sampling rate causes amplitude errors greater than 100 µV in less than 5% of 400 adult ECGs; QRS averaging reduced the incidence of this error to fewer than 0.5% of records.

The method for interpolation of digital data to reconstruct the original continuous signal must be ideal if the minimum sampling rate determined by Nyquist's theorem is used. To the extent that ideal interpolation cannot be achieved, sampling rates higher than the theoretical minimum must be used. Zywietz14,15 modeled ECG waves with triangular, parabolic, and bell-shaped waveforms and showed that a waveform of 16 msec can be reconstructed with a maximum error of 10% when the sampling rate is 500 Hz. If reconstruction between samples is to be precise, the combination of interpolation method and sampling rate becomes more important. If, on the other hand, only the sampled points are of interest, interpolation methods are not important. Although there may be a theoretical gain if optimum interpolation techniques are used, the findings of both McRae60 and Barr and Spach12 indicate that sophisticated interpolation methods are not superior to straight-line interpolation.

A third consideration is the interacting effect of sampling rate and quantization on error, as shown by Neubert et al.11 They found that a sampling rate of 250 Hz (equal interval) at 10-bit resolution is not sufficient to achieve a 10 µV RMS error, and they suggest that the maximum error may be up to 20 times as large as the RMS error. Zywietz14,15 studied maximum peak relative error (MPE; see criterion F6), which is more relevant to ECG diagnostic measurements than RMS error and is 10-100 times larger. This analysis suggests that sampling intervals should be about 3/25 of the minimum wave duration to achieve an MPE of 10% or less.
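As a worked check of this rule of thumb (the arithmetic below is ours): for the shortest deflection of interest, the sampling interval should be about 3/25 of its duration, and the required rate is the reciprocal of that interval.

```python
# Zywietz's rule of thumb: sampling interval ~ (3/25) * minimum wave duration
min_wave_duration_s = 0.012                           # 12 msec CSE minimum deflection
max_interval_s = (3.0 / 25.0) * min_wave_duration_s   # = 1.44 msec
required_rate_hz = 1.0 / max_interval_s
print(f"required sampling rate ~ {required_rate_hz:.0f} Hz")   # ~694 Hz
```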
For the small initial QRS deflections in the CSE study (12 msec), this translates into a sampling rate of about 680-700 Hz, assuming that QRS complexes are not averaged.

Berson et al59 compared 8- and 10-bit resolution with 12-bit resolution in 115 adult ECGs. When resolution was reduced from 12 to 10 bits, differences in waveform durations were 4 msec or more in 3% of the records, and amplitude errors were greater than or equal to 50 µV in 2%. When resolution was reduced to 8 bits, the differences in durations were 4 msec or more in 29% of the records. Based on Neubert's theoretical study, it would be necessary to digitize at 650-700 Hz with 8-bit resolution to achieve an RMS error of 10 µV or less.

To minimize the effects of noise, some programs average measurements across many similar beats. Others construct a PQRST complex by coherently aligning similar complexes and using an average, mode, or other parameter of the samples from the different complexes. The extent to which noncoherent noise is suppressed depends on how coherently the complexes are aligned, on the rejection of dissimilar complexes (e.g., premature ventricular complexes), and on the pitfalls of the various methods. Zywietz's studies14,15 of the effect of noise and sampling on error suggest that the noise level in the constructed complex can be reduced below 5 µV, allowing deflections of 20 µV to be estimated with 10% or less error. Nevertheless, it is not feasible at this time to devise criteria for this complex, evolving issue.

In 1975, the AHA ECG committee7 recommended that digitizing be accomplished with a precision of 10 µV, based on the 50 µV resolution achievable with ordinary visual interpretation and in consideration of future automated measurements, although at that time the interaction of quantizing and sampling rates and the resolution of small waveforms achievable by advanced technology could not have been predicted in detail.

The above discussion assumes equal-interval sampling, which is used by most current automated ECG programs. However, it is important to recognize that nonuniform sampling rates and quantization levels may be used to reduce the average sampling rate below 500 Hz and the average number of bits below 10 per sample. Barr and Pollard61,62 applied a method of nonuniform sampling to cardiac electrograms; their fan method of adaptive sampling results in markedly lower average sampling rates than equal-interval sampling. Such rates and quantizing levels may be quite acceptable, but, as stated in the AHA recommendations, "when such methods are used, the performance of such systems shall be adequately documented."7

Although the majority of currently available automated ECG programs use a 250 Hz sampling rate, many use 500 Hz, conforming to the AHA's recommendations. Although increased storage capacity will cost more, depending on the efficiency of data compression, the low cost and availability of microchip processors make higher sampling rates economically feasible.63
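To make the resolution figures cited above concrete, the sketch below (ours) computes the quantization step for a 10 mV full-scale range at several bit depths and the worst-case amplitude uncertainty it implies for a small deflection.

```python
# Quantization step for a full-scale range of 10 mV (for example, +/-5 mV)
FULL_SCALE_UV = 10_000.0

for bits in (8, 10, 12):
    step_uv = FULL_SCALE_UV / (2 ** bits)
    # Each sample can be off by half a step, so a peak-to-peak measurement
    # of a small deflection can be off by up to one full step; at 8 bits a
    # 20 uV deflection can disappear entirely.
    print(f"{bits:2d} bits: step = {step_uv:6.2f} uV, "
          f"peak-to-peak uncertainty up to {step_uv:5.2f} uV")
```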
Recommendations
1. Analog bandwidth should extend to 125 Hz, the current de facto industrial standard and a Canadian standard since 1976.38 It should extend to 150 Hz for infant ECGs.
2. For visual reading of the ECG, analog-to-digital conversion (sampling rate and bit resolution) should be sufficient to maintain fidelity criterion F1 (see "Signal Fidelity" above).
3. For computer analysis of the ECG, analog-to-digital conversion parameters should be adequate to maintain low RMS error criteria (such as F2 or F4), plus certain waveform resolution criteria (such as F3, F5, or F6). In general, a minimal sampling rate of 500 Hz (equal-interval) with a minimal resolution of 10 µV is implied. However, this does not preclude other ways of meeting the fidelity criteria.

Calibration for Digital ECG Systems

Proper calibration of an ECG system, as described in section A13 of the AHA report,7 is still a necessity. Introduction of a known signal is the only secure means of thoroughly testing calibration. Since many digital systems cannot easily cope with fast rise time signals, a triangularly shaped calibration pulse, typically 1 mV at the patient electrode input point, is also acceptable. The calibration pulse should be traceable through the entire system, including the analog-to-digital converter, data compression, telecommunications link, storage, and digital-to-analog converter. Since the interval between such tests may be prolonged, the stability of calibration is an important issue. Therefore, ECG recording system specifications should include a maximum drift of the absolute system gain over time and environmental changes. A minimal end-to-end check along these lines is sketched below.
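The sketch is entirely illustrative; the processing chain here is a software stand-in for a real system's amplifier, analog-to-digital converter, compression, and transmission path, into which a real check would inject the pulse at the patient electrode input.

```python
import numpy as np

fs = 500.0
t = np.arange(int(0.2 * fs)) / fs
# Triangular calibration pulse: 1 mV (1,000 uV) peak, 100 msec base width
cal_pulse_uv = np.interp(t, [0.05, 0.10, 0.15], [0.0, 1000.0, 0.0])

def processing_chain(x_uv, true_gain=1.0):
    """Stand-in for the system under test. The system reconstructs assuming
    nominal unity gain, so any drift in true_gain appears directly in the
    recovered pulse amplitude."""
    lsb_uv = 20_000.0 / 2 ** 12            # 12-bit ADC over +/-10 mV
    counts = np.round(x_uv * true_gain / lsb_uv)
    return counts * lsb_uv

recovered = processing_chain(cal_pulse_uv, true_gain=1.02)  # 2% drift injected
measured_gain = np.ptp(recovered) / np.ptp(cal_pulse_uv)
print(f"measured gain: {measured_gain:.3f} "
      f"(a drift specification would bound |gain - 1|)")
```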
Conclusion

Again, it is important to emphasize the difference between digital processing in automated ECG analysis and visual interpretation by humans. As frequently observed, human visual criteria are typically not appropriate for implementation in a computer program that resolves the ECG in microvolts and milliseconds.64 This report treats issues far beyond those of the mid-1970s, which mainly addressed problems of visual analysis and interpretation. As mentioned previously, an explosion of digital technology has overtaken the field of ECG analysis in the 1980s; technical developments have outpaced most efforts to evaluate the quality of the product and its impact on medical care. Each innovation in digital methodology should be extensively tested before implementation in a commercial system so that developers of such innovations can specify the recommendations and fidelity criteria that their devices meet.

It would be helpful to those who evaluate ECG systems if developers would disclose the underlying logic of their computer-based ECG interpretation programs and whether it includes diagnostic criteria recognized by experienced ECG readers or a more complex algorithm, such as one based on multivariate statistics. If the logic incorporates demographic or clinical parameters such as age, habitus, gender, symptoms, medications, clinical status, coronary risk factors, etc., the role or weighting of each parameter in the diagnostic procedure should be explicitly identified. Finally, it is recommended that developers test the performance of their programs against standardized databases such as the CSE multilead ECG database, just as monitoring devices are now tested against the AHA arrhythmia and MIT/BIH databases.65,66

Acknowledgments

We gratefully acknowledge the helpful comments of Professors Roger C. Barr, PhD, Mitsuhara Okajima, MD, John A. Milliken, MD, Hubert V. Pipberger, MD, Ronald H. Startt Selvester, MD, and Jos L. Willems, MD, who reviewed this manuscript.

References
1. Drazen E, Mann N, Borun R, Laks M, Berson A: Survey of computer-assisted electrocardiography in the United States. J Electrocardiol 1988;21(suppl):S98-S104
2. Willems JL, Arnaud P, van Bemmel JH, Bourdillon PJ, Brohet C, dalla Volta S, Andersen JD, Degani R, Denis B, Demeester M, et al: Assessment of the performance of electrocardiographic computer programs with the use of a reference data base. Circulation 1985;71:523-534
3. Willems JL: Standards for quantitative electrocardiography: Facts and fiction, in Wagner GS, Scherlag BJ, Bailey JJ (eds): Computerized Interpretation of the Electrocardiogram. New York, Engineering Foundation, 1986, pp 135-144
4. Willems JL: The CSE project: Goals and achievements, in Willems JL, van Bemmel JH, Zywietz C (eds): Computer ECG Analysis: Towards Standardization: Proceedings of the IMIA Working Conference, Leuven, Belgium, 2-5 June, 1985. Amsterdam, North-Holland, Elsevier Science Publishing Co, Inc, 1986, pp 11-21
5. The CSE Working Party: Recommendations for measurement standards in quantitative electrocardiography. Eur Heart J 1985;6:815-825
6. Recommendations for standardization of leads and of specifications for instruments in electrocardiography and vectorcardiography. Report of the Committee on Electrocardiography, American Heart Association: Kossmann CE, Brody DA, Burch GE, Hecht HH, Johnston FD, Kay C, Lepeschkin E, Pipberger HV. Circulation 1967;35:583-602
7. Recommendations for standardization of leads and of specifications for instruments in electrocardiography and vectorcardiography. Report of the Committee on Electrocardiography, American Heart Association: Pipberger HV, Arzbaecher RC, Berson AS, Briller SA, Brody DA, Flowers NC, Geselowitz DB, Lepeschkin E, Oliver GC, Schmitt OH, Spach M. Circulation 1975;52:11-31
8. Association for the Advancement of Medical Instrumentation: American National Standard for Diagnostic Electrocardiographic Devices. Arlington, Va, 1983
9. Alraum W, Zywietz C, Borovsky D, Willems JL: Methods for noise testing of ECG analysis programs, in Computers in Cardiology. New York, Institute of Electrical and Electronics Engineers, 1983, pp 253-256
10. Zywietz C: Noise tolerance in ECG computer programs and measurement stability of minimum waves: A summary of results from CSE noise tests and conclusions, in Willems JL, van Bemmel JH, Zywietz C (eds): Computer ECG Analysis: Towards Standardization: Proceedings of the IMIA Working Conference, Leuven, Belgium, 2-5 June, 1985. Amsterdam, North-Holland, Elsevier Science Publishing Co, Inc, 1986, pp 87-96
11. Neubert D, Cramer E, Teppner U: Stability of measurement parameters, in Willems JL, van Bemmel JH, Zywietz C (eds): Computer ECG Analysis: Towards Standardization: Proceedings of the IMIA Working Conference, Leuven, Belgium, 2-5 June, 1985. Amsterdam, North-Holland, Elsevier Science Publishing Co, Inc, 1986, pp 129-133
12. Barr RC, Spach MS: Sampling rates required for digital recording of intracellular and extracellular cardiac potentials (review). Circulation 1977;55:40-48
13. Willems JL: Common Standards for Quantitative Electrocardiography (concerted action project II.2.2 of the sectoral research programme in the field of medical and public health research; CSE 3rd progress report, 1983). Leuven, Belgium, ACCO Publications, 1983, pp 201-223
14. Zywietz C, Spitzenberger U, Palm C, Wetjen A: A new approach to determine the sampling rate for ECGs, in Computers in Cardiology. New York, Institute of Electrical and Electronics Engineers, 1983, pp 261-264
15. Zywietz C: Sampling rate of ECGs in relation to measurement accuracy, in Wagner GS, Scherlag BJ, Bailey JJ (eds): Computerized Interpretation of the Electrocardiogram. New York, Engineering Foundation, 1986, pp 122-125
16. Macfarlane PW, Cawood HT, Lawrie TD: A basis for computer interpretation of serial electrocardiograms. Comput Biomed Res 1975;8:189-200
17. Ruttiman UE, Pipberger HV: Data compression and the quality of the reconstructed ECG, in Wolf HK, Macfarlane PW (eds): Optimization of Computer ECG Processing. Amsterdam, North-Holland, 1980, pp 77-83
18. Huffman DA: A method for the construction of minimum redundancy codes. Proc Institute of Radio Engineers 1952;40:1099-1101
19. Peden JMC: ECG data compression: Some practical considerations, in Paul JP et al (eds): Computing in Medicine. London, McMillan Publications, Inc, 1982, pp 62-67
20. Womble ME, Zied AM: A statistical approach to ECG/VCG data compression, in Wolf HK, Macfarlane PW (eds): Optimization of Computer ECG Processing. Amsterdam, North-Holland, 1980, pp 91-99
21. Lux RL, Evans AK, Burgess MJ, Wyatt RF, Abildskov JA: Redundancy reduction for improved display and analysis of body surface potential maps: I. Spatial compression. Circ Res 1981;49:186-196
22. Evans AK, Lux RL, Burgess MJ, Wyatt RF, Abildskov JA: Redundancy reduction for improved display and analysis of body surface potential maps: II. Temporal compression. Circ Res 1981;49:197-203
23. Blanchard SM, Barr RC: Comparison of methods for adaptive sampling of cardiac electrograms and electrocardiograms. Med Biol Eng Comput 1985;23:377-386
24. Berson AS, Pipberger HV: The low-frequency response of electrocardiographs: A frequent source of recording errors. Am Heart J 1966;71:779-789
25. Berson AS, Geselowitz DB, Pipberger HV: Low-frequency requirements for electrocardiographic recording of ST segments (letter). Am J Cardiol 1987;60:939-940
26. Krucoff MK: Critical challenges to computer-assisted ST segment analysis, in Wagner GS, Scherlag BJ, Bailey JJ (eds): Computerized Interpretation of the Electrocardiogram. New York, Engineering Foundation, 1986, pp 85-88
27. Gold RG: Do we need a new standard for electrocardiographs? (editorial). Br Heart J 1985;54:119-120
28. Tayler DI, Vincent R: Artefactual ST segment abnormalities due to electrocardiograph design. Br Heart J 1985;54:121-128
29. Nygards M-E, Ahren T, Ringquist I, Walker A: Phase correction for accurate ST segment reproduction in ambulatory ECG recording, in Computers in Cardiology. New York, Institute of Electrical and Electronics Engineers, 1984, pp 33-38
30. Tayler DI, Vincent R: Signal distortion in the electrocardiogram due to inadequate phase response. IEEE Trans Biomed Eng 1983;30:352-356
31. Bragg-Remschel DA, Anderson CM, Winkle RA: Frequency response characteristics of ambulatory ECG monitoring systems and their implications for ST segment analysis. Am Heart J 1982;103:20-31
32. Longini RL, Giolma JP, Wall C III, Quick RF: Filtering without phase shift. IEEE Trans Biomed Eng 1975;22:432-433
33. Pottala EW, Bailey JJ, Horton MR, Gradwohl JR: Suppression of baseline wander in the ECG using a bilinearly transformed, null phase filter. J Electrocardiol 1989;22:244-248
34. Mortara DW: Digital filters for ECG signals, in Computers in Cardiology. New York, Institute of Electrical and Electronics Engineers, 1977, pp 511-514
35. Simonson E: Differentiation Between Normal and Abnormal in Electrocardiography. St. Louis, CV Mosby Co, 1961, p 158
36. Simonson E, Brozek J, Keys A: Variability of the electrocardiogram in normal young men. Am Heart J 1949;38:407-422
37. International Electrotechnical Commission: Draft Standard Specification for the Performance of Single and Multi-Channel Electrocardiographs. IEC document 62D. Geneva, Switzerland, International Electrotechnical Commission, 1978, p 6
38. Wolf HK, Graystone P, Lywood DW, et al: Suggested minimum performance characteristics of data acquisition instrumentation in computer-assisted ECG processing systems. J Electrocardiol 1976;9:239-247
39. Langner PH Jr, Geselowitz DB: Characteristics of the frequency spectrum in the normal electrocardiogram and in subjects following myocardial infarction. Circ Res 1960;8:577-584
40. Langner PH Jr, Geselowitz DB, Mansure FT: High-frequency components in the electrocardiograms of normal subjects and of patients with coronary heart disease. Am Heart J 1961;62:746-755
41. Boyle D, Carson P, Hamer J: High frequency electrocardiography in ischemic heart disease. Br Heart J 1966;28:539-545
42. Reynolds EW Jr, Muller BF, Anderson GJ, Muller BT: High-frequency components in the electrocardiogram: A comparative study of normals and patients with myocardial disease. Circulation 1967;35:195-206
43. Flowers NC, Horan LG, Tolleson WJ, Thomas JR: Localization of the site of myocardial scarring in man by high-frequency components. Circulation 1969;40:927-934
44. Flowers NC, Horan LG: Diagnostic import of QRS notching in high-frequency electrocardiograms of living subjects with heart disease. Circulation 1971;44:605-611
45. Langner PH Jr, Geselowitz DB, Briller SA: Wide band recording of the electrocardiogram and coronary heart disease. Am Heart J 1973;86:308-317
46. Anderson GJ, Blieden MF: The high frequency electrocardiogram in coronary artery disease. Am Heart J 1975;89:349-358
47. Sapoznikov D, Tzivoni D, Weinman J, Penchas S, Gotsman MS: High fidelity ECG in the diagnosis of occult coronary artery disease: A study of patients with normal conventional ECG. J Electrocardiol 1977;10:137-148
48. Goldberger AL, Bhargava V, Froelicher V, Covell J: Effect of myocardial infarction on high-frequency QRS potentials. Circulation 1981;64:34-42
49. Pipberger HV, Zafar SA, Pipberger HA: Diagnostic information derived from high frequency components in the electrocardiogram, in Antaloczy Z (ed): Modern Electrocardiology. Amsterdam, The Netherlands, 1978, pp 165-180
50. Berson AS, Pipberger HV: Electrocardiographic distortions caused by inadequate high-frequency response of direct-writing electrocardiographs. Am Heart J 1967;74:208-218
51. Berson AS, Ferguson TA, Batchlor CD, Dunn RA, Pipberger HV: Filtering and sampling for electrocardiographic data processing. Comput Biomed Res 1977;10:605-610
52. Berson AS, Lau FY, Wojick JM, Pipberger HV: Distortions in infant electrocardiograms caused by inadequate high-frequency response. Am Heart J 1977;93:730-734
53. Golden DP Jr, Wolthius RA, Hoffler GW: A spectral analysis of the normal resting electrocardiogram. IEEE Trans Biomed Eng 1973;13:366-372
54. Wolf HK, MacInnis PJ, Stock S, Helppi RK, Rautaharju PM: Computer analysis of rest and exercise electrocardiograms. Comput Biomed Res 1972;5:329-346
55. Macfarlane PW, Willems JL: The CSE project: Progress as viewed by the cooperating centres, in Selvester RH, Geselowitz DB (eds): Computerized Interpretation of the Electrocardiogram (VIII). New York, Engineering Foundation, 1983, pp 293-306
56. Mortara DW: Differences between filters with similar cut-off frequencies, in Wagner GS, Scherlag BJ, Bailey JJ (eds): Computerized Interpretation of the Electrocardiogram. New York, Engineering Foundation, 1986, pp 130-133
57. Garson A Jr: Clinically significant differences between the "old" analog and the "new" digital electrocardiograms. Am Heart J 1987;114:194-197
58. Yamamoto H, Miyahara H, Domae A: Is a higher sampling rate desirable in the computer processing of the pediatric electrocardiogram? J Electrocardiol 1987;10:321-328
59. Berson AS, Wojick JM, Pipberger HV: Precision requirements for electrocardiographic measurements computed automatically. IEEE Trans Biomed Eng 1977;24:382-385
60. McRae DD: Interpolation errors, in Advanced Telemetry Study, Technical Report #1, Parts 1 and 2. Melbourne, Fla, Radiation, Inc, 1961
61. Barr RC: Adaptive sampling of cardiac waveforms (review). J Electrocardiol 1988;21:S57-S60
62. Pollard AE, Barr RC: Adaptive sampling of intracellular and extracellular cardiac potentials with the fan method. Med Biol Eng Comput 1987;25:261-268
63. Berson AS: Bandwidth, sampling, and quantizing for ECG, in Wagner GS, Scherlag BJ, Bailey JJ (eds): Computerized Interpretation of the Electrocardiogram. New York, Engineering Foundation, 1986, pp 118-121
64. American College of Cardiology: Tenth Bethesda Conference on Optimal Electrocardiology. Am J Cardiol 1978;41:111-184
65. Greenwald SD, Patil RS, Mark RG: Improved arrhythmia detection in noisy ECGs through the use of expert systems, in Computers in Cardiology. Washington, DC, IEEE Computer Society, 1989, pp 163-165
66. Moody GB, Mark RG, Goldberger AL: Evaluation of the "TRIM" ECG data compressor, in Computers in Cardiology. Washington, DC, IEEE Computer Society, 1989, pp 167-170