Lecture 4: Measurement Accuracy and Statistical Variation

Accuracy vs. Precision
- Accuracy: the expected deviation of a given measurement from a known standard; often written as a percentage of the instrument's range of possible values.
- Precision: the expected deviation within a set of measurements; the "standard deviation" in the case of normally distributed measurements.
- Few instruments have normally distributed errors.

Deviations
- Systematic errors: the portion of the error that is constant over the data-gathering experiment.
- Beware the timescales and conditions of the experiment: if one can identify a measurable input parameter that correlates with an error, that error is systematic.
- Calibration is the process of reducing systematic errors.
- Both means and medians provide estimates of the systematic portion of a set of measurements.

Random Errors
- The portion of the deviations of a set of measurements that cannot be reduced by knowledge of the measurement parameters.
- E.g. the temperature of an experiment might correlate with the variance, but the measurement deviations cannot be reduced unless it is known that temperature noise was the sole source of error.
- Error analysis is based on estimating the magnitude of all noise sources in a system for a given measurement.
- Stability is the relative freedom from errors that can be reduced by calibration, not freedom from random errors.

Quantization Error
[Figure: quantization error ramp bounded between -x_{lsb}/2 and +x_{lsb}/2]
- Deviations produced by digitization of analog measurements.
- For a random signal with uniform quantization step x_{lsb}:
    x_{RMS} = x_{lsb} / \sqrt{12}, \qquad x_{avg} = 0

Test Correlation
- Tester to bench
- Tester to tester
- DIB to DIB
- Day to day
- The goal is reproducible measurements within the expected error magnitude.

Model-Based Calibration
- Given a set of accurate references and a model of the measurement error process, estimate a correction to the measurement which minimizes the modeled systematic error.
- E.g. given two references v_1, v_2 and their measurements v_{m1}, v_{m2}, the linear model is
    v_{measured} = G v_{real} + O
    v_{m1} = G v_1 + O, \qquad v_{m2} = G v_2 + O
    G = \frac{v_{m2} - v_{m1}}{v_2 - v_1}, \qquad O = \frac{v_{m1} v_2 - v_{m2} v_1}{v_2 - v_1}
    v_{real} \approx v_{cal} = \frac{v_{measured} - O}{G}
  (A short code sketch of this two-point calibration follows this section.)

Multi-Tone Calibration
- DSP testing often uses multi-tone signals from digital sources.
- Analog signal recovery and DIB impedance matching distort the signal; tester calibration can restore the signal levels.
- Signal strength is usually measured as an RMS value, which corresponds to a square-law calibration fixture.
- Modeling proceeds similarly to linear calibration as long as the model is unimodal. In principle, any such model can be approximated by linear segments, and each segment inverted to find the calibration adjustment.
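As a concrete illustration of the two-point calibration model above, here is a minimal Python sketch. The function names and the reference, gain, and offset values are illustrative assumptions, not part of the lecture material.

```python
def two_point_cal(v_ref, v_meas):
    """Fit the linear error model v_meas = G * v_real + O from two references.

    v_ref  : the two known reference values (v1, v2)
    v_meas : the corresponding measured values (vm1, vm2)
    Returns (G, O).
    """
    (v1, v2), (vm1, vm2) = v_ref, v_meas
    G = (vm2 - vm1) / (v2 - v1)            # gain of the measurement path
    O = (vm1 * v2 - vm2 * v1) / (v2 - v1)  # offset of the measurement path
    return G, O


def calibrate(v_measured, G, O):
    """Invert the model: v_cal = (v_measured - O) / G estimates v_real."""
    return (v_measured - O) / G


# Hypothetical example: a measurement path with gain 1.02 and a 3 mV offset.
G_true, O_true = 1.02, 0.003
v_refs = (0.1, 1.0)                                  # reference voltages (assumed)
v_meas = tuple(G_true * v + O_true for v in v_refs)  # what the tester would read

G, O = two_point_cal(v_refs, v_meas)
print(G, O)                                    # recovers ~1.02 and ~0.003
print(calibrate(0.5 * G_true + O_true, G, O))  # corrected reading, ~0.5 V
```

The same inversion step applies per segment when a unimodal (e.g. square-law) response is approximated by piecewise-linear fits, as noted in the multi-tone calibration slide.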
Noise Reduction: Filtering
- Noise is specified as an amplitude spectral density (V/\sqrt{Hz}) or a power density (W/Hz).
- The total RMS noise grows with the bandwidth of the signal:
    v_{RMS} = \sqrt{\int_0^{\infty} S(f)\, df}
- The output noise density is the input density scaled by the square of the transfer function magnitude:
    S_o(f) = S_i(f) |G(f)|^2
- The net (RMS) noise after filtering is
    v_o = \sqrt{\int_0^{\infty} S_i(f) |G(f)|^2\, df}

Filter Noise Example
- RC filtering of a noisy signal: assume a uniform (white) input noise density S_i(f) = b and a first-order filter
    G(f) = \frac{V_o}{V_i} = \frac{1}{1 + i 2\pi f RC}
- The resulting output RMS noise is
    V_{o(RMS)} = \sqrt{\int_0^{\infty} \frac{b}{1 + (2\pi f RC)^2}\, df} = \sqrt{\frac{b}{4RC}}
- We can invert this relation to get the equivalent input noise density:
    b = 4 RC\, V_{o(RMS)}^2 \quad (V^2/Hz)

Averaging (Filter Analysis)
- Simple processing to reduce noise: a running average of the data samples,
    y(n) = \frac{1}{N} \sum_{k=1}^{N} x(n-k+1)
- The frequency transfer function of an N-point average (f in cycles per sample) is
    G(f) = \frac{\sin(2\pi f N/2)}{N \sin(2\pi f/2)} e^{-i 2\pi f (N-1)/2}
- To find the RMS voltage noise, use the previous technique:
    V_{RMS} = \left[ \int_{-1/2}^{1/2} \left| \frac{\sin(2\pi f N/2)}{N \sin(2\pi f/2)} e^{-i 2\pi f (N-1)/2} \right|^2 df \right]^{1/2}
- So the input noise is reduced by a factor of 1/\sqrt{N} (a numerical check appears at the end of this section).

'Normal' Statistics
- Mean:
    \mu = \frac{1}{N} \sum_{n=0}^{N-1} x(n)
- Standard deviation:
    \sigma = \sqrt{ \frac{1}{N} \sum_{n=0}^{N-1} (x(n) - \mu)^2 }
- Note that the 1/N form is not an unbiased estimate for the underlying population (an issue if N << 100); use 1/(N-1) instead.
- For a large set of data with independent noise sources the distribution is normal:
    p(x) = \frac{1}{\sqrt{2\pi \sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
- Probability:
    P(a \le X \le b) = \int_a^b \frac{1}{\sqrt{2\pi \sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx

Issues with Normal Statistics
- Assumptions: all noise sources are uncorrelated, and all noise sources are accounted for.
- In many practical cases the data has 'outliers' for which the normal assumptions do not hold.
- One cannot claim a small probability of error unless the sample set contains all possible failure modes.
- The mean may be a poor estimator in the presence of sporadic noise.
- The median (the middle value of the sorted data samples) is often better behaved, but it is used less often because analysis of its expectations is difficult.
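To make the last point concrete, the following Python sketch (illustrative values of my own choosing, not from the lecture) contaminates a well-behaved Gaussian data set with a few sporadic outliers and compares the mean and the median as estimators of the true value.

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 2.0
samples = rng.normal(true_value, 0.05, size=1000)  # well-behaved measurements
samples[:5] = 50.0                                 # a few sporadic outliers (assumed)

# The mean is dragged away from 2.0 by the outliers; the median stays near 2.0.
print(f"mean   : {samples.mean():.3f}")
print(f"median : {np.median(samples):.3f}")
```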
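Finally, as a numerical check of the 1/\sqrt{N} noise reduction claimed for the N-point running average above, this sketch (parameters N and sample count are arbitrary assumptions) filters white Gaussian noise with NumPy and compares the measured RMS ratio against 1/\sqrt{N}.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16                                    # averaging length (assumed)
x = rng.normal(0.0, 1.0, size=1_000_000)  # white noise, unit RMS

# N-point running average: y(n) = (1/N) * sum_{k=1}^{N} x(n - k + 1)
y = np.convolve(x, np.ones(N) / N, mode="valid")

print(f"measured RMS ratio : {y.std() / x.std():.4f}")
print(f"predicted 1/sqrt(N): {1.0 / np.sqrt(N):.4f}")
```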