Techniques in Signal Processing
CSC 508
Time and Spatial Domain Analysis
Basics of Time Varying Signals
We have looked at a number of methods to analyze, predict and simulate measurements
of a fixed random variable. We have assumed that the quantities being measured were
constant and the differences between samples we observed were due to noise or
statistical variations about some fixed mean value.
Now we will investigate non-stationary random quantities. We will study signals that
change in time or relative position in their domains. A simple example of a time-varying
signal is the voltage level used to drive an audio speaker.
[Figure: an audio signal source drives a speaker; the drive voltage is the signal of interest.]
Let's look at a sample of the electrical signal used to drive the speaker. We can plot the
voltage level as a function of time.
Quickly varying voltages correspond to high frequency sounds and slowly
varying voltage levels correspond to low frequency sounds.
The shape of the wave determines the timbre of the sound. Wave shape is the
reason a trumpet and a piano playing the same note sound different. This is
also the reason you can hear the difference between the sound of a live
performance and a recording.
[Plot: voltage vs. time, comparing the original signal with a distorted signal.]
It is the goal of the audiophile to achieve the perfect acoustic reproduction
of recorded sounds. But what does this mean?
Digitization
Digitization is the conversion of an analog (continuous) signal into digital (discrete) samples.
Using our example audio signal, we will convert it to a series of data values at a
specified sample rate.
[Plot: the digitized samples of the audio signal vs. time; samples that exceed the quantization range are clipped.]
The two types of errors we introduce by digitization are called decimation and
quantization errors.
Decimation is the sampling of the signal source at discrete moments in time.
Usually these samples are separated by a constant interval.
Quantization is the division of the signal levels into a number of discrete values
determined by the number of bits in the base-2 representation of the samples.
Sampling - Time is not directly represented in a digital device. A running computer jumps
from one state to another in a sequence of discrete steps. Any simulation of a signal or
process on a digital computer must be quantized in time. Once a minimum time interval
dt is established we can determine the state of a simulated system only at times T for
which T/dt is an integer. Unless we change our time quantum, the state of the system at
any other time, say T+dt/2, never exists in the simulation.
In our analog-to-digital conversion, we must first establish a time between samples dt or,
equivalently, a sample rate 1/dt. This rate must provide more samples per second than the
rate of change of the highest frequency signal we wish to record.
Aliasing - Consider two sine wave signals as shown below. The dots indicate the points
at which sampling is performed.
[Figure: two sine waves sampled at the same instants; one is sampled at 12 samples/cycle, the other completes 11 cycles in the same 12 samples.]
In this example one sine wave is sampled at 12 samples/cycle, while the other sine wave
is 11 times higher in frequency. This higher frequency sine wave is indistinguishable from
the lower frequency sine wave at the specified sampling rate. This is a phenomenon
called aliasing.
To better understand aliasing, look closely at the samples of the two waves. The first
sample is at the peak of both waves. The second sample is 1/12 of a cycle along the
lower frequency wave and 11/12 of a cycle along the higher frequency wave. The third
sample is 2/12 (or 1/6) of a cycle along the lower frequency wave and 1 5/6 cycles along
the higher frequency wave. This trend continues to the 13th sample, which is actually the
same as the 1st sample relative to the amplitudes of the two waves.
If we want to be able to measure a certain frequency fmax, we must make sure that our
sampling rate is at least twice as high. This minimum rate, 2*fmax, is called the Nyquist
rate (the corresponding sample spacing, 1/(2*fmax), is the Nyquist interval), named after
Harry Nyquist, the communications engineer who first defined it.
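As a quick numerical check of this idea, here is a minimal Python sketch (the 1 Hz low frequency and the three-cycle record length are arbitrary choices). It samples a cosine at 12 samples per cycle together with a cosine 11 times higher in frequency; the two sample sequences come out identical.

import numpy as np

# Sample a low-frequency cosine at 12 samples per cycle and a cosine 11 times
# higher in frequency at the same instants; the sampled values are identical.
f0 = 1.0                       # low frequency in Hz (arbitrary choice)
fs = 12 * f0                   # 12 samples per cycle of f0
t = np.arange(0, 3, 1 / fs)    # three cycles of the low-frequency wave

low = np.cos(2 * np.pi * f0 * t)
high = np.cos(2 * np.pi * 11 * f0 * t)

print(np.allclose(low, high))  # True: the higher frequency is aliased onto f0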
The selection of a sampling rate establishes the decimation in time of the digitization
process. Now we need to decide how many bits we will use to encode the voltage values
of the samples. This will define the quantization of the signal amplitude.
The number of bits we select to represent a sample in the digitization process is directly
related to the minimum detectable amplitude change in the signal. For example, if we
choose 8 bits for our sample word size for a signal whose amplitude is between +/- 10
Volts, our amplitude resolution is 20/2^8 Volts = 78 millivolts, with a maximum
quantization error of 78/2 = 39 millivolts. In general the quantization error is given by,
Equant = (Vmax - Vmin) / 2^(n+1)
where n is the number of bits in the data sample. Quantization error is a noise created by
the digitization process itself and is not an inherent part of the signal source or the
recording medium. Quantization noise can be reduced by increasing the number of bits
used to encode signal values. Additional errors are introduced when our signal exceeds
the range of our quantization limits. This is called clipping.
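The sketch below (assuming the +/- 10 Volt range and 8-bit word size from the example above, and an arbitrary test ramp) quantizes a signal, clips values outside the range, and confirms that the remaining quantization error stays within (Vmax - Vmin)/2^(n+1).

import numpy as np

# Quantize a signal to n bits over the range vmin..vmax; samples outside the
# range are clipped, and the remaining error is pure quantization noise.
n = 8
vmin, vmax = -10.0, 10.0
step = (vmax - vmin) / 2**n               # amplitude resolution, about 78 mV

signal = np.linspace(-12.0, 12.0, 10001)  # test ramp that deliberately exceeds the range
clipped = np.clip(signal, vmin, vmax)     # clipping error where the range is exceeded
codes = np.round((clipped - vmin) / step)
quantized = vmin + codes * step

print(step)                                 # ~0.078 V resolution
print(np.max(np.abs(quantized - clipped)))  # <= step/2, ~0.039 V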
So why would we want to add the additional errors of quantization and decimation by
digitizing a signal? The reason is simple. Once the sample values have been digitized
they can be saved, read, transmitted, or copied repeatedly without additional degradation.
The recording, playing, copying and transmitting of an analog signal all contribute to the
noise level in the signal. Consider the quality of a third-generation copy of an analog audio or
video tape recording compared to the (usually) perfect reproduction of a copy of a
computer file.
A major task of digital signal processing is dealing with the noise present in the original
signal before digitization. Time varying signals are always characterized by some
uncertainty in their values. The amount of uncertainty can be quantified by a relationship
called the signal-to-noise ratio (SNR). The SNR is defined as the peak signal divided by
the root-mean-square (RMS) noise,
SNR = (VSmax - VSmin) / sqrt( Σ_i Ni^2 )
This assumes that the noise sources Ni are not correlated (i.e. they are independent).
Both the signal and the noise are measured in the same units so that the SNR is itself a
unitless quantity. The mean noise levels for all noise sources Ni are combined as RMS
values for the same reason that we used RSS to compute standard deviations. Since we
assume the noise sources are independent, they can partially cancel one another, reducing
their combined effect.
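A minimal sketch of this calculation, using made-up numbers for the signal excursion and the individual noise sources:

import numpy as np

# Combine independent noise sources as a root-sum-of-squares (RSS) value and
# form the SNR as peak signal excursion divided by the combined RMS noise.
vs_max, vs_min = 1.0, -1.0                   # peak signal levels (assumed)
noise_rms = np.array([0.01, 0.02, 0.005])    # RMS of each independent source (assumed)

combined_noise = np.sqrt(np.sum(noise_rms**2))
snr = (vs_max - vs_min) / combined_noise
print(snr)                                   # unitless ratio, ~87 for these numbers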
We will now briefly review some of the more common sources of noise in analog
signals.
Types of Noise in Signals
Thermal Noise - This noise is due to the random motion of the charge carriers within any
circuit element. Thermal noise is proportional to the square-root of the electrical bandwidth
of the circuit and the temperature of the components. (When audible, this noise makes a
hissing sound.)
Shot Noise - This noise is due to the spontaneous emission of electrons by electrical
components. This noise always accompanies the motion of electrons. (If audible, shot
noise makes a popping or clicking sound.)
Temperature Noise - Variations in temperature cause variations in electrical conductivity.
These changes occur very slowly relative to other kinds of noise and therefore contribute
to the overall uncertainty in responsivity over hours or days.
1/F Noise - Also known as current noise, this noise is a characteristic of solid state
electronics. The distribution of 1/F noise is mostly at low frequencies and drops off as
1/F with increasing frequency F.
Preamplifier Noise - A source of noise inherent in the design of electronic preamplifiers
(these are amplifiers for very low power signals). This noise is a function of the manner
in which components are combined to make an amplifier circuit.
Effects of Noise on Signal Processing Performance
Let's look at an example of how the level of noise affects our ability to measure a common
feature in a signal. Assume that we wish to determine the location (in position or time) of
the peak of a Gaussian-shaped signal pulse.
A popular method for peak detection is to compute the derivative of the signal and then
determine the zero-crossing point of this derivative signal.
[Figure: a Gaussian pulse X(i) and its derivative X'(i); the zero-crossing point of the derivative marks the position of the pulse peak.]
In the digital world, the derivative is estimated by simply taking the difference between
successive pairs of samples,
X'_i = X_i - X_(i-1)
[Figure: successive samples X1 ... X5 and the corresponding differences X1' ... X5'.]
[Figure: the derivative signal near its zero crossing, with the noise level and the peak signal amplitude indicated.]
We see that the uncertainty in the position at which the signal crosses the zero amplitude
axis is a function of the signal amplitude and noise level (see the area inside the red
circle).
If we increase the noise level while holding the peak signal amplitude constant, the
uncertainty in the zero-crossing position increases.
If we hold the noise level constant and reduce the peak signal amplitude, the uncertainty
in the zero-crossing position also increases.
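The following sketch illustrates the effect numerically. The pulse width, noise levels, trial count, and the rule of keeping the zero crossing at the largest signal amplitude are all choices made for this illustration; the point is simply that the spread of the estimated peak position grows with the noise level.

import numpy as np

# Peak detection by differencing: X'_i = X_i - X_(i-1), then find where the
# difference crosses zero.  Repeating the measurement at two noise levels
# shows the zero-crossing position becoming more uncertain as noise grows.
rng = np.random.default_rng(0)
i = np.arange(200)
pulse = np.exp(-0.5 * ((i - 100) / 15.0) ** 2)   # Gaussian pulse centered at sample 100

def peak_by_zero_crossing(x):
    dx = np.diff(x)                                       # successive differences
    c = np.where((dx[:-1] > 0) & (dx[1:] <= 0))[0] + 1    # + to - crossings
    return c[np.argmax(x[c])]                             # keep the crossing at the largest amplitude

for noise_level in (0.01, 0.05):
    estimates = [peak_by_zero_crossing(pulse + noise_level * rng.standard_normal(i.size))
                 for _ in range(500)]
    print(noise_level, np.std(estimates))   # spread of the estimated peak position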
Development of Applications in Signal Processing
Creating effective solutions to signal processing problems requires a combination of
theoretical knowledge and expertise in recognizing the essential elements of a problem.
The good news is, these are skills that can be acquired with practice. The bad news is, you
have to practice. Use the following exercises to develop your problem solving techniques.
1. A person with normal hearing can determine the direction of a sound source to within a
few degrees. This is possible because the brain can detect differences in the time of
arrival of sounds at the left and right ears. Expand on this idea as you answer the
following:
a. A sound source called a clicker is placed 10 meters in front of a blindfolded test subject.
It is discovered that the person can detect a change in the position of the clicker when it is
moved left or right by at least 45 centimeters. Use the speed of sound to determine the
temporal resolution (in seconds) of this subject.
b. Imagine that a sound source (e.g. the clicker) is placed approximately 10 meters in front
of you with the distance to each ear the same. Now imagine that the source is moved in an
arc over your head so that the distance from the source to either ear remains constant.
Assuming that you could tell that the source was being moved, develop a theory to explain
how the ears/brain detect the change in position of the sound source.
2. Using the concept of aliasing, explain why the spokes on a wagon wheel appear to move
backwards in a movie.
3. Given that a sine wave of frequency f0 is being sampled at a rate of m samples per cycle,
give two other frequencies f1 and f2 (in terms of f0 and m) that when sampled at the same
rate would appear identical to the original sine wave (i.e. f1 and f2 are aliased frequencies
of f0 at the sampling rate m).
Challenge Problem: Use the expression for the Gaussian function and its derivative to
analytically derive the relationship for the magnitude of the peak-detection error for a
Gaussian pulse as a function of SNR. (Assume the zero-crossing method described above.)
p(x) = (1 / (σ1 √(2π))) exp( -(x - μ1)^2 / (2 σ1^2) )
Spatial Domains
In the time domain, the independent variable is time, which is plotted against a dependent
variable we call the signal amplitude S(t). This signal can be in units of voltage, decibels,
kilograms or any measurable quantity. The function S(t) is an indication of how the
dependent variable changes with time. In some applications the signal strength is a
function of a spatial parameter x that is not, itself, a function of time. Signals obtained
from an entity's position or shape in a two-dimensional (or higher) spatial domain are
called spatial signals.
Parametric Forms - An example of a spatial function based on an object's shape is a
parametric representation of characters as shown below.
The parametric function
shows the slope of the line of
the character 5 as a function
of x, the fraction of the length
along the character line
(starting at the bottom of the
character).
Barcodes as Spatial Functions
A barcode scan is an example of a spatial function that is measured directly. The 3-of-9
Code is one of a number of barcodes commonly used in product labeling.
Symbol codes are separated by a
narrow white bar. The code string
*12345* is shown above. A laser
scanner repeatedly illuminates the bar
with a Helium-Neon (red) laser while a
photocell detects the reflected light
level across the bar image. When the
laser beam scans across the bar a signal
similar to the one shown above is produced. This signal is converted to the decoding
vector for each character 0-9 or *.
An Example Problem in the Time Domain
Consider a time-varying signal composed of a large amplitude, low frequency sine
wave, low amplitude random noise and a series of relatively narrow width, low
amplitude Gaussian pulses. In this example the peaks of the Gaussian pulses are much
less than the peak-to-peak amplitude of the sine wave. Our task is to detect (i.e. count
and locate the positions of) the Gaussian pulses. We cannot use a fixed amplitude
threshold due to the large variations in signal imposed by the sine wave. Either the
threshold will miss the pulses in the valleys or it will incorrectly detect the peaks of the
sine wave as signal pulses.
[Figure: the composite signal with two candidate fixed thresholds, threshold 1 and threshold 2, overlaid.]
Since sample-to-sample changes of the signals of interest are more rapid than those of the
sine wave, we can compute a running average of the last 10 to 15 samples and use it as an
adaptive threshold.
Tk = Pmin + (α / (2·NW + 1)) Σ_{i=-NW}^{+NW} S_(k+i)
where the kth sample Sk of the signal is compared with the value Tk derived from the
samples in the range k-NW to k+NW. We accept any sample Sk that exceeds the value of
the adaptive threshold Tk.
The adaptive threshold Tk can be scaled to detect lower amplitude pulses by setting α to a
lower value. However, Tk must be kept above the noise level to prevent excessive false
alarms (i.e. detection of background noise as signal). When we study the frequency
domain, we will learn how to selectively remove the sine wave so that a fixed threshold
can be used.
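Here is a minimal sketch of the adaptive-threshold detector described above. The window half-width NW, the scale factor α, the pedestal Pmin, and the composite test signal are all made-up values chosen only to illustrate the idea.

import numpy as np

# Adaptive threshold T_k = P_min + alpha * (average of samples k-NW .. k+NW).
# Samples that exceed T_k are accepted as pulse detections.
rng = np.random.default_rng(1)
k = np.arange(2000)
sine = 5.0 * np.sin(2 * np.pi * k / 500)            # large, slow sine wave
noise = 0.05 * rng.standard_normal(k.size)          # low-amplitude random noise
pulses = np.zeros(k.size)
for center in (300, 900, 1500):                     # narrow Gaussian pulses
    pulses += 0.8 * np.exp(-0.5 * ((k - center) / 3.0) ** 2)
s = sine + noise + pulses

NW, alpha, p_min = 12, 1.0, 0.3                     # assumed detector parameters
detections = []
for j in range(NW, k.size - NW):
    t_j = p_min + alpha * s[j - NW : j + NW + 1].mean()   # adaptive threshold
    if s[j] > t_j:
        detections.append(j)
print(detections)   # indices clustered around 300, 900 and 1500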
Systems Effects on Signals
We have looked at time and spatial domain signals in terms of their information and
noise content. We have discussed the existence of distortion or changes in the shape of
a signal due to imperfect recording, amplification or transmission.
The first homework problem introduced the effects of an optical system on an image.
We learned that a point of light entering the front aperture of an optical system is
transformed into a blur spot on the focal plane. We used this effect to help us detect
and remove noise spikes in images.
The effect of a physical system (amplifier, telescope, tape recorder) on a signal can be
determined by inputting a known signal and measuring the resulting signal. The
simplest possible input signal we can use is a single sample of a known amplitude
called an impulse. The effect of a system on this input spike is called its impulse
response.
[Figure: an impulse applied to a system produces the system's impulse response at the output.]
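A small sketch of this measurement in Python; the "system" here is just an assumed stand-in (a first-order recursive smoother), chosen only to make the idea concrete.

import numpy as np

# Drive a system with a single unit-amplitude sample (a discrete impulse) and
# record the output: the recorded sequence is the system's impulse response.
def system(x, a=0.6):
    # assumed example system: a simple first-order recursive smoother
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = (1 - a) * x[i] + (a * y[i - 1] if i > 0 else 0.0)
    return y

impulse = np.zeros(16)
impulse[0] = 1.0                 # the known input spike
print(system(impulse))           # the measured impulse response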
The Dirac Delta Function
The impulse function is expressed mathematically as a rectangular function of
infinitesimal width and unit area (weird). In this form the impulse function is known
as the Dirac delta function d(t).
δ(t) = 0 for all t ≠ 0
∫_{-∞}^{+∞} δ(t) dt = 1
The effects of a system on a signal can be
estimated by replacing each individual sample
of the input signal with a collection of
samples corresponding to the system's impulse
response.
The computation of the response is expressed
as the superposition integral also called the
convolution,
r(t) = ∫_0^t h(t - τ) x(τ) dτ,   t ≥ 0
where h is the impulse response function and
x(τ) is the input signal.
[Figure: an input signal, the system impulse response, and the resulting output.]
Convolution for Discrete-Time Systems
In computers and digital signal processors we are limited to discrete samples in time,
so our superposition integral becomes a summation,
r(t) = Σ_{k=0}^{t} h(t - k) x(k)
Let's see how a discrete convolution is computed. Assume we have determined the
impulse response of our system given by h(t) and we want to see how the system will
affect the input signal x(t) (in this case a simple square pulse).
To compute the convolution, the response
function h(t) is flipped horizontally and then
slid over the signal function x(t). At each
sample position ti, the products of the two
functions, x(τ)·h(ti - τ), are summed over τ and
the total is plotted as r(ti).
Before the two functions overlap
the output r(t) is zero.
The first non-zero r(t) is just the
product of h(t - τ) and x(τ) at the
indicated sample.
When there is more than one
sample of overlap, r(t) is the sum of
the sample products.
Once h has passed beyond x(t) the
convolution is complete.
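The flip-and-slide procedure can be written out directly from the summation r(t) = Σ_k h(t - k) x(k). In this sketch the square-pulse input and the three-sample impulse response are assumed example values, and the result is checked against NumPy's own convolution.

import numpy as np

# Direct evaluation of the discrete convolution r(t) = sum_k h(t - k) x(k).
x = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 0.0])   # simple square pulse input
h = np.array([1.0, 0.5, 0.25])                 # assumed impulse response

r = np.zeros(len(x) + len(h) - 1)
for t in range(len(r)):
    for k in range(len(x)):
        if 0 <= t - k < len(h):                # only terms where h(t - k) is defined
            r[t] += h[t - k] * x[k]

print(r)
print(np.convolve(x, h))                       # matches the direct computation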
Homework:
4. A system has an impulse response represented by the sampled function h(t) shown
below. Derive the total output function for the system presented with the signal
function x(t).
5. Be prepared to discuss in class each of the following phenomena in terms of an
impulse and response (Web section may omit this question).
a. the sound of a gong (or bell).
b. feedback in an audio amplifier
c. an earthquake
d. an out-of-focus photograph
6. Challenge Problem: Given any two constants a and b and any two functions f(x) and
g(x), we can define a linear system L by the relationship,
L(a f(x) + b g(x)) = a L(f(x)) + b L(g(x))
Given f(x)=sin x and g(x) = cos 3x determine if the system L(s)=2s+1 is linear.