<html>
<head>
</head>
<body>
<p>
Abstract
INTRODUCTION
Digital Signal Processors (DSPs) are often used to process analog signals that are both sampled and
quantized. Since every sample must be stored in the processor with only a finite number of bits, some
quantization error inevitably results. Consider a fixed-point DSP. Fixed point means that the position of the
binary point for a particular signal in the DSP does not vary with time (or signal level).
Let K be the largest (absolute) value that this signal can take on without generating an overflow.
When the signal level is small relative to K, the signal is not taking advantage of all of the available bits.
The distortion of small signals in fixed point DSPs is therefore much larger than the distortion of large signals
(signal levels approximately equal to K). Another way to understand this problem is to note that if the
difference between two consecutive quantization levels is “d,” then the quantization errors are essentially
uniformly distributed from –d/2 to d/2, and are roughly independent of signal level. Therefore, when
normalized to the signal level (in other words, when looking at signal-to-noise ratio or percent distortion, or
other metrics that measure noise relative to signal level), the distortion due to quantization error is much worse
for small signals.
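As a numerical illustration of this point (my own sketch, not from the report): the following Python fragment quantizes a near-full-scale sine and one 100 times smaller with the same step size d, and compares their signal-to-noise ratios. The function names and the 12-bit step size are assumptions chosen for the example.

```python
import numpy as np

def quantize(x, d):
    """Round each sample to the nearest multiple of the step size d."""
    return d * np.round(x / d)

def snr_db(x, d):
    """Signal-to-quantization-noise ratio, in dB, for step size d."""
    q = quantize(x, d)
    return 10 * np.log10(np.sum(x**2) / np.sum((x - q)**2))

n = np.arange(10000)
d = 2.0 / 2**12                          # step of a 12-bit quantizer spanning [-1, 1]
big = 0.9 * np.sin(2*np.pi*0.01*n)       # near full scale (K = 1)
small = 0.009 * np.sin(2*np.pi*0.01*n)   # 100x smaller signal, same absolute noise
```

The noise power is roughly d&sup2;/12 in both cases, so the small signal's SNR comes out on the order of 40 dB worse, matching the argument above.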
To reduce distortion due to quantization error in fixed-point DSPs, it would therefore be beneficial to
keep all the signals large at all times. The inputs and outputs will generally have a large dynamic range
(sometimes they will be large signals, and other times they will be small signals). To keep the signal levels in
the DSP just barely below K (the overflow level) at all times, it is necessary to compress the input dynamic
range at the input to the DSP (before the analog-to-digital converter, or ADC), and then expand back to the
output dynamic range at the output of the DSP (after the digital-to-analog converter, or DAC). Hence the name
“companding.”
If the entire system is composed of simply an ADC followed by a DAC (no DSP in between), then
compression at the input can be realized simply through multiplying the input by a time-varying signal g(n)
before the ADC. g(n) should be small when the input is large and large when the input is small, so that the
product g(n)*u(n) is at a roughly constant signal level. One choice for g(n) is K/e(n) where K is the overflow
tolerance and e(n) is the input’s envelope. This envelope must be:
(a) strictly greater than zero
(b) strictly greater than the magnitude of the input (so that |u(n)*g(n)| = K*|u(n)|/e(n) is always strictly less than K)
(c) large when the input signal level is large and small when the input signal level is small
The expansion at the output can be realized by multiplying by 1/g(n) after the DAC, which cancels the
effect of the multiplication by g(n) at the input. Therefore, companding in this simple case is simply a
multiplication by g(n) at the input, and a multiplication by 1/g(n) at the output. I shall refer to this simple
scheme as “classical companding.”
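A minimal sketch of classical companding in Python (my own, with an assumed one-pole peak-tracking envelope and an assumed positive floor so that g(n) = K/e(n) stays finite):

```python
import numpy as np

def envelope(u, alpha=0.999, floor=1e-3):
    """Peak-tracking envelope: strictly positive and >= |u(n)| at every sample."""
    e = np.empty_like(u)
    prev = floor
    for n in range(len(u)):
        prev = max(abs(u[n]), alpha * prev, floor)
        e[n] = prev
    return e

K = 1.0                                                 # overflow tolerance
n = np.arange(2000)
u = np.sin(2*np.pi*0.01*n) * np.linspace(1.0, 0.01, 2000)  # decaying test input

e = envelope(u)
g = K / e              # small when the input is large, large when it is small
compressed = g * u     # |compressed| <= K at all times
y = compressed / g     # expansion exactly cancels the input multiplication
```

With no DSP between the two multiplications, y recovers u exactly; the interesting problems begin when a dynamic system sits in between, as the next paragraph shows.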
Unfortunately, when there is a DSP between the ADC and DAC, the simple scheme of multiplying by
g(n) at the input, and then multiplying by 1/g(n) at the output will, in general, drastically alter the input/output
behavior of the DSP, and therefore horribly distort the output. For example, if the DSP were simply a k-delay
block, then y(n) should equal u(n-k). If, however, the input u(n) is multiplied by g(n), passing through the k-delay
block gives u(n-k)*g(n-k), and dividing by g(n) at the output gives y(n) = [g(n-k)/g(n)] * u(n-k), which is not
equal to the desired u(n-k). It is distorted by the time-varying [g(n-k)/g(n)] signal. DSPs with more
complicated dynamics will have more complicated distortions. For example, when the DSP is a simple digital
reverberator with <a href="original.wav">this input</a>, then passing it through the reverberator (in double
precision floating point) should give <a href="float.wav">this output</a>, while classical companding gives <a
href="no_correct.wav">this output</a>, which is clearly extremely distorted.
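The k-delay example can be checked numerically. This sketch (my own; the particular g(n) and signal choices are assumptions) wraps classical companding around a pure k-sample delay and measures how far the output y(n) = [g(n-k)/g(n)]*u(n-k) strays from the desired u(n-k):

```python
import numpy as np

k = 5
n = np.arange(200)
u = np.sin(2*np.pi*0.02*n)                  # test input
g = 1.0 + 0.5*np.sin(2*np.pi*0.01*n)        # some positive, time-varying gain

compressed = g * u                                        # multiply by g(n) at the input
delayed = np.concatenate([np.zeros(k), compressed[:-k]])  # k-delay "DSP": g(n-k)*u(n-k)
y = delayed / g                                           # divide by g(n) at the output

ideal = np.concatenate([np.zeros(k), u[:-k]])  # the desired u(n-k)
err = np.max(np.abs(y - ideal))                # distortion from the g(n-k)/g(n) factor
```

Because g(n-k) differs from g(n), err is clearly nonzero: the wrapper alone changes the input/output behavior.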
To ensure that the input/output behavior of the companding system is identical to that of the original
non-companding system, changes must be made inside the DSP. In his 1997 paper, <a
href="paper.pdf">“Externally Linear Time-Invariant Systems and their application to Companding Signal
Processors”</a>, Professor Tsividis discovered a method for creating a new system with the following two
critical properties:
1) The input-output behavior of the new system is identical to that of the original DSP
2) The state variables of the new system are large (they have constant envelope, which can then be
multiplied all the way up to the system’s overflow tolerance).
Let the original system (the “prototype system”) have input vector u, state vector x, and output vector y.
Let the system’s state equations be:
x(n+1) = A*x(n)+B*u(n)
y(n)=C*x(n)+D*u(n)
Let w(n) = G(n)*x(n) be the state vector of the new companding system. Then:
w(n+1) = G(n+1)*x(n+1) = G(n+1)*[A*G<sup>-1</sup>(n)*w(n) + B*u(n)] = [G(n+1)*A*G<sup>-1</sup>(n)]*w(n) + [G(n+1)*B]*u(n)
y(n) = C*[G<sup>-1</sup>(n)*w(n)] + D*u(n) = [C*G<sup>-1</sup>(n)]*w(n) + [D]*u(n)
Therefore, the new “companding” system has the following state equations:
w(n+1) = A<sub>1</sub>(n)*w(n) + B<sub>1</sub>(n)*u(n)
y(n) = C<sub>1</sub>(n)*w(n) + D<sub>1</sub>(n)*u(n)
where u(n) and y(n) are the same as before, w(n) = G(n)*x(n), and
A<sub>1</sub>(n) = G(n+1)*A*G<sup>-1</sup>(n)
B<sub>1</sub>(n) = G(n+1)*B
C<sub>1</sub>(n) = C*G<sup>-1</sup>(n)
D<sub>1</sub>(n) = D
are the new time-varying state-space matrices. Since u(n) and y(n) are the same as before (the only difference is
a basis transformation of the state vector), the input-output behavior of the companding system is identical to
that of the original non-companding system, and the companding system is therefore externally time-invariant.
But the companding state-space matrices are time-varying, so the companding system, while externally time-invariant, is internally time-varying.
For all invertible G(n), following the above equations ensures that the companding system has the same input-output behavior as the original system (in the absence of quantization errors). To minimize quantization errors,
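The external equivalence can be verified numerically. This sketch (my own; the 2-state prototype matrices and the particular diagonal G(n) are made up for the demo) simulates a prototype system and its companded counterpart built from the transformed matrices, and compares the outputs:

```python
import numpy as np

# An arbitrary stable 2-state prototype; A, B, C, D are assumptions for the demo
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.2]])

N = 100
rng = np.random.default_rng(0)
u = rng.standard_normal(N)

# Any invertible G(n) works; here a diagonal, time-varying choice
G = [np.diag(1.0 + 0.5*np.sin(0.1*n + np.array([0.0, 1.0]))) for n in range(N + 1)]

def run_prototype():
    x = np.zeros((2, 1)); ys = []
    for n in range(N):
        ys.append((C @ x + D * u[n]).item())     # y = C*x + D*u
        x = A @ x + B * u[n]                     # x(n+1) = A*x + B*u
    return np.array(ys)

def run_companding():
    w = np.zeros((2, 1)); ys = []                # w(n) = G(n)*x(n)
    for n in range(N):
        Ginv = np.linalg.inv(G[n])
        ys.append((C @ Ginv @ w + D * u[n]).item())            # C1(n)*w + D1*u
        w = (G[n + 1] @ A @ Ginv) @ w + (G[n + 1] @ B) * u[n]  # A1(n)*w + B1(n)*u
    return np.array(ys)
```

The two output sequences agree to floating-point precision, even though the internal state trajectories differ.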
G(n) should be chosen such that all the signals w(n) = G(n)*x(n) are large at all times. If all the x<sub>i</sub>(n) are large
at the same times and small at the same times, then G(n) can simply be chosen as g(n)*I, where I is the identity
matrix and g(n) is a single scalar-valued function of time, which is large when all the signals are small and
small when all the signals are large. I call this “single-g companding.” However, in the more general case,
where many of the signals are out of phase with each other, it is necessary to have a set of distinct functions
g<sub>i</sub>(n). G(n) is still a diagonal matrix for all n, but now there can be unequal elements on the diagonal. I call this
“multi-g companding.”
The goal of this project is to implement multi-g companding on a simple digital
reverberator in the MathWorks Simulink program, and then make both qualitative
and quantitative observations concerning the differences in the outputs of the non-companding (original) and companding systems.
To view a more graphical, pictorial version of this introduction, see these <a href="intro_slides">slides</a>.
The Reverberator
Consider the following reverberator at a sampling rate r = 22.05 kHz (used throughout):
<img src="rev.bmp">
At 22.05 kHz, a delay of 1000 samples corresponds to 1000/22050 = 0.0454 seconds, and a delay of
2205 samples corresponds to 0.1 seconds. Essentially, the system to the left of ymid(n) generates exponentially
decaying echoes every 0.0454 seconds, and the system to the right of ymid(n) echoes these echoes every 0.1
seconds. Because of the symmetry around the ymid(n) signal, it seems natural, when analyzing this system, to
“split” the system at the ymid(n) signal, and analyze only one of the “halves,” and then apply the same results to
the other “half.” Note that the only difference between the two systems on either side of ymid(n) is the size of
the delay block. Writing state equations where the states are given by the outputs of delay blocks, I found (for
either of the “halves”):
x<sub>1</sub>(n+1) = -0.8*x<sub>k</sub>(n) + 0.2*input(n)
For 1 &lt; i &le; k, x<sub>i</sub>(n+1) = x<sub>i-1</sub>(n)
output(n) = 1.8*x<sub>k</sub>(n) + 0.8*input(n)
where the variable “k” is the size of the delay block (k = 1000 for the left-hand
system and k = 2205 for the right-hand system), the input of the left-hand system is u(n) and its output is ymid(n),
and the input of the right-hand system is ymid(n) and its output is y(n).
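A quick way to check these state equations is to simulate them directly and compare against the single input/output difference equation they imply, y(n) + 0.8*y(n-k) = 0.8*u(n) + u(n-k) (an allpass comb). This is my own sketch, with assumed function names, a small k for speed, and zero initial conditions:

```python
import numpy as np

def half_reverb_statespace(u, k):
    """Simulate the state equations directly: states are the delay-line taps."""
    x = np.zeros(k)                          # x[0] = x1(n), ..., x[k-1] = xk(n)
    y = np.empty_like(u)
    for n in range(len(u)):
        y[n] = 1.8 * x[k-1] + 0.8 * u[n]     # output(n) = 1.8*xk(n) + 0.8*input(n)
        new_x1 = -0.8 * x[k-1] + 0.2 * u[n]  # x1(n+1) = -0.8*xk(n) + 0.2*input(n)
        x = np.concatenate(([new_x1], x[:-1]))  # xi(n+1) = x_{i-1}(n)
    return y

def half_reverb_direct(u, k):
    """The implied difference equation: y(n) + 0.8*y(n-k) = 0.8*u(n) + u(n-k)."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        y[n] = 0.8 * u[n]
        if n >= k:
            y[n] += u[n - k] - 0.8 * y[n - k]
    return y

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
diff = np.max(np.abs(half_reverb_statespace(u, 7) - half_reverb_direct(u, 7)))
```

The two simulations agree sample for sample, confirming the state-space description of each "half."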
Generating the companding system
First consider the state equations given by:
For 1 &lt; i &le; k, x<sub>i</sub>(n+1) = x<sub>i-1</sub>(n)
This pattern appears often in systems. It is the state space description of a k-delay block. Consider the effect of
companding on this part of the system. Let w<sub>i</sub>(n) = g<sub>i</sub>(n)*x<sub>i</sub>(n). g<sub>i</sub>(n) should be large when x<sub>i</sub>(n) is small and
small when x<sub>i</sub>(n) is large. Therefore g<sub>i-1</sub>(n), which corresponds to x<sub>i-1</sub>(n) = x<sub>i</sub>(n+1), should be large when
x<sub>i</sub>(n+1) is small and small when x<sub>i</sub>(n+1) is large, so it should be given by g<sub>i</sub>(n+1). So g<sub>i</sub>(n+1) = g<sub>i-1</sub>(n), meaning
that the g<sub>i</sub>(n) functions share the same relationship as the corresponding states! If the input to the k-delay block
is w<sub>1</sub>(n+1) = g<sub>1</sub>(n+1)*x<sub>1</sub>(n+1), then after one delay the block produces g<sub>1</sub>(n)*x<sub>1</sub>(n) = w<sub>1</sub>(n).
After two delays, it produces g<sub>1</sub>(n-1)*x<sub>1</sub>(n-1) = g<sub>2</sub>(n)*x<sub>2</sub>(n) = w<sub>2</sub>(n).
After three delays, g<sub>2</sub>(n-1)*x<sub>2</sub>(n-1) = g<sub>3</sub>(n)*x<sub>3</sub>(n) = w<sub>3</sub>(n)…
And after k delays, g<sub>k</sub>(n)*x<sub>k</sub>(n) = w<sub>k</sub>(n).
So all the w<sub>i</sub>(n) are automatically and correctly generated by the k-delay block. The only requirement is to
feed it w<sub>1</sub>(n+1) = g<sub>1</sub>(n+1)*x<sub>1</sub>(n+1); therefore, I only need to generate g<sub>1</sub>(n+1). No modifications need to be
made to the k-delay block, since if its input is always large, all of its internal states are automatically always
large (which makes sense). The output of the k-delay block has corresponding g<sub>k</sub>(n) function equal to
g<sub>1</sub>(n-k+1), which I can easily compute by simply delaying g<sub>1</sub>(n+1) by k samples.
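This "free" propagation of the gains through the delay line can be demonstrated numerically. In this sketch (my own; the trajectory and gain signal are arbitrary assumptions), the value the delay line emits after k delays is compared with gk(n)*xk(n) computed by hand, where gk is just a delayed copy of g1:

```python
import numpy as np

k, N = 5, 40
rng = np.random.default_rng(1)
x1 = rng.standard_normal(N)                  # trajectory of the first state x1(n)
g1 = 1.0 + 0.5*np.sin(0.3*np.arange(N))     # its companding gain g1(n)

w1 = g1 * x1                                 # what is fed into the delay line

# After k-1 further delays, the line outputs w1(n-k+1)
wk_from_delay = np.concatenate([np.zeros(k-1), w1[:-(k-1)]])

# Computing wk(n) = gk(n)*xk(n) "by hand", with xk(n) = x1(n-k+1)
# and gk(n) = g1(n-k+1), i.e. gk is just a delayed copy of g1
xk = np.concatenate([np.zeros(k-1), x1[:-(k-1)]])
gk = np.concatenate([np.ones(k-1), g1[:-(k-1)]])
wk_direct = gk * xk
```

The two agree exactly, so no gain multiplications are needed inside the delay block itself.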
Method for companding each “half” system: add delays, implement the companding, then remove the delays.
Qualitative Results
Quantitative Results
Discussion
Possible Improvements
Conclusion
</p>
</body>
</html>