University of Manchester
CS3282: Digital Communications '05-'06
Section 4: Introduction to digital transmission
This section is concerned with the transmission of digital signals in the form of suitably shaped
pulses over wire-lines or radio channels. Such pulses are often visualised as being rectangular in
shape, and this visualisation is not too unrealistic for base-band transmission over short distances, as
used for example with a wired Ethernet-based LAN. However, a rectangular pulse shape
requires infinitely wide frequency bandwidth and is therefore undesirable for transmission over a
wire-line or channel where economy of bandwidth utilisation is a consideration. This is usually the
case with long distance high speed transmission. A more typical pulse shape is rounded and has
ringing before and after the main part of the pulse to reduce its bandwidth.
Data (bit-) rate and signalling rate:
The 'data-rate' or 'bit-rate' is the number of bits (binary digits) per second.
The 'signalling-rate' is the number of 'symbols' per second. Units for the signalling rate are 'bauds'. A
symbol is a voltage pulse whose shape and amplitude is chosen from a set of two or more
possibilities. With binary signalling, there are two possible symbols, say a rectangular pulse of
amplitude +V and a rectangular pulse of amplitude −V. In this case the signalling rate can be equal to
the data rate if no redundancy is included for error checking.
Often 'ternary' signalling is used with pulses of amplitude +V, 0, and −V. Now the signalling rate
can be less than the bit-rate. We could send three bits using two ternary (+V, 0, −V) symbols
because there are 8 possible 3-bit binary numbers and nine different ways of combining two
ternary pulses. Hence the bit-rate could be 1.5 times the signalling (baud) rate.
With 'quaternary' signalling, there are four possible symbols and therefore the bit-rate can be twice the
symbol rate.
The symbol period will be denoted throughout by T seconds; the signalling-rate is therefore 1/T baud.
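As an illustration, a minimal Python sketch relating bit-rate, signalling rate and the number of symbol levels M (the function names and rates are illustrative):

import math

def bits_per_symbol(m_levels: int) -> float:
    """Maximum number of bits one symbol drawn from an alphabet of m_levels can carry."""
    return math.log2(m_levels)

def max_bit_rate(baud_rate: float, m_levels: int) -> float:
    """Maximum bit-rate (bits/second) for a given signalling rate (baud) and alphabet size."""
    return baud_rate * bits_per_symbol(m_levels)

if __name__ == "__main__":
    print(max_bit_rate(1000, 2))   # binary: 1000 bits/s at 1 kbaud
    print(max_bit_rate(1000, 4))   # quaternary: 2000 bits/s, twice the baud rate
    # Ternary: log2(3) = 1.58 bits/symbol in theory; pairing two ternary symbols
    # to carry 3 bits, as described above, gives a practical 1.5 bits/symbol.
    print(max_bit_rate(1000, 3))   # about 1585 bits/s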
Asynchronous transmission (low data rates):
A digital transmitter must apply suitably shaped symbols to the channel at times specified by a timing
reference or 'clock'. The clock is a circuit which generates a 'timing waveform' which is usually a
regular sequence of rectangular timing pulses. The timing waveform is normally not transmitted. A
timing waveform must also be available at the receiver to indicate the time-points at which the channel
may be examined to extract a symbol. The receiver's timing waveform must have the same frequency
as that used at the transmitter. Also, it must be synchronised with the symbols being received, even
though delay will have been introduced by the channel. It is usual for the receiver to extract the exact
symbol frequency and symbol synchronisation from the signal received from the channel, even though
in many cases this signal will be distorted in various ways. If lengthy transmissions are intended, the
receiver clock must be very accurately matched to the clock frequency of the transmitter since any
small discrepancy will accumulate over time to produce large timing errors. However, for short block-length transmissions, as used for transmitting 8-bit binary numbers between computers and peripherals
over short distances, for example, the transmitter and receiver clocks need only be approximately
matched and they may resynchronise at the beginning of each short block. This is often referred to as
'asynchronous' transmission and is the basis of the well-known RS232 standard. Data is sent in short
words, say 8 bits long, with synchronising start and stop bits. The receiver clock resynchronises
itself at each start-bit. Consider the transmission of 8-bit ASCII characters according to the RS232
protocol. When idle, the line remains high at voltage V1. Asynchronous operation usually has a "start-bit" to signify the start of a transmission. This bit is always "0". The eight bits of data are then
transmitted using “non-return to zero” (NRZ) pulses and finally a number of “1” stop-bits (in this case
two) are transmitted to ensure that the next character is not sent immediately.
[Figure: RS232 asynchronous transmission of one character against time t - the line idles at V1, then a '0' start-bit, eight NRZ data bits (levels V1/V0), and two '1' stop-bits.]
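As an illustration of the framing, a minimal Python sketch (it assumes the usual RS232 convention of sending the least-significant data bit first, which is not stated above):

def frame_character(byte_value: int, stop_bits: int = 2) -> list[int]:
    """Logic-level sequence for one asynchronously transmitted character:
    a '0' start-bit, eight data bits (least-significant bit first), and
    the requested number of '1' stop-bits."""
    data = [(byte_value >> i) & 1 for i in range(8)]    # LSB first
    return [0] + data + [1] * stop_bits

if __name__ == "__main__":
    print(frame_character(ord("A")))
    # -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]  (start-bit, 8 data bits, 2 stop-bits)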
The advantage of this form of transmission is the simplicity of the transmitter and especially the
receiver. Its disadvantage for some applications would be that it is inefficient in its utilisation of the
channel capacity. The receiver waits for a transition from the 'idle state' "1" to "0" indicating a 'start
bit'. It delays for half a symbol period according to its own free running clock having approximately
the same frequency as that of the transmitter, and then samples the channel eleven times at intervals of
T seconds. The samples will hopefully lie in or close to the centre of each symbol, but the timing will
drift over the eleven samples. The drift is acceptable because of the frequent resynchronisation.
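The receiver's procedure can be sketched in the same way, here operating on an oversampled sequence of logic levels rather than a real waveform (an illustrative sketch, not a full receiver):

def receive_character(waveform: list[int], samples_per_symbol: int) -> list[int]:
    """Simplified asynchronous receiver: wait for the idle-to-start (1 -> 0)
    transition, delay half a symbol period, then sample eleven times at
    intervals of one symbol period (start-bit, 8 data bits, 2 stop-bits)."""
    for i in range(1, len(waveform)):
        if waveform[i - 1] == 1 and waveform[i] == 0:   # falling edge: start-bit
            start = i
            break
    else:
        raise ValueError("no start-bit found")
    first_sample = start + samples_per_symbol // 2      # centre of the start-bit
    return [waveform[first_sample + k * samples_per_symbol] for k in range(11)]

if __name__ == "__main__":
    sps = 16                                            # samples per symbol period T
    frame = [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]           # framed ASCII 'A', as above
    line = [1] * 40 + [level for bit in frame for level in [bit] * sps] + [1] * 40
    print(receive_character(line, sps))                 # recovers the framed bits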
Synchronous transmission:
Synchronous techniques are used for the efficient transmission of continuous data for long periods of
time, often at data rates close to the maximum possible over a channel of specified bandwidth. A
synchronising code (say 10101010) is sent at start of transmission, and thereafter, the receiver clock
must be kept synchronised in frequency and symbol-timing from the transmission itself. To achieve
the required bandwidth efficiency, pulses must be appropriately shaped, usually by a filter whose
impulse response is the required shape. More about this later. To detect the presence or absence of a
pulse, the receiver samples the received waveform at the correct symbol timing point.
Base-band synchronous transmission over wire-lines:
When considering how to synchronously transmit digital information over wires, two factors must be
borne in mind:
(i) We would like to keep the average voltage level as close as possible to zero since any voltage
offset carries no data and just wastes power. In many cases, the DC component of a signal is lost over
wire-lines because of AC coupling, the use of transformers, and/or because the line is used
to carry power as well as the data. This is certainly the case with telephone lines.
(ii) For synchronous transmission, we need to ensure that the signal always has a frequency
component at the signalling rate (or an exact multiple or sub-multiple of the signalling rate) to allow a
timing waveform to be extracted at the receiver for synchronising the detection process.
For binary transmission, we could try to achieve a zero average voltage by making the amplitudes of
the two pulses +V and -V, hoping that, on average the same number of ones and zeros will occur.
However, a long sequence of consecutive '0 0 0 0 ... 0' or '1 1 1 1 1 ... 1' would clearly cause
problems with this scheme. One solution to this problem is to use ternary coding with alternate mark
inversion (AMI) i.e. to transmit 0 volts for logic "0" and ±V volts, used alternately, for logic "1". For
example, to transmit:
'1, 0, 1, 1, 0, 1, 1', we send pulses whose amplitudes are:
+V, 0, −V, +V, 0, −V, +V
The average voltage (the "dc level") is now guaranteed to be zero.
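As an illustration, the AMI rule written directly in Python, with amplitudes expressed in units of V:

def ami_encode(bits: list[int]) -> list[int]:
    """Alternate mark inversion: logic 0 -> 0 volts, logic 1 -> +V and -V alternately.
    Returned amplitudes are in units of V (+1, 0, -1)."""
    level = -1                       # so that the first '1' is sent as +V
    symbols = []
    for bit in bits:
        if bit == 1:
            level = -level           # alternate the polarity of each mark
            symbols.append(level)
        else:
            symbols.append(0)
    return symbols

if __name__ == "__main__":
    print(ami_encode([1, 0, 1, 1, 0, 1, 1]))   # -> [1, 0, -1, 1, 0, -1, 1]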
Using AMI, as described above, timing waveform extraction and synchronisation is straightforward
when, say, '1 1 1 1 1 1 1 ...' is transmitted. If 'non-return to zero' (NRZ) rectangular pulses of
width T are used, the signal will be a rectangular wave with period 2T and hence of frequency half the
signalling rate. If 'return to zero' (RZ) pulses are used, synchronisation is even easier as there will be
a strong harmonic at the signalling rate. However when a significant number of consecutive zeros: '0
0 0 0 0 0 .... 0' is transmitted, the receiver can lose synchronisation as the received signal will be zero.
A commonly used solution is known as HDB3 coding.
[Figure: the waveform for '...1111111...' sent by NRZ AMI (alternating +V and −V pulses of full width T) and by RZ AMI (alternating +V and −V pulses returning to zero within each period T).]
HDB3 coding: (high density bipolar, order 3):
This scheme uses ternary coding to send binary coded data, as described above for AMI, but
places an incorrectly signed pulse in place of any 4th consecutive zero.
E.g. for  '  1    1    1    0    0    0    0    0    0    0    0    0    1    0    1  … '  we send:
          ' +V   −V   +V   0    0    0   +V    0    0    0   +V    0   −V    0   +V  … '
The incorrect "+V" pulses are included only for clock synchronisation. They are taken to be "0"s at the
receiver. The average voltage is no longer zero in the short block above, but over a longer time-span
the average will still remain zero since incorrect +V pulses and -V pulses will occur equally often.
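A minimal Python sketch of this simplified substitution rule (it models only the rule as stated here; full HDB3 also inserts a balancing 'B' pulse, the 'B00V' substitution, to keep the running sum bounded):

def hdb3_encode_simplified(bits: list[int]) -> list[int]:
    """AMI for the '1's, with every 4th consecutive '0' replaced by a pulse of the
    SAME polarity as the previous pulse (an AMI violation, used only for clock
    recovery and read as '0' by the receiver).  Amplitudes are in units of V."""
    level = -1                       # polarity of the last non-zero pulse sent
    zero_run = 0
    symbols = []
    for bit in bits:
        if bit == 1:
            level = -level           # normal AMI alternation
            symbols.append(level)
            zero_run = 0
        else:
            zero_run += 1
            if zero_run == 4:
                symbols.append(level)   # violation pulse, same polarity as last pulse
                zero_run = 0
            else:
                symbols.append(0)
    return symbols

if __name__ == "__main__":
    bits = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1]
    print(hdb3_encode_simplified(bits))
    # -> [1, -1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, -1, 0, 1]  (matches the example above)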
Other base-band signalling waveforms:
NRZ-AMI, RZ-AMI and their variants NRZ-HDB3 and RZ-HDB3 are commonly used. However
there are many other schemes, as illustrated in any textbook. These schemes are often referred to as base-band "line-codes" or PCM waveforms. Schemes known as NRZ-L, NRZ-M & NRZ-S are binary
signalling methods in that only +V and −V are used. NRZ-L (level) is the most straightforward with
+V representing “1” and “-V” representing “0”. NRZ-M (mark) has logic “1” represented by a change
from +V to -V and “0” represented by no change. NRZ-S (space) represents “0” by a change, and “1”
by no change. Uni-polar RZ has binary 'return to zero' pulses (0 and +V). Bi-polar RZ has +V & -V
'return to zero' pulses. RZ-AMI has been discussed. Another group known as bi-phase-L, bi-phase-M
& bi-phase-S are used in magnetic recording systems, optical communications, satellite links, and
many other applications including Ethernet.
Bi-phase-L is better known as “Manchester coding”, and represents a “one” by a pulse of width T/2
positioned during the first half bit-interval. A zero has a pulse of width T/2 in the second half interval.
[Figure: Manchester coding - a 'one' has its +V pulse in the first half of the bit interval T (and −V in the second half); a 'zero' has its +V pulse in the second half.]
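As an illustration, Manchester coding generated by emitting two half-interval levels per bit, following the convention above (the +V half-pulse leads for a 'one'):

def manchester_encode(bits: list[int]) -> list[int]:
    """Bi-phase-L (Manchester) coding: each bit occupies two half-intervals of width T/2.
    A 'one' has its +V half-pulse in the first half of the bit interval, a 'zero' in the
    second half.  Two amplitudes (in units of V) are returned per bit."""
    half_symbols = []
    for bit in bits:
        if bit == 1:
            half_symbols += [+1, -1]     # +V for the first half, -V for the second
        else:
            half_symbols += [-1, +1]
    return half_symbols

if __name__ == "__main__":
    print(manchester_encode([1, 0, 1]))   # -> [1, -1, -1, 1, 1, -1]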
Question: What are the advantages and disadvantages of Manchester coding as compared with NRZ-HDB3?
Before completing this survey of base-band signalling waveforms, one more must be mentioned, i.e.
4B3T.
4B3T coding: (4-bits re-coded as 3 ternary digits): In four bits we can have 16 possible numbers. In 3
ternary digits we can have 27 possible numbers. We can represent each 4-bit number by a 3 ternary
digit number, and have some 3-ternary digit numbers left over. Therefore we allocate alternative
codes to some of the binary numbers and use these (i) to keep the average signal level zero, and (ii) to
ensure a significant component at the signalling rate for receiver synchronisation. The ternary codes are:
BINARY    TERNARY (a)    TERNARY (b)
0000      − − −          + + +
0001      − − 0          + + 0
0010      − 0 −          + 0 +
0011      0 − −          0 + +
0100      − − +          + + −
0101      − + −          + − +
0110      + − −          − + +
0111      − 0 0          + 0 0
1000      0 − 0          0 + 0
1001      0 0 −          0 0 +
1010      0 + −          (none)
1011      0 − +          (none)
1100      + 0 −          (none)
1101      − 0 +          (none)
1110      + − 0          (none)
1111      − + 0          (none)
Represent “+V” by "+", “−V” by "−", & “0 volts” by “0”. Choose either column (a) or column (b)
where there is a choice. When decoded they give the same sequence of 4-bits. If the "accumulated
disparity" is "+", i.e. if we have previously sent more "+" pulses than "−" pulses, choose column (a) to
redress the balance. Otherwise choose column (b).
Given the same pulse shaping, “4B3T” would require less bandwidth than AMI as it makes better use
of the 3 levels −V, 0 and +V. For example, if each pulse of width T seconds were shaped so that its
bandwidth were 1/T Hz, AMI would have a 'bandwidth efficiency' of 1/T bits/second in 1/T Hz, i.e. 1
bit/second per Hz. 4B3T would have 4/(3T) bits/second in 1/T Hz, i.e. 4/3 bits/second per Hz.
Example: Encode in 4B3T: 0010, 0000, 1111, 0001, 0001, 0001, ....
Answer: Assuming "accumulated disparity" to be 0 at start,
+ 0 +, - - -, - + 0, + + 0, - - 0, + + 0, ...
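The selection rule is easy to express in code. A minimal Python sketch that stores column (a) for the ten unbalanced codes, derives column (b) by negation and tracks the accumulated disparity (the table and function names are illustrative); it reproduces the worked example above:

# Column (a) codes for the ten binary words that have a choice; column (b) is the
# negation of column (a).  The last six binary words map to a single balanced code.
TABLE_A = {
    "0000": (-1, -1, -1), "0001": (-1, -1, 0), "0010": (-1, 0, -1), "0011": (0, -1, -1),
    "0100": (-1, -1, +1), "0101": (-1, +1, -1), "0110": (+1, -1, -1), "0111": (-1, 0, 0),
    "1000": (0, -1, 0),   "1001": (0, 0, -1),
}
BALANCED = {
    "1010": (0, +1, -1), "1011": (0, -1, +1), "1100": (+1, 0, -1),
    "1101": (-1, 0, +1), "1110": (+1, -1, 0), "1111": (-1, +1, 0),
}

def encode_4b3t(nibbles: list[str]) -> list[int]:
    """Encode a list of 4-bit strings into ternary symbols (+1, 0, -1 in units of V),
    choosing column (a) when the accumulated disparity is positive, otherwise column (b)."""
    disparity = 0
    out = []
    for nibble in nibbles:
        if nibble in BALANCED:
            code = BALANCED[nibble]
        elif disparity > 0:
            code = TABLE_A[nibble]                       # column (a)
        else:
            code = tuple(-s for s in TABLE_A[nibble])    # column (b)
        disparity += sum(code)
        out.extend(code)
    return out

if __name__ == "__main__":
    print(encode_4b3t(["0010", "0000", "1111", "0001", "0001", "0001"]))
    # -> [1, 0, 1, -1, -1, -1, -1, 1, 0, 1, 1, 0, -1, -1, 0, 1, 1, 0]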
Estimation of 'bit-error probability' using the 'complementary error function Q(z)':
The channel and the receiver may be assumed to add white Gaussian noise of zero mean and fixed
variance σ2 to the transmitted signal. The receiver of a binary coded signal, receiving +V volt
rectangular pulses for '1' and zero volts for '0' may set a threshold at +V/2 and decide whether this is
exceeded at sampling points taken in the centre of each rectangular pulse. If the amplitude of the
noise exceeds +V/2 at a sampling point, an error may occur.
[Figure: received +V/0 pulse train against time t, with the decision threshold at +V/2 and sampling at the centre of each pulse.]
The probability of a white Gaussian noise sample with zero mean and variance σ2=1 being greater
than some voltage z is
Q(z) = ∫_z^∞ p(t) dt          (normalised error probability)

Q(z) is the "normalised error function", where p(t) = (1/√(2π)) exp(−t²/2) is the probability density
function of unit variance (σ² = 1) Gaussian noise, as plotted below. Q(z) is the probability of a noise
sample exceeding z. Q(z) is plotted as a graph against z in the figure attached.
[Figure: the Gaussian probability density function p(t); Q(z) is the area under the tail to the right of z.]
For white Gaussian noise of variance σ², the probability of a given sample exceeding z is Q(z/σ);
therefore the probability of a given sample exceeding the threshold V/2 is Q(V/(2σ)).
Q(z) may be obtained from the attached graph of Q(z) against z, tabulations of erfc, using the
MATLAB 'erfc' function, or the following approximation which is valid for z > 3:
Q(z) = 0.5 erfc(z/√2) ≈ (0.4/z) exp(−z²/2)   for z > 3.
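These values are easy to check numerically; a minimal Python sketch using math.erfc, which plays the same role as the MATLAB 'erfc' function mentioned above:

import math

def q_function(z: float) -> float:
    """Tail probability of zero-mean, unit-variance Gaussian noise: Q(z) = 0.5 erfc(z/sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def q_approx(z: float) -> float:
    """Approximation quoted above, valid for z > 3."""
    return (0.4 / z) * math.exp(-z * z / 2.0)

if __name__ == "__main__":
    print(q_function(5.0))   # about 2.9e-7: the value used in Example 4.1 (0.5 V threshold, sigma = 0.1)
    print(q_approx(5.0))     # about 3.0e-7: the approximation is already close for z = 5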
Example 4.1: A receiver receives +1 volt and 0 volt pulses with Gaussian noise of variance σ² = 0.01.
Estimate the error rate assuming a 0.5 volt threshold and an equal number of 1s and 0s.
Solution: The probability of the noise exceeding 0.5 V when "0" is transmitted, times 0.5, plus the
probability of the noise being less than −0.5 V when "1" is transmitted, times 0.5, is simply
Q(0.5/0.1) = Q(5) ≈ 3 × 10⁻⁷. The error rate is therefore about 1 bit in 3.3 × 10⁶.
Example 4.2: A receiver receives +1 volt and 0 volt pulses and has an error probability of 10⁻³.
What is the variance of the received noise?
Solution: Q(0.5/σ) = 10⁻³. From the graph, 0.5/σ ≈ 3.2. Therefore σ = 0.16, σ² = 0.026.
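Q(z) can also be inverted numerically rather than read from the graph; a minimal sketch using simple bisection, which re-derives the figures of Example 4.2:

import math

def q_function(z: float) -> float:
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def q_inverse(p: float, lo: float = 0.0, hi: float = 10.0) -> float:
    """Solve Q(z) = p for z by bisection; Q is monotonically decreasing on [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if q_function(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    z = q_inverse(1e-3)           # about 3.09 (the graph reading above was 3.2)
    sigma = 0.5 / z
    print(z, sigma, sigma ** 2)   # sigma about 0.16, variance about 0.026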
There is often little one can do to reduce the noise level, and hence, if we need to decrease the
error probability, our only option is to increase the signalling pulse amplitude as it reaches the
receiver. This can be done by increasing the gains of the regenerative repeaters, or, if this is not
possible because it would increase the level of interference (cross-talk), the repeater spacing may be
reduced.
Example 4.3: A synchronous transmission system using cable with 30 dB attenuation per km
and regenerative repeaters every 5 km is affected by additive Gaussian noise. A binary bipolar line-code is used with the detection threshold equidistant from the two levels at the sampling points. If the repeater gains
cannot be increased because of the interference this would cause to other lines, how can the error
probability be reduced from its current value of 10⁻⁵ to 10⁻⁷? (Use the graph of Q(z) against z.)
Solution:
Each regenerative repeater has within it a receiver and a re-transmitter. At each sampling point, the
receiver receives a +V or a –V corrupted by noise and possibly other distortion. It must decide
whether a “0” or a “1” was intended and then the re-transmitter reconstructs a perfectly shaped symbol
of appropriate amplitude and sends it on to the next repeater.
Assume that, at a receiver, the symbols are expected to be +V and –V at the sampling points to signal
"1" and "0" respectively. We could take the threshold to be 0 volts, and decide a +V symbol was
intended if the voltage is greater than 0 and a –V symbol was intended if the received voltage is
negative at the sampling point. To produce an error, the noise must exceed +V when –V (logic “0”)
is sent, or be less than –V when +V (logic “1”) is sent. Hence the probability of an error is:
(Prob of a “0”)x (Prob of noise sample being greater than +V)
+ (Prob of “1”) x (Prob of noise sample being less than –V) )
We assume an equal probability of “1”s and “0”s. Therefore probability of an error is:
0.5 × Q(V/σ) + 0.5 × Q(V/σ) = Q(V/σ)
where σ is the standard deviation (σ2 = variance) of the noise. The probability of a noise sample
being less than –V is same as the probability of a noise sample being greater than V.
Q(z) as plotted in these notes is the probability of a noise sample being greater than z for Gaussian
noise of zero mean and standard deviation equal to one. Therefore Q(z/σ) is the probability of a noise
sample being greater than z/σ for Gaussian noise of zero mean and standard deviation equal to one.
But, most importantly, Q(z/σ) is also the probability of a Gaussian noise sample being greater than z
when the noise has zero mean and standard deviation equal to σ. It is also the probability of a
Gaussian noise sample being less than –z when the noise has zero mean and standard deviation equal
to σ.
If Q(V/σ) = 10⁻⁵, from the graph we find that (V/σ) = 4.25
Therefore σ = V/4.25. This is the standard deviation of the zero mean Gaussian noise that is causing
the errors.
With the same noise level, to produce an error probability of 10⁻⁷ rather than 10⁻⁵, we need the
regenerative repeaters to receive higher voltage levels for the symbols.
Assume they are raised from ±V to amplitude ±U at the sampling points.
Then Q(U/σ) = 10⁻⁷ and from the graph, we find that U/σ = 5.2.
This means that U = 5.2σ = (5.2/4.25)V ≈ 1.22V.
20 log10(5.2/4.25) ≈ 1.75 dB.
Therefore we need to arrange that the voltages as received at the receiver are raised by 1.75 dB.
If we cannot raise the voltages transmitted by the regenerative repeaters because of cross-talk, we can
only reduce the distance between the repeaters so that less attenuation occurs between one
regenerative repeater and the next.
The attenuation must be reduced from 5x30 = 150 dB to 148.25 dB.
Distance between repeaters must be reduced to 148.25/30 = 4.94 km instead of 5 km.
Wow!
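For completeness, the arithmetic of Example 4.3 can be reproduced numerically. The sketch below uses the same bisection idea as above instead of graph readings, so its figures differ very slightly from the graphically obtained 4.25 and 5.2:

import math

def q_function(z: float) -> float:
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def q_inverse(p: float, lo: float = 0.0, hi: float = 10.0) -> float:
    """Solve Q(z) = p for z by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if q_function(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    z_old = q_inverse(1e-5)                     # about 4.26
    z_new = q_inverse(1e-7)                     # about 5.20
    extra_db = 20 * math.log10(z_new / z_old)   # required increase in received level
    new_spacing_km = (5.0 * 30.0 - extra_db) / 30.0
    print(extra_db, new_spacing_km)             # about 1.7 dB, about 4.94 km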
Problems
4.1. How would the bit-error rate be affected by sampling at the end or at the beginning of each
rectangular pulse rather than in the centre?
4.2. Considering again Example 4.1, how would the bit-error rate be affected by sampling each
rectangular pulse three times, rather than just once in the centre, and averaging over the three
measurements?
4.3: A mobile phone receives a signal over "line of sight" radio (no reflections) from a base station
and is affected by additive white Gaussian noise mainly introduced by the mobile phone itself. The
received power from the base-station decreases with increasing distance according to an "inverse
square power law", i.e. P(d) is proportional to 1/d² where P(d) is the received power at distance d.
Binary bipolar signalling is used with optimal detection threshold at the sampling points. If the
distance from the base-station is currently 800 metres, how much nearer must we move towards it if
the error probability is to be reduced from its current value of 10⁻⁵ to 10⁻⁷? (Use the graph of Q(z)
against z).
4.4. A signalling system has eight symbols which are rectangular pulses of amplitude -4, -3, -2, -1, 0,
1, 2, 3 volts. If the signalling rate is 10 kbaud, what is the maximum achievable bit-rate?
4.5. A system has a bit-rate of 32 kbits/second. How could you achieve this with a signalling rate of 1
kbaud?
4.6. What is the bandwidth efficiency of (i) AMI and (ii) 4B3T coding if binary or ternary pulses of
duration T seconds require a bandwidth of 3/(4T) Hz?
[Graph: the complementary error function Q(z) = ∫_z^∞ (1/√(2π)) exp(−t²/2) dt, plotted against z.]