Chapter 3
Error Probability for M Signals
In this chapter we discuss the error probability in deciding which of M signals was transmitted over an arbitrary channel. We assume the signals are represented by a set of N orthonormal functions. Writing down an expression for the error probability in terms of an N-dimensional integral is straightforward. However, evaluating the integrals involved in that expression is very difficult or impossible in all but a few special cases if N is fairly large (e.g., $N \ge 4$). For the special case of orthogonal signals we derived the error probability in the last chapter as a single integral. Because of the difficulty of evaluating the error probability in general, bounds are needed to determine the performance. Different bounds have different complexity of evaluation. The first bound we derive is known as the Gallager bound. We apply this bound to the case of orthogonal signals (for which the true answer is already known). The Gallager bound has the property that when the number of signals becomes large the bound becomes tight. However, the bound is fairly difficult to evaluate for many signal sets. A special case of the Gallager bound is the Union-Bhattacharyya bound. This is simpler than the Gallager bound to evaluate but is also looser. The last bound considered is the union bound. This bound is tighter than the Union-Bhattacharyya bound and the Gallager bound for sufficiently high signal-to-noise ratios. Finally, we consider a simple random coding bound on the ensemble of all signal sets using the Union-Bhattacharyya bound.
The general signal set we consider has the form
$$ s_i(t) = \sum_{j=0}^{N-1} s_{ij}\,\phi_j(t), \qquad i = 0, 1, \ldots, M-1. $$
The optimum receiver correlates the received signal with the N orthonormal waveforms to form the decision variables
$$ r_j = \int_0^T r(t)\,\phi_j(t)\,dt, \qquad j = 0, 1, \ldots, N-1. $$
For equally likely signals the decision regions are given by
$$ R_i \triangleq \{ r : p_i(r) \ge p_j(r),\ \forall j \ne i \}. $$
The error probability is then determined by
$$ P_{e,i} = P\Big( \bigcup_{j=0,\, j\ne i}^{M-1} R_j \,\Big|\, H_i \Big). $$
For all but a few low-dimensional signal sets, or signal sets with special structure (such as orthogonal signal sets), the exact error probability is very difficult to calculate.
Figure 3.1: Optimum receiver in additive white Gaussian noise: a bank of correlators computing $r_j = \int_0^T r(t)\,\phi_j(t)\,dt$ for $j = 0,\ldots,N-1$, followed by selection of the signal $s_i$ with the smallest $\|r - s_i\|^2$.
1. Error Probability for Orthogonal Signals
Represent the M signals in terms of M orthogonal functions $\phi_i(t)$ as follows:
$$ s_0(t) = \sqrt{E}\,\phi_0(t),\quad s_1(t) = \sqrt{E}\,\phi_1(t),\ \ldots,\ s_{M-1}(t) = \sqrt{E}\,\phi_{M-1}(t). $$
As shown in the previous chapter, we need to find the largest value of $\langle r(t), s_j(t)\rangle$ for $j = 0,\ldots,M-1$. Instead we will normalize this and determine the largest value of $r_j = \langle r(t), s_j(t)\rangle/\sqrt{E}$. To determine the error probability we need to determine the statistics of $r_j$. Assume signal $s_i$ is transmitted. Then
$$ r_j \triangleq \int_0^T r(t)\,\phi_j(t)\,dt, $$
$$ E[r_j \mid H_i] = \int_0^T E[r(t)\mid H_i]\,\phi_j(t)\,dt = \int_0^T E[s_i(t) + n(t)]\,\phi_j(t)\,dt = \int_0^T \sqrt{E}\,\phi_i(t)\,\phi_j(t)\,dt = \sqrt{E}\,\delta_{ij}. $$
The variance of $r_j$ is determined as follows. Given $H_i$,
$$ \mathrm{Var}[r_j \mid H_i] = E\Big[\big(r_j - E[r_j\mid H_i]\big)^2 \,\Big|\, H_i\Big] = E\bigg[\Big(\int_0^T n(t)\,\phi_j(t)\,dt\Big)^2\bigg] = \int_0^T\!\!\int_0^T E[n(t)\,n(s)]\,\phi_j(t)\,\phi_j(s)\,dt\,ds $$
$$ = \int_0^T\!\!\int_0^T K(t,s)\,\phi_j(t)\,\phi_j(s)\,dt\,ds = \int_0^T\!\!\int_0^T \frac{N_0}{2}\,\delta(t-s)\,\phi_j(t)\,\phi_j(s)\,dt\,ds = \frac{N_0}{2}\int_0^T \phi_j(t)\,\phi_j(t)\,dt = \frac{N_0}{2}. $$
Furthermore, each of these random variables is Gaussian (and they are independent).
$$ P(\text{error}) = 1 - P(\text{correct}), \qquad P(\text{correct}) = \sum_{i=0}^{M-1} P(\hat H = H_i \mid H_i)\,\pi_i. $$
Let $P_{c,i} \triangleq P(\hat H = H_i \mid H_i)$. Then
$$ P_{c,i} = P(r_i \ge r_j,\ \forall j\ne i \mid H_i) = E\big[ P(r_i \ge r_j,\ \forall j\ne i \mid H_i, r_i)\big] = E\bigg[ \prod_{j=0,\,j\ne i}^{M-1} P(r_i \ge r_j \mid H_i, r_i)\bigg] = E\bigg[ \Phi^{M-1}\Big(\frac{r_i}{\sqrt{N_0/2}}\Big)\bigg], $$
where
$$ \Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\,du. $$
Thus
$$ P_{e,i} = 1 - P_{c,i} = 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi N_0}} \exp\Big(-\frac{(r_i - \sqrt{E})^2}{N_0}\Big)\, \Phi^{M-1}\Big(\frac{r_i}{\sqrt{N_0/2}}\Big)\, dr_i. $$
Now let $u = r_i/\sqrt{N_0/2}$. Then
$$ P_{e,i} = 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\Big(-\tfrac{1}{2}\Big(u - \sqrt{\tfrac{2E}{N_0}}\Big)^2\Big)\, \Phi^{M-1}(u)\,du = (M-1)\int_{-\infty}^{\infty} \Phi\Big(u - \sqrt{\tfrac{2E}{N_0}}\Big)\, \Phi^{M-2}(u)\, \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\,du, $$
where the last step follows from an integration by parts. Later on we will find an upper bound on the above that is more insightful. It is possible to determine (using L'Hospital's rule) the limiting behavior of the error probability as $M \to \infty$.
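As a quick numerical check, the single-integral expression above can be compared against a direct Monte Carlo simulation of the decision variables. The sketch below uses illustrative values of $M$, $E$, and $N_0$ (assumptions, not values from the text) and the fact that, conditioned on $H_0$, $r_0 \sim \mathcal{N}(\sqrt{E}, N_0/2)$ and $r_j \sim \mathcal{N}(0, N_0/2)$ for $j \ne 0$.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Illustrative (assumed) parameters: M orthogonal signals, energy E, noise PSD N0/2.
M, E, N0 = 8, 4.0, 1.0

# Exact symbol error probability: (M-1) * int Phi(u - sqrt(2E/N0)) Phi^(M-2)(u) phi(u) du.
a = np.sqrt(2 * E / N0)
pe_exact, _ = quad(lambda u: (M - 1) * norm.cdf(u - a) * norm.cdf(u) ** (M - 2) * norm.pdf(u),
                   -np.inf, np.inf)

# Monte Carlo: decision variables r_j ~ N(sqrt(E)*delta_0j, N0/2); pick the largest.
rng = np.random.default_rng(0)
trials = 200_000
r = rng.normal(0.0, np.sqrt(N0 / 2), size=(trials, M))
r[:, 0] += np.sqrt(E)                      # signal s_0 transmitted
pe_mc = np.mean(np.argmax(r, axis=1) != 0)

print(f"exact: {pe_exact:.4e}   Monte Carlo: {pe_mc:.4e}")
```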
In general, if for an M-ary hypothesis testing problem we have M decision variables that are conditionally independent given the true hypothesis, with density (distribution) $f_1(x)$ ($F_1(x)$) for the decision variable corresponding to the true hypothesis and density (distribution) $f_2(x)$ ($F_2(x)$) for the other decision variables, then the probability of a correct decision is
$$ P_c = \int_{-\infty}^{\infty} f_1(x)\, F_2^{M-1}(x)\,dx. $$
The probability of error is
$$ P_e = 1 - \int_{-\infty}^{\infty} f_1(x)\, F_2^{M-1}(x)\,dx = (M-1)\int_{-\infty}^{\infty} F_1(x)\, F_2^{M-2}(x)\, f_2(x)\,dx. $$
The last formula is often much easier to compute numerically than the first, because the first is the difference between two numbers that are very close together (for small error probabilities).
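Both forms are easy to evaluate by numerical integration. The sketch below (an illustration with an assumed Gaussian model: $f_1 = \mathcal{N}(a,1)$ for the true hypothesis and $f_2 = \mathcal{N}(0,1)$ for the others) implements them side by side; at small error probabilities the first form demands extreme absolute accuracy in $P_c$, which is exactly the difficulty noted above.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def pe_form1(M, a):
    # 1 - int f1(x) F2^(M-1)(x) dx : needs Pc to extreme absolute accuracy when Pe is tiny
    pc, _ = quad(lambda x: norm.pdf(x - a) * norm.cdf(x) ** (M - 1), -np.inf, np.inf)
    return 1.0 - pc

def pe_form2(M, a):
    # (M-1) int F1(x) F2^(M-2)(x) f2(x) dx : computes Pe directly, no cancellation
    f = lambda x: (M - 1) * norm.cdf(x - a) * norm.cdf(x) ** (M - 2) * norm.pdf(x)
    return quad(f, -np.inf, np.inf)[0]

# Assumed example values; at moderate separation the two forms agree.
print(pe_form1(16, 3.0), pe_form2(16, 3.0))
```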
2. Gallager Bound
In this section we derive an upper bound on the error probability for M signals received in some form of noise. Let
$$ \bar R_i \triangleq \{ r : p_j(r) \ge p_i(r) \ \text{for some } j \ne i \} = \Big\{ r : \frac{p_j(r)}{p_i(r)} \ge 1 \ \text{for some } j \ne i \Big\}, $$
so that
$$ P_{e,i} \triangleq P(\hat H \ne H_i \mid H_i) = P(\bar R_i \mid H_i), \qquad P_{c,i} \triangleq P(\hat H = H_i \mid H_i). $$
For $\lambda > 0$ let
$$ \tilde R_i \triangleq \Big\{ r : \sum_{j\ne i} \Big(\frac{p_j(r)}{p_i(r)}\Big)^{\lambda} \ge 1 \Big\}. $$
Claim: $\bar R_i \subseteq \tilde R_i$. Proof: If $r \in \bar R_i$ then $p_j(r)/p_i(r) \ge 1$ for some $j \ne i$. Thus for some $j \ne i$, $\big(p_j(r)/p_i(r)\big)^{\lambda} \ge 1$, which implies that
$$ \sum_{j\ne i} \Big(\frac{p_j(r)}{p_i(r)}\Big)^{\lambda} \ge 1, $$
and thus $r \in \tilde R_i$. Thus we have shown that $\bar R_i \subseteq \tilde R_i$. Now we use this to upper bound the error probability.
$$ P_{e,i} = P(\bar R_i \mid H_i) \le P(\tilde R_i \mid H_i) = \int_{\tilde R_i} p_i(r)\,dr = \int_{\mathbb{R}^N} I_{\tilde R_i}(r)\, p_i(r)\,dr, $$
where
$$ I_{\tilde R_i}(r) = \begin{cases} 1, & r \in \tilde R_i \\ 0, & r \notin \tilde R_i. \end{cases} $$
For $r \in \tilde R_i$ and $\rho \ge 0$ we have
$$ \bigg( \sum_{j\ne i} \Big(\frac{p_j(r)}{p_i(r)}\Big)^{\lambda} \bigg)^{\rho} \ge 1, $$
and for $r \notin \tilde R_i$ and $\rho \ge 0$ we have
$$ \bigg( \sum_{j\ne i} \Big(\frac{p_j(r)}{p_i(r)}\Big)^{\lambda} \bigg)^{\rho} \ge 0. $$
Thus
$$ I_{\tilde R_i}(r) \le \bigg( \sum_{j\ne i} \Big(\frac{p_j(r)}{p_i(r)}\Big)^{\lambda} \bigg)^{\rho}. $$
Applying this bound to the expression for the error probability we obtain
$$ P_{e,i} \le \int_{\mathbb{R}^N} \bigg( \sum_{j\ne i} \Big(\frac{p_j(r)}{p_i(r)}\Big)^{\lambda} \bigg)^{\rho} p_i(r)\,dr = \int_{\mathbb{R}^N} p_i(r)^{1-\lambda\rho} \bigg( \sum_{j\ne i} p_j(r)^{\lambda} \bigg)^{\rho} dr $$
for $\rho \ge 0$ and $\lambda > 0$. If we let $\lambda = 1/(1+\rho)$ (this is the value that minimizes the bound; see Gallager, Problem 5.6) the resulting bound is known as the Gallager bound:
$$ P_{e,i} \le \int_{\mathbb{R}^N} p_i(r)^{\frac{1}{1+\rho}} \bigg( \sum_{j\ne i} p_j(r)^{\frac{1}{1+\rho}} \bigg)^{\rho} dr. $$
If we let $\rho = 1$ we obtain what is known as the Union-Bhattacharyya bound:
$$ P_{e,i} \le \int_{\mathbb{R}^N} p_i(r)^{1/2} \sum_{j\ne i} p_j(r)^{1/2}\,dr = \sum_{j\ne i} \int_{\mathbb{R}^N} \sqrt{p_i(r)\,p_j(r)}\,dr. $$
The average error probability is then written as
$$ P_e = \sum_{i=0}^{M-1} \pi_i\, P_{e,i}. $$
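Before specializing to orthogonal signals, it may help to see the bound evaluated numerically in the simplest possible setting. The sketch below is an illustration with assumed parameters (not an example from the text): two equally likely scalar Gaussian hypotheses, for which the exact error probability is a Q-function. At $\rho = 1$ the integral reduces to the Bhattacharyya coefficient.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Assumed two-hypothesis example: p0 = N(-a, s^2), p1 = N(+a, s^2); exact Pe = Q(a/s).
a, s = 1.0, 1.0
p0 = lambda r: norm.pdf(r, -a, s)
p1 = lambda r: norm.pdf(r, +a, s)

def gallager_bound(rho):
    # For M = 2: Pe_0 <= int p0(r)^(1/(1+rho)) * p1(r)^(rho/(1+rho)) dr
    integrand = lambda r: p0(r) ** (1 / (1 + rho)) * p1(r) ** (rho / (1 + rho))
    return quad(integrand, -np.inf, np.inf)[0]

print("exact Q(a/s)        :", norm.sf(a / s))
print("Bhattacharyya, rho=1:", gallager_bound(1.0))      # equals exp(-a^2/(2 s^2)) here
print("best over rho       :", min(gallager_bound(r) for r in np.linspace(0.05, 1.0, 20)))
```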
1. Example of Gallager bound for M-ary orthogonal signals in AWGN.
The densities are
$$ p_i(r) = \prod_{k=0}^{M-1} \frac{1}{\sqrt{\pi N_0}} \exp\Big(-\frac{(r_k - \sqrt{E}\,\delta_{ik})^2}{N_0}\Big). $$
Substituting into the Gallager bound and factoring out the common term $\prod_k \frac{1}{\sqrt{\pi N_0}}\, e^{-r_k^2/N_0}$ gives
$$ P_{e,i} \le \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} \prod_{k=0}^{M-1} \frac{e^{-r_k^2/N_0}}{\sqrt{\pi N_0}}\; \exp\Big(\frac{2\sqrt{E}\,r_i - E}{N_0(1+\rho)}\Big) \bigg[\sum_{j\ne i} \exp\Big(\frac{2\sqrt{E}\,r_j - E}{N_0(1+\rho)}\Big)\bigg]^{\rho} dr. $$
Let
$$ g(z) = \exp\Big(\frac{1}{1+\rho}\sqrt{\frac{2E}{N_0}}\, z\Big), \qquad z_i = \frac{r_i}{\sqrt{N_0/2}}. $$
Then
$$ P_{e,i} \le e^{-E/N_0} \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} \prod_{j=0}^{M-1} \frac{e^{-z_j^2/2}}{\sqrt{2\pi}}\; g(z_i)\bigg[\sum_{j\ne i} g(z_j)\bigg]^{\rho} dz = e^{-E/N_0}\, E[g(Z_i)]\; E\bigg[\Big(\sum_{j\ne i} g(Z_j)\Big)^{\rho}\bigg], $$
where the $Z_j$ are independent standard Gaussian random variables. Now it is easy to show (by completing the square) that
$$ E[g(Z)] = \int_{-\infty}^{\infty} \frac{e^{-z^2/2}}{\sqrt{2\pi}} \exp\Big(\frac{1}{1+\rho}\sqrt{\frac{2E}{N_0}}\, z\Big) dz = \exp\Big(\frac{E}{N_0(1+\rho)^2}\Big). $$
Let $f(x) = x^{\rho}$ where $0 \le \rho \le 1$. Then $f(x)$ is a concave function and thus by Jensen's inequality we have that $E[f(X)] \le f(E[X])$. Thus
$$ E\bigg[\Big(\sum_{j\ne i} g(Z_j)\Big)^{\rho}\bigg] \le \bigg(\sum_{j\ne i} E[g(Z_j)]\bigg)^{\rho} = \big((M-1)\, E[g(Z)]\big)^{\rho}. $$
Thus
$$ P_{e,i} \le e^{-E/N_0}\, E[g(Z)]\,\big((M-1)\,E[g(Z)]\big)^{\rho} = (M-1)^{\rho} \exp\Big(-\frac{E}{N_0} + \frac{(1+\rho)E}{N_0(1+\rho)^2}\Big) = (M-1)^{\rho} \exp\Big(-\frac{\rho}{1+\rho}\frac{E}{N_0}\Big) \le \exp\Big(\rho\ln M - \frac{\rho}{1+\rho}\frac{E}{N_0}\Big). $$
Now we would like to minimize the bound over the parameter $\rho$, keeping in mind that the bound is only valid for $0 \le \rho \le 1$. Let $a = \frac{E}{N_0}$, $b = \ln M$, and
$$ f(\rho) = \rho b - \frac{\rho}{1+\rho}\, a. $$
Then
$$ f'(\rho) = b - \frac{a}{(1+\rho)^2} = 0 \quad\Longrightarrow\quad \rho = \sqrt{\frac{a}{b}} - 1. $$
Since $0 \le \rho \le 1$, the minimum occurs at an interior point of the interval $(0,1)$ if
$$ \frac{1}{4} < \frac{\ln M}{E/N_0} < 1, $$
in which case the bound becomes
$$ P_{e,i} \le \exp\bigg(-\Big(\sqrt{\tfrac{E}{N_0}} - \sqrt{\ln M}\Big)^2\bigg). $$
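A quick numerical sanity check of this minimization is sketched below, with assumed values of $E/N_0$ and $M$ chosen so that the interior-point case applies.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Check that minimizing f(rho) = rho*b - rho/(1+rho)*a over 0 <= rho <= 1 gives
# rho* = sqrt(a/b) - 1 and minimum -(sqrt(a) - sqrt(b))^2 when 1/4 < b/a < 1,
# where a = E/N0 and b = ln M.  Values below are assumed for illustration.
a, b = 6.0, np.log(8)              # E/N0 = 6, M = 8 -> b/a is in (1/4, 1)
f = lambda rho: rho * b - rho / (1 + rho) * a

res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
print("numerical rho*:", res.x, "  analytic:", np.sqrt(a / b) - 1)
print("numerical min :", res.fun, "  analytic:", -(np.sqrt(a) - np.sqrt(b)) ** 2)
```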
Figure 3.2: Comparison of Gallager bound and exact error probability for orthogonal signals ($M = 8$, $64$, $128$), plotted as $P_{e,s}$ versus $E_b/N_0$ (dB). In each group the upper curve is the bound and the lower curve is the exact error probability.
If $\frac{\ln M}{E/N_0} \le \frac{1}{4}$ then $\rho_{\min} = 1$, in which case the upper bound becomes $P_{e,i} \le \exp\big(\ln M - \frac{E}{2N_0}\big)$. If $\frac{\ln M}{E/N_0} \ge 1$ then $\rho_{\min} = 0$, in which case the upper bound becomes $P_{e,i} \le 1$. In summary, the Gallager bound for M orthogonal signals in white Gaussian noise is
$$ P_{e,i} \le \begin{cases} \exp\big(-\big(\tfrac{E}{2N_0} - \ln M\big)\big), & \tfrac{E}{N_0} \ge 4\ln M \\[4pt] \exp\Big(-\big(\sqrt{\tfrac{E}{N_0}} - \sqrt{\ln M}\big)^2\Big), & \ln M \le \tfrac{E}{N_0} \le 4\ln M \\[4pt] 1, & \tfrac{E}{N_0} \le \ln M. \end{cases} $$
Normally a communication engineer is more concerned with the energy transmitted per bit rather than the energy transmitted per signal, E. If we let $E_b$ be the energy transmitted per bit, then these are related as follows:
$$ E_b = \frac{E}{\log_2 M}. $$
Thus the bound on the error probability can be expressed in terms of the energy transmitted per bit as
$$ P_{e,i} \le \begin{cases} \mathrm{exp2}\Big[-\log_2 M\,\Big(\tfrac{E_b}{2N_0\ln 2} - 1\Big)\Big], & \tfrac{E_b}{N_0} \ge 4\ln 2 \\[4pt] \mathrm{exp2}\Big[-\log_2 M\,\Big(\sqrt{\tfrac{E_b}{N_0\ln 2}} - 1\Big)^2\Big], & \ln 2 \le \tfrac{E_b}{N_0} \le 4\ln 2 \\[4pt] 1, & \tfrac{E_b}{N_0} \le \ln 2, \end{cases} $$
where $\mathrm{exp2}(x)$ denotes $2^x$. Note that as $M \to \infty$, $P_e \to 0$ if $E_b/N_0 > \ln 2 = -1.59$ dB. Below we plot the exact error probability and the Gallager bound for M orthogonal signals for $M = 8, 64, 512$.
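A sketch of how curves like those in Figure 3.2 can be generated is given below: the piecewise Gallager bound and the exact single-integral expression are both functions of $E_b/N_0$ and $M$ (the specific values of $M$ and the SNR points in the sketch are assumptions for illustration).

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def gallager_bound_orthogonal(M, EbN0):
    """Piecewise Gallager bound for M orthogonal signals; EbN0 in linear units."""
    E_N0 = EbN0 * np.log2(M)                      # energy per signal over N0
    lnM = np.log(M)
    if E_N0 >= 4 * lnM:
        return np.exp(lnM - E_N0 / 2)
    if E_N0 >= lnM:
        return np.exp(-(np.sqrt(E_N0) - np.sqrt(lnM)) ** 2)
    return 1.0

def pe_exact_orthogonal(M, EbN0):
    """Exact symbol error probability via the single-integral formula."""
    a = np.sqrt(2 * EbN0 * np.log2(M))            # sqrt(2E/N0)
    f = lambda u: (M - 1) * norm.cdf(u - a) * norm.cdf(u) ** (M - 2) * norm.pdf(u)
    return quad(f, -np.inf, np.inf)[0]

for EbN0_dB in (0, 4, 8):                         # assumed illustrative SNR points
    EbN0 = 10 ** (EbN0_dB / 10)
    print(EbN0_dB, "dB:", pe_exact_orthogonal(64, EbN0), "<=", gallager_bound_orthogonal(64, EbN0))
```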
3. Bit error probability
So far we have examined the symbol error probability for orthogonal signals. Usually the number of such signals is a power of 2, e.g. 4, 8, 16, 32, .... If so, then each transmission of a signal carries $\log_2 M$ bits of information. In this case a communication engineer is usually interested in the bit error probability as opposed to the symbol error probability. Let $d(s_i, s_j)$ be the (Euclidean) distance between $s_i$ and $s_j$, i.e.
$$ d^2(s_i, s_j) \triangleq \int_{-\infty}^{\infty} \big(s_i(t) - s_j(t)\big)^2\,dt = \sum_{l=1}^{N} (s_{il} - s_{jl})^2. $$
Now consider any signal set for which the distance between every pair of signals is the same. Orthogonal signal sets with equal energy satisfy this condition. Let $k = \log_2 M$. If $s_i$ is transmitted there are $M-1$ other signals to which an error can be made. The number of signals which cause an error in $i$ of the $k$ bits is $\binom{k}{i}$. Since all signals are the same distance from $s_i$, the conditional probability that a symbol error causes $i$ bits to be in error is
$$ \frac{\binom{k}{i}}{M-1}. $$
So the average number of bit errors given a symbol error is
$$ \sum_{i=1}^{k} i\,\frac{\binom{k}{i}}{M-1} = \frac{k\,2^{k-1}}{M-1}. $$
So the probability of bit error given a symbol error is
$$ \frac{1}{k}\,\frac{k\,2^{k-1}}{M-1} = \frac{2^{k-1}}{2^k - 1}. $$
So
$$ P_{b,i} = \frac{2^{k-1}}{2^k - 1}\, P_{e,i}, $$
and this is true for any equidistant, equienergy signal set.
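A one-line helper makes the conversion explicit (a small sketch; the assumption is that $M$ is a power of two and the signal set is equidistant and equienergy, as above).

```python
def bit_error_from_symbol_error(pe_symbol, M):
    """Pb = 2^(k-1)/(2^k - 1) * Pe for an equidistant, equienergy set; k = log2(M)."""
    k = M.bit_length() - 1               # assumes M is a power of two
    return (2 ** (k - 1)) / (2 ** k - 1) * pe_symbol

print(bit_error_from_symbol_error(1e-3, 8))   # 4/7 * 1e-3 for M = 8
```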
4. Union Bound
Assume $\pi_i = \frac{1}{M}$. Let
$$ R_i = \{ r : p_i(r) \ge p_j(r),\ j = 0,\ldots,M-1,\ j \ne i \}, $$
$$ \bar R_i = \{ r : p_i(r) < p_j(r) \ \text{for some } j \ne i \} \subseteq \bigcup_{j\ne i} R_{i,j}, \qquad R_{i,j} \triangleq \{ r : p_i(r) \le p_j(r) \}. $$
Then
$$ P_{e,i} = \Pr(\bar R_i \mid H_i) \le \Pr\Big( \bigcup_{j\ne i} R_{i,j} \,\Big|\, H_i \Big) \le \sum_{j\ne i} P(R_{i,j} \mid H_i), $$
where
$$ P(R_{i,j} \mid H_i) = P\Big( \frac{p_j(r)}{p_i(r)} \ge 1 \,\Big|\, H_i \Big). $$
This is the union bound.
We now consider the bound for an arbitrary signal set in additive white Gaussian noise. Let
$$ s_i(t) = \sum_{l=1}^{N} s_{il}\,\phi_l(t), \qquad 0 \le i \le M-1. $$
For additive white Gaussian noise
$$ p_i(r) = \prod_{l=1}^{N} \frac{1}{\sqrt{\pi N_0}} \exp\Big(-\frac{(r_l - s_{il})^2}{N_0}\Big), $$
so that
$$ \frac{p_j(r)}{p_i(r)} = \prod_{l=1}^{N} \exp\Big(-\frac{(r_l - s_{jl})^2 - (r_l - s_{il})^2}{N_0}\Big) = \exp\Big( \frac{2\,\langle r, s_j - s_i\rangle}{N_0} - \frac{E_j - E_i}{N_0} \Big), $$
where $\langle r, s_i - s_j\rangle = \sum_{l=1}^{N} r_l (s_{il} - s_{jl})$ and $E_k = \sum_{l=1}^{N} s_{kl}^2$ for $0 \le k \le M-1$. Thus
$$ P(R_{i,j} \mid H_i) = P\Big( \langle r, s_i - s_j\rangle \le \frac{E_i - E_j}{2} \,\Big|\, H_i \Big). $$
To do this calculation we need to calculate the statistics of the random variable $\langle r, s_i - s_j\rangle$. The mean and variance are calculated as follows:
$$ E[\langle r, s_i - s_j\rangle \mid H_i] = \langle s_i, s_i - s_j\rangle = E_i - \langle s_i, s_j\rangle, \qquad \mathrm{Var}[\langle r, s_i - s_j\rangle \mid H_i] = \mathrm{Var}[\langle n, s_i - s_j\rangle] = \frac{N_0}{2}\,\|s_i - s_j\|^2. $$
Also $\langle r, s_i - s_j\rangle$ is a Gaussian random variable. Thus
$$ P\Big( \langle r, s_i - s_j\rangle \le \frac{E_i - E_j}{2} \,\Big|\, H_i \Big) = \Phi\left( \frac{\frac{E_i - E_j}{2} - \big(E_i - \langle s_i, s_j\rangle\big)}{\sqrt{\frac{N_0}{2}}\,\|s_i - s_j\|} \right) = Q\left( \frac{\|s_i - s_j\|}{\sqrt{2N_0}} \right). $$
Thus the union bound on the error probability is given as
$$ P_{e,i} \le \sum_{j\ne i} Q\left( \frac{\|s_j - s_i\|}{\sqrt{2N_0}} \right). $$
Note that $\|s_i - s_j\|^2 = d_E^2(s_i, s_j)$, i.e. the square of the Euclidean distance.
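In practice the union bound is usually evaluated from the distance spectrum of the signal set. The sketch below adds one Q-function term per neighbor; the example values (M equal-energy orthogonal signals, every pair at distance $\sqrt{2E}$) are an assumed illustration.

```python
import numpy as np
from scipy.stats import norm

def union_bound(distances, N0):
    """Union bound: sum of Q(||s_i - s_j|| / sqrt(2 N0)) over the given distances."""
    d = np.asarray(distances, dtype=float)
    return np.sum(norm.sf(d / np.sqrt(2 * N0)))    # Q(x) = norm.sf(x)

# Assumed example: M = 4 orthogonal signals with energy E; all pairs at distance sqrt(2E),
# so the bound equals (M-1) Q(sqrt(E/N0)).
E, N0 = 4.0, 1.0
print(union_bound([np.sqrt(2 * E)] * 3, N0))       # 3 * Q(2)
```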
We now use the following to derive the Union-Bhattacharyya bound. This is an alternate way of obtaining
this bound. We could have started with the Union-Bhattacharyya bound derived from the Gallager bound, but we
would get the same answer.
Fact: $Q(x) \le \frac{1}{2}\,e^{-x^2/2}$ for $x \ge 0$. (To prove this, let $X_1$ and $X_2$ be independent Gaussian random variables with mean 0 and variance 1. Then show
$$ Q^2(x) = P(X_1 \ge x,\ X_2 \ge x) \le \frac{1}{4}\, P\big(X_1^2 + X_2^2 \ge 2x^2\big) = \frac{1}{4}\, e^{-x^2}. $$
Use the fact that $\sqrt{X_1^2 + X_2^2}$ has a Rayleigh density; see page 29 of Proakis.)
Using this fact leads to the bound
$$ P_{e,i} \le \frac{1}{2} \sum_{j\ne i} \exp\Big( -\frac{\|s_i - s_j\|^2}{4N_0} \Big). $$
This is the Union-Bhattacharyya bound for an additive white Gaussian noise channel.
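A two-line numerical check of the fact, and of how a single Union-Bhattacharyya term compares with the corresponding union-bound term, is sketched below (the distance and noise values are assumptions for illustration).

```python
import numpy as np
from scipy.stats import norm

# Check Q(x) <= (1/2) exp(-x^2/2) on a grid of points.
x = np.linspace(0.0, 6.0, 25)
print(np.all(norm.sf(x) <= 0.5 * np.exp(-x ** 2 / 2)))      # True

# Compare one union-bound term with one Union-Bhattacharyya term (assumed d and N0).
d, N0 = 3.0, 1.0
print(norm.sf(d / np.sqrt(2 * N0)), 0.5 * np.exp(-d ** 2 / (4 * N0)))   # UB term is looser
```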
5. Random Coding
Now consider $2^{NM}$ communication systems corresponding to all possible signal sets where
$$ s_{ij} \in \{ +\sqrt{E},\, -\sqrt{E} \}, \qquad \|s_i\|^2 = NE, \qquad 0 \le i \le M-1. $$
Consider the average error probability, averaged over all possible selections of signal sets.
For example, let $N = 3$, $M = 2$. There are $2^{3\cdot 2} = 2^6 = 64$ possible sets of 2 signals, with each signal a linear combination of three orthogonal functions with the coefficients required to be one of two values:
Set number 1: $s_0(t) = \sqrt{E}\,\phi_1(t) + \sqrt{E}\,\phi_2(t) + \sqrt{E}\,\phi_3(t)$, $s_1(t) = \sqrt{E}\,\phi_1(t) + \sqrt{E}\,\phi_2(t) + \sqrt{E}\,\phi_3(t)$;
Set number 2: $s_0(t) = \sqrt{E}\,\phi_1(t) + \sqrt{E}\,\phi_2(t) + \sqrt{E}\,\phi_3(t)$, $s_1(t) = \sqrt{E}\,\phi_1(t) + \sqrt{E}\,\phi_2(t) - \sqrt{E}\,\phi_3(t)$;
$\vdots$
Set number 64: $s_0(t) = -\sqrt{E}\,\phi_1(t) - \sqrt{E}\,\phi_2(t) - \sqrt{E}\,\phi_3(t)$, $s_1(t) = -\sqrt{E}\,\phi_1(t) - \sqrt{E}\,\phi_2(t) - \sqrt{E}\,\phi_3(t)$.
Let $P_{e,i}(k)$ be the error probability of signal set k given $H_i$. Then
$$ \bar P_{e,i} = \frac{1}{2^{NM}} \sum_{k=1}^{2^{NM}} P_{e,i}(k). $$
If $\bar P_{e,i} \le \alpha$ then at least one of the $2^{NM}$ signal sets must have $P_{e,i}(k) \le \alpha$ (otherwise $P_{e,i}(k) > \alpha$ for all k would imply $\bar P_{e,i} > \alpha$; contradiction). In other words, there exists a signal set with $P_{e,i} \le \alpha$. This is known as the random coding argument. Let $s_{ij}$, $0 \le i \le M-1$, $1 \le j \le N$, be independent identically distributed random variables with $P(s_{ij} = \sqrt{E}) = P(s_{ij} = -\sqrt{E}) = \frac{1}{2}$, so that $\bar P_{e,i} = E[P_{e,i}(s)]$, where the expectation is with respect to the random variables s.
Applying the Union-Bhattacharyya bound,
$$ \bar P_{e,i} = E[P_{e,i}(s)] \le \sum_{j\ne i} E\Big[ \exp\Big( -\frac{\|s_i - s_j\|^2}{4N_0} \Big)\Big]. $$
Let $X_{ij} = \|s_i - s_j\|^2 = \sum_{l=1}^{N} (s_{il} - s_{jl})^2$. Then
$$ P(X_{ij} = 4Em) = P(s_i \text{ and } s_j \text{ differ in } m \text{ places out of } N) = \binom{N}{m} 2^{-N}, $$
since $P(s_{il} = s_{jl}) = P(s_{il} \ne s_{jl}) = \frac{1}{2}$. So
$$ E\Big[\exp\Big(-\frac{X_{ij}}{4N_0}\Big)\Big] = \sum_{m=0}^{N} e^{-4Em/(4N_0)} \binom{N}{m} 2^{-N} = \Big( \frac{1 + e^{-E/N_0}}{2} \Big)^{N} = \mathrm{exp2}\Big[ -N\Big(1 - \log_2\big(1 + e^{-E/N_0}\big)\Big)\Big]. $$
Let
$$ R_0 = 1 - \log_2\big(1 + e^{-E/N_0}\big); $$
then
$$ \bar P_{e,i} \le \sum_{j\ne i} 2^{-NR_0} = (M-1)\,2^{-NR_0} \le M\,2^{-NR_0} = 2^{-N(R_0 - R)}, $$
where $R = \frac{\log_2 M}{N}$ is the number of bits transmitted per dimension and E is the signal energy per dimension. We have shown that there exists a signal set for which the average value of the error probability for the i-th signal is small. Thus we have shown that as N goes to ∞, the error probability given $s_i$ was transmitted goes to zero if the rate is less than the cutoff rate $R_0$. This, however, does not imply that there exists a code $s_0,\ldots,s_{M-1}$ such that $P_{e,0},\ldots,P_{e,M-1}$ are simultaneously small. It is possible that $P_{e,i}$ is small for some code for which $P_{e,j}$ is large. We now show that we can make each of the error probabilities small simultaneously. First choose a code with $M = 2\cdot 2^{RN}$ codewords for which the average error probability is less than, say, $\varepsilon_N/2$ for large N. If more than $2^{NR}$ of these codewords had $P_{e,i} > \varepsilon_N$, then the average error probability would be greater than $\varepsilon_N/2$, a contradiction. Thus at least $M/2 = 2^{NR}$ of the codewords must have $P_{e,i} \le \varepsilon_N$. So delete the codewords that have $P_{e,i} > \varepsilon_N$ (fewer than half). We obtain a code with (at least) $2^{NR}$ codewords with $P_{e,i} \to 0$ as $N \to \infty$ for $R < R_0$. Thus we have proved the following.

Theorem: There exists a signal set with M signals in N dimensions with $P_e \le 2^{-N(R_0 - R)}$, so that $P_e \to 0$ as $N \to \infty$ provided $R < R_0$.

Note: E is the energy per dimension. Each signal then has energy NE and is transmitting $\log_2 M$ bits of information, so that $E_b = \frac{NE}{\log_2 M} = \frac{E}{R}$ is the energy per bit of information.

From the theorem, reliable communication ($P_e \to 0$) is possible provided $R < R_0$, i.e.
$$ R < 1 - \log_2\big(1 + e^{-E/N_0}\big) = 1 - \log_2\big(1 + e^{-E_b R/N_0}\big), $$
which is equivalent to
$$ e^{-E_b R/N_0} < 2^{1-R} - 1, \qquad\text{i.e.}\qquad \frac{E_b}{N_0} > \frac{1}{R}\ln\Big(\frac{1}{2^{1-R} - 1}\Big). $$
For $R \to 0$ this threshold approaches $2\ln 2$, so the theorem guarantees the existence of signals with $P_e \to 0$ if $E_b/N_0 > 2\ln 2$.

Note: M orthogonal signals have $P_e \to 0$ if $E_b/N_0 > \ln 2$. The rate of orthogonal signals is
$$ R = \frac{\log_2 M}{N} = \frac{\log_2 M}{M} \to 0 \ \text{as } M \to \infty, $$
and $P_e \to 0$ as $M \to \infty$.
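The curves in Figures 3.3 through 3.6 are based on these relations. A minimal sketch (assuming the cutoff-rate parameterization above, with energy per dimension $E = E_b R$) is given below; the rates, block length, and SNR point are illustrative assumptions.

```python
import numpy as np

def cutoff_rate(EbN0, R):
    """R0 = 1 - log2(1 + exp(-E/N0)) with energy per dimension E = Eb * R."""
    return 1.0 - np.log2(1.0 + np.exp(-EbN0 * R))

def pe_cutoff_bound(EbN0, R, N):
    """Random-coding bound Pe <= 2^(-N (R0 - R))."""
    return 2.0 ** (-N * (cutoff_rate(EbN0, R) - R))

def required_EbN0(R):
    """Eb/N0 needed so that R0 = R (edge of the achievable region, as in Fig. 3.3)."""
    return -np.log(2.0 ** (1 - R) - 1.0) / R

for R in (0.1, 0.5, 0.9):
    print(f"R = {R}: Eb/N0 >= {10 * np.log10(required_EbN0(R)):.2f} dB,",
          f"Pe bound at 5 dB, N = 500: {pe_cutoff_bound(10 ** 0.5, R, 500):.2e}")
```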
Figure 3.3: Cutoff Rate for Binary Input-Continuous Output Channel (required $E_b/N_0$ in dB, as a function of code rate, for the achievable region).
Figure 3.4: Error Probabilities Based on Cutoff Rate for Binary Input-Continuous Output Channel for Rate 1/2 codes ($P_e$ versus $E_b/N_0$ in dB for block lengths $n = 50$, $100$, $200$, $500$, $1000$, $10000$).
Figure 3.5: Error Probabilities Based on Cutoff Rate for Binary Input-Continuous Output Channel for Rate 1/8 codes ($P_e$ versus $E_b/N_0$ in dB for block lengths $n = 50$, $100$, $200$, $500$, $1000$).
Figure 3.6: Error Probabilities Based on Cutoff Rate for Binary Input-Continuous Output Channel for Rate 3/4 codes ($P_e$ versus $E_b/N_0$ in dB for block lengths $n = 100$, $200$, $500$, $1000$, $10000$).
1. Example of Gallager bound for M-ary signals in AWGN.
In this section we evaluate the Gallager bound for an arbitrary signal set in an additive white Gaussian noise channel. As usual, assume the transmitted signal set has the form
$$ s_i(t) = \sum_{j=0}^{N-1} \mu_{ij}\,\phi_j(t), \qquad i = 0, 1, \ldots, M-1. $$
The optimal receiver does a correlation with each of the orthonormal functions to produce the decision statistics $r_0,\ldots,r_{N-1}$. The conditional density function of r given signal $s_i(t)$ transmitted is given by
$$ p_i(r) = \prod_{k=1}^{N} \frac{1}{\sqrt{\pi N_0}} \exp\Big(-\frac{(r_k - \mu_{ik})^2}{N_0}\Big). $$
If we substitute this into the general form of the Gallager bound and factor out the common term $\prod_k \frac{1}{\sqrt{\pi N_0}}\, e^{-r_k^2/N_0}$, we obtain
$$ P_{e,i} \le \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} \prod_{k=1}^{N} \frac{e^{-r_k^2/N_0}}{\sqrt{\pi N_0}}\; \exp\Big(\frac{2 r_k\mu_{ik} - \mu_{ik}^2}{N_0(1+\rho)}\Big) \bigg[\sum_{j\ne i} \prod_{k=1}^{N} \exp\Big(\frac{2 r_k\mu_{jk} - \mu_{jk}^2}{N_0(1+\rho)}\Big)\bigg]^{\rho} dr. $$
Completing the square in each $r_k$, the terms outside the bracket combine into a Gaussian density with mean $\mu_{ik}/(1+\rho)$ and variance $N_0/2$, times the factor $\exp\big(-\rho\,\|\mu_i\|^2/(N_0(1+\rho)^2)\big)$. Taking the expectation of the bracketed term with respect to this density and applying Jensen's inequality (as in the orthogonal-signal case, since $x^{\rho}$ is concave for $0 \le \rho \le 1$) gives
$$ P_{e,i} \le \exp\Big(-\frac{\rho\,\|\mu_i\|^2}{N_0(1+\rho)^2}\Big) \bigg[\sum_{j\ne i} \exp\Big(-\frac{\|\mu_j\|^2}{N_0(1+\rho)} + \frac{\|\mu_j\|^2 + 2\,\mu_i\cdot\mu_j}{N_0(1+\rho)^2}\Big)\bigg]^{\rho} = \bigg[\sum_{j\ne i} \exp\Big(\frac{(1-\rho)\,\|\mu_j\|^2 - d_E^2(\mu_i,\mu_j)}{N_0(1+\rho)^2}\Big)\bigg]^{\rho}, $$
where $d_E^2(\mu_i,\mu_j) = \|\mu_i - \mu_j\|^2$. When the signals are all orthogonal to each other, $d_E^2 = 2E$ for $i \ne j$ and $\|\mu_j\|^2 = E$, and the bound becomes
$$ P_{e,i} \le \bigg[\sum_{j\ne i} \exp\Big(\frac{(1-\rho)E - 2E}{N_0(1+\rho)^2}\Big)\bigg]^{\rho} = \Big[(M-1)\exp\Big(-\frac{E}{N_0(1+\rho)}\Big)\Big]^{\rho} = (M-1)^{\rho}\exp\Big(-\frac{\rho}{1+\rho}\frac{E}{N_0}\Big). $$
Figure 3.7: Comparison of Gallager Bound, Union Bound and Union-Bhattacharyya Bound for the Hamming Code with BPSK Modulation ($P_e$ versus $E_b/N_0$ in dB).
This is identical to the previous expression.
Now we consider a couple of different signal sets. The first signal set has 16 signals in seven dimensions. The energy in each dimension is E, so the total energy transmitted is 7E. The energy transmitted per information bit is $E_b = 7E/4$. The geometry of the signal set is such that for any signal there are seven other signals at Euclidean distance $\sqrt{12E}$, seven other signals at Euclidean distance $\sqrt{16E}$, and one other signal at distance $\sqrt{28E}$. All signals have energy 7E. (This is called the Hamming code.) The fact that the signal set is geometrically uniform is due to the linearity of the code. We plot the Gallager bound for $\rho = 0.1, 0.2, \ldots, 1.0$. The Union-Bhattacharyya bound is the Gallager bound with $\rho = 1.0$. The second signal set has 256 signals in 16 dimensions with 112 signals at distance $\sqrt{24E}$, 30 signals at distance $\sqrt{32E}$, 112 signals at distance $\sqrt{40E}$, and 1 signal at distance $\sqrt{64E}$. In this case $E_b = 2E$. As can be seen from the figures, the union bound is the tightest bound except at very low signal-to-noise ratios, where the Gallager bound stays below 1. At reasonable signal-to-noise ratios the optimum $\rho$ in the Gallager bound is 1 and thus it reduces to the Union-Bhattacharyya bound.
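A sketch of how the Gallager bound can be evaluated for such a signal set from its distance spectrum is given below. It uses the closed form derived above for equal-energy signals in AWGN, with the Hamming-code distance spectrum from the text; the $E_b/N_0$ operating point is an assumed illustration.

```python
import numpy as np

def gallager_bound(rho, Es, spectrum, N0):
    """Pe_i <= [ sum_j exp( ((1-rho)*Es - d_ij^2) / (N0 (1+rho)^2) ) ]^rho
    for an equal-energy signal set; spectrum is a list of (squared distance, count)."""
    total = sum(cnt * np.exp(((1 - rho) * Es - d2) / (N0 * (1 + rho) ** 2))
                for d2, cnt in spectrum)
    return total ** rho

# Hamming-code signal set from the text: 7 dimensions, per-dimension energy E,
# signal energy Es = 7E, Eb = 7E/4; neighbors at squared distances 12E, 16E, 28E.
EbN0_dB = 4.0                       # assumed illustrative operating point
N0 = 1.0
Eb = 10 ** (EbN0_dB / 10) * N0
E = 4 * Eb / 7                      # energy per dimension
spectrum = [(12 * E, 7), (16 * E, 7), (28 * E, 1)]

rhos = [k / 10 for k in range(1, 11)]           # rho = 0.1, 0.2, ..., 1.0
bounds = {rho: gallager_bound(rho, 7 * E, spectrum, N0) for rho in rhos}
print("Union-Bhattacharyya (rho = 1):", bounds[1.0])
print("best over rho:", min(bounds.values()))
```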
6. Problems
1. Using L'Hospital's rule on the log of $\Phi^{M-1}\big(x + \sqrt{2E/N_0}\big)$, show that if $E = E_b\log_2 M$ then
$$ \lim_{M\to\infty} \Phi^{M-1}\Big(x + \sqrt{\tfrac{2E}{N_0}}\Big) = \begin{cases} 0, & E_b/N_0 < \ln 2 \\ 1, & E_b/N_0 > \ln 2, \end{cases} \tag{3.1} $$
Figure 3.8: Comparison of Gallager Bound, Union Bound and Union-Bhattacharyya Bound for the Nordstrom-Robinson code with BPSK Modulation ($P_e$ versus $E_b/N_0$ in dB).
and consequently $\lim_{M\to\infty} P_e$ is 0 if $E_b/N_0 > \ln 2$ and 1 if $E_b/N_0 < \ln 2$, where $P_e$ is the error probability for M orthogonal signals.
2. (a) Show that for any equienergy, equidistant (real) signal set, $\langle s_i, s_j\rangle$ is a constant for $i \ne j$. (Note: equienergy implies $\|s_i\|^2$ is a constant and equidistant implies $\|s_i - s_j\|$ is a constant.)
(b) For any equienergy signal set show that
$$ \rho_{\text{ave}} \triangleq \frac{1}{M(M-1)} \sum_{i=0}^{M-1}\ \sum_{j=0,\, j\ne i}^{M-1} \rho_{ij} \ \ge\ -\frac{1}{M-1}, $$
where
$$ \rho_{ij} = \frac{\langle s_i, s_j\rangle}{\|s_i\|\,\|s_j\|} = \frac{\langle s_i, s_j\rangle}{E}. $$
3. ($R_0$ coding theorem for discrete memoryless channels)
Consider the discrete memoryless channel (DMC) with input alphabet $\mathcal{A} = \{a_1, a_2, \ldots, a_{|\mathcal{A}|}\}$ (with $|\mathcal{A}|$, the number of letters in $\mathcal{A}$, finite) and output alphabet $\mathcal{B} = \{b_1, b_2, \ldots, b_{|\mathcal{B}|}\}$ (with $|\mathcal{B}|$, the number of letters in $\mathcal{B}$, finite). As in the Gaussian noise channel, the channel is characterized by a set of "transition probabilities" $p(b\mid a)$, $a \in \mathcal{A}$, $b \in \mathcal{B}$, such that if we transmit a signal $s_i(t)$ where
$$ s_i(t) = \sum_{l=1}^{N} s_{i,l}\,\phi_l(t) $$
with $s_{i,l} \in \mathcal{A}$, then the received signal has the form
$$ r(t) = \sum_{l=1}^{N} r_l\,\phi_l(t) $$
with $P(r_l = b \mid s_{i,l} = a) = p(b\mid a)$ for $a \in \mathcal{A}$, $b \in \mathcal{B}$, and
$$ p(r_1,\ldots,r_N \mid s_{i,1},\ldots,s_{i,N}) = \prod_{l=1}^{N} p(r_l \mid s_{i,l}). \tag{1} $$
Now we come to the $R_0$ coding theorem for a discrete (finite alphabet) memoryless (equation (1) is satisfied) channel (DMC).
Prove there exist M signals in N dimensions such that
$$ P_e = \frac{1}{M}\sum_{i=0}^{M-1} P_{e,i} \le 2^{-N(R_0 - R)}, $$
where
$$ R = \frac{\log_2 M}{N}, \qquad R_0 = -\log_2 J_0, \qquad J_0 = \min_{p(x)} E[J(X_1, X_2)], \tag{2} $$
$$ J(a_1, a_2) = \sum_{b\in\mathcal{B}} \sqrt{p(b\mid a_1)\,p(b\mid a_2)}, $$
and in (2) $X_1$ and $X_2$ are independent, identically distributed random variables with common distribution $p(x)$.
Step 1: Let $M = 2$ and let
$$ s_0 = (s_{0,1},\ldots,s_{0,N}) \qquad\text{and}\qquad s_1 = (s_{1,1},\ldots,s_{1,N}). $$
The decoder will not decide $s_0$ if $p(r\mid s_1) \ge p(r\mid s_0)$. Let
$$ R_1 = \{ r : p(r\mid s_1) \ge p(r\mid s_0) \} \qquad\text{and}\qquad R_0 = \{ r : p(r\mid s_0) \ge p(r\mid s_1) \}. $$
(Note that $R_0$ and $R_1$ may not be disjoint.) Show that, for $i = 0,1$ and $j \ne i$,
$$ P_{e,i} \le \sum_{r\in R_j} p(r\mid s_i) \le \sum_{\text{all } r} \sqrt{p(r\mid s_0)\,p(r\mid s_1)} = \prod_{l=1}^{N} \sum_{r_l\in\mathcal{B}} \sqrt{p(r_l\mid s_{0,l})\,p(r_l\mid s_{1,l})} = \prod_{l=1}^{N} J(s_{0,l}, s_{1,l}). $$
Step 2: Apply the union bound to obtain, for M signals (codewords),
$$ P_{e,i} \le \sum_{j\ne i} \prod_{l=1}^{N} J(s_{i,l}, s_{j,l}). $$
Step 3: Average $P_{e,i}$ over all possible signal sets where the signals are chosen independently according to the distribution that achieves $J_0$ (i.e. that distribution $p(x)$ on $\mathcal{A}$ such that $J_0 = E[J(X_1,X_2)]$); treat $\{s_{i,j} : 0 \le i \le M-1,\ 1 \le j \le N\}$ as i.i.d. random variables with distribution $p(x)$, to show that
$$ \bar P_{e,i} \triangleq E[P_{e,i}] \le M\,2^{-NR_0}. $$
Step 4: Complete the proof.
4. Show for a binary symmetric channel, defined as $\mathcal{A} = \mathcal{B} = \{0, 1\}$ with $p(b\mid a) = 1-p$ for $b = a$ and $p(b\mid a) = p$ for $b \ne a$, that
$$ R_0 = 1 - \log_2\Big(1 + 2\sqrt{p(1-p)}\Big). $$
5. A set of 16 signals is constructed in 7 dimensions using only two possible coefficients, i.e. $s_{ij} \in \{+\sqrt{E}, -\sqrt{E}\}$. Let $A_k = |\{(i,j) : \|s_i - s_j\|^2 = 4Ek\}|$, i.e. $A_k$ is the number of signal pairs with squared distance $4Ek$. The signals are chosen so that
$$ A_k = \begin{cases} 16, & k = 0 \\ 0, & k = 1, 2 \\ 112, & k = 3, 4 \\ 0, & k = 5, 6 \\ 16, & k = 7. \end{cases} $$
Find the union bound and the Union-Bhattacharyya bound on the error probability of the optimum receiver in additive white Gaussian noise with two-sided power spectral density $N_0/2$.
6. A modulator uses two orthonormal signals ($\phi_1(t)$ and $\phi_2(t)$) to transmit 3 bits of information (8 possible equally likely signals) over an additive white Gaussian noise channel with power spectral density $N_0/2$. The signals are given as
s1 t s2 t s3 t s4 t s5 t s6 t s7 t s8 t 1φ1 t y 3 φ2 t y 3 φ2 t E 1φ1 t E 2φ1 t y φ2 t E 0φ1 t yφ2 t y φ2 t E 2φ1 t y 3 φ2 t E 1φ1 t E 1φ1 t y 3 φ2 t E 0φ1 t y 2 3 φ2 t E
(a) Determine the optimum value of the parameter y to minimize the average signal power transmitted. Draw the optimum decision regions for the signal set (in two dimensions). (b) Determine the union bound on the average error probability in terms of the ratio of energy per information bit $E_b$ to noise density $N_0$. (c) Can you tighten this bound?
(The accompanying figure sketches the eight signal points in the ($\phi_1(t)$, $\phi_2(t)$) plane, with the parameter y marked on the $\phi_2$ axis.)
7. Consider an additive (nonwhite) Gaussian noise channel with one of two signals transmitted. Assume the noise has covariance function $K(s,t)$. Using the Bhattacharyya bound, show that the error probability when transmitting one of two signals ($s_0(t)$ or $s_1(t)$) can be bounded by
$$ P_e \le e^{-\|K^{-1/2}(s_0 - s_1)\|^2/8}. $$
If the noise is white, what does the bound become?
8. Consider a Poisson channel with one of 4 signals transmitted. Let the signals be as shown below. Assume that when the signal is present the intensity of the photon process is $\lambda_1$, and when the signal is not present the intensity is $\lambda_0$. That is, the received signal during the interval $[0, T]$ is Poisson with parameter $\lambda_1$ if the laser is on and $\lambda_0$ if the laser is off. Find the optimal receiver for minimizing the probability of error for a signal (as opposed to a bit). Find an upper bound on the error probability.
(The four signals are intensity pulses of height $\lambda_1$: $s_1(t)$ is on for $t \in [0, 4T]$, $s_2(t)$ for $t \in [T, 5T]$, $s_3(t)$ for $t \in [2T, 6T]$, and $s_4(t)$ for $t \in [3T, 7T]$.)
9. A signal set consists of 256 signals in 16 dimensions with the coefficients being either $+\sqrt{E}$ or $-\sqrt{E}$. The distance structure is given as
$$ A_k = \big|\{(i,j) : \|s_i - s_j\|^2 = 4Ek\}\big| = \begin{cases} 256, & k = 0 \\ 28672, & k = 6 \\ 7680, & k = 8 \\ 28672, & k = 10 \\ 256, & k = 16 \\ 0, & \text{otherwise}. \end{cases} $$
These signals are transmitted with equal probability over an additive white Gaussian noise channel. Determine the union bound on the error probability. Determine the Union-Bhattacharyya bound on the error probability. Express your answer in terms of the energy transmitted per bit. What is the rate of the code in bits/dimension?
10. A communication system uses N dimensions and a code rate of R bits/dimension. The goal is not low error probability but high throughput (the expected number of successfully received information bits per coded bit in a block of length N). If we use a low code rate then we have a high success probability for a packet but few information bits. If we use a high code rate then we have a low success probability but a larger number of bits transmitted. Assume the channel is an additive white Gaussian noise channel and the input is restricted to binary modulation (each coefficient in the orthonormal expansion is either $+\sqrt{E}$ or $-\sqrt{E}$). Assume as well that the error probability is related to the block length, energy per dimension and code rate via the cutoff rate theorem (soft decisions). Find (and plot) the throughput for code rates varying from 0.1 to 0.9 in steps of 0.1 as a function of the energy per information bit. (Use Matlab to plot the throughput.) Assume $N = 500$. Be sure to normalize the energy per coded bit to the energy per information bit. Compare the throughput of hard decision decoding (BPSK and AWGN) and soft decision decoding.