Elements of Information Theory
Second Edition
Solutions to Problems
Thomas M. Cover
Joy A. Thomas
August 23, 2007
COPYRIGHT 2006
Thomas Cover
Joy Thomas
All rights reserved
Contents
1 Introduction
2 Entropy, Relative Entropy and Mutual Information
3 The Asymptotic Equipartition Property
4 Entropy Rates of a Stochastic Process
5 Data Compression
6 Gambling and Data Compression
7 Channel Capacity
8 Differential Entropy
9 Gaussian Channel
10 Rate Distortion Theory
11 Information Theory and Statistics
12 Maximum Entropy
13 Universal Source Coding
14 Kolmogorov Complexity
15 Network Information Theory
16 Information Theory and Portfolio Theory
17 Inequalities in Information Theory
Preface
Here we have the solutions to all the problems in the second edition of Elements of Information
Theory. First a word about how the problems and solutions were generated.
The problems arose over the many years the authors taught this course. At first the
homework problems and exam problems were generated each week. After a few years of
this double duty, the homework problems were rolled forward from previous years and only
the exam problems were fresh. So each year, the midterm and final exam problems became
candidates for addition to the body of homework problems that you see in the text. The
exam problems are necessarily brief, with a point, and reasonably free from time-consuming
calculation, so the problems in the text for the most part share these properties.
The solutions to the problems were generated by the teaching assistants and graders for
the weekly homework assignments and handed back with the graded homeworks in the class
immediately following the date the assignment was due. Homeworks were optional and did
not enter into the course grade. Nonetheless most students did the homework. A list of the
many students who contributed to the solutions is given in the book acknowledgment. In
particular, we would like to thank Laura Ekroot, Will Equitz, Don Kimber, Mitchell Trott,
Andrew Nobel, Jim Roche, Vittorio Castelli, Mitchell Oslick, Chien-Wen Tseng, Michael Morrell, Marc Goldberg, George Gemelos, Navid Hassanpour, Young-Han Kim, Charles Mathis,
Styrmir Sigurjonsson, Jon Yard, Michael Baer, Mung Chiang, Suhas Diggavi, Elza Erkip,
Paul Fahn, Garud Iyengar, David Julian, Yiannis Kontoyiannis, Amos Lapidoth, Erik Ordentlich, Sandeep Pombra, Arak Sutivong, Josh Sweetkind-Singer and Assaf Zeevi. We would
like to thank Prof. John Gill and Prof. Abbas El Gamal for many interesting problems and
solutions.
The solutions therefore show a wide range of personalities and styles, although some of
them have been smoothed out over the years by the authors. The best way to look at the
solutions is that they offer more than you need to solve the problems. And the solutions in
some cases may be awkward or inefficient. We view that as a plus. An instructor can see the
extent of the problem by examining the solution but can still improve his or her own version.
The solution manual comes to some 400 pages. We are making electronic copies available
to course instructors in PDF. We hope that all the solutions are not put up on an insecure
website—it will not be useful to use the problems in the book for homeworks and exams if the
solutions can be obtained immediately with a quick Google search. Instead, we will put up a
small selected subset of problem solutions on our website, http://www.elementsofinformationtheory.com,
available to all. These will be problems that have particularly elegant or long solutions that
would not be suitable homework or exam problems.
We have also seen some people trying to sell the solutions manual on Amazon or eBay.
Please note that the Solutions Manual for Elements of Information Theory is copyrighted
and any sale or distribution without the permission of the authors is not permitted.
We would appreciate any comments, suggestions and corrections to this solutions manual.
Tom Cover
Durand 121, Information Systems Lab
Stanford University
Stanford, CA 94305.
Ph. 650-723-4505
FAX: 650-723-8473
Email: [email protected]
Joy Thomas
Stratify
701 N Shoreline Avenue
Mountain View, CA 94043.
Ph. 650-210-2722
FAX: 650-988-2159
Email: [email protected]
Chapter 1
Introduction
Chapter 2
Entropy, Relative Entropy and
Mutual Information
1. Coin flips. A fair coin is flipped until the first head occurs. Let X denote the number
of flips required.
(a) Find the entropy H(X) in bits. The following expressions may be useful:
Σ_{n=0}^∞ r^n = 1/(1 − r),        Σ_{n=0}^∞ n r^n = r/(1 − r)^2.
(b) A random variable X is drawn according to this distribution. Find an “efficient”
sequence of yes-no questions of the form, “Is X contained in the set S ?” Compare
H(X) to the expected number of questions required to determine X .
Solution:
(a) The number X of tosses till the first head appears has the geometric distribution with parameter p = 1/2, where P(X = n) = p q^(n−1), n ∈ {1, 2, . . .}. Hence the entropy of X is

H(X) = −Σ_{n=1}^∞ p q^(n−1) log(p q^(n−1))
= −[ Σ_{n=0}^∞ p q^n log p + Σ_{n=0}^∞ n p q^n log q ]
= −p log p/(1 − q) − p q log q/p^2
= (−p log p − q log q)/p
= H(p)/p bits.

If p = 1/2, then H(X) = 2 bits.
(b) Intuitively, it seems clear that the best questions are those that have equally likely
chances of receiving a yes or a no answer. Consequently, one possible guess is
that the most “efficient” series of questions is: Is X = 1 ? If not, is X = 2 ?
If not, is X = 3 ? . . . with a resulting expected number of questions equal to
Σ_{n=1}^∞ n(1/2)^n = 2. This should reinforce the intuition that H(X) is a measure of the uncertainty of X . Indeed in this case, the entropy is exactly the
same as the average number of questions needed to define X , and in general
E(# of questions) ≥ H(X) . This problem has an interpretation as a source coding problem. Let 0 = no, 1 = yes, X = Source, and Y = Encoded Source. Then
the set of questions in the above procedure can be written as a collection of (X, Y )
pairs: (1, 1) , (2, 01) , (3, 001) , etc. . In fact, this intuitively derived code is the
optimal (Huffman) code minimizing the expected number of questions.
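A quick numerical check of part (a) and of the questioning scheme in part (b) (not part of the original solution; a minimal Python sketch for p = 1/2):

```python
import math

p = 0.5
q = 1 - p
N = 200  # truncate the infinite sums; the geometric tail beyond N is negligible

# Entropy of the geometric distribution P(X = n) = p q^(n-1), n = 1, 2, ...
H = -sum(p * q**(n - 1) * math.log2(p * q**(n - 1)) for n in range(1, N))

# Expected number of questions for the scheme "Is X = 1? If not, is X = 2? ..."
EQ = sum(n * p * q**(n - 1) for n in range(1, N))

print(H, EQ)  # both equal 2.0 (up to truncation error), as derived above
```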
2. Entropy of functions. Let X be a random variable taking on a finite number of
values. What is the (general) inequality relationship of H(X) and H(Y ) if
(a) Y = 2X ?
(b) Y = cos X ?
Solution: Let y = g(x). Then

p(y) = Σ_{x: y=g(x)} p(x).

Consider any set of x's that map onto a single y. For this set

Σ_{x: y=g(x)} p(x) log p(x) ≤ Σ_{x: y=g(x)} p(x) log p(y) = p(y) log p(y),

since log is a monotone increasing function and p(x) ≤ Σ_{x: y=g(x)} p(x) = p(y). Extending this argument to the entire range of X (and Y ), we obtain

H(X) = −Σ_x p(x) log p(x)
= −Σ_y Σ_{x: y=g(x)} p(x) log p(x)
≥ −Σ_y p(y) log p(y)
= H(Y ),
with equality iff g is one-to-one with probability one.
(a) Y = 2X is one-to-one and hence the entropy, which is just a function of the
probabilities (and not the values of a random variable) does not change, i.e.,
H(X) = H(Y ) .
(b) Y = cos(X) is not necessarily one-to-one. Hence all that we can say is that
H(X) ≥ H(Y ) , with equality if cosine is one-to-one on the range of X .
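Both cases are easy to check numerically. The sketch below is an added illustration (not part of the original solution), using a hypothetical X uniform on three angles so that cos is not one-to-one:

```python
import math
from collections import Counter

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# X uniform on three angles; cos maps two of them to the same value
xs = [math.pi / 3, -math.pi / 3, math.pi]
px = [1 / 3, 1 / 3, 1 / 3]

def entropy_of_function(g):
    py = Counter()
    for x, p in zip(xs, px):
        py[round(g(x), 12)] += p   # group the x's that map to the same y = g(x)
    return entropy(py.values())

print(entropy(px))                          # H(X)     = 1.585 bits
print(entropy_of_function(lambda x: 2 * x)) # H(2X)    = 1.585 bits (one-to-one)
print(entropy_of_function(math.cos))        # H(cos X) = 0.918 bits (not one-to-one)
```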
3. Minimum entropy. What is the minimum value of H(p1 , . . . , pn ) = H(p) as p ranges over the set of n-dimensional probability vectors? Find all p's which achieve this minimum.

Solution: We wish to find all probability vectors p = (p1 , p2 , . . . , pn ) which minimize

H(p) = −Σ_i pi log pi .

Now −pi log pi ≥ 0, with equality iff pi = 0 or 1. Hence the only possible probability vectors which minimize H(p) are those with pi = 1 for some i and pj = 0, j ≠ i. There are n such vectors, i.e., (1, 0, . . . , 0), (0, 1, 0, . . . , 0), . . . , (0, . . . , 0, 1), and the minimum value of H(p) is 0.
4. Entropy of functions of a random variable. Let X be a discrete random variable.
Show that the entropy of a function of X is less than or equal to the entropy of X by
justifying the following steps:
H(X, g(X)) =(a) H(X) + H(g(X) | X)    (2.1)
           =(b) H(X);    (2.2)
H(X, g(X)) =(c) H(g(X)) + H(X | g(X))    (2.3)
           ≥(d) H(g(X)).    (2.4)

Thus H(g(X)) ≤ H(X).
Solution: Entropy of functions of a random variable.
(a) H(X, g(X)) = H(X) + H(g(X)|X) by the chain rule for entropies.
(b) H(g(X)|X) = 0 since for any particular value of X, g(X) is fixed, and hence H(g(X)|X) = Σ_x p(x) H(g(X)|X = x) = Σ_x 0 = 0.
(c) H(X, g(X)) = H(g(X)) + H(X|g(X)) again by the chain rule.
(d) H(X|g(X)) ≥ 0 , with equality iff X is a function of g(X) , i.e., g(.) is one-to-one.
Hence H(X, g(X)) ≥ H(g(X)) .
Combining parts (b) and (d), we obtain H(X) ≥ H(g(X)) .
5. Zero conditional entropy. Show that if H(Y |X) = 0 , then Y is a function of X ,
i.e., for all x with p(x) > 0 , there is only one possible value of y with p(x, y) > 0 .
Solution: Zero Conditional Entropy. Assume that there exists an x , say x 0 and two
different values of y , say y1 and y2 such that p(x0 , y1 ) > 0 and p(x0 , y2 ) > 0 . Then
p(x0 ) ≥ p(x0 , y1 ) + p(x0 , y2 ) > 0 , and p(y1 |x0 ) and p(y2 |x0 ) are not equal to 0 or 1.
Thus

H(Y |X) = −Σ_x p(x) Σ_y p(y|x) log p(y|x)    (2.5)
≥ p(x0 )(−p(y1 |x0 ) log p(y1 |x0 ) − p(y2 |x0 ) log p(y2 |x0 ))    (2.6)
> 0,    (2.7)
since −t log t ≥ 0 for 0 ≤ t ≤ 1 , and is strictly positive for t not equal to 0 or 1.
Therefore the conditional entropy H(Y |X) is 0 if and only if Y is a function of X .
6. Conditional mutual information vs. unconditional mutual information. Give
examples of joint random variables X , Y and Z such that
(a) I(X; Y | Z) < I(X; Y ) ,
(b) I(X; Y | Z) > I(X; Y ) .
Solution: Conditional mutual information vs. unconditional mutual information.
(a) The last corollary to Theorem 2.8.1 in the text states that if X → Y → Z, that is, if p(x, z | y) = p(x | y)p(z | y), then I(X; Y ) ≥ I(X; Y | Z). Equality holds if and only if I(X; Z) = 0, i.e., X and Z are independent.
A simple example of random variables satisfying these conditions is: let X be a fair binary random variable and let Y = X and Z = Y . In this case,

I(X; Y ) = H(X) − H(X | Y ) = H(X) = 1

and

I(X; Y | Z) = H(X | Z) − H(X | Y, Z) = 0,

so that I(X; Y ) > I(X; Y | Z).
(b) This example is also given in the text. Let X, Y be independent fair binary
random variables and let Z = X + Y . In this case we have that,
I(X; Y ) = 0
and,
I(X; Y | Z) = H(X | Z) = 1/2.
So I(X; Y ) < I(X; Y | Z). Note that in this case X, Y, Z do not form a Markov chain.
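Both examples can be verified by direct computation. The sketch below is an added check (not part of the original solution), built from the joint distributions described above:

```python
import math
from collections import defaultdict

def I_XY(joint):  # mutual information I(X;Y) from a dict {(x, y): p}
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)

def I_XY_given_Z(joint3):  # I(X;Y|Z) from a dict {(x, y, z): p}
    pz = defaultdict(float)
    for (x, y, z), p in joint3.items():
        pz[z] += p
    return sum(pzv * I_XY({(x, y): p / pzv for (x, y, z), p in joint3.items() if z == z0})
               for z0, pzv in pz.items())

# (a) Y = X, Z = Y: conditioning destroys the dependence
a = {(x, x, x): 0.5 for x in (0, 1)}
print(I_XY({(x, y): p for (x, y, z), p in a.items()}), I_XY_given_Z(a))  # 1.0 0.0

# (b) X, Y independent, Z = X + Y: conditioning creates dependence
b = {(x, y, x + y): 0.25 for x in (0, 1) for y in (0, 1)}
print(I_XY({(x, y): p for (x, y, z), p in b.items()}), I_XY_given_Z(b))  # 0.0 0.5
```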
7. Coin weighing. Suppose one has n coins, among which there may or may not be one
counterfeit coin. If there is a counterfeit coin, it may be either heavier or lighter than
the other coins. The coins are to be weighed by a balance.
(a) Find an upper bound on the number of coins n so that k weighings will find the
counterfeit coin (if any) and correctly declare it to be heavier or lighter.
(b) (Difficult) What is the coin weighing strategy for k = 3 weighings and 12 coins?
Solution: Coin weighing.
(a) For n coins, there are 2n + 1 possible situations or “states”.
• One of the n coins is heavier.
• One of the n coins is lighter.
• They are all of equal weight.
Each weighing has three possible outcomes: equal, left pan heavier, or right pan heavier. Hence with k weighings, there are 3^k possible outcomes and hence we can distinguish between at most 3^k different “states”. Hence 2n + 1 ≤ 3^k or n ≤ (3^k − 1)/2.
Looking at it from an information theoretic viewpoint, each weighing gives at most log_2 3 bits of information. There are 2n + 1 possible “states”, with a maximum entropy of log_2(2n + 1) bits. Hence in this situation, one would require at least log_2(2n + 1)/log_2 3 weighings to extract enough information for determination of the odd coin, which gives the same result as above.
(b) There are many solutions to this problem. We will give one which is based on the
ternary number system.
We may express the numbers {−12, −11, . . . , −1, 0, 1, . . . , 12} in a ternary number system with alphabet {−1, 0, 1}. For example, the number 8 is (−1, 0, 1), where −1 × 3^0 + 0 × 3^1 + 1 × 3^2 = 8. We form the matrix with the representations of the positive numbers as its columns.

         1   2   3   4   5   6   7   8   9  10  11  12
  3^0    1  -1   0   1  -1   0   1  -1   0   1  -1   0    Σ1 = 0
  3^1    0   1   1   1  -1  -1  -1   0   0   0   1   1    Σ2 = 2
  3^2    0   0   0   0   1   1   1   1   1   1   1   1    Σ3 = 8
Note that the row sums are not all zero. We can negate some columns to make the row sums zero. For example, negating columns 7, 9, 11 and 12, we obtain

         1   2   3   4   5   6   7   8   9  10  11  12
  3^0    1  -1   0   1  -1   0  -1  -1   0   1   1   0    Σ1 = 0
  3^1    0   1   1   1  -1  -1   1   0   0   0  -1  -1    Σ2 = 0
  3^2    0   0   0   0   1   1  -1   1  -1   1  -1  -1    Σ3 = 0
Now place the coins on the balance according to the following rule: For weighing #i, place coin n
• On left pan, if n_i = −1.
• Aside, if n_i = 0.
• On right pan, if n_i = 1.
The outcome of the three weighings will find the odd coin if any and tell whether
it is heavy or light. The result of each weighing is 0 if both pans are equal, -1 if
the left pan is heavier, and 1 if the right pan is heavier. Then the three weighings
give the ternary expansion of the index of the odd coin. If the expansion is the
same as the expansion in the matrix, it indicates that the coin is heavier. If
the expansion is of the opposite sign, the coin is lighter. For example, (0,−1,−1) indicates (0)3^0 + (−1)3^1 + (−1)3^2 = −12, hence coin #12 is heavy; (1,0,−1) indicates #8 is light; (0,0,0) indicates no odd coin.
Why does this scheme work? It is a single error correcting Hamming code for the
ternary alphabet (discussed in Section 7.11 of the book). Here are some details.
First note a few properties of the matrix above that was used for the scheme.
All the columns are distinct and no two columns add to (0,0,0). Also if any coin
is heavier, it will produce the sequence of weighings that matches its column in
the matrix. If it is lighter, it produces the negative of its column as a sequence
of weighings. Combining all these facts, we can see that any single odd coin will
produce a unique sequence of weighings, and that the coin can be determined from
the sequence.
One of the questions that many of you had was whether the bound derived in part (a) is actually achievable. For example, can one distinguish 13 coins in 3 weighings? No, not with a scheme like the one above. Yes, under the assumptions under which the bound was derived. The bound did not prohibit dividing coins into halves, nor did it disallow the existence of another coin known to be normal. Under both these conditions, it is possible to find the odd coin among 13 coins in 3 weighings. You could try modifying the above scheme to these cases.
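For the 12-coin scheme it is easy to check mechanically that the negated weighing matrix above assigns a distinct outcome triple to each of the 25 possible states. The following sketch is an added verification, not part of the original solution; the column entries are the ones derived above:

```python
# Columns of the 3 x 12 matrix after negating columns 7, 9, 11, 12
cols = {
    1: (1, 0, 0),   2: (-1, 1, 0),  3: (0, 1, 0),    4: (1, 1, 0),
    5: (-1, -1, 1), 6: (0, -1, 1),  7: (-1, 1, -1),  8: (-1, 0, 1),
    9: (0, 0, -1), 10: (1, 0, 1),  11: (1, -1, -1), 12: (0, -1, -1),
}

outcomes = {(0, 0, 0): "all coins genuine"}
for coin, c in cols.items():
    heavy = c                      # a heavy coin reproduces its column
    light = tuple(-x for x in c)   # a light coin produces the negated column
    for state, label in ((heavy, "heavy"), (light, "light")):
        assert state not in outcomes, "outcome collision"
        outcomes[state] = f"coin {coin} {label}"

print(len(outcomes))          # 25 distinct outcomes out of the 27 possible
print(outcomes[(0, -1, -1)])  # coin 12 heavy, as in the example above
print(outcomes[(1, 0, -1)])   # coin 8 light
```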
8. Drawing with and without replacement. An urn contains r red, w white, and
b black balls. Which has higher entropy, drawing k ≥ 2 balls from the urn with
replacement or without replacement? Set it up and show why. (There is both a hard
way and a relatively simple way to do this.)
Solution: Drawing with and without replacement. Intuitively, it is clear that if the
balls are drawn with replacement, the number of possible choices for the i -th ball is
larger, and therefore the conditional entropy is larger. But computing the conditional
distributions is slightly involved. It is easier to compute the unconditional entropy.
• With replacement. In this case the conditional distribution of each draw is the same for every draw. Thus

    Xi = red,   with prob. r/(r + w + b),
         white, with prob. w/(r + w + b),    (2.8)
         black, with prob. b/(r + w + b),

and therefore

    H(Xi |Xi−1 , . . . , X1 ) = H(Xi )    (2.9)
    = log(r + w + b) − (r/(r + w + b)) log r − (w/(r + w + b)) log w − (b/(r + w + b)) log b.    (2.10)
• Without replacement. The unconditional probability of the i-th ball being red is still r/(r + w + b), etc. Thus the unconditional entropy H(Xi ) is still the same as with replacement. The conditional entropy H(Xi |Xi−1 , . . . , X1 ) is less than the unconditional entropy, and therefore the entropy of drawing without replacement is lower.
9. A metric. A function ρ(x, y) is a metric if for all x, y ,
• ρ(x, y) ≥ 0
• ρ(x, y) = ρ(y, x)
• ρ(x, y) = 0 if and only if x = y
• ρ(x, y) + ρ(y, z) ≥ ρ(x, z) .
(a) Show that ρ(X, Y ) = H(X|Y ) + H(Y |X) satisfies the first, second and fourth
properties above. If we say that X = Y if there is a one-to-one function mapping
from X to Y , then the third property is also satisfied, and ρ(X, Y ) is a metric.
(b) Verify that ρ(X, Y ) can also be expressed as

ρ(X, Y ) = H(X) + H(Y ) − 2I(X; Y )    (2.11)
         = H(X, Y ) − I(X; Y )    (2.12)
         = 2H(X, Y ) − H(X) − H(Y ).    (2.13)
Solution: A metric
(a) Let
ρ(X, Y ) = H(X|Y ) + H(Y |X).    (2.14)
Then
• Since conditional entropy is always ≥ 0 , ρ(X, Y ) ≥ 0 .
• The symmetry of the definition implies that ρ(X, Y ) = ρ(Y, X) .
• By problem 2.6, it follows that H(Y |X) is 0 iff Y is a function of X and
H(X|Y ) is 0 iff X is a function of Y . Thus ρ(X, Y ) is 0 iff X and Y
are functions of each other - and therefore are equivalent up to a reversible
transformation.
• Consider three random variables X , Y and Z . Then

H(X|Y ) + H(Y |Z) ≥ H(X|Y, Z) + H(Y |Z)    (2.15)
= H(X, Y |Z)    (2.16)
= H(X|Z) + H(Y |X, Z)    (2.17)
≥ H(X|Z),    (2.18)

from which it follows that

ρ(X, Y ) + ρ(Y, Z) ≥ ρ(X, Z).    (2.19)

Note that the inequality is strict unless X → Y → Z forms a Markov chain and Y is a function of X and Z .
(b) Since H(X|Y ) = H(X) − I(X; Y ) , the first equation follows. The second relation
follows from the first equation and the fact that H(X, Y ) = H(X) + H(Y ) −
I(X; Y ) . The third follows on substituting I(X; Y ) = H(X) + H(Y ) − H(X, Y ) .
10. Entropy of a disjoint mixture. Let X1 and X2 be discrete random variables drawn
according to probability mass functions p 1 (·) and p2 (·) over the respective alphabets
X1 = {1, 2, . . . , m} and X2 = {m + 1, . . . , n}. Let
X = X1 , with probability α,
    X2 , with probability 1 − α.

(a) Find H(X) in terms of H(X1 ) and H(X2 ) and α.
(b) Maximize over α to show that 2^H(X) ≤ 2^H(X1) + 2^H(X2) and interpret using the notion that 2^H(X) is the effective alphabet size.
Solution: Entropy. We can do this problem by writing down the definition of entropy
and expanding the various terms. Instead, we will use the algebra of entropies for a
simpler proof.
Since X1 and X2 have disjoint support sets, we can write

X = X1 , with probability α,
    X2 , with probability 1 − α.

Define a function of X ,

θ = f (X) = 1, when X = X1 ,
            2, when X = X2 .
Then as in problem 1, we have
H(X) = H(X, f (X)) = H(θ) + H(X|θ)
= H(θ) + p(θ = 1)H(X|θ = 1) + p(θ = 2)H(X|θ = 2)
= H(α) + αH(X1 ) + (1 − α)H(X2 )
where H(α) = −α log α − (1 − α) log(1 − α) .
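As a numerical sanity check of this identity (an added illustration with arbitrarily chosen distributions, not part of the original solution):

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: X1 on {1, 2, 3}, X2 on {4, 5}, alpha = 0.3
p1, p2, alpha = [0.5, 0.25, 0.25], [0.6, 0.4], 0.3

# Distribution of the mixture X on the disjoint union of the two alphabets
pX = [alpha * p for p in p1] + [(1 - alpha) * p for p in p2]

lhs = H(pX)
rhs = H([alpha, 1 - alpha]) + alpha * H(p1) + (1 - alpha) * H(p2)
print(lhs, rhs)   # the two values agree, as the identity above predicts
```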
11. A measure of correlation. Let X1 and X2 be identically distributed, but not
necessarily independent. Let
ρ = 1 − H(X2 | X1 )/H(X1 ).

(a) Show ρ = I(X1 ; X2 )/H(X1 ).
(b) Show 0 ≤ ρ ≤ 1.
(c) When is ρ = 0 ?
(d) When is ρ = 1 ?
Solution: A measure of correlation. X1 and X2 are identically distributed and

ρ = 1 − H(X2 |X1 )/H(X1 ).

(a)

ρ = (H(X1 ) − H(X2 |X1 ))/H(X1 )
  = (H(X2 ) − H(X2 |X1 ))/H(X1 )    (since H(X1 ) = H(X2 ))
  = I(X1 ; X2 )/H(X1 ).
(b) Since 0 ≤ H(X2 |X1 ) ≤ H(X2 ) = H(X1 ), we have

0 ≤ H(X2 |X1 )/H(X1 ) ≤ 1, and therefore 0 ≤ ρ ≤ 1.
(c) ρ = 0 iff I(X1 ; X2 ) = 0 iff X1 and X2 are independent.
(d) ρ = 1 iff H(X2 |X1 ) = 0 iff X2 is a function of X1 . By symmetry, X1 is a
function of X2 , i.e., X1 and X2 have a one-to-one relationship.
12. Example of joint entropy. Let p(x, y) be given by
            Y = 0    Y = 1
  X = 0      1/3      1/3
  X = 1       0       1/3
Find
(a) H(X), H(Y ).
(b) H(X | Y ), H(Y | X).
(c) H(X, Y ).
(d) H(Y ) − H(Y | X).
(e) I(X; Y ) .
(f) Draw a Venn diagram for the quantities in (a) through (e).
Solution: Example of joint entropy.
(a) H(X) = (2/3) log(3/2) + (1/3) log 3 = 0.918 bits = H(Y ).
(b) H(X|Y ) = (1/3) H(X|Y = 0) + (2/3) H(X|Y = 1) = 0.667 bits = H(Y |X).
(c) H(X, Y ) = 3 × (1/3) log 3 = 1.585 bits.
(d) H(Y ) − H(Y |X) = 0.251 bits.
(e) I(X; Y ) = H(Y ) − H(Y |X) = 0.251 bits.
(f) See Figure 2.1.
[Figure 2.1: Venn diagram to illustrate the relationships of entropy and relative entropy. Regions: H(X), H(Y), H(X|Y), I(X;Y), H(Y|X).]
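The numbers above are easy to reproduce; the following sketch (an added check, not part of the original solution) computes them directly from the joint distribution:

```python
import math

def H(probs):
    # Entropy in bits of a collection of probabilities (zeros are skipped)
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution p(x, y) from the table above
p = {(0, 0): 1/3, (0, 1): 1/3, (1, 0): 0.0, (1, 1): 1/3}

px = [sum(v for (x, _), v in p.items() if x == i) for i in (0, 1)]
py = [sum(v for (_, y), v in p.items() if y == j) for j in (0, 1)]

HX, HY = H(px), H(py)
HXY = H(p.values())
print(HX, HY)            # 0.918 0.918
print(HXY)               # 1.585
print(HXY - HY)          # H(X|Y) = 0.667
print(HX + HY - HXY)     # I(X;Y) = 0.251
```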
13. Inequality. Show ln x ≥ 1 − 1/x for x > 0.
Solution: Inequality. Using the remainder form of the Taylor expansion of ln(x) about x = 1, we have for some c between 1 and x

ln(x) = ln(1) + (1/t)|_{t=1} (x − 1) + (−1/t^2)|_{t=c} (x − 1)^2/2 ≤ x − 1,

since the second term is always negative. Hence, letting y = 1/x, we obtain

−ln y ≤ 1/y − 1

or

ln y ≥ 1 − 1/y

with equality iff y = 1.
14. Entropy of a sum. Let X and Y be random variables that take on values x 1 , x2 , . . . , xr
and y1 , y2 , . . . , ys , respectively. Let Z = X + Y.
(a) Show that H(Z|X) = H(Y |X). Argue that if X, Y are independent, then H(Y ) ≤
H(Z) and H(X) ≤ H(Z). Thus the addition of independent random variables
adds uncertainty.
(b) Give an example of (necessarily dependent) random variables in which H(X) >
H(Z) and H(Y ) > H(Z).
(c) Under what conditions does H(Z) = H(X) + H(Y ) ?
Solution: Entropy of a sum.
(a) Z = X + Y . Hence p(Z = z|X = x) = p(Y = z − x|X = x) .
H(Z|X) = Σ_x p(x) H(Z|X = x)
= −Σ_x p(x) Σ_z p(Z = z|X = x) log p(Z = z|X = x)
= −Σ_x p(x) Σ_z p(Y = z − x|X = x) log p(Y = z − x|X = x)
= Σ_x p(x) H(Y |X = x)
= H(Y |X).
If X and Y are independent, then H(Y |X) = H(Y ) . Since I(X; Z) ≥ 0 ,
we have H(Z) ≥ H(Z|X) = H(Y |X) = H(Y ) . Similarly we can show that
H(Z) ≥ H(X) .
(b) Consider the following joint distribution for X and Y . Let

X = −Y = 1, with probability 1/2,
          0, with probability 1/2.

Then H(X) = H(Y ) = 1, but Z = 0 with probability 1 and hence H(Z) = 0.
(c) We have
H(Z) ≤ H(X, Y ) ≤ H(X) + H(Y )
because Z is a function of (X, Y ) and H(X, Y ) = H(X) + H(Y |X) ≤ H(X) +
H(Y ) . We have equality iff (X, Y ) is a function of Z and H(Y ) = H(Y |X) , i.e.,
X and Y are independent.
15. Data processing. Let X1 → X2 → X3 → · · · → Xn form a Markov chain in this
order; i.e., let
p(x1 , x2 , . . . , xn ) = p(x1 )p(x2 |x1 ) · · · p(xn |xn−1 ).
Reduce I(X1 ; X2 , . . . , Xn ) to its simplest form.
Solution: Data Processing. By the chain rule for mutual information,

I(X1 ; X2 , . . . , Xn ) = I(X1 ; X2 ) + I(X1 ; X3 |X2 ) + · · · + I(X1 ; Xn |X2 , . . . , Xn−1 ).    (2.20)

By the Markov property, the past and the future are conditionally independent given the present and hence all terms except the first are zero. Therefore

I(X1 ; X2 , . . . , Xn ) = I(X1 ; X2 ).    (2.21)
16. Bottleneck. Suppose a (non-stationary) Markov chain starts in one of n states, necks
down to k < n states, and then fans back to m > k states. Thus X 1 → X2 → X3 ,
i.e., p(x1 , x2 , x3 ) = p(x1 )p(x2 |x1 )p(x3 |x2 ) , for all x1 ∈ {1, 2, . . . , n} , x2 ∈ {1, 2, . . . , k} ,
x3 ∈ {1, 2, . . . , m} .
(a) Show that the dependence of X1 and X3 is limited by the bottleneck by proving
that I(X1 ; X3 ) ≤ log k.
(b) Evaluate I(X1 ; X3 ) for k = 1 , and conclude that no dependence can survive such
a bottleneck.
Solution:
Bottleneck.
(a) From the data processing inequality, and the fact that entropy is maximum for a
uniform distribution, we get
I(X1 ; X3 ) ≤ I(X1 ; X2 )
= H(X2 ) − H(X2 | X1 )
≤ H(X2 )
≤ log k.
Thus, the dependence between X1 and X3 is limited by the size of the bottleneck.
That is I(X1 ; X3 ) ≤ log k .
(b) For k = 1, I(X1 ; X3 ) ≤ log 1 = 0 and since I(X1 ; X3 ) ≥ 0, I(X1 ; X3 ) = 0. Thus, for k = 1, X1 and X3 are independent.
17. Pure randomness and bent coins. Let X1 , X2 , . . . , Xn denote the outcomes of independent flips of a bent coin. Thus Pr{Xi = 1} = p, Pr{Xi = 0} = 1 − p, where p is unknown. We wish to obtain a sequence Z1 , Z2 , . . . , ZK of fair coin flips from X1 , X2 , . . . , Xn . Toward this end let f : X^n → {0, 1}* (where {0, 1}* = {Λ, 0, 1, 00, 01, . . .} is the set of all finite length binary sequences) be a mapping f (X1 , X2 , . . . , Xn ) = (Z1 , Z2 , . . . , ZK ), where Zi ∼ Bernoulli(1/2), and K may depend on (X1 , . . . , Xn ). In order that the sequence Z1 , Z2 , . . . appear to be fair coin flips, the map f from bent coin flips to fair flips must have the property that all 2^k sequences (Z1 , Z2 , . . . , Zk ) of a given length k have equal probability (possibly 0), for k = 1, 2, . . . . For example, for n = 2, the map f (01) = 0, f (10) = 1, f (00) = f (11) = Λ (the null string), has the property that Pr{Z1 = 1|K = 1} = Pr{Z1 = 0|K = 1} = 1/2.
Give reasons for the following inequalities:

nH(p) =(a) H(X1 , . . . , Xn )
      ≥(b) H(Z1 , Z2 , . . . , ZK , K)
      =(c) H(K) + H(Z1 , . . . , ZK |K)
      =(d) H(K) + E(K)
      ≥(e) EK.

Thus no more than nH(p) fair coin tosses can be derived from (X1 , . . . , Xn ), on the average. Exhibit a good map f on sequences of length 4.
Solution: Pure randomness and bent coins.

nH(p) =(a) H(X1 , . . . , Xn )
      ≥(b) H(Z1 , Z2 , . . . , ZK )
      =(c) H(Z1 , Z2 , . . . , ZK , K)
      =(d) H(K) + H(Z1 , . . . , ZK |K)
      =(e) H(K) + E(K)
      ≥(f) EK.
(a) Since X1 , X2 , . . . , Xn are i.i.d. with probability of Xi = 1 being p , the entropy
H(X1 , X2 , . . . , Xn ) is nH(p) .
(b) Z1 , . . . , ZK is a function of X1 , X2 , . . . , Xn , and since the entropy of a function of a
random variable is less than the entropy of the random variable, H(Z 1 , . . . , ZK ) ≤
H(X1 , X2 , . . . , Xn ) .
(c) K is a function of Z1 , Z2 , . . . , ZK , so its conditional entropy given Z1 , Z2 , . . . , ZK
is 0. Hence H(Z1 , Z2 , . . . , ZK , K) = H(Z1 , . . . , ZK ) + H(K|Z1 , Z2 , . . . , ZK ) =
H(Z1 , Z2 , . . . , ZK ).
(d) Follows from the chain rule for entropy.
(e) By assumption, Z1 , Z2 , . . . , ZK are pure random bits (given K ), with entropy 1
bit per symbol. Hence
H(Z1 , Z2 , . . . , ZK |K) = Σ_k p(K = k) H(Z1 , Z2 , . . . , Zk |K = k)    (2.22)
= Σ_k p(k) k    (2.23)
= EK.    (2.24)
(f) Follows from the non-negativity of discrete entropy.
(g) Since we do not know p , the only way to generate pure random bits is to use
the fact that all sequences with the same number of ones are equally likely. For
example, the sequences 0001,0010,0100 and 1000 are equally likely and can be used
to generate 2 pure random bits. An example of a mapping to generate random
bits is
0000 → Λ
0001 → 00 0010 → 01 0100 → 10 1000 → 11
0011 → 00 0110 → 01 1100 → 10 1001 → 11
1010 → 0 0101 → 1
1110 → 11 1101 → 10 1011 → 01 0111 → 00
1111 → Λ
(2.25)
The resulting expected number of bits is

EK = 4pq^3 × 2 + 4p^2q^2 × 2 + 2p^2q^2 × 1 + 4p^3q × 2    (2.26)
   = 8pq^3 + 10p^2q^2 + 8p^3q.    (2.27)
For example, for p ≈ 1/2, the expected number of pure random bits is close to 1.625. This is substantially less than the 4 pure random bits that could be generated if p were exactly 1/2.
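The value 1.625 can be confirmed directly from the length-4 map above. The sketch below is an added check (not part of the original solution); it also verifies that, for any p, all output strings of a given length are equally likely:

```python
from itertools import product

# The length-4 mapping given above: sequences with the same number of ones are
# grouped, and each group is split into subsets whose sizes are powers of two.
mapping = {
    "0000": "", "1111": "",
    "0001": "00", "0010": "01", "0100": "10", "1000": "11",
    "0011": "00", "0110": "01", "1100": "10", "1001": "11",
    "1010": "0", "0101": "1",
    "1110": "11", "1101": "10", "1011": "01", "0111": "00",
}

def expected_bits(p):
    q = 1 - p
    return sum(p**s.count("1") * q**s.count("0") * len(mapping[s]) for s in mapping)

print(expected_bits(0.5))   # 1.625 = 8pq^3 + 10p^2q^2 + 8p^3q at p = 1/2

p, q = 0.3, 0.7             # an arbitrary bias
for k in (1, 2):
    probs = {"".join(b): 0.0 for b in product("01", repeat=k)}
    for s, out in mapping.items():
        if len(out) == k:
            probs[out] += p**s.count("1") * q**s.count("0")
    print(k, probs)         # within each k, all output strings are equally likely
```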
We will now analyze the efficiency of this scheme of generating random bits for long
sequences of bent coin flips. Let n be the number of bent coin flips. The algorithm
that we will use is the obvious extension of the above method of generating pure
bits using the fact that all sequences with the same number of ones are equally
likely.
Consider all sequences with k ones. There are (n choose k) such sequences, which are all equally likely. If (n choose k) were a power of 2, then we could generate log (n choose k) pure random bits from such a set. However, in the general case, (n choose k) is not a power of 2 and the best we can do is to divide the set of (n choose k) elements into subsets of sizes which are powers of 2. The largest subset would have size 2^⌊log (n choose k)⌋ and could be used to generate ⌊log (n choose k)⌋ random bits. We could divide the remaining elements into the largest set which is a power of 2, etc. The worst case would occur when (n choose k) = 2^(l+1) − 1, in which case the subsets would be of sizes 2^l, 2^(l−1), 2^(l−2), . . . , 1.
Instead of analyzing the scheme exactly, we will just find a lower bound on the number of random bits generated from a set of size (n choose k). Let l = ⌊log (n choose k)⌋. Then at least half of the elements belong to a set of size 2^l and would generate l random bits, at least 1/4 belong to a set of size 2^(l−1) and generate l − 1 random bits, etc. On the average, the number of bits generated is

E[K | k 1's in sequence] ≥ (1/2) l + (1/4)(l − 1) + · · · + (1/2^l) · 1    (2.28)
= l − (1/4)(1 + 2/2 + 3/4 + · · · + (l − 1)/2^(l−2))    (2.29)
≥ l − 1,    (2.30)

since the infinite series sums to 1.
Hence the fact that (n choose k) is not a power of 2 will cost at most 1 bit on the average in the number of random bits that are produced.
Hence, the expected number of pure random bits produced by this algorithm is
EK ≥ Σ_{k=0}^{n} (n choose k) p^k q^(n−k) (⌊log (n choose k)⌋ − 1)    (2.31)
   ≥ Σ_{k=0}^{n} (n choose k) p^k q^(n−k) (log (n choose k) − 2)    (2.32)
   = Σ_{k=0}^{n} (n choose k) p^k q^(n−k) log (n choose k) − 2    (2.33)
   ≥ Σ_{n(p−ε)≤k≤n(p+ε)} (n choose k) p^k q^(n−k) log (n choose k) − 2.    (2.34)
Now for sufficiently large n , the probability that the number of 1’s in the sequence
is close to np is near 1 (by the weak law of large numbers). For such sequences,
k/n is close to p and hence there exists a δ such that

(n choose k) ≥ 2^(n(H(k/n)−δ)) ≥ 2^(n(H(p)−2δ))    (2.35)

using Stirling's approximation for the binomial coefficients and the continuity of the entropy function. If we assume that n is large enough so that the probability that n(p − ε) ≤ k ≤ n(p + ε) is greater than 1 − ε, then we see that EK ≥ (1 − ε)n(H(p) − 2δ) − 2, which is very good since nH(p) is an upper bound on the number of pure random bits that can be produced from the bent coin sequence.
18. World Series. The World Series is a seven-game series that terminates as soon as
either team wins four games. Let X be the random variable that represents the outcome
of a World Series between teams A and B; possible values of X are AAAA, BABABAB,
and BBBAAAA. Let Y be the number of games played, which ranges from 4 to 7.
Assuming that A and B are equally matched and that the games are independent,
calculate H(X) , H(Y ) , H(Y |X) , and H(X|Y ) .
Solution:
World Series. Two teams play until one of them has won 4 games.
There are 2 (AAAA, BBBB) World Series with 4 games. Each happens with probability (1/2)^4.
There are 8 = 2 (4 choose 3) World Series with 5 games. Each happens with probability (1/2)^5.
There are 20 = 2 (5 choose 3) World Series with 6 games. Each happens with probability (1/2)^6.
There are 40 = 2 (6 choose 3) World Series with 7 games. Each happens with probability (1/2)^7.
The probability of a 4 game series (Y = 4) is 2(1/2)^4 = 1/8.
The probability of a 5 game series (Y = 5) is 8(1/2)^5 = 1/4.
The probability of a 6 game series (Y = 6) is 20(1/2)^6 = 5/16.
The probability of a 7 game series (Y = 7) is 40(1/2)^7 = 5/16.
H(X) = Σ p(x) log(1/p(x))
= 2(1/16) log 16 + 8(1/32) log 32 + 20(1/64) log 64 + 40(1/128) log 128
= 5.8125

H(Y ) = Σ p(y) log(1/p(y))
= (1/8) log 8 + (1/4) log 4 + (5/16) log(16/5) + (5/16) log(16/5)
= 1.924
Y is a deterministic function of X, so if you know X there is no randomness in Y. Or,
H(Y |X) = 0 .
Since H(X) + H(Y |X) = H(X, Y ) = H(Y ) + H(X|Y ) , it is easy to determine
H(X|Y ) = H(X) + H(Y |X) − H(Y ) = 3.889
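These values can be reproduced by brute-force enumeration of all series. The following sketch is an added check (not part of the original solution):

```python
import math
from itertools import product

# Enumerate all World Series outcomes between equally matched teams: each game
# is won by A or B with probability 1/2, and play stops at the fourth win.
p_x = {}
for games in range(4, 8):
    for seq in product("AB", repeat=games):
        winner = seq[-1]
        if seq.count(winner) == 4 and seq[:-1].count(winner) == 3:
            p_x["".join(seq)] = 0.5 ** games

p_y = {}
for seq, p in p_x.items():
    p_y[len(seq)] = p_y.get(len(seq), 0.0) + p

H = lambda dist: -sum(p * math.log2(p) for p in dist.values())
HX, HY = H(p_x), H(p_y)
print(HX)        # 5.8125
print(HY)        # 1.924
print(HX - HY)   # H(X|Y) = H(X) + H(Y|X) - H(Y) = 3.889, since H(Y|X) = 0
```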
19. Infinite entropy. This problem shows that the entropy of a discrete random variable can be infinite. Let A = Σ_{n=2}^∞ (n log^2 n)^(−1). (It is easy to show that A is finite by bounding the infinite sum by the integral of (x log^2 x)^(−1).) Show that the integer-valued random variable X defined by Pr(X = n) = (An log^2 n)^(−1) for n = 2, 3, . . . , has H(X) = +∞.
Solution: Infinite entropy. By definition, p_n = Pr(X = n) = 1/(An log^2 n) for n ≥ 2. Therefore

H(X) = −Σ_{n=2}^∞ p(n) log p(n)
= −Σ_{n=2}^∞ (1/(An log^2 n)) log(1/(An log^2 n))
= Σ_{n=2}^∞ log(An log^2 n)/(An log^2 n)
= Σ_{n=2}^∞ (log A + log n + 2 log log n)/(An log^2 n)
= log A + Σ_{n=2}^∞ 1/(An log n) + Σ_{n=2}^∞ 2 log log n/(An log^2 n).

The first term is finite. For base 2 logarithms, all the elements in the sum in the last term are nonnegative. (For any other base, the terms of the last sum eventually all become positive.) So all we have to do is bound the middle sum, which we do by comparing with an integral:

Σ_{n=2}^∞ 1/(An log n) > ∫_2^∞ 1/(Ax log x) dx = K ln ln x |_2^∞ = +∞.

We conclude that H(X) = +∞.
20. Run length coding. Let X1 , X2 , . . . , Xn be (possibly dependent) binary random
variables. Suppose one calculates the run lengths R = (R 1 , R2 , . . .) of this sequence
(in order as they occur). For example, the sequence X = 0001100100 yields run
lengths R = (3, 2, 2, 1, 2) . Compare H(X1 , X2 , . . . , Xn ) , H(R) and H(Xn , R) . Show
all equalities and inequalities, and bound all the differences.
Solution: Run length coding. Since the run lengths are a function of X 1 , X2 , . . . , Xn ,
H(R) ≤ H(X) . Any Xi together with the run lengths determine the entire sequence
X1 , X2 , . . . , Xn . Hence
H(X1 , X2 , . . . , Xn ) = H(Xi , R)    (2.36)
= H(R) + H(Xi |R)    (2.37)
≤ H(R) + H(Xi )    (2.38)
≤ H(R) + 1.    (2.39)
21. Markov’s inequality for probabilities. Let p(x) be a probability mass function.
Prove, for all d ≥ 0 ,
Pr{p(X) ≤ d} log(1/d) ≤ H(X).    (2.40)
Solution: Markov inequality applied to entropy.
P(p(X) < d) log(1/d) = Σ_{x: p(x)<d} p(x) log(1/d)    (2.41)
≤ Σ_{x: p(x)<d} p(x) log(1/p(x))    (2.42)
≤ Σ_x p(x) log(1/p(x))    (2.43)
= H(X)    (2.44)
22. Logical order of ideas. Ideas have been developed in order of need, and then generalized if necessary. Reorder the following ideas, strongest first, implications following:
(a) Chain rule for I(X1 , . . . , Xn ; Y ) , chain rule for D(p(x1 , . . . , xn )||q(x1 , x2 , . . . , xn )) ,
and chain rule for H(X1 , X2 , . . . , Xn ) .
(b) D(f ||g) ≥ 0 , Jensen’s inequality, I(X; Y ) ≥ 0 .
Solution: Logical ordering of ideas.
(a) The following orderings are subjective. Since I(X; Y ) = D(p(x, y)||p(x)p(y)) is a
special case of relative entropy, it is possible to derive the chain rule for I from
the chain rule for D .
Since H(X) = I(X; X) , it is possible to derive the chain rule for H from the
chain rule for I .
It is also possible to derive the chain rule for I from the chain rule for H as was
done in the notes.
(b) In class, Jensen’s inequality was used to prove the non-negativity of D . The
inequality I(X; Y ) ≥ 0 followed as a special case of the non-negativity of D .
23. Conditional mutual information. Consider a sequence of n binary random variables X1 , X2 , . . . , Xn . Each sequence with an even number of 1’s has probability
2−(n−1) and each sequence with an odd number of 1’s has probability 0. Find the
mutual informations
I(X1 ; X2 ),
I(X2 ; X3 |X1 ), . . . , I(Xn−1 ; Xn |X1 , . . . , Xn−2 ).
Solution: Conditional mutual information.
Consider a sequence of n binary random variables X 1 , X2 , . . . , Xn . Each sequence of
length n with an even number of 1’s is equally likely and has probability 2 −(n−1) .
Any n − 1 or fewer of these are independent. Thus, for k ≤ n − 1 ,
I(Xk−1 ; Xk |X1 , X2 , . . . , Xk−2 ) = 0.
However, given X1 , X2 , . . . , Xn−2 , we know that once we know either Xn−1 or Xn we
know the other.
I(Xn−1 ; Xn |X1 , X2 , . . . , Xn−2 ) = H(Xn |X1 , X2 , . . . , Xn−2 ) − H(Xn |X1 , X2 , . . . , Xn−1 )
= 1 − 0 = 1 bit.
24. Average entropy. Let H(p) = −p log 2 p − (1 − p) log2 (1 − p) be the binary entropy
function.
(a) Evaluate H(1/4) using the fact that log 2 3 ≈ 1.584 . Hint: You may wish to
consider an experiment with four equally likely outcomes, one of which is more
interesting than the others.
(b) Calculate the average entropy H(p) when the probability p is chosen uniformly
in the range 0 ≤ p ≤ 1 .
(c) (Optional) Calculate the average entropy H(p 1 , p2 , p3 ) where (p1 , p2 , p3 ) is a uniformly distributed probability vector. Generalize to dimension n .
Solution: Average Entropy.
(a) We can generate two bits of information by picking one of four equally likely
alternatives. This selection can be made in two steps. First we decide whether the
first outcome occurs. Since this has probability 1/4 , the information generated
is H(1/4) . If not the first outcome, then we select one of the three remaining
outcomes; with probability 3/4 , this produces log 2 3 bits of information. Thus
H(1/4) + (3/4) log 2 3 = 2
and so H(1/4) = 2 − (3/4) log 2 3 = 2 − (.75)(1.585) = 0.811 bits.
(b) If p is chosen uniformly in the range 0 ≤ p ≤ 1, then the average entropy (in nats) is

−∫_0^1 [p ln p + (1 − p) ln(1 − p)] dp = −2 ∫_0^1 x ln x dx = −2 [(x^2/2) ln x − x^2/4]_0^1 = 1/2.

Therefore the average entropy is (1/2) log_2 e = 1/(2 ln 2) = 0.721 bits.
(c) Choosing a uniformly distributed probability vector (p1 , p2 , p3 ) is equivalent to choosing a point (p1 , p2 ) uniformly from the triangle 0 ≤ p1 ≤ 1, 0 ≤ p2 ≤ 1 − p1 . The probability density function has the constant value 2 because the area of the triangle is 1/2. So the average entropy H(p1 , p2 , p3 ) is

−2 ∫_0^1 ∫_0^{1−p1} [p1 ln p1 + p2 ln p2 + (1 − p1 − p2 ) ln(1 − p1 − p2 )] dp2 dp1 .

After some enjoyable calculus, we obtain the final result 5/(6 ln 2) = 1.202 bits.
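Both averages are easy to confirm by Monte Carlo simulation. The sketch below is an added check (not part of the original solution); the uniform point on the simplex is generated from the spacings of two uniform random numbers:

```python
import math
import random

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

random.seed(0)
N = 200_000

# (b) p chosen uniformly on [0, 1]
avg_b = sum(H((p, 1 - p)) for p in (random.random() for _ in range(N))) / N
print(avg_b, 1 / (2 * math.log(2)))      # both about 0.721 bits

# (c) (p1, p2, p3) chosen uniformly on the probability simplex
def simplex_point():
    u, v = sorted((random.random(), random.random()))
    return (u, v - u, 1 - v)

avg_c = sum(H(simplex_point()) for _ in range(N)) / N
print(avg_c, 5 / (6 * math.log(2)))      # both about 1.202 bits
```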
25. Venn diagrams. There isn’t really a notion of mutual information common to three
random variables. Here is one attempt at a definition: Using Venn diagrams, we can
see that the mutual information common to three random variables X , Y and Z can
be defined by
I(X; Y ; Z) = I(X; Y ) − I(X; Y |Z) .
This quantity is symmetric in X , Y and Z , despite the preceding asymmetric definition. Unfortunately, I(X; Y ; Z) is not necessarily nonnegative. Find X , Y and Z
such that I(X; Y ; Z) < 0 , and prove the following two identities:
(a) I(X; Y ; Z) = H(X, Y, Z) − H(X) − H(Y ) − H(Z) + I(X; Y ) + I(Y ; Z) + I(Z; X)
(b) I(X; Y ; Z) = H(X, Y, Z)− H(X, Y )− H(Y, Z)− H(Z, X)+ H(X)+ H(Y )+ H(Z)
The first identity can be understood using the Venn diagram analogy for entropy and
mutual information. The second identity follows easily from the first.
Solution: Venn Diagrams. To show the first identity,
I(X; Y ; Z) = I(X; Y ) − I(X; Y |Z) by definition
= I(X; Y ) − (I(X; Y, Z) − I(X; Z))
by chain rule
= I(X; Y ) + I(X; Z) − I(X; Y, Z)
= I(X; Y ) + I(X; Z) − (H(X) + H(Y, Z) − H(X, Y, Z))
= I(X; Y ) + I(X; Z) − H(X) + H(X, Y, Z) − H(Y, Z)
= I(X; Y ) + I(X; Z) − H(X) + H(X, Y, Z) − (H(Y ) + H(Z) − I(Y ; Z))
= I(X; Y ) + I(X; Z) + I(Y ; Z) + H(X, Y, Z) − H(X) − H(Y ) − H(Z).
To show the second identity, simply substitute for I(X; Y ), I(X; Z), and I(Y ; Z) using equations like I(X; Y ) = H(X) + H(Y ) − H(X, Y ). These two identities show that I(X; Y ; Z) is a symmetric (but not necessarily nonnegative) function of three random variables. For an example with I(X; Y ; Z) < 0, take X and Y to be independent fair bits and Z = X + Y as in Problem 6(b): then I(X; Y ) = 0 while I(X; Y |Z) = 1/2, so I(X; Y ; Z) = −1/2.
26. Another proof of non-negativity of relative entropy. In view of the fundamental
nature of the result D(p||q) ≥ 0 , we will give another proof.
(a) Show that ln x ≤ x − 1 for 0 < x < ∞ .
(b) Justify the following steps:
−D(p||q) = Σ_x p(x) ln(q(x)/p(x))    (2.45)
≤ Σ_x p(x) (q(x)/p(x) − 1)    (2.46)
≤ 0    (2.47)
(c) What are the conditions for equality?
Solution: Another proof of non-negativity of relative entropy. In view of the fundamental nature of the result D(p||q) ≥ 0 , we will give another proof.
(a) Show that ln x ≤ x − 1 for 0 < x < ∞ .
There are many ways to prove this. The easiest is using calculus. Let
f (x) = x − 1 − ln x
(2.48)
for 0 < x < ∞. Then f′(x) = 1 − 1/x and f″(x) = 1/x^2 > 0, and therefore f (x)
is strictly convex. Therefore a local minimum of the function is also a global
minimum. The function has a local minimum at the point where f ' (x) = 0 , i.e.,
when x = 1 . Therefore f (x) ≥ f (1) , i.e.,
x − 1 − ln x ≥ 1 − 1 − ln 1 = 0
(2.49)
which gives us the desired inequality. Equality occurs only if x = 1 .
(b) We let A be the set of x such that p(x) > 0 .
−D_e(p||q) = Σ_{x∈A} p(x) ln(q(x)/p(x))    (2.50)
≤ Σ_{x∈A} p(x) (q(x)/p(x) − 1)    (2.51)
= Σ_{x∈A} q(x) − Σ_{x∈A} p(x)    (2.52)
≤ 0    (2.53)

The first step follows from the definition of D , the second step follows from the inequality ln t ≤ t − 1 , the third step from expanding the sum, and the last step from the fact that q(A) ≤ 1 and p(A) = 1 .
(c) What are the conditions for equality?
We have equality in the inequality ln t ≤ t − 1 if and only if t = 1 . Therefore we
have equality in step 2 of the chain iff q(x)/p(x) = 1 for all x ∈ A . This implies
that p(x) = q(x) for all x , and we have equality in the last step as well. Thus
the condition for equality is that p(x) = q(x) for all x .
27. Grouping rule for entropy: Let p = (p1 , p2 , . . . , pm ) be a probability distribution on m elements, i.e., pi ≥ 0 and Σ_{i=1}^m pi = 1. Define a new distribution q on m − 1 elements as q1 = p1 , q2 = p2 , . . . , qm−2 = pm−2 , and qm−1 = pm−1 + pm , i.e., the distribution q is the same as p on {1, 2, . . . , m − 2}, and the probability of the last element in q is the sum of the last two probabilities of p . Show that

H(p) = H(q) + (pm−1 + pm ) H(pm−1/(pm−1 + pm ), pm/(pm−1 + pm )).    (2.54)
Solution:

H(p) = −Σ_{i=1}^m pi log pi    (2.55)
= −Σ_{i=1}^{m−2} pi log pi − pm−1 log pm−1 − pm log pm    (2.56)
= −Σ_{i=1}^{m−2} pi log pi − pm−1 log(pm−1/(pm−1 + pm )) − pm log(pm/(pm−1 + pm ))    (2.57)
  − (pm−1 + pm ) log(pm−1 + pm )    (2.58)
= H(q) − pm−1 log(pm−1/(pm−1 + pm )) − pm log(pm/(pm−1 + pm ))    (2.59)
= H(q) − (pm−1 + pm ) [ (pm−1/(pm−1 + pm )) log(pm−1/(pm−1 + pm )) + (pm/(pm−1 + pm )) log(pm/(pm−1 + pm )) ]    (2.60)
= H(q) + (pm−1 + pm ) H2(pm−1/(pm−1 + pm ), pm/(pm−1 + pm )),    (2.61)

where H2(a, b) = −a log a − b log b .
28. Mixing increases entropy. Show that the entropy of the probability distribution (p1 , . . . , pi , . . . , pj , . . . , pm ) is less than the entropy of the distribution (p1 , . . . , (pi + pj)/2, . . . , (pi + pj)/2, . . . , pm ). Show that in general any transfer of probability that makes the distribution more uniform increases the entropy.
Solution: Mixing increases entropy. This problem depends on the convexity of the log function. Let

P1 = (p1 , . . . , pi , . . . , pj , . . . , pm )
P2 = (p1 , . . . , (pi + pj)/2, . . . , (pj + pi)/2, . . . , pm ).

Then, by the log sum inequality,

H(P2 ) − H(P1 ) = −2 ((pi + pj)/2) log((pi + pj)/2) + pi log pi + pj log pj
= −(pi + pj) log((pi + pj)/2) + pi log pi + pj log pj
≥ 0.

Thus,

H(P2 ) ≥ H(P1 ).
29. Inequalities. Let X , Y and Z be joint random variables. Prove the following
inequalities and find conditions for equality.
(a) H(X, Y |Z) ≥ H(X|Z) .
(b) I(X, Y ; Z) ≥ I(X; Z) .
(c) H(X, Y, Z) − H(X, Y ) ≤ H(X, Z) − H(X) .
(d) I(X; Z|Y ) ≥ I(Z; Y |X) − I(Z; Y ) + I(X; Z) .
Solution: Inequalities.
(a) Using the chain rule for conditional entropy,
H(X, Y |Z) = H(X|Z) + H(Y |X, Z) ≥ H(X|Z),
with equality iff H(Y |X, Z) = 0 , that is, when Y is a function of X and Z .
(b) Using the chain rule for mutual information,
I(X, Y ; Z) = I(X; Z) + I(Y ; Z|X) ≥ I(X; Z),
with equality iff I(Y ; Z|X) = 0 , that is, when Y and Z are conditionally independent given X .
(c) Using first the chain rule for entropy and then the definition of conditional mutual
information,
H(X, Y, Z) − H(X, Y ) = H(Z|X, Y ) = H(Z|X) − I(Y ; Z|X)
≤ H(Z|X) = H(X, Z) − H(X) ,
with equality iff I(Y ; Z|X) = 0 , that is, when Y and Z are conditionally independent given X .
(d) Using the chain rule for mutual information,
I(X; Z|Y ) + I(Z; Y ) = I(X, Y ; Z) = I(Z; Y |X) + I(X; Z) ,
and therefore
I(X; Z|Y ) = I(Z; Y |X) − I(Z; Y ) + I(X; Z) .
We see that this inequality is actually an equality in all cases.
30. Maximum entropy. Find the probability mass function p(x) that maximizes the
entropy H(X) of a non-negative integer-valued random variable X subject to the
constraint
EX = Σ_{n=0}^∞ n p(n) = A

for a fixed value A > 0 . Evaluate this maximum H(X) .
Solution: Maximum entropy.
Recall that

−Σ_{i=0}^∞ pi log pi ≤ −Σ_{i=0}^∞ pi log qi .

Let qi = α β^i . Then we have that

−Σ_{i=0}^∞ pi log pi ≤ −Σ_{i=0}^∞ pi log qi
= −( (log α) Σ_{i=0}^∞ pi + (log β) Σ_{i=0}^∞ i pi )
= −log α − A log β.

Notice that the final right hand side expression is independent of {pi }, and that the inequality

−Σ_{i=0}^∞ pi log pi ≤ −log α − A log β

holds for all α, β such that

Σ_{i=0}^∞ α β^i = 1 = α/(1 − β).

The constraint on the expected value also requires that

Σ_{i=0}^∞ i α β^i = A = α β/(1 − β)^2.

Combining the two constraints we have

α β/(1 − β)^2 = (α/(1 − β)) (β/(1 − β)) = β/(1 − β) = A,

which implies that

β = A/(A + 1),
α = 1/(A + 1).

So the entropy maximizing distribution is

pi = (1/(A + 1)) (A/(A + 1))^i .

Plugging these values into the expression for the maximum entropy,

−log α − A log β = (A + 1) log(A + 1) − A log A.

The general form of the distribution,

pi = α β^i ,

can be obtained either by guessing or by Lagrange multipliers, where

F (pi , λ1 , λ2 ) = −Σ_{i=0}^∞ pi log pi + λ1 (Σ_{i=0}^∞ pi − 1) + λ2 (Σ_{i=0}^∞ i pi − A)

is the function whose gradient we set to 0.
To complete the argument with Lagrange multipliers, it is necessary to show that the local maximum is the global maximum. One possible argument is based on the fact that −H(p) is convex: it has only one local minimum and no local maxima, and therefore the Lagrange multiplier method actually gives the global maximum of H(p).
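A numerical check of the closed form (an added illustration, not part of the original solution; A = 3 is an arbitrary choice):

```python
import math

A = 3.0
alpha, beta = 1 / (A + 1), A / (A + 1)
N = 1000   # truncate the geometric distribution; the tail mass is negligible

p = [alpha * beta**i for i in range(N)]
print(sum(i * pi for i, pi in enumerate(p)))            # mean = 3.0 = A
print(-sum(pi * math.log2(pi) for pi in p))             # entropy of the maximizer
print((A + 1) * math.log2(A + 1) - A * math.log2(A))    # (A+1)log(A+1) - A log A
```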
31. Conditional entropy. Under what conditions does H(X | g(Y )) = H(X | Y ) ?
Solution: (Conditional Entropy). If H(X|g(Y )) = H(X|Y ) , then H(X)−H(X|g(Y )) =
H(X) − H(X|Y ) , i.e., I(X; g(Y )) = I(X; Y ) . This is the condition for equality in
the data processing inequality. From the derivation of the inequality, we have equality iff X → g(Y ) → Y forms a Markov chain. Hence H(X|g(Y )) = H(X|Y ) iff
X → g(Y ) → Y . This condition includes many special cases, such as g being one-to-one, and X and Y being independent. However, these two special cases do not
exhaust all the possibilities.
32. Fano. We are given the following joint distribution on (X, Y ):

              Y = a    Y = b    Y = c
    X = 1      1/6      1/12     1/12
    X = 2      1/12     1/6      1/12
    X = 3      1/12     1/12     1/6

Let X̂(Y ) be an estimator for X (based on Y) and let Pe = Pr{X̂(Y ) ≠ X}.
(a) Find the minimum probability of error estimator X̂(Y ) and the associated Pe .
(b) Evaluate Fano’s inequality for this problem and compare.
Solution:
(a) From inspection we see that

    X̂(y) = 1, if y = a,
           2, if y = b,
           3, if y = c.

Hence the associated Pe is the sum of P(1, b), P(1, c), P(2, a), P(2, c), P(3, a) and P(3, b). Therefore, Pe = 1/2.
(b) From Fano’s inequality we know

    Pe ≥ (H(X|Y ) − 1)/log |X|.

Here,

    H(X|Y ) = H(X|Y = a) Pr{y = a} + H(X|Y = b) Pr{y = b} + H(X|Y = c) Pr{y = c}
            = H(1/2, 1/4, 1/4) Pr{y = a} + H(1/2, 1/4, 1/4) Pr{y = b} + H(1/2, 1/4, 1/4) Pr{y = c}
            = H(1/2, 1/4, 1/4) (Pr{y = a} + Pr{y = b} + Pr{y = c})
            = H(1/2, 1/4, 1/4)
            = 1.5 bits.

Hence

    Pe ≥ (1.5 − 1)/log 3 = 0.316.

Hence our estimator X̂(Y ) is not very close to Fano’s bound in this form. If X̂ ∈ X, as it does here, we can use the stronger form of Fano’s inequality to get

    Pe ≥ (H(X|Y ) − 1)/log(|X| − 1),

and

    Pe ≥ (1.5 − 1)/log 2 = 1/2.

Therefore our estimator X̂(Y ) is actually quite good.
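The quantities above can be reproduced directly from the joint distribution. In the sketch below (an added check, not part of the original solution) the alphabet {a, b, c} of Y is relabeled as {1, 2, 3} for convenience:

```python
import math

# Joint distribution from the table: 1/6 on the diagonal, 1/12 off the diagonal
p = {(x, y): (1/6 if x == y else 1/12) for x in (1, 2, 3) for y in (1, 2, 3)}

py = {y: sum(p[(x, y)] for x in (1, 2, 3)) for y in (1, 2, 3)}
H_X_given_Y = -sum(pxy * math.log2(pxy / py[y]) for (x, y), pxy in p.items())
print(H_X_given_Y)                               # 1.5 bits

Pe = sum(v for (x, y), v in p.items() if x != y) # the MAP estimator guesses x = y
print(Pe)                                        # 0.5
print((H_X_given_Y - 1) / math.log2(3))          # weak Fano bound: 0.316
print((H_X_given_Y - 1) / math.log2(2))          # strong Fano bound: 0.5
```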
33. Fano’s inequality. Let Pr(X = i) = p i , i = 1, 2, . . . , m and let p1 ≥ p2 ≥ p3 ≥
· · · ≥ pm . The minimal probability of error predictor of X is X̂ = 1 , with resulting
probability of error Pe = 1 − p1 . Maximize H(p) subject to the constraint 1 − p 1 = Pe
to find a bound on Pe in terms of H . This is Fano’s inequality in the absence of
conditioning.
Solution: (Fano’s Inequality.) The minimal probability of error predictor when there
is no information is X̂ = 1 , the most probable value of X . The probability of error in
this case is Pe = 1 − p1 . Hence if we fix Pe , we fix p1 . We maximize the entropy of X
for a given Pe to obtain an upper bound on the entropy for a given P e . The entropy,
H(p) = −p1 log p1 − Σ_{i=2}^m pi log pi    (2.62)
= −p1 log p1 − Σ_{i=2}^m Pe (pi/Pe ) log(pi/Pe ) − Pe log Pe    (2.63)
= H(Pe ) + Pe H(p2/Pe , p3/Pe , . . . , pm/Pe )    (2.64)
≤ H(Pe ) + Pe log(m − 1),    (2.65)

since the maximum of H(p2/Pe , p3/Pe , . . . , pm/Pe ) is attained by a uniform distribution. Hence any X that can be predicted with a probability of error Pe must satisfy

H(X) ≤ H(Pe ) + Pe log(m − 1),    (2.66)

which is the unconditional form of Fano’s inequality. We can weaken this inequality to obtain an explicit lower bound for Pe ,

Pe ≥ (H(X) − 1)/log(m − 1).    (2.67)
34. Entropy of initial conditions. Prove that H(X 0 |Xn ) is non-decreasing with n for
any Markov chain.
Solution: Entropy of initial conditions. For a Markov chain, by the data processing
theorem, we have
I(X0 ; Xn−1 ) ≥ I(X0 ; Xn ).
(2.68)
Therefore
H(X0 ) − H(X0 |Xn−1 ) ≥ H(X0 ) − H(X0 |Xn )
(2.69)
or H(X0 |Xn ) increases with n .
35. Relative entropy is not symmetric: Let the random variable X have three possible
outcomes {a, b, c} . Consider two distributions on this random variable
Symbol    p(x)    q(x)
   a       1/2     1/3
   b       1/4     1/3
   c       1/4     1/3

Calculate H(p), H(q), D(p||q) and D(q||p). Verify that in this case D(p||q) ≠ D(q||p).
Solution:

H(p) = (1/2) log 2 + (1/4) log 4 + (1/4) log 4 = 1.5 bits.    (2.70)
H(q) = (1/3) log 3 + (1/3) log 3 + (1/3) log 3 = log 3 = 1.58496 bits.    (2.71)
D(p||q) = (1/2) log(3/2) + (1/4) log(3/4) + (1/4) log(3/4) = log 3 − 1.5 = 1.58496 − 1.5 = 0.08496.    (2.72)
D(q||p) = (1/3) log(2/3) + (1/3) log(4/3) + (1/3) log(4/3) = 5/3 − log 3 = 1.66666 − 1.58496 = 0.08170.    (2.73)
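These four numbers are quick to verify (an added check, not part of the original solution):

```python
import math

p = {"a": 1/2, "b": 1/4, "c": 1/4}
q = {"a": 1/3, "b": 1/3, "c": 1/3}

H = lambda d: -sum(v * math.log2(v) for v in d.values())
D = lambda d1, d2: sum(d1[s] * math.log2(d1[s] / d2[s]) for s in d1)

print(H(p), H(q))   # 1.5 1.585
print(D(p, q))      # 0.08496
print(D(q, p))      # 0.08170, not equal to D(p||q)
```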
36. Symmetric relative entropy: Though, as the previous example shows, D(p||q) ≠ D(q||p) in general, there could be distributions for which equality holds. Give an example of two distributions p and q on a binary alphabet such that D(p||q) = D(q||p) (other than the trivial case p = q ).

Solution: A simple case for D((p, 1 − p)||(q, 1 − q)) = D((q, 1 − q)||(p, 1 − p)), i.e., for

p log(p/q) + (1 − p) log((1 − p)/(1 − q)) = q log(q/p) + (1 − q) log((1 − q)/(1 − p)),    (2.74)

is when q = 1 − p .
37. Relative entropy: Let X, Y, Z be three random variables with a joint probability mass function p(x, y, z). The relative entropy between the joint distribution and the product of the marginals is

D(p(x, y, z)||p(x)p(y)p(z)) = E[log (p(x, y, z)/(p(x)p(y)p(z)))].    (2.75)

Expand this in terms of entropies. When is this quantity zero?

Solution:

D(p(x, y, z)||p(x)p(y)p(z)) = E[log (p(x, y, z)/(p(x)p(y)p(z)))]    (2.76)
= E[log p(x, y, z)] − E[log p(x)] − E[log p(y)] − E[log p(z)]    (2.77)
= −H(X, Y, Z) + H(X) + H(Y ) + H(Z).    (2.78)

We have D(p(x, y, z)||p(x)p(y)p(z)) = 0 if and only if p(x, y, z) = p(x)p(y)p(z) for all (x, y, z), i.e., if X, Y and Z are independent.
38. The value of a question. Let X ∼ p(x), x = 1, 2, . . . , m . We are given a set S ⊆ {1, 2, . . . , m}. We ask whether X ∈ S and receive the answer

    Y = 1, if X ∈ S,
        0, if X ∉ S.

Suppose Pr{X ∈ S} = α . Find the decrease in uncertainty H(X) − H(X|Y ) .
Apparently any set S with a given α is as good as any other.
Solution: The value of a question.
H(X) − H(X|Y ) = I(X; Y )
= H(Y ) − H(Y |X)
= H(α) − H(Y |X)
= H(α)
since H(Y |X) = 0 .
39. Entropy and pairwise independence.
Let X, Y, Z be three binary Bernoulli(1/2) random variables that are pairwise independent, that is, I(X; Y ) = I(X; Z) = I(Y ; Z) = 0 .
(a) Under this constraint, what is the minimum value for H(X, Y, Z) ?
(b) Give an example achieving this minimum.
Solution:
(a)
H(X, Y, Z) = H(X, Y ) + H(Z|X, Y )
(2.79)
≥ H(X, Y )
(2.80)
= 2.
(2.81)
So the minimum value for H(X, Y, Z) is at least 2. To show that it is actually equal to 2, we show in part (b) that this bound is attainable.
(b) Let X and Y be i.i.d. Bernoulli(1/2) and let Z = X ⊕ Y , where ⊕ denotes addition mod 2 (xor).
40. Discrete entropies
Let X and Y be two independent integer-valued random variables. Let X be uniformly
distributed over {1, 2, . . . , 8} , and let Pr{Y = k} = 2^(−k) , k = 1, 2, 3, . . .
(a) Find H(X)
(b) Find H(Y )
(c) Find H(X + Y, X − Y ) .
Solution:
(a) For a uniform distribution, H(X) = log m = log 8 = 3 .
(b) For a geometric distribution, H(Y ) = Σ_k k 2^(−k) = 2. (See the solution to Problem 2.1.)
(c) Since (X, Y ) → (X +Y, X −Y ) is a one to one transformation, H(X +Y, X −Y ) =
H(X, Y ) = H(X) + H(Y ) = 3 + 2 = 5 .
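A short numerical confirmation of all three answers (added here, not part of the original solution; Y's support is truncated at a point where the tail is negligible):

```python
import math

K = 60
px = {x: 1/8 for x in range(1, 9)}            # X uniform on {1, ..., 8}
py = {y: 2.0**-y for y in range(1, K)}        # Pr{Y = k} = 2^-k

# (X, Y) -> (X + Y, X - Y) is one-to-one, so the joint entropy is preserved
pjoint = {(x + y, x - y): px[x] * py[y] for x in px for y in py}

H = lambda d: -sum(p * math.log2(p) for p in d.values() if p > 0)
print(H(px), H(py), H(pjoint))   # 3.0 2.0 5.0
```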
41. Random questions
One wishes to identify a random object X ∼ p(x) . A question Q ∼ r(q) is asked
at random according to r(q) . This results in a deterministic answer A = A(x, q) ∈
{a1 , a2 , . . .} . Suppose X and Q are independent. Then I(X; Q, A) is the uncertainty
in X removed by the question-answer (Q, A) .
(a) Show I(X; Q, A) = H(A|Q) . Interpret.
(b) Now suppose that two i.i.d. questions Q 1 , Q2 , ∼ r(q) are asked, eliciting answers
A1 and A2 . Show that two questions are less valuable than twice a single question
in the sense that I(X; Q1 , A1 , Q2 , A2 ) ≤ 2I(X; Q1 , A1 ) .
Solution: Random questions.
(a)
I(X; Q, A) = H(Q, A) − H(Q, A|X)
= H(Q) + H(A|Q) − H(Q|X) − H(A|Q, X)
= H(Q) + H(A|Q) − H(Q)
= H(A|Q)
The interpretation is as follows. The uncertainty removed in X by (Q, A) is the
same as the uncertainty in the answer given the question.
(b) Using the result from part a and the fact that questions are independent, we can
easily obtain the desired relationship.
I(X; Q1 , A1 , Q2 , A2 )
=(a) I(X; Q1 ) + I(X; A1 |Q1 ) + I(X; Q2 |A1 , Q1 ) + I(X; A2 |A1 , Q1 , Q2 )
=(b) I(X; A1 |Q1 ) + H(Q2 |A1 , Q1 ) − H(Q2 |X, A1 , Q1 ) + I(X; A2 |A1 , Q1 , Q2 )
=(c) I(X; A1 |Q1 ) + I(X; A2 |A1 , Q1 , Q2 )
=    I(X; A1 |Q1 ) + H(A2 |A1 , Q1 , Q2 ) − H(A2 |X, A1 , Q1 , Q2 )
=(d) I(X; A1 |Q1 ) + H(A2 |A1 , Q1 , Q2 )
≤(e) I(X; A1 |Q1 ) + H(A2 |Q2 )
=(f) 2I(X; A1 |Q1 )
(a) Chain rule.
(b) X and Q1 are independent.
(c) Q2 is independent of X , Q1 , and A1 .
(d) A2 is completely determined given Q2 and X .
(e) Conditioning decreases entropy.
(f) Result from part a.
42. Inequalities. Which of the following inequalities are generally ≥, =, ≤ ? Label each
with ≥, =, or ≤ .
(a) H(5X) vs. H(X)
(b) I(g(X); Y ) vs. I(X; Y )
(c) H(X0 |X−1 ) vs. H(X0 |X−1 , X1 )
(d) H(X1 , X2 , . . . , Xn ) vs. H(c(X1 , X2 , . . . , Xn )), where c(x1 , x2 , . . . , xn ) is the Huffman codeword assigned to (x1 , x2 , . . . , xn ).
(e) H(X, Y )/(H(X) + H(Y )) vs. 1
Solution:
(a) X → 5X is a one-to-one mapping, and hence H(X) = H(5X).
(b) By the data processing inequality, I(g(X); Y ) ≤ I(X; Y ).
(c) Because conditioning reduces entropy, H(X0 |X−1 ) ≥ H(X0 |X−1 , X1 ).
(d) H(X1 , X2 , . . . , Xn ) = H(c(X1 , X2 , . . . , Xn )), since the Huffman code is a one-to-one mapping of the source sequences.
(e) H(X, Y ) ≤ H(X) + H(Y ), so H(X, Y )/(H(X) + H(Y )) ≤ 1.
43. Mutual information of heads and tails.
(a) Consider a fair coin flip. What is the mutual information between the top side
and the bottom side of the coin?
(b) A 6-sided fair die is rolled. What is the mutual information between the top side
and the front face (the side most facing you)?
Solution:
Mutual information of heads and tails.
To prove (a) observe that
I(T ; B) = H(B) − H(B|T )
= log 2 = 1
since B ∼ Ber(1/2) , and B = f (T ) . Here B, T stand for Bottom and Top respectively.
To prove (b) note that having observed a side of the cube facing us F , there are four
possibilities for the top T , which are equally probable. Thus,
I(T ; F ) = H(T ) − H(T |F )
= log 6 − log 4
= log 3 − 1
since T has uniform distribution on {1, 2, . . . , 6} .
44. Pure randomness
We wish to use a 3-sided coin to generate a fair coin toss. Let the coin X have probability mass function

    X = A, with probability pA,
        B, with probability pB,
        C, with probability pC,

where pA , pB , pC are unknown.
(a) How would you use 2 independent flips X1 , X2 to generate (if possible) a Bernoulli(1/2) random variable Z ?
(b) What is the resulting maximum expected number of fair bits generated?
Solution:
(a) The trick here is to notice that for any two letters Y and Z produced by two
independent tosses of our bent three-sided coin, Y Z has the same probability as
ZY . So we can produce B ∼ Bernoulli(1/2) coin flips by letting B = 0 when we
get AB , BC or AC , and B = 1 when we get BA , CB or CA (if we get AA ,
BB or CC we don’t assign a value to B .)
(b) The expected number of bits generated by the above scheme is as follows. We get
one bit, except when the two flips of the 3-sided coin produce the same symbol.
So the expected number of fair bits generated is
    0 · [P(AA) + P(BB) + P(CC)] + 1 · [1 − P(AA) − P(BB) − P(CC)],        (2.82)
or
    1 − pA² − pB² − pC².        (2.83)
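A short simulation can be used to sanity-check this scheme (this sketch is not part of the original solution; the probabilities pA, pB, pC below are arbitrary illustrative values, since the scheme itself never needs to know them):

    import random

    def fair_bits_per_pair(p, n_pairs, seed=0):
        """Pair up flips of the biased 3-sided coin; emit a fair bit whenever
        the two flips differ, nothing when they agree. Returns bits per pair."""
        rng = random.Random(seed)
        symbols = ["A", "B", "C"]
        bits = 0
        for _ in range(n_pairs):
            y = rng.choices(symbols, weights=p)[0]
            z = rng.choices(symbols, weights=p)[0]
            if y != z:            # YZ and ZY are equally likely, so the bit is fair
                bits += 1         # we only count the yield here
        return bits / n_pairs

    p = [0.5, 0.3, 0.2]                        # hypothetical (pA, pB, pC)
    print(fair_bits_per_pair(p, 200_000))      # empirical rate of fair bits per pair
    print(1 - sum(q * q for q in p))           # 1 - pA^2 - pB^2 - pC^2 = 0.62

The empirical rate matches expression (2.83) to within sampling error.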
45. Finite entropy. Show that for a discrete random variable X ∈ {1, 2, . . .} , if E log X <
∞ , then H(X) < ∞ .
Solution: Let the distribution on the integers be p1, p2, . . . . Then H(p) = −Σ pi log pi
and E log X = Σ pi log i = c < ∞ .
We will now find the maximum entropy distribution subject to the constraint on the
expected logarithm. Using Lagrange multipliers or the results of Chapter 12, we have
the following functional to optimize
    J(p) = −Σ pi log pi − λ1 Σ pi − λ2 Σ pi log i.        (2.84)
Differentiating with respect to pi and setting to zero, we find that the maximizing pi
is of the form pi = a i^λ , where a = 1/(Σ i^λ) and λ is chosen to meet the expected-log
constraint, i.e.,
    Σ i^λ log i / Σ i^λ = c.        (2.85)
Using this value of pi , we can see that the entropy is finite.
46. Axiomatic definition of entropy. If we assume certain axioms for our measure of
information, then we will be forced to use a logarithmic measure like entropy. Shannon
used this to justify his initial definition of entropy. In this book, we will rely more on
the other properties of entropy rather than its axiomatic derivation to justify its use.
The following problem is considerably more difficult than the other problems in this
section.
If a sequence of symmetric functions Hm(p1, p2, . . . , pm) satisfies the following properties,
  • Normalization: H2(1/2, 1/2) = 1,
  • Continuity: H2(p, 1 − p) is a continuous function of p ,
  • Grouping: Hm(p1, p2, . . . , pm) = Hm−1(p1 + p2, p3, . . . , pm) + (p1 + p2) H2(p1/(p1 + p2), p2/(p1 + p2)),
prove that Hm must be of the form
    Hm(p1, p2, . . . , pm) = − Σ_{i=1}^m pi log pi ,    m = 2, 3, . . . .        (2.86)
There are various other axiomatic formulations which also result in the same definition
of entropy. See, for example, the book by Csiszár and Körner [4].
Solution: Axiomatic definition of entropy. This is a long solution, so we will first
outline what we plan to do. First we will extend the grouping axiom by induction and
prove that
    Hm(p1, p2, . . . , pm) = Hm−k(p1 + p2 + · · · + pk, pk+1, . . . , pm)
        + (p1 + p2 + · · · + pk) Hk(p1/(p1 + · · · + pk), . . . , pk/(p1 + · · · + pk)).        (2.87)
Let f(m) be the entropy of a uniform distribution on m symbols, i.e.,
    f(m) = Hm(1/m, 1/m, . . . , 1/m).        (2.88)
We will then show that for any two integers r and s , f(rs) = f(r) + f(s) .
We use this to show that f(m) = log m . We then show for rational p = r/s that
H2(p, 1 − p) = −p log p − (1 − p) log(1 − p) . By continuity, we will extend it to irrational
p and finally by induction and grouping, we will extend the result to Hm for m ≥ 2 .
To begin, we extend the grouping axiom. For convenience in notation, we will let
    Sk = Σ_{i=1}^k pi        (2.89)
and we will denote H2(q, 1 − q) as h(q) . Then we can write the grouping axiom as
    Hm(p1, . . . , pm) = Hm−1(S2, p3, . . . , pm) + S2 h(p2/S2).        (2.90)
Applying the grouping axiom again, we have
    Hm(p1, . . . , pm) = Hm−1(S2, p3, . . . , pm) + S2 h(p2/S2)        (2.91)
    = Hm−2(S3, p4, . . . , pm) + S3 h(p3/S3) + S2 h(p2/S2)        (2.92)
    . . .
    = Hm−(k−1)(Sk, pk+1, . . . , pm) + Σ_{i=2}^k Si h(pi/Si).        (2.94)
Now, we apply the same grouping axiom repeatedly to Hk(p1/Sk, . . . , pk/Sk) , to obtain
    Hk(p1/Sk, . . . , pk/Sk) = H2(Sk−1/Sk, pk/Sk) + Σ_{i=2}^{k−1} (Si/Sk) h((pi/Sk)/(Si/Sk))        (2.95)
    = (1/Sk) Σ_{i=2}^k Si h(pi/Si).        (2.96)
From (2.94) and (2.96), it follows that
    Hm(p1, . . . , pm) = Hm−k(Sk, pk+1, . . . , pm) + Sk Hk(p1/Sk, . . . , pk/Sk),        (2.97)
which is the extended grouping axiom.
Now we need to use an axiom that is not explicitly stated in the text, namely that the
function Hm is symmetric with respect to its arguments. Using this, we can combine
any set of arguments of Hm using the extended grouping axiom.
Let f(m) denote Hm(1/m, 1/m, . . . , 1/m) . Consider
    f(mn) = Hmn(1/mn, 1/mn, . . . , 1/mn).        (2.98)
By repeatedly applying the extended grouping axiom, we have
    f(mn) = Hmn(1/mn, 1/mn, . . . , 1/mn)        (2.99)
    = Hmn−n(1/m, 1/mn, . . . , 1/mn) + (1/m) Hn(1/n, . . . , 1/n)        (2.100)
    = Hmn−2n(1/m, 1/m, 1/mn, . . . , 1/mn) + (2/m) Hn(1/n, . . . , 1/n)        (2.101)
    . . .
    = Hm(1/m, . . . , 1/m) + Hn(1/n, . . . , 1/n)        (2.103)
    = f(m) + f(n).        (2.104)
We can immediately use this to conclude that f(m^k) = k f(m) .
Now, we will argue that H2(1, 0) = h(1) = 0 . We do this by expanding H3(p1, p2, 0)
( p1 + p2 = 1 ) in two different ways using the grouping axiom:
    H3(p1, p2, 0) = H2(p1, p2) + p2 H2(1, 0)        (2.105)
    = H2(1, 0) + (p1 + p2) H2(p1, p2).        (2.106)
Thus p2 H2(1, 0) = H2(1, 0) for all p2 , and therefore H2(1, 0) = 0 .
We will also need to show that f(m + 1) − f(m) → 0 as m → ∞ . To prove this, we
use the extended grouping axiom and write
    f(m + 1) = Hm+1(1/(m + 1), . . . , 1/(m + 1))        (2.107)
    = h(1/(m + 1)) + (m/(m + 1)) Hm(1/m, . . . , 1/m)        (2.108)
    = h(1/(m + 1)) + (m/(m + 1)) f(m)        (2.109)
and therefore
    f(m + 1) − (m/(m + 1)) f(m) = h(1/(m + 1)).        (2.110)
Thus lim [f(m + 1) − (m/(m + 1)) f(m)] = lim h(1/(m + 1)) . But by the continuity of H2 ,
it follows that the limit on the right is h(0) = 0 . Thus lim h(1/(m + 1)) = 0 .
Let us define
    an+1 = f(n + 1) − f(n)        (2.111)
and
    bn = h(1/n).        (2.112)
Then
    an+1 = −(1/(n + 1)) f(n) + bn+1        (2.113)
    = −(1/(n + 1)) Σ_{i=2}^n ai + bn+1        (2.114)
and therefore
    (n + 1) bn+1 = (n + 1) an+1 + Σ_{i=2}^n ai .        (2.115)
Therefore summing over n , we have
    Σ_{n=2}^N n bn = Σ_{n=2}^N (n an + an−1 + . . . + a2) = N Σ_{n=2}^N an .        (2.116)
Dividing both sides by Σ_{n=1}^N n = N(N + 1)/2 , we obtain
    (2/(N + 1)) Σ_{n=2}^N an = (Σ_{n=2}^N n bn) / (Σ_{n=2}^N n).        (2.117)
Now by continuity of H2 and the definition of bn , it follows that bn → 0 as n → ∞ .
Since the right hand side is essentially an average of the bn 's, it also goes to 0 (this
can be proved more precisely using ε 's and δ 's). Thus the left hand side goes to 0. We
can then see that
    aN+1 = bN+1 − (1/(N + 1)) Σ_{n=2}^N an        (2.118)
also goes to 0 as N → ∞ . Thus
    f(n + 1) − f(n) → 0 as n → ∞.        (2.119)
We will now prove the following lemma.
Lemma 2.0.1 Let the function f(m) satisfy the following assumptions:
  • f(mn) = f(m) + f(n) for all integers m , n ,
  • lim_{n→∞} (f(n + 1) − f(n)) = 0 ,
  • f(2) = 1 ;
then the function f(m) = log2 m .
Proof of the lemma: Let P be an arbitrary prime number and let
    g(n) = f(n) − (f(P)/log2 P) log2 n.        (2.120)
Then g(n) satisfies the first assumption of the lemma. Also g(P ) = 0 .
Also if we let
    αn = g(n + 1) − g(n) = f(n + 1) − f(n) + (f(P)/log2 P) log2(n/(n + 1)),        (2.121)
then the second assumption in the lemma implies that lim αn = 0 .
For an integer n , define
    n(1) = ⌊n/P⌋.        (2.122)
Then it follows that n(1) < n/P , and
    n = n(1) P + l        (2.123)
where 0 ≤ l < P . From the fact that g(P) = 0 , it follows that g(P n(1)) = g(n(1)) ,
and
    g(n) = g(n(1)) + g(n) − g(P n(1)) = g(n(1)) + Σ_{i=P n(1)}^{n−1} αi .        (2.124)
Just as we have defined n(1) from n , we can define n(2) from n(1) . Continuing this
process, we can then write
    g(n) = g(n(k)) + Σ_{j=1}^k ( Σ_{i=P n(j)}^{n(j−1) − 1} αi ).        (2.125)
Since n(k) ≤ n/P^k , after
    k = ⌊log n / log P⌋ + 1        (2.126)
terms, we have n(k) = 0 , and g(0) = 0 (this follows directly from the additive property
of g ). Thus we can write
    g(n) = Σ_{i=1}^{tn} αi ,        (2.127)
a sum of tn terms, where
    tn ≤ P (log n / log P + 1).        (2.128)
Since αn → 0 and g(n) has at most o(log2 n) terms αi , it follows that g(n)/log2 n → 0 .
Thus it follows that
    lim_{n→∞} f(n)/log2 n = f(P)/log2 P.        (2.129)
Since P was arbitrary, it follows that f (P )/ log 2 P = c for every prime number P .
Applying the third axiom in the lemma, it follows that the constant is 1, and f (P ) =
log2 P .
For composite numbers N = P1 P2 . . . Pl , we can apply the first property of f and the
prime number factorization of N to show that
    f(N) = Σ f(Pi) = Σ log2 Pi = log2 N.        (2.130)
Thus the lemma is proved.
The lemma can be simplified considerably, if instead of the second assumption, we
replace it by the assumption that f (n) is monotone in n . We will now argue that the
only function f (m) such that f (mn) = f (m) + f (n) for all integers m, n is of the form
f (m) = log a m for some base a .
Let c = f (2) . Now f (4) = f (2 × 2) = f (2) + f (2) = 2c . Similarly, it is easy to see
that f (2k ) = kc = c log 2 2k . We will extend this to integers that are not powers of 2.
For any integer m , let r > 0 be another integer and let 2^k ≤ m^r < 2^{k+1} . Then by
the monotonicity assumption on f , we have
    kc ≤ r f(m) < (k + 1)c        (2.131)
or
    c k/r ≤ f(m) < c (k + 1)/r.        (2.132)
Now by the monotonicity of log , we have
    k/r ≤ log2 m < (k + 1)/r.        (2.133)
Combining these two equations, we obtain
    | f(m)/c − log2 m | < 1/r.        (2.134)
Since r was arbitrary, we must have
    f(m) = c log2 m        (2.135)
and we can identify c = 1 from the last assumption of the lemma.
Now we are almost done. We have shown that for any uniform distribution on m
outcomes, f(m) = Hm(1/m, . . . , 1/m) = log2 m .
We will now show that
    H2(p, 1 − p) = −p log p − (1 − p) log(1 − p).        (2.136)
To begin, let p be a rational number, r/s , say. Consider the extended grouping axiom
for Hs :
    f(s) = Hs(1/s, . . . , 1/s) = H(1/s, . . . , 1/s, (s − r)/s) + ((s − r)/s) f(s − r)        (2.137)
            (with r entries equal to 1/s in the second entropy)
    = H2(r/s, (s − r)/s) + (r/s) f(r) + ((s − r)/s) f(s − r).        (2.138)
Substituting f(s) = log2 s , etc., we obtain
    H2(r/s, (s − r)/s) = −(r/s) log2(r/s) − (1 − r/s) log2(1 − r/s).        (2.139)
Thus (2.136) is true for rational p . By the continuity assumption, (2.136) is also true
at irrational p .
To complete the proof, we have to extend the definition from H2 to Hm , i.e., we have
to show that
    Hm(p1, . . . , pm) = − Σ pi log pi        (2.140)
for all m . This is a straightforward induction. We have just shown that this is true for
m = 2 . Now assume that it is true for m = n − 1 . By the grouping axiom,
    Hn(p1, . . . , pn) = Hn−1(p1 + p2, p3, . . . , pn)
        + (p1 + p2) H2(p1/(p1 + p2), p2/(p1 + p2))        (2.141)
    = −(p1 + p2) log(p1 + p2) − Σ_{i=3}^n pi log pi
        − p1 log(p1/(p1 + p2)) − p2 log(p2/(p1 + p2))        (2.143)
    = − Σ_{i=1}^n pi log pi .        (2.145)
Thus the statement is true for m = n , and by induction, it is true for all m . Thus we
have finally proved that the only symmetric function that satisfies the axioms is
    Hm(p1, . . . , pm) = − Σ_{i=1}^m pi log pi .        (2.146)
The proof above is due to Rényi [11].
47. The entropy of a missorted file.
A deck of n cards in order 1, 2, . . . , n is provided. One card is removed at random
then replaced at random. What is the entropy of the resulting deck?
Solution: The entropy of a missorted file.
The heart of this problem is simply carefully counting the possible outcome states.
There are n ways to choose which card gets mis-sorted, and, once the card is chosen,
there are again n ways to choose where the card is replaced in the deck. Each of these
shuffling actions has probability 1/n² . Unfortunately, not all of these n² actions result
in a unique mis-sorted file. So we need to carefully count the number of distinguishable
outcome states. The resulting deck can only take on one of the following three cases.
• The selected card is at its original location after a replacement.
• The selected card ends up exactly one location away from its original location after a
replacement (an adjacent transposition).
• The selected card is at least two locations away from its original location after a
replacement.
To compute the entropy of the resulting deck, we need to know the probability of each
case.
Case 1 (resulting deck is the same as the original): There are n ways to achieve this
outcome state, one for each of the n cards in the deck. Thus, the probability associated
with case 1 is n/n² = 1/n .
Case 2 (adjacent pair swapping): There are n − 1 adjacent pairs, each of which will
have a probability of 2/n² , since for each pair, there are two ways to achieve the swap,
either by selecting the left-hand card and moving it one to the right, or by selecting the
right-hand card and moving it one to the left.
Case 3 (typical situation): None of the remaining actions "collapses". They all result
in unique outcome states, each with probability 1/n² . Of the n² possible shuffling
actions, n² − n − 2(n − 1) of them result in this third case (we've simply subtracted
the case 1 and case 2 situations above).
The entropy of the resulting deck can be computed as follows.
    H(X) = (1/n) log n + (n − 1) (2/n²) log(n²/2) + (n² − 3n + 2) (1/n²) log(n²)
         = ((2n − 1)/n) log n − 2(n − 1)/n² .
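For small decks this formula can be checked by brute force (a sketch, not part of the original solution): enumerate all n² remove-and-replace actions, tally the distribution over resulting decks, and compare the exact entropy with ((2n − 1)/n) log n − 2(n − 1)/n² .

    from collections import Counter
    from math import log2

    def missorted_entropy(n):
        """Exact entropy of the deck after one random remove-and-replace."""
        deck = tuple(range(1, n + 1))
        counts = Counter()
        for i in range(n):                       # position of the removed card
            rest = deck[:i] + deck[i + 1:]
            for j in range(n):                   # slot where it is re-inserted
                counts[rest[:j] + (deck[i],) + rest[j:]] += 1
        total = n * n
        return -sum(c / total * log2(c / total) for c in counts.values())

    for n in (3, 4, 5, 6):
        formula = (2 * n - 1) / n * log2(n) - 2 * (n - 1) / n**2
        print(n, round(missorted_entropy(n), 6), round(formula, 6))

The two columns agree for every n tested.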
48. Sequence length.
How much information does the length of a sequence give about the content of a sequence? Suppose we consider a Bernoulli (1/2) process {X i }.
Stop the process when the first 1 appears. Let N designate this stopping time.
Thus X N is an element of the set of all finite length binary sequences {0, 1} ∗ =
{0, 1, 00, 01, 10, 11, 000, . . .}.
(a) Find I(N ; X N ).
(b) Find H(X N |N ).
(c) Find H(X N ).
Let’s now consider a different stopping time. For this part, again assume X i ∼ Bernoulli (1/2)
but stop at time N = 6 , with probability 1/3 and stop at time N = 12 with probability
2/3. Let this stopping time be independent of the sequence X 1 X2 . . . X12 .
(d) Find I(N ; X N ).
(e) Find H(X N |N ).
(f) Find H(X N ).
Solution:
(a)
    I(X^N ; N) = H(N) − H(N|X^N)
               = H(N) − 0
           (a) = E(N)
               = 2,
where (a) comes from the fact that the entropy of a geometric(1/2) random variable is
just its mean: here p(N = n) = 2^{−n} , so − log p(n) = n and H(N) = E(N) = 2 .
(b) Since given N we know that Xi = 0 for all i < N and XN = 1 ,
    H(X^N |N) = 0.
(c)
    H(X^N) = I(X^N ; N) + H(X^N |N) = 2 + 0 = 2.
(d)
    I(X^N ; N) = H(N) − H(N|X^N) = H(N) − 0 = HB(1/3).
(e)
    H(X^N |N) = (1/3) H(X^6 |N = 6) + (2/3) H(X^12 |N = 12)
              = (1/3) H(X^6) + (2/3) H(X^12)
              = (1/3) · 6 + (2/3) · 12
              = 10.
(f)
    H(X^N) = I(X^N ; N) + H(X^N |N) = HB(1/3) + 10.
Chapter 3
The Asymptotic Equipartition Property
1. Markov's inequality and Chebyshev's inequality.
(a) (Markov's inequality.) For any non-negative random variable X and any t > 0 ,
show that
    Pr{X ≥ t} ≤ EX/t.        (3.1)
Exhibit a random variable that achieves this inequality with equality.
(b) (Chebyshev's inequality.) Let Y be a random variable with mean µ and variance
σ² . By letting X = (Y − µ)² , show that for any ε > 0 ,
    Pr{|Y − µ| > ε} ≤ σ²/ε².        (3.2)
(c) (The weak law of large numbers.) Let Z1, Z2, . . . , Zn be a sequence of i.i.d. random
variables with mean µ and variance σ² . Let Z̄n = (1/n) Σ_{i=1}^n Zi be the sample mean.
Show that
    Pr{ |Z̄n − µ| > ε } ≤ σ²/(nε²).        (3.3)
Thus Pr{ |Z̄n − µ| > ε } → 0 as n → ∞ . This is known as the weak law of large
numbers.
Solution: Markov's inequality and Chebyshev's inequality.
(a) If X has distribution F(x) ,
    EX = ∫_0^∞ x dF = ∫_0^δ x dF + ∫_δ^∞ x dF
       ≥ ∫_δ^∞ x dF
       ≥ ∫_δ^∞ δ dF
       = δ Pr{X ≥ δ}.
Rearranging sides and dividing by δ we get
    Pr{X ≥ δ} ≤ EX/δ.        (3.4)
One student gave a proof based on conditional expectations. It goes like
    EX = E(X|X ≥ δ) Pr{X ≥ δ} + E(X|X < δ) Pr{X < δ}
       ≥ E(X|X ≥ δ) Pr{X ≥ δ}
       ≥ δ Pr{X ≥ δ},
which leads to (3.4) as well.
Given δ , the distribution achieving Pr{X ≥ δ} = EX/δ is
    X = δ with probability µ/δ , and X = 0 with probability 1 − µ/δ ,
where µ = EX ≤ δ .
(b) Letting X = (Y − µ)² in Markov's inequality,
    Pr{(Y − µ)² > ε²} ≤ Pr{(Y − µ)² ≥ ε²} ≤ E(Y − µ)²/ε² = σ²/ε²,
and noticing that Pr{(Y − µ)² > ε²} = Pr{|Y − µ| > ε} , we get
    Pr{|Y − µ| > ε} ≤ σ²/ε².
(c) Letting Y in Chebyshev's inequality from part (b) equal Z̄n , and noticing that
E Z̄n = µ and Var(Z̄n) = σ²/n (i.e., Z̄n is the sum of n i.i.d. random variables Zi/n ,
each with variance σ²/n² ), we have
    Pr{|Z̄n − µ| > ε} ≤ σ²/(nε²).
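The looseness of the Chebyshev bound in part (c) is easy to see numerically. The sketch below (not part of the original solution) draws Bernoulli(1/2) samples, so µ = 1/2 and σ² = 1/4 , and compares the empirical probability of a large deviation with the bound σ²/(nε²):

    import random

    def wlln_check(n, eps, trials=20_000, seed=1):
        rng = random.Random(seed)
        mu, var = 0.5, 0.25                        # Bernoulli(1/2)
        bad = 0
        for _ in range(trials):
            zbar = sum(rng.random() < 0.5 for _ in range(n)) / n
            if abs(zbar - mu) > eps:
                bad += 1
        return bad / trials, var / (n * eps * eps)

    for n in (25, 100, 400):
        empirical, bound = wlln_check(n, eps=0.1)
        print(n, round(empirical, 4), round(bound, 4))

Both columns go to zero as n grows, as the weak law requires, but the Chebyshev bound is far from tight.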
2. AEP and mutual information. Let (Xi , Yi ) be i.i.d. ∼ p(x, y) . We form the log
likelihood ratio of the hypothesis that X and Y are independent vs. the hypothesis
that X and Y are dependent. What is the limit of
    lim (1/n) log [ p(X^n) p(Y^n) / p(X^n, Y^n) ] ?
Solution:
    (1/n) log [ p(X^n) p(Y^n) / p(X^n, Y^n) ] = (1/n) log Π_{i=1}^n [ p(Xi) p(Yi) / p(Xi, Yi) ]
    = (1/n) Σ_{i=1}^n log [ p(Xi) p(Yi) / p(Xi, Yi) ]
    → E( log [ p(X) p(Y) / p(X, Y) ] )
    = −I(X; Y).
Thus p(X^n) p(Y^n) / p(X^n, Y^n) → 2^{−n I(X;Y)} , which will converge to 1 if X and Y are indeed
independent.
3. Piece of cake
A cake is sliced roughly in half, the largest piece being chosen each time, the other
pieces discarded. We will assume that a random cut creates pieces of proportions:
    P = (2/3, 1/3) with probability 3/4,
        (2/5, 3/5) with probability 1/4.
Thus, for example, the first cut (and choice of largest piece) may result in a piece of
size 3/5 . Cutting and choosing from this piece might reduce it to size (3/5)(2/3) at time 2,
and so on.
How large, to first order in the exponent, is the piece of cake after n cuts?
Solution: Let Ci be the fraction of the cake that is kept at the i th cut (the size of the
largest piece as a fraction of the piece being cut), and let Tn be the fraction of cake left
after n cuts. Then Tn = C1 C2 . . . Cn = Π_{i=1}^n Ci . Hence, as in Question 2 of
Homework Set #3,
    lim (1/n) log Tn = lim (1/n) Σ_{i=1}^n log Ci
                     = E[log C1]
                     = (3/4) log(2/3) + (1/4) log(3/5).
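A simulation of the cutting process (a sketch, not part of the original solution) confirms the exponent:

    import random
    from math import log2

    def cake_exponent(n, trials=2_000, seed=2):
        """Average of (1/n) log2 Tn over independent runs of the cutting process."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            log_t = 0.0
            for _ in range(n):
                c = 2 / 3 if rng.random() < 3 / 4 else 3 / 5   # fraction kept at this cut
                log_t += log2(c)
            total += log_t / n
        return total / trials

    expected = 0.75 * log2(2 / 3) + 0.25 * log2(3 / 5)
    print(round(cake_exponent(200), 4), round(expected, 4))    # both close to -0.623

So the piece of cake after n cuts is about 2^{n E[log C1]} = (2/3)^{3n/4} (3/5)^{n/4} to first order in the exponent.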
4. AEP
Let Xi be i.i.d. ∼ p(x), x ∈ {1, 2, . . . , m} . Let µ = EX and H = − Σ p(x) log p(x) .
Let A^n = {x^n ∈ X^n : | −(1/n) log p(x^n) − H| ≤ ε} . Let B^n = {x^n ∈ X^n : | (1/n) Σ_{i=1}^n Xi − µ| ≤ ε} .
(a) Does Pr{X^n ∈ A^n} −→ 1 ?
(b) Does Pr{X^n ∈ A^n ∩ B^n} −→ 1 ?
(c) Show |A^n ∩ B^n| ≤ 2^{n(H+ε)} , for all n .
(d) Show |A^n ∩ B^n| ≥ (1/2) 2^{n(H−ε)} , for n sufficiently large.
Solution:
(a) Yes, by the AEP for discrete random variables the probability X n is typical goes
to 1.
(b) Yes, by the Strong Law of Large Numbers Pr(X^n ∈ B^n) → 1 . So for any ε > 0 there
exists N1 such that Pr(X^n ∈ A^n) > 1 − ε/2 for all n > N1 , and there exists
N2 such that Pr(X^n ∈ B^n) > 1 − ε/2 for all n > N2 . So for all n > max(N1, N2) :
    Pr(X^n ∈ A^n ∩ B^n) = Pr(X^n ∈ A^n) + Pr(X^n ∈ B^n) − Pr(X^n ∈ A^n ∪ B^n)
                        > 1 − ε/2 + 1 − ε/2 − 1
                        = 1 − ε.
So for any ε > 0 there exists N = max(N1, N2) such that Pr(X^n ∈ A^n ∩ B^n) >
1 − ε for all n > N , therefore Pr(X^n ∈ A^n ∩ B^n) → 1 .
(c) By the law of total probability Σ_{x^n ∈ A^n ∩ B^n} p(x^n) ≤ 1 . Also, for x^n ∈ A^n , from
Theorem 3.1.2 in the text, p(x^n) ≥ 2^{−n(H+ε)} . Combining these two gives
1 ≥ Σ_{x^n ∈ A^n ∩ B^n} p(x^n) ≥ Σ_{x^n ∈ A^n ∩ B^n} 2^{−n(H+ε)} = |A^n ∩ B^n| 2^{−n(H+ε)} . Multiplying
through by 2^{n(H+ε)} gives the result |A^n ∩ B^n| ≤ 2^{n(H+ε)} .
(d) Since from (b) Pr{X^n ∈ A^n ∩ B^n} → 1 , there exists N such that Pr{X^n ∈
A^n ∩ B^n} ≥ 1/2 for all n > N . From Theorem 3.1.2 in the text, for x^n ∈ A^n ,
p(x^n) ≤ 2^{−n(H−ε)} . So combining these two gives 1/2 ≤ Σ_{x^n ∈ A^n ∩ B^n} p(x^n) ≤
Σ_{x^n ∈ A^n ∩ B^n} 2^{−n(H−ε)} = |A^n ∩ B^n| 2^{−n(H−ε)} . Multiplying through by 2^{n(H−ε)} gives
the result |A^n ∩ B^n| ≥ (1/2) 2^{n(H−ε)} for n sufficiently large.
5. Sets defined by probabilities.
Let X1 , X2 , . . . be an i.i.d. sequence of discrete random variables with entropy H(X).
Let
Cn (t) = {xn ∈ X n : p(xn ) ≥ 2−nt }
denote the subset of n -sequences with probabilities ≥ 2 −nt .
(a) Show |Cn (t)| ≤ 2nt .
(b) For what values of t does P ({X n ∈ Cn (t)}) → 1?
Solution:
(a) Since the total probability of all sequences is at most 1, |Cn(t)| min_{x^n ∈ Cn(t)} p(x^n) ≤ 1 ,
and since p(x^n) ≥ 2^{−nt} on Cn(t) , it follows that |Cn(t)| ≤ 2^{nt} .
(b) Since −(1/n) log p(X^n) → H , if t < H the probability that p(X^n) ≥ 2^{−nt} goes to 0,
and if t > H , the probability goes to 1.
6. An AEP-like limit. Let X1 , X2 , . . . be i.i.d. drawn according to probability mass
function p(x). Find
    lim_{n→∞} [p(X1, X2, . . . , Xn)]^{1/n} .
Solution: An AEP-like limit. X1, X2, . . . are i.i.d. ∼ p(x) . Hence log p(Xi) are also i.i.d.
and
    lim (p(X1, X2, . . . , Xn))^{1/n} = lim 2^{(1/n) log p(X1, X2, ..., Xn)}
    = 2^{lim (1/n) Σ log p(Xi)}    a.e.
    = 2^{E(log p(X))}    a.e.
    = 2^{−H(X)}    a.e.
by the strong law of large numbers (assuming of course that H(X) exists).
7. The AEP and source coding. A discrete memoryless source emits a sequence of
statistically independent binary digits with probabilities p(1) = 0.005 and p(0) =
0.995 . The digits are taken 100 at a time and a binary codeword is provided for every
sequence of 100 digits containing three or fewer ones.
(a) Assuming that all codewords are the same length, find the minimum length required to provide codewords for all sequences with three or fewer ones.
(b) Calculate the probability of observing a source sequence for which no codeword
has been assigned.
(c) Use Chebyshev’s inequality to bound the probability of observing a source sequence
for which no codeword has been assigned. Compare this bound with the actual
probability computed in part (b).
Solution: The AEP and source coding.
(a) The number of 100-bit binary sequences with three or fewer ones is
    C(100, 0) + C(100, 1) + C(100, 2) + C(100, 3) = 1 + 100 + 4950 + 161700 = 166751 .
The required codeword length is ⌈log2 166751⌉ = 18 . (Note that H(0.005) =
0.0454 , so 18 is quite a bit larger than the 4.54 bits of entropy of the whole
100-bit sequence.)
(b) The probability that a 100-bit sequence has three or fewer ones is
    Σ_{i=0}^3 C(100, i) (0.005)^i (0.995)^{100−i} = 0.60577 + 0.30441 + 0.07572 + 0.01243 = 0.99833 .
Thus the probability that the sequence that is generated cannot be encoded is
1 − 0.99833 = 0.00167 .
(c) In the case of a random variable Sn that is the sum of n i.i.d. random variables
X1, X2, . . . , Xn , Chebyshev's inequality states that
    Pr(|Sn − nµ| ≥ ε) ≤ nσ²/ε²,
where µ and σ² are the mean and variance of Xi . (Therefore nµ and nσ²
are the mean and variance of Sn .) In this problem, n = 100 , µ = 0.005 , and
σ² = (0.005)(0.995) . Note that S100 ≥ 4 if and only if |S100 − 100(0.005)| ≥ 3.5 ,
so we should choose ε = 3.5 . Then
    Pr(S100 ≥ 4) ≤ 100(0.005)(0.995)/(3.5)² ≈ 0.04061 .
This bound is much larger than the actual probability 0.00167.
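The numbers quoted above can be reproduced with a few lines (a sketch, not part of the original solution):

    from math import comb, ceil, log2

    n, p = 100, 0.005
    count = sum(comb(n, k) for k in range(4))                    # sequences with <= 3 ones
    p_encoded = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(4))
    chebyshev = n * p * (1 - p) / 3.5**2                         # S_100 >= 4 iff |S_100 - 0.5| >= 3.5

    print(count, ceil(log2(count)))      # 166751 sequences, 18-bit codewords
    print(1 - p_encoded)                 # ~0.00167: probability that no codeword is assigned
    print(chebyshev)                     # ~0.0406: the much weaker Chebyshev bound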
8. Products. Let
    X = 1 with probability 1/2, 2 with probability 1/4, 3 with probability 1/4.
Let X1, X2, . . . be drawn i.i.d. according to this distribution. Find the limiting behavior
of the product
    (X1 X2 · · · Xn)^{1/n} .
Solution: Products. Let
    Pn = (X1 X2 . . . Xn)^{1/n} .        (3.5)
Then
    log Pn = (1/n) Σ_{i=1}^n log Xi → E log X,        (3.6)
with probability 1, by the strong law of large numbers. Thus Pn → 2^{E log X} with prob.
1. We can easily calculate E log X = (1/2) log 1 + (1/4) log 2 + (1/4) log 3 = (1/4) log 6 ,
and therefore Pn → 2^{(1/4) log 6} = 6^{1/4} = 1.565 .
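A quick numerical check (not part of the original solution):

    import random
    from math import log2

    rng = random.Random(3)
    n = 200_000
    log_sum = 0.0
    for _ in range(n):
        x = rng.choices([1, 2, 3], weights=[0.5, 0.25, 0.25])[0]
        log_sum += log2(x)
    print(2 ** (log_sum / n), 6 ** 0.25)    # both close to 1.5651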
9. AEP. Let X1 , X2 , . . . be independent identically distributed random variables drawn
according to the probability mass function p(x), x ∈ {1, 2, . . . , m} . Thus p(x1, x2, . . . , xn) =
Π_{i=1}^n p(xi) . We know that −(1/n) log p(X1, X2, . . . , Xn) → H(X) in probability. Let
q(x1, x2, . . . , xn) = Π_{i=1}^n q(xi) , where q is another probability mass function on {1, 2, . . . , m} .
(a) Evaluate lim −(1/n) log q(X1, X2, . . . , Xn) , where X1, X2, . . . are i.i.d. ∼ p(x) .
(b) Now evaluate the limit of the log likelihood ratio (1/n) log [q(X1, . . . , Xn)/p(X1, . . . , Xn)]
when X1, X2, . . . are i.i.d. ∼ p(x) . Thus the odds favoring q are exponentially small when
p is true.
Solution: (AEP).
(a) Since the X1, X2, . . . , Xn are i.i.d., so are q(X1), q(X2), . . . , q(Xn) , and hence we
can apply the strong law of large numbers to obtain
    lim −(1/n) log q(X1, X2, . . . , Xn) = lim −(1/n) Σ log q(Xi)        (3.7)
    = −E(log q(X)) w.p. 1        (3.8)
    = − Σ p(x) log q(x)        (3.9)
    = Σ p(x) log (p(x)/q(x)) − Σ p(x) log p(x)        (3.10)
    = D(p||q) + H(p).        (3.11)
(b) Again, by the strong law of large numbers,
    lim −(1/n) log [q(X1, . . . , Xn)/p(X1, . . . , Xn)] = lim −(1/n) Σ log [q(Xi)/p(Xi)]        (3.12)
    = −E(log [q(X)/p(X)]) w.p. 1        (3.13)
    = − Σ p(x) log [q(x)/p(x)]        (3.14)
    = Σ p(x) log [p(x)/q(x)]        (3.15)
    = D(p||q).        (3.16)
10. Random box size. An n -dimensional rectangular box with sides X1, X2, X3, . . . , Xn
is to be constructed. The volume is Vn = Π_{i=1}^n Xi . The edge length l of an n -cube
with the same volume as the random box is l = Vn^{1/n} . Let X1, X2, . . . be i.i.d. uniform
random variables over the unit interval [0, 1]. Find lim_{n→∞} Vn^{1/n} , and compare to
(EVn)^{1/n} . Clearly the expected edge length does not capture the idea of the volume
of the box. The geometric mean, rather than the arithmetic mean, characterizes the
behavior of products.
Solution: Random box size. The volume Vn = Π_{i=1}^n Xi is a random variable, since
the Xi are random variables uniformly distributed on [0, 1] . Vn tends to 0 as n → ∞ .
However,
    loge Vn^{1/n} = (1/n) loge Vn = (1/n) Σ loge Xi → E(loge X) a.e.
by the Strong Law of Large Numbers, since the Xi and loge(Xi) are i.i.d. and E| loge X| < ∞ .
Now
    E(loge Xi) = ∫_0^1 loge x dx = −1.
Hence, since e^x is a continuous function,
    lim_{n→∞} Vn^{1/n} = e^{lim_{n→∞} (1/n) loge Vn} = 1/e < 1/2 .
Thus the "effective" edge length of this solid is e^{−1} . Note that since the Xi 's are
independent, E(Vn) = Π E(Xi) = (1/2)^n . Also 1/2 is the arithmetic mean of the random
variable, and 1/e is the geometric mean.
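The convergence of the edge length to 1/e is easy to see by simulation (a sketch, not part of the original solution); the product is accumulated in log space to avoid numerical underflow:

    import random
    from math import exp, log

    def edge_length(n, seed=4):
        rng = random.Random(seed)
        # 1 - random() lies in (0, 1], so the logarithm is always finite
        s = sum(log(1 - rng.random()) for _ in range(n))
        return exp(s / n)

    for n in (10, 100, 10_000):
        print(n, round(edge_length(n), 4))
    print("1/e =", round(exp(-1), 4), "   (E Vn)^(1/n) =", 0.5)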
11. Proof of Theorem 3.3.1. This problem shows that the size of the smallest "probable"
set is about 2^{nH} . Let X1, X2, . . . , Xn be i.i.d. ∼ p(x) . Let Bδ^(n) ⊂ X^n be such that
Pr(Bδ^(n)) > 1 − δ . Fix ε < 1/2 .
(a) Given any two sets A , B such that Pr(A) > 1 − ε1 and Pr(B) > 1 − ε2 , show
that Pr(A ∩ B) > 1 − ε1 − ε2 . Hence Pr(Aε^(n) ∩ Bδ^(n)) ≥ 1 − ε − δ .
(b) Justify the steps in the chain of inequalities
    1 − ε − δ ≤ Pr(Aε^(n) ∩ Bδ^(n))        (3.17)
    = Σ_{Aε^(n) ∩ Bδ^(n)} p(x^n)        (3.18)
    ≤ Σ_{Aε^(n) ∩ Bδ^(n)} 2^{−n(H−ε)}        (3.19)
    = |Aε^(n) ∩ Bδ^(n)| 2^{−n(H−ε)}        (3.20)
    ≤ |Bδ^(n)| 2^{−n(H−ε)} .        (3.21)
(c) Complete the proof of the theorem.
Solution: Proof of Theorem 3.3.1.
(a) Let A^c denote the complement of A . Then
    P(A^c ∪ B^c) ≤ P(A^c) + P(B^c).        (3.22)
Since P(A) ≥ 1 − ε1 , P(A^c) ≤ ε1 . Similarly, P(B^c) ≤ ε2 . Hence
    P(A ∩ B) = 1 − P(A^c ∪ B^c)        (3.23)
    ≥ 1 − P(A^c) − P(B^c)        (3.24)
    ≥ 1 − ε1 − ε2 .        (3.25)
(b) To complete the proof, we have the following chain of inequalities
    1 − ε − δ (a)≤ Pr(Aε^(n) ∩ Bδ^(n))        (3.26)
    (b)= Σ_{Aε^(n) ∩ Bδ^(n)} p(x^n)        (3.27)
    (c)≤ Σ_{Aε^(n) ∩ Bδ^(n)} 2^{−n(H−ε)}        (3.28)
    (d)= |Aε^(n) ∩ Bδ^(n)| 2^{−n(H−ε)}        (3.29)
    (e)≤ |Bδ^(n)| 2^{−n(H−ε)} ,        (3.30)
where (a) follows from the previous part, (b) follows by the definition of the probability of
a set, (c) follows from the fact that the probability of each element of the typical set is
bounded by 2^{−n(H−ε)} , (d) from the definition of |Aε^(n) ∩ Bδ^(n)| as the cardinality
of the set Aε^(n) ∩ Bδ^(n) , and (e) from the fact that Aε^(n) ∩ Bδ^(n) ⊆ Bδ^(n) .
12. Monotonic convergence of the empirical distribution. Let p̂ n denote the empirical probability mass function corresponding to X 1 , X2 , . . . , Xn i.i.d. ∼ p(x), x ∈ X .
Specifically,
n
1!
I(Xi = x)
p̂n (x) =
n i=1
is the proportion of times that Xi = x in the first n samples, where I is the indicator
function.
(a) Show for X binary that
ED(p̂2n 5 p) ≤ ED(p̂n 5 p).
Thus the expected relative entropy “distance” from the empirical distribution to
the true distribution decreases with sample size.
Hint: Write p̂2n = 21 p̂n + 12 p̂'n and use the convexity of D .
(b) Show for an arbitrary discrete X that
ED(p̂n 5 p) ≤ ED(p̂n−1 5 p).
Hint: Write p̂n as the average of n empirical mass functions with each of the n
samples deleted in turn.
Solution: Monotonic convergence of the empirical distribution.
(a) Note that,
p̂2n (x) =
=
=
2n
1 !
I(Xi = x)
2n i=1
n
2n
11!
11 !
I(Xi = x) +
I(Xi = x)
2 n i=1
2 n i=n+1
1
1
p̂n (x) + p̂'n (x).
2
2
Using convexity of D(p||q) we have that,
1
1
1
1
D(p̂2n ||p) = D( p̂n + p̂'n || p + p)
2
2
2
2
1
1
≤
D(p̂n ||p) + D(p̂'n ||p).
2
2
Taking expectations and using the fact the X i ’s are identically distributed we get,
ED(p̂2n ||p) ≤ ED(p̂n ||p).
(b) The trick to this part is similar to part a) and involves rewriting p̂ n in terms of
p̂n−1 . We see that,
p̂n =
!
1 n−1
I(Xn = x)
I(Xi = x) +
n i=0
n
p̂n =
1!
I(Xj = x)
,
I(Xi = x) +
n i+=j
n
or in general,
where j ranges from 1 to n .
Summing over j we get,
n
n−1!
np̂n =
p̂j + p̂n ,
n j=1 n−1
or,
n
1!
p̂j
p̂n =
n j=1 n−1
where,
n
!
j=1
p̂jn−1 =
1 !
I(Xi = x).
n − 1 i+=j
Again using the convexity of D(p||q) and the fact that the D(p̂ jn−1 ||p) are identically distributed for all j and hence have the same expected value, we obtain the
final result.
(n)
13. Calculation of typical set To clarify the notion of a typical set A ! and the smallest
(n)
set of high probability Bδ , we will calculate the set for a simple example. Consider a
sequence of i.i.d. binary random variables, X 1 , X2 , . . . , Xn , where the probability that
Xi = 1 is 0.6 (and therefore the probability that X i = 0 is 0.4).
(a) Calculate H(X) .
(n)
(b) With n = 25 and % = 0.1 , which sequences fall in the typical set A ! ? What
is the probability of the typical set? How many elements are there in the typical
set? (This involves computation of a table of probabilities for sequences with k
1’s, 0 ≤ k ≤ 25 , and finding those sequences that are in the typical set.)
k
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
,nk
1
25
300
2300
12650
53130
177100
480700
1081575
2042975
3268760
4457400
5200300
5200300
4457400
3268760
2042975
1081575
480700
177100
53130
12650
2300
300
25
1
,n k
pk (1 − p)n−k
0.000000
0.000000
0.000000
0.000001
0.000007
0.000054
0.000227
0.001205
0.003121
0.013169
0.021222
0.077801
0.075967
0.267718
0.146507
0.575383
0.151086
0.846448
0.079986
0.970638
0.019891
0.997633
0.001937
0.999950
0.000047
0.000003
− n1 log p(xn )
1.321928
1.298530
1.275131
1.251733
1.228334
1.204936
1.181537
1.158139
1.134740
1.111342
1.087943
1.064545
1.041146
1.017748
0.994349
0.970951
0.947552
0.924154
0.900755
0.877357
0.853958
0.830560
0.807161
0.783763
0.760364
0.736966
(c) How many elements are there in the smallest set that has probability 0.9?
(d) How many elements are there in the intersection of the sets in part (b) and (c)?
What is the probability of this intersection?
Solution:
(a) H(X) = −0.6 log 0.6 − 0.4 log 0.4 = 0.97095 bits.
(n)
(b) By definition, A! for % = 0.1 is the set of sequences such that − n1 log p(xn ) lies
in the range (H(X)−%, H(X)+%) , i.e., in the range (0.87095, 1.07095). Examining
the last column of the table, it is easy to see that the typical set is the set of all
sequences with the number of ones lying between 11 and 19.
The probability of the typical set can be calculated from cumulative probability
column. The probability that the number of 1’s lies between 11 and 19 is equal to
F (19) − F (10) = 0.970638 − 0.034392 = 0.936246 . Note that this is greater than
1 − % , i.e., the n is large enough for the probability of the typical set to be greater
than 1 − % .
The number of elements in the typical set can be found using the third column.
19
!
. /
. /
. /
19
10
!
!
n
n
n
=
=
−
= 33486026 − 7119516 = 26366510.
k
k
k
k=11
k=0
k=0
(3.31)
(n)
Note that the upper and lower bounds for the size of the A ! can be calculated
as 2n(H+!) = 225(0.97095+0.1) = 226.77 = 1.147365 × 108 , and (1 − %)2n(H−!) =
0.9 × 225(0.97095−0.1) = 0.9 × 221.9875 = 3742308 . Both bounds are very loose!
|A(n)
! |
(n)
(c) To find the smallest set Bδ of probability 0.9, we can imagine that we are filling
a bag with pieces such that we want to reach a certain weight with the minimum
number of pieces. To minimize the number of pieces that we use, we should use
the largest possible pieces. In this case, it corresponds to using the sequences with
the highest probability.
Thus we keep putting the high probability sequences into this set until we reach
a total probability of 0.9. Looking at the fourth column of the table, it is clear
that the probability of a sequence increases monotonically with k . Thus the set
consists of sequences of k = 25, 24, . . . , until we have a total probability 0.9.
(n)
Using the cumulative probability column, it follows that the set B δ consist
of sequences with k ≥ 13 and some sequences with k = 12 . The sequences with
(n)
k ≥ 13 provide a total probability of 1−0.153768 = 0.846232 to the set B δ . The
remaining probability of 0.9 − 0.846232 = 0.053768 should come from sequences
with k = 12 . The number of such sequences needed to fill this probability is at
least 0.053768/p(xn ) = 0.053768/1.460813×10−8 = 3680690.1 , which we round up
to 3680691. Thus the smallest set with probability 0.9 has 33554432 − 16777216 +
(n)
3680691 = 20457907 sequences. Note that the set B δ is not uniquely defined
- it could include any 3680691 sequences with k = 12 . However, the size of the
smallest set is well defined.
(n)
(n)
(d) The intersection of the sets A! and Bδ in parts (b) and (c) consists of all
sequences with k between 13 and 19, and 3680691 sequences with k = 12 . The
probability of this intersection = 0.970638 − 0.153768 + 0.053768 = 0.870638 , and
the size of this intersection = 33486026 − 16777216 + 3680691 = 20389501 .
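The table and the counts used above can be regenerated with the sketch below (not part of the original solution); it tabulates, for each k, the binomial coefficient, the probability of k ones, the cumulative probability, and −(1/n) log p(x^n), and then reports the size and probability of the typical set:

    from math import comb, log2

    n, p, eps = 25, 0.6, 0.1
    H = -p * log2(p) - (1 - p) * log2(1 - p)               # 0.97095 bits

    rows, cum = [], 0.0
    for k in range(n + 1):
        pk = comb(n, k) * p**k * (1 - p)**(n - k)          # probability of k ones
        cum += pk
        neglogp = -(k * log2(p) + (n - k) * log2(1 - p)) / n
        rows.append((k, comb(n, k), pk, cum, neglogp))

    typical = [r for r in rows if H - eps <= r[4] <= H + eps]
    print("typical k:", typical[0][0], "to", typical[-1][0])       # 11 to 19
    print("size:", sum(r[1] for r in typical))                     # 26366510
    print("probability:", round(sum(r[2] for r in typical), 6))    # ~0.936246

The smallest set of probability 0.9 can be found the same way by adding rows in order of decreasing sequence probability, i.e., decreasing k.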
Chapter 4
Entropy Rates of a Stochastic Process
1. Doubly stochastic matrices. An n × n matrix P = [P ij ] is said to be doubly
stochastic if Pij ≥ 0 , Σ_j Pij = 1 for all i , and Σ_i Pij = 1 for all j . An n × n
matrix P is said to be a permutation matrix if it is doubly stochastic and there is
precisely one Pij = 1 in each row and each column.
It can be shown that every doubly stochastic matrix can be written as the convex
combination of permutation matrices.
(a) Let a^t = (a1, a2, . . . , an) , ai ≥ 0 , Σ ai = 1 , be a probability vector. Let b = aP ,
where P is doubly stochastic. Show that b is a probability vector and that
H(b1 , b2 , . . . , bn ) ≥ H(a1 , a2 , . . . , an ) . Thus stochastic mixing increases entropy.
(b) Show that a stationary distribution µ for a doubly stochastic matrix P is the
uniform distribution.
(c) Conversely, prove that if the uniform distribution is a stationary distribution for
a Markov transition matrix P , then P is doubly stochastic.
Solution: Doubly Stochastic Matrices.
(a)
    H(b) − H(a) = − Σ_j bj log bj + Σ_i ai log ai        (4.1)
    = − Σ_j Σ_i ai Pij log ( Σ_k ak Pkj ) + Σ_i ai log ai        (4.2)
    = Σ_{i,j} ai Pij log ( ai / Σ_k ak Pkj )        (4.3)
    ≥ ( Σ_{i,j} ai Pij ) log ( Σ_{i,j} ai / Σ_{i,j} bj )        (4.4)
    = 1 · log (m/m)        (4.5)
    = 0,        (4.6)
where the inequality follows from the log sum inequality.
(b) If the matrix is doubly stochastic, then substituting µi = 1/m we can easily check
that it satisfies µ = µP .
(c) If the uniform is a stationary distribution, then
    1/m = µi = Σ_j µj Pji = (1/m) Σ_j Pji ,        (4.7)
or Σ_j Pji = 1 , i.e., the matrix is doubly stochastic.
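A small numerical illustration of parts (a) and (b) (not part of the original solution; the matrix below is just one arbitrary doubly stochastic example):

    from math import log2

    def entropy(v):
        return -sum(x * log2(x) for x in v if x > 0)

    def mix(a, P):
        m = len(a)
        return [sum(a[i] * P[i][j] for i in range(m)) for j in range(m)]

    P = [[0.5, 0.3, 0.2],          # rows and columns all sum to 1
         [0.2, 0.5, 0.3],
         [0.3, 0.2, 0.5]]
    a = [0.7, 0.2, 0.1]
    b = mix(a, P)
    print(round(entropy(a), 4), "<=", round(entropy(b), 4))   # stochastic mixing increases entropy
    print(mix([1/3, 1/3, 1/3], P))                             # the uniform vector is stationary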
2. Time’s arrow. Let {Xi }∞
i=−∞ be a stationary stochastic process. Prove that
H(X0 |X−1 , X−2 , . . . , X−n ) = H(X0 |X1 , X2 , . . . , Xn ).
In other words, the present has a conditional entropy given the past equal to the
conditional entropy given the future.
This is true even though it is quite easy to concoct stationary random processes for
which the flow into the future looks quite different from the flow into the past. That is
to say, one can determine the direction of time by looking at a sample function of the
process. Nonetheless, given the present state, the conditional uncertainty of the next
symbol in the future is equal to the conditional uncertainty of the previous symbol in
the past.
Solution: Time’s arrow. By the chain rule for entropy,
H(X0 |X−1 , . . . , X−n ) = H(X0 , X−1 , . . . , X−n ) − H(X−1 , . . . , X−n )
= H(X0 , X1 , X2 , . . . , Xn ) − H(X1 , X2 , . . . , Xn )
= H(X0 |X1 , X2 , . . . , Xn ),
(4.8)
(4.9)
(4.10)
where (4.9) follows from stationarity.
3. Shuffles increase entropy. Argue that for any distribution on shuffles T and any
distribution on card positions X that
    H(TX) ≥ H(TX|T)        (4.11)
    = H(T^{−1} TX|T)        (4.12)
    = H(X|T)        (4.13)
    = H(X),        (4.14)
if X and T are independent.
Solution: Shuffles increase entropy.
    H(TX) ≥ H(TX|T)        (4.15)
    = H(T^{−1} TX|T)        (4.16)
    = H(X|T)        (4.17)
    = H(X).        (4.18)
The inequality follows from the fact that conditioning reduces entropy and the first
equality follows from the fact that given T , we can reverse the shuffle.
4. Second law of thermodynamics. Let X1 , X2 , X3 . . . be a stationary first-order
Markov chain. In Section 4.4, it was shown that H(X n | X1 ) ≥ H(Xn−1 | X1 ) for
n = 2, 3 . . . . Thus conditional uncertainty about the future grows with time. This is
true although the unconditional uncertainty H(X n ) remains constant. However, show
by example that H(Xn |X1 = x1 ) does not necessarily grow with n for every x 1 .
Solution: Second law of thermodynamics.
H(Xn |X1 ) ≤ H(Xn |X1 , X2 )
(Conditioning reduces entropy)
= H(Xn |X2 ) (by Markovity)
= H(Xn−1 |X1 )
(by stationarity)
(4.19)
(4.20)
(4.21)
Alternatively, by an application of the data processing inequality to the Markov chain
X1 → Xn−1 → Xn , we have
I(X1 ; Xn−1 ) ≥ I(X1 ; Xn ).
(4.22)
Expanding the mutual informations in terms of entropies, we have
H(Xn−1 ) − H(Xn−1 |X1 ) ≥ H(Xn ) − H(Xn |X1 ).
(4.23)
By stationarity, H(Xn−1 ) = H(Xn ) and hence we have
H(Xn−1 |X1 ) ≤ H(Xn |X1 ).
(4.24)
5. Entropy of a random tree. Consider the following method of generating a random
tree with n nodes. First expand the root node:
    [figure: the root expanded into two terminal nodes]
Then expand one of the two terminal nodes at random:
    [figure: one of the two terminal nodes expanded, giving a three-leaf tree]
At time k , choose one of the k − 1 terminal nodes according to a uniform distribution
and expand it. Continue until n terminal nodes have been generated. Thus a sequence
leading to a five node tree might look like this:
    [figure: a sequence of expansions leading to a five-node tree]
Surprisingly, the following method of generating random trees yields the same probability distribution on trees with n terminal nodes. First choose an integer N 1 uniformly
distributed on {1, 2, . . . , n − 1} . We then have the picture:
    [figure: root with two subtrees containing N1 and n − N1 terminal nodes]
Then choose an integer N2 uniformly distributed over {1, 2, . . . , N 1 − 1} , and independently choose another integer N3 uniformly over {1, 2, . . . , (n − N1 ) − 1} . The picture
is now:
    [figure: four subtrees containing N2 , N1 − N2 , N3 , and n − N1 − N3 terminal nodes]
Continue the process until no further subdivision can be made. (The equivalence of
these two tree generation schemes follows, for example, from Polya’s urn model.)
Now let Tn denote a random n -node tree generated as described. The probability
distribution on such trees seems difficult to describe, but we can find the entropy of
this distribution in recursive form.
First some examples. For n = 2 , we have only one tree. Thus H(T 2 ) = 0 . For n = 3 ,
we have two equally probable trees:
    [figure: the two equally probable three-node trees]
Thus H(T3 ) = log 2 . For n = 4 , we have five possible trees, with probabilities 1/3,
1/6, 1/6, 1/6, 1/6.
Now for the recurrence relation. Let N 1 (Tn ) denote the number of terminal nodes of
Tn in the right half of the tree. Justify each of the steps in the following:
    H(Tn) (a)= H(N1, Tn)        (4.25)
    (b)= H(N1) + H(Tn|N1)        (4.26)
    (c)= log(n − 1) + H(Tn|N1)        (4.27)
    (d)= log(n − 1) + (1/(n − 1)) Σ_{k=1}^{n−1} [H(Tk) + H(Tn−k)]        (4.28)
    (e)= log(n − 1) + (2/(n − 1)) Σ_{k=1}^{n−1} H(Tk)        (4.29)
    = log(n − 1) + (2/(n − 1)) Σ_{k=1}^{n−1} Hk .        (4.30)
(f) Use this to show that
    (n − 1)Hn = nHn−1 + (n − 1) log(n − 1) − (n − 2) log(n − 2),        (4.31)
or
    Hn/n = Hn−1/(n − 1) + cn ,        (4.32)
for appropriately defined cn . Since Σ cn = c < ∞ , you have proved that (1/n)H(Tn)
converges to a constant. Thus the expected number of bits necessary to describe the
random tree Tn grows linearly with n .
Solution: Entropy of a random tree.
(a) H(Tn , N1 ) = H(Tn ) + H(N1 |Tn ) = H(Tn ) + 0 by the chain rule for entropies and
since N1 is a function of Tn .
(b) H(Tn , N1 ) = H(N1 ) + H(Tn |N1 ) by the chain rule for entropies.
(c) H(N1 ) = log(n − 1) since N1 is uniform on {1, 2, . . . , n − 1} .
(d)
    H(Tn|N1) = Σ_{k=1}^{n−1} P(N1 = k) H(Tn|N1 = k)        (4.33)
             = (1/(n − 1)) Σ_{k=1}^{n−1} H(Tn|N1 = k)        (4.34)
by the definition of conditional entropy. Since conditional on N1 , the left subtree
and the right subtree are chosen independently, H(Tn|N1 = k) = H(Tk, Tn−k|N1 = k) = H(Tk) + H(Tn−k) , so
    H(Tn|N1) = (1/(n − 1)) Σ_{k=1}^{n−1} (H(Tk) + H(Tn−k)).        (4.35)
(e) By a simple change of variables,
    Σ_{k=1}^{n−1} H(Tn−k) = Σ_{k=1}^{n−1} H(Tk).        (4.36)
(f) Hence if we let Hn = H(Tn) ,
    (n − 1)Hn = (n − 1) log(n − 1) + 2 Σ_{k=1}^{n−1} Hk        (4.37)
    (n − 2)Hn−1 = (n − 2) log(n − 2) + 2 Σ_{k=1}^{n−2} Hk .        (4.38)
Subtracting the second equation from the first, we get
    (n − 1)Hn − (n − 2)Hn−1 = (n − 1) log(n − 1) − (n − 2) log(n − 2) + 2Hn−1        (4.40)
or
    Hn/n = Hn−1/(n − 1) + log(n − 1)/n − (n − 2) log(n − 2)/(n(n − 1))        (4.41)
         = Hn−1/(n − 1) + Cn        (4.42)
where
    Cn = log(n − 1)/n − (n − 2) log(n − 2)/(n(n − 1))        (4.43)
       = log(n − 1)/n − log(n − 2)/(n − 1) + 2 log(n − 2)/(n(n − 1)).        (4.44)
Substituting the equation for Hn−1 in the equation for Hn and proceeding recursively,
we obtain a telescoping sum
    Hn/n = Σ_{j=3}^n Cj + H2/2        (4.45)
         = Σ_{j=3}^n 2 log(j − 2)/(j(j − 1)) + (1/n) log(n − 1).        (4.46)
Since lim_{n→∞} (1/n) log(n − 1) = 0 ,
    lim_{n→∞} Hn/n = Σ_{j=3}^∞ 2 log(j − 2)/(j(j − 1))        (4.47)
    ≤ Σ_{j=3}^∞ 2 log(j − 1)/(j − 1)²        (4.48)
    = Σ_{j=2}^∞ 2 log j / j² .        (4.49)
For sufficiently large j , log j ≤ √j and hence the sum in (4.49) is dominated by the
sum Σ_j 2 j^{−3/2} , which converges. Hence the above sum converges. In fact, computer
evaluation shows that
    lim Hn/n = Σ_{j=3}^∞ 2 log(j − 2)/(j(j − 1)) = 1.736 bits.        (4.50)
Thus the number of bits required to describe a random n -node tree grows linearly with
n .
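Both the recursion (4.31) and the series (4.50) are easy to evaluate numerically (a sketch, not part of the original solution):

    from math import log2

    def tree_entropy(N):
        """H(T_n) from (n-1) H_n = n H_{n-1} + (n-1)log(n-1) - (n-2)log(n-2), with H_2 = 0."""
        H = [0.0] * (N + 1)
        for n in range(3, N + 1):
            H[n] = (n * H[n - 1] + (n - 1) * log2(n - 1) - (n - 2) * log2(n - 2)) / (n - 1)
        return H

    H = tree_entropy(100_000)
    for n in (4, 100, 10_000, 100_000):
        print(n, round(H[n] / n, 4))          # H_n / n creeps up toward ~1.736

    series = sum(2 * log2(j - 2) / (j * (j - 1)) for j in range(3, 1_000_000))
    print("series:", round(series, 4))        # ~1.736 bits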
6. Monotonicity of entropy per element. For a stationary stochastic process X 1 , X2 , . . . , Xn ,
show that
(a)
H(X1 , X2 , . . . , Xn )
H(X1 , X2 , . . . , Xn−1 )
≤
.
n
n−1
(4.51)
H(X1 , X2 , . . . , Xn )
≥ H(Xn |Xn−1 , . . . , X1 ).
n
(4.52)
(b)
Solution: Monotonicity of entropy per element.
(a) By the chain rule for entropy,
H(X1 , X2 , . . . , Xn )
n
=
=
=
$n
i=1 H(Xi |X
i−1 )
n
$n−1
H(Xn |X n−1 ) + i=1
H(Xi |X i−1 )
n
H(Xn |X n−1 ) + H(X1 , X2 , . . . , Xn−1 )
.
n
From stationarity it follows that for all 1 ≤ i ≤ n ,
H(Xn |X n−1 ) ≤ H(Xi |X i−1 ),
(4.53)
(4.54)
(4.55)
which further implies, by averaging both sides, that,
H(Xn |X
n−1
$n−1
H(Xi |X i−1 )
n−1
H(X1 , X2 , . . . , Xn−1 )
.
n−1
) ≤
i=1
=
Combining (4.55) and (4.57) yields,
H(X1 , X2 , . . . , Xn )
n
≤
=
(4.56)
(4.57)
5
4
1 H(X1 , X2 , . . . , Xn−1 )
+ H(X1 , X2 , . . . , Xn−1 )
n
n−1
H(X1 , X2 , . . . , Xn−1 )
.
(4.58)
n−1
(b) By stationarity we have for all 1 ≤ i ≤ n ,
which implies that
H(Xn |X n−1 ) ≤ H(Xi |X i−1 ),
$n
i=1 H(Xn |X
H(Xn |X n−1 ) =
n−1 )
n
$n
i−1 )
i=1 H(Xi |X
n
H(X1 , X2 , . . . , Xn )
.
n
≤
=
(4.59)
(4.60)
(4.61)
7. Entropy rates of Markov chains.
(a) Find the entropy rate of the two-state Markov chain with transition matrix
    P = [ 1 − p01    p01
          p10        1 − p10 ] .
(b) What values of p01 , p10 maximize the rate of part (a)?
(c) Find the entropy rate of the two-state Markov chain with transition matrix
    P = [ 1 − p    p
          1        0 ] .
(d) Find the maximum value of the entropy rate of the Markov chain of part (c). We
expect that the maximizing value of p should be less than 1/2 , since the 0 state
permits more information to be generated than the 1 state.
(e) Let N (t) be the number of allowable state sequences of length t for the Markov
chain of part (c). Find N (t) and calculate
1
log N (t) .
t→∞ t
Hint: Find a linear recurrence that expresses N (t) in terms of N (t − 1) and
N (t − 2) . Why is H0 an upper bound on the entropy rate of the Markov chain?
Compare H0 with the maximum entropy found in part (d).
H0 = lim
Solution: Entropy rates of Markov chains.
(a) The stationary distribution is easily calculated. (See EIT pp. 62–63.)
    µ0 = p10/(p01 + p10),    µ1 = p01/(p01 + p10).
Therefore the entropy rate is
    H(X2|X1) = µ0 H(p01) + µ1 H(p10) = [p10 H(p01) + p01 H(p10)] / (p01 + p10).
(b) The entropy rate is at most 1 bit because the process has only two states. This
rate can be achieved if (and only if) p01 = p10 = 1/2 , in which case the process is
actually i.i.d. with Pr(Xi = 0) = Pr(Xi = 1) = 1/2 .
(c) As a special case of the general two-state Markov chain, the entropy rate is
    H(X2|X1) = µ0 H(p) + µ1 H(1) = H(p)/(p + 1).
(d) By straightforward calculus, we find that the maximum value of H(X) of part (c)
occurs for p = (3 − √5)/2 = 0.382 . The maximum value is
    H(p) = H(1 − p) = H((√5 − 1)/2) = 0.694 bits.
Note that (√5 − 1)/2 = 0.618 is (the reciprocal of) the Golden Ratio.
(e) The Markov chain of part (c) forbids consecutive ones. Consider any allowable
sequence of symbols of length t . If the first symbol is 1, then the next symbol
must be 0; the remaining N(t − 2) symbols can form any allowable sequence. If
the first symbol is 0, then the remaining N(t − 1) symbols can be any allowable
sequence. So the number of allowable sequences of length t satisfies the recurrence
    N(t) = N(t − 1) + N(t − 2),    N(1) = 2, N(2) = 3.
(The initial conditions are obtained by observing that for t = 2 only the sequence
11 is not allowed. We could also choose N(0) = 1 as an initial condition, since
there is exactly one allowable sequence of length 0, namely, the empty sequence.)
The sequence N(t) grows exponentially, that is, N(t) ≈ cλ^t , where λ is the
maximum-magnitude solution of the characteristic equation
    1 = z^{−1} + z^{−2} .
Solving the characteristic equation yields λ = (1 + √5)/2 , the Golden Ratio. (The
sequence {N(t)} is the sequence of Fibonacci numbers.) Therefore
    H0 = lim_{t→∞} (1/t) log N(t) = log((1 + √5)/2) = 0.694 bits.
Since there are only N(t) possible outcomes for X1, . . . , Xt , an upper bound on
H(X1, . . . , Xt) is log N(t) , and so the entropy rate of the Markov chain of part (c)
is at most H0 . In fact, we saw in part (d) that this upper bound can be achieved.
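Both the maximization in part (d) and the growth rate in part (e) can be checked numerically (a sketch, not part of the original solution):

    from math import log2, sqrt

    def h(p):                                  # binary entropy
        return -p * log2(p) - (1 - p) * log2(1 - p)

    def rate(p):                               # entropy rate H(p)/(p+1) of the chain in part (c)
        return h(p) / (p + 1)

    best_p = max((i / 100_000 for i in range(1, 100_000)), key=rate)
    print(round(best_p, 4), round(rate(best_p), 4))             # ~0.382, ~0.6942

    # Fibonacci-type count of allowable sequences: N(t) = N(t-1) + N(t-2)
    n_prev, n_cur = 2, 3                                         # N(1), N(2)
    for t in range(3, 41):
        n_prev, n_cur = n_cur, n_prev + n_cur
    print(round(log2(n_cur / n_prev), 4),                        # ratio tends to the Golden Ratio
          round(log2((1 + sqrt(5)) / 2), 4))                     # both ~0.6942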
8. Maximum entropy process. A discrete memoryless source has alphabet {1, 2}
where the symbol 1 has duration 1 and the symbol 2 has duration 2. The probabilities of 1 and 2 are p1 and p2 , respectively. Find the value of p 1 that maximizes
the source entropy per unit time H(X)/El X . What is the maximum value H ?
Solution: Maximum entropy process. The entropy per symbol of the source is
H(p1 ) = −p1 log p1 − (1 − p1 ) log(1 − p1 )
and the average symbol duration (or time per symbol) is
T (p1 ) = 1 · p1 + 2 · p2 = p1 + 2(1 − p1 ) = 2 − p1 = 1 + p2 .
Therefore the source entropy per unit time is
f (p1 ) =
H(p1 )
−p1 log p1 − (1 − p1 ) log(1 − p1 )
=
.
T (p1 )
2 − p1
Since f (0) = f (1) = 0 , the maximum value of f (p 1 ) must occur for some point p1
such that 0 < p1 < 1 and ∂f /∂p1 = 0 and
T (∂H/∂p1 ) − H(∂T /∂p1 )
∂ H(p1 )
=
∂p1 T (p1 )
T2
After some calculus, we find that the numerator of the above expression (assuming
natural logarithms) is
T (∂H/∂p1 ) − H(∂T /∂p1 ) = ln(1 − p1 ) − 2 ln p1 ,
which is zero when 1 − p1 = p1² = p2 , that is, p1 = (√5 − 1)/2 = 0.61803 , the reciprocal
of the golden ratio, (√5 + 1)/2 = 1.61803 . The corresponding entropy per unit time is
    H(p1)/T(p1) = [−p1 log p1 − p1² log p1²]/(2 − p1) = [−(1 + p1²) log p1]/(1 + p1²) = − log p1 = 0.69424 bits.
Note that this result is the same as the maximum entropy rate for the Markov chain
in problem 4.7(d). This is because a source in which every 1 must be followed by a 0
is equivalent to a source in which the symbol 1 has duration 2 and the symbol 0 has
duration 1.
9. Initial conditions. Show, for a Markov chain, that
H(X0 |Xn ) ≥ H(X0 |Xn−1 ).
Thus initial conditions X0 become more difficult to recover as the future X n unfolds.
Solution: Initial conditions. For a Markov chain, by the data processing theorem, we
have
I(X0 ; Xn−1 ) ≥ I(X0 ; Xn ).
(4.62)
Therefore
H(X0 ) − H(X0 |Xn−1 ) ≥ H(X0 ) − H(X0 |Xn )
or H(X0 |Xn ) increases with n .
(4.63)
10. Pairwise independence. Let X1 , X2 , . . . , Xn−1 be i.i.d. random variables taking
$
values in {0, 1} , with Pr{Xi = 1} = 12 . Let Xn = 1 if n−1
i=1 Xi is odd and Xn = 0
otherwise. Let n ≥ 3 .
(a) Show that Xi and Xj are independent, for i %= j , i, j ∈ {1, 2, . . . , n} .
(b) Find H(Xi , Xj ) , for i %= j .
(c) Find H(X1 , X2 , . . . , Xn ) . Is this equal to nH(X1 ) ?
Solution: (Pairwise Independence) X1 , X2 , . . . , Xn−1 are i.i.d. Bernoulli(1/2) random
$k
variables. We will first prove that for any k ≤ n − 1 , the probability that
i=1 Xi is
odd is 1/2 . We will prove this by induction. Clearly this is true for k = 1 . Assume
$
that it is true for k − 1 . Let Sk = ki=1 Xi . Then
P (Sk odd) = P (Sk−1 odd)P (Xk = 0) + P (Sk−1 even)P (Xk = 1)
(4.64)
11 11
+
(4.65)
=
22 22
1
.
(4.66)
=
2
Hence for all k ≤ n − 1 , the probability that S k is odd is equal to the probability that
it is even. Hence,
1
(4.67)
P (Xn = 1) = P (Xn = 0) = .
2
(a) It is clear that when i and j are both less than n , X i and Xj are independent.
The only possible problem is when j = n . Taking i = 1 without loss of generality,
P (X1 = 1, Xn = 1) = P (X1 = 1,
n−1
!
Xi
i=2
n−1
!
= P (X1 = 1)P (
even)
Xi even)
(4.68)
(4.69)
i=2
11
22
= P (X1 = 1)P (Xn = 1)
=
(4.70)
(4.71)
and similarly for other possible values of the pair (X 1 , Xn ) . Hence X1 and Xn
are independent.
(b) Since Xi and Xj are independent and uniformly distributed on {0, 1} ,
H(Xi , Xj ) = H(Xi ) + H(Xj ) = 1 + 1 = 2 bits.
(4.72)
(c) By the chain rule and the independence of X 1 , X2 , . . . , Xn1 , we have
H(X1 , X2 , . . . , Xn ) = H(X1 , X2 , . . . , Xn−1 ) + H(Xn |Xn−1 , . . . , X1 )(4.73)
=
n−1
!
H(Xi ) + 0
(4.74)
i=1
= n − 1,
(4.75)
since Xn is a function of the previous Xi ’s. The total entropy is not n , which is
what would be obtained if the Xi ’s were all independent. This example illustrates
that pairwise independence does not imply complete independence.
11. Stationary processes. Let . . . , X−1 , X0 , X1 , . . . be a stationary (not necessarily
Markov) stochastic process. Which of the following statements are true? Prove or
provide a counterexample.
(a) H(Xn |X0 ) = H(X−n |X0 ) .
(b) H(Xn |X0 ) ≥ H(Xn−1 |X0 ) .
(c) H(Xn |X1 , X2 , . . . , Xn−1 , Xn+1 ) is nonincreasing in n .
(d) H(Xn |X1 , . . . , Xn−1 , Xn+1 , . . . , X2n ) is non-increasing in n .
Solution: Stationary processes.
(a) H(Xn |X0 ) = H(X−n |X0 ) .
This statement is true, since
H(Xn |X0 ) = H(Xn , X0 ) − H(X0 )
H(X−n |X0 ) = H(X−n , X0 ) − H(X0 )
(4.76)
(4.77)
and H(Xn , X0 ) = H(X−n , X0 ) by stationarity.
(b) H(Xn |X0 ) ≥ H(Xn−1 |X0 ) .
This statement is not true in general, though it is true for first order Markov chains.
A simple counterexample is a periodic process with period n . Let X 0 , X1 , X2 , . . . , Xn−1
be i.i.d. uniformly distributed binary random variables and let X k = Xk−n for
k ≥ n . In this case, H(Xn |X0 ) = 0 and H(Xn−1 |X0 ) = 1 , contradicting the
statement H(Xn |X0 ) ≥ H(Xn−1 |X0 ) .
(c) H(Xn |X1n−1 , Xn+1 ) is non-increasing in n .
This statement is true, since by stationarity H(X n |X1n−1 , Xn+1 ) = H(Xn+1 |X2n , Xn+2 ) ≥
H(Xn+1 |X1n , Xn+2 ) where the inequality follows from the fact that conditioning
reduces entropy.
12. The entropy rate of a dog looking for a bone. A dog walks on the integers,
possibly reversing direction at each step with probability p = .1. Let X 0 = 0 . The
first step is equally likely to be positive or negative. A typical walk might look like this:
(X0 , X1 , . . .) = (0, −1, −2, −3, −4, −3, −2, −1, 0, 1, . . .).
(a) Find H(X1 , X2 , . . . , Xn ).
(b) Find the entropy rate of this browsing dog.
(c) What is the expected number of steps the dog takes before reversing direction?
Solution: The entropy rate of a dog looking for a bone.
(a) By the chain rule,
H(X0 , X1 , . . . , Xn ) =
n
!
i=0
H(Xi |X i−1 )
= H(X0 ) + H(X1 |X0 ) +
n
!
i=2
H(Xi |Xi−1 , Xi−2 ),
since, for i > 1 , the next position depends only on the previous two (i.e., the
dog’s walk is 2nd order Markov, if the dog’s position is the state). Since X 0 = 0
deterministically, H(X0 ) = 0 and since the first step is equally likely to be positive
or negative, H(X1 |X0 ) = 1 . Furthermore for i > 1 ,
H(Xi |Xi−1 , Xi−2 ) = H(.1, .9).
Therefore,
H(X0 , X1 , . . . , Xn ) = 1 + (n − 1)H(.1, .9).
(b) From (a),
    H(X0, X1, . . . , Xn)/(n + 1) = [1 + (n − 1)H(.1, .9)]/(n + 1) → H(.1, .9).
(c) The dog must take at least one step to establish the direction of travel from which
it ultimately reverses. Letting S be the number of steps taken between reversals,
we have
    E(S) = Σ_{s=1}^∞ s (0.9)^{s−1} (0.1) = 10.
Starting at time 0, the expected number of steps to the first reversal is 11.
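A quick check of parts (b) and (c) (a sketch, not part of the original solution):

    import random
    from math import log2

    def h(p):
        return -p * log2(p) - (1 - p) * log2(1 - p)

    rng = random.Random(5)
    def steps_between_reversals():
        """Steps taken in one direction before a reversal (geometric with mean 1/0.1)."""
        s = 1
        while rng.random() >= 0.1:        # with probability 0.9 the dog keeps going
            s += 1
        return s

    trials = 100_000
    print(round(sum(steps_between_reversals() for _ in range(trials)) / trials, 2))  # ~10
    print(round(h(0.1), 4))               # entropy rate ~0.469 bits per step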
13. The past has little to say about the future. For a stationary stochastic process
X1 , X2 , . . . , Xn , . . . , show that
lim
n→∞
1
I(X1 , X2 , . . . , Xn ; Xn+1 , Xn+2 , . . . , X2n ) = 0.
2n
(4.78)
Thus the dependence between adjacent n -blocks of a stationary process does not grow
linearly with n .
Solution:
I(X1 , X2 , . . . , Xn ; Xn+1 , Xn+2 , . . . , X2n )
= H(X1 , X2 , . . . , Xn ) + H(Xn+1 , Xn+2 , . . . , X2n ) − H(X1 , X2 , . . . , Xn , Xn+1 , Xn+2 , . . . , X2n )
= 2H(X1 , X2 , . . . , Xn ) − H(X1 , X2 , . . . , Xn , Xn+1 , Xn+2 , . . . , X2n )
(4.79)
since H(X1 , X2 , . . . , Xn ) = H(Xn+1 , Xn+2 , . . . , X2n ) by stationarity.
Thus
1
I(X1 , X2 , . . . , Xn ; Xn+1 , Xn+2 , . . . , X2n )
n→∞ 2n
1
1
= lim
2H(X1 , X2 , . . . , Xn ) − lim
H(X1 , X2 , . . . , Xn , Xn+1 , Xn+2 , . . . ,(4.80)
X2n )
n→∞ 2n
n→∞ 2n
1
1
= lim H(X1 , X2 , . . . , Xn ) − lim
H(X1 , X2 , . . . , Xn , Xn+1 , Xn+2 , . . . , X(4.81)
2n )
n→∞ n
n→∞ 2n
lim
1
Now limn→∞ n1 H(X1 , X2 , . . . , Xn ) = limn→∞ 2n
H(X1 , X2 , . . . , Xn , Xn+1 , Xn+2 , . . . , X2n )
since both converge to the entropy rate of the process, and therefore
lim
n→∞
1
I(X1 , X2 , . . . , Xn ; Xn+1 , Xn+2 , . . . , X2n ) = 0.
2n
(4.82)
14. Functions of a stochastic process.
(a) Consider a stationary stochastic process X 1 , X2 , . . . , Xn , and let Y1 , Y2 , . . . , Yn be
defined by
Yi = φ(Xi ), i = 1, 2, . . .
(4.83)
for some function φ . Prove that
H(Y) ≤ H(X )
(4.84)
(b) What is the relationship between the entropy rates H(Z) and H(X ) if
Zi = ψ(Xi , Xi+1 ),
i = 1, 2, . . .
(4.85)
for some function ψ .
Solution: The key point is that functions of a random variable have lower entropy.
Since (Y1 , Y2 , . . . , Yn ) is a function of (X1 , X2 , . . . , Xn ) (each Yi is a function of the
corresponding Xi ), we have (from Problem 2.4)
H(Y1 , Y2 , . . . , Yn ) ≤ H(X1 , X2 , . . . , Xn )
(4.86)
Dividing by n , and taking the limit as n → ∞ , we have
lim
n→∞
H(X1 , X2 , . . . , Xn )
H(Y1 , Y2 , . . . , Yn )
≤ lim
n→∞
n
n
(4.87)
or
H(Y) ≤ H(X )
(4.88)
15. Entropy rate. Let {Xi } be a discrete stationary stochastic process with entropy rate
H(X ). Show
1
H(Xn , . . . , X1 | X0 , X−1 , . . . , X−k ) → H(X ),
(4.89)
n
for k = 1, 2, . . . .
Solution: Entropy rate of a stationary process. By the Cesáro mean theorem, the
running average of the terms tends to the same limit as the limit of the terms. Hence
1
H(X1 , X2 , . . . , Xn |X0 , X−1 , . . . , X−k )
n
=
n
1!
H(Xi |Xi−1 , Xi−2 , . . . , X−k(4.90)
)
n i=1
→ lim H(Xn |Xn−1 , Xn−2 , . . . , X−k(4.91)
)
=
(4.92)
H,
the entropy rate of the process.
16. Entropy rate of constrained sequences. In magnetic recording, the mechanism of
recording and reading the bits imposes constraints on the sequences of bits that can be
recorded. For example, to ensure proper sychronization, it is often necessary to limit
the length of runs of 0’s between two 1’s. Also to reduce intersymbol interference, it
may be necessary to require at least one 0 between any two 1’s. We will consider a
simple example of such a constraint.
Suppose that we are required to have at least one 0 and at most two 0’s between any
pair of 1’s in a sequences. Thus, sequences like 101001 and 0101001 are valid sequences,
but 0110010 and 0000101 are not. We wish to calculate the number of valid sequences
of length n .
(a) Show that the set of constrained sequences is the same as the set of allowed paths
on the following state diagram:
(b) Let Xi (n) be the number of valid paths of length n ending at state i . Argue that
X(n) = [X1(n) X2(n) X3(n)]^t satisfies the following recursion:

    [ X1(n) ]   [ 0 1 1 ] [ X1(n−1) ]
    [ X2(n) ] = [ 1 0 0 ] [ X2(n−1) ] ,        (4.93)
    [ X3(n) ]   [ 0 1 0 ] [ X3(n−1) ]
with initial conditions X(1) = [1 1 0]t .
(c) Let

    A = [ 0 1 1 ]
        [ 1 0 0 ] .        (4.94)
        [ 0 1 0 ]

Then we have by induction

    X(n) = AX(n − 1) = A²X(n − 2) = · · · = A^{n−1} X(1).        (4.95)
Figure 4.1: Entropy rate of constrained sequence
Using the eigenvalue decomposition of A for the case of distinct eigenvalues, we
can write A = U −1 ΛU , where Λ is the diagonal matrix of eigenvalues. Then
An−1 = U −1 Λn−1 U . Show that we can write
    X(n) = λ1^{n−1} Y1 + λ2^{n−1} Y2 + λ3^{n−1} Y3,        (4.96)

where Y1, Y2, Y3 do not depend on n . For large n , this sum is dominated by
the largest term. Therefore argue that for i = 1, 2, 3 , we have

    (1/n) log Xi(n) → log λ,        (4.97)
where λ is the largest (positive) eigenvalue. Thus the number of sequences of
length n grows as λn for large n . Calculate λ for the matrix A above. (The
case when the eigenvalues are not distinct can be handled in a similar manner.)
(d) We will now take a different approach. Consider a Markov chain whose state
diagram is the one given in part (a), but with arbitrary transition probabilities.
Therefore the probability transition matrix of this Markov chain is

    P = [ 0   1   0     ]
        [ α   0   1 − α ] .        (4.98)
        [ 1   0   0     ]

Show that the stationary distribution of this Markov chain is

    μ = [ 1/(3 − α),  1/(3 − α),  (1 − α)/(3 − α) ] .        (4.99)
(e) Maximize the entropy rate of the Markov chain over choices of α . What is the
maximum entropy rate of the chain?
(f) Compare the maximum entropy rate in part (e) with log λ in part (c). Why are
the two answers the same?
Solution:
Entropy rate of constrained sequences.
(a) The sequences are constrained to have at least one 0 and at most two 0’s between
two 1’s. Let the state of the system be the number of 0’s that has been seen since
the last 1. Then a sequence that ends in a 1 is in state 1, a sequence that ends in
10 is in state 2, and a sequence that ends in 100 is in state 3. From state 1, it is
only possible to go to state 2, since there has to be at least one 0 before the next
1. From state 2, we can go to either state 1 or state 3. From state 3, we have to
go to state 1, since there cannot be more than two 0’s in a row. Thus we obtain the
state diagram in the problem.
(b) Any valid sequence of length n that ends in a 1 must be formed by taking a valid
sequence of length n − 1 that ends in a 0 and adding a 1 at the end. The number
of valid sequences of length n − 1 that end in a 0 is equal to X 2 (n − 1) + X3 (n − 1)
and therefore,
X1 (n) = X2 (n − 1) + X3 (n − 1).
(4.100)
By similar arguments, we get the other two equations, and we have

    [ X1(n) ]   [ 0 1 1 ] [ X1(n−1) ]
    [ X2(n) ] = [ 1 0 0 ] [ X2(n−1) ] .        (4.101)
    [ X3(n) ]   [ 0 1 0 ] [ X3(n−1) ]
The initial conditions are obvious, since both sequences of length 1 are valid and
therefore X(1) = [1 1 0]T .
(c) The induction step is obvious. Now using the eigenvalue decomposition of A =
U^{−1}ΛU , it follows that A² = U^{−1}ΛU U^{−1}ΛU = U^{−1}Λ²U , etc., and therefore

    X(n) = A^{n−1} X(1) = U^{−1} Λ^{n−1} U X(1)        (4.102)
         = U^{−1} diag(λ1^{n−1}, λ2^{n−1}, λ3^{n−1}) U [1 1 0]^T        (4.103)
         = λ1^{n−1} U^{−1} diag(1, 0, 0) U [1 1 0]^T + λ2^{n−1} U^{−1} diag(0, 1, 0) U [1 1 0]^T
           + λ3^{n−1} U^{−1} diag(0, 0, 1) U [1 1 0]^T        (4.104)
         = λ1^{n−1} Y1 + λ2^{n−1} Y2 + λ3^{n−1} Y3,        (4.105)
where Y1 , Y2 , Y3 do not depend on n . Without loss of generality, we can assume
that λ1 > λ2 > λ3 . Thus
    X1(n) = λ1^{n−1} Y11 + λ2^{n−1} Y21 + λ3^{n−1} Y31        (4.106)
    X2(n) = λ1^{n−1} Y12 + λ2^{n−1} Y22 + λ3^{n−1} Y32        (4.107)
    X3(n) = λ1^{n−1} Y13 + λ2^{n−1} Y23 + λ3^{n−1} Y33        (4.108)

For large n , this sum is dominated by the largest term. Thus if Y1i > 0 , we have

    (1/n) log Xi(n) → log λ1.        (4.109)

To be rigorous, we must also show that Y1i > 0 for i = 1, 2, 3 . It is not difficult
to prove that if one of the Y1i is positive, then the other two terms must also be
positive, and therefore either

    (1/n) log Xi(n) → log λ1        (4.110)
for all i = 1, 2, 3 or they all tend to some other value.
The general argument is difficult since it is possible that the initial conditions of
the recursion do not have a component along the eigenvector that corresponds to
the maximum eigenvalue and thus Y1i = 0 and the above argument will fail. In
our example, we can simply compute the various quantities, and thus

    A = [ 0 1 1 ]
        [ 1 0 0 ] = U^{−1}ΛU,        (4.111)
        [ 0 1 0 ]

where

    Λ = diag( 1.3247,  −0.6624 + 0.5623i,  −0.6624 − 0.5623i ),        (4.112)

and

    U = [ −0.5664             −0.7503             −0.4276           ]
        [  0.6508 − 0.0867i   −0.3823 + 0.4234i   −0.6536 − 0.4087i ] ,        (4.113)
        [  0.6508 + 0.0867i   −0.3823 − 0.4234i   −0.6536 + 0.4087i ]

and therefore

    Y1 = [ 0.9566 ]
         [ 0.7221 ] ,        (4.114)
         [ 0.5451 ]

which has all positive components. Therefore,

    (1/n) log Xi(n) → log λ1 = log 1.3247 = 0.4057 bits.        (4.115)
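A quick numerical check of these quantities is straightforward. The following short Python sketch (not part of the original solution; it assumes NumPy is available) computes the eigenvalues of A and the growth exponent log2(λ1):

# Sketch (verification aid): eigenvalues of A and the growth rate log2(lambda_1)
# for the constrained-sequence recursion X(n) = A X(n-1).
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

eigvals = np.linalg.eigvals(A)
lam1 = max(eigvals, key=abs).real              # largest (positive) eigenvalue
print("eigenvalues:", np.round(eigvals, 4))    # ~ 1.3247, -0.6624 +/- 0.5623i
print("log2(lambda1) =", np.log2(lam1))        # ~ 0.4057 bits

# Direct check: X(n) = A^{n-1} X(1) grows like lambda1^n.
X = np.array([1.0, 1.0, 0.0])
for _ in range(99):                            # X becomes X(100)
    X = A @ X
print("(1/100) log2 X1(100) =", np.log2(X[0]) / 100)   # close to 0.4057

The dominant eigenvalue 1.3247 is the real root of λ³ = λ + 1, the characteristic equation of A.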
(d) To verify that

    μ = [ 1/(3 − α),  1/(3 − α),  (1 − α)/(3 − α) ]        (4.116)

is the stationary distribution, we have to verify that μP = μ . But this is straightforward.
(e) The entropy rate of the Markov chain (in nats) is

    H = − Σ_i μi Σ_j Pij ln Pij = (1/(3 − α)) (−α ln α − (1 − α) ln(1 − α)),        (4.117)

and differentiating with respect to α to find the maximum, we find that

    dH/dα = (1/(3 − α)²) (−α ln α − (1 − α) ln(1 − α)) + (1/(3 − α)) (−1 − ln α + 1 + ln(1 − α)) = 0,        (4.118)

or

    (3 − α) (ln α − ln(1 − α)) = −α ln α − (1 − α) ln(1 − α),        (4.119)

which reduces to

    3 ln α = 2 ln(1 − α),        (4.120)

i.e.,

    α³ = α² − 2α + 1,        (4.121)

which can be solved (numerically) to give α = 0.5698 and the maximum entropy
rate as 0.2812 nats = 0.4057 bits.
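The critical point and the maximum entropy rate can also be checked numerically. The sketch below (a verification aid, not part of the original solution; plain Python with NumPy) solves (4.121) by bisection:

# Sketch (verification aid): solve alpha^3 = alpha^2 - 2*alpha + 1 by bisection
# and evaluate the entropy rate of the chain at the maximizing alpha.
import numpy as np

def g(a):
    return a**3 - (a**2 - 2*a + 1)             # g(alpha) = 0 at the maximizer

lo, hi = 0.0, 1.0                              # g(0) = -1 < 0, g(1) = 1 > 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
alpha = 0.5 * (lo + hi)

H_nats = (-alpha * np.log(alpha) - (1 - alpha) * np.log(1 - alpha)) / (3 - alpha)
print("alpha =", alpha)                                       # ~ 0.5698
print("H =", H_nats, "nats =", H_nats / np.log(2), "bits")    # 0.2812 nats = 0.4057 bits

The maximum of 0.4057 bits equals log2(1.3247) from part (c), which is the point of part (f).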
(f) The answers in parts (c) and (e) are the same. Why? A rigorous argument is
quite involved, but the essential idea is that both answers give the asymptotics of
the number of sequences of length n for the state diagram in part (a). In part (c)
we used a direct argument to calculate the number of sequences of length n and
found that asymptotically, X(n) ≈ λ1^n .
If we extend the ideas of Chapter 3 (typical sequences) to the case of Markov
chains, we can see that there are approximately 2^{nH} typical sequences of length
n for a Markov chain of entropy rate H . If we consider all Markov chains with
state diagram given in part (a), the number of typical sequences should be less
than the total number of sequences of length n that satisfy the state constraints.
Thus, we see that 2^{nH} ≤ λ1^n or H ≤ log λ1 .
To complete the argument, we need to show that there exists a Markov transition
matrix that achieves the upper bound. This can be done by two different methods.
One is to derive the Markov transition matrix from the eigenvalues, etc. of parts
(a)–(c). Instead, we will use an argument from the method of types. In Chapter 12,
we show that there are at most a polynomial number of types, and that therefore,
the largest type class has the same number of sequences (to the first order in
the exponent) as the entire set. The same arguments can be applied to Markov
types. There are only a polynomial number of Markov types and therefore of all
the Markov type classes that satisfy the state constraints of part (a), at least one
of them has the same exponent as the total number of sequences that satisfy the
state constraint. For this Markov type, the number of sequences in the type class
is 2^{nH} , and therefore for this type class, H = log λ1 .
This result is a very curious one that connects two apparently unrelated objects: the maximum eigenvalue of a state transition matrix, and the maximum entropy
rate for a probability transition matrix with the same state diagram. We don’t
know a reference for a formal proof of this result.
17. Waiting times are insensitive to distributions. Let X 0 , X1 , X2 , . . . be drawn i.i.d.
∼ p(x), x ∈ X = {1, 2, . . . , m} and let N be the waiting time to the next occurrence
of X0 , where N = minn {Xn = X0 } .
(a) Show that EN = m .
(b) Show that E log N ≤ H(X) .
(c) (Optional) Prove part (a) for {Xi } stationary and ergodic.
Solution: Waiting times are insensitive to distributions. Since X 0 , X1 , X2 , . . . , Xn are
drawn i.i.d. ∼ p(x) , the waiting time for the next occurrence of X 0 has a geometric
distribution with probability of success p(x 0 ) .
(a) Given X0 = i , the expected time until we see it again is 1/p(i) . Therefore,
    EN = E[E(N |X0)] = Σ_i p(X0 = i) (1/p(i)) = m.        (4.122)
(b) By the same argument, given X0 = i , N has a geometric distribution with
mean 1/p(i) , so that

    E(N |X0 = i) = 1/p(i).        (4.123)

Then using Jensen’s inequality, we have

    E log N = Σ_i p(X0 = i) E(log N |X0 = i)        (4.124)
            ≤ Σ_i p(X0 = i) log E(N |X0 = i)        (4.125)
            = Σ_i p(i) log (1/p(i))        (4.126)
            = H(X).        (4.127)
(c) The property that EN = m is essentially a combinatorial property rather than
a statement about expectations. We prove this for stationary ergodic sources. In
essence, we will calculate the empirical average of the waiting time, and show that
this converges to m . Since the process is ergodic, the empirical average converges
to the expected value, and thus the expected value must be m .
Let X1 = a , and define a sequence of random variables N 1 , N2 , . . . , where N1 = recurrence
time for X1 , etc. It is clear that the N process is also stationary and ergodic.
Let Ia(Xi) be the indicator that Xi = a and Ja(Xi) be the indicator that Xi ≠ a .
Then for all i , all a ∈ X , Ia(Xi) + Ja(Xi) = 1 .
Let N1(a), N2(a), . . . be the recurrence times of the symbol a in the sequence.
Thus X1 = a , Xi ≠ a for 1 < i < N1(a) , and XN1(a) = a , etc. Thus the sum of
Ja(Xi) over all i is equal to Σ_j (Nj(a) − 1) . Or equivalently

    Σ_j Nj(a) = Σ_i Ja(Xi) + Σ_i Ia(Xi) = n        (4.128)

Summing this over all a ∈ X , we obtain

    Σ_a Σ_j Nj(a) = nm        (4.129)
There are n terms in this sum, and therefore the empirical mean of N j (Xi ) is m .
Thus the empirical average of N over any sample sequence is m and thus the
expected value of N must also be m .
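For parts (a) and (b) a quick simulation is also convincing. The sketch below (not part of the original solution; the pmf p is an arbitrary example) estimates EN and E log N for an i.i.d. source:

# Sketch (verification aid): Monte Carlo check that EN = m and E[log N] <= H(X)
# for an i.i.d. source; the pmf below is an arbitrary example.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.15, 0.1])           # pmf on an alphabet of size m = 4
m = len(p)
H = -np.sum(p * np.log2(p))

trials = 50_000
N = np.empty(trials)
for t in range(trials):
    x0 = rng.choice(m, p=p)
    n = 1
    while rng.choice(m, p=p) != x0:            # wait for the next occurrence of X0
        n += 1
    N[t] = n

print("estimated E[N]      =", N.mean(), " (should be m =", m, ")")
print("estimated E[log2 N] =", np.log2(N).mean(), " <= H(X) =", H)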
18. Stationary but not ergodic process. A bin has two biased coins, one with probability of heads p and the other with probability of heads 1 − p . One of these coins
is chosen at random (i.e., with probability 1/2), and is then tossed n times. Let X
denote the identity of the coin that is picked, and let Y 1 and Y2 denote the results of
the first two tosses.
(a) Calculate I(Y1 ; Y2 |X) .
(b) Calculate I(X; Y1 , Y2 ) .
(c) Let H(Y) be the entropy rate of the Y process (the sequence of coin tosses).
Calculate H(Y) . (Hint: Relate this to lim_{n→∞} (1/n) H(X, Y1, Y2, . . . , Yn) .)
You can check the answer by considering the behavior as p → 1/2 .
Solution:
(a) Since the coin tosses are independent conditional on the coin chosen, I(Y1 ; Y2 |X) =
0.
(b) The key point is that if we did not know the coin being used, then Y 1 and Y2
are not independent. The joint distribution of Y 1 and Y2 can be easily calculated
from the following table
    X    Y1    Y2    Probability
    1    H     H     p²
    1    H     T     p(1 − p)
    1    T     H     p(1 − p)
    1    T     T     (1 − p)²
    2    H     H     (1 − p)²
    2    H     T     p(1 − p)
    2    T     H     p(1 − p)
    2    T     T     p²
Thus the joint distribution of (Y1, Y2) is ( (1/2)(p² + (1 − p)²), p(1 − p), p(1 − p),
(1/2)(p² + (1 − p)²) ) , and we can now calculate

    I(X; Y1, Y2) = H(Y1, Y2) − H(Y1, Y2 |X)        (4.130)
                 = H(Y1, Y2) − H(Y1 |X) − H(Y2 |X)        (4.131)
                 = H(Y1, Y2) − 2H(p)        (4.132)
                 = H( (1/2)(p² + (1 − p)²), p(1 − p), p(1 − p), (1/2)(p² + (1 − p)²) ) − 2H(p)
                 = H(2p(1 − p)) + 1 − 2H(p),        (4.133)

where the last step follows from the grouping rule for entropy: the pairs {HH, TT} and
{HT, TH} have total probabilities 1 − 2p(1 − p) and 2p(1 − p) respectively, and within
each pair the two outcomes are equally likely.
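The closed form can be checked directly against the joint distribution; the following sketch is a verification aid (p = 0.8 is an arbitrary choice):

# Sketch (verification aid): compare the direct computation of I(X; Y1, Y2) with
# the closed form 1 + H(2p(1-p)) - 2H(p); both vanish at p = 1/2.
import numpy as np

def h(q):                                      # binary entropy in bits
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p = 0.8
joint = np.array([0.5 * (p**2 + (1 - p)**2),   # P(Y1=H, Y2=H)
                  p * (1 - p),                 # P(Y1=H, Y2=T)
                  p * (1 - p),                 # P(Y1=T, Y2=H)
                  0.5 * (p**2 + (1 - p)**2)])  # P(Y1=T, Y2=T)
H_Y1Y2 = -np.sum(joint * np.log2(joint))
I_direct = H_Y1Y2 - 2 * h(p)                   # H(Y1,Y2) - H(Y1,Y2 | X)
I_formula = 1 + h(2 * p * (1 - p)) - 2 * h(p)
print(I_direct, I_formula)                     # the two values agree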
(c)

    H(Y) = lim (1/n) H(Y1, Y2, . . . , Yn)        (4.134)
         = lim (1/n) [ H(X, Y1, Y2, . . . , Yn) − H(X |Y1, Y2, . . . , Yn) ]        (4.135)
         = lim (1/n) [ H(X) + H(Y1, Y2, . . . , Yn |X) − H(X |Y1, Y2, . . . , Yn) ]        (4.136)

Since 0 ≤ H(X |Y1, Y2, . . . , Yn) ≤ H(X) ≤ 1 , we have lim (1/n) H(X) = 0 and
similarly lim (1/n) H(X |Y1, Y2, . . . , Yn) = 0 . Also, H(Y1, Y2, . . . , Yn |X) = nH(p) ,
since the Yi ’s are i.i.d. given X . Combining these terms, we get

    H(Y) = lim nH(p)/n = H(p).        (4.137)
19. Random walk on graph. Consider a random walk on the graph
[Figure: the graph for this problem — five nodes labelled 1 through 5; from the solution below, nodes 1–4 each have three edges and node 5 has four edges.]
(a) Calculate the stationary distribution.
(b) What is the entropy rate?
(c) Find the mutual information I(Xn+1 ; Xn ) assuming the process is stationary.
Solution:
(a) The stationary distribution for a connected graph of undirected edges with equal
weight is given as μi = Ei/2E , where Ei denotes the number of edges emanating
from node i and E is the total number of edges in the graph. Hence, the stationary
distribution is [ 3/16, 3/16, 3/16, 3/16, 4/16 ] ; i.e., the first four (exterior) nodes
each have steady state probability 3/16 , while node 5 has steady state probability 1/4 .
(b) Thus, the entropy rate of the random walk on this graph is

    4 (3/16) log2 3 + (4/16) log2 4 = (3/4) log2 3 + 1/2 = log 16 − H(3/16, 3/16, 3/16, 3/16, 1/4).

(c) The mutual information is

    I(Xn+1 ; Xn) = H(Xn+1) − H(Xn+1 |Xn)        (4.138)
                 = H(3/16, 3/16, 3/16, 3/16, 1/4) − (log 16 − H(3/16, 3/16, 3/16, 3/16, 1/4))        (4.139)
                 = 2H(3/16, 3/16, 3/16, 3/16, 1/4) − log 16        (4.140)
                 = 2 ( (3/4) log (16/3) + (1/4) log 4 ) − log 16        (4.141)
                 = 3 − (3/2) log 3.        (4.142)
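These quantities depend only on the node degrees. The sketch below (not part of the original solution) recomputes them from an edge list; the particular edge list is an assumption consistent with the degrees (3, 3, 3, 3, 4) used above:

# Sketch (verification aid): stationary distribution, entropy rate and
# I(X_{n+1}; X_n) for a random walk on an undirected graph.
import numpy as np

edges = [(1, 2), (2, 3), (3, 4), (4, 1),   # assumed edge list: a 4-cycle on nodes 1-4
         (1, 5), (2, 5), (3, 5), (4, 5)]   # plus node 5 joined to each of them
deg = np.zeros(5)
for u, v in edges:
    deg[u - 1] += 1
    deg[v - 1] += 1

mu = deg / deg.sum()                        # E_i / 2E = (3,3,3,3,4)/16
H_rate = np.sum(mu * np.log2(deg))          # = sum_i mu_i log E_i
H_mu = -np.sum(mu * np.log2(mu))            # H(3/16, 3/16, 3/16, 3/16, 1/4)
print("mu   =", mu)
print("H(X) =", H_rate, "=", np.log2(deg.sum()) - H_mu)          # log 2E - H(mu)
print("I    =", 2 * H_mu - np.log2(deg.sum()), "=", 3 - 1.5 * np.log2(3))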
20. Random walk on chessboard. Find the entropy rate of the Markov chain associated
with a random walk of a king on the 3 × 3 chessboard
    1  2  3
    4  5  6
    7  8  9
What about the entropy rate of rooks, bishops and queens? There are two types of
bishops.
Solution:
Random walk on the chessboard.
Notice that the king cannot remain where it is. It has to move from one state to the
next. The stationary distribution is given by μi = Ei /E , where Ei = number of edges
emanating from node i and E = Σ_{i=1}^9 Ei . By inspection, E1 = E3 = E7 = E9 = 3 ,
E2 = E4 = E6 = E8 = 5 , E5 = 8 and E = 40 , so µ1 = µ3 = µ7 = µ9 = 3/40 ,
µ2 = µ4 = µ6 = µ8 = 5/40 and µ5 = 8/40 . In a random walk the next state is
chosen with equal probability among possible choices, so H(X 2 |X1 = i) = log 3 bits
for i = 1, 3, 7, 9 , H(X2 |X1 = i) = log 5 for i = 2, 4, 6, 8 and H(X2 |X1 = i) = log 8
bits for i = 5 . Therefore, we can calculate the entropy rate of the king as
    H = Σ_{i=1}^9 μi H(X2 |X1 = i)        (4.143)
      = 0.3 log 3 + 0.5 log 5 + 0.2 log 8        (4.144)
      = 2.24 bits.        (4.145)
21. Maximal entropy graphs. Consider a random walk on a connected graph with 4
edges.
(a) Which graph has the highest entropy rate?
(b) Which graph has the lowest?
Solution: Graph entropy.
There are five graphs with four edges.
The corresponding entropy rates are 1/2 + (3/8) log 3 ≈ 1.094 , 1 , 0.75 , 1 and 1/4 + (3/8) log 3 ≈ 0.844 .
(a) From the above we see that the first graph maximizes the entropy rate, with an
entropy rate of 1.094 .
(b) From the above we see that the third graph minimizes the entropy rate, with an
entropy rate of 0.75 .
22. 3-D Maze.
A bird is lost in a 3 × 3 × 3 cubical maze. The bird flies from room to room going to
adjoining rooms with equal probability through each of the walls. To be specific, the
corner rooms have 3 exits.
(a) What is the stationary distribution?
(b) What is the entropy rate of this random walk?
Solution: 3D Maze.
The entropy rate of a random walk on a graph with equal weights is given by equation
4.41 in the text:
    H(X) = log(2E) − H( E1/2E, . . . , Em/2E ).

There are 8 corners, 12 edges, 6 faces and 1 center. Corners have 3 edges, edges have
4 edges, faces have 5 edges and centers have 6 edges. Therefore, the total number of
edges is E = 54 , and (for part (a)) the stationary distribution is μi = Ei /108 , i.e.,
3/108 for a corner room, 4/108 for an edge room, 5/108 for a face room and 6/108 for
the center room. So,

    H(X) = log(108) + 8 (3/108) log(3/108) + 12 (4/108) log(4/108) + 6 (5/108) log(5/108) + 1 · (6/108) log(6/108)
         = 2.03 bits
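The same number can be obtained by letting a short program count the exits of each room (a verification sketch, assuming NumPy is available):

# Sketch (verification aid): count the exits of every room of the 3x3x3 maze and
# evaluate H(X) = log(2E) - H(E_1/2E, ..., E_m/2E).
import itertools
import numpy as np

rooms = list(itertools.product(range(3), repeat=3))

def exits(r):
    # a wall leads to another room only if the neighbouring index stays in {0,1,2}
    return sum((r[d] + s) in range(3) for d in range(3) for s in (-1, +1))

deg = np.array([exits(r) for r in rooms], dtype=float)
twoE = deg.sum()                                    # = 108, so E = 54
mu = deg / twoE                                     # stationary distribution
H = np.log2(twoE) + np.sum(mu * np.log2(mu))        # log(2E) - H(mu)
print("2E =", twoE, "  H(X) =", H, "bits")          # ~ 2.03 bits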
23. Entropy rate
Let {Xi } be a stationary stochastic process with entropy rate H(X ) .
(a) Argue that H(X ) ≤ H(X1 ) .
(b) What are the conditions for equality?
Solution: Entropy Rate
(a) From Theorem 4.2.1
H(X ) = H(X1 |X0 , X−1 , . . .) ≤ H(X1 )
(4.146)
since conditioning reduces entropy
(b) We have equality only if X1 is independent of the past X0 , X−1 , . . . , i.e., if and
only if Xi is an i.i.d. process.
24. Entropy rates
Let {Xi } be a stationary process. Let Yi = (Xi , Xi+1 ) . Let Zi = (X2i , X2i+1 ) . Let
Vi = X2i . Consider the entropy rates H(X ) , H(Y) , H(Z) , and H(V) of the processes
{Xi } , {Yi } , {Zi } , and {Vi } . What is the inequality relationship ≤ , =, or ≥ between
each of the pairs listed below:
(a) H(X) vs. H(Y) .
(b) H(X) vs. H(Z) .
(c) H(X) vs. H(V) .
(d) H(Z) vs. H(X) .
Solution: Entropy rates
{Xi} is a stationary process, Yi = (Xi , Xi+1) . Let Zi = (X2i , X2i+1) . Let Vi = X2i .
Consider the entropy rates H(X) , H(Y) , H(Z) , and H(V) of the processes {Xi} ,
{Yi} , {Zi} , and {Vi} .
(a) H(X ) = H (Y) , since H(X1 , X2 , . . . , Xn , Xn+1 ) = H(Y1 , Y2 , . . . , Yn ) , and dividing
by n and taking the limit, we get equality.
(b) H(X ) < H (Z) , since H(X1 , . . . , X2n ) = H(Z1 , . . . , Zn ) , and dividing by n and
taking the limit, we get 2H(X ) = H(Z) .
(c) H(X ) > H (V) , since H(V1 |V0 , . . .) = H(X2 |X0 , X−2 , . . .) ≤ H(X2 |X1 , X0 , X−1 , . . .) .
(d) H(Z) = 2H (X ) since H(X1 , . . . , X2n ) = H(Z1 , . . . , Zn ) , and dividing by n and
taking the limit, we get 2H(X ) = H(Z) .
25. Monotonicity.
(a) Show that I(X; Y1 , Y2 , . . . , Yn ) is non-decreasing in n .
(b) Under what conditions is the mutual information constant for all n ?
Solution: Monotonicity
(a) Since conditioning reduces entropy,
    H(X |Y1, Y2, . . . , Yn) ≥ H(X |Y1, Y2, . . . , Yn, Yn+1)        (4.147)

and hence

    I(X; Y1, Y2, . . . , Yn) = H(X) − H(X |Y1, Y2, . . . , Yn)        (4.148)
                            ≤ H(X) − H(X |Y1, Y2, . . . , Yn, Yn+1)        (4.149)
                            = I(X; Y1, Y2, . . . , Yn, Yn+1)        (4.150)
(b) We have equality if and only if H(X|Y 1 , Y2 , . . . , Yn ) = H(X|Y1 ) for all n , i.e., if
X is conditionally independent of Y 2 , . . . given Y1 .
26. Transitions in Markov chains. Suppose {X i } forms an irreducible Markov chain
with transition matrix P and stationary distribution μ . Form the associated “edge process” {Yi} by keeping track only of the transitions. Thus the new process {Yi}
takes values in X × X , and Yi = (Xi−1 , Xi ) .
For example
X = 3, 2, 8, 5, 7, . . .
becomes
Y = (∅, 3), (3, 2), (2, 8), (8, 5), (5, 7), . . .
Find the entropy rate of the edge process {Y i } .
Solution: Edge Process H(X ) = H (Y) , since H(X 1 , X2 , . . . , Xn , Xn+1 ) = H(Y1 , Y2 , . . . , Yn ) ,
and dividing by n and taking the limit, we get equality.
27. Entropy rate
Let {Xi } be a stationary {0, 1} valued stochastic process obeying
Xk+1 = Xk ⊕ Xk−1 ⊕ Zk+1 ,
where {Zi } is Bernoulli( p ) and ⊕ denotes mod 2 addition. What is the entropy rate
H(X ) ?
Solution: Entropy Rate
H(X ) = H(Xk+1 |Xk , Xk−1 , . . .) = H(Xk+1 |Xk , Xk−1 ) = H(Zk+1 ) = H(p)
(4.151)
28. Mixture of processes
Suppose we observe one of two stochastic processes but don’t know which. What is the
entropy rate? Specifically, let X11 , X12 , X13 , . . . be a Bernoulli process with parameter
p1 and let X21 , X22 , X23 , . . . be Bernoulli (p2 ) . Let
    θ = 1 with probability 1/2 , and θ = 2 with probability 1/2 ,
and let Yi = Xθi , i = 1, 2, . . . , be the observed stochastic process. Thus Y observes
the process {X1i } or {X2i } . Eventually Y will know which.
(a) Is {Yi } stationary?
(b) Is {Yi } an i.i.d. process?
(c) What is the entropy rate H of {Yi } ?
(d) Does −(1/n) log p(Y1, Y2, . . . , Yn) −→ H ?
(e) Is there a code that achieves an expected per-symbol description length (1/n) E Ln −→ H ?
Now let θi be Bern(1/2). Observe

    Zi = Xθi,i ,    i = 1, 2, . . . ,
Thus θ is not fixed for all time, as it was in the first part, but is chosen i.i.d. each time.
Answer (a), (b), (c), (d), (e) for the process {Z i } , labeling the answers (a ' ), (b ' ), (c ' ),
(d ' ), (e ' ).
Solution: Mixture of processes.
(a) Yes, {Yi } is stationary, since the scheme that we use to generate the Y i s doesn’t
change with time.
(b) No, it is not IID, since there’s dependence now – all Y i s have been generated
according to the same parameter θ .
Alternatively, we can arrive at the result by examining I(Y n+1 ; Y n ) . If the process
were to be IID, then the expression I(Y n+1 ; Y n ) would have to be 0 . However,
if we are given Y n , then we can estimate what θ is, which in turn allows us to
predict Yn+1 . Thus, I(Yn+1 ; Y n ) is nonzero.
(c) The process {Yi} is the mixture of two Bernoulli processes with different parameters, and its entropy rate is the mixture of the two entropy rates of the two
processes, so it is given by

    H = (H(p1) + H(p2)) / 2 .

More rigorously,

    H = lim_{n→∞} (1/n) H(Y^n)
      = lim_{n→∞} (1/n) ( H(θ) + H(Y^n |θ) − H(θ |Y^n) )
      = (H(p1) + H(p2)) / 2 .

Note that only H(Y^n |θ) grows with n . The other terms are finite, so after dividing
by n they go to 0 as n → ∞ .
(d) The process {Yi } is not ergodic, so the AEP does not apply and the quantity
−(1/n) log P (Y1 , Y2 , . . . , Yn ) does NOT converge to the entropy rate. (But it does
converge to a random variable that equals H(p 1 ) w.p. 1/2 and H(p2 ) w.p. 1/2.)
(e) Since the process is stationary, we can do Huffman coding on longer and longer
blocks of the process. These codes will have an expected per-symbol length
bounded above by (H(X1, X2, . . . , Xn) + 1)/n , and this converges to H(X) .
(a’) Yes, {Zi} is stationary, since the scheme that we use to generate the Zi ’s doesn’t
change with time.
(b’) Yes, it is IID, since there’s no dependence now – each Zi is generated according
to an independent parameter θi , and Zi ∼ Bernoulli( (p1 + p2)/2 ).
(c’) Since the process is now IID, its entropy rate is

    H( (p1 + p2)/2 ) .

(d’) Yes, the limit exists by the AEP.
(e’) Yes, as in (e) above.
29. Waiting times.
Let X be the waiting time for the first heads to appear in successive flips of a fair coin.
Thus, for example, Pr{X = 3} = (1/2)³ .
Let Sn be the waiting time for the nth head to appear.
Thus,
S0 = 0
Sn+1 = Sn + Xn+1
where X1 , X2 , X3 , . . . are i.i.d according to the distribution above.
(a) Is the process {Sn} stationary?
(b) Calculate H(S1, S2, . . . , Sn) .
(c) Does the process {Sn} have an entropy rate? If so, what is it? If not, why not?
(d) What is the expected number of fair coin flips required to generate a random
variable having the same distribution as Sn ?
Solution: Waiting time process.
(a) For the process to be stationary, the distribution must be time invariant. It turns
out that process {Sn } is not stationary. There are several ways to show this.
• S0 is always 0 while Si , i ≠ 0 , can take on several values. Since the marginals
for S0 and S1 , for example, are not the same, the process can’t be stationary.
• It’s clear that the variance of S n grows with n , which again implies that the
marginals are not time-invariant.
• Process {Sn } is an independent increment process. An independent increment
process is not stationary (not even wide sense stationary), since var(S n ) =
var(Xn ) + var(Sn−1 ) > var(Sn−1 ) .
(b) We can use the chain rule and the Markov property to obtain the following results.

    H(S1, S2, . . . , Sn) = H(S1) + Σ_{i=2}^n H(Si |S^{i−1})
                          = H(S1) + Σ_{i=2}^n H(Si |Si−1)
                          = H(X1) + Σ_{i=2}^n H(Xi)
                          = Σ_{i=1}^n H(Xi)
                          = 2n
(c) It follows trivially from the previous part that

    H(S) = lim_{n→∞} H(S^n)/n = lim_{n→∞} 2n/n = 2 .
Note that the entropy rate can still exist even when the process is not stationary.
Furthermore, the entropy rate (for this problem) is the same as the entropy of X.
(d) The expected number of flips required can be lower-bounded by H(Sn) and upper-bounded by H(Sn) + 2 (Theorem 5.12.3, page 115). Sn has a negative binomial
distribution; i.e., Pr(Sn = k) = C(k − 1, n − 1) (1/2)^k for k ≥ n . (We have the n th
success at the k th trial if and only if we have exactly n − 1 successes in k − 1
trials and a success at the k th trial.)
Since computing the exact value of H(Sn) is difficult (and fruitless in the exam
setting), it would be sufficient to show that the expected number of flips required
is between H(Sn) and H(Sn) + 2 , and set up the expression of H(Sn) in terms
of the pmf of Sn .
Note, however, that for large n , the distribution of Sn will tend to a Gaussian
with mean n/p = 2n and variance n(1 − p)/p² = 2n .
Let pk = Pr(Sn = k + ESn) = Pr(Sn = k + 2n) . Let φ(x) be the normal density
function with mean zero and variance 2n , i.e., φ(x) = exp(−x²/2σ²)/√(2πσ²) ,
where σ² = 2n .
Then for large n , since the entropy is invariant under any constant shift of a
random variable and φ(x) log φ(x) is Riemann integrable,

    H(Sn) = H(Sn − E(Sn))
          = − Σ_k pk log pk
          ≈ − Σ_k φ(k) log φ(k)
          ≈ − ∫ φ(x) log φ(x) dx
          = (− log e) ∫ φ(x) ln φ(x) dx
          = (− log e) ∫ φ(x) ( −x²/2σ² − ln √(2πσ²) ) dx
          = (log e) ( 1/2 + (1/2) ln 2πσ² )
          = (1/2) log 2πeσ²
          = (1/2) log nπe + 1 .

(Refer to Chapter 9 for a more general discussion of the entropy of a continuous
random variable and its relation to discrete entropy.)
Here is a specific example for n = 100 . Based on the earlier discussion, Pr(S100 =
k) = C(k − 1, 99) (1/2)^k . The Gaussian approximation of H(Sn) is 5.8690 while
the exact value of H(Sn ) is 5.8636 . The expected number of flips required is
somewhere between 5.8636 and 7.8636 .
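The two numbers quoted for n = 100 can be reproduced as follows (a verification sketch, not part of the original solution):

# Sketch (verification aid): exact entropy of S_n for n = 100 versus the
# Gaussian approximation (1/2) log(2*pi*e*2n).
import numpy as np
from math import lgamma, log, pi, e

n = 100
def log_pmf(k):            # log Pr(S_n = k) = log C(k-1, n-1) - k log 2  (natural log)
    return lgamma(k) - lgamma(n) - lgamma(k - n + 1) - k * log(2)

ks = np.arange(n, 20 * n)                           # the tail beyond 20n is negligible
logp = np.array([log_pmf(k) for k in ks])
prob = np.exp(logp)
H_exact = -np.sum(prob * logp) / log(2)             # convert nats to bits
H_gauss = 0.5 * np.log2(2 * pi * e * 2 * n)
print("exact H(S_100)         =", H_exact)          # ~ 5.8636 bits
print("Gaussian approximation =", H_gauss)          # ~ 5.8690 bits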
30. Markov chain transitions.

    P = [Pij] = [ 1/2  1/4  1/4 ]
                [ 1/4  1/2  1/4 ]
                [ 1/4  1/4  1/2 ]

Let X1 be uniformly distributed over the states {0, 1, 2} . Let {Xi}_1^∞ be a Markov
chain with transition matrix P , thus P(Xn+1 = j |Xn = i) = Pij , i, j ∈ {0, 1, 2} .
(a) Is {Xn } stationary?
(b) Find lim_{n→∞} (1/n) H(X1, . . . , Xn) .
Now consider the derived process Z1 , Z2 , . . . , Zn , where
Z1 = X1
Zi = Xi − Xi−1
(mod 3),
i = 2, . . . , n.
Thus Z n encodes the transitions, not the states.
(c) Find H(Z1 , Z2 , ..., Zn ).
(d) Find H(Zn ) and H(Xn ), for n ≥ 2 .
(e) Find H(Zn |Zn−1 ) for n ≥ 2 .
(f) Are Zn−1 and Zn independent for n ≥ 2 ?
Solution:
(a) Let μn denote the probability mass function at time n . Since μ1 = (1/3, 1/3, 1/3)
and μ2 = μ1 P = μ1 , we have μn = μ1 = (1/3, 1/3, 1/3) for all n , and {Xn} is stationary.
Alternatively, the observation that P is doubly stochastic leads to the same conclusion.
(b) Since {Xn} is stationary Markov,

    lim_{n→∞} (1/n) H(X1, . . . , Xn) = H(X2 |X1)
                                      = Σ_{k=0}^2 P(X1 = k) H(X2 |X1 = k)
                                      = 3 × (1/3) × H(1/2, 1/4, 1/4)
                                      = 3/2 .
(c) Since (X1, . . . , Xn) and (Z1, . . . , Zn) are one-to-one, by the chain rule of entropy
and the Markovity,

    H(Z1, . . . , Zn) = H(X1, . . . , Xn)
                      = Σ_{k=1}^n H(Xk |X1, . . . , Xk−1)
                      = H(X1) + Σ_{k=2}^n H(Xk |Xk−1)
                      = H(X1) + (n − 1) H(X2 |X1)
                      = log 3 + (3/2)(n − 1).

Alternatively, we can use the results of parts (d), (e), and (f). Since Z1, . . . , Zn
are independent and Z2, . . . , Zn are identically distributed with the probability
distribution (1/2, 1/4, 1/4) ,

    H(Z1, . . . , Zn) = H(Z1) + H(Z2) + · · · + H(Zn)
                      = H(Z1) + (n − 1) H(Z2)
                      = log 3 + (3/2)(n − 1).
(d) Since {Xn} is stationary with μn = (1/3, 1/3, 1/3) ,

    H(Xn) = H(X1) = H(1/3, 1/3, 1/3) = log 3.

For n ≥ 2 , Zn takes the value 0 with probability 1/2 , and the values 1 and 2 each
with probability 1/4 . Hence, H(Zn) = H(1/2, 1/4, 1/4) = 3/2 .
(e) Due to the symmetry of P , P(Zn |Zn−1) = P(Zn) for n ≥ 2 . Hence, H(Zn |Zn−1) =
H(Zn) = 3/2 .
Alternatively, using the result of part (f), we can trivially reach the same conclusion.
(f) Let k ≥ 2 . First observe that by the symmetry of P , Z k+1 = Xk+1 − Xk is
independent of Xk . Now that
H(Zk+1 |Xk , Xk−1 ) = H(Xk+1 − Xk |Xk , Xk−1 )
= H(Xk+1 − Xk |Xk )
= H(Xk+1 − Xk )
= H(Zk+1 ),
Zk+1 is independent of (Xk , Xk−1 ) and hence independent of Zk = Xk − Xk−1 .
For k = 1 , again by the symmetry of P , Z2 is independent of Z1 = X1 trivially.
31. Markov.
Let {Xi } ∼ Bernoulli(p) . Consider the associated Markov chain {Y i }ni=1 where
Yi = (the number of 1’s in the current run of 1’s) . For example, if X n = 101110 . . . ,
we have Y n = 101230 . . . .
(a) Find the entropy rate of X n .
(b) Find the entropy rate of Y n .
Solution: Markov solution.
(a) For an i.i.d. source, H(X ) = H(X) = H(p) .
(b) Observe that X n and Y n have a one-to-one mapping. Thus, H(Y) = H(X ) =
H(p) .
32. Time symmetry.
Let {Xn } be a stationary Markov process. We condition on (X 0 , X1 ) and look into
the past and future. For what index k is
H(X−n |X0 , X1 ) = H(Xk |X0 , X1 )?
Give the argument.
Solution: Time symmetry.
The trivial solution is k = −n. To find other possible values of k we expand
H(X−n |X0 , X1 )
=
=
=
(a)
=
=
(b)
=
(c)
=
(d)
=
(e)
=
H(X−n , X0 , X1 ) − H(X0 , X1 )
H(X−n ) + H(X0 , X1 |X−n ) − H(X0 , X1 )
H(X−n ) + H(X0 |X−n ) + H(X1 |X0 , X−n ) − H(X0 , X1 )
H(X−n ) + H(X0 |X−n ) + H(X1 |X0 ) − H(X0 , X1 )
H(X−n ) + H(X0 |X−n ) − H(X0 )
H(X0 ) + H(X0 |X−n ) − H(X0 )
H(Xn |X0 )
H(Xn |X0 , X−1 )
H(Xn+1 |X1 , X0 )
where (a) and (d) come from Markovity and (b), (c) and (e) come from stationarity.
Hence k = n + 1 is also a solution. There are no other solution since for any other
k, we can construct a periodic Markov process as a counterexample. Therefore k ∈
{−n, n + 1}.
33. Chain inequality: Let X1 → X2 → X3 → X4 form a Markov chain. Show that
I(X1 ; X3 ) + I(X2 ; X4 ) ≤ I(X1 ; X4 ) + I(X2 ; X3 )
(4.152)
Solution: Chain inequality X1 → X2 → X3 → X4
    I(X1; X4) + I(X2; X3) − I(X1; X3) − I(X2; X4)        (4.153)
        = H(X1) − H(X1 |X4) + H(X2) − H(X2 |X3) − (H(X1) − H(X1 |X3))
          − (H(X2) − H(X2 |X4))        (4.154)
        = H(X1 |X3) − H(X1 |X4) + H(X2 |X4) − H(X2 |X3)        (4.155)
        = H(X1, X2 |X3) − H(X2 |X1, X3) − H(X1, X2 |X4) + H(X2 |X1, X4)        (4.156)
          + H(X1, X2 |X4) − H(X1 |X2, X4) − H(X1, X2 |X3) + H(X1 |X2, X3)        (4.157)
        = −H(X2 |X1, X3) + H(X2 |X1, X4)        (4.158)
        = H(X2 |X1, X4) − H(X2 |X1, X3, X4)        (4.159)
        = I(X2; X3 |X1, X4)        (4.160)
        ≥ 0        (4.161)
where H(X1 |X2 , X3 ) = H(X1 |X2 , X4 ) by the Markovity of the random variables.
34. Broadcast channel. Let X → Y → (Z, W ) form a Markov chain, i.e., p(x, y, z, w) =
p(x)p(y|x)p(z, w|y) for all x, y, z, w . Show that
I(X; Z) + I(X; W ) ≤ I(X; Y ) + I(Z; W )
(4.162)
Solution: Broadcast Channel
X → Y → (Z, W ) , hence by the data processing inequality, I(X; Y ) ≥ I(X; (Z, W )) ,
and hence
I(X : Y )
+I(Z; W ) − I(X; Z) − I(X; W )
(4.163)
≥ I(X : Z, W ) + I(Z; W ) − I(X; Z) − I(X; W )
(4.164)
= H(Z, W ) + H(X) − H(X, W, Z) + H(W ) + H(Z) − H(W, Z)
−H(Z) − H(X) + H(X, Z)) − H(W ) − H(X) + H(W, X)(4.165)
= −H(X, W, Z) + H(X, Z) + H(X, W ) − H(X)
(4.166)
= H(W |X) − H(W |X, Z)
(4.167)
= I(W ; Z|X)
(4.168)
≥ 0
(4.169)
35. Concavity of second law. Let {Xn}_{−∞}^∞ be a stationary Markov process. Show that
H(Xn |X0) is concave in n . Specifically, show that

    H(Xn |X0) − H(Xn−1 |X0) − (H(Xn−1 |X0) − H(Xn−2 |X0)) = −I(X1; Xn−1 |X0, Xn)        (4.170)
                                                          ≤ 0        (4.171)
Thus the second difference is negative, establishing that H(X n |X0 ) is a concave function of n .
Solution: Concavity of second law of thermodynamics
Since X0 → Xn−2 → Xn−1 → Xn is a Markov chain,

    H(Xn |X0) − H(Xn−1 |X0) − (H(Xn−1 |X0) − H(Xn−2 |X0))        (4.172)
        = H(Xn |X0) − H(Xn−1 |X0, X−1) − (H(Xn−1 |X0, X−1) − H(Xn−2 |X0, X−1))        (4.173)
        = H(Xn |X0) − H(Xn |X1, X0) − (H(Xn−1 |X0) − H(Xn−1 |X1, X0))        (4.174)
        = I(X1; Xn |X0) − I(X1; Xn−1 |X0)        (4.175)
        = H(X1 |X0) − H(X1 |Xn, X0) − H(X1 |X0) + H(X1 |Xn−1, X0)        (4.176)
        = H(X1 |Xn−1, X0) − H(X1 |Xn, X0)        (4.177)
        = H(X1 |Xn−1, Xn, X0) − H(X1 |Xn, X0)        (4.178)
        = −I(X1; Xn−1 |Xn, X0)        (4.179)
        ≤ 0        (4.180)

where (4.173) and (4.178) follow from Markovity and (4.174) follows from stationarity
of the Markov chain.
If we define
∆n = H(Xn |X0 ) − H(Xn−1 |X0 )
(4.181)
then the above chain of inequalities implies that ∆ n − ∆n−1 ≤ 0 , which implies that
H(Xn |X0 ) is a concave function of n .
Chapter 5
Data Compression
1. Uniquely decodable and instantaneous codes. Let L = Σ_{i=1}^m pi li^{100} be the expected value of the 100th power of the word lengths associated with an encoding of the
random variable X . Let L1 = min L over all instantaneous codes; and let L2 = min L
over all uniquely decodable codes. What inequality relationship exists between L1 and
L2 ?
Solution: Uniquely decodable and instantaneous codes.
    L = Σ_{i=1}^m pi li^{100}        (5.1)

    L1 = min_{instantaneous codes} L        (5.2)

    L2 = min_{uniquely decodable codes} L        (5.3)
Since all instantaneous codes are uniquely decodable, we must have L 2 ≤ L1 . Any set
of codeword lengths which achieve the minimum of L 2 will satisfy the Kraft inequality
and hence we can construct an instantaneous code with the same codeword lengths,
and hence the same L . Hence we have L1 ≤ L2 . From both these conditions, we must
have L1 = L2 .
2. How many fingers has a Martian? Let

    S = ( S1, . . . , Sm ; p1, . . . , pm ) .

The Si ’s are encoded into strings from a D -symbol output alphabet in a uniquely decodable manner. If m = 6 and the codeword lengths are (l1, l2, . . . , l6) = (1, 1, 2, 3, 2, 3),
find a good lower bound on D . You may wish to explain the title of the problem.
Solution: How many fingers has a Martian?
Uniquely decodable codes satisfy Kraft’s inequality. Therefore
f (D) = D −1 + D −1 + D −2 + D −3 + D −2 + D −3 ≤ 1.
(5.4)
We have f (2) = 7/4 > 1 , hence D > 2 . We have f (3) = 26/27 < 1 . So a possible
value of D is 3. Our counting system is base 10, probably because we have 10 fingers.
Perhaps the Martians were using a base 3 representation because they have 3 fingers.
(Maybe they are like Maine lobsters ?)
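The lower bound on D is a direct application of the Kraft inequality and is easy to verify with a short sketch (a verification aid, not part of the original solution):

# Sketch (verification aid): smallest alphabet size D for which the codeword
# lengths (1, 1, 2, 3, 2, 3) can satisfy the Kraft inequality.
lengths = [1, 1, 2, 3, 2, 3]

def kraft_sum(D):
    return sum(D ** (-l) for l in lengths)

D = 2
while kraft_sum(D) > 1:
    print(f"D = {D}: Kraft sum = {kraft_sum(D):.4f} > 1")
    D += 1
print(f"D = {D}: Kraft sum = {kraft_sum(D):.4f} <= 1, so D = {D} suffices")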
3. Slackness in the Kraft inequality. An instantaneous code has word lengths l 1 , l2 , . . . , lm
which satisfy the strict inequality
    Σ_{i=1}^m D^{−li} < 1.
The code alphabet is D = {0, 1, 2, . . . , D − 1}. Show that there exist arbitrarily long
sequences of code symbols in D ∗ which cannot be decoded into sequences of codewords.
Solution:
Slackness in the Kraft inequality. Instantaneous codes are prefix free codes, i.e., no
codeword is a prefix of any other codeword. Let n max = max{n1 , n2 , ..., nq }. There
are D nmax sequences of length nmax . Of these sequences, D nmax −ni start with the
i -th codeword. Because of the prefix condition no two sequences can start with the
same codeword. Hence the total number of sequences which start with some codeword
is Σ_{i=1}^q D^{nmax−ni} = D^{nmax} Σ_{i=1}^q D^{−ni} < D^{nmax} . Hence there are sequences which do
not start with any codeword. These and all longer sequences with these length n max
sequences as prefixes cannot be decoded. (This situation can be visualized with the aid
of a tree.)
Alternatively, we can map codewords onto dyadic intervals on the real line corresponding to real numbers whose decimal expansions start with that codeword. Since the
length of the interval for a codeword of length ni is D^{−ni} , and Σ_i D^{−ni} < 1 , there exists
some interval(s) not used by any codeword. The binary sequences in these intervals
do not begin with any codeword and hence cannot be decoded.
4. Huffman coding. Consider the random variable
    X = ( x1    x2    x3    x4    x5    x6    x7
          0.49  0.26  0.12  0.04  0.04  0.03  0.02 )
(a) Find a binary Huffman code for X.
(b) Find the expected codelength for this encoding.
(c) Find a ternary Huffman code for X.
Solution: Examples of Huffman codes.
(a) The Huffman tree for this distribution is

    Codeword
    1        x1    0.49   0.49   0.49   0.49   0.49   0.51   1
    00       x2    0.26   0.26   0.26   0.26   0.26   0.49
    011      x3    0.12   0.12   0.12   0.13   0.25
    01000    x4    0.04   0.05   0.08   0.12
    01001    x5    0.04   0.04   0.05
    01010    x6    0.03   0.04
    01011    x7    0.02
(b) The expected length of the codewords for the binary Huffman code is 2.02 bits.
( H(X) = 2.01 bits)
(c) The ternary Huffman tree is
Codeword
0
x1 0.49 0.49 0.49 1.0
1
x2 0.26 0.26 0.26
20
x3 0.12 0.12 0.25
22
x4 0.04 0.09
210
x5 0.04 0.04
211
x6 0.03
212
x7 0.02
This code has an expected length 1.34 ternary symbols. ( H 3 (X) = 1.27 ternary
symbols).
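For reference, the merging procedure is easy to mechanize. The sketch below is one possible implementation (using the standard heapq module; it is not the textbook's pseudocode, and the exact codewords it prints may differ from the table above when probabilities tie), but the codeword lengths and the expected length of 2.02 bits agree:

# Sketch (one possible implementation): binary Huffman coding with heapq,
# applied to the distribution of this problem.
import heapq

def huffman_code(probs):
    """Return {symbol: codeword} for a binary Huffman code of the pmf `probs`."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)                   # tie-breaker so dicts are never compared
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"x1": 0.49, "x2": 0.26, "x3": 0.12, "x4": 0.04,
         "x5": 0.04, "x6": 0.03, "x7": 0.02}
code = huffman_code(probs)
L = sum(probs[s] * len(w) for s, w in code.items())
print(code)
print("expected length =", L, "bits")     # 2.02 bits (lengths 1, 2, 3, 5, 5, 5, 5)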
5. More Huffman codes. Find the binary Huffman code for the source with probabilities
(1/3, 1/5, 1/5, 2/15, 2/15) . Argue that this code is also optimal for the source with
probabilities (1/5, 1/5, 1/5, 1/5, 1/5).
Solution: More Huffman codes. The Huffman code for the source with probabilities
(1/3, 1/5, 1/5, 2/15, 2/15) has codewords {00, 10, 11, 010, 011}.
To show that this code (*) is also optimal for (1/5, 1/5, 1/5, 1/5, 1/5) we have to
show that it has minimum expected length, that is, no shorter code can be constructed
without violating H(X) ≤ EL .
    H(X) = log 5 = 2.32 bits.        (5.5)

    E(L(∗)) = 2 × (3/5) + 3 × (2/5) = 12/5 bits.        (5.6)

Since

    E(L(any code)) = Σ_{i=1}^5 li/5 = k/5 bits        (5.7)

for some integer k , the next lowest possible value of E(L) is 11/5 = 2.2 bits < 2.32
bits. Hence (*) is optimal.
Note that one could also prove the optimality of (*) by showing that the Huffman
code for the (1/5, 1/5, 1/5, 1/5, 1/5) source has average length 12/5 bits. (Since each
Huffman code produced by the Huffman encoding algorithm is optimal, they all have
the same average length.)
6. Bad codes. Which of these codes cannot be Huffman codes for any probability assignment?
(a) {0, 10, 11}.
(b) {00, 01, 10, 110}.
(c) {01, 10}.
Solution: Bad codes
(a) {0,10,11} is a Huffman code for the distribution (1/2,1/4,1/4).
(b) The code {00,01,10, 110} can be shortened to {00,01,10, 11} without losing its
instantaneous property, and therefore is not optimal, so it cannot be a Huffman
code. Alternatively, it is not a Huffman code because there is a unique longest
codeword.
(c) The code {01,10} can be shortened to {0,1} without losing its instantaneous property, and therefore is not optimal and not a Huffman code.
7. Huffman 20 questions. Consider a set of n objects. Let X i = 1 or 0 accordingly as
the i-th object is good or defective. Let X 1 , X2 , . . . , Xn be independent with Pr {Xi =
1} = pi ; and p1 > p2 > . . . > pn > 1/2 . We are asked to determine the set of all
defective objects. Any yes-no question you can think of is admissible.
(a) Give a good lower bound on the minimum average number of questions required.
(b) If the longest sequence of questions is required by nature’s answers to our questions,
what (in words) is the last question we should ask? And what two sets are we
distinguishing with this question? Assume a compact (minimum average length)
sequence of questions.
(c) Give an upper bound (within 1 question) on the minimum average number of
questions required.
Solution: Huffman 20 Questions.
(a) We will be using the questions to determine the sequence X 1 , X2 , . . . , Xn , where
Xi is 1 or 0 according to whether the i -th object is good or defective. Thus the
most likely sequence is all 1’s, with a probability of Π_{i=1}^n pi , and the least likely
sequence is the all 0’s sequence with probability Π_{i=1}^n (1 − pi) . Since the optimal
set of questions corresponds to a Huffman code for the source, a good lower bound
on the average number of questions is the entropy of the sequence X 1 , X2 , . . . , Xn .
But since the Xi ’s are independent Bernoulli random variables, we have
    EQ ≥ H(X1, X2, . . . , Xn) = Σ_i H(Xi) = Σ_i H(pi).        (5.8)
(b) The last bit in the Huffman code distinguishes between the least likely source
symbols. (By the conditions of the problem, all the probabilities are different,
and thus the two least likely sequences are uniquely defined.) In this case, the
two least likely sequences are 000 . . . 00 and 000 . . . 01 , which have probabilities
(1 − p1 )(1 − p2 ) . . . (1 − pn ) and (1 − p1 )(1 − p2 ) . . . (1 − pn−1 )pn respectively. Thus
the last question will ask “Is Xn = 1 ”, i.e., “Is the last item defective?”.
(c) By the same arguments as in Part (a), an upper bound on the minimum average
number of questions is an upper bound on the average length of a Huffman code,
namely H(X1, X2, . . . , Xn) + 1 = Σ_i H(pi) + 1 .
8. Simple optimum compression of a Markov source. Consider the 3-state Markov
process U1 , U2 , . . . , having transition matrix
    Un−1 \ Un    S1     S2     S3
    S1           1/2    1/4    1/4
    S2           1/4    1/2    1/4
    S3           0      1/2    1/2
Thus the probability that S1 follows S3 is equal to zero. Design 3 codes C1 , C2 , C3
(one for each state 1, 2 and 3 ), each code mapping elements of the set of S i ’s into
sequences of 0’s and 1’s, such that this Markov process can be sent with maximal
compression by the following scheme:
(a) Note the present symbol Xn = i .
(b) Select code Ci .
(c) Note the next symbol Xn+1 = j and send the codeword in Ci corresponding to
j.
(d) Repeat for the next symbol.
What is the average message length of the next symbol conditioned on the previous
state Xn = i using this coding scheme? What is the unconditional average number
of bits per source symbol? Relate this to the entropy rate H(U) of the Markov
chain.
Solution: Simple optimum compression of a Markov source.
It is easy to design an optimal code for each state. A possible solution is
    Next state    Code C1    Code C2    Code C3
    S1            0          10         -
    S2            10         0          0
    S3            11         11         1

    E(L|C1) = 1.5 bits/symbol
    E(L|C2) = 1.5 bits/symbol
    E(L|C3) = 1 bit/symbol
The average message lengths of the next symbol conditioned on the previous state
being Si are just the expected lengths of the codes C i . Note that this code assignment
achieves the conditional entropy lower bound.
To find the unconditional average, we have to find the stationary distribution on the
states. Let μ be the stationary distribution. Then

    μ = μ [ 1/2  1/4  1/4 ]
          [ 1/4  1/2  1/4 ]        (5.9)
          [ 0    1/2  1/2 ]
We can solve this to find that µ = (2/9, 4/9, 1/3) . Thus the unconditional average
number of bits per source symbol
    EL = Σ_{i=1}^3 μi E(L|Ci)        (5.10)
       = (2/9) × 1.5 + (4/9) × 1.5 + (1/3) × 1        (5.11)
       = 4/3 bits/symbol.        (5.12)
The entropy rate H of the Markov chain is

    H = H(X2 |X1)        (5.13)
      = Σ_{i=1}^3 μi H(X2 |X1 = Si)        (5.14)
      = 4/3 bits/symbol.        (5.15)
Thus the unconditional average number of bits per source symbol and the entropy rate
H of the Markov chain are equal, because the expected length of each code C i equals
the entropy of the state after state i , H(X 2 |X1 = Si ) , and thus maximal compression
is obtained.
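The stationary distribution and the two averages are easy to verify numerically (a verification sketch, assuming NumPy is available):

# Sketch (verification aid): stationary distribution of the 3-state chain and the
# average number of bits per symbol under the state-dependent codes C1, C2, C3.
import numpy as np

P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# mu P = mu: take the eigenvector of P^T belonging to eigenvalue 1 and normalize.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu = mu / mu.sum()
print("mu =", mu)                                       # (2/9, 4/9, 1/3)

code_len = np.array([1.5, 1.5, 1.0])                    # E(L|C1), E(L|C2), E(L|C3)
row_H = [-(row[row > 0] * np.log2(row[row > 0])).sum() for row in P]
print("E[L] =", mu @ code_len, "  H(U) =", mu @ row_H)  # both equal 4/3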
9. Optimal code lengths that require one bit above entropy. The source coding
theorem shows that the optimal code for a random variable X has an expected length
less than H(X) + 1 . Give an example of a random variable for which the expected
length of the optimal code is close to H(X) + 1 , i.e., for any ε > 0 , construct a
distribution for which the optimal code has L > H(X) + 1 − ε .
Solution: Optimal code lengths that require one bit above entropy. There is a trivial
example that requires almost 1 bit above its entropy. Let X be a binary random
variable with probability of X = 1 close to 1. Then entropy of X is close to 0 , but
the length of its optimal code is 1 bit, which is almost 1 bit above its entropy.
10. Ternary codes that achieve the entropy bound. A random variable X takes
on m values and has entropy H(X) . An instantaneous ternary code is found for this
source, with average length

    L = H(X)/log2 3 = H3(X).        (5.16)
(a) Show that each symbol of X has a probability of the form 3 −i for some i .
(b) Show that m is odd.
Solution: Ternary codes that achieve the entropy bound.
(a) We will argue that an optimal ternary code that meets the entropy bound corresponds to a complete ternary tree, with the probability of each leaf of the form 3^{−i} .
To do this, we essentially repeat the arguments of Theorem 5.3.1. We achieve the
ternary entropy bound only if D(p||r) = 0 and c = 1 in (5.25). Thus we achieve
the entropy bound if and only if each pi is of the form 3^{−j} for some integer j .
(b) We will show that any distribution that has p i = 3−li for all i must have an
odd number of symbols. We know from Theorem 5.2.1, that given the set of
lengths, li , we can construct a ternary tree with nodes at the depths l i . Now,
since Σ_i 3^{−li} = 1 , the tree must be complete. A complete ternary tree has an
odd number of leaves (this can be proved by induction on the number of internal
nodes). Thus the number of source symbols is odd.
Another simple argument is to use basic number theory. We know that for
this distribution, Σ_i 3^{−li} = 1 . We can write this as 3^{−lmax} Σ_i 3^{lmax−li} = 1 or
Σ_i 3^{lmax−li} = 3^{lmax} . Each of the terms in the sum is odd, and since their sum is
odd, the number of terms in the sum has to be odd (the sum of an even number
of odd terms is even). Thus there are an odd number of source symbols for any
code that meets the ternary entropy bound.
11. Suffix condition. Consider codes that satisfy the suffix condition, which says that
no codeword is a suffix of any other codeword. Show that a suffix condition code is
uniquely decodable, and show that the minimum average length over all codes satisfying
the suffix condition is the same as the average length of the Huffman code for that
random variable.
Solution: Suffix condition. The fact that the codes are uniquely decodable can be
seen easily be reversing the order of the code. For any received sequence, we work
backwards from the end, and look for the reversed codewords. Since the codewords
satisfy the suffix condition, the reversed codewords satisfy the prefix condition, and thus
we can uniquely decode the reversed code.
The fact that we achieve the same minimum expected length then follows directly from
the results of Section 5.5. But we can use the same reversal argument to argue that
corresponding to every suffix code, there is a prefix code of the same length and vice
versa, and therefore we cannot achieve any lower codeword lengths with a suffix code
than we can with a prefix code.
12. Shannon codes and Huffman codes. Consider a random variable X which takes
on four values with probabilities (1/3, 1/3, 1/4, 1/12) .
(a) Construct a Huffman code for this random variable.
(b) Show that there exist two different sets of optimal lengths for the codewords,
namely, show that codeword length assignments (1, 2, 3, 3) and (2, 2, 2, 2) are
both optimal.
(c) Conclude that there are optimal codes with codeword lengths for some symbols
1
that exceed the Shannon code length 2log p(x)
3.
Solution: Shannon codes and Huffman codes.
(a) Applying the Huffman algorithm gives us the following table
    Code    Symbol    Probability
    0       1         1/3    1/3    2/3    1
    11      2         1/3    1/3    1/3
    101     3         1/4    1/3
    100     4         1/12
which gives codeword lengths of 1,2,3,3 for the different codewords.
(b) Both set of lengths 1,2,3,3 and 2,2,2,2 satisfy the Kraft inequality, and they both
achieve the same expected length (2 bits) for the above distribution. Therefore
they are both optimal.
(c) The symbol with probability 1/4 has a Huffman code of length 3, which is greater
than ⌈log 1/p⌉ . Thus the Huffman code for a particular symbol may be longer than
the Shannon code for that symbol. But on the average, the Huffman code cannot
be longer than the Shannon code.
13. Twenty questions. Player A chooses some object in the universe, and player B
attempts to identify the object with a series of yes-no questions. Suppose that player B
is clever enough to use the code achieving the minimal expected length with respect to
player A’s distribution. We observe that player B requires an average of 38.5 questions
to determine the object. Find a rough lower bound to the number of objects in the
universe.
Solution: Twenty questions.
37.5 = L∗ − 1 < H(X) ≤ log |X |
(5.17)
and hence number of objects in the universe > 2 37.5 = 1.94 × 1011 .
14. Huffman code. Find the (a) binary and (b) ternary Huffman codes for the random
variable X with probabilities
    p = ( 1/21, 2/21, 3/21, 4/21, 5/21, 6/21 ) .

(c) Calculate L = Σ pi li in each case.
Solution: Huffman code.
(a) The Huffman tree for this distribution is
Codeword
00
x1 6/21 6/21 6/21 9/21 12/21 1
10
x2 5/21 5/21 6/21 6/21 9/21
11
x3 4/21 4/21 5/21 6/21
010
x4 3/21 3/21 4/21
0110
x5 2/21 3/21
0111
x6 1/21
(b) The ternary Huffman tree is
Codeword
1
x1 6/21 6/21 10/21 1
2
x2 5/21 5/21 6/21
00
x3 4/21 4/21 5/21
01
x4 3/21 3/21
020
x5 2/21 3/21
021
x6 1/21
022
x7 0/21
(c) The expected length of the codewords for the binary Huffman code is 51/21 = 2.43
bits.
The ternary code has an expected length of 34/21 = 1.62 ternary symbols.
15. Huffman codes.
(a) Construct a binary Huffman code for the following distribution on 5 symbols p =
(0.3, 0.3, 0.2, 0.1, 0.1) . What is the average length of this code?
(b) Construct a probability distribution p ' on 5 symbols for which the code that you
constructed in part (a) has an average length (under p ' ) equal to its entropy
H(p' ) .
Solution: Huffman codes
(a) The code constructed by the standard Huffman procedure
Codeword X Probability
10
1
0.3
0.3 0.4 0.6 1
11
2
0.3
0.3 0.3 0.4
00
3
0.2
0.2 0.3
010
4
0.1
0.2
011
5
0.1
The average length = 2 ∗ 0.8 + 3 ∗ 0.2 = 2.2 bits/symbol.
(b) The code would have a rate equal to the entropy if each of the codewords were of
length log(1/p(X)) . In this case, the code constructed above would be efficient for the
distribution (0.25, 0.25, 0.25, 0.125, 0.125).
16. Huffman codes: Consider a random variable X which takes 6 values {A, B, C, D, E, F }
with probabilities (0.5, 0.25, 0.1, 0.05, 0.05, 0.05) respectively.
(a) Construct a binary Huffman code for this random variable. What is its average
length?
(b) Construct a quaternary Huffman code for this random variable, i.e., a code over
an alphabet of four symbols (call them a, b, c and d ). What is the average length
of this code?
(c) One way to construct a binary code for the random variable is to start with a
quaternary code, and convert the symbols into binary using the mapping a → 00 ,
b → 01 , c → 10 and d → 11 . What is the average length of the binary code for
the above random variable constructed by this process?
(d) For any random variable X , let LH be the average length of the binary Huffman
code for the random variable, and let L QB be the average length code constructed
by first building a quaternary Huffman code and converting it to binary. Show
that
LH ≤ LQB < LH + 2
(5.18)
(e) The lower bound in the previous example is tight. Give an example where the
code constructed by converting an optimal quaternary code is also the optimal
binary code.
(f) The upper bound, i.e., LQB < LH + 2 is not tight. In fact, a better bound is
LQB ≤ LH + 1 . Prove this bound, and provide an example where this bound is
tight.
Solution: Huffman codes: Consider a random variable X which takes 6 values {A, B, C, D, E, F }
with probabilities (0.5, 0.25, 0.1, 0.05, 0.05, 0.05) respectively.
(a) Construct a binary Huffman code for this random variable. What is its average
length?
Solution:
    Code    Source symbol    Prob.
    0       A                0.5     0.5    0.5    0.5    0.5    1.0
    10      B                0.25    0.25   0.25   0.25   0.5
    1100    C                0.1     0.1    0.15   0.25
    1101    D                0.05    0.1    0.1
    1110    E                0.05    0.05
    1111    F                0.05
The average length of this code is 1×0.5+2×0.25+4×(0.1+0.05+0.05+0.05) = 2
bits. The entropy H(X) in this case is 1.98 bits.
(b) Construct a quaternary Huffman code for this random variable, i.e., a code over
an alphabet of four symbols (call them a, b, c and d ). What is the average length
of this code?
Solution:Since the number of symbols, i.e., 6 is not of the form 1 + k(D − 1) ,
we need to add a dummy symbol of probability 0 to bring it to this form. In this
case, drawing up the Huffman tree is straightforward.
    Code    Symbol    Prob.
    a       A         0.5     0.5    1.0
    b       B         0.25    0.25
    d       C         0.1     0.15
    ca      D         0.05    0.1
    cb      E         0.05
    cc      F         0.05
    cd      G         0.0
The average length of this code is 1 × 0.85 + 2 × 0.15 = 1.15 quaternary symbols.
(c) One way to construct a binary code for the random variable is to start with a
quaternary code, and convert the symbols into binary using the mapping a → 00 ,
b → 01 , c → 10 and d → 11 . What is the average length of the binary code for
the above random variable constructed by this process?
Solution:The code constructed by the above process is A → 00 , B → 01 , C →
11 , D → 1000 , E → 1001 , and F → 1010 , and the average length is 2 × 0.85 +
4 × 0.15 = 2.3 bits.
(d) For any random variable X , let LH be the average length of the binary Huffman
code for the random variable, and let LQB be the average length of the code constructed
by first building a quaternary Huffman code and converting it to binary. Show
that

    LH ≤ LQB < LH + 2        (5.19)

Solution: Since the binary code constructed from the quaternary code is also instantaneous, its average length cannot be better than the average length of the
best instantaneous code, i.e., the Huffman code. That gives the lower bound of
the inequality above.
To prove the upper bound, let LQ be the length of the optimal quaternary code.
Then from the results proved in the book, we have

    H4(X) ≤ LQ < H4(X) + 1        (5.20)

Also, it is easy to see that LQB = 2LQ , since each symbol in the quaternary code
is converted into two bits. Also, from the properties of entropy, it follows that
H4(X) = H2(X)/2 . Substituting these in the previous equation, we get

    H2(X) ≤ LQB < H2(X) + 2.        (5.21)

Combining this with the bound that H2(X) ≤ LH , we obtain LQB < LH + 2 .
(e) The lower bound in the previous example is tight. Give an example where the
code constructed by converting an optimal quaternary code is also the optimal
binary code?
Solution:Consider a random variable that takes on four equiprobable values.
Then the quaternary Huffman code for this is 1 quaternary symbol for each source
symbol, with average length 1 quaternary symbol. The average length L QB for
this code is then 2 bits. The Huffman code for this case is also easily seen to assign
2 bit codewords to each symbol, and therefore for this case, L H = LQB .
(f) (Optional, no credit) The upper bound, i.e., LQB < LH + 2 , is not tight. In fact, a
better bound is LQB ≤ LH + 1 . Prove this bound, and provide an example where
this bound is tight.
Solution: Consider a binary Huffman code for the random variable X and consider
all codewords of odd length. Append a 0 to each of these codewords, and we will
obtain an instantaneous code where all the codewords have even length. Then we
can use the inverse of the mapping mentioned in part (c) to construct a quaternary
code for the random variable - it is easy to see that the quaternary code is also
instantaneous. Let LBQ be the average length of this quaternary code. Since the
lengths of the quaternary codewords of BQ are half the lengths of the corresponding
binary codewords, we have

    LBQ = (1/2) ( LH + Σ_{i: li odd} pi ) ≤ (LH + 1)/2        (5.22)

and since the BQ code is at best as good as the quaternary Huffman code, we
have

    LBQ ≥ LQ        (5.23)

Therefore LQB = 2LQ ≤ 2LBQ ≤ LH + 1 .
An example where this upper bound is tight is the case when we have only two
possible symbols. Then LH = 1 , and LQB = 2 .
17. Data compression. Find an optimal set of binary codeword lengths l1, l2, . . . (minimizing Σ pi li ) for an instantaneous code for each of the following probability mass
functions:
(a) p = ( 10/41, 9/41, 8/41, 7/41, 7/41 )
(b) p = ( 9/10, (9/10)(1/10), (9/10)(1/10)², (9/10)(1/10)³, . . . )
Solution: Data compression
(a)
    Code    Source symbol    Prob.
    10      A                10/41    14/41    17/41    24/41    41/41
    00      B                9/41     10/41    14/41    17/41
    01      C                8/41     9/41     10/41
    110     D                7/41     8/41
    111     E                7/41
(b) This is the case of a Huffman code on an infinite alphabet. If we consider an initial
subset of the symbols, we can see that the cumulative probability of all symbols
{x : x > i} is Σ_{j>i} 0.9 (0.1)^{j−1} = 0.9 (0.1)^i (1/(1 − 0.1)) = (0.1)^i . Since
this is less than 0.9 (0.1)^{i−1} , the cumulative sum of all the remaining terms is
less than the last term used. Thus Huffman coding will always merge the last two
terms. This in turn implies that the Huffman code in this case is of the form
1, 01, 001, 0001, etc.
18. Classes of codes. Consider the code {0, 01}
(a) Is it instantaneous?
(b) Is it uniquely decodable?
(c) Is it nonsingular?
Solution: Codes.
(a) No, the code is not instantaneous, since the first codeword, 0, is a prefix of the
second codeword, 01.
(b) Yes, the code is uniquely decodable. Given a sequence of codewords, first isolate
occurrences of 01 (i.e., find all the ones) and then parse the rest into 0’s.
(c) Yes, all uniquely decodable codes are non-singular.
19. The game of Hi-Lo.
(a) A computer generates a number X according to a known probability mass function
p(x), x ∈ {1, 2, . . . , 100} . The player asks a question, “Is X = i ?” and is told
“Yes”, “You’re too high,” or “You’re too low.” He continues for a total of six
questions. If he is right (i.e., he receives the answer “Yes”) during this sequence,
he receives a prize of value v(X). How should the player proceed to maximize his
expected winnings?
(b) The above doesn’t have much to do with information theory. Consider the following variation: X ∼ p(x), prize = v(x) , p(x) known, as before. But arbitrary
Yes-No questions are asked sequentially until X is determined. (“Determined”
doesn’t mean that a “Yes” answer is received.) Questions cost one unit each. How
should the player proceed? What is the expected payoff?
(c) Continuing (b), what if v(x) is fixed, but p(x) can be chosen by the computer
(and then announced to the player)? The computer wishes to minimize the player’s
expected return. What should p(x) be? What is the expected return to the player?
Solution: The game of Hi-Lo.
(a) The first thing to recognize in this problem is that the player cannot cover more
than 63 values of X with 6 questions. This can be easily seen by induction.
With one question, there is only one value of X that can be covered. With two
questions, there is one value of X that can be covered with the first question,
and depending on the answer to the first question, there are two possible values
of X that can be asked in the next question. By extending this argument, we see
that we can ask at most 63 different questions of the form “Is X = i ?” with 6
questions. (The fact that we have narrowed the range at the end is irrelevant, if
we have not isolated the value of X .)
Thus if the player seeks to maximize his return, he should choose the 63 most
valuable outcomes for X , and play to isolate these values. The probabilities are
irrelevant to this procedure. He will choose the 63 most valuable outcomes, and
his first question will be “Is X = i ?” where i is the median of these 63 numbers.
After isolating to either half, his next question will be “Is X = j ?”, where j is
the median of that half. Proceeding this way, he will win if X is one of the 63
most valuable outcomes, and lose otherwise. This strategy maximizes his expected
winnings.
(b) Now if arbitrary questions are allowed, the game reduces to a game of 20 questions
to determine the object. The return in this case to the player is Σ_x p(x)(v(x) − l(x)),
where l(x) is the number of questions required to determine the object.
Maximizing the return is equivalent to minimizing the expected number of questions, and thus, as argued in the text, the optimal strategy is to construct a
Huffman code for the source and use that to construct a question strategy. His
expected return is therefore between Σ p(x)v(x) − H and Σ p(x)v(x) − H − 1.
(c) A computer wishing to minimize the return to the player will want to minimize
Σ p(x)v(x) − H(X) over choices of p(x). We can write this as a standard minimization problem with constraints. Let

J(p) = Σ_i p_i v_i + Σ_i p_i log p_i + λ Σ_i p_i,   (5.24)

and differentiating and setting to 0, we obtain

v_i + log p_i + 1 + λ = 0,   (5.25)

or, after normalizing to ensure that the p_i's form a probability distribution,

p_i = 2^{−v_i} / Σ_j 2^{−v_j}.   (5.26)

To complete the proof, we let r_i = 2^{−v_i} / Σ_j 2^{−v_j} and rewrite the return as

Σ p_i v_i + Σ p_i log p_i = Σ p_i log p_i − Σ p_i log 2^{−v_i}   (5.27)
                          = Σ p_i log p_i − Σ p_i log r_i − log( Σ_j 2^{−v_j} )   (5.28)
                          = D(p||r) − log( Σ_j 2^{−v_j} ),   (5.29)

and thus the return is minimized by choosing p_i = r_i. This is the distribution
that the computer must choose to minimize the return to the player.
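To make (5.26) concrete, the following sketch (mine; the value vector below is a made-up example) computes the minimizing distribution p_i = 2^{−v_i}/Σ_j 2^{−v_j} and checks that the resulting return Σ p_i v_i − H(p) equals −log Σ_j 2^{−v_j}.

    import math

    def min_return_distribution(v):
        """p_i proportional to 2**(-v_i); this minimizes sum p_i v_i - H(p)."""
        weights = [2.0 ** (-vi) for vi in v]
        Z = sum(weights)
        return [w / Z for w in weights], -math.log2(Z)

    v = [1.0, 2.0, 5.0, 10.0]               # hypothetical prize values
    p, ret = min_return_distribution(v)
    H = -sum(pi * math.log2(pi) for pi in p)
    print(sum(pi * vi for pi, vi in zip(p, v)) - H)  # equals ret
    print(ret)                                        # -log2(sum_j 2^-v_j)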
20. Huffman codes with costs. Words like Run! Help! and Fire! are short, not because
they are frequently used, but perhaps because time is precious in the situations in which
these words are required. Suppose that X = i with probability p_i, i = 1, 2, . . . , m. Let
l_i be the number of binary symbols in the codeword associated with X = i, and let c_i
denote the cost per letter of the codeword when X = i. Thus the average cost C of
the description of X is C = Σ_{i=1}^m p_i c_i l_i.
(a) Minimize C over all l_1, l_2, . . . , l_m such that Σ 2^{−l_i} ≤ 1. Ignore any implied integer constraints on l_i. Exhibit the minimizing l_1^*, l_2^*, . . . , l_m^* and the associated minimum value C^*.

(b) How would you use the Huffman code procedure to minimize C over all uniquely
decodable codes? Let C_Huffman denote this minimum.

(c) Can you show that

C^* ≤ C_Huffman ≤ C^* + Σ_{i=1}^m p_i c_i ?
Solution: Huffman codes with costs.

(a) We wish to minimize C = Σ p_i c_i n_i subject to Σ 2^{−n_i} ≤ 1. We will assume
equality in the constraint and let r_i = 2^{−n_i} and Q = Σ_i p_i c_i. Let q_i =
(p_i c_i)/Q. Then q also forms a probability distribution and we can write C as

C = Σ p_i c_i n_i   (5.30)
  = Q Σ q_i log(1/r_i)   (5.31)
  = Q ( Σ q_i log(q_i/r_i) − Σ q_i log q_i )   (5.32)
  = Q ( D(q||r) + H(q) ).   (5.33)

Since the only freedom is in the choice of r_i, we can minimize C by choosing
r = q, or

n_i^* = − log ( p_i c_i / Σ_j p_j c_j ),   (5.34)

where we have ignored any integer constraints on n_i. The minimum cost C^* for
this assignment of codeword lengths is

C^* = Q H(q).   (5.35)
(b) If we use q instead of p for the Huffman procedure, we obtain a code minimizing
expected cost.
(c) Now we can account for the integer constraints. Let

n_i = ⌈− log q_i⌉.   (5.36)

Then

− log q_i ≤ n_i < − log q_i + 1.   (5.37)

Multiplying by p_i c_i and summing over i, we get the relationship

C^* ≤ C_Huffman < C^* + Q.   (5.38)
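A minimal sketch (mine, not from the text) of the procedure in part (b): run an ordinary Huffman construction on the normalized weights q_i = p_i c_i/Q and compare the resulting cost with the bounds in (5.38). The probabilities and costs below are made-up examples; here q turns out dyadic, so C_Huffman = C^*.

    import heapq, math

    def huffman_lengths(weights):
        heap = [(w, i, [i]) for i, w in enumerate(weights)]
        heapq.heapify(heap)
        lengths = [0] * len(weights)
        uid = len(weights)
        while len(heap) > 1:
            w1, _, a = heapq.heappop(heap)
            w2, _, b = heapq.heappop(heap)
            for leaf in a + b:
                lengths[leaf] += 1
            heapq.heappush(heap, (w1 + w2, uid, a + b))
            uid += 1
        return lengths

    p = [0.5, 0.25, 0.125, 0.125]           # hypothetical probabilities
    c = [1.0, 4.0, 2.0, 2.0]                # hypothetical per-letter costs
    Q = sum(pi * ci for pi, ci in zip(p, c))
    q = [pi * ci / Q for pi, ci in zip(p, c)]

    n = huffman_lengths(q)                  # Huffman on q minimizes expected cost
    C_huffman = sum(pi * ci * ni for pi, ci, ni in zip(p, c, n))
    C_star = Q * (-sum(qi * math.log2(qi) for qi in q))
    print(C_star, C_huffman, C_star + Q)    # C* <= C_Huffman < C* + Q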
21. Conditions for unique decodability. Prove that a code C is uniquely decodable if
(and only if) the extension
C^k(x_1, x_2, . . . , x_k) = C(x_1)C(x_2) · · · C(x_k)

is a one-to-one mapping from X^k to D^* for every k ≥ 1. (The “only if” part is obvious.)

Solution: Conditions for unique decodability. If C^k is not one-to-one for some k, then
C is not UD, since there exist two distinct sequences, (x_1, . . . , x_k) and (x′_1, . . . , x′_k), such
that

C^k(x_1, . . . , x_k) = C(x_1) · · · C(x_k) = C(x′_1) · · · C(x′_k) = C^k(x′_1, . . . , x′_k).
Conversely, if C is not UD then by definition there exist distinct sequences of source
symbols, (x1 , . . . , xi ) and (y1 , . . . , yj ) , such that
C(x1 )C(x2 ) · · · C(xi ) = C(y1 )C(y2 ) · · · C(yj ) .
Concatenating the input sequences (x 1 , . . . , xi ) and (y1 , . . . , yj ) , we obtain
C(x1 ) · · · C(xi )C(y1 ) · · · C(yj ) = C(y1 ) · · · C(yj )C(x1 ) · · · C(xi ) ,
which shows that C^k is not one-to-one for k = i + j.
22. Average length of an optimal code. Prove that L(p 1 , . . . , pm ) , the average codeword length for an optimal D -ary prefix code for probabilities {p 1 , . . . , pm } , is a continuous function of p1 , . . . , pm . This is true even though the optimal code changes
discontinuously as the probabilities vary.
Solution: Average length of an optimal code. The longest possible codeword in an
optimal code has n − 1 binary digits. This corresponds to a completely unbalanced tree
in which each codeword has a different length. Using a D -ary alphabet for codewords
can only decrease its length. Since we know the maximum possible codeword length,
there are only a finite number of possible codes to consider. For each candidate code C ,
the average codeword length is determined by the probability distribution p 1 , p2 , . . . , pn :
L(C) = Σ_{i=1}^n p_i l_i.
This is a linear, and therefore continuous, function of p 1 , p2 , . . . , pn . The optimal
code is the candidate code with the minimum L , and its length is the minimum of a
finite number of continuous functions and is therefore itself a continuous function of
p1 , p 2 , . . . , p n .
23. Unused code sequences. Let C be a variable length code that satisfies the Kraft
inequality with equality but does not satisfy the prefix condition.
(a) Prove that some finite sequence of code alphabet symbols is not the prefix of any
sequence of codewords.
(b) (Optional) Prove or disprove: C has infinite decoding delay.
Solution: Unused code sequences. Let C be a variable length code that satisfies the
Kraft inequality with equality but does not satisfy the prefix condition.
(a) When a prefix code satisfies the Kraft inequality with equality, every (infinite)
sequence of code alphabet symbols corresponds to a sequence of codewords, since
the probability that a randomly generated sequence begins with a codeword is

Σ_{i=1}^m D^{−l_i} = 1.
If the code does not satisfy the prefix condition, then at least one codeword, say
C(x_1), is a prefix of another, say C(x_m). Then the probability that a randomly
generated sequence begins with a codeword is at most

Σ_{i=1}^{m−1} D^{−l_i} ≤ 1 − D^{−l_m} < 1,
which shows that not every sequence of code alphabet symbols is the beginning of
a sequence of codewords.
(b) (Optional) A reference to a paper proving that C has infinite decoding delay will
be supplied later. It is easy to see by example that the decoding delay cannot be
finite. An simple example of a code that satisfies the Kraft inequality, but not the
prefix condition is a suffix code (see problem 11). The simplest non-trivial suffix
code is one for three symbols {0, 01, 11} . For such a code, consider decoding a
string 011111 . . . 1110. If the number of one’s is even, then the string must be
parsed 0,11,11, . . . ,11,0, whereas if the number of 1’s is odd, the string must be
parsed 01,11, . . . ,11. Thus the string cannot be decoded until the string of 1’s has
ended, and therefore the decoding delay could be infinite.
24. Optimal codes for uniform distributions. Consider a random variable with m
equiprobable outcomes. The entropy of this information source is obviously log_2 m
bits.

(a) Describe the optimal instantaneous binary code for this source and compute the
average codeword length L_m.
(b) For what values of m does the average codeword length L_m equal the entropy
H = log_2 m?
(c) We know that L < H + 1 for any probability distribution. The redundancy of a
variable length code is defined to be ρ = L − H. For what value(s) of m, where
2^k ≤ m ≤ 2^{k+1}, is the redundancy of the code maximized? What is the limiting
value of this worst-case redundancy as m → ∞?
Solution: Optimal codes for uniform distributions.
(a) For uniformly probable codewords, there exists an optimal binary variable length
prefix code such that the longest and shortest codewords differ by at most one bit.
If two codewords differ in length by 2 bits or more, call m_s the message with the shorter codeword
C_s and m_ℓ the message with the longer codeword C_ℓ. Change the codewords
for these two messages so that the new codeword C_s′ is the old C_s with a zero
appended (C_s′ = C_s0) and C_ℓ′ is the old C_s with a one appended (C_ℓ′ = C_s1). C_s′
and C_ℓ′ are legitimate codewords since no other codeword contained C_s as a prefix
(by definition of a prefix code), so obviously no other codeword could contain C_s′
or C_ℓ′ as a prefix. The length of the codeword for m_s increases by 1 and the
length of the codeword for m_ℓ decreases by at least 1. Since these messages are
equally likely, L′ ≤ L. By this method we can transform any optimal code into a
code in which the lengths of the shortest and longest codewords differ by at most
one bit. (In fact, it is easy to see that every optimal code has this property.)
For a source with n messages, l(m_s) = ⌊log_2 n⌋ and l(m_ℓ) = ⌈log_2 n⌉. Let d be
the difference between n and the next smaller power of 2:

d = n − 2^{⌊log_2 n⌋}.

Then the optimal code has 2d codewords of length ⌈log_2 n⌉ and n − 2d codewords
of length ⌊log_2 n⌋. This gives

L = (1/n) ( 2d⌈log_2 n⌉ + (n − 2d)⌊log_2 n⌋ )
  = (1/n) ( n⌊log_2 n⌋ + 2d )
  = ⌊log_2 n⌋ + 2d/n.
Note that d = 0 is a special case in the above equation.
(b) The average codeword length equals the entropy if and only if n is a power of 2.
To see this, consider the following calculation of L :
L = Σ_i p_i l_i = − Σ_i p_i log_2 2^{−l_i} = H + D(p||q),

where q_i = 2^{−l_i}. Therefore L = H only if p_i = q_i, that is, when all codewords
have equal length, or when d = 0.
(c) For n = 2^m + d, the redundancy r = L − H is given by

r = L − log_2 n
  = ⌊log_2 n⌋ + 2d/n − log_2 n
  = m + 2d/n − log_2(2^m + d)
  = m + 2d/(2^m + d) − ln(2^m + d)/ln 2.

Therefore

∂r/∂d = [ (2^m + d)(2) − 2d ] / (2^m + d)^2 − (1/ln 2) · 1/(2^m + d).
Setting this equal to zero implies d^* = 2^m(2 ln 2 − 1). Since there is only one
maximum, and since the function is convex ∩, the maximizing d is one of the two
integers nearest (0.3862)(2^m). The corresponding maximum redundancy is

r^* ≈ m + 2d^*/(2^m + d^*) − ln(2^m + d^*)/ln 2
    = m + 2(0.3862)(2^m)/(2^m + (0.3862)2^m) − ln(2^m + (0.3862)2^m)/ln 2
    = 0.0861.

This is achieved with arbitrary accuracy as n → ∞. (The quantity σ = 0.0861 is
one of the lesser fundamental constants of the universe. See Robert Gallager [8].)
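A quick numerical check (mine, not part of the solution): for n = 2^m + d with 0 ≤ d < 2^m, maximize the redundancy m + 2d/n − log_2 n over the integers d; already for moderate m the maximum is about 0.0861.

    import math

    m = 20
    best = 0.0
    for d in range(2 ** m):                  # n = 2^m + d ranges over [2^m, 2^(m+1))
        n = 2 ** m + d
        r = m + 2 * d / n - math.log2(n)     # redundancy for n equiprobable symbols
        best = max(best, r)
    print(best)                              # approximately 0.0861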
25. Optimal codeword lengths. Although the codeword lengths of an optimal variable
length code are complicated functions of the message probabilities {p 1 , p2 , . . . , pm } , it
can be said that less probable symbols are encoded into longer codewords. Suppose
that the message probabilities are given in decreasing order p 1 > p2 ≥ · · · ≥ pm .
(a) Prove that for any binary Huffman code, if the most probable message symbol has
probability p1 > 2/5 , then that symbol must be assigned a codeword of length 1.
(b) Prove that for any binary Huffman code, if the most probable message symbol
has probability p1 < 1/3 , then that symbol must be assigned a codeword of
length ≥ 2 .
Solution: Optimal codeword lengths. Let {c_1, c_2, . . . , c_m} be codewords of respective
lengths {l_1, l_2, . . . , l_m} corresponding to probabilities {p_1, p_2, . . . , p_m}.

(a) We prove that if p_1 > p_2 and p_1 > 2/5 then l_1 = 1. Suppose, for the sake of
contradiction, that l_1 ≥ 2. Then there are no codewords of length 1; otherwise
c_1 would not be the shortest codeword. Without loss of generality, we can assume
that c_1 begins with 00. For x, y ∈ {0, 1} let C_xy denote the set of codewords
beginning with xy. Then the sets C_01, C_10, and C_11 have total probability
1 − p_1 < 3/5, so some two of these sets (without loss of generality, C_10 and C_11)
have total probability less than 2/5. We can now obtain a better code by interchanging
the subtree of the decoding tree beginning with 1 with the subtree beginning with
00; that is, we replace codewords of the form 1x . . . by 00x . . . and codewords of
the form 00y . . . by 1y . . . . This improvement contradicts the assumption that
l_1 ≥ 2, and so l_1 = 1. (Note that p_1 > p_2 was a hidden assumption for this
problem; otherwise, for example, the probabilities {.49, .49, .02} have the optimal
code {00, 1, 01}.)
(b) The argument is similar to that of part (a). Suppose, for the sake of contradiction,
that l_1 = 1. Without loss of generality, assume that c_1 = 0. The total probability
of C_10 and C_11 is 1 − p_1 > 2/3, so at least one of these two sets (without loss
of generality, C_10) has probability greater than 1/3 > p_1. We can now obtain a better
code by interchanging the subtree of the decoding tree beginning with 0 with the
subtree beginning with 10; that is, we replace codewords of the form 10x . . . by
0x . . . and we let c_1 = 10. This improvement contradicts the assumption that
l_1 = 1, and so l_1 ≥ 2.
26. Merges. Companies with values W 1 , W2 , . . . , Wm are merged as follows. The two least
valuable companies are merged, thus forming a list of m − 1 companies. The value
of the merge is the sum of the values of the two merged companies. This continues
until one supercompany remains. Let V equal the sum of the values of the merges.
Thus V represents the total reported dollar volume of the merges. For example, if
W = (3, 3, 2, 2) , the merges yield (3, 3, 2, 2) → (4, 3, 3) → (6, 4) → (10) , and V =
4 + 6 + 10 = 20 .
(a) Argue that V is the minimum volume achievable by sequences of pair-wise merges
terminating in one supercompany. (Hint: Compare to Huffman coding.)
(b) Let W = Σ W_i, W̃_i = W_i/W, and show that the minimum merge volume V
satisfies

W H(W̃) ≤ V ≤ W H(W̃) + W.   (5.39)
Solution: Merges.
(a) We first normalize the values of the companies to add to one. The total volume of
the merges is equal to the sum of value of each company times the number of times
it takes part in a merge. This is identical to the average length of a Huffman code,
with a tree which corresponds to the merges. Since Huffman coding minimizes
average length, this scheme of merges minimizes total merge volume.
(b) Just as in the case of Huffman coding, we have

H ≤ EL < H + 1,   (5.40)

and so in this case, for the corresponding merge scheme,

W H(W̃) ≤ V ≤ W H(W̃) + W.   (5.41)
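The merge process is easy to simulate; the sketch below (mine, not part of the solution) repeatedly combines the two smallest values with a heap, accumulates the reported volume V, and checks the bounds in (5.41). For W = (3, 3, 2, 2) it reproduces V = 20.

    import heapq, math

    def merge_volume(values):
        """Total reported volume when the two least valuable companies are merged repeatedly."""
        heap = list(values)
        heapq.heapify(heap)
        volume = 0
        while len(heap) > 1:
            a = heapq.heappop(heap)
            b = heapq.heappop(heap)
            volume += a + b
            heapq.heappush(heap, a + b)
        return volume

    W = [3, 3, 2, 2]
    V = merge_volume(W)
    total = sum(W)
    H = -sum(w / total * math.log2(w / total) for w in W)
    print(V)                                    # 20
    print(total * H, V, total * H + total)      # W H(W~) <= V <= W H(W~) + W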
27. The Sardinas-Patterson test for unique decodability. A code is not uniquely
decodable if and only if there exists a finite sequence of code symbols which can be
resolved in two different ways into sequences of codewords. That is, a situation such as
|----A1----|----A2----|----A3----| · · · |----Am----|
|--B1--|------B2------|---B3---|   · · ·  |----Bn----|
must occur where each Ai and each Bi is a codeword. Note that B1 must be a
prefix of A1 with some resulting “dangling suffix.” Each dangling suffix must in turn
be either a prefix of a codeword or have another codeword as its prefix, resulting in
another dangling suffix. Finally, the last dangling suffix in the sequence must also be
a codeword. Thus one can set up a test for unique decodability (which is essentially
the Sardinas-Patterson test[12]) in the following way: Construct a set S of all possible
dangling suffixes. The code is uniquely decodable if and only if S contains no codeword.
(a) State the precise rules for building the set S .
(b) Suppose the codeword lengths are l i , i = 1, 2, . . . , m . Find a good upper bound
on the number of elements in the set S .
(c) Determine which of the following codes is uniquely decodable:
i. {0, 10, 11}.
ii. {0, 01, 11}.
iii. {0, 01, 10}.
iv. {0, 01}.
v. {00, 01, 10, 11}.
vi. {110, 11, 10}.
vii. {110, 11, 100, 00, 10}.
(d) For each uniquely decodable code in part (c), construct, if possible, an infinite
encoded sequence with a known starting point, such that it can be resolved into
codewords in two different ways. (This illustrates that unique decodability does
not imply finite decodability.) Prove that such a sequence cannot arise in a prefix
code.
Solution: Test for unique decodability.
The proof of the Sardinas-Patterson test has two parts. In the first part, we will show
that if there is a code string that has two different interpretations, then the code will fail
the test. The simplest case is when the concatenation of two codewords yields another
codeword. In this case, S2 will contain a codeword, and hence the test will fail.
In general, the code is not uniquely decodable if and only if there exists a string that admits two
different parsings into codewords, e.g.,
x1 x2 x3 x4 x5 x6 x7 x8 = x1 x2 , x3 x4 x5 , x6 x7 x8 = x1 x2 x3 x4 , x5 x6 x7 x8 .
(5.42)
In this case, S2 will contain the string x3 x4 , S3 will contain x5 , S4 will contain
x6 x7 x8 , which is a codeword. It is easy to see that this procedure will work for any
string that has two different parsings into codewords; a formal proof is slightly more
difficult and uses induction.
In the second part, we will show that if there is a codeword in one of the sets S_i, i ≥ 2,
then there exists a string with two different possible interpretations, thus showing that
the code is not uniquely decodable. To do this, we essentially reverse the construction
of the sets. We will not go into the details; the reader is referred to the original paper.
(a) Let S_1 be the original set of codewords. We construct S_{i+1} from S_i as follows:
a string y is in S_{i+1} iff there is a codeword x in S_1 such that xy is in S_i, or if
there exists a z ∈ S_i such that zy is in S_1 (i.e., is a codeword). Then the code
is uniquely decodable iff none of the S_i, i ≥ 2, contains a codeword. Thus the set
S = ∪_{i≥2} S_i.
(b) A simple upper bound can be obtained from the fact that all strings in the sets
S_i have length less than l_max, and therefore the maximum number of elements in
S is less than 2^{l_max}.
(c)
i. {0, 10, 11} . This code is instantaneous and hence uniquely decodable.
ii. {0, 01, 11} . This code is a suffix code (see problem 11). It is therefore uniquely
decodable. The sets in the Sardinas-Patterson test are S 1 = {0, 01, 11} ,
S2 = {1} = S3 = S4 = . . . .
iii. {0, 01, 10} . This code is not uniquely decodable. The sets in the test are
S1 = {0, 01, 10} , S2 = {1} , S3 = {0} , . . . . Since 0 is codeword, this code
fails the test. It is easy to see otherwise that the code is not UD - the string
010 has two valid parsings.
iv. {0, 01}. This code is a suffix code and is therefore UD. The test produces
sets S1 = {0, 01} , S2 = {1} , S3 = φ .
v. {00, 01, 10, 11} . This code is instantaneous and therefore UD.
vi. {110, 11, 10} . This code is uniquely decodable, by the Sardinas-Patterson
test, since S1 = {110, 11, 10} , S2 = {0} , S3 = φ .
vii. {110, 11, 100, 00, 10} . This code is UD, because by the Sardinas Patterson
test, S1 = {110, 11, 100, 00, 10} , S2 = {0} , S3 = {0} , etc.
(d) We can produce infinite strings which can be decoded in two ways only for examples
where the Sardinas-Patterson test produces a repeating set. For example, in part
(ii), the string 011111 . . . could be parsed either as 0,11,11, . . . or as 01,11,11, . . . .
Similarly for (vii), the string 10000 . . . could be parsed as 100,00,00, . . . or as
10,00,00, . . . . For the instantaneous codes, it is not possible to construct such a
string, since we can decode as soon as we see a codeword string, and there is no
way that we would need to wait to decode.
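The test described in part (a) is easy to mechanize. The following sketch (my own implementation, not part of the original solution) builds the sets of dangling suffixes and declares a code uniquely decodable iff no dangling suffix is itself a codeword; running it on the seven codes of part (c) reproduces the answers above.

    def sardinas_patterson(code):
        """Return True iff the code is uniquely decodable."""
        codewords = set(code)
        # dangling suffixes created by one codeword being a prefix of another
        current = set()
        for a in codewords:
            for b in codewords:
                if a != b and b.startswith(a):
                    current.add(b[len(a):])
        seen = set()
        while current:
            if current & codewords:          # a dangling suffix is itself a codeword
                return False
            seen |= current
            nxt = set()
            for s in current:
                for c in codewords:
                    if c.startswith(s):
                        nxt.add(c[len(s):])
                    if s.startswith(c):
                        nxt.add(s[len(c):])
            nxt.discard("")
            current = nxt - seen
        return True

    codes = [["0", "10", "11"], ["0", "01", "11"], ["0", "01", "10"], ["0", "01"],
             ["00", "01", "10", "11"], ["110", "11", "10"],
             ["110", "11", "100", "00", "10"]]
    for c in codes:
        print(c, sardinas_patterson(c))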
28. Shannon code. Consider the following method for generating a code for a random
variable X which takes on m values {1, 2, . . . , m} with probabilities p 1 , p2 , . . . , pm .
Assume that the probabilities are ordered so that p 1 ≥ p2 ≥ · · · ≥ pm . Define
F_i = Σ_{k=1}^{i−1} p_k,   (5.43)

the sum of the probabilities of all symbols less than i. Then the codeword for i is the
number F_i ∈ [0, 1] rounded off to l_i bits, where l_i = ⌈log(1/p_i)⌉.
(a) Show that the code constructed by this process is prefix-free and the average length
satisfies
H(X) ≤ L < H(X) + 1.
(5.44)
(b) Construct the code for the probability distribution (0.5, 0.25, 0.125, 0.125) .
Solution: Shannon code.
(a) Since l_i = ⌈log(1/p_i)⌉, we have

log(1/p_i) ≤ l_i < log(1/p_i) + 1,   (5.45)

which implies that

H(X) ≤ L = Σ p_i l_i < H(X) + 1.   (5.46)

The difficult part is to prove that the code is a prefix code. By the choice of l_i,
we have

2^{−l_i} ≤ p_i < 2^{−(l_i − 1)}.   (5.47)

Thus F_j, j > i, differs from F_i by at least 2^{−l_i}, and will therefore differ from F_i
in at least one place in the first l_i bits of the binary expansion of F_i. Thus the
codeword for F_j, j > i, which has length l_j ≥ l_i, differs from the codeword for
F_i at least once in the first l_i places. Thus no codeword is a prefix of any other
codeword.
(b) We build the following table:

    Symbol   Probability   F_i (decimal)   F_i (binary)   l_i   Codeword
    1        0.5           0.0             0.0            1     0
    2        0.25          0.5             0.10           2     10
    3        0.125         0.75            0.110          3     110
    4        0.125         0.875           0.111          3     111
The Shannon code in this case achieves the entropy bound (1.75 bits) and is
optimal.
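A short sketch (mine, not from the text) of the construction: sort the probabilities, form the cumulative sums F_i, and take the first ⌈log(1/p_i)⌉ bits of each F_i. For the dyadic distribution of part (b) it reproduces the codewords in the table; for non-dyadic probabilities the floating-point expansion below is only illustrative.

    import math

    def shannon_code(probs):
        """Codeword for symbol i is F_i truncated to ceil(log 1/p_i) bits."""
        probs = sorted(probs, reverse=True)
        codewords, F = [], 0.0
        for p in probs:
            l = math.ceil(-math.log2(p))
            bits, x = "", F
            for _ in range(l):              # first l bits of the binary expansion of F
                x *= 2
                bit = int(x)
                bits += str(bit)
                x -= bit
            codewords.append(bits)
            F += p
        return codewords

    print(shannon_code([0.5, 0.25, 0.125, 0.125]))   # ['0', '10', '110', '111']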
29. Optimal codes for dyadic distributions. For a Huffman code tree, define the
probability of a node as the sum of the probabilities of all the leaves under that node.
Let the random variable X be drawn from a dyadic distribution, i.e., p(x) = 2^{−i} for
some i, for all x ∈ X. Now consider a binary Huffman code for this distribution.
(a) Argue that for any node in the tree, the probability of the left child is equal to the
probability of the right child.
(b) Let X1 , X2 , . . . , Xn be drawn i.i.d. ∼ p(x) . Using the Huffman code for p(x) , we
map X1 , X2 , . . . , Xn to a sequence of bits Y1 , Y2 , . . . , Yk(X1 ,X2 ,...,Xn ) . (The length
of this sequence will depend on the outcome X 1 , X2 , . . . , Xn .) Use part (a) to
argue that the sequence Y1 , Y2 , . . . , forms a sequence of fair coin flips, i.e., that
Pr{Y_i = 0} = Pr{Y_i = 1} = 1/2, independent of Y_1, Y_2, . . . , Y_{i−1}.
Thus the entropy rate of the coded sequence is 1 bit per symbol.
(c) Give a heuristic argument why the encoded sequence of bits for any code that
achieves the entropy bound cannot be compressible and therefore should have an
entropy rate of 1 bit per symbol.
Solution: Optimal codes for dyadic distributions.
(a) For a dyadic distribution, the Huffman code achieves the entropy bound. The
code tree constructed by the Huffman algorithm is a complete tree with leaves at
depth l_i with probability p_i = 2^{−l_i}.
For such a complete binary tree, we can prove the following properties:
• The probability of any internal node at depth k is 2^{−k}.
We can prove this by induction. Clearly, it is true for a tree with 2 leaves.
Assume that it is true for all trees with n leaves. For any tree with n + 1
leaves, at least two of the leaves have to be siblings on the tree (else the tree
would not be complete). Let the level of these siblings be j. The parent of
these two siblings (at level j − 1) has probability 2^{−j} + 2^{−j} = 2^{−(j−1)}.
We can now replace the two siblings with their parent, without changing the
probability of any other internal node. But now we have a tree with n leaves
which satisfies the required property. Thus, by induction, the property is true
for all complete binary trees.
• From the above property, it follows immediately that the probability of the left
child is equal to the probability of the right child.
(b) For a sequence X1 , X2 , we can construct a code tree by first constructing the
optimal tree for X1 , and then attaching the optimal tree for X 2 to each leaf of
the optimal tree for X1 . Proceeding this way, we can construct the code tree for
X1 , X2 , . . . , Xn . When Xi are drawn i.i.d. according to a dyadic distribution, it
is easy to see that the code tree constructed will also be a complete binary tree
with the properties in part (a). Thus the probability of the first bit being 1 is 1/2,
and at any internal node, the probability of the next bit produced by the code
being 1 is equal to the probability of the next bit being 0. Thus the bits produced
by the code are i.i.d. Bernoulli(1/2), and the entropy rate of the coded sequence
is 1 bit per symbol.
(c) Assume that we have a coded sequence of bits from a code that met the entropy
bound with equality. If the coded sequence were compressible, then we could use
the compressed version of the coded sequence as our code, and achieve an average
length less than the entropy bound, which would contradict the bound. Thus the
coded sequence cannot be compressible, and thus must have an entropy rate of 1
bit/symbol.
30. Relative entropy is cost of miscoding: Let the random variable X have five
possible outcomes {1, 2, 3, 4, 5} . Consider two distributions p(x) and q(x) on this
random variable
    Symbol   p(x)    q(x)   C1(x)   C2(x)
    1        1/2     1/2    0       0
    2        1/4     1/8    10      100
    3        1/8     1/8    110     101
    4        1/16    1/8    1110    110
    5        1/16    1/8    1111    111
(a) Calculate H(p) , H(q) , D(p||q) and D(q||p) .
(b) The last two columns above represent codes for the random variable. Verify that
the average length of C1 under p is equal to the entropy H(p) . Thus C 1 is
optimal for p . Verify that C2 is optimal for q .
(c) Now assume that we use code C2 when the distribution is p. What is the average
length of the codewords? By how much does it exceed the entropy H(p)?
(d) What is the loss if we use code C1 when the distribution is q ?
Solution: Cost of miscoding

(a) H(p) = (1/2) log 2 + (1/4) log 4 + (1/8) log 8 + (1/16) log 16 + (1/16) log 16 = 1.875 bits.
H(q) = (1/2) log 2 + (1/8) log 8 + (1/8) log 8 + (1/8) log 8 + (1/8) log 8 = 2 bits.
D(p||q) = (1/2) log[(1/2)/(1/2)] + (1/4) log[(1/4)/(1/8)] + (1/8) log[(1/8)/(1/8)] + (1/16) log[(1/16)/(1/8)] + (1/16) log[(1/16)/(1/8)] = 0.125 bits.
D(q||p) = (1/2) log[(1/2)/(1/2)] + (1/8) log[(1/8)/(1/4)] + (1/8) log[(1/8)/(1/8)] + (1/8) log[(1/8)/(1/16)] + (1/8) log[(1/8)/(1/16)] = 0.125 bits.
(b) The average length of C1 for p(x) is 1.875 bits, which is the entropy of p . Thus
C1 is an efficient code for p(x) . Similarly, the average length of code C 2 under
q(x) is 2 bits, which is the entropy of q . Thus C 2 is an efficient code for q .
(c) If we use code C2 for p(x), then the average length is (1/2)·1 + (1/4)·3 + (1/8)·3 + (1/16)·3 + (1/16)·3 = 2 bits. It exceeds the entropy by 0.125 bits, which is the same as
D(p||q).
(d) Similarly, using code C1 for q has an average length of 2.125 bits, which exceeds
the entropy of q by 0.125 bits, which is D(q||p).
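The numbers in parts (a)–(d) are easy to verify; a few lines (mine, not part of the solution) suffice:

    import math

    p = [1/2, 1/4, 1/8, 1/16, 1/16]
    q = [1/2, 1/8, 1/8, 1/8, 1/8]
    l1 = [1, 2, 3, 4, 4]        # lengths of C1
    l2 = [1, 3, 3, 3, 3]        # lengths of C2

    H = lambda d: -sum(x * math.log2(x) for x in d)
    D = lambda a, b: sum(x * math.log2(x / y) for x, y in zip(a, b))

    print(H(p), H(q))                         # 1.875, 2.0
    print(D(p, q), D(q, p))                   # 0.125, 0.125
    print(sum(x * l for x, l in zip(p, l2)))  # 2.0   = H(p) + D(p||q)
    print(sum(x * l for x, l in zip(q, l1)))  # 2.125 = H(q) + D(q||p)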
31. Non-singular codes: The discussion in the text focused on instantaneous codes, with
extensions to uniquely decodable codes. Both these are required in cases when the
code is to be used repeatedly to encode a sequence of outcomes of a random variable.
But if we need to encode only one outcome and we know when we have reached the
end of a codeword, we do not need unique decodability - only the fact that the code is
non-singular would suffice. For example, if a random variable X takes on 3 values a,
b and c, we could encode them by 0, 1, and 00. Such a code is non-singular but not
uniquely decodable.
In the following, assume that we have a random variable X which takes on m values
with probabilities p1 , p2 , . . . , pm and that the probabilities are ordered so that p 1 ≥
p2 ≥ . . . ≥ p m .
(a) By viewing the non-singular binary code as a ternary code with three symbols,
0, 1 and “STOP”, show that the expected length of a non-singular code L_{1:1} for a
random variable X satisfies the following inequality:

L_{1:1} ≥ H_2(X)/log_2 3 − 1,   (5.48)

where H_2(X) is the entropy of X in bits. Thus the average length of a non-singular code is at least a constant fraction of the average length of an instantaneous code.
(b) Let L_INST be the expected length of the best instantaneous code and L^*_{1:1} be the
expected length of the best non-singular code for X. Argue that L^*_{1:1} ≤ L^*_INST ≤
H(X) + 1.
(c) Give a simple example where the average length of the non-singular code is less
than the entropy.
(d) The set of codewords available for a non-singular code is {0, 1, 00, 01, 10, 11, 000, . . .}.
Since L_{1:1} = Σ_{i=1}^m p_i l_i, show that this is minimized if we allot the shortest codewords to the most probable symbols.
Thus l_1 = l_2 = 1, l_3 = l_4 = l_5 = l_6 = 2, etc. Show
that in general l_i = ⌈log(i/2 + 1)⌉, and therefore L^*_{1:1} = Σ_{i=1}^m p_i ⌈log(i/2 + 1)⌉.
(e) The previous part shows that it is easy to find the optimal non-singular code for
a distribution. However, it is a little more tricky to deal with the average length
of this code. We now bound this average length. It follows from the previous part
that L^*_{1:1} ≥ L̃ = Σ_{i=1}^m p_i log(i/2 + 1). Consider the difference

F(p) = H(X) − L̃ = − Σ_{i=1}^m p_i log p_i − Σ_{i=1}^m p_i log(i/2 + 1).   (5.49)

Prove by the method of Lagrange multipliers that the maximum of F(p) occurs
when p_i = c/(i+2), where c = 1/(H_{m+2} − H_2) and H_k is the sum of the harmonic
series, i.e.,

H_k = Σ_{i=1}^k 1/i.   (5.50)

(This can also be done using the non-negativity of relative entropy.)

(f) Complete the arguments for

H(X) − L^*_{1:1} ≤ H(X) − L̃   (5.51)
                ≤ log(2(H_{m+2} − H_2)).   (5.52)

Now it is well known (see, e.g., Knuth, “Art of Computer Programming”, Vol.
1) that H_k ≈ ln k (more precisely, H_k = ln k + γ + 1/(2k) − 1/(12k^2) + 1/(120k^4) − ε, where
0 < ε < 1/(252k^6), and γ = Euler’s constant = 0.577 . . . ). Either using this or the
simple approximation H_k ≤ ln k + 1, which can be proved by integration of
1/x, it can be shown that H(X) − L^*_{1:1} < log log m + 2. Thus we have

H(X) − log log |X| − 2 ≤ L^*_{1:1} ≤ H(X) + 1.   (5.53)
A non-singular code cannot do much better than an instantaneous code!
Solution:
(a) In the text, it is proved that the average length of any prefix-free code in a D-ary
alphabet is at least H_D(X), the D-ary entropy. Now if we start with any
binary non-singular code and add the additional symbol “STOP” at the end, the
new code is prefix-free in the alphabet of 0,1, and “STOP” (since “STOP” occurs
only at the end of codewords, and every codeword has a “STOP” symbol, so the
only way a code word can be a prefix of another is if they were equal). Thus each
code word in the new alphabet is one symbol longer than the binary codewords,
and the average length is 1 symbol longer.
Thus we have L_{1:1} + 1 ≥ H_3(X), or L_{1:1} ≥ H_2(X)/log 3 − 1 = 0.63 H(X) − 1.
(b) Since an instantaneous code is also a non-singular code, the best non-singular code
is at least as good as the best instantaneous code. Since the best instantaneous
code has average length ≤ H(X) + 1 , we have L ∗1:1 ≤ L∗IN ST ≤ H(X) + 1 .
(c) For a 2 symbol alphabet, the best non-singular code and the best instantaneous
code are the same. So the simplest example where they differ is when |X | = 3 .
In this case, the simplest (and it turns out, optimal) non-singular code has three
codewords 0, 1, 00 . Assume that each of the symbols is equally likely. Then
H(X) = log 3 = 1.58 bits, whereas the average length of the non-singular code
is (1/3)·1 + (1/3)·1 + (1/3)·2 = 4/3 = 1.333 < H(X). Thus a non-singular code could do
better than entropy.
(d) For a given set of codeword lengths, the fact that the shortest codewords should be
allotted to the most probable symbols is proved in Lemma 5.8.1, part 1, of EIT.
This result is a general version of what is called the Hardy-Littlewood-Polya inequality, which says that if a < b , c < d , then ad + bc < ac + bd . The general
version of the Hardy-Littlewood-Polya inequality states that if we were given two
sets of numbers A = {aj } and B = {bj } each of size m , and let a[i] be the i -th
largest element of A and b[i] be the i -th largest element of set B . Then
Σ_{i=1}^m a_{[i]} b_{[m+1−i]} ≤ Σ_{i=1}^m a_i b_i ≤ Σ_{i=1}^m a_{[i]} b_{[i]}.   (5.54)
An intuitive explanation of this inequality is that you can consider the a_i's to be the
positions of hooks along a rod, and the b_i's to be weights to be attached to the hooks.
To maximize the moment about one end, you should attach the largest weights to
the furthest hooks.
The set of available codewords is the set of all possible sequences. Since the only
restriction is that the code be non-singular, each source symbol could be allotted
to any codeword in the set {0, 1, 00, . . .}.
Thus we should allot the codewords 0 and 1 to the two most probable source
symbols, i.e., to probabilities p_1 and p_2. Thus l_1 = l_2 = 1. Similarly, l_3 = l_4 =
l5 = l6 = 2 (corresponding to the codewords 00, 01, 10 and 11). The next 8
symbols will use codewords of length 3, etc.
We will now find the general form for l_i. We can prove it by induction, but we will
derive the result from first principles. Let c_k = Σ_{j=1}^{k−1} 2^j. Then by the arguments of
the previous paragraph, all source symbols of index c_k + 1, c_k + 2, . . . , c_k + 2^k = c_{k+1}
use codewords of length k. Now by using the formula for the sum of a geometric
series, it is easy to see that

c_k = Σ_{j=1}^{k−1} 2^j = 2 Σ_{j=0}^{k−2} 2^j = 2 (2^{k−1} − 1)/(2 − 1) = 2^k − 2.   (5.55)

Thus all sources with index i, where 2^k − 1 ≤ i ≤ 2^k − 2 + 2^k = 2^{k+1} − 2, use
codewords of length k. This corresponds to 2^k < i + 2 ≤ 2^{k+1}, or k < log(i + 2) ≤
k + 1, or k − 1 < log((i + 2)/2) ≤ k. Thus the length of the codeword for the i-th symbol is k = ⌈log((i + 2)/2)⌉. Thus the best non-singular code assigns codeword
length l_i^* = ⌈log(i/2 + 1)⌉ to symbol i, and therefore L^*_{1:1} = Σ_{i=1}^m p_i ⌈log(i/2 + 1)⌉.
- $m
i=1 pi log
(e) Since 2log(i/2 + 1)3 ≥ log(i/2 + 1) , it follows that L ∗1:1 ≥ L̃=
Consider the difference
F (p) = H(X) − L̃ = −
m
!
i=1
pi log pi −
m
!
pi log
i=1
*
+
i
+1 .
2
0
i
2
1
+1 .
(5.56)
We want to maximize this function over all probability distributions, and therefore
we use the method of Lagrange multipliers with the constraint Σ p_i = 1.
Therefore let

J(p) = − Σ_{i=1}^m p_i log p_i − Σ_{i=1}^m p_i log(i/2 + 1) + λ( Σ_{i=1}^m p_i − 1).   (5.57)

Then differentiating with respect to p_i and setting to 0, we get

∂J/∂p_i = −1 − log p_i − log(i/2 + 1) + λ = 0,   (5.58)

log p_i = λ − 1 − log((i + 2)/2),   (5.59)

p_i = 2^{λ−1} · 2/(i + 2).   (5.60)

Now substituting this in the constraint that Σ p_i = 1, we get

2^λ Σ_{i=1}^m 1/(i + 2) = 1,   (5.61)

or 2^λ = 1/( Σ_i 1/(i + 2) ). Now using the definition H_k = Σ_{j=1}^k 1/j, it is obvious that

Σ_{i=1}^m 1/(i + 2) = Σ_{i=1}^{m+2} 1/i − 1 − 1/2 = H_{m+2} − H_2.   (5.62)

Thus 2^λ = 1/(H_{m+2} − H_2), and

p_i = (1/(H_{m+2} − H_2)) · 1/(i + 2).   (5.63)
Substituting this value of p_i in the expression for F(p), we obtain

F(p) = − Σ_{i=1}^m p_i log p_i − Σ_{i=1}^m p_i log(i/2 + 1)   (5.64)
     = − Σ_{i=1}^m p_i log( p_i (i + 2)/2 )   (5.65)
     = − Σ_{i=1}^m p_i log( 1/(2(H_{m+2} − H_2)) )   (5.66)
     = log 2(H_{m+2} − H_2).   (5.67)

Thus the extremal value of F(p) is log 2(H_{m+2} − H_2). We have not shown that
it is a maximum; that can be shown by taking the second derivative. But as usual,
it is easier to see it using relative entropy. Looking at the expressions above, we can
see that if we define q_i = (1/(H_{m+2} − H_2)) · 1/(i + 2), then q_i is a probability distribution (i.e.,
q_i ≥ 0, Σ q_i = 1). Also, (i + 2)/2 = 1/(2(H_{m+2} − H_2) q_i), and substituting this in the expression
for F(p), we obtain
F(p) = − Σ_{i=1}^m p_i log p_i − Σ_{i=1}^m p_i log(i/2 + 1)   (5.68)
     = − Σ_{i=1}^m p_i log p_i − Σ_{i=1}^m p_i log((i + 2)/2)   (5.69)
     = − Σ_{i=1}^m p_i log p_i − Σ_{i=1}^m p_i log( 1/(2(H_{m+2} − H_2) q_i) )   (5.70)
     = − Σ_{i=1}^m p_i log(p_i/q_i) − Σ_{i=1}^m p_i log( 1/(2(H_{m+2} − H_2)) )   (5.71)
     = log 2(H_{m+2} − H_2) − D(p||q)   (5.72)
     ≤ log 2(H_{m+2} − H_2),   (5.73)

with equality iff p = q. Thus the maximum value of F(p) is log 2(H_{m+2} − H_2).

(f)

H(X) − L^*_{1:1} ≤ H(X) − L̃   (5.74)
                ≤ log 2(H_{m+2} − H_2).   (5.75)

The first inequality follows from the definition of L̃ and the second from the result
of the previous part.
To complete the proof, we will use the simple inequality H_k ≤ ln k + 1, which can
be shown by integrating 1/x between 1 and k. Thus H_{m+2} ≤ ln(m + 2) + 1, and
2(H_{m+2} − H_2) = 2(H_{m+2} − 1 − 1/2) ≤ 2(ln(m + 2) + 1 − 1 − 1/2) ≤ 2 ln(m + 2) =
2 log(m + 2)/log e ≤ 2 log(m + 2) ≤ 2 log m^2 = 4 log m, where the last inequality
is true for m ≥ 2. Therefore

H(X) − L^*_{1:1} ≤ log 2(H_{m+2} − H_2) ≤ log(4 log m) = log log m + 2.   (5.76)
We therefore have the following bounds on the average length of a non-singular
code:

H(X) − log log |X| − 2 ≤ L^*_{1:1} ≤ H(X) + 1.   (5.77)
A non-singular code cannot do much better than an instantaneous code!
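The length assignment derived in part (d) of this solution, l_i = ⌈log_2(i/2 + 1)⌉ for the i-th most probable symbol, is one line of code; the sketch below (mine, not part of the solution) lists the lengths for the first ten symbols.

    import math

    def nonsingular_lengths(m):
        """Codeword lengths of the optimal non-singular binary code for m symbols,
        assuming the symbols are listed in decreasing order of probability."""
        return [math.ceil(math.log2(i / 2 + 1)) for i in range(1, m + 1)]

    print(nonsingular_lengths(10))   # [1, 1, 2, 2, 2, 2, 3, 3, 3, 3]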
32. Bad wine. One is given 6 bottles of wine. It is known that precisely one bottle has gone
bad (tastes terrible). From inspection of the bottles it is determined that the probability
p_i that the i-th bottle is bad is given by (p_1, p_2, . . . , p_6) = (8/23, 6/23, 4/23, 2/23, 2/23, 1/23). Tasting
will determine the bad wine.
Suppose you taste the wines one at a time. Choose the order of tasting to minimize
the expected number of tastings required to determine the bad bottle. Remember, if
the first 5 wines pass the test you don’t have to taste the last.
(a) What is the expected number of tastings required?
(b) Which bottle should be tasted first?
Now you get smart. For the first sample, you mix some of the wines in a fresh glass and
sample the mixture. You proceed, mixing and tasting, stopping when the bad bottle
has been determined.
(c) What is the minimum expected number of tastings required to determine the bad
wine?
(d) What mixture should be tasted first?
Solution: Bad Wine
(a) If we taste one bottle at a time, to minimize the expected number of tastings the
order of tasting should be from the most likely wine to be bad to the least. The
expected number of tastings required is
Σ_{i=1}^6 p_i l_i = 1 × 8/23 + 2 × 6/23 + 3 × 4/23 + 4 × 2/23 + 5 × 2/23 + 5 × 1/23 = 55/23 = 2.39.
(b) The first bottle to be tasted should be the one with probability 8/23.
(c) The idea is to use Huffman coding. With Huffman coding, we get codeword lengths
as (2, 2, 2, 3, 4, 4) . The expected number of tastings required is
Σ_{i=1}^6 p_i l_i = 2 × 8/23 + 2 × 6/23 + 2 × 4/23 + 3 × 2/23 + 4 × 2/23 + 4 × 1/23 = 54/23 = 2.35.
(d) The mixture of the first and second bottles should be tasted first.
33. Huffman vs. Shannon. A random variable X takes on three values with probabilities 0.6, 0.3, and 0.1.
(a) What are the lengths of the binary Huffman codewords for X? What are the
lengths of the binary Shannon codewords (l(x) = ⌈log(1/p(x))⌉) for X?
(b) What is the smallest integer D such that the expected Shannon codeword length
with a D-ary alphabet equals the expected Huffman codeword length with a D-ary alphabet?
Solution: Huffman vs. Shannon
(a) It is obvious that a Huffman code for the distribution (0.6, 0.3, 0.1) is (1, 01, 00),
with codeword lengths (1, 2, 2). The Shannon code would use lengths ⌈log(1/p)⌉,
which gives lengths (1, 2, 4) for the three symbols.
(b) For any D > 2, the Huffman codewords for the three symbols are all one character long. The
Shannon code length ⌈log_D(1/p)⌉ would be equal to 1 for all symbols if ⌈log_D(1/0.1)⌉ = 1,
i.e., if D = 10. Hence for D ≥ 10, the Shannon code is also optimal.
34. Huffman algorithm for tree construction. Consider the following problem: m
binary signals S1 , S2 , . . . , Sm are available at times T1 ≤ T2 ≤ . . . ≤ Tm , and we
would like to find their sum S1 ⊕ S2 ⊕ · · · ⊕ Sm using 2-input gates, each gate with
1 time unit delay, so that the final result is available as quickly as possible. A simple
greedy algorithm is to combine the earliest two results, forming the partial result at
time max(T1 , T2 ) + 1 . We now have a new problem with S 1 ⊕ S2 , S3 , . . . , Sm , available
at times max(T1 , T2 ) + 1, T3 , . . . , Tm . We can now sort this list of T’s, and apply the
same merging step again, repeating this until we have the final result.
(a) Argue that the above procedure is optimal, in that it constructs a circuit for which
the final result is available as quickly as possible.
(b) Show that this procedure finds the tree that minimizes

C(T) = max_i (T_i + l_i),   (5.78)

where T_i is the time at which the result allotted to the i-th leaf is available, and
l_i is the length of the path from the i-th leaf to the root.

(c) Show that

C(T) ≥ log_2 ( Σ_i 2^{T_i} )   (5.79)

for any tree T.

(d) Show that there exists a tree such that

C(T) ≤ log_2 ( Σ_i 2^{T_i} ) + 1.   (5.80)

Thus log_2 ( Σ_i 2^{T_i} ) is the analog of entropy for this problem.

Solution: Tree construction.
(a) The proof is identical to the proof of optimality of Huffman coding. We first show
that for the optimal tree, if T_i < T_j then l_i ≥ l_j. The proof of this is, as in the
case of Huffman coding, by contradiction. Assume otherwise, i.e., that T_i < T_j
and l_i < l_j. Then by exchanging the inputs we obtain a tree with no greater total
cost, since

max{T_i + l_i, T_j + l_j} ≥ max{T_i + l_j, T_j + l_i}.   (5.81)
Thus the longest branches are associated with the earliest times.
The rest of the proof is identical to the Huffman proof. We show that the longest
branches correspond to the two earliest times, and that they could be taken as
siblings (inputs to the same gate). Then we can reduce the problem to constructing
the optimal tree for a smaller problem. By induction, we extend the optimality to
the larger problem, proving the optimality of the above algorithm.
Given any tree of gates, the earliest that the output corresponding to a particular
signal would be available is Ti + li , since the signal undergoes li gate delays. Thus
maxi (Ti + li ) is a lower bound on the time at which the final answer is available.
The fact that the tree achieves this bound can be shown by induction. For any
internal node of the tree, the output is available at time equal to the maximum of
the input times plus 1. Thus for the gates connected to the inputs T i and Tj , the
output is available at time max(Ti , Tj ) + 1 . For any node, the output is available
at time equal to maximum of the times at the leaves plus the gate delays to get
from the leaf to the node. This result extends to the complete tree, and for the
root, the time at which the final result is available is max i (Ti + li ) . The above
algorithm minimizes this cost.
(b) Let c_1 = Σ_i 2^{T_i} and c_2 = Σ_i 2^{−l_i}. By the Kraft inequality, c_2 ≤ 1. Now let
p_i = 2^{T_i}/Σ_j 2^{T_j} and r_i = 2^{−l_i}/Σ_j 2^{−l_j}. Clearly, p_i and r_i are probability mass
functions. Also, we have T_i = log(p_i c_1) and l_i = − log(r_i c_2). Then

C(T) = max_i (T_i + l_i)   (5.82)
     = max_i ( log(p_i c_1) − log(r_i c_2) )   (5.83)
     = log c_1 − log c_2 + max_i log(p_i/r_i).   (5.84)

Now the maximum of any random variable is greater than or equal to its average under any
distribution, and therefore

C(T) ≥ log c_1 − log c_2 + Σ_i p_i log(p_i/r_i)   (5.85)
     = log c_1 − log c_2 + D(p||r).   (5.86)
Since − log c_2 ≥ 0 and D(p||r) ≥ 0, we have

C(T) ≥ log c_1,   (5.87)

which is the desired result.
(c) From the previous part, we achieve the lower bound if p_i = r_i and c_2 = 1.
However, since the l_i's are constrained to be integers, we cannot achieve equality
in all cases. Instead, if we let

l_i = ⌈log(1/p_i)⌉ = ⌈log( Σ_j 2^{T_j} / 2^{T_i} )⌉,   (5.88)

it is easy to verify that Σ 2^{−l_i} ≤ Σ p_i = 1, and thus we can construct a tree
that achieves

T_i + l_i ≤ log( Σ_j 2^{T_j} ) + 1   (5.89)

for all i. Thus this tree achieves within 1 unit of the lower bound.

Clearly, log( Σ_j 2^{T_j} ) is the equivalent of entropy for this problem!
35. Generating random variables. One wishes to generate a random variable X with

X = 1 with probability p, and X = 0 with probability 1 − p.   (5.90)

You are given fair coin flips Z_1, Z_2, . . . . Let N be the (random) number of flips needed
to generate X. Find a good way to use Z_1, Z_2, . . . to generate X. Show that EN ≤ 2.
Solution: We expand p = 0.p_1 p_2 . . . as a binary number. Let U = 0.Z_1 Z_2 . . . , the sequence Z treated as a binary number. It is well known that U is uniformly distributed
on [0, 1). Thus, we generate X = 1 if U < p and 0 otherwise.

The procedure for generating X therefore examines Z_1, Z_2, . . . and compares them with
p_1, p_2, . . . ; it generates a 1 the first time one of the Z_i's is less than the corresponding p_i and generates a 0 the first time one of the Z_i's is greater than the corresponding
p_i. Thus the probability that X is generated after seeing the first bit of Z is the
probability that Z_1 ≠ p_1, i.e., 1/2. Similarly, X is generated after 2
bits of Z if Z_1 = p_1 and Z_2 ≠ p_2, which occurs with probability 1/4. Thus

EN = 1·(1/2) + 2·(1/4) + 3·(1/8) + · · ·   (5.91)
   = 2.   (5.92)
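The procedure above is easy to code. The sketch below (mine, not from the text) draws fair bits, compares them with the binary expansion of p computed on the fly, and empirically gives E[X] ≈ p and E[N] ≈ 2; floating-point round-off in the expansion of p is ignored here.

    import random

    def generate_x(p, rng=random):
        """Return (X, N): X ~ Bernoulli(p) and N = number of fair coin flips used."""
        n = 0
        while True:
            n += 1
            z = rng.randint(0, 1)       # fair coin flip Z_n
            p *= 2                      # next bit of the binary expansion of p
            bit = int(p >= 1)
            p -= bit
            if z != bit:
                return (1 if z < bit else 0), n

    random.seed(0)
    samples = [generate_x(0.3) for _ in range(100000)]
    print(sum(x for x, _ in samples) / len(samples))   # about 0.3
    print(sum(n for _, n in samples) / len(samples))   # about 2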
36. Optimal word lengths.
(a) Can l = (1, 2, 2) be the word lengths of a binary Huffman code? What about
(2, 2, 3, 3)?
(b) What word lengths l = (l1 , l2 , . . .) can arise from binary Huffman codes?
Solution: Optimal Word Lengths
We first answer (b) and apply the result to (a).
(b) Word lengths of a binary Huffman code must satisfy the Kraft inequality with
equality, i.e., Σ_i 2^{−l_i} = 1. An easy way to see this is the following: every node in
the tree has a sibling (a property of optimal binary codes), and if we assign each leaf a
‘weight’, namely 2^{−l_i}, then 2 × 2^{−l_i} is the weight of the parent node. Thus,
‘collapsing’ the tree back, we have that Σ_i 2^{−l_i} = 1.
(a) Clearly, (1, 2, 2) satisfies Kraft with equality, while (2, 2, 3, 3) does not. Thus,
(1, 2, 2) can arise from Huffman code, while (2, 2, 3, 3) cannot.
37. Codes. Which of the following codes are
(a) uniquely decodable?
(b) instantaneous?
C1 = {00, 01, 0}
C2 = {00, 01, 100, 101, 11}
C3 = {0, 10, 110, 1110, . . .}
C4 = {0, 00, 000, 0000}
Solution: Codes.
(a) C1 = {00, 01, 0} is uniquely decodable (suffix-free) but not instantaneous.
(b) C2 = {00, 01, 100, 101, 11} is prefix-free (instantaneous).
(c) C3 = {0, 10, 110, 1110, . . .} is instantaneous.
(d) C4 = {0, 00, 000, 0000} is neither uniquely decodable nor instantaneous.
38. Huffman. Find the Huffman D-ary code for (p_1, p_2, p_3, p_4, p_5, p_6) = (6/25, 6/25, 4/25, 4/25, 3/25, 2/25)
and the expected word length

(a) for D = 2.
(b) for D = 4.
Solution: Huffman Codes.
(a) D=2
    p_i (×1/25):   6   6   4   4   2   2   1
    Merge 1:       6   6   4   4   3   2
    Merge 2:       6   6   5   4   4
    Merge 3:       8   6   6   5
    Merge 4:      11   8   6
    Merge 5:      14  11
    Merge 6:      25
The resulting codeword lengths are

    p_i:  6/25  6/25  4/25  4/25  2/25  2/25  1/25
    l_i:   2     2     3     3     3     4     4

E(l) = Σ_{i=1}^7 p_i l_i = (1/25)(6 × 2 + 6 × 2 + 4 × 3 + 4 × 3 + 2 × 3 + 2 × 4 + 1 × 4) = 66/25 = 2.64.
(b) D=4
    p_i (×1/25):   6   6   4   4   2   2   1
    Merge 1:       9   6   6   4
    Merge 2:      25

    p_i:  6/25  6/25  4/25  4/25  2/25  2/25  1/25
    l_i:   1     1     1     2     2     2     2

E(l) = Σ_{i=1}^7 p_i l_i = (1/25)(6 × 1 + 6 × 1 + 4 × 1 + 4 × 2 + 2 × 2 + 2 × 2 + 1 × 2) = 34/25 = 1.36.
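For reference, here is a sketch (mine, not part of the solution) of D-ary Huffman codeword lengths, padding with zero-probability dummy symbols so that the number of leaves is congruent to 1 mod D − 1. Run on the seven masses used in the tables above, it reproduces the length multisets {2, 2, 3, 3, 3, 4, 4} for D = 2 and {1, 1, 1, 2, 2, 2, 2} for D = 4 (symbols of equal probability may be permuted by tie-breaking).

    import heapq

    def huffman_lengths_dary(probs, D=2):
        """Codeword lengths of a D-ary Huffman code."""
        n = len(probs)
        heap = [(p, i, [i]) for i, p in enumerate(probs)]
        dummy = n
        while (len(heap) - 1) % (D - 1) != 0:
            heap.append((0, dummy, []))       # dummy leaf with no real symbol below it
            dummy += 1
        heapq.heapify(heap)
        lengths = [0] * n
        uid = dummy
        while len(heap) > 1:
            merged_p, merged_leaves = 0, []
            for _ in range(D):
                p, _, leaves = heapq.heappop(heap)
                merged_p += p
                merged_leaves += leaves
            for leaf in merged_leaves:
                lengths[leaf] += 1
            heapq.heappush(heap, (merged_p, uid, merged_leaves))
            uid += 1
        return lengths

    p = [6, 6, 4, 4, 2, 2, 1]                 # the seven masses (x 1/25) from the tables above
    for D in (2, 4):
        L = huffman_lengths_dary(p, D)
        print(D, sorted(L), sum(pi * li for pi, li in zip(p, L)) / 25)
    # D=2: lengths {2,2,3,3,3,4,4}, E(l) = 66/25 = 2.64
    # D=4: lengths {1,1,1,2,2,2,2}, E(l) = 34/25 = 1.36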
39. Entropy of encoded bits. Let C : X −→ {0, 1} ∗ be a nonsingular but nonuniquely
decodable code. Let X have entropy H(X).
(a) Compare H(C(X)) to H(X) .
(b) Compare H(C(X n )) to H(X n ) .
Solution: Entropy of encoded bits
(a) Since the code is non-singular, the function X → C(X) is one to one, and hence
H(X) = H(C(X)) . (Problem 2.4)
(b) Since the code is not uniquely decodable, the function X n → C(X n ) is many to
one, and hence H(X n ) ≥ H(C(X n )) .
40. Code rate. Let X be a random variable with alphabet {1, 2, 3} and distribution

X = 1 with probability 1/2, X = 2 with probability 1/4, X = 3 with probability 1/4.

The data compression code for X assigns codewords

C(x) = 0 if x = 1, C(x) = 10 if x = 2, C(x) = 11 if x = 3.

Let X_1, X_2, . . . be independent identically distributed according to this distribution
and let Z_1 Z_2 Z_3 . . . = C(X_1)C(X_2) . . . be the string of binary symbols resulting from
concatenating the corresponding codewords. For example, 122 becomes 01010.

(a) Find the entropy rate H(X) and the entropy rate H(Z) in bits per symbol. Note
that Z is not compressible further.

(b) Now let the code be

C(x) = 00 if x = 1, C(x) = 10 if x = 2, C(x) = 01 if x = 3,

and find the entropy rate H(Z).

(c) Finally, let the code be

C(x) = 00 if x = 1, C(x) = 1 if x = 2, C(x) = 01 if x = 3,

and find the entropy rate H(Z).
Solution: Code rate.
This is a slightly tricky question. There’s no straightforward rigorous way to calculate
the entropy rates, so you need to do some guessing.
(a) First, since the Xi ’s are independent, H(X ) = H(X1 ) = 1/2 log 2+2(1/4) log(4) =
3/2.
Now we observe that this is an optimal code for the given distribution on X ,
and since the probabilities are dyadic there is no gain in coding in blocks. So the
resulting process has to be i.i.d. Bern(1/2), (for otherwise we could get further
compression from it).
Therefore H(Z) = H(Bern(1/2)) = 1 .
(b) Here it’s easy.

H(Z) = lim_{n→∞} H(Z_1, Z_2, . . . , Z_n)/n
     = lim_{n→∞} H(X_1, X_2, . . . , X_{n/2})/n
     = lim_{n→∞} (n/2) H(X)/n
     = 3/4.
(We’re being a little sloppy and ignoring the fact that n above may not be even,
but in the limit as n → ∞ this doesn’t make a difference.)
(c) This is the tricky part. Suppose we encode the first n symbols X_1 X_2 · · · X_n into

Z_1 Z_2 · · · Z_m = C(X_1)C(X_2) · · · C(X_n).

Here m = L(C(X_1)) + L(C(X_2)) + · · · + L(C(X_n)) is the total length of the encoded
sequence (in bits), and L is the (binary) length function. Since the concatenated
codeword sequence is an invertible function of (X_1, . . . , X_n), it follows that

nH(X) = H(X_1 X_2 · · · X_n) = H(Z_1 Z_2 · · · Z_{Σ_1^n L(C(X_i))}).   (5.93)

The first equality above is trivial since the X_i's are independent. Similarly, one may
guess that the right-hand side above can be written as

H(Z_1 Z_2 · · · Z_{Σ_1^n L(C(X_i))}) = E[ Σ_{i=1}^n L(C(X_i)) ] H(Z)   (5.94)
                                     = n E[L(C(X_1))] H(Z).

(This is not trivial to prove, but it is true.)
Combining the left-hand side of (5.93) with the right-hand side of (5.94) yields

H(Z) = H(X)/E[L(C(X_1))] = (3/2)/(7/4) = 6/7,

where E[L(C(X_1))] = Σ_{x=1}^3 p(x)L(C(x)) = 7/4.
41. Optimal codes. Let l1 , l2 , . . . , l10 be the binary Huffman codeword lengths for the
probabilities p1 ≥ p2 ≥ . . . ≥ p10 . Suppose we get a new distribution by splitting the
last probability mass. What can you say about the optimal binary codeword lengths
l̃_1, l̃_2, . . . , l̃_11 for the probabilities p_1, p_2, . . . , p_9, αp_10, (1 − α)p_10, where 0 ≤ α ≤ 1?
Solution: Optimal codes.
To construct a Huffman code, we first combine the two smallest probabilities. In this
case, we would combine αp10 and (1 − α)p10 . The result of the sum of these two
probabilities is p10 . Note that the resulting probability distribution is now exactly the
same as the original probability distribution. The key point is that an optimal code
for p1 , p2 , . . . , p10 yields an optimal code (when expanded) for p 1 , p2 , . . . , p9 , αp10 , (1 −
α)p10 . In effect, the first 9 codewords will be left unchanged, while the 2 new codewords will be XXX0 and XXX1 where XXX represents the last codeword of the
original distribution.
In short, the lengths of the first 9 codewords remain unchanged, while the lengths of
the last 2 codewords (new codewords) are equal to l 10 + 1 .
42. Ternary codes. Which of the following codeword lengths can be the word lengths of
a 3-ary Huffman code and which cannot?
(a) (1, 2, 2, 2, 2)
(b) (2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3)
Solution: Ternary codes.
(a) The word lengths (1, 2, 2, 2, 2) CANNOT be the word lengths for a 3-ary Huffman
code. This can be seen by drawing the tree implied by these lengths, and seeing
that one of the codewords of length 2 can be reduced to a codeword of length 1
which is shorter. Since the Huffman tree produces the minimum expected length
tree, these codeword lengths cannot be the word lengths for a Huffman tree.
(b) The word lengths (2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3) ARE the word lengths for a 3-ary
Huffman code. Again, drawing the tree will verify this. Also, Σ_i 3^{−l_i} = 8 × 3^{−2} +
3 × 3^{−3} = 1, so these word lengths satisfy the Kraft inequality with equality.
Therefore the word lengths are optimal for some distribution, and are the word
lengths for a 3-ary Huffman code.
43. Piecewise Huffman. Suppose the codeword that we use to describe a random variable
X ∼ p(x) always starts with a symbol chosen from the set {A, B, C} , followed by binary
digits {0, 1} . Thus we have a ternary code for the first symbol and binary thereafter.
Give the optimal uniquely decodable code (minimum expected number of symbols) for
the probability distribution
p = (16/69, 15/69, 12/69, 10/69, 8/69, 8/69).   (5.95)
Solution: Piecewise Huffman.

    Codeword   Symbol   Prob. (×1/69)   Merge 1   Merge 2   Merge 3   Merge 4
    a          x1       16              16        22        31        69
    b1         x2       15              16        16        22
    c1         x3       12              15        16        16
    c0         x4       10              12        15
    b01        x5        8              10
    b00        x6        8
Note that the above code is not only uniquely decodable, but also instantaneously
decodable. In general, given a uniquely decodable code, we can construct an instantaneous code with the same codeword lengths. This is not the case with the piecewise Huffman construction: there exists a code with smaller expected length that is
uniquely decodable, but not instantaneous.
Codewords: a, b, c, a0, b0, c0.
44. Huffman. Find the word lengths of the optimal binary encoding of p = (1/100, 1/100, . . . , 1/100).

Solution: Huffman.
Since the distribution is uniform, the Huffman tree will consist of word lengths of
⌈log 100⌉ = 7 and ⌊log 100⌋ = 6. There are 64 nodes of depth 6, of which (64 − k) will be leaf nodes; and there are k nodes of depth 6 which will form 2k leaf nodes
of depth 7. Since the total number of leaf nodes is 100, we have

(64 − k) + 2k = 100 ⇒ k = 36.

So there are 64 − 36 = 28 codewords of word length 6, and 2 × 36 = 72 codewords of
word length 7.
45. Random “20” questions. Let X be uniformly distributed over {1, 2, . . . , m}. Assume m = 2^n. We ask random questions: Is X ∈ S_1? Is X ∈ S_2? . . . until only one
integer remains. All 2^m subsets of {1, 2, . . . , m} are equally likely.
(a) How many deterministic questions are needed to determine X ?
(b) Without loss of generality, suppose that X = 1 is the random object. What is
the probability that object 2 yields the same answers for k questions as object 1?
(c) What is the expected number of objects in {2, 3, . . . , m} that have the same
answers to the questions as does the correct object 1?
(d) Suppose we ask n + √n random questions. What is the expected number of
wrong objects agreeing with the answers?
(e) Use Markov’s inequality Pr{X ≥ tµ} ≤ 1/t to show that the probability of error
(one or more wrong objects remaining) goes to zero as n → ∞.
Solution: Random “20” questions.
(a) Obviously, Huffman codewords for X are all of length n. Hence, with n deterministic questions, we can identify an object out of 2^n candidates.
(b) Observe that the total number of subsets which include both object 1 and object
2 or neither of them is 2^{m−1}. Hence, the probability that object 2 yields the same
answers for k questions as object 1 is (2^{m−1}/2^m)^k = 2^{−k}.
More information theoretically, we can view this problem as a channel coding
problem through a noiseless channel. Since all subsets are equally likely, the
probability the object 1 is in a specific random subset is 1/2 . Hence, the question
whether object 1 belongs to the k th subset or not corresponds to the k th bit of
the random codeword for object 1, where codewords X k are Bern( 1/2 ) random
k -sequences.
    Object   Codeword
    1        0110 . . . 1
    2        0010 . . . 0
    . . .
Now we observe a noiseless output Y^k of X^k and figure out which object was
sent. From the same line of reasoning as in the achievability proof of the channel
coding theorem, i.e., joint typicality, it is obvious that the probability that object 2 has
the same codeword as object 1 is 2^{−k}.
(c) Let
1j =
)
1,
0,
object j yields the same answers for k questions as object 1
,
otherwise
for j = 2, . . . , m.
Then,
E(# of objects in {2, 3, . . . , m} with the same answers) = E(
=
=
m
!
1j )
j=2
m
!
j=2
m
!
E(1j )
2−k
j=2
= (m − 1)2−k
(d) Plugging k = n +
√
= (2n − 1)2−k .
n into (c) we have the expected number of (2n − 1)2−n−
√
n.
137
Data Compression
(e) Let N by the number of wrong objects remaining. Then, by Markov’s inequality
P (N ≥ 1)
≤
=
≤
EN
(2n − 1)2−n−
2−
√
n
→ 0,
where the first equality follows from part (d).
√
n
138
Data Compression
Chapter 6
Gambling and Data Compression
1. Horse race. Three horses run a race. A gambler offers 3-for-1 odds on each of the
horses. These are fair odds under the assumption that all horses are equally likely to
win the race. The true win probabilities are known to be
p = (p1 , p2 , p3 ) =
$
*
+
1 1 1
, ,
.
2 4 4
(6.1)
Let b = (b1 , b2 , b3 ) , bi ≥ 0 ,
bi = 1 , be the amount invested on each of the horses.
The expected log wealth is thus
W (b) =
3
!
pi log 3bi .
(6.2)
i=1
(a) Maximize this over b to find b∗ and W ∗ . Thus the wealth achieved in repeated
∗
horse races should grow to infinity like 2nW with probability one.
(b) Show that if instead we put all of our money on horse 1, the most likely winner,
we will eventually go broke with probability one.
Solution: Horse race.
(a) The doubling rate
W (b) =
!
pi log bi oi
(6.3)
pi log 3bi
(6.4)
i
=
!
i
=
!
pi log 3 +
!
pi log pi −
= log 3 − H(p) − D(p||b)
≤ log 3 − H(p),
139
!
pi log
pi
bi
(6.5)
(6.6)
(6.7)
140
Gambling and Data Compression
with equality iff p = b . Hence b∗ = p = ( 21 , 14 , 14 ) and W ∗ = log 3 − H( 12 , 14 , 14 ) =
1
9
2 log 8 = 0.085 .
By the strong law of large numbers,
Sn
B
=
3b(Xj )
(6.8)
$
(6.9)
j
1
n( n
=
2
→ 2
j
log 3b(Xj ))
nE log 3b(X)
=
2
(6.10)
(6.11)
nW (b)
(6.12)
.
∗
When b = b∗ , W (b) = W ∗ and Sn =2nW = 20.085n = (1.06)n .
(b) If we put all the money on the first horse, then the probability that we do not
go broke in n races is ( 21 )n . Since this probability goes to zero with n , the
probability of the set of outcomes where we do not ever go broke is zero, and we
will go broke with probability 1.
Alternatively, if b = (1, 0, 0) , then W (b) = −∞ and
Sn → 2nW = 0
w.p.1
(6.13)
by the strong law of large numbers.
2. Horse race with subfair odds. If the odds are bad (due to a track take) the
gambler may wish to keep money in his pocket. Let b(0) be the amount in his
pocket and let b(1), b(2), . . . , b(m) be the amount bet on horses 1, 2, . . . , m , with
odds o(1), o(2), . . . , o(m) , and win probabilities p(1), p(2), . . . , p(m) . Thus the resulting wealth is S(x) = b(0) + b(x)o(x), with probability p(x), x = 1, 2, . . . , m.
(a) Find b∗ maximizing E log S if
$
$
1/o(i) < 1.
(b) Discuss b∗ if
1/o(i) > 1. (There isn’t an easy closed form solution in this case,
but a “water-filling” solution results from the application of the Kuhn-Tucker
conditions.)
Solution: (Horse race with a cash option).
Since in this case, the gambler is allowed to keep some of the money as cash, the
mathematics becomes more complicated. In class, we used two different approaches to
prove the optimality of proportional betting when the gambler is not allowed keep any
of the money as cash. We will use both approaches for this problem. But in the case
of subfair odds, the relative entropy approach breaks down, and we have to use the
calculus approach.
The setup of the problem is straight-forward. We want to maximize the expected log
return, i.e.,
W (b, p) = E log S(X) =
m
!
i=1
pi log(b0 + bi oi )
(6.14)
141
Gambling and Data Compression
over all choices b with bi ≥ 0 and
Approach 1: Relative Entropy
$m
i=0 bi
= 1.
We try to express W (b, p) as a sum of relative entropies.
W (b, p) =
=
=
=
where
K=
and
! b0
(
oi
!
!
!
!
+ bi ) = b 0
pi log(b0 + bi oi )
pi log
pi log
. b0
oi
. b0
oi
+ bi
1
oi
/
+ bi p i
pi
1
oi
(6.15)
(6.16)
/
(6.17)
pi log pi oi + log K − D(p||r),
! 1
oi
+
b0
oi
!
! 1
bi = b0 (
oi
− 1) + 1,
(6.18)
(6.19)
+ bi
(6.20)
K
is a kind of normalized portfolio. Now both K and r depend on the choice of b . To
maximize W (b, p) , we must maximize log K and at the same time minimize D(p||r) .
Let us consider the two cases:
ri =
$
(a)
≤ 1 . This is the case of superfair or fair odds. In these cases, it seems intuitively clear that we should put all of our money in the race. For example, in the
case of a superfair gamble, one could invest any cash using a “Dutch book” (investing inversely proportional to the odds) and do strictly better with probability
1.
Examining the expression for K , we see that K is maximized for b 0 = 0 . In this
case, setting bi = pi would imply that ri = pi and hence D(p||r) = 0 . We have
succeeded in simultaneously maximizing the two variable terms in the expression
for W (b, p) and this must be the optimal solution.
Hence, for fair or superfair games, the gambler should invest all his money in the
race using proportional gambling, and not leave anything aside as cash.
(b)
> 1 . In this case, sub-fair odds, the argument breaks down. Looking at the
expression for K , we see that it is maximized for b 0 = 1 . However, we cannot
simultaneously minimize D(p||r) .
If pi oi ≤ 1 for all horses, then the first term in the expansion of W (b, p) , that
$
is,
pi log pi oi is negative. With b0 = 1 , the best we can achieve is proportional
betting, which sets the last term to be 0. Hence, with b 0 = 1 , we can only achieve a
negative expected log return, which is strictly worse than the 0 log return achieved
be setting b0 = 1 . This would indicate, but not prove, that in this case, one should
leave all one’s money as cash. A more rigorous approach using calculus will prove
this.
1
oi
1
oi
142
Gambling and Data Compression
We can however give a simple argument to show that in the case of sub-fair odds,
the gambler should leave at least some of his money as cash and that there is
at least one horse on which he does not bet any money. We will prove this by
contradiction—starting with a portfolio that does not satisfy these criteria, we will
generate one which does better with probability one.
$
Let the amount bet on each of the horses be (b 1 , b2 , . . . , bm ) with m
i=1 bi = 1 , so
that there is no money left aside. Arrange the horses in order of decreasing b i oi ,
so that the m -th horse is the one with the minimum product.
Consider a new portfolio with
b'i = bi −
bm om
oi
(6.21)
for all i . Since bi oi ≥ bm om for all i , b'i ≥ 0 . We keep the remaining money, i.e.,
1−
m
!
b'i
= 1−
i=1
=
m *
!
i=1
bm om
bi −
oi
+
m
!
bm om
i=1
(6.22)
(6.23)
oi
as cash.
The return on the new portfolio if horse i wins is
b'i oi
=
*
+
m
!
bm om
bm om
oi +
bi −
oi
oi
i=1
= bi oi + bm om
> bi oi ,
.
m
!
1
i=1
oi
/
−1
(6.24)
(6.25)
(6.26)
$
since
1/oi > 1 . Hence irrespective of which horse wins, the new portfolio does
better than the old one and hence the old portfolio could not be optimal.
Approach 2: Calculus
We set up the functional using Lagrange multipliers as before:
J(b) =
m
!
pi log(b0 + bi oi ) + λ
i=1
.
Differentiating with respect to bi , we obtain
m
!
i=0
∂J
pi oi
=
+ λ = 0.
∂bi
b0 + bi oi
bi
/
(6.27)
(6.28)
Differentiating with respect to b0 , we obtain
m
!
∂J
pi
=
+ λ = 0.
∂b0
b
+
bi oi
i=1 0
(6.29)
143
Gambling and Data Compression
Differentiating w.r.t. λ , we get the constraint
!
bi = 1.
(6.30)
The solution to these three equations, if they exist, would give the optimal portfolio b .
But substituting the first equation in the second, we obtain the following equation
λ
$
! 1
oi
= λ.
(6.31)
1
Clearly in the case when
oi %= 1 , the only solution to this equation is λ = 0 ,
which indicates that the solution is on the boundary of the region over which the
maximization is being carried out. Actually, we have been quite cavalier with the
$
setup of the problem—in addition to the constraint
bi = 1 , we have the inequality
constraints bi ≥ 0 . We should have allotted a Lagrange multiplier to each of these.
Rewriting the functional with Lagrange multipliers
J(b) =
m
!
pi log(b0 + bi oi ) + λ
i=1
.
m
!
/
bi +
i=0
!
γi bi
(6.32)
Differentiating with respect to bi , we obtain
pi oi
∂J
=
+ λ + γi = 0.
∂bi
b0 + bi oi
(6.33)
Differentiating with respect to b0 , we obtain
m
!
∂J
pi
=
+ λ + γ0 = 0.
∂b0
b
+
bi oi
i=1 0
(6.34)
Differentiating w.r.t. λ , we get the constraint
!
bi = 1.
(6.35)
Now, carrying out the same substitution, we get
λ + γ0 = λ
$
! 1
oi
+
! γi
oi
,
(6.36)
1
which indicates that if
oi %= 1 , at least one of the γ ’s is non-zero, which indicates
that the corresponding constraint has become active, which shows that the solution is
on the boundary of the region.
In the case of solutions on the boundary, we have to use the Kuhn-Tucker conditions
to find the maximum. These conditions are described in Gallager[7], pg. 87. The
conditions describe the behavior of the derivative at the maximum of a concave function
over a convex region. For any coordinate which is in the interior of the region, the
derivative should be 0. For any coordinate on the boundary, the derivative should be
144
Gambling and Data Compression
negative in the direction towards the interior of the region. More formally, for a concave
function F (x1 , x2 , . . . , xn ) over the region xi ≥ 0 ,
∂F ≤ 0
∂xi = 0
if xi = 0
if xi > 0
(6.37)
Applying the Kuhn-Tucker conditions to the present maximization, we obtain
pi oi
≤0
+λ
=0
b0 + bi oi
and
!
if bi = 0
if bi > 0
pi
≤0
+λ
=0
b0 + bi oi
if b0 = 0
if b0 > 0
(6.38)
(6.39)
Theorem 4.4.1 in Gallager[7] proves that if we can find a solution to the Kuhn-Tucker
conditions, then the solution is the maximum of the function in the region. Let us
consider the two cases:
(a)
(b)
$
≤ 1 . In this case, we try the solution we expect, b 0 = 0 , and bi = pi .
Setting λ = −1 , we find that all the Kuhn-Tucker conditions are satisfied. Hence,
this is the optimal portfolio for superfair or fair odds.
$
1
oi
> 1 . In this case, we try the expected solution, b 0 = 1 , and bi = 0 . We find
that all the Kuhn-Tucker conditions are satisfied if all p i oi ≤ 1 . Hence under this
condition, the optimum solution is to not invest anything in the race but to keep
everything as cash.
In the case when some pi oi > 1 , the Kuhn-Tucker conditions are no longer satisfied
by b0 = 1 . We should then invest some money in the race; however, since the
denominator of the expressions in the Kuhn-Tucker conditions also changes, more
than one horse may now violate the Kuhn-Tucker conditions. Hence, the optimum
solution may involve investing in some horses with p i oi ≤ 1 . There is no explicit
form for the solution in this case.
The Kuhn Tucker conditions for this case do not give rise to an explicit solution.
Instead, we can formulate a procedure for finding the optimum distribution of
capital:
Order the horses according to pi oi , so that
1
oi
p1 o1 ≥ p2 o2 ≥ · · · ≥ pm om .
Define
$k
pi
$ki=1 1
1−
if k ≥ 1
(6.41)
t = min{n|pn+1 on+1 ≤ Cn }.
(6.42)
Ck =
Define



(6.40)
1−

 1
Clearly t ≥ 1 since p1 o1 > 1 = C0 .
i=1 oi
if k = 0
145
Gambling and Data Compression
Claim: The optimal strategy for the horse race when the odds are subfair and
some of the pi oi are greater than 1 is: set
b0 = Ct ,
and for i = 1, 2, . . . , t , set
bi = pi −
(6.43)
Ct
,
oi
(6.44)
and for i = t + 1, . . . , m , set
bi = 0.
(6.45)
The above choice of b satisfies the Kuhn-Tucker conditions with λ = 1 . For b 0 ,
the Kuhn-Tucker condition is
!
$
t
m
t
!
!
!
pi
1
pi
1
1 − ti=1 pi
=
+
=
+
= 1.
bo + bi oi
o
C
o
Ct
i=1 i
i=t+1 t
i=1 i
(6.46)
For 1 ≤ i ≤ t , the Kuhn Tucker conditions reduce to
pi oi
pi oi
=
= 1.
b0 + bi oi
pi oi
(6.47)
For t + 1 ≤ i ≤ m , the Kuhn Tucker conditions reduce to
pi oi
pi oi
=
≤ 1,
b0 + bi oi
Ct
(6.48)
by the definition of t . Hence the Kuhn Tucker conditions are satisfied, and this
is the optimal solution.
3. Cards. An ordinary deck of cards containing 26 red cards and 26 black cards is shuffled
and dealt out one card at at time without replacement. Let X i be the color of the ith
card.
(a) Determine H(X1 ).
(b) Determine H(X2 ).
(c) Does H(Xk | X1 , X2 , . . . , Xk−1 ) increase or decrease?
(d) Determine H(X1 , X2 , . . . , X52 ).
Solution:
(a) P(first card red) = P(first card black) = 1/2 . Hence H(X 1 ) = (1/2) log 2 +
(1/2) log 2 = log 2 = 1 bit.
(b) P(second card red) = P(second card black) = 1/2 by symmetry. Hence H(X 2 ) =
(1/2) log 2 + (1/2) log 2 = log 2 = 1 bit. There is no change in the probability from
X1 to X2 (or to Xi , 1 ≤ i ≤ 52 ) since all the permutations of red and black
cards are equally likely.
146
Gambling and Data Compression
(c) Since all permutations are equally likely, the joint distribution of X k and X1 , . . . , Xk−1
is the same as the joint distribution of X k+1 and X1 , . . . , Xk−1 . Therefore
H(Xk |X1 , . . . , Xk−1 ) = H(Xk+1 |X1 , . . . , Xk−1 ) ≥ H(Xk+1 |X1 , . . . , Xk )
(6.49)
and so the conditional entropy decreases as we proceed along the sequence.
Knowledge of the past reduces uncertainty and thus means that the conditional
entropy of the k -th card’s color given all the previous cards will decrease as k
increases.
, -
(d) All 52
26 possible sequences of 26 red cards and 26 black cards are equally likely.
Thus
.
52
H(X1 , X2 , . . . , X52 ) = log
26
/
= 48.8 bits (3.2 bits less than 52)
(6.50)
4. Gambling. Suppose one gambles sequentially on the card outcomes in Problem 3.
Even odds of 2-for-1 are paid. Thus the wealth S n at time n is Sn = 2n b(x1 , x2 , . . . , xn ),
where b(x1 , x2 , . . . , xn ) is the proportion of wealth bet on x1 , x2 , . . . , xn . Find maxb(·) E log S52 .
Solution: Gambling on red and black cards.
E[log Sn ] = E[log[2n b(X1 , X2 , ..., Xn )]]
(6.51)
= n log 2 + E[log b(X)]
(6.52)
= n+
(6.53)
!
p(x) log b(x)
x∈X n
= n+
!
p(x)[log
x∈X n
b(x)
− log p(x)]
p(x)
= n + D(p(x)||b(x)) − H(X).
(6.54)
(6.55)
Taking p(x) = b(x) makes D(p(x)||b(x)) = 0 and maximizes E log S 52 .
max E log S52 = 52 − H(X)
b(x)
= 52 − log
= 3.2
52!
26!26!
(6.56)
(6.57)
(6.58)
Alternatively, as in the horse race, proportional betting is log-optimal. Thus b(x) =
p(x) and, regardless of the outcome,
252
S52 = ,52- = 9.08.
(6.59)
26
and hence
log S52 = max E log S52 = log 9.08 = 3.2.
b(x)
(6.60)
147
Gambling and Data Compression
5. Beating the public odds. Consider a 3-horse race with win probabilities
1 1 1
(p1 , p2 , p3 ) = ( , , )
2 4 4
and fair odds with respect to the (false) distribution
1 1 1
(r1 , r2 , r3 ) = ( , , ) .
4 4 2
Thus the odds are
(o1 , o2 , o3 ) = (4, 4, 2) .
(a) What is the entropy of the race?
(b) Find the set of bets (b1 , b2 , b3 ) such that the compounded wealth in repeated plays
will grow to infinity.
Solution: Beating the public odds.
(a) The entropy of the race is given by
H(p) =
=
1
1
1
log 2 + log 4 + log 4
2
4
4
3
.
2
(b) Compounded wealth will grow to infinity for the set of bets (b 1 , b2 , b3 ) such that
W (b, p) > 0 where
W (b, p) = D(p5r) − D(p5b)
=
3
!
pi log
i=1
bi
.
ri
Calculating D(p5r) , this criterion becomes
D(p5b) <
1
.
4
6. Horse race: A 3 horse race has win probabilities p = (p 1 , p2 , p3 ) , and odds o =
$
(1, 1, 1) . The gambler places bets b = (b 1 , b2 , b3 ) , bi ≥ 0, bi = 1 , where bi denotes
the proportion on wealth bet on horse i . These odds are very bad. The gambler gets
his money back on the winning horse and loses the other bets. Thus the wealth S n at
time n resulting from independent gambles goes expnentially to zero.
(a) Find the exponent.
(b) Find the optimal gambling scheme b , i.e., the bet b ∗ that maximizes the exponent.
148
Gambling and Data Compression
(c) Assuming b is chosen as in (b), what distribution p causes S n to go to zero at
the fastest rate?
Solution: Minimizing losses.
(a) Despite the bad odds, the optimal strategy is still proportional gambling. Thus
the optimal bets are b = p , and the exponent in this case is
W∗ =
!
i
pi log pi = −H(p).
(6.61)
(b) The optimal gambling strategy is still proportional betting.
(c) The worst distribution (the one that causes the doubling rate to be as negative as
possible) is that distribution that maximizes the entropy. Thus the worst W ∗ is
− log 3 , and the gambler’s money goes to zero as 3 −n .
7. Horse race. Consider a horse race with 4 horses. Assume that each of the horses pays
4-for-1 if it wins. Let the probabilities of winning of the horses be { 12 , 14 , 18 , 18 } . If you
started with $100 and bet optimally to maximize your long term growth rate, what
are your optimal bets on each horse? Approximately how much money would you have
after 20 races with this strategy ?
Solution: Horse race. The optimal betting strategy is proportional betting, i.e., dividing the investment in proportion to the probabilities of each horse winning. Thus the
bets on each horse should be (50%, 25%,12.5%,12.5%), and the growth rate achieved
by this strategy is equal to log 4 − H(p) = log 4 − H( 12 , 14 , 18 , 18 ) = 2 − 1.75 = 0.25 . After
20 races with this strategy, the wealth is approximately 2 nW = 25 = 32 , and hence the
wealth would grow approximately 32 fold over 20 races.
8. Lotto. The following analysis is a crude approximation to the games of Lotto conducted
by various states. Assume that the player of the game is required pay $1 to play and is
asked to choose 1 number from a range 1 to 8. At the end of every day, the state lottery
commission picks a number uniformly over the same range. The jackpot, i.e., all the
money collected that day, is split among all the people who chose the same number as
the one chosen by the state. E.g., if 100 people played today, and 10 of them chose the
number 2, and the drawing at the end of the day picked 2, then the $100 collected is
split among the 10 people, i.e., each of persons who picked 2 will receive $10, and the
others will receive nothing.
The general population does not choose numbers uniformly - numbers like 3 and 7 are
supposedly lucky and are more popular than 4 or 8. Assume that the fraction of people
choosing the various numbers 1, 2, . . . , 8 is (f 1 , f2 , . . . , f8 ) , and assume that n people
play every day. Also assume that n is very large, so that any single person’s choice
choice does not change the proportion of people betting on any number.
(a) What is the optimal strategy to divide your money among the various possible
tickets so as to maximize your long term growth rate? (Ignore the fact that you
cannot buy fractional tickets.)
149
Gambling and Data Compression
(b) What is the optimal growth rate that you can achieve in this game?
(c) If (f1 , f2 , . . . , f8 ) = (1/8, 1/8, 1/4, 1/16, 1/16, 1/16, 1/4, 1/16) , and you start with
$1, how long will it be before you become a millionaire?
Solution:
(a) The probability of winning does not depend on the number you choose, and therefore, irrespective of the proportions of the other players, the log optimal strategy
is to divide your money uniformly over all the tickets.
(b) If there are n people playing, and f i of them choose number i , then the number
of people sharing the jackpot of n dollars is nf i , and therefore each person gets
n/nfi = 1/fi dollars if i is picked at the end of the day. Thus the odds for number
i is 1/fi , and does not depend on the number of people playing.
Using the results of Section 6.1, the optimal growth rate is given by
W ∗ (p) =
!
pi log oi − H(p) =
!1
8
log
1
− log 8
fi
(6.62)
(c) Substituing these fraction in the previous equation we get
1!
1
log − log 8
8
fi
1
=
(3 + 3 + 2 + 4 + 4 + 4 + 2 + 4) − 3
8
= 0.25
W ∗ (p) =
(6.63)
(6.64)
(6.65)
and therefore after N days, the amount of money you would have would be approximately 20.25N . The number of days before this crosses a million = log 2 (1, 000, 000)/0.25 =
79.7 , i.e., in 80 days, you should have a million dollars.
There are many problems with the analysis, not the least of which is that the
state governments take out about half the money collected, so that the jackpot
is only half of the total collections. Also there are about 14 million different
possible tickets, and it is therefore possible to use a uniform distribution using $1
tickets only if we use capital of the order of 14 million dollars. And with such
large investments, the proportions of money bet on the different possibilities will
change, which would further complicate the analysis.
However, the fact that people’s choices are not uniform does leave a loophole
that can be exploited. Under certain conditions, i.e., if the accumulated jackpot
has reached a certain size, the expected return can be greater than 1, and it is
worthwhile to play, despite the 50% cut taken by the state. But under normal
circumstances, the 50% cut of the state makes the odds in the lottery very unfair,
and it is not a worthwhile investment.
9. Horse race. Suppose one is interested in maximizing the doubling rate for a horse
race. Let p1 , p2 , . . . , pm denote the win probabilities of the m horses. When do the
odds (o1 , o2 , . . . , om ) yield a higher doubling rate than the odds (o '1 , o'2 , . . . , o'm ) ?
150
Gambling and Data Compression
Solution: Horse Race
Let W and W ' denote the optimal doubling rates for the odds (o 1 , o2 , . . . , om ) and
(o'1 , o'2 , . . . , o'm ) respectively. By Theorem 6.1.2 in the book,
W
=
W' =
!
!
pi log oi − H(p), and
pi log o'i − H(p)
where p is the probability vector (p 1 , p2 , . . . , pm ) . Then W > W ' exactly when
$
pi log oi > pi log o'i ; that is, when
$
E log oi > E log o'i .
10. Horse race with probability estimates
(a) Three horses race. Their probabilities of winning are ( 21 , 14 , 14 ) . The odds are
(4-for-1, 3-for-1 and 3-for-1). Let W ∗ be the optimal doubling rate.
Suppose you believe the probabilities are ( 14 , 12 , 14 ) . If you try to maximize the
doubling rate, what doubling rate W will you achieve? By how much has your
doubling rate decreased due to your poor estimate of the probabilities, i.e., what
is ∆W = W ∗ − W ?
(b) Now let the horse race be among m horses, with probabilities p = (p 1 , p2 , . . . , pm )
and odds o = (o1 , o2 , . . . , om ) . If you believe the true probabilities to be q =
(q1 , q2 , . . . , qm ) , and try to maximize the doubling rate W , what is W ∗ − W ?
Solution: Horse race with probability estimates
(a) If you believe that the probabilities of winning are ( 14 , 12 , 14 ) , you would bet pro$
portional to this, and would achieve a growth rate
pi log bi oi = 12 log 4 14 +
1
1
1
1
1
9
you
4 log 3 2 + 4 log 3 4 = 4 log 8 . If you bet according to the true probabilities,
$
would bet ( 12 , 14 , 14 ) on the three horses, achieving a growth rate
pi log bi oi =
1
1
1
1
1
1
1
3
2 log 4 2 + 4 log 3 4 + 4 log 3 4 = 2 log 2 . The loss in growth rate due to incorrect estimation of the probabilities is the difference between the two growth rates, which
is 14 log 2 = 0.25 .
$
(b) For m horses, the growth rate with the true distribution is
pi log pi oi , and
$
with the incorrect estimate is
pi log qi oi . The difference between the two is
$
pi log pq1i = D(p||q) .
11. The two envelope problem: One envelope contains b dollars, the other 2b dollars.
The amount b is unknown. An envelope is selected at random. Let X be the amount
observed in this envelope, and let Y be the amount in the other envelope.
Adopt the strategy of switching to the other envelope with probability p(x) , where
e−x
p(x) = e−x
+ex . Let Z be the amount that the player receives. Thus
(X, Y ) =
)
(b, 2b),
(2b, b),
with probability 1/2
with probability 1/2
(6.66)
151
Gambling and Data Compression
Z=
)
with probability 1 − p(x)
with probability p(x)
X,
Y,
(a) Show that E(X) = E(Y ) =
3b
2
(6.67)
.
(b) Show that E(Y /X) = 5/4 . Since the expected
envelope to the one in hand is 5/4, it seems
(This is the origin of the switching paradox.)
E(X)E(Y /X) . Thus, although E(Y /X) > 1 ,
E(X) .
ratio of the amount in the other
that one should always switch.
However, observe that E(Y ) %=
it does not follow that E(Y ) >
(c) Let J be the index of the envelope containing the maximum amount of money,
and let J ' be the index of the envelope chosen by the algorithm. Show that for
any b , I(J; J ' ) > 0 . Thus the amount in the first envelope always contains some
information about which envelope to choose.
(d) Show that E(Z) > E(X) . Thus you can do better than always staying or always
switching. In fact, this is true for any monotonic decreasing switching function
p(x) . By randomly switching according to p(x) , you are more likely to trade up
than trade down.
Solution: Two envelope problem:
(a) X = b or 2b with prob. 1/2, and therefore E(X) = 1.5b . Y has the same
unconditional distribution.
(b) Given X = x , the other envelope contains 2x with probability 1/2 and contains
x/2 with probability 1/2. Thus E(Y /X) = 5/4 .
(c) Without any conditioning, J = 1 or 2 with probability (1/2,1/2). By symmetry,
it is not difficult to see that the unconditional probability distribution of J ' is also
the same. We will now show that the two random variables are not independent,
and therefore I(J; J ' ) %= 0 . To do this, we will calculate the conditional probability
P (J ' = 1|J = 1) .
Conditioned on J = 1 , the probability that X = b or 2b is still (1/2,1/2). However, conditioned on (J = 1, X = 2b) , the probability that Z = X , and therefore
J ' = 1 is p(2b) . Similary, conditioned on (J = 1, X = b) , the probability that
J ' = 1 is 1 − p(b) . Thus,
P (J ' = 1|J = 1) = P (X = b|J = 1)P (J ' = 1|X = b, J = 1)
=
=
>
+P (X = 2b|J = 1)P (J ' = 1|X = 2b, J = 1)
1
1
(1 − p(b)) + p(2b)
2
2
1 1
+ (p(2b) − p(b))
2 2
1
2
(6.68)
(6.69)
(6.70)
(6.71)
152
Gambling and Data Compression
Thus the conditional distribution is not equal to the unconditional distribution
and J and J ' are not independent.
(d) We use the above calculation of the conditional distribution to calculate E(Z) .
Without loss of generality, we assume that J = 1 , i.e., the first envelope contains
2b . Then
E(Z|J = 1) = P (X = b|J = 1)E(Z|X = b, J = 1)
=
=
+P (X = 2b|J = 1)E(Z|X = 2b, J = 1)
(6.72)
1
1
E(Z|X = b, J = 1) + E(Z|X = 2b, J = 1)
(6.73)
2
2
1,
p(J ' = 1|X = b, J = 1)E(Z|J ' = 1, X = b, J = 1)
2
+p(J ' = 2|X = b, J = 1)E(Z|J ' = 2, X = b, J = 1)
+p(J ' = 1|X = 2b, J = 1)E(Z|J ' = 1, X = 2b, J = 1)
-
+ p(J ' = 2|X = 2b, J = 1)E(Z|J ' = 2, X = 2b, J = (6.74)
1)
=
=
>
1
([1 − p(b)]2b + p(b)b + p(2b)2b + [1 − p(2b)]b)
2
3b 1
+ b(p(2b) − p(b))
2
2
3b
2
(6.75)
(6.76)
(6.77)
as long as p(2b) − p(b) > 0 . Thus E(Z) > E(X) .
12. Gambling. Find the horse win probabilities p 1 , p2 , . . . , pm
(a) maximizing the doubling rate W ∗ for given fixed known odds o1 , o2 , . . . , om .
(b) minimizing the doubling rate for given fixed odds o 1 , o2 , . . . , om .
Solution: Gambling
(a) From Theorem 6.1.2, W ∗ =
W∗ =
$
pi log oi − H(p) . We can also write this as
!
pi log pi oi
i
=
!
pi log
i
=
!
pi log
i
=
!
i
where
pi log
(6.78)
pi
(6.79)
1
oi
pi
−
qi
!
i

pi log 

pi
− log 
qi
! 1
j
oj
! 1
j


oj


(6.80)
(6.81)
1
qi = $oi 1
j oj
(6.82)
153
Gambling and Data Compression
Therefore the minimum value of the growth rate occurs when p i = qi . This
is the0 distribution
that minimizes the growth rate, and the minimum value is
$ 11
− log
.
j oj
(b) The maximum growth rate occurs when the horse with the maximum odds wins
in all the races, i.e., pi = 1 for the horse that provides the maximum odds
13. Dutch book. Consider a horse race with m = 2 horses,
X = 1, 2
p = 1/2, 1/2
Odds (for one) = 10, 30
Bets = b, 1 − b.
The odds are super fair.
(a) There is a bet b which guarantees the same payoff regardless of which horse wins.
Such a bet is called a Dutch book. Find this b and the associated wealth factor
S(X).
(b) What is the maximum growth rate of the wealth for this gamble? Compare it to
the growth rate for the Dutch book.
Solution: Solution: Dutch book.
(a)
10bD = 30(1 − bD )
40bD = 30
bD = 3/4.
Therefore,
*
1
3
log 10
2
4
= 2.91
W (bD , P ) =
+
+
*
1
1
log 30
2
4
and
SD (X) = 2W (bD ,P ) = 7.5.
(b) In general,
W (b, p) =
Setting the
∂W
∂b
1
1
log(10b) + log(30(1 − b)).
2
2
to zero we get
1
2
*
10
10b∗
+
1
+
2
*
−30
30 − 30b∗
+
= 0
+
154
Gambling and Data Compression
1
1
+
∗
∗
2b
2(b − 1)
∗
(b − 1) + b∗
2b∗ (b∗ − 1)
2b∗ − 1
4b∗ (1 − b∗ )
= 0
= 0
= 0
b∗ =
1
.
2
Hence
1
1
log(5) + log(15) = 3.11
2
2
W (bD , p) = 2.91
W ∗ (p) =
and
∗
S ∗ = 2W = 8.66
SD = 2WD = 7.5
Thus gambling (a little) with b∗ beats the sure win of 7.5 given by the Dutch book
14. Horse race. Suppose one is interested in maximizing the doubling rate for a horse
race. Let p1 , p2 , . . . , pm denote the win probabilities of the m horses. When do the
odds (o1 , o2 , . . . , om ) yield a higher doubling rate than the odds (o '1 , o'2 , . . . , o'm ) ?
Solution: Horse Race (Repeat of problem 9)
Let W and W ' denote the optimal doubling rates for the odds (o 1 , o2 , . . . , om ) and
(o'1 , o'2 , . . . , o'm ) respectively. By Theorem 6.1.2 in the book,
W
=
W' =
!
!
pi log oi − H(p), and
pi log o'i − H(p)
where p is the probability vector (p 1 , p2 , . . . , pm ) . Then W > W ' exactly when
$
pi log oi > pi log o'i ; that is, when
$
E log oi > E log o'i .
15. Entropy of a fair horse race. Let X ∼ p(x) , x = 1, 2, . . . , m , denote the winner of
1
.
a horse race. Suppose the odds o(x) are fair with respect to p(x) , i.e., o(x) = p(x)
$m
Let b(x) be the amount bet on horse x , b(x) ≥ 0 ,
b(x)
=
1
.
Then
the
resulting
1
wealth factor is S(x) = b(x)o(x) , with probability p(x) .
155
Gambling and Data Compression
(a) Find the expected wealth ES(X) .
(b) Find W ∗ , the optimal growth rate of wealth.
(c) Suppose
Y =
)
1, X = 1 or 2
0, otherwise
If this side information is available before the bet, how much does it increase the
growth rate W ∗ ?
(d) Find I(X; Y ) .
Solution: Entropy of a fair horse race.
(a) The expected wealth ES(X) is
ES(X) =
=
=
m
!
x=1
m
!
x=1
m
!
S(x)p(x)
(6.83)
b(x)o(x)p(x)
(6.84)
b(x), (since o(x) = 1/p(x))
(6.85)
x=1
= 1.
(6.86)
(b) The optimal growth rate of wealth, W ∗ , is achieved when b(x) = p(x) for all x ,
in which case,
W ∗ = E(log S(X))
=
=
=
m
!
x=1
m
!
x=1
m
!
(6.87)
p(x) log(b(x)o(x))
(6.88)
p(x) log(p(x)/p(x))
(6.89)
p(x) log(1)
(6.90)
x=1
= 0,
(6.91)
so we maintain our current wealth.
(c) The increase in our growth rate due to the side information is given by I(X; Y ) .
Let q = Pr(Y = 1) = p(1) + p(2) .
I(X; Y ) = H(Y ) − H(Y |X)
= H(Y )
= H(q).
(since Y is a deterministic function of X)
(6.92)
(6.93)
(6.94)
156
Gambling and Data Compression
(d) Already computed above.
16. Negative horse race Consider a horse race with m horses with win probabilities p1 , p2 , . . . pm . Here the gambler hopes a given horse will lose. He places bets
$
loses his bet bi if horse i wins, and retains
(b1 , b2 , . . . , bm ), m
i=1 bi = 1 , on the horses,$
the rest of his bets. (No odds.) Thus S = j+=i bj , with probability pi , and one wishes
$
$
to maximize
pi ln(1 − bi ) subject to the constraint
bi = 1.
(a) Find the growth rate optimal investment strategy b ∗ . Do not constrain the bets
to be positive, but do constrain the bets to sum to 1. (This effectively allows short
selling and margin.)
(b) What is the optimal growth rate?
Solution: Negative horse race
$
(a) Let b'i = 1 − bi ≥ 0 , and note that i b'i = m − 1 . Let qi = b'i /
is a probability distribution on {1, 2, . . . , m} . Now,
W
=
=
!
i
pi log(1 − bi )
i
pi log qi (m − 1)
!
= log(m − 1) +
!
i
pi log pi
$
'
j bj
. Then, {qi }
qi
pi
= log(m − 1) − H(p) − D(p5q) .
Thus, W ∗ is obtained upon setting D(p5q) = 0 , which means making the bets
such that pi = qi = b'i /(m − 1) , or bi = 1 − (m − 1)pi . Alternatively, one can use
Lagrange multipliers to solve the problem.
(b) From (a) we directly see that setting D(p5q) = 0 implies W ∗ = log(m−1)−H(p) .
17. The St. Petersburg paradox. Many years ago in ancient St. Petersburg the
following gambling proposition caused great consternation. For an entry fee of c units,
a gambler receives a payoff of 2k units with probability 2−k , k = 1, 2, . . . .
(a) Show that the expected payoff for this game is infinite. For this reason, it was
argued that c = ∞ was a “fair” price to pay to play this game. Most people find
this answer absurd.
(b) Suppose that the gambler can buy a share of the game. For example, if he invests c/2 units in the game, he receives 1/2 a share and a return X/2 , where
Pr(X = 2k ) = 2−k , k = 1, 2, . . . . Suppose X1 , X2 , . . . are i.i.d. according to this
distribution and the gambler reinvests all his wealth each time. Thus his wealth
Sn at time n is given by
n
B
Xi
Sn =
.
(6.95)
c
i=1
157
Gambling and Data Compression
Show that this limit is ∞ or 0 , with probability one, accordingly as c < c ∗ or
c > c∗ . Identify the “fair” entry fee c∗ .
More realistically, the gambler should be allowed to keep a proportion b = 1 − b of his
money in his pocket and invest the rest in the St. Petersburg game. His wealth at time
n is then
+
n *
B
bXi
Sn =
b+
.
(6.96)
c
i=1
Let
W (b, c) =
∞
!
−k
2
k=1
We have
.
b2k
log 1 − b +
c
/
(6.97)
.
.
Sn =2nW (b,c)
(6.98)
W ∗ (c) = max W (b, c).
(6.99)
Let
0≤b≤1
Here are some questions about W ∗ (c).
(c) For what value of the entry fee c does the optimizing value b ∗ drop below 1?
(d) How does b∗ vary with c ?
(e) How does W ∗ (c) fall off with c ?
Note that since W ∗ (c) > 0 , for all c , we can conclude that any entry fee c is fair.
Solution: The St. Petersburg paradox.
(a) The expected return,
EX =
∞
!
k=1
p(X = 2k )2k =
∞
!
2−k 2k =
k=1
∞
!
k=1
1 = ∞.
(6.100)
Thus the expected return on the game is infinite.
(b) By the strong law of large numbers, we see that
n
1
1!
log Sn =
log Xi − log c → E log X − log c, w.p.1
n
n i=1
(6.101)
and therefore Sn goes to infinity or 0 according to whether E log X is greater or
less than log c . Therefore
log c∗ = E log X =
∞
!
k2−k = 2.
(6.102)
k=1
Therefore a fair entry fee is 2 units if the gambler is forced to invest all his money.
158
Gambling and Data Compression
"fileplot"
W(b,c)
2.5
2
1.5
1
0.5
0
-0.5
-1
1
1
2
3
0.5
4
5
c
6
7
8
9 10
b
0
Figure 6.1: St. Petersburg: W (b, c) as a function of b and c .
(c) If the gambler is not required to invest all his money, then the growth rate is
∞
!
.
b2k
W (b, c) =
2−k log 1 − b +
c
k=1
/
(6.103)
.
For b = 0 , W = 1 , and for b = 1 , W = E log X −log c = 2−log c . Differentiating
to find the optimum value of b , we obtain
∞
!
∂W (b, c)
1
=
2−k 0
∂b
1−b+
k=1
b2k
c
1
.
2k
−1 +
c
/
(6.104)
Unfortunately, there is no explicit solution for the b that maximizes W for a given
value of c , and we have to solve this numerically on the computer.
We have illustrated the results with three plots. The first (Figure 6.1) shows
W (b, c) as a function of b and c . The second (Figure 6.2)shows b ∗ as a function
of c and the third (Figure 6.3) shows W ∗ as a function of c .
From Figure 2, it is clear that b∗ is less than 1 for c > 3 . We can also see this
analytically by calculating the slope ∂W∂b(b,c) at b = 1 .
∂W (b, c)
∂b
∞
!
1
.
2k
1
=
2−k 0
−1
+
k
c
1 − b + b2c
k=1
/
(6.105)
159
Gambling and Data Compression
1
"filebstar"
0.8
b*
0.6
0.4
0.2
0
1
2
3
4
5
c
6
7
8
9
10
Figure 6.2: St. Petersburg: b∗ as a function of c .
3
"filewstar"
2.5
b*
2
1.5
1
0.5
0
1
2
3
4
5
c
6
7
8
9
10
Figure 6.3: St. Petersburg: W ∗ (b∗ , c) as a function of c .
160
Gambling and Data Compression
=
=
! 2−k
k
∞
!
2k
c
−k
2
k=1
= 1−
/
.
2k
−1
d
−
∞
!
c
3
(6.106)
c2−2k
(6.107)
k=1
(6.108)
which is positive for c < 3 . Thus for c < 3 , the optimal value of b lies on the
boundary of the region of b ’s, and for c > 3 , the optimal value of b lies in the
interior.
(d) The variation of b∗ with c is shown in Figure 6.2. As c → ∞ , b ∗ → 0 . We have
a conjecture (based on numerical results) that b ∗ → √12 c2−c as c → ∞ , but we
do not have a proof.
(e) The variation of W ∗ with c is shown in Figure 6.3.
18. Super St. Petersburg. Finally, we have the super St. Petersburg paradox, where
k
Pr(X = 22 ) = 2−k , k = 1, 2, . . . . Here the expected log wealth is infinite for all b > 0 ,
for all c , and the gambler’s wealth grows to infinity faster than exponentially for any
b > 0. But that doesn’t mean all investment ratios b are equally good. To see this,
we wish to maximize the relative growth rate with respect to some other portfolio, say,
b = ( 12 , 12 ). Show that there exists a unique b maximizing
E ln
(b + bX/c)
( 12 + 12 X/c)
and interpret the answer.
k
Solution: Super St. Petersburg. With Pr(X = 2 2 ) = 2−k , k = 1, 2, . . . , we have
E log X =
!
k
k
2−k log 22 = ∞,
(6.109)
and thus with any constant entry fee, the gambler’s money grows to infinity faster than
exponentially, since for any b > 0 ,
W (b, c) =
∞
!
k=1
−k
2
.
k
b22
log 1 − b +
c
/
>
!
k
−k
2
b22
log
= ∞.
c
(6.110)
But if we wish to maximize the wealth relative to the ( 21 , 12 ) portfolio, we need to
maximize
k
b22
!
(1
−
b)
+
c
J(b, c) =
2−k log
(6.111)
1
1 22k
+
k
2
2 c
As in the case of the St. Petersburg problem, we cannot solve this problem explicitly.
In this case, a computer solution is fairly straightforward, although there are some
161
Gambling and Data Compression
Expected log ratio
ER(b,c)
0
-1
-2
-3
-4
5
c
10
1
0.9
0.8
0.7
0.6
0.5
0.4
b
0.3
0.2
0.1
Figure 6.4: Super St. Petersburg: J(b, c) as a function of b and c .
k
complications. For example, for k = 6 , 22 is outside the normal range of numbers
representable on a standard computer. However, for k ≥ 6 , we can approximate the
b
ratio within the log by 0.5
without any loss of accuracy. Using this, we can do a simple
numerical computation as in the previous problem.
As before, we have illustrated the results with three plots. The first (Figure 6.4) shows
J(b, c) as a function of b and c . The second (Figure 6.5)shows b ∗ as a function of c
and the third (Figure 6.6) shows J ∗ as a function of c .
These plots indicate that for large values of c , the optimum strategy is not to put all
the money into the game, even though the money grows at an infinite rate. There exists
a unique b∗ which maximizes the expected ratio, which therefore causes the wealth to
grow to infinity at the fastest possible rate. Thus there exists an optimal b ∗ even when
the log optimal portfolio is undefined.
162
Gambling and Data Compression
1
0.9
0.8
0.7
b*
0.6
0.5
0.4
0.3
0.2
0.1
0.1
1
10
c
100
1000
Figure 6.5: Super St. Petersburg: b∗ as a function of c .
1
0.9
0.8
0.7
0.6
J*
0.5
0.4
0.3
0.2
0.1
0
-0.1
0.1
1
10
c
100
1000
Figure 6.6: Super St. Petersburg: J ∗ (b∗ , c) as a function of c .
Chapter 7
Channel Capacity
1. Preprocessing the output. One is given a communication channel with transition
probabilities p(y|x) and channel capacity C = max p(x) I(X; Y ). A helpful statistician
preprocesses the output by forming Ỹ = g(Y ). He claims that this will strictly improve
the capacity.
(a) Show that he is wrong.
(b) Under what conditions does he not strictly decrease the capacity?
Solution: Preprocessing the output.
(a) The statistician calculates Ỹ = g(Y ) . Since X → Y → Ỹ forms a Markov chain,
we can apply the data processing inequality. Hence for every distribution on x ,
I(X; Y ) ≥ I(X; Ỹ ).
(7.1)
Let p̃(x) be the distribution on x that maximizes I(X; Ỹ ) . Then
C = max I(X; Y ) ≥ I(X; Y )p(x)=p̃(x) ≥ I(X; Ỹ )p(x)=p̃(x) = max I(X; Ỹ ) = C̃.
p(x)
p(x)
(7.2)
Thus, the statistician is wrong and processing the output does not increase capacity.
(b) We have equality (no decrease in capacity) in the above sequence of inequalities
only if we have equality in the data processing inequality, i.e., for the distribution
that maximizes I(X; Ỹ ) , we have X → Ỹ → Y forming a Markov chain.
2. An additive noise channel. Find the channel capacity of the following discrete
memoryless channel:
163
164
Channel Capacity
Z
&
#$
%
!"
X
% Y
where Pr{Z = 0} = Pr{Z = a} = 21 . The alphabet for x is X = {0, 1}. Assume that
Z is independent of X.
Observe that the channel capacity depends on the value of a.
Solution: A sum channel.
Y =X +Z
X ∈ {0, 1},
Z ∈ {0, a}
(7.3)
We have to distinguish various cases depending on the values of a .
a = 0 In this case, Y = X , and max I(X; Y ) = max H(X) = 1 . Hence the capacity
is 1 bit per transmission.
a %= 0, ±1 In this case, Y has four possible values 0, 1, a and 1 + a . Knowing Y ,
we know the X which was sent, and hence H(X|Y ) = 0 . Hence max I(X; Y ) =
max H(X) = 1 , achieved for an uniform distribution on the input X .
a = 1 In this case Y has three possible output values, 0, 1 and 2 , and the channel
is identical to the binary erasure channel discussed in class, with a = 1/2 . As
derived in class, the capacity of this channel is 1 − a = 1/2 bit per transmission.
a = −1 This is similar to the case when a = 1 and the capacity here is also 1/2 bit
per transmission.
3. Channels with memory have higher capacity. Consider a binary symmetric channel with Yi = Xi ⊕ Zi , where ⊕ is mod 2 addition, and Xi , Yi ∈ {0, 1}.
Suppose that {Zi } has constant marginal probabilities Pr{Z i = 1} = p = 1 − Pr{Zi =
0}, but that Z1 , Z2 , . . . , Zn are not necessarily independent. Assume that Z n is independent of the input X n . Let C = 1 − H(p, 1 − p). Show that
max
p(x1 ,x2 ,...,xn )
I(X1 , X2 , . . . , Xn ; Y1 , Y2 , . . . , Yn ) ≥ nC.
(7.4)
Solution: Channels with memory have a higher capacity.
Yi = Xi ⊕ Zi ,
where
Zi =
)
1 with probability p
0 with probability 1 − p
(7.5)
(7.6)
165
Channel Capacity
and Zi are not independent.
I(X1 , X2 , . . . , Xn ; Y1 , Y2 , . . . , Yn )
= H(X1 , X2 , . . . , Xn ) − H(X1 , X2 , . . . , Xn |Y1 , Y2 , . . . , Yn )
= H(X1 , X2 , . . . , Xn ) − H(Z1 , Z2 , . . . , Zn |Y1 , Y2 , . . . , Yn )
≥ H(X1 , X2 , . . . , Xn ) − H(Z1 , Z2 , . . . , Zn )
(7.7)
= H(X1 , X2 , . . . , Xn ) − nH(p)
(7.9)
≥ H(X1 , X2 , . . . , Xn ) −
!
H(Zi )
(7.8)
= n − nH(p),
(7.10)
if X1 , X2 , . . . , Xn are chosen i.i.d. ∼ Bern( 12 ). The capacity of the channel with
memory over n uses of the channel is
nC (n) =
max
p(x1 ,x2 ,...,xn )
I(X1 , X2 , . . . , Xn ; Y1 , Y2 , . . . , Yn )
(7.11)
≥ I(X1 , X2 , . . . , Xn ; Y1 , Y2 , . . . , Yn )p(x1 ,x2 ,...,xn )=Bern( 1 )
2
≥ n(1 − H(p))
= nC.
(7.12)
(7.13)
(7.14)
Hence channels with memory have higher capacity. The intuitive explanation for this
result is that the correlation between the noise decreases the effective noise; one could
use the information from the past samples of the noise to combat the present noise.
4. Channel capacity. Consider the discrete memoryless channel Y = X + Z (mod 11),
where
.
/
1,
2,
3
Z=
1/3, 1/3, 1/3
and X ∈ {0, 1, . . . , 10} . Assume that Z is independent of X .
(a) Find the capacity.
(b) What is the maximizing p∗ (x) ?
Solution: Channel capacity.
Y = X + Z(mod 11)
where
In this case,


 1
with probability1/3
Z=
2 with probability1/3

 3 with probability1/3
H(Y |X) = H(Z|X) = H(Z) = log 3,
(7.15)
(7.16)
(7.17)
166
Channel Capacity
independent of the distribution of X , and hence the capacity of the channel is
C = max I(X; Y )
p(x)
(7.18)
= max H(Y ) − H(Y |X)
(7.19)
= max H(Y ) − log 3
(7.20)
= log 11 − log 3,
(7.21)
p(x)
p(x)
which is attained when Y has a uniform distribution, which occurs (by symmetry)
when X has a uniform distribution.
(a) The capacity of the channel is log 11
3 bits/transmission.
(b) The capacity is achieved by an uniform distribution on the inputs. p(X = i) =
for i = 0, 1, . . . , 10 .
1
11
5. Using two channels at once. Consider two discrete memoryless channels (X 1 , p(y1 |
x1 ), Y1 ) and (X2 , p(y2 | x2 ), Y2 ) with capacities C1 and C2 respectively. A new channel
(X1 × X2 , p(y1 | x1 ) × p(y2 | x2 ), Y1 × Y2 ) is formed in which x1 ∈ X1 and x2 ∈ X2 , are
simultaneously sent, resulting in y 1 , y2 . Find the capacity of this channel.
Solution: Using two channels at once. Suppose we are given two channels, (X 1 , p(y1 |x1 ), Y1 )
and (X2 , p(y2 |x2 ), Y2 ) , which we can use at the same time. We can define the product
channel as the channel, (X1 × X2 , p(y1 , y2 |x1 , x2 ) = p(y1 |x1 )p(y2 |x2 ), Y1 × Y2 ) . To find
the capacity of the product channel, we must find the distribution p(x 1 , x2 ) on the
input alphabet X1 × X2 that maximizes I(X1 , X2 ; Y1 , Y2 ) . Since the joint distribution
p(x1 , x2 , y1 , y2 ) = p(x1 , x2 )p(y1 |x1 )p(y2 |x2 ),
Y1 → X1 → X2 → Y2 forms a Markov chain and therefore
I(X1 , X2 ; Y1 , Y2 ) = H(Y1 , Y2 ) − H(Y1 , Y2 |X1 , X2 )
(7.22)
(7.23)
= H(Y1 , Y2 ) − H(Y1 |X1 , X2 ) − H(Y2 |X1 , X2 )
(7.24)
≤ H(Y1 ) + H(Y2 ) − H(Y1 |X1 ) − H(Y2 |X2 )
(7.26)
= H(Y1 , Y2 ) − H(Y1 |X1 ) − H(Y2 |X2 )
(7.25)
= I(X1 ; Y1 ) + I(X2 ; Y2 ),
(7.27)
where (7.24) and (7.25) follow from Markovity, and we have equality in (7.26) if Y 1 and
Y2 are independent. Equality occurs when X 1 and X2 are independent. Hence
C =
≤
max I(X1 , X2 ; Y1 , Y2 )
(7.28)
max I(X1 ; Y1 ) + max I(X2 ; Y2 )
(7.29)
p(x1 ,x2 )
p(x1 ,x2 )
p(x1 ,x2 )
= max I(X1 ; Y1 ) + max I(X2 ; Y2 )
(7.30)
= C1 + C2 .
(7.31)
p(x1 )
p(x2 )
with equality iff p(x1 , x2 ) = p∗ (x1 )p∗ (x2 ) and p∗ (x1 ) and p∗ (x2 ) are the distributions
that maximize C1 and C2 respectively.
167
Channel Capacity
6. Noisy typewriter. Consider a 26-key typewriter.
(a) If pushing a key results in printing the associated letter, what is the capacity C
in bits?
(b) Now suppose that pushing a key results in printing that letter or the next (with
equal probability). Thus A → A or B, . . . , Z → Z or A. What is the capacity?
(c) What is the highest rate code with block length one that you can find that achieves
zero probability of error for the channel in part (b) .
Solution: Noisy typewriter.
(a) If the typewriter prints out whatever key is struck, then the output, Y , is the
same as the input, X , and
C = max I(X; Y ) = max H(X) = log 26,
(7.32)
attained by a uniform distribution over the letters.
(b) In this case, the output is either equal to the input (with probability 12 ) or equal
to the next letter ( with probability 12 ). Hence H(Y |X) = log 2 independent of
the distribution of X , and hence
C = max I(X; Y ) = max H(Y ) − log 2 = log 26 − log 2 = log 13,
(7.33)
attained for a uniform distribution over the output, which in turn is attained by
a uniform distribution on the input.
(c) A simple zero error block length one code is the one that uses every alternate
letter, say A,C,E,. . . ,W,Y. In this case, none of the codewords will be confused,
since A will produce either A or B, C will produce C or D, etc. The rate of this
code,
log(# codewords)
log 13
R=
=
= log 13.
(7.34)
Block length
1
In this case, we can achieve capacity with a simple code with zero error.
7. Cascade of binary symmetric channels.
independent binary symmetric channels,
Show that a cascade of n identical
X0 → BSC →1 → · · · → Xn−1 → BSC →n
each with raw error probability p , is equivalent to a single BSC with error probability
1
n
lim I(X0 ; Xn ) = 0 if p %= 0, 1 . No encoding or
2 (1 − (1 − 2p) ) and hence that n→∞
decoding takes place at the intermediate terminals X 1 , . . . , Xn−1 . Thus the capacity
of the cascade tends to zero.
Solution: Cascade of binary symmetric channels. There are many ways to solve
this problem. One way is to use the singular value decomposition of the transition
probability matrix for a single BSC.
168
Channel Capacity
Let,
A=
"
1−p
p
p
1−p
#
be the transition probability matrix for our BSC. Then the transition probability matrix
for the cascade of n of these BSC’s is given by,
An = An .
Now check that,
−1
"
T =
"
A=T
where,
1
0
0 1 − 2p
#
1 1
1 −1
#
T
.
Using this we have,
An = An
= T
=
−1
"
"
1
2 (1
1
2 (1
1
0
0 (1 − 2p)n
+ (1 − 2p)n )
− (1 − 2p)n )
#
T
1
2 (1
1
2 (1
− (1 − 2p)n )
+ (1 − 2p)n )
#
.
From this we see that the cascade of n BSC’s is also a BSC with probablility of error,
1
pn = (1 − (1 − 2p)n ).
2
The matrix, T , is simply the matrix of eigenvectors of A .
This problem can also be solved by induction on n .
Probably the simplest way to solve the problem is to note that the probability of
error for the cascade channel is simply the sum of the odd terms of the binomial
expansion of (x + y)n with x = p and y = 1 − p . But this can simply be written as
1
1
1
n
n
n
2 (x + y) − 2 (y − x) = 2 (1 − (1 − 2p) .
8. The Z channel. The Z-channel has binary input and output alphabets and transition
probabilities p(y|x) given by the following matrix:
Q=
"
1
0
1/2 1/2
#
x, y ∈ {0, 1}
Find the capacity of the Z-channel and the maximizing input probability distribution.
169
Channel Capacity
Solution: The Z channel. First we express I(X; Y ) , the mutual information between
the input and output of the Z-channel, as a function of x = Pr(X = 1) :
H(Y |X) = Pr(X = 0) · 0 + Pr(X = 1) · 1 = x
H(Y ) = H(Pr(Y = 1)) = H(x/2)
I(X; Y ) = H(Y ) − H(Y |X) = H(x/2) − x
Since I(X; Y ) = 0 when x = 0 and x = 1 , the maximum mutual information is
obtained for some value of x such that 0 < x < 1 .
Using elementary calculus, we determine that
1 − x/2
d
1
I(X; Y ) = log2
− 1,
dx
2
x/2
which is equal to zero for x = 2/5 . (It is reasonable that Pr(X = 1) < 1/2 because
X = 1 is the noisy input to the channel.) So the capacity of the Z-channel in bits is
H(1/5) − 2/5 = 0.722 − 0.4 = 0.322 .
9. Suboptimal codes. For the Z channel of the previous problem, assume that we choose
a (2nR , n) code at random, where each codeword is a sequence of fair coin tosses. This
will not achieve capacity. Find the maximum rate R such that the probability of error
(n)
Pe , averaged over the randomly generated codes, tends to zero as the block length n
tends to infinity.
Solution: Suboptimal codes. From the proof of the channel coding theorem, it follows
that using a random code with codewords generated according to probability p(x) , we
can send information at a rate I(X; Y ) corresponding to that p(x) with an arbitrarily
low probability of error. For the Z channel described in the previous problem, we can
calculate I(X; Y ) for a uniform distribution on the input. The distribution on Y is
(3/4, 1/4), and therefore
1
1 1
3 3
3 1
I(X; Y ) = H(Y ) − H(Y |X) = H( , ) − H( , ) = − log 3.
4 4
2
2 2
2 4
(7.35)
10. Zero-error capacity. A channel with alphabet {0, 1, 2, 3, 4} has transition probabilities of the form
)
1/2 if y = x ± 1 mod 5
p(y|x) =
0 otherwise.
(a) Compute the capacity of this channel in bits.
(b) The zero-error capacity of a channel is the number of bits per channel use that
can be transmitted with zero probability of error. Clearly, the zero-error capacity
of this pentagonal channel is at least 1 bit (transmit 0 or 1 with probability 1/2).
Find a block code that shows that the zero-error capacity is greater than 1 bit.
Can you estimate the exact value of the zero-error capacity?
(Hint: Consider codes of length 2 for this channel.)
The zero-error capacity of this channel was finally found by Lovasz[9].
170
Channel Capacity
Solution: Zero-error capacity.
(a) Since the channel is symmetric, it is easy to compute its capacity:
H(Y |X) = 1
I(X; Y ) = H(Y ) − H(Y |X) = H(Y ) − 1 .
So mutual information is maximized when Y is uniformly distributed, which occurs when the input X is uniformly distributed. Therefore the capacity in bits is
C = log2 5 − 1 = log 2 2.5 = 1.32 .
(b) Let us construct a block code consisting of 2-tuples. We need more than 4 codewords in order to achieve capacity greater than 2 bits, so we will pick 5 codewords
with distinct first symbols: {0a, 1b, 2c, 3d, 4e} . We must choose a, b, c, d, e so that
the receiver will be able to determine which codeword was transmitted. A simple repetition code will not work, since if, say, 22 is transmitted, then 11 might
be received, and the receiver could not tell whether the codeword was 00 or 22.
Instead, using codewords of the form (i+1 mod 5, 2i+1 mod 5) yields the code
11,23,30,42,04.
Here is the decoding table for the pentagon channel:
040. 43. 2320101. 34. 34. 1212
It is amusing to note that the five pairs that cannot be received are exactly the 5
codewords.
Then whenever xy is received, there is exactly one possible codeword. (Each
codeword will be received as one of 4 possible 2-tuples; so there are 20 possible
received 2-tuples, out of a total of 25.) Since there are 5 possible error-free messages
with 2 channel uses, the zero-error capacity of this channel is at least 12 log2 5 =
1.161 bits.
In fact, the zero-error capacity of this channel is exactly 21 log2 5 . This result
was obtained by László Lovász, “On the Shannon capacity of a graph,” IEEE
Transactions on Information Theory, Vol IT-25, pp. 1–7, January 1979. The
first results on zero-error capacity are due to Claude E. Shannon, “The zeroerror capacity of a noisy channel,” IEEE Transactions on Information Theory, Vol
IT-2, pp. 8–19, September 1956, reprinted in Key Papers in the Development of
Information Theory, David Slepian, editor, IEEE Press, 1974.
11. Time-varying channels. Consider a time-varying discrete memoryless channel. Let
Y1 , Y2 , . . . , Yn be conditionally independent given X1 , X2 , . . . , Xn , with conditional disC
tribution given by p(y | x) = ni=1 pi (yi | xi ).
171
Channel Capacity
1 − pi
' 0
)
(
(
*
(
*
(
*
(
*
(p
i
* (
(
*
( * pi
(
*
(
*
(
*
(
*
(
*
+
'
* 1
1 (
0 *
*
1 − pi
Let X = (X1 , X2 , . . . , Xn ), Y = (Y1 , Y2 , . . . , Yn ). Find maxp(x) I(X; Y).
Solution: Time-varying channels.
We can use the same chain of inequalities as in the proof of the converse to the channel
coding theorem. Hence
I(X n ; Y n ) = H(Y n ) − H(Y n |X n )
(7.36)
= H(Y n ) −
H(Yi |Y1 , . . . , Yi−1 , X n )
(7.37)
H(Yi |Xi ),
(7.38)
= H(Y n ) −
n
!
i=1
n
!
i=1
since by the definition of the channel, Y i depends only on Xi and is conditionally
independent of everything else. Continuing the series of inequalities, we have
I(X ; Y ) = H(Y ) −
n
n
n
≤
≤
n
!
i=1
n
!
i=1
n
!
i=1
H(Yi ) −
H(Yi |Xi )
n
!
i=1
H(Yi |Xi )
(1 − h(pi )),
(7.39)
(7.40)
(7.41)
with equality if X1 , X2 , . . . , Xn is chosen i.i.d. ∼ Bern(1/2). Hence
max I(X1 , X2 , . . . , Xn ; Y1 , Y2 , . . . , Yn ) =
p(x)
12. Unused symbols. Show that the capacity
matrix

2/3

Py|x =  1/3
0
n
!
i=1
(1 − h(pi )).
(7.42)
of the channel with probability transition

1/3 0

1/3 1/3 
1/3 2/3
(7.43)
172
Channel Capacity
is achieved by a distribution that places zero probability on one of input symbols. What
is the capacity of this channel? Give an intuitive reason why that letter is not used.
Solution: Unused symbols Let the probabilities of the three input symbols be p 1 , p2
and p3 . Then the probabilities of the three output symbols can be easily calculated to
be ( 23 p1 + 13 p2 , 13 , 13 p2 + 23 p3 ) , and therefore
I(X; Y ) = H(Y ) − H(Y |X)
(7.44)
1
1 1
2
2 1
2
(7.45)
= H( p1 + p2 , , p2 + p3 ) − (p1 + p3 )H( , ) − p2 log 3
3
3
3 3
3
3 3
1 1
1 1 1
2 1
= H( + (p1 − p3 ), , − (p1 − p3 )) − (p1 + p3 )H( , ) − (1 − p1 − p3(7.46)
) log 3
3 3
3 3 3
3 3
where we have substituted p2 = 1 − p1 − p3 . Now if we fix p1 + p3 , then the second and
third terms are fixed, and the first term is maximized if p 1 − p3 = 0 , i.e., if p1 = p3 .
(The same conclusion can be drawn from the symmetry of the problem.)
Now setting p1 = p3 , we have
2 1
1 1 1
I(X; Y ) = H( , , ) − (p1 + p3 )H( , ) − (1 − p1 − p3 ) log 3
3 3 3
3 3
2 1
= log 3 − (p1 + p3 )H( , ) − (1 − p1 − p3 ) log 3
3 3
2 1
= (p1 + p3 )(log 3 − H( , ))
3 3
(7.47)
(7.48)
(7.49)
which is maximized if p1 + p3 is as large as possible (since log 3 > H( 23 , 13 ) ). Therefore the maximizing distribution corresponds to p 1 + p3 = 1 , p1 = p3 , and therefore
(p1 , p2 , p3 ) = ( 21 , 0, 12 ) . The capacity of this channel = log 3 − H( 23 , 13 ) = log 3 − (log 3 −
2
2
3 ) = 3 bits.
The intuitive reason why p2 = 0 for the maximizing distribution is that conditional
on the input being 2, the output is uniformly distributed. The same uniform output
distribution can be achieved without using the symbol 2 (by setting p 1 = p3 ), and
therefore the use of symbol 2 does not add any information (it does not change the
entropy of the output and the conditional entropy H(Y |X = 2) is the maximum
possible, i.e., log 3 , so any positive probability for symbol 2 will only reduce the mutual
information.
Note that not using a symbol is optimal only if the uniform output distribution can be
achieved without use of that symbol. For example, in the Z channel example above, both
symbols are used, even though one of them gives a conditionally uniform distribution
on the output.
13. Erasures and errors in a binary channel. Consider a channel with binary inputs
that has both erasures and errors. Let the probability of error be % and the probability
of erasure be α , so the the channel is as illustrated below:
Channel Capacity
0 *
.
1−α−%
173
' 0
,
,
,
,
*
. *
. **
.
*
*
.
,
*
. %
* ,
*,
.
*
.
, * α
*
. ,
*
.,
+
* e
)
(
,.
(
, .
(α
,
. ((
,%
(.
( .
,
(
,
.
(
(
, (
.
, (
.
(
,
.
(
(
'
/
. 1
1 ,
1−α−%
(a) Find the capacity of this channel.
(b) Specialize to the case of the binary symmetric channel ( α = 0 ).
(c) Specialize to the case of the binary erasure channel ( % = 0 ).
Solution:
(a) As with the examples in the text, we set the input distribution for the two inputs
to be π and 1 − π . Then
C = max I(X; Y )
p(x)
(7.50)
= max(H(Y ) − H(Y |X))
(7.51)
= max H(Y ) − H(1 − % − α, α, %).
(7.52)
p(x)
p(x)
As in the case of the erasure channel, the maximum value for H(Y ) cannot be
log 3 , since the probability of the erasure symbol is α independent of the input
distribution. Thus,
H(Y ) = H(π(1 − % − α) + (1 − π)%, α, (1 − π)(1 − % − α) + π%)
(7.53)
*
+
π + % − πα − 2π% 1 − π − % + 2%π − α + απ
= H(α) + (1 − α)H
,
(7.54)
1−α
1−α
≤ H(α) + (1 − α)
(7.55)
with equality iff π+!−πα−2π!
= 12 , which can be achieved by setting π = 12 . (The
1−α
1
fact that π = 1 − π = 2 is the optimal distribution should be obvious from the
symmetry of the problem, even though the channel is not weakly symmetric.)
174
Channel Capacity
Therefore the capacity of this channel is
C = H(α) + 1 − α − H(1 − α − %, α, %)
= H(α) + 1 − α − H(α) − (1 − α)H
*
= (1 − α) 1 − H
*
%
1−α−%
,
1−α 1−α
*
1−α−%
%
,
1−α 1−α
++
+
(7.56)
(7.57)
(7.58)
(b) Setting α = 0 , we get
C = 1 − H(%),
(7.59)
which is the capacity of the binary symmetric channel.
(c) Setting % = 0 , we get
C =1−α
(7.60)
which is the capacity of the binary erasure channel.
14. Channels with dependence between the letters. Consider the following channel
over a binary alphabet that takes in two bit symbols and produces a two bit output,
as determined by the following mapping: 00 → 01 , 01 → 10 , 10 → 11 , and 11 →
00 . Thus if the two bit sequence 01 is the input to the channel, the output is 10
with probability 1. Let X1 , X2 denote the two input symbols and Y1 , Y2 denote the
corresponding output symbols.
(a) Calculate the mutual information I(X 1 , X2 ; Y1 , Y2 ) as a function of the input
distribution on the four possible pairs of inputs.
(b) Show that the capacity of a pair of transmissions on this channel is 2 bits.
(c) Show that under the maximizing input distribution, I(X 1 ; Y1 ) = 0 .
Thus the distribution on the input sequences that achieves capacity does not necessarily maximize the mutual information between individual symbols and their
corresponding outputs.
Solution:
(a) If we look at pairs of inputs and pairs of outputs, this channel is a noiseless
four input four output channel. Let the probabilities of the four input pairs be
p00 , p01 , p10 and p11 respectively. Then the probability of the four pairs of output
bits is p11 , p00 , p01 and p10 respectively, and
I(X1 , X2 ; Y1 , Y2 ) = H(Y1 , Y2 ) − H(Y1 , Y2 |X1 , X2 )
= H(Y1 , Y2 ) − 0
= H(p11 , p00 , p01 , p10 )
(7.61)
(7.62)
(7.63)
175
Channel Capacity
(b) The capacity of the channel is achieved by a uniform distribution over the inputs,
which produces a uniform distribution on the output pairs
C = max I(X1 , X2 ; Y1 , Y2 ) = 2 bits
p(x1 ,x2 )
and the maximizing p(x1 , x2 ) puts probability
and 11.
1
4
(7.64)
on each of the pairs 00, 01, 10
(c) To calculate I(X1 ; Y1 ) , we need to calculate the joint distribution of X 1 and Y1 .
The joint distribution of X1 X2 and Y1 Y2 under an uniform input distribution is
given by the following matrix
X1 X2 \Y1 Y2 00 01 10 11
1
00
0
0
0
4
1
01
0
0
0
4
1
10
0
0
0
4
1
11
0
0
0
4
From this, it is easy to calculate the joint distribution of X 1 and Y1 as
X1 \Y1 0 1
1
1
0
4
4
1
1
1
4
4
and therefore we can see that the marginal distributions of X 1 and Y1 are both
(1/2, 1/2) and that the joint distribution is the product of the marginals, i.e., X 1
is independent of Y1 , and therefore I(X1 ; Y1 ) = 0 .
Thus the distribution on the input sequences that achieves capacity does not necessarily maximize the mutual information between individual symbols and their
corresponding outputs.
15. Jointly typical sequences. As we did in problem 13 of Chapter 3 for the typical
set for a single random variable, we will calculate the jointly typical set for a pair of
random variables connected by a binary symmetric channel, and the probability of error
for jointly typical decoding for such a channel.
0.9
' 0
)
(
(
*
(
*
(
*
(
*
(
* ( 0.1
(
*
( * 0.1
(
*
(
*
(
*
(
*
(
*
(
+
'
* 1
1
0 *
*
0.9
176
Channel Capacity
We will consider a binary symmetric channel with crossover probability 0.1. The input
distribution that achieves capacity is the uniform distribution, i.e., p(x) = (1/2, 1/2) ,
which yields the joint distribution p(x, y) for this channel is given by
X\Y
0
1
0
0.45
0.05
1
0.05
0.45
The marginal distribution of Y is also (1/2, 1/2) .
(a) Calculate H(X) , H(Y ) , H(X, Y ) and I(X; Y ) for the joint distribution above.
(b) Let X1 , X2 , . . . , Xn be drawn i.i.d. according the Bernoulli(1/2) distribution.
Of the 2n possible input sequences of length n , which of them are typical, i.e.,
(n)
(n)
member of A! (X) for % = 0.2 ? Which are the typical sequences in A ! (Y ) ?
(n)
(c) The jointly typical set A! (X, Y ) is defined as the set of sequences that satisfy
equations (7.35-7.37). The first two equations correspond to the conditions that
(n)
(n)
xn and y n are in A! (X) and A! (Y ) respectively. Consider the last condition,
which can be rewritten to state that − n1 log p(xn , y n ) ∈ (H(X, Y )−%, H(X, Y )+%) .
Let k be the number of places in which the sequence x n differs from y n ( k is a
function of the two sequences). Then we can write
p(xn , y n ) =
n
B
p(xi , yi )
(7.65)
= (0.45)n−k (0.05)k
* +n
1
(1 − p)n−k pk
=
2
(7.66)
i=1
(7.67)
An alternative way at looking at this probability is to look at the binary symmetric
channel as in additive channel Y = X ⊕ Z , where Z is a binary random variable
that is equal to 1 with probability p , and is independent of X . In this case,
p(xn , y n ) = p(xn )p(y n |xn )
= p(xn )p(z n |xn )
= p(x )p(z )
* +n
1
=
(1 − p)n−k pk
2
n
n
(7.68)
(7.69)
(7.70)
(7.71)
Show that the condition that (xn , y n ) being jointly typical is equivalent to the
condition that xn is typical and z n = y n − xn is typical.
(n)
(d) We now calculate the size of A! (Z) for n = 25 and % = 0.2 . As in problem 13
of Chapter 3, here is a table of the probabilities and numbers of sequences of with
k ones
177
Channel Capacity
k
0
1
2
3
4
5
6
7
8
9
10
11
12
,nk
1
25
300
2300
12650
53130
177100
480700
1081575
2042975
3268760
4457400
5200300
,n k
pk (1 − p)n−k
0.071790
0.199416
0.265888
0.226497
0.138415
0.064594
0.023924
0.007215
0.001804
0.000379
0.000067
0.000010
0.000001
− n1 log p(xn )
0.152003
0.278800
0.405597
0.532394
0.659191
0.785988
0.912785
1.039582
1.166379
1.293176
1.419973
1.546770
1.673567
(Sequences with more than 12 ones are omitted since their total probability is
negligible (and they are not in the typical set).)
(n)
What is the size of the set A! (Z) ?
(e) Now consider random coding for the channel, as in the proof of the channel coding
theorem. Assume that 2nR codewords X n (1), X n (2), . . . , X n (2nR ) are chosen uniformly over the 2n possible binary sequences of length n . One of these codewords
is chosen and sent over the channel. The receiver looks at the received sequence
and tries to find a codeword in the code that is jointly typical with the received
sequence. As argued above, this corresponds to finding a codeword X n (i) such
(n)
that Y n − X n (i) ∈ A! (Z) . For a fixed codeword xn (i) , what is the probability
that the received sequence Y n is such that (xn (i), Y n ) is jointly typical?
(f) Now consider a particular received sequence y n = 000000 . . . 0 , say. Assume that
we choose a sequence X n at random, uniformly distributed among all the 2 n
possible binary n -sequences. What is the probability that the chosen sequence
is jointly typical with this y n ? (Hint: this is the probability of all sequences x n
(n)
such that y n − xn ∈ A! (Z) .)
(g) Now consider a code with 29 = 512 codewords of length 12 chosen at random,
uniformly distributed among all the 2n sequences of length n = 25 . One of
these codewords, say the one corresponding to i = 1 , is chosen and sent over the
channel. As calculated in part (e), the received sequence, with high probability, is
jointly typical with the codeword that was sent. What is probability that one or
more of the other codewords (which were chosen at random, independently of the
sent codeword) is jointly typical with the received sequence? (Hint: You could use
the union bound but you could also calculate this probability exactly, using the
result of part (f) and the independence of the codewords)
(h) Given that a particular codeword was sent, the probability of error (averaged over
the probability distribution of the channel and over the random choice of other
178
Channel Capacity
codewords) can be written as
Pr(Error|xn (1) sent) =
!
y n :y n causes
error
p(y n |xn (1))
(7.72)
There are two kinds of error: the first occurs if the received sequence y n is not
jointly typical with the transmitted codeword, and the second occurs if there is
another codeword jointly typical with the received sequence. Using the result of
the previous parts, calculate this probability of error.
By the symmetry of the random coding argument, this does not depend on which
codeword was sent.
The calculations above show that average probability of error for a random code with
512 codewords of length 25 over the binary symmetric channel of crossover probability
0.1 is about 0.34. This seems quite high, but the reason for this is that the value of
% that we have chosen is too large. By choosing a smaller % , and a larger n in the
(n)
definitions of A! , we can get the probability of error to be as small as we want, as
long as the rate of the code is less than I(X; Y ) − 3% .
Also note that the decoding procedure described in the problem is not optimal. The
optimal decoding procedure is maximum likelihood, i.e., to choose the codeword that
is closest to the received sequence. It is possible to calculate the average probability
of error for a random code for which the decoding is based on an approximation to
maximum likelihood decoding, where we decode a received sequence to the unique
codeword that differs from the received sequence in ≤ 4 bits, and declare an error
otherwise. The only difference with the jointly typical decoding described above is
that in the case when the codeword is equal to the received sequence! The average
probability of error for this decoding scheme can be shown to be about 0.285.
Solution: Jointly typical set
(a) Calculate H(X) , H(Y ) , H(X, Y ) and I(X; Y ) for the joint distribution above.
Solution: H(X) = H(Y ) = 1 bit, H(X, Y ) = H(X) + H(Y |X) = 1 + H(p) = 1 −
0.9 log 0.9−0.1 log 0.1 = 1+0.469 = 1.469 bits, and I(X; Y ) = H(Y )−H(Y |X) =
0.531 bits.
(b) Let X1 , X2 , . . . , Xn be drawn i.i.d. according the Bernoulli(1/2) distribution. Of
the 2n possible sequences of length n , which of them are typical, i.e., member of
(n)
(n)
A! (X) for % = 0.2 ? Which are the typical sequences in A ! (Y ) ?
Solution:In the case for the uniform distribution, every sequence has probability
(1/2)n , and therefore for every sequence, − n1 log p(xn ) = 1 = H(X) , and therefore
(n)
every sequence is typical, i.e., ∈ A! (X) .
(n)
Similarly, every sequence y n is typical, i.e., ∈ A! (Y ) .
(n)
(c) The jointly typical set A! (X, Y ) is defined as the set of sequences that satisfy
equations (7.35-7.37) of EIT. The first two equations correspond to the conditions
(n)
(n)
that xn and y n are in A! (X) and A! (Y ) respectively. Consider the last
179
Channel Capacity
condition, which can be rewritten to state that − n1 log p(xn , y n ) ∈ (H(X, Y ) −
%, H(X, Y ) + %) . Let k be the number of places in which the sequence x n differs
from y n ( k is a function of the two sequences). Then we can write
p(xn , y n ) =
n
B
p(xi , yi )
(7.73)
= (0.45)n−k (0.05)k
* +n
1
=
(1 − p)n−k pk
2
(7.74)
i=1
(7.75)
An alternative way at looking at this probability is to look at the binary symmetric
channel as in additive channel Y = X ⊕ Z , where Z is a binary random variable
that is equal to 1 with probability p , and is independent of X . In this case,
p(xn , y n ) = p(xn )p(y n |xn )
(7.76)
= p(x )p(z |x )
n
n
(7.77)
n
= p(x )p(z )
* +n
1
=
(1 − p)n−k pk
2
n
(7.78)
n
(7.79)
Show that the condition that (xn , y n ) being jointly typical is equivalent to the
condition that xn is typical and z n = y n − xn is typical.
(n)
Solution:The conditions for (xn , y n ) ∈ A! (X, Y ) are
A(n)
= {(xn , y n ) ∈ X n × Y n :
!
3
3
3
3 1
3− log p(xn ) − H(X)3 < %,
3
3 n
3
3
3 1
3
3− log p(y n ) − H(Y )3 < %,
3 n
3
3
3
3 1
3
3− log p(xn , y n ) − H(X, Y )3 < %},
3 n
3
(7.80)
(7.81)
(7.82)
(7.83)
But, as argued above, every sequence x n and y n satisfies the first two conditions.
Thereofre, the only condition that matters is the last one. As argued above,
** +
+
1
1
1 n k
− log p(xn , y n ) = − log
p (1 − p)n−k
n
n
2
k
n−k
= 1 − log p −
log(1 − p)
n
n
(7.84)
(7.85)
Thus the pair (xn , y n ) is jointly typical iff |1− nk log p− n−k
n log(1−p)−H(X, Y )| <
k
n−k
% , i.e., iff | − n log p − n log(1 − p) − H(p)| < % , which is exactly the condition
for z n = y n ⊕ xn to be typical. Thus the set of jointly typical pairs (x n , y n ) is the
set such that the number of places in which x n differs from y n is close to np .
180
Channel Capacity
(n)
(d) We now calculate the size of A! (Z) for n = 25 and % = 0.2 . As in problem 7 of
Homework 4, here is a table of the probabilities and numbers of sequences of with
k ones
,n- $
,n,n- k
n−k
k
p(xn ) = pk (1 − p)n−k
Cumul. pr.
j≤k j
k
k p (1 − p)
0
1
1
7.178975e-02
0.071790
0.071790
1
25
26
7.976639e-03
0.199416
0.271206
2
300
326
8.862934e-04
0.265888
0.537094
3
2300
2626
9.847704e-05
0.226497
0.763591
4
12650
15276
1.094189e-05
0.138415
0.902006
5
53130
68406
1.215766e-06
0.064594
0.966600
6
177100
245506
1.350851e-07
0.023924
0.990523
7
480700
726206
1.500946e-08
0.007215
0.997738
8 1081575
1807781
1.667718e-09
0.001804
0.999542
9 2042975
3850756
1.853020e-10
0.000379
0.999920
10 3268760
7119516
2.058911e-11
0.000067
0.999988
11 4457400 11576916
2.287679e-12
0.000010
0.999998
12 5200300 16777216
2.541865e-13
0.000001
0.999999
(Sequences with more than 12 ones are omitted since their total probability is
negligible (and they are not in the typical set).)
(n)
What is the size of the set A! (Z) ?
Solution: H(Z) = H(0.1) = 0.469 .
Setting % = 0.2 , the typical set for Z is the set sequences for which − n1 log p(z n ) ∈
(H(Z) − %, H(Z) + %) = (0.269, 0.669) . Looking at the table above for n = 25 , it
follows that the typical Z sequences are those with 1,2,3 or 4 ones.
(n)
The total probability of the set A! (Z) = 0.902006 − 0.071790 = 0.830216 and
the size of this set is 15276 − 1 = 15275 .
(e) Now consider random coding for the channel, as in the proof of the channel coding
theorem. Assume that 2nR codewords X n (1), X n (2), . . . , X n (2nR ) are chosen uniformly over the 2n possible binary sequences of length n . One of these codewords
is chosen and sent over the channel. The receiver looks at the received sequence
and tries to find a codeword in the code that is jointly typical with the received
sequence. As argued above, this corresponds to finding a codeword X n (i) such
(n)
that Y n − X n (i) ∈ A! (Z) . For a fixed codeword xn (i) , what is the probability
that the received sequence Y n is such that (xn (i), Y n ) is jointly typical?
Solution:The easiest way to calculate this probability is to view the BSC as an
additive channel Y = X ⊕ Z , where Z is Bernoulli( p ). Then the probability that
for a given codeword, xn (i) , that the output Y n is jointly typical with it is equal
(n)
to the probability that the noise sequence Z n is typical, i.e., in A! (Z) . The noise
sequence is drawn i.i.d. according to the distribution (1 − p, p) , and as calculated
(n)
above, the probability that the sequence is typical, i.e., Pr(A ! (Z)) = 0.830216 .
Therefore the probability that the received sequence is not jointly typical with the
transmitted codeword is 0.169784.
− n1 log p(xn )
0.152003
0.278800
0.405597
0.532394
0.659191
0.785988
0.912785
1.039582
1.166379
1.293176
1.419973
1.546770
1.673567
181
Channel Capacity
(f) Now consider a particular received sequence y n = 000000 . . . 0 , say. Assume that
we choose a sequence X n at random, uniformly distributed among all the 2 n
possible binary n -sequences. What is the probability that the chosen sequence
is jointly typical with this y n ? (Hint: this is the probability of all sequences x n
(n)
such that y n − xn ∈ A! (Z) .)
Solution:Since all xn sequences are chosen with the same probability ( (1/2) n ),
the probability that the xn sequence chosen is jointly typical with the received y n
is equal to the number of possible jointly typical (x n , y n ) pairs times (1/2)n . The
number of sequences xn that are jointly typical with a given y n is equal to number
of typical z n , where z n = xn ⊕ y n . Thus the probability that a randomly chosen
(n)
xn is typical with the given y n is |A! (Z)| ∗ ( 12 )n = 15275 ∗ 2−25 = 4.552 × 10−4 .
(g) Now consider a code with 29 = 512 codewords of length 12 chosen at random,
uniformly distributed among all the 2n sequences of length n = 25 . One of
these codewords, say the one corresponding to i = 1 , is chosen and sent over the
channel. As calculated in part (e), the received sequence, with high probability, is
jointly typical with the codeword that was sent. What is probability that one or
more of the other codewords (which were chosen at random, independently of the
sent codeword) is jointly typical with the received sequence? (Hint: You could use
the union bound but you could also calculate this probability exactly, using the
result of part (f) and the independence of the codewords)
Solution:Each of the other codewords is jointly typical with received sequence
with probability 4.552 × 10−4 , and each of these codewords is independent. The
probability that none of the 511 codewords are jointly typical with the received
sequence is therefore (1 − 4.552 × 10−4 )511 = 0.79241 , and the probability that
at least one of them is jointly typical with the received sequence is therefore 1 −
0.79241 = 0.20749 .
Using the simple union of events bound gives the probability of another codeword
being jointly typical with the received sequence to be 4.552×10 −4 ×511 = 0.23262 .
The previous calculation gives the more exact answer.
(h) Given that a particular codeword was sent, the probability of error (averaged over
the probability distribution of the channel and over the random choice of other
codewords) can be written as
Pr(Error|xn (1) sent) =
!
y n :y n causes
error
p(y n |xn (1))
(7.86)
There are two kinds of error: the first occurs if the received sequence y n is not
jointly typical with the transmitted codeword, and the second occurs if there is
another codeword jointly typical with the received sequence. Using the result of
the previous parts, calculate this probability of error.
By the symmetry of the random coding argument, this does not depend on which
codeword was sent.
Solution:There are two error events, which are conditionally independent, given
the received sequence. In the previous part, we showed that the conditional proba-
182
Channel Capacity
bility of error of the second kind was 0.20749, irrespective of the received sequence
yn .
The probability of error of the first kind is 0.1698, conditioned on the input code(n)
word. In part (e), we calculated the probability that (x n (i), Y n ) ∈
/ A! (X, Y ) ,
but this was conditioned on a particular input sequence. Now by the symmetry
and uniformity of the random code construction, this probability does not depend
(n)
on xn (i) , and therefore the probability that (X n , Y n ) ∈
/ A! (X, Y ) is also equal
to this probability, i.e., to 0.1698.
We can therefore use a simple union of events bound to bound the total probability
of error ≤ 0.1698 + 0.2075 = 0.3773 .
Thus we can send 512 codewords of length 25 over a BSC with crossover probability
0.1 with probability of error less than 0.3773.
A little more accurate calculation can be made of the probability of error using the
fact that conditioned on the received sequence, both kinds of error are independent.
Using the symmetry of the code construction process, the probability of error of
the first kind conditioned on the received sequence does not depend on the received
sequence, and is therefore = 0.1698 . Therefore the probability that neither type
of error occurs is (using their independence) = (1 − 0.1698)(1 − 0.2075) = 0.6579
and therefore, the probability of error is 1 − 0.6579 = 0.3421
The calculations above show that average probability of error for a random code with
512 codewords of length 25 over the binary symmetric channel of crossover probability
0.1 is about 0.34. This seems quite high, but the reason for this is that the value of
% that we have chosen is too large. By choosing a smaller % , and a larger n in the
(n)
definitions of A! , we can get the probability of error to be as small as we want, as
long as the rate of the code is less than I(X; Y ) − 3% .
Also note that the decoding procedure described in the problem is not optimal. The
optimal decoding procedure is maximum likelihood, i.e., to choose the codeword that
is closest to the received sequence. It is possible to calculate the average probability
of error for a random code for which the decoding is based on an approximation to
maximum likelihood decoding, where we decode a received sequence to the unique
codeword that differs from the received sequence in ≤ 4 bits, and declare an error
otherwise. The only difference with the jointly typical decoding described above is
that in the case when the codeword is equal to the received sequence! The average
probability of error for this decoding scheme can be shown to be about 0.285.
16. Encoder and decoder as part of the channel: Consider a binary symmetric channel with crossover probability 0.1. A possible coding scheme for this channel with two
codewords of length 3 is to encode message a 1 as 000 and a2 as 111. With this coding
scheme, we can consider the combination of encoder, channel and decoder as forming
a new BSC, with two inputs a1 and a2 and two outputs a1 and a2 .
(a) Calculate the crossover probability of this channel.
183
Channel Capacity
(b) What is the capacity of this channel in bits per transmission of the original channel?
(c) What is the capacity of the original BSC with crossover probability 0.1?
(d) Prove a general result that for any channel, considering the encoder, channel and
decoder together as a new channel from messages to estimated messages will not
increase the capacity in bits per transmission of the original channel.
Solution: Encoder and Decoder as part of the channel:
(a) The probability of error with these 3 bits codewords was 2.8%, and thus the
crossover probability of this channel is 0.028.
(b) The capacity of a BSC with crossover probability 0.028 is 1 − H(0.028) , i.e., 10.18426 or 0.81574 bits for each 3 bit codeword. This corresponds to 0.27191 bits
per transmission of the original channel.
(c) The original channel had capacity 1 − H(0.1) , i.e., 0.531 bits/transmission.
(d) The general picture for the channel with encoder and decoder is shown below
W %
Message
Encoder
Xn %
Channel
p(y|x)
Yn %
Decoder
Ŵ %
Estimate
of
Message
By the data processing inequality, I(W ; Ŵ ) ≤ I(X n ; Y n ) , and therefore
CW =
1
1
max I(W ; Ŵ ) ≤ max
I(X n ; Y n ) = C
n p(w)
n p(xn )
(7.87)
Thus the capacity of the channel per transmission is not increased by the addition
of the encoder and decoder.
17. Codes of length 3 for a BSC and BEC: In Problem 16, the probability of error was
calculated for a code with two codewords of length 3 (000 and 111) sent over a binary
symmetric channel with crossover probability % . For this problem, take % = 0.1 .
(a) Find the best code of length 3 with four codewords for this channel. What is
the probability of error for this code? (Note that all possible received sequences
should be mapped onto possible codewords)
(b) What is the probability of error if we used all the 8 possible sequences of length 3
as codewords?
(c) Now consider a binary erasure channel with erasure probability 0.1. Again, if we
used the two codeword code 000 and 111, then received sequences 00E,0E0,E00,0EE,E0E,EE0
would all be decoded as 0, and similarly we would decode 11E,1E1,E11,1EE,E1E,EE1
184
Channel Capacity
as 1. If we received the sequence EEE we would not know if it was a 000 or a 111
that was sent - so we choose one of these two at random, and are wrong half the
time.
What is the probability of error for this code over the erasure channel?
(d) What is the probability of error for the codes of parts (a) and (b) when used over
the binary erasure channel?
Solution: Codes of length 3 for a BSC and BEC:
(a) To minimize the probability of confusion, the codewords should be as far apart as
possible. With four codewords, the minimum distance is at most 2, and there are
various sets of codewords that achieve this minimum distance. An example set is
000, 011, 110 and 101. Each of these codewords differs from the other codewords
in at least two places.
To calculate the probability of error, we need to find the best decoding rule, i.e,.
we need to map all possible recieved sequences onto codewords. As argued in the
previous homework, the best decoding rule assigns to each received sequence the
nearest codeword, with ties being broken arbitrarily. Of the 8 possible received
sequences, 4 are codewords, and each of the other 4 sequences has three codewords
within distance one of them. We can assign these received sequences to any of the
nearest codewords, or alternatively, for symmetry, we might toss a three sided
coin on receiving the sequence, and choose one of the nearest codewords with
probability (1/3, 1/3, 1/3). All these decoding strategies will give the same average
probability of error.
In the current example, there are 8 possible received sequences, and we will use
the following decoder mapping 000, 001 → 000; 011, 010 → 011; 110, 100 →
110; and 101, 111 → 101.
Under this symmetric mapping, the codeword and one received sequence at distance 1 from the codeword are mapped on to the codeword. The probability therefore that the codeword is decoded correctly is 0.9∗0.9∗0.9+0.9∗0.9∗0.1 = 0.81 and
the probability of error (for each codeword) is 0.19. Thus the average probability
of error is also 0.19.
(b) If we use all possible input sequences as codewords, then we have an error if any of
the bits is changed. The probability that all the three bits are received correctly
is 0.9 ∗ 0.9 ∗ 0.9 = 0.729 and therefore the probability of error is 0.271.
(c) There will be an error only if all three bits of the codeword are erased, and on
receiveing EEE, the decoder choses the wrong codeword. The probability of receiving EEE is 0.001 and conditioned on that, the probability of error is 0.5, so
the probability of error for this code over the BEC is 0.0005.
(d) For the code of part (a), the four codewords are 000, 011,110, and 101. We use
the following decoder mapping:
185
Channel Capacity
Received Sequences
000, 00E, 0E0, E00
011, 01E, 0E1, E11
110, 11E, 1E0, E10
101, 10E, 1E1, E01
0EE
EE0
..
.
codeword
000
011
110
101
000 or 011 with prob. 0.5
000 or 110 with prob. 0.5
EE1
011 or 101 with prob. 0.5
EEE
000 or 011 or 110 or 101 with prob. 0.25
Essentially all received sequences with only one erasure can be decoded correctly.
If there are two erasures, then there are two possible codewords that could have
caused the received sequence, and the conditional probability of error is 0.5. If
there are three erasures, any of the codewords could have caused it, and the
conditional probability of error is 0.75. Thus the probability of error given that
000 was sent is the probability of two erasures times 0.5 plus the probability of 3
erasures times 0.75, i.e, 3 ∗ 0.9 ∗ 0.1 ∗ 0.1 ∗ 0.5 + 0.1 ∗ 0.1 ∗ 0.1 ∗ 0.75 = 0.01425 .
This is also the average probability of error.
If all input sequences are used as codewords, then we will be confused if there
is any erasure in the received sequence. The conditional probability of error if
there is one erasure is 0.5, two erasures is 0.75 and three erasures is 0.875 (these
corrospond to the numbers of other codewords that could have caused the received
sequence). Thus the probability of error given any codeword is 3 ∗ 0.9 ∗ 0.9 ∗ 0.1 ∗
0.5 + 3 ∗ 0.9 ∗ 0.1 ∗ 0.1 ∗ 0.75 + 0.1 ∗ 0.1 ∗ 0.1 ∗ 0.875 = 0.142625 . This is also the
average probability of error.
18. Channel capacity: Calculate the capacity of the following channels with probability
transition matrices:
(a) X = Y = {0, 1, 2}
(b) X = Y = {0, 1, 2}
(c) X = Y = {0, 1, 2, 3}


(7.88)


(7.89)
1/3 1/3 1/3


p(y|x) =  1/3 1/3 1/3 
1/3 1/3 1/3
1/2 1/2 0


p(y|x) =  0 1/2 1/2 
1/2 0 1/2




p(y|x) = 
p
1−p
0
0
1−p
p
0
0
0
0
q
1−q
0
0
1−q
q





(7.90)
186
Channel Capacity
Solution: Channel Capacity:
(a) X = Y = {0, 1, 2}


1/3 1/3 1/3


p(y|x) =  1/3 1/3 1/3 
1/3 1/3 1/3
(7.91)
This is a symmetric channel and by the results of Section 8.2,
C = log |Y| − H(r) = log 3 − log 3 = 0.
(7.92)
In this case, the output is independent of the input.
(b) X = Y = {0, 1, 2}


1/2 1/2 0


p(y|x) =  0 1/2 1/2 
1/2 0 1/2
(7.93)
Again the channel is symmetric, and by the results of Section 8.2,
C = log |Y| − H(r) = log 3 − log = 0.58 bits
(c) X = Y = {0, 1, 2, 3}




p(y|x) = 
p
1−p
0
0
1−p
p
0
0
0
0
q
1−q
0
0
1−q
q





(7.94)
(7.95)
This channel consists of a sum of two BSC’s, and using the result of Problem 2 of
Homework 9, the capacity of the channel is
0
C = log 21−H(p) + 21−H(q)
1
(7.96)
19. Capacity of the carrier pigeon channel. Consider a commander of an army besieged a fort for whom the only means of communication to his allies is a set of carrier
pigeons. Assume that each carrier pigeon can carry one letter (8 bits), and assume that
pigeons are released once every 5 minutes, and that each pigeon takes exactly 3 minutes
to reach its destination.
(a) Assuming all the pigeons reach safely, what is the capacity of this link in bits/hour?
(b) Now assume that the enemies try to shoot down the pigeons, and that they manage
to hit a fraction α of them. Since the pigeons are sent at a constant rate, the
receiver knows when the pigeons are missing. What is the capacity of this link?
(c) Now assume that the enemy is more cunning, and every time they shoot down a
pigeon, they send out a dummy pigeon carrying a random letter (chosen uniformly
from all 8-bit letters). What is the capacity of this link in bits/hour?
187
Channel Capacity
Set up an appropriate model for the channel in each of the above cases, and indicate
how to go about finding the capacity.
Solution: Capacity of the carrier pigeon channel.
(a) The channel sends 8 bits every 5 minutes, or 96 bits/hour.
(b) This is the equivalent of an erasure channel with an input alphabet of 8 bit symbols,
i.e., 256 different symbols. For any symbols sent, a fraction α of them are received
as an erasure. We would expect that the capacity of this channel is (1 − α)8
bits/pigeon. We will justify it more formally by mimicking the derivation for the
binary erasure channel.
Consider a erasure channel with 256 symbol inputs and 257 symbol output - the
extra symbol is the erasure symbol, which occurs with probability α . Then
I(X; Y ) = H(Y ) − H(Y |X) = H(Y ) − H(α)
(7.97)
since the probability of erasure is independent of the input.
However, we cannot get H(Y ) to attain its maximum value, i.e., log 257 , since
the probability of the erasure channel is α independent of our input distribution.
However, if we let E be the erasure event, then
H(Y ) = H(Y, E) = H(E) + H(Y |E) = H(α) + α × 0 + (1 − α)H(Y |E = 0) (7.98)
and we can maximize H(Y ) by maximizing H(Y |E = 0) . However, H(Y |E = 0)
is just the entropy of the input distribution, and this is maximized by the uniform.
Thus the maximum value of H(Y ) is H(α) + (1 − α) log 256 , and the capacity of
this channel is (1 − α) log 256 bits/pigeon, or (1 − α)96 bits/hour, as we might
have expected from intuitive arguments.
(c) In this case, we have a symmetric channel with 256 inputs and 256 output. With
probability (1 − α) + α/256 , the output symbol is equal to the input, and with
probability α/256 , it is transformed to any of the other 255 symbols. This channel
is symmetric in the sense of Section 8.2, and therefore the capacity of the channel
is
C = log |Y| − H(r)
= log 256 − H(1 − α + α/256, α/256, α/256, . . . , α/256)
255
255
= 8 − H(1 −
α) −
αH(1/255, 1/255, . . . , 1/255)
256
256
255
255
= 8 − H(1 −
α) −
α log 255
256
256
(7.99)
(7.100)
(7.101)
(7.102)
We have to multiply this by 12 to get the capacity in bits/hour.
20. A channel with two independent looks at Y.
Let Y 1 and Y2 be conditionally
independent and conditionally identically distributed given X.
(a) Show I(X; Y1 , Y2 ) = 2I(X; Y1 ) − I(Y1 , Y2 ).
188
Channel Capacity
(b) Conclude that the capacity of the channel
X
%
% (Y1 , Y2 )
is less than twice the capacity of the channel
X
%
% Y1
Solution: A channel with two independent looks at Y
(a)
I(X; Y1 , Y2 ) = H(Y1 , Y2 ) − H(Y1 , Y2 |X)
(7.103)
= H(Y1 ) + H(Y2 ) − I(Y1 ; Y2 ) − H(Y1 |X) − H(Y2 |X)
(7.104)
= I(X; Y1 ) + I(X; Y2 ) − I(Y1 ; Y2 )
(7.106)
(since Y1 and Y2 are conditionally independent given X)
(7.105)
= 2I(X; Y1 ) − I(Y1 ; Y2 )
(since Y1 and Y2 are conditionally(7.107)
iden-.
tically distributed)
(b) The capacity of the single look channel X → Y 1 is
C1 = max I(X; Y1 ).
p(x)
(7.108)
The capacity of the channel X → (Y1 , Y2 ) is
C2 = max I(X; Y1 , Y2 )
p(x)
(7.109)
= max 2I(X; Y1 ) − I(Y1 ; Y2 )
(7.110)
≤ max 2I(X; Y1 )
(7.111)
= 2C1 .
(7.112)
p(x)
p(x)
Hence, two independent looks cannot be more than twice as good as one look.
21. Tall, fat people
Suppose that average height of people in a room is 5 feet. Suppose the average weight
is 100 lbs.
(a) Argue that no more than
1
3
of the population is 15 feet tall.
(b) Find an upper bound on the fraction of 300 lb, 10 footers in the room.
Solution:
Tall, fat people.
189
Channel Capacity
$
(a) The average height of the individuals in the population is 5 feet. So n1 hi = 5
where n is the population size and hi is the height of the i -th person. If more
than 31 of the population is at least 15 feet tall, then the average will be greater
than 13 15 = 5 feet since each person is at least 0 feet tall. Thus no more than 13
of the population is 15 feet tall.
(b) By the same reasoning as in part (a), at most 21 of the poplulation is 10 feet tall
and at most 13 of the population weighs 300 lbs. Therefore at most 13 are both
10 feet tall and weigh 300 lbs.
22. Can signal alternatives lower capacity? Show that adding a row to a channel
transition matrix does not decrease capacity.
Solution: Can signal alternatives lower capacity?
Adding a row to the channel transition matrix is equivalent to adding a symbol to the
input alphabet X . Suppose there were m symbols and we add an (m + 1) -st. We can
always choose not to use this extra symbol.
Specifically, let Cm and Cm+1 denote the capacity of the original channel and the new
channel, respectively. Then
Cm+1 =
≥
max
I(X; Y )
max
I(X; Y )
p(x1 ,...,xm+1 )
p(x1 ,...,xm ,0)
= Cm .
23. Binary multiplier channel
(a) Consider the channel Y = XZ where X and Z are independent binary random
variables that take on values 0 and 1. Z is Bernoulli( α ), i.e. P (Z = 1) = α .
Find the capacity of this channel and the maximizing distribution on X .
(b) Now suppose the receiver can observe Z as well as Y . What is the capacity?
Solution: Binary Multiplier Channel
(a) Let P (X = 1) = p . Then P (Y = 1) = P (X = 1)P (Z = 1) = αp .
I(X; Y ) = H(Y ) − H(Y |X)
= H(Y ) − P (X = 1)H(Z)
= H(αp) − pH(α)
We find that p∗ =
be log(2
H(α)
α
maximizes I(X; Y ) . The capacity is calculated to
1
H(α)
α(2 α
+ 1) −
H(α)
α
.
+1)
190
Channel Capacity
(b) Let P (X = 1) = p . Then
I(X; Y, Z) = I(X; Z) + I(X; Y |Z)
= H(Y |Z) − H(Y |X, Z)
= H(Y |Z)
= αH(p)
The expression is maximized for p = 1/2 , resulting in C = α . Intuitively, we can
only get X through when Z is 1, which happens α of the time.
24. Noise alphabets
Consider the channel
Z
&
#$
% Σ
!"
X
% Y
X = {0, 1, 2, 3} , where Y = X + Z , and Z is uniformly distributed over three distinct
integer values Z = {z1 , z2 , z3 }.
(a) What is the maximum capacity over all choices of the Z alphabet? Give distinct
integer values z1 , z2 , z3 and a distribution on X achieving this.
(b) What is the minimum capacity over all choices for the Z alphabet? Give distinct
integer values z1 , z2 , z3 and a distribution on X achieving this.
Solution: Noise alphabets
(a) Maximum capacity is C = 2 bits. Z = {10, 20, 30} and p(X) = ( 41 , 14 , 14 , 14 ) .
(b) Minimum capacity is C = 1 bit. Z = {0, 1, 2} and p(X) = ( 21 , 0, 0, 12 ) .
25. Bottleneck channel
Suppose a signal X ∈ X = {1, 2, . . . , m} goes through an intervening transition X −→
V −→ Y :
X
p(v|x)
V
p(y|v)
Y
191
Channel Capacity
where x = {1, 2, . . . , m} , y = {1, 2, . . . , m} , and v = {1, 2, . . . , k} . Here p(v|x) and
$
p(y|v) are arbitrary and the channel has transition probability p(y|x) = v p(v|x)p(y|v) .
Show C ≤ log k .
Solution: Bottleneck channel
The capacity of the cascade of channels is C = max p(x) I(X; Y ) . By the data processing
inequality, I(X; Y ) ≤ I(V ; Y ) = H(V ) − H(V |Y ) ≤ H(V ) ≤ log k .
26. Noisy typewriter. Consider the channel with x, y ∈ {0, 1, 2, 3} and transition probabilities p(y|x) given by the following matrix:

1
2


 0

 0
1
2

1
2
0
0
1
2
1
2
1
2
0 
0
0
1
2
1
2
0
(a) Find the capacity of this channel.




(b) Define the random variable z = g(y) where
g(y) =
)
A if y ∈ {0, 1}
.
B if y ∈ {2, 3}
For the following two PMFs for x , compute I(X; Z)
i.
p(x) =
)
p(x) =
)
1
2
if x ∈ {1, 3}
if x ∈ {0, 2}
0
if x ∈ {1, 3}
if x ∈ {0, 2}
0
ii.
1
2
(c) Find the capacity of the channel between x and z , specifically where x ∈ {0, 1, 2, 3} ,
z ∈ {A, B} , and the transition probabilities P (z|x) are given by
p(Z = z|X = x) =
!
g(y0 )=z
P (Y = y0 |X = x)
(d) For the X distribution of part i. of b , does X → Z → Y form a Markov chain?
Solution: Noisy typewriter
(a) This is a noisy typewriter channel with 4 inputs, and is also a symmetric channel.
Capacity of the channel by Theorem 7.2.1 is log 4 − 1 = 1 bit per transmission.
192
Channel Capacity
(b)
i. The resulting conditional distribution of Z given X is

1

 1
 2

 0
1
2
If
p(x) =
)
1
2
0
0





1 
1
2
1
2
if x ∈ {1, 3}
if x ∈ {0, 2}
then it is easy to calculate H(Z|X) = 0 , and I(X; Z) = 1 . If
p(x) =
)
0
1
2
if x ∈ {1, 3}
if x ∈ {0, 2}
then H(Z|X) = 1 and I(X; Y ) = 0 .
ii. Since I(X; Z) ≤ H(Z) ≤ 1 , the capacity of the channel is 1, achieved by the
input distribution
)
1
2 if x ∈ {1, 3}
p(x) =
0 if x ∈ {0, 2}
(c) For the input distribution that achieves capacity, X ↔ Z is a one-to-one function, and hence p(x, z) = 1 or 0 . We can therefore see the that p(x, y, z) =
p(z, x)p(y|x, z) = p(z, x)p(y|z) , and hence X → Z → Y forms a Markov chain.
27. Erasure channel
Let {X , p(y|x), Y} be a discrete memoryless channel with capacity C . Suppose this
channel is immediately cascaded with an erasure channel {Y, p(s|y), S} that erases α
of its symbols.
X
p(y|x)
Y
(
(
S
$ (
$$(
(
'''$$(
'''
$$
(
'
'
$
( e
Specifically, S = {y1 , y2 , . . . , ym , e}, and
Pr{S = y|X = x} = ᾱp(y|x), y ∈ Y,
Pr{S = e|X = x} = α.
Determine the capacity of this channel.
Solution: Erasure channel
193
Channel Capacity
The capacity of the channel is
C = max I(X; S)
p(x)
(7.113)
Define a new random variable Z , a function of S , where Z = 1 if S = e and Z =
0 otherwise. Note that p(Z = 1) = α independent of X . Expanding the mutual
information,
I(X; S) = H(S) − H(S|X)
= H(S, Z) − H(S, Z|X)
(7.114)
(7.115)
+ H(Z) + H(S|Z) − H(Z|X) − H(S|X, Z)
(7.116)
= I(X; Z) + I(S; X|Z)
(7.117)
= 0 + αI(X; S|Z = 1) + (1 − α)I(X; S|Z = 0)
(7.118)
When Z = 1 , S = e and H(S|Z = 1) = H(S|X, Z = 1) = 0 . When Z = 0 , S = Y ,
and I(X; S|Z = 0) = I(X; Y ) . Thus
I(X; S) = (1 − α)I(X; Y )
(7.119)
and therefore the capacity of the cascade of a channel with an erasure channel is (1− α)
times the capacity of the original channel.
28. Choice of channels.
Find the capacity C of the union of 2 channels (X 1 , p1 (y1 |x1 ), Y1 ) and (X2 , p2 (y2 |x2 ), Y2 )
where, at each time, one can send a symbol over channel 1 or over channel 2 but not
both. Assume the output alphabets are distinct and do not intersect.
(a) Show 2C = 2C1 + 2C2 . Thus 2C is the effective alphabet size of a channel with
capacity C .
(b) Compare with problem 10 of Chapter 2 where 2 H = 2H1 + 2H2 , and interpret (a)
in terms of the effective number of noise-free symbols.
(c) Use the above result to calculate the capacity of the following channel
194
Channel Capacity
1−p
' 0
)
(
(
*
(
*
(
*
(
*
(p
* (
(
*
( * p
(
*
(
*
(
*
(
*
(
*
(
+
'
* 1
1
0 *
*
1−p
2
1
' 2
Solution: Choice of Channels
(a) This is solved by using the very same trick that was used to solve problem 2.10.
Consider the following communication scheme:
X=
)
X1 Probability α
X2 Probability (1 − α)
Let
θ(X) =
)
1 X = X1
2 X = X2
Since the output alphabets Y1 and Y2 are disjoint, θ is a function of Y as well, i.e.
X → Y → θ.
I(X; Y, θ) = I(X; θ) + I(X; Y |θ)
= I(X; Y ) + I(X; θ|Y )
Since X → Y → θ , I(X; θ|Y ) = 0 . Therefore,
I(X; Y ) = I(X; θ) + I(X; Y |θ)
= H(θ) − H(θ|X) + αI(X1 ; Y1 ) + (1 − α)I(X2 ; Y2 )
= H(α) + αI(X1 ; Y1 ) + (1 − α)I(X2 ; Y2 )
Thus, it follows that
C = sup {H(α) + αC1 + (1 − α)C2 } .
α
195
Channel Capacity
Maximizing over α one gets the desired result. The maximum occurs for H ' (α) + C1 −
C2 = 0 , or α = 2C1 /(2C1 + 2C2 ) .
(b) If one interprets M = 2C as the effective number of noise free symbols, then the
above result follows in a rather intuitive manner: we have M 1 = 2C1 noise free symbols
from channel 1, and M2 = 2C2 noise free symbols from channel 2. Since at each
step we get to chose which channel to use, we essentially have M 1 + M2 = 2C1 + 2C2
noise free symbols for the new channel. Therefore, the capacity of this channel is
C = log2 (2C1 + 2C2 ) .
This argument is very similiar to the effective alphabet argument given in Problem 10,
Chapter 2 of the text.
29. Binary multiplier channel.
(a) Consider the discrete memoryless channel Y = XZ where X and Z are independent binary random variables that take on values 0 and 1. Let P (Z = 1) = α .
Find the capacity of this channel and the maximizing distribution on X .
(b) Now suppose the receiver can observe Z as well as Y . What is the capacity?
Solution: Binary Multiplier Channel (Repeat of problem 7.23)
(a) Let P (X = 1) = p . Then P (Y = 1) = P (X = 1)P (Z = 1) = αp .
I(X; Y ) = H(Y ) − H(Y |X)
= H(Y ) − P (X = 1)H(Z)
= H(αp) − pH(α)
We find that p∗ =
be log(2
H(α)
α
+ 1)
1
H(α)
maximizes I(X; Y ) . The capacity is calculated to
α(2 α +1)
− H(α)
α .
(b) Let P (X = 1) = p . Then
I(X; Y, Z) = I(X; Z) + I(X; Y |Z)
= H(Y |Z) − H(Y |X, Z)
= H(Y |Z)
= αH(p)
The expression is maximized for p = 1/2 , resulting in C = α . Intuitively, we can
only get X through when Z is 1, which happens α of the time.
30. Noise alphabets.
Consider the channel
196
Channel Capacity
Z
&
#$
% Σ
!"
X
% Y
X = {0, 1, 2, 3} , where Y = X + Z , and Z is uniformly distributed over three distinct
integer values Z = {z1 , z2 , z3 }.
(a) What is the maximum capacity over all choices of the Z alphabet? Give distinct
integer values z1 , z2 , z3 and a distribution on X achieving this.
(b) What is the minimum capacity over all choices for the Z alphabet? Give distinct
integer values z1 , z2 , z3 and a distribution on X achieving this.
Solution: Noise alphabets (Repeat of problem 7.24)
(a) Maximum capacity is C = 2 bits. Z = {10, 20, 30} and p(X) = ( 41 , 14 , 14 , 14 ) .
(b) Minimum capacity is C = 1 bit. Z = {0, 1, 2} and p(X) = ( 21 , 0, 0, 12 ) .
31. Source and channel.
We wish to encode a Bernoulli( α ) process V 1 , V2 , . . . for transmission over a binary
symmetric channel with crossover probability p .
+
V
n
%
X n (V n )
1−p %
*
)
+
)
p
%
+)
)+ p
)
+
%
)
,
+
%Y n
%̂ n
V
1−p
Find conditions on α and p so that the probability of error P ( V̂ n %= V n ) can be made
to go to zero as n −→ ∞ .
Solution: Source And Channel
Suppose we want to send a binary i.i.d. Bernoulli( α ) source over a binary symmetric
channel with error probability p .
By the source-channel separation theorem, in order to achieve an error rate that vanishes
asymptotically, P (V̂ n %= V n ) → 0 , we need the entropy of the source to be smaller than
the capacity of the channel. In this case this translates to
H(α) + H(p) < 1,
or, equivalently,
1
αα (1 − α)1−α pp (1 − p)1−p < .
2
197
Channel Capacity
32. Random “20” questions
Let X be uniformly distributed over {1, 2, . . . , m} . Assume m = 2 n . We ask random
questions: Is X ∈ S1 ? Is X ∈ S2 ?...until only one integer remains. All 2m subsets S
of {1, 2, . . . , m} are equally likely.
(a) How many deterministic questions are needed to determine X ?
(b) Without loss of generality, suppose that X = 1 is the random object. What is
the probability that object 2 yields the same answers for k questions as object 1?
(c) What is the expected number of objects in {2, 3, . . . , m} that have the same
answers to the questions as does the correct object 1?
√
(d) Suppose we ask n + n random questions. What is the expected number of
wrong objects agreeing with the answers?
(e) Use Markov’s inequality Pr{X ≥ tµ} ≤ 1t , to show that the probability of error
(one or more wrong object remaining) goes to zero as n −→ ∞ .
Solution: Random “20” questions. (Repeat of Problem 5.45)
(a) Obviously, Huffman codewords for X are all of length n . Hence, with n deterministic questions, we can identify an object out of 2 n candidates.
(b) Observe that the total number of subsets which include both object 1 and object
2 or neither of them is 2m−1 . Hence, the probability that object 2 yields the same
answers for k questions as object 1 is (2m−1 /2m )k = 2−k .
More information theoretically, we can view this problem as a channel coding
problem through a noiseless channel. Since all subsets are equally likely, the
probability the object 1 is in a specific random subset is 1/2 . Hence, the question
whether object 1 belongs to the k th subset or not corresponds to the k th bit of
the random codeword for object 1, where codewords X k are Bern( 1/2 ) random
k -sequences.
Object Codeword
1
0110 . . . 1
2
0010 . . . 0
..
.
Now we observe a noiseless output Y k of X k and figure out which object was
sent. From the same line of reasoning as in the achievability proof of the channel
coding theorem, i.e. joint typicality, it is obvious the probability that object 2 has
the same codeword as object 1 is 2−k .
(c) Let
1j =
)
1,
0,
object j yields the same answers for k questions as object 1
,
otherwise
for j = 2, . . . , m.
198
Channel Capacity
Then,
E(# of objects in {2, 3, . . . , m} with the same answers) = E(
=
=
m
!
1j )
j=2
m
!
j=2
m
!
E(1j )
2−k
j=2
= (m − 1)2−k
= (2n − 1)2−k .
(d) Plugging k = n +
√
n into (c) we have the expected number of (2n − 1)2−n−
√
n.
(e) Let N by the number of wrong objects remaining. Then, by Markov’s inequality
P (N ≥ 1)
≤
=
≤
EN
(2n − 1)2−n−
2−
√
√
n
n
→ 0,
where the first equality follows from part (d).
33. BSC with feedback. Suppose that feedback is used on a binary symmetric channel
with parameter p . Each time a Y is received, it becomes the next transmission. Thus
X1 is Bern(1/2), X2 = Y1 , X3 = Y2 , . . . , Xn = Yn−1 .
(a) Find limn→∞ n1 I(X n ; Y n ) .
(b) Show that for some values of p , this can be higher than capacity.
(c) Using this feedback transmission scheme,
X n (W, Y n ) = (X1 (W ), Y1 , Y2 , . . . , Ym−1 ) , what is the asymptotic communication
rate achieved; that is, what is limn→∞ n1 I(W ; Y n ) ?
Solution: BSC with feedback solution.
(a)
H(Y n |X n ) =
H(Y n ) =
!
i
!
i
So,
I(X n ; Y n ) = H(Y n ) − H(Y n |X n ).
H(Yi |Y i−1 , X n ) = H(Y1 |X1 ) +
H(Yi |Y i−1 ) = H(Y1 ) +
!
i
!
i
H(Yi |Y n ) = H(p) + 0.
H(Yi |Xi ) = 1 + (n − 1)H(p)
199
Channel Capacity
and,
I(X n ; Y n ) = 1 + (n − 1)H(p) − H(p) = 1 + (n − 2)H(p)
lim
n→∞
1
1 + (n − 2)H(p)
I(X n ; Y n ) = lim
= H(p)
n→∞
n
n
(b) For the BSC C = 1 − H(p) . For p = 0.5 , C = 0 , while lim n→∞ n1 I(X n ; Y n ) =
H(0.5) = 1 .
(c) Using this scheme
1
n
n I(W ; Y )
→ 0.
34. Capacity. Find the capacity of
(a) Two parallel BSC’s
'
1 1
$ 1
1 p
$0
1$
$
1
$p 11
$
2 $
X
2
'
1 2
'
3 1
0
$ 3
11p $$
$
1
$p 1
12
'
1
$
4 $
(b) BSC and single symbol.
Y
4
'
1 1
$ 1
1 p
$0
1$
$
1
X
$p 11
$
2 $
2
'
1 2
3
' 3
(c) BSC and ternary channel.
1
1 1 2
1
2
X
3
4
5
' 1
4
3
11 3
1
31
2
'
1
11
3
1
2
1
3
1
321 11
12
3 2
'
1
1
2
'
1112
$
$0
1$
$
1
1
2
$1$ 11
$ 2
2
'
1
1
2
1
2
Y
2
3
4
5
Y
200
Channel Capacity
(d) Ternary channel.
p(y|x) =
"
2/3 1/3 0
0 1/3 2/3
#
.
(7.120)
Solution: Capacity
Recall the parallel channels problem (problem 7.28 showed that for two channels in
parallel with capacities C1 and C2 , the capacity C of the new channel satisfies
2C = 2C1 + 2C2
(a) Here C1 = C2 = 1 − H(p) , and hence 2C = 2C1 +1 , or,
C = 2 − H(p).
(b) Here C1 = 1 − H(p) but C2 = 0 and so 2C = 2C1 + 1 , or,
0
1
C = log 21−H(p) + 1 .
(c) The p in the figure is a typo. All the transition probabilities are 1/2. The
capacity of the ternary channel (which is symmetric) is log 3 − H( 12 ) = log 3 − 1 .
The capacity of the BSC is 0, and together the parallel channels have a capacity
2C = 3/2 + 1 , or C = log 52 .
(d) The channel is weakly symmetric and hence the capacity is log3 − H( 13 , 23 ) =
log 3 − (log 3 − 23 ) = 23 .
35. Capacity.
Suppose channel P has capacity C, where P is an m × n channel matrix.
(a) What is the capacity of
P̃ =
"
P 0
0 1
#
P̂ =
"
P 0
0 Ik
#
(b) What about the capacity of
where Ik if the k × k identity matrix.
Solution: Solution: Capacity.
(a) By adding the extra column and row to the transition matrix, we have two channels
in parallel. You can transmit on either channel. From problem 7.28, it follows that
C̃ = log(20 + 2C )
C̃ = log(1 + 2C )
201
Channel Capacity
(b) This part is also an application of the conclusion problem 7.28. Here the capacity
of the added channel is log k.
Ĉ = log(2log k + 2C )
Ĉ = log(k + 2C )
36. Channel with memory.
Consider the discrete memoryless channel Y i = Zi Xi with input alphabet Xi ∈
{−1, 1}.
(a) What is the capacity of this channel when {Z i } is i.i.d. with
Zi =
)
1,
−1,
p = 0.5
?
p = 0.5
(7.121)
Now consider the channel with memory. Before transmission begins, Z is randomly chosen and fixed for all time. Thus Y i = ZXi .
(b) What is the capacity if
Z=
)
1,
−1,
p = 0.5
?
p = 0.5
(7.122)
Solution: Channel with memory solution.
(a) This is a BSC with cross over probability 0.5, so C = 1 − H(p) = 0 .
(b) Consider the coding scheme of sending X n = (1, b1 , b2 , . . . , bn−1 ) where the first
symbol is always a zero and the rest of the n − 1 symbols are ±1 bits. For the
first symbol Y1 = Z , so the receiver knows Z exactly. After that the receiver
can recover the remaining bits error free. So in n symbol transmissions n bits
are sent, for a rate R = n−1
n → 1 . The capacity C is bounded by log |X | = 1 ,
therefore the capacity is 1 bit per symbol.
37. Joint typicality.
Let (Xi , Yi , Zi ) be i.i.d. according to p(x, y, z). We will say that (x n , y n , z n ) is jointly
(n)
typical (written (xn , y n , z n ) ∈ A! ) if
• p(xn ) ∈ 2−n(H(X)±!)
• p(y n ) ∈ 2−n(H(Y )±!)
• p(z n ) ∈ 2−n(H(Z)±!)
• p(xn , y n ) ∈ 2−n(H(X,Y )±!)
• p(xn , z n ) ∈ 2−n(H(X,Z)±!)
• p(y n , z n ) ∈ 2−n(H(Y,Z)±!)
• p(xn , y n , z n ) ∈ 2−n(H(X,Y,Z)±!)
202
Channel Capacity
Now suppose (X̃ n , Ỹ n , Z̃ n ) is drawn according to p(xn )p(y n )p(z n ). Thus X̃ n , Ỹ n , Z̃ n
have the same marginals as p(xn , y n , z n ) but are independent. Find (bounds on)
(n)
P r{(X̃ n , Ỹ n , Z̃ n ) ∈ A! } in terms of the entropies H(X), H(Y ), H(Z), H(X, Y ), H(X, Z), H(Y, Z)
and H(X, Y, Z).
Solution: Joint typicality.
P r{(X̃ n , Ỹ n , Z̃ n ) ∈ A(n)
! } =
≤
!
p(xn )p(y n )p(z n )
(n)
(xn ,y n ,z n )∈A!
!
2−n(H(X)+H(Y )+H(Z)−3!)
(n)
(xn ,y n ,z n )∈A!
−n(H(X)+H(Y )+H(Z)−3!)
≤ |A(n)
! |2
≤ 2n(H(X,Y,Z)+!) 2−n(H(X)+H(Y )+H(Z)−3!)
≤ 2n(H(X,Y,Z)−H(X)−H(Y )−H(Z)+4!)
P r{(X̃ n , Ỹ n , Z̃ n ) ∈ A(n)
! } =
≥
!
p(xn )p(y n )p(z n )
(n)
(xn ,y n ,z n )∈A!
!
2−n(H(X)+H(Y )+H(Z)+3!)
(n)
(xn ,y n ,z n )∈A!
−n(H(X)+H(Y )+H(Z)−3!)
≥ |A(n)
! |2
≥ (1 − %)2n(H(X,Y,Z)−!) 2−n(H(X)+H(Y )+H(Z)−3!)
≥ (1 − %)2n(H(X,Y,Z)−H(X)−H(Y )−H(Z)−4!)
Note that the upper bound is true for all n, but the lower bound only hold for n large.
Chapter 8
Differential Entropy
N
1. Differential entropy. Evaluate the differential entropy h(X) = − f ln f for the
following:
(a) The exponential density, f (x) = λe−λx , x ≥ 0.
(b) The Laplace density, f (x) = 12 λe−λ|x| .
(c) The sum of X1 and X2 , where X1 and X2 are independent normal random
variables with means µi and variances σi2 , i = 1, 2.
Solution: Differential Entropy.
(a) Exponential distribution.
h(f ) = −
2
0
∞
λe−λx [ln λ − λx]dx
= − ln λ + 1 nats.
e
= log bits.
λ
(8.1)
(8.2)
(8.3)
(b) Laplace density.
h(f ) = −
2
∞
−∞
1 −λ|x| 1
λe
[ln + ln λ − λ|x|] dx
2
2
1
− ln λ + 1
2
2e
= ln
nats.
λ
2e
= log
bits.
λ
203
= − ln
(8.4)
(8.5)
(8.6)
(8.7)
204
Differential Entropy
(c) Sum of two normal distributions.
The sum of two normal random variables is also normal, so applying the result
derived the class for the normal distribution, since X 1 + X2 ∼ N (µ1 + µ2 , σ12 + σ22 ) ,
h(f ) =
1
log 2πe(σ12 + σ22 ) bits.
2
(8.8)
2. Concavity of determinants. Let K1 and K2 be two symmetric nonnegative definite
n × n matrices. Prove the result of Ky Fan[5]:
| λK1 + λK2 |≥| K1 |λ | K2 |λ ,
where | K | denotes the determinant of K.
for 0 ≤ λ ≤ 1, λ = 1 − λ,
Hint: Let Z = Xθ , where X1 ∼ N (0, K1 ), X2 ∼ N (0, K2 ) and θ = Bernoulli (λ).
Then use h(Z | θ) ≤ h(Z).
Solution: Concavity of Determinants. Let X 1 and X2 be normally distributed n vectors, Xi ∼ φKi (x) , i = 1, 2 . Let the random variable θ have distribution Pr{θ =
1} = λ , Pr{θ = 2} = 1 − λ, 0 ≤ λ ≤ 1 . Let θ , X1 , and X2 be independent and
let Z = Xθ . Then Z has covariance KZ = λK1 + (1 − λ)K2 . However, Z will not be
multivariate normal. However, since a normal distribution maximizes the entropy for
a given variance, we have
1
1
1
ln(2πe)n |λK1 +(1−λ)K2 | ≥ h(Z) ≥ h(Z|θ) = λ ln(2πe)n |K1 |+(1−λ) ln(2πe)n |K2 | .
2
2
2
(8.9)
Thus
|λK1 + (1 − λ)K2 | ≥ |K1 |λ |K2 |1−λ ,
(8.10)
as desired.
3. Uniformly distributed noise. Let the input random variable X to a channel be
uniformly distributed over the interval −1/2 ≤ x ≤ +1/2 . Let the output of the
channel be Y = X + Z , where the noise random variable is uniformly distributed over
the interval −a/2 ≤ z ≤ +a/2 .
(a) Find I(X; Y ) as a function of a .
(b) For a = 1 find the capacity of the channel when the input X is peak-limited; that
is, the range of X is limited to −1/2 ≤ x ≤ +1/2 . What probability distribution
on X maximizes the mutual information I(X; Y ) ?
(c) (Optional) Find the capacity of the channel for all values of a , again assuming
that the range of X is limited to −1/2 ≤ x ≤ +1/2 .
Solution: Uniformly distributed noise. The probability density function for Y = X +Z
is the convolution of the densities of X and Z . Since both X and Z have rectangular
densities, the density of Y is a trapezoid. For a < 1 the density for Y is
pY (y) =


 (1/2a)(y + (1 + a)/2)
−(1 + a)/2 ≤ y ≤ −(1 − a)/2
1
−(1 − a)/2 ≤ y ≤ +(1 − a)/2

 (1/2a)(−y − (1 + a)/2) +(1 − a)/2 ≤ y ≤ +(1 + a)/2
205
Differential Entropy
and for a > 1 the density for Y is


 y + (a + 1)/2
−(a + 1)/2 ≤ y ≤ −(a − 1)/2
pY (y) =
1/a
−(a − 1)/2 ≤ y ≤ +(a − 1)/2

 −y − (a + 1)/2 +(a − 1)/2 ≤ y ≤ +(a + 1)/2
(When a = 1 , the density of Y is triangular over the interval [−1, +1] .)
(a) We use the identity I(X; Y ) = h(Y ) − h(Y |X) . It is easy to compute h(Y )
directly, but it is even easier to use the grouping property of entropy. First suppose
that a < 1 . With probability 1 − a , the output Y is conditionally uniformly
distributed in the interval [−(1 − a)/2, +(1 − a)/2] ; whereas with probability a ,
Y has a split triangular density where the base of the triangle has width a .
1
h(Y ) = H(a) + (1 − a) ln(1 − a) + a( + ln a)
2
a
a
= −a ln a − (1 − a) ln(1 − a) + (1 − a) ln(1 − a) + + a ln a = nats.
2
2
If a > 1 the trapezoidal density of Y can be scaled by a factor a , which yields
h(Y ) = ln a+1/2a . Given any value of x , the output Y is conditionally uniformly
distributed over an interval of length a , so the conditional differential entropy in
nats is h(Y |X) = h(Z) = ln a for all a > 0 . Therefore the mutual information in
nats is
)
a/2 − ln a if a ≤ 1
I(X; Y ) =
1/2a
if a ≥ 0 .
As expected, I(X; Y ) → ∞ as a → 0 and I(X; Y ) → 0 as a → ∞ .
(b) As usual with additive noise, we can express I(X; Y ) in terms of h(Y ) and h(Z) :
I(X; Y ) = h(Y ) − h(Y |X) = h(Y ) − h(Z) .
Since both X and Z are limited to the interval [−1/2, +1/2] , their sum Y is
limited to the interval [−1, +1] . The differential entropy of Y is at most that of
a random variable uniformly distributed on that interval; that is, h(Y) ≤ 1 bit. This maximum entropy can be achieved if the input X takes on its extreme values x = ±1/2, each with probability 1/2. In this case, I(X;Y) = h(Y) − h(Z) = 1 − 0 = 1 bit. Decoding for this channel is quite simple:

    X̂ = −1/2 if y < 0
       = +1/2 if y ≥ 0.
This coding scheme transmits one bit per channel use with zero error probability.
(Only a received value y = 0 is ambiguous, and this occurs with probability 0.)
(c) When a is of the form 1/m for m = 2, 3, . . . , we can achieve the maximum possible value I(X;Y) = log m by letting X be uniformly distributed over the m discrete points −1/2, −1/2 + 1/(m−1), . . . , +1/2 − 1/(m−1), +1/2. Since the spacing 1/(m−1) is at least the noise width 1/m, the corresponding output intervals do not overlap, so Y determines X exactly and I(X;Y) = H(X) = log m. Other values of a are left as an exercise.
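The sketch below (not part of the original solution) checks the formula I(X;Y) = a/2 − ln a for a < 1 by integrating the trapezoidal density of Y numerically; the value a = 0.4 is an arbitrary test choice.

    import numpy as np

    a = 0.4
    y = np.linspace(-(1 + a) / 2, (1 + a) / 2, 2_000_001)
    # Trapezoidal density of Y = X + Z for a < 1 (see the piecewise formula above).
    p = np.where(np.abs(y) <= (1 - a) / 2, 1.0,
                 np.clip(((1 + a) / 2 - np.abs(y)) / a, 0.0, None))
    hY = -np.trapz(np.where(p > 0, p * np.log(p), 0.0), y)   # h(Y) in nats
    I = hY - np.log(a)                                       # h(Y) - h(Z)
    print(I, a / 2 - np.log(a))                              # the two values agree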
4. Quantized random variables. Roughly how many bits are required on the average
to describe to 3 digit accuracy the decay time (in years) of a radium atom if the half-life
of radium is 80 years? Note that half-life is the median of the distribution.
Solution: Quantized random variables. The differential entropy of an exponentially distributed random variable with mean 1/λ is log(e/λ) bits. If the median is 80 years, then

    ∫₀^80 λ e^{−λx} dx = 1/2,   (8.11)

or

    λ = ln 2 / 80 = 0.00866,   (8.12)

and the differential entropy is log(e/λ). To represent the random variable to 3-digit (≈ 10 bit) accuracy would require log(e/λ) + 10 ≈ 18.3 bits.
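A small computation (not in the original text) reproduces these numbers; the 3-digit accuracy is modeled as 10 extra bits, as in the solution.

    import math

    half_life = 80.0                  # years (median of the exponential decay time)
    lam = math.log(2) / half_life     # rate parameter, about 0.00866 per year
    h_bits = math.log2(math.e / lam)  # differential entropy of Exp(lam) in bits
    print(f"lambda = {lam:.5f}, h(X) = {h_bits:.2f} bits")
    print(f"bits for ~3-digit accuracy: {h_bits + 10:.1f}")   # about 18.3 bits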
5. Scaling. Let h(X) = −∫ f(x) log f(x) dx. Show h(AX) = log |det(A)| + h(X).
Solution: Scaling. Let Y = AX. Then the density of Y is

    g(y) = (1/|A|) f(A⁻¹y),   (8.13)

where |A| denotes |det(A)|. Hence

    h(AX) = −∫ g(y) log g(y) dy   (8.14)
          = −∫ (1/|A|) f(A⁻¹y) [log f(A⁻¹y) − log |A|] dy   (8.15)
          = −∫ (1/|A|) f(x) [log f(x) − log |A|] |A| dx   (8.16)
          = h(X) + log |A|.   (8.17)
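As a sketch of a numerical check (not in the original solution): for a Gaussian vector the differential entropies are known in closed form, so the identity h(AX) = log |det A| + h(X) can be verified exactly; the covariance and matrix A below are arbitrary test values.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    S = rng.standard_normal((n, n)); Sigma = S @ S.T + np.eye(n)   # covariance of X
    A = rng.standard_normal((n, n))                                 # invertible w.p. 1

    h = lambda K: 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(K))  # nats
    lhs = h(A @ Sigma @ A.T)                        # h(AX) for X ~ N(0, Sigma)
    rhs = h(Sigma) + np.log(abs(np.linalg.det(A)))  # h(X) + log|det A|
    print(lhs, rhs)                                 # agree up to floating point error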
6. Variational inequality: Verify, for positive random variables X, that

    log E_P(X) = sup_Q [E_Q(log X) − D(Q||P)],   (8.18)

where E_P(X) = Σ_x x P(x) and D(Q||P) = Σ_x Q(x) log(Q(x)/P(x)), and the supremum is over all Q(x) ≥ 0, Σ_x Q(x) = 1. It is enough to extremize J(Q) = E_Q ln X − D(Q||P) + λ(Σ_x Q(x) − 1).
Solution: Variational inequality
Using the calculus of variations to extremize

    J(Q) = Σ_x q(x) ln x − Σ_x q(x) ln(q(x)/p(x)) + λ(Σ_x q(x) − 1),   (8.19)

we differentiate with respect to q(x) to obtain

    ∂J/∂q(x) = ln x − ln(q(x)/p(x)) − 1 + λ = 0,   (8.20)

or

    q(x) = c′ x p(x),   (8.21)

where c′ has to be chosen to satisfy the constraint Σ_x q(x) = 1. Thus

    c′ = 1 / Σ_x x p(x).   (8.22)

Substituting this in the expression for J, we obtain

    J* = Σ_x c′ x p(x) ln x − Σ_x c′ x p(x) ln(c′ x p(x)/p(x))   (8.23)
       = −ln c′ + Σ_x c′ x p(x) ln x − Σ_x c′ x p(x) ln x   (8.24)
       = ln Σ_x x p(x).   (8.25)

To verify this is indeed a maximum value, we use the standard technique of writing it as a relative entropy. For any q,

    ln Σ_x x p(x) − Σ_x q(x) ln x + Σ_x q(x) ln(q(x)/p(x)) = Σ_x q(x) ln[ q(x) / ( x p(x)/Σ_y y p(y) ) ]   (8.26)
       = D(q||p′)   (8.27)
       ≥ 0,   (8.28)

where p′(x) = x p(x)/Σ_y y p(y). Thus

    ln Σ_x x p(x) = sup_Q ( E_Q ln X − D(Q||P) ).   (8.29)

This is a special case of a general relationship that is a key in the theory of large deviations.
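A short numerical sketch (not part of the original solution; the pmf p and support values x are arbitrary test choices) checks that the maximizing Q of (8.21)-(8.22) attains log E_P[X] while random Q's stay below it:

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.array([0.5, 1.0, 2.0, 5.0])            # positive values of X
    p = np.array([0.1, 0.4, 0.3, 0.2])            # distribution P

    target = np.log(np.sum(x * p))                # log E_P[X]
    q_star = x * p / np.sum(x * p)                # maximizer from (8.21)-(8.22)
    J = lambda q: np.sum(q * np.log(x)) - np.sum(q * np.log(q / p))
    print(target, J(q_star))                      # equal

    for _ in range(1000):                         # random Q's never exceed the target
        q = rng.dirichlet(np.ones(len(x)))
        assert J(q) <= target + 1e-9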
7. Differential entropy bound on discrete entropy: Let X be a discrete random variable on the set X = {a1, a2, . . .} with Pr(X = ai) = pi. Show that

    H(p1, p2, . . .) ≤ (1/2) log[2πe( Σ_{i=1}^∞ pi i² − (Σ_{i=1}^∞ i pi)² + 1/12 )].   (8.30)

Moreover, for every permutation σ,

    H(p1, p2, . . .) ≤ (1/2) log[2πe( Σ_{i=1}^∞ p_{σ(i)} i² − (Σ_{i=1}^∞ i p_{σ(i)})² + 1/12 )].   (8.31)

Hint: Construct a random variable X′ such that Pr(X′ = i) = pi. Let U be a uniform(0,1] random variable and let Y = X′ + U, where X′ and U are independent. Use the maximum entropy bound on Y to obtain the bounds in the problem. This bound is due to Massey (unpublished) and Willems (unpublished).
Solution: Differential entropy bound on discrete entropy
Of all distributions with the same variance, the normal maximizes the entropy. So the
entropy of the normal gives a good bound on the differential entropy in terms of the
variance of the random variable.
Let X be a discrete random variable on the set X = {a1, a2, . . .} with

    Pr(X = ai) = pi.   (8.32)

We wish to show that

    H(p1, p2, . . .) ≤ (1/2) log[2πe( Σ_{i=1}^∞ pi i² − (Σ_{i=1}^∞ i pi)² + 1/12 )],   (8.33)

and moreover, for every permutation σ,

    H(p1, p2, . . .) ≤ (1/2) log[2πe( Σ_{i=1}^∞ p_{σ(i)} i² − (Σ_{i=1}^∞ i p_{σ(i)})² + 1/12 )].   (8.34)

Define two new random variables. The first, X0, is an integer-valued discrete random variable with the distribution

    Pr(X0 = i) = pi.   (8.35)

Let U be a random variable uniformly distributed on the range [0, 1], independent of X0. Define the continuous random variable X̃ by

    X̃ = X0 + U.   (8.36)
The distribution of the r.v. X̃ is shown in Figure 8.1.
It is clear that H(X) = H(X0), since discrete entropy depends only on the probabilities and not on the values of the outcomes. Now

    H(X0) = −Σ_{i=1}^∞ pi log pi   (8.37)
          = −Σ_{i=1}^∞ ( ∫_i^{i+1} fX̃(x) dx ) log( ∫_i^{i+1} fX̃(x) dx )   (8.38)
          = −Σ_{i=1}^∞ ∫_i^{i+1} fX̃(x) log fX̃(x) dx   (8.39)
          = −∫_1^∞ fX̃(x) log fX̃(x) dx   (8.40)
          = h(X̃),   (8.41)

since fX̃(x) = pi for i ≤ x < i + 1.
Figure 8.1: Distribution of X̃.
Hence we have the following chain of inequalities:

    H(X) = H(X0)   (8.42)
         = h(X̃)   (8.43)
         ≤ (1/2) log[2πe Var(X̃)]   (8.44)
         = (1/2) log[2πe (Var(X0) + Var(U))]   (8.45)
         = (1/2) log[2πe( Σ_{i=1}^∞ pi i² − (Σ_{i=1}^∞ i pi)² + 1/12 )].   (8.46)
Since entropy is invariant with respect to permutation of p1, p2, . . . , we can also obtain a bound by a permutation of the pi's. We conjecture that a good bound on the variance will be achieved when the high probabilities are close together, i.e., by the assignment . . . , p5, p3, p1, p2, p4, . . . for p1 ≥ p2 ≥ · · · .
How good is this bound? Let X be a Bernoulli random variable with parameter 1/2, which implies that H(X) = 1. The corresponding random variable X0 has variance 1/4, so the bound is

    H(X) ≤ (1/2) log[2πe(1/4 + 1/12)] = 1.255 bits.   (8.47)
8. Channel with uniformly distributed noise: Consider an additive channel whose input alphabet X = {0, ±1, ±2}, and whose output Y = X + Z, where Z is uniformly distributed over the interval [−1, 1]. Thus the input of the channel is a discrete random variable, while the output is continuous. Calculate the capacity C = max_{p(x)} I(X; Y) of this channel.
Solution: Uniformly distributed noise
We can expand the mutual information

    I(X; Y) = h(Y) − h(Y|X) = h(Y) − h(Z),   (8.48)

and h(Z) = log 2, since Z ∼ U(−1, 1).
The output Y is the sum of a discrete and a continuous random variable, and if the probabilities of X are p−2, p−1, . . . , p2, then Y has a density that is uniform with weight p−2/2 for −3 ≤ y ≤ −2, uniform with weight (p−2 + p−1)/2 for −2 ≤ y ≤ −1, etc. Given that Y ranges from −3 to 3, the maximum entropy that it can have is that of a uniform distribution over this range. This can be achieved if the distribution of X is (1/3, 0, 1/3, 0, 1/3). Then h(Y) = log 6 and the capacity of this channel is C = log 6 − log 2 = log 3 bits.
9. Gaussian mutual information. Suppose that (X, Y, Z) are jointly Gaussian and
that X → Y → Z forms a Markov chain. Let X and Y have correlation coefficient
ρ1 and let Y and Z have correlation coefficient ρ 2 . Find I(X; Z) .
Solution: Gaussian Mutual Information
First note that we may without any loss of generality assume that the means of X, Y and Z are zero. If in fact the means are not zero, one can subtract the vector of means without affecting the mutual information or the conditional independence of X and Z given Y. Let

    Λ = [ σx²         σx σz ρxz
          σx σz ρxz   σz²       ]

be the covariance matrix of X and Z. We can now use Eq. (8.34) to compute

    I(X; Z) = h(X) + h(Z) − h(X, Z)
            = (1/2) log(2πe σx²) + (1/2) log(2πe σz²) − (1/2) log((2πe)² |Λ|)
            = −(1/2) log(1 − ρxz²).

Now,

    ρxz = E{XZ}/(σx σz)
        = E{E{XZ|Y}}/(σx σz)
        = E{E{X|Y} E{Z|Y}}/(σx σz)
        = E{ (σx ρxy Y/σy)(σz ρzy Y/σy) }/(σx σz)
        = ρxy ρzy.

We can thus conclude that

    I(X; Z) = −(1/2) log(1 − ρxy² ρzy²),

which with ρxy = ρ1 and ρzy = ρ2 is the desired answer.
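A Monte Carlo sketch (not part of the original solution; the values of ρ1, ρ2 are arbitrary) builds a jointly Gaussian Markov chain X → Y → Z and compares the sample correlation of (X, Z) with ρ1ρ2 and the resulting mutual information with the formula above:

    import numpy as np

    rng = np.random.default_rng(3)
    rho1, rho2, n = 0.8, 0.6, 200_000
    Y = rng.standard_normal(n)
    X = rho1 * Y + np.sqrt(1 - rho1**2) * rng.standard_normal(n)   # corr(X,Y) = rho1
    Z = rho2 * Y + np.sqrt(1 - rho2**2) * rng.standard_normal(n)   # corr(Y,Z) = rho2

    rho_xz = np.corrcoef(X, Z)[0, 1]
    print(rho_xz, rho1 * rho2)                       # both about 0.48
    I = -0.5 * np.log2(1 - (rho1 * rho2) ** 2)
    print(f"I(X;Z) = {I:.4f} bits")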
10. The Shape of the Typical Set
Let Xi be i.i.d. ∼ f(x), where

    f(x) = c e^{−x⁴}.

Let h = −∫ f ln f. Describe the shape (or form) of the typical set A_ε^(n) = {xⁿ ∈ Rⁿ : f(xⁿ) ∈ 2^{−n(h±ε)}}.
Solution: The Shape of the Typical Set
We are interested in the set {xⁿ ∈ Rⁿ : f(xⁿ) ∈ 2^{−n(h±ε)}}; that is,

    2^{−n(h+ε)} ≤ f(xⁿ) ≤ 2^{−n(h−ε)}.

Since the Xi are i.i.d.,

    f(xⁿ) = Π_{i=1}^n f(xi)   (8.49)
          = Π_{i=1}^n c e^{−xi⁴}   (8.50)
          = e^{n ln c − Σ_{i=1}^n xi⁴}.   (8.51)

Plugging this in for f(xⁿ) in the above inequality and using algebraic manipulation gives

    n(ln c + (h − ε) ln 2) ≤ Σ_{i=1}^n xi⁴ ≤ n(ln c + (h + ε) ln 2).

So the shape of the typical set is the shell of a 4-norm ball,

    { xⁿ : ||xⁿ||₄ ∈ [ (n(ln c + (h − ε) ln 2))^{1/4}, (n(ln c + (h + ε) ln 2))^{1/4} ] }.
11. Non ergodic Gaussian process.
Consider a constant signal V in the presence of iid observational noise {Zi}. Thus Xi = V + Zi, where V ∼ N(0, S), and Zi are iid ∼ N(0, N). Assume V and {Zi} are independent.
(a) Is {Xi} stationary?
(b) Find lim_{n→∞} (1/n) Σ_{i=1}^n Xi. Is the limit random?
(c) What is the entropy rate h of {Xi}?
(d) Find the least mean squared error predictor X̂_{n+1}(Xⁿ) and find σ∞² = lim_{n→∞} E(X̂n − Xn)².
(e) Does {Xi} have an AEP? That is, does −(1/n) log f(Xⁿ) → h?
Solution: Nonergodic Gaussian process
(a) Yes. EXi = EV + EZi = 0 for all i, and

    E Xi Xj = E(V + Zi)(V + Zj) = S + N,   i = j,
                                = S,       i ≠ j.   (8.53)

Since Xi is Gaussian distributed, it is completely characterized by its first and second moments. Since the moments are stationary, Xi is wide sense stationary, which for a Gaussian distribution implies that Xi is stationary.
(b)

    lim_{n→∞} (1/n) Σ_{i=1}^n Xi = lim_{n→∞} (1/n) Σ_{i=1}^n (Zi + V)   (8.54)
                                 = V + lim_{n→∞} (1/n) Σ_{i=1}^n Zi   (8.55)
                                 = V + EZi   (by the strong law of large numbers)   (8.56)
                                 = V.   (8.57)

The limit is a random variable N(0, S).
(c) Note that Xⁿ ∼ N(0, K_{Xⁿ}), where K_{Xⁿ} has diagonal values of S + N and off-diagonal values of S. Also observe that the determinant is |K_{Xⁿ}| = Nⁿ(nS/N + 1). We now compute the entropy rate as

    h(X) = lim (1/n) h(X1, X2, . . . , Xn)   (8.58)
         = lim (1/2n) log( (2πe)ⁿ |K_{Xⁿ}| )   (8.59)
         = lim (1/2n) log( (2πe)ⁿ Nⁿ (nS/N + 1) )   (8.60)
         = lim [ (1/2n) log(2πeN)ⁿ + (1/2n) log(nS/N + 1) ]   (8.61)
         = (1/2) log 2πeN + lim (1/2n) log(nS/N + 1)   (8.62)
         = (1/2) log 2πeN.   (8.63)
(d) By iterated expectation we can write

    E[Xn+1 − X̂n+1(Xⁿ)]² = E[ E[ (Xn+1 − X̂n+1(Xⁿ))² | Xⁿ ] ].   (8.64)

We note that minimizing the expression is equivalent to minimizing the inner expectation, and that for the inner expectation the predictor is a nonrandom variable. Expanding the inner expectation and taking the derivative with respect to the estimator X̂n+1(Xⁿ), we get

    E[(Xn+1 − X̂n+1(Xⁿ))² | Xⁿ] = E[(X²n+1 − 2 Xn+1 X̂n+1(Xⁿ) + X̂²n+1(Xⁿ)) | Xⁿ],   (8.65)

so

    d E[(Xn+1 − X̂n+1(Xⁿ))² | Xⁿ] / dX̂n+1(Xⁿ) = E[−2Xn+1 + 2X̂n+1(Xⁿ) | Xⁿ]   (8.66)
                                              = −2 E(Xn+1|Xⁿ) + 2 X̂n+1(Xⁿ).   (8.67)

Setting the derivative equal to 0, we see that the optimal predictor is X̂n+1(Xⁿ) = E(Xn+1|Xⁿ). To find the limiting squared error for this estimator, we use the fact that V and Xⁿ are jointly normal with known covariance matrix, and therefore the conditional distribution of V given Xⁿ is

    f(V|Xⁿ) ∼ N( (S/(nS+N)) Σ_{i=1}^n Xi,  SN/(nS+N) ).   (8.68)

Now

    X̂n+1(Xⁿ) = E(Xn+1|Xⁿ)   (8.69)
              = E(V|Xⁿ) + E(Zn+1|Xⁿ)   (8.70)
              = (S/(nS+N)) Σ_{i=1}^n Xi + 0,   (8.71)

and hence the limiting squared error is

    e² = lim_{n→∞} E(X̂n − Xn)²   (8.72)
       = lim E[ (S/((n−1)S+N)) Σ_{i=1}^{n−1} Xi − Xn ]²   (8.73)
       = lim E[ (S/((n−1)S+N)) Σ_{i=1}^{n−1} (Zi + V) − Zn − V ]²   (8.74)
       = lim E[ (S/((n−1)S+N)) Σ_{i=1}^{n−1} Zi − Zn − (N/((n−1)S+N)) V ]²   (8.75)
       = lim [ (S/((n−1)S+N))² Σ_{i=1}^{n−1} EZi² + EZn² + (N/((n−1)S+N))² EV² ]   (8.76)
       = lim [ (S/((n−1)S+N))² (n−1)N + N + (N/((n−1)S+N))² S ]   (8.77)
       = 0 + N + 0   (8.78)
       = N.   (8.79)
(e) Even though the process is not ergodic, it is stationary, and it does have an AEP because

    −(1/n) ln f(Xⁿ) = −(1/n) ln[ (2π)^{−n/2} |K_{Xⁿ}|^{−1/2} e^{−Xᵗ K_{Xⁿ}⁻¹ X/2} ]   (8.80)
                    = (1/2n) ln(2π)ⁿ + (1/2n) ln|K_{Xⁿ}| + (1/2n) Xᵗ K_{Xⁿ}⁻¹ X   (8.81)
                    = (1/2n) ln[(2πe)ⁿ |K_{Xⁿ}|] − 1/2 + (1/2n) Xᵗ K_{Xⁿ}⁻¹ X   (8.82)
                    = (1/n) h(Xⁿ) − 1/2 + (1/2n) Xᵗ K_{Xⁿ}⁻¹ X.   (8.83)

Since X ∼ N(0, K), we can write X = K^{1/2} W, where W ∼ N(0, I). Then Xᵗ K⁻¹ X = Wᵗ K^{1/2} K⁻¹ K^{1/2} W = Wᵗ W = Σ Wi², and therefore Xᵗ K⁻¹ X has a χ² distribution with n degrees of freedom. The density of the χ² distribution is

    f(x) = x^{n/2 − 1} e^{−x/2} / ( Γ(n/2) 2^{n/2} ),   (8.85)

and its moment generating function is

    M(t) = ∫ f(x) e^{tx} dx   (8.86)
         = ∫ x^{n/2 − 1} e^{−x/2} e^{tx} / ( Γ(n/2) 2^{n/2} ) dx   (8.87)
         = (1 − 2t)^{−n/2} ∫ ((1 − 2t)x)^{n/2 − 1} e^{−(1−2t)x/2} (1 − 2t) / ( Γ(n/2) 2^{n/2} ) dx   (8.88)
         = (1 − 2t)^{−n/2}.   (8.89)

By the Chernoff bound (Lemma 11.19.1),

    Pr{ (1/n) Σ Wi² > 1 + ε } ≤ min_s e^{−sn(1+ε)} (1 − 2s)^{−n/2}   (8.90)
                              ≤ e^{−(n/2)(ε − ln(1+ε))},   (8.91)

setting s = ε/(2(1+ε)). Thus

    Pr{ | −(1/n) ln f(Xⁿ) − hn | > ε } = Pr{ | (1/2)(−1 + (1/n) Σ Wi²) | > ε }   (8.92)
                                       ≤ e^{−(n/2)(ε − ln(1+ε))},   (8.93)

and the bound goes to 0 exponentially fast as n → ∞; the probabilities are therefore summable, and by the Borel–Cantelli lemma

    −(1/n) ln f(Xⁿ) − hn → 0   (8.94)

with probability 1. So Xi satisfies the AEP even though it is not ergodic.
Chapter 9
Gaussian channel
1. A channel with two independent looks at Y. Let Y1 and Y2 be conditionally independent and conditionally identically distributed given X.
(a) Show I(X; Y1, Y2) = 2I(X; Y1) − I(Y1; Y2).
(b) Conclude that the capacity of the channel X → (Y1, Y2) is less than twice the capacity of the channel X → Y1.
Solution: Channel with two independent looks at Y.
(a)

    I(X; Y1, Y2) = H(Y1, Y2) − H(Y1, Y2|X)   (9.1)
                 = H(Y1) + H(Y2) − I(Y1; Y2) − H(Y1|X) − H(Y2|X)   (9.2)
                   (since Y1 and Y2 are conditionally independent given X)   (9.3)
                 = I(X; Y1) + I(X; Y2) − I(Y1; Y2)   (9.4)
                 = 2I(X; Y1) − I(Y1; Y2)   (since Y1 and Y2 are conditionally identically distributed).   (9.5)
(b) The capacity of the single look channel X → Y1 is

    C1 = max_{p(x)} I(X; Y1).   (9.6)

The capacity of the channel X → (Y1, Y2) is

    C2 = max_{p(x)} I(X; Y1, Y2)   (9.7)
       = max_{p(x)} [2I(X; Y1) − I(Y1; Y2)]   (9.8)
       ≤ max_{p(x)} 2I(X; Y1)   (9.9)
       = 2C1.   (9.10)

Hence, two independent looks cannot be more than twice as good as one look.
2. The two-look Gaussian channel. Consider the ordinary Gaussian channel with two correlated looks at X, i.e., Y = (Y1, Y2), where

    Y1 = X + Z1   (9.11)
    Y2 = X + Z2   (9.12)

with a power constraint P on X, and (Z1, Z2) ∼ N2(0, K), where

    K = [ N   Nρ
          Nρ  N  ].   (9.13)

Find the capacity C for
(a) ρ = 1
(b) ρ = 0
(c) ρ = −1
Solution: The two-look Gaussian channel.
It is clear that the input distribution that maximizes the capacity is X ∼ N(0, P). Evaluating the mutual information for this distribution,

    C2 = max I(X; Y1, Y2)   (9.14)
       = h(Y1, Y2) − h(Y1, Y2|X)   (9.15)
       = h(Y1, Y2) − h(Z1, Z2|X)   (9.16)
       = h(Y1, Y2) − h(Z1, Z2).   (9.17)

Now since

    (Z1, Z2) ∼ N( 0, [ N  Nρ ; Nρ  N ] ),   (9.18)

we have

    h(Z1, Z2) = (1/2) log(2πe)² |KZ| = (1/2) log(2πe)² N²(1 − ρ²).   (9.19)

Since Y1 = X + Z1 and Y2 = X + Z2, we have

    (Y1, Y2) ∼ N( 0, [ P+N  P+ρN ; P+ρN  P+N ] ),   (9.20)

and

    h(Y1, Y2) = (1/2) log(2πe)² |KY| = (1/2) log(2πe)² (N²(1 − ρ²) + 2PN(1 − ρ)).   (9.21)

Hence the capacity is

    C2 = h(Y1, Y2) − h(Z1, Z2)   (9.22)
       = (1/2) log( 1 + 2P/(N(1 + ρ)) ).   (9.23)

(a) ρ = 1. In this case, C = (1/2) log(1 + P/N), which is the capacity of a single look channel. This is not surprising, since in this case Y1 = Y2.
(b) ρ = 0. In this case,

    C = (1/2) log(1 + 2P/N),   (9.24)

which corresponds to using twice the power in a single look. The capacity is the same as the capacity of the channel X → (Y1 + Y2).
(c) ρ = −1. In this case, C = ∞, which is not surprising since if we add Y1 and Y2, we can recover X exactly.
Note that the capacity of the above channel in all cases is the same as the capacity of the channel X → Y1 + Y2.
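A short numerical sketch (not part of the original solution; P, N and the ρ values are arbitrary test choices) compares the determinant form (1/2) log(|KY|/|KZ|) with the closed form (9.23):

    import numpy as np

    def C2(P, N, rho):
        KZ = np.array([[N, rho * N], [rho * N, N]])
        KY = np.array([[P + N, P + rho * N], [P + rho * N, P + N]])
        return 0.5 * np.log2(np.linalg.det(KY) / np.linalg.det(KZ))

    P, N = 10.0, 1.0
    for rho in [0.0, 0.5, 0.9]:           # rho = +/-1 excluded (singular KZ)
        closed = 0.5 * np.log2(1 + 2 * P / (N * (1 + rho)))
        print(rho, C2(P, N, rho), closed)  # the two columns agree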
3. Output power constraint. Consider an additive white Gaussian noise channel with an expected output power constraint P. Thus Y = X + Z, Z ∼ N(0, σ²), Z is independent of X, and EY² ≤ P. Find the channel capacity.
Solution: Output power constraint

    C = max_{f(X): E(X+Z)² ≤ P} I(X; Y)   (9.25)
      = max_{f(X): E(X+Z)² ≤ P} ( h(Y) − h(Y|X) )   (9.26)
      = max_{f(X): E(X+Z)² ≤ P} ( h(Y) − h(Z) ).   (9.27)

Given a constraint on the output power of Y, the maximum differential entropy is achieved by a normal distribution, and we can achieve this by having X ∼ N(0, P − σ²). In this case,

    C = (1/2) log 2πeP − (1/2) log 2πeσ² = (1/2) log(P/σ²).   (9.29)
4. Exponential noise channels. Consider an additive noise channel Yi = Xi + Zi, where Zi is i.i.d. exponentially distributed noise with mean µ. Assume that we have a mean constraint on the signal, i.e., EXi ≤ λ. Show that the capacity of such a channel is C = log(1 + λ/µ).
Solution: Exponential noise channels
Just as for the Gaussian channel, we can write

    C = max_{f(X): EX ≤ λ} I(X; Y)   (9.30)
      = max_{f(X): EX ≤ λ} h(Y) − h(Y|X)   (9.31)
      = max_{f(X): EX ≤ λ} h(Y) − h(Z|X)   (9.32)
      = max_{f(X): EX ≤ λ} h(Y) − h(Z)   (9.33)
      = max_{f(X): EX ≤ λ} h(Y) − (1 + ln µ).   (9.34)

Now Y = X + Z, and EY = EX + EZ ≤ λ + µ. Given a mean constraint, the entropy is maximized by the exponential distribution, and therefore

    max_{EY ≤ λ+µ} h(Y) = 1 + ln(λ + µ).   (9.36)

Unlike normal distributions, though, the sum of two exponentially distributed variables is not exponential, so we cannot set X to be an exponential distribution to achieve the right distribution of Y. Instead, we can use characteristic functions to find the distribution of X. The characteristic function of an exponential distribution with mean µ is

    ψZ(t) = ∫ (1/µ) e^{−x/µ} e^{itx} dx = 1/(1 − iµt).   (9.37)

The distribution of X that when added to Z will give an exponential distribution for Y is given by the ratio of the characteristic functions,

    ψX(t) = (1 − iµt) / (1 − i(λ + µ)t),   (9.38)

which can be seen to correspond to a mixture of a point mass and an exponential distribution. If

    X = 0,    with probability µ/(λ + µ)
      = Xe,   with probability λ/(λ + µ),   (9.40)

where Xe has an exponential distribution with mean λ + µ, we can verify that the characteristic function of X is correct (and that EX = λ, meeting the mean constraint).
Using the value of entropy for exponential distributions, we get

    C = h(Y) − h(Z) = 1 + ln(λ + µ) − (1 + ln µ) = ln(1 + λ/µ).   (9.41)
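A Monte Carlo sketch (not part of the original solution; λ and µ are arbitrary test values) checks that the mixture input (9.40) added to Exp(µ) noise produces an output whose first two moments match an Exp(λ + µ) random variable:

    import numpy as np

    rng = np.random.default_rng(4)
    lam, mu, n = 2.0, 1.0, 500_000

    Z = rng.exponential(mu, n)                                  # noise, mean mu
    is_zero = rng.uniform(size=n) < mu / (lam + mu)             # point mass at 0
    X = np.where(is_zero, 0.0, rng.exponential(lam + mu, n))    # else Exp(lam + mu)
    Y = X + Z

    print(Y.mean(), lam + mu)                # mean of an Exp(lam + mu) variable
    print(Y.var(), (lam + mu) ** 2)          # variance of an Exp(lam + mu) variable
    print(np.log(1 + lam / mu))              # capacity in nats (= ln 3 here)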
5. Fading channel.
Consider an additive noise fading channel

    Y = XV + Z,

where Z is additive noise, V is a random variable representing fading, and Z and V are independent of each other and of X. Argue that knowledge of the fading factor V improves capacity by showing

    I(X; Y|V) ≥ I(X; Y).
Solution: Fading Channel
Expanding I(X; Y, V) in two ways, we get

    I(X; Y, V) = I(X; V) + I(X; Y|V)   (9.42)
               = I(X; Y) + I(X; V|Y),   (9.43)

i.e.,

    I(X; V) + I(X; Y|V) = I(X; Y) + I(X; V|Y)
    I(X; Y|V) = I(X; Y) + I(X; V|Y)   (9.44)
    I(X; Y|V) ≥ I(X; Y),   (9.45)

where (9.44) follows from the independence of X and V, and (9.45) follows from I(X; V|Y) ≥ 0.
6. Parallel channels and waterfilling. Consider a pair of parallel Gaussian channels, i.e.,

    (Y1, Y2) = (X1, X2) + (Z1, Z2),   (9.46)

where

    (Z1, Z2) ∼ N( 0, [ σ1²  0 ; 0  σ2² ] ),   (9.47)

and there is a power constraint E(X1² + X2²) ≤ 2P. Assume that σ1² > σ2². At what power does the channel stop behaving like a single channel with noise variance σ2², and begin behaving like a pair of channels?
Solution: Parallel channels and waterfilling. By the result of Section 9.4, it follows
that we will put all the signal power into the channel with less noise until the total
power of noise + signal in that channel equals the noise power in the other channel.
After that, we will split any additional power evenly between the two channels.
Thus the combined channel begins to behave like a pair of parallel channels when the
signal power is equal to the difference of the two noise powers, i.e., when 2P = σ1² − σ2².
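The following sketch of a water-filling routine (not part of the original solution; the noise variances and total power are arbitrary test values) finds the water level by bisection and illustrates the threshold just described:

    import numpy as np

    def waterfill(sigma2, Ptot):
        """Water-fill total power Ptot over parallel Gaussian channels with noise variances sigma2."""
        sigma2 = np.asarray(sigma2, dtype=float)
        lo, hi = sigma2.min(), sigma2.max() + Ptot
        for _ in range(100):                         # bisect on the water level nu
            nu = 0.5 * (lo + hi)
            used = np.maximum(nu - sigma2, 0.0).sum()
            lo, hi = (nu, hi) if used < Ptot else (lo, nu)
        P = np.maximum(nu - sigma2, 0.0)
        C = 0.5 * np.log2(1 + P / sigma2).sum()
        return nu, P, C

    # sigma1^2 = 4 > sigma2^2 = 1: both channels become active once total power >= 3.
    print(waterfill([4.0, 1.0], 2.0))   # below the threshold: only the quieter channel is used
    print(waterfill([4.0, 1.0], 6.0))   # above the threshold: both channels carry power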
7. Multipath Gaussian channel. Consider a Gaussian noise channel of power constraint P, where the signal takes two different paths and the received noisy signals are added together at the antenna, so that

    Y = (X + Z1) + (X + Z2) = 2X + Z1 + Z2.

(a) Find the capacity of this channel if Z1 and Z2 are jointly normal with covariance matrix KZ = [ σ²  ρσ² ; ρσ²  σ² ].
(b) What is the capacity for ρ = 0, ρ = 1, ρ = −1?
Solution: Multipath Gaussian channel.
The channel reduces to the single channel Y = 2X + (Z1 + Z2). The power constraint on the input 2X is 4P. Z1 and Z2 are zero mean, and therefore so is Z1 + Z2. Then

    Var(Z1 + Z2) = E[(Z1 + Z2)²] = E[Z1² + Z2² + 2Z1Z2] = 2σ² + 2ρσ².

Thus the noise distribution is N(0, 2σ²(1 + ρ)).
(a) Plugging the noise and power values into the formula for the one-dimensional (P, N) channel capacity, C = (1/2) log(1 + P/N), we get

    C = (1/2) log( 1 + 4P/(2σ²(1 + ρ)) ) = (1/2) log( 1 + 2P/(σ²(1 + ρ)) ).

(b) i. When ρ = 0, C = (1/2) log(1 + 2P/σ²).
    ii. When ρ = 1, C = (1/2) log(1 + P/σ²).
    iii. When ρ = −1, C = ∞.
8. Parallel Gaussian channels
Consider the parallel Gaussian channel Yi = Xi + Zi, i = 1, 2, where Z1 ∼ N(0, N1) and Z2 ∼ N(0, N2) are independent Gaussian random variables. We wish to allocate power to the two parallel channels. Let β1 and β2 be fixed. Consider a total cost constraint β1P1 + β2P2 ≤ β, where Pi is the power allocated to the ith channel and βi is the cost per unit power in that channel. Thus P1 ≥ 0 and P2 ≥ 0 can be chosen subject to the cost constraint β.
(a) For what value of β does the channel stop acting like a single channel and start acting like a pair of channels?
(b) Evaluate the capacity and find P1, P2 that achieve capacity for β1 = 1, β2 = 2, N1 = 3, N2 = 2 and β = 10.
Solution: Parallel channels
When we have cost constraints on the power, we need to optimize the total capacity of the two parallel channels,

    C = (1/2) log(1 + P1/N1) + (1/2) log(1 + P2/N2),   (9.48)

subject to the constraint that

    β1P1 + β2P2 ≤ β.   (9.49)

Using the methods of Section 9.4, we set

    J(P1, P2) = Σi (1/2) log(1 + Pi/Ni) + λ( Σi βiPi ),   (9.50)

and differentiating with respect to Pi, we have

    (1/2) · 1/(Pi + Ni) + λβi = 0,   (9.51)

or

    Pi = ( ν/βi − Ni )⁺,   (9.52)

or

    βiPi = ( ν − βiNi )⁺.   (9.53)

(a) It follows that we will put all the signal power into the channel with less weighted noise (βiNi) until the total weighted power of noise + signal in that channel equals the weighted noise power in the other channel. After that, we will split any additional power between the two channels according to their weights. Thus the combined channel begins to behave like a pair of parallel channels when the cost budget equals the difference of the two weighted noise powers, i.e., when β = β2N2 − β1N1 (assuming β1N1 < β2N2).
(b) In this case, β1N1 = 3 < β2N2 = 4, so we put power only into channel 1 until the cost reaches β = β2N2 − β1N1 = 1. Beyond that point both channels are active, and the cost constraint β1P1 + β2P2 = (ν − β1N1) + (ν − β2N2) = 2ν − 7 = β gives ν = 8.5 for β = 10. Hence β1P1 = ν − 3 = 5.5 and β2P2 = ν − 4 = 4.5, i.e., P1 = 5.5 and P2 = 2.25. The capacity in this case is

    C = (1/2) log(1 + 5.5/3) + (1/2) log(1 + 2.25/2) ≈ 1.30 bits.   (9.54)
9. Vector Gaussian channel
Consider the vector Gaussian noise channel

    Y = X + Z,

where X = (X1, X2, X3), Z = (Z1, Z2, Z3), Y = (Y1, Y2, Y3), E||X||² ≤ P, and

    Z ∼ N( 0, [ 1 0 1
                0 1 1
                1 1 2 ] ).

Find the capacity. The answer may be surprising.
Solution: Vector Gaussian channel
Normally one would water-fill over the eigenvalues of the noise covariance matrix. Here
we have a degenerate case (i.e., one of the eigenvalues is zero), which we can exploit
easily.
Musing upon the structure of the noise covariance matrix, one can see Z 1 + Z2 = Z3 .
Thus, by processing the output vector as Y 1 + Y2 − Y3 = (X1 + Z1 ) + (X2 + Z2 ) − (X3 +
Z3 ) = X1 + X2 − X3 , we can get rid of the noise completely. Therefore, we have infinite
capacity.
Note that we can reach the conclusion by water-filling on the zero eigenvalue.
10. The capacity of photographic film. Here is a problem with a nice answer that takes
a little time. We’re interested in the capacity of photographic film. The film consists
of silver iodide crystals, Poisson distributed, with a density of λ particles per square
inch. The film is illuminated without knowledge of the position of the silver iodide
particles. It is then developed and the receiver sees only the silver iodide particles that
have been illuminated. It is assumed that light incident on a cell exposes the grain if it
is there and otherwise results in a blank response. Silver iodide particles that are not
illuminated and vacant portions of the film remain blank. The question is, “What is
the capacity of this film?”
We make the following assumptions. We grid the film very finely into cells of area dA .
It is assumed that there is at most one silver iodide particle per cell and that no silver
iodide particle is intersected by the cell boundaries. Thus, the film can be considered
to be a large number of parallel binary asymmetric channels with crossover probability
1 − λdA .
By calculating the capacity of this binary asymmetric channel to first order in dA
(making the necessary approximations) one can calculate the capacity of the film in
bits per square inch. It is, of course, proportional to λ . The question is what is the
multiplicative constant?
The answer would be λ bits per unit area if both illuminator and receiver knew the
positions of the crystals.
Solution: Capacity of photographic film
As argued in the problem, each small cell can be modelled as a binary asymmetric Z-channel with probability transition matrix

    p(y|x) = [ 1         0
               1 − λdA   λdA ],   x, y ∈ {0, 1},   (9.55)

where x = 1 corresponds to shining light on the cell. Let β = λ dA.
First we express I(X; Y), the mutual information between the input and output of the Z-channel, as a function of α = Pr(X = 1):

    H(Y|X) = Pr(X = 0) · 0 + Pr(X = 1) · H(β) = αH(β)
    H(Y) = H(Pr(Y = 1)) = H(αβ)
    I(X; Y) = H(Y) − H(Y|X) = H(αβ) − αH(β).

Since I(X; Y) = 0 when α = 0 and α = 1, the maximum mutual information is obtained for some value of α such that 0 < α < 1.
Using elementary calculus, we determine that (converting the equation to nats rather than bits)

    d I(X; Y)/dα = β ln( (1 − αβ)/(αβ) ) − He(β).

To find the optimal value of α, we set this equal to 0 and solve for α as

    α = (1/β) · 1/(1 + e^{He(β)/β}).   (9.56)

If we let

    γ = 1/(1 + e^{He(β)/β}),   (9.57)

then αβ = γ, and, writing γ̄ = 1 − γ,

    γ̄ = e^{He(β)/β} / (1 + e^{He(β)/β}) = γ e^{He(β)/β},   (9.58)

or

    ln γ̄ − ln γ = He(β)/β,   (9.59)

so that

    I(X; Y) = He(αβ) − αHe(β)   (9.60)
            = He(γ) − γ He(β)/β   (9.61)
            = −γ ln γ − γ̄ ln γ̄ − γ(ln γ̄ − ln γ)   (9.62)
            = −ln γ̄   (9.63)
            = ln(1 + e^{−He(β)/β})   (9.64)
            ≈ e^{−He(β)/β}   (9.65)
            = e^{(β ln β + (1−β) ln(1−β))/β}   (9.66)
            = β e^{(1−β) ln(1−β)/β}   (9.67)
            ≈ β/e.   (9.68)

Thus the capacity of each cell is approximately β/e = λ dA/e nats as β → 0, so the capacity of the film is about λ/e nats per unit area, compared with λ nats per unit area when the positions of the crystals are known.
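A numerical sketch (not part of the original solution) computes the exact Z-channel capacity by maximizing H(αβ) − αH(β) over α on a grid and compares it with the small-β approximation β/e (all quantities in nats):

    import numpy as np

    def H_nats(p):
        p = np.clip(p, 1e-300, 1.0)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    def z_capacity(beta):
        alpha = np.linspace(1e-6, 1 - 1e-6, 200_001)   # grid over Pr(X = 1)
        return np.max(H_nats(alpha * beta) - alpha * H_nats(beta))

    for beta in [0.1, 0.01, 0.001]:
        print(beta, z_capacity(beta), beta / np.e)     # exact capacity vs beta/e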
11. Gaussian mutual information. Suppose that (X, Y, Z) are jointly Gaussian and that X → Y → Z forms a Markov chain. Let X and Y have correlation coefficient ρ1 and let Y and Z have correlation coefficient ρ2. Find I(X; Z).
Solution: Gaussian Mutual Information (Repeat of Problem 8.9)
First note that we may without any loss of generality assume that the means of X, Y and Z are zero. If in fact the means are not zero, one can subtract the vector of means without affecting the mutual information or the conditional independence of X and Z given Y. Let

    Λ = [ σx²         σx σz ρxz
          σx σz ρxz   σz²       ]

be the covariance matrix of X and Z. We can now use Eq. (9.93) and Eq. (9.94) to compute

    I(X; Z) = h(X) + h(Z) − h(X, Z)
            = (1/2) log(2πe σx²) + (1/2) log(2πe σz²) − (1/2) log((2πe)² |Λ|)
            = −(1/2) log(1 − ρxz²).

Now,

    ρxz = E{XZ}/(σx σz)
        = E{E{XZ|Y}}/(σx σz)
        = E{E{X|Y} E{Z|Y}}/(σx σz)
        = E{ (σx ρxy Y/σy)(σz ρzy Y/σy) }/(σx σz)
        = ρxy ρzy.

We can thus conclude that

    I(X; Z) = −(1/2) log(1 − ρxy² ρzy²).
12. Time varying channel. A train pulls out of the station at constant velocity. The received signal energy thus falls off with time as 1/i². The total received signal at time i is

    Yi = (1/i) Xi + Zi,

where Z1, Z2, . . . are i.i.d. ∼ N(0, N). The transmitter constraint for block length n is

    (1/n) Σ_{i=1}^n xi²(w) ≤ P,   w ∈ {1, 2, . . . , 2^{nR}}.

Using Fano's inequality, show that the capacity C is equal to zero for this channel.
Solution: Time Varying Channel
Just as in the proof of the converse for the Gaussian channel,

    nR = H(W) = I(W; Ŵ) + H(W|Ŵ)   (9.69)
       ≤ I(W; Ŵ) + nεn   (9.70)
       ≤ I(Xⁿ; Yⁿ) + nεn   (9.71)
       = h(Yⁿ) − h(Yⁿ|Xⁿ) + nεn   (9.72)
       = h(Yⁿ) − h(Zⁿ) + nεn   (9.73)
       ≤ Σ_{i=1}^n h(Yi) − h(Zⁿ) + nεn   (9.74)
       = Σ_{i=1}^n h(Yi) − Σ_{i=1}^n h(Zi) + nεn   (9.75)
       = Σ_{i=1}^n I(Xi; Yi) + nεn.   (9.76)

Now let Pi be the average power of the ith column of the codebook, i.e.,

    Pi = (1/2^{nR}) Σ_w xi²(w).   (9.77)

Then, since Yi = (1/i) Xi + Zi and since Xi and Zi are independent, the average power of Yi is Pi/i² + N. Hence, since entropy is maximized by the normal distribution,

    h(Yi) ≤ (1/2) log 2πe( Pi/i² + N ).   (9.78)

Continuing with the inequalities of the converse, we obtain

    nR ≤ Σ ( h(Yi) − h(Zi) ) + nεn   (9.79)
       ≤ Σ [ (1/2) log( 2πe(Pi/i² + N) ) − (1/2) log 2πeN ] + nεn   (9.80)
       = Σ (1/2) log( 1 + Pi/(i²N) ) + nεn.   (9.81)

Since each of the codewords satisfies the power constraint, so does their average, and hence

    (1/n) Σi Pi ≤ P.   (9.82)

This corresponds to a set of parallel channels with increasing noise powers. Using waterfilling, the optimal solution is to put power into the first few channels, which have the lowest noise power. Since the noise power in channel i is Ni = i²N, we put power only into channels where Pi + Ni ≤ λ. The height of the water level in the waterfilling is less than N + nP, and hence for all channels where we put power, i²N < nP + N, so only O(√n) channels are used. The average rate is then less than (1/n) √n (1/2) log(1 + nP/N), and the capacity per transmission goes to 0 as n → ∞. Hence the capacity of this channel is 0.
13. Feedback capacity for n = 2. Let (Z1, Z2) ∼ N(0, K), K = [ 1  ρ ; ρ  1 ]. Find the maximum of (1/2) log( |K_{X+Z}|/|K_Z| ) with and without feedback given a trace (power) constraint tr(K_X) ≤ 2P.
Solution: Feedback capacity
Without feedback, the solution is based on waterfilling. The eigenvalues of the matrix are 1 ± ρ, and therefore if P < ρ, we would use only one of the channels and achieve capacity C = (1/2) log(1 + 2P/(1 − ρ)). For P ≥ ρ, we would use both eigenvalues, and the water level would be obtained by distributing the remaining power equally across both eigenvalues. Thus the water level would be (1 + ρ) + (2P − 2ρ)/2 = 1 + P, and the capacity would be C = (1/2) log((1 + P)/(1 + ρ)) + (1/2) log((1 + P)/(1 − ρ)).
With feedback, the solution is a little more complex. From (9.102), we have

    C_{n,FB} = max (1/2n) log( |(B + I) K_Z^{(n)} (B + I)ᵗ + K_V| / |K_Z^{(n)}| ),   (9.83)

where the maximum is taken over all nonnegative definite K_V and strictly lower triangular B such that

    tr( B K_Z^{(n)} Bᵗ + K_V ) ≤ nP.   (9.84)

In the case when n = 2,

    (B + I) K_Z^{(n)} (B + I)ᵗ + K_V = [ 1  0 ; b  1 ] [ 1  ρ ; ρ  1 ] [ 1  b ; 0  1 ] + [ P1  0 ; 0  P2 ]   (9.85)
                                     = [ 1 + P1    ρ + b
                                         ρ + b     1 + P2 + 2ρb + b² ],   (9.86)

subject to the constraint that

    tr( [ P1  0 ; 0  P2 + b² ] ) ≤ 2P.   (9.87)

Expanding the determinant, we obtain

    |(B + I) K_Z (B + I)ᵗ + K_V| = 1 + P1 + P2 + P1P2 + P1b² + 2P1bρ − ρ²,   (9.88)

subject to

    P1 + P2 + b² = 2P.   (9.89)

Setting up the functional and differentiating with respect to the variables, we obtain the following relationships:

    P1 = P2 + b² + 2bρ   (9.90)

and

    b = ρP1.   (9.91)
14. Additive noise channel. Consider the channel Y = X + Z , where X is the transmitted signal with power constraint P , Z is independent additive noise, and Y is the
received signal. Let
    Z = 0,    with probability 1/10
      = Z*,   with probability 9/10,
where Z ∗ ∼ N (0, N ). Thus Z has a mixture distribution which is the mixture of a
Gaussian distribution and a degenerate distribution with mass 1 at 0.
(a) What is the capacity of this channel? This should be a pleasant surprise.
(b) How would you signal in order to achieve capacity?
Solution: Additive Noise channel
The capacity of this channel is infinite, since at the times the noise is 0 the output is
exactly equal to the input, and we can send an infinite number of bits.
To send information through this channel, just repeat the same real number at the
input. When we have three or four outputs that agree, that should correspond to the
points where the noise is 0, and we can decode an infinite number of bits.
15. Discrete input continuous output channel. Let Pr{X = 1} = p, Pr{X = 0} = 1 − p, and let Y = X + Z, where Z is uniform over the interval [0, a], a > 1, and Z is independent of X.
(a) Calculate I(X; Y) = H(X) − H(X|Y).
(b) Now calculate I(X; Y) the other way by I(X; Y) = h(Y) − h(Y|X).
(c) Calculate the capacity of this channel by maximizing over p.
Solution: Discrete input continuous output channel
(a) Since

    f(y|X = 0) = 1/a for 0 ≤ y < a, and 0 otherwise,   (9.92)

and

    f(y|X = 1) = 1/a for 1 ≤ y < 1 + a, and 0 otherwise,   (9.93)

the output density is

    f(y) = (1 − p)/a,   0 ≤ y < 1
         = 1/a,          1 ≤ y ≤ a
         = p/a,          a < y < 1 + a.   (9.94)

(b) H(X) = H(p). H(X|Y = y) is nonzero only for 1 ≤ y ≤ a, and by Bayes' rule, conditioned on such a Y, the probability that X = 1 is

    P(X = 1|Y = y) = P(X = 1) f(y|X = 1) / [ P(X = 1) f(y|X = 1) + P(X = 0) f(y|X = 0) ] = p,   (9.95)

and hence H(X|Y) = P(1 ≤ Y ≤ a) H(p) = ((a − 1)/a) H(p). Therefore I(X; Y) = H(X) − H(X|Y) = (1/a) H(p).
(c) f(Y|X = 0) ∼ U(0, a), and hence h(Y|X = 0) = log a, and similarly for X = 1, so that h(Y|X) = log a. The differential entropy h(Y) can be calculated from (9.94) as

    h(Y) = −∫₀¹ ((1 − p)/a) log((1 − p)/a) dy − ∫₁^a (1/a) log(1/a) dy − ∫_a^{1+a} (p/a) log(p/a) dy   (9.96)
         = (1/a)( −p log p − (1 − p) log(1 − p) ) + log a   (9.97)
         = (1/a) H(p) + log a,   (9.98)

and again I(X; Y) = h(Y) − h(Y|X) = (1/a) H(p).
(d) The mutual information is maximized for p = 1/2, and the corresponding capacity of the channel is 1/a bits.
16. Gaussian mutual information
Suppose that (X, Y, Z) are jointly Gaussian and that X → Y → Z forms a Markov chain. Let X and Y have correlation coefficient ρ1 and let Y and Z have correlation coefficient ρ2. Find I(X; Z).
Solution: Gaussian Mutual Information (Repeat of Problem 8.9)
First note that we may without any loss of generality assume that the means of X, Y and Z are zero. If in fact the means are not zero, one can subtract the vector of means without affecting the mutual information or the conditional independence of X and Z given Y. Let

    Λ = [ σx²         σx σz ρxz
          σx σz ρxz   σz²       ]

be the covariance matrix of X and Z. We can now use Eq. (9.93) and Eq. (9.94) to compute

    I(X; Z) = h(X) + h(Z) − h(X, Z)
            = (1/2) log(2πe σx²) + (1/2) log(2πe σz²) − (1/2) log((2πe)² |Λ|)
            = −(1/2) log(1 − ρxz²).

Now,

    ρxz = E{XZ}/(σx σz)
        = E{E{XZ|Y}}/(σx σz)
        = E{E{X|Y} E{Z|Y}}/(σx σz)
        = E{ (σx ρxy Y/σy)(σz ρzy Y/σy) }/(σx σz)
        = ρxy ρzy.

We can thus conclude that

    I(X; Z) = −(1/2) log(1 − ρxy² ρzy²).
17. Impulse power.
Consider the additive white Gaussian noise channel Yi = Xi + Zi, where Zi ∼ N(0, N) and the input signal has average power constraint P.
(a) Suppose we use all our power at time 1, i.e., EX1² = nP and EXi² = 0 for i = 2, 3, . . . , n. Find

    max_{f(xⁿ)} I(Xⁿ; Yⁿ)/n,

where the maximization is over all distributions f(xⁿ) subject to the constraint EX1² = nP and EXi² = 0 for i = 2, 3, . . . , n.
(b) Find

    max_{f(xⁿ): E[(1/n) Σ_{i=1}^n Xi²] ≤ P} (1/n) I(Xⁿ; Yⁿ)

and compare to part (a).
Solution: Impulse power.
(a)

    max (1/n) I(Xⁿ; Yⁿ) = (1/n) I(X1; Y1)   (a)
                        = (1/n) (1/2) log(1 + nP/N),   (b)

where (a) comes from the constraint that all our power, nP, be used at time 1 and (b) comes from the fact that given Gaussian noise and a power constraint nP, I(X; Y) ≤ (1/2) log(1 + nP/N).
(b)

    max (1/n) I(Xⁿ; Yⁿ) = (1/n) n max I(X; Y)   (a)
                        = max I(X; Y)
                        = (1/2) log(1 + P/N),

where (a) comes from the fact that the channel is memoryless. Notice that the quantity in part (a) goes to zero as n → ∞ while the quantity in part (b) stays constant. Hence the impulse scheme is suboptimal.
18. Gaussian channel with time-varying mean. Find the capacity of the Gaussian channel Yi = Xi + Zi, where Zi ∼ N(µi, N). Let Z1, Z2, . . . be independent and let there be a power constraint P on xⁿ(W). Find the capacity when
(a) µi = 0 for all i.
(b) µi = e^i, i = 1, 2, . . . . Assume µi known to the transmitter and receiver.
(c) µi unknown, but µi i.i.d. ∼ N(0, N1) for all i.
Solution: Gaussian noise with time-varying mean
(a) This is the classical Gaussian channel capacity problem with

    C = (1/2) log(1 + P/N).

(b) Since the transmitter and the receiver both know the means, the receiver can simply subtract the mean while decoding. Thus we are back in case (a), and the capacity is

    C = (1/2) log(1 + P/N).

(c) Let pi be the density of Zi. Clearly pi is independent of the time index i. Also

    p(z) = ∫ (1/√(2πN1)) e^{−µ²/(2N1)} (1/√(2πN)) e^{−(z−µ)²/(2N)} dµ = N(0, N) ∗ N(0, N1) = N(0, N + N1),

where ∗ represents convolution. From the distribution of Zi it is clear that the optimal input distribution is Xi ∼ N(0, P), and the capacity is

    C = (1/2) log( 1 + P/(N + N1) ).
19. A parametric form for channel capacity
Consider m parallel Gaussian channels, Yi = Xi + Zi, where Zi ∼ N(0, λi) and the noises Zi are independent random variables. Thus C = Σ_{i=1}^m (1/2) log(1 + (λ − λi)⁺/λi), where λ is chosen to satisfy Σ_{i=1}^m (λ − λi)⁺ = P. Show that this can be rewritten in the form

    P(λ) = Σ_{i: λi ≤ λ} (λ − λi)
    C(λ) = Σ_{i: λi ≤ λ} (1/2) log(λ/λi).

Here P(λ) is piecewise linear and C(λ) is piecewise logarithmic in λ.
Solution: Parametric form of channel capacity
The optimal strategy for parallel Gaussian channels is given by water-filling. Here λ represents the maximum received power in any channel which is being used; i.e., any channel i for which λi < λ will act as a single Gaussian channel with noise Ni = λi and will carry a signal with power Pi = λ − Ni. The (·)⁺ notation ensures that channels with λi > λ will not be used. Thus the total transmitted power, as a function of λ, is given by

    P(λ) = Σ_{i: λi < λ} Pi = Σ_{i: λi < λ} (λ − λi) = Σi (λ − λi)⁺.   (9.99)

Now, if we consider the capacity of channel i,

    Ci = (1/2) log(1 + Pi/Ni)   (9.100)
       = (1/2) log(1 + (λ − λi)/λi)   (9.101)
       = (1/2) log(λ/λi),   (9.102)

and we obtain

    C(λ) = Σ_{i: λi < λ} Ci = Σ_{i: λi < λ} (1/2) log(λ/λi).   (9.103)
20. Robust decoding. Consider an additive noise channel whose output Y is given by

    Y = X + Z,

where the channel input X is average power limited,

    EX² ≤ P,

and the noise process {Zk}_{k=−∞}^∞ is iid with marginal distribution pZ(z) (not necessarily Gaussian) of power N,

    EZ² = N.

(a) Show that the channel capacity, C = max_{EX²≤P} I(X; Y), is lower bounded by CG, where

    CG = (1/2) log(1 + P/N),

i.e., the capacity CG corresponding to white Gaussian noise.
(b) Decoding the received vector to the codeword that is closest to it in Euclidean distance is in general sub-optimal if the noise is non-Gaussian. Show, however, that the rate CG is achievable even if one insists on performing nearest neighbor decoding (minimum Euclidean distance decoding) rather than the optimal maximum-likelihood or joint typicality decoding (with respect to the true noise distribution).
(c) Extend the result to the case where the noise is not iid but is stationary and ergodic with power N.
Hint for (b) and (c): Consider a size 2^{nR} random codebook whose codewords are drawn independently of each other according to a uniform distribution over the n-dimensional sphere of radius √(nP).
• Using a symmetry argument show that, conditioned on the noise vector, the ensemble average probability of error depends on the noise vector only via its Euclidean norm ||z||.
• Use a geometric argument to show that this dependence is monotonic.
• Given a rate R < CG choose some N′ > N such that R < (1/2) log(1 + P/N′).
• Compare the case where the noise is iid N(0, N′) to the case at hand.
• Conclude the proof using the fact that the above ensemble of codebooks can achieve the capacity of the Gaussian channel (no need to prove that).
Solution: Robust decoding
(a) The fact that the worst noise is Gaussian is a consequence of the entropy power inequality, and is proved in Problem 9.21. Since CG is the capacity for Gaussian noise, it is a lower bound on the capacity of the channel for all noise distributions of power N.
(b) As suggested in the hint, we draw codewords at random according to a uniform distribution on the sphere of radius √(nP). We send a codeword over the channel and, given the received sequence, find the codeword that is closest (in Euclidean distance) to the received sequence.
First, by the symmetry of the code construction, the probability of error does not depend on which message was sent, so without loss of generality we can assume that message 1 (i.e., codeword X(1)) was sent.
The probability of error then depends only on whether the noise sequence Zⁿ is such that the received vector is closer to some other codeword. However, given any transmitted codeword, all the other codewords are randomly distributed in all directions, and therefore the probability of error does not depend on the direction of the noise, only on the norms ||X(1) + Z|| and ||Z||. By the spherical symmetry of the choice of X(1), the probability of error depends only on ||Z||.
To show monotonicity of the error rate in the norm of the noise, consider an error where the received sequence X(1) + Z is closer to some other codeword, say X(2). If we increase the norm of the noise a little, then by the triangle inequality

    ||X(1) + Z(1 + ∆) − X(2)|| ≤ ||X(1) + Z − X(2)|| + ∆||Z||,   (9.104)

so if the output was already close to X(2), increasing the norm of the noise does not reduce the probability that it is closer to X(2). Thus the ensemble error probability is monotonically nondecreasing in the norm of the noise.
Finally we consider using this code on a Gaussian channel with noise power N′ > N such that R < (1/2) log(1 + P/N′). Since this is a Gaussian channel, the standard results show that we can achieve arbitrarily low probability of error with this code. Now comparing the two channels, with probability close to 1 the norm of the noise in the non-Gaussian channel (power N) is less than the norm of the noise in the Gaussian channel (power N′). By the monotonicity of the probability of error with respect to the norm of the noise, the probability of error for the non-Gaussian channel is therefore essentially no larger than the probability of error for the Gaussian channel, and hence goes to 0 as the block length goes to ∞.
21. A mutual information game. Consider the channel Y = X + Z. Throughout this problem we shall constrain the signal power

    EX = 0,   EX² = P,   (9.105)

and the noise power

    EZ = 0,   EZ² = N,   (9.106)

and assume that X and Z are independent. The channel capacity is given by I(X; X + Z).
Now for the game. The noise player chooses a distribution on Z to minimize I(X; X + Z), while the signal player chooses a distribution on X to maximize I(X; X + Z). Letting X* ∼ N(0, P), Z* ∼ N(0, N), show that Gaussian X* and Z* satisfy the saddlepoint conditions

    I(X; X + Z*) ≤ I(X*; X* + Z*) ≤ I(X*; X* + Z).   (9.107)

Thus

    min_Z max_X I(X; X + Z) = max_X min_Z I(X; X + Z)   (9.108)
                            = (1/2) log(1 + P/N),   (9.109)

and the game has a value. In particular, a deviation from normal for either player worsens the mutual information from that player's standpoint. Can you discuss the implications of this?
Note: Part of the proof hinges on the entropy power inequality from Chapter 17, which states that if X and Y are independent random n-vectors with densities, then

    2^{(2/n)h(X+Y)} ≥ 2^{(2/n)h(X)} + 2^{(2/n)h(Y)}.   (9.110)
Solution: A mutual information game.
Let X and Z be random variables with EX = 0, EX² = P, EZ = 0 and EZ² = N. Let X* ∼ N(0, P) and Z* ∼ N(0, N). Then, as proved in class,

    I(X; X + Z*) = h(X + Z*) − h(X + Z*|X)   (9.111)
                 = h(X + Z*) − h(Z*)   (9.112)
                 ≤ h(X* + Z*) − h(Z*)   (9.113)
                 = I(X*; X* + Z*),   (9.114)

where the inequality follows from the fact that given the variance, the entropy is maximized by the normal.
To prove the other inequality, we use the entropy power inequality,

    2^{2h(X+Z)} ≥ 2^{2h(X)} + 2^{2h(Z)}.   (9.115)

Let

    g(Z) = 2^{2h(Z)} / (2πe).   (9.116)

Then

    I(X*; X* + Z) = h(X* + Z) − h(X* + Z|X*)   (9.117)
                  = h(X* + Z) − h(Z)   (9.118)
                  ≥ (1/2) log( 2^{2h(X*)} + 2^{2h(Z)} ) − h(Z)   (9.119)
                  = (1/2) log( (2πe)P + (2πe)g(Z) ) − (1/2) log( (2πe)g(Z) )   (9.120)
                  = (1/2) log( 1 + P/g(Z) ),   (9.121)

where the inequality follows from the entropy power inequality. Now 1 + P/g(Z) is a decreasing function of g(Z), so it is minimized when g(Z) is maximal, which occurs when h(Z) is maximized, i.e., when Z is normal. In this case, g(Z*) = N and we have the following inequality:

    I(X*; X* + Z) ≥ I(X*; X* + Z*).   (9.122)

Combining the two inequalities, we have

    I(X; X + Z*) ≤ I(X*; X* + Z*) ≤ I(X*; X* + Z).   (9.123)

Hence, using these inequalities, it follows directly that

    min_Z max_X I(X; X + Z) ≤ max_X I(X; X + Z*)   (9.124)
                            = I(X*; X* + Z*)   (9.125)
                            = min_Z I(X*; X* + Z)   (9.126)
                            ≤ max_X min_Z I(X; X + Z).   (9.127)

We have shown an inequality relationship in one direction between min_Z max_X I(X; X + Z) and max_X min_Z I(X; X + Z). We will now prove the inequality in the other direction, which is a general result for all functions of two variables.
For any function f(a, b) of two variables, for all b and any a0,

    f(a0, b) ≥ min_a f(a, b).   (9.128)

Hence

    max_b f(a0, b) ≥ max_b min_a f(a, b).   (9.129)

Taking the minimum over a0, we have

    min_{a0} max_b f(a0, b) ≥ min_{a0} max_b min_a f(a, b),   (9.130)

or

    min_a max_b f(a, b) ≥ max_b min_a f(a, b).   (9.131)

From this result,

    min_Z max_X I(X; X + Z) ≥ max_X min_Z I(X; X + Z).   (9.132)

From (9.127) and (9.132), we have

    min_Z max_X I(X; X + Z) = max_X min_Z I(X; X + Z)   (9.133)
                            = (1/2) log(1 + P/N).   (9.134)

This equality implies that we have a saddlepoint in the game, which is the value of the game. If the signal player chooses X*, the noise player cannot do any better than choosing Z*. Similarly, any deviation by the signal player from X* will make him do worse, if the noise player has chosen Z*. Any deviation by either player makes that player do worse.
Another implication of this result is that not only is the normal the best possible signal distribution, it is also the worst possible noise distribution.
22. Recovering the noise
Consider a standard Gaussian channel Yⁿ = Xⁿ + Zⁿ, where Zi is i.i.d. ∼ N(0, N), i = 1, 2, . . . , n, and (1/n) Σ_{i=1}^n Xi² ≤ P.
Here we are interested in recovering the noise Zⁿ and we don't care about the signal Xⁿ. By sending Xⁿ = (0, 0, . . . , 0), the receiver gets Yⁿ = Zⁿ and can fully determine the value of Zⁿ. We wonder how much variability there can be in Xⁿ and still recover the Gaussian noise Zⁿ, the receiver forming an estimate Ẑⁿ(Yⁿ) from Yⁿ alone.
Argue that, for some R > 0, the transmitter can arbitrarily send one of 2^{nR} different sequences of xⁿ without affecting the recovery of the noise in the sense that

    Pr{Ẑⁿ ≠ Zⁿ} → 0   as n → ∞.

For what R is this possible?
Solution: Recovering the noise
We prove that sup R = C = C(P/N).
If R < C, from the achievability proof of the channel coding theorem, 2^{nR} different Xⁿ sequences can be decoded correctly with arbitrarily small error for n large enough. Once Xⁿ is determined, Zⁿ can be easily computed as Yⁿ − Xⁿ.
We show that this is optimal by contradiction. Assume that there is some R > C such that Zⁿ can be recovered with Pr{Ẑⁿ ≠ Zⁿ} → 0 as n → ∞. But this implies that Xⁿ = Yⁿ − Zⁿ can be determined with arbitrary precision; that is, there is a codebook Xⁿ(W), W = 1, . . . , 2^{nR}, with R > C and Pr{X̂ⁿ ≠ Xⁿ} = Pr{Ŵ ≠ W} → 0 as n → ∞. As we saw in the converse proof of the channel coding theorem, this is impossible. Hence we have a contradiction, and R cannot be greater than C.
Chapter 10
Rate Distortion Theory
1. One bit quantization of a single Gaussian random variable. Let X ∼ N(0, σ²) and let the distortion measure be squared error. Here we do not allow block descriptions. Show that the optimum reproduction points for 1 bit quantization are ±√(2/π) σ, and that the expected distortion for 1 bit quantization is ((π − 2)/π) σ².
Compare this with the distortion rate bound D = σ² 2^{−2R} for R = 1.
Solution: One bit quantization of a Gaussian random variable. Let X ∼ N(0, σ²) and let the distortion measure be squared error. With one bit quantization, the obvious reconstruction regions are the positive and negative real axes. The reconstruction point is the centroid of each region. For example, for the positive real line, the centroid a is

    a = ∫₀^∞ x (2/√(2πσ²)) e^{−x²/(2σ²)} dx   (10.1)
      = σ √(2/π) ∫₀^∞ e^{−y} dy   (10.2)
      = σ √(2/π),   (10.3)

using the substitution y = x²/(2σ²). The expected distortion for one bit quantization is

    D = ∫_{−∞}^0 (x + σ√(2/π))² (1/√(2πσ²)) e^{−x²/(2σ²)} dx + ∫₀^∞ (x − σ√(2/π))² (1/√(2πσ²)) e^{−x²/(2σ²)} dx   (10.4)
      = 2 ∫₀^∞ ( x² + (2/π)σ² ) (1/√(2πσ²)) e^{−x²/(2σ²)} dx − 2 ∫₀^∞ 2xσ√(2/π) (1/√(2πσ²)) e^{−x²/(2σ²)} dx   (10.5)
      = σ² + (2/π)σ² − 4σ√(2/π) · σ/√(2π)   (10.6)
      = σ² + (2/π)σ² − (4/π)σ²   (10.7)
      = ((π − 2)/π) σ².   (10.8)

For comparison, the distortion rate bound at R = 1 gives D = σ² 2^{−2} = 0.25σ², which is smaller than ((π − 2)/π)σ² ≈ 0.36σ², as it must be, since the bound is achievable only with long block descriptions.
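A Monte Carlo sketch (not part of the original solution) of the one-bit quantizer: it checks the reproduction points ±σ√(2/π) and the expected distortion ((π − 2)/π)σ², and prints the distortion-rate bound for comparison.

    import numpy as np

    rng = np.random.default_rng(5)
    sigma, n = 1.0, 1_000_000
    x = sigma * rng.standard_normal(n)

    a = sigma * np.sqrt(2 / np.pi)                  # reproduction point for x >= 0
    xhat = np.where(x >= 0, a, -a)
    print(x[x > 0].mean(), a)                       # centroid of the positive half
    print(np.mean((x - xhat) ** 2), (np.pi - 2) / np.pi * sigma**2)   # ~0.363
    print(sigma**2 / 4)                             # distortion-rate bound at R = 1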
2. Rate distortion function with infinite distortion. Find the rate distortion function R(D) = min I(X; X̂) for X ∼ Bernoulli(1/2) and distortion

    d(x, x̂) = 0,   x = x̂,
             = 1,   x = 1, x̂ = 0,
             = ∞,   x = 0, x̂ = 1.

Solution: Rate Distortion. We wish to evaluate the rate distortion function

    R(D) = min_{p(x̂|x): Σ_{(x,x̂)} p(x)p(x̂|x)d(x,x̂) ≤ D} I(X; X̂).   (10.10)

Since d(0, 1) = ∞, we must have p(0, 1) = 0 for a finite distortion. Thus the distortion is D = p(1, 0), and hence we have the following joint distribution for (X, X̂) (assuming D ≤ 1/2):

    p(x, x̂) = [ 1/2   0
                D     1/2 − D ].   (10.11)

The mutual information for this joint distribution is

    R(D) = I(X; X̂) = H(X) − H(X|X̂)   (10.12)
         = H(1/2, 1/2) − (1/2 + D) H( (1/2)/(1/2 + D), D/(1/2 + D) )   (10.13)
         = 1 + (1/2) log( (1/2)/(1/2 + D) ) + D log( D/(1/2 + D) ),   (10.14)

which is the rate distortion function for this binary source if 0 ≤ D ≤ 1/2. Since we can achieve D = 1/2 with zero rate (use p(x̂ = 0) = 1), we have R(D) = 0 for D ≥ 1/2.
3. Rate distortion for binary source with asymmetric distortion. Fix p(x̂|x) and evaluate I(X; X̂) and D for

    X ∼ Bern(1/2),   d(x, x̂) = [ 0  a
                                 b  0 ].

(The rate distortion function cannot be expressed in closed form.)
Solution: Binary source with asymmetric distortion. X ∼ Bern(1/2), and the distortion measure is

    d(x, x̂) = [ 0  a
                b  0 ].   (10.15)

Proceeding with the minimization to calculate

    R(D) = min_{p(x̂|x): Σ p(x)p(x̂|x)d(x,x̂) ≤ D} I(X; X̂),   (10.16)

we must choose the conditional distribution p(x̂|x). Setting p(0|0) = α and p(1|1) = β, we get the joint distribution

    p(x, x̂) = [ α/2         (1 − α)/2
                (1 − β)/2   β/2       ].   (10.17)

Hence the distortion constraint can be written as

    ((1 − α)/2) a + ((1 − β)/2) b ≤ D.   (10.18)

The function to be minimized, I(X; X̂), can be written

    I(X; X̂) = H(X̂) − H(X̂|X) = H((α + 1 − β)/2) − (1/2) H(α) − (1/2) H(β).   (10.19)

Using the method of Lagrange multipliers, we have

    J(α, β, λ) = H((α + 1 − β)/2) − (1/2) H(α) − (1/2) H(β) + λ( ((1 − α)/2) a + ((1 − β)/2) b ),   (10.20)

and differentiating to find the extremum, we obtain the equations

    (1/2) log[ ((1 − α + β)/2) / ((α + 1 − β)/2) ] − (1/2) log((1 − α)/α) − λa/2 = 0,   (10.21)
    −(1/2) log[ ((1 − α + β)/2) / ((α + 1 − β)/2) ] − (1/2) log((1 − β)/β) − λb/2 = 0,   (10.22)

together with the distortion constraint

    ((1 − α)/2) a + ((1 − β)/2) b = D.   (10.23)

In principle, these equations can be solved for α, β, and λ and substituted back in the definition to find the rate distortion function. This problem unfortunately does not have an explicit solution.
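Since there is no closed form, a numerical sketch (not part of the original solution; the distortion values a = 1, b = 2 are arbitrary test choices) traces R(D) by brute force over the test-channel parameters (α, β) of (10.17)-(10.19):

    import numpy as np

    a, b = 1.0, 2.0
    grid = np.linspace(1e-4, 1 - 1e-4, 400)
    alpha, beta = np.meshgrid(grid, grid, indexing="ij")

    H = lambda p: -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    D = 0.5 * (1 - alpha) * a + 0.5 * (1 - beta) * b                  # distortion (10.18)
    I = H((alpha + 1 - beta) / 2) - 0.5 * H(alpha) - 0.5 * H(beta)    # mutual information (10.19)

    for Dmax in [0.1, 0.3, 0.5]:
        feasible = D <= Dmax
        print(Dmax, I[feasible].min())    # R(Dmax): smallest I meeting the constraint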
4. Properties of R(D). Consider a discrete source X ∈ X = {1, 2, . . . , m} with distribution p1, p2, . . . , pm and a distortion measure d(i, j). Let R(D) be the rate distortion function for this source and distortion measure. Let d′(i, j) = d(i, j) − wi be a new distortion measure and let R′(D) be the corresponding rate distortion function. Show that R′(D) = R(D + w̄), where w̄ = Σ pi wi, and use this to show that there is no essential loss of generality in assuming that min_x̂ d(i, x̂) = 0, i.e., for each x ∈ X, there is one symbol x̂ which reproduces the source with zero distortion.
This result is due to Pinkston [10].
Solution: Properties of the rate distortion function. By definition,

    R′(D′) = min_{p(x̂|x): Σ p(x̂|x)p(x)d′(x,x̂) ≤ D′} I(X; X̂).   (10.24)

For any conditional distribution p(x̂|x), we have

    D′ = Σ_{x,x̂} p(x) p(x̂|x) d′(x, x̂)   (10.25)
       = Σ_{x,x̂} p(x) p(x̂|x) (d(x, x̂) − wx)   (10.26)
       = Σ_{x,x̂} p(x) p(x̂|x) d(x, x̂) − Σ_x p(x) wx Σ_x̂ p(x̂|x)   (10.27)
       = D − Σ_x p(x) wx   (10.28)
       = D − w̄,   (10.29)

or D = D′ + w̄. Hence

    R′(D′) = min_{p(x̂|x): Σ p(x̂|x)p(x)d′(x,x̂) ≤ D′} I(X; X̂)   (10.30)
           = min_{p(x̂|x): Σ p(x̂|x)p(x)d(x,x̂) ≤ D′ + w̄} I(X; X̂)   (10.31)
           = R(D′ + w̄).   (10.32)

For any distortion matrix, we can set wi = min_x̂ d(i, x̂), hence ensuring that min_x̂ d′(x, x̂) = 0 for every x. This produces only a shift in the rate distortion function and does not change the essential theory. Hence there is no essential loss of generality in assuming that for each x ∈ X, there is one symbol x̂ which reproduces it with zero distortion.
5. Rate distortion for uniform source with Hamming distortion. Consider a source X uniformly distributed on the set {1, 2, . . . , m}. Find the rate distortion function for this source with Hamming distortion, i.e.,

    d(x, x̂) = 0 if x = x̂,
             = 1 if x ≠ x̂.

Solution: Rate distortion for uniform source with Hamming distortion. X is uniformly distributed on the set {1, 2, . . . , m}, and d(x, x̂) = 0 if x = x̂ and 1 otherwise.
Consider any joint distribution that satisfies the distortion constraint D. Since D = Pr(X ≠ X̂), we have by Fano's inequality

    H(X|X̂) ≤ H(D) + D log(m − 1),   (10.33)

and hence

    I(X; X̂) = H(X) − H(X|X̂)   (10.34)
            ≥ log m − H(D) − D log(m − 1).   (10.35)

We can achieve this lower bound by choosing p(x̂) to be the uniform distribution and the conditional distribution p(x|x̂) to be

    p(x|x̂) = 1 − D,        if x = x̂
            = D/(m − 1),    if x ≠ x̂.   (10.36)

It is easy to verify that this gives the right distribution on X and satisfies the bound with equality for D < 1 − 1/m. Hence

    R(D) = log m − H(D) − D log(m − 1),   if 0 ≤ D ≤ 1 − 1/m
         = 0,                              if D > 1 − 1/m.   (10.37)
6. Shannon lower bound for the rate distortion function. Consider a source X with a distortion measure d(x, x̂) that satisfies the following property: all columns of the distortion matrix are permutations of the set {d1, d2, . . . , dm}. Define the function

    φ(D) = max_{p: Σ_{i=1}^m pi di ≤ D} H(p).   (10.38)

The Shannon lower bound on the rate distortion function [14] is proved by the following steps:
(a) Show that φ(D) is a concave function of D.
(b) Justify the following series of inequalities for I(X; X̂) if Ed(X, X̂) ≤ D:

    I(X; X̂) = H(X) − H(X|X̂)   (10.39)
             = H(X) − Σ_x̂ p(x̂) H(X|X̂ = x̂)   (10.40)
             ≥ H(X) − Σ_x̂ p(x̂) φ(Dx̂)   (10.41)
             ≥ H(X) − φ( Σ_x̂ p(x̂) Dx̂ )   (10.42)
             ≥ H(X) − φ(D),   (10.43)

where Dx̂ = Σ_x p(x|x̂) d(x, x̂).
(c) Argue that

    R(D) ≥ H(X) − φ(D),   (10.44)

which is the Shannon lower bound on the rate distortion function.
(d) If in addition we assume that the source has a uniform distribution and that the rows of the distortion matrix are permutations of each other, then R(D) = H(X) − φ(D), i.e., the lower bound is tight.
Solution: Shannon lower bound on the rate distortion function.
(a) We define

    φ(D) = max_{p: Σ_{i=1}^m pi di ≤ D} H(p).   (10.45)

From the definition, if D1 ≥ D2, then φ(D1) ≥ φ(D2), since the maximization is over a larger set. Hence φ(D) is a monotonically increasing function.
To prove concavity of φ(D), consider two levels of distortion D1 and D2 and let p⁽¹⁾ and p⁽²⁾ achieve the maxima in the definitions of φ(D1) and φ(D2). Let p⁽λ⁾ be the mixture of the two distributions, i.e.,

    p⁽λ⁾ = λ p⁽¹⁾ + (1 − λ) p⁽²⁾.   (10.46)

Then the distortion is a mixture of the two distortions,

    Dλ = Σ_i p⁽λ⁾_i d_i = λD1 + (1 − λ)D2.   (10.47)

Since entropy is a concave function, we have

    H(p⁽λ⁾) ≥ λH(p⁽¹⁾) + (1 − λ)H(p⁽²⁾).   (10.48)

Hence

    φ(Dλ) = max_{p: Σ pi di ≤ Dλ} H(p)   (10.49)
          ≥ H(p⁽λ⁾)   (10.50)
          ≥ λH(p⁽¹⁾) + (1 − λ)H(p⁽²⁾)   (10.51)
          = λφ(D1) + (1 − λ)φ(D2),   (10.52)

proving that φ(D) is a concave function of D.
(b) For any (X, X̂) that satisfies the distortion constraint, we have

    I(X; X̂) = H(X) − H(X|X̂)   (10.53)   (a)
             = H(X) − Σ_x̂ p(x̂) H(X|X̂ = x̂)   (10.54)   (b)
             ≥ H(X) − Σ_x̂ p(x̂) φ(Dx̂)   (10.55)   (c)
             ≥ H(X) − φ( Σ_x̂ p(x̂) Dx̂ )   (10.56)   (d)
             ≥ H(X) − φ(D),   (10.57)   (e)

where
(a) follows from the definition of mutual information,
(b) from the definition of conditional entropy,
(c) follows from the definition of φ: since each column of the distortion matrix is a permutation of {d1, . . . , dm}, Dx̂ = Σ_x p(x|x̂) d(x, x̂) is of the form Σ_i qi di for the distribution q = p(·|x̂), so H(X|X̂ = x̂) = H(p(·|x̂)) ≤ φ(Dx̂),
(d) follows from Jensen's inequality and the concavity of φ, and
(e) follows from the monotonicity of φ and the fact that Σ_x̂ p(x̂) Dx̂ = Σ p(x, x̂) d(x, x̂) ≤ D.
Hence, from the definition of the rate distortion function, we have

    R(D) = min_{p(x̂|x): Σ p(x,x̂)d(x,x̂) ≤ D} I(X; X̂)   (10.58)
         ≥ H(X) − φ(D),   (10.59)

which is the Shannon lower bound on the rate distortion function.
(c) Let p* = (p*_1, p*_2, . . . , p*_m) be the distribution that achieves the maximum in the definition of φ(D) . Assume that the source has a uniform distribution and that the rows of the distortion matrix are permutations of each other. Let the distortion matrix be [a_ij] . We can then choose p(x̂) to be the uniform distribution and choose p(x = i | x̂ = j) = p*_k if a_ij = d_k . For this joint distribution,

        p_X(i) = Σ_j p_X̂(j) p_{X|X̂}(i|j)                                      (10.60)
               = Σ_j (1/m) p*_k                                                 (10.61)
               = 1/m,                                                           (10.62)

    since the rows of the distortion matrix are permutations of each other and therefore each element p*_k , k = 1, 2, . . . , m , occurs exactly once in the above sum. Hence the distribution of X has the desired source distribution. For this joint distribution, we have

        Σ_{i,j} p_{X,X̂}(i, j) a_ij = Σ_j (1/m) Σ_i p_{X|X̂}(i|j) a_ij          (10.63)
                                    = Σ_j (1/m) Σ_k p*_k d_k                    (10.64)
                                    = Σ_j (1/m) D                               (10.65)
                                    = D,                                        (10.66)

    the desired distortion. The mutual information is

        I(X; X̂) = H(X) − H(X|X̂)                                              (10.67)
                 = H(X) − Σ_j (1/m) H(X|X̂ = j)                                (10.68)
                 = H(X) − Σ_j (1/m) H(p*)                                      (10.69)
                 = H(X) − Σ_j (1/m) φ(D)                                       (10.70)
                 = H(X) − φ(D).                                                (10.71)

    Hence, using this joint distribution in the definition of the rate distortion function,

        R(D) = min_{p(x̂|x) : Σ p(x,x̂) d(x,x̂) ≤ D} I(X; X̂)                   (10.72)
             ≤ I(X; X̂)                                                         (10.73)
             = H(X) − φ(D).                                                     (10.74)

    Combining this with the Shannon lower bound on the rate distortion function, we must have equality in the above equation, and hence we have equality in the Shannon lower bound.
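The maximizing distribution in the definition of φ(D) is an exponential tilt of the uniform distribution along the distortion values, so φ(D) can be traced numerically by sweeping the tilt parameter. The sketch below (Python/numpy; an illustration under that assumption, not part of the original solution) does this for a Hamming distortion column {0, 1, . . . , 1} and checks that H(X) − φ(D) reproduces the formula of Problem 5 for a uniform source, as part (d) asserts.

    import numpy as np

    def phi_curve(d, lambdas):
        """Trace (D, phi(D)) via the max-entropy tilts p_i proportional to exp(-lam * d_i)."""
        pts = []
        for lam in lambdas:
            p = np.exp(-lam * np.asarray(d, dtype=float))
            p /= p.sum()
            D = float(p @ np.asarray(d, dtype=float))
            H = float(-np.sum(p * np.log2(p)))
            pts.append((D, H))
        return pts

    m = 5
    d = [0.0] + [1.0] * (m - 1)                 # one column of a Hamming distortion matrix
    H2 = lambda t: -t * np.log2(t) - (1 - t) * np.log2(1 - t)
    for D, phi in phi_curve(d, lambdas=[0.5, 1.0, 2.0, 4.0]):
        R = np.log2(m) - phi                    # H(X) - phi(D) for the uniform source
        R_formula = np.log2(m) - H2(D) - D * np.log2(m - 1)
        print(f"D = {D:.4f}   H(X)-phi(D) = {R:.4f}   Problem 5 formula = {R_formula:.4f}")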
7. Erasure distortion. Consider X ∼ Bernoulli(1/2), and let the distortion measure be given by the matrix

    d(x, x̂) = [ 0  1  ∞ ]
               [ ∞  1  0 ] .                                                   (10.75)
Calculate the rate distortion function for this source. Can you suggest a simple scheme
to achieve any value of the rate distortion function for this source?
Solution: Erasure distortion. Consider X ∼ Bernoulli(1/2) and the distortion measure

    d(x, x̂) = [ 0  1  ∞ ]
               [ ∞  1  0 ] .                                                   (10.76)
The infinite distortion constrains p(0, 1) = p(1, 0) = 0 . Hence by symmetry the joint
distribution of (X, X̂) is of the form shown in Figure 10.1.
For this joint distribution, it is easy to calculate the distortion D = α and that
I(X; X̂) = H(X) − H(X|X̂) = 1 − α . Hence we have R(D) = 1 − D for 0 ≤ D ≤ 1 .
For D > 1 , R(D) = 0 .
It is easy to see how we could achieve this rate distortion function. If D is rational, say
k/n , then we send only the first n − k of any block of n bits. We reproduce these bits
exactly and reproduce the remaining bits as erasures. Hence we can send information
at rate 1 − D and achieve a distortion D . If D is irrational, we can get arbitrarily
close to D by using longer and longer block lengths.
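The scheme in the last paragraph is easy to simulate: keep the first n − k bits of each block and erase the rest. A small sketch (Python; the variable names are ours and the setup is illustrative, not from the text) confirms that the rate is 1 − D and the average distortion under the erasure measure is D.

    import numpy as np

    rng = np.random.default_rng(0)
    n, D = 1000, 0.3                     # block length and target distortion (D = k/n)
    k = int(D * n)
    x = rng.integers(0, 2, size=n)       # Bernoulli(1/2) source block
    xhat = list(x[: n - k]) + ['e'] * k  # reproduce the first n-k bits exactly, erase the rest
    rate = (n - k) / n                   # bits spent per source symbol
    dist = sum(1 for a, b in zip(x, xhat) if b == 'e') / n   # d(x, e) = 1, d(x, x) = 0
    print(f"rate = {rate:.3f} (1 - D = {1 - D:.3f}),  distortion = {dist:.3f}")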
[Figure 10.1: Joint distribution for erasure rate distortion of a binary source — X = 0 is reproduced as x̂ = 0 with probability 1 − α and as the erasure symbol e with probability α , and symmetrically for X = 1 .]

8. Bounds on the rate distortion function for squared error distortion. For the case of a continuous random variable X with mean zero and variance σ² and squared error distortion, show that

    h(X) − (1/2) log(2πeD) ≤ R(D) ≤ (1/2) log(σ²/D).                           (10.77)

For the upper bound, consider the joint distribution shown in Figure 10.2. Are Gaussian random variables harder or easier to describe than other random variables with the same variance?
Solution: Bounds on the rate distortion function for squared error distortion.
We assume that X has zero mean and variance σ² . To prove the lower bound, we use the same techniques as for the Gaussian rate distortion function. Let (X, X̂) be random variables such that E(X − X̂)² ≤ D . Then

    I(X; X̂) = h(X) − h(X|X̂)                                                  (10.78)
             = h(X) − h(X − X̂|X̂)                                             (10.79)
             ≥ h(X) − h(X − X̂)                                                (10.80)
             ≥ h(X) − h(N(0, E(X − X̂)²))                                      (10.81)
             = h(X) − (1/2) log(2πe) E(X − X̂)²                                (10.82)
             ≥ h(X) − (1/2) log(2πe) D.                                        (10.83)
To prove the upper bound, we consider the joint distribution shown in Figure 10.3 and calculate the distortion and the mutual information between X and X̂ .

[Figures 10.2 and 10.3: Joint distribution for the upper bound on the rate distortion function — Z ∼ N(0, Dσ²/(σ² − D)) is independent of X , and X̂ = ((σ² − D)/σ²)(X + Z) .]

Since

    X̂ = ((σ² − D)/σ²)(X + Z),                                                 (10.84)

we have

    E(X − X̂)² = E[ (D/σ²) X − ((σ² − D)/σ²) Z ]²                              (10.85)
               = (D/σ²)² EX² + ((σ² − D)/σ²)² EZ²                              (10.86)
               = (D/σ²)² σ² + ((σ² − D)/σ²)² Dσ²/(σ² − D)                      (10.87)
               = D,                                                             (10.88)

since X and Z are independent and zero mean. Also, the mutual information is

    I(X; X̂) = h(X̂) − h(X̂|X)                                                 (10.89)
             = h(X̂) − h( ((σ² − D)/σ²) Z ).                                   (10.90)

Now

    E X̂² = ((σ² − D)/σ²)² E(X + Z)²                                            (10.91)
          = ((σ² − D)/σ²)² (EX² + EZ²)                                          (10.92)
          = ((σ² − D)/σ²)² ( σ² + Dσ²/(σ² − D) )                                (10.93)
          = σ² − D.                                                              (10.94)

Hence, we have

    I(X; X̂) = h(X̂) − h( ((σ² − D)/σ²) Z )                                     (10.95)
             = h(X̂) − h(Z) − log((σ² − D)/σ²)                                  (10.96)
             ≤ h(N(0, σ² − D)) − (1/2) log(2πe) Dσ²/(σ² − D) − log((σ² − D)/σ²) (10.97)
             = (1/2) log(2πe)(σ² − D) − (1/2) log(2πe) Dσ²/(σ² − D) − (1/2) log((σ² − D)²/σ⁴)  (10.98)
             = (1/2) log(σ²/D),                                                  (10.99)

which, combined with the definition of the rate distortion function, gives us the required upper bound.
For a Gaussian random variable, h(X) = (1/2) log(2πe)σ² and the lower bound equals the upper bound. For any other random variable, the lower bound is strictly less than the upper bound, so non-Gaussian random variables never require more bits than the corresponding Gaussian random variable of the same variance to describe to the same accuracy. This is not surprising, since the Gaussian random variable has the maximum entropy for a given variance, and we would expect it to be the most difficult to describe.
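To see the gap in (10.77) for a non-Gaussian source, take X uniform on [−a, a], for which h(X) = log 2a and σ² = a²/3. The sketch below (Python/numpy, illustrative only) evaluates both bounds over a range of D and shows the lower bound sitting strictly below the Gaussian upper bound, as argued above.

    import numpy as np

    a = 1.0
    var = a**2 / 3                                  # variance of Uniform[-a, a]
    h_X = np.log2(2 * a)                            # differential entropy in bits
    for D in [0.05, 0.1, 0.2, 0.3]:
        lower = h_X - 0.5 * np.log2(2 * np.pi * np.e * D)
        upper = 0.5 * np.log2(var / D)
        print(f"D = {D:.2f}   lower = {max(lower, 0):.3f}   upper = {max(upper, 0):.3f}")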
9. Properties of optimal rate distortion code. A good (R, D) rate distortion code
with R ≈ R(D) puts severe constraints on the relationship of the source X n and the
representations X̂ n . Examine the chain of inequalities (10.100–10.112) considering the
conditions for equality and interpret as properties of a good code. For example, equality
in (10.101) implies that X̂ n is a deterministic function of X n .
Solution: Properties of optimal rate distortion code. The converse of the rate distortion
theorem relies on the following chain of inequalities
    nR (a)≥ H(X̂^n)                                                            (10.100)
       (b)≥ H(X̂^n) − H(X̂^n|X^n)                                              (10.101)
       (c)= I(X̂^n; X^n)                                                        (10.102)
       (d)= H(X^n) − H(X^n|X̂^n)                                                (10.103)
       (e)= Σ_{i=1}^n H(X_i) − H(X^n|X̂^n)                                      (10.104)
          = Σ_i H(X_i) − Σ_i H(X_i|X̂^n, X_{i−1}, . . . , X_1)                  (10.105)
       (f)≥ Σ_i H(X_i) − Σ_i H(X_i|X̂_i)                                        (10.106)
          = Σ_i I(X_i; X̂_i)                                                    (10.107)
       (g)≥ Σ_i R(Ed(X_i, X̂_i))                                                (10.108)
          = n · (1/n) Σ_i R(Ed(X_i, X̂_i))                                      (10.109)
       (h)≥ n R( (1/n) Σ_i Ed(X_i, X̂_i) )                                      (10.110)
       (i)= n R(Ed(X^n, X̂^n))                                                  (10.111)
          = n R(D).                                                              (10.112)
We will have equality in
(a) if X̂^n is uniformly distributed over the set of codewords, i.e., if all the codewords are equally likely,
(b) if X̂^n is a deterministic function of X^n ,
(f) if each X_i depends only on the corresponding X̂_i and is conditionally independent of every other X̂_j ,
(g) if the joint distribution of X_i and X̂_i is the one achieving the minimum in the definition of the rate distortion function, and
(h) if either the rate distortion curve is a straight line or all the distortions (at each i ) are equal.
Thus, in an optimal rate distortion code the encoding is deterministic, the joint distribution between the source symbol and its reproduction at each instant of time is the same for all times and equal to the joint distribution that achieves the minimum of the rate distortion function, and the distortion is the same at each time instant.
10. Rate distortion. Find and verify the rate distortion function R(D) for X uniform
on X = {1, 2, . . . , 2m} and
    d(x, x̂) = 1   for x − x̂ odd,
             = 0   for x − x̂ even,
where X̂ is defined on X̂ = {1, 2, . . . , 2m} .
(You may wish to use the Shannon lower bound in your argument.)
Solution: Rate distortion
Since the columns of the distortion matrix alternate between 0 and 1, they are all permutations of each other, and we can apply the Shannon lower bound on the rate distortion function. The Shannon lower bound says that

    R(D) ≥ H(X) − φ(D),                                                        (10.113)
where

    φ(D) = max_{p : Σ_i p_i d_i ≤ D} H(p).                                     (10.114)
In Problem 6, it was shown that if the input probability distribution is uniform, the
bound is tight, and the Shannon lower bound is equal to the rate distortion function.
Therefore to calculate the R(D) , we only need to compute φ(D) for the distortion
measure of the problem. Each row of the distortion matrix is a permutation (actually a
cyclic shift) of the first row [0 1 0 1 . . . 0 1] . Let Y be a random variable with distribution p_1, . . . , p_{2m} , and let Z = d(1, Y) ; thus Z is 0 on the odd values of Y and 1 on the even values. Then
    φ(D) = max_{p : Σ_i p_i d_i ≤ D} H(p)                                      (10.115)
         = max_{p : Σ_i p_i d_i ≤ D} H(Y)                                      (10.116)
         = max_{p : Σ_i p_i d_i ≤ D} H(Y, Z)                                   (10.117)
         = max_{p : Σ_i p_i d_i ≤ D} [ H(Z) + H(Y|Z) ]                         (10.118)
         = max [ −p log p − (1 − p) log(1 − p) + (1 − p) H(Y|Z = 0) + p H(Y|Z = 1) ],   (10.119)

where p = Pr(Z = 1) . Since Σ_i p_i d_i = Σ_{i : Z = 1} p_i = p , the constraint becomes p ≤ D . Given Z = 0 , there are m possible values of Y , and the entropy is maximized by a uniform distribution over these values. Similarly, conditioned on Z = 1 , H(Y|Z = 1) is maximized by a uniform distribution on the m values of Y where Z = 1 . Thus, for D ≤ 1/2 ,

    φ(D) = max_{p ≤ D} [ H(p) + p log m + (1 − p) log m ] = H(D) + log m,      (10.121)

and hence

    R(D) = H(X) − φ(D) = log 2m − log m − H(D) = 1 − H(D).                     (10.122)
11. Lower bound
Let
    X ∼ f(x) = e^{−x⁴} / ∫_{−∞}^{∞} e^{−x⁴} dx ,
and
    ∫ x⁴ e^{−x⁴} dx / ∫ e^{−x⁴} dx = c.
Define g(a) = max h(X) over all densities such that EX 4 ≤ a . Let R(D) be the
rate distortion function for X with the above density and with distortion criterion
d(x, x̂) = (x − x̂)4 . Show R(D) ≥ g(c) − g(D) .
Solution: Lower bound
This is a continuous analog of the Shannon lower bound for the rate distortion function. By similar arguments,

    R(D) = min_{Ed(X,X̂) ≤ D} I(X; X̂)                                         (10.123)
         = min_{Ed(X,X̂) ≤ D} [ h(X) − h(X|X̂) ].                               (10.124)

The maximum entropy distribution given the expected fourth power constraint is of the form

    X ∼ e^{−x⁴} / ∫_{−∞}^{∞} e^{−x⁴} dx ,                                      (10.126)

and hence h(X) = g(c) .
Now h(X|X̂) = h(X − X̂|X̂) ≤ h(X − X̂) ≤ g(D) , from the definition of g(a) = max_{EX⁴ ≤ a} h(X) , since E(X − X̂)⁴ ≤ D . Therefore

    R(D) ≥ g(c) − g(D).                                                         (10.127)
12. Adding a column to the distortion matrix. Let R(D) be the rate distortion function for an i.i.d. process with probability mass function p(x) and distortion function
d(x, x̂) , x ∈ X , x̂ ∈ X̂ . Now suppose that we add a new reproduction symbol x̂ 0 to
X̂ with associated distortion d(x, x̂ 0 ) , x ∈ X . Does this increase or decrease R(D)
and why?
Solution: Adding a column
Let the new rate distortion function be denoted as R̃(D) , and note that we can still
achieve R(D) by restricting the support of p(x, x̂) , i.e., by simply ignoring the new
symbol. Thus, R̃(D) ≤ R(D) .
Finally, note the duality with Problem 7.22: there, adding a row (an extra input symbol) to the channel transition matrix can only leave the capacity unchanged or increase it; here, adding a column (an extra reproduction symbol) can only leave R(D) unchanged or decrease it.
13. Simplification. Suppose X = {1, 2, 3, 4} , X̂ = {1, 2, 3, 4} , p(i) = 41 , i = 1, 2, 3, 4 ,
and X1 , X2 , . . . are i.i.d. ∼ p(x) . The distortion matrix d(x, x̂) is given by
             x̂ = 1   x̂ = 2   x̂ = 3   x̂ = 4
    x = 1      0        0        1        1
    x = 2      0        0        1        1
    x = 3      1        1        0        0
    x = 4      1        1        0        0
(a) Find R(0) , the rate necessary to describe the process with zero distortion.
(b) Find the rate distortion function R(D) . There are some irrelevant distinctions in
alphabets X and X̂ , which allow the problem to be collapsed.
(c) Suppose we have a nonuniform distribution p(i) = p i , i = 1, 2, 3, 4 . What is
R(D) ?
Solution: Simplification
(a) We can achieve 0 distortion if we output X̂ = 1 if X = 1 or 2 , and X̂ = 3 if
X = 3 or 4 . Thus if we set Y = 1 if X = 1 or 2 , and Y = 2 if X = 3 or 4 ,
we can recover Y exactly if the rate is greater than H(Y ) = 1 bit. It is also not
hard to see that any 0 distortion code would be able to recover Y exactly, and
thus R(0) = 1 .
(b) If we define Y as in the previous part, and Ŷ similarly from X̂ , we can see that
the distortion between X and X̂ is equal to the Hamming distortion between Y
and Ŷ . Therefore if the rate is greater than the Hamming rate distortion function
R(D) for Y , we can recover X to distortion D . Thus R(D) = 1 − H(D) .
(c) If the distribution of X is not uniform, the same arguments hold: Y has the distribution (p_1 + p_2, p_3 + p_4) , and the rate distortion function is R(D) = H(p_1 + p_2) − H(D) , where H(·) denotes the binary entropy function.
14. Rate distortion for two independent sources. Can one simultaneously compress
two independent sources better than by compressing the sources individually? The
following problem addresses this question. Let {X i } be iid ∼ p(x) with distortion
d(x, x̂) and rate distortion function R X (D) . Similarly, let {Yi } be iid ∼ p(y) with
distortion d(y, ŷ) and rate distortion function R Y (D) .
Suppose we now wish to describe the process {(X i , Yi )} subject to distortions Ed(X, X̂) ≤
D1 and Ed(Y, Ŷ ) ≤ D2 . Thus a rate RX,Y (D1 , D2 ) is sufficient, where
    R_{X,Y}(D_1, D_2) = min_{p(x̂,ŷ|x,y) : Ed(X,X̂) ≤ D_1, Ed(Y,Ŷ) ≤ D_2} I(X, Y; X̂, Ŷ).
Now suppose the {Xi } process and the {Yi } process are independent of each other.
(a) Show
RX,Y (D1 , D2 ) ≥ RX (D1 ) + RY (D2 ).
(b) Does equality hold?
Now answer the question.
Solution: Rate distortion for two independent sources
(a) Given that X and Y are independent, we have
    p(x, y, x̂, ŷ) = p(x) p(y) p(x̂, ŷ|x, y).                                  (10.128)

Then

    I(X, Y; X̂, Ŷ) = H(X, Y) − H(X, Y|X̂, Ŷ)                                   (10.129)
                   = H(X) + H(Y) − H(X|X̂, Ŷ) − H(Y|X, X̂, Ŷ)                  (10.130)
                   ≥ H(X) + H(Y) − H(X|X̂) − H(Y|Ŷ)                            (10.131)
                   = I(X; X̂) + I(Y; Ŷ),                                        (10.132)

where the inequality follows from the fact that conditioning reduces entropy. Therefore

    R_{X,Y}(D_1, D_2) = min_{p(x̂,ŷ|x,y) : Ed(X,X̂) ≤ D_1, Ed(Y,Ŷ) ≤ D_2} I(X, Y; X̂, Ŷ)    (10.133)
                      ≥ min_{p(x̂,ŷ|x,y) : Ed(X,X̂) ≤ D_1, Ed(Y,Ŷ) ≤ D_2} [ I(X; X̂) + I(Y; Ŷ) ]  (10.134)
                      ≥ min_{p(x̂|x) : Ed(X,X̂) ≤ D_1} I(X; X̂) + min_{p(ŷ|y) : Ed(Y,Ŷ) ≤ D_2} I(Y; Ŷ)   (10.135)
                      = R_X(D_1) + R_Y(D_2).                                     (10.136)
(b) If

        p(x, y, x̂, ŷ) = p(x) p(y) p(x̂|x) p(ŷ|y),                              (10.137)

    then

        I(X, Y; X̂, Ŷ) = H(X, Y) − H(X, Y|X̂, Ŷ)                               (10.138)
                       = H(X) + H(Y) − H(X|X̂, Ŷ) − H(Y|X, X̂, Ŷ)              (10.139)
                       = H(X) + H(Y) − H(X|X̂) − H(Y|Ŷ)                         (10.140)
                       = I(X; X̂) + I(Y; Ŷ).                                     (10.141)

    Let p(x, x̂) be a distribution that achieves the rate distortion R_X(D_1) at distortion D_1 , and let p(y, ŷ) be a distribution that achieves R_Y(D_2) at distortion D_2 . Then for the product distribution p(x, y, x̂, ŷ) = p(x, x̂) p(y, ŷ) , where the component distributions achieve the points (D_1, R_X(D_1)) and (D_2, R_Y(D_2)) , the mutual information is R_X(D_1) + R_Y(D_2) . Thus

        R_{X,Y}(D_1, D_2) = min_{p(x̂,ŷ|x,y) : Ed(X,X̂) ≤ D_1, Ed(Y,Ŷ) ≤ D_2} I(X, Y; X̂, Ŷ) ≤ R_X(D_1) + R_Y(D_2).   (10.142)

    Thus, by using the product distribution, we can achieve the sum of the rates, and combined with part (a) equality holds. Therefore the total rate at which we encode two independent sources together with distortions D_1 and D_2 is the same as if we encoded each of them separately.
15. Distortion-rate function. Let
    D(R) = min_{p(x̂|x) : I(X;X̂) ≤ R} Ed(X, X̂)                                (10.143)
(a) Is D(R) increasing or decreasing in R ?
(b) Is D(R) convex or concave in R ?
(c) Converse for distortion rate functions: We now wish to prove the converse by
focusing on D(R) . Let X1 , X2 , . . . , Xn be i.i.d. ∼ p(x) . Suppose one is given
a (2nR , n) rate distortion code X n → i(X n ) → X̂ n (i(X n )) , with i(X n ) ∈ 2nR .
And suppose that the resulting distortion is D = Ed(X n , X̂ n (i(X n ))) . We must
show that D ≥ D(R) . Give reasons for the following steps in the proof:
    D    = Ed(X^n, X̂^n(i(X^n)))                                                (10.144)
      (a)= E (1/n) Σ_{i=1}^n d(X_i, X̂_i)                                       (10.145)
      (b)= (1/n) Σ_i Ed(X_i, X̂_i)                                              (10.146)
      (c)≥ (1/n) Σ_i D( I(X_i; X̂_i) )                                          (10.147)
      (d)≥ D( (1/n) Σ_i I(X_i; X̂_i) )                                          (10.148)
      (e)≥ D( (1/n) I(X^n; X̂^n) )                                              (10.149)
      (f)≥ D(R)                                                                  (10.150)
Solution: Distortion rate function.
(a) Since for larger values of R the minimization in

        D(R) = min_{p(x̂|x) : I(X;X̂) ≤ R} Ed(X, X̂)                             (10.151)

    is over a larger set of possible distributions, the minimum has to be at least as small as the minimum over the smaller set. Thus D(R) is a nonincreasing function of R .
(b) By arguments similar to those of Lemma 10.4.1, we can show that D(R) is a convex function of R . Consider two rate distortion pairs (R_1, D_1) and (R_2, D_2) which lie on the distortion-rate curve. Let the joint distributions that achieve these pairs be p_1(x, x̂) = p(x) p_1(x̂|x) and p_2(x, x̂) = p(x) p_2(x̂|x) . Consider the distribution p_λ = λ p_1 + (1 − λ) p_2 . Since the distortion is a linear function of the distribution, we have D(p_λ) = λ D_1 + (1 − λ) D_2 . Mutual information, on the other hand, is a convex function of the conditional distribution (Theorem 2.7.4), and hence

        I_{p_λ}(X; X̂) ≤ λ I_{p_1}(X; X̂) + (1 − λ) I_{p_2}(X; X̂) = λ R_1 + (1 − λ) R_2.   (10.152)

    Therefore we can achieve a distortion λ D_1 + (1 − λ) D_2 with a rate no greater than λ R_1 + (1 − λ) R_2 , and hence

        D(R_λ) ≤ D_{p_λ}(X; X̂)                                                  (10.153)
               = λ D(R_1) + (1 − λ) D(R_2),                                      (10.154)

    which proves that D(R) is a convex function of R .
(c)

        D    = Ed(X^n, X̂^n(i(X^n)))                                             (10.155)
          (a)= E (1/n) Σ_{i=1}^n d(X_i, X̂_i)                                    (10.156)
          (b)= (1/n) Σ_i Ed(X_i, X̂_i)                                           (10.157)
          (c)≥ (1/n) Σ_i D( I(X_i; X̂_i) )                                       (10.158)
          (d)≥ D( (1/n) Σ_i I(X_i; X̂_i) )                                       (10.159)
          (e)≥ D( (1/n) I(X^n; X̂^n) )                                           (10.160)
          (f)≥ D(R)                                                               (10.161)
    (a) follows from the definition of distortion for sequences,
    (b) from exchanging expectation and summation,
    (c) from the definition of the distortion rate function applied to the joint distribution p(x_i, x̂_i) ,
    (d) from Jensen's inequality and the convexity of D(R) ,
    (e) from the fact that D(·) is nonincreasing together with

        I(X^n; X̂^n) = H(X^n) − H(X^n|X̂^n)                                      (10.162)
                    = Σ_{i=1}^n H(X_i) − H(X^n|X̂^n)                             (10.163)
                    = Σ_i H(X_i) − Σ_i H(X_i|X̂^n, X_{i−1}, . . . , X_1)         (10.164)
                    ≥ Σ_i H(X_i) − Σ_i H(X_i|X̂_i)                               (10.165)
                    = Σ_i I(X_i; X̂_i),                                           (10.166)

    and
    (f) follows from the definition of the distortion rate function, since I(X^n; X̂^n) ≤ H(X̂^n) ≤ nR .
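For a Bernoulli(1/2) source with Hamming distortion, R(D) = 1 − H(D), so the distortion-rate function is D(R) = H⁻¹(1 − R). The small sketch below (Python/numpy, illustrative only; the bisection routine is ours) inverts the binary entropy numerically and checks the two properties proved above: D(R) is nonincreasing and convex (its chords lie above the curve).

    import numpy as np

    def H2(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def D_of_R(R):
        """Distortion-rate function of a Bernoulli(1/2) source: solve H2(D) = 1 - R, D in [0, 1/2]."""
        lo, hi = 0.0, 0.5
        for _ in range(60):                     # bisection on the increasing branch of H2
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if H2(mid) < 1 - R else (lo, mid)
        return 0.5 * (lo + hi)

    Rs = np.linspace(0.0, 1.0, 11)
    Ds = np.array([D_of_R(R) for R in Rs])
    print("nonincreasing:", bool(np.all(np.diff(Ds) <= 1e-9)))
    # convexity check: the midpoint of any chord lies above the curve
    chord_ok = all(D_of_R(0.5 * (r1 + r2)) <= 0.5 * (D_of_R(r1) + D_of_R(r2)) + 1e-9
                   for r1 in Rs for r2 in Rs)
    print("convex (midpoint test):", chord_ok)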
16. Probability of conditionally typical sequences. In Chapter 7, we calculated the
probability that two independently drawn sequences X n and Y n are weakly jointly
typical. To prove the rate distortion theorem, however, we need to calculate this probability when one of the sequences is fixed and the other is random.
The techniques of weak typicality allow us only to calculate the average set size of the
conditionally typical set. Using the ideas of strong typicality on the other hand provides
us with stronger bounds which work for all typical x n sequences. We will outline the
proof that Pr{(x^n, Y^n) ∈ A_ε^{*(n)}} ≈ 2^{−nI(X;Y)} for all typical x^n . This approach was
introduced by Berger[1] and is fully developed in the book by Csiszár and Körner[4].
Let (Xi , Yi ) be drawn i.i.d. ∼ p(x, y) . Let the marginals of X and Y be p(x) and
p(y) respectively.
(a) Let A_ε^{*(n)} be the strongly typical set for X . Show that

        |A_ε^{*(n)}| ≐ 2^{nH(X)} .                                              (10.167)
Hint: Theorem 11.1.1 and 11.1.3.
(b) The joint type of a pair of sequences (x^n, y^n) is the proportion of times (x_i, y_i) = (a, b) occurs in the pair of sequences, i.e.,

        p_{x^n,y^n}(a, b) = (1/n) N(a, b|x^n, y^n) = (1/n) Σ_{i=1}^n I(x_i = a, y_i = b).   (10.168)

    The conditional type of a sequence y^n given x^n is a stochastic matrix that gives the proportion of times a particular element of Y occurred with each element of X in the pair of sequences. Specifically, the conditional type V_{y^n|x^n}(b|a) is defined as

        V_{y^n|x^n}(b|a) = N(a, b|x^n, y^n) / N(a|x^n).                          (10.169)

    Show that the number of conditional types is bounded by (n + 1)^{|X||Y|} .
(c) The set of sequences y^n ∈ Y^n with conditional type V with respect to a sequence x^n is called the conditional type class T_V(x^n) . Show that

        (1/(n + 1)^{|X||Y|}) 2^{nH(Y|X)} ≤ |T_V(x^n)| ≤ 2^{nH(Y|X)}.             (10.170)
(d) The sequence y^n ∈ Y^n is said to be ε-strongly conditionally typical with the sequence x^n with respect to the conditional distribution V(·|·) if the conditional type is close to V . The conditional type should satisfy the following two conditions:
    i. For all (a, b) ∈ X × Y with V(b|a) > 0 ,

        (1/n) |N(a, b|x^n, y^n) − V(b|a) N(a|x^n)| ≤ ε / (|Y| + 1).              (10.171)

    ii. N(a, b|x^n, y^n) = 0 for all (a, b) such that V(b|a) = 0 .
    The set of such sequences is called the conditionally typical set and is denoted A_ε^{*(n)}(Y|x^n) . Show that the number of sequences y^n that are conditionally typical with a given x^n ∈ X^n is bounded by

        (1/(n + 1)^{|X||Y|}) 2^{n(H(Y|X) − ε_1)} ≤ |A_ε^{*(n)}(Y|x^n)| ≤ (n + 1)^{|X||Y|} 2^{n(H(Y|X) + ε_1)},   (10.172)

    where ε_1 → 0 as ε → 0 .
(e) For a pair of random variables (X, Y) with joint distribution p(x, y) , the ε-strongly typical set A_ε^{*(n)} is the set of sequences (x^n, y^n) ∈ X^n × Y^n satisfying
    i.

        |(1/n) N(a, b|x^n, y^n) − p(a, b)| < ε / (|X||Y|)                        (10.173)

    for every pair (a, b) ∈ X × Y with p(a, b) > 0 .
    ii. N(a, b|x^n, y^n) = 0 for all (a, b) ∈ X × Y with p(a, b) = 0 .
    The set of ε-strongly jointly typical sequences is called the ε-strongly jointly typical set and is denoted A_ε^{*(n)}(X, Y) .
    Let (X, Y) be drawn i.i.d. ∼ p(x, y) . For any x^n such that there exists at least one pair (x^n, y^n) ∈ A_ε^{*(n)}(X, Y) , the set of sequences y^n such that (x^n, y^n) ∈ A_ε^{*(n)} satisfies

        (1/(n + 1)^{|X||Y|}) 2^{n(H(Y|X) − δ(ε))} ≤ |{y^n : (x^n, y^n) ∈ A_ε^{*(n)}}| ≤ (n + 1)^{|X||Y|} 2^{n(H(Y|X) + δ(ε))},   (10.174)

    where δ(ε) → 0 as ε → 0 . In particular, we can write

        2^{n(H(Y|X) − ε_2)} ≤ |{y^n : (x^n, y^n) ∈ A_ε^{*(n)}}| ≤ 2^{n(H(Y|X) + ε_2)},   (10.175)

    where we can make ε_2 arbitrarily small with an appropriate choice of ε and n .
(f) Let Y_1, Y_2, . . . , Y_n be drawn i.i.d. ∼ ∏ p(y_i) . For x^n ∈ A_ε^{*(n)} , the probability that (x^n, Y^n) ∈ A_ε^{*(n)} is bounded by

        2^{−n(I(X;Y) + ε_3)} ≤ Pr((x^n, Y^n) ∈ A_ε^{*(n)}) ≤ 2^{−n(I(X;Y) − ε_3)},   (10.176)

    where ε_3 goes to 0 as ε → 0 and n → ∞ .
Solution:
Probability of conditionally typical sequences.
(a) The set of strongly typical sequences is the set of sequences whose type is close to the distribution p . We have two conditions: that the proportion of any symbol a in the sequence is close to p(a) , and that no symbol with p(a) = 0 occurs in the sequence. The second condition may seem a technical one, but it is essential in the proof of the strong equipartition theorem below.
    By the strong law of large numbers, for a sequence drawn i.i.d. ∼ p(x) , the asymptotic proportion of any letter a is close to p(a) with high probability. So for appropriately large n , the proportion of every letter is within ε of p(a) with probability close to 1, i.e., the strongly typical set has probability close to 1. We will show that

        2^{n(H(p) − ε')} ≤ |A_ε^{*(n)}| ≤ 2^{n(H(p) + ε')},                      (10.177)

    where ε' goes to 0 as ε → 0 and n → ∞ .
    For sequences in the strongly typical set,

        −H(p) − (1/n) log p(x^n) = Σ_{a∈X} p(a) log p(a) − (1/n) Σ_{a∈X} N(a|x^n) log p(a)
                                 = −Σ_{a∈X} ( (1/n) N(a|x^n) − p(a) ) log p(a),   (10.178)

    and since |(1/n) N(a|x^n) − p(a)| < ε if p(a) > 0 , and (1/n) N(a|x^n) − p(a) = 0 if p(a) = 0 , we have

        | −H(p) − (1/n) log p(x^n) | < ε_1,                                      (10.179)

    where ε_1 = ε Σ_{a : p(a)>0} log(1/p(a)) . It follows that ε_1 → 0 as ε → 0 .
    Recall the definition of weakly typical sequences in Chapter 3. A sequence was defined as ε_1-weakly typical if | −(1/n) log p(x^n) − H(p) | ≤ ε_1 . Hence a sequence that is ε-strongly typical is also ε_1-weakly typical, and the strongly typical set is a subset of the corresponding weakly typical set, i.e., A_ε^{*(n)} ⊂ A_{ε_1}^{(n)} .
    Similarly, by the continuity of the entropy function, it follows that for all types in the typical set the entropy of the type is close to H(p) . Specifically, for all x^n ∈ A_ε^{*(n)} , |p_{x^n}(a) − p(a)| < ε , and hence by Lemma 10.0.5 we have

        |H(p_{x^n}) − H(p)| < ε_2,                                               (10.180)

    where ε_2 = −|X| ε log ε → 0 as ε → 0 .
    There are only a polynomial number of types altogether, and hence there are only a polynomial number of types in the strongly typical set. The type class of any type q ∈ A_ε^{*(n)} , by Theorem 12.1.3, has size bounded by

        (1/(n + 1)^{|X|}) 2^{nH(q)} ≤ |T(q)| ≤ 2^{nH(q)}.                         (10.181)

    By the previous part, for q ∈ A_ε^{*(n)} , |H(q) − H(p)| ≤ ε_2 , and hence

        (1/(n + 1)^{|X|}) 2^{n(H(p) − ε_2)} ≤ |T(q)| ≤ 2^{n(H(p) + ε_2)}.         (10.182)

    Since the number of elements in the strongly typical set is the sum of the sizes of the type classes in the strongly typical set, and there are only a polynomial number of them, we have

        (1/(n + 1)^{|X|}) 2^{n(H(p) − ε_2)} ≤ |A_ε^{*(n)}| ≤ (n + 1)^{|X|} 2^{n(H(p) + ε_2)},   (10.183)

    i.e., |(1/n) log |A_ε^{*(n)}| − H(p)| ≤ ε' , where ε' = ε_2 + (|X|/n) log(n + 1) , which goes to 0 as ε → 0 and n → ∞ .
    It is instructive to compare the proofs of the strong AEP and the AEP for weakly typical sequences. The results are similar, but there is one important difference. The lower bound on the size of the strongly typical set does not depend on the probability of the set; instead, the bound is derived directly in terms of the size of type classes. This enables the lower bound in the strong AEP to be extended to conditionally typical sequences and sets; the weak AEP cannot be extended similarly. We will consider the extensions of the AEP to conditional distributions in the next part.
(b) The concept of types for single sequences can be extended to pairs of sequences, for which we can define the joint type and the conditional type.
    Definition: The joint type of a pair of sequences (x^n, y^n) is the proportion of times a pair of symbols (a, b) occurs jointly in the pair of sequences, i.e.,

        p_{x^n,y^n}(a, b) = (1/n) N(a, b|x^n, y^n).                               (10.184)

    Definition: The conditional type of a sequence y^n given x^n is a stochastic matrix that gives the proportion of times a particular element of Y occurred with each element of X in the pair of sequences. Specifically, the conditional type V_{y^n|x^n}(b|a) is defined as

        V_{y^n|x^n}(b|a) = N(a, b|x^n, y^n) / N(a|x^n).                           (10.185)

    The set of sequences y^n ∈ Y^n with conditional type V with respect to a sequence x^n is called the conditional type class T_V(x^n) .
    Lemma 10.0.2 The number of conditional types for sequences of length n from the alphabets X and Y is bounded by (n + 1)^{|X||Y|} .
    Proof: By Theorem 12.1.1, the number of ways of choosing a row of the matrix V(·|a) is bounded by (n + 1)^{|Y|} , and there are |X| different choices of rows. So the total number of different conditional types is bounded by (n + 1)^{|X||Y|} .
(c) Since V_{y^n|x^n} is a stochastic matrix, we can multiply it with p_{x^n} to find the joint type of (x^n, y^n) . We will denote the conditional entropy of Y given X for this joint distribution as H(V_{y^n|x^n} | p_{x^n}) .
    Lemma 10.0.3 For x^n ∈ X^n , let T_V(x^n) denote the set of sequences y^n ∈ Y^n with conditional type V with respect to x^n . Then

        (1/(n + 1)^{|X||Y|}) 2^{nH(V|p_{x^n})} ≤ |T_V(x^n)| ≤ 2^{nH(V|p_{x^n})}.   (10.186)

    Proof: This is a direct consequence of the corresponding lemma about the size of unconditional type classes. We can consider the subsequences of the pair corresponding to each element of X . For any particular element a ∈ X , the number of conditionally typical subsequences depends only on the conditional type V(·|a) , and hence the size of the conditional type class is bounded by

        ∏_{a∈X} (1/(N(a|x^n) + 1)^{|Y|}) 2^{N(a|x^n) H(V(·|a))} ≤ |T_V(x^n)| ≤ ∏_{a∈X} 2^{N(a|x^n) H(V(·|a))},   (10.187)

    which proves the lemma.
    The above two lemmas generalize the corresponding lemmas for unconditional types. We can use these to extend the strong AEP to conditionally typical sets.
(d) We begin with the definition of strongly conditionally typical sequences.
    Definition: The sequence y^n ∈ Y^n is said to be ε-strongly conditionally typical with the sequence x^n with respect to the conditional distribution V(·|·) if the conditional type is close to V . The conditional type should satisfy the following two conditions:
    i. For all (a, b) ∈ X × Y with V(b|a) > 0 ,

        (1/n) |N(a, b|x^n, y^n) − V(b|a) N(a|x^n)| ≤ ε.                           (10.188)

    ii. N(a, b|x^n, y^n) = 0 for all (a, b) such that V(b|a) = 0 .
    The set of such sequences is called the conditionally typical set and is denoted A_ε^{*(n)}(Y|x^n) .
    Essentially, a sequence y^n is conditionally typical with x^n if the subsequence of y^n corresponding to the occurrences of a particular symbol a in x^n is typical with respect to the conditional distribution V(·|a) . Since the number of such conditionally typical sequences is just the product of the number of conditionally typical subsequences corresponding to each choice of a ∈ X , we can now extend the strong AEP to derive a bound on the size of the conditionally typical set.
    Lemma 10.0.4 The number of sequences y^n that are conditionally typical with a given x^n ∈ X^n is bounded by

        (1/(n + 1)^{|X||Y|}) 2^{n(H(V|p_{x^n}) − ε_4)} ≤ |A_ε^{*(n)}(Y|x^n)| ≤ (n + 1)^{|X||Y|} 2^{n(H(V|p_{x^n}) + ε_4)},   (10.189)

    where ε_4 = −|X||Y| ε log ε → 0 as ε → 0 .
    Proof: Just as in the proof of the strong AEP (Theorem 12.2.1), we derive the bounds using purely combinatorial arguments. The size of the conditional type class is bounded in Lemma 10.0.3 in terms of the entropy of the conditional type. By Lemma 10.0.5 and the definition of the conditionally typical set, we have

        |H(p_{y^n|x^n} | p_{x^n}) − H(V | p_{x^n})| ≤ −|X||Y| ε log ε.            (10.190)

    Combining this with the bound on the number of conditional types (Lemma 10.0.2), we have the lemma.
i.
3
3
31
3
3 N (a, b|xn , y n ) − p(a, b)3 < %
3n
3
(10.191)
for every pair (a, b) ∈ X × Y with p(a, b) > 0 .
ii. N (a, b|xn , y n ) = 0 for all (a, b) ∈ X × Y with p(a, b) = 0 .
The set of % -strongly jointly typical sequences is called the % -strongly jointly
∗(n)
typical set and is denoted A! (X, Y ) .
Theorem 10.0.1 (Joint AEP.) Let (X n , Y n ) be sequences of length n drawn
C
i.i.d. according to p(xn , y n ) = ni=1 p(xi , yi ) . Then
P (A∗(n)
) → 1,
!
as n → ∞.
(10.192)
Proof: Follows directly from the weak law of large numbers. !
From the definition, it is clear that strongly jointly typical sequences are also
∗(n)
individually typical, i.e., for xn such that (xn , y n ) ∈ A! (X, Y ) ,
|pxn (a) − p(a)| ≤
!
b∈Y
|pxn ,yn (a, b) − p(a, b)|
(10.193)
for all a ∈ X .
(10.194)
≤ %|Y|,
∗(n)
Hence xn ∈ A!|Y| . This in turn implies that the pair is also conditionally typical
∗(n)
for the conditional distribution p(y|x) , i.e., for (x n , y n ) ∈ A!
(X, Y ) ,
|pxn ,yn (a, b) − p(b|a)pxn (a)| < %(|Y| + 1) < %|X ||Y|.
(10.195)
Since conditional entropy is also a continuous function of the distribution, the
conditional entropy of the type of a jointly strongly typical sequence, p xn ,yn , is
close to conditional entropy for p(x, y) . Hence we can also extend Lemma 10.0.3
for elements of the typical set as follows:
Theorem 10.0.2 (Size of conditionally typical set)
Let (X, Y ) be drawn i.i.d. ∼ p(x, y) . For any x n such that there exists at least
∗(n)
∗(n)
one pair (xn , y n ) ∈ A! (X, Y ) , the set of sequences y n such that (xn , y n ) ∈ A!
satisfies
1
2n(H(Y |X)−δ(!)) ≤ |{y n : (xn , y n ) ∈ A∗(n)
}| ≤ (n + 1)|X ||Y| 2n(H(Y |X)+δ(!)) ,
!
(n + 1)|X ||Y|
(10.196)
where δ(%) → 0 as % → 0 . In particular, we can write
2n(H(Y |X)−!5 ) ≤ |{y n : (xn , y n ) ∈ A∗(n)
}| ≤ 2n(H(Y |X)+!5 ) ,
!
(10.197)
where we can make %5 arbitrarily small with an appropriate choice of % and n .
266
Rate Distortion Theory
    Proof: The theorem follows from Lemma 10.0.4 and the continuity of conditional entropy as a function of the joint distribution. The sequences that are jointly typical with a given x^n are also ε|X||Y|-strongly conditionally typical, and hence from the upper bound of Lemma 10.0.4 we have

        |{y^n : (x^n, y^n) ∈ A_ε^{*(n)}}| ≤ (n + 1)^{|X||Y|} 2^{n(H(p(b|a)|p_{x^n}) + δ(ε))},   (10.198)

    where δ(ε) → 0 as ε → 0 . Now since

        H(Y|X) = −Σ_x p(x) Σ_y p(y|x) log p(y|x)                                   (10.199)

    is a linear function of the distribution p(x) , we have

        |H(p(b|a)|p_{x^n}) − H(Y|X)| ≤ ε|Y| max_{a∈X} H(Y|X = a) ≤ ε|Y| log |Y|,   (10.200)

    which gives us the upper bound of the theorem.
    For the lower bound, assume that (x^n, y^n) ∈ A_ε^{*(n)}(X, Y) . Since the joint type of a pair of sequences is determined by the type of x^n and the conditional type of y^n given x^n , all sequences y^n with this conditional type will also be in A_ε^{*(n)}(X, Y) . Hence the number of sequences |{y^n : (x^n, y^n) ∈ A_ε^{*(n)}}| is at least as large as the number of sequences of this conditional type, which by the lower bound of Lemma 10.0.4 and the continuity of conditional entropy as a function of the joint distribution (Lemma 10.0.5 and (10.200)) gives

        |{y^n : (x^n, y^n) ∈ A_ε^{*(n)}}| ≥ (1/(n + 1)^{|X||Y|}) 2^{n(H(p(b|a)|p_{x^n}) − δ(ε))},   (10.201)

    where δ(ε) → 0 as ε → 0 . This gives us the theorem with

        ε_5 = (|X||Y|/n) log(n + 1) + ε|Y| log |Y| − ε|X|²|Y|² log(ε|X||Y|).        (10.202)

    To use this result, we have to assume that there is at least one y^n such that (x^n, y^n) ∈ A_ε^{*(n)}(X, Y) . From the definitions of the strongly typical sets, it is clear that if |p_{x^n}(a) − p(a)| < ε , there exists at least one conditional distribution p̂(b|a) such that |p̂(b|a) p_{x^n}(a) − p(a, b)| < ε , and hence for large enough n we have at least one conditional type such that |p_{x^n,y^n}(a, b) − p(a, b)| ≤ ε . Hence if x^n is ε-strongly typical, there exists a conditional type such that the joint type is jointly typical. For such an x^n sequence, we can always find a y^n such that (x^n, y^n) is jointly typical.
(f) Notice that in these results we have used purely combinatorial arguments to bound the size of the conditional type class and the conditionally typical set; they illustrate the power of the method of types. We will now use the last theorem to bound the probability that a randomly chosen Y^n will be conditionally typical with a given x^n ∈ A_ε^{*(n)} .
    Theorem 10.0.3 Let Y_1, Y_2, . . . , Y_n be drawn i.i.d. ∼ ∏ p(y_i) . For x^n ∈ A_ε^{*(n)} , the probability that (x^n, Y^n) ∈ A_ε^{*(n)} is bounded by

        2^{−n(I(X;Y) + ε_7)} ≤ Pr((x^n, Y^n) ∈ A_ε^{*(n)}) ≤ 2^{−n(I(X;Y) − ε_7)},   (10.203)

    where ε_7 = ε_5 − ε|X||Y| log(ε|X|) , which goes to 0 as ε → 0 and n → ∞ .
    Proof: If (x^n, Y^n) ∈ A_ε^{*(n)} , then p(Y^n) ≐ 2^{−nH(Y)} , and hence

        P((x^n, Y^n) ∈ A_ε^{*(n)}) = Σ_{y^n : (x^n,y^n) ∈ A_ε^{*(n)}} p(y^n)        (10.204)
                                   ≤ Σ_{y^n : (x^n,y^n) ∈ A_ε^{*(n)}} 2^{−n(H(Y) − ε_6)}   (10.205)
                                   = |A_ε^{*(n)}(Y|x^n)| 2^{−n(H(Y) − ε_6)}          (10.206)
                                   ≤ 2^{n(H(Y|X) + ε_5)} 2^{−n(H(Y) − ε_6)}          (10.207)
                                   = 2^{−n(I(X;Y) − ε_7)},                            (10.208)

    where ε_6 = −ε|X||Y| log(ε|X|) , since |p_{y^n} − p| ≤ ε|X| if (x^n, y^n) ∈ A_ε^{*(n)} . Also,

        P((x^n, Y^n) ∈ A_ε^{*(n)}) = Σ_{y^n : (x^n,y^n) ∈ A_ε^{*(n)}} p(y^n)        (10.209)
                                   ≥ Σ_{y^n : (x^n,y^n) ∈ A_ε^{*(n)}} 2^{−n(H(Y) + ε_6)}   (10.210)
                                   = |A_ε^{*(n)}(Y|x^n)| 2^{−n(H(Y) + ε_6)}          (10.211)
                                   ≥ 2^{n(H(Y|X) − ε_5)} 2^{−n(H(Y) + ε_6)}          (10.212)
                                   = 2^{−n(I(X;Y) + ε_7)}.                            (10.213)

    Hence

        2^{−n(I(X;Y) + ε_7)} ≤ Pr((x^n, Y^n) ∈ A_ε^{*(n)}) ≤ 2^{−n(I(X;Y) − ε_7)}.   (10.214)
The main result of this problem is the last theorem, which gives upper and lower
bounds on the probability that a randomly chosen sequence y n will be jointly
typical with a given xn . This was used in the proof of the rate distortion theorem.
To end this solution, we prove a result on the continuity of entropy:
Lemma 10.0.5 If |p(x) − q(x)| ≤ ε for all x , then |H(p) − H(q)| ≤ −ε|X| log ε .
Proof: We will use some simple properties of the function

    f(x) = −x ln x   for 0 ≤ x ≤ 1/e.                                             (10.215)

Since f'(x) = −1 − ln x > 0 and f''(x) = −1/x < 0 , f(x) is an increasing concave function. Now consider

    g(x) = f(x + ε) − f(x) = x ln x − (x + ε) ln(x + ε).                           (10.216)

Again by differentiation, it is clear that g'(x) < 0 , so the function is strictly decreasing. Hence g(x) < g(0) = −ε ln ε for all x .
For any a ∈ X , assume p(a) > q(a) , so that

    p(a) − q(a) ≤ ε.                                                               (10.217)

Hence, by the fact that f is an increasing function, we have

    | −p(a) ln p(a) + q(a) ln q(a) | = −p(a) ln p(a) + q(a) ln q(a)                (10.218)
                                     ≤ −(q(a) + ε) ln(q(a) + ε) + q(a) ln q(a)
                                     ≤ −ε ln ε.                                     (10.219)

Summing this over all a ∈ X , we have the lemma.
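Lemma 10.0.3 can be checked exactly for small n, since |T_V(x^n)| is a product of multinomial coefficients: for each a, the N(a|x^n) positions where x_i = a can be filled with symbols b in N(a|x^n)-choose-{N(a,b)} ways. The sketch below (Python, illustrative only; it assumes binary X and Y alphabets and counts chosen so that the conditional type has integer counts) compares log2 |T_V(x^n)| with n H(V|p_{x^n}) and the polynomial slack |X||Y| log2(n + 1).

    import math

    def log2_multinomial(counts):
        n = sum(counts)
        val = math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)
        return val / math.log(2)

    # x^n over X = {0, 1} with N(0|x^n) = 60, N(1|x^n) = 40; conditional type V(b|a) on Y = {0, 1}
    N = {0: 60, 1: 40}
    V = {0: (0.5, 0.5), 1: (0.8, 0.2)}           # V[a] = (V(0|a), V(1|a))
    n = sum(N.values())

    log_T = sum(log2_multinomial([int(round(V[a][b] * N[a])) for b in (0, 1)]) for a in (0, 1))
    H_cond = sum((N[a] / n) * sum(-v * math.log2(v) for v in V[a] if v > 0) for a in (0, 1))
    slack = 4 * math.log2(n + 1)                 # |X||Y| log2(n+1)
    print(f"log2|T_V| = {log_T:.2f}   n*H(V|p_x) = {n * H_cond:.2f}   slack = {slack:.2f}")

The printed value of log2 |T_V| falls between n H(V|p_x) − slack and n H(V|p_x), as the lemma requires.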
17. The source-channel separation theorem with distortion: Let V 1 , V2 , . . . , Vn be
a finite alphabet i.i.d. source which is encoded as a sequence of n input symbols X n
of a discrete memoryless channel. The output of the channel Y n is mapped onto the
reconstruction alphabet V̂^n = g(Y^n) . Let D = Ed(V^n, V̂^n) = (1/n) Σ_{i=1}^n Ed(V_i, V̂_i) be
the average distortion achieved by this combined source and channel coding scheme.
    V^n −→ X^n(V^n) −→ [ Channel, capacity C ] −→ Y^n −→ V̂^n
(a) Show that if C > R(D) , where R(D) is the rate distortion function for V , then it is possible to find encoders and decoders that achieve an average distortion arbitrarily close to D .
(b) (Converse.) Show that if the average distortion is equal to D , then the capacity
of the channel C must be greater than R(D) .
Solution: Source channel separation theorem with distortion
(a) To show achievability, we consider two codes at rate R , where C > R > R(D) . The first code is a rate distortion code that achieves distortion D at rate R . The second code is a channel code that allows transmission over the channel at rate R with probability of error going to 0. We use the rate distortion code to encode the source into one of the 2^{nR} messages, and the channel code to send this message over the channel. Since the probability of error is exponentially small, the received message is the same as the transmitted message with probability close to 1. In that case, the achievability part of the rate distortion theorem shows that the decoded sequence is within distortion D of the input sequence with high probability.
    To complete the analysis, we need to consider the case where the channel code produces an error; however, even in this case the distortion produced by the error is bounded, and hence the total distortion is essentially the same as that achieved without the errors.
(b) To prove the converse, we need to prove that for any encoding system that achieves distortion D , the capacity of the channel must be at least R(D) . Mimicking the steps of the converse for the rate distortion theorem, we can define an encoding function f_n and a decoding function g_n . Let V̂^n = V̂^n(Y^n) = g_n(Y^n) be the reproduced sequence corresponding to V^n , and assume that Ed(V^n, V̂^n) ≤ D for this code. Then we have the following chain of inequalities:
    I(V^n; V̂^n) = H(V^n) − H(V^n|V̂^n)                                           (10.220)
                 = Σ_{i=1}^n H(V_i) − H(V^n|V̂^n)                                 (10.221)
                 = Σ_i H(V_i) − Σ_i H(V_i|V̂^n, V_{i−1}, . . . , V_1)             (10.222)
              (a)≥ Σ_i H(V_i) − Σ_i H(V_i|V̂_i)                                   (10.223)
                 = Σ_i I(V_i; V̂_i)                                               (10.224)
                 ≥ Σ_i R(Ed(V_i, V̂_i))                                           (10.225)
                 = n · (1/n) Σ_i R(Ed(V_i, V̂_i))                                 (10.226)
              (b)≥ n R( (1/n) Σ_i Ed(V_i, V̂_i) )                                 (10.227)
                 = n R(Ed(V^n, V̂^n))                                             (10.228)
                 ≥ n R(D),                                                         (10.229)
where
(a) follows from the fact that conditioning reduces entropy,
(b) from the convexity of the rate distortion function, and the last step uses Ed(V^n, V̂^n) ≤ D together with the fact that R(·) is nonincreasing.
Also, by the data processing inequality,

    nR(D) ≤ I(V^n; V̂^n) ≤ I(X^n; Y^n) ≤ nC,                                      (10.230)

where the last inequality follows from Lemma 7.9.2. Hence C ≥ R(D) .
18. Rate distortion.
Let d(x, x̂) be a distortion function. We have a source X ∼ p(x). Let R(D) be the
associated rate distortion function.
(a) Find R̃(D) in terms of R(D), where R̃(D) is the rate distortion function associated with the distortion d̃(x, x̂) = d(x, x̂) + a for some constant a > 0. (They are not equal.)
(b) Now suppose that d(x, x̂) ≥ 0 for all x, x̂ and define a new distortion function
d∗ (x, x̂) = bd(x, x̂), where b is some number ≥ 0. Find the associated rate distortion function R∗ (D) in terms of R(D).
(c) Let X ∼ N (0, σ 2 ) and d(x, x̂) = 5(x − x̂)2 + 3. What is R(D)?
Solution: Rate distortion.
(a)
        R̃(D) = inf_{p(x̂|x) : E d̃(X,X̂) ≤ D} I(X; X̂)
              = inf_{p(x̂|x) : E d(X,X̂) + a ≤ D} I(X; X̂)
              = inf_{p(x̂|x) : E d(X,X̂) ≤ D − a} I(X; X̂)
              = R(D − a).
(b) If b > 0 ,
        R*(D) = inf_{p(x̂|x) : E d*(X,X̂) ≤ D} I(X; X̂)
              = inf_{p(x̂|x) : E[b d(X,X̂)] ≤ D} I(X; X̂)
              = inf_{p(x̂|x) : E d(X,X̂) ≤ D/b} I(X; X̂)
              = R(D/b);
    if b = 0 , then d* ≡ 0 and R*(D) = 0 .
(c) Let R_se(D) be the rate distortion function associated with the distortion d_se(x, x̂) = (x − x̂)² . Then from parts (a) and (b) we have

        R(D) = R_se( (D − 3)/5 ).
We know that

    R_se(D) = (1/2) log(σ²/D)    for 0 ≤ D ≤ σ²,
            = 0                   for D > σ².

Therefore

    R(D) = (1/2) log( 5σ²/(D − 3) )    for 3 ≤ D ≤ 5σ² + 3,
         = 0                            for D > 5σ² + 3.
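A two-line numerical check of part (c) (Python; illustrative only, with a value of σ² chosen by us): evaluating R(D) = (1/2) log(5σ²/(D − 3)) directly agrees with R_se((D − 3)/5) computed from the Gaussian squared-error formula.

    import numpy as np

    sigma2 = 2.0
    R_se = lambda D: 0.5 * np.log2(sigma2 / D) if 0 < D <= sigma2 else 0.0
    for D in [3.5, 5.0, 8.0, 5 * sigma2 + 4]:
        direct = 0.5 * np.log2(5 * sigma2 / (D - 3)) if 3 < D <= 5 * sigma2 + 3 else 0.0
        via_parts = R_se((D - 3) / 5)          # R(D) = R_se((D-3)/5) from parts (a) and (b)
        print(f"D = {D:.1f}   direct = {direct:.4f}   R_se((D-3)/5) = {via_parts:.4f}")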
19. Rate distortion with two constraints
Let Xi be iid ∼ p(x) . We are given two distortion functions d 1 (x, x̂) and d2 (x, x̂) . We
wish to describe X n at rate R and reconstruct it with distortions Ed 1 (X n , X̂1n ) ≤ D1 ,
and Ed2 (X n , X̂2n ) ≤ D2 , as shown here:
    X^n −→ i(X^n) −→ ( X̂_1^n(i), X̂_2^n(i) ),   D_1 = Ed_1(X^n, X̂_1^n),   D_2 = Ed_2(X^n, X̂_2^n).
Here i(·) takes on 2nR values. What is the rate distortion function R(D 1 , D2 ) ?
Solution: Rate distortion with two constraints
    R(D_1, D_2) = min_{p(x̂_1, x̂_2 | x)} I(X; X̂_1, X̂_2)
    subject to
        E d_1(X, X̂_1) ≤ D_1 ,   E d_2(X, X̂_2) ≤ D_2 .
Some interesting things to note about R(D_1, D_2) are the following. First, max(R(D_1), R(D_2)) ≤ R(D_1, D_2) ≤ R(D_1) + R(D_2) . The upper bound holds because we may always choose X̂_1 and X̂_2 conditionally independent given X , each achieving its own rate distortion function, in which case I(X; X̂_1, X̂_2) ≤ I(X; X̂_1) + I(X; X̂_2) . The lower bound holds because the best rate achievable in the more constrained problem cannot be lower than the best rate achievable in either less constrained problem. Note that the optimization is over the set of distributions of the form p(x̂_1, x̂_2 | x) , which is a larger set than if conditional independence p(x̂_1|x) p(x̂_2|x) were required, and the minimum-rate-achieving distribution may not exhibit conditional independence. As a simple example where the optimal solution is conditionally dependent, consider a Gaussian source where both distortion measures are squared error and the distortion bounds are the same. In this case the minimum rate is achieved with X̂_1 = X̂_2 almost surely, which gives R(D_1, D_2) = R(D_1) = R(D_2) : in I(X; X̂_1, X̂_2) = I(X; X̂_1) + I(X; X̂_2 | X̂_1) the second term is zero and the first term is minimized, which is not possible if conditional independence is required.
20. Rate distortion
Consider the standard rate distortion problem, X i i.i.d. ∼ p(x), X n → i(X n ) → X̂ n ,
|i(·)| = 2nR . Consider two distortion criteria d1 (x, x̂) and d2 (x, x̂) .
Suppose d1 (x, x̂) ≤ d2 (x, x̂) for all x ∈ X , x̂ ∈ X̂ .
Let R1 (D) and R2 (D) be the corresponding rate distortion functions.
(a) Find the inequality relationship between R 1 (D) and R2 (D) .
(b) Suppose we must describe the source {X i } at the minimum rate R achieving
d_1(X^n, X̂_1^n) ≤ D and d_2(X^n, X̂_2^n) ≤ D . Thus

    X^n → i(X^n) → ( X̂_1^n(i(X^n)), X̂_2^n(i(X^n)) ),

with |i(·)| = 2^{nR} . Find the minimum rate R .
Solution: Rate distortion
(a) Any rate (and coding scheme) satisfying d_2(X^n, X̂^n) ≤ D automatically satisfies d_1(X^n, X̂^n) ≤ D . Hence

        R_1(D) ≤ R_2(D).

(b) As in Problem 10.19,

        R_2(D) = max(R_1(D), R_2(D)) ≤ R(D, D),

    where R(D, D) is the minimum rate achieving both distortion criteria.
    For the other direction of the inequality, repeat the argument used in part (a): if we use the rate R_2(D) and the optimal coding scheme for d_2 with X̂_1 = X̂_2 = X̂ , we satisfy both distortion constraints, since d_1(X^n, X̂^n) ≤ d_2(X^n, X̂^n) ≤ D . This implies that R_2(D) is achievable, so R_2(D) ≥ R(D, D) . Hence the minimum rate is R = R_2(D) .
Chapter 11
Information Theory and Statistics
1. Chernoff–Stein lemma. Consider the two-hypothesis test

    H_1 : f = f_1   vs.   H_2 : f = f_2 .

Find D(f_1 ‖ f_2) if
(a) fi (x) = N (0, σi2 ), i = 1, 2
(b) fi (x) = λi e−λi x , x ≥ 0, i = 1, 2
(c) f1 (x) is the uniform density over the interval [0,1] and f 2 (x) is the uniform density
over [a, a + 1] . Assume 0 < a < 1.
(d) f1 corresponds to a fair coin and f 2 corresponds to a two-headed coin.
Solution: Stein’s lemma.
(a) f_1 = N(0, σ_1²) , f_2 = N(0, σ_2²) :

        D(f_1‖f_2) = ∫_{−∞}^{∞} f_1(x) [ (1/2) ln(σ_2²/σ_1²) − x²/(2σ_1²) + x²/(2σ_2²) ] dx   (11.1)
                   = (1/2) [ ln(σ_2²/σ_1²) + σ_1²/σ_2² − 1 ].                    (11.2)

(b) f_1 = λ_1 e^{−λ_1 x} , f_2 = λ_2 e^{−λ_2 x} :

        D(f_1‖f_2) = ∫_0^{∞} f_1(x) [ ln(λ_1/λ_2) − λ_1 x + λ_2 x ] dx            (11.3)
                   = ln(λ_1/λ_2) + λ_2/λ_1 − 1.                                   (11.4)

(c) f_1 = U[0, 1] , f_2 = U[a, a + 1] :

        D(f_1‖f_2) = ∫_0^1 f_1 ln(f_1/f_2)                                        (11.5)
                   = ∫_0^a f_1 ln ∞ + ∫_a^1 f_1 ln 1                              (11.6)
                   = ∞.                                                            (11.7)

    In this case, the Kullback–Leibler distance of ∞ implies that in a hypothesis test the two distributions will be distinguished with probability 1 for large samples.
(d) f_1 = Bern(1/2) and f_2 = Bern(1) :

        D(f_1‖f_2) = (1/2) ln( (1/2)/1 ) + (1/2) ln( (1/2)/0 ) = ∞.               (11.8)
The implication is the same as in part (c).
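The closed forms in parts (a) and (b) can be cross-checked by numerical integration of f_1 ln(f_1/f_2). The sketch below (Python/numpy, illustrative only; it uses a simple Riemann sum rather than any library quadrature) agrees with (11.2) and (11.4) to several decimal places.

    import numpy as np

    def kl_numeric(f1, f2, xs):
        dx = xs[1] - xs[0]
        p, q = f1(xs), f2(xs)
        return np.sum(p * np.log(p / q)) * dx          # in nats

    # (a) zero-mean Gaussians
    s1, s2 = 1.0, 2.0
    gauss = lambda s: (lambda x: np.exp(-x**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2))
    xs = np.linspace(-30, 30, 400001)
    closed_a = 0.5 * (np.log(s2**2 / s1**2) + s1**2 / s2**2 - 1)
    print(closed_a, kl_numeric(gauss(s1), gauss(s2), xs))

    # (b) exponentials
    l1, l2 = 2.0, 0.5
    expo = lambda lam: (lambda x: lam * np.exp(-lam * x))
    xs = np.linspace(1e-6, 60, 400001)
    closed_b = np.log(l1 / l2) + l2 / l1 - 1
    print(closed_b, kl_numeric(expo(l1), expo(l2), xs))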
2. A relation between D(P ‖ Q) and chi-square. Show that the χ² statistic

    χ² = Σ_x (P(x) − Q(x))² / Q(x)

is (twice) the first term in the Taylor series expansion of D(P ‖ Q) about Q . Thus D(P ‖ Q) = (1/2) χ² + . . .
Suggestion: write P/Q = 1 + (P − Q)/Q and expand the log.
Solution: A relation between D(P 5 Q) and Chi-square.
There are many ways to expand D(P ||Q) in a Taylor series, but when we are expanding
about P = Q , we must get a series in P − Q , whose coefficients depend on Q only.
It is easy to get misled into forming another series expansion, so we will provide two
alternative proofs of this result.
• Expanding the log.
  Writing P/Q = 1 + (P − Q)/Q = 1 + Δ/Q , with P = Q + Δ , we get

    D(P‖Q) = ∫ P ln(P/Q)                                                          (11.9)
           = ∫ (Q + Δ) ln(1 + Δ/Q)                                                (11.10)
           = ∫ (Q + Δ) [ Δ/Q − Δ²/(2Q²) + . . . ]                                 (11.11)
           = ∫ [ Δ + Δ²/Q − Δ²/(2Q) + . . . ].                                    (11.12)

  The integral of the first term is ∫ Δ = ∫ P − ∫ Q = 0 , and hence the first non-zero term in the expansion is

    ∫ Δ²/(2Q) = χ²/2,                                                              (11.13)

  which shows that locally around Q , D(P‖Q) behaves quadratically like χ²/2 .
• By differentiation.
  If we construct the Taylor series expansion of a function f about c , we can write

    f(x) = f(c) + f'(c)(x − c) + f''(c)(x − c)²/2 + . . .                          (11.14)

  Doing the same expansion for D(P‖Q) around the point P = Q , we get

    D(P‖Q)|_{P=Q} = 0,                                                             (11.15)
    D'(P‖Q)|_{P=Q} = ( ln(P/Q) + 1 )|_{P=Q} = 1,                                   (11.16)
    D''(P‖Q)|_{P=Q} = (1/P)|_{P=Q} = 1/Q.                                          (11.17)

  Hence the Taylor series is

    D(P‖Q) = 0 + Σ 1·(P − Q) + Σ (1/2)(P − Q)²/Q + . . .                           (11.18)
           = (1/2) χ² + . . . ,                                                     (11.19)

  since Σ (P − Q) = 0 , and we get χ²/2
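A quick numerical illustration (Python/numpy; illustrative, with a perturbation chosen by us): for P = Q + tΔ with Σ Δ = 0, D(P‖Q) approaches χ²/2 as t → 0, with the ratio of the two tending to 1.

    import numpy as np

    Q = np.array([0.1, 0.2, 0.3, 0.4])
    delta = np.array([0.05, -0.05, 0.1, -0.1])        # perturbation with zero sum
    for t in [0.5, 0.1, 0.01]:
        P = Q + t * delta
        D = np.sum(P * np.log(P / Q))                  # D(P||Q) in nats
        chi2 = np.sum((P - Q)**2 / Q)
        print(f"t = {t:5.2f}   D = {D:.6f}   chi2/2 = {chi2/2:.6f}   ratio = {D/(chi2/2):.4f}")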
3. Error exponent for universal codes. A universal source code of rate R achieves
a probability of error P_e^{(n)} ≐ e^{−nD(P*‖Q)} , where Q is the true distribution and P* achieves min D(P ‖ Q) over all P such that H(P) ≥ R .
(a) Find P* in terms of Q and R .
(b) Now let X be binary. Find the region of source probabilities Q(x), x ∈ {0, 1} , for which rate R is sufficient for the universal source code to achieve P_e^{(n)} → 0 .
Solution: Error exponent for universal codes.
(a) We have to minimize D(p‖q) subject to the constraint H(p) ≥ R . Writing this problem using Lagrange multipliers, we get

        J(p) = Σ p log(p/q) + λ Σ p log p + ν Σ p.                                 (11.20)

    [Figure 11.1: Error exponent for universal codes — P* lies on the manifold H(P) = R , between Q and the uniform distribution U .]

    Differentiating with respect to p(x) and setting the derivative to 0, we obtain

        log(p/q) + 1 + λ log p + λ + ν = 0,                                         (11.21)

    which implies that

        p*(x) = q^μ(x) / Σ_a q^μ(a),                                                (11.22)

    where μ = 1/(1 + λ) is chosen to satisfy the constraint H(p*) = R . We first have to check that the constraint is active, i.e., that we really need equality in the constraint. For this we set λ = 0 , i.e., μ = 1 , and we get p* = q . Hence if q is such that H(q) ≥ R , then the minimizing p* is q . On the other hand, if H(q) < R , then λ ≠ 0 , and the constraint must be satisfied with equality.
    Geometrically it is clear that there will be two solutions of the form (11.22) which have H(p*) = R , corresponding to the minimum and maximum distance to q on the manifold H(p) = R . It is easy to see that for 0 ≤ μ ≤ 1 , p*_μ(x) lies on the geodesic from q to the uniform distribution, so the minimum lies in this region of μ . The maximum corresponds to negative μ , which lies on the other side of the uniform distribution, as in the figure.
(b) For a universal code with rate R , any source can be transmitted by the code if H(q) < R . In the binary case, this corresponds to q ∈ [0, h^{−1}(R)) ∪ (1 − h^{−1}(R), 1] , where h is the binary entropy function.
4. Sequential projection. We wish to show that projecting Q onto P_1 and then projecting the projection Q̂ onto P_1 ∩ P_2 is the same as projecting Q directly onto P_1 ∩ P_2 .
Let P_1 be the set of probability mass functions on X satisfying

    Σ_x p(x) = 1,                                                                  (11.23)
    Σ_x p(x) h_i(x) ≥ α_i ,   i = 1, 2, . . . , r.                                 (11.24)

Let P_2 be the set of probability mass functions on X satisfying

    Σ_x p(x) = 1,                                                                  (11.25)
    Σ_x p(x) g_j(x) ≥ β_j ,   j = 1, 2, . . . , s.                                 (11.26)

Suppose Q ∉ P_1 ∪ P_2 . Let P* minimize D(P ‖ Q) over all P ∈ P_1 . Let R* minimize D(R ‖ Q) over all R ∈ P_1 ∩ P_2 . Argue that R* minimizes D(R ‖ P*) over all R ∈ P_1 ∩ P_2 .
Solution: Sequential Projection.
P_1 is defined by the constraints {h_i} and P_2 by the constraints {g_j} . Hence P_1 ∩ P_2 is defined by the union of the constraints.
We will assume that all the constraints are active. In this case, from the parametric form of the distribution that minimizes D(p‖q) subject to equality constraints (as derived in the first homework), we have

    p*(x) = arg min_{p ∈ P_1} D(p‖q)                                               (11.27)
          = c_1 q(x) e^{Σ_{i=1}^r λ_i h_i(x)},                                     (11.28)

    r*(x) = arg min_{p ∈ P_1 ∩ P_2} D(p‖q)                                         (11.29)
          = c_2 q(x) e^{Σ_{i=1}^r λ'_i h_i(x) + Σ_{j=1}^s ν'_j g_j(x)},            (11.30)

where the constants are chosen so as to satisfy the constraints. Now when we project p* onto P_1 ∩ P_2 , we get

    p**(x) = arg min_{p ∈ P_1 ∩ P_2} D(p‖p*)                                       (11.31)
           = c_3 p*(x) e^{Σ_j ν_j g_j(x)}                                          (11.32)
           = c_3 c_1 q(x) e^{Σ_j ν_j g_j(x) + Σ_i λ_i h_i(x)},                     (11.33)

which is of the same form as r* . Since the constants are chosen to satisfy the constraints in both cases, we will obtain the same constants, and hence

    r*(x) = p**(x)   for all x.                                                     (11.34)

Hence sequential projection is equivalent to direct projection, and r* minimizes D(r‖p*) over all r ∈ P_1 ∩ P_2 .
An alternative proof is to use the fact (proved in the first homework) that for any set E determined by constraints of the type in the problem,

    D(p‖p*) + D(p*‖q) = D(p‖q),   for all p ∈ E,                                   (11.35)

where p* is the distribution in E that is closest to q . Let p** be the projection of p* onto P_1 ∩ P_2 . Then for every element p of P_1 ∩ P_2 ,

    D(p‖p*) + D(p*‖q) = D(p‖q).                                                     (11.36)

Taking the minimum of both sides over p ∈ P_1 ∩ P_2 , we see that the same p must simultaneously minimize both sides, i.e.,

    p** = r*.                                                                        (11.37)
5. Counting. Let X = {1, 2, . . . , m} . Show that the number of sequences x n ∈ X n
satisfying (1/n) Σ_{i=1}^n g(x_i) ≥ α is approximately equal to 2^{nH*} , to first order in the exponent, for n sufficiently large, where

    H* = max_{P : Σ_{i=1}^m P(i) g(i) ≥ α} H(P).                                   (11.38)
Solution: Counting. We wish to count the number of sequences satisfying a certain property. Instead of directly counting the sequences, we will calculate the probability of the set under a uniform distribution. Since the uniform distribution puts a probability of 1/m^n on every sequence of length n , we can count the sequences by multiplying the probability of the set by m^n .
The probability of the set can be calculated easily from Sanov's theorem. Let Q be the uniform distribution, and let E be the set of sequences of length n satisfying (1/n) Σ_i g(x_i) ≥ α . Then by Sanov's theorem,

    Q^n(E) ≐ 2^{−nD(P*‖Q)},                                                         (11.39)

where P* is the type in E that is closest to Q . Since Q is the uniform distribution, D(P‖Q) = log m − H(P) , and therefore P* is the type in E with maximum entropy. Therefore, if we let

    H* = max_{P : Σ_{i=1}^m P(i) g(i) ≥ α} H(P),                                    (11.40)

we have

    Q^n(E) ≐ 2^{−n(log m − H*)}.                                                     (11.41)

Multiplying this by m^n to find the number of sequences in this set, we obtain

    |E| ≐ 2^{−n log m} 2^{nH*} m^n = 2^{nH*}.                                        (11.42)
6. Biased estimates may be better. Consider the problem of estimating µ and σ 2
from n samples of data drawn i.i.d. from a N (µ, σ 2 ) distribution.
(a) Show that Xn is an unbiased estimator of µ .
(b) Show that the estimator

        S_n² = (1/n) Σ_{i=1}^n (X_i − X̄_n)²                                         (11.43)

    is biased and the estimator

        S_{n−1}² = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄_n)²                                (11.44)

    is unbiased.
(c) Show that S_n² has a lower mean squared error than S_{n−1}² . This illustrates the
idea that a biased estimator may be “better” than an unbiased estimator for the
same parameter.
Solution: Biased estimates may be better.
(a) Let X̄_n = (1/n) Σ_{i=1}^n X_i . Then EX̄_n = (1/n) Σ_i EX_i = μ . Thus X̄_n is an unbiased estimator of μ .
(b) Before we compute the expected value of S_n² , we first compute the variance of X̄_n . By the independence of the X_i 's, we have

        var(X̄_n) = (1/n²) Σ_i var(X_i) = σ²/n.                                      (11.45)

    Also, we will need to calculate

        E(X_i − μ)(X̄_n − μ) = E (X_i − μ) (1/n) Σ_j (X_j − μ)                       (11.46)
                             = (1/n) E(X_i − μ)² + (1/n) Σ_{j≠i} E(X_i − μ)(X_j − μ) (11.47)
                             = σ²/n,                                                  (11.48)

    since by independence X_i and X_j are uncorrelated and therefore E(X_i − μ)(X_j − μ) = 0 .
    Therefore, if we let

        W = Σ_i (X_i − X̄_n)² = Σ_i [ (X_i − μ) − (X̄_n − μ) ]²,                     (11.49)

    we have

        EW = Σ_i E(X_i − μ)² − 2 Σ_i E(X_i − μ)(X̄_n − μ) + n E(X̄_n − μ)²           (11.50)
           = nσ² − 2n σ²/n + n σ²/n                                                   (11.51)
           = (n − 1)σ².                                                               (11.52)

    Thus

        S_n² = (1/n) Σ_i (X_i − X̄_n)² = W/n                                          (11.53)

    has ES_n² = ((n − 1)/n) σ² , and it is therefore a biased estimator of σ² , and

        S_{n−1}² = (1/(n − 1)) Σ_i (X_i − X̄_n)² = W/(n − 1)                          (11.54)

    has expected value σ² and is therefore an unbiased estimator of σ² .
(c) This involves a lot of algebra. We will need the following properties of the normal distribution: the third central moment is 0 and the fourth central moment is 3σ⁴ , and therefore

        EX_i = μ,   EX_i² = μ² + σ²,   EX_i³ = μ³ + 3σ²μ,   EX_i⁴ = μ⁴ + 6μ²σ² + 3σ⁴,
        E(X_i − μ)³ = 0,   E(X_i − μ)⁴ = 3σ⁴.                                         (11.55–11.60)

    We also know that T = X̄_n ∼ N(μ, σ²/n) , and we have the corresponding results for T :

        ET = μ,   ET² = μ² + σ²/n,   ET³ = μ³ + 3(σ²/n)μ,   ET⁴ = μ⁴ + 6μ²σ²/n + 3σ⁴/n²,
        E(T − μ)³ = 0,   E(T − μ)⁴ = 3σ⁴/n².                                           (11.61–11.66)

    Also,

        E X_i T = (1/n) EX_i² + (1/n) Σ_{j≠i} EX_i X_j = μ² + σ²/n,                    (11.67)

        E X_i² T² = E (1/n²) X_i² ( Σ_j X_j² + 2 Σ_{j<k} X_j X_k )                     (11.68)
                  = E (1/n²) ( X_i⁴ + Σ_{j≠i} X_i² X_j² + 2 Σ_{j,k: j≠i, k≠i, j<k} X_i² X_j X_k + 2 Σ_{k≠i} X_i³ X_k )   (11.69)
                  = (1/n²) [ μ⁴ + 6μ²σ² + 3σ⁴ + (n − 1)(μ² + σ²)² + (n − 1)(n − 2)(μ² + σ²)μ² + 2(n − 1)μ(μ³ + 3σ²μ) ]   (11.70)
                  = [ n²μ⁴ + (n² + 5n)μ²σ² + (n + 2)σ⁴ ] / n².                          (11.71)

    Now we are in a position to calculate EW² . We first rewrite

        W = Σ_i (X_i − T)²                                                              (11.72)
          = Σ_i X_i² − 2 Σ_i X_i T + nT²                                                (11.73)
          = Σ_i X_i² − 2nT·T + nT²                                                      (11.74)
          = Σ_i X_i² − nT².                                                             (11.75)

    Thus

        W² = ( Σ_i X_i² − nT² )² = Σ_i X_i⁴ + 2 Σ_{i<j} X_i² X_j² + n²T⁴ − 2n Σ_i X_i² T²,   (11.76)

    and therefore

        EW² = n(μ⁴ + 6σ²μ² + 3σ⁴) + n(n − 1)(μ² + σ²)² + n²( μ⁴ + 6μ²σ²/n + 3σ⁴/n² )
              − 2 [ n²μ⁴ + (n² + 5n)μ²σ² + (n + 2)σ⁴ ]                                   (11.77)
            = (n² − 1)σ⁴.                                                                 (11.78)

    Now we can calculate the mean squared error of the two estimators. The error of S_{n−1}² = W/(n − 1) is

        E(S_{n−1}² − σ²)² = E[ W²/(n − 1)² − 2 σ² W/(n − 1) + σ⁴ ]                        (11.79)
                          = (n² − 1)σ⁴/(n − 1)² − 2 σ² (n − 1)σ²/(n − 1) + σ⁴             (11.80)
                          = 2σ⁴/(n − 1).                                                   (11.81)

    The error of S_n² = W/n is

        E(S_n² − σ²)² = E[ W²/n² − 2 σ² W/n + σ⁴ ]                                        (11.82)
                      = (n² − 1)σ⁴/n² − 2 (n − 1)σ⁴/n + σ⁴                                (11.83)
                      = (2n − 1)σ⁴/n².                                                     (11.84)

    Since (2n − 1)/n² is less than 2/(n − 1) for all positive n , we see that S_n² has a lower mean squared error than S_{n−1}² .
    In fact, if we let the estimator of σ² be cW , then we can easily calculate the expected error of the estimator to be

        E(cW − σ²)² = σ⁴ [ (n² − 1)c² − 2(n − 1)c + 1 ],                                   (11.85)

    which is minimized for c = 1/(n + 1) . Thus neither the best unbiased estimator ( c = 1/(n − 1) ) nor the maximum likelihood estimator ( c = 1/n ) produces the minimum mean squared error.
7. Fisher information and relative entropy. Show for a parametric family {p θ (x)}
that
1
1
lim
D(pθ ||pθ% ) =
J(θ).
(11.86)
θ % →θ (θ − θ ' )2
ln 4
Solution: Fisher information and relative entropy. Let t = θ ' − θ . Then
1
1 !
pθ (x)
1
%) =
D(p
||p
D(p
||p
)
=
pθ (x) ln
.
θ
θ
θ
θ+t
'
2
2
2
(θ − θ )
t
t ln 2 x
pθ+t (x)
Let
f (t) = pθ (x) ln
pθ (x)
.
pθ+t (x)
(11.87)
(11.88)
We will suppress the dependence on x and expand f (t) in a Taylor series in t . Thus
f ' (t) = −
pθ dpθ+t
,
pθ+t dt
(11.89)
283
Information Theory and Statistics
and
pθ
f (t) = 2
pθ+t
''
*
dpθ+t
dt
+2
pθ d2 pθ+t
.
pθ+t dt2
+
(11.90)
Thus expanding in the Taylor series around t = 0 , we obtain
f (t) = f (0) + f ' (0)t + f '' (0)
where f (0) = 0 ,
f ' (0) = −
and
Now
$
x pθ (x)
(11.91)
3
dpθ
pθ dpθ+t 33
=
3
pθ dt t=0
dθ
1
f (0) =
pθ
''
t2
+ O(t3 ),
2
*
dpθ
dθ
+2
+
(11.92)
d2 pθ
dθ 2
(11.93)
= 1 , and therefore
! dpθ (x)
=
d
1 = 0,
dt
(11.94)
! d2 pθ (x)
=
d
0 = 0.
dθ
(11.95)
dθ
x
and
dθ 2
x
Therefore the sum of the terms of (11.92) sum to 0 and the sum of the second terms in
(11.93) is 0.
Thus substituting the Taylor expansions in the sum, we obtain
1 !
pθ (x)
1
D(pθ ||pθ% ) = 2
pθ (x) ln
'
2
(θ − θ )
t ln 2 x
pθ+t (x)
=
=
=
and therefore
1
t2 ln 2
.
0+
! dpθ (x)
x
dθ
t+
*
dpθ(x)
1 ! 1
2 ln 2 x pθ (x)
dθ
1
J(θ) + O(t)
ln 4
+2
!
x
.
1
pθ
*
dpθ
dθ
+2
(11.96)
d2 pθ
+
dθ 2
+ O(t)
/
/
t2
+ O(t3 )
2
(11.97)
(11.98)
(11.99)
1
1
D(pθ ||pθ% ) =
J(θ).
'
2
θ →θ (θ − θ )
ln 4
lim
%
(11.100)
8. Examples of Fisher information. The Fisher information J(Θ) for the family
fθ (x), θ ∈ R is defined by
J(θ) = Eθ
*
∂fθ (X)/∂θ
fθ (X)
+2
=
2
Find the Fisher information for the following families:
%
(fθ )2
fθ
284
Information Theory and Statistics
(a) fθ (x) = N (0, θ) =
2
x
√ 1 e− 2θ
2πθ
(b) fθ (x) = θe−θx , x ≥ 0
(c) What is the Cramèr Rao lower bound on E θ (θ̂(X)−θ)2 , where θ̂(X) is an unbiased
estimator of θ for (a) and (b)?
Solution: Examples of Fisher information.
(a) fθ (x) = N (0, θ) =
2
x
√ 1 e− 2θ
2πθ
, and therefore
x2
x2
1 1
x2 1
fθ' = − √
e− 2θ + 2 √
e− 2θ ,
2 2πθ 3
2θ
2πθ
and
fθ'
=
fθ
.
x2
1
− + 2
2θ 2θ
/
(11.101)
(11.102)
.
Therefore the Fisher information,
*
+
ft' h 2
J(th) = Eθ
fh
/
. t
1 x2
x4
1
−2
+
= Eθ
4θ 2
2θ 2θ 2 4θ 4
=
=
1
1 θ
3θ 2
−
2
+
4θ 2
θ 2θ 2 4θ 4
1
,
2θ 2
(11.103)
(11.104)
(11.105)
(11.106)
using the “well-known” or easily verified fact that for a normal N (0, θ) distribution, the fourth moment is 3θ 2 .
(b) fθ (x) = θe−θx , x ≥ 0 , and therefore ln fθ = ln θ − θx , and
d ln fθ
1
= − x,
dθ
θ
(11.107)
and therefore
*
+
d ln fθ 2
dθ
*
+
1
1
2
= Eθ
−
2
x
+
x
θ2
θ
1
11 1
1
=
−2
+ + 2
2
θ
θθ θ θ
1
=
θ
J(θ) = Et h
using the fact that for an exponential distribution, EX =
(11.108)
(11.109)
(11.110)
(11.111)
1
θ
, and EX 2 =
1
θ
+ θ12 .
285
Information Theory and Statistics
(c) The Cramer-Rao lower bound is the reciprocal of the Fisher information, and is
therefore 2θ 2 and θ for parts (a) and (b) respectively.
9. Two conditionally independent looks double the Fisher information. Let
gθ (x1 , x2 ) = fθ (x1 )fθ (x2 ) . Show Jg (θ) = 2Jf (θ) .
Solution: Two conditionally independent looks double the Fisher information. We can
simply use the same arguments as in Section 12.11 in the text. We define the score
∂
function V (Xi ) = ∂θ
ln fθ (xi ) . Then the score functions are independent mean zero
random variables and since the Fisher information of g is the variance of the sum of the
score functions, it is the sum of the individual variances. Thus the Fisher information
of g is twice the Fisher information of f .
10. Joint distributions and product distributions. Consider a joint distribution
Q(x, y) with marginals Q(x) and Q(y) . Let E be the set of types that look jointly
typical with respect to Q , i.e.,
E = {P (x, y) : −
−
−
!
x,y
P (x, y) log Q(x) − H(X) = 0,
x,y
P (x, y) log Q(y) − H(Y ) = 0,
x,y
P (x, y) log Q(x, y) − H(X, Y ) = 0}.
!
!
(11.112)
(a) Let Q0 (x, y) be another distribution on X × Y . Argue that the distribution P ∗
in E that is closest to Q0 is of the form
P ∗ (x, y) = Q0 (x, y)eλ0 +λ1 log Q(x)+λ2 log Q(y)+λ3 log Q(x,y) ,
(11.113)
where λ0 , λ1 , λ2 and λ3 are chosen to satisfy the constraints. Argue that this
distribution is unique.
(b) Now let Q0 (x, y) = Q(x)Q(y) . Verify that Q(x, y) is of the form (11.113) and
satisfies the constraints. Thus P ∗ (x, y) = Q(x, y) , i.e., the distribution in E
closest to the product distribution is the joint distribution.
Solution: Joint distributions and product distributions.
(a) This result follows directly from Problem 2 in Chapter 11. We will not repeat the
arguments.
(b) If we let λ0 = 0 , λi = −1 , λ2 = −1 , and λ3 = 1 , then
P ∗ (x, y) = Q0 (x, y)eλ0 +λ1 log Q(x)+λ2 log Q(y)+λ3 log Q(x,y)
1
1
= Q(x)Q(y)
Q(x, y)
Q(x) Q(y)
= Q(x, y)
(11.114)
(11.115)
(11.116)
and therefore Q(x, y) is of the form that minimizes the relative entropy. It is easy
to verify that Q(x, y) trivially satisfies the constraints involved in the definition
286
Information Theory and Statistics
of the set E . Therefore the joint distribution is the distribution that looks jointly
typical and is closest to the product distribution.
11. Cramer-Rao inequality with a bias term. Let X ∼ f (x; θ) and let T (X) be an
estimator for θ . Let bT (θ) = Eθ T − θ be the bias of the estimator. Show that
E(T − θ)2 ≥
[1 + b'T (θ)]2
+ b2T (θ).
J(θ)
(11.117)
Solution: Cramer-Rao inequality with a bias term. The proof parallels the proof without the bias term (Theorem 12.11.1). We will begin with the calculation of E θ (V T ) ,
where V is the score function and T is the estimator.
E(V T ) =
2
∂
∂θ f (x; θ)
T (x)f (x; θ) dx
f (x; θ)
2
∂
=
f (x; θ)T (x) dx
∂θ
2
∂
=
f (x; θ)T (x) dx
∂θ
∂
=
Eθ T
∂θ
∂
=
(bT (θ) + θ)
∂θ
= b'T (θ) + 1
(11.118)
(11.119)
(11.120)
(11.121)
(11.122)
(11.123)
By the Cauchy-Schwarz inequality, we have
(E [(V − EV )(T − ET )])2 ≤ E(V − EV )2 E(T − ET )2 .
(11.124)
Also, EV = 0 and therefore E(V − EV )(T − ET ) = E(V T ) . Also, by definition,
var(V ) = J(θ) . Thus we have
,
-2
b'T (θ) + 1
Now
≤ J(θ)E(T − θ − bT (θ))2 .
E(T − θ − bT (θ))2 = E(T − θ)2 + b2T (θ) − 2E(T − θ)bT (θ)
= E(T − θ) +
2
= E(T − θ) −
2
b2T (θ) −
b2T (θ).
2b2T (θ)
(11.125)
(11.126)
(11.127)
(11.128)
Substituting this in the Cauchy Schwarz inequality, we have the desired result
E(T − θ)2 ≥
[1 + b'T (θ)]2
+ b2T (θ).
J(θ)
(11.129)
287
Information Theory and Statistics
12. Hypothesis testing. Let X1 , X2 , . . . , Xn be i.i.d. ∼ p(x) . Consider the hypothesis
test H1 : p = p1 versus H2 : p = p2 . Let
p1 (x) =



1
2,
1
4,
1
4,
x = −1
x=0
x=1
p2 (x) =



1
4,
1
4,
1
2,
x = −1
x=0
x=1 .
and




(a) Find the error exponent for Pr{ Decide H 2 |H1 true } in the best hypothesis test
of H1 vs. H2 subject to Pr{ Decide H1 |H2 true } ≤ 21 .
Solution: Hypothesis testing
By the Chernoff-Stein lemma, the error exponent in this hypothesis test is the exponent
for probability of the acceptance region for H 2 given P1 , which is
D(P2 ||P1 ) =
1
log
4
1
4
1
2
+
1
log
4
1
4
1
4
+
1
log
2
1
2
1
4
= 0.25
(11.130)
n
Thus the probability of error will go to 0 as 2 − 4 .
13. Sanov’s theorem: Prove the simple version of Sanov’s theorem for the binary random
variables, i.e., let X1 , X2 , . . . , Xn be a sequence of binary random variables, drawn i.i.d.
according to the distribution:
Pr(X = 1) = q,
Pr(X = 0) = 1 − q.
(11.131)
Let the proportion of 1’s in the sequence X 1 , X2 , . . . , Xn be pX , i.e.,
pX n =
n
1!
Xi .
n i=1
(11.132)
By the law of large numbers, we would expect p X to be close to q for large n . Sanov’s
theorem deals with the probability that p X n is far away from q . In particular, for
concreteness, if we take p > q > 21 , Sanov’s theorem states that
1
p
1−p
− log Pr {(X1 , X2 , . . . , Xn ) : pX n ≥ p} → p log +(1−p) log
= D((p, 1−p)||(q, 1−q)).
n
q
1−q
(11.133)
Justify the following steps:
•
Pr {(X1 , X2 , . . . , Xn ) : pX ≥ p} ≤
n
!
i=$np%
. /
n i
q (1 − q)n−i
i
(11.134)
288
Information Theory and Statistics
• Argue that the term corresponding to i = *np+ is the largest term in the sum on
the right hand side of the last equation.
• Show that this term is approximately 2−nD .
• Prove an upper bound on the probability in Sanov’s theorem using the above steps.
Use similar arguments to prove a lower bound and complete the proof of Sanov’s
theorem.
Solution: Sanov’s theorem
• Since nX n has a binomial distribution, we have
Pr(nX n = i) =
. /
n i
q (1 − q)n−i
i
and therefore
Pr {(X1 , X2 , . . . , Xn ) : pX ≥ p} ≤
•
n
!
i=$np%
(11.135)
. /
n i
q (1 − q)n−i
i
n - i+1
(1 − q)n−i−1
n−i q
i+1 q
,n=
i (1 − q)n−i
q
i+11−q
i
1−q
n−i
i+1 < q ,i.e., if i > nq − (1 − q) . Thus
Pr(nX n = i + 1)
=
Pr(nX n = i)
,
This ratio is less than 1 if
of the terms occurs when i = *np+ .
• From Example 11.1.3,
.
/
n
.
=2nH(p)
*np+
(11.136)
(11.137)
the maximum
(11.138)
and hence the largest term in the sum is
.
/
n
q $np% (1−q)n−$np% = 2n(−p log p−(1−p) log(1−p))+np log q+n(1−p) log(1−q) = 2−nD(p||q)
*np+
(11.139)
• From the above results, it follows that
Pr {(X1 , X2 , . . . , Xn ) : pX ≥ p} ≤
n
!
i=$np%
. /
n i
q (1 − q)n−i
i
.
(11.140)
/
n
≤ (n − *np+)
q i (1 − q)n−i(11.141)
*np+
≤ (n(1 − p) + 1)2−nD(p||q)
(11.142)
where the second inequality follows from the fact that the sum is less than the
largest term times the number of terms. Taking the logarithm and dividing by n
and taking the limit as n → ∞ , we obtain
lim
n→∞
1
log Pr {(X1 , X2 , . . . , Xn ) : pX ≥ p} ≤ −D(p||q)
n
(11.143)
289
Information Theory and Statistics
Similarly, using the fact the sum of the terms is larger than the largest term, we
obtain
n
!
Pr {(X1 , X2 , . . . , Xn ) : pX ≥ p} ≥
i=0np1
.
≥
. /
(11.144)
/
(11.145)
n i
q (1 − q)n−i
i
n
q i (1 − q)n−i
2np3
≥ 2−nD(p||q)
(11.146)
and
1
log Pr {(X1 , X2 , . . . , Xn ) : pX ≥ p} ≥ −D(p||q)
(11.147)
n
Combining these two results, we obtain the special case of Sanov’s theorem
lim
n→∞
1
log Pr {(X1 , X2 , . . . , Xn ) : pX ≥ p} = −D(p||q)
n→∞ n
lim
14. Sanov. Let Xi be i.i.d. ∼ N (0, σ 2 ).
(11.148)
$
(a) Find the exponent in the behavior of Pr{ n1 ni=1 Xi2 ≥ α2 }. This can be done from
first principles (since the normal distribution is nice) or by using Sanov’s theorem.
(b) What does the data look like if
minimizes D(P 5 Q) ?
1
n
$n
2
i=1 Xi
≥ α ? That is, what is the P ∗ that
Solution: Sanov
$
(a) From the properties of the normal distribution, we know that
Xi2 has a χ2
distribution with n degrees of freedom, and we can directly calculate
*
1! 2
Pr
Xi ≥ α2
n
+
0
= Pr χ2n ≥ nα2
=
2
Γ( n2 , nα2 )
Γ( n2 )
1
(11.149)
(11.150)
However, using Sanov’s theorem, we know that the probability of the set
*
1
1! 2
− log Pr
Xi ≥ α2
n
n
+
= D(P ∗ ||Q),
(11.151)
where P ∗ is the distribution that satisfies the constraint that is closest to Q . In
this case,
D(P ||Q) =
2
f (x) ln
f (x)
2
x
√ 1
e− 2σ2
2
2πeσ
2
2
√
x2
2
= −H(f ) + f (x) ln 2πeσ + f (x) 2
2σ
2]
√
E[X
= −H(f ) + ln 2πeσ 2 +
2σ 2
(11.152)
(11.153)
(11.154)
290
Information Theory and Statistics
and hence the distribution that minimizes the relative entropy is the distribution
that maximizes the entropy subject to the expected square constraint. Using the
results of the maximum entropy chapter (Chapter 12), we can see the the maximum
2
entropy distribution is of the form f (x) ∼ Ce −βx , i.e., the maximum entropy
distribution is a normal distribution. Thus the P ∗ that minimizes relative entropy
$
subject to the constraint n1 Xi2 ≥ α2 is the N (0, α2 ) distribution. Substituting
this distribution back into the expression for relative entropy, we obtain
√
E[X 2 ]
D(P ∗ ||Q) = −H(N (0, α2 )) − ln 2πeσ 2 +
2σ 2
1
1
α2
=
log(2πeα2 ) − log(2πeσ 2 ) + 2
2
2
2σ
α2 1 α2
1
log 2 +
=
2
σ
2 σ2
(11.155)
(11.156)
(11.157)
(b) From the above calculation and the conditional limit theorem, the distribution of
the data conditional on the constraint is P ∗ , which is N (0, α2 ) .
15. Counting states.
Suppose an atom is equally likely to be in each of 6 states, X ∈ {s 1 , s2 , s3 , . . . , s6 } .
One observes n atoms X1 , X2 , . . . , Xn independently drawn according to this uniform
distribution. It is observed that the frequency of occurrence of state s 1 is twice the
frequency of occurrence of state s 2 .
(a) To first order in the exponent, what is the probability of observing this event?
(b) Assuming n large, find the conditional distribution of the state of the first atom
X1 , given this observation.
Solution: Counting states
(a) Using Sanov’s theorem, we need to determine the distribution P ∗ that is closest
to the uniform with p1 = 2p2 , which is the empirical constraint. We need to
minimize
!
D(P ||Q) =
pi log 6pi
(11.158)
subject to the constraints
J(P ) =
$
!
pi = 1 and p1 − 2p2 = 0 . Setting up the functional
pi log 6pi + λ1
!
pi + λ2 (p1 − 2p2 )
(11.159)
Differentiating with respect to p i and setting to 0, we obtain
log 6p1 + 1 + λ1 + λ2 = 0
(11.160)
log 6p2 + 1 + λ1 − 2λ2 = 0
(11.161)
log 6pi + 1 + λ1 = 0,
i = 3, 4, 5, 6
(11.162)
291
Information Theory and Statistics
Thus
p1 = c1 2−λ2
(11.163)
p2 = c1 2
(11.164)
2λ2
pi = c1 ,
i = 3, 4, 5, 6
(11.165)
where c1 = 16 2−(1+λ1 ) . Since p1 = 2p2 , we obtain λ2 = − 13 log 2 . c1 should be
$
1
2
chosen so that
pi = 1 , which in turn implies that c1 = 1/(2 3 + 2− 3 + 4) =
1/5.889 , and the corresponding distribution is (0.213,0.107,0.17,0.17,0.17,0.17),
and the relative entropy distance is 0.0175. Thus the first order probability that
this event happens is 2−0.0175n .
(b) When the event ( p1 = 2p2 ) happens, the conditional distribution is close to P ∗ =
(0.213, 0.107, 0.17, 0.17, 0.17, 0.17) .
16. Hypothesis testing
Let {Xi } be i.i.d. ∼ p(x) , x ∈ {1, 2, . 0
. .}1. Consider two hypotheses H 0 : p(x) = p0 (x)
x
vs. H1 : p(x) = p1 (x) , where p0 (x) = 21 , and p1 (x) = qpx−1 , x = 1, 2, 3, . . .
(a) Find D(p0 5 p1 ) .
(b) Let Pr{H0 } = 12 . Find the minimal probability of error test for H 0 vs. H1 given
data X1 , X2 , . . . , Xn ∼ p(x) .
Solution: Hypothesis testing
(a)
D(p0 ||p1 ) =
=
!
x
! * 1 +x
=
=
=
=
p0 (x)
p1 (x)
0 1x
1
2
qpx−1
log
2
** +x +
! * 1 +x
1
p
log
2
2p
q
x
*
+
*
+
x
! 1
! * 1 +x
1
p
x log
+
log
2
2p
2
q
x
x
* + ! * +x
*
+
1
1
p! 1 x
log
x + log
2p x 2
q x 2
* +
1
p
2 log
+ log
2p
q
− log(4pq)
x
=
p0 (x) log
(11.166)
(11.167)
(11.168)
(11.169)
(11.170)
(11.171)
(11.172)
292
Information Theory and Statistics
(b) In the Bayesian setting, the minimum probability of error exponent is given by
the Chernoff information
-
C(P1 , P2 )= − min log
0≤λ≤1
Now
!
P1λ (x)P21−λ (x)
.
!
/
P1λ (x)P21−λ (x)
x
! * 1 +xλ * q +1−λ
=
x
x
2
p
* +1−λ ! .
q
p
=
x
=
* +1−λ
=
q 1−λ pλ
2λ pλ − p
q
p
1
p1−λ
2λ
.
(11.173)
px(1−λ)
(11.174)
/x
(11.175)
p1−λ
2λ
1−λ
− p 2λ
(11.176)
(11.177)
To find the mimimum of this over λ , we differentiate the logarithm of this with
respect to λ , and obtain
− log q + log p −
1
(2p)λ log 2p = 0
(2p)λ − p
(11.178)
Solving for λ from this equation and substituting this into the definition of Chernoff information will provide us the answer.
17. Maximum likelihood estimation. Let {f θ (x)} denote a parametric family of densities with parameter θ%R . Let X1 , X2 , . . . , Xn be i.i.d. ∼ fθ (x) . The function
lθ (x ) = ln
n
.
n
B
/
fθ (xi )
i=1
is known as the log likelihood function. Let θ 0 denote the true parameter value.
(a) Let the expected log likelihood be
Eθ0 lθ (X n ) =
2
(ln
n
B
i=1
fθ (xi ))
n
B
fθ0 (xi )dxn ,
i=1
and show that
Eθ0 (l(X n )) = (−h(fθ0 ) − D(fθ0 ||fθ ))n .
(b) Show that the maximum over θ of the expected log likelihood is achieved by
θ = θ0 .
Solution: Maximum likelihood
This problem is the continuous time analog of the cost of miscoding.
293
Information Theory and Statistics
(a) Let us denote
fθ (x ) =
n
.
Eθ0 lθ (X ) =
−
=
2
(log
n
B
fθ (xi ))
i=1
2
/
fθ (xi )
(11.179)
fθ0 (xi )dxn ,
(11.180)
i=1
Then if
n
n
B
n
B
i=1
fθ0 (xn ) log fθ (xn ) dxn
2
(11.181)
ln fθ0 (xn ) log fθ0 (xn ) dxn +
2
fθ0 (xn ) log
= −h(fθ0 (xn )) − D(fθ0 (xn )||fθ (xn ))
fθ (xn )
n
dx
(11.182)
fθ0 (xn )
(11.183)
= −n(h(fθ0 (x)) − D(fθ0 (x)||fθ (x)))
(11.184)
(b) From the non-negativity of relative entropy, it follows from the last equation that
the maximum value of the likelihood occurs when
D(fθ0 (x)||fθ (x)) = 0
(11.185)
or θ = θ0 .
18. Large deviations. Let X1 , X2 , . . . be i.i.d. random variables drawn according to the
geometric distribution
P r{X = k} = pk−1 (1 − p), k = 1, 2, . . .
.
Find good estimates (to first order in the exponent) of
(a) P r{ n1
(b)
$n
i=1 Xi ≥ α} .
$
P r{X1 = k| n1 ni=1 Xi
≥ α} .
(c) Evaluate a) and b) for p = 21 , α = 4 .
Solution: Large deviations
By Sanov’s theorem, the probability is determined by the relative entropy distance to
the closest distribution that satisfies the constraint. Let that distribution be r 1 , r2 , . . . ,
on the integers 1,2, . . . . Then the relative entropy distance to the geometric distribution
is
!
ri
D(r||p) =
ri log i−1
(11.186)
p (1 − p)
$
$
We need to minimize this subject to the constraints,
ri = 1 , and
iri = α . We
have assumed that the constraint is matched with equality without loss of generality.
We set up the functional
J(r) =
!
ri log
ri
i−1
p (1 −
p)
+ λ1
!
ri + λ2
!
iri
(11.187)
294
Information Theory and Statistics
Differentiating with respect to ri and setting to 0, we obtain
log ri − log(pi−1 (1 − p)) + λ1 + λ2 i = 0
(11.188)
ri = pi−1 (1 − p)c1 ci2
(11.189)
or
From the form of the equation for ri , it is clear that r too is a geometric distribution.
$
Since we need to satisfy the constraint
iri = α , it follows that ri is a geometric
1
distribution with parameter 1 − α . Therefore
*
ri = 1 −
1
α
+i−1
1
α
(11.190)
and
D(r||p) =
!
ri log
i
=
!
i
= log
ri log
ri
pi−1 (1 − p)
p 1−
1 − p α1
1
α
(11.191)
*
1
αp
+i
(11.192)
α−1
p
+ α log
(1 − p)(α − 1)
αp
(11.193)
(a)
n
1
1!
p
α−1
− log Pr{
Xi ≥ α} = D(r||p) = log
+ α log
(11.194)
n
n i=1
(1 − p)(α − 1)
αp
(b)
Pr{X1 = k|
*
n
1!
1
Xi ≥ α} = rk = 1 −
n i=1
α
+k−1
1
α
(11.195)
(c) For α = 4 and p = 0.5 , we have D = log 27/16 = 0.755 , and the conditional
distribution of X1 is geometric with mean 4, i.e.
n
1!
Pr{X1 = k|
Xi ≥ α} = rk = 0.75k−1 0.25
n i=1
(11.196)
19. Another expression for Fisher information. Use integration by parts to show
that
∂ 2 ln fθ (x)
J(θ) = −E
.
∂θ 2
Solution: Another expression for Fisher information
295
Information Theory and Statistics
From (11.270), we have
4
5
2
∂
J(θ) = Eθ
ln f (X; θ)
∂θ
4
52
2 ∞
∂
=
f (x; θ)
ln f (x; θ) dx
∂θ
0
2
=
2
=
2
=
∞
f (x; θ)
0
∞
0
0
0
∞*
Now integrating by parts, setting u =
2
∞
f (x; θ)
∂
f (x; θ)
∂θ
∂
∂θ
dv =
we have
0
(11.200)
dx
+"
∂
∂θ f (x; θ)
f (x; θ)
#
∞*
+
∂
f (x; θ) dx
∂θ
2
(11.201)
dx
(11.202)
ln f (x; θ) and dv =
0
2
∂
f (x; θ)
∂θ
2
(11.199)
dx
f (x; θ)
12
(11.198)
0
∂
∂θ f (x; θ)
1
dx , we have
∂ ∞
=
f (x; θ) dx
∂θ 0
∂
1
=
∂θ
= 0
Therefore since
∞*
#2
∂
∂θ f (x; θ)
∂
∂θ f (x; θ)
0
2
"
(11.197)
+"
u dv = uv −
∂
∂θ f (x; θ)
f (x; θ)
#
2
(11.204)
(11.205)
(11.206)
(11.207)
v du
dx = −
2
0
(11.203)
∞
−
∂2
ln; f (x; θ) dx
∂θ 2
(11.208)
20. Stirling’s approximation: Derive a weak form of Stirling’s approximation for factorials, i.e., show that
* +n
* +n
n
n
≤ n! ≤ n
(11.209)
e
e
using the approximation of integrals by sums. Justify the following steps:
ln(n!) =
n−1
!
i=2
ln(i) + ln(n) ≤
and
ln(n!) =
n
!
i=1
ln(i) ≥
2
n−1
ln x dx + ln n = ......
(11.210)
2
2
0
n
ln x dx = ......
(11.211)
296
Information Theory and Statistics
Solution: Stirling’s approximation The basic idea of the proof is find bounds for the
$
sum ni=2 ln i . If we plot the sum as a sum of rectangular areas, as shown in Figure 11.2,
it is not difficult to see that the total area of the rectangles is bounded above by the
integral of the upper curve, and bounded below by the integral of the lower curve.
4
log(i)
log(x)
log(x-1)
3
2
1
0
-1
1
2
3
4
5
6
7
8
9
10
11
Figure 11.2: Upper and lower bounds on log n!
Now consider the upper bound: From the figure, it follows that the sum of the rectangles
starting at 2,3,4, . . . , n − 1 is less than the integral of the upper curve from 2 to n .
Therefore,
ln(n!) = ln n +
n−1
!
ln i + ln 1
(11.212)
ln i
(11.213)
ln(x) dx
(11.214)
= ln n + [x ln x − x]n2
(11.215)
= ln n +
≤ ln n +
i=2
n−1
!
i=2
2 n
2
= ln n + n ln n − n − (2 ln 2 − 2)
= ln n + n ln(n/e) − ln(4/e )
(11.216)
2
(11.217)
,
(11.218)
Therefore, exponentiating, we get
* +n .
n
n! ≤ n
e
e2
4
/
≤ 2n
* +n
n
e
297
Information Theory and Statistics
since e2 /4 = 1.847 < 2 .
For the lower bound, from the figure, it follows that the sum of the areas of the rectanges
starting at 1, 2, . . . , n is less than the integral of the lower curve from 1 to n + 1 .
Therefore
ln(n!) =
≥
=
n
!
ln i
i=1
2 n+1
21n
(11.219)
ln(x − 1) dx
(11.220)
ln(x) dx
(11.221)
= [x ln x − x]n0
(11.222)
0
= n ln n − n − (0 ln 0 − 0)
(11.223)
= n ln(n/e)
(11.224)
Therefore, exponentiating, we get
n! ≥
* +n
n
e
(11.225)
.
, -
21. Asymptotic value of nk . Use the simple approximation of the previous problem to
show that, if 0 ≤ p ≤ 1 , and k = *np+ , i.e., k is the largest integer less than or equal
to np , then
. /
1
n
lim log
n→∞ n
k
= −p log p − (1 − p) log(1 − p) = H(p).
(11.226)
Now let pi , i = 1, . . . , m be a probability distribution on m symbols, i.e., p i ≥ 0 , and
$
i pi = 1 . What is the limiting value of
.
/
1
n
$m−1
log
=
n
*np1 + *np2 + . . . *npm−1 + n − j=0
*npj +
1
n!
log
$
n
*np1 +! *np2 +! . . . *npm−1 +! (n − m−1
j=0 *npj +)!
Solution: Asymptotic value of
Using the bounds
,nk
* +n
n
e
we obtain
. /
1
n
log
n
k
=
(11.227)
≤ n! ≤ n
* +n
n
e
1
(log n! − log k! − log(n − k)!)
n
(11.228)
(11.229)
298
Information Theory and Statistics
.
* +n
1
≤
log n
n
1
=
log n −
n
→ H(p)
* +
*
n
k k
n−k
− log
− log
e
e
e
k
k n−k
n−k
log −
log
n
n
n
n
+n−k /
(11.230)
(11.231)
(11.232)
Similarly, using the same bounds
. /
n
1
log
k
n
=
1
(log n! − log k! − log(n − k)!)
n
.
* +
(11.233)
* +
*
1
n n
k k
n−k
≥
log
− log k
− log(n − k)
n
e
e
e
1
k
k n−k
n−k
= − log k(n − k) − log −
log
n
n
n
n
n
→ H(p)
and therefore
+n−k /
. /
n
1
= H(p)
lim log
n
k
(11.234)
(11.235)
(11.236)
(11.237)
By the same arguments, it is easy to see that
.
1
n
$m−1
lim log
n
*np1 + *np2 + . . . *npm−1 + n − j=0
*npj +
/
= H(p1 , . . . , pm )
(11.238)
22. The running difference.. Let X1 , X2 , . . . , Xn be i.i.d. ∼ Q1 (x) , and Y1 , Y2 , . . . , Yn
$
be i.i.d. ∼ Q2 (y) . Let X n and Y n be independent. Find an expression for Pr{ ni=1 Xi −
$n
i=1 Yi ≥ nt} , good to first order in the exponent. Again, this answer can be left in
parametric form.
Solution: Running difference
The joint distribution of X and Y is Q(x, y) = Q 1 (x)Q2 (y) . The constraint that the
running difference is greater than nt translates to a constraint on the empirical join
distribution, i.e.,
!!
Pn (i, j)(i − j) ≥ t
(11.239)
i
j
∗
By Sanov’s theorem, the probability of this large deviation is 2 −nD to the first order in the exponent, where D ∗ is the minimum relative entropy distance between all
distributions P that satisfy the above constraint and Q(x, y) = Q 1 (x)Q2 (y) , i.e.,
D∗ = $ $ min
i
j
Pn (i,j)(i−j)≥t
D(P ||Q)
(11.240)
299
Information Theory and Statistics
23. Large likelihoods. Let X1 , X2 , . . . be i.i.d. ∼ Q(x) , x ∈ {1, 2, . . . , m} . Let P (x) be
some other probability mass function. We form the log likelihood ratio
n
1
P n (X1 , X2 , . . . , Xn )
1!
P (Xi )
log n
=
log
n
Q (X1 , X2 , . . . , Xn )
n i=1
Q(Xi )
of the sequence X n and ask for the probability that it exceeds a certain threshold.
Specifically, find (to first order in the exponent)
Q
n
*
+
1
P (X1 , X2 , . . . , Xn )
log
>0 .
n
Q(X1 , X2 , . . . , Xn )
There may be an undetermined parameter in the answer.
Solution: Large likelihoods
Let PX be the type of the sequence X1 , X2 , . . . , Xn . The the empirical likelihood ratio
can be rewriiten as
P (X1 , X2 , . . . , Xn )
1
log
n
Q(X1 , X2 , . . . , Xn )
=
=
n
1!
P (Xi )
log
n i=1
Q(Xi )
!
PX (x) log
x∈X
P (x)
Q(x)
(11.241)
(11.242)
and therefore by Sanov’s theorem, the probability of this ratio being greater than 0 can
be written as
∗
.
P (E)=2−nD(P ||Q)
(11.243)
where P ∗ is the distribution satisfying
!
P ∗ (x) log
x∈X
P (x)
>0
Q(x)
(11.244)
that is closest to Q in relative entropy distance. Using the formulation in the section
on Examples of Sanov’s Theorem (11.208) with g(x) = log P (x)/Q(x) , we obtain using
Lagrange multipliers
P ∗ (x) =
=
where λ is chosen so that
!
x∈X
Q(x)e
$
λ log
P (x)
Q(x)
λ log
P (x)
Q(x)
x Q(x)e
P λ (x)Q1−λ (x)
$
λ
1−λ (x)
x P (x)Q
P ∗ (x) log
P (x)
=0
Q(x)
(11.245)
(11.246)
(11.247)
24. Fisher information for mixtures. Let f 1 (x) and f0 (x) be two given probability
densities. Let Z be Bernoulli( θ ), where θ is unknown. Let X ∼ f 1 (x) , if Z = 1 and
X ∼ f0 (x) , if Z = 0 .
300
Information Theory and Statistics
(a) Find the density fθ (x) of the observed X .
(b) Find the Fisher information J(θ) .
(c) What is the Cramér-Rao lower bound on the mean squared error of an unbiased
estimate of θ ?
(d) Can you exhibit an unbiased estimator of θ ?
Solution: Fisher information of mixtures
(a) The density of X is the weighted mixture of the two densities, i.e.,
fθ (x) = θf1 (x) + (1 − θ)f0 (x).
(11.248)
dfθ
= f1 (x) − f2 (x)
dθ
(11.249)
(b)
and therefore by definition,
4
5
dfθ 2
dθ
= Eθ (f1 (x) − f2 (x))2
J(θ) = Eθ
=
2
x
(θf1 (x) + (1 − θ)f0 (x)) (f1 (x) − f2 (x))2
(11.250)
(11.251)
(11.252)
(c) By the Cramer-Rao inequality, the lower bound on the variance of an unbiased
1
estimator for θ is J(θ)
.
(d) The maximum likelihood estimator for Z is Ẑ = 1 for all x such that f1 (x) ≥
f0 (x) and Ẑ = 0 for other values of x . We could use this as an estimator for
θ , however this estimator is not unbiased. This can be illustrated by a simple
example. Let f1 (x) be uniform on [0, 1] , and f0 (x) be uniform on [0, 1 + %] .
Therefore for all x ∈ [0, 1] , f1 (x) > f0 (x) . Let θ = 0.5 , but the maximum
likelihood estimator will set θ̂ = 1 for almost all values of X , i.e, the expected
value of the estimator will be close to 1. Therefore, the estimator is not unbiased.
To construct an unbiased estimator, we use a method suggested by Boes(1966)[3].
Let F1 (x) and F0 (x) be the cumulative distribution functions corresponding to
f1 and f0 . Let x be any value where F1 (x) %= F0 (x) . Now Fθ (x) = θF1 (x) +
(1 − θ)F0 (x) .
Given a sequence of observations of X , we would expect the proportion of observations ≤ x to equal Fθ (x) , and we can use this to estimate θ . In particular,
with one observation, let I be the indicator that X ≤ x . Then I is our estimate
of Fθ (x) , yielding an estimator
θ̂ =
I − F0 (x)
F1 (x) − F0 (x)
(11.253)
To verify that this estimator is unbiased, we calculate the expected value of the
estimator under the distribution f θ . Since the expected value of I under the
distribution fθ is Fθ (x) , it is easy to verify that the E(θ̂) = θ .
301
Information Theory and Statistics
25. Bent coins. Let {Xi } be iid ∼ Q where
Q(k) = Pr(Xi = k) =
* +
m
k
m−k
, for k = 0, 1, 2, . . . , m.
k q (1 − q)
Thus, the Xi ’s are iid ∼ Binomial (m, q) .
Show that, as n → ∞ ,
n
1!
Pr(X1 = k|
Xi ≥ α) → P ∗ (k),
n i=1
where
P∗
is Binomial (m, λ) (i.e.
P ∗ (k)
=
Find λ .
* +
m
k λk (1 − λ)m−k for some λ ∈ [0, 1]) .
Solution: Bent coins
Using the formulation of Section 11.5 with g(x) = x , we obtain the closest distribution
P ∗ that satisfies the constraint as
Q(x)eλ1 x
P ∗ (x) =
(11.254)
$
λ1 x
x Q(x)e
*
+
m
k q x (1 − q)m−x eλ1 x
=
$
Now setting
(11.255)
* +
m
x
k q x (1 − q)m−x eλ1 x
λ
q λ1
=
e
1−λ
1−q
we obtain
(11.256)
* +
m
∗
P (x) =
=
since
$
straint
* +
k λx (1 − λ)m−x (1 − q)m
$
* +
m
x
* +
m
k
λx (1
−
λ)m−x (1
x
m−x
k λ (1 − λ)
−
(11.257)
q)m
(11.258)
m
x
k λx (1 − λ)m−x = 1 . λ should be chosen so that P ∗ satisfies the con!
P ∗ (x)x = α
(11.259)
Since the expected value of the binomial distribution B(m, λ) with parameter λ is
mλ , we have mλ = α or λ = α/m .
26. Conditional limiting distribution.
302
Information Theory and Statistics
(a) Find the exact value of
P r{X1 = 1|
n
1!
1
Xi = } ,
n i=1
4
(11.260)
if X1 , X2 , . . . , are Bernoulli( 32 ) and n is a multiple of 4.
(b) Now let Xi %{−1, 0, 1} and let X1 , X2 . . . be i.i.d. uniform over {−1, 0, +1}. Find
the limit of
n
1
1!
X2 = }
P r{X1 = +1|
(11.261)
n i=1 i
2
for n = 2k,
k → ∞.
Solution: Conditional Limiting Distribution
(a) By the result of Problem 11.25, the conditional distribution given the constraint
is binomial B(m, λ) where λ = 41 .
(b) Again, using the formulation of Section 11.5, the conditional limit distribution is
P ∗ (x) =
=
Q(x)eλ1 x
$
Q(x)eλ1 x
x
λ

x=1
 ce 1
c

 cceλ1
where c is the normalizing constant, i.e.,
1
c
x=0
x = −1
(11.262)
(11.263)
= 2eλ1 + 1 .
27. Variational inequality: Verify, for positive random variables X , that
log EP (X) = sup [EQ (log X) − D(Q||P )]
(11.264)
Q
$
$
where EP (X) = x xP (x) and D(Q||P ) = x Q(x) log PQ(x)
(x) , and the supremum is
$
over all Q(x) ≥ 0 ,
Q(x) = 1 . It is enough to extremize J(Q) = E Q ln X −D(Q||P )+
$
λ( Q(x) − 1) .
Solution: Variational inequality (Repeat of Problem 8.6)
Using the calculus of variations to extremize
J(Q) =
!
x
q(x) ln x −
!
x
q(x) ln
!
q(x)
+ λ( q(x) − 1)
p(x)
x
(11.265)
we differentiate with respect to q(x) to obtain
∂J
q(x)
= ln x − ln
−1+λ=0
∂q(x)
p(x)
(11.266)
303
Information Theory and Statistics
or
q(x) = c' xp(x)
where c' has to be chosen to satisfy the constraint,
c' = $
1
x xp(x)
(11.267)
$
x q(x)
= 1 . Thus
(11.268)
Substituting this in the expression for J , we obtain
!
J∗ =
x
c' xp(x) ln x −
= − ln c' +
= ln
!
!
x
!
c' xp(x) ln
x
c' xp(x) ln x −
xp(x)
!
c' xp(x)
p(x)
(11.269)
c' xp(x) ln x
(11.270)
x
(11.271)
x
To verify this is indeed a maximum value, we use the standard technique of writing it
as a relative entropy. Thus
ln
!
x
xp(x) −
!
q(x) ln x +
x
!
q(x) ln
x
q(x)
p(x)
=
!
q(x) ln
x
y
'
= D(q||p )
≥ 0
Thus
ln
!
x
xp(x) = sup (EQ ln(X) − D(Q||P ))
q(x)
$xp(x)
(11.272)
yp(y)
(11.273)
(11.274)
(11.275)
Q
This is a special case of a general relationship that is a key in the theory of large
deviations.
28. Type constraints.
(a) Find constraints on the type PX n such that the sample variance Xn2 − (X n )2 ≤ α ,
$
$
where Xn2 = n1 ni=1 Xi2 and Xn = n1 ni=1 Xi .
(b) Find the exponent in the probability Q n (Xn2 − (X n )2 ≤ α) . You can leave the
answer in parametric form.
Solution: Type constraints
We need to rewrite the constraint as an expectation with respect to the type P X n .
Xn2 =
=
n
1!
X2
n i=1 i
!
x
x2 PX (x)
(11.276)
(11.277)
304
Information Theory and Statistics
and
Xn =
=
n
1!
Xi
n i=1
!
(11.278)
xPX (x)
(11.279)
x
and therefore the constraint becomes
Xn2 − (X n ) =
2
!
x
x PX (x) −
2
"
!
#2
xPX (x)
x
≤α
(11.280)
29. While we cannot use the results of Section 11.5 directly, we can use a similar approach
using the calculus of variations to find the type that is closest to subject to the constraint. We set up the functional
J(P ) =
!
x

"
#2
!
!
p(x)
p(x) ln
xp(x)
+λ  x2 p(x) −
q(x)
x
x
and differentiating with respect to p(x) , we obtain

.
Letting
$
!
− α +γ(
/
!
δJ
p(x)
= ln
+ 1 + λx2 − λ2
xp(x) x + γ
δp(x)
q(x)
x
x xp(x)
p(x)−1) (11.281)
x
(11.282)
= µ , we can write
ln
p(x)
= λ(x2 − 2µx + µ2 ) + γ + 1 − λµ2
q(x)
or
2 +c
p(x) = q(x)eλ(x−µ)
(11.283)
(11.284)
where λ, µ and c are chosen so that
!
p(x) = 1
(11.285)
xp(x) = µ
(11.286)
x
!
x
2
x p(x) −
"
!
x
!
x
xp(x)
#2
= α
(11.287)
30. Uniform distribution on the simplex.
Which of these methods will generate a sample from the uniform distribution on the
$n
simplex {x ∈ Rn : xi ≥ 0,
i=1 xi = 1} ?
(a) Let Yi be i.i.d. uniform [0, 1] , with Xi = Yi /
$n
j=1 Yj .
305
Information Theory and Statistics
(b) Let Yi be i.i.d. exponentially distributed ∼ λe−λy , y ≥ 0, with Xi = Yi /
$n
j=1 Yj .
(c) (Break stick into n parts.) Let Y 1 , Y2 , . . . , Yn−1 be i.i.d. uniform [0, 1] , and let
Xi be the length of the ith interval.
Solution: Uniform distribution on the simplex
(a) To see that this construction does not yield a uniform distribution, take n = 2 for
simplicity. If (X1 , X2 ) is indeed uniform on the simplex, then
P (X1 ≤ 1/3) = 1/3.
But
P
*
+
Y1
≤ 1/3
Y1 + Y 2
= P (Y2 ≥ 2Y1 )
= 1/4.
(b) This case is actually equivalent to the construction in (c), which will be shown
shortly to generate the uniform distribution on the n -simplex. The equivalence
can be seen by a basic fact from stochastic processes. Let {N (t)} t≥0 be a Poisson
process with rate λ, that is,
N (t) = inf{k ≥ 0 :
k
!
i=1
Yi ≥ t}
where Yi are i.i.d. Exp( λ ). Let Tk = inf{s ≥ 0 : N (s) = k} = Y1 + . . . + Yk . Then
conditioned on Tn = t , the random vector (T1 , . . . , Tk−1 ) is uniformly distributed
on [0, Tn ] . Scaling, we can generate a sample of n − 1 i.i.d. uniform [0, 1] random
variables, which is exactly the setup in (c).
The same result can be obtained directly (and more rigorously) as follows. Con$
$
sider the transformation Xi = Yi / nj=1 Yj , i = 1, . . . , n − 1, and S = nj=1 Yj .
Since the inverse of this transformation is Y i = SXi , i = 1, . . . , n − 1, and
Yn = S(1 − (X1 + . . . + Xn−1 )), the Jacobian for this transformation can be
easily computed as
J
=
=
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
s
0
..
.
0
s
···
···
..
.
0
0
x1
x2
0
0 ··· s
xn−1
−s −s · · · −s 1 − x1 · · · − xn−1
s 0 ···
0 s ···
..
..
.
.
0 0 ···
0 0 ···
0
0
x1
x2
s xn−1
0
1
3
3
3
3
3
3
3 = sn−1 .
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
306
Information Theory and Statistics
Hence, the joint density function of (X1 , . . . , Xn−1 , S) is given by
f (x1 , . . . , xn−1 , s) =
)
λn e−λs sn−1 , s ≥ 0, xi ≥ 0,
0,
otherwise.
$n−1
i=1
xi ≤ 1,
From this, it is easy to see that (X1 , . . . , Xn−1 ) and S are independent and that
the random vector (X1 , . . . , Xn−1 ) has the marginal density
f (x1 , . . . , xn−1 ) =
)
$
n−1
(n − 1)!, xi ≥ 0,
i=1 xi ≤ 1,
0,
otherwise.
This proves that (X1 , . . . , Xn ) is the uniformly distributed over the n -simplex.
(c) It is intuitively obvious that the resulting distribution is uniform on the simplex.
Let (U1 , . . . , Un−1 ) = (Y[1] , . . . , Y[n−1] ) denote the order statistic of (Y1 , . . . , Yn−1 ) .
In other words, U1 denotes the smallest of (Y1 , . . . , Yn−1 ) , U2 denotes the second
smallest, and so on. By symmetry, the joint density of (U 1 , . . . , Un−1 ) can be
easily obtained as
f (u1 , . . . , un−1 ) =
)
(n − 1)!, 0 ≤ u1 ≤ . . . ≤ un−1 ≤ 1,
0,
otherwise.
Now consider the transformation
X1 = U1 ,
X2 = U2 − U1 ,
..
.
Xn−1 = Un−1 − Un−2 .
It is easy to see that the Jacobian of this transformation is 1. Hence, the random
vector (X1 , . . . , Xn−1 ) has the marginal density
f (x1 , . . . , xn−1 ) =
)
$
n−1
(n − 1)!, xi ≥ 0,
i=1 xi ≤ 1,
0,
otherwise,
which proves that (X1 , . . . , Xn ) is the uniformly distributed over the n -simplex.
Chapter 12
Maximum Entropy
1. Maximum entropy. Find the maximum entropy density
f , defined Nfor x ≥ 0 ,
N
satisfying
EX
=
α
,
E
ln
X
=
α
.
That
is,
maximize
−
f
ln
f
subject to xf (x) dx =
1
2
N
α1 , (ln x)f (x) dx = α2 , where the integral is over 0 ≤ x < ∞ . What family of densities
is this?
Solution: Maximum entropy.
As derived in class, the maximum entropy distribution subject to constraints
2
and
is of the form
2
xf (x) dx = α1
(12.1)
(ln x)f (x) dx = α2
(12.2)
f (x) = eλ0 +λ1 x+λ2 ln x = cxλ2 eλ1 x ,
(12.3)
which is of the form of a Gamma distribution. The constants should be chosen so as to
satisfy the constraints.
2. Min D(P 5 Q) under constraints on P. We wish to find the (parametric form) of
the probability mass function P (x), x ∈ {1, 2, . . .} that minimizes the relative entropy
$
D(P 5 Q) over all P such that
P (x)gi (x) = αi ,
i = 1, 2, . . . .
(a) Use Lagrange multipliers to guess that
P ∗ (x) = Q(x)e
$∞
i=1
λi gi (x)+λ0
(12.4)
achieves this minimum if there exist λ i ’s satisfying the αi constraints. This
generalizes the theorem on maximum entropy distributions subject to constraints.
(b) Verify that P ∗ minimizes D(P 5 Q).
Solution: Minimize D(P ||Q) under constraints on P .
307
308
Maximum Entropy
(a) We construct the functional using Lagrange multipliers
J(P ) =
2
P (x) !
P (x) ln
+
λi
Q(x)
i
2
P (x)hi (x) + λ0
2
P (x).
(12.5)
‘Differentiating’ with respect to P (x) , we get
!
P (x)
∂J
= ln
+1+
λi hi (x) + λ0 = 0,
∂P
Q(x)
i
(12.6)
which indicates that the form of P (x) that minimizes the Kullback Leibler distance
is
$
P ∗ (x) = Q(x)eλ0 + i λi hi (x) .
(12.7)
(b) Though the Lagrange multiplier method correctly indicates the form of the solution, it is difficult to prove that it is a minimum using calculus. Instead we use the
properties of D(P ||Q) . Let P be any other distribution satisfying the constraints.
Then
D(P ||Q) − D(P ∗ ||Q)
2
(12.8)
2
P ∗ (x)
P (x)
− P ∗ (x) ln
Q(x)
Q(x)
2
2
!
P (x)
λi hi (x)]
=
P (x) ln
− P ∗ (x)[λ0 +
Q(x)
i
=
=
2
2
P (x) ln
P (x) ln
P (x)
−
Q(x)
P (x)
−
Q(x)
2
P (x)
=
P (x) ln ∗
P (x)
= D(P ||P ∗ )
=
P (x) ln
2
2
P (x)[λ0 +
!
i
P (x) ln
λi hi (x)]
(12.9)
(12.10)
(since both P and P ∗
satisfy the constraints)
P ∗ (x)
Q(x)
≥ 0,
(12.11)
(12.12)
(12.13)
(12.14)
and hence P ∗ uniquely minimizes D(P ||Q) .
In the special case when Q is a uniform distribution over a finite set, minimizing
D(P ||Q) corresponds to maximizing the entropy of P .
3. Maximum entropy processes. Find the maximum entropy rate stochastic process
{Xi }∞
−∞ subject to the constraints:
(a) EXi2 = 1,
(b)
EXi2
i = 1, 2, . . . ,
= 1 , EXi Xi+1 = 12 ,
i = 1, 2, . . . .
(c) Find the maximum entropy spectrum for the processes in parts (a) and (b).
309
Maximum Entropy
Solution: Maximum Entropy Processes.
(a) If the only constraint is EXi2 = 1 , then by Burg’s theorem, it is clear that the
maximum entropy process is a 0-th order Gauss-Markov, i.e., X i i.i.d. ∼ N (0, 1) .
(b) If the constraints are EXi2 = 1, EXi Xi+1 = 12 , then by Burg’s theorem, the
maximum entropy process is a first order Gauss-Markov process of the form
Xi = −aXi−1 + Zi ,
Zi ∼ N (0, σ 2 ).
(12.15)
To determine a and σ 2 , we use the Yule-Walker equations
R0 = −aR1 + σ 2
(12.16)
R1 = −aR0
Substituting R0 = 1 and R1 =
maximum entropy process is
1
2
(12.18)
, we get a = − 12 and σ 2 =
1
Xi = Xi−1 + Zi ,
2
(12.17)
3
Zi ∼ N (0, ).
4
3
4
. Hence the
(12.19)
4. Maximum entropy with marginals. What is the maximum entropy distribution
p(x, y) that has the following marginals? Hint: You may wish to guess and verify a
more general result.
x\y
1
2
3
1
p11
p21
p31
2
p12
p22
p32
3
p13
p23
p33
2/3
1/6
1/6
1/2
1/4
1/4
Solution: Maximum entropy with marginals.
Given the marginal distributions of X and Y , H(X) and H(Y ) are fixed. Since
I(X; Y ) = H(X) + H(Y ) − H(X, Y ) ≥ 0 , we have
H(X, Y ) ≤ H(X) + H(Y )
(12.20)
with equality if and only if X and Y are independent. Hence the maximum value of
H(X, Y ) is H(X) + H(Y ) , and is attained by choosing the joint distribution to be the
product distribution, i.e.,
310
Maximum Entropy
x
y
1
2
3
1
1/3
1/6
1/6
2
1/12
1/24
1/24
3
1/12
1/24
1/24
2/3
1/6
1/6
1/2
1/4
1/4
5. Processes with fixed marginals. Consider the set of all densities with fixed pairwise
marginals fX1 ,X2 (x1 , x2 ), fX2 ,X3 (x2 , x3 ), . . . , fXn−1 ,Xn (xn−1 , xn ) . Show that the maximum entropy process with these marginals is the first-order (possibly time-varying)
Markov process with these marginals. Identify the maximizing f ∗ (x1 , x2 , . . . , xn ) .
Solution: Processes with fixed marginals
By the chain rule,
h(X1 , X2 , . . . , Xn ) = h(X1 ) +
≤ h(X1 ) +
n
!
i=2
n
!
i=2
h(Xi |Xi−1 , . . . , X1 )
(12.21)
h(Xi |Xi−1 ),
(12.22)
since conditioning reduces entropy. The quantities h(X 1 ) and h(Xi |Xi−1 ) depend only
on the second order marginals of the process and hence the upper bound is true for all
processes satisfying the second order marginal constraints.
Define
f ∗ (x1 , x2 , . . . , xn ) = f0 (x1 )
n
B
f0 (xi−1 , xi )
i=2
f0 (xi−1 )
.
(12.23)
We will show that f ∗ maximizes the entropy among all processes with the same second
order marginals. To prove this, we just have to show that this process satisfies has the
same second order marginals and that this process achieves the upper bound (12.22).
The fact that the process satisfies the marginal constraints can be easily proved by
induction. Clearly, it is true for f ∗ (x1 , x2 ) and if f ∗ (xi−1 , xi ) = f0 (xi−1 , xi ) , then
f ∗ (xi ) = f0 (xi ) and by the definition of f ∗ , it follows that f ∗ (xi , xi+1 ) = f0 (xi , xi+1 ) .
Also, since by definition, f ∗ is first order Markov, h(Xi |Xi−1 , . . . , X1 ) = h(Xi |Xi−1 )
and we have equality in (12.22). Hence f ∗ has the maximum entropy of all processes
with the same second order marginals.
6. Every density is a maximum entropy density. Let f 0 (x) be a givenN density. Given
r(x) , let gα (x) be the density maximizing h(X) over all f satisfying f (x)r(x) dx =
α . Now let r(x) = ln f0 (x) . Show that gα (x) = f0 (x) for an appropriate
choice
N
α = α0 . Thus f0 (x) is a maximum entropy density under the constraint f ln f0 = α0 .
Solution: Every density is a maximum entropy density. Given the constraints that
2
r(x)f (x) = α
(12.24)
311
Maximum Entropy
the maximum entropy density is
f ∗ (x) = eλ0 +λ1 r(x)
(12.25)
With r(x) = log f0 (x) , we have
∗
f (x) = N
f0λ1 (x)
f0λ1 (x) dx
(12.26)
where λ1 has to chosen to satisfy the constraint. We can chose the value of the constraint to correspond to the value λ1 = 1 , in which case f ∗ = f0 . So f0 is a maximum
entropy density under appropriate constraints.
7. Mean squared error.
Let {Xi }ni=1 satisfy EXi Xi+k = Rk ,
Xn , i.e.
k = 0, 1, . . . , p . Consider linear predictors for
X̂n =
n−1
!
bi Xn−i .
i=1
Assume n > p . Find
max min E(Xn − X̂n )2 ,
f (xn )
b
where the minimum is over all linear predictors b and the maximum is over all densities
f satisfying R0 , . . . , Rp .
Solution: Mean squared error.
Let F be the family of distributions that satisfy E(X i Xi+k ) = Rk ,
k = 0, . . . , p
We can think of this situation as a game between two people. Your adversary maliciously chooses some distribution f (x n ) ∈ F and then reveals it to you. Based on
your knowledge of f (xn ) you choose the linear predictor b∗ (f ) of Xn given X n−1 that
minimizes the mean squared error (MSE) of the prediction. We are asked to calculate
the maximum MSE that your adversary can cause you to suffer.
We first note that your adversary should not feel cheated if she is limited to distributions
in F that are also multivariate normal since the MSE incurred by a linear predictor
( and also its structure ) depends only on the mean vector and the covariance matrix.
While the minimum MSE predictor is in general non-linear, for a multivariate normal
distribution it is linear. Thus if your adversary chooses some Gaussian distribution
g(xn ) ∈ F the optimum ( linear ) predictor is
Eg (Xn |X n−1 )
Also, since g is Gaussian, the conditional distribution of X n given X n−1 is normal
with variance
0
12
σg2 = Eg Xn − Eg (Xn |X n−1 )
312
Maximum Entropy
that does not depend on X n−1 . Thus, we have
hg (Xn |X n−1 ) =
1
log (2πeσg2 ).
2
Your adversary will thus maximize hg (Xn |X n−1 ) . We have
n−1
h(Xn |X n−1 ) ≤ h(Xn |Xn−p
) ≤ h(Zn |Z n−1 ),
where {Zn } is the p-th order Gauss-Markov process in F . Therefore the worst the
adversary can do is to choose a distribution in F under which X n is a p-th order
Gauss-Markov process. In this case the MSE incurred is given by Eq. (11.40) of Cover
& Thomas.
8. Maximum entropy characteristic functions.
We ask for the maximum entropy density
f (x), 0 ≤ x ≤ a, satisfying a constraint on
N
the characteristic function Ψ(u) = 0a eiux f (x)dx . The answers need be given only in
parametric form.
N
(a) Find the maximum entropy f satisfying 0a f (x) cos(u0 x) dx = α , at a specified
point u0 .
N
(b) Find the maximum entropy f satisfying 0a f (x) sin(u0 x)dx = β .
(c) Find the maximum entropy density f (x), 0 ≤ x ≤ a, having a given value of the
characteristic function Ψ(u0 ) at a specified point u0 .
(d) What problem is encountered if a = ∞ ?
Solution: Maximum entropy characteristic functions.
(a) Using the general parametric form from the book, we have:
f (x) = eλ0 +λ1 cos u0 x ,
where λ0 and λ1 are chosen to satisfy the constraints.
(b) Similarly,
f (x) = eλ0 +λ1 sin u0 x .
(12.27)
(12.28)
(c) The key point here is to realize that we are dealing with a vector-valued constraint,
ψ(u0 ) = α1 + iα2 .
(12.29)
f (x) = eλ0 +λ1 cos(u0 x)+λ2 sin(u0 x) .
(12.30)
and hence
where
2
a
20 a
0
2
a
f (x) dx = 1
(12.31)
f (x) cos(u0 x) dx = R{ψ(u0 )}
(12.32)
f (x) sin(u0 x) dx = I{ψ(u0 )}
(12.33)
0
313
Maximum Entropy
(d) In each of the above cases, f (x) is periodic with period 2πu −1
0 , as are cos(u0 x)
and sin(u0 x) . Thus, the integrands in the constraints will have the same period,
making their integrals periodic as a function of a , and so the integral is not well
defined. If the integral oscillates symmetrically about zero, the limit will not exist.
Otherwise, the limit will either be ∞ or −∞ .
9. Maximum entropy processes.
(a) Find the maximum entropy rate binary stochastic process {X i }∞
i=−∞ ,
{0, 1} , satisfying Pr{Xi = Xi+1 } = 13 , for all i .
(b) What is the resulting entropy rate?
Xi ∈
Solution: Maximum entropy processes.
Our first hope may be an i.i.d. Bern( p ) process that is consistent with the constraint. Unfortunately, there is no such process. However, we can still construct an
independent non-identically distributed sequence of Bernoulli r.v.’s, such that the entropy rate exists, and the constraints are met. (For example, X i ∼ Bern(1) for odd i
and Xi ∼ Bern(1/3) for even i .) This process does not yield the maximum entropy
rate.
This problem is, in fact, a discrete (or more precisely, binary) analogue of Burg’s maximum entropy theorem and we can obtain the maximum entropy process from a similar
argument.
(a) Let Xi be any binary process satisfying the constraint Pr{X i = Xi+1 } = 1/3 .
Let Zi be a first order stationary Markov chain, that stays at 0 with probability
1/3, jumps to 1 with probability 2/3, and vice versa. This process obviously meets
the constraint. With a slight abuse of notation, we have
H(Xi |Xi−1 ) = EH(Xi |Xi−1 = x)
= EH(Pr(Xi = x|Xi−1 = x))
≤ H(E Pr(Xi = x|Xi−1 = x))
= H(1/3)
= H(Zi |Zi−1 ),
where the inequality follows from the concavity of the binary entropy function.
Since H(X1 ) ≤ 1 = H(Z1 ) ,
H(X1 , . . . , Xn ) = H(X1 ) +
≤ H(X1 ) +
≤ H(Z1 ) +
n
!
i=2
n
!
i=2
n
!
i=2
H(Xi |X i−1 )
H(Xi |Xi−1 )
H(Zi |Zi−1 )
= H(Z1 , . . . , Zn ),
314
Maximum Entropy
whence {Zi } is the maximum entropy process under the given constraint.
(b) The maximum entropy rate H(Z) = H(Z 2 |Z1 ) = H(1/3) = log 3 − 2/3 .
10. Maximum entropy of sums Let Y = X1 + X2 Find the maximum entropy density
for Y under the constraint EX12 = P1 , EX22 = P2 ,
(a) if X1 and X2 are independent.
(b) if X1 and X2 are allowed to be dependent.
(c) Prove part (a).
Solution: Maximum entropy of sums
(a) Independence implies that
EY 2 = EX12 + EX22 + 2EX1 EX2 = P1 + P2 + 2µ1 µ2 ,
and
V ar(Y ) = EY 2 − (EY )2 = P1 + P2 − µ21 − µ22 .
We note that the maximum entropy distribution of Y subject to a second moment
2
constraint is f ∗ (y) = Ce−λy which we recognize as the distribution of a Gaussian
random variable. Since Y is Gaussian, to maximize the entropy we want to
maximize the variance of Y which corresponds with µ 1 = µ2 = 0 . The maximum
entropy of a Gaussian with mean 0 and the given variance is (1/2) log 2πe(P 1 +P2 ) .
(b) In this case
Y
EY 2 = P1 + P2 + 2(ρ P1 P2 + µ1 µ2 ),
and
Y
V ar(Y ) = P1 + P2 + 2ρ P1 P2 − µ21 − µ22 ,
where ρ ∈ [−1, 1] is the correlation coefficient between X 1 and X2 . Again, we
note that the maximum entropy distribution for Y is Gaussian, so we just need
to maximize the variance of Y . Observe that the variance of Y is maximized
for µ1 = µ2 = 0 , and ρ = 1 . Note that ρ = 1 means that the two random
variables
√ are adding coherently. The maximum entropy is then (1/2) ln 2πe(P 1 +
P2 + 2 P1 P2 ) .
11. Maximum entropy Markov chain.
Let {Xi } be a stationary Markov chain with Xi ∈ {1, 2, 3} . Let I(Xn ; Xn+2 ) = 0 for
all n .
(a) What is the maximum entropy rate process satisfying this constraint?
(b) What if I(Xn ; Xn+2 ) = α , for all n for some given value of α , 0 ≤ α ≤ log 3 ?
Solution: Maximum entropy Markov chain.
315
Maximum Entropy
(a) Since an i.i.d. process which is uniform on {1, 2, 3} also satisfies the constraint,
and since H(X1 , X2 , . . . , Xn ) ≤ nH(X) ≤ n log 3 for any process, we can see
that the maximum entropy process with the constraint is the i.i.d. uniformly
distributed process. The entropy rate is log 3 bits/symbol.
(b) With the constraint I(Xn ; Xn+2 ) = α , we have the analog of Burg’s theorem.
Since the constraint is only on all the odd numbered or all the even numbered
values of n , we can split the X process into two processes, one on the odd indices
and the other on the even indices. The total entropy will be maximized if the
processes are independent of each other.
For the even index process, let Yi = X2i . Then the constraints can be written as
I(Yn ; Yn+1 ) = α . Then
H(Y1 , Y2 , . . . , Yn ) =
n
!
i=1
H(Yi |Y (i−1) )
≤ H(Y1 ) +
n
!
i=2
H(Yi |Yi−1 )
= nH(Y1 ) − (n − 1)I(Yi ; Yi−1 )
= nH(Y1 ) − (n − 1)α
≤ n log 3 − (n − 1)α
(12.34)
(12.35)
(12.36)
(12.37)
(12.38)
where we have used the stationary of the Y i process and the fact that H(Yi ) ≤
log 3 . We can therefore achieve equality in the first inequality if Y i is a first
order Markov chain, and equality in the seoond inequality if Y i has a uniform
distribution. Thus the maximum entropy process is a first order Markov chain
with a uniform stationary distribution with I(Y i ; Yi−1 ) = α . The Markov chain
will have a uniform stationary distribution if the probability transition matrix is
doubly stochastic (Problem 4.1), and we can construct the matrix if the rows of
the matrix have entropy log 3 − α .
The maximum entropy X process consists of two indpendent interleaved copies
of the Y process.
12. An entropy bound on prediction error. Let {X n } be an arbitrary real valued
stochastic process. Let X̂n+1 = E{Xn+1 |X n } . Thus the conditional mean X̂n+1 is
a random variable depending on the n -past X n . Here X̂n+1 is the minimum mean
squared error prediction of Xn+1 given the past.
(a) Find a lower bound on the conditional variance E{E{(X n+1 − X̂n+1 )2 |X n }} in
terms of the conditional differential entropy h(X n+1 |X n ) .
(b) Is equality achieved when {Xn } is a Gaussian stochastic process?
Solution: An entropy bound on prediction error.
316
Maximum Entropy
(a) From the corollary to Theorem 8.6.6, setting Y = X n , we obtain the following
inequality,
ˆ (X n ))2 ≥ 1 e2h(Xn+1 |X n )
E(Xn+1 − Xn+1
(12.39)
2πe
(b) We have equality in Theorem 8.6.6 if and only if X is Gaussian, and hence if
X1 , X2 , . . . , Xn form a Gaussian process, Xn+1 conditioned on the past has a
Gaussian distribution and hence we have equality in the bound above.
13. Maximum entropy rate. What is the maximum entropy rate stochastic process
{Xi } over the symbol set {0, 1} for which the probability that 00 occurs in a sequence
is zero?
Solution: Maximum entropy rate
This problem is essentially the same as problem 4.7. If we disallow the sequence 00,
the constraint can be modelled as a Markov chain where from state 0, we always go to
state 1, and from state 1 we can go either to state 0 or state 1. The maximum entropy
1st order Markov chain is calculated in Problem 4.7 to be 0.694 bits, which is therefore
the maximum entropy process satisfying the constraint has entropy rate 0.694 bits.
14. Maximum entropy.
(a) What is the parametric form maximum entropy density f (x) satisfying the two
conditions
EX 8 = a
EX 16 = b?
(b) What is the maximum entropy density satisfying the condition
E(X 8 + X 16 ) = a + b
?
(c) Which entropy is higher?
Solution: Maximum entropy.
(a) From Theorem 12.1.1, with two constraints, the maximum entropy distribution is
f (x) = eλ0 +λ1 x
8 +λ x16
2
(12.40)
where
2
2
2
f (x) dx = 1,
(12.41)
x8 f (x) dx = a,
(12.42)
x16 f (x) dx = b
(12.43)
317
Maximum Entropy
(b) From Theorem 12.1.1, with one constraint, the maximum entropy distribution is
f (x) = eλ0 +λ1 (x
where
2
2
8 +x16 )
f (x) dx = 1,
(x8 + x16 )f (x) dx = a + b
(12.44)
(12.45)
(12.46)
(c) The maximum entropy in (b) is higher since any distribution that satisfies both
constraints in (a) also satisfies the constraint in (b). So the set of possible distributions with the single constraint is larger, and hence the maximum entropy in
(b) is not less than the maximum entropy in (a).
15. Maximum entropy. Find the parametric form of the maximum entropy density f
satisfying the Laplace transform condition
2
f (x)e−x dx = α ,
and give the constraints on the parameter.
Solution: Maximum entropy.
By Theorem 12.1.1, the maximum entropy distribution with this constraint is
f (x) = eλ0 +λ1 e
−x
(12.47)
subject to
2
2
f (x) dx = 1,
(12.48)
e−x f (x) dx = α
(12.49)
16. Maximum entropy processes
Consider the set of all stochastic processes with {X i }, Xi ∈ R, with
R0 = EXi2 = 1
R1 = EXi Xi+1 = 12 .
Find the maximum entropy rate.
Solution: Maximum entropy processes
By Burg’s theorem, the maximum entropy process subject to the two constraints is a
first order Gauss Markov process of the form X n+1 = −aXn + Un . The Yule Walker
equations in this case show that
1
1 = −a + σ 2
2
1
= −a1
2
(12.50)
(12.51)
318
Maximum Entropy
and we obtain a = − 12 and σ 2 = 34 , and therefore Xn+1 = 1/2Xn +
Z ∼ N (0, 1) . The entropy rate is h = 0.5 log 2πeσ 2 with σ 2 = 3/4 .
17. Binary maximum entropy
Consider a binary process {Xi },
EXi Xi+1 = 21 .
√
3/2Zn+1 with
Xi ∈ {−1, +1}, with R0 = EXi2 = 1 and R1 =
(a) Find the maximum entropy process with these constraints.
(b) What is the entropy rate?
(c) Is there a Bernoulli process satisfying these constraints?
Solution: Binary maximum entropy
(a) By the discrete analog of Burg’s theorem, the maximum entropy process is a first
order Markov process. The maximum entropy process is a Markov process with
transition probabilities P (1, −1) = P (−1, 1) = 1/4 .
(b) The entropy rate is H(X1 |X0 ) = H(1/4) .
(c) Just solve EXi X√i+1 = 1/2 , that is (p − q)2 = 1/2 . This gives a Bernoulli process
with p = 1/2 + 2/4 . So, although a Bernoulli process satisfies the constraints,
it is not the maximum entropy process.
18. Maximum entropy.
Maximize h(Z, Vx , Vy , Vz ) subject to the energy constraint E( 12 m 5 V 52 +mgZ) = E0 .
Show that the resulting distribution yields
1
3
E m 5 V 52 = E0
2
5
EmgZ =
Thus
2
5
2
E0 .
5
of the energy is stored in the potential field, regardless of its strength g .
Solution: Maximum entropy.
As derived in class, the maximum entropy distribution subject to the constraint
E
*
1
m5v52 + mgZ
2
is of the form f (z, vx , vy , vz ) = Ce−λ( 2 m/v/
1
f (z, vx , vy , vz ) =
*
1
λmg
2 +mgZ
+*
λm
2π
+
= E0
(12.52)
) . Properly normalized the density is
+3
2
e−λ( 2 m/v/
1
2 +mgZ
)
(12.53)
319
Maximum Entropy
Therefore,
=
1
E( mvx2 ) =
2
=
The
constraint
on energy yields
0
1
E
is
1
λ
2
∞
z −λmgz
e
mgλ
0
1
mg
=
mgλ
λ
E(mgZ) = mg
2
dz
*
∞
1
mλ
m
dvx
2
2π
−∞
1 m
1
=
2 mλ
2λ
+1
2
(12.54)
vx2 e−
2
mλvx
2
(12.55)
= 25 E0 . This immediately gives E(mgZ) = 25 E0 and
1
2 = 3 E . The split of energy between kinetic
2 m5v5
5 0
2
3 regardless of the strength of gravitational field g .
energy and potential energy
19. Maximum entropy discrete processes.
(a) Find the maximum entropy rate binary stochastic process {X i }∞
i=−∞ , Xi ∈ {0, 1},
1
satisfying Pr{Xi = Xi+1 } = 3 , for all i .
(b) What is the resulting entropy rate?
Solution: Maximum entropy discrete processes. Repeat of Problem 12.9
Our first hope may be an i.i.d. Bern( p ) process that is consistent with the constraint.
Unfortunately, there is no such process. (Check!) However, we can still construct
an independent non-identically distributed sequence of Bernoulli r.v.’s, such that the
entropy rate exists, and the constraints are met. (For example, X i ∼ Bern(1) for odd
i and Xi ∼ Bern(1/3) for even i .) This process does not yield the maximum entropy
rate.
This problem is, in fact, a discrete (or more precisely, binary) analogue of Burg’s maximum entropy theorem and we can obtain the maximum entropy process from a similar
argument.
(a) Let Xi be any binary process satisfying the constraint Pr{X i = Xi+1 } = 1/3 .
Let Zi be a first order stationary Markov chain, that stays at 0 with probability
1/3, jumps to 1 with probability 2/3, and vice versa. This process obviously meets
the constraint. With a slight abuse of notation, we have
H(Xi |Xi−1 ) = EH(Xi |Xi−1 = x)
= EH(Pr(Xi = x|Xi−1 = x))
≤ H(E Pr(Xi = x|Xi−1 = x))
= H(1/3)
= H(Zi |Zi−1 ),
320
Maximum Entropy
where the inequallity follows from the concavity of the binary entropy function.
Since H(X1 ) ≤ 1 = H(Z1 ) ,
H(X1 , . . . , Xn ) = H(X1 ) +
n
!
i=2
≤ H(X1 ) +
≤ H(Z1 ) +
n
!
i=2
n
!
i=2
H(Xi |X i−1 )
H(Xi |Xi−1 )
H(Zi |Zi−1 )
= H(Z1 , . . . , Zn ),
whence {Zi } is the maximum entropy process under the given constraint.
(b) The maximum entropy rate H(Z) = H(Z 2 |Z1 ) = H(1/3) = log 3 − 2/3 .
20. Maximum entropy of sums.
Let Y = X_1 + X_2 . Find the maximum entropy of Y under the constraints EX_1^2 = P_1 , EX_2^2 = P_2 ,
(a) if X1 and X2 are independent.
(b) if X1 and X2 are allowed to be dependent.
Solution: Maximum entropy of sums (Repeat of Problem 12.10)
(a) Independence implies that
EY 2 = EX12 + EX22 + 2EX1 EX2 = P1 + P2 + 2µ1 µ2 ,
and
V ar(Y ) = EY 2 − (EY )2 = P1 + P2 − µ21 − µ22 .
We note that the maximum entropy distribution of Y subject to a second moment
2
constraint is f ∗ (y) = Ce−λy which we recognize as the distribution of a Gaussian
random variable. Since Y is Gaussian, to maximize the entropy we want to
maximize the variance of Y which corresponds with µ 1 = µ2 = 0 . The maximum
entropy of a Gaussian with mean 0 and the given variance is (1/2) log 2πe(P 1 +P2 ) .
(b) In this case
EY^2 = P_1 + P_2 + 2( ρ√(P_1 P_2) + µ_1 µ_2 ),
and
Var(Y) = P_1 + P_2 + 2ρ√(P_1 P_2) − µ_1^2 − µ_2^2 ,
where ρ ∈ [−1, 1] is the correlation coefficient between X_1 and X_2 . Again, we
note that the maximum entropy distribution for Y is Gaussian, so we just need
to maximize the variance of Y . Observe that the variance of Y is maximized
for µ_1 = µ_2 = 0 and ρ = 1 . Note that ρ = 1 means that the two random
variables are adding coherently. The maximum entropy is then (1/2) log 2πe(P_1 + P_2 + 2√(P_1 P_2)) .
21. Entropy rate
(a) Find the maximum entropy rate stochastic process {X_i} with EX_i^2 = 1 , EX_i X_{i+2} = α , i = 1, 2, . . . . Be careful.
(b) What is the maximum entropy rate?
(c) What is EXi Xi+1 for this process?
Solution: Entropy rate
(a) The key to this problem is to realize that the maximum entropy rate process
occurs for two independent interleaved processes. Since there is no constraint on
the correlation between Xi and Xi+1 , but only on Xi and Xi+2 we find that:
(1/n) H(X_1, . . . , X_n) = (1/n) Σ_{i=1}^n H(X_i | X_1, . . . , X_{i−1})
                         ≤ (1/n) Σ_{i=1}^n H(X_i | X_{i−2}),
where the inequality holds because conditioning reduces entropy. So the entropy
rate is increased when the even and odd time periods each have an independent
first order Markov process. By Burg’s Theorem each process will be a first order
Gauss Markov process to maximize the entropy rate. The Yule-Walker equations
are 1 = −aα + σ^2 and α = −a , which combine to give a = −α and σ^2 = 1 − α^2 .
So the maximum entropy rate process is given by X i = αXi−2 + Zi , where Zi ∼
N (0, 1 − α2 ) .
(b) The maximum entropy rate is the entropy rate for either process since they are both identical. So the entropy rate of the Gaussian process is
H(X) = lim_{n→∞} H(X_i | X^{i−1}) = lim_{n→∞} H(X_i | X_{i−2}) = lim_{n→∞} H(Z_i) = (1/2) log 2πe(1 − α^2).
(c) Since Xi and Xi+1 are independent, EXi Xi+1 = 0 .
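A short simulation sketch (α = 0.6 is an assumed illustrative value) of the process X_i = αX_{i−2} + Z_i , confirming that adjacent samples are uncorrelated while samples two apart have correlation α .

import random

def simulate(n=300_000, alpha=0.6, seed=2):
    rng = random.Random(seed)
    sigma = (1 - alpha ** 2) ** 0.5      # Var(Z) = 1 - alpha^2 keeps E[X_i^2] = 1
    x = [rng.gauss(0, 1), rng.gauss(0, 1)]
    for _ in range(n - 2):
        x.append(alpha * x[-2] + rng.gauss(0, sigma))
    return x

x = simulate()
n = len(x)
lag1 = sum(a * b for a, b in zip(x, x[1:])) / (n - 1)
lag2 = sum(a * b for a, b in zip(x, x[2:])) / (n - 2)
print("E[X_i X_{i+1}] ~", round(lag1, 3))  # ~0: the even and odd streams are independent
print("E[X_i X_{i+2}] ~", round(lag2, 3))  # ~0.6 = alpha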
22. Minimum expected value
(a) Find the minimum value of EX over all probability density functions f (x) satisfying the following three constraints:
(i) f (x) = 0 for x ≤ 0,
(ii) ∫_{−∞}^{∞} f(x) dx = 1 , and
(iii) h(f) = h .
(b) Solve the same problem if (i) is replaced by
(i′) f(x) = 0 for x ≤ a .
Solution: Minimum expected value
(a) Let X be any positive random variable with mean µ . Then, from the result on
the maximum entropy distribution, h(X) ≤ h(X ∗ ) = log(e µ) where X ∗ has the
exponential distribution with mean µ . By exponentiating both sides of the above
inequality, we get
EX ≥ µ = (1/e) 2^h .
This bound is for any distribution with support on the positive real line (whether
or not it has a density), but it is tight for the exponential distribution (which has
a density). Hence we can conclude that (1/e) 2^h is the minimum value of EX for any
probability density function satisfying Conditions (i),(ii), and (iii).
(b) Although we can derive the maximum entropy for random variables defined on
{x ≥ a} with the mean constraint and repeat the previous argument, we simply
reuse the result of part (a) as follows:
Consider any random variable X satisfying Conditions (i’), (ii), and (iii). Since
the entropy is translation invariant, i.e., h(X) = h(X + b) for any b , the random
variable Y = X − a satisfies Conditions (i), (ii), and (iii) of part (a) and thus
EY ≥ (1/e) 2^h . This implies
EX = EY + a ≥ a + (1/e) 2^h .
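A small check that the bound is tight (a sketch; the mean µ = 3 is an assumed illustrative value): for an exponential distribution h = log(eµ) in bits, so (1/e) 2^h recovers µ exactly.

import math

mu = 3.0                          # assumed mean, for illustration only
h = math.log2(math.e * mu)        # differential entropy of Exp(mean mu), in bits
lower_bound = (1 / math.e) * 2 ** h
print(h, lower_bound)             # lower_bound equals mu up to float rounding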
Chapter 13
Universal Source Coding
1. Minimax regret data compression and channel capacity. First consider universal data compression with respect to two source distributions. Let the alphabet
V = {1, e, 0} and let p1 (v) put mass 1 − α on v = 1 and mass α on v = e . Let p 2 (v)
put mass 1 − α on 0 and mass α on v = e .
We assign word lengths to V according to l(v) = log 1/p(v) , the ideal codeword length with respect to a cleverly chosen probability mass function p(v) . The worst case excess description length (above the entropy of the true distribution) is
max_i [ E_{p_i} log 1/p(V) − E_{p_i} log 1/p_i(V) ] = max_i D(p_i || p).        (13.1)
Thus the minimax regret is R* = min_p max_i D(p_i || p) .
(a) Find R∗ .
(b) Find the p(v) achieving R ∗ .
(c) Compare R* to the capacity of the binary erasure channel
[ 1−α   α    0  ]
[  0    α   1−α ]
and comment.
Solution: Minimax regret data compression and channel capacity.
(a) The whole trick to this problem is to employ the duality between universal datacompression and channel capacity. Although we don’t immediately know how to
solve the data-compression problem, we turn it into a channel-capacity problem
whose answer we already know.
We know that if we wish to compress a source that is drawn from either of two possible distributions p1 or p2 , but we don’t know which, then the minimax regret,
R* , that we can achieve using a universal data-compression scheme is equivalent to the capacity of a channel whose channel-transition matrix has rows which are precisely the probability distributions p_1 and p_2 .
In this case, p_1 = (1 − α, α, 0) and p_2 = (0, α, 1 − α) , yielding the channel-transition matrix
[ 1−α   α    0  ]
[  0    α   1−α ]
which we recognize immediately as a binary erasure channel with erasure probability α and hence capacity 1 − α . Thus, the minimax regret is R* = 1 − α .
(b) We know that the p(v) achieving R* will be the center of the smallest relative-entropy ball that contains the p_i . In the dual problem, involving the erasure channel, the center of this ball is the distribution induced on the channel output when we send according to the input distribution that achieves capacity. For the erasure channel, the input distribution that achieves capacity is (1/2, 1/2) , which induces a distribution of ((1−α)/2, α, (1−α)/2) on the output. Thus, p(v) = ((1−α)/2, α, (1−α)/2) .
(c) As indicated above, the minimax regret R* is equal to the capacity of the channel.
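A numerical cross-check (a sketch; α = 0.3 is an assumed value): the relative entropy from either source distribution to the claimed center ((1−α)/2, α, (1−α)/2) equals 1 − α , the BEC capacity.

import math

def D(p, q):
    # Relative entropy in bits; terms with p_i = 0 contribute 0.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

alpha = 0.3
p1 = (1 - alpha, alpha, 0.0)
p2 = (0.0, alpha, 1 - alpha)
center = ((1 - alpha) / 2, alpha, (1 - alpha) / 2)
print(D(p1, center), D(p2, center), 1 - alpha)   # all three equal 0.7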
2. Universal data compression. Consider three possible source distributions on X ,
Pa = (.7, .2, .1),
Pb = (.1, .7, .2),
and
Pc = (.2, .1, .7).
(a) Find the minimum incremental cost of compression
R* = min_P max_θ D(P_θ || P),
and the associated mass function P = (p_1, p_2, p_3) , and ideal codeword lengths l_i = log(1/p_i) .
(b) What is the channel capacity of a channel matrix with rows P a , Pb , Pc ?
Solution: Universal data compression.
(a) It can be easily checked that P = (1/3, 1/3, 1/3) is equidistant from all three
source distributions Pa , Pb , Pc . Therefore, it is the optimal distribution with the
associated cost
∆ = D(Pa ||P ) = log 3 − H(.7, .2, .1)
and the idealized codeword lengths (log 3, log 3, log 3) .
Alternatively, we can use the duality between the minimax regret and the channel
capacity. The minimax regret ∆ is equal to the capacity of the channel with a
channel matrix with rows Pa , Pb , Pc .
(b) Since this channel is symmetric, the capacity can be easily obtained as C =
log 3 − H(.7, .2, .1) .
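Numerically (a quick sketch of the value just stated):

import math

H = -sum(p * math.log2(p) for p in (0.7, 0.2, 0.1))
print(math.log2(3) - H)   # ~0.428 bits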
3. Arithmetic coding: Let [Xi ] be a stationary binary Markov chain with transition
matrix
[p_ij] = [ 3/4  1/4 ]
         [ 1/4  3/4 ]                                   (13.2)
Calculate the first 3 bits of F (X ∞ ) = 0.F1 F2 . . . when X ∞ = 1010111 . . . . How many
bits of X ∞ does this specify?
Solution: Arithmetic coding
The stationary distribution of this Markov chain is (1/2, 1/2), and hence p(0) = p(1) = 1/2 , p(01) = p(10) = 1/8 , p(00) = p(11) = 3/8 , p(100) = 3/32 , etc. We start with the equation
F(x^n) = Σ_{k=1}^n p(x^{k−1} 0) x_k                                       (13.3)
       = p(0)·1 + p(10)·0 + p(100)·1 + p(1010)·0 + · · ·                   (13.4)
       = 1/2 + 0 + 3/32 + 0 + terms summing to less than 1/64              (13.5)
       ≈ 19/32                                                             (13.6)
       = 0.10011 . . .                                                     (13.7)
So the first 3 bits in the expansion of F are 100.
Given these three coded bits, we can see that the first bit of the input has to be 1. If the next input bit were 1, the value of F would be at least 0.101, so the next bit has to be 0. However, we cannot decode any further with three bits.
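The cumulative value can be reproduced mechanically; a short sketch using the transition matrix and input string from the problem (it uses all seven shown input symbols, so the sum includes the small later terms as well):

from fractions import Fraction

# P[a][b] = Pr(next = b | current = a); stationary distribution (1/2, 1/2).
P = {0: {0: Fraction(3, 4), 1: Fraction(1, 4)},
     1: {0: Fraction(1, 4), 1: Fraction(3, 4)}}

def prob(bits):
    # Probability of a finite string under the stationary Markov chain.
    p = Fraction(1, 2)
    for a, b in zip(bits, bits[1:]):
        p *= P[a][b]
    return p

def F(bits):
    # F(x^n) = sum over k with x_k = 1 of p(x_1 ... x_{k-1} 0).
    return sum(prob(bits[:k] + [0]) for k in range(len(bits)) if bits[k] == 1)

def first_bits(frac, k=5):
    bits = []
    for _ in range(k):
        frac *= 2
        bits.append(int(frac >= 1))
        frac -= int(frac)
    return bits

x = [1, 0, 1, 0, 1, 1, 1]
f = F(x)
print(f, first_bits(f))   # 4919/8192 ~ 0.6005; expansion begins 1 0 0 ..., as in the solution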
4. Arithmetic coding. Let X_i be binary stationary Markov with transition matrix
[ 1/3  2/3 ]
[ 2/3  1/3 ] .
(a) Find F (01110) = Pr{.X1 X2 X3 X4 X5 < .01110}.
(b) How many bits .F1 F2 . . . can be known for sure if it is not known how X = 01110
continues?
Solution: Arithmetic coding.
(a) The stationary distribution of this Markov chain is (1/2, 1/2), and hence p(0) = p(1) = 1/2 , p(01) = p(10) = 1/3 , p(00) = p(11) = 1/6 , p(010) = 2/9 , etc. We start with the equation
F(x^n) = Σ_{k=1}^n p(x^{k−1} 0) x_k                                              (13.8)
       = p(0)·0 + p(00)·1 + p(010)·1 + p(0110)·1 + p(01110)·0 + · · ·            (13.9)
       = (1/2)·0 + (1·1)/(2·3)·1 + (1·2·2)/(2·3·3)·1 + (1·2·1·2)/(2·3·3·3)·1 + (1·2·1·1·2)/(2·3·3·3·3)·0 + · · ·   (13.10)
       = 25/54                                                                    (13.11)
       = 0.0111011 . . .                                                          (13.12)
(b) Since the next term in this series is 4/243 > 1/64 , we cannot know the 6th bit until we know the next input symbol. Since the next term is less than 1/32 , we know the 5th bit for sure.
5. Lempel-Ziv. Give the LZ78 parsing and encoding of 00000011010100000110101.
Solution: Lempel-Ziv. We first parse the string, looking for strings that we have not
seen before. Thus, the parsing yields 0,00,000,1,10,101,0000,01,1010,1. There are 10
phrases, and therefore we need 4 bits to represent the pointer to the prefix. Thus, using
the scheme described in the text, we encode the string as (0000,0),(0001,0), (0010,0),
(0000,1), (0100,0), (0101,1), (0011,0), (0001,1), (0110,0),(0000,1). (The last phrase,
though it is not really a new phrase, is handled like a new phrase).
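The parsing and the (pointer, bit) pairs can be reproduced mechanically; a minimal LZ78 sketch (not the text's pseudocode, just one straightforward implementation):

def lz78_parse(s):
    # Greedy parse into phrases never seen before; phrase 0 is the empty string.
    dictionary = {"": 0}
    phrases = []          # list of (prefix_index, new_symbol)
    w = ""
    for ch in s:
        if w + ch in dictionary:
            w += ch
        else:
            phrases.append((dictionary[w], ch))
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:                 # leftover phrase that was already in the dictionary
        phrases.append((dictionary[w[:-1]], w[-1]))
    return phrases

pairs = lz78_parse("00000011010100000110101")
print([(format(i, "04b"), c) for i, c in pairs])
# [('0000','0'), ('0001','0'), ('0010','0'), ('0000','1'), ('0100','0'),
#  ('0101','1'), ('0011','0'), ('0001','1'), ('0110','0'), ('0000','1')]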
6. Lempel Ziv 78
We are given the constant sequence xn = 11111 . . .
(a) Give the LZ78 parsing for this sequence.
(b) Argue that the number of encoding bits per symbol for this sequence goes to zero
as n → ∞ .
Solution: Lempel Ziv 78
(a) Since we parse into phrases we have not seen before, the constant string is parsed
into 1, 11, 111, 1111, etc., one bit longer than the previous phrase.
(b) Since the i-th phrase has length i , the total length of the string of k phrases is Σ_{i=1}^k i = k(k + 1)/2 . Thus the number of phrases for a string of length n is approximately √(2n) , and the total description length for the LZ code is k(log k + 1) bits; hence the description rate (k log k)/n goes to 0 as n → ∞ .
7. Another idealized version of Lempel-Ziv coding. An idealized version of LZ
was shown to be optimal: The encoder and decoder both have available to them the
“infinite past” generated by the process, . . . , X −1 , X0 , and the encoder describes the
string (X1 , X2 , . . . , Xn ) by telling the decoder the position R n in the past of the first
recurrence of that string. This takes roughly log R n + 2 log log Rn bits.
Now consider the following variant: Instead of describing R n , the encoder describes
Rn−1 plus the last symbol Xn . From these two the decoder can reconstruct the string
(X1 , X2 , . . . , Xn ) .
(a) What is the number of bits per symbol used in this case to encode (X 1 , X2 , . . . , Xn ) ?
(b) Modify the proof given in the text to show that this version is also asymptotically
optimal, namely that the expected number of bits-per-symbol converges to the
entropy rate.
Solution: Another idealized version of Lempel-Ziv coding.
In this version of LZ coding, the encoder and decoder both have available to them
the infinite past generated by the process, . . . , X −1 , X0 , and the encoder describes the
string X1n = (X1 , X2 , . . . , Xn ) by telling the decoder the position R n−1 in the past of
the first recurrence of the string X1n−1 , plus the last symbol Xn .
(a) Let A be the alphabet of the process, and |A| denote its size. Then the number of bits it takes to represent R_{n−1} is roughly log R_{n−1} + C log log R_{n−1} , where C is a constant independent of n . To represent X_n it takes ⌈log |A|⌉ bits, so the overall number of bits per symbol used for the whole string X_1^n is
L_n(X_1^n)/n = ( log R_{n−1} + C log log R_{n−1} + ⌈log |A|⌉ ) / n .
(b) To prove that this description is asymptotically optimal it suffices to show that
lim sup_{n→∞} E[ L_n / n ] ≤ H,                                              (13.13)
and the optimality will follow since we know that the reverse inequality also holds, by Shannon's noiseless coding theorem.
For the last term in L_n it is immediate that
⌈log |A|⌉ / n → 0                                                             (13.14)
as n → ∞ . Now notice that we always have R_{n−1} ≤ R_n , so for the first two terms in L_n ,
E[ (log R_{n−1} + C log log R_{n−1}) / n ] ≤ E[ (log R_n + C log log R_n) / n ],   (13.15)
and, as was shown in class, the right-hand side is asymptotically bounded above by H :
lim sup_{n→∞} E[ (log R_n + C log log R_n) / n ] ≤ H.                         (13.16)
Combining (13.14) with (13.15) and (13.16) yields (13.13), as claimed.
8. Length of pointers in LZ77. In the version of LZ77 due to Storer and Szymanski [16],
described in Section 13.4.1, a short match can either be represented by (F, P, L) (flag,
pointer, length) or by (F, C) (flag, character). Assume that the window length is W ,
and assume that the maximum match length is M .
(a) How many bits are required to represent P ? To represent L ?
(b) Assume that C , the representation of a character is 8 bits long. If the representation of P plus L is longer than 8 bits, it would be better to represent a single
character match as an uncompressed character rather than as a match within the
dictionary. As a function of W and M , what is the shortest match that one
should represent as a match rather than as uncompressed characters?
(c) Let W = 4096 and M = 256 . What is the shortest match that one would
represent as a match rather than uncompressed characters?
Solution: Length of pointers in LZ77
(a) Since P represents the position within the window, ⌈log W⌉ bits suffice to represent P . Since L represents the length of the match, which is at most M , ⌈log M⌉ bits suffice for L .
(b) Ignoring the integer constraints, we can see that we would use the uncompressed
character to represent a match of length 1 if log W + log M > 8 . Similarly, we
would use single characters rather than the match representation for matches of
length m if 1 + log W + log M > m(1 + 8) , since the representation as a sequence
of single characters needs 9 bits per character.
(c) If M = 256 , log M = 8 ; W = 4096 , log W = 12 ; and 1 + log W + log M = 21 , so we would use the uncompressed representation if m = 1 or 2 . For m ≥ 3 ,
the match representation is shorter.
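One line of arithmetic reproduces the break-even point (a sketch using the parameters of part (c)):

import math

W, M = 4096, 256
match_bits = 1 + math.ceil(math.log2(W)) + math.ceil(math.log2(M))  # flag + pointer + length = 21
literal_bits = 9                                                     # flag + 8-bit character
m_min = math.ceil(match_bits / literal_bits)                         # smallest m with 9m >= 21
print(match_bits, m_min)   # 21, 3 -> matches of length >= 3 are worth encoding as matches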
9. Lempel-Ziv.
(a) Continue the Lempel-Ziv parsing of the sequence 0,00,001,00000011010111.
(b) Give a sequence for which the number of phrases in the LZ parsing grows as fast
as possible.
(c) Give a sequence for which the number of phrases in the LZ parsing grows as slowly
as possible.
Solution: Lempel-Ziv.
(a) The Lempel-Ziv parsing is: 0, 00, 001, 000, 0001, 1, 01, 011, 1
(b) The sequence is: 0, 1, 00, 01, 10, 11, 000, 001, . . . concatenating all binary strings of
length 1,2,3, etc. This is the sequence where the phrases are as short as possible.
(c) Clearly the constant sequence will do: 1, 11, 111, 1111, . . .
10. Two versions of fixed-database Lempel-Ziv. Consider a source (A, P ) . For
simplicity assume that the alphabet is finite |A| = A < ∞ , and the symbols are i.i.d.
∼ P . A fixed database D is given, and is revealed to the decoder. The encoder parses
the target sequence xn1 into blocks of length l , and subsequently encodes them by
giving the binary description of their last appearance in the database. If a match is
not found, the entire block is sent uncompressed, requiring l log A bits. A flag is used
to tell the decoder whether a match location is being described, or the sequence itself.
Problems (a) and (b) give some preliminaries you will need in showing the optimality
of fixed-database LZ in (c).
(a) Let x^l be a δ-typical sequence of length l starting at 0, and let R_l(x^l) be the corresponding recurrence index in the infinite past . . . , X_{−2}, X_{−1} . Show that
E[ R_l(X^l) | X^l = x^l ] ≤ 2^{l(H+δ)}
where H is the entropy rate of the source.
(b) Prove that for any ε > 0 , Pr{ R_l(X^l) > 2^{l(H+ε)} } → 0 as l → ∞ .
Hint: Expand the probability by conditioning on strings x^l , and break things up into typical and non-typical. Markov's inequality and the AEP should prove handy as well.
(c) Consider the following two fixed databases (i) D 1 is formed by taking all δ -typical
l -vectors; and (ii) D2 formed by taking the most recent L̃ = 2l(H+δ) symbols in
the infinite past (i.e., X−L̃ , . . . , X−1 ). Argue that the algorithm described above
is asymptotically optimal, namely that the expected number of bits-per-symbol
converges to the entropy rate, when used in conjunction with either database D 1
or D2 .
Solution: Two versions of fixed-database Lempel-Ziv
(a) Since xl is δ -typical, the AEP implies that p(x l ) ≥ 2−l(H+δ) , and the result
follows from Kac’s lemma.
(b) Fix ε > 0 and δ ∈ (0, ε) . Let A_δ^{(l)} be the δ-typical set for A^l . We divide the set of sequences into the typical sequences and the non-typical sequences.
Pr( R_l(X^l) > 2^{l(H+ε)} ) = Σ_{x^l} p(x^l) Pr( R_l(X^l) > 2^{l(H+ε)} | x^l )
  = Σ_{x^l ∈ A_δ^{(l)}} p(x^l) Pr( R_l(X^l) > 2^{l(H+ε)} | x^l ) + Σ_{x^l ∉ A_δ^{(l)}} p(x^l) Pr( R_l(X^l) > 2^{l(H+ε)} | x^l )
  ≤(i) Σ_{x^l ∈ A_δ^{(l)}} p(x^l) · 2^{l(H+δ)} / 2^{l(H+ε)} + Pr( X^l ∉ A_δ^{(l)} )
  ≤ 2^{−l(ε−δ)} + Pr( X^l ∉ A_δ^{(l)} ),
where (i) follows from Markov's inequality and the result of part (a). The proof now follows from the AEP, by sending l to infinity.
(c) For D_1 the proof follows trivially from the analysis in §3.2 in Cover and Thomas.
For D_2 , let N = n/l be the number of blocks in the sequence, and let L(B_i) denote the length of the encoding of the i-th block B_i . To simplify the notation, assume that N = n/l is an integer. We call a block "good" if we can find a match in D_2 , and "bad" otherwise. Let G be the set of good blocks. If B_i ∈ G , we encode it using log |D_2| bits, which by our choice of D_2 is equal to l(H + δ) bits. If B_i ∉ G , then we encode it using l log A bits. We throw in one extra bit to distinguish between the two events. Then,
(1/n) E L(X_1, X_2, . . . , X_n) = (1/n) E Σ_i L(B_i)
  = (1/n) E Σ_{i∈G} (1 + l(H + δ)) + (1/n) E Σ_{i∉G} (1 + l log A)
  ≤(i) (1 + l(H + δ))/l + Pr{ X^l ∉ D_2 } (1/l + log A),
where step (i) follows from taking the first summation over all N blocks, and using N^{−1} E Σ_{i∉G} 1 = Pr{G^c} . Take l_n to be a sequence of integers such that l_n ↑ ∞ as n → ∞ . It now follows from part (b) that Pr{ X^l ∉ D_2 } → 0 , and thus lim sup_n n^{−1} E L_n ≤ H + δ ; since δ is arbitrary, we have lim sup_n n^{−1} E L_n ≤ H . The proof is now complete since lim inf_n n^{−1} E L_n ≥ H , by Shannon's source coding theorem.
11. Tunstall Coding: The normal setting for source coding maps a symbol (or a block of
symbols) from a finite alphabet onto a variable length string. An example of such a code
is the Huffman code, which is the optimal (minimal expected length) mapping from a set
of symbols to a prefix free set of codewords. Now consider the dual problem of variableto-fixed length codes, where we map a variable length sequence of source symbols into
a fixed length binary (or D -ary) representation. A variable-to-fixed length code for an
i.i.d. sequence of random variables X1 , X2 , . . . , Xn , Xi ∼ p(x), x ∈ X = {0, 1, . . . , m−1}
is defined by a prefix-free set of phrases A D ⊂ X ∗ , where X ∗ is the set of finite length
strings of symbols of X , and |AD | = D . Given any sequence X1 , X2 , . . . , Xn , the
string is parsed into phrases from AD (unique because of the prefix free property of
AD ), and represented by a sequence of symbols from a D -ary alphabet. Define the
efficiency of this coding scheme by
R(A_D) = log D / EL(A_D)                                    (13.17)
where EL(A_D) is the expected length of a phrase from A_D .
(a) Prove that R(AD ) ≥ H(X) .
(b) The process of constructing AD can be considered as a process of constructing an
m -ary tree whose leaves are the phrases in A D . Assume that D = 1 + k(m − 1)
for some integer k ≥ 1 .
Consider the following algorithm due to Tunstall:
i. Start with A = {0, 1, . . . , m − 1} with probabilities p 0 , p1 , . . . , pm−1 . This
corresponds to a complete m -ary tree of depth 1.
ii. Expand the node with the highest probability. For example, if p 0 is the
node with the highest probability, the new set is A = {00, 01, . . . , 0(m −
1), 1, . . . , (m − 1)} .
iii. Repeat step 2 until the number of leaves (number of phrases) reaches the
required value.
Show that the Tunstall algorithm is optimal, in the sense that it constructs a
variable to fixed code with the best R(AD ) for a given D , i.e., the largest value
of EL(AD ) for a given D .
(c) Show that there exists a D such that R(A ∗D ) < H(X) + 1 .
Solution: Tunstall Coding:
(a) We will argue that if R(AD ) < H(X) , then it is possible to construct an uniquely
decodable code with average length less than the entropy. Consider a long sequence
of i.i.d. random variables X1 , X2 , . . . , Xn ∼ p . We can parse this sequence into
phrases using the prefix-free set AD , and these phrases are independent and identically distributed, with the distribution induced by p on the tree. Thus, using renewal theory, since the expected phrase length is EL(A D ) , the number of phrases
in the block of length n is approximately n/EL(A_D) . These phrases can be described
with log D bits each, so that the total description length is ≈ log D(n/EL(A D )) .
If R(AD ) < H , then the total description length is less than nH and we have a
contradiction to the fundamental theorem of source coding.
However, making the above argument precise raises issues for which we have two
different solutions:
• The algorithm above does not describe how to handle a sequence of random
variables that is a prefix of an element of A D . For example, after parsing a
block of length n into phrases from AD , we might be left with a few symbols
of X that are not long enough to make a phrase. We can imagine that these
symbols are sent uncompressed, and the overhead is small (the set A D is
finite, and so the maximal length of a phrase in A D is finite, and so the
maximum length of the residue is bounded).
• We could extend the fundamental theorem of source coding directly to the
variable to variable case, i.e., we can prove that for any mapping F : X ∗ →
{0, 1}∗ , that EL(AD )H(X) ≤ EL(C) , where EL(C) is the average length of
the binary codewords induced by the mapping. An example of such a mapping
for X = {a, b, c} is the mapping aa → 000, ab → 001, ac → 010, b → 011, c →
1 . It is easy to see that the above result is a “special” case of such a mapping,
where we assume that we can find a code length of log D for all the elements
of AD .
Let Y be the random variable whose values are the elements of A D , and
whose distribution is induced by the distribution of X , i.e., if the phrase is
ab , the probability is Pr(Y = 'ab') = p_a p_b . It is easy to see that Y is a well-defined random variable with total probability 1.
Now the code C is a code for the random variable Y , and by the standard
source coding theorem, we have EL(C) ≥ H(Y ) . We will now prove that
H(Y ) = EL(AD )H(X) , which will complete the proof.
There are many ways to prove this result, which is the Wald equation for entropies. We will prove it directly, using a summation over the leaves of a tree. Let L(X_1, X_2, . . . , X_n) be the stopping time defined by the set A_D , i.e., L(X_1, X_2, . . . , X_n) = l if the sequence of length l at the beginning of X is in A_D . Choose n larger than the maximal length in A_D , and let Y = X_1^L be the first phrase in the parsing of X_1, X_2, . . . , X_n . Then
nH(X) = H(X_1, X_2, . . . , X_n)                                            (13.18)
      = H(X_1^L, L, X_{L+1}^n)                                              (13.19)
      = H(X_1^L) + H(L | X_1^L) + H(X_{L+1}^n | L, X_1^L) .                 (13.20)
Now since L is fixed given X_1^L , H(L | X_1^L) = 0 . Also, X_{L+1}^n is independent of X_1^L given L , and we can write
H(X_{L+1}^n | L) = Σ_{l=1}^n Pr(L = l) H(X_{l+1}^n)                          (13.21)
                = Σ_{l=1}^n Pr(L = l) (n − l) H(X)                           (13.22)
                = (n − EL) H(X) .                                            (13.23)
Substituting this in the equation above, we get
nH(X) = H(X_1^L) + (n − EL) H(X)                                             (13.24)
or
H(Y) = H(X_1^L) = EL · H(X) .                                                (13.25)
To prove the required result of part (a), we only need to verify that H(Y ) ≤
log D , which follows directly from the fact that the range of Y is limited to
D values.
(b) We will prove the optimality of the Tunstall algorithm by induction in a fashion
similar to the proof of Huffman coding optimality. By the statement of the problem, we restrict our attention to complete trees, i.e., trees for which every node is
either a leaf (no children) or has m children. Clearly, the algorithm to minimize
R(AD ) for a given D has to find the set AD that maximizes EL(AD ) .
We will need some notation for the analysis that follows: nodes in the tree are
either leaf nodes (nodes that have no children) or internal nodes (nodes that have
m children). The probability of a node is the product of the probability of the
symbols that led up to the node. The probability of the root node is 1.
We will assume that the algorithm constructs a tree that is optimal for D k =
1 + k(m − 1) . We will show that the algorithm than produces a tree that is
optimal for Dk+1 = 1 + (k + 1)(m − 1) .
Any tree with Dk+1 nodes consists of tree with Dk nodes with one of the nodes
expanded. Let Tk denote a tree with Dk nodes, σ denote a leaf of this tree
with probability pσ , and let Tk+1 denote the tree with Dk+1 nodes formed by
expanding the node σ . Let N(T) denote the leaf nodes in T . Then
EL(T_{k+1}) = Σ_{i∈N(T_{k+1})} p(i) l(i)                                        (13.26)
           = Σ_{i∈N(T_k), i≠σ} p(i) l(i) + Σ_{j=1}^m p(σ) p(j) (l(σ) + 1)       (13.27)
           = Σ_{i∈N(T_k), i≠σ} p(i) l(i) + p(σ)(l(σ) + 1)                        (13.28)
           = Σ_{i∈N(T_k)} p(i) l(i) + p(σ)                                       (13.29)
           = EL(T_k) + p(σ) .                                                    (13.30)
Thus the expected length for any expanded tree is equal to the expected length of
the original tree plus the probability of the node that was expanded. This result
provides the basic intuition that motivates the algorithm: to maximize EL(T k+1 )
given Tk , we should expand the node with the largest probability. Doing this
repeatedly gives us the Tunstall algorithm.
However, using this to prove the optimality of the Tunstall algorithm is surprisingly
tricky. This is because there are different sequences of node expansions that give
rise to the same final tree. Also, a suboptimal tree of size D k might have a larger
value of p(σ) (the steps are not independent, and hence a greedy step early on
might not be optimal later) , and thus we cannot directly use the above result for
induction.
Instead, we will use another property of the optimal tree constructed by the Tunstall algorithm, that is, the probability of each of the internal nodes is higher than
the probability of the leaves. We have the following statement:
Lemma: Any optimal tree Tk+1 (a tree maximizing EL ) has the property that the
probability of any of the internal nodes is greater than or equal to the probability
of any of the leaves, i.e., if ΣI is the set of internal nodes and Σ L is the set of
leaf nodes, then
∀σ ∈ ΣI , ∀σ ' ∈ ΣL , p(σ) ≥ p(σ ' )
(13.31)
Proof: If this were not true, then there exists an internal node with a lower
probability than a leaf. Let σ be this internal node, σ l be the leaf, and since the
probability of any descendant node is less than that of the parent, we can work
down from σ until we find an internal node σ ' just above the leaves that also
satisfies this property, i.e., p(σ′) < p(σ_l) . Now consider the tree T_k formed by deleting all the leaves coming out of σ′ , and form a tree T′_{k+1} by expanding node σ_l in T_k . We have
EL(T_{k+1}) = EL(T_k) + p(σ′)                                    (13.32)
EL(T′_{k+1}) = EL(T_k) + p(σ_l)                                  (13.33)
and since p(σ′) < p(σ_l) , we have EL(T_{k+1}) < EL(T′_{k+1}) , contradicting the optimality of T_{k+1} . Thus all optimal trees satisfy the property above.
We now prove the converse, i.e., that any tree that satisfies this property must
be optimal. Again, we prove it by contradiction. Assume that there is a tree T k
satisfying this property that is not optimal, and therefore there is another tree T k∗
which is optimal, i.e.,having larger expected length. By the previous result, this
tree also satisfies (13.31).
Now consider the set of nodes that occur in T or T ∗ . These nodes can be classified
into 8 categories.
• S1: nodes that are internal nodes in both T and T* .
• S2: nodes that are leaf nodes in both T and T* .
• S3: nodes that are internal nodes in T and leaf nodes in T* .
• S4: nodes that are leaf nodes in T and internal nodes in T* .
• S5: nodes that are internal nodes in T that are not in T* .
• S6: nodes that are internal nodes in T* that are not in T .
• S7: nodes that are leaf nodes in T that are not in T* .
• S8: nodes that are leaf nodes in T* that are not in T .
By assumption, T ≠ T* , and therefore there are leaf nodes in T that are not in T* . Some ancestor of such a leaf node σ in T must be a leaf node of T* , and therefore S3 is not empty. Similarly, if T ≠ T* , S4 must be non-empty.
We now argue that all nodes in S3 and S4 have the same probability. Let σ3 ∈ S3
and σ4 ∈ S4 be two nodes in the two sets. By property (13.31) for T ∗ , it follows
that p(σ3 ) ≤ p(σ4 ) . By property (13.31) for T , we have p(σ 4 ) ≤ p(σ3 ) . Thus
p(σ3 ) = p(σ4 ) .
We now argue that S5 and S6 are empty sets. This follows from the fact that since
any node in S5 has to be a descendant of a node in S 3 , and hence p(σ5 ) < p(σ3 ) .
But by the property (13.31) for T , p(σ 5 ) ≥ p(σ4 ) , and since p(σ4 ) = p(σ3 ) , we
have a contradiction. Thus there can be no nodes in S 5 or S6 .
Thus the nodes in S7 are the children of nodes in S3 and the nodes in S8 are
the children of nodes in S4 . Since T and T ∗ are equal except for these nodes in
S7 and S8 and the average length of the trees depends only the probability of the
internal nodes, it follows that T and T ∗ have the same average length.
This finally proves the key result, which is that a tree is optimal if and only if it
satisfies property (13.31).
It is now simple to show by induction that the Tunstall algorithm constructs a
tree that satisfies (13.31). Initially, the trivial tree of depth 1 satisfies (13.31).
Also, if we start with a tree that satisfies (13.31), and expand the leaf with the
highest probability, we still satisfy (13.31), since the new internal node has a
probability that is at least as high as any other leaf, and the new leaves have a
lower probability than the original leaf node that was expanded to form the new
internal node. Thus the new tree also satisfies (13.31), and by induction, the tree
constructed by the Tunstall algorithm satisfies (13.31). Combining this with the
previous result, we see that the tree constructed by the Tunstall algorithm has
maximal average length, and therefore minimizes R(A D ) .
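A compact sketch of the Tunstall construction for an i.i.d. source (the alphabet and probabilities below are illustrative, not taken from the problem): repeatedly expand the most probable leaf until D leaves exist.

import heapq

def tunstall(probs, D):
    """probs: dict symbol -> probability; returns list of (phrase, probability)."""
    m = len(probs)
    assert (D - 1) % (m - 1) == 0, "need D = 1 + k(m-1) for a complete m-ary tree"
    # Max-heap of leaves keyed by negated probability.
    leaves = [(-p, s) for s, p in probs.items()]
    heapq.heapify(leaves)
    while len(leaves) < D:
        neg_p, phrase = heapq.heappop(leaves)       # most probable leaf
        for s, p in probs.items():                  # expand it into m children
            heapq.heappush(leaves, (neg_p * p, phrase + s))
    return [(phrase, -neg_p) for neg_p, phrase in sorted(leaves, key=lambda t: t[1])]

phrases = tunstall({"a": 0.7, "b": 0.2, "c": 0.1}, D=7)
EL = sum(len(ph) * p for ph, p in phrases)
print(phrases)
print("expected phrase length:", round(EL, 4))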
(c) We will use the familiar Huffman coding procedure and “invert” it to construct
the variable to fixed code which achieves a compression ratio within one bit of the
entropy.
First, we take blocks of length 2 for the random variable, i.e., X_1 X_2 ∈ X^2 , and
construct an Huffman code for this pair. By the standard results for Huffman
codes, we have
2H < EL2 < 2H + 1
(13.34)
Let lm be the maximal length of any of the Huffman codewords.
Now consider the set of binary sequences of length n , n >> l m . Parse each
binary sequence to codewords from Huffman code, and replace the codewords
by the corresponding pair of symbols of X . This defines a set of sequences of
X , which we will like to use to construct A D . This set of sequences might not
correspond to a complete tree for X . We therefore add to this set by adding the
X sequences that correspond to siblings of the X sequences already chosen. This
augmented set will be the AD that we will use in our analysis.
We now show that for an appropriate choice of n large enough, this choice of A D
achieves a compression rate less than H + 1 .
The number of elements in AD : It is not difficult to see the code for any sequence
in AD is less than n + lm , and thus the number of sequences in AD is less than
2n+lm .
The average length of the sequences in A D : Using renewal theory, it follows that
the expected number of Huffman codewords in the parsing of a binary sequence
of length n converges to n/L2 . Thus the average length of the X sequences
corresponding to the parsed binary sequences converges to 2n/L 2 , since each
Huffman codeword corresponds to a block of two symbols of X .
The fact that we have added sequences to this set to form A D does not change
this result, and we can prove this by considering the parsing of binary sequences
of length n − lm and n + lm . The details of this analysis are omitted.
Thus we have the expected length of sequences of A D converges to 2n/L2 as
n → ∞ . Thus the compression ratio,
R(AD ) =
log D
EL(AD )
(13.35)
is upper bounded by (n + l_m)/(2n/L_2 + ε) for n large enough. This converges
to H + 1/2 as n → ∞ , and thus there exists an n such that we can achieve a
compression ratio less than H + 1 . This proves the required result.
Chapter 14
Kolmogorov Complexity
1. Kolmogorov complexity of two sequences. Let x, y ∈ {0, 1} ∗ . Argue that K(x, y) ≤
K(x) + K(y) + c.
Solution: Suppose we are given two sequences, x and y , with Kolmogorov complexity
K(x) and K(y) respectively. Then there must exist programs p x and py , of length
K(x) and K(y) respectively, which print out x and y . Form the following program:
Run the following two programs, not halting after the first;
Run the program px , interpreting the halt as a jump to the next step;
Run the program py .
This program, of length K(x) + K(y) + c , prints out x, y . Hence
K(x, y) ≤ K(x) + K(y) + c.
(14.1)
2. Complexity of the sum.
(a) Argue that K(n) ≤ log n + 2 log log n + c.
(b) Argue that K(n1 + n2 ) ≤ K(n1 ) + K(n2 ) + c.
(c) Give an example in which n1 and n2 are complex but the sum is relatively simple.
Solution:
(a) To describe an integer n , we will tell the computer the length of n , and then tell
it the bits of n . Thus the program will be self delimiting. To represent the length
of n , i.e., log n , we could use the simple code described in class: repeat each
bit of log n twice, and end the description by 10. This representation requires
2 log log n + 2 bits. It requires log n bits to represent the bits of n , and hence the
total length of the program is log n + 2 log log n + c , which is an upper bound on
the complexity of n:
K(n) ≤ log n + 2 log log n + c.
(14.2)
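The self-delimiting scheme just described can be written down directly (a sketch; the doubled-bit encoding with terminator 10 follows the convention stated above):

def self_delimiting(n):
    body = format(n, "b")                            # log n bits for n itself
    length = format(len(body), "b")                  # ~log log n bits for its length
    header = "".join(b * 2 for b in length) + "10"   # doubled bits, terminator 10
    return header + body

for n in (5, 1000):
    code = self_delimiting(n)
    print(n, code, len(code))   # total length ~ log n + 2 log log n + O(1)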
(b) Given two programs to print out n 1 and n2 , we can modify them so that they write
on the work tape, rather than the output tape. Then we can add an instruction
to add the two numbers together and print them out. The length of this program
is K(n1 ) + K(n2 ) + c , and hence
K(n1 + n2 ) ≤ K(n1 ) + K(n2 ) + c.
(14.3)
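(c) One standard example: let n_1 be an incompressible n-bit integer (so K(n_1) ≈ n ) and let n_2 = 2^n − n_1 . Then K(n_2) ≈ n as well, since n_1 can be recovered from n_2 and n , yet n_1 + n_2 = 2^n has complexity only about log n + c .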
3. Images. Consider an n × n array x of 0’s and 1’s . Thus x has n 2 bits.
Find the Kolmogorov complexity K(x | n) (to first order) if
(a) x is a horizontal line.
(b) x is a square.
(c) x is the union of two lines, each line being vertical or horizontal.
Solution:
(a) The program to print out an image of one horizontal line is of the form
For 1 ≤ i ≤ n { Set pixels on row i to 0; }
Set pixels on row r to 1;
Print out image.
Since the computer already knows n , the length of this program is K(r|n) + c ,
which is ≤ log n + c . Hence, the Kolmogorov complexity of a line image is
K(line|n) ≤ log n + c.
(14.4)
(b) For a square, we have to tell the program the coordinates of the top left corner,
and the length of the side of the square. This requires no more than 3 log n bits,
and hence
K(square|n) ≤ 3 log n + c.
(14.5)
However, we can save some description length by first describing the length of the
side of the square and then the coordinates. Knowing the length of the side of
the square reduces the range of possible values of the coordinates. Even better,
we can count the total number of such squares. There is one n × n square, four
(n − 1) × (n − 1) squares, nine (n − 2) × (n − 2) squares, etc. The total number
of squares is
1^2 + 2^2 + 3^2 + · · · + n^2 = n(n + 1)(2n + 1)/6 ≈ n^3/3 .        (14.6)
Since we can give the index of a square in a lexicographic ordering,
K(square|n) ≤ log(n^3/3) + c .        (14.7)
(c) In this case, we have to tell the program the position of the horizontal line and
the position of the vertical line, requiring no more than 2 log n bits. Hence
K(pair of lines|n) ≤ 2 log n + c.
(14.8)
In all the above cases, there are many images which are much simpler to describe. For
example, in the case of the horizontal line image, the image of the first line or the
middle line is much easier to describe. However most of the images have description
lengths close to the bounds derived above.
4. Do computers reduce entropy? Feed a random program P into an universal
computer. What is the entropy of the corresponding output? Specifically, let X =
U(P ) , where P is a Bernoulli(1/2) sequence. Here the binary sequence X is either
undefined or is in {0, 1}∗ . Let H(X) be the Shannon entropy of X . Argue that
H(X) = ∞ . Thus although the computer turns nonsense into sense, the output entropy
is still infinite.
Solution: Do computers reduce entropy? The output probability distribution on
strings x is PU (x) , the universal probability of the string x . Thus, by the arguments
following equation (7.65), the output distribution includes a mixture of all computable
probability distributions.
Consider the following distribution on binary finite-length strings:
P_1(x) = 1/(A n log^2 n)   if x = 11 . . . 1 0 (n ones followed by a 0),
       = 0                 otherwise,                                        (14.9)
where A = Σ_{n=1}^∞ 1/(n log^2 n) is chosen to ensure that Σ_x P_1(x) = 1 . Then P_1(x) is a computable probability distribution, and by Problem 9 in Chapter 2, P_1(x) has infinite entropy.
By (7.65) in the text,
P_U(x) ≥ c_1 P_1(x)                                               (14.10)
for some constant c_1 that does not depend on x . Let
P_2(x) = ( P_U(x) − c_1 P_1(x) ) / (1 − c_1) .                    (14.11)
It is easy to see that Σ_x P_2(x) = 1 , and therefore P_2(x) is a probability distribution. Also,
P_U(x) = c_1 P_1(x) + (1 − c_1) P_2(x).                           (14.12)
By the results of Chapter 2, −t log t is a concave function of t and therefore
−PU (x) log PU (x) ≥ −c1 P1 (x) log P1 (x) − (1 − c1 )P2 (x) log P2 (x)
(14.13)
Summing this over all x , we obtain
H(PU ) ≥ c1 H(P1 ) + (1 − c1 )H(P2 ) = ∞
(14.14)
Thus the entropy at the output of a universal computer fed in Bernoulli(1/2) sequences
is infinite.
5. Monkeys on a computer. Suppose a random program is typed into a computer. Give
a rough estimate of the probability that the computer prints the following sequence:
(a) 0n followed by any arbitrary sequence.
(b) π1 π2 . . . πn followed by any arbitrary sequence, where π i is the i -th bit in the
expansion of π.
(c) 0n 1 followed by any arbitrary sequence.
(d) ω1 ω2 . . . ωn followed by any arbitrary sequence.
(e) A proof of the four color theorem.
Solution: The probability that a computer with a random input will print out the string x followed by any arbitrary sequence is the sum of the probabilities over all sequences starting with the string x :
p_U(x . . .) = Σ_{y ∈ {0,1}* ∪ {0,1}^∞} p_U(xy),  where  p_U(x) = Σ_{p: U(p)=x} 2^{−ℓ(p)} .       (14.15)
This sum is lower bounded by the largest term, which corresponds to the simplest
concatenated sequence.
(a) The simplest program to print a sequence that starts with n 0’s is
Print 0’s forever.
This program has constant length c and hence the probability of strings starting
with n zeroes is
pU (0n . . .) ≈ 2−c .
(14.16)
(b) Just as in part (a), there is a short program to print the bits of π forever. Hence
pU (π1 π2 . . . πn . . .) ≈ 2−c .
(14.17)
(c) A program to print out n 0’s followed by a 1 must in general specify n . Since
most integers n have a complexity ≈ log ∗ n , and given n , the program to print
out 0n 1 is simple, we have
p_U(0^n 1 . . .) ≈ 2^{−log* n − c} ,                               (14.18)
(d) We know that n bits of Ω are essentially incompressible, i.e., their complexity
≥ n−c . Hence, the shortest program to print out n bits of Ω followed by anything
must have a length at least n − c , and hence
pU (ω1 ω2 . . . ωn . . .) ≈ 2−(n−c) .
(14.19)
6. Kolmogorov complexity and ternary programs.
Suppose that the input programs for a universal computer U are sequences in {0, 1, 2} ∗
(ternary inputs). Also, suppose U prints ternary outputs. Let K(x | l(x)) = min_{U(p, l(x)) = x} l(p) . Show that
(a) K(x^n | n) ≤ n + c.
(b) |{ x^n ∈ {0, 1}* : K(x^n | n) < k }| < 3^k .
Solution: Kolmogorov Complexity and Ternary Programs.
(a) It is always possible to include a ternary representation of the string to be printed
out in the program. This program has a length of n + c ternary digits, and
therefore K(xn |n) ≤ n + c.
(b) There are fewer than 3^k ternary programs of length less than k , and each of these programs can produce at most one output string; therefore the number of strings with Kolmogorov complexity less than k must be less than 3^k .
7. A law of large numbers. Using ternary inputs and outputs as in Problem 6, outline
an argument demonstrating that if a sequence x is algorithmically random, i.e., if
K(x|l(x)) ≈ l(x), then the proportion of 0’s, 1’s, and 2’s in x must each be near 1/3 .
It may be helpful to use Stirling’s approximation n! ≈ (n/e) n .
Solution: A Law of Large Numbers. The arguments parallel the arguments in the
binary case in Theorem 7.5.2. We will only outline the main argument. Let θ 0 , θ1 , θ2
be the proportions of 0’s, 1’s, and 2’s in the string x n . We can construct a two stage
description of xn by first describing θ0 , θ1 , θ2 , and then describing the string within
the set of all strings with the same proportions of 0,1 and 2. The two stage description
has a length bounded by nH3 (θ0 , θ1 , θ2 ) + 6 log n + c , where H3 denotes entropy to
base 3. If K(xn |n) ≈ n , then
n − c_n ≤ K(x^n | n) ≤ n H_3(θ_0, θ_1, θ_2) + 6 log n + c,        (14.20)
H3 (θ0 , θ1 , θ2 ) ≥ 1 − δn ,
(14.21)
and therefore
where δ_n → 0 . Thus θ_0, θ_1, θ_2 must lie in a neighborhood of (1/3, 1/3, 1/3) . This can be
seen by considering the behavior of the entropy function—it is close to 1 only in the
neighborhood of the center of the three dimensional simplex. Therefore, the proportion
of 0’s, 1’s and 2’s must be close to 1/3 for an incompressible ternary sequence.
8. Image complexity.
Consider two binary subsets A and B (of an n × n grid). For example,
Find general upper and lower bounds, in terms of K(A|n) and K(B|n) , for
(a) K(Ac |n).
(b) K(A ∪ B|n).
(c) K(A ∩ B|n).
Solution: Image Complexity.
(a) We can describe Ac by first describing A , so
K(Ac |n) < K(A|n) + c
(14.22)
(b) We can describe the union by describing each set separately and taking the union,
hence
K(A ∪ B|n) ≤ K(A|n) + K(B|n) + c
(14.23)
(c) The intersection can also be described similarly, and hence
K(A ∩ B|n) ≤ K(A|n) + K(B|n) + c
(14.24)
9. Random program. Suppose that a random program (symbols i.i.d. uniform over the
symbol set) is fed into the nearest available computer.
To our surprise the first n bits of the binary expansion of 1/√2 are printed out. Roughly what would you say the probability is that the next output bit will agree with the corresponding bit in the expansion of 1/√2 ?
Solution: Random program. The arguments parallel the argument in Section 7.10,
and we will not repeat them. Thus the probability
that the next bit printed out will
√
1
be the next bit of the binary expansion of 2 is ≈ cn+1
.
10. The face-vase illusion.
(a) What is an upper bound on the complexity of a pattern on an m × m grid that
has mirror image symmetry about a vertical axis through the center of the grid
and consists of horizontal line segments?
(b) What is the complexity K if the image differs in one cell from the pattern described
above?
Solution: The face vase illusion.
(a) An image with mirror image symmetry has only m^2/2 independent pixels. We
can describe only one half and ask the computer to construct the other. Therefore
the Kolmogorov complexity of the image is less than m 2 /2 + c .
The fact that the image consists of horizontal line segments will not make a difference unless we are given some further restrictions on the line segments. For
example, in the image with the face-vase illusion, each half of any horizontal line
consists of only two segments, one black and one white. In this case, we can
describe the image by a sequence of boundary points between the black and the
white. Thus the image will take m log( m
2 ) + c bits to describe the m boundary points in one half of the picture (the boundary points on the other half can
be calculated from this half). Thus the image with the face-vase illusion has a
Kolmogorov complexity less than m log m + c .
(b) We can describe a picture that differs in one pixel from the image above by first
describing the above image, and then giving the location of the pixel that is
different. Therefore, the Kolmogorov complexity of the new image is less than
m log m + 2 log m + c .
11. Kolmogorov complexity
Assume n very large and known. Let all rectangles be parallel to the frame.
(a) What is the (maximal) Kolmogorov complexity of the union of two rectangles on
an n × n grid?
(b) What if the rectangles intersect at a corner?
(c) What if they have the same (unknown) shape?
(d) What if they have the same (unknown) area?
(e) What is the minimum Kolmogorov complexity of the union of two rectangles?
That is, what is the simplest union?
(f) What is the (maximal) Kolmogorov complexity over all images (not necessarily
rectangles) on an n × n grid?
Solution: Kolmogorov complexity Note that K(a single point on the screen|n) ≈ 2 log n+
c.
(a) To specify two rectangles, we need to describe the coordinates of two corners
(X, Y ) and either length and width (L, W ) or the opposite corner (X ' , Y ' ) of the
rectangle. Hence we will need to describe 4 numbers, each of which is ≤ n , and
therefore we need 4 log n + c bits for each rectangle, for a total of 8 log n + c for
two rectangles.
With the upper-left corner and the lower-right corner, we can describe a rectangle.
Hence, for two rectangles, K(x|n) ≈ K(4 points|n) ≈ 8 log n + c .
We have not used the fact that the length and width of the rectangle are not
independent of the position of the lower left corner–for example, if the lower left
corner is near the NE corner of the square, the length and width of the rectangle
have to be small. This will reduce the number of possible (X, Y, L, W) combinations
to be (n(n + 1)/2)2 rather than n4 , but it does not change the key term.
(b) Assuming two rectangles meet at a corner, we need to only describe 3 corners
instead of 4. Hence, K(x|n) ≈ K(3 points|n) ≈ 6 log n + c .
(c) Assuming two rectangles of the same shape, we need to describe the upper-left
and lower right corners of one rectangle and the one corner of the other. Hence,
K(x|n) ≈ K(3 points|n) ≈ 6 log n + c .
(d) If the rectangles have the same area, then describing one rectangle fully and the
other rectangle by one corner and one side (the other side can be calculated). Thus
K(x|n) ≈ K(3 points|n) + K(1 side|n) = 7 log n + c .
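(e) The simplest union occurs when the two rectangles coincide, for example when both are the entire n × n grid; the union can then be described in a constant number of bits, so the minimum Kolmogorov complexity is ≈ c .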
(f) An image is a specification, for each pixel, of whether it is black or white. Since there are n^2 pixels in the image, at 1 bit per pixel, the maximal Kolmogorov complexity is n^2 + c bits.
12. Encrypted text
Suppose English text xn is encrypted into y n by a substitution cypher: a 1 to 1
reassignment of each of the 27 letters of the alphabet (A-Z including the space character)
to itself. Suppose the Kolmogorov complexity of the text x^n is K(x^n) = n/4 . (This is
about right for English text. We’re now assuming a 27-symbol programming language,
instead of a binary symbol-set for the programming language. So, the length of the
shortest program, using a 27-ary programming language, that prints out a particular
string of English text of length n, is approximately n/4.)
(a) What is the Kolmogorov complexity of the encryption map?
(b) Estimate the Kolmogorov complexity of the encrypted text y n .
(c) How high must n be before you would expect to be able to decode y n ?
Solution: Encrypted text
(a) There are 27! encryption maps. To describe one of them requires in general log 27!
symbols. Note that the question implicitly assumes that we are using a 27-symbol
programming language, so the log here is to base 27.
(b) The complexity of the encrypted text cannot be worse than the complexity of the
English text plus the complexity of the encryption map (plus some small constant).
(c) The idea here is that in order to be able to decode the encrypted text, the length
of the encrypted string, n , must be greater than n/4 + log 27! . Why? Because
short strings have short programs that print them out simply by including
the text verbatim and saying “Print this”. This does not take advantage of the
structure of the text, but the text is so short that there isn’t really enough structure
to take advantage of. Any random sequence of symbols of length n can always
be printed out by a program of length n (+ c), so if n is less than log 27! the
overhead of expressing it as an encryption of English is higher than including it
as verbatim data. It is only as the string grows to length appreciably greater than
log 27! that the overhead of expressing it as the encryption of English text becomes negligible. Now the structure starts to dominate.
It should be pointed out that this is only the beginning of an idea about the relationship between encrypted text and the ability to uniquely decipher it. Shannon
studied the relationship between encryption and complexity in [13].
13. Kolmogorov complexity.
Consider the Kolmogorov complexity K(n) over the integers n . If a specific integer n 1
has a low Kolmogorov complexity K(n 1 ) , by how much can the Kolmogorov complexity
K(n1 + k) for the integer n1 + k vary from K(n1 ) ?
Solution: Kolmogorov complexity.
Since we can describe n + k by describing n and then describing k and then adding them, K(n + k) ≤ K(n) + log k + c , since the descriptive complexity of k is at most about log k . Similarly, if the complexity of n + k is small, we can describe n by describing n + k and k , and therefore K(n) ≤ K(n + k) + log k + c . Thus we have
|K(n + k) − K(n)| ≤ log k + c .
14. Complexity of large numbers. Let A(n) be the set of positive integers x for which
a terminating program p of length less than or equal to n bits exists that outputs
x . Let B(n) be the complement of A(n) , i.e., B(n) is the set of integers x for
which no program of length less than or equal to n outputs x . Let M (n) be the
maximum of A(n) and let S(n) be the minimum of B(n) . What is the Kolmogorov
complexity K(M (n)) (approximately)? What is K(S(n)) (approximately)? Which is
larger ( M (n) or S(n) )? Give a reasonable lower bound on M (n) and a reasonable
upper bound on S(n) .
Solution: Complexity of large numbers.
Clearly since we can specify the program that printed out M (n) with length less
than n , the Kolmogorov complexity of M (n) is less than n . The description “largest
number that is printed out by a program of less than n bits” does not give rise to an
effective program to compute M (n) , because even though we can simulate in parallel
all programs of length less than n , we will never know when we have found M (n) .
Thus a good bound on K(M (n)) ≈ n .
While S(n) does not have a program of length less than n to compute it, and therefore
K(S(n)) > n , we know that since it is the smallest such number, S(n) − 1 has a short
program of length less than n . Therefore we can describe S(n) by describing S(n) − 1
and the difference, and the complexity K(S(n)) ≈ n .
M (n) is likely to be much much larger than S(n) since we can describe very very large
numbers with short programs (e.g. iterated exponentials) S(n) on the other hand is a
boring small number.
M(n) could be very large, and a good lower bound is an iterated exponential, i.e., 2^{2^{·^{·^{2}}}} , where the iteration is done n times. S(n) on the other hand cannot be less than
2^n , since all numbers less than 2^n have description lengths of at most about n . However, since there are not enough short programs, some number just above 2^n is likely to have complexity greater than n , and so S(n) ≈ 2^n .
Chapter 15
Network Information Theory
1. The cooperative capacity of a multiple access channel. (Figure 15.1)
Figure 15.1: Multiple access channel with cooperating senders.
(a) Suppose X_1 and X_2 have access to both indices W_1 ∈ {1, . . . , 2^{nR_1}}, W_2 ∈ {1, . . . , 2^{nR_2}}.
Thus the codewords X1 (W1 , W2 ), X2 (W1 , W2 ) depend on both indices. Find the
capacity region.
(b) Evaluate this region for the binary erasure multiple access channel Y = X 1 +
X2 , Xi ∈ {0, 1}. Compare to the non-cooperative region.
Solution: Cooperative capacity of multiple access channel
(a) When both senders have access to the pair of messages to be transmitted, they
can act in concert. The channel is then equivalent to a single user channel with
input alphabet X1 × X2 , and a larger message set W1 × W2 . The capacity of this
single user channel is C = maxp(x) I(X; Y ) = maxp(x1 ,x2 ) I(X1 , X2 ; Y ) . The two
senders can send at any combination of rates with the total rate
R1 + R2 ≤ C
(15.1)
(b) The capacity for the binary erasure multiple access channel was evaluated in class.
When the two senders cooperate to send a common message, the capacity is
C = max_{p(x_1, x_2)} I(X_1, X_2; Y) = max H(Y) = log 3,        (15.2)
achieved by (for example) a uniform distribution on the pairs, (0,0), (0,1) and
(1,1). The cooperative and non-cooperative regions are illustrated in Figure 15.2.
Figure 15.2: Cooperative and non-cooperative capacity for a binary erasure multiple access
channel
2. Capacity of multiple access channels. Find the capacity region for each of the
following multiple access channels:
Figure 15.3: Capacity region of additive modulo 2 MAC
(a) Additive modulo 2 multiple access channel. X_1 ∈ {0, 1}, X_2 ∈ {0, 1}, Y =
X1 ⊕ X2 .
(b) Multiplicative multiple access channel. X 1 ∈ {−1, 1}, X2 ∈ {−1, 1}, Y = X1 · X2 .
Solution: Examples of multiple access channels.
(a) Additive modulo 2 MAC.
Y = X1 ⊕X2 . Quite clearly we cannot send at a total rate of more than 1 bit, since
H(Y ) ≤ 1 . We can achieve a rate of 1 bit from sender 1 by setting X 2 = 0 , and
similarly we can send 1 bit/transmission from sender 2. By simple time sharing
we can achieve the entire capacity region which is shown in Figure 15.3.
(b) Multiplier channel.
X_1, X_2 ∈ {−1, 1}, Y = X_1 · X_2 .
This channel is equivalent to the previous channel with the mapping −1 → 1 and
1 → 0 . Hence the capacity region is the same as the previous channel.
3. Cut-set interpretation of capacity region of multiple access channel. For the
multiple access channel we know that (R 1 , R2 ) is achievable if
R_1 < I(X_1; Y | X_2),                (15.3)
R_2 < I(X_2; Y | X_1),                (15.4)
R_1 + R_2 < I(X_1, X_2; Y),           (15.5)
for X1 , X2 independent. Show, for X1 , X2 independent, that
I(X1 ; Y | X2 ) = I(X1 ; Y, X2 ).
Interpret the information bounds as bounds on the rate of flow across cutsets S 1 , S2
and S3 .
Solution: Cutset interpretation of the capacity region.
We can interpret I(X1 ; Y, X2 ) as the maximum amount of information that could flow
across the cutset S1 . This is an upper bound on the rate R1 . Similarly, we can
interpret the other bounds.
4. Gaussian multiple access channel capacity. For the AWGN multiple access channel, prove, using typical sequences, the achievability of any rate pairs (R 1 , R2 ) satisfying
R_1 < (1/2) log(1 + P_1/N),                (15.6)
R_2 < (1/2) log(1 + P_2/N),                (15.7)
R_1 + R_2 < (1/2) log(1 + (P_1 + P_2)/N).  (15.8)
The proof extends the proof for the discrete multiple access channel in the same way as
the proof for the single user Gaussian channel extends the proof for the discrete single
user channel.
Solution: Gaussian Multiple Access Channel Capacity.
The essence of the proof of the achievability of the capacity region for the Gaussian
multiple access channel is the same as the discrete multiple access channel. The main
difference is the introduction of the power constraint, and the modifications that have to
be made to ensure that the codewords satisfy the power constraint with high probability.
We will briefly outline the proof of achievability along the lines of the proof in the
discrete cases, pausing only to emphasize the differences.
The channel is defined by
Y = X1 + X2 + Z,   Z ∼ N(0, N), (15.9)
with power constraints P1 and P2 on the inputs. The achievable rates for this channel are
R1 < C(P1/N), (15.10)
R2 < C(P2/N), (15.11)
R1 + R2 < C((P1 + P2)/N), (15.12)
where
C(x) = (1/2) log(1 + x). (15.13)
Codebook generation: Generate 2^{nR1} independent codewords X1(w1), w1 ∈ {1, 2, ..., 2^{nR1}}, of length n, generating each element i.i.d. ∼ N(0, P1 − ε). Similarly generate 2^{nR2} independent codewords X2(w2), w2 ∈ {1, 2, ..., 2^{nR2}}, generating each element i.i.d. ∼ N(0, P2 − ε). These codewords form the codebook.
Encoding: To send index w1 , sender one sends the codeword X1 (w1 ) . Similarly, to
send w2 , sender 2 sends X2 (w2 ) .
Decoding: The receiver Y^n chooses the pair (i, j) such that
(x1(i), x2(j), y) ∈ A_ε^(n) (15.14)
and
(1/n) Σ_{k=1}^n x_{1k}^2(i) ≤ P1 (15.15)
and
(1/n) Σ_{k=1}^n x_{2k}^2(j) ≤ P2 (15.16)
if such a pair (i, j) exists and is unique; otherwise, an error is declared.
By the symmetry of the random code construction, the conditional probability of error
does not depend on which pair of indices is sent. So, without loss of generality, we can
assume that (w1 , w2 ) = (1, 1) .
An error occurs in the decoding if
• (x1(1), x2(1), y) ∉ A_ε^(n),
• (x1(i), x2(j), y) ∈ A_ε^(n) for some i ≠ 1 or j ≠ 1, or
• x1(1) or x2(1) do not satisfy the power constraint.
Define the events
E01 = { (1/n) Σ_{k=1}^n X_{1k}^2(1) > P1 } (15.17)
and
E02 = { (1/n) Σ_{k=1}^n X_{2k}^2(1) > P2 }. (15.18)
For i ≠ 0, j ≠ 0,
Eij = { (X1(i), X2(j), Y) ∈ A_ε^(n) }. (15.19)
Then by the union of events bound,
Pe^(n) = P( E01 ∪ E02 ∪ E11^c ∪ (∪_{(i,j)≠(1,1)} Eij) )
       ≤ P(E01) + P(E02) + P(E11^c) + Σ_{i≠1, j=1} P(Ei1) + Σ_{i=1, j≠1} P(E1j) + Σ_{i≠1, j≠1} P(Eij), (15.20)
where P is the probability given that (1, 1) was sent. Since we choose the codewords according to a normal distribution with variance Pi − ε, with very high probability the codeword power will be less than Pi. Hence, P(E01) → 0 and P(E02) → 0. From the AEP, P(E11^c) → 0. By the AEP, for i ≠ 1, we have
P(Ei1) = P((X1(i), X2(1), Y) ∈ A_ε^(n)) (15.21)
       = ∫_{(x1,x2,y) ∈ A_ε^(n)} f(x1) f(x2, y) (15.22)
       ≤ 2^{−n(h(X1) + h(X2,Y) − h(X1,X2,Y) − 3ε)} (15.23)
       = 2^{−n(I(X1; X2,Y) − 3ε)} (15.24)
       = 2^{−n(I(X1; Y|X2) − 3ε)} (15.25)
       = 2^{−n(C(P1/N) − 3ε)}, (15.26)
since X1 and X2 are independent, and therefore I(X1; X2, Y) = I(X1; X2) + I(X1; Y|X2) = I(X1; Y|X2).
Similarly, for j ≠ 1,
P(E1j) ≤ 2^{−n(C(P2/N) − 3ε)}, (15.27)
and for i ≠ 1, j ≠ 1,
P(Eij) ≤ 2^{−n(C((P1 + P2)/N) − 4ε)}. (15.28)
It follows that
Pe^(n) ≤ P(E01) + P(E02) + P(E11^c) + 2^{nR1} 2^{−n(C(P1/N) − 3ε)} + 2^{nR2} 2^{−n(C(P2/N) − 3ε)} + 2^{n(R1+R2)} 2^{−n(C((P1+P2)/N) − 4ε)}. (15.29)
Thus, since ε > 0 is arbitrary, the conditions of the theorem cause each term to tend to 0 as n → ∞.
The above bound shows that the average probability of error, averaged over all choices
of codebooks in the random code construction, is arbitrarily small. Hence there exists
at least one code C ∗ with arbitrarily small probability of error.
The achievability of the capacity region is proved.
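As an illustration (not part of the original solution), the following Python sketch evaluates the single-user bounds and the sum-rate bound above, together with the two corner points that standard successive cancellation gives; the parameter values P1, P2, N are arbitrary and numpy is assumed to be available.

import numpy as np

def C(x):
    # C(x) = (1/2) log2(1 + x), capacity in bits per transmission.
    return 0.5 * np.log2(1 + x)

P1, P2, N = 10.0, 5.0, 1.0   # example parameters, chosen arbitrarily

R1_max = C(P1 / N)
R2_max = C(P2 / N)
Rsum   = C((P1 + P2) / N)

# Corner points of the pentagon: one user treated as noise and decoded
# first, then subtracted before decoding the other (successive cancellation).
corner_a = (C(P1 / (P2 + N)), R2_max)
corner_b = (R1_max, C(P2 / (P1 + N)))
print(R1_max, R2_max, Rsum, corner_a, corner_b)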
5. Converse for the Gaussian multiple access channel. Prove the converse for the
Gaussian multiple access channel by extending the converse in the discrete case to take
into account the power constraint on the codewords.
Solution: Converse for the Gaussian multiple access channel. The proof of the converse for the Gaussian case proceeds on very similar lines to the discrete case. However,
for the Gaussian case, the two stages of proof that were required in the discrete case,
namely, of finding a new expression for the capacity region and then proving a converse,
can be combined into one single step.
By the code construction, it is possible to estimate (W1, W2) from the received sequence Y^n with a low probability of error. Hence the conditional entropy of (W1, W2) given Y^n must be small. By Fano's inequality,
H(W1, W2 | Y^n) ≤ n(R1 + R2) Pe^(n) + H(Pe^(n)) = nεn. (15.30)
It is clear that εn → 0 as Pe^(n) → 0.
Then we have
H(W1 | Y^n) ≤ H(W1, W2 | Y^n) ≤ nεn, (15.31)
H(W2 | Y^n) ≤ H(W1, W2 | Y^n) ≤ nεn. (15.32)
We can now bound the rate R1 as
nR1 = H(W1) (15.33)
    = I(W1; Y^n) + H(W1 | Y^n) (15.34)
    ≤(a) I(W1; Y^n) + nεn (15.35)
    ≤(b) I(X1^n(W1); Y^n) + nεn (15.36)
    = H(X1^n(W1)) − H(X1^n(W1) | Y^n) + nεn (15.37)
    ≤(c) H(X1^n(W1) | X2^n(W2)) − H(X1^n(W1) | Y^n, X2^n(W2)) + nεn (15.38)
    = I(X1^n(W1); Y^n | X2^n(W2)) + nεn (15.39)
    = h(Y^n | X2^n(W2)) − h(Y^n | X1^n(W1), X2^n(W2)) + nεn (15.40)
    =(d) h(Y^n | X2^n(W2)) − h(Z^n | X1^n(W1), X2^n(W2)) + nεn (15.41)
    =(e) h(Y^n | X2^n(W2)) − h(Z^n) + nεn (15.42)
    =(f) h(Y^n | X2^n(W2)) − Σ_{i=1}^n h(Zi) + nεn (15.43)
    ≤(g) Σ_{i=1}^n h(Yi | X2^n(W2)) − Σ_{i=1}^n h(Zi) + nεn (15.44)
    ≤(h) Σ_{i=1}^n h(Yi | X2i) − Σ_{i=1}^n h(Zi) + nεn (15.45)
    =(i) Σ_{i=1}^n h(X1i + Zi | X2i) − Σ_{i=1}^n h(Zi) + nεn (15.46)
    =(j) Σ_{i=1}^n h(X1i + Zi) − Σ_{i=1}^n h(Zi) + nεn (15.47)
    ≤(k) Σ_{i=1}^n [ (1/2) log 2πe(P1i + N) − (1/2) log 2πeN ] + nεn (15.48)
    = Σ_{i=1}^n (1/2) log(1 + P1i/N) + nεn, (15.49)
where
(a) follows from Fano’s inequality,
(b) from the data processing inequality,
(c) from the fact that since W1 and W2 are independent, so are X1n (W1 ) and X2n (W2 ) ,
and hence it follows that H(X1n (W1 )|X2n (W2 )) = H(X1n (W1 )) , and H(X1n (W1 )|Y n , X2n (W2 )) ≤
H(X1n (W1 )|Y n ) by conditioning,
(d) from the fact that Y n = X1n + X2n + Z n ,
(e) from the fact that Z n is independent of X1n and X2n ,
(f) from the fact that the noise is i.i.d.,
(g) from the chain rule and removing conditioning,
(h) from removing conditioning,
(i) from the fact that Yi = X1i + X2i + Zi ,
(j) from the fact that X1i and Zi are independent of X2i , and
(k) from the entropy maximizing property of the normal (Theorem 9.6.5), after defining P1i = E[X1i^2].
Hence, we have
R1 ≤ (1/n) Σ_{i=1}^n (1/2) log(1 + P1i/N) + εn. (15.50)
Similarly, we have
R2 ≤ (1/n) Σ_{i=1}^n (1/2) log(1 + P2i/N) + εn. (15.51)
To bound the sum of the rates, we have
n(R1 + R2) = H(W1, W2) (15.52)
    = I(W1, W2; Y^n) + H(W1, W2 | Y^n) (15.53)
    ≤(a) I(W1, W2; Y^n) + nεn (15.54)
    ≤(b) I(X1^n(W1), X2^n(W2); Y^n) + nεn (15.55)
    = h(Y^n) − h(Y^n | X1^n(W1), X2^n(W2)) + nεn (15.56)
    =(c) h(Y^n) − h(Z^n) + nεn (15.57)
    =(d) h(Y^n) − Σ_{i=1}^n h(Zi) + nεn (15.58)
    ≤(e) Σ_{i=1}^n h(Yi) − Σ_{i=1}^n h(Zi) + nεn (15.59)
    ≤(f) Σ_{i=1}^n [ (1/2) log 2πe(P1i + P2i + N) − (1/2) log 2πeN ] + nεn (15.60)
    = Σ_{i=1}^n (1/2) log(1 + (P1i + P2i)/N) + nεn, (15.61)
where
(a) follows from Fano’s inequality,
(b) from the data processing inequality,
(c) from the fact that Y n = X1n + X2n + Z n , and Z n is independent of X1n and X2n ,
(d) from the fact that Zi are i.i.d., (e) follows from the chain rule and removing conditioning, and
(f) from the entropy maximizing property of the normal, and the definitions of P 1i and
P2i .
Hence we have
R1 + R2 ≤ (1/n) Σ_{i=1}^n (1/2) log(1 + (P1i + P2i)/N) + εn. (15.62)
The power constraint on the codewords implies that
(1/n) Σ_{i=1}^n P1i ≤ P1, (15.63)
and
(1/n) Σ_{i=1}^n P2i ≤ P2. (15.64)
Now since log is a concave function, we can apply Jensen's inequality to the expressions in (15.50), (15.51) and (15.62). Thus we obtain
R1 ≤ (1/2) log(1 + ((1/n) Σ_{i=1}^n P1i)/N) + εn (15.65)
R2 ≤ (1/2) log(1 + ((1/n) Σ_{i=1}^n P2i)/N) + εn (15.66)
R1 + R2 ≤ (1/2) log(1 + ((1/n) Σ_{i=1}^n (P1i + P2i))/N) + εn, (15.67)
which, when combined with the power constraints and taking the limit as n → ∞, yield the desired converse, i.e.,
R1 < (1/2) log(1 + P1/N), (15.68)
R2 < (1/2) log(1 + P2/N), (15.69)
R1 + R2 < (1/2) log(1 + (P1 + P2)/N). (15.70)
6. Unusual multiple access channel. Consider the following multiple access channel:
X1 = X2 = Y = {0, 1} . If (X1 , X2 ) = (0, 0) , then Y = 0 . If (X1 , X2 ) = (0, 1) , then
Y = 1 . If (X1 , X2 ) = (1, 0) , then Y = 1 . If (X1 , X2 ) = (1, 1) , then Y = 0 with
probability 21 and Y = 1 with probability 12 .
(a) Show that the rate pairs (1,0) and (0,1) are achievable.
(b) Show that for any non-degenerate distribution p(x 1 )p(x2 ) , we have I(X1 , X2 ; Y ) <
1.
(c) Argue that there are points in the capacity region of this multiple access channel
that can only be achieved by timesharing, i.e., there exist achievable rate pairs
(R1 , R2 ) which lie in the capacity region for the channel but not in the region
defined by
R1 ≤ I(X1; Y|X2), (15.71)
R2 ≤ I(X2; Y|X1), (15.72)
R1 + R2 ≤ I(X1, X2; Y) (15.73)
for any product distribution p(x1 )p(x2 ) . Hence the operation of convexification
strictly enlarges the capacity region. This channel was introduced independently
by Csiszár and Körner[4] and Bierbaum and Wallmeier[2].
Solution:
Unusual multiple access channel.
(a) It is easy to see how we could send 1 bit/transmission from X1 to Y: simply set X2 = 0. Then Y = X1, and we can send 1 bit/transmission from sender 1 to the receiver.
Alternatively, if we evaluate the achievable region for the degenerate product distribution p(x1)p(x2) with p(x1) = (1/2, 1/2), p(x2) = (1, 0), we have I(X1; Y|X2) = 1,
I(X2 ; Y |X1 ) = 0 , and I(X1 , X2 ; Y ) = 1 . Hence the point (1, 0) lies in the
achievable region for the multiple access channel corresponding to this product
distribution.
By symmetry, the point (0, 1) also lies in the achievable region.
(b) Consider any non-degenerate product distribution, and let p1 = p(X1 = 1) and p2 = p(X2 = 1). By non-degenerate we mean that p1 ≠ 0 or 1, and p2 ≠ 0 or 1. In this case, Y = 0 when (X1, X2) = (0, 0) and half the time when (X1, X2) = (1, 1), i.e., with probability (1 − p1)(1 − p2) + (1/2) p1 p2. Y = 1 for the other input pairs, i.e., with probability p1(1 − p2) + p2(1 − p1) + (1/2) p1 p2. We can evaluate the achievable region of the multiple access channel for this product distribution. In particular,
R1 + R2 ≤ I(X1, X2; Y) = H(Y) − H(Y|X1, X2) = H((1 − p1)(1 − p2) + (1/2) p1 p2) − p1 p2. (15.74)
Now H((1 − p1)(1 − p2) + (1/2) p1 p2) ≤ 1 (the entropy of a binary random variable is at most 1) and p1 p2 > 0 for a non-degenerate distribution. Hence R1 + R2 is strictly less than 1 for any non-degenerate distribution.
(c) The degenerate distributions have either R1 or R2 equal to 0. Hence all the distributions that achieve rate pairs (R1, R2) with both rates positive have R1 + R2 < 1. For example, the union of the achievable regions over all product distributions does not include the point (1/2, 1/2). But this point is clearly achievable by timesharing between the points (1, 0) and (0, 1). Equivalently, the point (1/2, 1/2) lies in the convex hull of the union of the achievable regions, but not in the union itself. So the operation of taking the convex hull has strictly increased the capacity region for this multiple access channel.
7. Convexity of capacity region of broadcast channel. Let C ⊆ R 2 be the capacity
region of all achievable rate pairs R = (R 1 , R2 ) for the broadcast channel. Show that
C is a convex set by using a timesharing argument.
Specifically, show that if R(1) and R(2) are achievable, then λR(1) + (1 − λ)R(2) is
achievable for 0 ≤ λ ≤ 1.
Solution: Convexity of Capacity Regions.
Let R(1) and R(2) be two achievable rate pairs. Then there exists a sequence of ((2^{nR1(1)}, 2^{nR2(1)}), n) codes and a sequence of ((2^{nR1(2)}, 2^{nR2(2)}), n) codes for the channel with Pe^(n)(1) → 0 and Pe^(n)(2) → 0. We will now construct a code of rate λR(1) + (1 − λ)R(2).
For a code length n, use the concatenation of the codebook of length λn and rate R(1) and the codebook of length (1 − λ)n and rate R(2). The new codebook consists of all pairs of codewords, and hence the number of X1 codewords is 2^{λnR1(1)} 2^{(1−λ)nR1(2)}, so the rate is λR1(1) + (1 − λ)R1(2). Similarly, the rate of the X2 codewords is λR2(1) + (1 − λ)R2(2).
Figure 15.4: Slepian-Wolf rate region for Y = f(X)
We will now show that the probability of error for this sequence of codes goes to zero.
The decoding rule for the concatenated code is just the combination of the decoding
rule for the parts of the code. Hence the probability of error for the combined codeword
is less than the sum of the probabilities for each part. For the combined code,
Pe^(n) ≤ Pe^(λn)(1) + Pe^((1−λ)n)(2), (15.75)
which goes to 0 as n → ∞. Hence the overall probability of error goes to 0, which implies that λR(1) + (1 − λ)R(2) is achievable.
8. Slepian-Wolf for deterministically related sources. Find and sketch the Slepian-Wolf rate region for the simultaneous data compression of (X, Y), where y = f(x) is
some deterministic function of x.
Solution: Slepian Wolf for Y = f (X) .
The quantities defining the Slepian Wolf rate region are H(X, Y ) = H(X) , H(Y |X) =
0 and H(X|Y ) ≥ 0 . Hence the rate region is as shown in the Figure 15.4.
9. Slepian-Wolf. Let Xi be i.i.d. Bernoulli( p ). Let Zi be i.i.d. ∼ Bernoulli( r ), and let
Z be independent of X. Finally, let Y = X ⊕ Z (mod 2 addition). Let X be described
at rate R1 and Y be described at rate R2 . What region of rates allows recovery of
X, Y with probability of error tending to zero?
Solution: Slepian Wolf for binary sources.
Figure 15.5: Slepian-Wolf region for binary sources
X ∼ Bern( p ). Y = X ⊕ Z , Z ∼ Bern( r ). Then Y ∼ Bern( p ∗ r ), where p ∗ r =
p(1 − r) + r(1 − p) . H(X) = H(p) . H(Y ) = H(p ∗ r) , H(X, Y ) = H(X, Z) = H(X) +
H(Z) = H(p)+ H(r) . Hence H(Y |X) = H(r) and H(X|Y ) = H(p)+ H(r)− H(p ∗r) .
The Slepian Wolf region in this case is shown in Figure 15.5.
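The corner points of this region are easy to compute numerically. The following is a small sketch (not part of the original solution) that evaluates H(X|Y), H(Y|X), and H(X, Y) for example values of p and r; these values are arbitrary and numpy is assumed to be available.

import numpy as np

def Hb(q):
    # Binary entropy in bits.
    return 0.0 if q in (0.0, 1.0) else -q*np.log2(q) - (1-q)*np.log2(1-q)

p, r = 0.1, 0.2             # example values, chosen arbitrarily
pr = p*(1-r) + r*(1-p)       # p * r, the binary convolution

R1_min  = Hb(p) + Hb(r) - Hb(pr)   # H(X|Y)
R2_min  = Hb(r)                     # H(Y|X)
sum_min = Hb(p) + Hb(r)             # H(X, Y)
print(R1_min, R2_min, sum_min)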
10. Broadcast capacity depends only on the conditional marginals. Consider the
general broadcast channel (X, Y1 × Y2 , p(y1 , y2 | x)). Show that the capacity region
depends only on p(y1 | x) and p(y2 | x). To do this, for any given ((2nR1 , 2nR2 ), n)
code, let
P1^(n) = P{Ŵ1(Y1) ≠ W1}, (15.76)
P2^(n) = P{Ŵ2(Y2) ≠ W2}, (15.77)
P^(n) = P{(Ŵ1, Ŵ2) ≠ (W1, W2)}. (15.78)
Then show
max{P1^(n), P2^(n)} ≤ P^(n) ≤ P1^(n) + P2^(n).
The result now follows by a simple argument.
Remark: The probability of error P (n) does depend on the conditional joint distribution
p(y1 , y2 | x). But whether or not P (n) can be driven to zero (at rates ( R1 , R2 )) does
not (except through the conditional marginals p(y 1 | x), p(y2 | x)).
Solution: Broadcast channel capacity depends only on conditional marginals
P1^(n) = P(Ŵ1(Y1) ≠ W1) (15.79)
P2^(n) = P(Ŵ2(Y2) ≠ W2) (15.80)
P^(n) = P((Ŵ1(Y1), Ŵ2(Y2)) ≠ (W1, W2)) (15.81)
Then by the union of events bound, it is obvious that
P^(n) ≤ P1^(n) + P2^(n). (15.82)
Also, since (Ŵ1(Y1) ≠ W1) or (Ŵ2(Y2) ≠ W2) implies ((Ŵ1(Y1), Ŵ2(Y2)) ≠ (W1, W2)), we have
P^(n) ≥ max{P1^(n), P2^(n)}. (15.83)
Hence P^(n) → 0 iff P1^(n) → 0 and P2^(n) → 0.
The probability of error, P^(n), for a broadcast channel does depend on the joint conditional distribution. However, the individual probabilities of error P1^(n) and P2^(n) depend only on the conditional marginal distributions p(y1|x) and p(y2|x) respectively. Hence if we have a sequence of codes for a particular broadcast channel with P^(n) → 0, so that P1^(n) → 0 and P2^(n) → 0, then using the same codes for another broadcast channel with the same conditional marginals will ensure that P^(n) → 0 for that channel as well, and the corresponding rate pair is achievable for the second channel.
Hence the capacity region for a broadcast channel depends only on the conditional
marginals.
11. Converse for the degraded broadcast channel. The following chain of inequalities
proves the converse for the degraded discrete memoryless broadcast channel. Provide
reasons for each of the labeled inequalities.
Setup for converse for degraded broadcast channel capacity:
(W1 , W2 )indep. → X n (W1 , W2 ) → Y1n → Y2n
Encoding fn : 2nR1 × 2nR2 → X n
Decoding: gn : Y1^n → 2^{nR1},   hn : Y2^n → 2^{nR2}
Let Ui = (W2, Y1^{i−1}). Then
nR2 ≤(Fano) I(W2; Y2^n) (15.84)
    =(a) Σ_{i=1}^n I(W2; Y2i | Y2^{i−1}) (15.85)
    =(b) Σ_i (H(Y2i | Y2^{i−1}) − H(Y2i | W2, Y2^{i−1})) (15.86)
    ≤(c) Σ_i (H(Y2i) − H(Y2i | W2, Y2^{i−1}, Y1^{i−1})) (15.87)
    =(d) Σ_i (H(Y2i) − H(Y2i | W2, Y1^{i−1})) (15.88)
    =(e) Σ_{i=1}^n I(Ui; Y2i). (15.89)
Continuation of converse. Give reasons for the labeled inequalities:
nR1 ≤(Fano) I(W1; Y1^n) (15.90)
    ≤(f) I(W1; Y1^n, W2) (15.91)
    ≤(g) I(W1; Y1^n | W2) (15.92)
    =(h) Σ_{i=1}^n I(W1; Y1i | Y1^{i−1}, W2) (15.93)
    ≤(i) Σ_{i=1}^n I(Xi; Y1i | Ui). (15.94)
Now let Q be a time sharing random variable with Pr(Q = i) = 1/n , i = 1, 2, . . . , n .
Justify the following:
R1 ≤ I(XQ; Y1Q | UQ, Q) (15.95)
R2 ≤ I(UQ; Y2Q | Q), (15.96)
for some distribution p(q)p(u|q)p(x|u, q)p(y1, y2|x). By appropriately redefining U, argue that this region is equal to the convex closure of regions of the form
R1 ≤ I(X; Y1|U) (15.97)
R2 ≤ I(U; Y2), (15.98)
for some joint distribution p(u)p(x|u)p(y 1 , y2 |x) .
Solution: Converse for the degraded broadcast channel.
(W1, W2) → X(W1, W2) → Y → Z (15.99)
We also have
(W1, W2) → Xi(W1, W2) → Yi → Zi. (15.100)
Let Ui = (W2, Y^{i−1}).
By Fano's inequality,
H(W2 | Z^n) ≤ P2^(n) nR2 + H(P2^(n)) = nεn (15.101)
where εn → 0 as P2^(n) → 0.
We then have the following chain of inequalities
nR2 = H(W2) (15.102)
    = I(W2; Z^n) + H(W2 | Z^n) (15.103)
    ≤ I(W2; Z^n) + nεn (15.104)
    =(a) Σ_i I(W2; Zi | Z^{i−1}) + nεn (15.105)
    =(b) Σ_i (H(Zi | Z^{i−1}) − H(Zi | Z^{i−1}, W2)) + nεn (15.106)
    ≤(c) Σ_i (H(Zi) − H(Zi | Z^{i−1}, W2, Y^{i−1})) + nεn (15.107)
    =(d) Σ_i (H(Zi) − H(Zi | W2, Y^{i−1})) + nεn (15.108)
    =(e) Σ_i I(Ui; Zi) + nεn, (15.109)
where (15.104) follows from Fano’s inequality,
(a) from the chain rule,
(b) from the definition of conditional mutual information,
(c) from the fact that removing conditioning increases entropy and adding conditioning
reduces it,
(d) from the fact that since the broadcast channel is degraded, Z i−1 depends only
on Y i−1 and is conditionally independent of everything else, hence Z i is conditionally
independent of Z i−1 given Y i−1 ,
(e) follows from the definition of Ui .
Continuation of Converse.
Similarly, by Fano's inequality,
H(W1 | Y^n) ≤ P1^(n) nR1 + H(P1^(n)) = nεn (15.110)
and we have the chain of inequalities,
nR1 = H(W1) (15.111)
    = I(W1; Y^n) + H(W1 | Y^n) (15.112)
    ≤ I(W1; Y^n) + nεn (15.113)
    ≤(f) I(W1; W2, Y^n) + nεn (15.114)
    =(g) I(W1; Y^n | W2) + nεn (15.115)
    ≤(h) Σ_i I(W1; Yi | W2, Y^{i−1}) + nεn (15.116)
    ≤ Σ_i I(W1, Xi; Yi | W2, Y^{i−1}) + nεn (15.117)
    ≤(i) Σ_i I(Xi; Yi | W2, Y^{i−1}) + nεn (15.118)
    = Σ_i I(Xi; Yi | Ui) + nεn, (15.119)
where (15.113) follows from Fano’s inequality,
(f) follows from the fact that the difference, I(W 1 ; W2 |Y n ) ≥ 0 ,
(g) follows from the chain rule for I and the fact that W 1 and W2 are independent,
(h) from the chain rule for mutual information, and
(i) from the data processing inequality.
We can then use standard techniques like the introduction of a time-sharing random
variable to complete the proof of the converse for the broadcast channel.
12. Capacity points.
(a) For the degraded broadcast channel X → Y 1 → Y2 , find the points a and b where
the capacity region hits the R1 and R2 axes (Figure 15.6).
Figure 15.6: Capacity region of a broadcast channel
(b) Show that b ≤ a.
Solution: Capacity region of broadcast channel.
(a) The capacity region of the degraded broadcast channel X → Y 1 → Y2 is the
convex hull of regions of the form
R1 ≤ I(X; Y1|U) (15.120)
R2 ≤ I(U; Y2) (15.121)
over all choices of auxiliary random variable U and joint distribution of the form
p(u)p(x|u)p(y1 , y2 |x) .
The region is of the form illustrated in Figure 15.7.
The point a on the figure corresponds to the maximum achievable rate from the
sender to receiver 2. From the expression for the capacity region, it is the maximum
value of I(U ; Y2 ) for all auxiliary random variables U .
For any random variable U and p(u)p(x|u) , U → X → Y 2 forms a Markov
chain, and hence I(U ; Y2 ) ≤ I(X; Y2 ) ≤ maxp(x) I(X; Y2 ) . The maximum can be
achieved by setting U = X and choosing the distribution of X to be the one that
maximizes I(X; Y2 ) . Hence the point a corresponds to R2 = maxp(x) I(X; Y2 ), R1 =
I(X; Y1 |U ) = I(X; Y1 |X) = 0 .
The point b has a similar interpretation. The point b corresponds to the maximum
rate of transmission to receiver 1. From the expression for the capacity region,
R1 ≤ I(X; Y1|U) = H(Y1|U) − H(Y1|X, U) = H(Y1|U) − H(Y1|X), (15.122)
since U → X → Y1 forms a Markov chain. Since H(Y1|U) ≤ H(Y1), we have
R1 ≤ H(Y1) − H(Y1|X) = I(X; Y1) ≤ max_{p(x)} I(X; Y1), (15.123)
and the maximum is attained when we set U ≡ 0 and choose p(x) = p(x|u) to
be the distribution that maximizes I(X; Y 1 ) . In this case, R2 ≤ I(U ; Y2 ) = 0 .
Hence point b corresponds to the rates R 1 = maxp(x) I(X; Y1 ), R2 = 0 .
These results have a simple single-user interpretation. If we are not sending any information to receiver 1, then we can treat the channel to receiver 2 as a single-user channel and send at capacity for this channel, i.e., max I(X; Y2). Similarly, if we are not sending any information to receiver 2, we can send at capacity to receiver 1, which is max I(X; Y1).
(b) Since X → Y1 → Y2 forms a Markov chain for all distributions p(x), we have by the data processing inequality
a = max_{p(x)} I(X; Y2) = I(X*; Y2) (15.124)
  ≤ I(X*; Y1) (15.125)
  ≤ max_{p(x)} I(X; Y1) = b, (15.126)
where X* has the distribution that maximizes I(X; Y2).
13. Degraded broadcast channel. Find the capacity region for the degraded broadcast
channel in Figure 15.8.
Solution: Degraded broadcast channel. From the expression for the capacity region, it is clear that the only non-trivial possibility for the auxiliary random variable U is that it be binary. From the symmetry of the problem, we see that the auxiliary random variable should be connected to X by a binary symmetric channel with parameter β. Hence we have the setup as shown in Figure 15.9.
We can now evaluate the capacity region for this choice of auxiliary random variable.
By symmetry, the best distribution for U is the uniform. Hence
R2 = I(U; Y2) (15.127)
   = H(Y2) − H(Y2|U) (15.128)
   = H((1−α)/2, α, (1−α)/2) − H((1−α)(β(1−p) + (1−β)p), α, (1−α)((1−β)(1−p) + βp)) (15.129)
   = H(α) + (1−α)H(1/2) − H(α) − (1−α)H(β(1−p) + (1−β)p) (15.130)
   = (1−α)(1 − H(β(1−p) + (1−β)p)). (15.131)
Also
R1 = I(X; Y1|U) (15.132)
   = H(Y1|U) − H(Y1|U, X) (15.133)
   = H(β(1−p) + (1−β)p) − H(p). (15.134)
These two equations characterize the boundary of the capacity region as β varies. When β = 0, then R1 = 0 and R2 = (1−α)(1 − H(p)). When β = 1/2, we have R1 = 1 − H(p) and R2 = 0.
The capacity region is sketched in Figure 15.10.
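A short sketch (not part of the original solution) that traces this boundary numerically, using the expressions for R1 and R2 derived above as β varies; the example values of α and p are arbitrary and numpy is assumed to be available.

import numpy as np

def Hb(q):
    # Binary entropy in bits.
    return 0.0 if q in (0.0, 1.0) else -q*np.log2(q) - (1-q)*np.log2(1-q)

alpha, p = 0.3, 0.1        # example erasure and crossover probabilities
for beta in np.linspace(0, 0.5, 6):
    conv = beta*(1-p) + (1-beta)*p          # beta convolved with p
    R1 = Hb(conv) - Hb(p)
    R2 = (1 - alpha) * (1 - Hb(conv))
    print(f"beta={beta:.1f}  R1={R1:.3f}  R2={R2:.3f}")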
14. Channels with unknown parameters. We are given a binary symmetric channel
with parameter p. The capacity is C = 1 − H(p).
Now we change the problem slightly. The receiver knows only that p ∈ {p 1 , p2 } ,
i.e., p = p1 or p = p2 , where p1 and p2 are given real numbers. The transmitter knows
the actual value of p. Devise two codes for use by the transmitter, one to be used if
p = p1 , the other to be used if p = p2 , such that transmission to the receiver can take
place at rate ≈ C(p1 ) if p = p1 and at rate ≈ C(p2 ) if p = p2 .
Hint: Devise a method for revealing p to the receiver without affecting the asymptotic
rate. Prefixing the codeword by a sequence of 1’s of appropriate length should work.
Solution: Capacity of channels with unknown parameters.
We have two possibilities: the channel is a BSC with parameter p1 or a BSC with parameter p2. If both sender and receiver know the state of the channel, then we can achieve the capacity corresponding to whichever channel is in use, i.e., 1 − H(p1) or 1 − H(p2).
If the receiver does not know the state of the channel, then he cannot know which codebook is being used by the transmitter. He cannot then decode optimally; hence he cannot achieve the rates corresponding to the capacities of the channels.
But the transmitter can inform the receiver of the state of the channel so that the
receiver can decode optimally. To do this, the transmitter can precede the codewords
by a sequence of 1’s or 0’s. Let us say we use a string of m 1’s to indicate that the
channel was in state p1 and m 0’s to indicate state p2 . Then, if m = o(n) and
m → ∞ , where n is the block length of the code used, we have the probability of error
in decoding the state of the channel going to zero. Since the receiver will then use the right code for the rest of the message, it will be decoded correctly with Pe^(n) → 0. The effective rate for this code is log 2^{nC(pi)}/(n + m) = nC(pi)/(n + m) → C(pi), since m = o(n). So we can achieve the same asymptotic rate as if both sender and receiver knew the state of the channel.
15. Two-way channel. Consider the two-way channel shown in Figure 15.6. The outputs
Y1 and Y2 depend only on the current inputs X1 and X2 .
(a) By using independently generated codes for the two senders, show that the following rate region is achievable:
R1 < I(X1; Y2|X2), (15.135)
R2 < I(X2; Y1|X1) (15.136)
for some product distribution p(x1 )p(x2 )p(y1 , y2 |x1 , x2 ) .
(b) Show that the rates for any code for a two-way channel with arbitrarily small
probability of error must satisfy
R1 ≤ I(X1; Y2|X2), (15.137)
R2 ≤ I(X2; Y1|X1) (15.138)
for some joint distribution p(x1 , x2 )p(y1 , y2 |x1 , x2 ) .
The inner and outer bounds on the capacity of the two-way channel are due to Shannon[15].
He also showed that the inner bound and the outer bound do not coincide in the case
of the binary multiplying channel X 1 = X2 = Y1 = Y2 = {0, 1} , Y1 = Y2 = X1 X2 .
The capacity of the two-way channel is still an open problem.
Solution: Two-way channel.
(a) We will only outline the proof of achievability. It is quite straightforward compared
to the more complex channels considered in the text.
Fix p(x1)p(x2)p(y1, y2|x1, x2).
Code generation: Generate a code of size 2^{nR1} of codewords X1(w1), where the x1i are generated i.i.d. ∼ p(x1). Similarly generate a codebook X2(w2) of size 2^{nR2}.
Encoding: To send index w1 from sender 1, he sends X1(w1). Similarly, sender 2 sends X2(w2).
Decoding: Receiver 1 looks for the unique w2 such that (X1(w1), x2(w2), Y1) ∈ A_ε^(n)(X1, X2, Y1). If there is no such w2, or more than one such, it declares an error. Similarly, receiver 2 looks for the unique w1 such that (x1(w1), X2(w2), Y2) ∈ A_ε^(n)(X1, X2, Y2).
Analysis of the probability of error: We will only analyze the error at receiver 1.
The analysis for receiver 2 is similar.
Without loss of generality, by the symmetry of the random code construction, we
can assume that (1,1) was sent. We have an error at receiver 1 if
• (X1(1), X2(1), Y1) ∉ A_ε^(n)(X1, X2, Y1). The probability of this goes to 0 by the law of large numbers as n → ∞.
• There exists a j ≠ 1 such that (X1(1), X2(j), Y1) ∈ A_ε^(n)(X1, X2, Y1).
Define the events
Ej = {(X1(1), X2(j), Y1) ∈ A_ε^(n)}. (15.139)
Then by the union of events bound,
Pe^(n) = P( E1^c ∪ (∪_{j≠1} Ej) ) (15.140)
       ≤ P(E1^c) + Σ_{j≠1} P(Ej), (15.141)
where P is the probability given that (1, 1) was sent. From the AEP, P(E1^c) → 0. By Theorem 14.2.3, for j ≠ 1, we have
P(Ej) = P((X1(1), X2(j), Y1) ∈ A_ε^(n)) (15.142)
      = Σ_{(x1,x2,y1) ∈ A_ε^(n)} p(x2) p(x1, y1) (15.143)
      ≤ |A_ε^(n)| 2^{−n(H(X2)−ε)} 2^{−n(H(X1,Y)−ε)} (15.144)
      ≤ 2^{−n(H(X2) + H(X1,Y) − H(X1,X2,Y) − 3ε)} (15.145)
      = 2^{−n(I(X2; X1,Y) − 3ε)} (15.146)
      = 2^{−n(I(X2; Y|X1) − 3ε)}, (15.147)
since X1 and X2 are independent, and therefore I(X2; X1, Y) = I(X2; X1) + I(X2; Y|X1) = I(X2; Y|X1). Therefore
Pe^(n) ≤ P(E1^c) + 2^{nR2} 2^{−n(I(X2; Y|X1) − 3ε)}. (15.148)
Since ε > 0 is arbitrary, the conditions of the theorem imply that the probability
of error tends to 0 as n → ∞ . Similarly, we can show that the probability of error
at receiver two goes to 0, and thus we have proved the achievability of the region
for the two way channel.
(b) The converse is a simple application of the general Theorem 14.10.1 to this simple
case. The sets S can be taken in turn to be each node. We will not go into the
details.
16. Multiple-access channel
Let the output Y of a multiple-access channel be given by
Y = X1 + sgn(X2 )
where X1 , X2 are both real and power limited,
E[X1^2] ≤ P1,   E[X2^2] ≤ P2,
and
sgn(x) = 1 if x > 0, and −1 if x ≤ 0.
Note that there is interference but no noise in this channel.
(a) Find the capacity region.
(b) Describe a coding scheme that achieves the capacity region.
Solution: Multiple-access channel
(a) This is a continuous noiseless multiple access channel. If we let U2 = sgn(X2), we can consider a channel from X1 and U2 to Y. Then
I(X1; Y|X2) = h(Y|X2) − h(Y|X1, X2) = h(X1|X2) − (−∞) = ∞, (15.149)
since X1 and X2 are independent, and similarly
I(X2; Y|X1) = I(X2, U2; Y|X1) (15.150)
            = I(U2; Y|X1) + I(X2; Y|X1, U2) (15.151)
            = I(U2; Y|X1) (15.152)
            = H(U2) − H(U2|Y, X1) (15.153)
            = H(U2), (15.154)
and I(X1, X2; Y) = ∞. Thus we can send at infinite rate from X1 to Y and at a maximum rate of 1 bit/transmission from X2 to Y.
(b) We can send a 1 for X2 in the first transmission; knowing this, the receiver can recover X1 perfectly, recovering an infinite number of bits. From then on, X1 can be 0 and we can send 1 bit per transmission using the sign of X2.
17. Slepian Wolf
Let (X, Y) have the joint pmf p(x, y):
p(x,y)   1   2   3
  1      α   β   β
  2      β   α   β
  3      β   β   α
where β = 1/6 − α/2. (Note: This is a joint, not a conditional, probability mass function.)
(a) Find the Slepian Wolf rate region for this source.
(b) What is Pr{X = Y } in terms of α ?
(c) What is the rate region if α = 1/3?
(d) What is the rate region if α = 1/9?
Solution: Slepian Wolf
(a) H(X, Y) = −Σ p(x, y) log p(x, y) = −3α log α − 6β log β. Since X and Y are uniformly distributed,
H(X) = H(Y) = log 3 (15.155)
and
H(X|Y) = H(Y|X) = H(3α, 3β, 3β). (15.156)
Hence the Slepian-Wolf rate region is
R1 ≥ H(X|Y) = H(3α, 3β, 3β) (15.157)
R2 ≥ H(Y|X) = H(3α, 3β, 3β) (15.158)
R1 + R2 ≥ H(X, Y) = H(3α, 3β, 3β) + log 3 (15.159)
(b) From the joint distribution, Pr(X = Y) = 3α.
(c) If α = 1/3, then β = 0, and H(X|Y) = H(Y|X) = 0. The rate region then becomes
R1 ≥ 0 (15.160)
R2 ≥ 0 (15.161)
R1 + R2 ≥ log 3 (15.162)
(d) If α = 1/9, then β = 1/9, and H(X|Y) = H(Y|X) = log 3. X and Y are independent, and the rate region then becomes
R1 ≥ log 3 (15.163)
R2 ≥ log 3 (15.164)
R1 + R2 ≥ 2 log 3 (15.165)
18. Square channel
What is the capacity of the following multiple access channel?
X1 ∈ {−1, 0, 1},  X2 ∈ {−1, 0, 1},  Y = X1^2 + X2^2
(a) Find the capacity region.
(b) Describe p∗ (x1 ), p∗ (x2 ) achieving a point on the boundary of the capacity region.
Solution: Square channel
(a) If we let U1 = X1^2 and U2 = X2^2, then the channel is equivalent to a sum multiple access channel Y = U1 + U2. We could also get the same behaviour by using only two input symbols (0 and 1) for both X1 and X2.
Thus the capacity region is
R1 < I(X1; Y|X2) = H(Y|X2) (15.166)
R2 < I(X2; Y|X1) = H(Y|X1) (15.167)
R1 + R2 < I(X1, X2; Y) = H(Y) (15.168)
By choosing p(x1, x2) = 1/4 for (x1, x2) = (1, 0), (0, 0), (0, 1), (1, 1) and 0 otherwise, we obtain H(Y|X1) = H(Y|X2) = 1, H(Y) = 1.5, and by the results for the binary erasure multiple access channel, the capacity of the channel is limited by
R1 < 1 (15.169)
R2 < 1 (15.170)
R1 + R2 < 1.5 (15.171)
(b) One possible distribution that achieves points on the boundary of the rate region
is given by the distribution in part (a).
19. Slepian-Wolf: Two senders know random variables U 1 and U2 respectively. Let the
random variables (U1 , U2 ) have the following joint distribution:
U1\U2      0          1          2        ···     m−1
  0        α       β/(m−1)    β/(m−1)    ···    β/(m−1)
  1      γ/(m−1)      0          0        ···      0
  2      γ/(m−1)      0          0        ···      0
  ⋮         ⋮          ⋮          ⋮        ⋱       ⋮
 m−1     γ/(m−1)      0          0        ···      0
where α + β + γ = 1 . Find the region of rates (R 1 , R2 ) that would allow a common
receiver to decode both random variables reliably.
Solution: Slepian-Wolf
For this joint distribution,
H(U1) = H(α+β, γ/(m−1), ..., γ/(m−1)) = H(α+β, γ) + γ log(m−1) (15.172)
H(U2) = H(α+γ, β/(m−1), ..., β/(m−1)) = H(α+γ, β) + β log(m−1) (15.173)
H(U1, U2) = H(α, β/(m−1), ..., β/(m−1), γ/(m−1), ..., γ/(m−1)) = H(α, β, γ) + β log(m−1) + γ log(m−1) (15.174)
H(U1|U2) = H(α, β, γ) − H(α+γ, β) + γ log(m−1) (15.175)
H(U2|U1) = H(α, β, γ) − H(α+β, γ) + β log(m−1) (15.176)
and hence the Slepian-Wolf region is
R1 ≥ H(α, β, γ) − H(α+γ, β) + γ log(m−1) (15.177)
R2 ≥ H(α, β, γ) − H(α+β, γ) + β log(m−1) (15.178)
R1 + R2 ≥ H(α, β, γ) + β log(m−1) + γ log(m−1) (15.179)
20. Multiple access.
(a) Find the capacity region for the multiple access channel
Y = X1^{X2},
where X1 ∈ {2, 4}, X2 ∈ {1, 2}.
(b) Suppose the range of X1 is {1, 2}. Is the capacity region decreased? Why or why not?
Solution: Multiple access.
(a) With X1 ∈ {2, 4}, X2 ∈ {1, 2}, the channel Y = X1^{X2} behaves as:
X1   X2   Y
 2    1    2
 4    1    4
 2    2    4
 4    2   16
We compute
R1 ≤ I(X1; Y|X2) = I(X1; X1^{X2}|X2) = H(X1) = 1 bit per transmission,
R2 ≤ I(X2; Y|X1) = I(X2; X1^{X2}|X1) = H(X2) = 1 bit per transmission,
R1 + R2 ≤ I(X1, X2; Y) = H(Y) − H(Y|X1, X2) = H(Y) = 3/2 bits per transmission,
where the bound on R1 + R2 is met at the corners in the picture below, where either sender 1 or 2 sends 1 bit per transmission and the other user treats the channel as a binary erasure channel with capacity 1 − p_erasure = 1 − 1/2 = 1/2 bits per use of the channel. Other points on the line are achieved by timesharing.
(b) With X1 ∈ {1, 2}, X2 ∈ {1, 2}, the channel Y = X1^{X2} behaves as:
X1   X2   Y
 1    1    1
 2    1    2
 1    2    1
 2    2    4
Note that when X1 = 1, X2 has no effect on Y and cannot be recovered given X1 and Y. If we let α = p(X1 = 2) and β = p(X2 = 2), then:
R1 ≤ I(X1; Y|X2) = I(X1; X1^{X2}|X2) = H(α)
R2 ≤ I(X2; Y|X1) = H(Y|X1) − H(Y|X1, X2) = H(Y|X1)
   = p(X1 = 1)H(Y|X1 = 1) + p(X1 = 2)H(Y|X1 = 2)
   = αH(β)
R1 + R2 ≤ I(X1, X2; Y) = H(Y) − H(Y|X1, X2) = H(Y)
   = H(αβ, α(1 − β), 1 − α) = H(α) + αH(β)
We may choose β = 1/2 to maximize the above bounds, giving
R1 ≤ H(α)
R2 ≤ α
R1 + R2 ≤ H(α) + α
In Figure 15.11 we plot the region for X1 ∈ {2, 4} (solid line) against that when X1 ∈ {1, 2} (dotted). What we find is that, surprisingly, the rate region in the second case is not simply a reduced version of the first. In fact, neither region contains the other, so for each version of this channel, there are achievable rate pairs which are not achievable in the other.
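A small sketch (not part of the original solution) that traces the boundary of the second region using the bounds above with β = 1/2; the step size is arbitrary and numpy is assumed to be available.

import numpy as np

def Hb(q):
    # Binary entropy in bits.
    return 0.0 if q in (0.0, 1.0) else -q*np.log2(q) - (1-q)*np.log2(1-q)

# Boundary of the achievable region for X1 in {1,2}, with beta = 1/2
# (alpha = P(X1 = 2), the value for which X2 is observable through Y).
for alpha in np.linspace(0, 1, 6):
    print(f"alpha={alpha:.1f}  R1<=H(alpha)={Hb(alpha):.3f}  "
          f"R2<=alpha={alpha:.3f}  sum<={Hb(alpha)+alpha:.3f}")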
21. Broadcast Channel. Consider the following degraded broadcast channel.
[Channel diagram: the degraded broadcast channel X → Y1 → Y2, where X → Y1 is a binary erasure channel with erasure probability α1 and Y1 → Y2 is a binary erasure channel with erasure probability α2.]
(a) What is the capacity of the channel from X to Y 1 ?
(b) From X to Y2 ?
(c) What is the capacity region of all (R 1 , R2 ) achievable for this broadcast channel?
Simplify and sketch.
Solution: Broadcast Channel.
(a) The channel from X to Y1 is a standard erasure channel with probability of erasure α1, and hence the capacity is 1 − α1.
(b) We can show that the effective channel from X to Y2 is a binary erasure channel with erasure probability α1 + α2 − α1α2, and hence the capacity is 1 − α1 − α2 + α1α2 = (1 − α1)(1 − α2).
(c) As in Problem 15.13, the auxiliary random variable U in the capacity region of
the broadcast channel has to be binary. Hence we have the following picture
We can now evaluate the capacity region for this choice of auxiliary random
variable. By symmetry, the best distribution for U is the uniform. Let α = α1 + α2 − α1α2, so that 1 − α = (1 − α1)(1 − α2). Hence
R2 = I(U; Y2) (15.180)
   = H(Y2) − H(Y2|U) (15.181)
   = H((1−α)/2, α, (1−α)/2) − H((1−β)(1−α1)(1−α2), α1 + (1−α1)α2, β(1−α1)(1−α2)) (15.182)
   = H(α) + (1−α)H(1/2) − H(α) − (1−α)H(β) (15.183)
   = (1−α)(1 − H(β)). (15.184)
Also
R1 = I(X; Y1|U) (15.185)
   = H(Y1|U) − H(Y1|U, X) (15.186)
   = H((1−β)(1−α1), α1, β(1−α1)) − H(α1) (15.187)
   = (1−α1)H(β) + H(α1) − H(α1) (15.188)
   = (1−α1)H(β). (15.189)
These two equations characterize the boundary of the capacity region as β varies. When β = 0, then R1 = 0 and R2 = 1 − α. When β = 1/2, we have R1 = 1 − α1 and R2 = 0.
The capacity region is sketched in Figure 15.13.
22. Stereo. The sum and the difference of the right and left ear signals are to be individually compressed for a common receiver. Let Z 1 be Bernoulli (p1 ) and Z2 be Bernoulli
(p2 ) and suppose Z1 and Z2 are independent. Let X = Z1 + Z2 , and Y = Z1 − Z2 .
(a) What is the Slepian Wolf rate region of achievable (R X , RY ) ?
[Block diagram: X is described at rate RX and Y at rate RY; a common decoder outputs (X̂, Ŷ).]
(b) Is this larger or smaller than the rate region of (R Z1 , RZ2 ) ? Why?
[Block diagram: Z1 is described at rate RZ1 and Z2 at rate RZ2; a common decoder outputs (Ẑ1, Ẑ2).]
There is a simple way to do this part.
Solution: Stereo.
The joint distribution of X and Y is shown in the following table:
Z1   Z2   X    Y    probability
 0    0   0    0    (1 − p1)(1 − p2)
 0    1   1   −1    (1 − p1)p2
 1    0   1    1    p1(1 − p2)
 1    1   2    0    p1 p2
and hence we can calculate
H(X) = H(p1p2, p1 + p2 − 2p1p2, (1 − p1)(1 − p2)) (15.190)
H(Y) = H(p1p2 + (1 − p1)(1 − p2), p1 − p1p2, p2 − p1p2) (15.191)
and
H(X, Y) = H(Z1, Z2) = H(p1) + H(p2) (15.192)
and therefore
H(X|Y) = H(p1) + H(p2) − H(p1p2 + (1 − p1)(1 − p2), p1 − p1p2, p2 − p1p2) (15.193)
H(Y|X) = H(p1) + H(p2) − H(p1p2, p1 + p2 − 2p1p2, (1 − p1)(1 − p2)) (15.194)
The Slepian-Wolf region in this case is
R1 ≥ H(X|Y) = H(p1) + H(p2) − H(p1p2 + (1 − p1)(1 − p2), p1 − p1p2, p2 − p1p2) (15.195)
R2 ≥ H(Y|X) = H(p1) + H(p2) − H(p1p2, p1 + p2 − 2p1p2, (1 − p1)(1 − p2)) (15.196)
R1 + R2 ≥ H(p1) + H(p2) (15.197)
(b) The Slepian-Wolf region for (Z1, Z2) is
R1 ≥ H(Z1|Z2) = H(p1) (15.198)
R2 ≥ H(Z2|Z1) = H(p2) (15.199)
R1 + R2 ≥ H(Z1, Z2) = H(p1) + H(p2) (15.200)
which is a rectangular region.
The minimum sum of rates is the same in both cases, since if we knew both X and
Y , we could find Z1 and Z2 and vice versa. However, the region in part (a) is usually
pentagonal in shape, and is larger than the region in (b).
24. Multiplicative multiple access channel. Find and sketch the capacity region of
the multiplicative multiple access channel
[Diagram: X1 and X2 are multiplied to give the output Y,]
with X1 ∈ {0, 1} , X2 ∈ {1, 2, 3} , and Y = X1 X2 .
Solution: Multiplicative multiple access channel.
Since Y = X1X2, if X1 = 0, then Y = 0 and we receive no information about X2. When X1 = 1, Y = X2, and we can decode X2 perfectly; thus we can achieve the rate pair R1 = 0, R2 = log 3.
Let α be the probability that X1 = 1. By symmetry, X2 should have a uniform distribution on {1, 2, 3}. For the capacity region of the multiple access channel,
I(X1; Y|X2) = H(X1|X2) − H(X1|Y, X2) = H(X1) = H(α) (15.201)
I(X2; Y|X1) = H(Y|X1) = αH(X2) = α log 3 (15.202)
I(X1, X2; Y) = H(Y) = H(1 − α, α/3, α/3, α/3) = H(α) + α log 3 (15.203)
Thus the rate region is characterized by the equations
R1 ≤ H(α) (15.204)
R2 ≤ α log 3 (15.205)
where α varies from 0 to 1.
The maximum value of R1 occurs for α = 1/2. The maximum value of the sum of the rates occurs (by calculus) at α = 3/4.
25. Distributed data compression. Let Z 1 , Z2 , Z3 be independent Bernoulli (p) . Find
the Slepian-Wolf rate region for the description of (X 1 , X2 , X3 ) where
X1 = Z1
X2 = Z1 + Z2
X3 = Z1 + Z2 + Z3 .
[Block diagram: X1, X2, and X3 are separately described to a common decoder, which outputs (X̂1, X̂2, X̂3).]
Solution: Distributed data compression.
To establish the rate region, appeal to Theorem 14.4.2 in the text, which generalizes
the case with two encoders. The inequalities defining the rate region are given by
R(S) > H(X(S)|X(S^c))
for all S ⊆ {1, 2, 3}, where R(S) = Σ_{i∈S} Ri.
The rest is calculating entropies H(X(S)|X(S c )) for each S . We have
H1 = H(X1 ) = H(Z1 ) = H(p),
H2 = H(X2 ) = H(Z1 + Z2 ) = H(p2 , 2p(1 − p), (1 − p)2 ),
H3 = H(X3 ) = H(Z1 + Z2 + Z3 )
= H(p3 , 3p2 (1 − p), 3p(1 − p)2 , (1 − p)3 ),
H12 = H(X1 , X2 ) = H(Z1 , Z2 ) = 2H(p),
H13 = H(X1 , X3 ) = H(X1 ) + H(X3 |X1 ) = H(X1 ) + H(Z2 + Z3 )
= H(p2 , 2p(1 − p), (1 − p)2 ) + H(p),
H23 = H(X2 , X3 ) = H(X2 ) + H(X3 |X2 ) = H(X2 ) + H(Z3 )
= H(p2 , 2p(1 − p), (1 − p)2 ) + H(p),
and
H123 = H(X1 , X2 , X3 ) = H(Z1 , Z2 , Z3 ) = 3H(p).
Using the above identities and chain rule, we obtain the rate region as
R1 > H(X1 |X2 , X3 ) = H123 − H23
= 2H(p) − H(p2 , 2p(1 − p), (1 − p)2 ) = 2p(1 − p),
R2 > H(X2 |X1 , X3 ) = H123 − H13 = 2p(1 − p),
R3 > H(X3 |X1 , X2 ) = H123 − H12 = H(p),
R1 + R2 > H(X1 , X2 |X3 ) = H123 − H3
= 3H(p) − H(p3 , 3p2 (1 − p), 3p(1 − p)2 , (1 − p)3 ) = 3p(1 − p) log(3),
R1 + R3 > H(X1 , X3 |X2 ) = H123 − H2
= 3H(p) − H(p2 , 2p(1 − p), (1 − p)2 ) = H(p) + 2p(1 − p),
R2 + R3 > H(X2 , X3 |X1 ) = H123 − H1 = 2H(p),
and
R1 + R2 + R3 > H123 = 3H(p).
(Simplifications are contributed by KBS.)
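The seven bounds are easy to evaluate numerically. The following is a small sketch (not part of the original solution) that computes them for an example value of p, chosen arbitrarily; numpy is assumed to be available.

import numpy as np

def H(p):
    # Entropy in bits of a probability vector, ignoring zero entries.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p = 0.3                     # example value, chosen arbitrarily
q = 1 - p
Hp = H([p, q])                          # H(X1), and H12 = 2*Hp, H123 = 3*Hp
H2 = H([p**2, 2*p*q, q**2])             # H(X2), and H13 = H23 = H2 + Hp
H3 = H([p**3, 3*p**2*q, 3*p*q**2, q**3])  # H(X3)

bounds = {
    "R1":       3*Hp - (H2 + Hp),   # H123 - H23 = 2p(1-p)
    "R2":       3*Hp - (H2 + Hp),   # H123 - H13
    "R3":       3*Hp - 2*Hp,        # H123 - H12 = H(p)
    "R1+R2":    3*Hp - H3,
    "R1+R3":    3*Hp - H2,
    "R2+R3":    3*Hp - Hp,
    "R1+R2+R3": 3*Hp,
}
print(bounds)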
26. Noiseless multiple access channel Consider the following multiple access channel
with two binary inputs X1 , X2 ∈ {0, 1} and output Y = (X1 , X2 ) .
(a) Find the capacity region. Note that each sender can send at capacity.
(b) Now consider the cooperative capacity region, R 1 ≥ 0, R2 ≥ 0, R1 +R2 ≤ maxp(x1 ,x2 ) I(X1 , X2 ; Y ) .
Argue that the throughput R1 + R2 does not increase, but the capacity region
increases.
Solution: Noiseless multiple access channel
(a) Since Y = (X1 , X2 ) , I(X1 ; Y |X2 ) = H(X1 |X2 ) = H(X1 ) ≤ 1 , and I(X1 , X2 ; Y ) =
H(X1 , X2 ) ≤ 2 , and hence the capacity region of the MAC becomes R 1 ≤ 1 ,
R2 ≤ 1 , R1 + R2 ≤ 2 .
(b) The cooperative capacity region is R 1 + R2 ≤ maxp(x1 ,x2 ) I(X1 , X2 ; Y ) = 2 . Thus,
the cooperative capacity has the same sum of rates, but with cooperation, one of
the senders could send 2 bits (while the other rate is 0). Thus the capacity region
increases from the square ( R1 ≤ 1 , R2 ≤ 1 ) to the triangle R1 + R2 ≤ 2 .
27. Infinite bandwidth multiple access channel Find the capacity region for the Gaussian multiple access channel with infinite bandwidth. Argue that all senders can send
at their individual capacities, i.e., infinite bandwidth eliminates interference.
Solution: Infinite bandwidth multiple access channel
The capacity of a Gaussian multiple access channel with bandwidth W is given by the
following rate region
R1 ≤ W log(1 + P1/(NW)) (15.206)
R2 ≤ W log(1 + P2/(NW)) (15.207)
R1 + R2 ≤ W log(1 + (P1 + P2)/(NW)) (15.208)
A heuristic argument to prove this follows from the single-user Gaussian channel capacity with bandwidth W combined with “onion-peeling” and timesharing.
As W → ∞, these bounds reduce to
R1 ≤ P1/N (15.209)
R2 ≤ P2/N (15.210)
R1 + R2 ≤ (P1 + P2)/N (15.211)
which is a rectangular region corresponding to no interference between the two senders.
28. A multiple access identity.
Let C(x) = (1/2) log(1 + x) denote the channel capacity of a Gaussian channel with signal to noise ratio x. Show
C(P1/N) + C(P2/(P1 + N)) = C((P1 + P2)/N).
This suggests that 2 independent users can send information as well as if they had pooled their power.
Solution: A multiple access identity.
C((P1 + P2)/N) = (1/2) log(1 + (P1 + P2)/N) (15.212)
               = (1/2) log((N + P1 + P2)/N) (15.213)
               = (1/2) log( ((N + P1 + P2)/(N + P1)) · ((N + P1)/N) ) (15.214)
               = (1/2) log((N + P1 + P2)/(N + P1)) + (1/2) log((N + P1)/N) (15.215)
               = C(P2/(P1 + N)) + C(P1/N). (15.216)
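A quick numerical check of the identity (not part of the original solution) over a few random power and noise values; the random draws are arbitrary and numpy is assumed to be available.

import numpy as np

def C(x):
    return 0.5 * np.log2(1 + x)

rng = np.random.default_rng(0)
for _ in range(5):
    P1, P2, N = rng.uniform(0.1, 10, size=3)
    lhs = C(P1 / N) + C(P2 / (P1 + N))
    rhs = C((P1 + P2) / N)
    print(np.isclose(lhs, rhs))   # True for every draw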
29. Frequency Division Multiple Access (FDMA). Maximize the throughput R1 + R2 = W1 log(1 + P1/(NW1)) + (W − W1) log(1 + P2/(N(W − W1))) over W1 to show that bandwidth should be proportional to transmitted power for FDMA.
Solution: Frequency Division Multiple Access (FDMA).
Allocating bandwidth W1 and W2 = W − W1 to the two senders, we can achieve the
following rates
R1 = W1 log(1 + P1/(NW1)), (15.217)
R2 = W2 log(1 + P2/(NW2)). (15.218)
To maximize the sum of the rates, we write
R = R1 + R2 = W1 log(1 + P1/(NW1)) + (W − W1) log(1 + P2/(N(W − W1))), (15.219)
and differentiating with respect to W 1 , we obtain
log(1 + P1/(NW1)) − (W1 · P1/(NW1^2)) / (1 + P1/(NW1)) − log(1 + P2/(N(W − W1))) + ((W − W1) · P2/(N(W − W1)^2)) / (1 + P2/(N(W − W1))) = 0. (15.220)
Instead of solving this equation, we can verify that if we set
W1 = (P1/(P1 + P2)) W, (15.221)
so that
P1/(NW1) = P2/(NW2) = (P1 + P2)/(NW), (15.222)
then (15.220) is satisfied, so using bandwidth proportional to the power optimizes the total rate for Frequency Division Multiple Access.
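A small numerical check (not part of the original solution) that a one-dimensional optimizer lands at W1 = P1/(P1 + P2) W; the parameter values are arbitrary, and numpy plus scipy are assumed to be available.

import numpy as np
from scipy.optimize import minimize_scalar

P1, P2, N, W = 3.0, 1.0, 1.0, 1.0   # example values, chosen arbitrarily

def neg_throughput(W1):
    W2 = W - W1
    return -(W1*np.log2(1 + P1/(N*W1)) + W2*np.log2(1 + P2/(N*W2)))

res = minimize_scalar(neg_throughput, bounds=(1e-6, W - 1e-6), method="bounded")
print(res.x, P1/(P1+P2)*W)   # the optimizer lands near W1 = 0.75 = P1/(P1+P2)*W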
30. Trilingual speaker broadcast channel
A speaker of Dutch, Spanish and French wishes to communicate simultaneously to three
people: D, S, and F . D knows only Dutch, but can distinguish when a Spanish word
is being spoken as distinguished from a French word, similarly for the other two, who
know only Spanish and French respectively, but can distinguish when a foreign word is
spoken and which language is being spoken.
Suppose each language, Dutch, Spanish, and French, has M words: M words of Dutch,
M words of French, and M words of Spanish.
(a) What is the maximum rate at which the trilingual speaker can speak to D ?
(b) If he speaks to D at the maximum rate, what is the maximum rate he can simultaneously speak to S ?
(c) If he is speaking to D and S at the above joint rate, can he also speak to F at
some positive rate? If so, what is it? If not, why not?
Solution: Trilingual speaker broadcast channel
(a) Speaking Dutch gives M words, and in addition two words for the distinguishability of French and Spanish from Dutch, thus log(M + 2) bits.
(b) Transmitting log M bits for a fraction of 1/(M + 2) of the time gives R =
(log M )/(M + 2) .
(c) Same reasoning as in (b) gives R = (log M )/(M + 2) .
31. Parallel Gaussian channels from a mobile telephone
Assume that a sender X is sending to two fixed base stations.
Assume that the sender sends a signal X that is constrained to have average power P .
Assume that the two base stations receive signals Y 1 and Y2 , where
Y1 = α1 X + Z1
Y2 = α2 X + Z2
where Z1 ∼ N(0, N1), Z2 ∼ N(0, N2), and Z1 and Z2 are independent. We will assume that the α's are constant over a transmitted block.
(a) Assuming that both signals Y1 and Y2 are available at a common decoder Y =
(Y1 , Y2 ), what is the capacity of the channel from the sender to the common
receiver?
(b) If instead the two receivers Y1 and Y2 each independently decode their signals,
this becomes a broadcast channel. Let R 1 be the rate to base station 1 and R2
be the rate to base station 2. Find the capacity region of this channel.
Solution: Parallel Gaussian channels from a mobile telephone
(a) Let Y = [Y1 , Y2 ]T . Obviously,
I(X; Y ) = h(Y1 , Y2 ) − h(Z1 , Z2 )
thus it is clear that the maximizing distribution on X is Gaussian N (0, P ) . Therefore we have
1
h(Y1 , Y2 ) = log 2πe|KY |
2
and consequently, by independence of the noises
C=
1
|KY |
log
2
N1 N2
.
Plugging in |KY | = (1 − α)2 P N1 + α2 P N2 + N1 N2 we have
.
1
(1 − α)2 P
α2 P
C = log 1 +
+
2
N2
N1
/
.
(b) The problem is equivalent to the degraded broadcast channel with
Y1 = X + Z1/α
Y2 = X + Z2/(1 − α).
Thus, the noise is N(0, N1/α^2) and N(0, N2/(1 − α)^2). Without loss of generality assume that N2/(1 − α)^2 > N1/α^2. Then, referring to Example 14.6.6 in Cover and Thomas, the rate region is
R1 < C( θ α^2 P / N1 )
R2 < C( (1 − θ)(1 − α)^2 P / (θ(1 − α)^2 P + N2) ),   0 ≤ θ ≤ 1.
32. Gaussian multiple access.
A group of m users, each with power P, is using a Gaussian multiple access channel at capacity, so that
Σ_{i=1}^m Ri = C(mP/N), (15.223)
where C(x) = (1/2) log(1 + x) and N is the receiver noise power.
A new user of power P0 wishes to join in.
(a) At what rate can he send without disturbing the other users?
(b) What should his power P0 be so that the new user's rate is equal to the combined communication rate C(mP/N) of all the other users?
Solution: Gaussian multiple access.
(a) If the new user can be decoded while treating all the other senders as part of the
noise, then his signal can be subtracted out before decoding the other senders, and
hence will not disturb the rates of the other senders. Therefore if
R0 < (1/2) log(1 + P0/(mP + N)), (15.224)
the new user will not disturb the other senders.
(b) The new user will have a rate equal to the sum of the existing senders' rates if
(1/2) log(1 + P0/(mP + N)) = (1/2) log(1 + mP/N), (15.225)
or
P0 = (mP + N) mP/N. (15.226)
33. Converse for deterministic broadcast channel.
A deterministic broadcast channel is defined by an input X , two outputs, Y 1 and Y2
which are functions of the input X . Thus Y 1 = f1 (X) and Y2 = f2 (X) . Let R1 and
R2 be the rates at which information can be sent to the two receivers. Prove that
R1 ≤ H(Y1) (15.227)
R2 ≤ H(Y2) (15.228)
R1 + R2 ≤ H(Y1, Y2) (15.229)
Solution: Converse for deterministic broadcast channel.
We can derive this bound from single user arguments. The maximum rate that the
sender can send information to receiver 1 is less than
R1 ≤ I(X; Y1 ) = H(Y1 ) − H(Y1 |X) = H(Y1 )
(15.230)
since the channel is deterministic and therefore H(Y 1 |X) = H(Y2 |X) = 0 . Similarly,
R2 ≤ H(Y2 ) .
Also, if the receivers cooperated with each other, the capacity
R1 + R2 ≤ I(X; Y1 , Y2 ) = H(Y1 , Y2 )
(15.231)
since the sum of rates to the two receivers without cooperation cannot be greater than
the single user capacity of a channel from X to (Y 1 , Y2 ) .
34. Multiple access channel
Consider the multiple access channel Y = X 1 +X2 (mod 4), where X1 ∈ {0, 1, 2, 3}, X2 ∈
{0, 1} .
(a) Find the capacity region (R1 , R2 ) .
(b) What is the maximum throughput R1 + R2 ?
Solution: Multiple access channel
(a) The MAC capacity region is given by the standard set of equations which reduce
as follows since there is no noise:
R1 < I(X1 ; Y |X2 ) = H(Y |X2 ) − H(Y |X1 , X2 ) = H(Y |X2 ) = H(X1 )
R2 < I(X2 ; Y |X1 ) = H(Y |X1 ) − H(Y |X1 , X2 ) = H(Y |X1 ) = H(X2 )
R1 + R2 < I(X1 , X2 ; Y ) = H(Y ) − H(Y |X1 , X2 ) = H(Y )
Since entropy is maximized under a uniform distribution over the finite alphabet, R1 < H(X1) ≤ 2, R2 < H(X2) ≤ 1, and R1 + R2 < H(Y) ≤ 2. Further,
if X1 ∼ unif (0, 1, 2, 3) , and X2 ∼ unif (0, 1) then Y ∼ unif (0, 1, 2, 3) , so the
upper bounds are achieved. This gives the capacity region in Figure 15.14.
(b) The throughput of R1 + R2 ≤ 2 by the third constraint above, and is achieved at
many points including when R1 = 2 and R2 = 0 . So the maximum throughput
is R1 + R2 = 2 .
35. Distributed source compression
Let
Z1 = 1 with probability p and 0 with probability q,   Z2 = 1 with probability p and 0 with probability q,
and let U = Z1 Z2 , V = Z1 + Z2 . Assume Z1 and Z2 are independent. This induces a
joint distribution on (U, V ) . Let (Ui , Vi ) be iid according to this distribution. Sender
1 describes U n at rate R1 , and sender 2 describes V n at rate R2 .
(a) Find the Slepian-Wolf rate region for recovering (U^n, V^n) at the receiver.
(b) What is the residual uncertainty (conditional entropy) that the receiver has about (Z1^n, Z2^n)?
Solution: Distributed source compression
(a) Below is a table listing the possible results and their associated probabilities.
Z1   Z2   U   V   Prob
 0    0   0   0   q^2
 0    1   0   1   pq
 1    0   0   1   pq
 1    1   1   2   p^2
Evaluating the three standard inequalities for the Slepian-Wolf rate region gives the following:
R1 > H(U|V) = 0
R2 > H(V|U) = Pr(U = 0) H(V|U = 0) = (1 − p^2) H(q^2/(1 − p^2))
R1 + R2 > H(U, V) = H(V) + H(U|V) = H(V) = H(q^2, 2pq, p^2)
where the first equation holds because U is a deterministic function of V. The second equation comes from the definition of conditional entropy and noting that H(V|U = 1) = 0. The Slepian-Wolf rate region is depicted in Figure 15.15.
(b) The residual uncertainty is given by H(Z1^n, Z2^n | U^n, V^n) = nH(Z1, Z2|U, V) because everything is i.i.d. Since there is only uncertainty in (Z1, Z2) when (U = 0, V = 1), the residual uncertainty simplifies to n Pr(U = 0, V = 1) H(Z1, Z2|U = 0, V = 1) = n(2pq) H(1/2) = 2pqn.
36. MAC capacity with costs
$
The cost of using symbol x is r(x) . The cost of a codeword x n is r(xn ) = n1 ni=1 r(xi ) .
$
A (2nR , n) codebook satisfies cost constraint r if n1 ni=1 r(xi (w)) ≤ r, for all w ∈ 2nR .
(a) Find an expression for the capacity C(r) of a discrete memoryless channel with
cost constraint r .
(b) Find an expression for the multiple access channel capacity region for (X 1 ×
X2 , p(y|x1 , x2 ), Y) if sender X1 has cost constraint r1 and sender X2 has cost
constraint r2 .
(c) Prove the converse for (b).
Solution: MAC capacity with costs
(a) The capacity of a discrete memoryless channel with cost constraint r is given by
C(r) = max_{p(x): Σ_x p(x)r(x) ≤ r} I(X; Y). (15.232)
The achievability follows immediately from Shannon’s ‘average over random codebooks’ method and joint typicality decoding. (See Section 9.1 for the power constraint example.)
For the converse, we need to establish the following simple properties of the capacity-cost function C(r).
Theorem 15.0.4 The capacity cost function C(r) given in (15.232) is a nondecreasing concave function of r .
Remark: These properties of the capacity cost function C(r) exactly parallel
those of the rate distortion function R(D) . (See Lemma 10.4.1 of the text.)
Proof: The monotonicity is a direct consequence of the definition of C(r) . To
prove the concavity, consider two points (C 1 , r1 ) and (C2 , r2 ) which lie on the
capacity cost curve. Let the distributions that achieve these pairs be p 1 (x) and
p2 (x) . Consider the distribution p λ = λp1 + (1 − λ)p2 . Since the cost is a linear
function of the distribution, we have r(p λ ) = λr1 + (1 − λ)r2 . Mutual information,
on the other hand, is a concave function of the input distribution (Theorem 2.7.4)
and hence
C(λr1 + (1 − λ)r2) = C(r(pλ)) (15.233)
                   ≥ I_{pλ}(X; Y) (15.234)
                   ≥ λ I_{p1}(X; Y) + (1 − λ) I_{p2}(X; Y) (15.235)
                   = λ C(r1) + (1 − λ) C(r2), (15.236)
!
nR
Now we are ready to prove the converse. Consider any (2 , n) code that satisfies
the cost constraint
(1/n) Σ_{i=1}^n r(xi(w)) ≤ r
for w = 1, 2, ..., 2^{nR}, which in turn implies that
(1/n) Σ_{i=1}^n E(r(Xi)) ≤ r, (15.237)
where the expectation is with respect to the uniformly drawn message index W. As in the case without the cost constraint, we begin with Fano's inequality to obtain the following chain of inequalities:
nR = H(W) (15.238)
   ≤ I(W; Y^n) + nεn (15.239)
   ≤ I(X^n; Y^n) + nεn (15.240)
   ≤ H(Y^n) − H(Y^n|X^n) + nεn (15.241)
   ≤ Σ_{i=1}^n H(Yi) − H(Yi|X^n, Y^{i−1}) + nεn (15.242)
   = Σ_{i=1}^n H(Yi) − H(Yi|Xi) + nεn (15.243)
   = Σ_{i=1}^n I(Xi; Yi) + nεn (15.244)
   ≤(a) Σ_{i=1}^n C(E(r(Xi))) + nεn (15.245)
   = n · (1/n) Σ_{i=1}^n C(E(r(Xi))) + nεn (15.246)
   ≤(b) n C( (1/n) Σ_{i=1}^n E(r(Xi)) ) + nεn (15.247)
   ≤(c) n C(r) + nεn, (15.248)
where
(a) follows from the definition of the capacity cost function,
(b) from the concavity of the capacity cost function and Jensen’s inequality, and
(c) from Eq. (15.237) and the fact that C(r) is non-decreasing in r .
Note that we cannot jump from (15.244) to (15.248) since E(r(X i )) may be greater
than r for some i .
(b) The capacity region under cost constraints r 1 and r2 is given by the closure of
the set of all (R1 , R2 ) pairs satisfying
R1 < I(X1 ; Y |X2 , Q),
R2 < I(X2 ; Y |X1 , Q),
R1 + R2 < I(X1 , X2 ; Y |Q)
for some choice of the joint distribution p(q)p(x1|q)p(x2|q)p(y|x1, x2) with
Σ_{x1} p(x1) r1(x1) ≤ r1,
Σ_{x2} p(x2) r2(x2) ≤ r2,
and |Q| ≤ 4.
(c) Again, the achievability proof is an easy extension from the case without cost constraints. For the converse, consider any sequence of ((2^{nR1}, 2^{nR2}), n) codes with Pe^(n) → 0 satisfying
(1/n) Σ_i r1(x1i(w1)) ≤ r1,
(1/n) Σ_i r2(x2i(w2)) ≤ r2,
for all w1 = 1, 2, ..., 2^{nR1}, w2 = 1, 2, ..., 2^{nR2}. By taking expectation with respect to the random message index pair (W1, W2), we get
(1/n) Σ_i E(r1(X1i)) ≤ r1 and (1/n) Σ_i E(r2(X2i)) ≤ r2. (15.249)
By starting from Fano’s inequality and taking the exact same steps as in the
converse proof for the MAC without constraints (see Section 14.3.4 of the text),
we obtain
nR1 ≤ Σ_{i=1}^n I(X1i; Yi|X2i) + nε1n = nI(X1Q; YQ|X2Q, Q) + nε1n,
nR2 ≤ Σ_{i=1}^n I(X2i; Yi|X1i) + nε2n = nI(X2Q; YQ|X1Q, Q) + nε2n,
n(R1 + R2) ≤ Σ_{i=1}^n I(X1i, X2i; Yi) + nεn = nI(X1Q, X2Q; YQ|Q) + nεn,
where the random variable Q is uniform over {1, 2, ..., n} and independent of (X1i, X2i, Yi) for all i.
Now define X1 ≜ X_{1Q}, X2 ≜ X_{2Q}, and Y ≜ Y_Q. It is easy to check that (Q, X1, X2, Y) have a joint distribution of the form p(q) p(x1|q) p(x2|q) p(y|x1, x2). Moreover, from Eq. (15.249),
Σ_{x1} Pr(X1 = x1) r1(x1) = Σ_{x1} Pr(X_{1Q} = x1) r1(x1)
                          = Σ_{x1} Σ_{i=1}^n Pr(X_{1Q} = x1 | Q = i) Pr(Q = i) r1(x1)
                          = Σ_{x1} Σ_{i=1}^n (1/n) Pr(X_{1i} = x1) r1(x1)
                          = (1/n) Σ_{i=1}^n Σ_{x1} Pr(X_{1i} = x1) r1(x1)
                          = (1/n) Σ_{i=1}^n E(r1(X_{1i}))
                          ≤ r1,
and similarly,

Σ_{x2} Pr(X2 = x2) r2(x2) ≤ r2.
Therefore, we have shown that any sequence of ((2^{nR1}, 2^{nR2}), n) codes satisfying the cost constraints with P_e^{(n)} → 0 must have rates satisfying

R1 < I(X1; Y | X2, Q),
R2 < I(X2; Y | X1, Q),
R1 + R2 < I(X1, X2; Y | Q)

for some choice of the joint distribution p(q) p(x1|q) p(x2|q) p(y|x1, x2) with

Σ_{x1} p(x1) r1(x1) ≤ r1   and   Σ_{x2} p(x2) r2(x2) ≤ r2.
Finally, from Theorem 14.3.4, the region is unchanged if we limit the cardinality
of Q to 4, which completes the proof of the converse.
Note that, compared to the single user case in part (a), the converse for the MAC
with cost constraints is rather straightforward. Here the time sharing random
variable Q saves the trouble of dealing with costs at each time index i .
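For intuition, here is a small numerical sketch (not from the text) that evaluates the three mutual-information bounds of part (b) for one fixed product input distribution, using a hypothetical noiseless binary adder MAC, Y = X1 + X2, and illustrative linear costs r1(x1) = x1, r2(x2) = x2; the channel, the costs, and the degenerate choice of Q are assumptions made only for this example.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mac_bounds(p1, p2):
    """Bounds (I(X1;Y|X2), I(X2;Y|X1), I(X1,X2;Y)) for the adder MAC
    with independent Bernoulli(p1), Bernoulli(p2) inputs."""
    px1 = np.array([1 - p1, p1])
    px2 = np.array([1 - p2, p2])
    py = np.zeros(3)                       # Y = X1 + X2 takes values 0, 1, 2
    for x1 in (0, 1):
        for x2 in (0, 1):
            py[x1 + x2] += px1[x1] * px2[x2]
    # Y is a deterministic function of (X1, X2) and the inputs are independent,
    # so I(X1;Y|X2) = H(X1), I(X2;Y|X1) = H(X2), I(X1,X2;Y) = H(Y).
    return entropy(px1), entropy(px2), entropy(py)

p1, p2 = 0.5, 0.5                          # example input probabilities
cost1, cost2 = p1, p2                      # E r1(X1) and E r2(X2) for these inputs
I1, I2, I12 = mac_bounds(p1, p2)
print(f"cost usage: E r1 = {cost1}, E r2 = {cost2}")
print(f"R1 < {I1:.3f}, R2 < {I2:.3f}, R1 + R2 < {I12:.3f} bits")
```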
[Figures 15.7 through 15.15 accompany the preceding Chapter 15 solutions; the graphics are not reproduced here, only their captions:]
Figure 15.7: Capacity region of degraded broadcast channel
Figure 15.8: Broadcast channel with a binary symmetric channel and an erasure channel
Figure 15.9: Broadcast channel with auxiliary random variable
Figure 15.10: Capacity region of the broadcast channel
Figure 15.11: Rate regions for X1 ∈ {2, 4} and X1 ∈ {1, 2}
Figure 15.12: Broadcast channel with auxiliary random variable
Figure 15.13: Capacity region of the broadcast channel
Figure 15.14: MAC Capacity Region
Figure 15.15: SW Rate Region
Chapter 16
Information Theory and Portfolio Theory
1. Growth rate. Let

X = (1, a)    with probability 1/2,
    (1, 1/a)  with probability 1/2,

where a > 1. This vector X represents a stock market vector of cash vs. a hot stock.
Let

W(b, F) = E log b^t X,   and   W* = max_b W(b, F)

be the growth rate.
(a) Find the log optimal portfolio b∗ .
(b) Find the growth rate W ∗ .
(c) Find the asymptotic behavior of

S_n = ∏_{i=1}^n b^t X_i

for all b.
Solution: Doubling Rate.
(a) Let the portfolio be (1 − b2, b2). Then

W(b, F) = (1/2) ln(1 − b2 + a b2) + (1/2) ln(1 − b2 + b2/a).   (16.1)
Differentiating to find the maximum, we have

dW/db2 = (1/2) (a − 1)/(1 − b2 + a b2) − (1/2) (1 − 1/a)/(1 − b2 + b2/a) = 0.   (16.2)

Solving this equation, we get b2* = 1/2. Hence the log optimal portfolio b* is (1/2, 1/2).
(b) The optimal doubling rate W* = W(b*, F) is

W* = (1/2) ln(1/2 + a/2) + (1/2) ln(1/2 + 1/(2a))   (16.3)
   = (1/2) ln((1 + a)²/a) − ln 2.   (16.4)
(c) The asymptotic behavior of an infinite product of i.i.d. terms is essentially determined by the expected log of the individual terms.
S_n = ∏_{i=1}^n b^t X_i   (16.5)
    = e^{n · (1/n) Σ_{i=1}^n ln b^t X_i}   (16.6)
    → e^{n E ln b^t X}   (16.7)
    = e^{n W(b, F)},   (16.8)

where the convergence is with probability 1 by the strong law of large numbers. We can substitute for W(b, F) from (16.1).
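As a sanity check (not part of the original solution), the following Monte Carlo sketch draws n i.i.d. market vectors with an assumed value a = 2 and confirms that (1/n) log S_n for the log optimal portfolio concentrates near W* from (16.4).

```python
import numpy as np

rng = np.random.default_rng(0)
a, n = 2.0, 100_000                       # a > 1 is an assumed example value
b = np.array([0.5, 0.5])                  # log optimal portfolio b*

# Each day: (1, a) or (1, 1/a), each with probability 1/2.
hot = rng.integers(0, 2, size=n)
X = np.column_stack([np.ones(n), np.where(hot == 1, a, 1.0 / a)])

log_Sn = np.log(X @ b).sum()              # log of prod_i b^t X_i
W_star = 0.5 * np.log((1 + a) ** 2 / a) - np.log(2)   # Eq. (16.4), in nats
print(f"(1/n) log S_n = {log_Sn / n:.5f}   vs   W* = {W_star:.5f}")
```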
2. Side information. Suppose, in the previous problem, that

Y = 1,  if (X1, X2) ≥ (1, 1),
    0,  if (X1, X2) ≤ (1, 1).
Let the portfolio b depend on Y. Find the new growth rate W ∗∗ and verify that
∆W = W ∗∗ − W ∗ satisfies
∆W ≤ I(X; Y ).
Solution: Side Information.
In the previous problem, if we knew Y so that we knew which of the two possible stock
vectors would occur, then the optimum strategy is clear. In the case when Y = 1 , we
should put all our money in the second stock to maximize the conditional expected log
return. Similarly, when Y = 0 , we should put all the money in the first stock. The
average expected log return is
W*(Y) = (1/2) ln a + (1/2) ln 1 = (1/2) ln a.   (16.9)
The increase in doubling rate due to the side information is

∆W = W*(Y) − W*   (16.10)
   = (1/2) ln a − (1/2) ln((1 + a)²/a) + ln 2   (16.11)
   = ln(a/(1 + a)) + ln 2   (16.12)
   ≤ ln 2,   (16.13)

since a/(1 + a) < 1. Also in this case,
I(X; Y ) = H(Y ) − H(Y |X) = H(Y ) = ln 2,
(16.14)
since Y is a function of X and uniformly distributed on {0, 1} .
We can hence verify that
∆W ≤ I(X; Y ).
(16.15)
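A quick numerical check (not in the original solution, with a = 3 chosen purely as an example) of the inequality ∆W ≤ I(X; Y):

```python
import numpy as np

a = 3.0                                         # assumed example value, a > 1
W_star = 0.5 * np.log((1 + a) ** 2 / a) - np.log(2)   # growth rate without Y
W_star_Y = 0.5 * np.log(a)                      # growth rate with side information
delta_W = W_star_Y - W_star                     # = ln(a/(1+a)) + ln 2
I_XY = np.log(2)                                # Y is a fair binary function of X
print(f"Delta W = {delta_W:.4f} nats, I(X;Y) = {I_XY:.4f} nats,",
      "inequality holds:", delta_W <= I_XY)
```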
3. Stock dominance. Consider a stock market vector
X = (X1 , X2 ).
Suppose X1 = 2 with probability 1.
(a) Find necessary and sufficient conditions on the distribution of stock X 2 such that
the log optimal portfolio b∗ invests all the wealth in stock X2 , i.e., b∗ = (0, 1) .
(b) Argue for any distribution on X2 that the growth rate satisfies W ∗ ≥ 1 .
Solution: Stock Market We have a stock market vector
X = (X1 , X2 )
with X1 = 2 .
(a) The Kuhn-Tucker conditions for the portfolio b = (0, 1) to be optimal are that

E[X2/X2] = 1   (16.16)

and

E[2/X2] ≤ 1.   (16.17)

The first is trivial, so the second condition is the only condition on the distribution for the optimal portfolio to be (0, 1).
(b) Since the optimal portfolio does better than the (1,0) portfolio
W ∗ ≥ W (b) = W (1, 0) = log 2 = 1.
(16.18)
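A small sketch (not from the text) that checks condition (16.17) for an assumed distribution of X2, chosen only for illustration:

```python
import numpy as np

# Illustrative X2: value 8 with probability 0.9 and value 1 with probability 0.1.
vals = np.array([8.0, 1.0])
probs = np.array([0.9, 0.1])

print("E[2/X2]  =", float(probs @ (2.0 / vals)))    # condition (16.17): <= 1 here
print("E[X2/X2] =", float(probs @ (vals / vals)))   # condition (16.16): always 1
# Since E[2/X2] <= 1, the log optimal portfolio for this X2 is b* = (0, 1).
```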
396
Information Theory and Portfolio Theory
4. Including experts and mutual funds. Let X ∼ F(x), x ∈ R_+^m, be the vector of price relatives for a stock market. Suppose an “expert” suggests a portfolio b. This would result in a wealth factor b^t X. We add this to the stock alternatives to form X̃ = (X1, X2, . . . , Xm, b^t X). Show that the new growth rate

W̃* = max_{b1,...,bm,bm+1} ∫ ln(b^t x̃) dF(x̃)   (16.19)

is equal to the old growth rate

W* = max_{b1,...,bm} ∫ ln(b^t x) dF(x).   (16.20)
Solution: Including experts and mutual funds.
This problem asks you to show that the existence of a mutual fund does not fundamentally change the market; that is, it asks you to show that you can make as much money
without the presence of the mutual fund as you can make with it. This should be obvious, since, if you thought a particular mutual fund would be a good idea to hold, you
could always invest in its constituent stocks directly in exactly the same proportions as
the mutual fund did.
(a) Outline of Proof
We are asked to compare two quantities, Ŵ ∗ and W ∗ . Ŵ ∗ is the maximum
doubling rate of the “extended” market. That is, it is the maximum achievable
doubling-rate over the set of extended portfolios: those that include investment
in the mutual fund. W ∗ is the maximum doubling rate of the “non-extended”
market. That is, it is the maximum achievable doubling-rate over the set of nonextended portfolios: those without investment in the mutual fund. Our strategy
will be to show that the set of achievable doubling rates in the extended market
is precisely the same as the set of achievable doubling rates in the non-extended
market, and hence that the maximum value on both sets must be the same. In
particular, we need to show that for any extended portfolio b̂ that achieves some
particular doubling rate Ŵ on the extended market, there exists a corresponding non-extended portfolio b that achieves the same doubling rate W = Ŵ on
the non-extended market, and, conversely, that for any non-extended portfolio b
achieving some particular doubling-rate on the non-extended market, we can find
an equivalent extended portfolio b̂ that achieves the same doubling-rate on the
extended market.
(b) Converse: W ∗ ≤ Ŵ ∗
The converse is easy. Let b = (b1 , b2 , . . . , bm ) be any non-extended portfolio. Then
clearly the extended portfolio b̂ = (b, 0) = (b1 , b2 , . . . bm , 0) achieves the same
doubling rate on the extended market. In particular, then, if b ∗ achieves W ∗ on
the non-extended market, then (b∗ , 0) achieves W ∗ on the extended market, and
so the maximum doubling rate on the extended market must be at least as big as
W ∗ , that is: W ∗ ≤ Ŵ ∗
(c) Ŵ ∗ ≤ W ∗
First, some definitions. Let X = (X1 , X2 , . . . , Xm ) be the non-extended stockmarket. Let c = (c1 , c2 , . . . , cm ) be the portfolio that generates the mutual fund,
Xm+1 . Thus, Xm+1 = cT X . Let X̂ = (X1 , X2 , . . . , Xm , Xm+1 ) be the extended
stock-market.
Now consider any extended portfolio b̂ = (b̂1, b̂2, . . . , b̂m, b̂m+1). The doubling rate Ŵ associated with the portfolio b̂ is

Ŵ = E[log(b̂^T X̂)]
  = E[log(b̂1 X1 + b̂2 X2 + . . . + b̂m Xm + b̂m+1 Xm+1)]
  = E[log(b̂1 X1 + b̂2 X2 + . . . + b̂m Xm + b̂m+1 c^T X)]
  = E[log(b̂1 X1 + b̂2 X2 + . . . + b̂m Xm + b̂m+1 (c1 X1 + c2 X2 + . . . + cm Xm))]
  = E[log((b̂1 + b̂m+1 c1) X1 + (b̂2 + b̂m+1 c2) X2 + . . . + (b̂m + b̂m+1 cm) Xm)].
But this last expression can be re-expressed as the doubling rate W associated with the non-extended portfolio b, where b_i = b̂_i + b̂_{m+1} c_i. In particular, then, when b̂ = b̂* is the portfolio achieving the optimal doubling rate Ŵ*, there is an associated portfolio b on the non-extended market, given by b_i = b̂*_i + b̂*_{m+1} c_i, that also achieves doubling rate Ŵ*. Hence, Ŵ* ≤ W*.
Combining the above two inequalities, we must conclude that Ŵ ∗ = W ∗ .
5. Growth rate for symmetric distribution. Consider a stock vector X ∼ F (x), X ∈
Rm , X ≥ 0 , where the component stocks are exchangeable. Thus F (x 1 , x2 , . . . , xm ) =
F (xσ(1) , xσ(2) , . . . , xσ(m) ), for all permutations σ .
(a) Find the portfolio b∗ optimizing the growth rate and establish its optimality.
Now assume that X has been normalized so that (1/m) Σ_{i=1}^m X_i = 1, and F is symmetric as before.
(b) Again assuming X to be normalized, show that all symmetric distributions F
have the same growth rate against b∗ .
(c) Find this growth rate.
Solution: Growth rate for symmetric distribution.
(a) By the assumption of exchangeability, putting an equal amount in each stock is
clearly the best strategy. In fact,
E[ X_i / (b*^T X) ] = E[ X_i / (m^{-1} Σ_j X_j) ] = 1

(by exchangeability each ratio X_i / Σ_j X_j has the same mean, and the means sum to 1, so each equals 1/m), so b* satisfies the Kuhn-Tucker conditions.
(b)+(c) Putting an equal amount in each stock we get

E log b*^T X = E log ( (1/m) Σ_{i=1}^m X_i ) = E log 1 = 0.

Thus the growth rate is 0.
6. Convexity. We are interested in the set of stock market densities that yield the same optimal portfolio. Let P_{b0} be the set of all probability densities on R_+^m for which b0 is optimal. Thus P_{b0} = {p(x) : ∫ ln(b^t x) p(x) dx is maximized by b = b0}. Show that P_{b0} is a convex set. It may be helpful to use Theorem 16.2.2.
Solution: Convexity.
Let f1 and f2 be two stock-market densities in the set P_{b0}. Since both f1 and f2 are in this set, then, by definition, b0 is the optimal constant-rebalanced portfolio when the stock market vector is drawn according to f1, and it is also the optimal constant-rebalanced portfolio when the stock market vector is drawn according to f2.
In order to show that the set Pb0 is convex, we need to show that any arbitrary mixture
distribution, f = λf1 + λ̄f2 , is also in the set; that is, we must show that b 0 is also the
optimal portfolio for f .
We know that W (b, f ) is linear in f . So
W (b, f ) = W (b, λf1 + λ̄f2 )
= λW (b, f1 ) + λ̄W (b, f2 )
But by assumption each of the summands in the last expression is maximized when
b = b0 , so the entire expression is also maximized when b = b 0 . Hence, f is in Pb0
and the set is convex.
7. Short selling. Let

X = (1, 2)    with probability p,
    (1, 1/2)  with probability 1 − p.

Let B = {(b1, b2) : b1 + b2 = 1}. Thus this set of portfolios B does not include the constraint b_i ≥ 0. (This allows short selling.)
(a) Find the log optimal portfolio b∗ (p) .
(b) Relate the growth rate W ∗ (p) to the entropy rate H(p) .
Solution: Short selling.
First, some philosophy. What does it mean to allow negative components in our portfolio vector? Suppose at the beginning of a trading day our current wealth is S . We
want to invest our wealth S according to the portfolio b . If b i is positive, then we
want to own bi S dollars worth of stock i . But if bi is negative, then we want to owe
bi S dollars worth of stock i . This is what selling-short means. It means we sell a stock
we don’t own in exchange for cash, but then we end up owing our broker so many shares
of the stock we sold. Instead of owing money, we owe stock. The difference is that if
the stock goes down in price by the end of the trading day, then we owe less money!
So selling short is equivalent to betting that the stock will go down.
So, this is all well and good, but it seems to me that there may be some problems.
First of all, why do we still insist that the components sum to one? It made a lot of
sense when we interpreted the components, all positive, as fractions of our wealth, but
it makes less sense if we are allowed to borrow money by selling short. Why not have
the components sum to zero instead?
Secondly, if you owe money, then it’s possible for your wealth to be negative. This is
bad for our model because the log of a negative value is undefined. The reason we take
logs in the first place is to turn a product into a sum that converges almost surely. But
we are only justified in taking the logs in the first place if the product is positive, which
it may not be if we allow short-selling.
Now, having gotten all these annoying philosophical worries out of the way, we can
solve the problem quite simply by viewing it just as an unconstrained calculus problem
and not worrying about what it all means.
(a) We’ll represent an arbitrary portfolio as b = (b, 1 − b) . The quantity we’re trying
to maximize is
W (b) = E[log(bT X)]
= E[log(bX1 + (1 − b)X2 )]
1
= p log(b + 2(1 − b)) + (1 − p) log(b + (1 − b))
2
1 1
= p log(b + 2 − 2b) + (1 − p) log(b + − b)
2 2
1 1
= p log(2 − b) + (1 − p) log( + b)
2 2
1
= p log(2 − b) + (1 − p) log + (1 − p) log(1 + b)
2
We solve for the maximum of W(b) by taking the derivative and solving for zero:

dW/db = −p/(2 − b) + (1 − p)/(1 + b) = 0
      ⇒ b = 2 − 3p
      ⇒ b = (2 − 3p, 3p − 1).
(b) This question asks us to relate the growth rate W* to the entropy rate H(p) of
the market. Evidently there is some equality or inequality we should discover, as
is the case with the horse race. Our intuition should tell us that low entropy rates
correspond to high doubling rates and that high entropy rates correspond to low
doubling rates. Quite simply, the more certain we are about what the market is
going to do next (low entropy rate), the more money we should be able to make
in it.
W* = W(2 − 3p)
   = p log((2 − 3p) + 2(3p − 1)) + (1 − p) log((2 − 3p) + (1/2)(3p − 1))
   = p log(2 − 3p + 6p − 2) + (1 − p) log(2 − 3p + (3/2)p − 1/2)
   = p log(3p) + (1 − p) log(3/2 − (3/2)p)
   = p log p + p log 3 + (1 − p) log(3/2) + (1 − p) log(1 − p)
   = −H(p) + p log 3 + (1 − p) log 3 − (1 − p) log 2
   = −H(p) + log 3 − (1 − p) log 2

⇒ W* + H(p) = log 3 − (1 − p) log 2 ≤ log 3.

Hence, we can conclude that W* + H(p) ≤ log 3.
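A numerical sweep (not part of the original solution; logs are taken base 2, so the bound reads W* + H(p) ≤ log2 3 bits) confirming the relation derived above:

```python
import numpy as np

def H(p):                                    # binary entropy in bits
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def W_star(p):                               # growth rate at b = (2 - 3p, 3p - 1)
    b = 2 - 3 * p
    return p * np.log2(b + 2 * (1 - b)) + (1 - p) * np.log2(b + 0.5 * (1 - b))

log3 = np.log2(3)
for p in np.linspace(0.05, 0.95, 7):
    lhs = W_star(p) + H(p)
    rhs = log3 - (1 - p)                     # = log 3 - (1 - p) log 2, in bits
    print(f"p = {p:.2f}   W* + H(p) = {lhs:.4f},   log3 - (1-p) = {rhs:.4f},   bound log3 = {log3:.4f}")
```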
8. Normalizing x. Suppose we define the log optimal portfolio b* to be the portfolio maximizing the relative growth rate

∫ ln [ b^t x / ( (1/m) Σ_{i=1}^m x_i ) ] dF(x1, . . . , xm).

The virtue of the normalization (1/m) Σ X_i, which can be viewed as the wealth associated with a uniform portfolio, is that the relative growth rate is finite, even when the growth rate ∫ ln b^t x dF(x) is not. This matters, for example, if X has a St. Petersburg-like distribution. Thus the log optimal portfolio b* is defined for all distributions F, even those with infinite growth rates W*(F).
(a) Show that if b maximizes ∫ ln(b^t x) dF(x), it also maximizes ∫ ln( b^t x / u^t x ) dF(x), where u = (1/m, 1/m, . . . , 1/m).

(b) Find the log optimal portfolio b* for

X = (2^{2^k + 1}, 2^{2^k})    with probability 2^{−(k+1)},
    (2^{2^k}, 2^{2^k + 1})    with probability 2^{−(k+1)},

where k = 1, 2, . . .
(c) Find EX and W ∗ .
(d) Argue that b* is competitively better than any portfolio b in the sense that Pr{ b^t X > c b*^t X } ≤ 1/c.
Solution: Normalizing x
(a) E[log(b^T X / u^T X)] = E[log b^T X − log u^T X] = E[log b^T X] − E[log u^T X],
where the second quantity in this last expression is just a number that does not
change as the portfolio b changes. So any portfolio that maximizes the first
quantity in the last expression maximizes the entire expression.
(b) Well, you can grunge out all the math here, which is messy but not difficult. But
you can also notice that the symmetry of the values that X can take on demands
that, if there is any optimum solution, it must be at b = (1/2, 1/2). For every value
of the form (a, b) that X can take on, there is a value of the form (b, a) that
X takes on with equal probability, so there is absolutely no bias in the market
between allocating funds to stock 1 vs. stock 2.
Normalizing X by u^t x = (1/2)(2^{2^k + 1} + 2^{2^k}) = (3/2) 2^{2^k}, we obtain

X̂ = (4/3, 2/3)  with probability 2^{−(k+1)},
    (2/3, 4/3)  with probability 2^{−(k+1)}.   (16.21)
Since X̂ only takes on two values, we can sum over k and obtain

X̂ = (4/3, 2/3)  with probability 1/2,
    (2/3, 4/3)  with probability 1/2.   (16.22)
The doubling rate for a portfolio on this distribution is

W(b) = (1/2) log( (4/3) b1 + (2/3)(1 − b1) ) + (1/2) log( (2/3) b1 + (4/3)(1 − b1) ).   (16.23)

Differentiating, setting to zero, and solving gives b = (1/2, 1/2).
(c) It is easy to calculate that

E[X] = Σ_{k=1}^∞ (2^{2^k + 1} + 2^{2^k}) 2^{−(k+1)}   (16.24)
     = Σ_{k=1}^∞ 3 · 2^{2^k − k − 1}   (16.25)
     = ∞   (16.26)
and similarly that

W* = Σ_{k=1}^∞ [ log( (1/2) 2^{2^k + 1} + (1/2) 2^{2^k} ) + log( (1/2) 2^{2^k} + (1/2) 2^{2^k + 1} ) ] 2^{−(k+1)}   (16.27)
   = Σ_{k=1}^∞ 2^{−k} log( (3/2) 2^{2^k} )   (16.28)
   = Σ_{k=1}^∞ 2^{−k} ( 2^k log 2 + log(3/2) )   (16.29)
   = ∞,   (16.30)

for the standard definition of W*. If we use the new definition, then obviously W* = 0, since the maximizing portfolio b* is the uniform portfolio, which is the portfolio by which we are normalizing.
(d) The inequality can be shown by Markov’s inequality and Theorem 16.2.2 as follows
@
∗t
Pr b X > cb X
t
A
= Pr
)
[
bt X
>c
b∗t X
(16.31)
t
≤
≤
E bb∗tXX
c
1
c
(16.32)
(16.33)
and therefore no portfolio exists that almost surely beats b ∗ . Also the probability
that any other portfolio is more than twice the return of b ∗ is less than 12 , etc.
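A small numeric illustration (not in the original solution) of part (b): against the normalized market of Eq. (16.22), the relative growth rate is maximized, at value 0, by b* = (1/2, 1/2).

```python
import numpy as np

# Relative growth rate against the normalized market X-hat of Eq. (16.22):
# X-hat = (4/3, 2/3) or (2/3, 4/3), each with probability 1/2.
def W_rel(b1):
    return 0.5 * np.log(b1 * 4/3 + (1 - b1) * 2/3) + \
           0.5 * np.log(b1 * 2/3 + (1 - b1) * 4/3)

for b1 in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"b1 = {b1:.2f}   relative growth rate = {W_rel(b1):+.4f} nats")
# The maximum, 0, occurs at b1 = 1/2, consistent with b* = (1/2, 1/2).
```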
9. Universal portfolio. We examine the first n = 2 steps of the implementation of
the universal portfolio for m = 2 stocks. Let the stock vectors for days 1 and 2 be
x1 = (1, 1/2) and x2 = (1, 2). Let b = (b, 1 − b) denote a portfolio.

(a) Graph S2(b) = ∏_{i=1}^2 b^t x_i, 0 ≤ b ≤ 1.

(b) Calculate S2* = max_b S2(b).

(c) Argue that log S2(b) is concave in b.

(d) Calculate the (universal) wealth Ŝ2 = ∫_0^1 S2(b) db.

(e) Calculate the universal portfolio at times n = 1 and n = 2:

b̂1 = ∫_0^1 b db,    b̂2(x1) = ∫_0^1 b S1(b) db / ∫_0^1 S1(b) db.

(f) Which of S2(b), S2*, Ŝ2, b̂2 are unchanged if we permute the order of appearance of the stock vector outcomes, i.e., if the sequence is now (1, 2), (1, 1/2)?
Solution: Universal portfolio.
All integrals, unless otherwise stated, are over [0, 1].

(a) S2(b) = (b/2 + 1/2)(2 − b) = 1 + b/2 − b²/2.

(b) Maximizing S2(b), we have S2* = S2(1/2) = 9/8.

(c) S2(b) is concave and log(·) is a monotonic increasing concave function, so log S2(b) is concave as well (check!).

(d) Using (a) we have Ŝ2 = ∫ (1 + b/2 − b²/2) db = 13/12.

(e) Clearly b̂1 = 1/2, and

b̂2(x1) = ∫ b S1(b) db / ∫ S1(b) db
       = ∫ 0.5 b(b + 1) db / ∫ 0.5 (b + 1) db
       = 5/9.

(f) Only b̂2(x1) changes.
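A quick numerical cross-check of these values (not part of the original solution); the integrals over [0, 1] are approximated by Riemann sums on a fine grid.

```python
import numpy as np

x1, x2 = np.array([1.0, 0.5]), np.array([1.0, 2.0])
b = np.linspace(0.0, 1.0, 200_001)

S1 = b * x1[0] + (1 - b) * x1[1]          # day-1 wealth factor for portfolio (b, 1-b)
S2 = S1 * (b * x2[0] + (1 - b) * x2[1])   # S2(b) after two days

print("S2*        ≈", S2.max(), " (exact 9/8)")
print("S2-hat     ≈", S2.mean(), " (exact 13/12)")   # mean ≈ integral over [0, 1]
print("b1-hat     ≈", b.mean(), " (exact 1/2)")
print("b2-hat(x1) ≈", (b * S1).mean() / S1.mean(), " (exact 5/9)")
```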
10. Growth optimal. Let X1 , X2 ≥ 0 , be price relatives of two independent stocks.
Suppose EX1 > EX2 . Do you always want some of X1 in a growth rate optimal
portfolio S(b) = bX1 + b̄X2 ? Prove or provide a counterexample.
Solution: Growth optimal.
Yes, we always want some of X1. The following is a proof by contradiction. Assume that b* = (0, 1)^t, so that X1 is not active. Then the KKT conditions for this choice of b* imply that E[X1/X2] ≤ 1 and E[X2/X2] = 1, because by assumption stock 1 is inactive and stock 2 is active. The second condition is obviously satisfied, so only the first condition needs to be checked. Since X1 and X2 are independent, the expectation can be rewritten as E[X1] E[1/X2]. Since X2 is nonnegative, 1/X2 is convex over the region of interest, so by Jensen's inequality E[1/X2] ≥ 1/E[X2]. This gives E[X1/X2] ≥ E[X1]/E[X2] > 1 since E[X1] > E[X2]. But this contradicts the KKT condition, therefore the assumption that b* = (0, 1)^t must be wrong, and so we must want some of X1.
Note that we never want to short sell X1. For any b < 0, we have

E ln(b X1 + (1 − b) X2) − E log X2 ≤ E ln( b X1/X2 + (1 − b) )
                                   ≤ ln( b E[X1/X2] + (1 − b) )
                                   < ln 1 = 0.

Hence, short selling X1 is always worse than b = (0, 1).
Alternatively, we can prove the same result directly as follows. Let −∞ < b < ∞. Consider the growth rate W(b) = E ln(b X1 + (1 − b) X2). Differentiating with respect to b, we get

W'(b) = E[ (X1 − X2) / (b X1 + (1 − b) X2) ].

Note that W(b) is concave in b. Thus W'(b) is monotonically nonincreasing. Since W'(b*) = 0 and W'(0) = E[X1/X2] − 1 > 0, it is immediate that b* > 0.
11. Cost of universality. In the discussion of finite horizon universal portfolios, it was shown that the loss factor due to universality is

1/Vn = Σ_{k=0}^n C(n, k) (k/n)^k ((n − k)/n)^{n−k}.   (16.34)

Evaluate Vn for n = 1, 2, 3.
Solution: Cost of universality.
Simple computation of the equation allows us to calculate

 n    1/Vn                  Vn
 1    0                     ∞
 2    0.125                 8
 3    0.197530864197531     5.0625
 4    0.251953125           3.96899224806202
 5    0.29696               3.36745689655172
 6    0.336076817558299     2.97551020408163
 7    0.371099019723317     2.69469857599079
 8    0.403074979782104     2.48092799146348
 9    0.43267543343584      2.31120124398809
10    0.460358496           2.17222014731754
12. Convex families. This problem generalizes Theorem 16.2.2. We say that S is a
convex family of random variables if S 1 , S2 ∈ S implies λS1 + (1 − λ)S2 ∈ S . Let S
be a closed convex family of random variables. Show that there is a random variable
S ∗ ∈ S such that
* +
S
E ln
≤0
(16.35)
S∗
for all S ∈ S if and only if
for all S ∈ S .
E
*
S
S∗
+
≤1
(16.36)
Solution: Convex families.
Define S ∗ as the random variable that maximizes E ln S over all S ∈ S . Since this
is a maximization of a concave function over a convex set, there is a global maximum.
For this value of S*, we have

E ln S ≤ E ln S*   (16.37)

for all S ∈ S, and therefore

E ln(S/S*) ≤ 0   (16.38)

for all S ∈ S.
We need to show that, for this value of S*,

E[ S/S* ] ≤ 1   (16.39)

for all S ∈ S. Let T ∈ S be defined as T = λS + (1 − λ)S* = S* + λ(S − S*). Then as λ → 0, expanding the logarithm in a Taylor series and taking only the first term, we have

E ln T − E ln S* = E ln [ S* ( 1 + λ(S − S*)/S* ) ] − E ln S*   (16.40)
                 = λ E[ (S − S*)/S* ]   (16.41)
                 = λ ( E[S/S*] − 1 )   (16.42)
                 ≤ 0,   (16.43)

where the last inequality follows from the fact that S* maximizes the expected logarithm. Therefore if S* maximizes the expected logarithm over the convex set, then for every S in the set,

E[ S/S* ] ≤ 1.   (16.44)
The other direction follows from Jensen's inequality, since if E[S/S*] ≤ 1 for all S, then

E ln(S/S*) ≤ ln E[S/S*] ≤ ln 1 = 0.   (16.45)
Chapter 17
Inequalities in Information Theory
1. Sum of positive definite matrices. For any two positive definite matrices, K 1 and
K2 , show that |K1 + K2 | ≥ |K1 | .
Solution: Sum of positive definite matrices
Let X, Y be independent random vectors with X ∼ φ_{K1} and Y ∼ φ_{K2}. Then X + Y ∼ φ_{K1+K2}, and hence (1/2) ln(2πe)^n |K1 + K2| = h(X + Y) ≥ h(X) = (1/2) ln(2πe)^n |K1|, by Lemma 17.2.1.
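A quick numerical sanity check (not part of the original solution) using randomly generated positive definite matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                        # illustrative dimension

def random_pd(n):
    """Return a random symmetric positive definite n x n matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

for _ in range(5):
    K1, K2 = random_pd(n), random_pd(n)
    assert np.linalg.det(K1 + K2) >= np.linalg.det(K1)
print("det(K1 + K2) >= det(K1) held in all random trials")
```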
2. Fan’s inequality[6] for ratios of determinants. For all 1 ≤ p ≤ n , for a positive
definite K = K(1, 2, . . . , n) , show that
p
B |K(i, p + 1, p + 2, . . . , n)|
|K|
≤
.
|K(p + 1, p + 2, . . . , n)| i=1 |K(p + 1, p + 2, . . . , n)|
(17.1)
Solution: Ky Fan’s inequality for the ratio of determinants. We use the same idea as
in Theorem 17.9.2, except that we use the conditional form of Theorem 17.1.5.
1
|K|
ln(2πe)p
2
|K(p + 1, p + 2, . . . , n)|
= h(X1 , X2 , . . . , Xp |Xp+1 , Xp+2 , . . . , Xn )
≤
=
!
h(Xi |Xp+1 , Xp+2 , . . . , Xn )
p
!
1
i=1
2
ln 2πe
|K(i, p + 1, p + 2, . . . , n)|
.(17.2)
|K(p + 1, p + 2, . . . , n)|
3. Convexity of determinant ratios. For positive definite matrices K, K0, show that ln( |K + K0| / |K| ) is convex in K.
Solution: Convexity of determinant ratios
The form of the expression is related to the capacity of the Gaussian channel, and hence
we can use results from the concavity of mutual information to prove this result.
Consider a colored noise Gaussian channel

Y_i = X_i + Z_i,   (17.3)

where X1, X2, . . . , Xn ∼ N(0, K0) and Z1, Z2, . . . , Zn ∼ N(0, K), and X and Z are independent.
Then

I(X1, . . . , Xn; Y1, . . . , Yn) = h(Y1, . . . , Yn) − h(Y1, . . . , Yn | X1, . . . , Xn)   (17.4)
   = h(Y1, . . . , Yn) − h(Z1, . . . , Zn)   (17.5)
   = (1/2) log(2πe)^n |K + K0| − (1/2) log(2πe)^n |K|   (17.6)
   = (1/2) log ( |K0 + K| / |K| ).   (17.7)
Now from Theorem 2.7.2, relative entropy is a convex function of the distributions. (The theorem should be extended to the continuous case by replacing probability mass functions by densities and summations by integrations.) Thus if fλ(x, y) = λ f1(x, y) + (1 − λ) f2(x, y) and gλ(x, y) = λ g1(x, y) + (1 − λ) g2(x, y), we have

D( fλ(x, y) || gλ(x, y) ) ≤ λ D( f1(x, y) || g1(x, y) ) + (1 − λ) D( f2(x, y) || g2(x, y) ).   (17.8)
Let Z n ∼ N (0, K1 ) with probability λ and Z n ∼ N (0, K2 ) with probability 1 − λ .
Let f1 (xn , y n ) be the joint distribution corresponding to Y n = X n + Z n when Z n ∼
N (0, K1 ) , and g1 (x, y) = f1 (x)f1 (y) be the corresponding product distribution. Then
I(X1^n; Y1^n) = D( f1(x^n, y^n) || f1(x^n) f1(y^n) ) = D( f1(x^n, y^n) || g1(x^n, y^n) ) = (1/2) log ( |K0 + K1| / |K1| ).   (17.9)
Similarly,

I(X2^n; Y2^n) = D( f2(x^n, y^n) || f2(x^n) f2(y^n) ) = D( f2(x^n, y^n) || g2(x^n, y^n) ) = (1/2) log ( |K0 + K2| / |K2| ).   (17.10)

However, the mixture distribution is not Gaussian, and we cannot write the same expression in terms of determinants. Instead, using the fact that the Gaussian is the worst noise given the moment constraints, we have by convexity of relative entropy
(1/2) log ( |K0 + Kλ| / |Kλ| ) ≤ I(Xλ^n; Yλ^n)   (17.11)
   = D( fλ(x^n, y^n) || fλ(x^n) fλ(y^n) )   (17.12)
   ≤ λ D( f1(x, y) || g1(x, y) ) + (1 − λ) D( f2(x, y) || g2(x, y) )   (17.13)
   = λ I(X1^n; Y1^n) + (1 − λ) I(X2^n; Y2^n)   (17.14)
   = λ (1/2) log ( |K0 + K1| / |K1| ) + (1 − λ) (1/2) log ( |K0 + K2| / |K2| ),   (17.15)

where Kλ = λ K1 + (1 − λ) K2 is the covariance of the mixture noise, proving the convexity of the determinant ratio.
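A numerical spot check of the claimed convexity (not part of the original solution), testing random convex combinations of random positive definite matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3                                        # illustrative dimension

def random_pd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

def f(K, K0):
    """ln(|K + K0| / |K|) computed stably via log-determinants."""
    return np.linalg.slogdet(K + K0)[1] - np.linalg.slogdet(K)[1]

K0 = random_pd(n)
for _ in range(5):
    K1, K2 = random_pd(n), random_pd(n)
    lam = rng.uniform()
    K_mix = lam * K1 + (1 - lam) * K2
    assert f(K_mix, K0) <= lam * f(K1, K0) + (1 - lam) * f(K2, K0) + 1e-9
print("convexity inequality held in all random trials")
```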
4. Data Processing Inequality: Let random variables X1, X2, X3 and X4 form a Markov chain X1 → X2 → X3 → X4. Show that
I(X1 ; X3 ) + I(X2 ; X4 ) ≤ I(X1 ; X4 ) + I(X2 ; X3 ).
(17.16)
Solution: Data Processing Inequality: (repeat of Problem 4.33)
X1 → X2 → X3 → X4
I(X1; X4) + I(X2; X3) − I(X1; X3) − I(X2; X4)   (17.17)
   = H(X1) − H(X1|X4) + H(X2) − H(X2|X3) − (H(X1) − H(X1|X3)) − (H(X2) − H(X2|X4))   (17.18)
   = H(X1|X3) − H(X1|X4) + H(X2|X4) − H(X2|X3)   (17.19)
   = H(X1, X2|X3) − H(X2|X1, X3) − H(X1, X2|X4) + H(X2|X1, X4)   (17.20)
     + H(X1, X2|X4) − H(X1|X2, X4) − H(X1, X2|X3) + H(X1|X2, X3)   (17.21)
   = −H(X2|X1, X3) + H(X2|X1, X4)   (17.22)
   = H(X2|X1, X4) − H(X2|X1, X3, X4)   (17.23)
   = I(X2; X3|X1, X4)   (17.24)
   ≥ 0,   (17.25)

where H(X1|X2, X3) = H(X1|X2, X4) by the Markovity of the random variables.
5. Markov chains: Let random variables X, Y, Z and W form a Markov chain so that
X → Y → (Z, W ) , i.e., p(x, y, z, w) = p(x)p(y|x)p(z, w|y) . Show that
I(X; Z) + I(X; W ) ≤ I(X; Y ) + I(Z; W )
(17.26)
Solution: Markov chains: (repeat of Problem 4.34)
X → Y → (Z, W), hence by the data processing inequality, I(X; Y) ≥ I(X; (Z, W)), and hence

I(X; Y) + I(Z; W) − I(X; Z) − I(X; W)   (17.27)
   ≥ I(X; Z, W) + I(Z; W) − I(X; Z) − I(X; W)   (17.28)
   = H(Z, W) + H(X) − H(X, W, Z) + H(W) + H(Z) − H(W, Z) − H(Z) − H(X) + H(X, Z) − H(W) − H(X) + H(W, X)   (17.29)
   = −H(X, W, Z) + H(X, Z) + H(X, W) − H(X)   (17.30)
   = H(W|X) − H(W|X, Z)   (17.31)
   = I(W; Z|X)   (17.32)
   ≥ 0.   (17.33)
Bibliography
[1] T. Berger. Multiterminal source coding. In G. Longo, editor, The Information Theory
Approach to Communications. Springer-Verlag, New York, 1977.
[2] M. Bierbaum and H.M. Wallmeier. A note on the capacity region of the multiple access
channel. IEEE Trans. Inform. Theory, IT-25:484, 1979.
[3] D.C. Boes. On the estimation of mixing distributions. Ann. Math. Statist., 37:177–188,
Jan. 1966.
[4] I. Csiszár and J. Körner. Information Theory: Coding Theorems for Discrete Memoryless
Systems. Academic Press, 1981.
[5] Ky Fan. On a theorem of Weyl concerning the eigenvalues of linear transformations II.
Proc. National Acad. Sci. U.S., 36:31–35, 1950.
[6] Ky Fan. Some inequalities concerning positive-definite matrices. Proc. Cambridge Phil.
Soc., 51:414–421, 1955.
[7] R.G. Gallager. Information Theory and Reliable Communication. Wiley, New York,
1968.
[8] R.G. Gallager. Variations on a theme by Huffman. IEEE Trans. Inform. Theory, IT-24:668–674, 1978.
[9] L. Lovász. On the Shannon capacity of a graph. IEEE Trans. Inform. Theory, IT-25:1–7,
1979.
[10] J.T. Pinkston. An application of rate-distortion theory to a converse to the coding
theorem. IEEE Trans. Inform. Theory, IT-15:66–71, 1969.
[11] A. Rényi. Wahrscheinlichkeitsrechnung, mit einem Anhang über Informationstheorie.
Veb Deutscher Verlag der Wissenschaften, Berlin, 1962.
[12] A.A. Sardinas and G.W. Patterson. A necessary and sufficient condition for the unique
decomposition of coded messages. In IRE Convention Record, Part 8, pages 104–108,
1953.
[13] C.E. Shannon. Communication theory of secrecy systems. Bell Sys. Tech. Journal,
28:656–715, 1949.
[14] C.E. Shannon. Coding theorems for a discrete source with a fidelity criterion. IRE
National Convention Record, Part 4, pages 142–163, 1959.
[15] C.E. Shannon. Two-way communication channels. In Proc. 4th Berkeley Symp. Math.
Stat. Prob., volume 1, pages 611–644. Univ. California Press, 1961.
[16] J.A. Storer and T.G. Szymanski. Data compression via textual substitution. J. ACM,
29(4):928–951, 1982.