Shannon’s Theorems
First theorem: H(S) ≤ L_n(S^n)/n < H(S) + 1/n,
where L_n(S^n) is the length of a certain code for the n-th extension S^n of the source (a numeric sketch follows below).
Second theorem: extends this idea to a channel with errors, allowing one to reach arbitrarily close to the channel capacity while simultaneously correcting almost all of the errors.
Proof: it does so without constructing a specific code, relying instead on a random code.
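As a quick numeric illustration of the first theorem (a sketch with an assumed three-symbol source, not taken from the slides): the window containing the attainable per-symbol code length L_n(S^n)/n shrinks to H(S) as n grows.

import math

# Hypothetical source S with three symbols (assumed example)
probs = [0.5, 0.25, 0.25]
H = -sum(p * math.log2(p) for p in probs)   # source entropy H(S) in bits/symbol

# First theorem: H(S) <= L_n(S^n)/n < H(S) + 1/n, so the gap shrinks like 1/n
for n in (1, 2, 4, 8, 16):
    print(f"n = {n:2d}:  {H:.4f} <= L_n(S^n)/n < {H + 1/n:.4f}")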
Random Codes
Send an n-bit block code through a binary symmetric channel:
  A = {a_i : i = 1, …, M}, the M distinct, equiprobable n-bit blocks that may be sent;
  B = {b_j : |b_j| = n, j = 1, …, 2^n}, the set of all possible received n-bit blocks.
The channel has crossover probability Q < ½ (and P = 1 − Q), i.e. channel matrix
  P  Q
  Q  P
and capacity C = 1 − H_2(Q). Since the codewords are equiprobable, I_2(a_i) = log_2 M.
Intuitively, each block comes through with n∙C bits of information.
To signal close to capacity, for a small number ε > 0 we want I_2(a_i) = log_2 M = n(C − ε).
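A small sketch of the quantities just defined, with an assumed crossover probability Q = 0.1 and block length n = 100: the capacity C = 1 − H_2(Q) and the roughly n∙C bits of information carried per block.

import math

def H2(q):
    """Binary entropy function H_2(q) in bits."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

Q = 0.1            # assumed crossover probability of the BSC, Q < 1/2
n = 100            # assumed block length
C = 1 - H2(Q)      # capacity of the binary symmetric channel
print(f"C = {C:.4f} bits/symbol; an n = {n} block carries about {n * C:.1f} bits")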

M2
n (C   )
2nC intuitively, # of messages that can get thru channel
 n
by increasing n, this can be made arbitrarily large
2
 we can choose M so that we use only a small fraction of the # of
messages that could get thru – redundancy. Excess redundancy
gives us the room required to bring the error rate down. For a large
n, pick M random codewords from {0, 1}n.
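A minimal sketch of this construction under assumed parameters (Q = 0.1, ε = 0.15, n = 40): pick M = 2^(n(C − ε)) random codewords from {0, 1}^n and compare M with the roughly 2^(nC) messages that could get through.

import math, random

def H2(q):
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

Q, eps, n = 0.1, 0.15, 40                 # assumed illustrative parameters
C = 1 - H2(Q)
M = int(2 ** (n * (C - eps)))             # number of codewords, M = 2^(n(C - eps))

# pick M random codewords from {0, 1}^n, each represented as an n-bit integer
code = [random.getrandbits(n) for _ in range(M)]

print(f"M = {M} codewords; about 2^(nC) = {2 ** (n * C):.3e} messages could get through")
print(f"fraction used: M / 2^(nC) = 2^(-n*eps) = {2 ** (-n * eps):.3e}")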
With high probability, almost all a_i will be a certain distance apart (provided M « 2^n). Picture the a_i in n-dimensional Hamming space. As each a_i goes through the channel, we expect nQ errors on average. Consider a sphere of radius n(Q + ε′) about each sent symbol a_i; with high probability the received symbol b_j lands inside it (at distance about nQ, with a margin of nε′ to spare). Similarly, consider a sphere of the same radius about each received symbol b_j.

What is the probability that an uncorrectable error occurs? Either there is too much noise (a_i falls outside the sphere S = S_n(Q+ε′)(b_j) about the received symbol), or another codeword a′ ≠ a_i is also inside that sphere:

  P_E ≤ P(a_i ∉ S) + Σ_{a′ ∈ A \ {a_i}} P(a′ ∈ S).

N.b. P(b_j ∉ S(a_i)) = P(a_i ∉ S(b_j)), and by the law of large numbers

  lim_{n→∞} P(b_j ∉ S_n(Q+ε′)(a_i)) = 0,

so the first term can be made « δ.
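A short simulation sketch (assumed Q, ε′, n) of why the sphere radius n(Q + ε′) is the right choice: the number of bit flips concentrates near nQ, so the received word almost always lands inside the sphere of radius n(Q + ε′) about the sent word.

import random

Q, eps_p, n, trials = 0.1, 0.05, 500, 2000    # assumed illustrative parameters
radius = n * (Q + eps_p)

inside = 0
for _ in range(trials):
    flips = sum(1 for _ in range(n) if random.random() < Q)   # d(a_i, b_j) for one use of the channel
    if flips <= radius:
        inside += 1

print(f"P(d(a_i, b_j) <= n(Q + eps')) is about {inside / trials:.4f}")   # close to 1 for large n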
Idea
Pick the number of codewords M to be 2^(n(C − ε)), where C is the channel capacity (the block size n is as yet undetermined and depends on how closely, ε, we wish to approach the channel capacity). The number of possible random codes is (2^n)^M = 2^(nM), each equally likely. Let P_E be the probability of error averaged over all random codes. The idea is to show that P_E → 0; i.e. a code chosen at random will, most of the time, probably work!
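A rough Monte Carlo sketch of this averaging idea (all parameters assumed, and n kept far too small for the asymptotics to show their full strength, so this only illustrates the procedure): draw several random codes, send codewords through the BSC, decode to the nearest codeword, and average the error rate over the codes.

import random

def through_bsc(word, n, Q):
    """Flip each of the n bits independently with probability Q."""
    noise = sum(1 << i for i in range(n) if random.random() < Q)
    return word ^ noise

def nearest(codebook, received):
    """Decode to the codeword at minimum Hamming distance from the received word."""
    return min(codebook, key=lambda c: bin(c ^ received).count("1"))

Q, n, M = 0.1, 30, 16            # assumed small illustrative parameters
codes, sends = 20, 50            # how many random codes / transmissions per code

errors = 0
for _ in range(codes):                                   # average over random codes
    codebook = [random.getrandbits(n) for _ in range(M)]
    for _ in range(sends):
        a = random.choice(codebook)
        b = through_bsc(a, n, Q)
        if nearest(codebook, b) != a:
            errors += 1

print(f"error rate averaged over random codes: {errors / (codes * sends):.3f}")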
Proof
Suppose a is what’s sent, and b what’s received. Then

  P_E ≤ P(d(a, b) > n(Q + ε′)) + Σ_{a′ ≠ a} P(d(a′, b) ≤ n(Q + ε′)),

where the first term is the probability of too many errors and the second is the probability that another codeword is too close.

Let X ∈ {0, 1} be a random variable representing errors in the channel, taking the value 0 with probability P and 1 with probability Q. So if the error vector a ⊕ b = (X_1, …, X_n), then d(a, b) = X_1 + … + X_n, and

  P(d(a, b) > n(Q + ε′)) = P(X_1 + … + X_n > n(Q + ε′))
                         = P((X_1 + … + X_n)/n − Q > ε′)
                         ≤ V{X} / (nε′²) → 0 as n → ∞ (by the law of large numbers, via Chebyshev’s inequality).

N.b. Q = E{X}, and since Q < ½ we may pick ε′ so that Q + ε′ < ½.
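A numeric sketch (assumed Q and ε′) comparing the Chebyshev bound V{X}/(nε′²) with a simulated estimate of P(d(a, b) > n(Q + ε′)); both shrink as n grows.

import random

Q, eps_p, trials = 0.1, 0.05, 2000     # assumed illustrative parameters
varX = Q * (1 - Q)                     # V{X} for the 0/1 error variable X

for n in (100, 400, 1600):
    bound = varX / (n * eps_p ** 2)    # Chebyshev bound on P(d(a,b) > n(Q + eps'))
    exceed = sum(
        1 for _ in range(trials)
        if sum(1 for _ in range(n) if random.random() < Q) > n * (Q + eps_p)
    )
    print(f"n = {n:4d}: simulated {exceed / trials:.4f} <= bound {bound:.4f}")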
Since the a′ are randomly (uniformly) distributed throughout the space,

  P(d(a′, b) ≤ n(Q + ε′)) ≤ 2^(n H_2(Q + ε′)) / 2^n,

the chance that some particular codeword lands too close: by the binomial bound, the numerator bounds the volume of a sphere of radius n(Q + ε′), and the denominator 2^n is the volume of the whole space. Since H_2 is convex down and Q + ε′ < ½,

  H_2(Q + ε′) ≤ H_2(Q) + ε′ H_2′(Q),

and

  H_2′(Q) = d/dQ [ Q log_2(1/Q) + (1 − Q) log_2(1/(1 − Q)) ]
          = log_2(1/Q) − log_2(1/(1 − Q))
          = log_2((1 − Q)/Q) = log_2(1/Q − 1). Hence,

  2^(n H_2(Q + ε′)) / 2^n ≤ 2^(n H_2(Q)) ∙ 2^(n ε′ log_2(1/Q − 1)) / 2^n.
Chance that any one of the other codewords is too close:

  M ∙ P(d(a′, b) ≤ n(Q + ε′)) ≤ 2^(n(C − ε)) ∙ 2^(n H_2(Q)) ∙ 2^(n ε′ log_2(1/Q − 1)) / 2^n
                              = 2^(−n(ε − ε′ log_2(1/Q − 1)))      (since C = 1 − H_2(Q))
                              → 0 as n → ∞.

N.b. e = log_2(1/Q − 1) > 0, so we can choose ε′ with ε′e < ε.
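A final numeric sketch under assumed parameters, checking three ingredients of this last step: the binomial bound 2^(n H_2(Q + ε′)) on the sphere volume, the derivative identity H_2′(Q) = log_2(1/Q − 1), and the resulting bound 2^(−n(ε − ε′e)) tending to 0 once ε′ is chosen with ε′e < ε.

import math

def H2(q):
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

Q, eps = 0.1, 0.1                          # assumed illustrative parameters
e = math.log2(1 / Q - 1)                   # e = log2(1/Q - 1) > 0 since Q < 1/2
eps_p = 0.5 * eps / e                      # choose eps' so that eps' * e < eps

# binomial bound: volume of a Hamming sphere of radius n(Q + eps') is at most 2^(n H_2(Q + eps'))
n = 200
r = int(n * (Q + eps_p))
sphere = sum(math.comb(n, k) for k in range(r + 1))
print(f"log2(sphere volume) = {math.log2(sphere):.2f} <= n*H2(Q+eps') = {n * H2(Q + eps_p):.2f}")

# derivative identity, checked by a central finite difference
h = 1e-6
print(f"H2'(Q) = {(H2(Q + h) - H2(Q - h)) / (2 * h):.4f} vs log2(1/Q - 1) = {e:.4f}")

# the second error term is bounded by 2^(-n(eps - eps'*e)), which goes to 0 as n grows
for n in (100, 500, 1000, 5000):
    print(f"n = {n:5d}: 2^(-n(eps - eps'*e)) = {2 ** (-n * (eps - eps_p * e)):.3e}")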
