Code and Decoder Design of
LDPC Codes for Gbps Systems
Jeremy Thorpe
Presented to: Microsoft Research
2002.11.25
Talk Overview
 Introduction (13 slides)
 Wiring Complexity (9 slides)
 Logic Complexity (7 slides)
Reliable Communication over
Unreliable Channels
 Channel is the means by which information is communicated from sender to receiver.
 Sender chooses X.
 Channel generates Y from conditional probability distribution P(Y|X).
 Receiver observes Y.
[Figure: X → channel P(Y|X) → Y]
Shannon’s Channel Coding
Theorem
 Using the channel n times, we can communicate k bits of information with probability of error as small as we like, as long as
$$R = \frac{k}{n} < C$$
and as long as n is large enough. C is a number that characterizes any channel.
 The same is impossible if R > C.
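As a quick illustration of the constant C (not from the talk): for the binary symmetric channel with crossover probability p, the capacity is known to be C = 1 − H2(p), which a few lines of Python can evaluate:

```python
import math

def bsc_capacity(p):
    """Capacity (bits per channel use) of a BSC with crossover probability p."""
    if p in (0.0, 1.0):
        return 1.0
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy H2(p)
    return 1.0 - h2

print(bsc_capacity(0.11))  # ~0.5, so rate-1/2 codes can be reliable for p < 0.11
```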
The Coding Strategy
 Encoder chooses the m-th codeword in codebook C and transmits it across the channel.
 Decoder observes the channel output y and generates m′ based on knowledge of the codebook C and the channel statistics.
[Figure: $m \in \{0,1\}^k$ → Encoder → $x \in C \subseteq X^n$ → Channel → $y \in Y^n$ → Decoder → $m' \in \{0,1\}^k$]
Linear Codes
 A linear code C can be defined in terms of either a generator matrix or a parity-check matrix.
 Generator matrix G (k×n): $C = \{mG\}$
 Parity-check matrix H ((n−k)×n): $C = \{c : cH' = 0\}$
Regular LDPC Codes
 LDPC codes are linear codes defined in terms of H.
 The number of ones in each column of H is a fixed number λ.
 The number of ones in each row of H is a fixed number ρ.
 Typical parameters for regular LDPC codes are (λ,ρ) = (3,6).
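A minimal sketch of one standard way to build such a matrix, assuming the usual random "stub-matching" construction (the talk does not specify how its graphs are initially generated):

```python
import numpy as np

def random_regular_H(n, lam=3, rho=6, seed=0):
    """Random (lam, rho)-regular parity-check matrix of size (n*lam/rho, n)."""
    assert (n * lam) % rho == 0
    m = n * lam // rho
    rng = np.random.default_rng(seed)
    var_stubs = np.repeat(np.arange(n), lam)   # each variable node gets lam stubs
    chk_stubs = np.repeat(np.arange(m), rho)   # each check node gets rho stubs
    rng.shuffle(var_stubs)                     # random matching of the stubs
    H = np.zeros((m, n), dtype=np.uint8)
    H[chk_stubs, var_stubs] = 1                # connect matched stubs
    return H

H = random_regular_H(12)
print(H.sum(axis=0), H.sum(axis=1))  # column/row weights ~lam and ~rho
```

Stub matching can create repeated edges, which collapse when written into H; practical constructions reject or repair those cases.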
Graph Representation of LDPC Codes
 H is represented by a bipartite graph.
 Nodes v (degree λ) on the left represent variables.
 Nodes c (degree ρ) on the right represent equations: $\sum_{v|c} x_v = 0$.
[Figure: bipartite graph with variable nodes on the left and check nodes on the right]
Message-Passing Decoding of
LDPC Codes
 Message Passing (or Belief Propagation) decoding is a low-complexity algorithm which approximately answers the question “what is the most likely x given y?”
 MP recursively defines messages $m_{v,c}^{(i)}$ from each variable node v to each adjacent check node c, and $m_{c,v}^{(i)}$ in the reverse direction, for iterations i = 0, 1, ...
Two Types of Messages...
 Likelihood ratio: $\lambda_{x,y} = \dfrac{p(y|x=1)}{p(y|x=0)}$
 Probability difference: $\pi_{x,y} = p(x=1|y) - p(x=0|y)$
 For $y_1,\ldots,y_n$ independent conditionally on x: $\lambda_{x,y} = \prod_{i=1}^{n} \lambda_{x,y_i}$
 For $x_1,\ldots,x_n$ independent: $\pi_{x,y} = \prod_{i} \pi_{x_i,y_i}$
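A tiny numeric check of the product rule for likelihood ratios, assuming two independent BSC observations of the same bit (an illustrative setup, not from the talk):

```python
import math

p = 0.1                      # BSC crossover probability (illustrative)
lam1 = (1 - p) / p           # likelihood ratio for observation y1 = 1
lam2 = p / (1 - p)           # likelihood ratio for observation y2 = 0
# Joint likelihood ratio p(y1,y2|x=1)/p(y1,y2|x=0) factors by independence:
joint = ((1 - p) * p) / (p * (1 - p))
print(math.isclose(joint, lam1 * lam2))  # True: lambdas multiply
```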
...Related by the Bilinear Transform
 Definition:
$$B(x) = \frac{1-x}{1+x}$$
 Properties:
$$B(B(x)) = x, \qquad B(\lambda_{x,y}) = -\pi_{x,y}, \qquad B(-\pi_{x,y}) = \lambda_{x,y}$$
 Derivation (assuming a uniform prior on x, so $p(y|x=0) + p(y|x=1) = 2p(y)$):
$$B(\lambda_{x,y}) = B\!\left(\frac{p(y|x=1)}{p(y|x=0)}\right) = \frac{p(y|x=0) - p(y|x=1)}{p(y|x=0) + p(y|x=1)} = \frac{2p(x=0|y)p(y) - 2p(x=1|y)p(y)}{2p(y)} = p(x=0|y) - p(x=1|y) = -\pi_{x,y}$$
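These identities are easy to verify numerically; the sketch below assumes a BSC with crossover probability p and a uniform prior on x:

```python
import math

def B(x):
    """Bilinear transform B(x) = (1 - x) / (1 + x)."""
    return (1.0 - x) / (1.0 + x)

p = 0.1                      # BSC crossover probability; suppose y = 1 observed
lam = (1 - p) / p            # lambda = p(y|x=1) / p(y|x=0) = 9
pi = (1 - p) - p             # pi = p(x=1|y) - p(x=0|y) = 0.8 (uniform prior)
print(math.isclose(B(B(0.3)), 0.3))  # B is an involution: B(B(x)) = x
print(math.isclose(B(lam), -pi))     # B maps lambda to -pi
```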
Variable to Check Messages
 On any iteration i, the message from v to c is:
$$m_{v,c}^{(i)} = \lambda_{x_v,y_v} \prod_{c'|v,\ c' \neq c} B\!\left(m_{c',v}^{(i-1)}\right)$$
[Figure: variable node v combining the channel message with messages from its other neighboring checks]
Check to Variable Messages
 On any iteration, the message from c to v is:
$$m_{c,v}^{(i)} = \prod_{v'|c,\ v' \neq v} B\!\left(m_{v',c}^{(i)}\right)$$
[Figure: check node c combining messages from its other neighboring variables]
Decision Rule
 After sufficiently many iterations, decide each bit from its overall likelihood ratio:
$$\hat{x}_v = \begin{cases} 0, & \text{if } \lambda_{x_v,y_v} \prod_{c|v} B\!\left(m_{c,v}^{(i-1)}\right) < 1 \\ 1, & \text{otherwise} \end{cases}$$
Theorem about MP Algorithm
 If the algorithm stops after r iterations, then it returns the maximum a posteriori probability estimate of $x_v$ given y within radius r of v.
 However, this holds only if the variables within radius r of v are related only by the equations within radius r of v (i.e., the radius-r neighborhood of v contains no loops).
[Figure: the radius-r neighborhood of variable node v]
Wiring Complexity
Physical Implementation
(VLSI)
 We have seen that the MP decoding algorithm for LDPC codes is defined in terms of a graph.
 Nodes perform local computation.
 Edges carry messages from v to c, and from c to v.
 Instantiate this graph on a chip!
 Edges → Wires
 Nodes → Logic units
Complexity vs. Performance
 Longer codes provide:
   More efficient use of the channel (e.g., less power used over the AWGN channel)
   Faster throughput for fixed technology and decoding parameters (number of iterations)
 Longer codes demand:
   More logic resources
   Way more wiring resources
The Wiring Problem
 The number of edges in the graph grows like the number of nodes n.
 The length of the edges in a random graph also grows, like $\sqrt{n}$ (the side length of the layout region).
[Figure: a random graph]
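A quick Monte Carlo sketch of the second point, assuming nodes are placed uniformly in a √n × √n layout region (an illustrative placement model, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (100, 400, 1600):
    side = np.sqrt(n)                            # chip side for n unit-area nodes
    a = rng.uniform(0, side, size=(10000, 2))    # random edge endpoints
    b = rng.uniform(0, side, size=(10000, 2))
    mean_len = np.linalg.norm(a - b, axis=1).mean()
    print(n, mean_len / side)                    # stays near ~0.52 for all n
```

The mean edge length stays a fixed fraction of the side length, so it grows like √n.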
Graph Construction?
 Idea: find a construction that has low wirelength and maintains good performance...
 Drawback: it is difficult to construct any graph that matches the performance of a random graph.
A Better Solution:
 Use an algorithm which generates a graph at random, but with a preference for:
   Short edge length
   Quantities related to code performance
Conventional Graph Wisdom
 Short loops give rise to dependent messages
(which are assumed to be independent) after
a small number of iterations, and should be
avoided.
Simulated Annealing!
 Simulated annealing approximately
minimizes an Energy Function over a
Solution space.
 Requires a good way to traverse the solution
space.
Generating LDPC graphs with
Simulated Annealing
 Define an energy function with two components:
$$E = c_w E_w + c_l E_l$$
   Wirelength: $E_w = \sum_w |w|$ (sum of edge lengths)
   Loopiness: $E_l = \sum_l \gamma^{|l|}$ (γ < 1, so short loops are penalized most)
 Traverse the space by picking two edges at random and swapping their endpoints, which preserves all node degrees (a condensed sketch follows).
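A condensed, runnable sketch of this procedure. The 1-D placements, the linear cooling schedule, and the use of 4-loop counting as a cheap stand-in for the full loopiness sum are all assumptions made for illustration:

```python
import math
import random

def energy(edges, xpos_v, xpos_c, cw=1.0, cl=10.0):
    Ew = sum(abs(xpos_v[v] - xpos_c[c]) for v, c in edges)  # total wirelength
    nbrs = {}
    for v, c in edges:
        nbrs.setdefault(c, set()).add(v)
    El = 0                                     # count length-4 loops as a proxy
    checks = list(nbrs)
    for i in range(len(checks)):
        for j in range(i + 1, len(checks)):
            k = len(nbrs[checks[i]] & nbrs[checks[j]])
            El += k * (k - 1) // 2             # each shared variable pair = one 4-loop
    return cw * Ew + cl * El

def anneal(edges, xpos_v, xpos_c, steps=5000, T0=2.0):
    E = energy(edges, xpos_v, xpos_c)
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-9        # linear cooling schedule
        i, j = random.sample(range(len(edges)), 2)
        (v1, c1), (v2, c2) = edges[i], edges[j]
        edges[i], edges[j] = (v1, c2), (v2, c1)   # swap endpoints (degrees kept)
        E2 = energy(edges, xpos_v, xpos_c)
        if E2 < E or random.random() < math.exp((E - E2) / T):
            E = E2                             # accept the move
        else:
            edges[i], edges[j] = (v1, c1), (v2, c2)  # reject: undo the swap
    return edges

# Example: a small (3,6)-regular graph with assumed 1-D node placements.
vs = [v for v in range(12) for _ in range(3)]
cs = [c for c in range(6) for _ in range(6)]
random.shuffle(cs)
edges = anneal(list(zip(vs, cs)), {v: v for v in range(12)},
               {c: 2 * c for c in range(6)})
```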
Results of Simulated Annealing
 The graph on the right has nearly identical performance to the one shown previously.
[Figure: a graph generated by simulated annealing]
Logic Complexity
Complexity of Classical
Algorithm
 Original algorithm defines messages in terms of arithmetic operations over real numbers:
$$m_{v,c}^{(i)} = \lambda_{x_v,y_v} \prod_{c'|v,\ c' \neq c} B\!\left(m_{c',v}^{(i-1)}\right)$$
 However, this implies floating-point addition, multiplication, and even division!
A Modified Algorithm
 We define a modified algorithm in which all messages are the logarithms of the messages in the original scheme:
$$m'^{(i)}_{v,c} = \log\!\left(m^{(i)}_{v,c}\right) = \log(\lambda_{x_v,y_v}) + \sum_{c'|v,\ c' \neq c} \log\!\left(B\!\left(\exp\!\left(m'^{(i-1)}_{c',v}\right)\right)\right) = \lambda'_{x_v,y_v} + \sum_{c'|v,\ c' \neq c} \phi\!\left(m'^{(i-1)}_{c',v}\right)$$
where $\phi(x) = \log\!\left(B(e^{x})\right)$.
 The channel message λ is similarly replaced by its logarithm λ′.
Quantization
 We have replaced a product with a sum, but now we have a transcendental function φ.
 However, if we quantize the messages, we can pre-compute φ for all values!
$$m'^{(i)}_{v,c} = \lambda'_{x_v,y_v} + \sum_{c'|v,\ c' \neq c} \phi\!\left(m'^{(i-1)}_{c',v}\right)$$
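A sketch of the lookup-table idea. The bit width, quantizer range, and the use of Gallager's φ(x) = −log tanh(x/2) on message magnitudes (with signs tracked separately, as is standard practice) are illustrative assumptions, not details from the talk:

```python
import numpy as np

BITS = 4
XMAX = 8.0                                    # assumed quantizer range for |m'|
levels = np.linspace(XMAX / 2**BITS, XMAX, 2**BITS)   # 16 quantization levels

def phi(x):
    """Gallager's phi(x) = -log(tanh(x/2)), applied to message magnitudes."""
    return -np.log(np.tanh(x / 2))

PHI_TABLE = phi(levels)                       # pre-computed once; a ROM in hardware

def quantize(x):
    """Map a positive message magnitude to its quantizer index."""
    return int(np.clip(round(x / XMAX * 2**BITS) - 1, 0, 2**BITS - 1))

m = 1.7                                       # at run time: one table lookup,
print(PHI_TABLE[quantize(m)], phi(m))         # no transcendental evaluation
```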
Quantized MP Performance
 The graph on the following page shows the bit error rate for a regular (3,6) code of length n = 10,000, using between 2 and 4 bits of quantization.
 (Some error floors are predicted by density evolution; some are not.)
Conclusion
 There is a tradeoff between logic complexity and performance.
 Nearly optimal performance (+0.1 dB ≈ 1.03× power) is achievable with 4-bit messages.
 More work is needed to avoid error floors due to quantization.