
A Demonstration Platform for Irregular LDPC Codes

David Freiberger

Abstract: Low-density parity check (LDPC) codes are becoming popular for the transmission of signals over noisy transmission channels. This paper describes a MATLAB demonstration of irregular LDPC codes, utilizing LDPC software written by Bagawan S. Nugroho [6]. The objective of this paper is to demonstrate and compare very basic irregular LDPC codes to an undergraduate student audience.

A parity check matrix can be represented using a Tanner graph of check nodes joined to variable nodes by edges, as in figure 1a, and the constraints on a codeword can be verified (figure 2).
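To make the Tanner-graph picture concrete, the following Python sketch extracts the graph's edges from a parity-check matrix and checks the codeword constraint of figure 2. The 3 x 6 matrix here is made up for illustration and is not the H of figure 1:

```python
import numpy as np

# A made-up 3 x 6 parity-check matrix (illustrative; not the H of figure 1).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

# Rows of H are check nodes, columns are variable nodes; every 1 in H is an
# edge joining check node i to variable node j in the Tanner graph.
edges = [(i, j) for i in range(H.shape[0]) for j in range(H.shape[1]) if H[i, j]]

# A word x is a valid codeword when every parity check is satisfied,
# i.e. H x^T = 0 with arithmetic performed mod 2.
def is_codeword(H, x):
    return not np.any(H @ x % 2)

print(len(edges))                                    # 9 edges for this H
print(is_codeword(H, np.array([1, 1, 0, 0, 1, 1])))  # True: satisfies all checks
```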

I. INTRODUCTION

In order to transmit signals over noisy transmission channels, such as those seen in radio frequency communication and similar applications, it is necessary to incorporate error checking codes. Generally a form of parity-checking is used, in which extra bits are added to the transmission through an encoder, allowing a decoder at the receiver side to perform constraint checks on each bit received. These parity checks allow the receiving device to remove noise from the received signal. Unfortunately, it is not possible to encode a signal in a way that absolutely guarantees equality between sent and received data, as proved by Shannon [1]. However, low-density parity check (LDPC) codes demonstrate high performance capabilities, arbitrarily close to the Shannon limit, and are becoming feasible with today's processing technology [2].

Figure 1 Representations of H (not sparse): (a) Tanner graph; (b) matrix representation

LDPC codes are founded on basic linear-algebra principles. In this paper, math is performed in the binary subfield of the real numbers, and hence addition and multiplication are performed modulo 2: addition becomes the XOR operation, and multiplication becomes the AND operation. LDPC codes utilize a sparse binary parity-check matrix H of size M x N that can be either regular, meaning that there is a specific number of 1's per row and column, or irregular, in which there may or may not be a constraint on the number of 1's. In this demonstration we deal only with an irregular parity-check matrix (figure 1). The sparseness of H means that there is a very low number of 1's in H compared to its total size, which leads to some of the useful properties of LDPC codes [3].

Figure 2 Code word validation

II. ENCODING

The parity-check matrix is used to encode a message into a codeword x. Both the sending and receiving devices have a copy of H, which may be contained within a lookup table in hardware memory, for example. The columns in H correspond to the bits of the codeword, and the rows in H correspond to the parity-checks on the codeword [2]. A valid codeword will satisfy

Hx^T = 0    (1)

Several algorithms for the encoding of x are available in [2]. The method used in this demonstration creates the parity check vector using sparse LU decomposition. The algorithm attempts to create lower-triangular and upper-triangular matrices that are as sparse as possible and that satisfy equation (1) [4,5]. The algorithm is described in Radford M. Neal's presentation Sparse Matrix Methods and Probabilistic Inference Algorithms [5], and consists of the following basic steps:

1) Partition H into an invertible M x M left part A, and an M x (N-M) right part B.

2) Partition the codeword x into M check bits c and N-M source bits s. The parity-check constraint (1) then gives

[A B] [c s]^T = 0
=> Ac + Bs = 0
=> c = A^(-1) Bs

3) Set z = Bs and row reduce Ac = z to find Uc = y, where U is the upper triangular matrix.

4) Record the reduction as the solution to Ly = z.

Combining the check bit vector c and the source bits s gives a vector [s|c] that can be transmitted to the receiver for decoding.
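The partition-and-solve steps above can be sketched in a few lines of Python. This is a toy sketch with a made-up 3 x 6 matrix, with plain dense Gaussian elimination over GF(2) standing in for the sparse LU decomposition the demonstration software actually uses; none of the names come from the actual code:

```python
import numpy as np

# Made-up H = [A | B] with A an invertible M x M left part. Over GF(2) the
# constraint A c + B s = 0 gives c = A^(-1) B s (note -1 = +1 mod 2).
H = np.array([[1, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 1, 1],
              [0, 0, 1, 1, 1, 0]])
M, N = H.shape
A, B = H[:, :M], H[:, M:]

def gf2_solve(A, z):
    """Solve A c = z over GF(2) by Gaussian elimination (a dense stand-in
    for the sparse LU method described in the text)."""
    A = A.copy() % 2
    z = z.copy() % 2
    n = len(z)
    for col in range(n):
        # Find a pivot row with a 1 in this column and swap it into place.
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]
        z[[col, pivot]] = z[[pivot, col]]
        # XOR the pivot row into every other row with a 1 in this column.
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                z[r] ^= z[col]
    return z

s = np.array([1, 0, 1])          # source bits
c = gf2_solve(A, B @ s % 2)      # check bits from c = A^(-1) B s
x = np.concatenate([c, s])       # word ordered to match H = [A | B]
print("check bits:", c)
assert not np.any(H @ x % 2)     # H x^T = 0: every parity check passes
```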

III. DECODING

Decoding for this demonstration is achieved using four different sum-product algorithms as implemented in MATLAB by [6]. The basic theory behind these instances of sum-product algorithms was first developed by Robert Gallager in the 1960's, and is known as belief propagation or probability propagation. Belief propagation can be implemented in several ways, all of which attempt to calculate the probability of a message bit being 0 or 1, using probability and likelihood ratios [4]. The basic algorithm consists of the following steps:

1) Initialize prior likelihood and probability vectors.

2) Step through the rows of H:

a. Calculate the probability that check bit ci is one, divided by the probability that it is zero.

3) Step through the columns of H:

a. Calculate the probability that message bit xi is one, divided by the probability that it is zero.

b. Calculate the probability rate for each message bit that it is 1 or 0 by multiplying all the check bit probability rates associated with that message bit and the message bit probability rate itself.

c. If the rate is greater than 0.5, return a 1 for that bit, else 0.

The above algorithm can be modified using different mathematical methods for calculating probability rates. Further, the algorithm can be iterated to increase accuracy, and has been shown to reach bit-error rates (BERs) close to the Shannon limit.

Figure 3 Main screen of MATLAB demonstration software

IV. COMPARISONS

The demonstration software allows the user to adjust parameters for the encoding, transmission, and decoding processes of a simple LDPC system. The user can choose input data in image form, generate a random parity check matrix based on the size of the image data, choose an encoding strategy, select decoding algorithms, set the input noise and choose the type of noise, and finally produce a BER plot and step-by-step process plots. Figure 3 shows the main screen of the MATLAB demonstration software, and figure 4 shows a processing job.

Figure 4 Process step plots, including BER graph

V. CONCLUSIONS

This project encompasses a basic demonstration of LDPC codes. It shows that LDPC codes can be implemented to exhibit an arbitrary level of accuracy by increasing certain parameters in the encoding and decoding steps. Finally, while the codes may be processor-intensive (as seen in the time it takes to process a rather small amount of example data), with today's digital signal processing technology it should be entirely possible to implement LDPC coding.
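To make the decoding side concrete, here is a minimal Python sketch of the bit-flip decoder, the simplest of the decoding algorithms compared in this demonstration; the sum-product variants replace the vote count below with probability and likelihood-ratio updates. The matrix and codeword are made up for illustration and are not taken from the demonstration software:

```python
import numpy as np

def bit_flip_decode(H, received, max_iter=10):
    """Iteratively flip the bit that participates in the most failed checks."""
    x = received.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2        # which parity checks currently fail
        if not syndrome.any():
            break                   # H x^T = 0: x is a valid codeword
        votes = syndrome @ H        # per bit, how many failed checks touch it
        x[np.argmax(votes)] ^= 1    # flip the worst offender
    return x

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
sent = np.array([1, 1, 0, 0, 1, 1])   # valid: H @ sent % 2 is all zero
received = sent.copy()
received[0] ^= 1                      # the channel flips one bit
decoded = bit_flip_decode(H, received)
print(decoded)                        # recovers the sent word
```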
VI. ADDITIONAL DATA

As a final comparison, a BER plot is given in figure 5 that plots the BER curves for the non-LDPC encoded case, and for the bit-flip, log domain, simplified log domain, and probability domain sum-product decoding algorithms.

Data size for figure 5 is 256x256.

The number of iterations for encoding and decoding is set at 5.

Figure 5 BER plot
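As a rough companion to figure 5, the uncoded baseline curve can be estimated by Monte Carlo simulation of a binary symmetric channel: with no code, the measured BER simply tracks the flip probability. The flip probabilities, bit count, and seed below are illustrative assumptions, not the noise settings used in the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 100_000
bers = {}

for p in [0.10, 0.05, 0.01]:
    data = rng.integers(0, 2, n_bits)
    flips = (rng.random(n_bits) < p).astype(int)   # each bit flips w.p. p
    received = data ^ flips                        # binary symmetric channel
    bers[p] = np.mean(received != data)            # uncoded BER is close to p
    print(f"p={p}: measured BER ~ {bers[p]:.4f}")
```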

VII. REFERENCES
[1] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423, 1948.

[2] Amin Shokrollahi, "LDPC Codes: An Introduction," Digital Fountain Inc., Fremont, 2003.

[3] Tim Davis, "Sparse Matrix," from MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/SparseMatrix.html

[4] Radford M. Neal, http://www.cs.toronto.edu/~radford/ftp/LDPC-2006-02-08/

[5] Radford M. Neal, "Sparse Matrix Methods and Probabilistic Inference Algorithms," www.cs.toronto.edu/~radford/ftp/ima-part1.ps

[6] Bagawan S. Nugroho, http://bsnugroho.googlepages.com/ldpc

