Tanner-Graph Based Coded Signals

2.1. Error Control Codes.
Claude Shannon established the mathematical foundations of channel capacity in 1948. He showed that, rather than constructing a very good channel, one can achieve an arbitrarily small bit error rate by using error control codes, with the channel capacity serving as the fundamental limit. Since then, numerous coding schemes have been introduced to approach the Shannon limit [22].
In 1950, Hamming introduced the first block codes, a class of single-error-correcting codes with a strong algebraic foundation [23]. Although Hamming codes provide only a modest improvement, research on the subject continued until the BCH codes [24,25] and Reed-Solomon codes [26] were created a decade later. Convolutional codes [27] followed, and their decoding was later enhanced by the Viterbi algorithm [28].
It was not until the 1990s that turbo codes were able to provide reliable communications with power efficiencies close to the theoretical limit predicted by Shannon. A short time later, another class of codes with the same capacity-approaching property, known as low-density parity-check (LDPC) codes, was rediscovered. On additive white Gaussian noise (AWGN) channels, both turbo codes and LDPC codes closely approach capacity through the use of iterative decoding algorithms at the receiver [9].
During encoding, redundant bits derived from the input bits are added to the input sequence to form a codeword; these redundant bits help the receiver recover corrupted data. To maintain good performance, the minimum distance between codewords should be maximized. The minimum distance (d_min) is the smallest Hamming distance between any two codewords of the code. A binary code of size M has M = 2^k binary codewords, where k is the number of information bits. The k-bit sequences are converted one by one into codewords of block length n, with n > k; such a code is referred to as an (n, k) binary code. The amount of redundancy added to the input bits is measured by 1 − R, where R = k/n is called the code rate [9].
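As a concrete illustration, the short Python sketch below computes the rate, minimum distance, and resulting error-correcting capability t = ⌊(d_min − 1)/2⌋ (formalized in equation (2.2) below) for a small codebook. The (5, 2) code used here is an assumed toy example, not one taken from the essay's references.

```python
from itertools import combinations

# A toy (5, 2) codebook (an assumed example): k = 2 information bits,
# block length n = 5, so the code has M = 2^k = 4 codewords.
code = ["00000", "01011", "10101", "11110"]
k, n = 2, 5

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

# Minimum distance: smallest Hamming distance over all codeword pairs.
d_min = min(hamming_distance(x, y) for x, y in combinations(code, 2))

print("code rate R =", k / n)                 # 0.4
print("redundancy 1 - R =", 1 - k / n)        # 0.6
print("d_min =", d_min)                       # 3
print("corrects t =", (d_min - 1) // 2, "random errors")   # t = 1
```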
2.2. Linear Block Codes.
Consider a source that produces symbols from an alphabet A having q symbols, where A forms a field. We refer to a tuple (c_0, c_1, …, c_{n-1}) ∈ A^n with n elements as an n-vector or an n-tuple. An (n, k) block code C over an alphabet of q symbols is a set of q^k n-vectors called codewords or code vectors. Associated with the code is an encoder which maps a message, a k-tuple m ∈ A^k, to its associated codeword [113].
For a block code to be useful for error correction purposes, there should be a one-to-one correspondence between a message m and its codeword c. However, for a given code C, there may be more than one possible way of mapping messages to codewords. A block code can be represented as an exhaustive list, but for large k this would be prohibitively complex to store and decode. The complexity can be reduced by imposing some sort of mathematical structure on the code. The most common requirement is linearity [113].
A block code C over a field F_q of q symbols, with length n and q^k codewords, is a q-ary linear (n, k) code if and only if its q^k codewords form a k-dimensional vector subspace of the vector space F_q^n of all n-tuples. The number n is said to be the length of the code and the number k is the dimension of the code. For a linear code, the sum of any two codewords is also a codeword; more generally, any linear combination of codewords is a codeword [113].
The Hamming weight wt(c) of a codeword c is the number of nonzero components of the codeword. The minimum weight w_min of a code C is the smallest Hamming weight of any nonzero codeword:

w_min = min_{c ∈ C, c ≠ 0} wt(c)    (2.1)
For a linear code C, the minimum distance satisfies d_min = w_min; that is, the minimum distance of a linear block code is equal to the minimum weight of its nonzero codewords. By the Hamming-sphere argument [113], the random error correcting capability of a code with minimum distance d_min is:

t = ⌊(d_min − 1)/2⌋    (2.2)
2.2.1 Matrix Description of Linear Block Codes
Since a linear block code C is a k-dimensional vector space, there exist k linearly independent vectors, designated g_0, g_1, …, g_{k-1}, such that every codeword c in C can be represented as a linear combination of these vectors [113]:
c = m_0 g_0 + m_1 g_1 + ⋯ + m_{k-1} g_{k-1}    (2.3)
where m_i ∈ F_q. For binary codes, all arithmetic in equation (2.3) is done modulo 2; for codes over F_q, the arithmetic is done in F_q. Thinking of the g_i as row vectors and stacking them up, the k × n matrix G is formed:
G = [   g_0   ]
    [   g_1   ]
    [    ⋮    ]
    [ g_{k-1} ]    (2.4)
Let
m = [m_0  m_1  ⋯  m_{k-1}]    (2.5)
Then equation (2.3) can be written as:
c = mG    (2.6)
and every codeword c ∈ C has such a representation for some vector m. Since the rows of G generate (or span) the (n, k) linear code C, G is called a generator matrix for C. Equation (2.6) can be thought of as an encoding operation for the code C. Representing the code this way requires storing only k vectors of length n, rather than the q^k vectors that would be required to store all codewords of a nonlinear code [113].
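The sketch below illustrates the encoding operation c = mG of equation (2.6) over GF(2). The particular generator matrix, written in the systematic form G = [P I_k] of equation (2.7) below, is an assumed example rather than one taken from the text.

```python
import numpy as np

# An assumed (7, 4) systematic generator G = [P I_k]: P generates the
# parity symbols, I_k reproduces the message (see equations (2.7)-(2.8)).
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])   # k = 4, n = 7

m = np.array([1, 0, 1, 1])                 # a message k-tuple
c = (m @ G) % 2                            # c = mG, arithmetic modulo 2
print(c)   # [0 1 0 1 0 1 1]: parity part mP followed by the message m
```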
Let C be an (n, k) block code. An encoder is systematic if the message symbols m_0, m_1, …, m_{k-1} may be found explicitly and unchanged in the codeword; that is, there are coordinates i_0, i_1, …, i_{k-1} (most frequently sequential, i_0, i_0 + 1, …, i_0 + k − 1) such that c_{i_0} = m_0, c_{i_1} = m_1, …, c_{i_{k-1}} = m_{k-1}. For a linear code, the generator for a systematic encoder is called a systematic generator [113]. It should be emphasized that being systematic is a property of the encoder and not of the code. For a linear block code, the encoding operation represented by G is systematic if a k × k identity submatrix can be identified within G. Frequently, a systematic generator is written in the form:
G = [P  I_k]    (2.7)
where I_k is the k × k identity matrix and P is a k × (n − k) matrix which generates parity symbols. The encoding operation is [113]:
c = m[P  I_k] = [mP  m]    (2.8)
The codeword is divided into two parts: the part m consists of the message symbols, and the part mP consists of the parity check symbols. Performing elementary row operations (replacing a row with a linear combination of rows that includes it) does not change the row span, so the same code is produced. If two columns of a generator are interchanged, the corresponding positions of the code are changed, but the distance structure of the code is preserved [113].
Since a linear code C is a k-dimensional vector subspace of F_q^n, it has a dual space of dimension n − k [113].
The dual space of an (n, k) code C is the (n, n − k) dual code of C, denoted C^⊥. As a vector space, C^⊥ has a basis, which we denote by {h_0, h_1, …, h_{n-k-1}}. A matrix H is formed using these basis vectors as rows:
H = [    h_0     ]
    [    h_1     ]
    [     ⋮      ]
    [ h_{n-k-1}  ]    (2.9)
This matrix is known as the parity check matrix for the code C. The generator matrix and the parity check matrix for a code satisfy:
GH^T = 0    (2.10)
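A quick numerical check of equation (2.10): for a systematic generator G = [P I_k], one valid parity check matrix is H = [I_{n−k} P^T]. This construction is an assumption consistent with GH^T = 0 over GF(2); P is the example matrix from the encoding sketch above.

```python
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])      # G = [P I_k]
H = np.hstack([np.eye(3, dtype=int), P.T])    # H = [I_{n-k} P^T]

# GH^T = P + P = 0 modulo 2, so every codeword satisfies cH^T = 0.
print((G @ H.T) % 2)                          # 4x3 all-zero matrix
```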
Let a linear block code C have parity check matrix H. The minimum distance d_min of C is equal to the smallest positive number of columns of H that are linearly dependent; that is, every set of d_min − 1 or fewer columns is linearly independent, while some set of d_min columns is linearly dependent [113].
Let the columns of H be designated h_0, h_1, …, h_{n-1}. Then, since cH^T = 0 for any codeword c:
c_0 h_0 + c_1 h_1 + ⋯ + c_{n-1} h_{n-1} = 0    (2.11)
Let c be a codeword of smallest weight, w = wt(c) = d_min, with nonzero positions only at indices i_1, i_2, …, i_w. Then:
c_{i_1} h_{i_1} + c_{i_2} h_{i_2} + ⋯ + c_{i_w} h_{i_w} = 0    (2.12)
Clearly, the columns of H corresponding to the nonzero elements of c are linearly dependent. Conversely, if there were a linearly dependent set of u < d_min columns, there would exist a nonzero codeword of weight u < d_min, contradicting the definition of d_min [113].
The parity check matrix also provides the stopping criterion for iterative decoding: a tentative hard decision z is formed by setting z_i = 1 when the decision metric M_i < 0 and z_i = 0 when M_i > 0. If z H^T = 0, z is accepted as a valid codeword and the iterations stop; otherwise decoding continues for another iteration, until the maximum number of iterations is exceeded, in which case z cannot be accepted as a valid codeword.
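This stopping rule is easy to express in code. The sketch below is a hedged illustration: the name M stands for the per-bit decision metrics (for instance log-likelihood ratios) and is assumed here purely for illustration.

```python
import numpy as np

def tentative_decision(M, H):
    """Form the hard decision z (z_i = 1 when M_i < 0, else 0) and report
    whether z satisfies every parity check, i.e. whether z H^T = 0."""
    z = (np.asarray(M) < 0).astype(int)
    syndrome = (z @ H.T) % 2
    return z, not syndrome.any()   # True means: accept z, stop iterating
```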
2.6. Repeat Accumulate Codes.
Repeat–accumulate (RA) codes are a specific class of serially concatenated codes in which the outer code is a rate-1/q repetition code and the inner code is a convolutional code with generator 1/(1 + D). A 1/(1 + D) convolutional code simply outputs the modulo-2 sum of the current input bit and the previous output bit; i.e., it provides a running sum of all past inputs and so is often called an accumulator. These two component codes give repeat–accumulate codes their name [2]. The non-systematic encoder of an RA code is shown in figure (2.5).
The repetition code is an (n, K) code in which each message bit is repeated q times, so that n = qK. The accumulator can be viewed as a truncated rate-1 recursive convolutional encoder with transfer function 1/(1 + D), but it is preferable to think of it as a block code whose input block [x_1 x_2 ⋯ x_n] and output block [y_1 y_2 ⋯ y_n] are related by [1]:

y_i = x_1 + x_2 + x_3 + ⋯ + x_i    (2.74)
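The accumulator of equation (2.74) is essentially one line of code: each output bit is the XOR of all input bits so far. A minimal sketch:

```python
from itertools import accumulate
from operator import xor

def accumulator(x):
    """1/(1 + D) accumulator: y_i = x_1 (+) x_2 (+) ... (+) x_i (mod 2)."""
    return list(accumulate(x, xor))

print(accumulator([1, 0, 1, 1, 0]))   # [1, 1, 0, 1, 1]
```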
Figure (2.5) Non-systematic repeat accumulate encoder
The message length is K. Each message bit is repeated q times by the (n, K) repetition code, randomly interleaved, and passed through a rate-1 accumulator. The output bits of the accumulator are the parity bits of the code and are transmitted across the channel. Non-systematic encoding is used here, which means that the message bits do not form part of the codeword and hence are not transmitted; the overall rate of this code is 1/q. The RA code structure thus consists of two very simple codes, with the interleaver inducing randomness and thereby providing the potential for near-capacity performance. However, the major drawback of these codes is their low code rate: it has been shown that q ≥ 3 is required to obtain near-capacity performance, which implies a maximum code rate of 1/3. This is extremely low, especially for current wireless applications with strict constraints on bandwidth utilization [1].
To overcome the low code rate problem, the encoder is slightly modified by performing an additional parity check before the accumulator. Systematic encoding as shown in figure (2.6) is used in this encoder structure where the message bits are also transmitted across the channel [1].
Figure (2.6) Systematic repeat accumulate encoder
Repeat–accumulate codes that transmit both the message bits and parity bits are called systematic RA codes. Often, systematic RA codes include a third component, called a combiner, placed before the accumulator to reduce the code rate. A rate-a combiner simply sums, modulo-2, each set of a bits input to it. In this case the repetition code can be thought of as the outer code and the combiner and accumulator together as the inner code. Again an interleaver is placed between the inner and outer codes [2].
In this encoder structure, the message bits are repeated as usual, but a parity check is performed on a group of bits before they are passed through the accumulator. The grouping factor a means that a parity check is performed on groups of a interleaved bits. Owing to this grouping, only qK/a bits are passed through the accumulator instead of qK bits, thus increasing the code rate. Therefore, for a particular code rate, if higher values of a are used, higher values of q can be used as well, leading to larger performance gains [1].
2.6.1 Encoding RA Codes
The simple component codes of RA codes lead to a particularly straightforward encoding process. The message bits are copied q times, interleaved, added modulo 2 in sets of a bits at a time, and then passed through a memory-1 convolutional encoder. The repetition step produces the sequence v with components:

v_i = m_{f(i)},   f(i) = ⌈i/q⌉,   i = 1, 2, …, qK    (2.75)

where ⌈x⌉ denotes the smallest integer greater than or equal to x, so v is the qK-bit input to the interleaver obtained by repeating each message bit q times.
The interleaver pattern Π = [π_1, π_2, …, π_{qK}] defines a permutation of the bits in v [2]:

d = [d_1  d_2  ⋯  d_{qK}] = [v_{π_1}  v_{π_2}  ⋯  v_{π_{qK}}]    (2.76)
The bits at the output of the interleaver are summed, modulo 2, in sets of a bits by the combiner. The m = qK/a bits s = [s_1  s_2  ⋯  s_m] at the output of the combiner are given by:

s_i = d_{a(i-1)+1} ⊕ d_{a(i-1)+2} ⊕ ⋯ ⊕ d_{ai},   i = 1, 2, …, m,    (2.77)

where ⊕ denotes modulo-2 addition.
Finally, the m parity bits p = [p_1  p_2  ⋯  p_m] at the output of the accumulator are defined by [2]:

p_i = p_{i-1} ⊕ s_i,   i = 1, 2, …, m,    (2.78)

with p_0 = 0.
The accumulator can also be called a differential encoder.
For systematic RA codes, c = [m_1  m_2  ⋯  m_K  p_1  p_2  ⋯  p_m] is the final codeword, giving a code with length N = K(1 + q/a) and rate r = a/(a + q). For non-systematic RA codes, only the parity bits are sent to the receiver, giving a code with length N = Kq/a and rate r = a/q.
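Putting equations (2.75)-(2.78) together gives a very short systematic RA encoder. The sketch below is a minimal illustration; the function name and the use of a NumPy permutation array for the interleaver are assumptions, not notation from the text.

```python
import numpy as np

def ra_encode(msg, q, a, perm):
    """Systematic RA encoding per equations (2.75)-(2.78).
    msg: length-K binary message; perm: a permutation of range(K*q)."""
    v = np.repeat(msg, q)                   # (2.75): repeat each bit q times
    d = v[perm]                             # (2.76): interleave
    s = d.reshape(-1, a).sum(axis=1) % 2    # (2.77): combine groups of a bits
    p = np.cumsum(s) % 2                    # (2.78): accumulate, p_0 = 0
    return np.concatenate([msg, p])         # codeword [m | p], N = K(1 + q/a)

rng = np.random.default_rng(0)
K, q, a = 4, 3, 2
msg = rng.integers(0, 2, K)
codeword = ra_encode(msg, q, a, perm=rng.permutation(K * q))
print(codeword)                             # length N = 4 * (1 + 3/2) = 10
```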
2.7. Sum–product Decoding of RA Codes.
For sum-product decoding it is important that systematic RA codes, besides being a type of serially concatenated turbo code, are also a special type of low-density parity-check code. The parity-check equations of an RA code combine the combiner and accumulator equations [2]:
p_i ⊕ p_{i-1} ⊕ d_{a(i-1)+1} ⊕ d_{a(i-1)+2} ⊕ ⋯ ⊕ d_{ai} = 0    (2.79)
Recall that the d_i are just copies of the message bits (which message bit each copies is determined by the choice of interleaver) and p_i is just the ith parity bit, so it is straightforward to represent these parity-check equations by a parity check matrix H. The first K columns of H correspond to the message bits and the last m columns correspond to the parity bits. For example, if p_i = p_{i-1} ⊕ m_{c_1} ⊕ m_{c_2}, then the ith row of H is 1 in columns c_1, c_2, K + i − 1 and K + i, and 0 elsewhere [2].
Generally, an m × N RA-code parity-check matrix H has two parts:
H = [H_1  H_2]    (2.80)
where H_2 is the m × m matrix due to the accumulator, as illustrated by the example (2.81):
H = [ 1 0 1 0 | 1 0 0 0 0 0 ]
    [ 0 1 0 1 | 1 1 0 0 0 0 ]
    [ 1 1 0 0 | 0 1 1 0 0 0 ]
    [ 0 0 1 1 | 0 0 1 1 0 0 ]
    [ 1 0 1 0 | 0 0 0 1 1 0 ]
    [ 0 1 0 1 | 0 0 0 0 1 1 ]    (2.81)

(here K = 4, q = 3, a = 2 and m = 6; the vertical bar separates H_1 from H_2)
and H_1 is an m × K matrix with rows of weight a and columns of weight q, in which the choice of interleaver determines the positions of the entries. Two different interleaver patterns can describe the same RA code if the difference between the permutations only changes which copy of the same message bit is used, i.e. two 1 entries in H_1 are swapped with each other. Interleavers which produce repeated edges are not allowed [2].
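Given the interleaver, H = [H_1 H_2] can be built directly: row i of H_1 flags the message bits whose copies feed check i, and H_2 is the dual-diagonal accumulator part visible in (2.81). A sketch under the same assumptions as the encoder above:

```python
import numpy as np

def ra_parity_check(K, q, a, perm):
    """Build the m x N parity-check matrix H = [H1 H2] of a systematic
    RA code from an interleaver perm (a permutation of range(K*q))."""
    m = K * q // a
    H1 = np.zeros((m, K), dtype=int)
    for i in range(m):
        for j in range(a):
            src = perm[a * i + j] // q   # message bit behind this copy
            H1[i, src] ^= 1              # repeated edges would cancel mod 2
    H2 = np.eye(m, dtype=int)
    H2[1:, :-1] ^= np.eye(m - 1, dtype=int)  # diagonal and subdiagonal 1s
    return np.hstack([H1, H2])
```

With the encoder sketch above, one can check that (H @ codeword) % 2 is the all-zero vector, which is a useful sanity test when choosing interleavers.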
As mentioned earlier, an LDPC code can be put into an approximately lower triangular form to facilitate almost linear-time encoding. An RA code can be viewed as an LDPC code with a lower triangular form already built into the parity-check matrix during the code design [2].
As for LDPC codes, the Tanner graph of an RA code is defined by H. Figure (2.7) shows the Tanner graph representation of an RA code. Unlike for a general LDPC code, the message bits in the codeword of an RA code are easily distinguished from the parity bits. We distinguish between systematic-bit nodes corresponding to the K message bits in the codeword, shown at the top of the graph, and parity-bit nodes corresponding to the m parity bits in the codeword, which are shown at the bottom of the graph. The systematic-bit nodes have degree q while the parity-bit nodes have degree 2, except for the final parity-bit node, which has degree 1. The check nodes have degree a + 2 except the first, which has degree a + 1.
Figure (2.7) Tanner graph of RA code
The parity-check matrix of an RA code is called (q, a)-regular if the rows of H_1 all have weight a and the columns of H_1 all have weight q. Note that a regular RA parity-check matrix still has columns of weight 2, and one column of weight 1, in H_2, and so is not regular in the sense of regular LDPC codes [1].
If the parity-bit nodes and systematic-bit nodes are treated as indistinguishable bit nodes then the sum–product decoding of an RA code is exactly the same as the sum–product decoding of an LDPC code with the same parity-check matrix [2].
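Accordingly, a generic sum-product decoder applied to the RA code's H, with all bit nodes treated identically, looks like the following sketch. It is a textbook tanh-rule formulation rather than an optimized implementation, and it assumes positive LLRs favour bit value 0.

```python
import numpy as np

def sum_product_decode(H, llr, max_iter=50):
    """Sum-product decoding for any binary parity-check matrix H,
    given channel log-likelihood ratios llr (one per code bit)."""
    llr = np.asarray(llr, dtype=float)
    m, n = H.shape
    rows, cols = np.nonzero(H)            # edges of the Tanner graph
    msg_vc = llr[cols].copy()             # bit-to-check messages
    z = (llr < 0).astype(int)
    for _ in range(max_iter):
        t = np.tanh(msg_vc / 2)
        t[t == 0] = 1e-12                 # guard the leave-one-out division
        msg_cv = np.empty_like(msg_vc)
        for i in range(m):                # check-node (tanh-rule) update
            e = np.flatnonzero(rows == i)
            prod = np.prod(t[e])
            msg_cv[e] = 2 * np.arctanh(np.clip(prod / t[e],
                                               -1 + 1e-12, 1 - 1e-12))
        total = llr.copy()
        np.add.at(total, cols, msg_cv)    # bit-node update: sum all inputs
        z = (total < 0).astype(int)       # tentative hard decision
        if not ((H @ z) % 2).any():       # syndrome z H^T = 0: stop
            break
        msg_vc = total[cols] - msg_cv     # extrinsic messages for next round
    return z
```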
Alternatively, the sum–product algorithm can be scheduled slightly differently by updating systematic-bit node to check node messages, then check node to parity-bit node messages, then parity-bit node to check node messages and finally check node to systematic-bit node messages to complete one iteration of the decoder [2].
In a sum–product decoder for RA codes the parity-bit nodes are decoded in exactly the same way as the systematic-bit nodes. The turbo decoder for RA codes, however, uses BCJR decoding on a trellis for the parity bits. Because of this, a single iteration of turbo decoding is more complex than a single iteration of sum–product decoding for the same code; however, turbo decoding usually requires fewer iterations overall [2].
