Alain Glavieux (ed.), ISBN 9781905209248, 190520924X
Table of contents :
Table of Contents……Page 6
Homage to Alain Glavieux……Page 16
1.1. Introduction: the Shannon paradigm……Page 20
1.2.1. Source coding……Page 24
1.2.2. Channel coding……Page 25
1.2.3. Cryptography……Page 26
1.2.4. Standardization of the Shannon diagram blocks……Page 27
1.3.1. Principle……Page 28
1.3.2. Measurement of self-information……Page 29
1.3.3. Entropy of a source……Page 30
1.3.4. Mutual information measure……Page 31
1.3.5. Channel capacity……Page 33
1.4.1. Introduction……Page 34
1.4.2. Decodability, Kraft-McMillan inequality……Page 35
1.4.3. Demonstration of the fundamental theorem……Page 36
1.4.4. Outline of optimal algorithms of source coding……Page 37
1.5.1. Introduction and statement of the fundamental theorem……Page 38
1.5.3. Need for redundancy……Page 39
1.5.4. Example of the binary symmetric channel……Page 40
1.5.5. A geometrical interpretation……Page 44
1.5.6. Fundamental theorem: Gallager’s proof……Page 45
1.6.2. A reference model in physical reality: the channel with Gaussian additive noise……Page 51
1.6.3. Communication via a channel with additive white Gaussian noise……Page 54
1.6.4. Channel with fading……Page 56
1.7. Information theory and channel coding……Page 57
1.8. Bibliography……Page 59
2.1.1. The fundamental question of message redundancy……Page 60
2.1.2. Unstructured codes……Page 61
2.2.2. Properties of linear codes……Page 63
2.2.3. Dual code……Page 65
2.2.4. Some linear codes……Page 69
2.2.5. Decoding of linear codes……Page 70
2.3.2. Polynomial modulo calculations: quotient ring……Page 72
2.3.4. Order and inverse of an element of F2[X]/(p(X))……Page 73
2.3.5. Minimum polynomials……Page 78
2.3.6. The field of nth roots of unity……Page 79
2.3.7. Projective geometry in a finite field……Page 80
2.4.1. Introduction……Page 81
2.4.2. Base, coding, dual code and code annihilator……Page 82
2.4.3. Certain cyclic codes……Page 87
2.4.4. Existence and construction of cyclic codes……Page 93
2.5.1. Basic gates for error correcting codes……Page 101
2.5.3. Circuits for error correcting codes……Page 102
2.5.4. Polynomial representation and representation as powers of a primitive element of a field……Page 106
2.6.1. Meggitt decoding (trapping of bursts)……Page 107
2.6.2. Decoding by the DFT……Page 108
2.6.3. FG-decoding……Page 113
2.6.4. Berlekamp-Massey decoding……Page 118
2.6.5. Majority decoding……Page 124
2.6.6. Hard decoding, soft decoding and Chase decoding……Page 129
2.7.1. Introduction……Page 130
2.7.5. Coding……Page 131
2.8.1. Unstructured codes……Page 132
2.8.2. Linear codes……Page 133
2.8.3. Finite fields……Page 136
2.8.4. Cyclic codes……Page 138
2.8.5. Exercises on circuits……Page 142
3.1. Introduction……Page 148
3.2. State transition diagram, trellis, tree……Page 154
3.3. Transfer function and distance spectrum……Page 156
3.4. Punctured convolutional codes……Page 159
3.6. The decoding of convolutional codes……Page 161
3.6.1. Viterbi algorithm……Page 162
3.6.2. MAP criterion or BCJR algorithm……Page 175
3.6.3. SubMAP algorithm……Page 188
3.7. Performance of convolutional codes……Page 191
3.7.1. Channel with binary input and continuous output……Page 192
3.7.2. Channel with binary input and output……Page 199
3.8. Distance spectrum of convolutional codes……Page 201
3.9. Recursive convolutional codes……Page 203
4.1. Hamming distance and Euclidean distance……Page 216
4.2. Trellis code……Page 219
4.4. Some examples of TCM……Page 220
4.5. Choice of a TCM diagram……Page 224
4.6. TCM representations……Page 226
4.7. TCM transparent to rotations……Page 228
4.7.1. Partitions transparent to rotations……Page 230
4.7.2. Transparent trellis with rotations……Page 231
4.7.3. Transparent encoder……Page 232
4.8.1. Upper limit of the probability of an error event……Page 234
4.8.2. Examples……Page 245
4.8.3. Calculation of δ_free……Page 247
4.9. Power spectral density……Page 251
4.10. Multi-level coding……Page 253
4.10.1. Block coded modulation……Page 254
4.10.2. Decoding of multilevel codes by stages……Page 256
4.11. Probability of error for the BCM……Page 257
4.12.1. Modeling of channels with fading……Page 260
4.12.2. Rayleigh fading channel: Euclidean distance and Hamming distance……Page 266
4.13. Bit interleaved coded modulation (BICM)……Page 270
4.14. Bibliography……Page 272
5.1. History of turbocodes……Page 274
5.1.2. Negative feedback in the decoder……Page 275
5.1.4. Extrinsic information……Page 277
5.1.5. Parallel concatenation……Page 278
5.2. A simple and convincing illustration of the turbo effect……Page 279
5.3.1. Coding……Page 284
5.3.2. The termination of constituent codes……Page 291
5.3.3. Decoding……Page 294
5.3.4. SISO decoding and extrinsic information……Page 299
5.4. The permutation function……Page 306
5.4.1. The regular permutation……Page 307
5.4.2. Statistical approach……Page 309
5.4.3. Real permutations……Page 310
5.5. m-binary turbocodes……Page 316
5.5.1. m-binary RSC encoders……Page 317
5.5.2. m-binary turbocodes……Page 319
5.5.3. Double-binary turbocodes with 8 states……Page 321
5.5.4. Double-binary turbocodes with 16 states……Page 322
5.6. Bibliography……Page 323
6.1. Introduction……Page 326
6.2. Concatenation of block codes……Page 327
6.2.1. Parallel concatenation of block codes……Page 328
6.2.2. Serial concatenation of block codes……Page 332
6.2.3. Properties of product codes and theoretical performance……Page 337
6.3. Soft decoding of block codes……Page 342
6.3.1. Soft decoding of block codes……Page 343
6.3.2. Soft decoding of block codes (Chase algorithm)……Page 345
6.3.3. Decoding of block codes by the Viterbi algorithm……Page 353
6.3.4. Decoding of block codes by the Hartmann and Rudolph algorithm……Page 357
6.4. Iterative decoding of product codes……Page 359
6.4.1. SISO decoding of a block code……Page 360
6.4.2. Implementation of the weighting algorithm……Page 364
6.4.3. Iterative decoding of product codes……Page 366
6.4.4. Comparison of the performances of BTC……Page 368
6.6. Bibliography……Page 386
7.2.1. Influence of integration constraints……Page 392
7.2.2. General architecture and organization of the circuit……Page 395
7.2.3. Storage of data and results……Page 399
7.2.4. Elementary decoder……Page 403
7.2.5. High-throughput structure……Page 411
7.3. Flexibility of turbo block codes……Page 416
7.4.1. Construction of the code……Page 423
7.4.2. Bit error rate (BER) as a function of the signal-to-noise ratio in a Gaussian channel……Page 425
7.4.3. Variation of the size of the blocks……Page 427
7.5. Multidimensional turbocodes……Page 428
7.6. Bibliography……Page 431
List of Authors……Page 434
H……Page 436
V……Page 437