Introduction to information theory and data compression

Authors: D.C. Hankerson, Greg A. Harris, Peter D. Johnson Jr.

Edition: 2nd ed

Series: Discrete Mathematics and Its Applications

ISBN: 1584883138, 9781584883135, 9780203998090

Size: 3 MB (2913027 bytes)

Pages: 362

An effective blend of carefully explained theory and practical applications, this text imparts the fundamentals of both information theory and data compression. Although the two topics are related, this unique text allows either topic to be presented independently, and it was specifically designed so that the data compression section requires no prior knowledge of information theory.

The treatment of information theory, while theoretical and abstract, is quite elementary, making this text less daunting than many others. After presenting the fundamental definitions and results of the theory, the authors then apply the theory to memoryless, discrete channels with zeroth-order, one-state sources. The chapters on data compression acquaint students with a myriad of lossless compression methods and then introduce two lossy compression methods. Students emerge from this study competent in a wide range of techniques. The authors' presentation is highly practical but includes some important proofs, either in the text or in the exercises, so instructors can, if they choose, place more emphasis on the mathematics.

Introduction to Information Theory and Data Compression, Second Edition is ideally suited for an upper-level or graduate course for students in mathematics, engineering, and computer science.

Features:
- Expanded discussion of the historical and theoretical basis of information theory that builds a firm, intuitive grasp of the subject
- Reorganization of theoretical results, along with new exercises ranging from the routine to the more difficult, that reinforce students' ability to apply the definitions and results in specific situations
- Simplified treatment of the algorithm(s) of Gallager and Knuth
- Discussion of the information rate of a code and the trade-off between error correction and information rate
- Treatment of probabilistic finite state source automata, including basic results, examples, references, and exercises
- Octave and MATLAB image compression codes included in an appendix for use with the exercises and projects involving transform methods
- Supplementary materials, including software, available for download from the authors' Web site
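Huffman's algorithm recurs throughout the book (Sections 4.3, 5.3.3, 7.1, and 8.1-8.2). As a quick illustration of the classical static construction, here is a minimal Python sketch using a hypothetical frequency table; it is not code from the book, just a standard rendering of the technique:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bitstring) from a frequency table."""
    # Heap entries are (weight, tiebreak, tree): tree is either a symbol
    # or a (left, right) pair; the tiebreak keeps comparisons well defined.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # the two least-frequent trees...
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next_id, (t1, t2)))  # ...are merged
        next_id += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):        # internal node: branch on 0/1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                              # leaf: record the code word
            code[tree] = prefix or "0"     # lone-symbol edge case
    walk(heap[0][2], "")
    return code

freqs = Counter("abracadabra")             # hypothetical source text
code = huffman_code(freqs)
encoded = "".join(code[ch] for ch in "abracadabra")
print(code)                                # a prefix-condition code
print(len(encoded), "bits vs", 8 * 11, "bits uncompressed")
```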

Table of contents:
Introduction to Information Theory and Data Compression, Second Edition……Page 3
Preface……Page 5
Organization……Page 6
Acknowledgments……Page 7
Contents……Page 9
Bibliography……Page 11
1.1 Introduction……Page 12
The D’Alembert-Laplace controversy……Page 13
1.2 Events……Page 14
1.3 Conditional probability……Page 18
Examples……Page 19
1.4 Independence……Page 22
1.5 Bernoulli trials……Page 24
1.6 An elementary counting principle……Page 26
1.7 On drawing without replacement……Page 28
1.8 Random variables and expected, or average, value……Page 29
Examples……Page 30
1.9 The Law of Large Numbers……Page 33
Bibliography……Page 34
2.1 How is information quantified?……Page 35
2.1.1 Naming the units……Page 37
2.1.2 Information connecting two events……Page 39
2.1.3 The inevitability of Shannon’s quantification of information……Page 40
2.2 Systems of events and mutual information……Page 43
2.3 Entropy……Page 50
2.4 Information and entropy……Page 53
Bibliography……Page 55
3.1 Discrete memoryless channels……Page 56
3.2 Transition probabilities and binary symmetric channels……Page 59
3.3 Input frequencies……Page 61
3.4 Channel capacity……Page 65
3.5 Proof of Theorem 3.4.3, on the capacity equations……Page 76
Bibliography……Page 78
4.1 Encoding and decoding……Page 79
4.2 Prefix-condition codes and the Kraft-McMillan inequality……Page 83
4.3 Average code word length and Huffman’s algorithm……Page 87
4.3.1 The validity of Huffman’s algorithm……Page 94
4.4 Optimizing the input frequencies……Page 98
Optimizing the input frequencies, after minimizing………Page 99
4.5 Error correction, maximum likelihood decoding, nearest code word decoding, and reliability……Page 103
4.6 Shannon’s Noisy Channel Theorem……Page 114
4.7 Error correction with binary symmetric channels and equal source frequencies……Page 119
4.8 The information rate of a code……Page 123
Bibliography……Page 126
Chapter 5: Lossless Data Compression by Replacement Schemes……Page 127
5.1 Replacement via encoding scheme……Page 128
5.2 Review of the prefix condition……Page 131
5.3 Choosing an encoding scheme……Page 134
5.3.1 Shannon’s method……Page 135
5.3.2 Fano’s method……Page 138
5.3.3 Huffman’s algorithm……Page 139
5.4 The Noiseless Coding Theorem and Shannon’s bound……Page 142
Bibliography……Page 147
Chapter 6: Arithmetic Coding……Page 148
6.1 Pure zeroth-order arithmetic coding: dfwld……Page 149
Subdividing to find A(w)……Page 150
Finding the dfwld in a subinterval……Page 151
6.1.1 Rescaling while encoding……Page 153
Rescaling one bit at a time……Page 155
The underflow expansion……Page 156
6.1.2 Decoding……Page 157
6.2 What’s good about dfwld coding: the compression ratio……Page 162
Comments……Page 164
6.3 What’s bad about dfwld coding and some ways to fix it……Page 167
6.3.1 Supplying the source word length……Page 168
6.3.2 Computation……Page 169
6.3.3 Must decoding wait until encoding is completed?……Page 171
6.4 Implementing arithmetic coding……Page 174
Implementation and performance issues……Page 179
Performance……Page 182
Bibliography……Page 186
Chapter 7: Higher-order Modeling……Page 187
7.1 Higher-order Huffman encoding……Page 188
7.2 The Shannon bound for higher-order encoding……Page 192
7.3 Higher-order arithmetic coding……Page 197
7.4 Statistical models, statistics, and the possibly unknowable truth……Page 199
7.5 Probabilistic finite state source automata……Page 203
Simulating a source with a pfssa……Page 207
Bibliography……Page 210
Chapter 8: Adaptive Methods……Page 211
8.1 Adaptive Huffman encoding……Page 212
8.1.1 Compression and readjustment……Page 215
8.1.2 Higher-order adaptive Huffman encoding……Page 216
8.2 Maintaining the tree in adaptive Huffman encoding: the method of Knuth and Gallager……Page 218
8.2.1 Gallager’s method……Page 221
8.2.2 Knuth’s algorithm……Page 222
8.3 Adaptive arithmetic coding……Page 225
8.4.1 Interval encoding……Page 227
8.4.2 Recency rank encoding……Page 230
Bibliography……Page 233
Chapter 9: Dictionary Methods……Page 234
9.1 LZ77 (sliding window) schemes……Page 235
9.1.1 An LZ77 implementation……Page 237
LZRW1……Page 238
9.1.2 Case study: GNU zip……Page 240
9.2 The LZ78 approach……Page 242
9.2.1 The LZW variant……Page 245
9.2.2 Case study: Unix compress……Page 247
Bibliography……Page 249
Chapter 10: Transform Methods and Image Compression……Page 250
10.1 Transforms……Page 252
10.2 Periodic signals and the Fourier transform……Page 254
10.2.1 The Fourier transform and compression: an example……Page 261
10.3 The cosine and sine transforms……Page 272
10.3.1 A general orthogonal transform……Page 275
10.3.2 Summary……Page 276
10.4 Two-dimensional transforms……Page 278
10.4.1 The 2D Fourier, cosine, and sine transforms……Page 280
10.4.2 Matrix expressions for 2D transforms……Page 284
10.5 An application: JPEG image compression……Page 286
10.6 A brief introduction to wavelets……Page 296
10.6.1 2D Haar wavelets……Page 301
Image compression with wavelets……Page 302
10.7 Notes……Page 305
Bibliography……Page 307
Installation……Page 308
Approximation by partial sums……Page 309
Adjusting the quantizer……Page 312
A JPEG enhancement……Page 314
A.2 Reference……Page 319
Bibliography……Page 324
Appendix B: Source Listing for LZRW1-A……Page 325
Definitions and documentation……Page 326
The compress routine……Page 329
The decompress routine……Page 330
The compress header……Page 331
The port header……Page 334
Bibliography……Page 335
C.1 Resources……Page 336
C.2 Data compression and patents……Page 338
The story……Page 341
The counting argument……Page 342
Conclusion……Page 343
Bibliography……Page 344
1.3 Conditional probability……Page 345
1.8 Random variables and expected, or average, value……Page 346
2.3 Entropy……Page 347
3.2 Transition probabilities and binary symmetric channels……Page 348
3.4 Channel capacity……Page 349
4.4 Optimizing the input frequencies……Page 350
5.4 The Noiseless Coding Theorem and Shannon’s bound……Page 351
6.4 Implementing arithmetic coding……Page 352
7.1 Higher-order Huffman encoding……Page 353
9.1 LZ77 (sliding window) schemes……Page 355
10.2 Periodic signals and the Fourier transform……Page 356
10.6 A brief introduction to wavelets……Page 357
Appendix D: Notes on and Solutions to Some Exercises……Page 358
Bibliography……Page 359
