Book contents
- Frontmatter
- Contents
- Editor's statement
- Section editor's foreword
- Preface to the first edition
- Preface to the second edition
- Introduction
- Part one Information theory
- 1 Entropy and mutual information
- 2 Discrete memoryless channels and their capacity–cost functions
- 3 Discrete memoryless sources and their rate-distortion functions
- 4 The Gaussian channel and source
- 5 The source–channel coding theorem
- 6 Survey of advanced topics for part one
- Part two Coding theory
- Appendices
- References
- Index of Theorems
- Index
6 - Survey of advanced topics for part one
from Part one - Information theory
Published online by Cambridge University Press: 10 November 2009
Introduction
In this chapter we briefly summarize some of the important results in information theory which we have not been able to treat in detail. We shall give no proofs, but instead refer the interested reader elsewhere, usually to a textbook, sometimes to an original paper, for details.
We choose to restrict our attention solely to generalizations and extensions of the twin pearls of information theory, Shannon's channel coding theorem (Theorem 2.4 and its corollary) and his source coding theorem (Theorem 3.4). We treat each in a separate section.
The channel coding theorem
We restate the theorem for reference (see Corollary to Theorem 2.4).
Associated with each discrete memoryless channel, there is a nonnegative number C (called channel capacity) with the following property. For any ε > 0 and R < C, for large enough n, there exists a code of length n and rate ≥ R (i.e., with at least 2^{Rn} distinct codewords), and an appropriate decoding algorithm, such that, when the code is used on the given channel, the probability of decoder error is < ε.
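To make the quantities in the theorem concrete, here is a minimal Python sketch (an illustrative addition, not part of the original text). It computes the capacity C = 1 - H(p) of the standard binary symmetric channel with crossover probability p, and the codeword count 2^{Rn} the theorem guarantees for a rate R < C; the particular values of p, R, and n are arbitrary assumptions chosen for illustration.

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p log2(p) - (1 - p) log2(1 - p), the binary entropy function."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of the binary symmetric channel: C = 1 - H(p) bits per channel use."""
    return 1.0 - binary_entropy(p)

# Illustrative parameter values (assumptions, not from the text).
p, R, n = 0.1, 0.4, 100
C = bsc_capacity(p)
assert R < C  # the theorem applies only to rates below capacity
codewords = math.ceil(2 ** (R * n))  # at least 2^{Rn} distinct codewords
print(f"C = {C:.4f} bits/use; a rate-{R} code of length {n} has >= {codewords} codewords")
```

Note how the guaranteed number of codewords grows exponentially in n, which is one reason the theorem can promise small error probability only "for large enough n."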
We shall now conduct a guided tour through the theorem, pointing out as we go places where the hypotheses can be weakened or the conclusions strengthened. The points of interest will be the phrases "discrete memoryless channel," "a nonnegative number C," "for large enough n," and "there exists a code … and … decoding algorithm." We shall also briefly discuss various converses to the coding theorem.
The Theory of Information and Coding, pp. 123–136. Cambridge University Press, 2002.