Book contents
- Frontmatter
- Contents
- List of Figures
- Preface
- Acknowledgments
- Glossary of Notations
- 1 Introduction
- 2 Information Dispersal
- 3 Interconnection Networks
- 4 Introduction to Parallel Routing
- 5 Fault-Tolerant Routing Schemes and Analysis
- 6 Simulation of the PRAM
- 7 Asynchronism and Sensitivity
- 8 On-Line Maintenance
- 9 A Fault-Tolerant Parallel Computer
- Bibliography
- Index
Preface
Published online by Cambridge University Press: 03 October 2009
Summary
It has long been recognized that computer designs employing more than one processor are a promising approach (some say the only one) toward more powerful computing machines. Once one adopts this view, several issues immediately emerge: how to connect processors and memories, how to make processors communicate efficiently, how to tolerate faults, how to exploit the redundancy inherent in multiprocessors for on-line maintenance and repair, and so forth.
This book confronts these issues with two key insights. The first, due to Michael Rabin, is that there exist error-correcting codes whose redundancy is efficient in terms of the number of bits; such redundancy can be used to correct errors and erasures caused by component failures and resource limitations (such as limited buffer size). The second, due to Leslie Valiant, is that randomization is critical to achieving communication efficiency.
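To make Rabin's insight concrete, the following is a minimal illustrative sketch of information dispersal: data is encoded into n shares such that any m of them suffice to reconstruct it, with each share roughly 1/m the size of the data. The encoding multiplies each data block by rows of a Vandermonde matrix over a small prime field; reconstruction solves the resulting linear system. All names, parameters, and the choice of field here are our own for illustration, not taken from the book.

```python
# Sketch of Rabin-style information dispersal (IDA), assuming byte data.
# Encode into n shares; any m shares reconstruct the original.

P = 257  # a prime just large enough to hold one byte per field symbol

def disperse(data, n, m):
    """Encode `data` (ints < P) into n shares, any m of which reconstruct it.
    Pads with zeros to a multiple of m."""
    data = list(data) + [0] * (-len(data) % m)
    blocks = [data[k:k + m] for k in range(0, len(data), m)]
    shares = []
    for i in range(n):
        # Vandermonde row (1, x, x^2, ..., x^{m-1}) at x = i + 1; any m such
        # rows (distinct points) form an invertible matrix over GF(P).
        row = [pow(i + 1, j, P) for j in range(m)]
        shares.append((i, [sum(r * b for r, b in zip(row, blk)) % P
                           for blk in blocks]))
    return shares

def reconstruct(shares, m):
    """Recover the (zero-padded) data from any m shares by Gauss-Jordan
    elimination over GF(P)."""
    shares = shares[:m]
    A = [[pow(i + 1, j, P) for j in range(m)] for i, _ in shares]
    B = [list(payload) for _, payload in shares]  # right-hand sides
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        B[col], B[piv] = B[piv], B[col]
        inv = pow(A[col][col], P - 2, P)  # modular inverse via Fermat
        A[col] = [a * inv % P for a in A[col]]
        B[col] = [b * inv % P for b in B[col]]
        for r in range(m):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * ac) % P for a, ac in zip(A[r], A[col])]
                B[r] = [(b - f * bc) % P for b, bc in zip(B[r], B[col])]
    # Block k of the data is column k of B, read down the rows.
    return [B[j][k] for k in range(len(B[0])) for j in range(m)]
```

With n = 7 and m = 3, for instance, the scheme tolerates the loss of any four shares while each share carries only about a third of the data, which is the bit-efficiency Rabin's observation exploits.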
We intend this book to be an up-to-date account of the information dispersal approach as applied to parallel computation. We also discuss related work in the general area of parallel communication and computation, and provide an extensive bibliography, in the hope that both will be helpful to researchers and students who wish to explore any particular topic. Although the material in this book spans several disciplines (algebra, coding theory, number theory, arithmetic, algorithms, graph theory, combinatorics, and probability), the book is, the author believes, self-contained: adequate introductions are given and every proof is complete.
Information Dispersal and Parallel Computation, pp. xiii-xiv
Publisher: Cambridge University Press
Print publication year: 1993