Book contents
- Frontmatter
- Contents
- Preface
- 1 Basics of probability theory
- 2 Markov chains
- 3 Computer simulation of Markov chains
- 4 Irreducible and aperiodic Markov chains
- 5 Stationary distributions
- 6 Reversible Markov chains
- 7 Markov chain Monte Carlo
- 8 Fast convergence of MCMC algorithms
- 9 Approximate counting
- 10 The Propp–Wilson algorithm
- 11 Sandwiching
- 12 Propp–Wilson with read-once randomness
- 13 Simulated annealing
- 14 Further reading
- References
- Index
4 - Irreducible and aperiodic Markov chains
Published online by Cambridge University Press: 29 March 2010
Summary
For several of the most interesting results in Markov theory, we need to put certain assumptions on the Markov chains we are considering. It is an important task, in Markov theory just as in all other branches of mathematics, to find conditions that on the one hand are strong enough to have useful consequences, but on the other hand are weak enough to hold (and be easy to check) for many interesting examples. In this chapter, we will discuss two such conditions on Markov chains: irreducibility and aperiodicity. These conditions are of central importance in Markov theory, and in particular they play a key role in the study of stationary distributions, which is the topic of Chapter 5. We shall, for simplicity, discuss these notions in the setting of homogeneous Markov chains, although they do have natural extensions to the more general setting of inhomogeneous Markov chains.
We begin with irreducibility, which, loosely speaking, is the property that “all states of the Markov chain can be reached from all others”. To make this more precise, consider a Markov chain (X0, X1, …) with state space S = {s1, …, sk} and transition matrix P. We say that a state si communicates with another state sj, writing si → sj, if the chain has positive probability of ever reaching sj when we start from si.
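This reachability condition can be checked directly from the transition matrix: si → sj exactly when there is a path of positive-probability transitions from si to sj. The following sketch (function names such as `reachable` and `is_irreducible` are illustrative, not from the book) tests irreducibility of a finite chain by a breadth-first search over the positive entries of P.

```python
from collections import deque

def reachable(P, i):
    """Return the set of states reachable from state i through
    transitions of positive probability (including i itself)."""
    k = len(P)
    seen = {i}
    queue = deque([i])
    while queue:
        a = queue.popleft()
        for b in range(k):
            if P[a][b] > 0 and b not in seen:
                seen.add(b)
                queue.append(b)
    return seen

def is_irreducible(P):
    """True iff si -> sj holds for every pair of states (i, j)."""
    k = len(P)
    return all(len(reachable(P, i)) == k for i in range(k))

# A two-state chain that can move between both states is irreducible,
P1 = [[0.5, 0.5],
      [0.5, 0.5]]
# while a chain whose first state is absorbing (it never leaves) is not.
P2 = [[1.0, 0.0],
      [0.5, 0.5]]
```

Since only the pattern of zero versus positive entries in P matters, communication is really a statement about the directed graph underlying the chain, not about the exact transition probabilities.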
- Type: Chapter
- Information: Finite Markov Chains and Algorithmic Applications, pp. 23–27
- Publisher: Cambridge University Press
- Print publication year: 2002