Markov Chain Monte Carlo on finite state spaces
Published online by Cambridge University Press: 18 June 2020
Extract
We elaborate the idea behind Markov chain Monte Carlo (MCMC) methods in a mathematically coherent, yet simple and understandable way. To this end, we prove a pivotal convergence theorem for finite Markov chains and a minimal version of the Perron-Frobenius theorem. Subsequently, we briefly discuss two fundamental MCMC methods, the Gibbs sampler and the Metropolis-Hastings sampler. Only very basic knowledge of matrices, convergence of real sequences and probability theory is required.
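The Metropolis-Hastings sampler mentioned above can be sketched in a few lines for a finite state space. The following is a minimal illustration, not the article's own formulation: it assumes a target distribution `pi` on states {0, …, n-1} and uses a symmetric uniform proposal, so the acceptance probability reduces to min(1, pi[y]/pi[x]); the function name and parameters are chosen here for illustration.

```python
import random

def metropolis_hastings(pi, n_steps, seed=0):
    """Sample from a target distribution pi on {0, ..., len(pi)-1}
    with a symmetric uniform proposal (Metropolis variant).
    Returns the empirical visit frequencies of the chain."""
    n_states = len(pi)
    rng = random.Random(seed)
    x = rng.randrange(n_states)          # arbitrary starting state
    counts = [0] * n_states
    for _ in range(n_steps):
        y = rng.randrange(n_states)      # propose a uniformly random state
        # Accept with probability min(1, pi[y] / pi[x]); otherwise stay at x.
        if pi[x] == 0 or rng.random() < min(1.0, pi[y] / pi[x]):
            x = y
        counts[x] += 1
    return [c / n_steps for c in counts]

# Example target: an unnormalised weight vector on 3 states.
weights = [1.0, 2.0, 3.0]
total = sum(weights)
pi = [w / total for w in weights]
freq = metropolis_hastings(pi, n_steps=100_000)
```

By the convergence theorem for finite Markov chains, the empirical frequencies `freq` approach `pi` as the number of steps grows; in this sketch they land close to (1/6, 2/6, 3/6).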
Type: Articles
Copyright: © Mathematical Association 2020