
Conductance bounds on the L² convergence rate of Metropolis algorithms on unbounded state spaces

Published online by Cambridge University Press:  01 July 2016

Søren F. Jarner*
Affiliation:
The Danish Labour Market Supplementary Pension
Wai Kong Yuen*
Affiliation:
Brock University
* Postal address: Quantitative Research, Fixed Income, The Danish Labour Market Supplementary Pension, Kongens Vaenge 8, DK-3400 Hilleroed, Denmark. Email address: sj@atp.dk
** Postal address: Department of Mathematics, Brock University, St Catharines, Ontario, Canada L2S 3A1. Email address: jyuen@brocku.ca

Abstract

In this paper we derive bounds on the conductance and hence on the spectral gap of a Metropolis algorithm with a monotone, log-concave target density on an interval of ℝ. We show that the minimal conductance set has measure ½ and we use this characterization to bound the conductance in terms of the conductance of the algorithm restricted to a smaller domain. Whereas previous work on conductance has resulted in good bounds for Markov chains on bounded domains, this is the first conductance bound applicable to unbounded domains. We then show how this result can be combined with the state-decomposition theorem of Madras and Randall (2002) to bound the spectral gap of Metropolis algorithms with target distributions with monotone, log-concave tails on ℝ.
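The conductance of a set A under a reversible Markov kernel is the probability flow out of A per unit of stationary mass of A, and Cheeger-type inequalities relate it to the spectral gap. As a purely illustrative sketch (not the paper's construction), the code below runs a random-walk Metropolis chain targeting the monotone, log-concave density π(x) = e⁻ˣ on [0, ∞) and empirically estimates the conductance of the upper-median half-line A = [log 2, ∞), which has stationary mass ½; the target, proposal width, and function names are our own assumptions.

```python
import random
import math

def metropolis_exp(n_steps, step=1.0, x0=1.0, seed=0):
    """Random-walk Metropolis chain targeting pi(x) = exp(-x) on [0, inf),
    a monotone, log-concave density (illustrative choice only)."""
    rng = random.Random(seed)
    x = x0
    chain = [x]
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)          # symmetric proposal
        # Accept with probability min(1, pi(y)/pi(x)); pi vanishes off [0, inf)
        if y >= 0 and rng.random() < math.exp(-(y - x)):
            x = y
        chain.append(x)
    return chain

def conductance_estimate(chain, threshold):
    """Empirical conductance of A = [threshold, inf): observed exits from A
    divided by time spent in A (a crude Monte Carlo estimate)."""
    in_a = [x >= threshold for x in chain]
    exits = sum(1 for a, b in zip(in_a, in_a[1:]) if a and not b)
    mass = sum(in_a[:-1])
    return exits / mass if mass else float("nan")

chain = metropolis_exp(100_000)
median = math.log(2)                              # pi([log 2, inf)) = 1/2
print(conductance_estimate(chain, median))
```

The half-line through the median is the natural test set here: the paper's characterization says the minimal conductance set has measure ½, so for a monotone target the empirical bottleneck is expected at a set of exactly this form.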

Type: General Applied Probability
Copyright © Applied Probability Trust 2004


References

[1] Billingsley, P. (1999). Convergence of Probability Measures, 2nd edn. John Wiley, New York.
[2] Diaconis, P. (1988). Group Representations in Probability and Statistics (IMS Lecture Note Monogr. Ser. 11). Institute of Mathematical Statistics, Hayward, CA.
[3] Diaconis, P. and Saloff-Coste, L. (1993). Comparison theorems for reversible Markov chains. Ann. Appl. Prob. 3, 696–730.
[4] Diaconis, P. and Saloff-Coste, L. (1998). What do we know about the Metropolis algorithm? J. Comput. System Sci. 57, 20–36.
[5] Diaconis, P. and Stroock, D. (1991). Geometric bounds for eigenvalues of Markov chains. Ann. Appl. Prob. 1, 36–61.
[6] Fort, G. and Moulines, E. (2003). Polynomial ergodicity of Markov transition kernels. Stoch. Process. Appl. 103, 57–99.
[7] Frieze, A., Kannan, R. and Polson, N. G. (1994). Sampling from log-concave distributions. Ann. Appl. Prob. 4, 812–837.
[8] Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85, 398–409.
[9] Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97–109.
[10] Jarner, S. F. (2001). Polynomial and geometric convergence of Markov chains with applications to MCMC methods. Doctoral Thesis, Lancaster University.
[11] Jerrum, M. and Sinclair, A. (1989). Approximating the permanent. SIAM J. Comput. 18, 1149–1178.
[12] Lawler, G. F. and Sokal, A. D. (1988). Bounds on the L² spectrum for Markov chains and Markov processes: a generalization of Cheeger's inequality. Trans. Amer. Math. Soc. 309, 557–580.
[13] Lindvall, T. (1992). Lectures on the Coupling Method. John Wiley, New York.
[14] Lovász, L. and Simonovits, M. (1993). Random walks in a convex body and an improved volume algorithm. Random Structures Algorithms 4, 359–412.
[15] Madras, N. and Randall, D. (2002). Markov chain decomposition for convergence rate analysis. Ann. Appl. Prob. 12, 581–606.
[16] Mengersen, K. L. and Tweedie, R. L. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24, 101–121.
[17] Metropolis, N. et al. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1091.
[18] Polson, N. G. (1996). Convergence of Markov chain Monte Carlo algorithms. In Bayesian Statistics 5, eds Bernardo, J. M., Berger, J. O., Dawid, A. P. and Smith, A. F. M., Oxford University Press, pp. 297–321.
[19] Roberts, G. O. and Rosenthal, J. S. (1997). Geometric ergodicity and hybrid Markov chains. Electron. Commun. Prob. 2, 13–25.
[20] Roberts, G. O. and Tweedie, R. L. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stoch. Process. Appl. 80, 211–229.
[21] Roberts, G. O. and Tweedie, R. L. (2000). Rates of convergence of stochastically monotone and continuous time Markov models. J. Appl. Prob. 37, 359–373.
[22] Rosenthal, J. S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 90, 558–566.
[23] Sinclair, A. and Jerrum, M. (1989). Approximate counting, uniform generation and rapidly mixing Markov chains. Inf. Comput. 82, 93–133.
[24] Smith, A. F. M. and Roberts, G. O. (1993). Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods (with discussion). J. R. Statist. Soc. B 55, 3–24.
[25] Tierney, L. (1998). A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Prob. 8, 1–9.
[26] Yuen, W. K. (2000). Applications of geometric bounds to the convergence rate of Markov chains on ℝⁿ. Stoch. Process. Appl. 87, 1–23.
[27] Yuen, W. K. (2002). Generalization of discrete-time geometric bounds to convergence rate of Markov processes on ℝⁿ. Commun. Statist. Stoch. Models 18, 301–331.