Simulated annealing methods with general acceptance probabilities
Published online by Cambridge University Press: 14 July 2016
Abstract
Heuristic solution methods for combinatorial optimization problems are often based on local neighborhood searches. These tend to get trapped in a local optimum and the final result is often heavily dependent on the starting solution. Simulated annealing methods attempt to avoid these problems by randomizing the procedure so as to allow for occasional changes that worsen the solution. In this paper we provide probabilistic analyses of different designs of these methods.
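The abstract describes the core mechanism of simulated annealing: a randomized local search that occasionally accepts moves that worsen the objective. As a purely illustrative aid (not the algorithm analysed in the paper), a minimal Python sketch using a Metropolis-style acceptance rule, a geometric cooling schedule, and a made-up toy objective might look like this:

```python
import math
import random


def simulated_anneal(objective, neighbor, x0, t0=1.0, cooling=0.995,
                     t_min=1e-3, seed=0):
    """Minimize `objective` by a local search that sometimes accepts worse moves."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, f_best = x, fx
    t = t0
    while t > t_min:
        y = neighbor(x, rng)
        fy = objective(y)
        delta = fy - fx
        # Metropolis-style acceptance: improvements are always taken,
        # worsening moves are taken with probability exp(-delta / t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
        t *= cooling  # geometric cooling schedule (one common choice among many)
    return best, f_best


# Toy instance (illustrative only): a multimodal function on the integers 0..99,
# with a neighborhood of small random steps.
def f(i):
    return (i - 70) ** 2 / 100.0 + 5 * math.sin(i / 3.0)


def step(i, rng):
    return min(99, max(0, i + rng.choice([-3, -2, -1, 1, 2, 3])))


if __name__ == "__main__":
    x, fx = simulated_anneal(f, step, x0=5)
    print(f"best solution found: x={x}, f(x)={fx:.3f}")
```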
Type: Research Papers

Copyright © Applied Probability Trust 1987