
Classification and lumpability in the stochastic Hopfield model

Published online by Cambridge University Press: 19 February 2016

R. L. Paige*
Affiliation: Texas Technological University
* Postal address: Department of Mathematics and Statistics, Texas Technological University, Lubbock, TX 79409, USA. Email address: rpaige@math.ttu.edu

Abstract

Connections between classification and lumpability in the stochastic Hopfield model (SHM) are explored and developed. A simplification of the SHM's complexity, based on its inherent lumpability, is derived. Contributions resulting from this reduction in complexity include: (i) computationally feasible calculation of classification times; (ii) techniques for enumerating the stationary distribution of the SHM's energy function; and (iii) a characterization of the set of possible absorbing states of the Markov chain associated with the zero-temperature SHM.
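To make the objects in the abstract concrete, the sketch below simulates a stochastic Hopfield network under the standard textbook formulation the abstract presupposes: Hebbian weights built from stored ±1 patterns, asynchronous Glauber updates at inverse temperature beta, and the quadratic energy E(s) = -½ sᵀWs. These modelling choices, and the network size, pattern count, and temperature used, are illustrative assumptions rather than details taken from the paper. One familiar lumping in this setting groups the 2^N states by their overlap with a stored pattern; whether this coincides with the lumping exploited in the paper is not stated in the abstract.

```python
# Minimal sketch of the stochastic Hopfield model (SHM): Hebbian weights,
# asynchronous Glauber dynamics, and the usual quadratic energy function.
# All numerical settings here are illustrative, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

N, P = 20, 2                                   # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))          # stored patterns xi^mu
W = (xi.T @ xi).astype(float) / N              # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                       # no self-coupling

def energy(s):
    """Hopfield energy E(s) = -1/2 * s' W s."""
    return -0.5 * s @ W @ s

def glauber_step(s, beta):
    """Asynchronous stochastic update of one randomly chosen neuron.
    As beta -> infinity this becomes the deterministic (zero-temperature)
    dynamics whose absorbing states the paper characterizes."""
    i = rng.integers(N)
    h = W[i] @ s                               # local field at neuron i
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    s[i] = 1 if rng.random() < p_plus else -1
    return s

# Start from a corrupted copy of pattern 0 and run the chain; the overlap
# with pattern 0 is the lumped coordinate tracked in the single-pattern case.
s = xi[0].copy()
flip = rng.choice(N, size=5, replace=False)
s[flip] *= -1                                  # corrupt 5 of the 20 bits
for _ in range(500):
    s = glauber_step(s, beta=4.0)
print("final overlap with pattern 0:", s @ xi[0] / N)
print("final energy:", energy(s))
```

In this single-pattern view, the flip probability of a neuron depends on the current state only through its overlap with the stored pattern, so tracking that overlap rather than the full 2^N-state chain is the kind of complexity reduction the abstract alludes to; the classification time then corresponds to the first passage of the chain to the relevant absorbing (or near-absorbing) configuration.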

Type
General Applied Probability
Copyright
© Applied Probability Trust 2001

