
Probabilistic analysis of a learning matrix

Published online by Cambridge University Press: 01 July 2016

William G. Faris
Affiliation: University of Arizona
Robert S. Maier
Affiliation: University of Arizona

Postal address for both authors: Department of Mathematics, University of Arizona, Tucson, Arizona 85721, USA.

Abstract

A learning matrix is defined by a set of input and output pattern vectors whose entries are zeros and ones. The matrix is the entrywise maximum of the outer products of the input and output pattern vectors, so its entries are also zeros and ones. The product of this matrix with a selected input pattern vector defines an activity vector. It is shown that when the patterns are taken to be random, central limit and large deviation theorems hold for the activity vector; these give conditions under which the activity vector may be used to reconstruct the output pattern vector corresponding to the selected input pattern vector.
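The construction described in the abstract is straightforward to state computationally. The following is a minimal sketch, not taken from the paper: the dimensions, the sparsity level, and the NumPy implementation are illustrative assumptions. It builds the learning matrix as the entrywise maximum of outer products of binary pattern pairs, forms the activity vector for one stored input, and thresholds it to attempt reconstruction of the corresponding output pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: m-dimensional inputs, n-dimensional outputs,
# p stored pattern pairs. The sparsity level 0.3 is also an assumption.
m, n, p = 8, 8, 3
sparsity = 0.3

# Random binary patterns; each row is one pattern vector.
X = (rng.random((p, m)) < sparsity).astype(int)  # input patterns
Y = (rng.random((p, n)) < sparsity).astype(int)  # output patterns

# Learning matrix: entrywise maximum of the outer products, so that
# M[i, j] = 1 iff some stored pair k has Y[k][i] = X[k][j] = 1.
M = np.zeros((n, m), dtype=int)
for x, y in zip(X, Y):
    M = np.maximum(M, np.outer(y, x))

# Activity vector for a selected input pattern: ordinary matrix product.
a = M @ X[0]

# If the selected input has w active entries, then a[i] equals w at every
# position where Y[0][i] = 1; elsewhere a[i] <= w. Thresholding at w
# recovers the output pattern when interference from the other stored
# pairs produces no spurious entries at level w.
w = X[0].sum()
y_hat = (a >= w).astype(int)
print("reconstructed:", y_hat)
print("stored output:", Y[0])
```

The probabilistic question studied in the paper is precisely when this thresholding succeeds: as the patterns are random, the off-pattern entries of the activity vector fluctuate, and the central limit and large deviation theorems quantify how likely they are to reach the threshold w.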

Type: Research Article

Copyright: © Applied Probability Trust 1988


Footnotes

Research supported by National Science Foundation grant DMS 810215.
