
Markovian Processes with Identifiable States: General Considerations and Application to All-or-None Learning

Published online by Cambridge University Press: 01 January 2025

James G. Greeno, Indiana University
Theodore E. Steiner, Indiana University

Abstract

It often happens that a theory specifies some variables or states which cannot be identified completely in an experiment. When this happens, there are important questions as to whether the experiment is relevant to certain assumptions of the theory. Some of these questions are taken up in the present article, where a method is developed for describing the implications of a theory for an experiment. The method consists of constructing a second theory with all of its states identifiable in the outcome-space of the experiment. The method can be applied (i.e., an equivalent identifiable theory exists) whenever a theory specifies a probability function on the sample-space of possible outcomes of the experiment. An interesting relationship between lumpability of states and recurrent events plays an important role in the development of the identifiable theory. An identifiable theory of an experiment can be used to investigate relationships among different theories of the experiment. As an example, an identifiable theory of all-or-none learning is developed, and it is shown that all theories in a large class of all-or-none theories are equivalent for experiments in which a task is learned to a strict criterion.
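
As a concrete illustration of the identifiability problem the abstract describes, consider a minimal sketch of the standard two-state all-or-none model (added here for exposition; the parametrization with learning rate c and guessing probability g is the conventional one and is an assumption of this sketch, not notation taken from the article). With latent states L (learned, absorbing) and U (unlearned), the transition matrix and response rule are

\[
\mathbf{P} =
\begin{pmatrix} 1 & 0 \\ c & 1 - c \end{pmatrix}
\quad \text{(states ordered } L, U\text{)},
\qquad
\Pr(\text{correct} \mid L) = 1,
\qquad
\Pr(\text{correct} \mid U) = g .
\]

Because a correct response can occur in either state, L and U cannot be identified trial by trial from the response record; only consequences expressible in the outcome-space are testable, such as (assuming the process starts in U)

\[
\Pr(\text{error on trial } n) = (1 - g)(1 - c)^{\,n-1} .
\]

In this model an error reveals that the process is in U, so the future after an error is independent of the past; errors thus behave as recurrent events of the observable process, which illustrates the link between lumpability and recurrent events that the identifiable-theory construction exploits.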

Type: Original Paper
Copyright: © 1964 Psychometric Society

Footnotes

* This research was supported in part by the National Science Foundation under grant GB-319.
