
On the entropy for semi-Markov processes

Published online by Cambridge University Press:  14 July 2016

Valerie Girardin*
Affiliation:
Université de Caen
Nikolaos Limnios*
Affiliation:
Université de Technologie de Compiègne
∗ Postal address: Mathématiques, Campus II, Université de Caen, BP 5186, 14032 Caen, France. Email address: girardin@math.unicaen.fr
∗∗ Postal address: Laboratoire de Mathématiques Appliquées, Université de Technologie de Compiègne, BP 20529, 60205 Compiègne Cedex, France.

Abstract

The aim of this paper is to define the entropy of a finite semi-Markov process. We define the entropy of the finite distributions of the process and obtain its entropy rate explicitly by extending the Shannon–McMillan–Breiman theorem to this class of nonstationary continuous-time processes. The particular cases of pure jump Markov processes and renewal processes are considered. The relative entropy rate between two semi-Markov processes is also defined.
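For orientation, recall the classical special case that such a result generalizes: for an ergodic finite-state Markov chain with transition matrix $P=(p_{ij})$ and stationary distribution $\pi$, the Shannon entropy rate is $H=-\sum_i \pi_i \sum_j p_{ij}\log p_{ij}$. As a hedged sketch of the type of formula involved in the semi-Markov setting (the notation below is illustrative and not quoted from the paper): writing $q_{ij}(t)$ for the densities of the semi-Markov kernel, $\nu$ for the stationary distribution of the embedded jump chain, and $m_i$ for the mean sojourn time in state $i$, one expects an entropy rate of the ratio form

$$ \mathcal{H} \;=\; \frac{\displaystyle -\sum_{i}\nu_i \sum_{j}\int_0^\infty q_{ij}(t)\,\log q_{ij}(t)\,\mathrm{d}t}{\displaystyle \sum_i \nu_i\, m_i}, $$

that is, the mean entropy produced per jump of the embedded chain divided by the mean time between jumps; the pure jump Markov and renewal cases mentioned in the abstract then correspond to specializing the kernel.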

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 2003 


References

Albert, A. (1962). Estimating the infinitesimal generator of a continuous time finite state Markov process. Ann. Math. Statist. 38, 727–753.
Bad Dumitrescu, M. (1988). Some informational properties of Markov pure-jump processes. Čas. Pěstování Mat. 113, 429–434.
Breiman, L. (1958). The individual ergodic theorem of information theory. Ann. Math. Statist. 28, 809–811.
Breiman, L. (1960). Correction to: the individual ergodic theorem of information theory. Ann. Math. Statist. 31, 809–810.
Gut, A. (1988). Stopped Random Walks: Limit Theorems and Applications. Springer, New York.
Limnios, N. and Oprişan, G. (2001). Semi-Markov Processes and Reliability. Birkhäuser, Boston, MA.
McMillan, B. (1953). The basic theorems of information theory. Ann. Math. Statist. 24, 196–219.
Moore, E. H. and Pyke, R. (1968). Estimation of the transition distributions of a Markov renewal process. Ann. Inst. Statist. Math. 20, 411–424.
Perez, A. (1964). Extensions of Shannon–McMillan's limit theorem to more general stochastic processes. In Trans. Third Prague Conf. Inf. Theory, Statist. Decision Functions, Random Processes, Publishing House of the Czechoslovak Academy of Sciences, Prague, pp. 545–574.
Pinsker, M. S. (1964). Information and Information Stability of Random Variables and Processes. Holden-Day, San Francisco, CA.
Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423, 623–656.