
Exact distributions for reward functions on semi-Markov and Markov additive processes

Published online by Cambridge University Press:  14 July 2016

Valeri T. Stefanov
Affiliation: The University of Western Australia
Postal address: School of Mathematics and Statistics, The University of Western Australia, Crawley, WA 6009, Australia. Email address: stefanov@maths.uwa.edu.au

Abstract


The distribution theory for reward functions on semi-Markov processes has been of interest since the early 1960s. The relevant asymptotic distribution theory has been developed satisfactorily, but exact distribution results that permit effective computation of such distributions have proved difficult to obtain. In particular, there is no satisfactory exact distribution result for rewards accumulated over deterministic time intervals [0, t], even in the special case of continuous-time Markov chains. The present paper provides neat general results which lead to explicit closed-form expressions for the relevant Laplace transforms of general reward functions on semi-Markov and Markov additive processes.
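To fix ideas, consider the continuous-time Markov chain special case mentioned above. The following is a standard Kac-type matrix-exponential formula, given here only as an illustration of the object under study (it is not the paper's general semi-Markov result), in illustrative notation: $X$ is a finite-state chain with generator $Q$, $r$ is a vector of state-dependent reward rates, and $D_r = \operatorname{diag}(r)$. For $\theta \ge 0$,
\[
R(t) = \int_0^t r(X_s)\,\mathrm{d}s, \qquad
\mathbb{E}_i\!\left[\mathrm{e}^{-\theta R(t)}\,\mathbf{1}\{X_t = j\}\right]
= \left[\exp\big((Q - \theta D_r)\,t\big)\right]_{ij}.
\]
Even in this special case, recovering the distribution of $R(t)$ requires inverting this transform in $\theta$ for each fixed $t$, which is the computational difficulty alluded to above; the paper's contribution is explicit closed-form expressions for the relevant Laplace transforms in the far more general semi-Markov and Markov additive settings.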

Type: Research Papers
Copyright: © Applied Probability Trust 2006
