
A decomposition theorem for infinite stochastic matrices

Published online by Cambridge University Press:  14 July 2016

Daniel P. Heyman*
Affiliation:
Bellcore
* Postal address: Bellcore, Room 3D-308, 331 Newman Springs Road, Red Bank, NJ 07701-7020, USA.

Abstract

We prove that every infinite-state stochastic matrix P that is irreducible and consists of positive-recurrent states can be represented in the form I − P = (A − I)(B − S), where A is strictly upper-triangular, B is strictly lower-triangular, and S is diagonal. Moreover, the elements of A are expected values of random variables that we will specify, and the elements of B and S are probabilities of events that we will specify. The decomposition can be used to obtain steady-state probabilities, mean first-passage times and the fundamental matrix.
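
For orientation only, the sketch below illustrates the quantities the decomposition is used to obtain — steady-state probabilities, the fundamental matrix and mean first-passage times — on a small finite chain, computed by ordinary dense linear algebra rather than by the infinite-state factorization proved in the paper. The 3×3 matrix P is an arbitrary illustrative example, not taken from the paper.

```python
import numpy as np

# Minimal finite-state sketch (not the paper's infinite-state construction):
# compute the steady-state vector, the Kemeny-Snell fundamental matrix and
# mean first-passage times for a small irreducible stochastic matrix.

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

# Steady-state probabilities: solve pi (I - P) = 0 with sum(pi) = 1 by
# replacing one balance equation with the normalization condition.
A = (np.eye(n) - P).T
A[-1, :] = 1.0
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)

# Fundamental matrix Z = (I - P + 1 pi)^{-1}, where 1 pi has every row equal to pi.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

# Mean first-passage times m_ij = (z_jj - z_ij) / pi_j for i != j
# (the diagonal of M is zero; mean recurrence times are 1 / pi_j).
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

print("steady-state vector:", pi)
print("mean first-passage times:\n", M)
```
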

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 1995 

