
Countable state Markov decision processes with unbounded jump rates and discounted cost: optimality equation and approximations

Published online by Cambridge University Press:  21 March 2016

H. Blok*
Affiliation:
Leiden University
F. M. Spieksma*
Affiliation:
Leiden University
Postal address: Mathematisch Instituut, Leiden University, Postbus 9512, 2300 RA Leiden, The Netherlands.

Abstract


This paper considers Markov decision processes (MDPs) with transition rates that are unbounded as a function of the state. We are especially interested in studying structural properties of optimal policies and of the value function. A common method to derive such properties is value iteration applied to the uniformised MDP. However, due to the unboundedness of the rates, uniformisation is not possible, and so value iteration cannot be applied in the way we need. To circumvent this, one can perturb the MDP. Two results are then needed for the perturbed sequence of MDPs: (i) for each perturbation, as well as for the original MDP, the discounted cost optimality equation has a unique solution; (ii) if the perturbed sequence of MDPs converges in a suitable manner, then the associated optimal policies and value functions should converge as well. Both the MDP and the perturbed MDPs can be modelled as a collection of parametrised Markov processes, and both results above are essentially implied by certain continuity properties of the process as a function of the parameter. In this paper we deduce tight, verifiable conditions that imply the necessary continuity properties. The most important of these are drift conditions, which are strongly related to nonexplosiveness.
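The perturbation-and-uniformisation scheme described above can be illustrated numerically. The sketch below (not from the paper; the model, rates, and costs are illustrative assumptions) truncates an M/M/1 queue with service-rate control at a finite state N, which bounds the rates and makes uniformisation possible, then runs value iteration on the discounted cost optimality equation. Increasing the truncation level N plays the role of the perturbation parameter: the value functions of the truncated MDPs approximate that of the original unbounded-state MDP.

```python
import numpy as np

def discounted_value_iteration(N, alpha=1.0, lam=1.0, mus=(1.0, 3.0),
                               cost_rate=lambda x, mu: x + mu, tol=1e-10):
    """Value iteration for a truncated controlled M/M/1 queue (illustrative).

    States 0..N (arrivals are rejected at the truncation boundary N).
    Action a chooses service rate mus[a]; cost_rate(x, mu) is the
    holding-plus-service cost rate; alpha > 0 is the discount rate.
    """
    Lam = lam + max(mus)          # uniformisation constant; finite after truncation
    V = np.zeros(N + 1)
    while True:
        Q = np.empty((N + 1, len(mus)))
        for a, mu in enumerate(mus):
            for x in range(N + 1):
                srv = mu if x > 0 else 0.0          # no service in the empty state
                up = V[min(x + 1, N)]               # arrival (blocked at N)
                down = V[max(x - 1, 0)]
                # uniformised one-step expectation, self-loop absorbs slack rate
                exp_next = (lam * up + srv * down + (Lam - lam - srv) * V[x]) / Lam
                # discounted optimality equation: contraction factor Lam/(alpha+Lam)
                Q[x, a] = (cost_rate(x, mu) + Lam * exp_next) / (alpha + Lam)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

# Approximation in the truncation parameter: V_N(0) stabilises as N grows.
V30, _ = discounted_value_iteration(30)
V60, _ = discounted_value_iteration(60)
```

Because of the discount factor Λ/(α+Λ) < 1, each truncated problem is a contraction and value iteration converges; comparing V30[0] with V60[0] shows the truncations agreeing at states far from the boundary, which is the kind of convergence statement result (ii) above formalises.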

Type
General Applied Probability
Copyright
Copyright © Applied Probability Trust 2015 
