
Blackwell Optimality for Controlled Diffusion Processes

Published online by Cambridge University Press:  14 July 2016

Héctor Jasso-Fuentes (CINVESTAV)
Onésimo Hernández-Lerma (CINVESTAV)
Postal address: Department of Mathematics, CINVESTAV-IPN, A. Postal 14-740, Mexico DF 07000, Mexico.

Abstract


In this paper we study m-discount optimality (m ≥ −1) and Blackwell optimality for a general class of controlled (Markov) diffusion processes. To this end, a key step is to express the expected discounted reward function as a Laurent series, and then to search for control policies that lexicographically maximize the mth coefficient of this series for m = −1, 0, 1, …. This approach leads naturally to m-discount optimality, and it yields Blackwell optimality in the limit as m → ∞.
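
For orientation, the following is a minimal sketch of the Laurent-series device in the standard notation of the sensitive-discount literature; the symbols V_\alpha, r and h_m are illustrative and need not coincide with those used in the paper. For a discount rate α > 0, a control policy π and an initial state x, the expected discounted reward admits, for all sufficiently small α, an expansion of the form

V_\alpha(x,\pi) = E_x^{\pi}\!\left[ \int_0^{\infty} e^{-\alpha t}\, r(x(t),u(t))\, dt \right] = \sum_{m=-1}^{\infty} \alpha^{m}\, h_m(x,\pi).

A policy π* is then m-discount optimal if

\liminf_{\alpha \downarrow 0}\, \alpha^{-m} \bigl[ V_\alpha(x,\pi^*) - V_\alpha(x,\pi) \bigr] \ge 0 \quad \text{for every policy } \pi \text{ and every state } x,

and Blackwell optimal if V_\alpha(x,\pi^*) ≥ V_\alpha(x,\pi) for every policy π and all α in some interval (0, α_0]. Lexicographic maximization of the coefficients h_{-1}, h_0, h_1, … produces m-discount optimal policies, and letting m → ∞ recovers Blackwell optimality.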

Type: Research Article
Copyright: © Applied Probability Trust 2009
