
The stationary probability density of a class of bounded Markov processes

Published online by Cambridge University Press:  01 July 2016

Muhamad Azfar Ramli*
Affiliation:
National University of Singapore
Gerard Leng*
Affiliation:
National University of Singapore
*
Postal address: Cooperative Systems Lab E1-03-06, Department of Mechanical Engineering, National University of Singapore, 1 Engineering Drive 2, Singapore 117576.

Abstract


In this paper we generalize a bounded Markov process described by Stoyanov and Pacheco-González to a class of transition probability functions. A recursive integral equation for the probability density of these bounded Markov processes is derived, and the stationary probability density is obtained by solving an equivalent differential equation. Examples of stationary densities for different transition probability functions are given, and an application to designing a robotic coverage algorithm that emphasizes particular regions is discussed.
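The underlying class of processes can be illustrated by simulation. The sketch below, a minimal illustration and not the paper's generalized construction, simulates the symmetric special case of the bounded chain studied by Stoyanov and Pirinsky (2000): from state x in (0, 1), the chain jumps to a uniformly chosen point in (x, 1) with probability p, or in (0, x) with probability 1 − p. The function name and parameters are ours; the empirical histogram of the resulting path approximates the stationary density.

```python
import random

def simulate_chain(n_steps, p=0.5, x0=0.5, seed=0):
    """Simulate a bounded Markov chain on (0, 1): from state x, jump to a
    uniform point in (x, 1) with probability p, otherwise to a uniform
    point in (0, x). Returns the sampled path as a list."""
    rng = random.Random(seed)
    x = x0
    path = []
    for _ in range(n_steps):
        if rng.random() < p:
            x = x + (1 - x) * rng.random()   # uniform draw on (x, 1)
        else:
            x = x * rng.random()             # uniform draw on (0, x)
        path.append(x)
    return path

samples = simulate_chain(100_000)
# A normalized histogram of `samples` (after discarding a burn-in)
# estimates the stationary probability density of the chain.
```

For the symmetric case p = 0.5 the chain has no drift towards either endpoint, so the empirical mean of a long run stays near 0.5; other choices of p shift the mass towards one boundary.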

Type
General Applied Probability
Copyright
Copyright © Applied Probability Trust 2010 

References

Bargiel, M. and Tory, E. M. (2007). A five parameter Markov model for simulating the paths of sedimenting particles. Appl. Math. Modelling 31, 2080–2094.
Farahpour, F. et al. (2007). A Langevin equation for the rates of currency exchange based on the Markov analysis. Physica A 385, 601–608.
Flanders, H. (1973). Differentiation under the integral sign. Amer. Math. Monthly 80, 615–627.
Hernández-Suárez, C. M. and Castillo-Chavez, C. (1999). A basic result on the integral for birth-death Markov processes. Math. Biosci. 161, 95–104.
Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995). Continuous Univariate Distributions, Vol. 2, 2nd edn. John Wiley, New York.
Lasota, A. and Mackey, M. C. (1994). Chaos, Fractals, and Noise, 2nd edn. Springer, New York.
Lerman, K., Martinoli, A. and Galstyan, A. (2005). A review of probabilistic macroscopic models for swarm robotic systems. In Swarm Robotics Workshop: State-of-the-art Survey (Lecture Notes Comput. Sci. 3342), eds Sahin, E. and Spears, W., Springer, Berlin, pp. 143–152.
Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London.
Nielsen, C. K. (2009). Non-stationary, stable Markov processes on a continuous state space. Econom. Theory 40, 473–496.
Nikitin, Y. and Orsingher, E. (2000). The intermediate arc-sine law. Statist. Prob. Lett. 49, 119–125.
Pacheco-González, C. G. (2009). Ergodicity of a bounded Markov chain with attractiveness towards the centre. Statist. Prob. Lett. 79, 2177–2181.
Pacheco-González, C. G. and Stoyanov, J. (2008). A class of Markov chains with beta ergodic distributions. Math. Sci. 33, 110–119.
Stoyanov, J. and Pirinsky, C. (2000). Random motions, classes of ergodic Markov chains and beta distributions. Statist. Prob. Lett. 50, 293–304.