
Computing optimal quality control policies — two actions

Published online by Cambridge University Press:  14 July 2016

Robert C. Wang*
Affiliation:
Mountain States Telephone and Telegraph Company (Mountain Bell), Denver, Colorado

Abstract

We study a Markov quality control model in which, at each stage, we may either reset the machine or let it continue producing items. Both discounted and average cost criteria are considered. We show that a monotone policy is optimal and show how to compute the optimal critical point. Two special models are studied along with some numerical results.
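The monotone (control-limit) structure described in the abstract can be illustrated with a small value-iteration sketch. Everything below is an assumption for illustration only, not taken from the paper: the deterioration states, per-period costs, transition probability, reset cost, and discount factor are all invented, and the two actions are modeled as "keep producing" versus "reset to the new state."

```python
# Hypothetical two-action quality control model; all numbers are
# illustrative assumptions, not values from the paper.
import numpy as np

N = 10                               # deterioration states 0..N (assumed)
beta = 0.9                           # discount factor (assumed)
c = np.linspace(0.0, 5.0, N + 1)     # operating cost, increasing in state (assumed)
R = 3.0                              # fixed cost of resetting the machine (assumed)
p = 0.3                              # chance of deteriorating one state per period (assumed)

def value_iteration(tol=1e-10):
    """Discounted-cost value iteration; returns values and the optimal policy
    (0 = keep producing, 1 = reset)."""
    V = np.zeros(N + 1)
    while True:
        nxt = np.append(V[1:], V[-1])                 # state N stays at N
        keep = c + beta * ((1 - p) * V + p * nxt)     # continue producing
        reset = R + beta * V[0]                       # reset to the new state
        V_new = np.minimum(keep, reset)
        if np.max(np.abs(V_new - V)) < tol:
            policy = (reset < keep).astype(int)
            return V_new, policy
        V = V_new

V, policy = value_iteration()
print(policy)   # under these assumptions the policy is monotone: keep below
                # a critical point, reset at and above it
```

Because the operating cost and the value function are both nondecreasing in the deterioration state while the reset cost is constant, the optimal action switches from "keep" to "reset" at a single critical point, which is the control-limit structure the paper establishes.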

Type
Short Communications
Copyright
Copyright © Applied Probability Trust 1976 

