
Comparing two algorithms for automatic planning by robots in stochastic environments*

Published online by Cambridge University Press:  09 March 2009

Alan D. Christiansen
Affiliation:
Computer Science Department, Tulane University, New Orleans, LA 70118-5674 (USA). Supported at CMU by an AT&T Bell Laboratories Ph.D. Scholarship and by the National Science Foundation under grant DMC-8520475. A portion of this work was completed during a visit to the Laboratoire d'Informatique Fondamentale et d'Intelligence Artificielle (LIFIA) in Grenoble, France, supported by INRIA.
Kenneth Y. Goldberg
Affiliation:
Institute for Robotics and Intelligent Systems, University of Southern California, Los Angeles, CA 90089-0273 (USA). Supported by the National Science Foundation under Awards No. IRI-9123747 and DDM-9215362 (Strategic Manufacturing Initiative).

Summary

Planning a sequence of robot actions is especially difficult when the outcome of actions is uncertain, as is inevitable when interacting with the physical environment. In this paper we consider the case of finite state and action spaces where actions can be modeled as Markov transitions. Finding a plan that achieves a desired state with maximum probability is known to be an NP-Complete problem. We consider two algorithms: an exponential-time algorithm that maximizes probability, and a polynomial-time algorithm that maximizes a lower bound on the probability. As these algorithms trade off plan time for plan quality, we compare their performance on a mechanical system for orienting parts. Our results lead us to identify two properties of stochastic actions that can be used to choose between these planning algorithms for other applications.
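To make the trade-off concrete, the sketch below is a minimal illustration (not the algorithms from the paper): it models actions as Markov transitions over a small set of toy states and exhaustively enumerates open-loop action sequences, computing each sequence's exact probability of ending in the goal state. The transition table, the state and action names, and the helpers success_probability and best_plan_exhaustive are all assumptions introduced for illustration; the paper's polynomial-time algorithm, which maximizes a lower bound on this probability rather than the probability itself, is not reproduced here.

```python
import itertools
from typing import Dict, Tuple

# Hypothetical toy action model: P[state][action] maps each possible next
# state to its probability.  States and actions are illustrative only.
Transitions = Dict[str, Dict[str, Dict[str, float]]]

P: Transitions = {
    "s0":   {"a": {"s1": 0.7, "s0": 0.3}, "b": {"s2": 0.5, "s0": 0.5}},
    "s1":   {"a": {"goal": 0.9, "s1": 0.1}, "b": {"s2": 1.0}},
    "s2":   {"a": {"goal": 0.6, "s0": 0.4}, "b": {"goal": 0.8, "s2": 0.2}},
    "goal": {"a": {"goal": 1.0}, "b": {"goal": 1.0}},
}
ACTIONS = ["a", "b"]


def success_probability(plan: Tuple[str, ...], start: str, goal: str) -> float:
    """Propagate a distribution over states through an open-loop action
    sequence and return the probability mass on the goal state at the end."""
    dist = {start: 1.0}
    for action in plan:
        nxt: Dict[str, float] = {}
        for state, p in dist.items():
            for state2, q in P[state][action].items():
                nxt[state2] = nxt.get(state2, 0.0) + p * q
        dist = nxt
    return dist.get(goal, 0.0)


def best_plan_exhaustive(start: str, goal: str, length: int) -> Tuple[Tuple[str, ...], float]:
    """Exponential-time search: enumerate all |A|^length action sequences
    and keep the one with the maximum exact success probability."""
    best, best_p = (), 0.0
    for plan in itertools.product(ACTIONS, repeat=length):
        p = success_probability(plan, start, goal)
        if p > best_p:
            best, best_p = plan, p
    return best, best_p


if __name__ == "__main__":
    plan, p = best_plan_exhaustive("s0", "goal", length=3)
    print("best plan:", plan, "success probability:", p)
```

The enumeration visits |A|^k sequences of length k, which is what makes exact maximization exponential in plan length; the appeal of a polynomial-time surrogate is that it avoids this enumeration at the cost of optimizing only a bound on the success probability.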

Type: Articles
Copyright: © Cambridge University Press 1995
