
Towards life-long adaptive agents: using metareasoning for combining knowledge-based planning with situated learning

Published online by Cambridge University Press: 18 December 2018

Priyam Parashar
Affiliation:
Contextual Robotics Institute, UC San Diego, La Jolla, CA 92093, USA; e-mail: pparashar@ucsd.edu
Ashok K. Goel
Affiliation:
Design & Intelligence Laboratory, Georgia Institute of Technology, Atlanta, GA 30308, USA; e-mail: goel@cc.gatech.edu
Bradley Sheneman
Affiliation:
American Family Insurance, Chicago, IL; e-mail: bradsheneman@gmail.com
Henrik I. Christensen
Affiliation:
Contextual Robotics Institute, UC San Diego, La Jolla, CA 92093, USA; e-mail: hichristensen@ucsd.edu

Abstract

We consider task planning for long-living intelligent agents situated in dynamic environments. Specifically, we address the problem of incomplete knowledge of the world due to the addition of new objects with unknown action models. We propose a multilayered agent architecture that uses meta-reasoning to control hierarchical task planning and situated learning, monitor expectations generated by a plan against world observations, form goals and rewards for the situated reinforcement learner, and learn the planning knowledge missing for the new objects. We use occupancy grids as a low-level representation for the high-level expectations to capture changes in the physical world due to the additional objects, and provide a similarity method for detecting discrepancies between the expectations and the observations at run-time; the meta-reasoner uses these discrepancies to formulate goals and rewards for the learner, and the learned policies are added to the hierarchical task network plan library for future re-use. We describe our experiments in the Minecraft and Gazebo microworlds to demonstrate the efficacy of the architecture and the technique for learning. We test our approach against an ablated reinforcement learning (RL) version, and our results indicate that this form of expectation enhances the learning curve for RL while being more generic than propositional representations.
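The expectation-monitoring idea in the abstract can be sketched in a few lines: compare an occupancy grid expected by the plan against the observed grid, and flag a discrepancy when their similarity drops below a threshold, at which point the meta-reasoner would form a learning goal. This is a minimal illustration, not the paper's implementation; the binary-grid representation, the Jaccard-style similarity, and the `threshold` parameter are all assumptions for the sketch.

```python
def grid_similarity(expected, observed):
    """Jaccard-style similarity between two binary occupancy grids
    (1 = occupied cell, 0 = free cell), given as lists of rows."""
    intersection = 0
    union = 0
    for e_row, o_row in zip(expected, observed):
        for e, o in zip(e_row, o_row):
            intersection += 1 if (e and o) else 0
            union += 1 if (e or o) else 0
    # Two empty grids are trivially identical.
    return intersection / union if union else 1.0


def detect_discrepancy(expected, observed, threshold=0.9):
    """Return (discrepancy_flag, similarity). A flag of True would
    trigger the meta-reasoner to formulate a goal and reward signal
    for the situated reinforcement learner."""
    sim = grid_similarity(expected, observed)
    return sim < threshold, sim
```

For example, if a new object occupies a cell the plan expected to be free, the observed grid diverges from the expected one, similarity falls, and the discrepancy flag is raised.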

Type
Special Issue Contribution
Copyright
© Cambridge University Press, 2018 

