
Combination of Recurrent Neural Network and Deep Learning for Robot Navigation Task in Off-Road Environment

Published online by Cambridge University Press: 04 November 2019

Farinaz Alamiyan-Harandi
Affiliation:
Computer Engineering Department, Faculty of Engineering, Yazd University, Yazd, Iran. E-mail: f.alamiyan@yazd.ac.ir
Vali Derhami*
Affiliation:
Computer Engineering Department, Faculty of Engineering, Yazd University, Yazd, Iran. E-mail: vderhami@yazd.ac.ir
Fatemeh Jamshidi
Affiliation:
Department of Electrical Engineering, Faculty of Engineering, Fasa University, Fasa, Iran. E-mail: jamshidi@fasau.ac.ir
*Corresponding author. E-mail: vderhami@yazd.ac.ir

Summary

This paper tackles the challenge of using sequences of past environment states as controller inputs in a vision-based robot navigation task. In this task, a robot must follow a given trajectory across uneven terrain without falling into pits or losing its balance, when the only sensory input is the raw image captured by a camera. The robot must distinguish large pits from small holes in order to decide between avoiding an obstacle and passing over it. In non-Markov tasks such as this one, decisions must draw on past sensory data to achieve acceptable performance. Using images as sensory inputs naturally raises the curse of dimensionality, and using sequences of past images intensifies it. This paper proposes a new framework, called recurrent deep learning (RDL), that combines deep learning (DL) with a recurrent neural network to cope with this challenge. First, suitable features are extracted from the raw image using DL. These learned features, together with several expert-defined features, are then fed to a fully connected recurrent network (the target network) that generates the robot's control commands. To evaluate the proposed RDL framework, experiments were conducted on a WEBOTS and MATLAB co-simulation platform. The simulation results show that the proposed framework outperforms a conventional DL-based controller in the navigation task on uneven terrain.
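As a rough illustration of the two-stage pipeline described in the summary, the sketch below wires a convolutional feature extractor (stage 1: DL feature extraction from the raw image) to a recurrent control head (stage 2: a recurrent network over learned plus expert-defined features). It is a minimal sketch in PyTorch under stated assumptions: the layer sizes, the choice of a GRU, and every name (RDLController, n_expert_features, and so on) are illustrative, not the authors' published architecture.

```python
# Minimal sketch of the RDL idea: deep feature extraction from raw camera
# images, followed by a recurrent controller that also receives expert-
# defined features. All dimensions and names here are assumptions.
import torch
import torch.nn as nn

class RDLController(nn.Module):
    def __init__(self, n_expert_features=4, n_commands=2, hidden_size=64):
        super().__init__()
        # Stage 1: deep feature extraction from the raw grayscale image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                  # -> 16 * 4 * 4 = 256 features
        )
        # Stage 2: recurrent target network over encoded + expert features.
        # The recurrence supplies the memory of past states that a purely
        # feed-forward controller lacks in this non-Markov task.
        self.rnn = nn.GRU(256 + n_expert_features, hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, n_commands)  # e.g. wheel speeds

    def forward(self, images, expert, hidden=None):
        # images: (batch, time, 1, H, W); expert: (batch, time, n_expert)
        b, t = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1)).view(b, t, -1)
        out, hidden = self.rnn(torch.cat([feats, expert], dim=-1), hidden)
        return self.head(out), hidden

# Usage: one 10-step sequence of 64x64 grayscale frames.
model = RDLController()
cmds, h = model(torch.randn(1, 10, 1, 64, 64), torch.randn(1, 10, 4))
print(cmds.shape)  # torch.Size([1, 10, 2])
```

The key design point the sketch captures is that only the recurrent head carries state across time steps; the image encoder is applied frame by frame, so the high-dimensional raw images never have to be stacked into one enormous input vector.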

Type
Articles
Copyright
© Cambridge University Press 2019



Supplementary material
Alamiyan-Harandi et al. supplementary material (video, 27.6 MB).