
Assist system for remote manipulation of electric drills by the robot “WAREC-1R” using deep reinforcement learning

Published online by Cambridge University Press:  04 June 2021

Xiao Sun*
Affiliation:
Department of Mechatronics, University of Yamanashi, Yamanashi, Japan
Hiroshi Naito
Affiliation:
Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
Akio Namiki
Affiliation:
Department of Mechanical Engineering, Chiba University, Chiba, Japan
Yang Liu
Affiliation:
Department of Mechanical Engineering, Chiba University, Chiba, Japan
Takashi Matsuzawa
Affiliation:
Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
Atsuo Takanishi
Affiliation:
Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
*Corresponding author. Email: xsun@yamanashi.ac.jp

Abstract

Tool operation has long been studied in robotics. Although an appropriate hold on the tool is the basis of successful tool operation, achieving it is not easy, especially for tools with complicated shapes. In this paper, an assist system for a four-limbed robot is proposed for the remote operation of reaching for and grasping electric drills using deep reinforcement learning. Through comparative evaluation experiments, the increase in the success rate of reaching and grasping is verified, and the decrease in both the physical and mental workload of the operator is validated using the NASA-TLX index.
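For readers unfamiliar with the workload measure mentioned above, NASA-TLX combines ratings on six subscales (mental demand, physical demand, temporal demand, performance, effort, and frustration) into a single weighted workload score. The minimal Python sketch below illustrates the standard weighted computation only; it is not taken from the paper, and all ratings and weights shown are hypothetical.

```python
# Illustrative computation of the standard weighted NASA-TLX workload score.
# The six subscales and the weighting scheme follow the usual NASA-TLX
# procedure; the example ratings and weights below are hypothetical.

SUBSCALES = [
    "Mental Demand", "Physical Demand", "Temporal Demand",
    "Performance", "Effort", "Frustration",
]

def weighted_tlx(ratings: dict, weights: dict) -> float:
    """Overall workload = sum(rating * weight) / 15.

    ratings: each subscale rated 0-100.
    weights: number of times each subscale was selected in the 15 pairwise
             comparisons (weights must sum to 15).
    """
    assert sum(weights.values()) == 15, "pairwise-comparison weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

if __name__ == "__main__":
    # Hypothetical ratings for one operator after a teleoperation trial.
    ratings = {"Mental Demand": 70, "Physical Demand": 55, "Temporal Demand": 40,
               "Performance": 30, "Effort": 65, "Frustration": 45}
    weights = {"Mental Demand": 4, "Physical Demand": 3, "Temporal Demand": 2,
               "Performance": 2, "Effort": 3, "Frustration": 1}
    print(f"Overall weighted workload: {weighted_tlx(ratings, weights):.1f}")
```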

Type: Article

Copyright: © The Author(s), 2021. Published by Cambridge University Press

