
Research on rapid location method of mobile robot based on semantic grid map in large scene similar environment

Published online by Cambridge University Press: 08 June 2022

Hengyang Kuang
Affiliation:
School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Yansheng Li*
Affiliation:
School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Yi Zhang
Affiliation:
School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Yong Wan
Affiliation:
School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Gengyu Ge
Affiliation:
School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
*Corresponding author. E-mail: liyansheng@cqupt.edu.cn

Abstract

The adaptive Monte Carlo localization (AMCL) algorithm struggles to localize a robot in large-scale scenes and in environments containing similar-looking regions. This paper improves the AMCL algorithm with semantic information assistance, achieving robust robot localization in such environments. First, a 2D grid map built by lidar-based simultaneous localization and mapping provides highly accurate indoor environmental contour information. Second, semantic objects are captured by a depth camera combined with an instance segmentation algorithm. The semantic grid map is then created by mapping the semantic point cloud onto the grid through the back-projection process of the pinhole camera. Finally, the semantic grid map serves as prior information to improve the initial particle distribution in the global localization stage of the AMCL algorithm, thereby solving the robot localization problem in this environment. The experimental evidence shows that the semantic grid map compensates for the environmental information degradation caused by 2D lidar and improves the robot’s perception of the environment. In addition, the proposed method improves the localization robustness of the AMCL algorithm in large scenes and similar environments, achieving an average localization success rate of about 90% or higher while reducing the number of iterations required. The global localization problem of robots in large scenes and similar environments is thus effectively solved.
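The mapping step the abstract describes, projecting instance-segmented depth pixels through the pinhole camera model onto the 2D grid, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intrinsics, map resolution, origin, and function names are all assumed for the example.

```python
import numpy as np

# Illustrative parameters only; the paper does not report its camera
# intrinsics or map resolution, so these values are assumptions.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # pinhole intrinsics
RESOLUTION = 0.05                              # grid cell size in metres
ORIGIN = np.array([-10.0, -10.0])              # world position of grid cell (0, 0)

def back_project(u, v, depth):
    """Pinhole back-projection: pixel (u, v) at the given depth -> camera-frame 3D point."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.array([x, y, depth])

def mark_semantic_cell(labels, u, v, depth, label, T_wc):
    """Project one instance-segmented depth pixel into the map and tag the hit cell.

    labels: 2D array of semantic class ids aligned with the occupancy grid
    T_wc:   4x4 camera-to-world transform from the robot's current pose estimate
    """
    p_cam = np.append(back_project(u, v, depth), 1.0)   # homogeneous coordinates
    p_world = (T_wc @ p_cam)[:3]                        # camera frame -> world frame
    col = int((p_world[0] - ORIGIN[0]) / RESOLUTION)    # drop height, quantise x
    row = int((p_world[1] - ORIGIN[1]) / RESOLUTION)    # quantise y
    if 0 <= row < labels.shape[0] and 0 <= col < labels.shape[1]:
        labels[row, col] = label                        # class id from instance segmentation
```

Running this over every labelled pixel of each depth frame yields a grid whose cells carry both occupancy and a semantic class, which is the prior the localization stage consumes.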
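Likewise, the localization improvement, using the semantic grid map to shape AMCL's initial particle distribution rather than sampling uniformly over free space, might look roughly like this. Again a hedged sketch under assumed data structures (`cell_labels`, `observed_labels`); the paper's actual weighting scheme may differ.

```python
import numpy as np

def init_particles(free_cells, cell_labels, observed_labels, n_particles, rng=None):
    """Semantic-prior global initialisation (sketch).

    free_cells:      (N, 2) array of free grid cells (row, col)
    cell_labels:     dict mapping a cell tuple -> set of semantic labels visible near it
    observed_labels: set of labels the robot currently detects
    """
    if rng is None:
        rng = np.random.default_rng()
    # Weight each free cell by how well its nearby semantics match the observation;
    # the small constant keeps cells with no match reachable.
    weights = np.array([
        len(cell_labels.get(tuple(c), set()) & observed_labels) + 1e-3
        for c in free_cells
    ], dtype=float)
    weights /= weights.sum()
    idx = rng.choice(len(free_cells), size=n_particles, p=weights)
    headings = rng.uniform(-np.pi, np.pi, size=n_particles)  # heading stays uniform
    return np.column_stack([free_cells[idx], headings])      # (n, 3): row, col, theta
```

Concentrating the initial particles in semantically consistent regions is what lets the filter disambiguate geometrically similar rooms and converge in fewer iterations, as the abstract reports.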

Type: Research Article
Copyright: © The Author(s), 2022. Published by Cambridge University Press

