A Study of Sensor-Fusion Mechanism for Mobile Robot Global Localization
Published online by Cambridge University Press: 22 April 2019
Summary
Estimating the state of a robot within a known map is an essential problem for mobile robots; it is also referred to as "localization". Although LiDAR-based localization is practical in many applications, it is difficult to achieve global localization with LiDAR alone because of its low-dimensional feedback, especially in environments with repetitive geometric features. This paper introduces a sensor-fusion-based localization system capable of addressing the global localization problem. Both LiDAR and vision sensors are integrated, making use of the rich information provided by the vision sensor and the robustness of LiDAR. A hybrid grid-map is built for global localization, and a visual global descriptor is applied to speed up localization convergence, combined with a pose-refinement pipeline for improving localization accuracy. In addition, a trigger mechanism is introduced to handle the kidnapped-robot problem and to verify the relocalization result. Experiments under different conditions are designed to evaluate the performance of the proposed approach and to compare it with existing localization systems. The experimental results show that our system is able to solve the global localization problem and that its sensor-fusion mechanism improves performance.
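The summary describes a pipeline in which a visual global descriptor proposes candidate poses on the hybrid grid-map, LiDAR scan matching refines and scores them, and a trigger mechanism restarts global localization when tracking confidence collapses (the kidnapped-robot case). The sketch below is purely illustrative and not the authors' published code; every name (GlobalDescriptorIndex, refine_with_lidar, KIDNAP_THRESHOLD, TOP_K) is a hypothetical placeholder for the components named in the summary.

```python
# Illustrative sketch only: all classes, functions, and thresholds below are
# assumed placeholders for the pipeline described in the paper's summary.
import numpy as np

KIDNAP_THRESHOLD = 0.3   # assumed confidence bound that fires the relocalization trigger
TOP_K = 5                # assumed number of visual candidates kept for LiDAR refinement


class GlobalDescriptorIndex:
    """Toy stand-in for a visual global-descriptor database attached to the hybrid grid-map."""

    def __init__(self, poses, descriptors):
        self.poses = np.asarray(poses, dtype=float)        # (N, 3) map poses: x, y, yaw
        self.descriptors = np.asarray(descriptors, dtype=float)  # (N, D) descriptors per map node

    def query(self, descriptor, k=TOP_K):
        """Return the k map poses whose descriptors are closest to the query image descriptor."""
        dists = np.linalg.norm(self.descriptors - descriptor, axis=1)
        return self.poses[np.argsort(dists)[:k]]


def refine_with_lidar(candidate_pose, scan, grid_map):
    """Placeholder for LiDAR scan-to-map matching; returns a refined pose and a match score."""
    # A real system would run scan matching (e.g. correlative matching or ICP) against
    # grid_map here; this stub simply echoes the candidate with a fixed score.
    return np.asarray(candidate_pose, dtype=float), 0.8


def global_localize(image_descriptor, scan, index, grid_map):
    """Visual descriptor proposes candidates; LiDAR refinement selects and verifies the best."""
    candidates = index.query(image_descriptor)
    refined = [refine_with_lidar(c, scan, grid_map) for c in candidates]
    return max(refined, key=lambda r: r[1])


def localization_step(state, image_descriptor, scan, index, grid_map):
    """One update: track locally, but trigger global relocalization when confidence collapses."""
    if state["confidence"] < KIDNAP_THRESHOLD:          # kidnapped-robot trigger
        pose, score = global_localize(image_descriptor, scan, index, grid_map)
    else:
        pose, score = refine_with_lidar(state["pose"], scan, grid_map)
    state["pose"], state["confidence"] = pose, score
    return state
```

Under these assumptions, the vision sensor narrows the search over the whole map so the LiDAR matcher only has to verify a handful of candidates, which is how the descriptor speeds up convergence while LiDAR keeps the final pose accurate.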
- Type: Articles
- Copyright: © Cambridge University Press 2019
Footnotes
The first two authors contributed equally to this work.