
A monocular mobile robot reactive navigation approach based on the inverse perspective transformation

Published online by Cambridge University Press: 22 May 2012

Francisco Bonin-Font*, Antoni Burguera, Alberto Ortiz and Gabriel Oliver

Affiliation:
Department of Mathematics and Computer Science, University of the Balearic Islands, Palma, Balearic Islands, Spain. E-mails: francisco.bonin@uib.es, antoni.burguera@uib.es, alberto.ortiz@uib.es, goliver@uib.es

*Corresponding author. E-mail: francisco.bonin@uib.es

Summary

This paper presents an approach to visual obstacle avoidance and reactive robot navigation for outdoor and indoor environments. The obstacle detection algorithm comprises an image feature tracking procedure followed by a feature classification process based on the Inverse Perspective Transformation (IPT). The classifier discriminates obstacle points from ground points. The obstacle features allow the obstacle boundaries to be extracted, which are then used to construct a local, qualitative polar occupancy grid, in a manner analogous to a visual sonar. The navigation task is completed with a robocentric localization algorithm that computes the robot pose by means of an Extended Kalman Filter (EKF). The filter integrates the world coordinates of the ground points and the robot position in its state vector. The visual pose estimation process is intended to correct possible drift in the dead-reckoning data provided by the proprioceptive robot sensors. The experiments, conducted indoors and outdoors, illustrate the range of scenarios where our proposal has proved useful and show, both qualitatively and quantitatively, the benefits it provides.
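The ground/obstacle discrimination at the core of the summary can be sketched in a few lines. The fragment below is a minimal illustration, not the authors' implementation: the camera model (pinhole, fixed height and downward pitch), the function names and the 5 cm agreement threshold are all assumptions introduced for the example. A tracked feature is back-projected onto the ground plane at two consecutive robot poses; if the two ground hypotheses coincide once the odometric motion is compensated, the feature behaves like a ground point, otherwise like an obstacle point.

```python
import numpy as np

def ipt_ground_point(pixel, K, cam_height, cam_pitch):
    """Back-project a pixel onto the ground plane z = 0 (the IPT step).
    Assumes a pinhole camera with intrinsics K, mounted cam_height metres
    above the ground and pitched down cam_pitch radians; robot frame is
    x-forward, y-left, z-up. Returns (x, y) on the ground, or None if the
    viewing ray never meets the ground."""
    u, v = pixel
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    # Camera axes (x right, y down, z forward) expressed in robot axes.
    R0 = np.array([[0., 0., 1.], [-1., 0., 0.], [0., -1., 0.]])
    c, s = np.cos(cam_pitch), np.sin(cam_pitch)
    Ry = np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])  # downward tilt
    d = Ry @ R0 @ d_cam                                     # ray in robot frame
    if d[2] >= 0:                 # ray at or above the horizon: no ground hit
        return None
    t = cam_height / -d[2]        # scale factor that brings the ray to z = 0
    return np.array([t * d[0], t * d[1]])

def classify_feature(px_t0, px_t1, odom, K, cam_height, cam_pitch, tol=0.05):
    """IPT-based test for one tracked feature. odom = (dx, dy, dtheta) is
    the dead-reckoned robot motion between the two frames; tol is an
    illustrative 5 cm agreement threshold."""
    g0 = ipt_ground_point(px_t0, K, cam_height, cam_pitch)
    g1 = ipt_ground_point(px_t1, K, cam_height, cam_pitch)
    if g0 is None or g1 is None:
        return "obstacle"
    dx, dy, dth = odom
    c, s = np.cos(dth), np.sin(dth)
    px, py = g0[0] - dx, g0[1] - dy      # move the first hypothesis into...
    g0_in_t1 = np.array([c * px + s * py, -s * px + c * py])  # ...the new pose
    # Ground points reproject consistently; points above the ground do not.
    return "ground" if np.linalg.norm(g0_in_t1 - g1) < tol else "obstacle"
```

In the pipeline the summary describes, obstacle-labelled features would delimit the obstacle boundaries that populate the polar occupancy grid, while ground-labelled ones would supply the world-referenced observations feeding the EKF pose correction.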

Type: Articles
Copyright: © Cambridge University Press 2012

