
Multi-resolution Visual Positioning and Navigation Technique for Unmanned Aerial System Landing Assistance

Published online by Cambridge University Press:  21 June 2017

Chong Yu*
Affiliation:
(Asia-Pacific Research and Development Ltd, Intel Corporation, Shanghai, 200241, China)
Jiyuan Cai
Affiliation:
(School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai, 200240, China)
Qingyu Chen
Affiliation:
(Robotics Institute, University of Michigan, Ann Arbor, MI, 48105, USA)
Abstract

To achieve more accurate navigation performance during the landing process, a multi-resolution visual positioning technique is proposed for landing assistance of an Unmanned Aerial System (UAS). This technique uses a captured image of an artificial landmark (e.g. a barcode) to provide relative positioning information along the X, Y and Z axes and in the yaw, roll and pitch orientations. A multi-resolution coding algorithm is designed to ensure that the UAS does not lose detection of the landing target due to limited visual angles or camera resolution. Simulation and real-world experiments demonstrate the performance of the proposed technique in terms of positioning accuracy, detection accuracy and navigation effectiveness. Two types of UAS are used to verify the generalisation of the proposed technique. Comparative experiments against state-of-the-art techniques are also included, together with an analysis of the results.
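The core of the multi-resolution idea can be illustrated with a minimal sketch. Assume a landmark composed of nested codes at several physical sizes: at high altitude the camera decodes the large outer pattern, and as the UAS descends and the outer pattern overflows the field of view, the decoder switches to a smaller inner level that still projects to a readable size on the sensor. All names, focal length and size thresholds below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the authors' implementation) of selecting which
# nested landmark level to decode, based on its projected size in pixels.
# FOCAL_PX, MIN_PX/MAX_PX and LEVEL_SIZES_M are assumed example values.

FOCAL_PX = 800.0               # assumed camera focal length, in pixels
MIN_PX, MAX_PX = 40.0, 400.0   # assumed decodable size range on the sensor

# Physical side lengths (metres) of the nested landmark levels,
# from the full outer pattern down to the smallest inner code.
LEVEL_SIZES_M = [2.0, 0.5, 0.125]

def apparent_size_px(size_m, altitude_m):
    """Pinhole projection: landmark side length in pixels at a given altitude."""
    return FOCAL_PX * size_m / altitude_m

def select_level(altitude_m):
    """Return the index of the first nested level whose projected size
    falls inside the decodable range, or None if no level fits."""
    for i, size_m in enumerate(LEVEL_SIZES_M):
        px = apparent_size_px(size_m, altitude_m)
        if MIN_PX <= px <= MAX_PX:
            return i
    return None
```

Under these assumed numbers, the outer pattern is selected at cruise altitude (e.g. 20 m), the middle level during the approach (e.g. 2 m), and the innermost code in the final touchdown phase (e.g. 0.5 m), so the landmark is never lost for lack of resolution or field of view.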

Type
Research Article
Copyright
Copyright © The Royal Institute of Navigation 2017 

