
A novel method for finding grasping handles in a clutter using RGBD Gaussian mixture models

Published online by Cambridge University Press:  16 June 2021

Olyvia Kundu
Affiliation: TATA Consultancy Services, Bangalore 560066, India

Samrat Dutta
Affiliation: TATA Consultancy Services, Bangalore 560066, India

Swagat Kumar*
Affiliation: TATA Consultancy Services, Bangalore 560066, India

*Corresponding author. Email: swagat.kumar@tcs.com

Abstract

The paper proposes a novel method to detect graspable handles for picking objects from a confined and cluttered space, such as the bins of a rack in a retail warehouse. The proposed method combines color and depth-curvature information to create a Gaussian mixture model that segments the target object from its background, and imposes the geometrical constraints of a two-finger gripper to localize the graspable regions. This helps in overcoming the limitations of a poorly trained deep network object detector and provides a simple and efficient method for grasp pose detection that does not require a priori knowledge of object geometry and can be implemented online with near real-time performance. The efficacy of the proposed approach is demonstrated through simulations as well as real-world experiments.
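To make the idea described in the abstract concrete, the following is an informal sketch (not the authors' implementation) of segmenting an RGB-D region with a Gaussian mixture over color and depth-curvature features and filtering candidates by a two-finger gripper's opening width. The feature construction, the curvature proxy, the closest-component heuristic, and the names `build_features`, `segment_object`, `graspable`, and `GRIPPER_MAX_OPENING_M` are illustrative assumptions only.

```python
# Illustrative sketch: GMM segmentation over colour + curvature features,
# followed by a crude two-finger gripper width check. Thresholds and
# helper names are assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

GRIPPER_MAX_OPENING_M = 0.085  # assumed maximum stroke of a two-finger gripper


def build_features(rgb, depth):
    """Stack per-pixel colour with a simple depth-curvature surrogate."""
    gy, gx = np.gradient(depth.astype(np.float32))
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    curvature = np.abs(gxx + gyy)  # Laplacian magnitude as curvature proxy
    return np.concatenate(
        [rgb.reshape(-1, 3).astype(np.float32) / 255.0,
         curvature.reshape(-1, 1)],
        axis=1)


def segment_object(rgb, depth, n_components=2):
    """Fit a GMM on colour+curvature features; return a foreground mask."""
    feats = build_features(rgb, depth)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=0).fit(feats)
    labels = gmm.predict(feats).reshape(depth.shape)
    # Heuristic: assume the component nearest the camera is the target.
    mean_depths = [depth[labels == k].mean() for k in range(n_components)]
    return labels == int(np.argmin(mean_depths))


def graspable(mask, px_per_m=500.0):
    """Rough check: does the object's horizontal extent fit the gripper?"""
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return False
    width_m = (cols[-1] - cols[0]) / px_per_m  # assumed pixel-to-metre scale
    return width_m <= GRIPPER_MAX_OPENING_M
```

In practice one would replace the pixel-to-metre scale with the camera intrinsics and scan multiple grasp directions, but the sketch conveys how a mixture model plus a gripper-geometry constraint can yield grasp candidates without object-specific training.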

Type: Article
Copyright: © The Author(s), 2021. Published by Cambridge University Press



Supplementary material: Kundu et al. supplementary video (9.1 MB)