
Learning-based simulation and modeling of unorganized chaining behavior using data generated from 3D human motion tracking

Published online by Cambridge University Press: 16 June 2021

Abhinav Malviya* and Rahul Kala
Affiliation: Centre of Intelligent Robotics, Indian Institute of Information Technology, Allahabad, Prayagraj, India
*Corresponding author. Email: abhinavmcs0001@gmail.com

Abstract

This paper models unorganized chaining behavior, in which humans must walk in a chain because the environment is constrained. Detection and tracking are performed using a 3D LiDAR, which brings the challenges of environmental noise, an uncontrolled environment, and occlusions. A Kalman filter is used for tracking. The resulting trajectories are analyzed and used to train a behavioral model. The modeling has applications in socialistic robot motion planning and in simulation. Based on the results, we conclude that the trajectory prediction of our approach is more socialistic and has a lower error than that of the artificial potential field method.
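As a rough illustration of the tracking step, the sketch below implements a constant-velocity Kalman filter over a pedestrian's planar position. The state layout, frame period, and noise magnitudes are assumptions chosen for illustration, not the formulation or parameters used in the paper.

```python
import numpy as np

# Constant-velocity Kalman filter over state (x, y, vx, vy).
# All values below are illustrative assumptions, not the paper's parameters.
dt = 0.1                                  # assumed LiDAR frame period (s)
F = np.array([[1, 0, dt, 0],              # state transition: x += vx*dt, y += vy*dt
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],               # only position is observed
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)                      # process noise covariance (assumed)
R = 0.05 * np.eye(2)                      # measurement noise covariance (assumed)

x = np.zeros(4)                           # initial state estimate
P = np.eye(4)                             # initial state covariance

def kalman_step(x, P, z):
    """One predict-update cycle for a detected person centroid z = (x, y)."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Example: filter a noisy sequence of person centroids from the LiDAR
for z in [np.array([0.0, 0.0]), np.array([0.11, 0.02]), np.array([0.19, 0.01])]:
    x, P = kalman_step(x, P, z)
print(x[:2])  # smoothed position estimate
```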
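For the baseline named in the abstract, the following is a minimal artificial potential field step: an attractive pull toward the goal plus the classical repulsive push from nearby people or obstacles. The gains `k_att`, `k_rep`, the influence radius `d0`, and the step size are assumed values, not the tuned parameters of the compared baseline.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.5, step=0.05):
    """One artificial-potential-field step; all gains are illustrative."""
    force = k_att * (goal - pos)                      # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                             # repel only inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    # Fixed-step move along the normalized resultant force
    return pos + step * force / max(np.linalg.norm(force), 1e-6)

# Example: one pedestrian moving toward a goal past a static neighbor
pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacles = [np.array([2.5, 0.2])]
for _ in range(200):
    pos = apf_step(pos, goal, obstacles)
print(pos)  # ends near the goal, having detoured around the obstacle
```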

Type: Article
Copyright: © The Author(s), 2021. Published by Cambridge University Press

