
Acceleration of hybrid MPI parallel NBODY6++ for large N-body globular cluster simulations

Published online by Cambridge University Press:  07 March 2016

Long Wang
Affiliation:
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing, China email: long.wang@pku.edu.cn
Rainer Spurzem
Affiliation:
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing, China; National Astronomical Observatories and Key Laboratory of Computational Astrophysics, Chinese Academy of Sciences, Beijing, China
Sverre Aarseth
Affiliation:
Institute of Astronomy, University of Cambridge, Cambridge, UK
Keigo Nitadori
Affiliation:
RIKEN Advanced Institute for Computational Science, Kobe, Japan
Peter Berczik
Affiliation:
National Astronomical Observatories and Key Laboratory of Computational Astrophysics, Chinese Academy of Sciences, Beijing, China
M.B.N. Kouwenhoven
Affiliation:
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing, China
Thorsten Naab
Affiliation:
Max-Planck Institut für Astrophysik, Garching, Germany

Abstract


Previous research on globular cluster (GC) dynamics is mostly based on semi-analytic, Fokker-Planck, and Monte-Carlo methods, and on direct N-body (NB) simulations. These approaches have great advantages but also limitations, since GCs are massive and compact, and close encounters and binaries play very important roles in their dynamics. The former three methods rely on approximations and assumptions, while the latter is limited by expensive computing time and the number of stars. The currently largest direct NB simulation has ~500k stars (Heggie 2014). Here, we accelerate the direct NB code NBODY6++ (which extends NBODY6 to supercomputers by using MPI) with new parallel computing technologies (GPU, OpenMP + SSE/AVX). Our aim is to handle large-N (up to 10^6) direct NB simulations in order to obtain a better understanding of the dynamical evolution of GCs.
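The layered parallelization mentioned above (MPI across compute nodes, OpenMP threads within a node, and SSE/AVX vectorization of the innermost force loop, with GPUs optionally taking over the regular force part) can be illustrated with a minimal sketch. The code below is not the NBODY6++ implementation; it is a hypothetical, simplified direct-summation force kernel in C++ showing how the three CPU-side levels of parallelism are typically combined. The block decomposition, the array layout, and the softening parameter eps2 (used here only to avoid the i = j singularity; NBODY6++ instead treats close encounters and binaries with KS regularization, Kustaanheimo & Stiefel 1965) are illustrative assumptions, not details taken from the paper.

// Hypothetical sketch (not NBODY6++ source): hybrid MPI + OpenMP direct-summation
// force computation. Each MPI rank computes accelerations for a block of
// particles; within a rank, OpenMP threads split that block, and the innermost
// loop is written so the compiler can vectorize it with SSE/AVX.
#include <mpi.h>
#include <omp.h>
#include <vector>
#include <cmath>

struct Particle { double x, y, z, m; };

// Compute accelerations for particles in [ibeg, iend) owned by this rank.
void local_forces(const std::vector<Particle>& p, std::vector<double>& ax,
                  std::vector<double>& ay, std::vector<double>& az,
                  std::size_t ibeg, std::size_t iend, double eps2) {
    const std::size_t n = p.size();
    #pragma omp parallel for schedule(static)
    for (std::size_t i = ibeg; i < iend; ++i) {
        double axi = 0.0, ayi = 0.0, azi = 0.0;
        #pragma omp simd reduction(+:axi,ayi,azi)  // hint for SSE/AVX vectorization
        for (std::size_t j = 0; j < n; ++j) {
            const double dx = p[j].x - p[i].x;
            const double dy = p[j].y - p[i].y;
            const double dz = p[j].z - p[i].z;
            const double r2 = dx*dx + dy*dy + dz*dz + eps2;  // eps2 only avoids i == j singularity here
            const double rinv3 = 1.0 / (r2 * std::sqrt(r2));
            axi += p[j].m * dx * rinv3;
            ayi += p[j].m * dy * rinv3;
            azi += p[j].m * dz * rinv3;
        }
        ax[i] = axi; ay[i] = ayi; az[i] = azi;
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const std::size_t N = 1024;  // toy problem size
    std::vector<Particle> p(N, {0.0, 0.0, 0.0, 1.0 / N});
    for (std::size_t i = 0; i < N; ++i)  // simple deterministic positions
        p[i] = {std::cos(0.1 * i), std::sin(0.1 * i), 0.01 * i, 1.0 / N};

    std::vector<double> ax(N, 0.0), ay(N, 0.0), az(N, 0.0);

    // Block-decompose the particles across MPI ranks.
    const std::size_t ibeg = rank * N / size;
    const std::size_t iend = (rank + 1) * N / size;
    local_forces(p, ax, ay, az, ibeg, iend, 1.0e-8);

    // Combine partial results so every rank holds the full acceleration arrays.
    MPI_Allreduce(MPI_IN_PLACE, ax.data(), N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(MPI_IN_PLACE, ay.data(), N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(MPI_IN_PLACE, az.data(), N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

Compiled with an MPI wrapper and OpenMP enabled (e.g. mpicxx -O3 -fopenmp), this toy reproduces the structure of the hybrid scheme: the distributed-memory layer partitions the work, the shared-memory layer keeps all cores of a node busy, and the vectorized inner loop exploits the SIMD units that SSE/AVX provide.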

Type
Contributed Papers
Copyright
Copyright © International Astronomical Union 2016 

References

Aarseth, S. J., 1963, MNRAS, 126, 223
Heggie, D. C., 2014, MNRAS, 445, 3435
Hemsendorf, M., Khalisi, E., Omarov, C. T., & Spurzem, R., 2003, High Performance Computing in Science and Engineering, Springer Verlag, 71, 388
Huang, S., Spurzem, R., & Berczik, P., 2015, RAA, in press (arXiv:1508.02510)
King, I. R., 1966, AJ, 71, 64
Kroupa, P., Tout, C. A., & Gilmore, G., 1993, MNRAS, 262, 545
Kroupa, P., 2001, MNRAS, 322, 231
Kustaanheimo, P. & Stiefel, E., 1965, J. Reine Angew. Math., 218, 204
Makino, J. & Hut, P., 1988, ApJS, 68, 833
Mikkola, S. & Aarseth, S. J., 1993, CeMDA, 57, 439
Nitadori, K. & Aarseth, S. J., 2012, MNRAS, 424, 545
Spurzem, R., 1999, JCoAM, 109, 407
Plummer, H. C., 1911, MNRAS, 71, 460
Wang, L., Spurzem, R., Aarseth, S., et al., 2015, MNRAS, 450, 4070