General

Arepo is a multi-purpose gravity and finite-volume magnetohydrodynamics (MHD) code for astrophysics. It is descended from the smoothed-particle hydrodynamics (SPH) code Gadget (whose most recent version is Gadget-4), with which it shares some aspects of code usage and structure; however, its performance, accuracy and numerical methods have been improved significantly over the years. One of the key differences is the discretization of the MHD equations on a moving Voronoi mesh, which combines the superior shock-capturing, mixing and convergence properties of grid-based schemes with the Galilean invariance of SPH. This combination has proven key for some types of simulations in astrophysics, in particular where the internal gas dynamics of objects moving relative to each other needs to be modeled accurately. Gravity is solved by a highly optimized tree-particle-mesh algorithm, similar to, but more accurate than, that of previous versions of the Gadget code. The public code release paper listed under ‘Arepo code papers’ gives a more detailed mathematical description of the techniques employed.
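To make the moving-mesh idea more concrete, the following minimal Python sketch (illustrative only; Arepo itself is written in C and its actual scheme is far more elaborate) moves a set of mesh-generating points with an assumed local fluid velocity and rebuilds the Voronoi tessellation each step. All names and parameters here are hypothetical.

    # Illustrative sketch of a moving Voronoi mesh; not Arepo's implementation.
    import numpy as np
    from scipy.spatial import Voronoi

    rng = np.random.default_rng(1)
    points = rng.random((64, 2))                   # mesh-generating points in a unit box
    velocity = rng.normal(0.0, 0.1, size=(64, 2))  # assumed local fluid velocities

    dt = 0.01
    for step in range(10):
        mesh = Voronoi(points)  # cells are the Voronoi regions around the points
        # A finite-volume scheme would now exchange fluxes across the cell
        # faces (mesh.ridge_points lists the point pairs sharing a face).
        points += velocity * dt  # the mesh drifts with the flow

In the actual scheme, fluxes are computed in the rest frame of each moving cell face, which is what makes the method Galilean invariant.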
Arepo is MPI-parallel and has been tested in simulations running on more than 20000 MPI ranks on state-of-the-art supercomputers, showing good scaling. This also makes the code well suited for simulations on compute clusters using a few hundred to a few thousand MPI ranks. The public version currently does not support shared-memory (e.g. OpenMP) parallelization. The getting-started section and the code wiki provide an introduction to using the code.

User support

We provide detailed examples that should work out of the box, as well as a user guide, to help newcomers get started with this (admittedly very complex) code. If the documentation is not sufficient, contact us directly via email.

Arepo code papers

E pur si muove: Galilean-invariant cosmological hydrodynamical simulations on a moving mesh – Springel, V., 2010, MNRAS, 401, 791
Paper introducing the original version of the Arepo code; it describes the main architecture and algorithms, though some aspects have changed over the years.

Magnetohydrodynamics on an unstructured moving grid – Pakmor, R., Bauer, A., Springel, V., 2011, MNRAS, 418, 1392
Paper introducing the magnetohydrodynamics module in Arepo.

Improving the convergence properties of the moving-mesh code AREPO – Pakmor, R. et al., 2016, MNRAS, 455, 1134
Paper introducing a new way to estimate the gradients from the neighbouring cells via a least-squares fit, which improves the convergence order as well as the angular momentum conservation. This method is now standard; a rough sketch of such an estimate follows this list.

The Arepo public code release – Weinberger, R., Springel, V., Pakmor, R., 2020, ApJS, 248, 32
Release paper of the public version of Arepo.
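
As a rough illustration of the gradient estimate mentioned above for Pakmor et al. (2016), here is a minimal Python sketch of a least-squares fit to the values in neighbouring cells; the function name and inputs are hypothetical, and Arepo's actual weighting and slope limiting are omitted.

    import numpy as np

    def ls_gradient(x0, phi0, x_nbrs, phi_nbrs):
        """Least-squares gradient of a scalar phi at cell centre x0."""
        dx = x_nbrs - x0        # displacements to the neighbour cell centres
        dphi = phi_nbrs - phi0  # corresponding differences in phi
        # Solve the overdetermined system dx @ grad = dphi in the
        # least-squares sense.
        grad, *_ = np.linalg.lstsq(dx, dphi, rcond=None)
        return grad

    # For a linear field phi = 2x + 3y the exact gradient is recovered:
    x_nbrs = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5], [0.3, -0.8]])
    phi = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
    x0 = np.zeros(2)
    print(ls_gradient(x0, phi(x0), x_nbrs, phi(x_nbrs)))  # -> [2. 3.]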

Simulation papers with modifications to Arepo

Simulations of magnetic fields in isolated disc galaxies – Pakmor, R., Springel, V., 2013, MNRAS, 432, 176
Paper introducing the Powell divergence-cleaning scheme for the magnetohydrodynamics module in Arepo.

Moving mesh cosmology: numerical techniques and global statistics – Vogelsberger, M. et al., 2012, MNRAS, 425, 3024
Paper presenting the first cosmological simulations performed with Arepo. It introduces some changes to the mesh regularization, as well as the coupling to the star formation module.

Simulation projects using Arepo

IllustrisTNG
A series of cosmological volume gravity+magnetohydrodynamics simulations with a comprehensive model for galaxy formation physics.

Auriga
A project simulating Milky Way-like galaxies in a cosmological context at very high resolution, combining gravity and ideal magnetohydrodynamics with a comprehensive model for galaxy formation physics.

Illustris
Published in 2014, Illustris was a major cosmological volume simulation which explored the formation of galaxies at a previously unprecedented scope.

Acknowledgements

We gratefully acknowledge financial support by the Priority Programme 1648 “SPPEXA” of the German Research Foundation (DFG) and by the European Research Council through ERC-StG grant EXAGAL-308037. Furthermore, we thank the Max Planck Computing and Data Facility (MPCDF, formerly known as RZG) for hosting the code repository and for their support.