GROMACS is a versatile package to perform molecular dynamics for systems with hundreds to millions of particles.
...
- GROMACS provides very high performance compared to other molecular dynamics programs.
- GROMACS can make simultaneous use of the CPUs and GPUs available in a system. There are options to statically and dynamically balance the load between the different resources.
- GROMACS is user-friendly, with topologies and parameter files written in clear text format.
- Both run input files and trajectories are independent of hardware endianness, and can thus be read by any version of GROMACS.
- GROMACS comes with a large selection of flexible tools for trajectory analysis.
- GROMACS can be run in parallel, using the standard MPI communication protocol.
- GROMACS contains several state-of-the-art algorithms.
- GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL).
Weaknesses
- To achieve very high simulation speed, GROMACS performs little additional analysis during a run.
- It can be challenging to extract non-standard information about the simulated system.
- Different versions sometimes differ in their default parameters and methods, so reproducing simulations from an older version with a newer one can be difficult.
- Some of the additional tools and utilities shipped with GROMACS are not of the highest quality.
...
Version | Installation Path | Modulefile | Compiler | Comment |
---|---|---|---|---|
**Modules for running on CPUs** | | | | |
2018.4 | /sw/chem/gromacs/2018.4/skl/impi | gromacs/2018.4 | intelmpi | |
2018.4 | /sw/chem/gromacs/2018.4/skl/impi-plumed | gromacs/2018.4-plumed | intelmpi | with PLUMED |
2019.6 | /sw/chem/gromacs/2019.6/skl/impi | gromacs/2019.6 | intelmpi | |
2019.6 | /sw/chem/gromacs/2019.6/skl/impi-plumed | gromacs/2019.6-plumed | intelmpi | with PLUMED |
2021.2 | /sw/chem/gromacs/2021.2/skl/impi | gromacs/2021.2 | intelmpi | |
2021.2 | /sw/chem/gromacs/2021.2/skl/impi-plumed | gromacs/2021.2-plumed | intelmpi | with PLUMED |
2022.5 | /sw/chem/gromacs/2022.5/skl/impi | gromacs/2022.5 | intelmpi | |
2022.5 | /sw/chem/gromacs/2022.5/skl/impi-plumed | gromacs/2022.5-plumed | intelmpi | with PLUMED |
**Modules for running on GPUs** | | | | |
2022.5 | /sw/chem/gromacs/2022.5/a100/impi | gromacs/2022.5 | gcc | with PLUMED |
2023.0 | /sw/chem/gromacs/2023.0/a100/tmpi_gcc | gromacs/2023.0_tmpi | | |
2024.0 | /sw/chem/gromacs/2024.0/a100/tmpi | gromacs/2024.0_tmpi | | |
*Release notes can be found here.
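
For example, one of the builds from the table can be loaded and checked as in the following sketch (whether additional compiler/MPI modules must be loaded first, and whether the binary is installed as gmx or gmx_mpi, depends on the particular build):

```bash
# Load one of the builds listed in the table above
module load gromacs/2022.5

# Show the version and build information of the binary now on the PATH
# (MPI builds typically install gmx_mpi instead of gmx)
gmx_mpi --version
```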
...
```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --ntasks=72
export SLURM_CPU_BIND=none
module load gcc/11.3.0 intel/2023.0.0 cuda/11.8
module load gromacs/2023.0_tmpi
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export OMP_NUM_THREADS=9
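# 4 thread-MPI ranks: 3 PP ranks plus 1 separate PME rank (-npme 1).
# -gputasks 0001 maps the three nonbonded (PP) tasks to GPU 0 and the
# PME task to GPU 1.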
gmx mdrun -ntomp 9 -ntmpi 4 -nb gpu -pme gpu -npme 1 -gputasks 0001 OTHER MDRUNARGUMENTS
```
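
The script can then be submitted to SLURM in the usual way (the file name below is only an illustrative placeholder):

```bash
# Submit the thread-MPI GPU job script shown above; the file name is a placeholder
sbatch gromacs_gpu_tmpi.sh
```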
If you are using an MPI version of GPU-accelerated GROMACS (non-thread-MPI, e.g. to take advantage of PLUMED), you can proceed in a similar fashion, but launch the GROMACS binary with the mpirun task launcher instead. An example job script using 2 A100 GPUs on each of 2 nodes is shown below:
```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72
export SLURM_CPU_BIND=none
module load gcc/11.3.0 cuda/11.8 impi/2021.11
module load gromacs/2022.5
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_ENABLE_DIRECT_GPU_COMM=true
export OMP_NUM_THREADS=9
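# -gpu_id 01 makes GPU devices 0 and 1 on each node available to the
# ranks running there (2 ranks per node, see -ppn 2 below).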
mpirun -np 4 -ppn 2 gmx_mpi mdrun -ntomp 9 -nb gpu -pme gpu -npme 1 -gpu_id 01 OTHER MDRUNARGUMENTS
```
Whole node GPU job script
...
```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --partition=gpu-a100
#SBATCH --ntasks=72
export SLURM_CPU_BIND=none
module load gcc/11.3.0 intel/2023.0.0 cuda/11.8
module load gromacs/2023.0_tmpi
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export OMP_NUM_THREADS=9
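# 16 thread-MPI ranks, 4 per GPU: the 16-digit -gputasks string assigns the
# nonbonded task of ranks 0-3 to GPU 0, ranks 4-7 to GPU 1, ranks 8-11 to
# GPU 2, and ranks 12-15 to GPU 3.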
gmx mdrun -ntomp 9 -ntmpi 16 -gputasks 0000111122223333 MDRUNARGUMENTS
```
Note: The numbers of thread-MPI ranks and OpenMP threads shown here are chosen to achieve optimal performance. The number of ranks should be a multiple of the number of sockets, and the number of cores per node should be a multiple of the number of threads per rank.
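
As an illustrative sketch (the 2-socket, 72-core node layout assumed here is hypothetical; check the actual hardware of your partition), one consistent choice would be:

```bash
# Hypothetical node with 2 sockets and 72 physical cores (36 per socket):
# 8 thread-MPI ranks    -> a multiple of the 2 sockets
# 9 OpenMP threads/rank -> 8 * 9 = 72, so the core count per node is a
#                          multiple of the threads per rank
export OMP_NUM_THREADS=9
gmx mdrun -ntmpi 8 -ntomp 9 MDRUNARGUMENTS
```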
Related Modules
Gromacs-Plumed
PLUMED is an open-source, community-developed library that provides a wide range of different methods, such as enhanced-sampling algorithms, free-energy methods and tools to analyze the vast amounts of data produced by molecular dynamics (MD) simulations. PLUMED works together with some of the most popular MD engines.
The gromacs/20XX.X-plumed modules are versions that have been patched with PLUMED's modifications; these versions are able to run metadynamics simulations.
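
As a minimal sketch (the module name is taken from the table above; the binary name gmx_mpi and the input file plumed.dat are assumptions to be adapted to your setup), a metadynamics run with a patched build could look like this:

```bash
# Load a PLUMED-patched build (see the module table above)
module load gromacs/2022.5-plumed

# The PLUMED patch adds the -plumed option to mdrun; plumed.dat is a
# placeholder for your own PLUMED input file defining the bias.
mpirun gmx_mpi mdrun -plumed plumed.dat -deffnm md
```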
Analyzing results
GROMACS Tools
...
Turbo-boost has been mostly disabled on Emmy at GWDG (partitions medium40, large40, standard96, large96, and huge96) in order to save energy. However, this has a particularly strong performance impact on GROMACS, typically in the range of 20-40%. Therefore, we recommend submitting GROMACS jobs with turbo-boost enabled by passing the --constraint=turbo_on option to srun or sbatch.
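
For example (the job script name is a placeholder):

```bash
# Request nodes with turbo-boost enabled when submitting
sbatch --constraint=turbo_on jobscript.sh

# ...or add the constraint inside the job script itself:
# #SBATCH --constraint=turbo_on
```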
Useful links
- GROMACS Manuals and documentation
- GROMACS Community Forums
- Useful MD Tutorials for GROMACS
- VMD Visual Molecular Dynamics