Gromacs¶
Gromacs is a molecular dynamics package mainly designed for simulations of proteins, lipids and nucleic acids.
Gromacs is available as a module on Apocrita.
Use Spack for additional variants
A few of the most commonly used variants of Gromacs have been installed by the Apocrita ITSR Apps Team. Advanced users may want to self-install their own additional variants via Spack.
Versions¶
Gromacs on Apocrita is available in three installation variants: gromacs (serial build for OpenMP jobs on a single node), gromacs-mpi (Open MPI build for parallel jobs) and gromacs-gpu (GPU-accelerated build).
Gromacs modules for GPU are installed with Open MPI libraries.
Usage¶
A typical Gromacs workload consists of two stages:
Firstly, grompp is used to prepare a file that contains the parameters for the simulation. The second stage is to use mdrun to take this file as input and run the simulation.
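As a minimal illustration (a sketch only, using the serial gmx binary and placeholder input file names), the two stages look like this:
# Stage 1: prepare the run input (.tpr) file from parameter, coordinate and topology files
gmx grompp -f example.mdp -c example.gro -p example.top -o example.tpr
# Stage 2: run the simulation using the prepared .tpr file
gmx mdrun -s example.tpr -deffnm example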
To run the required version of Gromacs, simply load the gromacs-<type> module, substituting <type> with the desired variant.
Core Usage
To ensure that Gromacs uses the correct number of cores, select an appropriate parallelisation scheme and use the $NSLOTS environment variable to match your threading to your core request.
To use OpenMP (serial single-node CPU jobs only), pass the -ntomp ${NSLOTS} option to spawn one OpenMP thread per core (see example below).
For Open MPI jobs, use mpirun -np ${NSLOTS} (see serial and parallel examples below).
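For example (a sketch only; full job scripts are shown in the examples below):
# OpenMP build: one OpenMP thread per requested core
gmx mdrun -ntomp ${NSLOTS} -s example.tpr -deffnm example
# Open MPI build: one MPI process per requested core
mpirun -np ${NSLOTS} gmx_mpi mdrun -s example.tpr -deffnm example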
Serial (OpenMP)¶
This is the basic serial build of Gromacs, intended to run jobs that thread using OpenMP (-ntomp ${NSLOTS}) on a single node.
To run the serial version of Gromacs, simply load the gromacs module:
module load gromacs
gmx -h
An example of an OpenMP serial Gromacs job can be found here.
Open MPI¶
This build of Gromacs is installed with Open MPI parallelisation to run parallel simulations.
To run the required version of Gromacs with Open MPI support, simply load the gromacs-mpi module:
module load gromacs-mpi
gmx_mpi -h
Example jobs can be found below.
GPU¶
This build of Gromacs provides GPU support, allowing computationally intensive simulations to run with high performance on a GPU node.
GPU job submission is required
Gromacs with GPU support requires a GPU node. Information on how to submit to GPU nodes is available here.
To run the required version of Gromacs with GPU support, simply load the gromacs-gpu module:
module load gromacs-gpu
An example Gromacs job script requesting a GPU node can be found here.
Example jobs¶
Use one parallelisation scheme, don't mix
Most jobs should use either OpenMP or Open MPI to spawn multiple threads. Single-node CPU jobs can use either method (see examples below), but you must use Open MPI for multi-node CPU parallel jobs. For more information see Parallelisation schemes below, or the more detailed official Gromacs documentation.
Gromacs job submission examples of each type:
Serial jobs¶
Serial job (OpenMP)¶
Here is an example job running on 4 cores with 4GB of total memory (1GB per core):
#!/bin/bash
#$ -cwd
#$ -pe smp 4
#$ -l h_rt=1:0:0
#$ -l h_vmem=1G
module load gromacs
gmx grompp -f example.mdp -c example.gro -p example.top -o example.tpr
gmx mdrun -ntomp ${NSLOTS} -v -s example.tpr -deffnm example
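Assuming the script above is saved as, say, gromacs_serial.sh (a hypothetical file name), it can be submitted with:
qsub gromacs_serial.sh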
Serial job (Open MPI)¶
Here is an example job running on 4 cores with 4GB of total memory (1GB per core):
#!/bin/bash
#$ -cwd
#$ -pe smp 4
#$ -l h_rt=1:0:0
#$ -l h_vmem=1G
module load gromacs-mpi
gmx_mpi grompp -f example.mdp -c example.gro -p example.top -o example.tpr
mpirun -np ${NSLOTS} gmx_mpi mdrun -v -deffnm example -s example.tpr
Parallel job (Open MPI only)¶
Here is an example job running on 96 cores across 2 ddy nodes with MPI:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe parallel 96
#$ -l infiniband=ddy-i
#$ -l h_rt=240:0:0
module load gromacs-mpi
gmx_mpi grompp -f example.mdp -c example.gro -p example.top -o example.tpr
mpirun -np ${NSLOTS} gmx_mpi mdrun -v -deffnm example -s example.tpr
GPU job¶
Here is an example job running on 1 GPU:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 8
#$ -l h_rt=240:0:0
#$ -l h_vmem=11G
#$ -l gpu=1
module load gromacs-gpu
gmx_mpi grompp -f example.mdp -c example.gro -p example.top -o example.tpr
# Offload non-bonded, PME, coordinate update and bonded calculations to the GPU
gmx_mpi mdrun -s example.tpr -nb gpu -pme gpu -update gpu -bonded gpu
Parallelisation schemes¶
Gromacs provides considerable flexibility with respect to how it can be configured. A command of the form mpirun -np M gmx_mpi mdrun -ntomp N ... will launch M MPI processes with N OpenMP threads each. If you omit -ntomp N, Gromacs will spawn one thread per MPI process.
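For instance (an illustrative sketch only), a 96-core parallel job could be split into 24 MPI processes with 4 OpenMP threads each, so that the product matches the core request:
# 24 MPI processes x 4 OpenMP threads = 96 cores in total
mpirun -np 24 gmx_mpi mdrun -ntomp 4 -v -deffnm example -s example.tpr
Whether such a hybrid split outperforms a purely MPI run depends on the system being simulated, so benchmark short runs before committing to long ones.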
To make this command work well with the scheduler, we suggest:
mpirun -np ${NSLOTS} gmx_mpi mdrun ...
In this case ${NSLOTS} will expand to the number of cores you have requested in your job script. That way you won't waste resources by requesting more cores than Gromacs will actually use.
By default, Gromacs carries out particle-particle (PP) and particle mesh Ewald (PME) calculations one after another within the same process. However this can slow things down a lot since PME calculations depend on global communication and they may spend time waiting for other nodes to become available.
If a job uses more than 8 MPI processes, then mdrun will attempt to designate dedicated processes for PME, estimating the optimal number itself. However, you can override this behaviour:
mpirun -np NP_tot gmx_mpi mdrun -npme NP_pme -ntomp NT
This will launch NP_tot MPI processes, with NP_pme of them dedicated to PME and NT OpenMP threads per process. We recommend providing NP_tot via the ${NSLOTS} environment variable, as this ensures that your job distributes the workload properly across all cores when using multiple nodes.
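For example, here is a sketch that reserves 8 of the requested processes for PME (the values are illustrative; the best split depends on the system being simulated):
# ${NSLOTS} MPI processes in total, 8 of which perform PME only,
# with one OpenMP thread per process
mpirun -np ${NSLOTS} gmx_mpi mdrun -npme 8 -ntomp 1 -v -deffnm example -s example.tpr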