LAMMPS

LAMMPS is a classical molecular dynamics application for large-scale parallel atomic/molecular simulations of solid-state materials, soft matter and mesoscopic systems.

LAMMPS is available as a module on Apocrita.

Versions

Regular and GPU-accelerated versions have been installed on Apocrita.

Load the GPU-accelerated module that matches the GPU type

As the lmp_gpu binary was compiled against a specific GPU card type, ensure that a compatible GPU type is requested and the matching module is loaded; otherwise, you will see a CUDA error when running your code. See the example jobs below for further information.

Usage

To run the required version, load one of the following modules:

  • For LAMMPS (non-GPU), load lammps/<version>
  • For GPU-accelerated LAMMPS versions, load lammps-gpu/<version>

To run the default installed version of LAMMPS, simply load the lammps module:

$ module load lammps
$ mpirun -np ${NSLOTS} lmp_intel_cpu_intelmpi --help

Usage example: lmp_intel_cpu_intelmpi -var t 300 -echo screen -in in.alloy
...

For full usage documentation, pass the --help option.
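
To see which versions are currently installed, you can query the module system (standard environment modules commands; the version strings returned will depend on the current Apocrita installation):

$ module avail lammps
$ module avail lammps-gpu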

Example jobs

Serial job

AVX-512 CPU instruction set required

Non-GPU versions of LAMMPS require the AVX-512 instruction set. To ensure that a serial job runs on nodes supporting AVX-512, include the -l avx512 parameter in the job script. Refer to the node types page for a table of supported CPU instruction sets per node.
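
If you are unsure whether a node supports AVX-512, you can inspect its CPU flags directly (a generic Linux check, run on the node in question; a non-zero count indicates support):

$ grep -c avx512f /proc/cpuinfo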

Here is an example job running on 4 cores and 16GB of total memory (4GB per core):

#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 4
#$ -l h_rt=1:0:0
#$ -l h_vmem=4G
#$ -l avx512

module load lammps

mpirun -np ${NSLOTS} lmp_intel_cpu_intelmpi \
       -in in.file \
       -log output.log
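
Save the script and submit it with qsub as usual (the filename here is illustrative):

$ qsub lammps_serial.sh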

Parallel job

Here is an example job running on 96 cores across 2 ddy nodes with MPI:

#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe parallel 96
#$ -l infiniband=ddy-i
#$ -l h_rt=240:0:0

module load lammps

mpirun -np ${NSLOTS} lmp_intel_cpu_intelmpi \
       -in in.file \
       -log output.log
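
Both CPU examples read a LAMMPS input deck named in.file. If you only want to verify your setup, a minimal deck you could save as in.file is the classic Lennard-Jones melt from the LAMMPS benchmark suite (a sketch; adjust the system size and run length for real workloads):

# in.file: minimal Lennard-Jones melt benchmark
units lj
atom_style atomic

# Build an FCC lattice of 32,000 atoms
lattice fcc 0.8442
region box block 0 20 0 20 0 20
create_box 1 box
create_atoms 1 box
mass 1 1.0

# Initial velocities at reduced temperature 3.0
velocity all create 3.0 87287 loop geom

# Truncated Lennard-Jones interactions
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify every 20 delay 0 check no

# Constant-energy time integration
fix 1 all nve

run 100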

GPU jobs

Here is an example job running on 1 GPU on the SBG (Volta) nodes:

#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 12
#$ -l h_rt=240:0:0
#$ -l h_vmem=7.5G
#$ -l gpu=1
#$ -l gpu_type=volta

# Load the GPU-accelerated version which has been compiled
# for the Volta GPU type
module load lammps-gpu/<version>-volta

mpirun -np ${NSLOTS} lmp_gpu \
       -sf gpu \
       -pk gpu 1 \
       -in in.lc \
       -log in.lc.log
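
The -sf gpu switch appends the gpu suffix to supported styles, and -pk gpu 1 configures the GPU package to use one GPU. The same effect can be achieved inside the input script rather than on the command line (standard LAMMPS commands; see the LAMMPS package documentation for further options):

# Equivalent to "-pk gpu 1 -sf gpu" on the command line;
# place near the top of the input script
package gpu 1
suffix gpu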

Here is an example job running on 2 GPUs on the SBG (Volta) nodes:

#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 24
#$ -l h_rt=240:0:0
#$ -l h_vmem=7.5G
#$ -l gpu=2
#$ -l gpu_type=volta

# Load the GPU-accelerated version which has been compiled
# for the Volta GPU type
module load lammps-gpu/<version>-volta

mpirun -np ${NSLOTS} lmp_gpu \
       -sf gpu \
       -pk gpu 2 \
       -in in.lc \
       -log in.lc.log
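
To check that the job is actually using both GPUs, you can ssh to the compute node while the job is running and inspect utilisation with the standard NVIDIA tool:

$ nvidia-smi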
