RELION

REgularised LIkelihood OptimisatioN (RELION) employs an empirical Bayesian approach to refinement of (multiple) 3D reconstructions or 2D class averages in electron cryomicroscopy.

RELION is available as a module on Apocrita.

Usage

To run the default installed version of RELION, simply load the relion module:

module load relion

RELION 4

To specifically load RELION 4:

module load relion/4

For usage documentation, pass the -h switch to any of the RELION commands, for example: relion_refine -h.

RELION 5

To specifically load RELION 5:

module load relion/5

Since RELION 5 runs from a container, you need to prefix the command you wish to run with relion, e.g.:

relion relion_refine -h

GUI support for RELION

RELION has been compiled with GUI support and the relion binary is available on Apocrita; however, we strongly recommend using our OnDemand service for GUI jobs.

Example jobs

The following examples demonstrate refinement within RELION using the relion50_tutorial_precalculated_results dataset from the official Single Particle Analysis tutorial.

Serial job

Here is an example refinement job running on 4 cores and 16GB of memory:

#!/bin/bash
#SBATCH -n 4               # (or --ntasks=4) Request 4 cores
#SBATCH --mem-per-cpu=4G   # Request 4GB RAM per core (16G total)
#SBATCH -t 1:0:0           # (or --time=1:0:0) Request 1 hour runtime

# If using RELION 4
module load relion/4
relion_refine \
  --i Extract/job018/particles.star \
  --o output \
  --j ${SLURM_NTASKS}

# If using RELION 5
module load relion/5
relion relion_refine \
  --i Extract/job018/particles.star \
  --o output \
  --j ${SLURM_NTASKS}

Serial MPI job (RELION 5 only)

WARNING

By default, many RELION jobs make poor choices about CPU threading and MPI process counts. Ensure you set these to match the resources requested for your job, as detailed in our RELION Open OnDemand documentation. Monitor CPU core usage while your job runs, use jobstats to check CPU efficiency and overall resource usage for completed jobs, and adjust future jobs accordingly.
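As a rough guide to the efficiency figure jobstats reports (the exact output format may differ), CPU efficiency is total CPU time divided by walltime times cores. The numbers below are invented for illustration:

```shell
# Illustrative CPU-efficiency arithmetic: a 4-core job that ran for 1 hour
# of walltime but only accumulated 2 hours of CPU time is 50% efficient,
# i.e. on average only 2 of the 4 requested cores were busy.
CORES=4
WALLTIME_S=3600    # elapsed walltime in seconds
CPUTIME_S=7200     # total CPU time across all cores, in seconds
EFF=$((100 * CPUTIME_S / (WALLTIME_S * CORES)))
echo "CPU efficiency: ${EFF}%"
```

If the efficiency is consistently low, reduce the cores requested or raise the thread/process counts passed to RELION.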

Jobs can be "designed" using the RELION Open OnDemand App interface and then ported over to a job script. To establish the RELION command, once you have designed the parameters for the job, click the "Check command" button:

(Screenshot: the RELION "Check command" button)

You will be presented with a one-line command that starts along the lines of:

`which relion_refine_mpi` (etc.)

Copy and paste the full command into your job script and replace the first section with:

relion-mpi --np X relion_refine_mpi (etc.)

(Replace relion_refine_mpi with the specific RELION binary and X with the number of cores requested)
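As an illustration of that substitution (the copied command line and the --np value below are invented for the example), the rewrite can even be done mechanically with sed:

```shell
# A command line as copied from the "Check command" button (illustrative):
CMD='`which relion_refine_mpi` --i Extract/job018/particles.star --o output --j 1'

# Swap the `which <binary>` prefix for the relion-mpi wrapper, with a
# process count matching the cores requested for the job:
NP=4
NEW=$(printf '%s\n' "$CMD" | sed "s|^\`which \([a-z_]*\)\`|relion-mpi --np ${NP} \1|")
echo "$NEW"
```

In practice a manual copy-and-edit in your job script is just as good; the point is only that the `which ...` prefix is replaced and the rest of the command is kept verbatim.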

Here is an example RELION 5 serial MPI job running on 4 cores, requesting 4 MPI processes ("--np ${SLURM_NTASKS}") and 1 thread per process ("--j 1") for a total of 4 threads:

#!/bin/bash
#SBATCH -n 4               # (or --ntasks=4) Request 4 cores
#SBATCH --mem-per-cpu=4G   # Request 4GB RAM per core (16G total)
#SBATCH -t 1:0:0           # (or --time=1:0:0) Request 1 hour runtime

module load relion/5

relion-mpi \
  --np ${SLURM_NTASKS} \
  relion_refine_mpi \
    --i Extract/job018/particles.star \
    --o output \
    --j 1

This job's output should look similar to the following:

Using Point-to-Point MPI Blocksize = 4294967296 bytes
Using Collective MPI Blocksize     = 67108864 bytes
RELION version: 5.0.1-commit-73f0a3
Precision: BASE=double, VECTOR-ACC=single

 === RELION MPI setup ===
 + Number of MPI processes                 = 4
 + Leader      (0) runs on host            = (NODE)
 + Follower     1  runs on host            = (NODE)
 + Follower     2  runs on host            = (NODE)
 + Follower     3  runs on host            = (NODE)
 ==========================
 Running CPU instructions in double precision.
(etc.)
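As a quick sanity check on the figures in this example (not something RELION requires you to run), the shell arithmetic below confirms that --np times --j matches the requested core count. SLURM_NTASKS is hard-coded here for illustration; inside a real job Slurm sets it for you:

```shell
# MPI processes (--np) times threads per process (--j) should equal the
# cores requested from Slurm, otherwise cores sit idle or are oversubscribed.
SLURM_NTASKS=4   # set by Slurm in a real job; hard-coded for this sketch
NP=4             # --np: number of MPI processes
J=1              # --j: threads per MPI process
TOTAL=$((NP * J))
if [ "$TOTAL" -eq "$SLURM_NTASKS" ]; then
  echo "OK: ${TOTAL} threads match ${SLURM_NTASKS} requested cores"
else
  echo "Mismatch: ${TOTAL} threads vs ${SLURM_NTASKS} requested cores" >&2
fi
```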

Parallel job (RELION 4 only)

Parallel jobs are only supported by RELION 4

RELION 4 supports multi-node parallel jobs; RELION 5 currently runs from a container and does not support multi-node jobs using Open MPI.

Here is an example RELION 4 job running on 96 cores across 2 ddy nodes using MPI:

#!/bin/bash
#SBATCH -N 2          # (or --nodes=2) Request 2 nodes
#SBATCH -n 96         # (or --ntasks=96) Request 96 cores
#SBATCH -p parallel   # (or --partition=parallel) Request the parallel partition
#SBATCH -t 240:0:0    # (or --time=240:0:0) Request 240 hours runtime
#SBATCH --exclusive
#SBATCH --mem=0

module load relion/4

# Slurm knows how many tasks to use for mpirun, detected automatically from
# ${SLURM_NTASKS}. Use -- to ensure arguments are passed to the application
# and not mpirun
mpirun \
  -- \
  relion_refine_mpi \
    --i Extract/job018/particles.star  \
    --o output
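A quick check of the resource arithmetic in this script (nothing RELION-specific; the values mirror the example above):

```shell
# 96 MPI ranks spread evenly over 2 exclusive nodes = 48 ranks per node,
# matching the core count of a ddy node in this example.
NODES=2
NTASKS=96
PER_NODE=$((NTASKS / NODES))
echo "${PER_NODE} MPI ranks per node"
```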

GPU serial job

Here is an example job running on 1 GPU:

#!/bin/bash
#SBATCH -n 8                # (or --ntasks=8) Request 8 cores
#SBATCH --cpus-per-gpu=8    # 8 cores per GPU
#SBATCH -p gpushort         # (or --partition=gpushort) Request the gpushort partition
#SBATCH -t 1:0:0            # (or --time=1:0:0) Request 1 hour runtime
#SBATCH --mem-per-cpu=11G   # Request 11GB RAM per core (88G total)
#SBATCH --gres=gpu:1        # Request 1 GPU of any type

# If using RELION 4
module load relion/4
relion_refine \
  --i Extract/job018/particles.star \
  --o output \
  --j ${SLURM_NTASKS} \
  --gpu

# If using RELION 5
module load relion/5
relion relion_refine \
  --i Extract/job018/particles.star \
  --o output \
  --j ${SLURM_NTASKS} \
  --gpu

GPU serial MPI job (RELION 5 only)

WARNING

By default, many RELION jobs make poor choices about CPU threading, MPI process counts and GPU counts. Ensure you set these to match the resources requested for your job, as detailed in our RELION Open OnDemand documentation. Monitor both GPU usage and CPU core usage while your job runs, use jobstats to check CPU efficiency and overall resource usage for completed jobs, and adjust future jobs accordingly.

Jobs can be "designed" using the RELION Open OnDemand App interface and then ported over to a job script. To establish the RELION command, once you have designed the parameters for the job, click the "Check command" button:

(Screenshot: the RELION "Check command" button)

You will be presented with a one-line command that starts along the lines of:

`which relion_refine_mpi` (etc.)

Copy the full command into your job script and replace the first section with:

relion-mpi --np 2 relion_refine_mpi (etc.)

(Replace relion_refine_mpi with the specific RELION binary)

Here is an example RELION 5 GPU serial MPI job running on 8 cores and 1 GPU, requesting 2 MPI processes ("--np 2": one "Leader" and one "Follower") and 4 threads per MPI process ("--j 4") for a total of 8 threads:

#!/bin/bash
#SBATCH -n 8                # (or --ntasks=8) Request 8 cores
#SBATCH --cpus-per-gpu=8    # 8 cores per GPU
#SBATCH -p gpushort         # (or --partition=gpushort) Request the gpushort partition
#SBATCH -t 1:0:0            # (or --time=1:0:0) Request 1 hour runtime
#SBATCH --mem-per-cpu=11G   # Request 11GB RAM per core (88G total)
#SBATCH --gres=gpu:1        # Request 1 GPU of any type


module load relion/5
relion-mpi \
  --np 2 \
  relion_refine_mpi \
    --i Extract/job018/particles.star \
    --o output \
    --j 4 \
    --gpu 0

This job's output should look similar to the following:

RELION version: 5.0.1-commit-73f0a3
Precision: BASE=double, CUDA-ACC=single

 === RELION MPI setup ===
 + Number of MPI processes                 = 2
 + Number of threads per MPI process       = 4
 + Total number of threads therefore       = 8
 + Leader      (0) runs on host            = (NODE)
 + Follower     1  runs on host            = (NODE)
 ==========================
 uniqueHost (NODE) has 1 ranks.
 Follower 1 will distribute threads over devices  0
 Thread 0 on follower 1 mapped to device 0
 Thread 1 on follower 1 mapped to device 0
 Thread 2 on follower 1 mapped to device 0
 Thread 3 on follower 1 mapped to device 0
 Running CPU instructions in double precision.
(etc.)
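The thread accounting RELION prints in that output can be reproduced with a couple of lines of shell; the values mirror this example (2 processes times 4 threads = 8, matching the 8 cores requested):

```shell
# Reproduce the "Total number of threads" line from the RELION MPI setup
# output above: processes (--np) times threads per process (--j).
NP=2           # --np: one Leader plus one Follower
J=4            # --j: threads per MPI process
CORES=8        # requested with -n 8
TOTAL=$((NP * J))
echo "Total number of threads therefore = ${TOTAL}"
[ "$TOTAL" -eq "$CORES" ] && echo "matches requested cores"
```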

References