Legacy SL6 container

The legacy container provides a way to run applications that were available on our old cluster, which ran Scientific Linux 6.2.

The Singularity container /data/containers/sl6/sl6compute.img is available on Apocrita and is essentially a replica of one of the old compute nodes.

Usage

To access a shell or run a script within the container, you will first need to load the singularity module:

$ module load singularity
$ singularity shell /data/containers/sl6/sl6compute.img
Singularity~> cat /etc/redhat-release
Scientific Linux release 6.2 (Carbon)

Using the module command inside the container will provide access to the legacy applications. Data files on the cluster filesystem can still be accessed and written from within the container.

Use qlogin for interactive sessions

Using singularity shell for interactive container sessions can be useful for initial testing. As with other interactive tasks, these tests should be performed within qlogin sessions, and not run directly on the login nodes.
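
For example, an interactive session can be requested first and the container entered from there; the resource requests shown below are illustrative only, so adjust them to suit your work:

$ qlogin -l h_rt=1:0:0 -l h_vmem=2G
$ module load singularity
$ singularity shell /data/containers/sl6/sl6compute.img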

Singularity~> module avail samtools
---------------- /data/apps/environmentmodules/general ----------------
samtools/0.1.18         samtools/1.1            samtools/1.3.1(default)

Singularity~> module load samtools
Singularity~> samtools view -b -S -o genome_reads_aligned.bam genome_reads_aligned.sam
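
Output files written inside the container, such as the BAM file created above, are stored on the cluster filesystem and remain available after leaving the container:

Singularity~> exit
$ ls genome_reads_aligned.bam
genome_reads_aligned.bam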

Support for parallel applications

While Singularity does support multi-node parallel jobs with MPI, the MPI versions from the legacy environment are too old to work in this way. Therefore, we have removed these packages from the container to reduce its size and complexity.

Example jobs

When running batch jobs, the best approach is to combine all the commands to be run in the container into a single file, and run it with the singularity exec command.

Serial jobs

Create a file run_commands.sh containing the commands that would have been run as a job script in the legacy environment, and make it executable with chmod +x run_commands.sh.

#!/bin/bash
# Load the legacy MaSuRCA assembler from the container's module tree
module load masurca
# masurca reads the configuration file and generates the assemble.sh
# script, which then runs the assembly
masurca example.cfg
./assemble.sh

Then prepare a job script according to your resource requirements, to be submitted in the usual way with qsub <jobscript>:

#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 1
#$ -l h_rt=1:0:0
#$ -l h_vmem=1G

module load singularity
singularity exec /data/containers/sl6/sl6compute.img ./run_commands.sh
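
If the job script above were saved as, for example, legacy_job.sh, it would be submitted with:

$ qsub legacy_job.sh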

References

Singularity Documentation