Minkowski Engine

Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers such as convolution, pooling, unpooling, and broadcasting operations for sparse tensors.
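For illustration, a minimal Minkowski Engine script looks something like the following sketch (the coordinates, feature values and channel sizes below are arbitrary placeholders):

import torch
import MinkowskiEngine as ME

# Coordinates: the first column is the batch index, the remaining columns
# are integer voxel coordinates (3D in this example).
coords = torch.IntTensor([[0, 0, 0, 0],
                          [0, 1, 0, 0],
                          [0, 0, 1, 1]])
feats = torch.rand(3, 4)  # one 4-channel feature vector per coordinate

x = ME.SparseTensor(features=feats, coordinates=coords)

# A sparse 3D convolution, the sparse-tensor analogue of torch.nn.Conv3d.
conv = ME.MinkowskiConvolution(in_channels=4, out_channels=8,
                               kernel_size=3, dimension=3)
y = conv(x)
print(y.F.shape)  # features of the output sparse tensor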

Minkowski Engine is available as an Apptainer container on Apocrita.

Usage

Minkowski Engine requires a suite of supporting tools to be installed, so for reproducibility we provide Minkowski Engine and all of its supporting tools together in a single container.

To run the default version of Minkowski Engine, simply load the minkowski_engine module:

module load minkowski_engine

Calling python after loading the minkowski_engine module will invoke the Python interpreter installed inside the container. This entry point will also automatically make use of any requested GPU cards.
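For example, to confirm that the container's Python and Minkowski Engine installation are being picked up, you can run a quick check such as the following (the exact interpreter path depends on the container layout):

import sys
import MinkowskiEngine as ME

print(sys.executable)  # the Python interpreter inside the container
print(ME.__version__)  # the Minkowski Engine version shipped in the container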

Example jobs

CUDA support

CUDA is installed inside the Minkowski Engine container, so you do not need to load a CUDA module before running your analysis.
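If you want to confirm that the container can see the GPUs you requested, a short check like the following (run inside a GPU job) is usually enough:

import torch

print(torch.cuda.is_available())  # True when the job has requested a GPU
print(torch.cuda.device_count())  # number of GPUs visible to the job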

Serial job

Here is an example job running on 1 core and 5GB of memory:

#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 1
#$ -l h_rt=1:0:0
#$ -l h_vmem=5G

module load minkowski_engine

python train_network.py

GPU job

Here is an example job running on 12 cores and 1 GPU:

#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 12
#$ -l h_rt=240:0:0
#$ -l h_vmem=7.5G
#$ -l gpu=1

module load minkowski_engine

python train_network.py
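The train_network.py script above is your own code; as a rough sketch of how such a script might place its data and network on the requested GPU (placeholder shapes and layer sizes, and assuming the device argument of SparseTensor introduced in Minkowski Engine 0.5):

import torch
import MinkowskiEngine as ME

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder data; in practice this comes from your own data loader.
coords = torch.IntTensor([[0, 0, 0, 0],
                          [0, 1, 0, 0]])
feats = torch.rand(2, 4)

# Build the sparse tensor directly on the requested GPU.
x = ME.SparseTensor(features=feats, coordinates=coords, device=device)

# Move the network to the same device before the forward pass.
net = ME.MinkowskiConvolution(in_channels=4, out_channels=8,
                              kernel_size=3, dimension=3).to(device)
out = net(x)
print(out.F.device)  # should report a CUDA device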
