DeepLabCut is a toolbox for markerless pose estimation of animals performing various tasks.
DeepLabCut is a Python 3 package which uses TensorFlow 1.8 on GPU nodes, and can be run on Apocrita inside a Python virtualenv.
Researchers need to request permission to be added to the list of GPU node users.
!!! note "DeepLabCut only supports batch mode on Apocrita"
    DeepLabCut is provided for computationally intensive batch mode tasks and
    does not support any GUI tasks. Parts of the workflow, such as using the
    GUI to manually label frames, should be performed on a local workstation.
For the initial setup, create a virtual environment, activate it, and use a requirements file to install packages into the virtual environment.
To also run the DeepLabCut examples, clone the repository (these examples assume you are doing this in your home directory), as follows:

```bash
git clone https://github.com/AlexEMG/DeepLabCut.git
cd DeepLabCut
```
Using an editor such as vim, create a file called `requirements.txt` containing
the following text:

```
deeplabcut
ipywidgets
seaborn
tensorflow-gpu==1.8
imageio==2.3.0
imageio-ffmpeg
https://extras.wxpython.org/wxPython4/extras/linux/gtk3/centos-7/wxPython-4.0.6-cp36-cp36m-linux_x86_64.whl
```
Create the environment, which in these examples is called `dlcenv` and will be
created in the DeepLabCut directory.
```bash
# Load the python module to use python3 on the cluster
module load python

# Create an empty virtual environment, and activate it
virtualenv dlcenv
source dlcenv/bin/activate

# Install the python packages from the requirements file
pip install -r requirements.txt
```
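Once `pip install` has finished, you can sanity-check that the pinned versions
from the requirements file were actually picked up. The sketch below is an
illustration only (`check_pins` is a hypothetical helper, and the two pins
shown mirror the requirements file above):

```python
# Sketch: report installed versions for pinned requirements.
# Assumes pkg_resources (from setuptools) is available in the virtualenv.
import pkg_resources

def check_pins(pins):
    """Return {package: (wanted, installed-or-None)} for each pinned package."""
    report = {}
    for name, wanted in pins.items():
        try:
            installed = pkg_resources.get_distribution(name).version
        except pkg_resources.DistributionNotFound:
            installed = None  # package is missing from this environment
        report[name] = (wanted, installed)
    return report

# The pins below mirror the requirements file above.
for name, (wanted, installed) in check_pins(
        {"tensorflow-gpu": "1.8", "imageio": "2.3.0"}).items():
    print("%s: wanted %s, installed %s" % (name, wanted, installed))
```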
After the initial package installation, DeepLabCut can be activated in later
sessions with the following commands inside a job script or interactive session:
```bash
module load python
module load cudnn/7.4-cuda-9.0
source ~/DeepLabCut/dlcenv/bin/activate
```
The path to the virtual environment may vary, depending on where it was installed.
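If you are unsure whether the activation succeeded, a quick programmatic check
is possible from Python. This is a minimal sketch (`in_virtualenv` is a
hypothetical helper, not part of DeepLabCut):

```python
# Sketch: confirm a virtual environment is active before running DeepLabCut.
# virtualenv sets sys.real_prefix; the stdlib venv module instead makes
# sys.prefix differ from sys.base_prefix, so check both.
import sys

def in_virtualenv():
    return (hasattr(sys, "real_prefix")
            or sys.prefix != getattr(sys, "base_prefix", sys.prefix))

print("virtual environment active:", in_virtualenv())
```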
## Verify the installation
The following commands will perform a very basic check of the DeepLabCut installation within an interactive session:
```
qlogin -l gpu=1 -pe smp 8 -l h_vmem=7.5G
Establishing builtin session to host sbg1.apocrita ...
~$ module load python cudnn/7.4-cuda-9.0
~$ export DLClight=True
~$ python
Python 3.6.3 (default, Oct  4 2017, 15:04:38)
>>> import deeplabcut; import tensorflow as tf
DLC loaded in light mode; you cannot use the relabeling GUI!
>>> print(tf.__version__)
1.8.0
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla V100-PCIE-16GB, pci bus id: 0000:06:00.0, compute capability: 7.0
>>> help(deeplabcut.analyze_videos)
```
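The same GPU check can be scripted rather than typed interactively. The sketch
below uses the TensorFlow 1.x `tf.test.is_gpu_available()` call; the import is
guarded so the script also runs (and reports `None`) on machines without
TensorFlow installed:

```python
# Sketch: a scriptable version of the interactive GPU check.
# Returns None if TensorFlow is not importable, True/False otherwise.
def gpu_visible():
    try:
        import tensorflow as tf
    except ImportError:
        return None  # TensorFlow not installed in this environment
    return tf.test.is_gpu_available()

print("GPU visible to TensorFlow:", gpu_visible())
```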
## Example GPU job
DeepLabCut only uses one GPU, so job scripts should only request one GPU. The following job script will use a script from the DeepLabCut examples directory to create a project and run some tasks.
```bash
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 8
#$ -l h_vmem=7.5G
#$ -l h_rt=240:0:0
#$ -l gpu=1

# Do not attempt to use a GUI
export DLClight=True

module load python
module load cudnn/7.4-cuda-9.0
source ~/DeepLabCut/dlcenv/bin/activate

cd ~/DeepLabCut/examples
python testscript.py
```
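Beyond the bundled `testscript.py`, a real job would typically call the
DeepLabCut API directly. The sketch below is a hedged example: the config path
and video list are placeholder values, and the import is guarded so the script
degrades gracefully outside the cluster environment:

```python
# Sketch of a headless analysis script (paths are placeholders).
import os

# DLClight must be set before deeplabcut is imported.
os.environ["DLClight"] = "True"

try:
    import deeplabcut
except ImportError:
    deeplabcut = None  # not installed outside the virtualenv

config = "/path/to/project/config.yaml"  # hypothetical project config
videos = ["/path/to/video1.avi"]         # hypothetical video list

if deeplabcut is not None:
    # analyze_videos runs the trained network over each video.
    deeplabcut.analyze_videos(config, videos, save_as_csv=True)
else:
    print("deeplabcut is not installed; activate the virtualenv first")
```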
!!! tip "Checking that the GPU is being used correctly"
    `ssh <nodename> nvidia-smi` will query the GPU status on a node. You can
    determine which node your job is using with the `qstat` command.