HPC Introduction

High Performance Computing (HPC) enables researchers to scale their data processing, simulations and computations across hundreds of cores.

In recent years the number of HPC systems available to researchers has grown substantially, leading to widespread use across many disciplines.

Architecture of an HPC Cluster

The basic architecture of a cluster consists of login nodes, which provide access and allow jobs to be submitted to a scheduler; the scheduler then dispatches those jobs to compute nodes for execution.
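
A job is typically submitted from a login node as a short batch script. As a minimal sketch, assuming an SGE-style scheduler (used by several of the clusters listed below), a job script might look like the following; the directive values are illustrative and vary by site:

    #!/bin/bash
    # hello.sh - minimal batch job for an SGE-style scheduler.
    # Resource values below are illustrative; check your cluster's
    # documentation for the options it actually supports.
    #$ -cwd              # run from the current working directory
    #$ -l h_rt=0:10:0    # request 10 minutes of runtime
    #$ -l h_vmem=1G      # request 1GB of memory

    echo "Running on host: $(hostname)"

Such a script would be submitted from a login node with something like qsub hello.sh; the scheduler queues the job and dispatches it to a compute node once resources become available.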

Because performance is critical, nodes are connected by high-speed Ethernet or low-latency InfiniBand.
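
On clusters fitted with InfiniBand, the adapters and their link state can usually be inspected from a node with the standard ibstat utility (part of the infiniband-diags tools); the output is hardware- and site-specific:

    # List InfiniBand adapters and their port/link state on this node
    ibstat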

Cluster Diagram

HPC Tiers

Clusters are grouped into three tiers: Tier 3 (Local), Tier 2 (Regional) and Tier 1 (National). Apocrita is QMUL's local Tier 3 cluster, and QMUL users can also request access to a number of regional Tier 2 clusters.

Tier 3 - Local

Apocrita

Apocrita is the local cluster at QMUL. It offers a variety of node types and is available to QMUL users and their collaborators. See HPC Compute Nodes for more information.

Tier 2 - Regional

We have access to a number of Tier 2 clusters. If you are running large parallel jobs, you may benefit from access to these; please contact us to check whether your job is appropriate and to arrange access.
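
As a rough sketch of what a large parallel job looks like, the script below requests many cores for an MPI application under an SGE-style scheduler. The parallel environment name ("mpi"), the module name ("openmpi") and the core count are assumptions for illustration; the real values depend on the target cluster:

    #!/bin/bash
    # parallel_job.sh - illustrative large MPI job for an SGE-style scheduler.
    #$ -cwd
    #$ -pe mpi 256       # request 256 cores via a hypothetical "mpi" environment
    #$ -l h_rt=24:0:0    # request 24 hours of runtime

    module load openmpi  # load an MPI implementation (module name is site-specific)

    # Launch one MPI rank per allocated core ($NSLOTS is set by the scheduler)
    mpirun -np $NSLOTS ./my_simulation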

Thomas - Hub in Materials and Molecular Modelling

Host Institution   Cores    Nodes   RAM per Node   Scheduler   EPSRC Grant
UCL                17,280   720     128GB          SGE         EP/P020194/1

Athena - HPC Midlands Plus

Host Institution   Cores    Nodes   RAM per Node   Scheduler   EPSRC Grant
Loughborough       14,336   512     128GB          Slurm       EP/P020232/1

JADE - Joint Academic Data science Endeavour

Host Institution   Cores    Nodes   RAM per Node   Scheduler   EPSRC Grant
Oxford             TBD      TBD     TBD            TBD         EP/P020275/1

Tier 1 - National

ARCHER

Host Institution   Cores     Nodes   RAM per Node   Scheduler
Edinburgh          118,080   4,920   64GB           PBS