# HPC introduction
High Performance Computing (HPC) enables researchers to scale their data processing, simulation and computation across hundreds of cores.
In recent years the number of HPC systems available to researchers has grown substantially, leading to widespread use across many disciplines.
The HPC cluster at QMUL runs Linux. For a brief introduction to the Linux operating system, please see here.
## Architecture of an HPC Cluster
The basic architecture of a cluster consists of login nodes, which provide access and allow jobs to be submitted to a scheduler; the scheduler then dispatches jobs to compute nodes for execution.
Because performance is critical, nodes are connected with high-speed Ethernet or low-latency InfiniBand.
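As a minimal sketch of this workflow, assuming a Slurm-based scheduler (the scheduler named for ARCHER2 below; other clusters may use different schedulers and commands) and an illustrative hostname and username:

```bash
# Connect to a login node; the hostname and username are illustrative.
ssh abc123@login.hpc.example.ac.uk

# Submit a batch script to the scheduler, which queues the job and
# dispatches it to compute nodes when resources become available.
sbatch myjob.sh

# Check the status of your queued and running jobs.
squeue -u abc123
```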
You may also learn more in our Introduction to HPC 1/2 and Introduction to HPC 2/2 videos.
## HPC Tiers
In the UK, clusters are separated into three tiers: Tier 3 local facilities, Tier 2 specialist hubs, and the Tier 1 national service.
### Tier 3 - Local
#### Apocrita
Apocrita is the local cluster at QMUL. It comprises a variety of node types and is available to QMUL users and their collaborators. See HPC Compute Nodes for more information.
### Tier 2 - High Performance Computing Centres
We have access to a number of EPSRC Tier 2 clusters via consortium membership. These clusters are suitable for larger multi-node parallel jobs. The Tier 2 pages have more information.
### Tier 1 - National
ARCHER2 is the UK national supercomputing service. Documentation and information on setting up an account are available here.
#### ARCHER2
| Host Institution | Cores | Nodes | RAM per Node | Scheduler |
| --- | --- | --- | --- | --- |
| Edinburgh | 750,080 | 5,860 | 256 GB | Slurm |
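As a hedged illustration of submitting work to a Slurm scheduler like ARCHER2's, here is a minimal batch script sketch. The partition and account names are illustrative placeholders rather than values from the ARCHER2 documentation, and the 128 tasks per node follows from the core and node counts in the table above.

```bash
#!/bin/bash
#SBATCH --job-name=my_mpi_job     # name shown in the queue
#SBATCH --nodes=2                 # request two compute nodes
#SBATCH --ntasks-per-node=128     # one task per core (128 cores/node, per the table)
#SBATCH --time=01:00:00           # walltime limit (hh:mm:ss)
#SBATCH --partition=standard      # illustrative partition name
#SBATCH --account=t01             # illustrative project code; use your own

# Launch the MPI program across all allocated cores.
srun ./my_mpi_program
```

Submitting the script with `sbatch myjob.sh` returns a job ID; the scheduler then queues the job until the requested nodes become available.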