HPC introduction
High Performance Computing (HPC) gives researchers the ability to scale their data processing, simulation and computation across hundreds of cores.
Over recent years there has been a huge increase in the number of HPC systems available to researchers, which has led to widespread use across many disciplines.
The HPC cluster at QMUL runs Linux. For a brief introduction to the Linux operating system, please see here.
Architecture of an HPC Cluster
The basic architecture of a cluster consists of login nodes, which provide access to the system and allow jobs to be submitted to a scheduler; the scheduler then dispatches jobs to compute nodes for execution.
Because performance matters, nodes are connected by high-speed Ethernet or low-latency InfiniBand.
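In practice, a job is described in a small shell script containing scheduler directives and is submitted from a login node. Below is a minimal sketch of a PBS-style batch script (PBS is the scheduler listed for Archer below); the job name, resource values and `./my_program` are illustrative placeholders, and the exact directive syntax depends on the scheduler your cluster runs.

```bash
#!/bin/bash
# Minimal PBS-style job script; all values below are illustrative.
#PBS -N example_job        # name shown in the queue
#PBS -l select=1:ncpus=4   # request one node with 4 cores
#PBS -l walltime=01:00:00  # maximum run time of one hour

# Start in the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Run the application on the compute node chosen by the scheduler.
./my_program
```

Such a script would typically be submitted with `qsub job.sh`; the scheduler queues the job and dispatches it to a compute node when the requested resources become free.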
HPC Tiers
In the UK, clusters are organised into three tiers: Tier 3 local facilities, Tier 2 specialist hubs, and the Tier 1 national service.
Tier 3 - Local
Apocrita
Apocrita is the local cluster at QMUL. It offers a variety of node types and is available to QMUL users and collaborators. See HPC Compute Nodes for more information.
Tier 2 - High Performance Computing Centres
We have access to a number of EPSRC Tier 2 clusters via consortium membership. These clusters are suitable for larger multi-node parallel jobs, as sketched below. The Tier 2 pages have more information.
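To illustrate what a multi-node parallel job looks like, the sketch below extends the earlier single-node script to request several nodes and launch an MPI program across them. The node and core counts, and the use of `mpirun` as the launcher, are assumptions; sites differ in both their resource syntax and their preferred MPI launcher.

```bash
#!/bin/bash
# Illustrative multi-node PBS-style job: 4 nodes x 24 cores = 96 MPI ranks.
#PBS -N mpi_example
#PBS -l select=4:ncpus=24:mpiprocs=24  # four 24-core nodes, one rank per core
#PBS -l walltime=02:00:00

cd "$PBS_O_WORKDIR"

# Launch the MPI program across all allocated nodes; the launcher name
# varies between sites (mpirun, mpiexec, or a vendor wrapper).
mpirun -np 96 ./my_mpi_program
```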
Tier 1 - National
Archer
| Host Institution | Cores | Nodes | RAM per Node | Scheduler |
|---|---|---|---|---|
| Edinburgh | 118,080 | 4,920 | 64GB | PBS |