High Performance Computing (HPC) provides researchers with the ability to expand their data processing, simulation and computation across hundreds of cores.
Over recent years there has been a huge increase in the number of HPC systems available for researchers and this has led to widespread use across many disciplines.
The HPC cluster at QMUL runs Linux. For a brief introduction to the Linux operating system, please see here.
Architecture of an HPC Cluster
The basic architecture of a cluster consists of login nodes, which provide access and allow jobs to be submitted to a scheduler; jobs are then dispatched to compute nodes for execution.
To meet performance requirements, nodes are interconnected with high-speed Ethernet or low-latency InfiniBand.
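As a sketch of the submit-and-dispatch workflow described above, a minimal batch job script might look like the following (assuming a Grid Engine-style scheduler; the resource values and the script name are illustrative, not taken from this page):

```shell
#!/bin/bash
# Illustrative Grid Engine-style job script (resource values are examples only).
#$ -cwd               # run the job from the current working directory
#$ -pe smp 4          # request 4 cores in a shared-memory parallel environment
#$ -l h_rt=1:0:0      # request a maximum runtime of 1 hour
#$ -l h_vmem=1G       # request 1 GB of memory per core

# These commands run on a compute node once the scheduler dispatches the job.
module load python    # load software via environment modules, if provided
python my_analysis.py # hypothetical user script
```

Such a script would be submitted from a login node with a command like `qsub job.sh`; the scheduler queues the job and dispatches it to a compute node that can satisfy the requested resources.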
Clusters are separated into three tiers in the UK: Tier 3 Local facilities, Tier 2 Specialist Hubs, and the Tier 1 National service.
Tier 3 - Local
Apocrita is the local cluster at QMUL; it comprises a variety of node types and is available to QMUL users and their collaborators. See HPC Compute Nodes for more information.
Tier 2 - High Performance Computing Centres
Tier 1 - National
ARCHER2 is the UK National Supercomputing Service. The documentation, including information on setting up an account, is available here.