Announcements

New cluster available for use

We are pleased to announce that the new cluster is available for general use. This has been a large project, involving the following:

  • Upgraded storage - new storage controllers and an extra 1PB of storage
  • Cluster operating system upgrade - now running CentOS 7.3
  • Job scheduler upgrade - now running Univa Grid Engine 8.5.1. Please note that your old scripts will need changing before they will run on the new cluster (see the example job script after this list); full details are in the documentation on this site.
  • Application rebuilds for CentOS 7 - featuring the latest versions of many applications
  • New nxv nodes - including InfiniBand-connected nodes for parallel jobs
  • GPU nodes - 4 new nxg nodes with NVIDIA Tesla K80 cards, offering substantial performance increases for GPU-enabled applications
  • Singularity containers - utilise Linux containers to encapsulate your software for portable and reproducible science (a brief usage sketch follows this list)
  • Documentation site rewrite - all of the pages on this site have been rewritten for the new UGE scheduler. Documentation for the previous SGE cluster remains available here
  • New stats site - a stats site has been written for the new cluster. While both clusters are running, a landing page will let you choose which cluster's stats you require.
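
As an example of the scheduler change mentioned above, a minimal serial job script for the new UGE scheduler might look like the sketch below. The -cwd, -j, -pe and -l options are standard Grid Engine directives, but the parallel environment name (smp), the resource values and the module name shown are illustrative assumptions; please check the scheduler documentation on this site for the exact options required on this cluster.

    #!/bin/bash
    #$ -cwd              # run the job from the current working directory
    #$ -j y              # merge stdout and stderr into a single output file
    #$ -pe smp 1         # request one core (pe name is an assumption)
    #$ -l h_rt=1:0:0     # request a hard runtime limit of 1 hour
    #$ -l h_vmem=1G      # request 1GB of memory

    module load myapp    # hypothetical module name
    myapp input.dat      # hypothetical application invocation

Submit the script with qsub jobscript.sh and monitor it with qstat.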
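Similarly, a sketch of Singularity usage: once you have a container image (the image and script names below are hypothetical), you can run commands inside it from within a job script much like any other command:

    # run a command inside an existing container image
    singularity exec my-container.img python analysis.py

This keeps the application and its library dependencies inside the image, so the same container can be moved between systems for reproducible results.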

The new cluster is available via login.hpc.qmul.ac.uk whilst the old cluster is now accessible via login-legacy.hpc.qmul.ac.uk. Please see the logging in page for more information on connecting.
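For example, connecting from a terminal (substitute your own username):

    ssh username@login.hpc.qmul.ac.uk           # new cluster
    ssh username@login-legacy.hpc.qmul.ac.uk    # old cluster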

Over the coming months we will migrate more nodes from the existing cluster as demand requires. We have been testing the new cluster with a group of users from a variety of disciplines over the last 6 months, and you are now free to test your favourite applications and run production code on it.

While we have added and tested a substantial number of applications, reaching the full complement of applications is a work in progress. Please fill in the application request form if you require an application that has not been provided yet. In the meantime, you can temporarily access the modules built for the older cluster by executing module load use.sl6, as shown below. Use this with caution: many of those applications will not function correctly, since they were built against particular library versions on a different operating system. Note that any new application requests will be built for the new cluster only.
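As an illustration (use.sl6 is the module named above; the final module name is hypothetical):

    module load use.sl6        # make the older cluster's module tree visible
    module avail               # list the modules now available, including legacy builds
    module load oldapp/1.2     # hypothetical legacy application module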

If you are experiencing issues, we recommend that you search this site and read the provided documentation first to see if your question is answered. Please contact us if you are still experiencing an issue relating to the HPC cluster.

Please note that we have a new reference that should be cited in any published research that made use of the cluster. Citing Apocrita correctly in your published work helps ensure continued funding and upgrades to the service. We also have an updated usage policy - please adhere to the new policy to ensure this shared computing resource runs optimally.

Announcement regarding New Storage

We recently added an additional petabyte of storage. To benefit from its improved performance, all files need to be moved to the new storage.

We will be contacting each group to arrange the migration of their files. If you require more space, your files will need to be migrated first. During migration we will need to temporarily stop activity on each fileset.

Once the migration is complete, files will continue to be available under /data on the cluster, so you will not need to modify your scripts.

Announcement regarding Midplus Consortium

The Midlands Plus consortium have now deployed a new 14,000-core cluster, located in Loughborough. You can hear more about this from your local institution.

QMUL have also recently purchased new hardware and storage with college funding, and are in the process of migrating to the new systems.

With the new Midplus cluster coming online, the old Midplus arrangement has reached end-of-life. As the new QMUL cluster hardware is deployed, the old Midplus cluster is simultaneously being phased out.

This means that HPC services and storage hosted by QMUL are no longer available for Warwick, Nottingham and Birmingham Midplus users.

Note that Minerva, the parallel computing component of the original Midplus cluster based at Warwick, has already been decommissioned.