• Test bed
  • In service
  • Discipline-specific system for Astronomy and Cosmology
  • Funded by STFC, DiRAC, ExCALIBUR
  • 8 nodes, with 1 NVIDIA A30 accelerator per node
  • Benchmarks
    • Memory bandwidth (BabelStream): 822 GB/s (a run sketch follows this list)
      • array_size: 134217728
      • iterations: 100
      • precision: FP64
  • Manufactured by NVIDIA
  • Scheduler: Slurm
  • Interconnects: CerIO composable fabric
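
The BabelStream figure above should be reproducible with the listed parameters. A minimal run sketch, assuming the CUDA build of BabelStream (the binary name depends on the programming model chosen at build time; FP64 is BabelStream's default precision):

./cuda-stream --arraysize 134217728 --numtimes 100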

COSMA A30

COSMA (The Compute Optimised System for Modelling and Analysis) is a High Performance Computing facility hosted at Durham University, operated by the Institute for Computational Cosmology on behalf of DiRAC.

The A30 nodes form the DINE2 cluster, a GPU testbed within COSMA.

Node        RAM    CPU                                Access
gc001-008   2 TB   64 cores (Intel Sapphire Rapids)   Slurm (dine2)

The eight A30 GPUs are shared across the eight nodes via a CerIO composable fabric, which allows GPUs to be dynamically allocated to individual nodes on request. Contact cosma-support@durham.ac.uk for specific configurations.

Documentation

Gaining access

Access requires a COSMA account, obtained via the DiRAC SAFE portal.

  1. Create a SAFE account with an institutional email.
  2. Upload an SSH public key on SAFE. If you do not have one, generate one with ssh-keygen -t ed25519 (an example is sketched after this list).
  3. Request a login account. This requires selecting a project, either:
    • Project do015 for DINE2 testbed access.
    • A DiRAC project code for a given allocation (provided by a supervisor).
  4. Wait for the account to be approved by the project manager. Keep an eye on your email!
  5. Connect to COSMA via SSH: ssh username@login8.cosma.dur.ac.uk (note: on first login you will be asked to change the initial password sent to you by email).
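
As a concrete sketch of steps 2 and 5, the following generates a key and adds a host alias. The key filename, the cosma alias, and your_cosma_username are illustrative placeholders, not COSMA requirements:

# Generate an ed25519 key pair (step 2); upload the resulting .pub file to SAFE
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_cosma

# Optional convenience: a host alias so that "ssh cosma" connects (step 5)
cat >> ~/.ssh/config <<'EOF'
Host cosma
    HostName login8.cosma.dur.ac.uk
    User your_cosma_username
    IdentityFile ~/.ssh/id_ed25519_cosma
EOF

ssh cosma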

Visit https://cosma.readthedocs.io/en/latest/account.html for more details. Contact cosma-support@durham.ac.uk for any questions.

Usage

Jobs are submitted via Slurm to the dine2 partition:

#!/bin/bash
#SBATCH --partition=dine2    # DINE2 GPU testbed partition
#SBATCH --account=do015      # project to charge (here the DINE2 testbed project)
#SBATCH --time=01:00:00      # requested wall time (maximum 3 days)

nvidia-smi                   # report the GPU(s) visible to the job
./gpu_program_to_run
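
Saved as, say, a30_job.sh (an illustrative filename), the script can be submitted and monitored with standard Slurm commands:

sbatch a30_job.sh    # submit to the dine2 partition
squeue -u $USER      # list your pending and running jobs
scancel <jobid>      # cancel a job, using the ID printed by sbatch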

Restrictions

  • dine2 nodes mount /dine rather than /cosma5; copy binaries and data to /dine/data/<project>/<username>/ before submitting (see the sketch after this list).
  • Maximum wall time: 3 days.
  • CUDA is available via module load nvhpc/25.11.
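
A minimal build-and-stage sketch, assuming a CUDA source file and using nvcc from the NVIDIA HPC SDK module above (the source and output filenames are illustrative; -arch=sm_80 targets the A30's Ampere architecture):

module load nvhpc/25.11
nvcc -arch=sm_80 -o gpu_program_to_run gpu_program.cu   # compile on a login node
cp gpu_program_to_run /dine/data/<project>/<username>/  # stage on the filesystem mounted by dine2 nodes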