---
title: Cluster 'merlin5'
last_updated: 07 April 2021
sidebar: merlin6_sidebar
permalink: /merlin5/cluster-introduction.html
---
## Slurm 'merlin5' cluster
Merlin5 was the old official PSI Local HPC cluster for development and mission-critical applications, built in 2016-2017. It was an extension of the Merlin4 cluster and was assembled from existing hardware due to a lack of central investment in Local HPC resources. Merlin5 was replaced by the Merlin6 cluster in 2019, backed by a substantial central investment of ~1.5M CHF. Merlin5 was mostly based on CPU resources, but also contained a small number of GPU-based resources, which were mostly used by the BIO experiments.
Merlin5 has been kept as a Local HPC Slurm cluster, called `merlin5`. In this way, the old CPU computing nodes remain available as extra computational resources and as an extension of the official production `merlin6` Slurm cluster. The old Merlin5 login nodes, GPU nodes and storage were fully migrated to the Merlin6 cluster, which became the main Local HPC cluster. Hence, Merlin6 hosts the storage which is mounted on the different Merlin HPC Slurm clusters (`merlin5`, `merlin6`, `gmerlin6`).
## Submitting jobs to 'merlin5'
Jobs must be submitted to the `merlin5` Slurm cluster from the Merlin6 login nodes, by adding the option `--clusters=merlin5` to any of the Slurm commands (`sbatch`, `salloc`, `srun`, etc.).
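For example, from a Merlin6 login node (the job script name `myjob.sh` below is hypothetical, used only for illustration):

```shell
# Submit a batch job to the merlin5 cluster instead of the default cluster
sbatch --clusters=merlin5 myjob.sh

# The same flag works with interactive allocations ...
salloc --clusters=merlin5 --ntasks=1

# ... and with inspecting or cancelling jobs on that cluster
squeue --clusters=merlin5
```

The `--clusters` option tells the Slurm commands which cluster's controller to talk to, so no separate login node is needed for `merlin5`.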
## The Merlin Architecture

### Multi non-federated cluster architecture design: The Merlin cluster
The following image shows the Slurm architecture design for the Merlin cluster. It is a multi non-federated cluster setup, with a central Slurm database and multiple independent clusters (`merlin5`, `merlin6`, `gmerlin6`):
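As a rough sketch of how such a multi non-federated setup is wired together: each cluster runs its own controller with a distinct `ClusterName`, while all of them point at the same central accounting daemon (slurmdbd). The excerpt below is a hypothetical illustration, not the actual PSI configuration; the hostname is invented.

```
# Hypothetical slurm.conf excerpt for one of the clusters (e.g. merlin6).
# Each cluster sets its own ClusterName ...
ClusterName=merlin6

# ... but all clusters share one central accounting database,
# which is what makes --clusters=<name> targeting possible:
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=merlin-slurmdbd.example.com   # hypothetical central slurmdbd host
```

With the shared database in place, the registered clusters can be listed with `sacctmgr show clusters`, and commands can address several of them at once, e.g. `squeue --clusters=merlin5,merlin6,gmerlin6`.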
