---
title: Accessing Slurm Cluster
last_updated: 13 June 2019
sidebar: merlin6_sidebar
permalink: /merlin6/slurm-access.html
---

## The Merlin6 Slurm batch system

Clusters at PSI use the Slurm Workload Manager as the batch system for managing and scheduling jobs. Merlin4 and Merlin5 have historically used Slurm, and Merlin6 has been configured with the same batch system.

Slurm has been installed in a multi-cluster configuration, which allows multiple clusters to be integrated into the same batch system.

- Two different Slurm clusters exist: merlin5 and merlin6.
  - merlin5 is a cluster with very old, out-of-warranty hardware.
  - merlin5 will be kept as long as hardware incidents remain minor and easy to repair (e.g. hard disk replacements).
  - merlin6 is the default cluster when submitting jobs.
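As a quick check, you can list the partitions of each cluster directly from a login node. The commands below are a minimal sketch using standard Slurm options; the partitions they report depend on the site configuration.

```bash
# Show the partitions of a specific cluster
sinfo --clusters=merlin5
sinfo --clusters=merlin6

# Or query all clusters known to the multi-cluster setup at once
sinfo --clusters=all
```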

Please refer to the Merlin6 Slurm section for more details about the configuration and job submission.

## Merlin5 Access

Keeping the merlin5 cluster allows users to keep running jobs on the old computing nodes until they have fully migrated their code to the new cluster.

From July 2019, merlin6 is the default cluster, and any job submitted to Slurm will run on that cluster unless otherwise specified. However, users can keep submitting to the old merlin5 computing nodes by adding the option `--clusters=merlin5` together with the corresponding Slurm partition, `--partition=merlin`. For example:

```bash
srun --clusters=merlin5 --partition=merlin hostname
sbatch --clusters=merlin5 --partition=merlin myScript.batch
```
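Equivalently, the cluster and partition can be set inside the batch script itself with `#SBATCH` directives, so the script can be submitted with a plain `sbatch` call. The script below is only a minimal sketch; the job name, resources and payload are placeholders.

```bash
#!/bin/bash
#SBATCH --clusters=merlin5       # submit to the old merlin5 cluster
#SBATCH --partition=merlin       # merlin5 Slurm partition
#SBATCH --job-name=test-merlin5  # placeholder job name
#SBATCH --ntasks=1               # single task
#SBATCH --time=00:05:00          # short walltime for a test run

hostname
```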

## Merlin6 Access

By default, any job submitted without specifying `--clusters=` will run on the local cluster, so nothing extra needs to be specified. You can optionally add `--clusters=merlin6` to explicitly force submission to the Merlin6 cluster.
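For example, the two submissions below are equivalent on a Merlin6 login node; partition options are omitted here, since the available Merlin6 partitions are described in the Merlin6 Slurm section.

```bash
# Implicit: merlin6 is the local (default) cluster
sbatch myScript.batch

# Explicit: force submission to the merlin6 cluster
sbatch --clusters=merlin6 myScript.batch
```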