
---
title: Accessing Slurm Cluster
last_updated: 13 June 2019
sidebar: merlin6_sidebar
permalink: /merlin6/slurm-access.html
---

## The Merlin6 Slurm batch system

Clusters at PSI use the Slurm Workload Manager as the batch system for managing and scheduling jobs. Merlin4 and Merlin5 have historically run Slurm, and Merlin6 has been configured with the same batch system.

Slurm is installed in a multi-cluster configuration, which integrates multiple clusters into the same batch system.

* Two different Slurm clusters exist: **merlin5** and **merlin6**.
  * **merlin5** is a cluster with very old, out-of-warranty hardware.
  * **merlin5** will be kept as long as hardware incidents remain minor and easy to repair (e.g. hard disk replacements).
  * **merlin6** is the default cluster when running Slurm commands (e.g. `sinfo`).
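Because merlin6 is the default, Slurm commands only show the other cluster when asked explicitly with the `--clusters` option. A quick sketch of how this might look (these commands assume access to a Merlin login node with Slurm's multi-cluster support enabled):

```shell
# Show partition status on the default cluster (merlin6):
sinfo

# Show partition status on both clusters in a single query:
sinfo --clusters=merlin5,merlin6
```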

Please see the Merlin6 Slurm section for more details about configuration and job submission.

## Merlin5 Access

Keeping the merlin5 cluster allows jobs to keep running on the old computing nodes until users have fully migrated their codes to the new cluster.

From July 2019, merlin6 becomes the default cluster. However, users can keep submitting to the old merlin5 computing nodes by using the option `--clusters=merlin5` together with the corresponding Slurm partition, `--partition=merlin`. For example, in a batch script:

```bash
#SBATCH --clusters=merlin5
#SBATCH --partition=merlin
```

Example of how to run a simple command:

```bash
srun --clusters=merlin5 --partition=merlin hostname
sbatch --clusters=merlin5 --partition=merlin myScript.batch
```

## Merlin6 Access

In order to run jobs on the Merlin6 cluster, you need to specify the following option in your batch scripts:

```bash
#SBATCH --clusters=merlin6
```

Example of how to run a simple command:

```bash
srun --clusters=merlin6 hostname
sbatch --clusters=merlin6 myScript.batch
```
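Putting the options above together, a minimal merlin6 batch script could look like the following sketch. The job name, time limit, and output file are illustrative placeholders, not site requirements:

```shell
#!/bin/bash
#SBATCH --clusters=merlin6        # submit to the merlin6 cluster
#SBATCH --job-name=test-job       # illustrative job name
#SBATCH --time=00:10:00           # illustrative 10-minute time limit
#SBATCH --output=test-job.out     # illustrative file for stdout/stderr

# Print the name of the node the job landed on
hostname
```

Saved as `myScript.batch`, this would be submitted with `sbatch myScript.batch`; since the cluster is set inside the script, no `--clusters` flag is needed on the command line.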