Accessing the Slurm Clusters

The Merlin Slurm clusters

Merlin runs a Slurm multi-cluster setup, in which several Slurm clusters coexist under the same umbrella. It consists of the following clusters (see the sketch after this list for how to inspect them):

  • The Merlin6 Slurm CPU cluster, which is called merlin6.
  • The Merlin6 Slurm GPU cluster, which is called gmerlin6.
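As a quick check, the clusters and their partitions can be inspected from a login node with standard Slurm commands; a minimal sketch (the exact output depends on the site configuration):

```bash
# List the Slurm clusters registered in the accounting database
sacctmgr show clusters format=Cluster

# Show the partitions of a specific cluster, or of all clusters at once
sinfo --clusters=merlin6
sinfo --clusters=all
```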

Accessing the Slurm clusters

Any job submission must be performed from a Merlin login node. Please refer to the Accessing the Interactive Nodes documentation for further information about how to access the cluster.

In addition, any job must be submitted from a high performance storage area visible to both the login nodes and the computing nodes. The suitable storage areas are the following:

  • /data/user
  • /data/project
  • /shared-scratch

Please avoid submitting jobs from the /psi/home directories.
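A typical workflow is therefore to change to one of these storage areas before submitting; a minimal sketch, assuming a per-user directory under /data/user and a hypothetical batch script myjob.sh:

```bash
# Submit from a storage area visible to both login and computing nodes;
# the per-user path layout is an assumption, adapt it to your own directory
cd /data/user/$USER
sbatch myjob.sh
```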

Merlin6 CPU cluster access

The Merlin6 CPU cluster (merlin6) is the default cluster configured in the login nodes. Any job submission uses this cluster by default, unless the --clusters option selects one of the other existing clusters.
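For example, the following two submissions are equivalent, since merlin6 is the default cluster (myjob.sh is a hypothetical batch script):

```bash
# Implicitly submits to the default cluster, merlin6
sbatch myjob.sh

# Explicitly selects the merlin6 cluster
sbatch --clusters=merlin6 myjob.sh
```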

For further information about how to use this cluster, please visit: Merlin6 CPU Slurm Cluster documentation.

Merlin6 GPU cluster access

The Merlin6 GPU cluster (gmerlin6) is visible from the login nodes. However, to submit jobs to this cluster, one needs to specify the option --clusters=gmerlin6 when submitting a job or requesting an allocation.
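A minimal sketch of a batch submission and an interactive allocation on the GPU cluster; the GPU count is an example value, and additional options (such as a partition or account) may be required depending on the site configuration:

```bash
# Batch job on the GPU cluster, requesting one GPU (example value)
sbatch --clusters=gmerlin6 --gpus=1 myjob.sh

# Interactive allocation on the GPU cluster
salloc --clusters=gmerlin6 --gpus=1
```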

For further information about how to use this cluster, please visit: Merlin6 GPU Slurm Cluster documentation.