Changes Slurm order in menu

This commit is contained in:
caubet_m 2019-07-01 18:29:09 +02:00
parent 80be5a4b78
commit 48c8a52dc6
2 changed files with 21 additions and 4 deletions


@@ -27,16 +27,16 @@ entries:
url: /merlin6/slurm-access.html
- title: Merlin6 Slurm
folderitems:
- title: Slurm Configuration
url: /merlin6/slurm-configuration.html
- title: Slurm Basic Commands
url: /merlin6/slurm-basics.html
- title: Using PModules
url: /merlin6/using-modules.html
- title: Slurm Basic Commands
url: /merlin6/slurm-basics.html
- title: Running Jobs
url: /merlin6/running-jobs.html
- title: Slurm Examples
url: /merlin6/slurm-examples.html
- title: Slurm Configuration
url: /merlin6/slurm-configuration.html
- title: Announcements
folderitems:
- title: Downtimes


@@ -65,3 +65,20 @@ Limits are softened for the **daily** partition during non-working hours and during
| **general** | 704 (user limit) | 704 | 704 | 704 |
| **daily** | 704 (user limit) | 1408 | Unlimited | 1408 |
| **hourly** | Unlimited | Unlimited | Unlimited | Unlimited |
## Understanding the Slurm configuration (for advanced users)
Clusters at PSI use the [Slurm Workload Manager](http://slurm.schedmd.com/) as the batch system technology for managing and scheduling jobs.
Historically, *Merlin4* and *Merlin5* also used Slurm, and **Merlin6** has likewise been configured with this batch system.
Slurm has been installed in a **multi-clustered** configuration, which allows multiple clusters to be integrated into the same batch system.
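As a sketch of what the multi-cluster setup enables, the standard Slurm client commands accept a ``--clusters`` (or ``-M``) option to address a specific cluster or several at once. The cluster names below come from this page; the job script name is a placeholder, and the commands of course require access to a Slurm installation:

```shell
# List partitions and node states of both clusters at once
sinfo --clusters=merlin5,merlin6

# Submit a job explicitly to the merlin6 cluster
# (myjob.sh is a hypothetical batch script)
sbatch --clusters=merlin6 myjob.sh

# Show the queue across all clusters known to this installation
squeue --clusters=all
```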
To understand the Slurm configuration of the cluster, it can be useful to check the following files:
* ``/etc/slurm/slurm.conf`` - can be found on the login nodes and computing nodes.
* ``/etc/slurm/gres.conf`` - can be found on the GPU nodes; it is also propagated to the login nodes and computing nodes for user read access.
* ``/etc/slurm/cgroup.conf`` - can be found on the computing nodes; it is also propagated to the login nodes for user read access.
The configuration files found on the login nodes correspond exclusively to the **merlin6** cluster. Configuration files for the old
**merlin5** cluster must be checked directly on one of the **merlin5** computing nodes, as they are not propagated to the
**merlin6** login nodes.
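Beyond reading the files above directly, the running configuration can also be queried through Slurm itself. A minimal sketch, assuming a working Slurm client on a login node (the ``grep`` patterns are just illustrative keys from ``slurm.conf``):

```shell
# Dump the live configuration as seen by the Slurm controller
scontrol show config | grep -E 'ClusterName|SlurmctldHost'

# Read the static configuration files directly
less /etc/slurm/slurm.conf
less /etc/slurm/cgroup.conf
```

Comparing ``scontrol show config`` output with ``/etc/slurm/slurm.conf`` can also reveal settings that were changed at runtime but not yet written back to the file.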