vibe #3
@@ -30,8 +30,8 @@ In **`merlin6`**, Memory is considered a Consumable Resource, as well as the CPU
 and by default resources can not be oversubscribed. This is a main difference from the old **`merlin5`** cluster, where only CPUs were accounted for
 and memory was by default oversubscribed.

-{{site.data.alerts.tip}}Always check <b>'/etc/slurm/slurm.conf'</b> for changes in the hardware.
-{{site.data.alerts.end}}
+!!! tip "Check Configuration"
+    Always check `/etc/slurm/slurm.conf` for changes in the hardware.

 ### Merlin6 CPU cluster

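As a quick illustration of the "check the configuration" tip above: on a standard Slurm installation the hardware definition can be read from the config file or queried from the running controller. This is only a sketch; the node name `merlin-c-001` is a placeholder, not a name taken from the docs.

```bash
# Node and partition definitions straight from the config file:
grep -E '^(NodeName|PartitionName)' /etc/slurm/slurm.conf

# Confirm that CPU and memory are treated as consumable resources
# (e.g. SelectTypeParameters=CR_CPU_Memory) on the running controller:
scontrol show config | grep SelectType

# Hardware of a single node as Slurm sees it (placeholder node name):
scontrol show node merlin-c-001 | grep -E 'CPUTot|RealMemory'
```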
@@ -78,11 +78,8 @@ and, if possible, they will preempt running jobs from partitions with lower *Pri
 * For **`hourly`** there are no limits.
 * **`asa-general`**, **`asa-daily`**, **`asa-ansys`**, **`asa-visas`** and **`mu3e`** are **private** partitions belonging to the experiments that own the machines. **Access is restricted** in all cases. However, by agreement with the experiments, these nodes are usually added to the **`hourly`** partition as extra resources for public use.

-{{site.data.alerts.tip}}Jobs which would run for less than one day should be always sent to <b>daily</b>, while jobs that would run for less
-than one hour should be sent to <b>hourly</b>. This would ensure that you have highest priority over jobs sent to partitions with less priority,
-but also because <b>general</b> has limited the number of nodes that can be used for that. The idea behind that, is that the cluster can not
-be blocked by long jobs and we can always ensure resources for shorter jobs.
-{{site.data.alerts.end}}
+!!! tip "Partition Selection"
+    Jobs that will run for less than one day should always be sent to **daily**, and jobs that will run for less than one hour to **hourly**. This gives them priority over jobs in lower-priority partitions; in addition, **general** limits the number of nodes a job can use. The idea is that long jobs cannot block the cluster, so resources are always available for shorter jobs.

 ### Merlin5 CPU Accounts

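To make the partition-selection tip concrete, a minimal batch script for a sub-one-hour job might look like the sketch below. The partition names come from the documentation itself; the job name, time, memory and payload are invented for illustration.

```bash
#!/bin/bash
#SBATCH --job-name=short-analysis
#SBATCH --partition=hourly   # runtime < 1 h, so 'hourly' gives the highest priority
#SBATCH --time=00:45:00      # wall time must stay within the partition's limit
#SBATCH --ntasks=1
#SBATCH --mem=4G             # memory is a consumable resource on merlin6

srun ./my_analysis           # placeholder for the actual workload
```

A job expected to run for, say, six hours would use `--partition=daily` instead, and only jobs longer than a day should fall back to `general`.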
@@ -192,14 +189,11 @@ resources from the batch system would drain the entire cluster for fitting the j
 Hence, there is a need to set wise limits and to ensure fair usage of the resources, by trying to optimize the overall efficiency
 of the cluster while allowing jobs of different natures and sizes (that is, **single core** vs. **parallel jobs** of different sizes) to run.

-{{site.data.alerts.warning}}Wide limits are provided in the <b>daily</b> and <b>hourly</b> partitions, while for <b>general</b> those limits are
-more restrictive.
-<br>However, we kindly ask users to inform the Merlin administrators when there are plans to send big jobs which would require a
-massive draining of nodes for allocating such jobs. This would apply to jobs requiring the <b>unlimited</b> QoS (see below <i>"Per job limits"</i>)
-{{site.data.alerts.end}}
+!!! warning "Resource Limits"
+    Wide limits are provided in the **daily** and **hourly** partitions, while for **general** the limits are more restrictive. However, we kindly ask users to inform the Merlin administrators when they plan to submit big jobs that would require massive draining of nodes. This applies to jobs requiring the **unlimited** QoS (see *"Per job limits"* below).

-{{site.data.alerts.tip}}If you have different requirements, please let us know, we will try to accomodate or propose a solution for you.
-{{site.data.alerts.end}}
+!!! tip "Custom Requirements"
+    If you have different requirements, please let us know and we will try to accommodate them or propose a solution for you.

 #### Per job limits

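Since this hunk ends just before the *"Per job limits"* section and mentions the **unlimited** QoS, users can inspect the limits that are actually configured with standard Slurm accounting tools. A hedged sketch: the QoS name `unlimited` is quoted from the text above, and `job.sh` is a placeholder; everything else is generic Slurm.

```bash
# List each QoS with its wall-time cap, per-job resource cap and per-user job cap:
sacctmgr show qos format=Name,MaxWall,MaxTRES,MaxJobsPU

# Request a specific QoS at submission time (your account must be
# entitled to it; coordinate with the Merlin administrators first):
sbatch --partition=general --qos=unlimited job.sh
```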