Added GPU partition info
@@ -34,6 +34,7 @@ The following table shows the default and maximum resources that can be used per node
| merlin-c-[201-224] | 1 core | 44 cores | 2 | 4000 | 352000 | 352000 | 10000 | N/A | N/A |
| merlin-g-[001] | 1 core | 8 cores | 1 | 4000 | 102400 | 102400 | 10000 | 1 | 2 |
| merlin-g-[002-009] | 1 core | 20 cores | 1 | 4000 | 102400 | 102400 | 10000 | 1 | 4 |
| merlin-g-[010-013] | 1 core | 20 cores | 1 | 4000 | 102400 | 102400 | 10000 | 1 | 4 |

If nothing is specified, by default each core will use up to 8GB of memory. Memory can be increased with the `--mem=<mem_in_MB>` and
`--mem-per-cpu=<mem_in_MB>` options, and the maximum memory allowed is `Max.Mem/Node`.

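As a minimal sketch of how these options can be used (the executable `my_program` is a placeholder, and the requested values are only examples that must stay below the node's `Max.Mem/Node`):

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=16000   # request 16000 MB per CPU instead of the default
## Alternatively, request the total memory for the job (mutually exclusive with --mem-per-cpu):
## #SBATCH --mem=64000        # 64000 MB in total

srun ./my_program             # placeholder executable
```
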
@@ -45,14 +46,20 @@ In *Merlin6*, memory is considered a Consumable Resource, as well as the CPU.
Partition can be specified when submitting a job with the `--partition=<partitionname>` option (see the submission examples at the end of this section).
The following *partitions* (also known as *queues*) are configured in Slurm:

| CPU Partition | Default Time | Max Time | Max Nodes | Priority | PriorityJobFactor\* |
|:-----------------: | :----------: | :------: | :-------: | :------: | :-----------------: |
| **<u>general</u>** | 1 day | 1 week | 50 | low | 1 |
| **daily** | 1 day | 1 day | 67 | medium | 500 |
| **hourly** | 1 hour | 1 hour | unlimited | highest | 1000 |

| GPU Partition | Default Time | Max Time | Max Nodes | Priority | PriorityJobFactor\* |
|:-----------------: | :----------: | :------: | :-------: | :------: | :-----------------: |
| **<u>gpu</u>** | 1 day | 1 week | 4 | low | 1 |
| **gpu-short** | 2 hours | 2 hours | 4 | highest | 1000 |

\*The **PriorityJobFactor** value will be added to the job priority (*PARTITION* column in `sprio -l`). In other words, jobs sent to higher priority
partitions will usually run first (however, other factors such as **job age** or, mainly, **fair share** might affect that decision). For the GPU
partitions, Slurm will also attempt to allocate jobs on partitions with higher priority before partitions with lower priority.

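To see how the partition factor enters a pending job's priority, the per-factor breakdown can be listed with `sprio`; the job ID below is only a placeholder:

```bash
# Long listing of the priority factors for all pending jobs;
# the PARTITION column shows the contribution added by PriorityJobFactor.
sprio -l

# The same breakdown restricted to a single job (12345 is a placeholder job ID).
sprio -l -j 12345
```
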
The **general** partition is the *default*: when nothing is specified, jobs will be assigned to that partition. **general** cannot have more
than 50 nodes running jobs. For **daily** this limit is extended to 67 nodes, while for **hourly** there are no limits.
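As a sketch of the partition selection mentioned earlier in this section (the script names are placeholders, and the `--gres` request is only an assumption about how GPUs are requested on top of the partition choice):

```bash
# Short CPU job on the highest-priority partition (1 hour limit); job.sh is a placeholder script.
sbatch --partition=hourly --time=00:30:00 job.sh

# GPU job on the 'gpu' partition; the GPUs themselves are assumed to be requested via GRES.
sbatch --partition=gpu --gres=gpu:1 gpu_job.sh
```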