## Single core based job examples

### Example 1: Hyperthreaded job

In this example we want to use hyperthreading (``--ntasks-per-core=2`` and ``--hint=multithread``). In our Merlin6 configuration,
the default memory per CPU (a CPU is equivalent to a core thread) is 4000MB, hence each task can use up to 8000MB (2 threads x 4000MB).

```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --ntasks-per-core=2 # Request the max ntasks be invoked on each core
#SBATCH --hint=multithread # Use extra threads with in-core multi-threading
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file

module purge
module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
srun $MYEXEC # where $MYEXEC is a path to your binary file
```
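
Once saved, the script can be submitted and monitored with the standard Slurm commands (the file name and job ID below are just placeholders):

```bash
sbatch example1.sh # submit the job script; Slurm prints the assigned job ID
squeue -u $USER # check the job state (PD = pending, R = running)
scontrol show job 123456 # inspect the resources allocated to your job ID
```
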
### Example 2: Non-hyperthreaded job

In this example we do not want hyper-threading (``--ntasks-per-core=1`` and ``--hint=nomultithread``). In our Merlin6 configuration,
the default memory per CPU (a CPU is equivalent to a core thread) is 4000MB. If we do not specify anything else, our
single core task will use a default of 4000MB. However, we could double it with ``--mem-per-cpu=8000`` if more memory is required
(remember, the second thread will not be used, so we can safely assign the extra 4000MB to the single active thread).

```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --ntasks-per-core=1 # Request the max ntasks be invoked on each core
#SBATCH --hint=nomultithread # Don't use extra threads with in-core multi-threading
#SBATCH --mem-per-cpu=8000 # Double the default memory per cpu
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file

module purge
module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
srun $MYEXEC # where $MYEXEC is a path to your binary file
```
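
If you are unsure which memory defaults and limits apply, they can be read from the Slurm configuration. A small sketch; the exact values reported are whatever is configured on the cluster:

```bash
# Print the default and maximum memory settings known to Slurm
scontrol show config | grep -iE 'DefMemPer|MaxMemPer'
```
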
## Multi core based job examples

### Example 1: MPI with Hyper-Threading

In this example we run a job that will run 88 tasks. Merlin6 Apollo nodes have 44 cores, each one with hyper-threading
enabled. This means that we can run 2 threads per core, in total 88 threads. To accomplish that, users should specify
``--ntasks-per-core=2`` and ``--hint=multithread``.

Use `--nodes=1` if you want to use a node exclusively (88 hyperthreaded tasks would fit in a Merlin6 node).

```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --ntasks=88 # Job will run 88 tasks
#SBATCH --ntasks-per-core=2 # Request the max ntasks be invoked on each core
#SBATCH --hint=multithread # Use extra threads with in-core multi-threading
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file

module purge
module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
srun $MYEXEC # where $MYEXEC is a path to your binary file
```
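
Before sizing a job like this, you can verify how many cores and how much memory the nodes actually provide by querying Slurm directly (a minimal sketch using standard ``sinfo`` format options):

```bash
# List node name, CPU count and memory (MB) for every node
sinfo --Node -o "%n %c %m"
```
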
### Example 2: MPI without Hyper-Threading

In this example, we want to run a job that will run 44 tasks, and for performance reasons we want to disable hyper-threading.
Merlin6 Apollo nodes have 44 cores, each one with hyper-threading enabled. To ensure that only 1 thread is used per task,
users should specify ``--ntasks-per-core=1`` and ``--hint=nomultithread``. With this configuration, we tell Slurm to run only 1
task per core and that no hyperthreading should be used. Hence, each task will be assigned to an independent core.

Use `--nodes=1` if you want to use a node exclusively (44 non-hyperthreaded tasks would fit in a Merlin6 node).

```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --ntasks=44 # Job will run 44 tasks
#SBATCH --ntasks-per-core=1 # Request the max ntasks be invoked on each core
#SBATCH --hint=nomultithread # Don't use extra threads with in-core multi-threading
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file

module purge
module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
srun $MYEXEC # where $MYEXEC is a path to your binary file
```
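
If you want to double-check that each task really lands on an independent physical core, ``srun`` can report the binding it applies. A small sketch, replacing the plain ``srun`` line above; ``--cpu-bind=verbose`` only adds diagnostics and does not change the placement:

```bash
# Report the CPU mask used for each task before launching the binary
srun --cpu-bind=verbose $MYEXEC
```
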
### Example 3: Hyperthreaded Hybrid MPI/OpenMP job

In this example, we want to run a hybrid job using MPI and OpenMP with hyperthreading. In this job, we want to run 4 MPI
tasks by using 8 CPUs per task. Each task in our example requires 128GB of memory. Hence, we specify 16000MB per CPU
(8 x 16000MB = 128000MB). Notice that since hyperthreading is enabled, Slurm will use 4 cores per task (2 threads per core).
Also, always consider that **`'--mem-per-cpu' x '--cpus-per-task'`** can **never** exceed the maximum amount of memory per node
(352000MB).

```bash
#!/bin/bash -l
#SBATCH --clusters=merlin6
#SBATCH --job-name=test
#SBATCH --ntasks=4
#SBATCH --ntasks-per-socket=1
#SBATCH --mem-per-cpu=16000
#SBATCH --cpus-per-task=8
#SBATCH --partition=hourly
#SBATCH --time=01:00:00
#SBATCH --output=srun_%j.out
#SBATCH --error=srun_%j.err
#SBATCH --hint=multithread

module purge
module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
srun $MYEXEC # where $MYEXEC is a path to your binary file
```
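
For hybrid MPI/OpenMP codes, it is common to also tell the OpenMP runtime how many threads each task may spawn, matching ``--cpus-per-task``. A minimal sketch (assuming your application honors ``OMP_NUM_THREADS``), which would replace the plain ``srun`` line above:

```bash
# Give each MPI task as many OpenMP threads as CPUs were allocated to it
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun $MYEXEC
```
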
### Example 4: Non-hyperthreaded Hybrid MPI/OpenMP job

In this example, we want to run a hybrid job using MPI and OpenMP without hyperthreading. In this job, we want to run 4 MPI
tasks by using 8 CPUs per task. Each task in our example requires 128GB of memory. Hence, we specify 16000MB per CPU
(8 x 16000MB = 128000MB). Notice that since hyperthreading is disabled, Slurm will use 8 cores per task (1 thread per core).
Also, always consider that **`'--mem-per-cpu' x '--cpus-per-task'`** can **never** exceed the maximum amount of memory per node
(352000MB).

```bash
#!/bin/bash -l
#SBATCH --clusters=merlin6
#SBATCH --job-name=test
#SBATCH --ntasks=4
#SBATCH --ntasks-per-socket=1
#SBATCH --mem-per-cpu=16000
#SBATCH --cpus-per-task=8
#SBATCH --partition=hourly
#SBATCH --time=01:00:00
#SBATCH --output=srun_%j.out
#SBATCH --error=srun_%j.err
#SBATCH --hint=nomultithread

module purge
module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
srun $MYEXEC # where $MYEXEC is a path to your binary file
```
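
As a quick way to apply the rule above, the product of the two values can be checked before submitting. A small sketch in plain bash; the numbers mirror this example, so adjust them to your own request:

```bash
# Verify that --mem-per-cpu x --cpus-per-task stays within a node (352000MB)
mem_per_cpu=16000
cpus_per_task=8
max_mem_per_node=352000
if (( mem_per_cpu * cpus_per_task > max_mem_per_node )); then
  echo "Error: request exceeds the per-node memory limit" >&2
fi
```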