---
title: Slurm Examples
#tags:
keywords: example, template, examples, templates, running jobs, sbatch
last_updated: 28 June 2019
summary: "This document shows different template examples for running jobs in the Merlin cluster."
sidebar: merlin6_sidebar
permalink: /merlin6/slurm-examples.html
---

## Single core based job examples

### Example 1

In this example we do not want to use hyper-threading (``--ntasks-per-core=1`` and ``--hint=nomultithread``). In our Merlin6 configuration,
the default memory per CPU (in Slurm, this is equivalent to memory per thread) is 4000MB, but in this example we are using a single thread
per core. Since the second thread of each core is left unused, we can double the memory available to the single thread to 8000MB. When using
a single thread per core, doubling the memory is recommended (although some applications might not need it).

```bash
#!/bin/bash
#SBATCH --partition=hourly      # Using 'hourly' will grant higher priority
#SBATCH --ntasks-per-core=1     # Request the max ntasks be invoked on each core
#SBATCH --hint=nomultithread    # Don't use extra threads with in-core multi-threading
#SBATCH --mem-per-cpu=8000      # Double the default memory per CPU
#SBATCH --time=00:30:00         # Define the maximum time the job will run
#SBATCH --output=myscript.out   # Define your output file
#SBATCH --error=myscript.err    # Define your error file

module load $module             # Load the modules your job needs
My_Script || srun $task         # Run your script directly, or launch your task with srun
```
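
To submit and monitor the job, the standard Slurm commands can be used. A minimal sketch, assuming the script above was saved
as `myscript.sh` (a placeholder name):

```bash
sbatch myscript.sh    # submit the batch script; prints the assigned job ID
squeue -u $USER       # list your queued and running jobs
scancel <jobid>       # cancel the job if needed (replace <jobid> with the real ID)
```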

### Example 2

In this example we do not want to use hyper-threading (``--ntasks-per-core=1`` and ``--hint=nomultithread``). We want to run a single
task, but we need all the memory available in the node. For that, we define that the job will use the whole memory of a node with
``--mem=352000`` (the maximum memory available on a single Apollo node). Whenever you run a job requiring more memory than the
default (4000MB per thread), it is very important to specify the amount of memory that the job will use.

```bash
#!/bin/bash
#SBATCH --partition=hourly      # Using 'hourly' will grant higher priority
#SBATCH --ntasks-per-core=1     # Request the max ntasks be invoked on each core
#SBATCH --hint=nomultithread    # Don't use extra threads with in-core multi-threading
#SBATCH --mem=352000            # We want to use the whole memory of the node
#SBATCH --time=00:30:00         # Define the maximum time the job will run
#SBATCH --output=myscript.out   # Define your output file
#SBATCH --error=myscript.err    # Define your error file

module load $module             # Load the modules your job needs
My_Script || srun $task         # Run your script directly, or launch your task with srun
```
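
If you are unsure how much memory the nodes actually provide, you can query Slurm before submitting. A short sketch using
`sinfo` (the `%n %m` format prints each node's hostname and its configured memory in MB):

```bash
sinfo --partition=hourly -o "%n %m"   # list nodes of the 'hourly' partition with their memory in MB
```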

## Multi core based job examples

### Example 1: with Hyper-Threading

In this example we run a job with 88 tasks. Merlin6 Apollo nodes have 44 cores, each with hyper-threading enabled. This means
that we can run 2 threads per core, 88 threads in total. To accomplish that, users should specify ``--ntasks-per-core=2`` and
``--hint=multithread``. In addition, we add the option ``--exclusive`` to ensure that node usage is exclusive and no other jobs
run on the same node. Finally, notice that the default memory per thread is 4000MB; hence, in total this job can use up to
352000MB of memory, which is the maximum allowed in a single node.

```bash
#!/bin/bash
#SBATCH --partition=hourly      # Using 'hourly' will grant higher priority
#SBATCH --exclusive             # Use the node in exclusive mode
#SBATCH --ntasks=88             # Job will run 88 tasks
#SBATCH --ntasks-per-core=2     # Request the max ntasks be invoked on each core
#SBATCH --hint=multithread      # Use extra threads with in-core multi-threading
#SBATCH --time=00:30:00         # Define the maximum time the job will run
#SBATCH --output=myscript.out   # Define your output file
#SBATCH --error=myscript.err    # Define your error file

module load $module             # Load the modules your job needs
My_Script || srun $task         # Run your script directly, or launch your task with srun
```
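
To verify inside the job what Slurm actually allocated, you can print some of the environment variables Slurm sets in the job
environment. A small sketch that could be appended to the script above (on an exclusive Apollo node with hyper-threading,
`SLURM_CPUS_ON_NODE` should report 88, i.e. 44 cores x 2 threads):

```bash
echo "Tasks requested:   ${SLURM_NTASKS}"         # should report 88
echo "CPUs on this node: ${SLURM_CPUS_ON_NODE}"   # threads visible on the allocated node
echo "Allocated nodes:   ${SLURM_JOB_NODELIST}"
```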

### Example 2: without Hyper-Threading

In this example we want to run a job with 44 tasks, and for performance reasons we want to disable hyper-threading.
Merlin6 Apollo nodes have 44 cores, each with hyper-threading enabled. To ensure that only 1 thread is used per core, users
should specify ``--ntasks-per-core=1`` and ``--hint=nomultithread``. With this configuration, each task will run in 1 thread,
and each task will be assigned to an independent core. We add the option ``--exclusive`` to ensure that node usage is
exclusive and no other jobs run on the same node. Finally, in our Slurm configuration the default memory per thread is 4000MB,
but we are using only 1 thread per core, so the defaults would cover only half of the node's memory. If the job requires more
memory, users need to increase it either by setting ``--mem=352000`` or, alternatively (the two options are mutually exclusive),
by setting ``--mem-per-cpu=8000``.

```bash
#!/bin/bash
#SBATCH --partition=hourly      # Using 'hourly' will grant higher priority
#SBATCH --ntasks=44             # Job will run 44 tasks
#SBATCH --ntasks-per-core=1     # Run one task per core (no hyper-threading)
#SBATCH --hint=nomultithread    # Don't use extra threads with in-core multi-threading
#SBATCH --mem=352000            # Request the whole memory of the node
#SBATCH --time=00:30:00         # Define the maximum time the job will run
#SBATCH --output=myscript.out   # Define your output file
#SBATCH --error=myscript.err    # Define your error file

module load $module             # Load the modules your job needs
My_Script || srun $task         # Run your script directly, or launch your task with srun
```
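
As a sketch of the alternative mentioned above, the same total amount of memory can be requested per thread instead of per
node. Note that ``--mem`` and ``--mem-per-cpu`` are mutually exclusive, so only one of the two forms may appear in a script:

```bash
#SBATCH --mem-per-cpu=8000   # per-thread request: 44 threads x 8000MB = 352000MB in total
```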

## Advanced examples

### Array Jobs: launching a large number of related jobs

If you need to run a large number of jobs based on the same executable with systematically varying inputs,
e.g. for a parameter sweep, you can do this most easily in the form of a **simple array job**.

``` bash
#!/bin/bash
#SBATCH --job-name=test-array
#SBATCH --partition=daily
#SBATCH --ntasks=1
#SBATCH --time=08:00:00
#SBATCH --array=1-8

echo $(date) "I am job number ${SLURM_ARRAY_TASK_ID}"
srun myprogram config-file-${SLURM_ARRAY_TASK_ID}.dat
```

This will run 8 independent jobs, where each job can use the counter
variable `SLURM_ARRAY_TASK_ID` defined by Slurm inside of the job's
environment to feed the correct input arguments or configuration file
to the "myprogram" executable. Each job will receive the same set of
configurations (e.g. time limit of 8h in the example above).

The jobs are independent, but they will run in parallel (if the cluster resources allow for
it). The jobs will get JobIDs like {some-number}_1 to {some-number}_8, and each of them will
also have its own output file.
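
If you want the output file of each sub job to be easily identifiable, `sbatch` supports filename patterns: `%A` expands to the
master job ID and `%a` to the array task ID. A minimal sketch (the file names are just examples):

``` bash
#SBATCH --output=test-array_%A_%a.out   # e.g. test-array_1234_3.out for sub job 3
#SBATCH --error=test-array_%A_%a.err
```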

**Note:**
* Do not use such jobs if you have very short tasks, since each array sub job will incur the full overhead for launching an independent Slurm job. For such cases you should use a **packed job** (see below).
* If you want to control how many of these jobs can run in parallel, you can use the `#SBATCH --array=1-100%5` syntax. The `%5` will define
  that only 5 sub jobs may ever run in parallel.

You can also use an array job approach to run over all files in a directory, substituting the payload with

``` bash
FILES=(/path/to/data/*)
# Note: bash arrays are zero-indexed, so match the array range accordingly,
# e.g. '--array=0-7' for 8 files (or index with ${FILES[$SLURM_ARRAY_TASK_ID-1]})
srun ./myprogram ${FILES[$SLURM_ARRAY_TASK_ID]}
```

Or for a trivial case you could supply the values for a parameter scan in the form
of an argument list that gets fed to the program using the counter variable.

``` bash
ARGS=(0.05 0.25 0.5 1 2 5 100)
# Note: the array has 7 elements indexed 0..6, so match it with '--array=0-6'
srun ./my_program.exe ${ARGS[$SLURM_ARRAY_TASK_ID]}
```

### Array jobs: running very long tasks with checkpoint files

If you need to run a job for much longer than the queues (partitions) permit, and
your executable is able to create checkpoint files, you can use this
strategy:

``` bash
#!/bin/bash
#SBATCH --job-name=test-checkpoint
#SBATCH --partition=general
#SBATCH --ntasks=1
#SBATCH --time=7-00:00:00    # each job can run for 7 days
#SBATCH --cpus-per-task=1
#SBATCH --array=1-10%1       # Run a 10-job array, one job at a time.
if test -e checkpointfile; then
    # There is a checkpoint file; resume the simulation from it
    myprogram --read-checkp checkpointfile
else
    # There is no checkpoint file, start a new simulation
    myprogram
fi
```

The `%1` in the `#SBATCH --array=1-10%1` statement defines that only 1 subjob can ever run in parallel, so
this will result in subjob n+1 only being started when subjob n has finished. Each subjob will read the checkpoint file
if it is present.

### Packed jobs: running a large number of short tasks

Since the launching of a Slurm job incurs some overhead, you should not submit each short task as a separate
Slurm job. Use job packing, i.e. run the short tasks within the loop of a single Slurm job.

You can launch the short tasks using `srun` with the `--exclusive` switch (not to be confused with the
switch of the same name used in the SBATCH commands). This switch will ensure that only the specified
number of tasks can run in parallel.

As an example, the following job submission script will ask Slurm for
44 cores (threads), then it will run the `myprog` program 1000 times with
arguments passed from 1 to 1000. But with the `-N1 -n1 -c1 --exclusive`
option, it ensures that at any point in time only 44 instances are
effectively running, each being allocated one CPU. You can at this point
decide to allocate several CPUs or tasks by adapting the corresponding
parameters.

``` bash
#!/bin/bash
#SBATCH --job-name=test-packed
#SBATCH --partition=general
#SBATCH --time=7-00:00:00
#SBATCH --ntasks=44          # defines the number of parallel tasks
for i in {1..1000}
do
    srun -N1 -n1 -c1 --exclusive ./myprog $i &
done
wait
```

**Note:** The `&` at the end of the `srun` line is needed so that the script does not block waiting for each task to finish.
The `wait` command waits for all such background tasks to complete and returns the exit code.

## Hands-On Example

Copy-paste the following example into a file called `myAdvancedTest.batch`:

```bash
#!/bin/bash
#SBATCH --partition=daily        # name of the Slurm partition to submit to
#SBATCH --time=2:00:00           # limit the execution of this job to 2 hours; see 'sinfo' for the maximum allowance
#SBATCH --nodes=2                # number of nodes
#SBATCH --ntasks=44              # number of tasks
#SBATCH --ntasks-per-core=1      # Request the max ntasks be invoked on each core
#SBATCH --hint=nomultithread     # Don't use extra threads with in-core multi-threading

module load gcc/9.2.0 openmpi/3.1.5-1_merlin6
module list

echo "Example no-MPI:" ; hostname        # will print one hostname per node
echo "Example MPI:" ; mpirun hostname    # will print one hostname per ntask
```
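
A short usage sketch: submit the file with `sbatch` and inspect the output once the job completes. Since no ``--output`` option
is set, Slurm writes to the default file `slurm-<jobid>.out`:

```bash
sbatch myAdvancedTest.batch   # prints 'Submitted batch job <jobid>'
cat slurm-<jobid>.out         # replace <jobid> with the ID printed above
```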

The above example specifies the options ``--nodes=2`` and ``--ntasks=44``. This means that up to 2 nodes are requested,
and the job is expected to run 44 tasks. Hence, 44 cores are needed to run that job. Slurm will try to allocate a maximum of
2 nodes, both together providing at least 44 cores. Since our nodes have 44 cores each, if nodes are empty (no other users
have running jobs there), the job can land on a single node (it has enough cores to run 44 tasks).

If we want to ensure that the job uses at least two different nodes (e.g. for boosting CPU frequency, or because the job
requires more memory per core), we should specify other options.

A good example is ``--ntasks-per-node=22``. This will distribute the tasks equally over 2 nodes, 22 tasks on each.

```bash
#SBATCH --ntasks-per-node=22
```

A different example consists of specifying how much memory per core is needed. For instance, ``--mem-per-cpu=32000`` will reserve
~32000MB per core. Since we have a maximum of 352000MB per Apollo node, Slurm will only be able to allocate 11 cores (32000MB x 11 cores = 352000MB) per node.
This means that 4 nodes will be needed (a maximum of 11 tasks per node due to the memory definition, and we need to run 44 tasks), so in this case we need to change to ``--nodes=4``
(or remove ``--nodes``). Alternatively, we can decrease ``--mem-per-cpu`` to a lower value that allows enough cores per node (e.g. with ``16000``,
22 cores fit in the memory of each node, so 2 nodes suffice for the 44 tasks).

```bash
#SBATCH --mem-per-cpu=16000
```

Finally, in order to ensure exclusivity of the node, the option ``--exclusive`` can be used (see below). This will ensure that
the requested nodes are exclusive to the job (no other users' jobs will interact with this node, and only completely
free nodes will be allocated).

```bash
#SBATCH --exclusive
```

This can be combined with the previous examples.
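
For instance, a combined header merging the previous examples might look as follows (a sketch under the same assumptions:
2 Apollo nodes, 44 tasks, no hyper-threading, 16000MB per task):

```bash
#SBATCH --nodes=2             # use two nodes
#SBATCH --ntasks=44           # run 44 tasks in total
#SBATCH --ntasks-per-node=22  # distribute them equally, 22 per node
#SBATCH --ntasks-per-core=1   # one task per core (no hyper-threading)
#SBATCH --hint=nomultithread  # don't use in-core multi-threading
#SBATCH --mem-per-cpu=16000   # 22 tasks x 16000MB = 352000MB per node
#SBATCH --exclusive           # allocate the nodes exclusively
```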

More advanced configurations can be defined and combined with the previous examples. More information about advanced
options can be found at https://slurm.schedmd.com/sbatch.html (or by running 'man sbatch').

If you have questions about how to properly execute your jobs, please contact us through merlin-admins@lists.psi.ch. Do not run
advanced configurations unless you are sure of what you are doing.