Added interactive-jobs.md and linux/macos/windows client recipes

2019-10-23 12:08:18 +02:00
parent 126d6a79b6
commit 3b8e2fc9d1
17 changed files with 408 additions and 14 deletions

@@ -0,0 +1,266 @@
---
title: Slurm Examples
#tags:
#keywords:
last_updated: 28 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/slurm-examples.html
---
## Basic single core job
### Basic single core job - Example 1
```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --ntasks-per-core=1 # Force no Hyper-Threading, will run 1 task per core
#SBATCH --mem-per-cpu=8000 # Double the default memory per cpu
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
my_script
```
In this example we run a single core job by defining ``--ntasks-per-core=1`` (which is also the default). The default memory per CPU is 4000MB (in Slurm this is equivalent to the memory per thread), and since we use only one thread on the core, the default memory per CPU should be doubled: a single thread is always accounted as if the job occupied the whole physical core (which offers 2 hyper-threads), so we request the memory corresponding to 2 threads.
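A quick way to verify what Slurm actually granted is to query the job after submission. This is only a sketch: the file name ``mySingleCore.batch`` and ``<jobid>`` are placeholders, and the exact field names printed by ``scontrol`` can vary slightly between Slurm versions.
```bash
sbatch mySingleCore.batch                                  # submit the job script
scontrol show job <jobid> | grep -iE 'NumCPUs|TRES|Mem'    # inspect granted CPUs and memory
```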
### Basic single core job - Example 2
```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --ntasks-per-core=1 # Force no Hyper-Threading, will run 1 task per core
#SBATCH --mem=352000 # We want to use the whole memory
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
my_script
```
In this example we run a single core job by defining ``--ntasks-per-core=1`` (which is also the default). In addition, we request the whole memory of a node with ``--mem=352000`` (the maximum memory available per Apollo node). Whenever a job needs more memory than the default (4000MB per thread), it is very important to specify the amount of memory it will use, in order to avoid conflicts with jobs from other users.
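If you are unsure which value to use, the memory configured on the nodes of a partition can be checked with ``sinfo``; the format string below is just one possible choice (``%N`` node name, ``%c`` CPUs, ``%m`` memory in MB).
```bash
sinfo --partition=hourly -N -o "%N %c %m"   # one line per node: name, CPUs, memory (MB)
```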
## Basic MPI with hyper-threading
```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --exclusive # Use the node in exclusive mode
#SBATCH --ntasks=88 # Job will run 88 tasks
#SBATCH --ntasks-per-core=2 # Force Hyper-Threading, will run 2 tasks per core
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
module load gcc/8.3.0 openmpi/3.1.3
MPI_script
```
In this example we run a job with 88 tasks. Merlin6 Apollo nodes have 44 cores, each with Hyper-Threading enabled, which means we can run 2 threads per core, i.e. 88 threads in total. We add the option ``--exclusive`` to ensure that the node is used exclusively and no other jobs run on it. Finally, since the default memory per thread is 4000MB, the job can use up to 352000MB of memory, which is the maximum available on a single node.
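If you want to double-check the layout that Slurm assigned before starting the MPI program, the job script can print a few of the standard Slurm environment variables. This is only a sketch; ``MPI_script`` stands for your own binary, as in the example above.
```bash
echo "Nodes allocated:   ${SLURM_JOB_NODELIST}"
echo "Number of tasks:   ${SLURM_NTASKS}"
echo "CPUs on this node: ${SLURM_CPUS_ON_NODE}"
mpirun MPI_script
```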
## Basic MPI without hyper-threading
```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
#SBATCH --exclusive # Use the node in exclusive mode
#SBATCH --ntasks=44 # Job will run 44 tasks
#SBATCH --ntasks-per-core=1 # Force no Hyper-Threading, will run 1 task per core
#SBATCH --mem=352000 # Define the whole memory of the node
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
module load gcc/8.3.0 openmpi/3.1.3
MPI_script
```
In this example we run a job with 44 tasks, and Hyper-Threading will not be used. Merlin6 Apollo nodes have 44 cores, each with HT enabled, but by defining ``--ntasks-per-core=1`` we force the use of a single thread per core (this is the default, but it is recommended to set it explicitly). Each task runs in one thread, and each task is assigned to an independent core. We add the option ``--exclusive`` to ensure that the node is used exclusively and no other jobs run on it. Finally, since the default memory per thread is 4000MB and we use only one thread per core, we would otherwise get only half of the memory: we must request the whole memory of the node with the option ``--mem=352000`` (the maximum memory available in the node).
## Advanced Slurm Example
Copy-paste the following example into a file called ``myAdvancedTest.batch``:
```bash
#!/bin/bash
#SBATCH --partition=daily # name of slurm partition to submit
#SBATCH --time=2:00:00 # limit the execution of this job to 2 hours, see sinfo for the max. allowance
#SBATCH --nodes=2 # number of nodes
#SBATCH --ntasks=44 # number of tasks
module load gcc/8.3.0 openmpi/3.1.3
module list
echo "Example no-MPI:" ; hostname # will print one hostname per node
echo "Example MPI:" ; mpirun hostname # will print one hostname per ntask
```
In the above example the options ``--nodes=2`` and ``--ntasks=44`` are specified. This means that up to 2 nodes are requested, and 44 tasks are expected to run. Hence, 44 cores are needed to run this job (we do not specify ``--ntasks-per-core``, so it defaults to ``1``). Slurm will try to allocate at most 2 nodes that together provide at least 44 cores.
Since our nodes have 44 cores each, if a node is empty (no other users have jobs running there) the job can land on a single node, as it has enough cores to run the 44 tasks.
If you want to ensure that the job uses at least two different nodes (e.g. for boosting CPU frequency, or because the job requires more memory per core), you should specify additional options.
A good example is ``--ntasks-per-node=22``, which limits each node to 22 tasks, so the 44 tasks are distributed equally over 2 nodes.
```bash
#SBATCH --ntasks-per-node=22
```
A different example is to specify how much memory per core is needed. For instance, ``--mem-per-cpu=32000`` reserves ~32000MB per core. Since an Apollo node provides at most 352000MB, Slurm can only allocate 11 cores per node (32000MB x 11 cores = 352000MB). This means that 4 nodes are needed (at most 11 tasks per node due to the memory request, and 44 tasks in total), so in this case we have to change to ``--nodes=4`` (or remove ``--nodes``). Alternatively, we can decrease ``--mem-per-cpu`` to a value that allows more cores per node to be used (e.g. with ``16000`` MB per core, 22 cores fit on a node, so 2 nodes are enough for 44 tasks).
```bash
#SBATCH --mem-per-cpu=16000
```
Finally, in order to ensure exclusive use of a node, the option ``--exclusive`` can be used (see below). This ensures that the requested nodes are exclusive to the job (no other users' jobs will run on those nodes, and only completely free nodes will be allocated).
```bash
#SBATCH --exclusive
```
This can be combined with the previous examples.
More advanced configurations can be defined and combined with the previous examples. More information about advanced options can be found at https://slurm.schedmd.com/sbatch.html (or run ``man sbatch``).
If you have questions about how to properly execute your jobs, please contact us through merlin-admins@lists.psi.ch. Do not run advanced configurations unless you are sure of what you are doing.
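As a starting point, the following generic Slurm commands (not Merlin6-specific) are enough to explore the available options and follow your jobs; ``<jobid>`` is a placeholder.
```bash
man sbatch                  # full list of sbatch options
sinfo                       # partitions and node states
squeue -u $USER             # your pending and running jobs
scontrol show job <jobid>   # detailed information about a single job
```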
## Array Jobs - launching a large number of related jobs
If you need to run a large number of jobs based on the same executable with systematically varying inputs,
e.g. for a parameter sweep, you can do this most easily in form of a **simple array job**.
``` bash
#!/bin/bash
#SBATCH --job-name=test-array
#SBATCH --partition=daily
#SBATCH --ntasks=1
#SBATCH --time=08:00:00
#SBATCH --array=1-8
echo $(date) "I am job number ${SLURM_ARRAY_TASK_ID}"
srun myprogram config-file-${SLURM_ARRAY_TASK_ID}.dat
```
This will run 8 independent jobs, where each job can use the counter
variable `SLURM_ARRAY_TASK_ID` defined by Slurm inside of the job's
environment to feed the correct input arguments or configuration file
to the "myprogram" executable. Each job will receive the same set of
configurations (e.g. time limit of 8h in the example above).
The jobs are independent, but they will run in parallel (if the cluster resources allow for
it). The jobs will get JobIDs like {some-number}_1 to {some-number}_8, and each of them will also
have its own output file.
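If you prefer output files with predictable, per-task names, the `%A` (array master job ID) and `%a` (array task ID) placeholders can be used in the output/error file names, for example:
``` bash
#SBATCH --output=test-array_%A_%a.out   # e.g. test-array_123456_3.out
#SBATCH --error=test-array_%A_%a.err
```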
**Note:**
* Do not use such jobs if you have very short tasks, since each array sub job will incur the full overhead of launching an independent Slurm job. For such cases you should use a **packed job** (see below).
* If you want to control how many of these jobs can run in parallel, you can use the `#SBATCH --array=1-100%5` syntax. The `%5` will define
that only 5 sub jobs may ever run in parallel.
You also can use an array job approach to run over all files in a directory, substituting the payload with
``` bash
FILES=(/path/to/data/*)
# Bash arrays are zero-indexed: use e.g. --array=0-7 for 8 files,
# or subtract 1 from SLURM_ARRAY_TASK_ID when using --array=1-8
srun ./myprogram ${FILES[$SLURM_ARRAY_TASK_ID]}
```
Or, for a trivial case, you could supply the values for a parameter scan in the form
of an argument list that is fed to the program using the counter variable.
``` bash
ARGS=(0.05 0.25 0.5 1 2 5 100)
# 7 values at indices 0-6, so e.g. --array=0-6 matches this list
srun ./my_program.exe ${ARGS[$SLURM_ARRAY_TASK_ID]}
```
## Array jobs for running very long tasks with checkpoint files
If you need to run a job for much longer than the queues (partitions) permit, and
your executable is able to create checkpoint files, you can use this
strategy:
``` bash
#!/bin/bash
#SBATCH --job-name=test-checkpoint
#SBATCH --partition=general
#SBATCH --ntasks=1
#SBATCH --time=7-00:00:00 # each job can run for 7 days
#SBATCH --cpus-per-task=1
#SBATCH --array=1-10%1 # Run a 10-job array, one job at a time.
if test -e checkpointfile; then
    # A checkpoint file exists; restart the simulation from it.
    myprogram --read-checkp checkpointfile
else
    # There is no checkpoint file, start a new simulation.
    myprogram
fi
```
The `%1` in the `#SBATCH --array=1-10%1` statement defines that only 1 subjob can ever run in parallel, so
this will result in subjob n+1 only being started when job n has finished. It will read the checkpoint file
if it is present.
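To follow the progress of such an array, `squeue` can list one line per array element; `<array_job_id>` below is a placeholder for the ID returned by `sbatch`.
``` bash
squeue -r -j <array_job_id>   # -r/--array: one line per array task instead of a condensed summary
```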
## Packed jobs - running a large number of short tasks
Since the launching of a Slurm job incurs some overhead, you should not submit each short task as a separate
Slurm job. Use job packing, i.e. you run the short tasks within the loop of a single Slurm job.
You can launch the short tasks using `srun` with the `--exclusive` switch (not to be confused with the
switch of the same name used in the `SBATCH` directives). This switch ensures that each launched task
gets dedicated resources within the job's allocation, so no more tasks run concurrently than the
allocation can accommodate.
As an example, the following job submission script asks Slurm for
44 cores (threads), then runs the `myprog` program 1000 times with
arguments from 1 to 1000. With the `-N1 -n1 -c1 --exclusive` options,
it ensures that at any point in time only 44 instances are effectively
running, each allocated one CPU. You can also decide to allocate several
CPUs or tasks per instance by adapting the corresponding parameters.
``` bash
#!/bin/bash
#SBATCH --job-name=test-packed
#SBATCH --partition=general
#SBATCH --time=7-00:00:00
#SBATCH --ntasks=44              # defines the number of parallel tasks
for i in {1..1000}
do
    srun -N1 -n1 -c1 --exclusive ./myprog $i &
done
wait
```
**Note:** The `&` at the end of the `srun` line is needed so that the script does not block on each individual task.
The `wait` command then waits for all such background tasks to finish before the job ends.
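Once the packed job has finished, the per-step accounting can be inspected with `sacct` to see how the individual short tasks ran; `<jobid>` is a placeholder and the format fields are standard `sacct` options.
``` bash
sacct -j <jobid> --format=JobID,JobName,Elapsed,State   # one line per job step
```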