Modified job submission examples

2020-04-27 16:58:43 +02:00
parent 2cf894c79c
commit 58d9f76388


@@ -27,8 +27,8 @@ single thread per core, doubling the memory is recommended (however, some applic
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
-module load $module # ...
-My_Script || srun $task # ...
+module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
+srun $MYEXEC # where $MYEXEC is a path to your binary file
```
### Example 2
@@ -49,7 +49,7 @@ more memory than the default (4000MB per thread) is very important to specify th
#SBATCH --error=myscript.err # Define your error file
module load $module # ...
-My_Script || srun $task # ...
+srun $MYEXEC # ...
```
## Multi core based job examples
@@ -73,8 +73,8 @@ thread is 4000MB; hence, in total this job can use up to 352000MB memory which i
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
-module load $module # ...
-My_Script || srun $task # ...
+module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
+srun $MYEXEC # where $MYEXEC is a path to your binary file
```
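The 352000MB figure quoted in the hunk context follows directly from the per-thread default; a quick local check of the arithmetic (assuming the 88 hyper-threads and 4000MB-per-thread default the text mentions):

``` bash
# 88 hyper-threads * 4000 MB default per thread = total job memory in MB
threads=88
mem_per_thread=4000   # MB, the SLURM default stated in the text
echo $(( threads * mem_per_thread ))
```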
### Example 2: without Hyper-Threading
@@ -98,8 +98,8 @@ users need to increase it by either setting ``--mem=352000`` or (exclusive) b
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
-module load $module # ...
-My_Script || srun $task # ...
+module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
+srun $MYEXEC # where $MYEXEC is a path to your binary file
```
## Advanced examples
@@ -118,8 +118,7 @@ e.g. for a parameter sweep, you can do this most easily in form of a **simple ar
#SBATCH --array=1-8
echo $(date) "I am job number ${SLURM_ARRAY_TASK_ID}"
-srun myprogram config-file-${SLURM_ARRAY_TASK_ID}.dat
+srun $MYEXEC config-file-${SLURM_ARRAY_TASK_ID}.dat
```
This will run 8 independent jobs, where each job can use the counter
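As a quick local sketch of the counter mechanics, with `SLURM_ARRAY_TASK_ID` set by hand instead of exported by the scheduler:

``` bash
# Simulate the index SLURM would export for, e.g., the third array task.
SLURM_ARRAY_TASK_ID=3
# Each task derives its own input file name from the counter.
CONFIG="config-file-${SLURM_ARRAY_TASK_ID}.dat"
echo "$CONFIG"
```

With `--array=1-8`, each of the 8 tasks resolves a different `config-file-N.dat` this way.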
@@ -141,7 +140,7 @@ You also can use an array job approach to run over all files in a directory, sub
``` bash
FILES=(/path/to/data/*)
-srun ./myprogram ${FILES[$SLURM_ARRAY_TASK_ID]}
+srun $MYEXEC ${FILES[$SLURM_ARRAY_TASK_ID]}
```
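A hedged local sketch of the array-indexing step, with a mocked file list in place of the glob and a hand-set task index:

``` bash
# Mocked list; in the real script the glob /path/to/data/* fills the array.
FILES=(/path/to/data/a.dat /path/to/data/b.dat /path/to/data/c.dat)
SLURM_ARRAY_TASK_ID=1   # simulated scheduler index
# Bash arrays are zero-based, so task 1 picks the second file.
echo "${FILES[$SLURM_ARRAY_TASK_ID]}"
```

Because bash indexing starts at 0, an `--array=0-N` range is needed to cover every file in the list.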
Or for a trivial case you could supply the values for a parameter scan in form
@@ -149,7 +148,7 @@ of an argument list that gets fed to the program using the counter variable.
``` bash
ARGS=(0.05 0.25 0.5 1 2 5 100)
-srun ./my_program.exe ${ARGS[$SLURM_ARRAY_TASK_ID]}
+srun $MYEXEC ${ARGS[$SLURM_ARRAY_TASK_ID]}
```
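The same lookup can be checked locally, again with the task index set by hand rather than by SLURM:

``` bash
ARGS=(0.05 0.25 0.5 1 2 5 100)
SLURM_ARRAY_TASK_ID=2   # simulated scheduler index
echo "${ARGS[$SLURM_ARRAY_TASK_ID]}"   # zero-based: picks the third value
```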
### Array jobs: running very long tasks with checkpoint files
@@ -168,10 +167,10 @@ strategy:
#SBATCH --array=1-10%1 # Run a 10-job array, one job at a time.
if test -e checkpointfile; then
# There is a checkpoint file;
-myprogram --read-checkp checkpointfile
+$MYEXEC --read-checkp checkpointfile
else
# There is no checkpoint file, start a new simulation.
-myprogram
+$MYEXEC
fi
```
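The branch can be exercised locally as a sketch, using a stand-in `run` function for `$MYEXEC` and touching the checkpoint file by hand (the real program would write it itself):

``` bash
#!/bin/bash
# run() is a hypothetical stand-in for $MYEXEC.
workdir=$(mktemp -d) && cd "$workdir"
run() { echo run "$@"; }
for task in 1 2; do                    # two consecutive array tasks
  if test -e checkpointfile; then
    run --read-checkp checkpointfile   # resume from the previous task
  else
    run                                # first task: fresh start
  fi
  touch checkpointfile                 # simulate the checkpoint being written
done
```

The first iteration starts fresh; every later one finds the checkpoint and resumes, which is exactly why `--array=1-10%1` serializes the tasks.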
@@ -206,7 +205,7 @@ the corresponding parameters.
#SBATCH --ntasks=44 # defines the number of parallel tasks
for i in {1..1000}
do
-srun -N1 -n1 -c1 --exclusive ./myprog $i &
+srun -N1 -n1 -c1 --exclusive $MYEXEC $i &
done
wait
```
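The fan-out/`wait` pattern itself can be sketched without a scheduler; each backgrounded `echo` below stands in for one `srun -N1 -n1 -c1 --exclusive $MYEXEC $i &` step:

``` bash
#!/bin/bash
for i in {1..4}; do
  echo "task $i" &   # backgrounded, like each packed srun step
done
wait                 # do not exit until every backgrounded step has finished
```

Without the final `wait`, the batch script would exit while subjobs are still running and SLURM would kill them.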
@@ -231,7 +230,7 @@ module load gcc/9.2.0 openmpi/3.1.5-1_merlin6
module list
echo "Example no-MPI:" ; hostname # will print one hostname per node
-echo "Example MPI:" ; mpirun hostname # will print one hostname per ntask
+echo "Example MPI:" ; srun hostname # will print one hostname per ntask
```
In the above example, the options ``--nodes=2`` and ``--ntasks=44`` are specified. This means that up to 2 nodes are requested,