From 34a79b5569302b585f35104484d3e73e146a0e12 Mon Sep 17 00:00:00 2001
From: Marc Caubet Serrabou
Date: Thu, 14 May 2020 18:21:58 +0200
Subject: [PATCH] Updated user documentation

---
 .../merlin6/03 Job Submission/slurm-examples.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/pages/merlin6/03 Job Submission/slurm-examples.md b/pages/merlin6/03 Job Submission/slurm-examples.md
index 42c477a..d30d4c7 100644
--- a/pages/merlin6/03 Job Submission/slurm-examples.md
+++ b/pages/merlin6/03 Job Submission/slurm-examples.md
@@ -103,9 +103,8 @@ srun $MYEXEC # where $MYEXEC is a path to your binary file
 
 In this example, we want to run a Hybrid Job using MPI and OpenMP using hyperthreading. In this job, we want to run 4 MPI
 tasks by using 8 CPUs per task. Each task in our example requires 128GB of memory. Then we specify 16000MB per CPU
-(8 x 16000MB = 128000MB). Notice that since hyperthreading is enabled, Slurm will use 4 cores per task (2 threads per core).
-Also, always consider that **`'--mem-per-cpu' x '--cpus-per-task'`** can **never** exceed the maximum amount of memory per node
-(352000MB).
+(8 x 16000MB = 128000MB). Notice that since hyperthreading is enabled, Slurm will use 4 cores per task (with hyperthreading,
+2 threads, a.k.a. Slurm CPUs, fit into each core).
 
 ```bash
 #!/bin/bash -l
@@ -126,13 +125,15 @@ module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
 srun $MYEXEC # where $MYEXEC is a path to your binary file
 ```
 
+{{site.data.alerts.tip}} Always consider that **`'--mem-per-cpu' x '--cpus-per-task'`** can **never** exceed the maximum amount of memory per node (352000MB).
+{{site.data.alerts.end}}
+
 ### Example 4: Non-hyperthreaded Hybrid MPI/OpenMP job
 
 In this example, we want to run a Hybrid Job using MPI and OpenMP without hyperthreading. In this job, we want to run 4 MPI
 tasks by using 8 CPUs per task. Each task in our example requires 128GB of memory. Then we specify 16000MB per CPU
-(8 x 16000MB = 128000MB). Notice that since hyperthreading is disabled, Slurm will use 8 cores per task (1 thread per core).
-Also, always consider that **`'--mem-per-cpu' x '--cpus-per-task'`** can **never** exceed the maximum amount of memory per node
-(352000MB).
+(8 x 16000MB = 128000MB). Notice that since hyperthreading is disabled, Slurm will use 8 cores per task (disabling hyperthreading
+forces the use of only 1 thread, a.k.a. 1 CPU, per core).
 
 ```bash
 #!/bin/bash -l
@@ -153,6 +154,9 @@ module load $MODULE_NAME # where $MODULE_NAME is a software in PModules
 srun $MYEXEC # where $MYEXEC is a path to your binary file
 ```
 
+{{site.data.alerts.tip}} Always consider that **`'--mem-per-cpu' x '--cpus-per-task'`** can **never** exceed the maximum amount of memory per node (352000MB).
+{{site.data.alerts.end}}
+
 ## Advanced examples
 
 ### Array Jobs: launching a large number of related jobs
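The memory rule stated in the new tip boxes can be verified with simple arithmetic before submitting a job. The sketch below is illustrative only and not part of the patch; the figures (16000MB per CPU, 8 CPUs per task, 352000MB per node) are taken from the documentation text above:

```shell
#!/bin/bash
# Sanity-check the rule from the tip boxes:
# '--mem-per-cpu' x '--cpus-per-task' must never exceed the
# maximum amount of memory per node (352000MB).

mem_per_cpu=16000      # MB, matches --mem-per-cpu=16000 in the examples
cpus_per_task=8        # matches --cpus-per-task=8
node_mem_limit=352000  # MB per node, as stated in the tip

requested=$((mem_per_cpu * cpus_per_task))
echo "Memory requested per task: ${requested}MB"   # 8 x 16000MB = 128000MB

if [ "$requested" -le "$node_mem_limit" ]; then
    echo "OK: request fits within the ${node_mem_limit}MB node limit"
else
    echo "ERROR: request exceeds the ${node_mem_limit}MB node limit" >&2
    exit 1
fi
```

Running the same check with the values from either example confirms that the 128000MB per-task request fits comfortably inside the 352000MB node limit.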