diff --git a/_data/sidebars/merlin6_sidebar.yml b/_data/sidebars/merlin6_sidebar.yml
index d2311fb..701623a 100644
--- a/_data/sidebars/merlin6_sidebar.yml
+++ b/_data/sidebars/merlin6_sidebar.yml
@@ -69,10 +69,10 @@ entries:
url: /merlin6/jupyterlab.html
- title: Jupyterhub Troubleshooting
url: /merlin6/jupyterhub-trouble.html
- # - title: Software Support
- # folderitems:
- # - title: OpenMPI
- # url: /merlin6/openmpi.html
+ - title: Software Support
+ folderitems:
+ - title: OpenMPI
+ url: /merlin6/openmpi.html
- title: Announcements
folderitems:
- title: Downtimes
diff --git a/pages/merlin6/03 Job Submission/running-jobs.md b/pages/merlin6/03 Job Submission/running-jobs.md
index 2fa0a77..33dd992 100644
--- a/pages/merlin6/03 Job Submission/running-jobs.md
+++ b/pages/merlin6/03 Job Submission/running-jobs.md
@@ -41,7 +41,7 @@ Before starting using the cluster, please read the following rules:
**``sbatch``** is the command used for submitting a batch script to Slurm
* Use **``srun``**: to run parallel tasks.
- * As an alternative, ``mpirun`` and ``mpiexec`` can be used. However, ***is strongly recommended to user ``srun``**** instead.
+ * As an alternative, ``mpirun`` and ``mpiexec`` can be used. However, it is ***strongly recommended*** to use ``srun`` instead.
* Use **``squeue``** for checking jobs status
* Use **``scancel``** for deleting a job from the queue.
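+
+The commands above can be combined in a minimal sketch (the job name, time limit, and program name are illustrative assumptions):
+
+```bash
+#!/bin/bash
+#SBATCH --job-name=hello-mpi     # Illustrative job name
+#SBATCH --ntasks=4               # Request 4 parallel tasks
+#SBATCH --time=00:10:00          # Strongly recommended
+
+# Launch the parallel tasks with srun (preferred over mpirun/mpiexec)
+srun ./my_mpi_program
+```
+
+Submit it with ``sbatch myjob.sh``, monitor it with ``squeue``, and remove it with ``scancel <jobid>``.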
@@ -90,17 +90,21 @@ Computing nodes in **merlin6** have hyperthreading enabled: every core is runnin
* For **hyperthreaded based jobs** users ***must*** specify the following options:
```bash
- #SBATCH --ntasks-per-core=2 # Mandatory for multithreaded jobs
- #SBATCH --hint=multithread # Mandatory for multithreaded jobs
+ #SBATCH --hint=multithread # Mandatory for multithreaded jobs
+ #SBATCH --ntasks-per-core=2 # Only needed when each task fits in a single core
```
* For **non-hyperthreaded based jobs** users ***must*** specify the following options:
```bash
- #SBATCH --ntasks-per-core=1 # Mandatory for non-multithreaded jobs
- #SBATCH --hint=nomultithread # Mandatory for non-multithreaded jobs
+ #SBATCH --hint=nomultithread # Mandatory for non-multithreaded jobs
+ #SBATCH --ntasks-per-core=1 # Only needed when each task fits in a single core
```
+{{site.data.alerts.tip}} In general, --hint=[no]multithread is a mandatory field. On the other hand, --ntasks-per-core is only needed when
+one needs to define how a task should be handled within a core; it is generally not needed for hybrid MPI/OpenMP jobs, where a single task uses multiple cores.
+{{site.data.alerts.end}}
+
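+The tip above can be illustrated with a hybrid MPI/OpenMP sketch (task and core counts and the program name are illustrative assumptions): each task spans several cores, so ``--ntasks-per-core`` is omitted and only ``--hint`` is set:
+
+```bash
+#!/bin/bash
+#SBATCH --ntasks=8               # 8 MPI tasks
+#SBATCH --cpus-per-task=4        # 4 cores per task for the OpenMP threads
+#SBATCH --hint=nomultithread     # Mandatory; one thread per physical core
+# --ntasks-per-core is intentionally omitted: each task spans multiple cores
+
+export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+srun ./my_hybrid_program
+```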
### Shared vs exclusive nodes
The **Merlin5** and **Merlin6** clusters are designed in a way that should allow running MPI/OpenMP processes as well as single core based jobs. For allowing co-existence, nodes are configured by default in a shared mode. It means, that multiple jobs from multiple users may land in the same node. This behaviour can be changed by a user if they require exclusive usage of nodes.
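+
+As a sketch (the script name is an illustrative assumption), exclusive usage can be requested either in the batch script or on the command line:
+
+```bash
+#SBATCH --exclusive              # Request whole nodes; no other jobs share them
+```
+
+or, equivalently, ``sbatch --exclusive myjob.sh``.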
@@ -190,9 +194,9 @@ The following template should be used by any user submitting jobs to CPU nodes:
#SBATCH --time= # Strongly recommended
#SBATCH --output= # Generate custom output file
#SBATCH --error= # Generate custom error file
-#SBATCH --ntasks-per-core=1 # Mandatory for non-multithreaded jobs
#SBATCH --hint=nomultithread # Mandatory for non-multithreaded jobs
##SBATCH --exclusive # Uncomment if you need exclusive node usage
+##SBATCH --ntasks-per-core=1 # Only mandatory for non-multithreaded single-core tasks
## Advanced options example
##SBATCH --nodes=1 # Uncomment and specify #nodes to use
@@ -211,9 +215,9 @@ The following template should be used by any user submitting jobs to CPU nodes:
#SBATCH --time= # Strongly recommended
#SBATCH --output= # Generate custom output file
#SBATCH --error= # Generate custom error file
-#SBATCH --ntasks-per-core=2 # Mandatory for multithreaded jobs
#SBATCH --hint=multithread # Mandatory for multithreaded jobs
##SBATCH --exclusive # Uncomment if you need exclusive node usage
+##SBATCH --ntasks-per-core=2 # Only mandatory for multithreaded single-core tasks
## Advanced options example
##SBATCH --nodes=1 # Uncomment and specify #nodes to use
@@ -233,11 +237,9 @@ The following template should be used by any user submitting jobs to GPU nodes:
#SBATCH --time= # Strongly recommended
#SBATCH --output= # Generate custom output file
#SBATCH --error=