diff --git a/pages/merlin6/05 Software Support/impi.md b/pages/merlin6/05 Software Support/impi.md
index 7bf5bba..821ef8b 100644
--- a/pages/merlin6/05 Software Support/impi.md
+++ b/pages/merlin6/05 Software Support/impi.md
@@ -26,6 +26,8 @@ When running with **srun**, one should tell Intel MPI to use the PMI libraries p
```bash
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
+
+srun ./app
```
Alternatively, one can use PMI-2, but then one needs to specify it as follows:
@@ -33,6 +35,8 @@ Alternatively, one can use PMI-2, but then one needs to specify it as follows:
```bash
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so
export I_MPI_PMI2=yes
+
+srun ./app
```
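+
+Putting it together, a job script could look like the following minimal sketch (using the PMI library setting from the first example; the binary name `./app` and the `#SBATCH` resource options are placeholders to adapt):
+
+```bash
+#!/bin/bash
+#SBATCH --ntasks=8          # placeholder: number of MPI tasks
+#SBATCH --time=01:00:00     # placeholder: walltime
+
+# Tell Intel MPI to use the PMI library provided by Slurm
+export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
+
+# Launch the application through Slurm
+srun ./app
+```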
For more information, please read [Slurm Intel MPI Guide](https://slurm.schedmd.com/mpi_guide.html#intel_mpi)
diff --git a/pages/merlin6/05 Software Support/openmpi.md b/pages/merlin6/05 Software Support/openmpi.md
index a106f49..53d35a1 100644
--- a/pages/merlin6/05 Software Support/openmpi.md
+++ b/pages/merlin6/05 Software Support/openmpi.md
@@ -19,6 +19,12 @@ bind tasks in to cores and less customization is needed, while **'mpirun'** and
-configuration and should be only used by advanced users. Please, ***always*** adapt your scripts for using **'srun'**
-before opening a support ticket. Also, please contact us on any problem when using a module.
+configuration and should only be used by advanced users. Please ***always*** adapt your scripts to use **'srun'**
+before opening a support ticket. Also, please contact us about any problems when using a module.
+Example:
+
+```bash
+srun ./app
+```
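+
+In a batch script this typically reduces to a minimal sketch like the one below (the binary `./app` and the `#SBATCH` resource options are placeholders, assuming the loaded OpenMPI module has Slurm support):
+
+```bash
+#!/bin/bash
+#SBATCH --ntasks=8          # placeholder: number of MPI tasks
+#SBATCH --time=01:00:00     # placeholder: walltime
+
+# srun starts the MPI tasks directly; no mpirun wrapper is needed
+srun ./app
+```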
+
-{{site.data.alerts.tip}} Always run OpenMPI with the srun command. The only exception is for advanced users, however srun is still recommended.
+{{site.data.alerts.tip}} Always run OpenMPI with the srun command. The only exception is for advanced users; even then, srun is still recommended.
{{site.data.alerts.end}}
@@ -43,7 +49,7 @@ Alternatively, one can add the following options for debugging purposes (visit [
Full example:
```bash
-mpirun -np $SLURM_NTASKS -mca pml ucx --mca btl ^vader,tcp,openib,uct -x UCX_NET_DEVICES=mlx5_0:1 -x UCX_LOG_LEVEL=data -x UCX_LOG_FILE=UCX-$SLURM_JOB_ID.log
+mpirun -np $SLURM_NTASKS -mca pml ucx --mca btl ^vader,tcp,openib,uct -x UCX_NET_DEVICES=mlx5_0:1 -x UCX_LOG_LEVEL=data -x UCX_LOG_FILE=UCX-$SLURM_JOB_ID.log ./app
```
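+
+The same debugging setup can also be used with **srun** by passing the options through the environment; this is a sketch assuming Open MPI's standard `OMPI_MCA_*` environment variables and the UCX variables shown above:
+
+```bash
+# Equivalent of '-mca pml ucx --mca btl ^vader,tcp,openib,uct'
+export OMPI_MCA_pml=ucx
+export OMPI_MCA_btl='^vader,tcp,openib,uct'
+
+# UCX device selection and debug logging (same values as in the mpirun example)
+export UCX_NET_DEVICES=mlx5_0:1
+export UCX_LOG_LEVEL=data
+export UCX_LOG_FILE=UCX-$SLURM_JOB_ID.log
+
+srun ./app
+```
+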
## Supported OpenMPI versions