---
title: Intel MPI Support
#tags:
last_updated: 13 March 2020
keywords: software, impi, slurm
summary: "This document describes how to use Intel MPI in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/impi.html
---
## Introduction
This document describes which Intel MPI versions provided through PModules are supported in the Merlin6 cluster, and how to run them.
### srun
We strongly recommend the use of **'srun'** over **'mpirun'** or **'mpiexec'**. **'srun'** properly binds
tasks to cores and requires little customization, while **'mpirun'** and **'mpiexec'** may need more advanced
configuration and should only be used by advanced users. Please ***always*** adapt your scripts to use **'srun'**
before opening a support ticket, and contact us if you encounter any problem when using a module.
{{site.data.alerts.tip}} Always run Intel MPI with the srun command. The only exception is for advanced users, and even then srun is still recommended.
{{site.data.alerts.end}}
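As a rough illustration of this adaptation (the application name and task count below are placeholders), a launch line in a job script changes as follows:
```bash
# Before: mpirun-style launch, which needs an explicit process count
# mpirun -np 8 ./app

# After: srun-style launch; the task count is taken from the Slurm allocation
srun ./app
```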
When running with **srun**, one should tell Intel MPI to use the PMI library provided by Slurm. For PMI-1:
```bash
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
srun ./app
```
Alternatively, one can use PMI-2, but then one needs to specify it as follows:
```bash
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so
export I_MPI_PMI2=yes
srun ./app
```
For more information, please read the [Slurm Intel MPI Guide](https://slurm.schedmd.com/mpi_guide.html#intel_mpi).
**Note**: PMI-2 might not work properly in some Intel MPI versions. If so, you can either fall back
to PMI-1 or contact the Merlin administrators.
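Putting it all together, a minimal batch script for an Intel MPI application could look like the following sketch. The module name, job name, and resource requests are only examples; check PModules for the Intel MPI versions actually installed on Merlin6.
```bash
#!/bin/bash
#SBATCH --job-name=impi-test     # example job name
#SBATCH --ntasks=8               # example number of MPI tasks
#SBATCH --time=00:30:00          # example wall time

# Load an Intel MPI module from PModules (illustrative name; check PModules
# for the module names and versions available on Merlin6)
module load intel_mpi

# Tell Intel MPI to use Slurm's PMI-1 library
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

# Launch the application with srun; the task count comes from --ntasks
srun ./app
```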