first stab at mkdocs migration

This commit is contained in:
2025-11-26 17:28:07 +01:00
parent 7475369bc4
commit 10eae1319b
282 changed files with 200 additions and 8940 deletions


@@ -0,0 +1,144 @@
---
title: ANSYS / CFX
#tags:
keywords: software, ansys, cfx5, cfx, slurm, interactive, rsm, batch job
last_updated: 07 September 2022
summary: "This document describes how to run ANSYS/CFX in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-cfx.html
---
This document describes the different ways of running **ANSYS/CFX**.
## ANSYS/CFX
It is always recommended to check which parameters are available in CFX and to adapt the examples below to your needs.
To get a list of options, run `cfx5solve -help`.
## Running CFX jobs
### PModules
It is strongly recommended to use the latest ANSYS software available in PModules.
```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
### Interactive: RSM from remote PSI Workstations
It is possible to run CFX through RSM from a remote PSI (Linux or Windows) workstation with a local installation of ANSYS CFX and the RSM client.
For that, please refer to **[ANSYS RSM](/merlin6/ansys-rsm.html)** in the Merlin documentation for further information on how to set up an RSM client for submitting jobs to Merlin.
### Non-interactive: sbatch
Running jobs with `sbatch` is always the recommended method, as it makes the use of the resources more efficient. Notice that for
running non-interactive CFX jobs one must specify the `-batch` option.
#### Serial example
This example shows a very basic serial job.
```bash
#!/bin/bash
#SBATCH --job-name=CFX # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file
module use unstable
module load ANSYS/2020R1-1
# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$SOLVER_FILE"
```
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial.
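As a sketch, the corresponding Slurm options for the serial example above would then change as follows:
```bash
#SBATCH --cpus-per-task=2      # Doubled because hyperthreading is enabled
#SBATCH --ntasks-per-core=2    # Doubled because hyperthreading is enabled
#SBATCH --hint=multithread     # Enable hyperthreading
```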
#### MPI-based example
An example for running CFX using a Slurm batch script is the following:
```bash
#!/bin/bash
#SBATCH --job-name=CFX # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=44 # Number of tasks
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive # Uncomment if you want exclusive usage of the nodes
module use unstable
module load ANSYS/2020R1-1
# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
export HOSTLIST=$(scontrol show hostname | tr '\n' ',' | sed 's/,$//g')
JOURNAL_FILE=myjournal.in
# INTELMPI=no for IBM MPI
# INTELMPI=yes for INTEL MPI
INTELMPI=no
if [ "$INTELMPI" == "yes" ]
then
export I_MPI_DEBUG=4
export I_MPI_PIN_CELL=core
# Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
# -part $SLURM_NTASKS \
# -start-method 'Intel MPI Distributed Parallel'
cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
-part $SLURM_NTASKS -par-local -start-method 'Intel MPI Distributed Parallel'
else
# Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
# -part $SLURM_NTASKS \
# -start-method 'IBM MPI Distributed Parallel'
cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
-part $SLURM_NTASKS -par-local -start-method 'IBM MPI Distributed Parallel'
fi
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed, and combine it
with `--exclusive` when necessary. In general, **no hyperthreading** is recommended for MPI-based jobs.
Finally, one can change the MPI technology in `-start-method`
(check the CFX documentation for possible values).
## CFX5 Launcher: CFD-Pre/Post, Solve Manager, TurboGrid
Some users might need to visualize or change some parameters when running calculations with the CFX Solver. For running
**TurboGrid**, **CFX-Pre**, **CFX-Solver Manager** or **CFD-Post**, one should start them from the **`cfx5` launcher** binary:
```bash
cfx5
```
![CFX5 Launcher Example]({{ "/images/ANSYS/cfx5launcher.png" }})
Then, from the launcher, one can open the proper application (e.g. **CFX-Solver Manager** for visualizing and modifying an
existing job run).
Running the CFX5 launcher requires proper SSH + X11 forwarding access (`ssh -XY`) or, *preferably*, **NoMachine**.
If **ssh** does not work for you, please use **NoMachine** instead (which is the supported X based access, and simpler).
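For example (a minimal sketch, assuming the `merlin-l-001` login node is used):
```bash
# Connect to a login node with X11 forwarding enabled, then start the launcher
ssh -XY $USER@merlin-l-001.psi.ch
cfx5
```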


@@ -0,0 +1,162 @@
---
title: ANSYS / Fluent
#tags:
keywords: software, ansys, fluent, slurm, interactive, rsm, batch job
last_updated: 07 September 2022
summary: "This document describes how to run ANSYS/Fluent in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-fluent.html
---
This document describes the different ways of running **ANSYS/Fluent**.
## ANSYS/Fluent
It is always recommended to check which parameters are available in Fluent and to adapt the example below to your needs.
To get a list of options, run `fluent -help`. Note that when running Fluent one must specify one of the
following flags:
* **2d**: This is a 2D solver with single precision.
* **3d**: This is a 3D solver with single precision.
* **2ddp**: This is a 2D solver with double precision.
* **3ddp**: This is a 3D solver with double precision.
## Running Fluent jobs
### PModules
It is strongly recommended to use the latest ANSYS software available in PModules.
```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
### Interactive: RSM from remote PSI Workstations
It is possible to run Fluent through RSM from a remote PSI (Linux or Windows) workstation with a local installation of ANSYS Fluent and the RSM client.
For that, please refer to **[ANSYS RSM](/merlin6/ansys-rsm.html)** in the Merlin documentation for further information on how to set up an RSM client for submitting jobs to Merlin.
### Non-interactive: sbatch
Running jobs with `sbatch` is always the recommended method, as it makes the use of the resources more efficient.
For running Fluent as a batch job, one needs to run it in non-graphical mode (`-g` option).
#### Serial example
This example shows a very basic serial job.
```bash
#!/bin/bash
#SBATCH --job-name=Fluent # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file
module use unstable
module load ANSYS/2020R1-1
# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
fluent 3ddp -g -i ${JOURNAL_FILE}
```
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial.
#### MPI-based example
An example for running Fluent using a Slurm batch script is the following:
```bash
#!/bin/bash
#SBATCH --job-name=Fluent # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=44 # Number of tasks
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Run one task per core
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive # Uncomment if you want exclusive usage of the nodes
module use unstable
module load ANSYS/2020R1-1
# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
fluent 3ddp -g -t ${SLURM_NTASKS} -i ${JOURNAL_FILE}
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed. One can remove
the `--nodes` option to allow running on multiple nodes, but this may lead to communication overhead. In general, **no
hyperthreading** is recommended for MPI-based jobs. Also, one can combine it with `--exclusive` when necessary.
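As a sketch, a two-node variant of the above example would change the allocation options and pass the list of allocated hosts to Fluent with the `-cnf` option:
```bash
#SBATCH --nodes=2              # Number of nodes
#SBATCH --ntasks=88            # Number of tasks (44 per node in this sketch)

# Pass the list of allocated hosts to Fluent for the distributed run
fluent 3ddp -g -t ${SLURM_NTASKS} -cnf=$(scontrol show hostname | tr '\n' ',') -i ${JOURNAL_FILE}
```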
## Interactive: salloc
Running Fluent interactively is strongly discouraged; whenever possible, one should use `sbatch`.
However, sometimes interactive runs are needed. For jobs requiring only a few CPUs (for example, 2 CPUs) **and** running for a short period of time, one can use the login nodes.
Otherwise, one must use the Slurm batch system using allocations:
* For short jobs requiring more CPUs, one can use the Merlin shortest partitions (`hourly`).
* For longer jobs, one can use longer partitions, however, interactive access is not always possible (depending on the usage of the cluster).
Please refer to the documentation **[Running Interactive Jobs](/merlin6/interactive-jobs.html)** for further information about the different ways of running interactive
jobs in the Merlin6 cluster.
### Requirements
#### SSH Keys
Running Fluent interactively requires the use of SSH keys, which are used for the communication between the GUI and the different nodes. For that, one must have
a **passphrase protected** SSH key. To check whether SSH keys already exist, simply run **`ls $HOME/.ssh/`** and look for **`id_rsa`** files. For
deploying SSH keys for running Fluent interactively, one should follow this documentation: **[Configuring SSH Keys](/merlin6/ssh-keys.html)**.
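As a minimal sketch (the documentation above describes the authoritative procedure), a passphrase-protected key can be created and authorized as follows:
```bash
# Check whether SSH keys already exist
ls $HOME/.ssh/

# Generate a new RSA key pair; set a passphrase when prompted
ssh-keygen -t rsa -b 4096

# Authorize the new key (assuming the home directory is shared with the compute nodes)
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
```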
#### List of hosts
For running Fluent on Slurm computing nodes, one needs the list of the reserved nodes. Once the allocation is granted, that list can be obtained with
the following command:
```bash
scontrol show hostname
```
This list must be included in the settings as the list of hosts on which to run Fluent. Alternatively, one can pass that list as a parameter (`-cnf` option) when running `fluent`,
as follows:
<details>
<summary>[Running Fluent with 'salloc' example]</summary>
<pre class="terminal code highlight js-syntax-highlight plaintext" lang="plaintext" markdown="false">
(base) [caubet_m@merlin-l-001 caubet_m]$ salloc --nodes=2 --ntasks=88 --hint=nomultithread --time=0-01:00:00 --partition=test $SHELL
salloc: Pending job allocation 135030174
salloc: job 135030174 queued and waiting for resources
salloc: job 135030174 has been allocated resources
salloc: Granted job allocation 135030174
(base) [caubet_m@merlin-l-001 caubet_m]$ module use unstable
(base) [caubet_m@merlin-l-001 caubet_m]$ module load ANSYS/2020R1-1
module load: unstable module has been loaded -- ANSYS/2020R1-1
(base) [caubet_m@merlin-l-001 caubet_m]$ fluent 3ddp -t$SLURM_NPROCS -cnf=$(scontrol show hostname | tr '\n' ',')
(base) [caubet_m@merlin-l-001 caubet_m]$ exit
exit
salloc: Relinquishing job allocation 135030174
salloc: Job allocation 135030174 has been revoked.
</pre>
</details>


@@ -0,0 +1,117 @@
---
title: ANSYS HFSS / ElectroMagnetics
#tags:
keywords: software, ansys, ansysEM, em, slurm, hfss, interactive, rsm, batch job
last_updated: 07 September 2022
summary: "This document describes how to run ANSYS HFSS (ElectroMagnetics) in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-hfss.html
---
This document describes the different ways of running **ANSYS HFSS (ElectroMagnetics)**.
## ANSYS HFSS (ElectroMagnetics)
This recipe is intended to show how to run ANSYS HFSS (ElectroMagnetics) in Slurm.
Keep in mind that, in general, running ANSYS HFSS means running **ANSYS Electronics Desktop**.
## Running HFSS / Electromagnetics jobs
### PModules
It is necessary to run at least **ANSYS/2022R1**, which is available in PModules:
```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
## Remote job submission: HFSS RSM and SLURM
Running jobs through Remote RSM or Slurm is the recommended way for running ANSYS HFSS.
* **HFSS RSM** can be used from ANSYS HFSS installations running on Windows workstations at PSI (as long as they are in the internal PSI network).
* **Slurm** can be used when submitting directly from a Merlin login node (i.e. with the `sbatch` command or interactively from **ANSYS Electronics Desktop**).
### HFSS RSM (from remote workstations)
Running jobs through Remote RSM is the way for running ANSYS HFSS when submitting from an ANSYS HFSS installation on a PSI Windows workstation.
An HFSS RSM service is running on each **Merlin login node**, and the listening port depends on the ANSYS EM version. The currently supported ANSYS EM
releases and their associated listening ports are the following:
<table>
<thead>
<tr>
<th scope='col' style="vertical-align:middle;text-align:center;">ANSYS version</th>
<th scope='col' style="vertical-align:middle;text-align:center;">Login nodes</th>
<th scope='col' style="vertical-align:middle;text-align:center;">Listening port</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>2022R1</td>
<td><font size="2" face="Courier New">merlin-l-001 merlin-l-001 merlin-l-001</font></td>
<td>32958</td>
</tr>
<tr align="center">
<td>2022R2</td>
<td><font size="2" face="Courier New">merlin-l-001 merlin-l-001 merlin-l-001</font></td>
<td>32959</td>
</tr>
<tr align="center">
<td>2023R2</td>
<td><font size="2" face="Courier New">merlin-l-001 merlin-l-001 merlin-l-001</font></td>
<td>32960</td>
</tr>
</tbody>
</table>
Notice that by default ANSYS EM listens on port **`32958`**; this is the default for **ANSYS/2022R1** only.
* Workstations connecting to the Merlin ANSYS EM service must ensure that **Electronics Desktop** is connecting to the proper port.
* In the same way, the ANSYS version on the workstation must be the same as the version running on Merlin.
Notice that _HFSS RSM is not the same RSM provided for other ANSYS products._ Therefore, the configuration is different from [ANSYS RSM](/merlin6/ansys-rsm.html).
To set up HFSS RSM for use with the Merlin cluster, it must be configured from the following **ANSYS Electronics Desktop** menu:
1. **[Tools]->[Job Management]->[Select Scheduler]**.
![Select_Scheduler]({{"/images/ANSYS/HFSS/01_Select_Scheduler_Menu.png"}})
2. In the new **[Select scheduler]** window, setup the following settings and **Refresh**:
![RSM_Remote_Scheduler]({{"/images/ANSYS/HFSS/02_Select_Scheduler_RSM_Remote.png"}})
* **Select Scheduler**: `Remote RSM`.
* **Server**: Add a Merlin login node.
* **User name**: Add your Merlin username.
* **Password**: Add your Merlin account password.
Once *refreshed*, the **Scheduler info** box must provide **Slurm** information of the server (see above picture). If the box contains that information, then you can save changes (`OK` button).
3. **[Tools]->[Job Management]->[Submit Job...]**.
![Submit_Job]({{"/images/ANSYS/HFSS/04_Submit_Job_Menu.png"}})
4. In the new **[Submit Job]** window, you must specify the location of the **ANSYS Electronics Desktop** binary.
![Product_Path]({{"/images/ANSYS/HFSS/05_Submit_Job_Product_Path.png"}})
* For example, for **ANSYS/2021R1**, the location is `/data/software/pmodules/Tools/ANSYS/2021R1/v211/AnsysEM21.1/Linux64/ansysedt.exe`.
### HFSS Slurm (from login node only)
Running jobs through Slurm from **ANSYS Electronics Desktop** is the way for running ANSYS HFSS when submitting from an ANSYS HFSS installation in a Merlin login node. **ANSYS Electronics Desktop** usually needs to be run from the **[Merlin NoMachine](/merlin6/nomachine.html)** service, which currently runs on:
- `merlin-l-001.psi.ch`
- `merlin-l-002.psi.ch`
Since the Slurm client is present on the login node (where **ANSYS Electronics Desktop** is running), the application will be able to detect Slurm and submit to it directly. Therefore, we only have to configure **ANSYS Electronics Desktop** to submit to Slurm. This can be set up as follows:
1. **[Tools]->[Job Management]->[Select Scheduler]**.
![Select_Scheduler]({{"/images/ANSYS/HFSS/01_Select_Scheduler_Menu.png"}})
2. In the new **[Select scheduler]** window, setup the following settings and **Refresh**:
![RSM_Remote_Scheduler]({{"/images/ANSYS/HFSS/03_Select_Scheduler_Slurm.png"}})
* **Select Scheduler**: `Slurm`.
* **Server**: must point to `localhost`.
* **User name**: must be empty.
* **Password**: must be empty.
The **Server**, **User name** and **Password** boxes can't be modified, but if the values do not match the above settings, they should be changed by first selecting another scheduler which allows editing these boxes (e.g. **RSM Remote**).
Once *refreshed*, the **Scheduler info** box must provide **Slurm** information of the server (see above picture). If the box contains that information, then you can save changes (`OK` button).


@@ -0,0 +1,162 @@
---
title: ANSYS / MAPDL
#tags:
keywords: software, ansys, mapdl, slurm, apdl, interactive, rsm, batch job
last_updated: 07 September 2022
summary: "This document describes how to run ANSYS/Mechanical APDL in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-mapdl.html
---
This document describes the different ways of running **ANSYS/Mechanical APDL**.
## ANSYS/Mechanical APDL
It is always recommended to check which parameters are available in Mechanical APDL and to adapt the examples below to your needs.
For that, please refer to the official Mechanical APDL documentation.
## Running Mechanical APDL jobs
### PModules
It is strongly recommended to use the latest ANSYS software available in PModules.
```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
### Interactive: RSM from remote PSI Workstations
It is possible to run Mechanical APDL through RSM from a remote PSI (Linux or Windows) workstation with a local installation of ANSYS Mechanical and the RSM client.
For that, please refer to **[ANSYS RSM](/merlin6/ansys-rsm.html)** in the Merlin documentation for further information on how to set up an RSM client for submitting jobs to Merlin.
### Non-interactive: sbatch
Running jobs with `sbatch` is always the recommended method, as it makes the use of the resources more efficient. Notice that for
running non-interactive Mechanical APDL jobs one must specify the `-b` option.
#### Serial example
This example shows a very basic serial job.
```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file
module use unstable
module load ANSYS/2020R1-1
# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -i "$SOLVER_FILE"
```
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial.
#### SMP-based example
This example shows how to run Mechanical APDL in Shared-Memory Parallelism (SMP) mode. This limits the run
to a single node, but many cores can be used. In the example below, we use a full node with all its cores
and the whole memory.
```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=1 # Number of tasks
#SBATCH --cpus-per-task=44 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
#SBATCH --exclusive               # Exclusive node usage (recommended when using the whole memory)
module use unstable
module load ANSYS/2020R1-1
# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -np ${SLURM_CPUS_PER_TASK} -i "$SOLVER_FILE"
```
In the above example, one can reduce the number of **CPUs per task**. Here, `--exclusive`
is usually recommended if one needs to use the whole memory.
For **SMP** runs, one might try hyperthreading mode by doubling the proper settings
(`--cpus-per-task`); in some cases it might be beneficial.
Please notice that `--ntasks-per-core=1` is not defined here: this is because we want to run 1
task on many cores. As an alternative, one can explore `--ntasks-per-socket` or `--ntasks-per-node`
for fine-grained configurations.
#### MPI-based example
This example enables Distributed ANSYS for running Mechanical APDL using a Slurm batch script.
```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=44 # Number of tasks
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Run one task per core
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive # Uncomment if you want exclusive usage of the nodes
module use unstable
module load ANSYS/2020R1-1
# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
SOLVER_FILE=input.dat
# INTELMPI=no for IBM MPI
# INTELMPI=yes for INTEL MPI
INTELMPI=no
if [ "$INTELMPI" == "yes" ]
then
# When using -mpi=intelmpi, KMP Affinity must be disabled
export KMP_AFFINITY=disabled
# Intel MPI is not aware of the distribution of tasks.
# - We need to define the task distribution ourselves.
HOSTLIST=$(srun hostname | sort | uniq -c | awk '{print $2 ":" $1}' | tr '\n' ':' | sed 's/:$/\n/g')
mapdl -b -dis -mpi intelmpi -machines $HOSTLIST -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
else
# IBM MPI (default) is aware of the distribution of tasks.
# - In principle, there is no need to force the task distribution.
mapdl -b -dis -mpi ibmmpi -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
fi
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed and combine it
with `--exclusive` when necessary. In general, **no hyperthreading** is recommended for MPI-based jobs.


@@ -0,0 +1,108 @@
---
title: ANSYS RSM (Remote Solve Manager)
#tags:
keywords: software, ansys, rsm, slurm, interactive, rsm, windows
last_updated: 07 September 2022
summary: "This document describes how to use the ANSYS Remote Resolve Manager service in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-rsm.html
---
## ANSYS Remote Resolve Manager
**ANSYS Remote Solve Manager (RSM)** is used by ANSYS Workbench to submit computational jobs to HPC clusters directly from Workbench on your desktop.
Therefore, PSI workstations ***with direct access to Merlin*** can submit jobs by using RSM.
Users are responsible for requesting any necessary network access and for debugging possible connectivity problems with the cluster.
For example, if the workstation is behind a firewall, users would need to request a **[firewall rule](https://psi.service-now.com/psisp/?id=psi_new_sc_category&sys_id=6f07ab1e4f3913007f7660fe0310c7ba)** to enable access to Merlin.
{{site.data.alerts.warning}} The Merlin6 administrators <b>are not responsible for connectivity problems</b> between users workstations and the Merlin6 cluster.
{{site.data.alerts.end}}
### The Merlin6 RSM service
An RSM service is running on each login node. This service listens on a specific port and processes any request using RSM (for example, from ANSYS users' workstations).
The following login nodes are configured with such services:
* `merlin-l-01.psi.ch`
* `merlin-l-001.psi.ch`
* `merlin-l-002.psi.ch`
Each ANSYS release installed in `/data/software/pmodules/ANSYS` should have its own RSM service running (the listening port is the default one set by that ANSYS release). With the following command users can check which ANSYS releases have an RSM instance running:
```bash
systemctl | grep pli-ansys-rsm-v[0-9][0-9][0-9].service
```
<details>
<summary>[Example] Listing RSM service running on merlin-l-001.psi.ch</summary>
<pre class="terminal code highlight js-syntax-highlight plaintext" lang="plaintext" markdown="false">
(base) ❄ [caubet_m@merlin-l-001:/data/user/caubet_m]# systemctl | grep pli-ansys-rsm-v[0-9][0-9][0-9].service
pli-ansys-rsm-v195.service loaded active exited PSI ANSYS RSM v195
pli-ansys-rsm-v202.service loaded active exited PSI ANSYS RSM v202
pli-ansys-rsm-v211.service loaded active exited PSI ANSYS RSM v211
pli-ansys-rsm-v212.service loaded active exited PSI ANSYS RSM v212
pli-ansys-rsm-v221.service loaded active exited PSI ANSYS RSM v221
</pre>
</details>
## Configuring RSM client on Windows workstations
Users can set up ANSYS RSM on their workstations to connect to the Merlin6 cluster.
The steps and settings required to make it work are the following:
1. Open the RSM Configuration service in Windows for the ANSYS release you want to configure.
2. Right-click the **HPC Resources** icon followed by **Add HPC Resource...**
![Adding a new HPC Resource]({{ "/images/ANSYS/rsm-1-add_hpc_resource.png" }})
3. In the **HPC Resource** tab, fill up the corresponding fields as follows:
![HPC Resource]({{"/images/ANSYS/rsm-2-add_cluster.png"}})
* **"Name"**: Add here the preffered name for the cluster. In example: `Merlin6 cluster - merlin-l-001`
* **"HPC Type"**: Select `SLURM`
* **"Submit host"**: Add one of the login nodes. In example `merlin-l-001`.
* **"Slurm Job submission arguments (optional)"**: Add any required Slurm options for running your jobs.
* In general, `--hint=nomultithread` should be at least present.
* Check **"Use SSH protocol for inter and intra-node communication (Linux only)"**
* Select **"Able to directly submit and monitor HPC jobs"**.
* **"Apply"** changes.
4. In the **"File Management"** tab, fill up the corresponding fields as follows:
![File Management]({{"/images/ANSYS/rsm-3-add_scratch_info.png"}})
* Select **"RSM internal file transfer mechanism"** and add **`/shared-scratch`** as the **"Staging directory path on Cluster"**
* Select **"Scratch directory local to the execution node(s)"** and add **`/scratch`** as the **HPC scratch directory**.
* **Never check** the option "Keep job files in the staging directory when job is complete" if the previous
option "Scratch directory local to the execution node(s)" was set.
* **"Apply"** changes.
5. In the **"Queues"** tab, use the left button to auto-discover partitions
![Queues]({{"/images/ANSYS/rsm-4-get_slurm_queues.png"}})
* If no authentication method was configured before, an authentication window will appear. Use your
PSI account to authenticate. Notice that the **`PSICH\`** prefix **must not be added**.
![Authenticating]({{"/images/ANSYS/rsm-5-authenticating.png"}})
* From the partition list, select the ones you want to typically use.
* In general, standard Merlin users must use **`hourly`**, **`daily`** and **`general`** only.
* Other partitions are reserved for allowed users only.
* **"Apply"** changes.
![Select partitions]({{"/images/ANSYS/rsm-6-selected-partitions.png"}})
6. *[Optional]* You can perform a test by submitting a test job on each partition by clicking on the **Submit** button
for each selected partition.
{{site.data.alerts.tip}}
Repeat the process for adding other login nodes if necessary. This will give users the alternative
of using another login node in case of maintenance windows.
{{site.data.alerts.end}}
## Using RSM in ANSYS
Using the RSM service in ANSYS is slightly different depending on the ANSYS software being used.
Please follow the official ANSYS documentation for details about how to use it for that specific software.
Alternatively, please refer to the examples shown in the following chapters (ANSYS specific software).
### Using RSM in ANSYS Fluent
For further information for using RSM with Fluent, please visit the **[ANSYS Fluent](/merlin6/ansys-fluent.html)** section.
### Using RSM in ANSYS CFX
For further information for using RSM with CFX, please visit the **[ANSYS CFX](/merlin6/ansys-cfx.html)** section.
### Using RSM in ANSYS MAPDL
For further information for using RSM with MAPDL, please visit the **[ANSYS MAPDL](/merlin6/ansys-mapdl.html)** section.


@@ -0,0 +1,89 @@
---
title: ANSYS
#tags:
keywords: software, ansys, slurm, interactive, rsm, pmodules, overlay, overlays
last_updated: 07 September 2022
summary: "This document describes how to load and use ANSYS in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys.html
---
This document provides generic information on how to load and run ANSYS software in the Merlin cluster.
## ANSYS software in Pmodules
The ANSYS software can be loaded through **[PModules](/merlin6/using-modules.html)**.
The default ANSYS versions are loaded from the central PModules repository.
However, there are some known problems that can pop up when using some specific ANSYS packages in advanced mode.
Due to this, and also to improve the interactive experience of the user, ANSYS has also been installed in the
Merlin high-performance storage and made available from PModules.
### Loading Merlin6 ANSYS
For loading the Merlin6 ANSYS software, one needs to run Pmodules v1.1.4 or newer, and then use a specific repository
(called **`overlay_merlin`**) which is ***only available from the Merlin cluster***:
```bash
module load Pmodules/1.1.6
module use overlay_merlin
```
Once `overlay_merlin` is invoked, central ANSYS installations with the same version are hidden and replaced
by the local ones in Merlin. Releases from the central PModules repository which do not have a local installation remain
visible. For each ANSYS release, one can identify where it is installed by searching for ANSYS in PModules with the `--verbose`
option. This shows the location of the different ANSYS releases as follows:
* For ANSYS releases installed in the central repositories, the path starts with `/opt/psi`
* For ANSYS releases installed in the Merlin6 repository (and/or overriding the central ones), the path starts with `/data/software/pmodules`
<details>
<summary>[Example] Loading ANSYS from the Merlin6 PModules repository</summary>
<pre class="terminal code highlight js-syntax-highlight plaintext" lang="plaintext" markdown="false">
(base) ❄ [caubet_m@merlin-l-001:/data/user/caubet_m]# module load Pmodules/1.1.6
module load: unstable module has been loaded -- Pmodules/1.1.6
(base) ❄ [caubet_m@merlin-l-001:/data/user/caubet_m]# module use overlay_merlin
(base) ❄ [caubet_m@merlin-l-001:/data/user/caubet_m]# module search ANSYS --verbose
Module Rel.stage Group Dependencies/Modulefile
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ANSYS/2019R3 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2019R3
ANSYS/2020R1 stable Tools dependencies:
modulefile: /opt/psi/Tools/modulefiles/ANSYS/2020R1
ANSYS/2020R1-1 stable Tools dependencies:
modulefile: /opt/psi/Tools/modulefiles/ANSYS/2020R1-1
ANSYS/2020R2 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2020R2
ANSYS/2021R1 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2021R1
ANSYS/2021R2 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2021R2
</pre>
</details>
{{site.data.alerts.tip}} Please <b>only use Merlin6 ANSYS installations from `overlay_merlin`</b> in the Merlin cluster.
{{site.data.alerts.end}}
## ANSYS Documentation by product
### ANSYS RSM
**ANSYS Remote Solve Manager (RSM)** is used by ANSYS Workbench to submit computational jobs to HPC clusters directly from Workbench on your desktop.
Therefore, PSI workstations with direct access to Merlin can submit jobs by using RSM.
For further information, please visit the **[ANSYS RSM](/merlin6/ansys-rsm.html)** section.
### ANSYS Fluent
For further information, please visit the **[ANSYS Fluent](/merlin6/ansys-fluent.html)** section.
### ANSYS CFX
For further information, please visit the **[ANSYS CFX](/merlin6/ansys-cfx.html)** section.
### ANSYS MAPDL
For further information, please visit the **[ANSYS MAPDL](/merlin6/ansys-mapdl.html)** section.


@@ -0,0 +1,216 @@
---
title: GOTHIC
#tags:
keywords: software, gothic, slurm, interactive, batch job
last_updated: 07 September 2022
summary: "This document describes how to run Gothic in the Merlin cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/gothic.html
---
This document provides generic information on how to run Gothic in the
Merlin cluster.
## Gothic installation
Gothic is locally installed in Merlin in the following directory:
```bash
/data/project/general/software/gothic
```
Multiple versions are available. As of August 22, 2022, the latest
installed version is **Gothic 8.3 QA**.
Future releases will be placed in the PSI PModules system, therefore
loading Gothic through PModules will be possible at some point. However, in the
meantime one has to use the existing installations present in
`/data/project/general/software/gothic`.
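To see which versions are currently installed, one can simply list that directory:
```bash
ls /data/project/general/software/gothic
```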
## Running Gothic
### General requirements
When running Gothic in interactive or batch mode, one has to consider
the following requirements:
* **Always use one node only**: Gothic runs as a single instance.
Therefore, it cannot run on multiple nodes. Adding the option `--nodes=1-1`
or `-N 1-1` is strongly recommended: this will prevent Slurm from allocating
multiple nodes if the Slurm allocation definition is ambiguous.
* **Use one task only**: Gothic spawns one main process, which will then
spawn multiple threads depending on the number of available cores.
Therefore, one has to specify 1 task (`--ntasks=1` or `-n 1`).
* **Use multiple CPUs**: since Gothic will spawn multiple threads,
multiple CPUs can be used. Adding `--cpus-per-task=<num_cpus>`
or `-c <num_cpus>` is in general recommended.
Notice that `<num_cpus>` must never exceed the maximum number of CPUs
in a compute node (usually *88*).
* **Use multithread**: Gothic is an OpenMP based software, therefore,
running in hyper-threading mode is strongly recommended. Use the option
`--hint=multithread` for enforcing hyper-threading.
* **[Optional]** *Memory setup*: The default memory per CPU (4000MB)
is usually enough for running Gothic. If you require more memory, you
can always set the `--mem=<mem_in_MB>` option. This is in general
*not necessary*.
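Putting these requirements together (as a sketch; complete examples are given in the sections below), a typical resource specification looks as follows:
```bash
#SBATCH --nodes=1-1            # Gothic runs on a single node only
#SBATCH --ntasks=1             # One main Gothic process
#SBATCH --cpus-per-task=88     # Number of threads; never exceed the CPUs of a node
#SBATCH --hint=multithread     # Gothic benefits from hyperthreading
```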
### Interactive
**Running CPU-intensive interactive jobs on the login nodes is not allowed**.
Only applications capable of limiting the number of cores they use are
allowed to run for a longer time. Also, **running on the login nodes is not
efficient**, since resources are shared with other processes and users.
It is possible to submit interactive jobs to the cluster by allocating a
full compute node, or even a few cores only. This grants
dedicated CPUs and resources, and in general it will not affect other users.
For interactive jobs, it is strongly recommended to use the `hourly` partition,
which usually has a good availability of nodes.
For longer runs, one should use the `daily` (or `general`) partition.
However, getting interactive access to nodes on these partitions is
sometimes more difficult when the cluster is heavily used.
To submit an interactive job, consider the following requirements:
* **X11 forwarding must be enabled**: Gothic spawns an interactive
window which requires X11 forwarding when using it remotely, therefore
using the Slurm option `--x11` is necessary.
* **Ensure that the scratch area is accessible**: For running Gothic,
one has to define a scratch area with the `GTHTMP` environment variable.
There are two options:
1. **Use local scratch**: Each compute node has its own `/scratch` area.
This area is independent of any other node, and therefore not visible by other nodes.
Using the top directory `/scratch` for interactive jobs is the simplest way,
and it can be defined before or after the allocation creation, as follows:
```bash
# Example 1: Define GTHTMP before the allocation
export GTHTMP=/scratch
salloc ...
# Example 2: Define GTHTMP after the allocation
salloc ...
export GTHTMP=/scratch
```
Notice that if you want to create a custom sub-directory (e.g.
`/scratch/$USER`), one has to create the sub-directory on every new
allocation. For example:
```bash
# Example 1:
export GTHTMP=/scratch/$USER
salloc ...
mkdir -p $GTHTMP
# Example 2:
salloc ...
export GTHTMP=/scratch/$USER
mkdir -p $GTHTMP
```
Creating sub-directories makes the process more complex, therefore
using just `/scratch` is simpler and recommended.
2. **Shared scratch**: Using the shared scratch allows one to have a
directory visible from all compute nodes and login nodes. Therefore,
one can use `/shared-scratch` to achieve the same as in **1.**, but
the sub-directory needs to be created just once.
Please, consider that `/scratch` usually provides better performance and,
in addition, will offload the main storage. Therefore, using **local scratch**
is strongly recommended. Use the shared scratch only when strictly necessary.
* **Use the `hourly` partition**: Using the `hourly` partition is
recommended for running interactive jobs (latency is in general
lower). However, `daily` and `general` are also available if you expect
longer runs, but in these cases you should expect longer waiting times.
These requirements are in addition to the requirements previously described
in the [General requirements](/merlin6/gothic.html#general-requirements)
section.
#### Interactive allocations: examples
* Requesting a full node,
```bash
salloc --partition=hourly -N 1 -n 1 -c 88 --hint=multithread --x11 --exclusive --mem=0
```
* Requesting 22 CPUs from a node, with default memory per CPU (4000MB/CPU):
```bash
num_cpus=22
salloc --partition=hourly -N 1 -n 1 -c $num_cpus --hint=multithread --x11
```
### Batch job
The Slurm cluster is mainly used for non-interactive batch jobs: users
submit a job, which goes into a queue and waits until Slurm can assign
resources to it. In general, the longer the job, the longer the waiting time,
unless there are enough free resources to start running it immediately.
Running Gothic in a Slurm batch script is pretty simple. One has to mainly
consider the requirements described in the [General requirements](/merlin6/gothic.html#general-requirements)
section, and:
* **Use local scratch** for running batch jobs. In general, defining
`GTHTMP` in a batch script is simpler than on an allocation. If you plan
to run multiple jobs in the same node, you can even create a second sub-directory
level based on the Slurm Job ID:
```bash
mkdir -p /scratch/$USER/$SLURM_JOB_ID
export GTHTMP=/scratch/$USER/$SLURM_JOB_ID
... # Run Gothic here
rm -rf /scratch/$USER/$SLURM_JOB_ID
```
Temporary data generated by the job in `GTHTMP` must be removed at the end of
the job, as shown above.
#### Batch script: examples
* Requesting a full node:
```bash
#!/bin/bash -l
#SBATCH --job-name=Gothic
#SBATCH --time=3-00:00:00
#SBATCH --partition=general
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=88
#SBATCH --hint=multithread
#SBATCH --exclusive
#SBATCH --mem=0
#SBATCH --clusters=merlin6
INPUT_FILE='MY_INPUT.SIN'
mkdir -p /scratch/$USER/$SLURM_JOB_ID
export GTHTMP=/scratch/$USER/$SLURM_JOB_ID
/data/project/general/software/gothic/gothic8.3qa/bin/gothic_s.sh $INPUT_FILE -m -np $SLURM_CPUS_PER_TASK
gth_exit_code=$?
# Clean up data in /scratch
rm -rf /scratch/$USER/$SLURM_JOB_ID
# Return exit code from GOTHIC
exit $gth_exit_code
```
* Requesting 22 CPUs from a node, with default memory per CPU (4000MB/CPU):
```bash
#!/bin/bash -l
#SBATCH --job-name=Gothic
#SBATCH --time=3-00:00:00
#SBATCH --partition=general
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=22
#SBATCH --hint=multithread
#SBATCH --clusters=merlin6
INPUT_FILE='MY_INPUT.SIN'
mkdir -p /scratch/$USER/$SLURM_JOB_ID
export GTHTMP=/scratch/$USER/$SLURM_JOB_ID
/data/project/general/software/gothic/gothic8.3qa/bin/gothic_s.sh $INPUT_FILE -m -np $SLURM_CPUS_PER_TASK
gth_exit_code=$?
# Clean up data in /scratch
rm -rf /scratch/$USER/$SLURM_JOB_ID
# Return exit code from GOTHIC
exit $gth_exit_code
```


@@ -0,0 +1,45 @@
---
title: Intel MPI Support
#tags:
last_updated: 13 March 2020
keywords: software, impi, slurm
summary: "This document describes how to use Intel MPI in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/impi.html
---
## Introduction
This document describes which set of Intel MPI versions in PModules are supported in the Merlin6 cluster.
### srun
We strongly recommend the use of **'srun'** over **'mpirun'** or **'mpiexec'**. Using **'srun'** properly
binds tasks to cores and requires less customization, while **'mpirun'** and **'mpiexec'** might need more advanced
configuration and should only be used by advanced users. Please ***always*** adapt your scripts to use **'srun'**
before opening a support ticket. Also, please contact us about any problem when using a module.
{{site.data.alerts.tip}} Always run Intel MPI with the <b>srun</b> command. The only exception is for advanced users, however <b>srun</b> is still recommended.
{{site.data.alerts.end}}
When running with **srun**, one should tell Intel MPI to use the PMI libraries provided by Slurm. For PMI-1:
```bash
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
srun ./app
```
Alternatively, one can use PMI-2, but then one needs to specify it as follows:
```bash
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so
export I_MPI_PMI2=yes
srun ./app
```
For more information, please read [Slurm Intel MPI Guide](https://slurm.schedmd.com/mpi_guide.html#intel_mpi)
**Note**: Please note that PMI-2 might not work properly with some Intel MPI versions. If so, you can either fall back
to PMI-1 or contact the Merlin administrators.
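As a minimal sketch (the partition, task count, and application name are placeholders to adapt to your case), a batch script using PMI-1 could look as follows:
```bash
#!/bin/bash
#SBATCH --partition=hourly       # Placeholder: choose a partition matching your runtime
#SBATCH --ntasks=44              # Placeholder: number of MPI tasks
#SBATCH --hint=nomultithread     # Hyperthreading is in general not recommended for MPI jobs

# Load your Intel MPI based environment from PModules here (module names depend on your software)

# Tell Intel MPI to use the PMI-1 library provided by Slurm
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

# Launch the application with srun (replace ./app with your binary)
srun ./app
```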


@@ -0,0 +1,140 @@
---
title: OpenMPI Support
#tags:
last_updated: 13 March 2020
keywords: software, openmpi, slurm
summary: "This document describes how to use OpenMPI in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/openmpi.html
---
## Introduction
This document describes which set of OpenMPI versions in PModules are supported in the Merlin6 cluster.
### srun
We strongly recommend the use of **'srun'** over **'mpirun'** or **'mpiexec'**. Using **'srun'** properly
binds tasks to cores and requires less customization, while **'mpirun'** and **'mpiexec'** might need more advanced
configuration and should only be used by advanced users. Please ***always*** adapt your scripts to use **'srun'**
before opening a support ticket. Also, please contact us about any problem when using a module.
Example:
```bash
srun ./app
```
{{site.data.alerts.tip}} Always run OpenMPI with the <b>srun</b> command. The only exception is for advanced users, however <b>srun</b> is still recommended.
{{site.data.alerts.end}}
### OpenMPI with UCX
**OpenMPI** supports **UCX** starting from version 3.0, but it is recommended to use version 4.0 or higher due to stability and performance improvements.
**UCX** should only be used by advanced users, as it requires running with **mpirun** (which needs advanced knowledge) and is an exception to running MPI
without **srun** (**UCX** is not integrated with **srun** at PSI).
For running UCX, one should:
* add the following options to **mpirun**:
```bash
-mca pml ucx --mca btl ^vader,tcp,openib,uct -x UCX_NET_DEVICES=mlx5_0:1
```
* or alternatively, add the following options **before mpirun**
```bash
export OMPI_MCA_pml="ucx"
export OMPI_MCA_btl="^vader,tcp,openib,uct"
export UCX_NET_DEVICES=mlx5_0:1
```
In addition, one can add the following options for debugging purposes (visit [UCX Logging](https://github.com/openucx/ucx/wiki/Logging) for possible `UCX_LOG_LEVEL` values):
```bash
-x UCX_LOG_LEVEL=<data|debug|warn|info|...> -x UCX_LOG_FILE=<filename>
```
This can also be added externally before the **mpirun** call (see the example below). Full example:
* Within the **mpirun** command:
```bash
mpirun -np $SLURM_NTASKS -mca pml ucx --mca btl ^vader,tcp,openib,uct -x UCX_NET_DEVICES=mlx5_0:1 -x UCX_LOG_LEVEL=data -x UCX_LOG_FILE=UCX-$SLURM_JOB_ID.log ./app
```
* Outside the **mpirun** command:
```bash
export OMPI_MCA_pml="ucx"
export OMPI_MCA_btl="^vader,tcp,openib,uct"
export UCX_NET_DEVICES=mlx5_0:1
export UCX_LOG_LEVEL=data
export UCX_LOG_FILE=UCX-$SLURM_JOB_ID.log
mpirun -np $SLURM_NTASKS ./app
```
## Supported OpenMPI versions
For running OpenMPI properly in a Slurm batch system, ***OpenMPI and Slurm must be compiled accordingly***.
We can find a large number of compilations of OpenMPI modules in the PModules central repositories. However, only
some of them are suitable for running in a Slurm cluster: ***any OpenMPI version with the suffix `_slurm`
is suitable for running in the Merlin6 cluster***. OpenMPI versions with the suffix `_merlin6` can also be used, but these will be fully
replaced by the `_slurm` series in the future (which can be used on any Slurm cluster at PSI). Please ***avoid using any other OpenMPI releases***.
{{site.data.alerts.tip}} Suitable <b>OpenMPI</b> versions for running in the Merlin6 cluster:
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-&nbsp;
<span class="terminal code highlight js-syntax-highlight plaintext" lang="plaintext" markdown="false"><b>openmpi/&lt;version&gt;&#95;slurm</b>
</span>&nbsp;<b>[<u>Recommended</u>]</b>
</p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-&nbsp;
<span class="terminal code highlight js-syntax-highlight plaintext" lang="plaintext" markdown="false">openmpi/&lt;version&gt;&#95;merlin6
</span>
</p>
{{site.data.alerts.end}}
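As a sketch (the version shown is only a placeholder; check which `_slurm` releases are actually installed), one can search for and load such a module as follows:
```bash
# List the OpenMPI modules available in PModules and keep the Slurm-aware ones
module search openmpi --verbose 2>&1 | grep '_slurm'

# Load one of them (placeholder version) and launch the application with srun
module load openmpi/4.0.5_slurm
srun ./app
```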
#### 'unstable' repository
New OpenMPI versions that need to be tested will first be compiled in the **``unstable``** repository, and once validated will be moved to **``stable``**.
We cannot ensure that modules in that repository are production ready, but you can use them *at your own risk*.
For using *unstable* modules, you might need to load the **``unstable``** PModules repository as follows:
```bash
module use unstable
```
#### 'stable' repository
Officially supported OpenMPI versions (https://www.open-mpi.org/) will be available in the **``stable``** repository (which is the *default* loaded repository).
For further information, please check [https://www.open-mpi.org/software/ompi/ -> Current & Still Supported](https://www.open-mpi.org/software/ompi/)
versions.
Usually, not more than 2 minor update releases will be present in the **``stable``** repository. Older minor update releases will be moved to **``deprecated``**
even though they are still officially supported. This ensures that users compile new software with the latest stable versions, while the old versions remain available
for software which was compiled with them.
#### 'deprecated' repository
Old OpenMPI versions (that is, any official OpenMPI version which has been moved to **retired** or **ancient**) will be
moved to the ***'deprecated'*** PModules repository.
For further information, please check [https://www.open-mpi.org/software/ompi/ -> Older Versions](https://www.open-mpi.org/software/ompi/)
versions.
Also, as mentioned [before](/merlin6/openmpi.html#stable-repository), older officially supported OpenMPI releases (minor updates) will be moved to ``deprecated``.
For using *deprecated* modules, you might need to load the **``deprecated``** PModules repository as follows:
```bash
module use deprecated
```
However, this is usually not needed: when directly loading a specific version, if it is not found in
``stable``, PModules will try to search in and fall back to the other repositories (``deprecated`` or ``unstable``).
#### About missing versions
##### Missing OpenMPI versions
For legacy software, some users might require a different OpenMPI version. **We always encourage** users to try one of the existing stable versions
(*OpenMPI always with the suffix ``_slurm`` or ``_merlin6``!*), as they contain the latest bug fixes and usually should work. In the worst case, you
can also try the ones in the deprecated repository (again, *OpenMPI always with the suffix ``_slurm`` or ``_merlin6``!*). For very old software which
was based on OpenMPI v1, you can follow the guide [FAQ: Removed MPI constructs](https://www.open-mpi.org/faq/?category=mpi-removed), which provides
some easy steps for migrating from OpenMPI v1 to v2 or later, and is also useful for finding out why your code does not compile properly.
If, after trying the mentioned versions and guide, you are still facing problems, please contact us. Also, please contact us if you require a newer
version with a different ``gcc`` or ``intel`` compiler (for example, Intel v19).


@@ -0,0 +1,63 @@
---
title: Running Paraview
#tags:
last_updated: 03 December 2020
keywords: software, paraview, mesa, OpenGL, interactive
summary: "This document describes how to run ParaView in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/paraview.html
---
## Requirements
**[NoMachine](/merlin6/nomachine.html)** is the official **strongly recommended and supported** tool for running *ParaView*.
Consider that running over SSH (X11 forwarding needed) is very slow, and the configuration might not work, as it also depends
on the client setup (Linux workstation/laptop, Windows with XMing, etc.). Hence, please **avoid running ParaView over SSH**.
The only exception for running over SSH is when running it as a job from a NoMachine client.
## ParaView
### PModules
It is strongly recommended to use the latest ParaView version available in PModules. For example, for loading **paraview**:
```bash
module use unstable
module load paraview/5.8.1
```
### Running ParaView
ParaView can be run with **VirtualGL** to take advantage of the GPU card located on each login node. For that, once the module is loaded, you can start **paraview** as follows:
```bash
vglrun paraview
```
Alternatively, one can run **paraview** with *mesa* support with the below command. This can be useful when running on CPU computing nodes (with `srun` / `salloc`)
which have no graphics card (and where `vglrun` is not possible):
```bash
paraview-mesa paraview
```
#### Running older versions of ParaView
Older versions of ParaView available in PModules (e.g. *paraview/5.0.1* and *paraview/5.4.1*) might require a different command
for running paraview with **Mesa** support. The command is the following:
```bash
# Warning: only for Paraview 5.4.1 and older
paraview --mesa
```
#### Running ParaView interactively in the batch system
One can run ParaView interactively in the CPU cluster as follows:
```bash
# First, load module. In example: "module load paraview/5.8.1"
srun --pty --x11 --partition=general --ntasks=1 paraview-mesa paraview
```
One can change the partition, number of tasks or specify extra parameters to `srun` if needed.
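For instance (the partition and CPU count below are only placeholders), one could request a short interactive run on the `hourly` partition with a few CPUs:
```bash
# First, load the module. For example: "module load paraview/5.8.1"
srun --pty --x11 --partition=hourly --ntasks=1 --cpus-per-task=4 paraview-mesa paraview
```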


@@ -0,0 +1,162 @@
---
title: Python
#tags:
last_updated: 28 September 2020
keywords: [python, anaconda, conda, jupyter, numpy]
summary: Running Python on Merlin
sidebar: merlin6_sidebar
permalink: /merlin6/python.html
---
PSI provides a variety of ways to execute python code.
1. **Anaconda** - Custom environments for installation and development
2. **Jupyterhub** - Execute Jupyter notebooks on the cluster
3. **System Python** - Do not use! Only for OS applications.
## Anaconda
[Anaconda](https://www.anaconda.com/) ("conda" for short) is a package manager with
excellent python integration. Using it you can create isolated environments for each
of your python applications, containing exactly the dependencies needed for that app.
It is similar to the [virtualenv](http://virtualenv.readthedocs.org/) python package,
but can also manage non-python requirements.
### Loading conda
Conda is loaded from the module system:
```
module load anaconda
```
### Using pre-made environments
Loading the module provides the `conda` command, but does not otherwise change your
environment. First an environment needs to be activated. Available environments can
be seen with `conda info --envs` and include many specialized environments for
software installs. After activating you should see the environment name in your
prompt:
```
~ $ conda activate datascience_py37
(datascience_py37) ~ $
```
### CondaRC file
Creating a `~/.condarc` file is recommended if you want to create new environments on
Merlin. Environments can grow quite large, so you will need to change the storage
location from the default (your home directory) to a larger volume (usually
`/data/user/$USER`).
Save the following as `$HOME/.condarc`:
```
always_copy: true
envs_dirs:
- /data/user/$USER/conda/envs
pkgs_dirs:
- /data/user/$USER/conda/pkgs
- $ANACONDA_PREFIX/conda/pkgs
channels:
- conda-forge
- nodefaults
```
Run `conda info` to check that the variables are being set correctly.
### Creating environments
We will create an environment named `myenv1` which uses an older version of numpy, e.g. to test for backwards compatibility of our code (the `-q` and `--yes` switches are just for not getting prompted and for disabling the progress bar). The environment will be created in the default location as defined by the `.condarc` configuration file (see above).
```
~ $ conda create -q --yes -n 'myenv1' numpy=1.8 scipy ipython
Fetching package metadata: ...
Solving package specifications: .
Package plan for installation in environment /gpfs/home/feichtinger/conda-envs/myenv1:
The following NEW packages will be INSTALLED:
ipython: 2.3.0-py27_0
numpy: 1.8.2-py27_0
openssl: 1.0.1h-1
pip: 1.5.6-py27_0
python: 2.7.8-1
readline: 6.2-2
scipy: 0.14.0-np18py27_0
setuptools: 5.8-py27_0
sqlite: 3.8.4.1-0
system: 5.8-1
tk: 8.5.15-0
zlib: 1.2.7-0
To activate this environment, use:
$ source activate myenv1
To deactivate this environment, use:
$ source deactivate
```
The created environment contains **just the packages that are needed to satisfy the
requirements** and it is local to your installation. The python installation is even
independent of the central installation, i.e. your code will still work in such an
environment, even if you are offline or AFS is down. However, you need the central
installation if you want to use the `conda` command itself.
Packages for your new environment will be either copied from the central one into
your new environment, or if there are newer packages available from anaconda and you
did not specify exactly the version from our central installation, they may get
downloaded from the web. **This will require significant space in the `envs_dirs`
that you defined in `.condarc`.** If you create other environments on the same local
disk, they will share the packages using hard links.
We can switch to the newly created environment with the `conda activate` command.
```
$ conda activate myenv1
```
{% include callout.html type="info" content="Note that anaconda's activate/deactivate
scripts are compatible with the bash and zsh shells but not with [t]csh." %}
Let's test whether we indeed got the desired numpy version:
```
$ python -c 'import numpy as np; print np.version.version'
1.8.2
```
You can install additional packages into the active environment using the `conda
install` command.
```
$ conda install --yes -q bottle
Fetching package metadata: ...
Solving package specifications: .
Package plan for installation in environment /gpfs/home/feichtinger/conda-envs/myenv1:
The following NEW packages will be INSTALLED:
bottle: 0.12.5-py27_0
```
## Jupyterhub
Jupyterhub is a service for running code notebooks on the cluster, particularly in
python. It is a powerful tool for data analysis and prototyping. For more information
see the [Jupyterhub documentation]({{"jupyterhub.html"}}).
## Pythons to avoid
Avoid using the system python (`/usr/bin/python`). It is intended for OS software and
may not be up to date.
Also avoid the 'python' module (`module load python`). This is a minimal install of
python intended for embedding in other modules.