---
title: ANSYS / Fluent
#tags:
keywords: software, ansys, fluent, slurm, interactive, rsm, batch job
last_updated: 07 September 2022
summary: "This document describes how to run ANSYS/Fluent in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-fluent.html
---
|
|
This document describes the different ways of running **ANSYS/Fluent** in the Merlin6 cluster.
|
|
## ANSYS/Fluent
|
|
It is always recommended to check which parameters are available in Fluent and to adapt the examples below according to your needs.
For that, run `fluent -help` to get the list of options. Note that, when running Fluent, one must always specify one of the
following solver flags (a short launch example follows the list):

* **2d**: 2D solver with single precision.
* **3d**: 3D solver with single precision.
* **2ddp**: 2D solver with double precision.
* **3ddp**: 3D solver with double precision.
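For instance, a minimal sketch of launching the 3D double precision solver in batch mode (the journal file name is only a placeholder):

```bash
# Start the 3D double precision solver without the GUI ('-g'),
# reading the commands from a journal file (placeholder name)
fluent 3ddp -g -i myjournal.in
```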
|
|
## Running Fluent jobs
|
|
### PModules
|
|
It is strongly recommended to use the latest ANSYS software available in PModules.
|
|
```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
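To see which ANSYS releases are currently published in PModules (the exact output depends on the installation), one can search for them:

```bash
# List the ANSYS versions visible in PModules
module use unstable
module search ANSYS
```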
|
|
### Interactive: RSM from remote PSI Workstations
|
|
It is possible to run Fluent through RSM from a remote PSI (Linux or Windows) workstation with a local installation of ANSYS Fluent and the RSM client.
For that, please refer to **[ANSYS RSM](/merlin6/ansys-rsm.html)** in the Merlin documentation for further information on how to set up an RSM client for submitting jobs to Merlin.
|
|
### Non-interactive: sbatch
|
|
Running jobs with `sbatch` is always the recommended method, as it makes the use of the resources more efficient.
When running Fluent as a batch job, it must be started in non-graphical mode (`-g` option).
|
|
#### Serial example
|
|
This example shows a very basic serial job.
|
|
```bash
#!/bin/bash
#SBATCH --job-name=Fluent       # Job Name
#SBATCH --partition=hourly      # Using 'hourly' will grant higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00       # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1       # Double if hyperthreading enabled
#SBATCH --hint=nomultithread    # Disable Hyperthreading
#SBATCH --error=slurm-%j.err    # Define your error file

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
fluent 3ddp -g -i ${JOURNAL_FILE}
```
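Assuming the script above is saved as `fluent_serial.sh` (a hypothetical name), it can be submitted and monitored as follows:

```bash
# Submit the batch script and check the job state in the queue
sbatch fluent_serial.sh
squeue -u $USER
```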
|
|
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial for the job; a sketch of the corresponding directives is shown below.
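A minimal sketch of the directives that change relative to the serial example above, assuming hyperthreading is indeed beneficial for the case:

```bash
#SBATCH --hint=multithread      # Enable hyperthreading
#SBATCH --cpus-per-task=2       # Doubled, since hyperthreading is enabled
#SBATCH --ntasks-per-core=2     # Two threads per physical core
```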
|
|
#### MPI-based example
|
|
An example of running Fluent in parallel with a Slurm batch script is the following:
|
|
```bash
#!/bin/bash
#SBATCH --job-name=Fluent       # Job Name
#SBATCH --partition=hourly      # Using 'hourly' will grant higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00       # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1               # Number of nodes
#SBATCH --ntasks=44             # Number of tasks
#SBATCH --cpus-per-task=1       # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1     # Run one task per core
#SBATCH --hint=nomultithread    # Disable Hyperthreading
#SBATCH --error=slurm-%j.err    # Define a file for standard error messages
##SBATCH --exclusive            # Uncomment if you want exclusive usage of the nodes

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
fluent 3ddp -g -t ${SLURM_NTASKS} -i ${JOURNAL_FILE}
```
|
|
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed. One can remove
`--nodes` to let Slurm spread the tasks over multiple nodes, but this may lead to communication overhead. In general, **no
hyperthreading** is recommended for MPI-based jobs. Also, one can combine it with `--exclusive` when necessary; a sketch of a multi-node header variant follows.
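A minimal sketch of the directives for such a multi-node variant (the task count is borrowed from the `salloc` example further below; adapt it to your case):

```bash
##SBATCH --nodes=1              # Commented out: no fixed node count
#SBATCH --ntasks=88             # Slurm may spread the tasks over several nodes
#SBATCH --ntasks-per-core=1     # Run one task per core
#SBATCH --hint=nomultithread    # No hyperthreading for MPI-based jobs
#SBATCH --exclusive             # Optional: exclusive usage of the nodes
```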
|
|
### Interactive: salloc
|
|
Running Fluent interactively is strongly discouraged; whenever possible, one should use `sbatch` instead.
However, sometimes interactive runs are needed. For jobs requiring only a few CPUs (for example, 2 CPUs) **and** running for a short period of time, one can use the login nodes.
Otherwise, one must use the Slurm batch system through an allocation:

* For short jobs requiring more CPUs, one can use the shortest Merlin partition (`hourly`), as in the sketch below.
* For longer jobs, one can use the longer partitions; however, interactive access is not always possible (depending on the usage of the cluster).
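A minimal sketch of requesting a short interactive allocation on the `hourly` partition (task count and time are placeholders):

```bash
# Open a shell inside a one-hour allocation with 4 tasks
salloc --partition=hourly --ntasks=4 --hint=nomultithread --time=0-01:00:00 $SHELL
```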
|
|
Please refer to the **[Running Interactive Jobs](/merlin6/interactive-jobs.html)** documentation for further information about the different ways of running interactive
jobs in the Merlin6 cluster.
|
|
#### Requirements
|
|
##### SSH Keys
|
|
Running Fluent interactively requires SSH keys, which are used for the communication between the GUI and the different nodes. For that, one must have
a **passphrase protected** SSH key. To check whether SSH keys already exist, simply run **`ls $HOME/.ssh/`** and look for **`id_rsa`** files. For
deploying SSH keys for running Fluent interactively, one should follow this documentation: **[Configuring SSH Keys](/merlin6/ssh-keys.html)**
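A minimal sketch of the check and, if needed, the key generation (the linked documentation covers the site-specific details):

```bash
# Check whether a key pair already exists
ls $HOME/.ssh/
# If not, generate one; make sure to enter a passphrase when prompted
ssh-keygen -t rsa -f $HOME/.ssh/id_rsa
```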
|
|
|
|
##### List of hosts
|
|
For running Fluent on Slurm computing nodes, one needs the list of the reserved nodes. Once the allocation has been granted, one can obtain that list by running
the following command:
|
|
```bash
scontrol show hostname
```
|
|
This list must be included in the settings as the list of hosts on which to run Fluent. Alternatively, one can pass that list as a parameter (`-cnf` option) when running `fluent`,
as follows:
|
|
<details>
<summary>[Running Fluent with 'salloc' example]</summary>
<pre class="terminal code highlight js-syntax-highlight plaintext" lang="plaintext" markdown="false">
(base) [caubet_m@merlin-l-001 caubet_m]$ salloc --nodes=2 --ntasks=88 --hint=nomultithread --time=0-01:00:00 --partition=test $SHELL
salloc: Pending job allocation 135030174
salloc: job 135030174 queued and waiting for resources
salloc: job 135030174 has been allocated resources
salloc: Granted job allocation 135030174

(base) [caubet_m@merlin-l-001 caubet_m]$ module use unstable
(base) [caubet_m@merlin-l-001 caubet_m]$ module load ANSYS/2020R1-1
module load: unstable module has been loaded -- ANSYS/2020R1-1

(base) [caubet_m@merlin-l-001 caubet_m]$ fluent 3ddp -t$SLURM_NPROCS -cnf=$(scontrol show hostname | tr '\n' ',')

(base) [caubet_m@merlin-l-001 caubet_m]$ exit
exit
salloc: Relinquishing job allocation 135030174
salloc: Job allocation 135030174 has been revoked.
</pre>
</details>