---
title: ANSYS / CFX
#tags:
last_updated: 30 June 2020
keywords: software, ansys, cfx5, cfx, slurm
summary: "This document describes how to run ANSYS/CFX in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-cfx.html
---
This document describes the different ways to run **ANSYS/CFX** in the Merlin6 cluster.

## ANSYS/CFX

It is always recommended to check which parameters are available in CFX and to adapt the examples below to your needs. Run `cfx5solve -help` to get a list of options.
## Running CFX jobs

### PModules

Using the latest ANSYS software available in PModules, **ANSYS/2020R1-1**, is strongly recommended.

```bash
module use unstable
module load ANSYS/2020R1-1
```
### Non-interactive: sbatch

Running jobs with `sbatch` is always the recommended method, as it makes the use of the resources more efficient. Notice that for running non-interactive CFX jobs one must specify the `-batch` option.

#### Serial example

This example shows a very basic serial job.
```bash
#!/bin/bash
#SBATCH --job-name=CFX        # Job name
#SBATCH --partition=hourly    # Using 'hourly' grants higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00     # Time needed for running the job; must fit within the partition limits
#SBATCH --cpus-per-task=1     # Double it if hyperthreading is enabled
#SBATCH --ntasks-per-core=1   # Double it if hyperthreading is enabled
#SBATCH --hint=nomultithread  # Disable hyperthreading
#SBATCH --error=slurm-%j.err  # Define your error file

module use unstable
module load ANSYS/2020R1-1

SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$SOLVER_FILE"
```
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial.
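As a sketch, assuming hyperthreaded nodes, the changed Slurm directives for the serial example above would look as follows (all other lines of the script stay the same):

```bash
#SBATCH --cpus-per-task=2     # Doubled because hyperthreading is enabled
#SBATCH --ntasks-per-core=2   # Doubled because hyperthreading is enabled
#SBATCH --hint=multithread    # Enable hyperthreading
```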
#### MPI-based example

The following is an example for running CFX in parallel with a Slurm batch script:
```bash
#!/bin/bash
#SBATCH --job-name=CFX        # Job name
#SBATCH --partition=hourly    # Using 'hourly' grants higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00     # Time needed for running the job; must fit within the partition limits
#SBATCH --nodes=1             # Number of nodes
#SBATCH --ntasks=44           # Number of tasks
#SBATCH --cpus-per-task=1     # Double it if hyperthreading is enabled
#SBATCH --ntasks-per-core=1   # Double it if hyperthreading is enabled
#SBATCH --hint=nomultithread  # Disable hyperthreading
#SBATCH --error=slurm-%j.err  # Define a file for standard error messages
##SBATCH --exclusive          # Uncomment for exclusive usage of the nodes

module use unstable
module load ANSYS/2020R1-1

JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in
cfx5solve -batch -def "$JOURNAL_FILE" -part $SLURM_NTASKS
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed, and combine it with `--exclusive` whenever necessary. In general, **no hyperthreading** is recommended for MPI-based jobs.
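For instance, to scale the MPI-based example up to two full nodes with exclusive usage, the relevant header lines would change as follows (a sketch, assuming 44 cores per node as in the example above):

```bash
#SBATCH --nodes=2             # Two full nodes
#SBATCH --ntasks=88           # 44 tasks per node, matching the example above
#SBATCH --exclusive           # Exclusive usage of the nodes
```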