added ANSYS/CFX

2020-06-30 17:27:15 +02:00
parent a035437dc5
commit 1c2baa1506
2 changed files with 80 additions and 4 deletions


@ -0,0 +1,76 @@
---
title: ANSYS / CFX
#tags:
last_updated: 30 June 2020
keywords: software, ansys, cfx5, cfx, slurm
summary: "This document describes how to run ANSYS/CFX in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-cfx.html
---
This document describes the different ways of running **ANSYS/CFX** in the Merlin6 cluster.
## ANSYS/CFX
It is always recommended to check which parameters are available in CFX and to adapt the examples below to your needs.
To do so, run `cfx5solve -help` to get a list of the available options.
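For instance, once the module is loaded (see the PModules section below), one can page through the help output:
```bash
# Page through the full list of cfx5solve options
# (stderr is redirected in case the tool prints its help there)
cfx5solve -help 2>&1 | less
```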
## Running CFX jobs
### PModules
It is strongly recommended to use the latest ANSYS software, **ANSYS/2020R1-1**, which is available in PModules.
```bash
module use unstable
module load ANSYS/2020R1-1
```
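To check which ANSYS releases are currently published, one can first query PModules (a minimal sketch, assuming the standard PModules `module search` subcommand):
```bash
# List the ANSYS releases known to PModules, including unstable ones
module use unstable
module search ANSYS
```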
### Non-interactive: sbatch
Running jobs with `sbatch` is always the recommended method, as it makes more efficient use of the resources.
#### Serial example
```bash
#!/bin/bash
#SBATCH --job-name=CFX # Job Name
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file
module use unstable
module load ANSYS/2020R1-1
SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$SOLVER_FILE"
```
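Assuming the script above is saved as, for example, `cfx_serial.sh` (a hypothetical file name), it can be submitted and monitored as follows:
```bash
sbatch cfx_serial.sh   # Submit the job; Slurm prints the assigned job ID
squeue -u $USER        # Check the state of your pending and running jobs
```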
#### MPI-based example
An example of running CFX using a Slurm batch script is the following:
```bash
#!/bin/bash
#SBATCH --job-name=CFX # Job Name
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=44 # Number of tasks
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Run one task per core
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive # Uncomment if you want exclusive usage of the nodes
module use unstable
module load ANSYS/2020R1-1
JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in
cfx5solve -batch -def "$JOURNAL_FILE" -part $SLURM_NTASKS
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed. Removing
`--nodes` allows the tasks to be spread over multiple nodes, but this may lead to communication overhead.
A sketch of such a variant follows.
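The following is a minimal sketch of a variant without `--nodes` (the core counts are hypothetical, and distributed multi-node runs may require further `cfx5solve` options; check `cfx5solve -help`):
```bash
#!/bin/bash
#SBATCH --job-name=CFX           # Job Name
#SBATCH --partition=hourly       # Using 'hourly' will grant higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00        # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --ntasks=88              # e.g. twice 44 cores; without --nodes, Slurm may spread tasks over several nodes
#SBATCH --ntasks-per-core=1      # Run one task per core
#SBATCH --hint=nomultithread     # Disable Hyperthreading
#SBATCH --error=slurm-%j.err     # Define a file for standard error messages

module use unstable
module load ANSYS/2020R1-1

JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in
cfx5solve -batch -def "$JOURNAL_FILE" -part $SLURM_NTASKS
```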


@ -45,8 +45,7 @@ For running it as a job, one needs to run in non-graphical mode (`-g` option).
 #SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
 #SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
 #SBATCH --hint=nomultithread # Disable Hyperthreading
-#SBATCH --output=%j.out # Define your output file
-#SBATCH --error=%j.err # Define your error file
+#SBATCH --error=slurm-%j.err # Define your error file

 module use unstable
 module load ANSYS/2020R1-1
@ -69,7 +68,7 @@ An example for running Fluent using a Slurm batch script is the following:
 #SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
 #SBATCH --ntasks-per-core=1 # Run one task per core
 #SBATCH --hint=nomultithread # Disable Hyperthreading
-#SBATCH --error=%j.err # Define a file for standard error messages
+#SBATCH --error=slurm-%j.err # Define a file for standard error messages
 ##SBATCH --exclusive # Uncomment if you want exclusive usage of the nodes

 module use unstable
@ -79,7 +78,8 @@ JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
 fluent 3ddp -g -t ${SLURM_NTASKS} -i ${JOURNAL_FILE}
 ```
-In the above example, one can increase the number of *nodes* and/or *ntasks* if needed.
+In the above example, one can increase the number of *nodes* and/or *ntasks* if needed. Removing
+`--nodes` allows the tasks to be spread over multiple nodes, but this may lead to communication overhead.
 ## Interactive: salloc