
---
title: ANSYS / CFX
last_updated: 30 June 2020
keywords: software, ansys, cfx5, cfx, slurm
summary: "This document describes how to run ANSYS/CFX in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-cfx.html
---

This document describes the different ways of running ANSYS/CFX.

## ANSYS/CFX

It is always recommended to check which parameters are available in CFX and to adapt the examples below to your needs. To get a list of options, run `cfx5solve -help`.
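For instance, once the ANSYS module is loaded (see the PModules section below), the solver options can be listed as follows:

```bash
# Load ANSYS from PModules, then print the available cfx5solve options
module use unstable
module load ANSYS/2020R1-1
cfx5solve -help
```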

## Running CFX jobs

### PModules

It is strongly recommended to use the latest ANSYS software, `ANSYS/2020R1-1`, which is available in PModules.

```bash
module use unstable
module load ANSYS/2020R1-1
```

### Non-interactive: sbatch

Running jobs with `sbatch` is always the recommended method, as it makes more efficient use of the resources.

#### Serial example

```bash
#!/bin/bash
#SBATCH --job-name=CFX       # Job Name
#SBATCH --partition=hourly   # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00    # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1    # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file

module use unstable
module load ANSYS/2020R1-1

SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$SOLVER_FILE"
```
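Assuming the script above is saved as, for instance, `cfx_serial.sh` (the filename is arbitrary), it can be submitted and monitored as follows:

```bash
sbatch cfx_serial.sh   # Submit the batch script to Slurm
squeue -u $USER        # Check the status of your jobs in the queue
```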

#### MPI-based example

The following is an example of running CFX in parallel using a Slurm batch script:

```bash
#!/bin/bash
#SBATCH --job-name=CFX       # Job Name
#SBATCH --partition=hourly   # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00    # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1            # Number of nodes
#SBATCH --ntasks=44          # Number of tasks
#SBATCH --cpus-per-task=1    # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1  # Run one task per core
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive         # Uncomment if you want exclusive usage of the nodes

module use unstable
module load ANSYS/2020R1-1

JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in
cfx5solve -batch -def "$JOURNAL_FILE" -part $SLURM_NTASKS
```

In the above example, one can increase the number of nodes and/or tasks if needed. Removing `--nodes` allows Slurm to spread the tasks over multiple nodes, but this may lead to communication overhead. A possible multi-node variant is sketched below.
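As an illustration only (assuming Merlin6 nodes provide 44 cores each, as suggested by the 44 tasks per node used above), the relevant Slurm directives for a two-node run could look like this:

```bash
#SBATCH --nodes=2            # Spread the job over two nodes
#SBATCH --ntasks=88          # 44 tasks per node, assuming 44 cores per node
#SBATCH --ntasks-per-core=1  # Run one task per core
#SBATCH --hint=nomultithread # Disable Hyperthreading
```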