---
title: ANSYS / CFX
keywords: software, ansys, cfx5, cfx, slurm, interactive, rsm, batch job
last_updated: 07 September 2022
summary: "This document describes how to run ANSYS/CFX in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-cfx.html
---

This document describes the different ways of running ANSYS/CFX in the Merlin6 cluster.

## ANSYS/CFX

It is always recommended to check which parameters are available in CFX and to adapt the examples below to your needs. To get a list of options, run cfx5solve -help.
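As a minimal sketch, assuming the ANSYS module has already been loaded as described in the PModules section below, the available solver options can be listed as follows:

```bash
# Show the full list of cfx5solve options (parallel setup, precision, restart, etc.)
cfx5solve -help
```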

## Running CFX jobs

### PModules

It is strongly recommended to use the latest ANSYS software available in PModules.

module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
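If unsure which releases are installed, the available ANSYS versions can usually be listed before loading one. A short sketch, assuming the Pmodules search subcommand is available in your environment:

```bash
module use unstable
# List the ANSYS releases provided by Pmodules (including the Merlin overlay)
module search ANSYS
```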

### Interactive: RSM from remote PSI Workstations

It is possible to run CFX through RSM from a remote PSI workstation (Linux or Windows) with a local installation of ANSYS CFX and the RSM client. Please refer to the [ANSYS RSM](/merlin6/ansys-rsm.html) page in the Merlin documentation for further information on how to set up an RSM client for submitting jobs to Merlin.

### Non-interactive: sbatch

Running jobs with sbatch is always the recommended method, as it makes the use of the resources more efficient. Notice that for running non-interactive CFX jobs one must specify the -batch option.
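Once a batch script is ready (see the examples below), it can be submitted and monitored with the standard Slurm commands. A minimal sketch, where cfx_run.sh is a hypothetical script name:

```bash
# Submit the batch script to Slurm
sbatch cfx_run.sh

# Check the status of your jobs in the queue
squeue -u $USER
```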

#### Serial example

This example shows a very basic serial job.

#!/bin/bash
#SBATCH --job-name=CFX       # Job Name
#SBATCH --partition=hourly   # 'hourly' grants higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00    # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1    # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1  # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$SOLVER_FILE"

One can enable hyperthreading by defining --hint=multithread, --cpus-per-task=2 and --ntasks-per-core=2. However, this is in general not recommended, unless one can verify that it is beneficial.
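For reference, a sketch of how the relevant Slurm options from the example above would change with hyperthreading enabled (the remaining options stay the same):

```bash
#SBATCH --cpus-per-task=2    # Two logical CPUs (hardware threads) per task
#SBATCH --ntasks-per-core=2  # Allow two tasks per physical core
#SBATCH --hint=multithread   # Enable hyperthreading
```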

#### MPI-based example

An example Slurm batch script for running CFX with MPI is the following:

#!/bin/bash
#SBATCH --job-name=CFX       # Job Name
#SBATCH --partition=hourly   # 'hourly' grants higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00    # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1            # Number of nodes
#SBATCH --ntasks=44          # Number of tasks
#SBATCH --cpus-per-task=1    # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1  # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive         # Uncomment if you want exclusive usage of the nodes

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

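# Build a comma-separated list of the hosts allocated to this job,
# in the format expected by the cfx5solve '-par-dist' option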
export HOSTLIST=$(scontrol show hostname | tr '\n' ',' | sed 's/,$//g')

JOURNAL_FILE=myjournal.in

# INTELMPI=no  for IBM MPI
# INTELMPI=yes for INTEL MPI
INTELMPI=no

if [ "$INTELMPI" == "yes" ]
then
  export I_MPI_DEBUG=4
  export I_MPI_PIN_CELL=core
  
  # Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
  #                           -part $SLURM_NTASKS \
  #                           -start-method 'Intel MPI Distributed Parallel'
  cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
            -part $SLURM_NTASKS -par-local -start-method 'Intel MPI Distributed Parallel'
else
  # Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
  #                           -part $SLURM_NTASKS \
  #                           -start-method 'IBM MPI Distributed Parallel'
  cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
            -part $SLURM_NTASKS -par-local -start-method 'IBM MPI Distributed Parallel'
fi

In the above example, one can increase the number of nodes and/or tasks if needed, and combine this with --exclusive when necessary. In general, hyperthreading is not recommended for MPI-based jobs. Finally, one can change the MPI implementation in -start-method (check the CFX documentation for possible values).
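For illustration, a sketch of how the header of the example above could be adapted for a larger run on two exclusive nodes (the node and task counts are assumptions; adjust them to your case and to the available hardware):

```bash
#SBATCH --nodes=2            # Two full nodes
#SBATCH --ntasks=88          # Total number of MPI tasks across both nodes
#SBATCH --exclusive          # Do not share the nodes with other jobs
```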

## CFX5 Launcher: CFD-Pre/Post, Solve Manager, TurboGrid

Some users might need to visualize or change some parameters when running calculations with the CFX Solver. For running TurboGrid, CFX-Pre, CFX-Solver Manager or CFD-Post, one should use the cfx5 launcher binary:

cfx5

![CFX5 Launcher Example]({{ "/images/ANSYS/cfx5launcher.png" }})

Then, from the launcher, one can open the appropriate application (e.g. CFX-Solver Manager for visualizing and modifying an existing job run).

Running the CFX5 launcher requires proper SSH access with X11 forwarding (ssh -XY) or, preferably, NoMachine. If ssh does not work for you, please use NoMachine instead (which is the supported X-based access method, and simpler).
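A minimal sketch of starting the launcher over SSH with X11 forwarding; the login node name merlin-l-001.psi.ch is an assumption, so replace it with the login node you normally use (or simply open a terminal in a NoMachine session):

```bash
# Log in with X11 forwarding enabled
ssh -XY $USER@merlin-l-001.psi.ch

# Load the ANSYS module (as in the PModules section above) and start the launcher
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
cfx5
```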