From 980466df9707757a1f711c01a5572031d8e73ef6 Mon Sep 17 00:00:00 2001
From: caubet_m
Date: Tue, 28 Jul 2020 16:42:33 +0200
Subject: [PATCH] CFX update

---
 pages/merlin6/05 Software Support/ansys-cfx.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/pages/merlin6/05 Software Support/ansys-cfx.md b/pages/merlin6/05 Software Support/ansys-cfx.md
index a1272f3..d4cec77 100644
--- a/pages/merlin6/05 Software Support/ansys-cfx.md
+++ b/pages/merlin6/05 Software Support/ansys-cfx.md
@@ -75,10 +75,13 @@ An example for running CFX using a Slurm batch script is the following:
 module use unstable
 module load ANSYS/2020R1-1
 
+export HOSTLIST=$(scontrol show hostname | tr '\n' ',' | sed 's/,$//g')
+
 JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in
 
-cfx5solve -batch -def "$JOURNAL_FILE" -part $SLURM_NTASKS
+cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" -part $SLURM_NTASKS -start-method 'IBM MPI Distributed Parallel'
 ```
 
 In the above example, one can increase the number of *nodes* and/or *ntasks* if needed and combine it with `--exclusive` whenever needed. In general, **no hypertheading** is recommended for MPI based jobs.
-Also, one can combine it with `--exclusive` when necessary.
+Also, one can combine it with `--exclusive` when necessary. Finally, one can change the MPI technology in `-start-method`
+(check CFX documentation for possible values).
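
For reference, a full Slurm batch script built from the snippet changed by this patch might look as follows. This is a minimal sketch, not part of the patch itself: the job name, node and task counts, and the journal file path are placeholders to adapt to the actual job, and `--hint=nomultithread` is shown only as one possible way to avoid hyperthreading alongside the optional `--exclusive` flag.

```bash
#!/bin/bash
#SBATCH --job-name=cfx_example     # placeholder job name
#SBATCH --nodes=2                  # example values; increase nodes/ntasks as needed
#SBATCH --ntasks=88
#SBATCH --exclusive                # optional: reserve full nodes
#SBATCH --hint=nomultithread       # no hyperthreading for MPI based jobs

# Load ANSYS as in the documented example
module use unstable
module load ANSYS/2020R1-1

# Build a comma-separated host list from the Slurm allocation
export HOSTLIST=$(scontrol show hostname | tr '\n' ',' | sed 's/,$//g')

JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in

# Distributed parallel run; '-start-method' can be changed to another
# MPI technology (check the CFX documentation for possible values)
cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
          -part $SLURM_NTASKS -start-method 'IBM MPI Distributed Parallel'
```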