first stab at mkdocs migration

- refactor CSCS and Meg content
- add merlin6 quick start
- update merlin6 nomachine docs
- give the userdoc its own color scheme; we use the Materials default one
- refactored slurm general docs merlin6
- add merlin6 JB docs
- add software support m6 docs
- add all files to nav
- vibed changes #1
- add missing pages
- further vibing #2
- vibe #3
- further fixes
docs/merlin6/software-support/ansys-cfx.md

# ANSYS - CFX

It is always recommended to check which parameters are available in CFX and to adapt the examples below to your needs.
To do so, run `cfx5solve -help` to get a list of options.

## Running CFX jobs

### PModules

It is strongly recommended to use the latest ANSYS software available in PModules.

```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
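
To pick the latest release, one can first query which ANSYS versions PModules currently provides. A minimal sketch, assuming the Pmodules `module search` sub-command is available on the login nodes:

```bash
# List the ANSYS releases visible through PModules (including the Merlin overlay)
module use unstable
module use overlay_merlin
module search ANSYS
```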

### Interactive: RSM from remote PSI Workstations

It is possible to run CFX through RSM from a remote PSI workstation (Linux or Windows)
with a local installation of ANSYS CFX and the RSM client. Please refer to
**[ANSYS RSM](ansys-rsm.md)** in the Merlin documentation for further information
on how to set up an RSM client for submitting jobs to Merlin.

### Non-interactive: sbatch

Running jobs with `sbatch` is always the recommended method, as it makes more
efficient use of the resources. Notice that for running non-interactive CFX jobs
one must specify the `-batch` option.
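
For example, once one of the batch scripts below is saved to a file (the name `cfx_job.sh` is just a placeholder), it can be submitted and monitored with the usual Slurm commands:

```bash
sbatch cfx_job.sh      # Submit the job; Slurm prints the assigned job ID
squeue -u $USER        # Check the state of your pending and running jobs
```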

#### Serial example

This example shows a very basic serial job.

```bash
#!/bin/bash
#SBATCH --job-name=CFX          # Job Name
#SBATCH --partition=hourly      # 'hourly' is granted higher priority than 'daily' and 'general'
#SBATCH --time=0-01:00:00       # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1       # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1     # Double if hyperthreading enabled
#SBATCH --hint=nomultithread    # Disable Hyperthreading
#SBATCH --error=slurm-%j.err    # Define your error file

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

# CFX solver definition file to run
JOURNAL_FILE=/data/user/caubet_m/CFX5/mysolver.in

cfx5solve -batch -def "$JOURNAL_FILE"
```

One can enable hyperthreading by defining `--hint=multithread`,
`--cpus-per-task=2` and `--ntasks-per-core=2`. However, this is in general not
recommended, unless one can ensure that it is beneficial.
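
For reference, the corresponding header lines with hyperthreading enabled would look as follows (a sketch based on the options mentioned above; whether it pays off must be verified case by case):

```bash
#SBATCH --cpus-per-task=2       # Doubled because hyperthreading is enabled
#SBATCH --ntasks-per-core=2     # Doubled because hyperthreading is enabled
#SBATCH --hint=multithread      # Enable Hyperthreading
```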

#### MPI-based example

An example for running CFX using a Slurm batch script is the following:

```bash
#!/bin/bash
#SBATCH --job-name=CFX          # Job Name
#SBATCH --partition=hourly      # 'hourly' is granted higher priority than 'daily' and 'general'
#SBATCH --time=0-01:00:00       # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1               # Number of nodes
#SBATCH --ntasks=44             # Number of tasks
#SBATCH --cpus-per-task=1       # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1     # Double if hyperthreading enabled
#SBATCH --hint=nomultithread    # Disable Hyperthreading
#SBATCH --error=slurm-%j.err    # Define a file for standard error messages
##SBATCH --exclusive            # Uncomment if you want exclusive usage of the nodes

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

# Comma-separated list of the hosts allocated to this job, passed to '-par-dist'
export HOSTLIST=$(scontrol show hostname | tr '\n' ',' | sed 's/,$//g')

JOURNAL_FILE=myjournal.in

# INTELMPI=no  for IBM MPI
# INTELMPI=yes for Intel MPI
INTELMPI=no

if [ "$INTELMPI" == "yes" ]
then
    export I_MPI_DEBUG=4
    export I_MPI_PIN_CELL=core

    # Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
    #                   -part $SLURM_NTASKS \
    #                   -start-method 'Intel MPI Distributed Parallel'
    cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
              -part $SLURM_NTASKS -par-local -start-method 'Intel MPI Distributed Parallel'
else
    # Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
    #                   -part $SLURM_NTASKS \
    #                   -start-method 'IBM MPI Distributed Parallel'
    cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
              -part $SLURM_NTASKS -par-local -start-method 'IBM MPI Distributed Parallel'
fi
```

In the above example, one can increase the number of *nodes* and/or *ntasks* if needed,
and combine it with `--exclusive` whenever necessary. In general, **no hyperthreading**
is recommended for MPI based jobs. Finally, one can change the MPI technology in
`-start-method` (check the CFX documentation for possible values).
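
For instance, a larger exclusive run over two nodes could use header lines along these lines (a sketch only, assuming 44 tasks per node as in the example above):

```bash
#SBATCH --nodes=2               # Number of nodes
#SBATCH --ntasks=88             # 44 tasks per node in this sketch
#SBATCH --exclusive             # Request exclusive usage of the nodes
```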

## CFX5 Launcher: CFD-Pre/Post, Solve Manager, TurboGrid

Some users might need to visualize or change some parameters when running calculations
with the CFX Solver. To run **TurboGrid**, **CFX-Pre**, **CFX-Solver Manager** or
**CFD-Post**, one should use the **`cfx5` launcher** binary:

```bash
cfx5
```

|
||||
/// caption
|
||||
///
|
||||
|
||||

Then, from the launcher, one can open the proper application (e.g. **CFX-Solver
Manager** for visualizing and modifying an existing job run).

Running the CFX5 launcher requires proper SSH + X11 forwarding access (`ssh -XY`)
or, preferably, **NoMachine**. If **ssh** does not work for you, please use
**NoMachine** instead, which is the supported (and simpler) X-based access method.
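
As an illustration, an X11-forwarded session from a Linux workstation could look as follows (the login node name is a placeholder; use the one that applies to you):

```bash
# Open an SSH session with X11 forwarding enabled
ssh -XY $USER@<merlin_login_node>

# On the login node, load the ANSYS module as shown above, then start the launcher
cfx5
```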