first stab at mkdocs migration
refactor CSCS and Meg content
add merlin6 quick start
update merlin6 nomachine docs
give the userdoc its own color scheme (we use the Materials default one)
refactored slurm general docs merlin6
add merlin6 JB docs
add software support m6 docs
add all files to nav
vibed changes #1
add missing pages
further vibing #2
vibe #3
further fixes
docs/merlin6/software-support/ansys-mapdl.md (new file)

@@ -0,0 +1,160 @@
---
title: ANSYS - MAPDL
---

# ANSYS - Mechanical APDL

It is always recommended to check which parameters are available in Mechanical
APDL and to adapt the examples below to your needs. For that, please refer to
the official Mechanical APDL documentation.

## Running Mechanical APDL jobs

### PModules

Using the latest ANSYS software available in PModules is strongly recommended.

```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
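
Other releases may be published over time. To list what is currently
available, one can search the Pmodules catalog (a sketch; please check the
PModules documentation for the exact syntax):

```bash
module use unstable
module use overlay_merlin
module search ANSYS
```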

### Interactive: RSM from remote PSI Workstations

It is possible to run Mechanical through RSM from a remote PSI workstation
(Linux or Windows) with a local installation of ANSYS Mechanical and the RSM
client. Please refer to ***[ANSYS RSM](ansys-rsm.md)*** in the Merlin
documentation for further information on how to set up an RSM client for
submitting jobs to Merlin.

### Non-interactive: sbatch

Running jobs with `sbatch` is always the recommended method, as it makes the
use of resources more efficient. Notice that for running non-interactive
Mechanical APDL jobs one must specify the `-b` (batch) option.

#### Serial example

This example shows a very basic serial job.

```bash
#!/bin/bash
#SBATCH --job-name=MAPDL        # Job Name
#SBATCH --partition=hourly      # 'hourly' grants a higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00       # Time needed for running the job. Must match the 'partition' limits.
#SBATCH --cpus-per-task=1       # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1     # Double if hyperthreading enabled
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --error=slurm-%j.err    # Define your error file

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -i "$SOLVER_FILE"
```
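
Assuming the script above is saved as `mapdl_serial.sh` (a name chosen here
for illustration), it can be submitted and monitored as follows:

```bash
sbatch mapdl_serial.sh
squeue -u $USER    # Check the status of your jobs
```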

One can enable hyperthreading by defining `--hint=multithread`,
`--cpus-per-task=2` and `--ntasks-per-core=2`. However, this is in general not
recommended, unless one can ensure that it is beneficial.
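
For reference, only these directives of the header above would change (a
sketch; whether it pays off depends on the solver and the model):

```bash
#SBATCH --hint=multithread      # Enable hyperthreading
#SBATCH --cpus-per-task=2       # Doubled: 2 hardware threads per core
#SBATCH --ntasks-per-core=2     # Doubled: 2 tasks per core
```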

#### SMP-based example

This example shows how to run Mechanical APDL in Shared-Memory Parallelism
(SMP) mode. This mode is limited to a single node, but can use many of its
cores. In the example below, we use a full node with all of its cores and its
whole memory.

```bash
#!/bin/bash
#SBATCH --job-name=MAPDL        # Job Name
#SBATCH --partition=hourly      # 'hourly' grants a higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00       # Time needed for running the job. Must match the 'partition' limits.
#SBATCH --nodes=1               # Number of nodes
#SBATCH --ntasks=1              # Number of tasks
#SBATCH --cpus-per-task=44      # Double if hyperthreading enabled
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --error=slurm-%j.err    # Define a file for standard error messages
#SBATCH --exclusive             # Exclusive usage of the node (we use its whole memory)

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -np ${SLURM_CPUS_PER_TASK} -i "$SOLVER_FILE"
```

In the above example, one can reduce the number of **cpus per task**. Using
`--exclusive` is usually recommended when one needs the whole memory of the
node.

For **SMP** runs, one might try hyperthreading by doubling the corresponding
setting (`--cpus-per-task`); in some cases it might be beneficial. A variant
is sketched below.

Please notice that `--ntasks-per-core=1` is not defined here: we want to run
1 task on many cores! As an alternative, one can explore `--ntasks-per-socket`
or `--ntasks-per-node` for fine-grained configurations.
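
A minimal sketch of the hyperthreaded SMP variant, assuming the 44-core nodes
used in these examples (88 hardware threads per node):

```bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=88      # 44 physical cores x 2 hardware threads
#SBATCH --hint=multithread      # Enable hyperthreading
```

The `mapdl -b -np ${SLURM_CPUS_PER_TASK}` call then picks up the doubled
value automatically.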

#### MPI-based example

This example enables Distributed ANSYS for running Mechanical APDL using a
Slurm batch script.

```bash
#!/bin/bash
#SBATCH --job-name=MAPDL        # Job Name
#SBATCH --partition=hourly      # 'hourly' grants a higher priority than 'daily' or 'general'
#SBATCH --time=0-01:00:00       # Time needed for running the job. Must match the 'partition' limits.
#SBATCH --nodes=1               # Number of nodes
#SBATCH --ntasks=44             # Number of tasks
#SBATCH --cpus-per-task=1       # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1     # Run one task per core
#SBATCH --hint=nomultithread    # Disable hyperthreading
#SBATCH --error=slurm-%j.err    # Define a file for standard error messages
##SBATCH --exclusive            # Uncomment if you want exclusive usage of the nodes

module use unstable
module load ANSYS/2020R1-1

# [Optional:BEGIN] Specify your license server if this is not 'lic-ansys.psi.ch'
LICENSE_SERVER=<your_license_server>
export ANSYSLMD_LICENSE_FILE=1055@$LICENSE_SERVER
export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]

SOLVER_FILE=input.dat

# INTELMPI=no  for IBM MPI
# INTELMPI=yes for Intel MPI
INTELMPI=no

if [ "$INTELMPI" == "yes" ]
then
# When using -mpi=intelmpi, KMP affinity must be disabled
export KMP_AFFINITY=disabled

# Intel MPI is not aware of the Slurm task distribution.
# - We need to define the task distribution ourselves.
HOSTLIST=$(srun hostname | sort | uniq -c | awk '{print $2 ":" $1}' | tr '\n' ':' | sed 's/:$/\n/g')
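# Resulting format: host1:ntasks1:host2:ntasks2:... (as expected by '-machines')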
mapdl -b -dis -mpi intelmpi -machines $HOSTLIST -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
else
# IBM MPI (the default) is aware of the Slurm task distribution.
# - In principle, there is no need to force the task distribution.
mapdl -b -dis -mpi ibmmpi -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
fi
```

In the above example, one can increase the number of *nodes* and/or *ntasks*
if needed, and combine this with `--exclusive` when necessary. In general,
**no hyperthreading** is recommended for MPI-based jobs.
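
For instance, a two-node variant (an illustration assuming the 44-core nodes
above, not a prescription from the ANSYS documentation) would only change
these directives:

```bash
#SBATCH --nodes=2               # Number of nodes
#SBATCH --ntasks=88             # 44 tasks per node
#SBATCH --exclusive             # Exclusive usage of both nodes
```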