added ANSYS/MAPDL

This commit is contained in:
caubet_m 2020-07-01 12:08:13 +02:00
parent 88e14b1fdd
commit e527fbebb5
4 changed files with 143 additions and 7 deletions


@@ -75,10 +75,12 @@ entries:
url: /merlin6/openmpi.html
- title: IntelMPI
url: /merlin6/impi.html
- title: ANSYS/Fluent
url: /merlin6/ansys-fluent.html
- title: ANSYS/CFX
url: /merlin6/ansys-cfx.html
- title: ANSYS/Fluent
url: /merlin6/ansys-fluent.html
- title: ANSYS/MAPDL
url: /merlin6/ansys-mapdl.html
- title: Announcements
folderitems:
- title: Downtimes


@@ -28,16 +28,20 @@ module load ANSYS/2020R1-1
### Non-interactive: sbatch
Running jobs with `sbatch` is always the recommended method. This makes the use of the resources more efficient. Notice that for
running non-interactive CFX jobs one must specify the `-batch` option.
#### Serial example
This example shows a very basic serial job.
```bash
#!/bin/bash
#SBATCH --job-name=CFX # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file
@@ -48,6 +52,9 @@ SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$SOLVER_FILE"
```
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial.
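For reference, a minimal sketch of the serial job above with hyperthreading enabled; this is only worth trying if you can verify a benefit, and the solver input path is the same placeholder used in the example:

```bash
#!/bin/bash
#SBATCH --job-name=CFX # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=2 # Doubled because hyperthreading is enabled
#SBATCH --ntasks-per-core=2 # Doubled because hyperthreading is enabled
#SBATCH --hint=multithread # Enable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file

module use unstable
module load ANSYS/2020R1-1

SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$SOLVER_FILE"
```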
#### MPI-based example
An example for running CFX using a Slurm batch script is the following:
@@ -60,7 +67,7 @@ An example for running CFX using a Slurm batch script is the following:
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=44 # Number of tasks
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive # Uncomment if you want exclusive usage of the nodes
@@ -72,5 +79,6 @@ JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in
cfx5solve -batch -def "$JOURNAL_FILE" -part $SLURM_NTASKS
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed, and combine this
with `--exclusive` whenever necessary. In general, **no hyperthreading** is recommended for MPI-based jobs.
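As an illustrative sketch only, the same MPI job spread over two nodes; the value 88 assumes the 44-core nodes used throughout these examples and should be adapted to the actual hardware:

```bash
#!/bin/bash
#SBATCH --job-name=CFX # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=2 # Two nodes instead of one
#SBATCH --ntasks=88 # 2 x 44 tasks, assuming 44 cores per node
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
#SBATCH --exclusive # Exclusive usage of the nodes

module use unstable
module load ANSYS/2020R1-1

JOURNAL_FILE=/data/user/caubet_m/CFX/myjournal.in
cfx5solve -batch -def "$JOURNAL_FILE" -part $SLURM_NTASKS
```

Submit it with `sbatch` as usual; keep in mind that spreading the run over more nodes increases the MPI communication overhead, so scaling should be checked for the specific model.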


@@ -38,6 +38,8 @@ For running it as a job, one needs to run in no graphical mode (`-g` option).
#### Serial example
This example shows a very basic serial job.
```bash
#!/bin/bash
#SBATCH --job-name=Fluent # Job Name
@@ -54,6 +56,9 @@ JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
fluent 3ddp -g -i ${JOURNAL_FILE}
```
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial.
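A minimal sketch of the hyperthreaded variant for Fluent, assuming the same partition and time limits as the other examples in this commit (again, only worth trying if you can verify a benefit):

```bash
#!/bin/bash
#SBATCH --job-name=Fluent # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=2 # Doubled because hyperthreading is enabled
#SBATCH --ntasks-per-core=2 # Doubled because hyperthreading is enabled
#SBATCH --hint=multithread # Enable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file

module use unstable
module load ANSYS/2020R1-1

JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
fluent 3ddp -g -i ${JOURNAL_FILE}
```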
#### MPI-based example
An example for running Fluent using a Slurm batch script is the following:
@@ -79,7 +84,9 @@ fluent 3ddp -g -t ${SLURM_NTASKS} -i ${JOURNAL_FILE}
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed. One can remove
`--nodes` to run on multiple nodes, but this may lead to communication overhead. In general, **no
hyperthreading** is recommended for MPI-based jobs. Also, one can combine it with `--exclusive` when necessary.
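As a sketch of that multi-node case, the same script without `--nodes` and with more tasks; the value 88 assumes 44-core nodes, and Slurm may then place the tasks on several nodes:

```bash
#!/bin/bash
#SBATCH --job-name=Fluent # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --ntasks=88 # No --nodes: Slurm may spread the 88 tasks over several nodes
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages

module use unstable
module load ANSYS/2020R1-1

JOURNAL_FILE=/data/user/caubet_m/Fluent/myjournal.in
fluent 3ddp -g -t ${SLURM_NTASKS} -i ${JOURNAL_FILE}
```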
## Interactive: salloc


@@ -0,0 +1,119 @@
---
title: ANSYS / MAPDL
#tags:
last_updated: 30 June 2020
keywords: software, ansys, mapdl, slurm, apdl
summary: "This document describes how to run ANSYS/Mechanical APDL in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-mapdl.html
---
This document describes the different ways of running **ANSYS/Mechanical APDL** in the Merlin6 cluster.
## ANSYS/Mechanical APDL
It is always recommended to check which parameters are available in Mechanical APDL and to adapt the examples below to your needs.
For that, please refer to the official Mechanical APDL documentation.
## Running Mechanical APDL jobs
### PModules
The use of the latest ANSYS software available in PModules, **ANSYS/2020R1-1**, is strongly recommended.
```bash
module use unstable
module load ANSYS/2020R1-1
```
### Non-interactive: sbatch
Running jobs with `sbatch` is always the recommended method. This makes the use of the resources more efficient. Notice that for
running non-interactive Mechanical APDL jobs one must specify the `-b` option.
#### Serial example
This example shows a very basic serial job.
```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file
module use unstable
module load ANSYS/2020R1-1
SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -i "$SOLVER_FILE"
```
One can enable hyperthreading by defining `--hint=multithread`, `--cpus-per-task=2` and `--ntasks-per-core=2`.
However, this is in general not recommended, unless one can ensure that it is beneficial.
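For reference, a minimal sketch of the serial MAPDL job above with hyperthreading enabled (generally not recommended unless verified to help):

```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --cpus-per-task=2 # Doubled because hyperthreading is enabled
#SBATCH --ntasks-per-core=2 # Doubled because hyperthreading is enabled
#SBATCH --hint=multithread # Enable Hyperthreading
#SBATCH --error=slurm-%j.err # Define your error file

module use unstable
module load ANSYS/2020R1-1

SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -i "$SOLVER_FILE"
```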
#### SMP-based example
This example shows how to run Mechanical APDL in Shared-Memory Parallelism (SMP) mode. This limits the job
to a single node, but it can use many cores. In the example below, we use one full node with all of its cores
and the whole memory.
```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=1 # Number of tasks
#SBATCH --cpus-per-task=44 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
#SBATCH --exclusive # Exclusive usage of the node (recommended when using the whole memory)
module use unstable
module load ANSYS/2020R1-1
SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -np ${SLURM_CPUS_PER_TASK} -i "$SOLVER_FILE"
```
In the above example, one can reduce the number of **cpus per task**. Here `--exclusive` is usually
recommended if one needs to use the whole memory.
For **SMP** runs, one might try hyperthreading by doubling the corresponding setting
(`--cpus-per-task`); in some cases it might be beneficial.
Please notice that `--ntasks-per-core=1` is not defined here, because we want to run a single
task on many cores. As an alternative, one can explore `--ntasks-per-socket` or `--ntasks-per-node`
for fine-grained configurations, as in the sketch below.
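As an illustrative sketch of such a fine-grained setup, the SMP job expressed with `--ntasks-per-node` instead of an explicit `--nodes` value; the 44 cores per node are an assumption taken from the examples above:

```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --ntasks=1 # One task ...
#SBATCH --ntasks-per-node=1 # ... placed alone on its node
#SBATCH --cpus-per-task=44 # All cores of the node; double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --exclusive # Use the whole node and its memory
#SBATCH --error=slurm-%j.err # Define a file for standard error messages

module use unstable
module load ANSYS/2020R1-1

SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -np ${SLURM_CPUS_PER_TASK} -i "$SOLVER_FILE"
```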
#### MPI-based example
This example enables Distributed ANSYS for running Mechanical APDL using a Slurm batch script.
```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks=44 # Number of tasks
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Run one task per core
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --error=slurm-%j.err # Define a file for standard error messages
##SBATCH --exclusive # Uncomment if you want exclusive usage of the nodes
module use unstable
module load ANSYS/2020R1-1
SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -dis -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed and combine it
with `--exclusive` when necessary. In general, **no hyperthreading** is recommended for MPI-based jobs.
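As a sketch only, a two-node Distributed ANSYS variant of the script above; the 88 tasks assume 44-core nodes and should be adapted to the actual allocation:

```bash
#!/bin/bash
#SBATCH --job-name=MAPDL # Job Name
#SBATCH --partition=hourly # Using 'daily' will grant higher priority than 'general'
#SBATCH --time=0-01:00:00 # Time needed for running the job. Must match with 'partition' limits.
#SBATCH --nodes=2 # Two nodes instead of one
#SBATCH --ntasks=88 # 2 x 44 tasks, assuming 44 cores per node
#SBATCH --cpus-per-task=1 # Double if hyperthreading enabled
#SBATCH --ntasks-per-core=1 # Double if hyperthreading enabled
#SBATCH --hint=nomultithread # Disable Hyperthreading
#SBATCH --exclusive # Exclusive usage of the nodes
#SBATCH --error=slurm-%j.err # Define a file for standard error messages

module use unstable
module load ANSYS/2020R1-1

SOLVER_FILE=/data/user/caubet_m/MAPDL/mysolver.in
mapdl -b -dis -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
```

As with the single-node case, check the scaling of your own model before committing to more nodes, since Distributed ANSYS adds MPI communication overhead.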