initial formatting changes complete

2026-01-06 16:40:15 +01:00
parent 173f822230
commit 5f759a629a
81 changed files with 806 additions and 1113 deletions


@@ -54,7 +54,7 @@ export ANSYSLI_SERVERS=2325@$LICENSE_SERVER
# [Optional:END]
SOLVER_FILE=/data/user/caubet_m/CFX5/mysolver.in
cfx5solve -batch -def "$JOURNAL_FILE"
```
One can enable hyperthreading by defining `--hint=multithread`,
@@ -99,23 +99,24 @@ if [ "$INTELMPI" == "yes" ]
then
export I_MPI_DEBUG=4
export I_MPI_PIN_CELL=core
# Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
# -part $SLURM_NTASKS \
# -start-method 'Intel MPI Distributed Parallel'
cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
-part $SLURM_NTASKS -par-local -start-method 'Intel MPI Distributed Parallel'
else
# Simple example: cfx5solve -batch -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
# -part $SLURM_NTASKS \
# -start-method 'IBM MPI Distributed Parallel'
cfx5solve -batch -part-large -double -verbose -def "$JOURNAL_FILE" -par-dist "$HOSTLIST" \
-part $SLURM_NTASKS -par-local -start-method 'IBM MPI Distributed Parallel'
fi
```
In the above example, one can increase the number of *nodes* and/or *ntasks* if needed, and combine it
with `--exclusive` when necessary. In general, **no hyperthreading** is recommended for MPI-based jobs.
Finally, one can change the MPI technology in `-start-method`
(check the CFX documentation for possible values).
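As a minimal sketch (node and task counts are purely illustrative), the corresponding `#SBATCH` options for such an MPI job could look like this:
```bash
#SBATCH --nodes=2              # increase the number of nodes if needed
#SBATCH --ntasks=88            # total number of MPI tasks, matching $SLURM_NTASKS above
#SBATCH --hint=nomultithread   # no hyperthreading for MPI based jobs
#SBATCH --exclusive            # optionally request the nodes exclusively
```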


@@ -75,10 +75,10 @@ To setup HFSS RSM for using it with the Merlin cluster, it must be done from the
![RSM_Remote_Scheduler](../../images/ANSYS/HFSS/02_Select_Scheduler_RSM_Remote.png)
* **Select Scheduler**: `Remote RSM`.
* **Server**: Add a Merlin login node.
* **User name**: Add your Merlin username.
* **Password**: Add your Merlin username password.
Once *refreshed*, the **Scheduler info** box must show the **Slurm**
information of the server (see the picture above). If the box contains that
@@ -92,7 +92,7 @@ To setup HFSS RSM for using it with the Merlin cluster, it must be done from the
![Product_Path](../../images/ANSYS/HFSS/05_Submit_Job_Product_Path.png)
* For example, for **ANSYS/2022R1**, the location is `/data/software/pmodules/Tools/ANSYS/2021R1/v211/AnsysEM21.1/Linux64/ansysedt.exe`.
### HFSS Slurm (from login node only)
@@ -118,10 +118,10 @@ Desktop** to submit to Slurm. This can set as follows:
![RSM_Remote_Scheduler](../../images/ANSYS/HFSS/03_Select_Scheduler_Slurm.png)
* **Select Scheduler**: `Slurm`.
* **Server**: must point to `localhost`.
* **User name**: must be empty.
* **Password**: must be empty.
The **Server**, **User name** and **Password** boxes can't be modified, but if
the values do not match the above settings, they should be changed by


@@ -1,6 +1,4 @@
---
title: ANSYS - MAPDL
---
# ANSYS - MAPDL
# ANSYS - Mechanical APDL
@@ -143,12 +141,12 @@ then
# When using -mpi=intelmpi, KMP Affinity must be disabled
export KMP_AFFINITY=disabled
# INTELMPI is not aware of the distribution of tasks.
# - We need to define the task distribution.
HOSTLIST=$(srun hostname | sort | uniq -c | awk '{print $2 ":" $1}' | tr '\n' ':' | sed 's/:$/\n/g')
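# e.g. with 8 tasks on each of two nodes, HOSTLIST becomes "merlin-c-001:8:merlin-c-002:8" (hostnames are illustrative)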
mapdl -b -dis -mpi intelmpi -machines $HOSTLIST -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
else
# IBMMPI (default) will be aware of the distribution of tasks.
# - In principle, there is no need to force the task distribution.
mapdl -b -dis -mpi ibmmpi -np ${SLURM_NTASKS} -i "$SOLVER_FILE"
fi


@@ -56,25 +56,25 @@ The different steps and settings required to make it work are that following:
2. Right-click the **HPC Resources** icon followed by **Add HPC Resource...**
![Adding a new HPC Resource](../../images/ANSYS/rsm-1-add_hpc_resource.png)
3. In the **HPC Resource** tab, fill up the corresponding fields as follows:
![HPC Resource](../../images/ANSYS/rsm-2-add_cluster.png)
* **"Name"**: Add here the preffered name for the cluster. In example: `Merlin6 cluster - merlin-l-001`
* **"HPC Type"**: Select `SLURM`
* **"Submit host"**: Add one of the login nodes. In example `merlin-l-001`.
* **"Slurm Job submission arguments (optional)"**: Add any required Slurm options for running your jobs.
* In general, `--hint=nomultithread` should at least be present (see the example arguments after this list).
* Check **"Use SSH protocol for inter and intra-node communication (Linux only)"**
* Select **"Able to directly submit and monitor HPC jobs"**.
* **"Apply"** changes.
* Check **"Use SSH protocol for inter and intra-node communication (Linux only)"**
* Select **"Able to directly submit and monitor HPC jobs"**.
* **"Apply"** changes.
4. In the **"File Management"** tab, fill up the corresponding fields as follows:
![File Management](../../images/ANSYS/rsm-3-add_scratch_info.png)
* Select **"RSM internal file transfer mechanism"** and add **`/shared-scratch`** as the **"Staging directory path on Cluster"**
* Select **"Scratch directory local to the execution node(s)"** and add **`/scratch`** as the **HPC scratch directory**.
* **Never check** the option "Keep job files in the staging directory when job is complete" if the previous
option "Scratch directory local to the execution node(s)" was set.
* **"Apply"** changes.
* **"Apply"** changes.
5. In the **"Queues"** tab, use the left button to auto-discover partitions
![Queues](../../images/ANSYS/rsm-4-get_slurm_queues.png)
* If no authentication method was configured before, an authentication window will appear. Use your
PSI account to authenticate. Notice that the **`PSICH\`** prefix **must not be added**.
![Authenticating](../../images/ANSYS/rsm-5-authenticating.png)
* From the partition list, select the ones you typically want to use.
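As a reference for step 3, the **"Slurm Job submission arguments (optional)"** field simply takes `sbatch`-style options; a minimal, illustrative example (adapt the options to your own jobs) could be:
```
--hint=nomultithread --time=0-12:00:00
```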


@@ -40,17 +40,17 @@ option. This will show the location of the different ANSYS releases as follows:
Module Rel.stage Group Dependencies/Modulefile
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ANSYS/2019R3 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2019R3
ANSYS/2020R1 stable Tools dependencies:
modulefile: /opt/psi/Tools/modulefiles/ANSYS/2020R1
ANSYS/2020R1-1 stable Tools dependencies:
modulefile: /opt/psi/Tools/modulefiles/ANSYS/2020R1-1
ANSYS/2020R2 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2020R2
ANSYS/2021R1 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2021R1
ANSYS/2021R2 stable Tools dependencies:
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2021R2
```
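Any of the releases listed above can then be loaded in the usual Pmodules way, for example:
```bash
# Load one of the ANSYS releases listed above
module load ANSYS/2021R2
```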
@@ -62,6 +62,7 @@ option. This will show the location of the different ANSYS releases as follows:
### ANSYS RSM
**ANSYS Remote Solve Manager (RSM)** is used by ANSYS Workbench to submit computational jobs to HPC clusters directly from Workbench on your desktop.
Therefore, PSI workstations with direct access to Merlin can submit jobs by using RSM.
For further information, please visit the **[ANSYS RSM](ansys-rsm.md)** section.


@@ -78,7 +78,7 @@ To submit an interactive job, consider the following requirements:
# Example 1: Define GTHTMP before the allocation
export GTHTMP=/scratch
salloc ...
# Example 2: Define GTHTMP after the allocation
salloc ...
export GTHTMP=/scratch
@@ -89,7 +89,7 @@ To submit an interactive job, consider the following requirements:
allocation! For example:
```bash
# Example 1:
export GTHTMP=/scratch/$USER
salloc ...
mkdir -p $GTHTMP
@@ -125,7 +125,7 @@ the [General requirements](#general-requirements) section.
* Requesting a full node:
```bash
salloc --partition=hourly -N 1 -n 1 -c 88 --hint=multithread --x11 --exclusive --mem=0
```
* Requesting 22 CPUs from a node, with default memory per CPU (4000MB/CPU):
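  A sketch of such a request, following the same pattern as the full-node example above (the partition and remaining flags are assumed to be the same, and memory is left at the 4000MB/CPU default):
```bash
salloc --partition=hourly -N 1 -n 1 -c 22 --hint=multithread --x11
```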
@@ -177,16 +177,16 @@ requirements](#general-requirements) section, and:
#SBATCH --exclusive
#SBATCH --mem=0
#SBATCH --clusters=merlin6
INPUT_FILE='MY_INPUT.SIN'
mkdir -p /scratch/$USER/$SLURM_JOB_ID
export GTHTMP=/scratch/$USER/$SLURM_JOB_ID
/data/project/general/software/gothic/gothic8.3qa/bin/gothic_s.sh $INPUT_FILE -m -np $SLURM_CPUS_PER_TASK
gth_exit_code=$?
# Clean up data in /scratch
rm -rf /scratch/$USER/$SLURM_JOB_ID
# Return exit code from GOTHIC
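exit $gth_exit_code  # assumed final step: propagate GOTHIC's exit code to Slurm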
@@ -205,16 +205,16 @@ requirements](#general-requirements) section, and:
#SBATCH --cpus-per-task=22
#SBATCH --hint=multithread
#SBATCH --clusters=merlin6
INPUT_FILE='MY_INPUT.SIN'
mkdir -p /scratch/$USER/$SLURM_JOB_ID
export GTHTMP=/scratch/$USER/$SLURM_JOB_ID
/data/project/general/software/gothic/gothic8.3qa/bin/gothic_s.sh $INPUT_FILE -m -np $SLURM_CPUS_PER_TASK
gth_exit_code=$?
# Clean up data in /scratch
rm -rf /scratch/$USER/$SLURM_JOB_ID
# Return exit code from GOTHIC
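exit $gth_exit_code  # assumed final step: propagate GOTHIC's exit code to Slurm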