first stab at mkdocs migration

This commit is contained in:
2025-11-26 17:28:07 +01:00
parent 149de6fb18
commit 1d9c01572d
282 changed files with 200 additions and 8940 deletions

View File

@@ -0,0 +1,86 @@
---
title: ANSYS RSM (Remote Solve Manager)
#tags:
keywords: software, ansys, rsm, slurm, interactive, windows
last_updated: 23 August 2024
summary: "This document describes how to use the ANSYS Remote Solve Manager service in the Merlin7 cluster"
sidebar: merlin7_sidebar
permalink: /merlin7/ansys-rsm.html
---
## ANSYS Remote Solve Manager
**ANSYS Remote Solve Manager (RSM)** is used by ANSYS Workbench to submit computational jobs to HPC clusters directly from Workbench on your desktop.
{{site.data.alerts.warning}} Merlin7 runs behind a firewall; however, firewall policies are in place to allow access to the Merlin7 ANSYS RSM service from the main PSI networks. If you cannot connect to it, please contact us and provide the IP address of the corresponding workstation: we will check the PSI firewall rules in place and request an update if necessary.
{{site.data.alerts.end}}
### The Merlin7 RSM service
An RSM service is running on a dedicated virtual machine. This service listens on a specific port and processes any RSM request (for example, from the workstations of ANSYS users).
The following nodes are configured with such services:
* `service03.merlin7.psi.ch`
The earliest version supported in the Merlin7 cluster is ANSYS/2022R2. Older versions are not supported due to existing bugs or missing functionality. If you strongly need to run an older version, please do not hesitate to contact the Merlin admins.
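If you want to verify basic network reachability of the RSM service before configuring the client, a quick check from a Linux host can help. This is only a sketch: the port below is the usual ANSYS RSM launcher default (9195) and is an assumption; the actual port used on Merlin7 may differ.

```bash
# Check TCP reachability of the RSM service host (port 9195 is an assumption)
nc -vz service03.merlin7.psi.ch 9195
```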
## Configuring RSM client on Windows workstations
Users can set up ANSYS RSM on their workstations to connect to the Merlin7 cluster.
The steps and settings required to make it work are the following:
1. Open the RSM Configuration service in Windows for the ANSYS release you want to configure.
2. Right-click the **HPC Resources** icon and select **Add HPC Resource...**
![Adding a new HPC Resource]({{ "/images/ANSYS/merlin7/rsm-1-add_hpc_resource.png" }})
3. In the **HPC Resource** tab, fill in the corresponding fields as follows:
![HPC Resource]({{"/images/ANSYS/merlin7/rsm-2-add_cluster.png"}})
* **"Name"**: Add here the preffered name for the cluster. For example: `Merlin7 cluster`
* **"HPC Type"**: Select `SLURM`
* **"Submit host"**: `service03.merlin7.psi.ch`
* **"Slurm Job submission arguments (optional)"**: Add any required Slurm options for running your jobs.
* `--hint=nomultithread` must be present.
* `--exclusive` must also be present for now, due to a bug in the `Slingshot` interconnect which does not allow running shared nodes.
* Check **"Use SSH protocol for inter and intra-node communication (Linux only)"**
* Select **"Able to directly submit and monitor HPC jobs"**.
* **"Apply"** changes.
4. In the **"File Management"** tab, fill up the corresponding fields as follows:
![File Management]({{"/images/ANSYS/merlin7/rsm-3-add_scratch_info.png"}})
* Select **"RSM internal file transfer mechanism"** and add **`/data/scratch/shared`** as the **"Staging directory path on Cluster"**
* Select **"Scratch directory local to the execution node(s)"** and add **`/scratch`** as the **HPC scratch directory**.
* **Never check** the option "Keep job files in the staging directory when job is complete" if the previous
option "Scratch directory local to the execution node(s)" was set.
* **"Apply"** changes.
5. In the **"Queues"** tab, use the left button to auto-discover partitions
![Queues]({{"/images/ANSYS/merlin7/rsm-4-get_slurm_queues.png"}})
* If no authentication method was configured before, an authentication window will appear. Use your
PSI account to authenticate. Notice that the **`PSICH\`** prefix **must not be added**.
![Authenticating]({{"/images/ANSYS/merlin7/rsm-5-authenticating.png"}})
* From the partition list, select the ones you typically want to use.
* In general, standard Merlin users must use **`hourly`**, **`daily`** and **`general`** only.
* Other partitions are reserved for allowed users only.
* **"Apply"** changes.
![Select partitions]({{"/images/ANSYS/merlin7/rsm-6-selected-partitions.png"}})
6. *[Optional]* You can submit a test job to each selected partition by clicking the corresponding **Submit** button.
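As a reference for step 3, the *Slurm Job submission arguments* field is a plain string of Slurm options. A minimal sketch could look like the following (the wall-time option is only an illustration; add or remove options to match your jobs):

```
--hint=nomultithread --exclusive --time=02:00:00
```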
{{site.data.alerts.tip}}
In the future, we might provide this service also from the login nodes for better transfer performance.
{{site.data.alerts.end}}
## Using RSM in ANSYS
Using the RSM service in ANSYS is slightly different depending on the ANSYS software being used.
Please follow the official ANSYS documentation for details about how to use it for that specific software.
Alternatively, please refer to the examples shown in the following chapters (ANSYS-specific software).
### Using RSM in ANSYS Fluent
For further information on using RSM with Fluent, please visit the **[ANSYS Fluent](/merlin7/ansys-fluent.html)** section.
### Using RSM in ANSYS CFX
For further information on using RSM with CFX, please visit the **[ANSYS CFX](/merlin7/ansys-cfx.html)** section.
### Using RSM in ANSYS MAPDL
For further information on using RSM with MAPDL, please visit the **[ANSYS MAPDL](/merlin7/ansys-mapdl.html)** section.

View File

@@ -0,0 +1,95 @@
---
title: ANSYS
#tags:
keywords: software, ansys, slurm, interactive, rsm, pmodules, overlay, overlays
last_updated: 23 August 2024
summary: "This document describes how to load and use ANSYS in the Merlin7 cluster"
sidebar: merlin7_sidebar
permalink: /merlin7/ansys.html
---
This document provides generic information on how to load and run ANSYS software in the Merlin7 cluster.
## ANSYS software in Pmodules
The ANSYS software can be loaded through **[PModules](/merlin7/pmodules.html)**.
The default ANSYS versions are loaded from the central PModules repository.
However, we also provide local installations on Merlin7, which are mainly needed by some ANSYS components, such as ANSYS RSM.
For this reason, and also to improve the interactive user experience, ANSYS has additionally been installed on the
Merlin high-performance storage and made available through PModules.
### Loading Merlin7 ANSYS
```bash
module purge
module use unstable # Optional
module search ANSYS
# Load the desired ANSYS version, for example 2025R2
module load ANSYS/2025R2
```
**We strongly recommend using only ANSYS/2024R2 or later**.
<details>
<summary>[Example] Loading ANSYS from the Merlin7 PModules repository</summary>
<pre class="terminal code highlight js-syntax-highlight plaintext" lang="plaintext" markdown="false">
🔥 [caubet_m@login001:~]# module purge
🔥 [caubet_m@login001:~]# module use unstable
🔥 [caubet_m@login001:~]# module load cray
🔥 [caubet_m@login002:~]# module search ANSYS --verbose
ANSYS/2022R2:
release stage: stable
group: Tools
overlay: merlin
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2022R2
dependencies: (none)
ANSYS/2023R2:
release stage: unstable
group: Tools
overlay: merlin
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2023R2
dependencies: (none)
ANSYS/2024R2:
release stage: stable
group: Tools
overlay: merlin
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2024R2
dependencies: (none)
ANSYS/2025R2:
release stage: unstable
group: Tools
overlay: merlin
modulefile: /data/software/pmodules/Tools/modulefiles/ANSYS/2025R2
dependencies: (none)
</pre>
</details>
{{site.data.alerts.tip}}Please always run <b>ANSYS/2024R2 or later</b>.
{{site.data.alerts.end}}
## ANSYS Documentation by product
### ANSYS RSM
**ANSYS Remote Solve Manager (RSM)** is used by ANSYS Workbench to submit computational jobs to HPC clusters directly from Workbench on your desktop.
Therefore, PSI workstations with direct access to Merlin can submit jobs by using RSM.
For further information, please visit the **[ANSYS RSM](/merlin7/ansys-rsm.html)** section.
### ANSYS Fluent
For further information, please visit the **[ANSYS Fluent](/merlin7/ansys-fluent.html)** section.
### ANSYS CFX
For further information, please visit the **[ANSYS CFX](/merlin7/ansys-cfx.html)** section.
### ANSYS MAPDL
For further information, please visit the **[ANSYS MAPDL](/merlin7/ansys-mapdl.html)** section.

View File

@@ -0,0 +1,159 @@
---
title: CP2K
keywords: CP2K software, compile
summary: "CP2K is a quantum chemistry and solid state physics software package"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/cp2k.html
---
## CP2K
CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems.
CP2K provides a general framework for different modeling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, …), and classical force fields (AMBER, CHARMM, …). CP2K can perform simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization using the NEB or dimer method.
## Licensing Terms and Conditions
CP2K is a joint effort, with contributions from developers around the world: users agree to acknowledge use of CP2K in any reports or publications of results obtained with the Software (see the CP2K homepage for details).
## How to run on Merlin7
### CPU nodes
```bash
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-hgej cp2k/2025.2-yb6g-omp
```
### A100 nodes
```bash
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-r5lz-A100-gpu cp2k/2025.2-hkub-A100-gpu-omp
```
### GH nodes
```bash
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-tx2w-GH200-gpu cp2k/2025.2-xk4q-GH200-gpu-omp
```
### SBATCH CPU, 4 MPI ranks, 16 OMP threads
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --nodes=1 # requesting 1 compute node
#SBATCH --ntasks=4 # use 4 MPI rank (task)
#SBATCH --partition=hourly
#SBATCH --cpus-per-task=16 # modify this number of CPU cores per MPI task
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
unset PMODULES_ENV
module purge
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-hgej cp2k/2025.2-yb6g-omp
export FI_CXI_RX_MATCH_MODE=software
export OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK - 1))
srun cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPUT>
```
### SBATCH A100, 4 GPU, 16 OMP threads, 4 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1 # number of A100 nodes
#SBATCH --ntasks-per-node=4 # 4 MPI ranks per node
#SBATCH --cpus-per-task=16 # 16 OMP threads per MPI rank
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=a100-hourly
#SBATCH --gpus=4
unset PMODULES_ENV
module purge
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-r5lz-A100-gpu cp2k/2025.2-hkub-A100-gpu-omp
export FI_CXI_RX_MATCH_MODE=software
export OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK - 1))
srun cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPUT>
```
### SBATCH GH, 2 GPU, 18 OMP threads, 2 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1 # number of GH200 nodes with each node having 4 CPU+GPU
#SBATCH --ntasks-per-node=2 # 2 MPI ranks per node
#SBATCH --cpus-per-task=18 # 18 OMP threads per MPI rank
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=gh-hourly
#SBATCH --gpus=2
unset PMODULES_ENV
module purge
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-tx2w-GH200-gpu cp2k/2025.2-xk4q-GH200-gpu-omp
export FI_CXI_RX_MATCH_MODE=software
export OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK - 1))
srun cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPUT>
```
## Developing your own CPU code
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/cp2k_cpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-hgej dbcsr/2.8.0-4yld-omp openblas/0.3.30-gye6-omp netlib-scalapack/2.2.2-2trj libxsmm/1.17-hwwi libxc/7.0.0-mibp libint/2.11.1-nxhl hdf5/1.14.6-tgzo fftw/3.3.10-t7bo-omp py-fypp/3.1-bteo sirius/7.8.0-uh3i-omp cmake/3.31.8-j47l ninja/1.12.1-afxy
git clone https://github.com/cp2k/cp2k.git
cd cp2k
mkdir build && cd build
CC=mpicc CXX=mpic++ FC=mpifort cmake -GNinja -DCMAKE_CUDA_HOST_COMPILER=mpicc -DCP2K_USE_LIBXC=ON -DCP2K_USE_LIBINT2=ON -DCP2K_USE_SIRIUS=ON -DCP2K_USE_SPLA=ON -DCP2K_USE_SPGLIB=ON -DCP2K_USE_HDF5=ON -DCP2K_USE_FFTW3=ON ..
ninja -j 16
```
## Developing your own GPU code
#### A100
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/cp2k_gpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-r5lz-A100-gpu dbcsr/2.8.0-3r22-A100-gpu-omp cosma/2.7.0-y2tr-gpu cuda/12.6.0-3y6a dftd4/3.7.0-4k4c-omp elpa/2025.01.002-bovg-A100-gpu-omp fftw/3.3.10-syba-omp hdf5/1.14.6-pcsd libint/2.11.1-3lxv libxc/7.0.0-u556 libxsmm/1.17-2azz netlib-scalapack/2.2.2-rmcf openblas/0.3.30-ynou-omp plumed/2.9.2-47hk py-fypp/3.1-z25p py-numpy/2.3.2-45ay python/3.13.5-qivs sirius/develop-qz4c-A100-gpu-omp spglib/2.5.0-jl5l-omp spla/1.6.1-hrgf-gpu cmake/3.31.8-j47l ninja/1.12.1-afxy
git clone https://github.com/cp2k/cp2k.git
cd cp2k
mkdir build && cd build
CC=mpicc CXX=mpic++ FC=mpifort cmake -GNinja -DCMAKE_CUDA_HOST_COMPILER=mpicc -DCP2K_USE_LIBXC=ON -DCP2K_USE_LIBINT2=ON -DCP2K_USE_SPGLIB=ON -DCP2K_USE_ELPA=ON -DCP2K_USE_SPLA=ON -DCP2K_USE_SIRIUS=ON -DCP2K_USE_PLUMED=ON -DCP2K_USE_DFTD4=ON -DCP2K_USE_COSMA=ON -DCP2K_USE_ACCEL=CUDA -DCMAKE_CUDA_ARCHITECTURES=80 -DCP2K_USE_FFTW3=ON ..
ninja -j 16
```
#### GH200
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/cp2k_gh_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
salloc --partition=gh-daily --clusters=gmerlin7 --time=08:00:00 --ntasks=4 --nodes=1 --gpus=1 --mem=40000 $SHELL
ssh <allocated_gpu>
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-tx2w-GH200-gpu dbcsr/2.8.0-h3bo-GH200-gpu-omp cosma/2.7.0-dc23-gpu cuda/12.6.0-wak5 dbcsr/2.8.0-h3bo-GH200-gpu-omp dftd4/3.7.0-aa6l-omp elpa/2025.01.002-nybd-GH200-gpu-omp fftw/3.3.10-alp3-omp hdf5/1.14.6-qjob libint/2.11.1-dpqq libxc/7.0.0-ojgl netlib-scalapack/2.2.2-cj5m openblas/0.3.30-rv46-omp plumed/2.9.2-nbay py-fypp/3.1-j4yw py-numpy/2.3.2-yoqr python/3.13.5-xbg5 sirius/develop-v5tb-GH200-gpu-omp spglib/2.5.0-da2i-omp spla/1.6.1-uepy-gpu cmake/3.31.8-2jne ninja/1.13.0-xn4a
git clone https://github.com/cp2k/cp2k.git
cd cp2k
mkdir build && cd build
CC=mpicc CXX=mpic++ FC=mpifort cmake -GNinja -DCMAKE_CUDA_HOST_COMPILER=mpicc -DCP2K_USE_LIBXC=ON -DCP2K_USE_LIBINT2=ON -DCP2K_USE_SPGLIB=ON -DCP2K_USE_ELPA=ON -DCP2K_USE_SPLA=ON -DCP2K_USE_SIRIUS=ON -DCP2K_USE_PLUMED=ON -DCP2K_USE_DFTD4=ON -DCP2K_USE_COSMA=ON -DCP2K_USE_ACCEL=CUDA -DCMAKE_CUDA_ARCHITECTURES=90 -DCP2K_USE_FFTW3=ON -DCP2K_USE_HDF5=ON ..
ninja -j 16
```

View File

@@ -0,0 +1,64 @@
---
title: Cray Programming Environment
#tags:
keywords: cray, module
last_updated: 24 May 2023
summary: "This document describes how to use the Cray Programming Environment on Merlin7."
sidebar: merlin7_sidebar
permalink: /merlin7/cray-module-env.html
---
## Loading the Cray module
The Cray Programming Environment, with Cray's compilers and MPI, is not loaded by default.
To load it, one has to run the following command:
```bash
module load cray
```
The Cray Programming Environment will load all the necessary dependencies. For example:
```bash
🔥 [caubet_m@login001:~]# module list
Currently Loaded Modules:
  1) craype-x86-rome                                  2) libfabric/1.15.2.0
  3) craype-network-ofi                               4) xpmem/2.9.6-1.1_20240510205610__g087dc11fc19d
  5) PrgEnv-cray/8.5.0                                6) cce/17.0.0
  7) cray-libsci/23.12.5                              8) cray-mpich/8.1.28
  9) craype/2.7.30                                   10) perftools-base/23.12.0
 11) cpe/23.12                                       12) cray/23.12
```
You will notice that an unfamiliar module, `PrgEnv-cray/8.5.0`, was loaded. This is a meta-module that Cray provides to simplify switching between compilers and their associated dependencies and libraries,
collectively called a Programming Environment. The Cray Programming Environment contains four key modules:
* `cray-libsci` is a collection of numerical routines tuned for performance on Cray systems.
* `libfabric` is an important low-level library that allows you to take advantage of the high performance Slingshot network.
* `cray-mpich` is a CUDA-aware MPI implementation, optimized for Cray systems.
* `cce` is the compiler from Cray. C/C++ compilers are based on Clang/LLVM while Fortran supports Fortran 2018 standard. More info: https://user.cscs.ch/computing/compilation/cray/
You can switch between different programming environments. You can check the available modules with the `module avail` command, as follows:
```bash
🔥 [caubet_m@login001:~]# module avail PrgEnv
--------------------- /opt/cray/pe/lmod/modulefiles/core ---------------------
PrgEnv-cray/8.5.0 PrgEnv-gnu/8.5.0
PrgEnv-nvhpc/8.5.0 PrgEnv-nvidia/8.5.0
```
## Switching compiler suites
Compiler suites can be exchanged with the PrgEnv (Programming Environment) meta-modules provided by HPE Cray. The compiler wrappers call the correct compiler with appropriate options to build
and link applications with the relevant libraries, as required by the loaded modules (only dynamic linking is supported), and therefore should replace direct calls to compiler
drivers in Makefiles and build scripts.
To swap the compiler suite from the default Cray to the GNU compilers, one can run the following:
```bash
🔥 [caubet_m@login001:~]# module swap PrgEnv-cray/8.5.0 PrgEnv-gnu/8.5.0
Lmod is automatically replacing "cce/17.0.0" with "gcc-native/12.3".
```
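As a minimal sketch of how the wrappers are meant to be used (assuming a trivial MPI source file of your own, here called `hello_mpi.c`/`hello_mpi.f90`), you call `cc`, `CC` or `ftn` instead of the underlying compiler. The wrapper of the currently loaded PrgEnv adds the include paths and libraries of the loaded modules (e.g. `cray-mpich`, `cray-libsci`) and links dynamically:

```bash
# Compile an MPI program with the wrappers of the active Programming Environment
cc  -O2 hello_mpi.c   -o hello_mpi   # C
CC  -O2 hello_mpi.cpp -o hello_mpi   # C++
ftn -O2 hello_mpi.f90 -o hello_mpi   # Fortran
```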

View File

@@ -0,0 +1,163 @@
---
title: GROMACS
keywords: GROMACS software, compile
summary: "GROMACS (GROningen Machine for Chemical Simulations) is a versatile and widely-used open source package to perform molecular dynamics"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/gromacs.html
---
## GROMACS
GROMACS (GROningen Machine for Chemical Simulations) is a versatile and widely-used open source package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups are also using it for research on non-biological systems, e.g. polymers.
## Licensing Terms and Conditions
GROMACS is a joint effort, with contributions from developers around the world: users agree to acknowledge use of GROMACS in any reports or publications of results obtained with the Software (see GROMACS Homepage for details).
## How to run on Merlin7
## 2025.2
### CPU nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-ax23-A100-gpu gromacs/2025.2-whcq-omp
```
### A100 nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-omp
```
### GH nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-blxc-GH200-gpu gromacs/2025.2-cjnq-GH200-gpu-omp
```
## 2025.3
### CPU nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-n4yf-A100-gpu gromacs/2025.3-6ken-omp
```
### A100 nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-xqhy-A100-gpu gromacs/2025.3-ohlj-A100-gpu-omp
```
### GH nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-inxi-GH200-gpu gromacs/2025.3-yqlu-GH200-gpu-omp
```
### SBATCH CPU, 4 MPI ranks, 16 OMP threads
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --nodes=1 # requesting 1 compute node
#SBATCH --ntasks=4 # use 4 MPI rank (task)
#SBATCH --partition=hourly
#SBATCH --cpus-per-task=16 # modify this number of CPU cores per MPI task
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-ax23-A100-gpu gromacs/2025.2-whcq-omp
export FI_CXI_RX_MATCH_MODE=software
# Add your input (tpr) file in the command below
srun gmx_mpi grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c step5_input.gro -r step5_input.gro -p topol.top -n index.ndx
srun gmx_mpi mdrun -s step6.0_minimization.tpr -pin on -ntomp ${SLURM_CPUS_PER_TASK}
```
### SBATCH A100, 4 GPU, 16 OMP threads, 4 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1 # number of A100 nodes
#SBATCH --ntasks-per-node=4 # 4 MPI ranks per node
#SBATCH --cpus-per-task=16 # 16 OMP threads per MPI rank
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=a100-hourly
#SBATCH --gpus=4
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-omp
export FI_CXI_RX_MATCH_MODE=software
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_FORCE_UPDATE_DEFAULT_GPU=true
export GMX_ENABLE_DIRECT_GPU_COMM=1
export GMX_FORCE_GPU_AWARE_MPI=1
# Add your input (tpr) file in the command below
srun gmx_mpi grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c step5_input.gro -r step5_input.gro -p topol.top -n index.ndx
srun gmx_mpi mdrun -s step6.0_minimization.tpr -ntomp ${SLURM_CPUS_PER_TASK}
```
### SBATCH GH, 2 GPU, 18 OMP threads, 2 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1 # number of GH200 nodes with each node having 4 CPU+GPU
#SBATCH --ntasks-per-node=2 # 2 MPI ranks per node
#SBATCH --cpus-per-task=18 # 18 OMP threads per MPI rank
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=gh-hourly
#SBATCH --gpus=2
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-blxc-GH200-gpu gromacs/2025.2-cjnq-GH200-gpu-omp
export FI_CXI_RX_MATCH_MODE=software
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_FORCE_UPDATE_DEFAULT_GPU=true
export GMX_ENABLE_DIRECT_GPU_COMM=1
export GMX_FORCE_GPU_AWARE_MPI=1
# Add your input (tpr) file in the command below
srun gmx_mpi grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c step5_input.gro -r step5_input.gro -p topol.top -n index.ndx
srun gmx_mpi mdrun -s step6.0_minimization.tpr -ntomp ${SLURM_CPUS_PER_TASK}
```
## Developing your own GPU code
#### A100
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-omp cmake/3.31.6-o3lb python/3.13.1-cyro
git clone https://github.com/gromacs/gromacs.git
cd gromacs
mkdir build && cd build
# Note: use -DGMX_CUDA_TARGET_SM="90" for the Hopper (GH200) GPUs,
# and turn on -DGMX_DOUBLE only if double precision is really needed.
cmake -DCMAKE_C_COMPILER=gcc-12 \
      -DCMAKE_CXX_COMPILER=g++-12 \
      -DGMX_MPI=on \
      -DGMX_GPU=CUDA \
      -DGMX_CUDA_TARGET_SM="80" \
      -DGMX_DOUBLE=off \
      ..
make
```

View File

@@ -0,0 +1,50 @@
---
title: IPPL
keywords: IPPL software, compile
summary: "Independent Parallel Particle Layer (IPPL) is a performance portable C++ library for Particle-Mesh methods"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/ippl.html
---
## IPPL
Independent Parallel Particle Layer (IPPL) is a performance portable C++ library for Particle-Mesh methods. IPPL makes use of Kokkos (https://github.com/kokkos/kokkos), HeFFTe (https://github.com/icl-utk-edu/heffte), and MPI (Message Passing Interface) to deliver a portable, massively parallel toolkit for particle-mesh methods. IPPL supports simulations in one to six dimensions, mixed precision, and asynchronous execution in different execution spaces (e.g. CPUs and GPUs).
## Licensing Terms and Conditions
GNU GPLv3
## How to run on Merlin7
### A100 nodes
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/ippl_gpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module use Spack unstable
module load gcc/13.2.0 openmpi/5.0.7-dnpr-A100-gpu boost/1.82.0-lgrt fftw/3.3.10.6-zv2b-omp googletest/1.14.0-msmu h5hut/2.0.0rc7-zy7s openblas/0.3.29-zkwb cmake/3.31.6-ufy7
cd <path to IPPL source directory>
mkdir build_gpu
cd build_gpu
cmake -DCMAKE_BUILD_TYPE=Release -DKokkos_ARCH_AMPERE80=ON -DCMAKE_CXX_STANDARD=20 -DIPPL_ENABLE_FFT=ON -DIPPL_ENABLE_TESTS=ON -DUSE_ALTERNATIVE_VARIANT=ON -DIPPL_ENABLE_SOLVERS=ON -DIPPL_ENABLE_ALPINE=True -DIPPL_PLATFORMS=cuda ..
make [-jN]
```
### GH nodes
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/ippl_gh_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
salloc --partition=gh-daily --clusters=gmerlin7 --time=08:00:00 --ntasks=4 --nodes=1 --gpus=1 --mem=40000 $SHELL
ssh <allocated_gpu>
module use Spack unstable
module load gcc/13.2.0 openmpi/5.0.3-3lmi-GH200-gpu
module load boost/1.82.0-3ns6 fftw/3.3.10 gnutls/3.8.3 googletest/1.14.0 gsl/2.7.1 h5hut/2.0.0rc7 openblas/0.3.26 cmake/3.31.4-u2nm
cd <path to IPPL source directory>
mkdir build_gh
cd build_gh
cmake -DCMAKE_BUILD_TYPE=Release -DKokkos_ARCH_HOPPER90=ON -DCMAKE_CXX_STANDARD=20 -DIPPL_ENABLE_FFT=ON -DIPPL_ENABLE_TESTS=ON -DUSE_ALTERNATIVE_VARIANT=ON -DIPPL_ENABLE_SOLVERS=ON -DIPPL_ENABLE_ALPINE=True -DIPPL_PLATFORMS=cuda ..
make [-jN]
```

View File

@@ -0,0 +1,117 @@
---
title: LAMMPS
keywords: LAMMPS software, compile
summary: "LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/lammps.html
---
## LAMMPS
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions. The current version of LAMMPS is written in C++.
## Licensing Terms and Conditions
LAMMPS is an open-source code, available free-of-charge, and distributed under the terms of the GNU Public License Version 2 (GPLv2), which means you can use or modify the code however you wish for your own purposes, but have to adhere to certain rules when redistributing it - specifically in binary form - or are distributing software derived from it or that includes parts of it.
LAMMPS comes with no warranty of any kind.
As each source file states in its header, it is a copyrighted code, and thus not in the public domain. For more information about open-source software and open-source distribution, see www.gnu.org or www.opensource.org. The legal text of the GPL as it applies to LAMMPS is in the LICENSE file included in the LAMMPS distribution.
Here is a more specific summary of what the GPL means for LAMMPS users:
(1) Anyone is free to use, copy, modify, or extend LAMMPS in any way they choose, including for commercial purposes.
(2) If you distribute a modified version of LAMMPS, it must remain open-source, meaning you are required to distribute all of it under the terms of the GPLv2. You should clearly annotate such a modified code as a derivative version of LAMMPS. This is best done by changing the name (example: LIGGGHTS is such a modified and extended version of LAMMPS).
(3) If you release any code that includes or uses LAMMPS source code, then it must also be open-sourced, meaning you distribute it under the terms of the GPLv2. You may write code that interfaces LAMMPS to a differently licensed library. In that case the code that provides the interface must be licensed GPLv2, but not necessarily that library unless you are distributing binaries that require the library to run.
(4) If you give LAMMPS files to someone else, the GPLv2 LICENSE file and source file headers (including the copyright and GPLv2 notices) should remain part of the code.
## How to run on Merlin7
### CPU nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-37gs-omp
```
### A100 nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-xcaf-A100-gpu-omp
```
### GH nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-fvlo-GH200-gpu lammps/20250722-3tfv-GH200-gpu-omp
```
### SBATCH CPU, 4 MPI ranks, 16 OMP threads
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --nodes=1 # requesting 1 compute node
#SBATCH --ntasks=4 # use 4 MPI rank (task)
#SBATCH --partition=hourly
#SBATCH --cpus-per-task=16 # modify this number of CPU cores per MPI task
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-37gs-omp
export FI_CXI_RX_MATCH_MODE=software
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
srun --cpu-bind=cores lmp -k on t $OMP_NUM_THREADS -sf kk -in lj_kokkos.in
```
### SBATCH A100, 4 GPU, 16 OMP threads, 4 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1 # number of A100 nodes
#SBATCH --ntasks-per-node=4 # 4 MPI ranks per node
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=a100-hourly
#SBATCH --gpus-per-task=1
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-xcaf-A100-gpu-omp
export FI_CXI_RX_MATCH_MODE=software
srun lmp -in lj_kokkos.in -k on g ${SLURM_GPUS_PER_TASK} -sf kk -pk kokkos gpu/aware on
```
### SBATCH GH, 2 GPU, 18 OMP threads, 2 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1 # number of GH200 nodes with each node having 4 CPU+GPU
#SBATCH --ntasks-per-node=2 # 2 MPI ranks per node
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=gh-hourly
#SBATCH --gpus-per-task=1
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-fvlo-GH200-gpu lammps/20250722-3tfv-GH200-gpu-omp
export FI_CXI_RX_MATCH_MODE=software
srun lmp -in lj_kokkos.in -k on g ${SLURM_GPUS_PER_TASK} -sf kk -pk kokkos gpu/aware on
```

View File

@@ -0,0 +1,74 @@
---
title: OPAL-X
keywords: OPAL-X software, compile
summary: "OPAL (Object Oriented Particle Accelerator Library) is an open source C++ framework for general particle accelerator simulations including 3D space charge, short range wake fields and particle matter interaction."
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/opal-x.html
---
## OPAL
OPAL (Object Oriented Particle Accelerator Library) is an open source C++ framework for general particle accelerator simulations including 3D space charge, short range wake fields and particle matter interaction.
## Licensing Terms and Conditions
GNU GPLv3
## How to run on Merlin7
### A100 nodes
```bash
module purge
module use Spack unstable
module load gcc/13.2.0 openmpi/5.0.7-dnpr-A100-gpu opal-x/master-cbgs-A100-gpu
```
### GH nodes
```bash
module purge
module use Spack unstable
module load gcc/13.2.0 openmpi/5.0.7-z3y6-GH200-gpu opal-x/master-v6v2-GH200-gpu
```
## Developing your own code
### A100 nodes
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/opal-x_gpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module purge
module use Spack unstable
module load gcc/13.2.0 openmpi/5.0.7-dnpr-A100-gpu
module load boost/1.82.0-lgrt fftw/3.3.10.6-zv2b-omp gnutls/3.8.9-mcdr googletest/1.14.0-msmu gsl/2.7.1-hxwy h5hut/2.0.0rc7-zy7s openblas/0.3.29-zkwb cmake/3.31.6-oe7u
git clone https://github.com/OPALX-project/OPALX.git opal-x
cd opal-x
./gen_OPALrevision
mkdir build_gpu
cd build_gpu
cmake -DCMAKE_BUILD_TYPE=Release -DKokkos_ARCH_AMPERE80=ON -DCMAKE_CXX_STANDARD=20 -DIPPL_ENABLE_FFT=ON -DIPPL_ENABLE_TESTS=OFF -DIPPL_ENABLE_SOLVERS=ON -DIPPL_ENABLE_ALPINE=True -DIPPL_PLATFORMS=cuda ..
make [-jN]
```
### GH nodes
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/opal-x_gh_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
salloc --partition=gh-daily --clusters=gmerlin7 --time=08:00:00 --ntasks=4 --nodes=1 --gpus=1 --mem=40000 $SHELL
ssh <allocated_gpu>
module purge
module use Spack unstable
module load gcc/13.2.0 openmpi/5.0.7-z3y6-GH200-gpu
module load boost/1.82.0-znbt fftw/3.3.10-jctz gnutls/3.8.9-rtrg googletest/1.15.2-odox gsl/2.7.1-j2dk h5hut/2.0.0rc7-k63k openblas/0.3.29-d3m2 cmake/3.31.4-u2nm
git clone https://github.com/OPALX-project/OPALX.git opal-x
cd opal-x
./gen_OPALrevision
mkdir build_gh
cd build_gh
cmake -DCMAKE_BUILD_TYPE=Release -DKokkos_ARCH_HOPPER90=ON -DCMAKE_CXX_STANDARD=20 -DIPPL_ENABLE_FFT=ON -DIPPL_ENABLE_TESTS=OFF -DIPPL_ENABLE_SOLVERS=ON -DIPPL_ENABLE_ALPINE=OFF -DIPPL_PLATFORMS=cuda ..
make [-jN]
```

View File

@@ -0,0 +1,80 @@
---
title: OpenMPI Support
#tags:
last_updated: 15 January 2025
keywords: software, openmpi, slurm
summary: "This document describes how to use OpenMPI in the Merlin7 cluster"
sidebar: merlin7_sidebar
permalink: /merlin7/openmpi.html
---
## Introduction
This document outlines the supported OpenMPI versions in the Merlin7 cluster.
### Supported OpenMPI versions
The Merlin cluster supports OpenMPI versions across three distinct stages: stable, unstable, and deprecated. Below is an overview of each stage:
#### Stable
Versions in the `stable` stage are fully functional, thoroughly tested, and officially supported by the Merlin administrators.
These versions are available via [Pmodules](/merlin7/pmodules.html) and [Spack](/merlin7/spack.html), ensuring compatibility and reliability for production use.
#### Unstable
Versions in the `unstable` stage are available for testing and early access to new OpenMPI features.
While these versions can be used, their compilation and configuration are subject to change before they are promoted to the `stable` stage.
Administrators recommend caution when relying on `unstable` versions for critical workloads.
#### Deprecated
Versions in the `deprecated` stage are no longer supported by the Merlin administrators.
Typically, these include versions no longer supported by the official [OpenMPI](https://www.open-mpi.org/software/ompi/v5.0/) project.
While deprecated versions may still be available for use, their functionality cannot be guaranteed, and they will not receive updates or bug fixes.
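To see which OpenMPI versions are currently provided and in which release stage they are, you can query PModules directly (see the [PSI Modules](/merlin7/pmodules.html) page for details on these commands):

```bash
module use unstable     # optionally make unstable versions visible as well
module search openmpi   # list available OpenMPI versions, their release stage and dependencies
```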
### Using srun in Merlin7
In OpenMPI versions prior to 5.0.x, using `srun` for direct task launches was faster than `mpirun`.
Although this is no longer the case, `srun` remains the recommended method due to its simplicity and ease of use.
Key benefits of `srun`:
* Automatically handles task binding to cores.
* In general, requires less configuration compared to `mpirun`.
* Best suited for most users, while `mpirun` is recommended only for advanced MPI configurations.
Guidelines:
* Always adapt your scripts to use srun before seeking support.
* For any module-related issues, please contact the Merlin7 administrators.
Example Usage:
```bash
srun ./app
```
{{site.data.alerts.tip}}
Always run OpenMPI applications with <b>srun</b> for a seamless experience.
{{site.data.alerts.end}}
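As a minimal batch-script sketch (assuming a generic MPI binary `./app` built against one of the provided OpenMPI modules; the module versions below are only placeholders, pick the ones you built against):

```bash
#!/bin/bash
#SBATCH --ntasks=8          # 8 MPI ranks
#SBATCH --time=00:10:00
#SBATCH --partition=hourly

module purge
module load gcc/14.2.0 openmpi/5.0.5   # placeholders: load the OpenMPI module used at build time

srun ./app                  # srun launches the tasks and handles core binding
```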
### PMIx Support in Merlin7
Merlin7's SLURM installation includes support for multiple PMI types, including pmix. To view the available options, use the following command:
```bash
🔥 [caubet_m@login001:~]# srun --mpi=list
MPI plugin types are...
none
pmix
pmi2
cray_shasta
specific pmix plugin versions available: pmix_v5,pmix_v4,pmix_v3,pmix_v2
```
Important Notes:
* For OpenMPI, always use `pmix` by specifying the appropriate version (`pmix_$version`); see the sketch below.
When loading an OpenMPI module (via [Pmodules](/merlin7/pmodules.html) or [Spack](/merlin7/spack.html)), the corresponding PMIx version will be automatically loaded.
* Users do not need to manually manage PMIx compatibility.
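As a sketch, explicitly selecting a PMIx plugin for a job step looks as follows (the exact version suffix depends on the OpenMPI/PMIx module you have loaded; `pmix_v4` is only an illustration):

```bash
srun --mpi=pmix_v4 ./app   # pin a specific PMIx version
srun --mpi=pmix ./app      # or let SLURM pick the generic pmix plugin
```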
{{site.data.alerts.warning}}
PMI-2 is not supported in OpenMPI 5.0.0 or later releases.
Despite this, <b>pmi2</b> remains the default SLURM PMI type in Merlin7 as it is the officially supported type and maintains compatibility with other MPI implementations.
{{site.data.alerts.end}}

View File

@@ -0,0 +1,153 @@
---
title: PSI Modules
#tags:
keywords: Pmodules, software, stable, unstable, deprecated, overlay, overlays, release stage, module, package, packages, library, libraries
last_updated: 07 September 2022
#summary: ""
sidebar: merlin7_sidebar
permalink: /merlin7/pmodules.html
---
## PSI Environment Modules
On top of the operating system stack we provide additional software using the PSI-developed PModules system.
PModules is the officially supported way of providing software, and each package is deployed by a specific expert. PModules
typically contains software that is used by many people.
If you are missing a package or version, or need a specific software feature that is not yet available, please contact us. We will evaluate whether it is feasible to install it.
### Module Release Stages
To ensure proper software lifecycle management, PModules uses three release stages: unstable, stable, and deprecated.
1. **Unstable Release Stage:**
* Contains experimental or under-development software versions.
* Not visible to users by default. Use explicitly:
```bash
module use unstable
```
* Software is promoted to **stable** after validation.
2. **Stable Release Stage:**
* Default stage, containing fully tested and supported software versions.
* Recommended for all production workloads.
3. **Deprecated Release Stage:**
* Contains software versions that are outdated or discontinued.
* These versions are hidden by default but can be explicitly accessed:
```bash
module use deprecated
```
* Deprecated software can still be loaded directly without additional configuration to ensure user transparency.
## PModules commands
Below is a summary of common `module` commands:
```bash
module use # show all available PModule Software Groups as well as Release Stages
module avail # to see the list of available software packages provided via pmodules
module use unstable # to get access to a set of packages not fully tested by the community
module load <package>/<version> # to load specific software package with a specific version
module search <string> # to search for a specific software package and its dependencies.
module list # to list which software is loaded in your environment
module purge # unload all loaded packages and cleanup the environment
```
Please refer to the **external [PSI Modules](https://pmodules.gitpages.psi.ch/chap3.html) document** for
detailed information about the `module` command.
### module use/unuse
Without any parameter, `use` **lists** all available PModule **Software Groups and Release Stages**.
```bash
module use
```
When followed by a parameter, `use`/`unuse` invokes/uninvokes a PModule **Software Group** or **Release Stage**.
```bash
module use EM # Invokes the 'EM' software group
module unuse EM # Uninvokes the 'EM' software group
module use unstable # Invokes the 'unstable' Release Stage
module unuse unstable # Uninvokes the 'unstable' Release Stage
```
### module avail
This option **lists** all available PModule **Software Groups and their packages**.
Please run `module avail --help` for further listing options.
### module search
This is used to **search** for **software packages**. By default, if no **Release Stage** or **Software Group** is specified
in the options of the `module search` command, it will search within the already invoked *Software Groups* and *Release Stages*.
Direct package dependencies will also be shown.
```bash
🔥 [caubet_m@login001:~]# module search openmpi
Module Rel.stage Group Overlay Requires
--------------------------------------------------------------------------------
openmpi/4.1.6 stable Compiler Alps gcc/12.3.0
openmpi/4.1.6 stable Compiler Alps gcc/13.3.0
openmpi/4.1.6 stable Compiler Alps gcc/14.2.0
openmpi/4.1.6 stable Compiler Alps intelcc/22.2
openmpi/5.0.5 stable Compiler Alps gcc/8.5.0
openmpi/5.0.5 stable Compiler Alps gcc/12.3.0
openmpi/5.0.5 stable Compiler Alps gcc/14.2.0
openmpi/5.0.5 stable Compiler Alps intelcc/22.2
```
Please run `module search --help` for further search options.
### module load/unload
This loads/unloads specific software packages. Packages might have direct dependencies that need to be loaded first. Other dependencies
will be automatically loaded.
In the example below, the ``openmpi/5.0.5`` package will be loaded; however, ``gcc/14.2.0`` must be loaded as well, since it is a strict dependency. Direct dependencies must be loaded in advance. Users can load multiple packages one by one or all at once on a single line, which can be useful when loading a package together with its direct dependencies.
```bash
# Single line
module load gcc/14.2.0 openmpi/5.0.5
# Multiple line
module load gcc/14.2.0
module load openmpi/5.0.5
```
#### module purge
This command is an alternative to `module unload`, which can be used to unload **all** loaded module files.
```bash
module purge
```
## Requesting New PModules Packages
The PModules system is designed to accommodate the diverse software needs of Merlin7 users. Below are guidelines for requesting new software or versions to be added to PModules.
### Requesting Missing Software
If a specific software package is not available in PModules and there is interest from multiple users:
* **[Contact Support](/merlin7/contact.html):** Let us know about the software, and we will assess its feasibility for deployment.
* **Deployment Timeline:** Adding new software to PModules typically takes a few days, depending on complexity and compatibility.
* **User Involvement:** If you are interested in maintaining the software package, please inform us. Collaborative maintenance helps
ensure timely updates and support.
### Requesting a Missing Version
If the currently available versions of a package do not meet your requirements:
* **New Versions:** Requests for newer versions are generally supported, especially if there is interest from multiple users.
* **Intermediate Versions:** Installation of intermediate versions (e.g., versions between the current stable and deprecated versions)
can be considered if there is a strong justification, such as specific features or compatibility requirements.
### General Notes
* New packages or versions are prioritized based on their relevance and usage.
* For any request, providing detailed information about the required software or version (e.g., name, version, features) will help
expedite the process.

View File

@@ -0,0 +1,169 @@
---
title: Quantum Espresso
keywords: Quantum Espresso software, compile
summary: "Quantum Espresso code for electronic-structure calculations and materials modeling at the nanoscale"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/quantum-espresso.html
---
## Quantum ESPRESSO
Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials:
* PWscf (Plane-Wave Self-Consistent Field)
* FPMD (First Principles Molecular Dynamics)
* CP (Car-Parrinello)
## Licensing Terms and Conditions
Quantum ESPRESSO is an open initiative, in collaboration with many groups world-wide, coordinated by the Quantum ESPRESSO Foundation. Scientific work done using Quantum ESPRESSO should contain an explicit acknowledgment and reference to the main papers (see Quantum Espresso Homepage for the details).
## How to run on Merlin7
### 7.5
### CPU nodes
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-xqhy-A100-gpu quantum-espresso/7.5-zfwh-omp
```
### GH nodes
```bash
module purge
module use Spack unstable
module load nvhpc/25.7 openmpi/4.1.8-l3jj-GH200-gpu quantum-espresso/7.5-2ysd-gpu-omp
```
### 7.4.1
### A100 nodes
```bash
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp
```
### GH nodes
```bash
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/5.0.7-e3bf-GH200-gpu quantum-espresso/7.4.1-gxvj-gpu-omp
```
### SBATCH A100, 1 GPU, 64 OpenMP threads, one MPI rank example
```bash
#!/bin/bash
#SBATCH --no-requeue
#SBATCH --job-name="si64"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --partition=a100-daily
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=06:00:00
#SBATCH --cpus-per-task=64
#SBATCH --cluster=gmerlin7
#SBATCH --gpus=1
#SBATCH --hint=nomultithread
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
# Load necessary modules
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp
"srun" '$(which pw.x)' '-npool' '1' '-in' 'aiida.in' > "aiida.out"
```
## Developing your own GPU code
### Spack
1. `spack config edit`
2. Add `granularity: microarchitectures` to your config (only needed when using the nvhpc compiler, not for CPU builds):
```yaml
spack:
  concretizer:
    unify: false
    targets:
      granularity: microarchitectures
```
3. `spack add quantum-espresso@develop +cuda +mpi +mpigpu hdf5=parallel %nvhpc arch=linux-sles15-zen3 # GPU`
4. `spack add quantum-espresso@develop +mpi hdf5=parallel %gcc # CPU`
5. `spack develop quantum-espresso@develop # clone the code under /afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso`
6. Make your changes in `/afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso`
7. Build: `spack install [-jN] -v --until=build quantum-espresso@develop`
### Environment modules
#### CPU
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_cpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/main-syah fftw/3.3.10.6-qbxu-omp hdf5/1.14.5-t46c openblas/0.3.29-omp cmake/3.31.6-oe7u
cd <path to QE source directory>
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
#### A100
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_gpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu fftw/3.3.10.6-qbxu-omp hdf5/develop-2.0-rjgu netlib-scalapack/2.2.2-3hgw cmake/3.31.6-oe7u
cd <path to QE source directory>
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
#### GH200
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_gh_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
salloc --partition=gh-daily --clusters=gmerlin7 --time=08:00:00 --ntasks=4 --nodes=1 --gpus=1 --mem=40000 $SHELL
ssh <allocated_gpu>
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/5.0.7-e3bf-GH200-gpu fftw/3.3.10-sfpw-omp hdf5/develop-2.0-ztvo nvpl-blas/0.4.0.1-3zpg nvpl-lapack/0.3.0-ymy5 netlib-scalapack/2.2.2-qrhq cmake/3.31.6-5dl7
cd <path to QE source directory>
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
## Q-E-SIRIUS
A SIRIUS-enabled fork of Quantum ESPRESSO.
### CPU
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-mx6f q-e-sirius/1.0.1-dtn4-omp
```
### A100 nodes
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-lsff-A100-gpu q-e-sirius/1.0.1-7snv-omp
```
### GH nodes
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-tx2w-GH200-gpu q-e-sirius/1.0.1-3dwi-omp
```

View File

@@ -0,0 +1,18 @@
---
title: Spack
keywords: spack, python, software, compile
summary: "Spack the HPC package manager documentation"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/spack.html
---
For Merlin7, the *package manager for supercomputing* [Spack](https://spack.io/) is available. It is meant to complement the existing PModules
solution, giving users the opportunity to manage their own software environments.
Documentation on how to use Spack on Merlin7 is provided [here](https://gitea.psi.ch/HPCE/spack-psi/src/branch/main/README.md).
## The Spack PSI packages
An initial collection of packages (and Spack recipes) is located at **[Spack PSI](https://gitea.psi.ch/HPCE/spack-psi)**; users can use these directly
through calls like `spack add ...`.
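As a very short sketch of a typical user workflow (assuming a Spack environment has already been set up as described in the linked README; the package name is only an example):

```bash
spack add hdf5 +mpi   # add an example package to your environment
spack concretize      # resolve versions and dependencies
spack install         # build and install the environment
spack load hdf5       # make the package available in your shell
```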