4 Commits

| SHA1 | Message | CI: build-and-deploy | Date |
| --- | --- | --- | --- |
| c332469434 | added average allocation number for GPU projects | success (7s) | 2025-12-09 13:09:22 +01:00 |
| f95af6babe | FIX: old IPPL modules | success (7s) | 2025-12-03 13:10:47 +01:00 |
| 643d0873be | ADD: q-e@7.5 | success (6s) | 2025-11-28 18:01:36 +01:00 |
| 921a62b702 | ADD: gromacs@2025.3 all systems | success (8s) | 2025-11-27 16:41:09 +01:00 |
4 changed files with 35 additions and 3 deletions

View File

@@ -34,7 +34,9 @@ Applications will be reviewed and the final resource allocations, in case of ove
#### Instructions for filling out the 2026 survey
* We have a budget of 100 kCHF for 2026, which translates to 435'000 multicore node hours or 35'600 node hours on the GPU Grace Hopper nodes. The minimum allocation is 10'000 node hours for multicore projects, an average project allocation would amount to 30'000 node hours
* We have a budget of 100 kCHF for 2026, which translates to 435'000 multicore node hours or 35'600 node hours on the GPU Grace Hopper nodes.
* multicore projects: The minimum allocation is 10'000 node hours; an average project allocation amounts to 30'000 node hours.
* GPU projects: The minimum allocation is 800 node hours, an average project allocation is 2000 node hours.
* You need to specify the total resource request for your project in node hours, and how you would like to split the resources over the 4 quarters. For the allocation per quarter, please enter a percentage (e.g. 25%, 25%, 25%, 25%); a worked example is given after this list. If you indicate nothing, an even split of 25% per quarter will be assumed.
* We currently have a total of 65 TB of storage for all projects. Additional storage can be obtained, but large storage assignments are not in scope for these projects.
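As a worked example for the percentage split (illustrative numbers only, not a recommendation): a multicore project requesting 40'000 node hours and entering 40%, 30%, 20%, 10% would be allocated 16'000, 12'000, 8'000 and 4'000 node hours in Q1 through Q4 respectively.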

View File

@@ -18,6 +18,7 @@ It is primarily designed for biochemical molecules like proteins, lipids and nuc
GROMACS is a joint effort, with contributions from developers around the world: users agree to acknowledge use of GROMACS in any reports or publications of results obtained with the Software (see GROMACS Homepage for details).
## How to run on Merlin7
## 2025.2
### CPU nodes
```bash
module use Spack unstable
@@ -33,6 +34,22 @@ module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-om
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-blxc-GH200-gpu gromacs/2025.2-cjnq-GH200-gpu-omp
```
## 2025.3
### CPU nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-n4yf-A100-gpu gromacs/2025.3-6ken-omp
```
### A100 nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-xqhy-A100-gpu gromacs/2025.3-ohlj-A100-gpu-omp
```
### GH nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-inxi-GH200-gpu gromacs/2025.3-yqlu-GH200-gpu-omp
```
### SBATCH CPU, 4 MPI ranks, 16 OMP threads
```bash
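#!/bin/bash
# NOTE: the actual batch script is cut off in this diff view; the lines below are a
# sketch only. The partition is omitted and the module names are copied from the
# 2025.3 CPU section above -- adjust them to the GROMACS version you want to run.
#SBATCH --ntasks=4            # 4 MPI ranks
#SBATCH --cpus-per-task=16    # 16 OpenMP threads per rank
#SBATCH --time=01:00:00       # assumed walltime

module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-n4yf-A100-gpu gromacs/2025.3-6ken-omp

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# topol.tpr is a placeholder input; replace it with your own run input file
srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s topol.tpr
```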

View File

@@ -20,8 +20,7 @@ GNU GPLv3
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/ippl_gpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module use Spack unstable
module load gcc/13.2.0 openmpi/4.1.6-57rc-A100-gpu
module load boost/1.82.0-e7gp fftw/3.3.10 gnutls/3.8.3 googletest/1.14.0 gsl/2.8 h5hut/2.0.0rc7 openblas/0.3.26-omp cmake/3.31.6-oe7u
module load gcc/13.2.0 openmpi/5.0.7-dnpr-A100-gpu boost/1.82.0-lgrt fftw/3.3.10.6-zv2b-omp googletest/1.14.0-msmu h5hut/2.0.0rc7-zy7s openblas/0.3.29-zkwb cmake/3.31.6-ufy7
cd <path to IPPL source directory>
mkdir build_gpu
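# The remainder of this block is cut off in the diff view. The lines below are a
# generic sketch of the usual next steps; the CMake options are assumptions, not
# the documented IPPL configuration -- add the project's CUDA/Kokkos flags as required.
cd build_gpu
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j 8
```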

View File

@@ -20,6 +20,20 @@ Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electr
Quantum ESPRESSO is an open initiative, in collaboration with many groups world-wide, coordinated by the Quantum ESPRESSO Foundation. Scientific work done using Quantum ESPRESSO should contain an explicit acknowledgment and reference to the main papers (see Quantum Espresso Homepage for the details).
## How to run on Merlin7
### 7.5
### CPU nodes
```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-xqhy-A100-gpu quantum-espresso/7.5-zfwh-omp
```
### GH nodes
```bash
module purge
module use Spack unstable
module load nvhpc/25.7 openmpi/4.1.8-l3jj-GH200-gpu quantum-espresso/7.5-2ysd-gpu-omp
```
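After loading one of the module sets above, a minimal launch could look like the sketch below; the input file name, rank count and thread count are placeholders rather than values taken from this documentation.

```bash
# Hypothetical pw.x run: 4 MPI ranks reading an assumed input file pw.in
export OMP_NUM_THREADS=4
srun -n 4 pw.x -in pw.in > pw.out
```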
### 7.4.1
### A100 nodes
```bash
module purge