---
title: Quantum Espresso
keywords: Quantum Espresso software, compile
summary: "Quantum Espresso code for electronic-structure calculations and materials modeling at the nanoscale"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/quantum-espresso.html
---
## Quantum ESPRESSO
Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials:

* PWscf (Plane-Wave Self-Consistent Field)
* FPMD (First-Principles Molecular Dynamics)
* CP (Car-Parrinello)
## Licensing Terms and Conditions
Quantum ESPRESSO is an open initiative, in collaboration with many groups worldwide, coordinated by the Quantum ESPRESSO Foundation. Scientific work done using Quantum ESPRESSO should contain an explicit acknowledgment and a reference to the main papers (see the Quantum ESPRESSO homepage for details).
## How to run on Merlin7
### A100 nodes
```bash
module purge
module use Spack
module use unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp
```
### GH nodes
```bash
module purge
module use Spack
module use unstable
module load nvhpc/24.11 openmpi/main-7zgw-GH200-gpu quantum-espresso/7.4-gpu-omp
```
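After loading either set of modules, a quick check confirms that the GPU-enabled `pw.x` provided by the `quantum-espresso` module is the one picked up in your `PATH`:

```bash
# Verify that pw.x comes from the quantum-espresso module just loaded
which pw.x
module list
```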
### Example sbatch script: A100, 1 GPU, 1 MPI rank, 64 OpenMP threads
```bash
#!/bin/bash
#SBATCH --no-requeue
#SBATCH --job-name="si64"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --partition=a100-daily
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=06:00:00
#SBATCH --cpus-per-task=64
#SBATCH --cluster=gmerlin7
#SBATCH --gpus=1
#SBATCH --hint=nomultithread
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
# Load necessary modules
module purge
module use Spack
module use unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp
"srun" '$(which pw.x)' '-npool' '1' '-in' 'aiida.in' > "aiida.out"
```
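The script can then be submitted from a login node; a minimal sketch, assuming it was saved as `si64.slurm` (a placeholder filename):

```bash
sbatch si64.slurm                    # --cluster=gmerlin7 in the script routes the job to the GPU cluster
squeue --cluster=gmerlin7 -u $USER   # check the job state
tail -f aiida.out                    # follow the pw.x output once the job runs
```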
## Developing your own GPU code
### Spack
1. `spack config edit`
2. Add `granularity: microarchitectures` to your config (only needed when building with the nvhpc compiler; not required for CPU builds):
   ```yaml
   spack:
     concretizer:
       unify: false
       targets:
         granularity: microarchitectures
   ```
3. `spack add quantum-espresso@develop +cuda +mpi +mpigpu hdf5=parallel %nvhpc arch=linux-sles15-zen3 # GPU`
4. `spack add quantum-espresso@develop +mpi hdf5=parallel %gcc # CPU`
5. `spack develop quantum-espresso@develop # clone the code under /afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso`
6. Make your changes in `/afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso`
7. Build with `spack install [-jN] -v --until=build quantum-espresso@develop`, as sketched below.
### Environment modules
#### CPU
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_cpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module purge
module use Spack
module use unstable
module load gcc/12.3 openmpi/main-syah fftw/3.3.10.6-omp hdf5/1.14.5-t46c openblas/0.3.29-omp cmake/3.31.6-oe7u
cd <path to QE source directory>
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
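To run the self-built CPU binary, re-load the same modules at run time and point `srun` at `pw.x` in your build directory. A minimal sketch of a job script; the partition name and resource numbers are placeholders to adapt to your case:

```bash
#!/bin/bash
#SBATCH --job-name=qe-cpu
#SBATCH --partition=daily        # placeholder: pick an available CPU partition
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Re-load the modules the binary was built against
module purge
module use Spack
module use unstable
module load gcc/12.3 openmpi/main-syah fftw/3.3.10.6-omp hdf5/1.14.5-t46c openblas/0.3.29-omp

srun <path to QE source directory>/build/bin/pw.x -in pw.in > pw.out
```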
#### A100
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_gpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
```bash
module purge
module use Spack
module use unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu fftw/3.3.10.6-qbxu-omp hdf5/develop-2.0-rjgu netlib-scalapack/2.2.2-3hgw cmake/3.31.6-oe7u
cd <path to QE source directory>
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
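The resulting GPU binary can be run with the A100 sbatch script shown earlier; only the last line changes, since `pw.x` now lives in your build tree instead of a module (the source path below is a placeholder):

```bash
# In the A100 sbatch example above, replace the srun line with:
srun <path to QE source directory>/build/bin/pw.x -npool 1 -in aiida.in > aiida.out
```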
#### GH200
[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_gh_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)
Unfortunately, the develop version of Q-E does not build on the GH200 nodes: the build fails at around 40% with an internal compiler error. A newer nvhpc compiler might fix this, but since PSI has decided to move away from the GH nodes it is unclear whether this is worth investigating. If you absolutely need the develop version, use the A100 nodes instead, or fall back to 7.4 for the Fortran modules that fail.

Update: the build does work with qe-7.4, so you can check out the qe-7.4 branch instead of develop and apply your changes there.
```bash
module purge
module use Spack
module use unstable
module load nvhpc/24.11 openmpi/main-7zgw-GH200-gpu fftw/3.3.10-omp hdf5/1.14.5-zi5b nvpl-blas/0.3.0-omp nvpl-lapack/0.2.3.1-omp netlib-scalapack/2.2.0 cmake/3.30.5-f4b7
cd <path to QE source directory>
git checkout qe-7.4 # + cherry-pick your changes or rebase
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
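The same pattern applies on the GH200 nodes: re-load the GH200 module set listed above at run time and call the binary from your build tree inside a GPU job allocation on the `gmerlin7` cluster (a sketch; the input and path names are placeholders):

```bash
# Run-time environment must match the build environment
module purge
module use Spack
module use unstable
module load nvhpc/24.11 openmpi/main-7zgw-GH200-gpu fftw/3.3.10-omp hdf5/1.14.5-zi5b nvpl-blas/0.3.0-omp nvpl-lapack/0.2.3.1-omp netlib-scalapack/2.2.0

# Inside a GH200 allocation on gmerlin7:
srun <path to QE source directory>/build/bin/pw.x -in pw.in > pw.out
```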