# Quantum Espresso

## Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials:

* PWscf (Plane-Wave Self-Consistent Field)
* FPMD (First Principles Molecular Dynamics)
* CP (Car-Parrinello)

## Licensing Terms and Conditions

Quantum ESPRESSO is an open initiative, in collaboration with many groups world-wide, coordinated by the Quantum ESPRESSO Foundation. Scientific work done using Quantum ESPRESSO should contain an explicit acknowledgment and a reference to the main papers (see the Quantum ESPRESSO homepage for details).

## How to run on Merlin7

### 7.5

#### CPU nodes

```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.9-xqhy-A100-gpu quantum-espresso/7.5-zfwh-omp
```

#### GH nodes

```bash
module purge
module use Spack unstable
module load nvhpc/25.7 openmpi/4.1.8-l3jj-GH200-gpu quantum-espresso/7.5-2ysd-gpu-omp
```

### 7.4.1

#### A100 nodes

```bash
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp
```

#### GH nodes

```bash
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/5.0.7-e3bf-GH200-gpu quantum-espresso/7.4.1-gxvj-gpu-omp
```

### SBATCH example: A100, 1 GPU, 64 OpenMP threads, one MPI rank

```bash
#!/bin/bash
#SBATCH --no-requeue
#SBATCH --job-name="si64"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --partition=a100-daily
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=06:00:00
#SBATCH --cpus-per-task=64
#SBATCH --cluster=gmerlin7
#SBATCH --gpus=1
#SBATCH --hint=nomultithread

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

# Load necessary modules
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp

srun $(which pw.x) -npool 1 -in aiida.in > aiida.out
```

## Developing your own GPU code

### Spack

2. `spack config edit`
3. Add `granularity: microarchitectures` to your config (only needed if you use the nvhpc compiler; not needed for CPU builds):

   ```yaml
   spack:
     concretizer:
       unify: false
       targets:
         granularity: microarchitectures
   ```

4. `spack add quantum-espresso@develop +cuda +mpi +mpigpu hdf5=parallel %nvhpc arch=linux-sles15-zen3  # GPU`
5. `spack add quantum-espresso@develop +mpi hdf5=parallel %gcc  # CPU`
6. `spack develop quantum-espresso@develop  # clone the code under /afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso`
7. Make your changes in `/afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso`.
8. Build: `spack install [-jN] -v --until=build quantum-espresso@develop`

### Environment modules

#### CPU

[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_cpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)

```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/main-syah fftw/3.3.10.6-qbxu-omp hdf5/1.14.5-t46c openblas/0.3.29-omp cmake/3.31.6-oe7u
cd
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
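Once `make` completes, the executables should land under `build/bin/` (the usual layout of the Quantum ESPRESSO CMake build). A minimal smoke test of the self-built CPU binary might look like the sketch below; the partition name, task count, and input file `pw.in` are placeholders rather than Merlin7-specific values.

```bash
# Sketch only: quick interactive check of the freshly built pw.x.
# Partition, task count and pw.in are assumptions - adapt them to your allocation.
salloc --partition=daily --time=01:00:00 --nodes=1 --ntasks=8 $SHELL
cd ~/build                      # the build directory created above
export OMP_NUM_THREADS=1
srun ./bin/pw.x -in pw.in > pw.out
```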
#### A100

[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_gpu_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)

```bash
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu fftw/3.3.10.6-qbxu-omp hdf5/develop-2.0-rjgu netlib-scalapack/2.2.2-3hgw cmake/3.31.6-oe7u
cd
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```

#### GH200

[![Pipeline](https://gitea.psi.ch/HPCE/spack-psi/actions/workflows/q-e_gh_merlin7.yml/badge.svg?branch=main)](https://gitea.psi.ch/HPCE/spack-psi)

```bash
salloc --partition=gh-daily --clusters=gmerlin7 --time=08:00:00 --ntasks=4 --nodes=1 --gpus=1 --mem=40000 $SHELL
ssh                             # ssh to the allocated node
module purge
module use Spack unstable
module load nvhpc/25.3 openmpi/5.0.7-e3bf-GH200-gpu fftw/3.3.10-sfpw-omp hdf5/develop-2.0-ztvo nvpl-blas/0.4.0.1-3zpg nvpl-lapack/0.3.0-ymy5 netlib-scalapack/2.2.2-qrhq cmake/3.31.6-5dl7
cd
mkdir build
cd build
cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```

## Q-E-SIRIUS

SIRIUS-enabled fork of Quantum ESPRESSO.

### CPU

```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-mx6f q-e-sirius/1.0.1-dtn4-omp
```

### A100 nodes

```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-lsff-A100-gpu q-e-sirius/1.0.1-7snv-omp
```

### GH nodes

```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-tx2w-GH200-gpu q-e-sirius/1.0.1-3dwi-omp
```
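Being a fork of Quantum ESPRESSO, q-e-sirius is launched the same way as the stock code, i.e. through `srun pw.x`. A minimal interactive sketch for the CPU module is shown below; the partition name, task count, and input file are assumptions to adapt to your own job.

```bash
# Sketch only: interactive q-e-sirius run on a CPU node.
# Partition, task count and pw.in are placeholders - adjust to your case.
salloc --partition=daily --time=01:00:00 --nodes=1 --ntasks=16 $SHELL
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-mx6f q-e-sirius/1.0.1-dtn4-omp
srun pw.x -in pw.in > pw.out
```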