---
title: Quantum Espresso
keywords: Quantum Espresso software, compile
summary: "Quantum Espresso code for electronic-structure calculations and materials modeling at the nanoscale"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/quantum-espresso.html
---
## Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials:

- PWscf (Plane-Wave Self-Consistent Field)
- FPMD (First-Principles Molecular Dynamics)
- CP (Car-Parrinello)

## Licensing Terms and Conditions

Quantum ESPRESSO is an open initiative, in collaboration with many groups worldwide, coordinated by the Quantum ESPRESSO Foundation. Scientific work done using Quantum ESPRESSO should contain an explicit acknowledgment of and reference to the main papers (see the Quantum ESPRESSO homepage for details).

## How to run on Merlin7
### A100 nodes
```bash
module purge
module use Spack
module use unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp
```
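
A quick way to confirm the environment is set up correctly is to check that `pw.x` resolves on your `PATH`. The interactive allocation below is a sketch using the partition and cluster names from this page; the time limit and shell are assumptions, so adjust them to your needs:

```shell
# Request a short interactive session on an A100 node (sketch; adjust limits as needed)
srun --cluster=gmerlin7 --partition=a100-daily --gpus=1 --time=00:10:00 --pty bash

# Inside the session, load the modules as shown above, then:
which pw.x    # should print the path of the GPU-enabled binary
```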
### GH nodes
```bash
module purge
module use Spack
module use unstable
module load nvhpc/24.11 openmpi/main-7zgw-GH200-gpu quantum-espresso/7.4-gpu-omp
```
### Example batch script: A100, 1 GPU, one MPI rank, 64 OpenMP threads
```bash
#!/bin/bash
#SBATCH --no-requeue
#SBATCH --job-name="si64"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --partition=a100-daily
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=06:00:00
#SBATCH --cpus-per-task=64
#SBATCH --cluster=gmerlin7
#SBATCH --gpus=1
#SBATCH --hint=nomultithread

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

# Load necessary modules
module purge
module use Spack
module use unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu quantum-espresso/7.4.1-nxsw-gpu-omp

srun $(which pw.x) -npool 1 -in aiida.in > aiida.out
```
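
Once the job finishes, `pw.x` reports the converged total energy in the output file on a line beginning with `!`. A small sketch for pulling it out; the sample line below is illustrative only, not a real result:

```shell
# Write a sample line in the format pw.x uses for the converged total energy
cat > aiida.out <<'EOF'
!    total energy              =     -15.84445259 Ry
EOF

# Extract value and unit (last two fields of the '!' line)
grep '^!' aiida.out | awk '{print $(NF-1), $NF}'
```

For the sample above this prints `-15.84445259 Ry`.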
## Developing your own GPU code
### Spack
1. `spack config edit`
2. Add `granularity: microarchitectures` to your config (only required when building with the nvhpc compiler; not needed for CPU builds):

   ```yaml
   spack:
     concretizer:
       targets:
         granularity: microarchitectures
   ```

3. Add the package for your target:

   GPU: `spack add quantum-espresso@develop +cuda +mpi +mpigpu hdf5=parallel %nvhpc arch=linux-sles15-zen3`

   CPU: `spack add quantum-espresso@develop +mpi hdf5=parallel %gcc`

4. Check out the Quantum ESPRESSO sources:

   ```bash
   mkdir -p /afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso
   cd /afs/psi.ch/sys/spack/user/$USER/spack-environment/quantum-espresso
   git clone https://gitlab.com/QEF/q-e.git
   ```

5. `spack develop quantum-espresso@develop`
6. `spack install -v`
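
After `spack install` completes, it can be worth confirming that the spec concretized and installed as intended. These are generic Spack commands, not specific to this page:

```shell
# List installed quantum-espresso specs with their variants
spack find -v quantum-espresso

# Show the full concretized spec, including compiler and variants
spack spec quantum-espresso@develop
```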
### Environment modules
#### CPU
```bash
module purge
module use Spack
module use unstable
module load gcc/12.3 openmpi/main-syah fftw/3.3.10.6-omp hdf5/1.14.5-t46c openblas/0.3.29-omp cmake/3.31.6-oe7u

cd <path to QE source directory>
mkdir build
cd build

cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
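
With the build complete, the binaries land in `build/bin`. A minimal smoke run might look like the following; the input file name and rank/thread counts are assumptions for illustration, not recommendations:

```shell
# Hybrid MPI+OpenMP smoke test of the freshly built pw.x (assumed input file pw.in)
export OMP_NUM_THREADS=4
mpirun -np 4 ./bin/pw.x -in pw.in > pw.out
```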
#### A100
```bash
module purge
module use Spack
module use unstable
module load nvhpc/25.3 openmpi/main-6bnq-A100-gpu fftw/3.3.10.6-qbxu-omp hdf5/develop-2.0-rjgu netlib-scalapack/2.2.2-3hgw cmake/3.31.6-oe7u

cd <path to QE source directory>
mkdir build
cd build

cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```
#### GH200
Unfortunately, the develop version of Quantum ESPRESSO does not build on the GH200 nodes: compilation fails at around 40% with an internal compiler error. A newer nvhpc compiler might resolve this, but since PSI has decided to move away from the GH nodes, it is unclear whether this is worth investigating. Please use the A100 nodes instead, or try to fix the failing Fortran module.
```bash
module purge
module use Spack
module use unstable
module load nvhpc/24.11 openmpi/main-7zgw-GH200-gpu fftw/3.3.10-omp hdf5/1.14.5-zi5b nvpl-blas/0.3.0-omp nvpl-lapack/0.2.3.1-omp netlib-scalapack/2.2.0 cmake/3.30.5-f4b7

cd <path to QE source directory>
mkdir build
cd build

cmake -DQE_ENABLE_MPI:BOOL=ON -DQE_ENABLE_OPENMP:BOOL=ON -DQE_ENABLE_SCALAPACK:BOOL=ON -DQE_ENABLE_CUDA:BOOL=ON -DQE_ENABLE_MPI_GPU_AWARE:BOOL=ON -DQE_ENABLE_OPENACC:BOOL=ON -DCMAKE_C_COMPILER:STRING=mpicc -DCMAKE_Fortran_COMPILER:STRING=mpif90 -DQE_ENABLE_HDF5:BOOL=ON ..
make [-jN]
```