ADD: CP2k

2025-11-06 15:21:35 +01:00
parent db840034ce
commit 62b25a11a5
2 changed files with 80 additions and 0 deletions

@@ -74,6 +74,8 @@ entries:
url: /merlin7/ansys-rsm.html
- title: GROMACS
url: /merlin7/gromacs.html
- title: CP2K
url: /merlin7/cp2k.html
- title: Quantum ESPRESSO
url: /merlin7/quantum-espresso.html
- title: OPAL-X

@@ -0,0 +1,78 @@
---
title: CP2K
keywords: CP2K software, compile
summary: "CP2K is a quantum chemistry and solid-state physics software package"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/cp2k.html
---
## CP2K
CP2K is a quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems.
CP2K provides a general framework for different modeling methods, such as DFT using the mixed Gaussian and plane-wave approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, …), and classical force fields (AMBER, CHARMM, …). CP2K can run simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core-level spectroscopy, energy minimization, and transition-state optimization using the NEB or dimer method.
## Licensing Terms and Conditions
CP2K is a joint effort with contributions from developers around the world. Users agree to acknowledge the use of CP2K in any reports or publications of results obtained with the software (see the CP2K homepage for details).
## How to run on Merlin7
### CPU nodes
```bash
# make the (unstable) Spack module tree visible, then load the OpenMP CPU build
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-hgej cp2k/2025.2-yb6g-omp
```
### A100 nodes
```bash
# make the (unstable) Spack module tree visible, then load the A100 GPU build
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-5tb3-A100-gpu cp2k/2025.2-osvk-A100-gpu-omp
```
### SBATCH CPU, 4 MPI ranks, 16 OMP threads
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --nodes=1 # requesting 1 compute node
#SBATCH --ntasks=4 # use 4 MPI ranks (tasks)
#SBATCH --partition=hourly
#SBATCH --cpus-per-task=16 # number of CPU cores per MPI task; adjust as needed
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
unset PMODULES_ENV
module purge
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-hgej cp2k/2025.2-yb6g-omp
export FI_CXI_RX_MATCH_MODE=software
export OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK - 1)) # one OpenMP thread fewer than the cores allocated per rank
srun cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPUT>
```
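The `OMP_NUM_THREADS` line above derives the OpenMP thread count from the Slurm allocation at run time, so the script stays consistent if `--cpus-per-task` is changed. A minimal sketch of that arithmetic, with `SLURM_CPUS_PER_TASK` set by hand since Slurm only exports it inside a job:

```shell
# Outside a Slurm job this variable is unset; set it by hand to mimic the script above
SLURM_CPUS_PER_TASK=16
# Same arithmetic expansion as in the job script: one thread fewer than the cores per rank
OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK - 1))
echo "$OMP_NUM_THREADS"   # prints 15
```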
### SBATCH A100, 4 GPU, 16 OMP threads, 4 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1 # number of A100 nodes
#SBATCH --ntasks-per-node=4 # 4 MPI ranks per node
#SBATCH --cpus-per-task=16 # 16 OMP threads per MPI rank
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=a100-hourly
#SBATCH --gpus=4
unset PMODULES_ENV
module purge
module use unstable Spack
module load gcc/12.3 openmpi/5.0.8-5tb3-A100-gpu cp2k/2025.2-osvk-A100-gpu-omp
export FI_CXI_RX_MATCH_MODE=software
export OMP_NUM_THREADS=$((SLURM_CPUS_PER_TASK - 1)) # one OpenMP thread fewer than the cores allocated per rank
srun cp2k.psmp -i <CP2K_INPUT> -o <CP2K_OUTPUT>
```
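With this layout each of the 4 MPI ranks can drive one of the 4 requested A100 GPUs, and the ranks together occupy 4 × 16 = 64 cores of the node. A quick sanity check of the request (the variable names below are illustrative, not Slurm variables):

```shell
# Values copied from the #SBATCH lines above
NTASKS_PER_NODE=4
CPUS_PER_TASK=16
GPUS=4
# Total cores requested per node, and GPUs available per MPI rank
echo $((NTASKS_PER_NODE * CPUS_PER_TASK))   # prints 64
echo $((GPUS / NTASKS_PER_NODE))            # prints 1
```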