ADD: q-e-sirius support

2025-11-18 16:11:45 +01:00
parent 1d6e8da71b
commit 387bc2e04e
3 changed files with 120 additions and 1 deletion

View File

@@ -76,6 +76,8 @@ entries:
url: /merlin7/gromacs.html
- title: CP2K
url: /merlin7/cp2k.html
- title: LAMMPS
url: /merlin7/lammps.html
- title: Quantum ESPRESSO
url: /merlin7/quantum-espresso.html
- title: OPAL-X

View File

@@ -0,0 +1,117 @@
---
title: LAMMPS
keywords: LAMMPS software, compile
summary: "LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/lammps.html
---
## LAMMPS
LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions. The current version of LAMMPS is written in C++.
## Licensing Terms and Conditions
LAMMPS is an open-source code, available free of charge and distributed under the terms of the GNU General Public License Version 2 (GPLv2). This means you can use or modify the code however you wish for your own purposes, but you must adhere to certain rules when redistributing it, particularly in binary form, or when distributing software that is derived from it or includes parts of it.
LAMMPS comes with no warranty of any kind.
As each source file states in its header, it is a copyrighted code, and thus not in the public domain. For more information about open-source software and open-source distribution, see www.gnu.org or www.opensource.org. The legal text of the GPL as it applies to LAMMPS is in the LICENSE file included in the LAMMPS distribution.
Here is a more specific summary of what the GPL means for LAMMPS users:
(1) Anyone is free to use, copy, modify, or extend LAMMPS in any way they choose, including for commercial purposes.
(2) If you distribute a modified version of LAMMPS, it must remain open-source, meaning you are required to distribute all of it under the terms of the GPLv2. You should clearly annotate such a modified code as a derivative version of LAMMPS. This is best done by changing the name (example: LIGGGHTS is such a modified and extended version of LAMMPS).
(3) If you release any code that includes or uses LAMMPS source code, then it must also be open-sourced, meaning you distribute it under the terms of the GPLv2. You may write code that interfaces LAMMPS to a differently licensed library. In that case the code that provides the interface must be licensed GPLv2, but not necessarily that library unless you are distributing binaries that require the library to run.
(4) If you give LAMMPS files to someone else, the GPLv2 LICENSE file and source file headers (including the copyright and GPLv2 notices) should remain part of the code.
## How to run on Merlin7
### CPU nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-37gs-omp
```
### A100 nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-xcaf-A100-gpu-omp
```
### GH nodes
```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-fvlo-GH200-gpu lammps/20250722-3tfv-GH200-gpu-omp
```
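
Whichever variant you load, the `lmp` binary should then be on your `PATH`. As a quick sanity check, you can print the version banner and the list of compiled-in packages via the standard `-h` help switch:

```bash
# print LAMMPS version, installed packages, and accelerator configuration
lmp -h | head -n 40
```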
### SBATCH CPU, 4 MPI ranks, 16 OMP threads
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --nodes=1 # requesting 1 compute node
#SBATCH --ntasks=4       # 4 MPI ranks (tasks)
#SBATCH --partition=hourly
#SBATCH --cpus-per-task=16  # CPU cores (OpenMP threads) per MPI rank; adjust as needed
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-37gs-omp
export FI_CXI_RX_MATCH_MODE=software  # use software tag matching on the Slingshot (CXI) fabric
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=spread           # spread OpenMP threads across the available cores
export OMP_PLACES=threads
# -k on t N enables Kokkos with N OpenMP threads per rank; -sf kk selects the Kokkos style variants
srun --cpu-bind=cores lmp -k on t $OMP_NUM_THREADS -sf kk -in lj_kokkos.in
```
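
Save the script to a file (the name `run_lammps_cpu.sh` below is only an example) and submit it with `sbatch`; the stdout/stderr files configured above appear in the submission directory:

```bash
sbatch run_lammps_cpu.sh   # submit to the hourly partition
squeue -u $USER            # monitor the job state
```

For the GPU scripts below, which route to the `gmerlin7` cluster via `--cluster=gmerlin7`, pass `--clusters=gmerlin7` to `squeue` to see the job.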
### SBATCH A100, 4 GPUs, 4 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1           # requesting 1 A100 node
#SBATCH --ntasks-per-node=4 # 4 MPI ranks per node, one per GPU
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=a100-hourly
#SBATCH --gpus-per-task=1
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-jsrx-A100-gpu lammps/20250722-xcaf-A100-gpu-omp
export FI_CXI_RX_MATCH_MODE=software  # use software tag matching on the Slingshot (CXI) fabric
# -k on g N enables Kokkos on N GPUs per rank; -pk kokkos gpu/aware on enables GPU-aware MPI
srun lmp -in lj_kokkos.in -k on g ${SLURM_GPUS_PER_TASK} -sf kk -pk kokkos gpu/aware on
```
### SBATCH GH, 2 GPUs, 2 MPI ranks
```bash
#!/bin/bash
#SBATCH --time=00:10:00 # maximum execution time of 10 minutes
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1           # requesting 1 GH200 node; each node has 4 CPU+GPU superchips
#SBATCH --ntasks-per-node=2 # 2 MPI ranks per node
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=gh-hourly
#SBATCH --gpus-per-task=1
unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.8-fvlo-GH200-gpu lammps/20250722-3tfv-GH200-gpu-omp
export FI_CXI_RX_MATCH_MODE=software  # use software tag matching on the Slingshot (CXI) fabric
# -k on g N enables Kokkos on N GPUs per rank; -pk kokkos gpu/aware on enables GPU-aware MPI
srun lmp -in lj_kokkos.in -k on g ${SLURM_GPUS_PER_TASK} -sf kk -pk kokkos gpu/aware on
```
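
All three scripts read the input file `lj_kokkos.in`, which is not reproduced on this page; any input whose styles have Kokkos variants will work. As a placeholder, a minimal 3d Lennard-Jones melt along the lines of the stock LAMMPS `bench/in.lj` example could look like this (the actual contents of `lj_kokkos.in` are an assumption):

```
# lj_kokkos.in (assumed contents): minimal 3d Lennard-Jones melt, after LAMMPS bench/in.lj
units         lj
atom_style    atomic
lattice       fcc 0.8442
region        box block 0 20 0 20 0 20
create_box    1 box
create_atoms  1 box
mass          1 1.0
velocity      all create 1.44 87287 loop geom
pair_style    lj/cut 2.5
pair_coeff    1 1 1.0 1.0 2.5
neighbor      0.3 bin
neigh_modify  delay 0 every 20 check no
fix           1 all nve
run           100
```

Because the batch scripts pass `-sf kk`, LAMMPS substitutes the Kokkos versions of these styles at run time, so the input itself needs no Kokkos-specific commands.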

View File

@@ -136,7 +136,7 @@ SIRIUS enabled fork of QuantumESPRESSO
```bash
module purge
module use Spack unstable
-module load gcc/12.3 openmpi/5.0.8-mx6f
+module load gcc/12.3 openmpi/5.0.8-mx6f q-e-sirius/1.0.1-dtn4-omp
```
### A100 nodes