---
title: GROMACS
keywords: GROMACS software, compile
summary: "GROMACS (GROningen Machine for Chemical Simulations) is a versatile and widely-used open source package to perform molecular dynamics"
sidebar: merlin7_sidebar
toc: false
permalink: /merlin7/gromacs.html
---

## GROMACS

GROMACS (GROningen Machine for Chemical Simulations) is a versatile and widely-used open source package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations) many groups also use it for research on non-biological systems, e.g. polymers.

## Licensing Terms and Conditions

GROMACS is a joint effort, with contributions from developers around the world. Users agree to acknowledge the use of GROMACS in any reports or publications of results obtained with the Software (see the GROMACS homepage for details).

## How to run on Merlin7

### CPU nodes

```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-ax23-A100-gpu gromacs/2025.2-whcq-omp
```

### A100 nodes

```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-omp
```

### GH nodes

```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-blxc-GH200-gpu gromacs/2025.2-cjnq-GH200-gpu-omp
```

### SBATCH GH, 4 GPUs, 32 OMP threads, 4 MPI ranks

```bash
#!/bin/bash
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --job-name="Testing GROMACS GH"
#SBATCH --nodes=1                # number of GH200 nodes; each node has 4 CPU+GPU modules
#SBATCH --ntasks-per-node=4      # 4 MPI ranks per node
#SBATCH --cpus-per-task=32       # 32 OMP threads per MPI rank
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=gh-hourly
#SBATCH --gpus=4
#SBATCH --gpus-per-task=1

unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-blxc-GH200-gpu gromacs/2025.2-cjnq-GH200-gpu-omp

export FI_CXI_RX_MATCH_MODE=software
export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_FORCE_UPDATE_DEFAULT_GPU=true
export GMX_ENABLE_DIRECT_GPU_COMM=1
export GMX_FORCE_GPU_AWARE_MPI=1

srun gmx_mpi mdrun -s input.tpr -ntomp 32 -bonded gpu -nb gpu -pme gpu -pin on -v -noconfout -dlb yes -nstlist 300 -npme 1 -nsteps 10000 -update gpu
```

## Developing your own GPU code

### A100

```bash
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-omp cmake/3.31.6-o3lb python/3.13.1-cyro

git clone https://github.com/gromacs/gromacs.git
cd gromacs
mkdir build && cd build

# Use -DGMX_CUDA_TARGET_SM=90 for the Hopper (GH200) GPUs.
# Turn on double precision (-DGMX_DOUBLE=on) only if it is really needed.
cmake -DCMAKE_C_COMPILER=gcc-12 \
      -DCMAKE_CXX_COMPILER=g++-12 \
      -DGMX_MPI=on \
      -DGMX_GPU=CUDA \
      -DGMX_CUDA_TARGET_SM=80 \
      -DGMX_DOUBLE=off \
      ..
make
```
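After the build finishes, the binaries can be tested and installed into a user-writable location. The following is a minimal sketch, assuming `-DCMAKE_INSTALL_PREFIX=$HOME/gromacs-a100` was added to the `cmake` call above (the prefix is only an example path):

```bash
# Assumes -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-a100 was passed to cmake above;
# any user-writable path works.
make check      # optional: build and run the GROMACS test suite
make install
source $HOME/gromacs-a100/bin/GMXRC   # puts the freshly built gmx_mpi on your PATH
```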
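For the A100 nodes, a job script can follow the same pattern as the GH200 example above; essentially only the modules, partition and resource counts change. The sketch below is not a verified template: the partition name, rank/thread counts and GPU count are assumptions and must be adapted to the actual A100 configuration on `gmerlin7`.

```bash
#!/bin/bash
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --job-name="Testing GROMACS A100"
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4       # assumption: one MPI rank per GPU
#SBATCH --cpus-per-task=8         # assumption: adjust to the cores available per GPU
#SBATCH --cluster=gmerlin7
#SBATCH --hint=nomultithread
#SBATCH --partition=a100-hourly   # assumption: use the A100 partition available to you
#SBATCH --gpus=4                  # assumption: number of A100 GPUs requested
#SBATCH --gpus-per-task=1

unset PMODULES_ENV
module purge
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-omp

export GMX_ENABLE_DIRECT_GPU_COMM=1
export GMX_FORCE_GPU_AWARE_MPI=1

srun gmx_mpi mdrun -s input.tpr -ntomp $SLURM_CPUS_PER_TASK -bonded gpu -nb gpu -pme gpu -npme 1 -update gpu -pin on -v
```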
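The batch examples above run `gmx_mpi mdrun` on an existing run input file (`input.tpr`). If you still need to create one, it is generated with `grompp` from a parameter file, a starting structure and a topology; a minimal sketch, where `md.mdp`, `conf.gro` and `topol.top` are placeholder names for your own files:

```bash
module use Spack unstable
module load gcc/12.3 openmpi/5.0.7-3vzj-A100-gpu gromacs/2025.2-vbj4-A100-gpu-omp

# md.mdp, conf.gro and topol.top are placeholders for your own simulation
# parameters, starting coordinates and topology.
gmx_mpi grompp -f md.mdp -c conf.gro -p topol.top -o input.tpr
```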