---
title: Slurm cluster 'merlin7'
keywords: configuration, partitions, node definition
last_updated: 24 May 2023
summary: "This document describes a summary of the Merlin7 configuration."
sidebar: merlin7_sidebar
permalink: /merlin7/slurm-configuration.html
---

Work In Progress{:style="display:block; margin-left:auto; margin-right:auto"}

{{site.data.alerts.warning}}The Merlin7 documentation is Work In Progress. Please do not use or rely on this documentation until it becomes official.
This applies to any page under https://lsm-hpce.gitpages.psi.ch/merlin7/ {{site.data.alerts.end}}

This documentation describes the basic Slurm configuration and the options needed to run jobs on the Merlin7 cluster.
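
As a first orientation, the sketch below shows what a minimal batch job could look like. The partition name is a placeholder (the actual Merlin7 partition names are not listed in this section), so replace it with a valid one.

```bash
#!/bin/bash
#SBATCH --job-name=hello           # job name shown by squeue
#SBATCH --partition=<partition>    # placeholder: replace with a valid Merlin7 partition
#SBATCH --ntasks=1                 # one task (one process / MPI rank)
#SBATCH --cpus-per-task=1          # one core for that task
#SBATCH --time=00:10:00            # wall-clock limit (HH:MM:SS)

# Report which compute node the job ran on
srun hostname
```

Such a script would be submitted with `sbatch job.sh`, and `squeue -u $USER` shows its state in the queue.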

## Infrastructure

### Hardware

The current configuration for the pre-production phase consists of:

* nodes for the PSI-Dev development system
* 2 CPU-only login nodes
* 77 CPU-only compute nodes
* 4 GPU nodes

| Node       | CPU                                               | RAM                | GRES                          | Notes |
|:-----------|:--------------------------------------------------|:-------------------|:------------------------------|:------|
| Login node | 2x AMD EPYC 7742 (x86_64 Rome, 64 Cores, 3.2GHz)  | 512GB DDR4 3200MHz |                               |       |
| CPU node   | 2x AMD EPYC 7742 (x86_64 Rome, 64 Cores, 3.2GHz)  | 512GB DDR4 3200MHz |                               |       |
| GPU node   | 2x AMD EPYC 7713 (x86_64 Milan, 64 Cores, 3.2GHz) | 512GB DDR4 3200MHz | 4x NVidia A100 (Ampere, 80GB) |       |
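
These node definitions can be cross-checked against what Slurm itself reports once logged in. The sketch below uses standard Slurm query commands; the node name is a placeholder.

```bash
# Summarize partitions: node count, CPUs and memory per node, GRES and time limit
sinfo -o "%P %D %c %m %G %l"

# Show the full Slurm definition of a single node (CPUs, RealMemory, Gres, Features)
scontrol show node <nodename>
```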