---
title: Slurm cluster 'merlin7'
#tags:
keywords: configuration, partitions, node definition
last_updated: 24 May 2023
summary: "This document describes the Merlin7 Slurm configuration."
sidebar: merlin7_sidebar
permalink: /merlin7/slurm-configuration.html
---

![Work In Progress](/images/WIP/WIP1.webp){:style="display:block; margin-left:auto; margin-right:auto"}

{{site.data.alerts.warning}}The Merlin7 documentation is Work In Progress. Please do not use or rely on this documentation until it becomes official. This applies to any page under https://lsm-hpce.gitpages.psi.ch/merlin7/ {{site.data.alerts.end}}

This documentation shows the basic Slurm configuration and the options needed to run jobs on the Merlin7 cluster. A minimal job submission sketch is included at the end of this page.

### Infrastructure

#### Hardware

The current configuration for the _test_ phase consists of 9 nodes for the _PSI-Dev_ development system:

* 8 compute nodes, intended for bare-metal and Kubernetes (k8s) workloads
* 1 login node

| Node | CPU | RAM | GRES | Notes |
| ---- | --- | --- | ---- | ----- |
| Compute node | _2x_ AMD EPYC 7713 (x86_64 Milan, 64 Cores, 3.2GHz) | 512GB DDR4 3200MHz | _4x_ NVIDIA A100 (Ampere, 80GB) | |
| Login node | _2x_ AMD EPYC 7742 (x86_64 Rome, 64 Cores, 3.2GHz) | 512GB DDR4 3200MHz | | |

#### Storage

* CephFS, only for `/home` -- 1 TB
* ClusterStor L300 for `/scratch` -- 224 TB usable space
* CephRBD for `/local` -- 100 GB

#### Node IDs

Cray uses various identifiers to uniquely label each node; details can be found on the [Crayism page](cray-conventions.html). The table below collates these identifiers for the current configuration:

| Node ID | Cray XNAME | Notes |
| ---------- | ---------- | - |
| nid003204 | x1500c4s7b0n0 | login node, to which **psi-dev.cscs.ch** points |
| nid002808 | x1007c0s4b0n0 | |
| nid002809 | x1007c0s4b0n1 | |
| nid002812 | x1007c0s5b0n0 | |
| nid002813 | x1007c0s5b0n1 | |
| nid002824 | x1007c1s0b0n0 | |
| nid002825 | x1007c1s0b0n1 | |
| nid002828 | x1007c1s1b0n0 | |
| nid002829 | x1007c1s1b0n1 | |
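
To cross-check the node list above against what Slurm actually reports, the standard Slurm query commands can be run from the login node. This is a minimal sketch; the exact node states and partition names shown depend on the live configuration.

```bash
# List every node Slurm knows about, one line per node, with its state and partition
sinfo -N -l

# Show the full Slurm record (CPUs, memory, GRES, state) for one compute node
scontrol show node nid002808
```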
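
The filesystems listed under Storage can be inspected directly, assuming all three are mounted on the login node; the reported sizes may differ slightly from the raw figures above.

```bash
# Report size, usage and free space for the three Merlin7 filesystems
df -h /home /scratch /local
```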
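
Finally, a minimal batch script for one of the A100 compute nodes described above. The partition name (`general`) is an assumption, since the Merlin7 partition layout is not yet documented on this page; replace it with a partition reported by `sinfo`. The per-user directory layout under `/scratch` is also an assumption.

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test        # descriptive job name
#SBATCH --partition=general        # assumed partition name; check `sinfo` for the real ones
#SBATCH --ntasks=1                 # a single task
#SBATCH --cpus-per-task=8          # 8 of the 128 cores available per compute node
#SBATCH --mem=32G                  # a slice of the 512GB per node
#SBATCH --gres=gpu:1               # 1 of the 4 NVIDIA A100 GPUs per node
#SBATCH --time=01:00:00            # wall-clock limit

# Work on the large /scratch filesystem rather than the 1 TB /home
# (the /scratch/$USER layout is an assumption)
WORKDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# Report which GPU was assigned to the job
srun nvidia-smi
```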