
---
title: Cluster 'merlin5'
#tags:
#keywords:
last_updated: 07 April 2021
#summary: "Merlin 5 cluster overview"
sidebar: merlin6_sidebar
permalink: /merlin5/cluster-introduction.html
---
## Slurm 'merlin5' cluster

**Merlin5** was the old official PSI Local HPC cluster for development and
mission-critical applications, built in 2016-2017. It was an extension of the
Merlin4 cluster, assembled from existing hardware due to a lack of central
investment in Local HPC resources. **Merlin5** was replaced in 2019 by the
**[Merlin6](../merlin6/cluster-introduction.md)** cluster, backed by a major
central investment of ~1.5M CHF. **Merlin5** was mostly based on CPU resources,
but also contained a small number of GPU-based resources, used mostly by the
BIO experiments.

**Merlin5** has been kept as a **Local HPC [Slurm](https://slurm.schedmd.com/overview.html) cluster**,
called **`merlin5`**. In this way, the old CPU computing nodes remain available as extra computation resources
and as an extension of the official production **`merlin6`** [Slurm](https://slurm.schedmd.com/overview.html) cluster.
The old Merlin5 _**login nodes**_, _**GPU nodes**_ and _**storage**_ were fully migrated to the **[Merlin6](../merlin6/index.md)**
cluster, which became the **main Local HPC Cluster**. Hence, **[Merlin6](../merlin6/index.md)**
contains the storage that is mounted on the different Merlin HPC [Slurm](https://slurm.schedmd.com/overview.html) clusters (`merlin5`, `merlin6`, `gmerlin6`).
### Submitting jobs to 'merlin5'

Jobs for the **`merlin5`** Slurm cluster must be submitted from the **Merlin6** login nodes, adding
the option `--clusters=merlin5` to any of the Slurm commands (`sbatch`, `salloc`, `srun`, etc.).
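As a minimal sketch, a batch script routed to the legacy cluster could look as follows. The job name, resource values, and script contents are illustrative examples, not taken from the Merlin documentation; only the `--clusters=merlin5` option is what the text above prescribes:

```shell
#!/bin/bash
#SBATCH --clusters=merlin5   # route the job to the legacy merlin5 Slurm cluster
#SBATCH --job-name=hello     # hypothetical job name, choose your own
#SBATCH --ntasks=1           # example resource request
#SBATCH --time=00:05:00      # example wall-time limit

echo "Running on $(hostname)"
```

Submit it from a Merlin6 login node with `sbatch hello.batch`; the same `--clusters=merlin5` option can be passed to `squeue` to monitor the job on that cluster.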
## The Merlin Architecture

### Multi Non-Federated Cluster Architecture Design: The Merlin cluster

The following image shows the Slurm architecture design for the Merlin cluster.
It is a multi non-federated cluster setup, with a central Slurm database
and multiple independent clusters (`merlin5`, `merlin6`, `gmerlin6`):

![Merlin6 Slurm Architecture Design](../images/merlin-slurm-architecture.png)