Clean Merlin6 docs

2026-02-10 10:46:02 +01:00
parent 6812bb6bad
commit 17053a363f
14 changed files with 85 additions and 185 deletions


@@ -46,4 +46,3 @@ The Merlin6 login nodes are the following:
| ------------------- | --- | --------- | ------ |:--------:| :-------------------- | ------ | ---------- | :------------------ |
| merlin-l-001.psi.ch | yes | yes | 2 x 22 | 2 | Intel Xeon Gold 6152 | 384GB | 1.8TB NVMe | ``/scratch`` |
| merlin-l-002.psi.ch | yes | yes | 2 x 22 | 2 | Intel Xeon Gold 6142 | 384GB | 1.8TB NVMe | ``/scratch`` |
| merlin-l-01.psi.ch | yes | - | 2 x 16 | 2 | Intel Xeon E5-2697Av4 | 512GB | 100GB SAS | ``/scratch`` |


@@ -7,7 +7,6 @@ It basically contains the following clusters:
* The **Merlin6 Slurm CPU cluster**, which is called [**`merlin6`**](#merlin6-cpu-cluster-access).
* The **Merlin6 Slurm GPU cluster**, which is called [**`gmerlin6`**](#merlin6-gpu-cluster-access).
* The *old Merlin5 Slurm CPU cluster*, which is called [**`merlin5`**](#merlin5-cpu-cluster-access), still supported on a best-effort basis.
## Accessing the Slurm clusters
@@ -35,15 +34,3 @@ For further information about how to use this cluster, please visit: [**Merlin6
The **Merlin6 GPU cluster** (**`gmerlin6`**) is visible from the login nodes. However, to submit jobs to this cluster, one needs to specify the option `--cluster=gmerlin6` when submitting a job or allocation.
For further information about how to use this cluster, please visit: [**Merlin6 GPU Slurm Cluster documentation**](../../gmerlin6/slurm-configuration.md).
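As a minimal sketch of what this looks like in practice (the script name and GPU count below are placeholders, not prescribed values):

```shell
# Submit a batch script to the gmerlin6 GPU cluster (my_gpu_job.sh is a placeholder)
sbatch --cluster=gmerlin6 --gpus=1 my_gpu_job.sh

# Inspect the queue of the gmerlin6 cluster explicitly
squeue --cluster=gmerlin6
```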
### Merlin5 CPU cluster access
The **Merlin5 CPU cluster** (**`merlin5`**) is visible from the login nodes. However, to submit jobs
to this cluster, one needs to specify the option `--cluster=merlin5` when submitting a job or allocation.
Using this cluster is generally not recommended; however, it remains available
for existing users needing extra computational resources or longer jobs.
Keep in mind that this cluster is supported only on a **best-effort basis**
and contains very old hardware and configurations.
For further information about how to use this cluster, please visit the [**Merlin5 CPU Slurm Cluster documentation**](../../merlin5/slurm-configuration.md).
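As a sketch, the cluster selection can also be carried in the batch script's `#SBATCH` header (the resource values below are illustrative placeholders):

```shell
#!/bin/bash
#SBATCH --clusters=merlin5     # long form of --cluster; targets the old CPU cluster
#SBATCH --ntasks=1             # placeholder resource request
#SBATCH --time=24:00:00        # placeholder; merlin5 is typically used for longer jobs

srun hostname
```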


@@ -5,13 +5,6 @@
Historically, the local HPC clusters at PSI were named **Merlin**. Over the years,
multiple generations of Merlin have been deployed.
At present, the **Merlin local HPC cluster** contains _two_ generations:
* the old **Merlin5** cluster (`merlin5` Slurm cluster), and
* the newest generation, **Merlin6**, which is divided into two Slurm clusters:
* `merlin6` as the Slurm CPU cluster
* `gmerlin6` as the Slurm GPU cluster.
Access to the different Slurm clusters is possible from the [**Merlin login nodes**](accessing-interactive-nodes.md),
which can be accessed through the [SSH protocol](accessing-interactive-nodes.md#ssh-access) or the [NoMachine (NX) service](../how-to-use-merlin/nomachine.md).
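A minimal sketch of the SSH route, using one of the login node hostnames from this documentation and a placeholder PSI username:

```shell
# Log in to a Merlin login node with your PSI account
ssh <psi-username>@merlin-l-001.psi.ch

# Add -X (or -Y) for X11 forwarding if you need graphical applications
ssh -X <psi-username>@merlin-l-001.psi.ch
```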
@@ -36,8 +29,8 @@ connected to the central PSI IT infrastructure.
#### CPU and GPU Slurm clusters
The Merlin6 **computing nodes** are mostly based on **CPU** resources. However,
it also contains a small amount of **GPU**-based resources, which are mostly used
by the BIO Division and by Deep Learning projects.
in the past it also contained a small amount of **GPU**-based resources, which were mostly used
by the BIO Division and by Deep Learning projects. Today, only Gwendolen is available on `gmerlin6`.
These computational resources are split into **two** different **[Slurm](https://slurm.schedmd.com/overview.html)** clusters:
@@ -45,12 +38,3 @@ These computational resources are split into **two** different **[Slurm](https:/
* This is the **default Slurm cluster** configured in the login nodes: any job submitted without the option `--cluster` will be submitted to this cluster.
* The Merlin6 GPU resources are in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster called [**`gmerlin6`**](../../gmerlin6/slurm-configuration.md).
* Users submitting to the **`gmerlin6`** GPU cluster need to specify the option ``--cluster=gmerlin6``.
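Since the login nodes default to `merlin6`, Slurm's cluster-aware commands can be pointed at either cluster explicitly; a sketch (the script name is a placeholder):

```shell
# Without --cluster: submitted to the default merlin6 CPU cluster
sbatch my_job.sh

# The same script directed at the GPU cluster instead
sbatch --cluster=gmerlin6 my_job.sh

# Query partitions on both clusters at once
sinfo --clusters=merlin6,gmerlin6
```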
### Merlin5
The old Slurm **CPU** _Merlin_ cluster is still active and is maintained on a best-effort basis.
**Merlin5** only contains **computing node** resources in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster.
* The Merlin5 CPU cluster is called [**merlin5**](../../merlin5/slurm-configuration.md).


@@ -2,41 +2,26 @@
## Requesting Access to Merlin6
Access to Merlin6 is regulated by a PSI user's account being a member of the **`svc-cluster_merlin6`** group. Access to this group will also grant access to older generations of Merlin (`merlin5`).
In the past, access to the public Merlin6 cluster was regulated via the `svc-cluster_merlin6` group, which is no longer in use.
Merlin6 has become a private cluster, and to request access, **users must now be members of one of the Unix groups authorized to use it**, including Gwendolen.
Requesting **Merlin6** access *has to be done* with the corresponding **[Request Linux Group Membership](https://psi.service-now.com/psisp?id=psi_new_sc_cat_item&sys_id=84f2c0c81b04f110679febd9bb4bcbb1)** form, available in the [PSI Service Now Service Catalog](https://psi.service-now.com/psisp).
Requests for Merlin6 access must be submitted using the [Request Linux Group Membership](https://psi.service-now.com/psisp?id=psi_new_sc_cat_item&sys_id=84f2c0c81b04f110679febd9bb4bcbb1) form, available in the [PSI ServiceNow Service Catalog](https://psi.service-now.com/psisp). Access is granted by requesting membership in a Unix group that is permitted to use the cluster.
![Example: Requesting access to Merlin6](../../images/Access/01-request-merlin6-membership.png)
Mandatory customizable fields are the following:
### Mandatory fields
* **`Order Access for user`**, which defaults to the logged in user. However, requesting access for another user it's also possible.
* **`Request membership for group`**, for Merlin6 the **`svc-cluster_merlin6`** must be selected.
* **`Justification`**, please add here a short justification why access to Merlin6 is necessary.
The following fields must be completed:
Once submitted, the Merlin responsible will approve the request as soon as possible (within the next few hours on working days). Once the request is approved, *it may take up to 30 minutes to get the account fully configured*.
* **Order Access for user**: Defaults to the currently logged-in user. Access may also be requested on behalf of another user.
* **Request membership for group**: Select a valid Unix group that has access to Merlin6.
* **Justification**: Provide a brief explanation of why access to this group is required.
## Requesting Access to Merlin5
Access to Merlin5 is regulated by a PSI user's account being a member of the **`svc-cluster_merlin5`** group. Access to this group does not grant access to newer generations of Merlin (`merlin6`, `gmerlin6`, and future ones).
Requesting **Merlin5** access *has to be done* with the corresponding **[Request Linux Group Membership](https://psi.service-now.com/psisp?id=psi_new_sc_cat_item&sys_id=84f2c0c81b04f110679febd9bb4bcbb1)** form, available in the [PSI Service Now Service Catalog](https://psi.service-now.com/psisp).
![Example: Requesting access to Merlin5](../../images/Access/01-request-merlin5-membership.png)
Mandatory customizable fields are the following:
* **`Order Access for user`**, which defaults to the logged in user. However, requesting access for another user it's also possible.
* **`Request membership for group`**, for Merlin5 the **`svc-cluster_merlin5`** must be selected.
* **`Justification`**, please add here a short justification why access to Merlin5 is necessary.
Once submitted, the Merlin responsible will approve the request as soon as possible (within the next few hours on working days). Once the request is approved, *it may take up to 30 minutes to get the account fully configured*.
Once the request is submitted, the corresponding group administrators will review and approve it as soon as possible (typically within a few working hours). After approval, it may take up to 30 minutes for the account to be fully configured and access to become effective.
## Further documentation
Further information it's also available in the Linux Central Documentation:
Additional information is available in the Linux Central Documentation:
* [Unix Group / Group Management for users](https://linux.psi.ch/documentation/services/user-guide/unix_groups.html)
* [Unix Group / Group Management for group managers](https://linux.psi.ch/documentation/services/admin-guide/unix_groups.html)
**Special thanks** to the **Linux Central Team** and **AIT** for making this possible.