Merlin contains a multi-cluster setup, where multiple Slurm clusters coexist under the same umbrella.
It contains the following clusters:

* The **Merlin6 Slurm CPU cluster**, which is called [**`merlin6`**](#merlin6-cpu-cluster-access).
* The **Merlin6 Slurm GPU cluster**, which is called [**`gmerlin6`**](#merlin6-gpu-cluster-access).
* The *old Merlin5 Slurm CPU cluster*, which is called [**`merlin5`**](#merlin5-cpu-cluster-access), still supported on a best-effort basis.
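
Because this is a multi-cluster Slurm setup, the clusters known to Slurm can be queried directly from a login node. A minimal sketch, assuming the standard Slurm client tools are available in the shell:

```bash
# List the Slurm clusters registered in this multi-cluster setup
sacctmgr show clusters format=Cluster,ControlHost

# Show the partitions of one specific cluster, e.g. gmerlin6
sinfo --clusters=gmerlin6
```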

## Accessing the Slurm clusters

Any job submission must be performed from a **Merlin login node**. Please refer to the [**Accessing the Interactive Nodes documentation**](accessing-interactive-nodes.md)
for further information about how to access the cluster.
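
As a minimal illustration (the script name and its contents are only an example), a batch job could look as follows and be submitted from a login node with `sbatch hello.sh`:

```bash
#!/bin/bash
#SBATCH --job-name=hello     # job name shown in the queue
#SBATCH --time=00:10:00      # wall-clock time limit
#SBATCH --ntasks=1           # a single task is enough for this example

echo "Hello from $(hostname)"
```

Its state can then be checked with `squeue -u $USER`.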
In addition, any job *must be submitted from a high-performance storage area that is visible to both the login nodes and the computing nodes*. The possible storage areas are the following:

### Merlin6 CPU cluster access

The **Merlin6 CPU cluster** (**`merlin6`**) is the default cluster configured
on the login nodes. Any job submission will use this cluster by default, unless
the option `--cluster` is specified with another of the existing clusters.
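
For example, assuming a job script `job.sh`, the first two submissions below are equivalent, while the third one targets the GPU cluster instead:

```bash
sbatch job.sh                     # default: submitted to merlin6
sbatch --cluster=merlin6 job.sh   # explicit, equivalent to the line above
sbatch --cluster=gmerlin6 job.sh  # submitted to the gmerlin6 GPU cluster
```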
For further information about how to use this cluster, please visit: [**Merlin6 CPU Slurm Cluster documentation**](../slurm-configuration.md).

### Merlin6 GPU cluster access

The **Merlin6 GPU cluster** (**`gmerlin6`**) is visible from the login nodes. However, to submit jobs to this cluster, one needs to specify the option `--cluster=gmerlin6` when submitting a job or allocation.
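
A short sketch of both cases, assuming a job script `gpu_job.sh` and a job that needs a single GPU (the resource requests are illustrative):

```bash
# Batch submission to the GPU cluster
sbatch --cluster=gmerlin6 --gpus=1 gpu_job.sh

# Interactive allocation on the GPU cluster
salloc --cluster=gmerlin6 --gpus=1
```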
For further information about how to use this cluster, please visit: [**Merlin6 GPU Slurm Cluster documentation**](../../gmerlin6/slurm-configuration.md).

### Merlin5 CPU cluster access

The **Merlin5** cluster remains available for old users needing extra computational resources or longer jobs.
Keep in mind that this cluster is only supported on a **best-effort basis**,
and it contains very old hardware and configurations.
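
If these limitations are acceptable, submitting to `merlin5` only differs in the `--cluster` option; for example, with a hypothetical script `job.sh`:

```bash
sbatch --cluster=merlin5 job.sh   # submit to the legacy merlin5 cluster
squeue --clusters=merlin5         # inspect its queue
```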
For further information about how to use this cluster, please visit the [**Merlin5 CPU Slurm Cluster documentation**](../../merlin5/slurm-configuration.md).

---

At present, the **Merlin local HPC cluster** contains _two_ generations:

* `merlin6` as the Slurm CPU cluster
* `gmerlin6` as the Slurm GPU cluster.
Access to the different Slurm clusters is possible from the [**Merlin login nodes**](accessing-interactive-nodes.md),
which can be accessed through the [SSH protocol](accessing-interactive-nodes.md#ssh-access) or the [NoMachine (NX) service](../how-to-use-merlin/nomachine.md).
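
As a quick sketch of the SSH route (the hostname below is a placeholder; the linked documentation lists the actual login nodes):

```bash
# Replace <username> and <login-node> with the values from the documentation;
# -Y enables trusted X11 forwarding for graphical applications
ssh -Y <username>@<login-node>
```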
The following image shows the Slurm architecture design for the Merlin5 & Merlin6 (CPU & GPU) clusters:

![Merlin Slurm architecture](../images/Slurm/cluster_architecture.png)

### Merlin6

Part of these computational resources was contributed by the BIO Division and by the Deep Learning project.

These computational resources are split into **two** different **[Slurm](https://slurm.schedmd.com/overview.html)** clusters:
* The Merlin6 CPU nodes are in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster called [**`merlin6`**](../slurm-configuration.md).
  * This is the **default Slurm cluster** configured in the login nodes: any job submitted without the option `--cluster` will be submitted to this cluster.
* The Merlin6 GPU resources are in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster called [**`gmerlin6`**](../../gmerlin6/slurm-configuration.md).
  * Users submitting to the **`gmerlin6`** GPU cluster need to specify the option `--cluster=gmerlin6`, as sketched below.
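
A brief sketch of how the two clusters behave from a login node, using only standard Slurm commands:

```bash
sbatch job.sh                    # no --cluster option: lands on merlin6
sbatch --cluster=gmerlin6 job.sh # explicitly targets the GPU cluster

# Queues can be inspected per cluster, or across all clusters at once
squeue --clusters=gmerlin6
squeue --clusters=all
```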

### Merlin5

The old Slurm **CPU** _Merlin_ cluster is still active and is maintained on a best-effort basis.

**Merlin5** only contains **computing node** resources in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster.
* The Merlin5 CPU cluster is called [**merlin5**](../../merlin5/slurm-configuration.md).