### Network

The Merlin7 cluster builds on top of HPE/Cray technologies, including a high-performance network fabric called Slingshot, which provides up to 200 Gbit/s of throughput between nodes. Further information on Slingshot can be found at [HPE](https://www.hpe.com/psnow/doc/PSN1012904596HREN) and
at <https://www.glennklockwood.com/garden/slingshot>.
Through software interfaces like [libFabric](https://ofiwg.github.io/libfabric/) (which is available on Merlin7), applications can leverage the network seamlessly.
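
As a quick check, the libfabric command-line utilities can show which providers a node exposes. The sketch below assumes the libfabric tools are in your `PATH` (for example, via a module) and that the Slingshot NIC is presented through the `cxi` provider; adjust as needed for your environment.

```bash
# List all libfabric providers and interfaces visible on this node
fi_info

# Restrict the output to the Slingshot (CXI) provider, if present
fi_info -p cxi
```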
### Storage
Unlike previous iterations of the Merlin HPC clusters, Merlin7 _does not_ have any local storage. Instead, storage for the entire cluster is provided through
a dedicated storage appliance from HPE/Cray called [ClusterStor](https://www.hpe.com/psnow/doc/PSN1012842049INEN.pdf).
The appliance is made up of several storage servers:
* 2 management nodes
* 2 MDS servers, 12 drives per server, 2.9 TiB (RAID 10)
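
ClusterStor is a Lustre-based appliance, with the MDS servers above holding the filesystem metadata. As a minimal sketch, assuming the Merlin7 filesystems are mounted as Lustre clients (the path below is a placeholder), the standard Lustre client tools can be used to inspect them:

```bash
# Show capacity and usage of the metadata (MDT) and object storage (OST) targets
lfs df -h

# Check your own quota on a given filesystem (path is a placeholder)
lfs quota -u $USER /data/user
```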
## Slurm nodes definition

By default, jobs will be submitted to **`merlin7`**, as it is the primary cluster configured on the login nodes.
Specifying the cluster name is typically unnecessary unless you have defined environment variables that could override the default cluster name.
However, when necessary, one can specify the cluster as follows:
```bash
#SBATCH --cluster=merlin7
```
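
The cluster can also be selected at submission time on the command line. A short sketch (the job script name is a placeholder):

```bash
# Submit a job script explicitly to the merlin7 cluster
sbatch --clusters=merlin7 myjob.sh

# Show partition and node information for that cluster
sinfo --clusters=merlin7
```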
The table below provides an overview of the Slurm configuration for the different node types in the Merlin7 cluster.
This information is essential for understanding how resources are allocated, enabling users to tailor their submission
scripts accordingly.
* **Memory allocation options:** To request additional memory, use the following options in your submission script:
  * **`--mem=<mem_in_MB>`**: Allocates memory per node.
  * **`--mem-per-cpu=<mem_in_MB>`**: Allocates memory per CPU (equivalent to a core thread).

  The total memory requested cannot exceed the **`MaxMemPerNode`** value.
* **Impact of disabling Hyper-Threading:** Using the **`--hint=nomultithread`** option disables one thread per core,
effectively halving the number of available CPUs. Consequently, memory allocation will also be halved unless explicitly
adjusted.

  For MPI-based jobs, where performance generally improves with single-threaded CPUs, this option is recommended.
  In such cases, you should double the **`--mem-per-cpu`** value to account for the reduced number of threads (see the example below).
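
Putting these options together, a submission-script sketch might look as follows. The task count, memory value, and application name are placeholders rather than recommended settings for any particular code.

```bash
#!/bin/bash
#SBATCH --cluster=merlin7          # primary cluster on the login nodes
#SBATCH --ntasks=64                # number of MPI ranks (placeholder value)
#SBATCH --hint=nomultithread       # run one thread per core (Hyper-Threading disabled)
#SBATCH --mem-per-cpu=8000         # doubled (e.g. from 4000 MB) to offset the halved CPU count

# Launch the MPI application (binary name is a placeholder)
srun ./my_mpi_application
```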
| **Partition** | **DefaultTime** | **MaxTime** | **MaxNodes** | **PriorityJobFactor** | **PriorityTier** | **QoS** | **Account** |
|:-------------:|:---------------:|:-----------:|:------------:|:---------------------:|:-----------------:|:-------:|:-----------:|
| **meg-short** | 0-01:00:00 | 0-01:00:00 | unlimited | 1000 | 2 | normal | meg |
| **meg-long**  | 1-00:00:00 | 5-00:00:00 | unlimited | 1    | 2 | normal | meg |
| **meg-prod**  | 1-00:00:00 | 5-00:00:00 | unlimited | 1000 | 4 | normal | meg |
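
To target one of these partitions explicitly, request it in the submission script. A short sketch (access to the `meg` partitions is presumably restricted to the corresponding account, so substitute a partition you are entitled to use):

```bash
#SBATCH --cluster=merlin7
#SBATCH --partition=meg-short      # partition name taken from the table above
#SBATCH --time=00:30:00            # must stay within the partition's MaxTime
```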