---
title: Hardware And Software Description
last_updated: 09 April 2021
sidebar: merlin6_sidebar
permalink: /merlin5/hardware-and-software.html
---

## Hardware

### Computing Nodes

Merlin5 is built from recycled nodes, and hardware will be decommissioned as soon as it fails (due to the expired warranty and the age of the cluster).

* Merlin5 is based on the HPE c7000 Enclosure solution, with 16 x HPE ProLiant BL460c Gen8 nodes per chassis.
* Connectivity is based on InfiniBand ConnectX-3 QDR (40Gbps):
  * 16 internal ports for intra-chassis communication.
  * 2 connected external ports for inter-chassis communication and storage access.

The table below summarizes the hardware setup of the Merlin5 computing nodes:

**Merlin5 CPU Computing Nodes**

| Chassis | Node             | Processor          | Sockets | Cores | Threads | Scratch | Memory |
|:-------:|:-----------------|:-------------------|:-------:|:-----:|:-------:|:-------:|:------:|
| #0      | merlin-c-[18-30] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 64GB   |
| #0      | merlin-c-[31,32] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 128GB  |
| #1      | merlin-c-[33-45] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 64GB   |
| #1      | merlin-c-[46,47] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 128GB  |
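These specifications can also be cross-checked directly against the Slurm node definitions. The snippet below is a minimal sketch, assuming Slurm's multi-cluster support is configured (so that the merlin5 cluster can be addressed with the standard `--clusters` flag) and using a node name taken from the table above:

```bash
# List the partitions and nodes of the merlin5 Slurm cluster
sinfo --clusters=merlin5

# Show the hardware definition (sockets, cores per socket,
# threads per core, memory) of one compute node from chassis #0
scontrol --clusters=merlin5 show node merlin-c-18
```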

### Login Nodes

The login nodes are part of the Merlin6 HPC cluster, and are used to compile software and to submit jobs to the different Merlin Slurm clusters (merlin5, merlin6, gmerlin6, etc.). Please refer to the Merlin6 Hardware Documentation for further information.
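Since the same login nodes serve several Slurm clusters, the target cluster has to be selected at submission time. A minimal sketch, assuming a batch script named `job.sh` (hypothetical) and Slurm's standard `--clusters` flag:

```bash
# Submit the job to the merlin5 cluster instead of the default cluster
sbatch --clusters=merlin5 job.sh

# Inspect the queue of the merlin5 cluster only
squeue --clusters=merlin5
```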

### Storage

The storage is part of the Merlin6 HPC cluster, and is mounted on all the Slurm clusters (merlin5, merlin6, gmerlin6, etc.). Please refer to the Merlin6 Hardware Documentation for further information.

## Network

Merlin5 cluster connectivity is based on InfiniBand QDR technology. This allows fast, very low-latency access to the data, as well as running highly efficient MPI-based jobs. However, this is an old generation of InfiniBand: it requires older drivers, and the software cannot take advantage of the latest features.
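Which InfiniBand generation a node is running can be verified with the standard InfiniBand diagnostic tools. A minimal sketch, assuming the `infiniband-diags` and verbs utility packages are installed on the nodes:

```bash
# Show the local InfiniBand adapter, firmware version and link rate
# (a QDR link reports a rate of 40 Gb/s)
ibstat

# Lower-level view of the same adapter via the verbs library
ibv_devinfo
```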

## Software

In Merlin5, we try to keep the software stack coherent with the main Merlin6 cluster.

For this reason, Merlin5 runs: