---
title: Hardware And Software Description
#tags:
#keywords:
last_updated: 09 April 2021
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin5/hardware-and-software.html
---

## Hardware

### Computing Nodes

Merlin5 is built from recycled nodes, and hardware will be decommissioned as soon as it fails (due to the expired warranty and the age of the cluster).

* Merlin5 is based on the [**HPE c7000 Enclosure**](https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=c04128339) solution, with 16 x [**HPE ProLiant BL460c Gen8**](https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=c04123239) nodes per chassis.
* Connectivity is based on Infiniband **ConnectX-3 QDR-40Gbps**:
  * 16 internal ports for intra-chassis communication.
  * 2 connected external ports for inter-chassis communication and storage access.

The table below summarizes the hardware setup of the Merlin5 computing nodes:
**Merlin5 CPU Computing Nodes**

| Chassis | Node             | Processor          | Sockets | Cores | Threads | Scratch | Memory |
|---------|------------------|--------------------|---------|-------|---------|---------|--------|
| #0      | merlin-c-[18-30] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 64GB   |
| #0      | merlin-c-[31,32] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 128GB  |
| #1      | merlin-c-[33-45] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 64GB   |
| #1      | merlin-c-[46,47] | Intel Xeon E5-2670 | 2       | 16    | 1       | 50GB    | 128GB  |
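The node characteristics above can be cross-checked against what Slurm reports. A minimal sketch, assuming the `merlin5` Slurm cluster name used throughout this page (the exact output format depends on the installed Slurm version):

```bash
# List Merlin5 partitions and node states
sinfo --clusters=merlin5

# Show the hardware Slurm sees for one of the nodes from the table above
scontrol --clusters=merlin5 show node merlin-c-18
```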
### Login Nodes

The login nodes are part of the **[Merlin6](/merlin6/introduction.html)** HPC cluster, and are used to compile and to submit jobs to the different ***Merlin Slurm clusters*** (`merlin5`, `merlin6`, `gmerlin6`, etc.). Please refer to the **[Merlin6 Hardware Documentation](/merlin6/hardware-and-software.html)** for further information.

### Storage

The storage is part of the **[Merlin6](/merlin6/introduction.html)** HPC cluster, and is mounted on all the ***Slurm clusters*** (`merlin5`, `merlin6`, `gmerlin6`, etc.). Please refer to the **[Merlin6 Hardware Documentation](/merlin6/hardware-and-software.html)** for further information.

### Network

Merlin5 cluster connectivity is based on [Infiniband QDR](https://en.wikipedia.org/wiki/InfiniBand) technology. This allows fast, very low-latency access to the data, as well as running extremely efficient MPI-based jobs. However, this is an old generation of Infiniband: it requires older drivers, and software cannot take advantage of the latest features.

## Software

In Merlin5, we try to keep the software stack coherent with the main cluster, [Merlin6](/merlin6/index.html). For this reason, Merlin5 runs:

* [**RedHat Enterprise Linux 7**](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index)
* [**Slurm**](https://slurm.schedmd.com/), which we usually try to keep up to date with the most recent versions.
* [**GPFS v5**](https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/ibmspectrumscale502_welcome.html)
* [**MLNX_OFED LTS v.4.9-2.2.4.0**](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed), an old version, but required because **ConnectX-3** support has been dropped in newer OFED releases.
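The installed versions of this stack can be verified from a shell on the cluster. A minimal sketch; note that the `gpfs.base` package name and the availability of `ofed_info` depend on how GPFS and MLNX_OFED were installed, so treat these as illustrative:

```bash
# Operating system release
cat /etc/redhat-release

# Slurm version
sinfo --version

# GPFS (IBM Spectrum Scale) version, queried from the installed package
# since the mm* admin commands usually require root
rpm -q gpfs.base

# Mellanox OFED version (short form)
ofed_info -s
```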