---
title: Hardware And Software Description
last_updated: 13 June 2019
sidebar: merlin6_sidebar
permalink: /merlin6/hardware-and-software.html
---
# Hardware And Software Description
{: .no_toc }

## Table of contents
{: .no_toc .text-delta }

- TOC
{:toc}
## Computing Nodes
The new Merlin6 cluster is a homogeneous solution based on three HP Apollo k6000 systems. Each HP Apollo k6000 chassis currently contains 22 HP XL230k Gen10 blades; since a chassis can hold up to 24 blades, each one can be upgraded with up to 2 additional nodes.
Each HP XL230k Gen10 blade can contain up to two processors of the latest Intel® Xeon® Scalable Processor family. The hardware and software configuration is the following:
- 3 x HP Apollo k6000 chassis systems, each one:
  - 22 x HP Apollo XL230k Gen10, each one:
    - 2 x 22 core Intel® Xeon® Gold 6152 Scalable Processor (2.10-3.70GHz).
    - 12 x 32 GB (384 GB in total) of DDR4 memory clocked at 2666 MHz.
    - Dual Port InfiniBand ConnectX-5 EDR-100Gbps (low latency network); one active port per chassis.
    - 1 x 1.6TB NVMe SSD Disk:
      - ~300GB reserved for the O.S.
      - ~1.2TB reserved for local fast scratch `/scratch` (a usage sketch follows this list).
    - Software:
  - 1 x HPE Apollo InfiniBand EDR 36-port Unmanaged Switch, with:
    - 24 internal EDR-100Gbps ports (1 port per blade, for internal low latency connectivity).
    - 12 external EDR-100Gbps ports (for external low latency connectivity).
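For illustration, below is a minimal sketch of how a job could stage data through the node-local `/scratch` NVMe area described in the list above. It assumes the job runs under Slurm (so `SLURM_JOB_ID` is set) and that the local scratch is mounted at `/scratch`; the `/data/project/example` paths are placeholders, not real Merlin6 locations.

```python
#!/usr/bin/env python3
"""Sketch: stage data through the node-local NVMe scratch area."""
import os
import shutil

SHARED_INPUT = "/data/project/example/input.dat"   # placeholder path
SHARED_RESULTS = "/data/project/example/results"   # placeholder path

# One sub-directory per job keeps concurrent jobs on the same blade apart.
job_id = os.environ.get("SLURM_JOB_ID", "interactive")
scratch = os.path.join("/scratch", f"job_{job_id}")
os.makedirs(scratch, exist_ok=True)

try:
    # Stage the input from shared (GPFS) storage onto the fast local disk.
    if os.path.exists(SHARED_INPUT):
        shutil.copy(SHARED_INPUT, scratch)

    # The I/O-heavy part of the workload would run against `scratch` here;
    # as a stand-in, write a small result file.
    result = os.path.join(scratch, "result.dat")
    with open(result, "w") as fh:
        fh.write("done\n")

    # Copy results back before the job ends: the local scratch is only
    # visible on the blade the job ran on.
    os.makedirs(SHARED_RESULTS, exist_ok=True)
    shutil.copy(result, SHARED_RESULTS)
finally:
    # Clean up: the ~1.2TB local scratch is shared by all jobs on the blade.
    shutil.rmtree(scratch, ignore_errors=True)
```

Keeping I/O-heavy intermediate files on the local NVMe disk avoids loading the shared file system, at the cost of having to copy final results back to shared storage at the end of the job.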
## Login Nodes
### merlin-l-0[1,2]
Two login nodes are inherited from the previous Merlin5 cluster: `merlin-l-01.psi.ch` and `merlin-l-02.psi.ch`. The hardware and software configuration is the following:
- 2 x HP DL380 Gen9, each one:
  - 2 x 16 core Intel® Xeon® Processor E5-2697A v4 (2.60-3.60GHz):
    - `merlin-l-01.psi.ch`: hyper-threading disabled.
    - `merlin-l-02.psi.ch`: hyper-threading enabled (a quick way to check this is sketched after this list).
  - 16 x 32 GB (512 GB in total) of DDR4 memory clocked at 2400 MHz.
  - Dual Port InfiniBand ConnectIB FDR-56Gbps (low latency network).
  - Software:
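Since hyper-threading is configured differently on these two nodes, the sketch below shows one quick way to verify the setting on whichever node you land on: it compares the logical CPU count with the physical core count parsed from `/proc/cpuinfo`. This is Linux-specific (fine for the RHEL systems described on this page) and is only an illustration, not an official tool.

```python
#!/usr/bin/env python3
"""Sketch: report whether hyper-threading is active on the current node."""
import os

def physical_cores():
    """Count unique (physical id, core id) pairs listed in /proc/cpuinfo."""
    cores = set()
    phys_id = None
    with open("/proc/cpuinfo") as fh:
        for line in fh:
            key, _, value = line.partition(":")
            key = key.strip()
            if key == "physical id":
                phys_id = value.strip()
            elif key == "core id":
                cores.add((phys_id, value.strip()))
    return len(cores)

logical = os.cpu_count()        # logical CPUs (hardware threads)
physical = physical_cores()     # physical cores
print(f"logical CPUs: {logical}, physical cores: {physical}")
if physical:
    print("hyper-threading:", "enabled" if logical > physical else "disabled")
```

On `merlin-l-02.psi.ch` (hyper-threading enabled) the logical count should be twice the physical one; on the other login nodes both numbers should match.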
### merlin-l-00[1,2]
Two new login nodes are available in the new cluster: `merlin-l-001.psi.ch` and `merlin-l-002.psi.ch`. The hardware and software configuration is the following:
- 2 x HP DL380 Gen10, each one:
  - 2 x 22 core Intel® Xeon® Gold 6152 Scalable Processor (2.10-3.70GHz).
    - Hyper-threading disabled.
  - 24 x 16GB (384 GB in total) of DDR4 memory clocked at 2666 MHz.
  - Dual Port InfiniBand ConnectX-5 EDR-100Gbps (low latency network).
  - Software:
    - NoMachine Terminal Server (currently only on `merlin-l-001.psi.ch`).
    - RedHat Enterprise Linux 7.6
    - Slurm v18.08
    - GPFS v5.0.2
## Storage
The storage is based on the Lenovo Distributed Storage Solution for IBM Spectrum Scale. The solution is equipped with 334 x 10TB disks, providing a usable capacity of 2.316 PiB (2.608 PB); how these two figures relate is sketched after the hardware list below. The overall solution can provide a maximum read performance of 20GB/s.
- 1 x Lenovo DSS G240, composed of:
  - 2 x ThinkSystem SR650, each one:
    - 2 x Dual Port InfiniBand ConnectX-5 EDR-100Gbps (low latency network).
    - 2 x Dual Port InfiniBand ConnectX-4 EDR-100Gbps (low latency network).
    - 1 x ThinkSystem RAID 930-8i 2GB Flash PCIe 12Gb Adapter.
  - 1 x ThinkSystem SR630, with:
    - 1 x Dual Port InfiniBand ConnectX-5 EDR-100Gbps (low latency network).
    - 1 x Dual Port InfiniBand ConnectX-4 EDR-100Gbps (low latency network).
  - 4 x Lenovo Storage D3284 High Density Expansion Enclosure, each one:
    - Holds 84 x 3.5" hot-swap drive bays in two drawers; each drawer has three rows of drives, and each row holds 14 drives.
    - Each drive bay contains a 10TB Helium 7.2K NL-SAS HDD.
  - 2 x Mellanox SB7800 InfiniBand 1U Switch, for High Availability and fast, very low latency access to the storage. Each one provides:
    - 36 EDR-100Gbps ports.
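As a small illustration of how the capacity figures above relate, the sketch below converts the quoted usable capacity from binary units (PiB) to decimal units (PB) and compares it with the raw disk capacity. The usable-to-raw ratio itself depends on the declustered RAID layout of the DSS solution, which is not detailed on this page.

```python
#!/usr/bin/env python3
"""Sketch: relate the usable (PiB/PB) and raw capacity figures quoted above."""

usable_pib = 2.316                      # usable capacity in pebibytes (2**50 bytes)

usable_bytes = usable_pib * 2**50
usable_pb = usable_bytes / 1e15         # decimal petabytes (10**15 bytes)
raw_pb = 334 * 10e12 / 1e15             # 334 disks x 10 TB each, raw capacity

print(f"usable: {usable_pib} PiB = {usable_pb:.3f} PB")    # ~2.608 PB
print(f"raw   : {raw_pb:.2f} PB")                          # 3.34 PB
```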
## Network
Merlin6 cluster connectivity is based on InfiniBand technology. This provides access to the data with very low latency and allows MPI-based jobs to run very efficiently (a short consistency check of the bandwidth figures and a minimal MPI example are sketched further down in this section):
- Inter-chassis connectivity (communication amongst computing nodes on different chassis) ensures up to 1200Gbps of aggregated bandwidth.
- Intra-chassis connectivity (communication amongst computing nodes in the same chassis) ensures up to 2400Gbps of aggregated bandwidth.
- Communication to the storage ensures up to 800Gbps of aggregated bandwidth.
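These aggregated figures follow from the per-link rate and port counts listed earlier on this page; the sketch below simply multiplies them out. Treating every port as an independent, fully loaded EDR link (and assuming only the ConnectX-5 EDR ports on the two SR650 storage servers carry the data traffic) is a simplifying assumption, not a measurement.

```python
#!/usr/bin/env python3
"""Sketch: derive the aggregated bandwidth figures from the port counts."""

EDR_GBPS = 100                   # one EDR InfiniBand link

# Per HP Apollo k6000 chassis switch (see "Computing Nodes" above):
internal_ports = 24              # one per blade slot
external_ports = 12              # uplinks towards the rest of the fabric

print("intra-chassis:", internal_ports * EDR_GBPS, "Gbps")   # 2400 Gbps
print("inter-chassis:", external_ports * EDR_GBPS, "Gbps")   # 1200 Gbps

# Storage: 2 x ThinkSystem SR650, each with 2 dual-port ConnectX-5 EDR adapters.
storage_ports = 2 * 2 * 2
print("to storage   :", storage_ports * EDR_GBPS, "Gbps")    # 800 Gbps
```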
The Merlin6 cluster currently contains 5 InfiniBand managed switches and 3 InfiniBand unmanaged switches (one per HP Apollo chassis):
- 1 x MSX6710 (FDR) for connecting the old GPU nodes, the old login nodes and the MeG cluster to the Merlin6 cluster (and its storage). No High Availability mode is possible here.
- 2 x MSB7800 (EDR) for connecting the login nodes, the storage and other nodes in High Availability mode.
- 3 x HP EDR unmanaged switches, one embedded in each HP Apollo k6000 chassis.
- 2 x MSB7700 (EDR) top switches, interconnecting the Apollo unmanaged switches and the managed switches (MSX6710, MSB7800).
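Finally, to illustrate the kind of workload this low-latency fabric targets, below is a minimal MPI example. It assumes an MPI implementation and the `mpi4py` Python package are available (for example via the cluster's module system); neither is named on this page, so treat the snippet purely as an illustration.

```python
#!/usr/bin/env python3
"""Sketch: minimal MPI job; the collective crosses the InfiniBand fabric."""
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # index of this process within the job
size = comm.Get_size()      # total number of MPI ranks

# Each rank contributes one value; the reduction crosses the InfiniBand
# fabric whenever ranks are placed on different blades or chassis.
total = comm.allreduce(rank, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, sum of ranks = {total}")
```

Launched with, for example, `mpirun -np 4 python3 allreduce.py`, the collective operation exercises the intra- and inter-chassis links described above when the ranks are spread over several nodes.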