---
title: Introduction
keywords: introduction, home, welcome, architecture, design
last_updated: 07 September 2022
sidebar: merlin7_sidebar
permalink: /merlin7/introduction.html
redirect_from:
  - /merlin7
  - /merlin7/index.html
---

About Merlin7

The Merlin7 cluster has been moving toward its production state since August 2024; the transition is expected to be complete by Q4 2025 at the latest. Since January 2025 the system has been generally available, but due to some remaining issues with the platform, the migration of users and communities has been delayed. You will be notified well in advance regarding the migration of data.

All PSI users can request access to Merlin7: please go to the Requesting Merlin Accounts page and complete the steps given there.

In case you identify errors or missing information, please provide feedback through the merlin-admins mailing list or submit a ticket using the PSI service portal.

Infrastructure

Hardware

The Merlin7 cluster consists of the following node types:

| Node     | #N | CPU                                              | RAM             | GPU         | #GPUs |
|----------|----|--------------------------------------------------|-----------------|-------------|-------|
| Login    | 2  | 2x AMD EPYC 7742 (64 Cores, 2.25GHz)             | 512GB           | -           | -     |
| CPU      | 77 | 2x AMD EPYC 7742 (64 Cores, 2.25GHz)             | 512GB           | -           | -     |
| GPU A100 | 8  | 2x AMD EPYC 7713 (64 Cores, 3.2GHz)              | 512GB           | A100 80GB   | 4     |
| GPU GH   | 5  | NVIDIA ARM Grace Neoverse v2 (144 Cores, 3.1GHz) | 864GB (Unified) | GH200 120GB | 4     |

Network

The Merlin7 cluster is built on top of HPE/Cray technologies, including a high-performance network fabric called Slingshot. This network fabric provides up to 200 Gbit/s of throughput between nodes. Further information on Slingshot can be found at HPE and at https://www.glennklockwood.com/garden/slingshot.

Through software interfaces such as libfabric (which is available on Merlin7), applications can leverage the network seamlessly.
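
As a minimal sketch of that interface (assuming the libfabric development headers are installed and linking against -lfabric), the following program lists the providers libfabric exposes on a node; on Slingshot systems the high-speed NIC is usually surfaced through the cxi provider, though the exact names reported depend on the installation:

```c
/* Minimal sketch: list the fabric providers libfabric reports on a node.
 * Assumes the libfabric development package is available; build with e.g.
 *     cc list_providers.c -lfabric -o list_providers
 * The provider names printed depend on the local installation. */
#include <stdio.h>
#include <rdma/fabric.h>

int main(void)
{
    struct fi_info *info = NULL;

    /* NULL hints: ask for every provider/endpoint combination available. */
    int ret = fi_getinfo(FI_VERSION(1, 15), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
        return 1;
    }

    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider: %-12s fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    return 0;
}
```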

Storage

Unlike previous iterations of the Merlin HPC clusters, Merlin7 does not have any local storage. Instead, storage for the entire cluster is provided through a dedicated storage appliance from HPE/Cray called ClusterStor.

The appliance is built from several storage servers:

  • 2 management nodes
  • 2 MDS servers, 12 drives per server, 2.9 TiB (RAID10)
  • 8 OSS-D servers, 106 drives per server, 14.5 TB HDDs (GridRAID/RAID6)
  • 4 OSS-F servers, 12 drives per server, 7 TiB SSDs (RAID10)

This gives an effective storage capacity of:

  • 10 PB HDD
    • value visible on Linux: HDD 9302.4 TiB
  • 162 TB SSD
    • value visible on Linux: SSD 151.6 TiB
  • 23.6 TiB for metadata

The storage is directly connected to the cluster (and each individual node) through the Slingshot NIC.
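
The capacities quoted above as the "value visible on Linux" are simply what a client node reports for the mounted filesystems. As an illustration only, a small sketch using the standard POSIX statvfs(3) call shows how such a figure can be read; the default path below is a placeholder, not a specific Merlin7 mount point:

```c
/* Minimal sketch: print the capacity a Linux client sees for a mounted
 * filesystem, i.e. the kind of figure quoted above as "value visible on
 * Linux".  The default path "/" is a placeholder; pass the mount point
 * you actually want to inspect as the first argument. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/";
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }

    const double TIB = 1024.0 * 1024.0 * 1024.0 * 1024.0;
    double total = (double)vfs.f_blocks * vfs.f_frsize / TIB;
    double avail = (double)vfs.f_bavail * vfs.f_frsize / TIB;

    printf("%s: %.1f TiB total, %.1f TiB available\n", path, total, avail);
    return 0;
}
```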