# Introduction

![Merlin](../../images/merlin_cave.png){ width="400px" }
/// caption
_Within his lair, the wizard ever strives for the perfection of his art._
///

## About Merlin7

PSI's Merlin7 cluster runs on top of an IaaS (Infrastructure as a Service) *vCluster* on the [CSCS Alps infrastructure](https://www.cscs.ch/computers/alps). It is fully integrated with the PSI service landscape and was designed to provide the same end-user experience as its PSI-local predecessor clusters.

Merlin7 has been in **production** since the beginning of June 2025.

All PSI users can request access to Merlin7: please go to the [Requesting Merlin Accounts](requesting-accounts.md) page and complete the steps given there.

In case you identify errors or missing information, please provide feedback through the [merlin-admins mailing list](mailto:merlin-admins@lists.psi.ch) or [submit a ticket using the PSI service portal](https://psi.service-now.com/psisp).

## Infrastructure

### Hardware

The Merlin7 cluster consists of the following node types:

| Node | #Nodes | CPU | RAM | GPU | #GPUs |
| ----: | -- | --- | --- | ----: | ---: |
| Login | 2 | 2× AMD EPYC 7742 (64 cores, 2.25 GHz) | 512 GB | | |
| CPU | 77 | 2× AMD EPYC 7742 (64 cores, 2.25 GHz) | 512 GB | | |
| GPU A100 | 8 | 2× AMD EPYC 7713 (64 cores, 3.2 GHz) | 512 GB | A100 80 GB | 4 |
| GPU GH | 5 | NVIDIA ARM Grace Neoverse V2 (144 cores, 3.1 GHz) | 864 GB (unified) | GH200 120 GB | 4 |

### Network

The Merlin7 cluster is built on top of HPE/Cray technologies, including a high-performance network fabric called Slingshot, which provides up to 200 Gbit/s of throughput between nodes. Further information on Slingshot can be found at [HPE](https://www.hpe.com/psnow/doc/PSN1012904596HREN).

Through software interfaces like [libFabric](https://ofiwg.github.io/libfabric/) (which is available on Merlin7), applications can leverage the network seamlessly; a short libFabric sketch is given at the end of this page.

### Storage

Unlike previous iterations of the Merlin HPC clusters, Merlin7 _does not_ have any local storage. Instead, storage for the entire cluster is provided through a dedicated storage appliance from HPE/Cray called [ClusterStor](https://www.hpe.com/psnow/doc/PSN1012842049INEN.pdf).

The appliance is built from several storage servers:

* 2 management nodes
* 2 MDS servers, 12 drives per server, 2.9 TiB (RAID 10)
* 8 OSS-D servers, 106 drives per server, 14.5 TiB HDDs (GridRAID / RAID 6)
* 4 OSS-F servers, 12 drives per server, 7 TiB SSDs (RAID 10)

This yields an effective storage capacity of:

* 10 PB HDD
    * value visible on Linux: 9302.4 TiB
* 162 TB SSD
    * value visible on Linux: 151.6 TiB
* 23.6 TiB for metadata

The storage is directly connected to the cluster (and each individual node) through the Slingshot NIC.
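
As mentioned in the Network section, applications reach the Slingshot fabric through libFabric. The following minimal C sketch simply queries which libfabric providers are visible on a node (on Slingshot systems this typically includes the `cxi` provider); the libfabric version passed to `FI_VERSION` and the compile command are assumptions for illustration, not Merlin7-specific instructions.

```c
/* Minimal sketch: list the libfabric providers visible on a node.
 * Assumes the libfabric headers/library are available in the build
 * environment (e.g. compile with: cc list_providers.c -lfabric).
 */
#include <stdio.h>
#include <rdma/fabric.h>

int main(void)
{
    struct fi_info *info = NULL;

    /* Ask libfabric for all fabrics/providers it can discover. */
    int ret = fi_getinfo(FI_VERSION(1, 15), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %s\n", fi_strerror(-ret));
        return 1;
    }

    /* Print provider and fabric names; the Slingshot provider is "cxi". */
    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider: %-12s fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    return 0;
}
```

In practice, higher-level communication libraries such as MPI typically sit on top of libFabric, so most applications benefit from the Slingshot network without any code changes.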
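
Similarly, the "value visible on Linux" capacity figures in the Storage section can be cross-checked from any node with a standard POSIX `statvfs` call; the mount point used below is a placeholder, not a confirmed Merlin7 path.

```c
/* Minimal sketch: report the capacity of a mounted filesystem in TiB.
 * "/data" is a hypothetical mount point used only for illustration.
 */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    const char *path = "/data";   /* placeholder: replace with a real mount point */
    struct statvfs fs;

    if (statvfs(path, &fs) != 0) {
        perror("statvfs");
        return 1;
    }

    const double tib = 1024.0 * 1024.0 * 1024.0 * 1024.0;
    printf("%s: total %.1f TiB, available %.1f TiB\n",
           path,
           (double)fs.f_blocks * fs.f_frsize / tib,
           (double)fs.f_bavail * fs.f_frsize / tib);
    return 0;
}
```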