Doc changes

2021-05-21 12:34:19 +02:00
parent 42d8f38934
commit fcfdbf1344
46 changed files with 447 additions and 528 deletions

View File

@@ -0,0 +1,56 @@
---
title: Accessing Interactive Nodes
#tags:
#keywords:
last_updated: 20 May 2021
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/interactive.html
---
## SSH Access
For interactive command-shell access, use an SSH client. We recommend activating SSH's X11 forwarding so that you can use graphical
applications (e.g. a text editor; for better graphical performance, refer to the sections below). X applications are supported
on the login nodes, and X11 forwarding can be used by users who have properly configured X11 support on their desktops (a basic connection example is shown after the list below). However:
* Merlin6 administrators **do not offer support** for user desktop configuration (Windows, MacOS, Linux).
* Hence, Merlin6 administrators **do not offer official support** for X11 client setup.
* Nevertheless, a generic guide for X11 client setup (*Linux*, *Windows* and *MacOS*) is provided below.
* PSI desktop configuration issues must be addressed through **[PSI Service Now](https://psi.service-now.com/psisp)** as an *Incident Request*.
* The ticket will be redirected to the corresponding Desktop support group (Windows, Linux).
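
As a minimal sketch (assuming your desktop's X11 support is already configured and that `xclock` is available on the login node for testing):

```bash
# Replace <username> with your PSI account name
ssh -X <username>@merlin-l-001.psi.ch

# Once logged in, quickly verify that X11 forwarding works
xclock
```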
### Accessing from a Linux client
Refer to [{How To Use Merlin -> Accessing from Linux Clients}](/merlin6/connect-from-linux.html) for **Linux** SSH client and X11 configuration.
### Accessing from a Windows client
Refer to [{How To Use Merlin -> Accessing from Windows Clients}](/merlin6/connect-from-windows.html) for **Windows** SSH client and X11 configuration.
### Accessing from a MacOS client
Refer to [{How To Use Merlin -> Accessing from MacOS Clients}](/merlin6/connect-from-macos.html) for **MacOS** SSH client and X11 configuration.
## NoMachine Remote Desktop Access
X applications are supported on the login nodes and can run efficiently through a **NoMachine** client. This is the officially supported way to run more demanding X applications on Merlin6.
* On PSI Windows workstations, the client can be installed from the Software Kiosk as 'NX Client'. If you have difficulties installing it, please request support through **[PSI Service Now](https://psi.service-now.com/psisp)** as an *Incident Request*.
* For other workstations, the client software can be downloaded from the [NoMachine website](https://www.nomachine.com/product&p=NoMachine%20Enterprise%20Client).
### Configuring NoMachine
Refer to [{How To Use Merlin -> Remote Desktop Access}](/merlin6/nomachine.html) for further instructions on how to configure the NoMachine client and how to access it from inside and outside PSI.
## Login nodes hardware description
The Merlin6 login nodes are the official machines for accessing the resources of Merlin6.
From these machines, users can submit jobs to the Slurm batch system, compile their software and run visualization tools.
The Merlin6 login nodes are the following:
| Hostname            | SSH | NoMachine | Sockets x Cores | Threads/Core | CPU                   | Memory | Scratch    | Scratch Mountpoint   |
| ------------------- | --- | --------- | ------ |:--------:| :-------------------- | ------ | ---------- | :------------------ |
| merlin-l-001.psi.ch | yes | yes | 2 x 22 | 2 | Intel Xeon Gold 6152 | 384GB | 1.8TB NVMe | ``/scratch`` |
| merlin-l-002.psi.ch | yes | yes       | 2 x 22 | 2        | Intel Xeon Gold 6152  | 384GB  | 1.8TB NVMe | ``/scratch``         |
| merlin-l-01.psi.ch | yes | - | 2 x 16 | 2 | Intel Xeon E5-2697Av4 | 512GB | 100GB SAS | ``/scratch`` |

View File

@@ -0,0 +1,53 @@
---
title: Accessing Slurm Cluster
#tags:
#keywords:
last_updated: 13 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/slurm-access.html
---
## The Merlin Slurm clusters
Merlin contains a multi-cluster setup, where multiple Slurm clusters coexist under the same umbrella.
It consists of the following clusters:
* The **Merlin6 Slurm CPU cluster**, which is called [**`merlin6`**](/merlin6/slurm-access.html#merlin6-cpu-cluster-access).
* The **Merlin6 Slurm GPU cluster**, which is called [**`gmerlin6`**](/merlin6/slurm-access.html#merlin6-gpu-cluster-access).
* The *old Merlin5 Slurm CPU cluster*, which is called [**`merlin5`**](/merlin6/slurm-access.html#merlin5-cpu-cluster-access) and is still supported on a best-effort basis.
## Accessing the Slurm clusters
Any job submission must be performed from a **Merlin login node**. Please refer to the [**Accessing the Interactive Nodes documentation**](/merlin6/interactive.html)
for further information about how to access the cluster.
In addition, any job *must be submitted from a high performance storage area visible to both the login nodes and the computing nodes*. The possible storage areas are the following:
* `/data/user`
* `/data/project`
* `/shared-scratch`
Please avoid using the `/psi/home` directories for submitting jobs.
### Merlin6 CPU cluster access
The **Merlin6 CPU cluster** (**`merlin6`**) is the default cluster configured on the login nodes. Any job submission will use this cluster by default, unless
the `--cluster` option is specified with one of the other existing clusters.
For further information about how to use this cluster, please visit: [**Merlin6 CPU Slurm Cluster documentation**](/merlin6/slurm-configuration.html).
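
As an illustration only (the job name, time limit and resources below are placeholders, not recommendations), a minimal submission from a high performance storage area to the default `merlin6` cluster could look like this:

```bash
cd /data/user/$USER

cat > myjob.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
hostname
EOF

# Without the --cluster option, the job is submitted to the default merlin6 cluster
sbatch myjob.sh
```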
### Merlin6 GPU cluster access
The **Merlin6 GPU cluster** (**`gmerlin6`**) is visible from the login nodes. However, to submit jobs to this cluster, one needs to specify the option `--cluster=gmerlin6` when submitting a job or requesting an allocation.
For further information about how to use this cluster, please visit: [**Merlin6 GPU Slurm Cluster documentation**](/gmerlin6/slurm-configuration.html).
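
For example (a sketch only; the GPU count is a placeholder and the exact GPU/GRES options available on `gmerlin6` may differ):

```bash
# Submit a batch job to the GPU cluster
sbatch --cluster=gmerlin6 --gres=gpu:1 myjob.sh

# Or request an interactive allocation on the GPU cluster
salloc --cluster=gmerlin6 --gres=gpu:1
```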
### Merlin5 CPU cluster access
The **Merlin5 CPU cluster** (**`merlin5`**) is visible from the login nodes. However, to submit jobs
to this cluster, one needs to specify the option `--cluster=merlin5` when submitting a job or allocation.
Using this cluster is in general not recommended; however, it is still available for existing users needing
extra computational resources or longer jobs. Keep in mind that this cluster is only supported on a
**best-effort basis**, and it contains very old hardware and configurations.
For further information about how to use this cluster, please visit the [**Merlin5 CPU Slurm Cluster documentation**](/merlin5/slurm-configuration.html).
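
A couple of commands for inspecting and using the old cluster (standard Slurm multi-cluster options; partition names may differ):

```bash
# Show partitions and nodes of the merlin5 cluster
sinfo -M merlin5

# Submit explicitly to merlin5 and check the queue
sbatch --cluster=merlin5 myjob.sh
squeue -M merlin5 -u $USER
```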

View File

@@ -0,0 +1,27 @@
---
title: Introduction
#tags:
#keywords:
last_updated: 28 June 2019
#summary: "Merlin 6 cluster overview"
sidebar: merlin6_sidebar
permalink: /merlin6/cluster-introduction.html
---
## Slurm clusters
* The new Slurm CPU cluster is called [**`merlin6`**](/merlin6/cluster-introduction.html).
* The new Slurm GPU cluster is called [**`gmerlin6`**](/gmerlin6/cluster-introduction.html)
* The old Slurm *merlin* cluster is still active and best-effort support is provided.
This cluster was renamed [**merlin5**](/merlin5/cluster-introduction.html).
Since July 2019, **`merlin6`** has been the **default Slurm cluster**: any job submitted from the login nodes is sent to this cluster unless another cluster is explicitly specified.
* Users can keep submitting to the old *`merlin5`* computing nodes by using the option ``--cluster=merlin5``.
* Users submitting to the **`gmerlin6`** GPU cluster need to specify the option ``--cluster=gmerlin6``.
### Slurm 'merlin6'
The **CPU nodes** are configured in a **Slurm** cluster called **`merlin6`**, which is
the _**default Slurm cluster**_. Hence, if no Slurm cluster is specified (with the
`--cluster` option), this is the cluster to which jobs will be sent.
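
In other words (a brief sketch, assuming a job script `myjob.sh`):

```bash
sbatch myjob.sh                      # submitted to the default cluster, merlin6
sbatch --cluster=gmerlin6 myjob.sh   # submitted to the GPU cluster
sbatch --cluster=merlin5 myjob.sh    # submitted to the old merlin5 cluster
```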

View File

@@ -0,0 +1,42 @@
---
title: Code Of Conduct
#tags:
#keywords:
last_updated: 13 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/code-of-conduct.html
---
## The Basic principle
The basic principle is courtesy and consideration for other users.
* Merlin6 is a system shared by many users, therefore you are kindly requested to apply common courtesy in using its resources. Please follow our guidelines which aim at providing and maintaining an efficient compute environment for all our users.
* Basic shell programming skills are an essential requirement in a Linux/UNIX HPC cluster environment; a proficiency in shell programming is greatly beneficial.
## Interactive nodes
* The interactive nodes (also known as login nodes) are for development and quick testing:
* It is **strictly forbidden to run production jobs** on the login nodes. All production jobs must be submitted to the batch system.
* It is **forbidden to run long processes** occupying big parts of a login node's resources.
* In accordance with the previous rules, **misbehaving processes will be killed** in order to keep the system responsive for other users.
## Batch system
* Make sure that no broken or run-away processes are left when your job is done. Keep the process space clean on all nodes.
* During the runtime of a job, it is mandatory to use the ``/scratch`` and ``/shared-scratch`` partitions for temporary data (see the job-script sketch after this list):
* It is **forbidden** to use ``/data/user``, ``/data/project`` or ``/psi/home/`` for that purpose.
* Always remove files you do not need any more (e.g. core dumps, temporary files) as early as possible. Keep the disk space clean on all nodes.
* Prefer ``/scratch`` over ``/shared-scratch`` and use the latter only when you require the temporary files to be visible from multiple nodes.
* Read the description in the **Merlin6 directory structure** documentation to learn about the correct usage of each partition type.
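
A minimal job-script sketch of this practice (paths and job parameters are illustrative only):

```bash
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --time=01:00:00

# Use a per-job directory on local scratch for temporary data
TMPDIR="/scratch/$USER/$SLURM_JOB_ID"
mkdir -p "$TMPDIR"

# Remove the temporary data when the job ends, even if it fails
trap 'rm -rf "$TMPDIR"' EXIT

# ... run the actual work here, writing temporary files to $TMPDIR ...
```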
## System Administrator Rights
* The system administrator has the right to temporarily block the access to Merlin6 for an account violating the Code of Conduct in order to maintain the efficiency and stability of the system.
* Repetitive violations by the same user will be escalated to the user's supervisor.
* The system administrator has the right to delete files in the **scratch** directories:
  * after a job, if the job failed to clean up its files;
  * during the job, in order to prevent a job from destabilizing a node or multiple nodes.
* The system administrator has the right to kill any misbehaving running processes.

View File

@@ -0,0 +1,166 @@
---
title: Hardware And Software Description
#tags:
#keywords:
last_updated: 13 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/hardware-and-software.html
---
## Hardware
### Computing Nodes
The new Merlin6 cluster is based on **four** [**HPE Apollo k6000 Chassis**](https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00016641enw):
* *Three* of them contain 24 x [**HP Apollo XL230K Gen10**](https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=a00016634enw) blades each.
* A *fourth* chassis was purchased in 2021 with [**HP Apollo XL230K Gen10**](https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=a00016634enw) blades dedicated to a few experiments. These blades have slightly different components, depending on the specific project requirements.
The connectivity for the Merlin6 cluster is based on **ConnectX-5 EDR-100Gbps**, and each chassis contains:
* 1 x [HPE Apollo InfiniBand EDR 36-port Unmanaged Switch](https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00016643enw)
* 24 internal EDR-100Gbps ports (1 port per blade for internal low latency connectivity)
* 12 external EDR-100Gbps ports (for external low latency connectivity)
<table>
<thead>
<tr>
<th scope='colgroup' style="vertical-align:middle;text-align:center;" colspan="8">Merlin6 CPU Computing Nodes</th>
</tr>
<tr>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Chassis</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Node</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Processor</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Sockets</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Cores</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Threads</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Scratch</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Memory</th>
</tr>
</thead>
<tbody>
<tr style="vertical-align:middle;text-align:center;" ralign="center">
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>#0</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>merlin-c-0[01-24]</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><a href="https://ark.intel.com/content/www/us/en/ark/products/120491/intel-xeon-gold-6152-processor-30-25m-cache-2-10-ghz.html">Intel Xeon Gold 6152</a></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">44</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">1.2TB</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">384GB</td>
</tr>
<tr style="vertical-align:middle;text-align:center;" ralign="center">
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>#1</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>merlin-c-1[01-24]</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><a href="https://ark.intel.com/content/www/us/en/ark/products/120491/intel-xeon-gold-6152-processor-30-25m-cache-2-10-ghz.html">Intel Xeon Gold 6152</a></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">44</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">1.2TB</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">384GB</td>
</tr>
<tr style="vertical-align:middle;text-align:center;" ralign="center">
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>#2</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>merlin-c-2[01-24]</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><a href="https://ark.intel.com/content/www/us/en/ark/products/120491/intel-xeon-gold-6152-processor-30-25m-cache-2-10-ghz.html">Intel Xeon Gold 6152</a></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">44</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">1.2TB</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">384GB</td>
</tr>
<tr style="vertical-align:middle;text-align:center;" ralign="center">
<td style="vertical-align:middle;text-align:center;" rowspan="2"><b>#3</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>merlin-c-3[01-06]</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="2"><a href="https://ark.intel.com/content/www/us/en/ark/products/199343/intel-xeon-gold-6240r-processor-35-75m-cache-2-40-ghz.html">Intel Xeon Gold 6240R</a></td>
<td style="vertical-align:middle;text-align:center;" rowspan="2">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="2">48</td>
<td style="vertical-align:middle;text-align:center;" rowspan="2">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="2">1.2TB</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">384GB</td>
</tr>
<tr style="vertical-align:middle;text-align:center;" ralign="center">
<td rowspan="1"><b>merlin-c-3[07-12]</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">768GB</td>
</tr>
</tbody>
</table>
Each blade contains an NVMe disk, where up to 300GB are dedicated to the O.S. and ~1.2TB are reserved for local `/scratch`.
### Login Nodes
*One old login node* (``merlin-l-01.psi.ch``) is inherited from the previous Merlin5 cluster. It is mainly used for running some BIO services (`cryosparc`) and for submitting jobs.
*Two new login nodes* (``merlin-l-001.psi.ch``, ``merlin-l-002.psi.ch``) with a configuration similar to the Merlin6 computing nodes are available to users. They are mainly used
for compiling software and submitting jobs.
The connectivity is based on **ConnectX-5 EDR-100Gbps** for the new login nodes, and **ConnectIB FDR-56Gbps** for the old one.
<table>
<thead>
<tr>
      <th scope='colgroup' style="vertical-align:middle;text-align:center;" colspan="8">Merlin6 Login Nodes</th>
</tr>
<tr>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Hardware</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Node</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Processor</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Sockets</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Cores</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Threads</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Scratch</th>
<th scope='col' style="vertical-align:middle;text-align:center;" colspan="1">Memory</th>
</tr>
</thead>
<tbody>
<tr style="vertical-align:middle;text-align:center;" ralign="center">
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>Old</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>merlin-l-01</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><a href="https://ark.intel.com/products/91768/Intel-Xeon-Processor-E5-2697A-v4-40M-Cache-2-60-GHz-">Intel Xeon E5-2697AV4</a></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">16</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">100GB</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">512GB</td>
</tr>
<tr style="vertical-align:middle;text-align:center;" ralign="center">
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>New</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><b>merlin-l-00[1,2]</b></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1"><a href="https://ark.intel.com/content/www/us/en/ark/products/120491/intel-xeon-gold-6152-processor-30-25m-cache-2-10-ghz.html">Intel Xeon Gold 6152</a></td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">44</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">2</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">1.8TB</td>
<td style="vertical-align:middle;text-align:center;" rowspan="1">384GB</td>
</tr>
</tbody>
</table>
### Storage
The storage is based on the [Lenovo Distributed Storage Solution for IBM Spectrum Scale](https://lenovopress.com/lp0626-lenovo-distributed-storage-solution-for-ibm-spectrum-scale-x3650-m5):
* 2 x **Lenovo DSS G240** systems, each one composed of 2 **ThinkSystem SR650** IO nodes attached to 4 x **Lenovo Storage D3284 High Density Expansion** enclosures.
* Each IO node has a connectivity of 400Gbps (4 x EDR 100Gbps ports, 2 of them are **ConnectX-5** and 2 are **ConnectX-4**).
The storage solution is connected to the HPC clusters through 2 x **Mellanox SB7800 InfiniBand 1U Switches** for high availability and load balancing.
### Network
Merlin6 cluster connectivity is based on [**InfiniBand**](https://en.wikipedia.org/wiki/InfiniBand) technology. This allows fast, very low latency access to the data, as well as running
highly efficient MPI-based jobs:
* Connectivity amongst computing nodes on different chassis ensures up to 1200Gbps of aggregated bandwidth.
* Intra-chassis connectivity (communication amongst computing nodes in the same chassis) ensures up to 2400Gbps of aggregated bandwidth.
* Communication to the storage ensures up to 800Gbps of aggregated bandwidth.
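
As a rough cross-check, the per-chassis figures follow from the port counts listed above: the 12 external EDR uplinks per chassis give 12 x 100Gbps = 1200Gbps, and the 24 internal EDR ports give 24 x 100Gbps = 2400Gbps within a chassis.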
Merlin6 cluster currently contains 5 Infiniband Managed switches and 3 Infiniband Unmanaged switches (one per HP Apollo chassis):
* 1 x **MSX6710** (FDR) for connecting old GPU nodes, old login nodes and MeG cluster to the Merlin6 cluster (and storage). No High Availability mode possible.
* 2 x **MSB7800** (EDR) for connecting Login Nodes, Storage and other nodes in High Availability mode.
* 3 x **HP EDR Unmanaged** switches, one embedded in each HP Apollo k6000 chassis.
* 2 x **MSB7700** (EDR) are the top switches, interconnecting the Apollo unmanaged switches and the managed switches (MSX6710, MSB7800).
## Software
On Merlin6, we try to keep the software stack up to date in order to benefit from the latest features and improvements. Hence, **Merlin6** runs:
* [**RedHat Enterprise Linux 7**](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index)
* [**Slurm**](https://slurm.schedmd.com/), we usually try to keep it up to date with the most recent versions.
* [**GPFS v5**](https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/ibmspectrumscale502_welcome.html)
* [**MLNX_OFED LTS v.5.2-2.2.0.0 or newer**](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed) for all **ConnectX-5** or superior cards.
* [MLNX_OFED LTS v.4.9-2.2.4.0](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed) is installed for remaining **ConnectX-3** and **ConnectIB** cards.
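
A quick way to check the installed versions from a login node (standard commands for the components above; the GPFS package name is an assumption and may differ):

```bash
cat /etc/redhat-release   # RHEL release
sinfo --version           # Slurm version
ofed_info -s              # MLNX_OFED version
rpm -q gpfs.base          # GPFS / Spectrum Scale version (package name may differ)
```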

View File

@@ -0,0 +1,52 @@
---
title: Introduction
#tags:
#keywords:
last_updated: 28 June 2019
#summary: "Merlin 6 cluster overview"
sidebar: merlin6_sidebar
permalink: /merlin6/introduction.html
redirect_from:
- /merlin6
- /merlin6/index.html
---
## The Merlin local HPC cluster
Historically, the local HPC clusters at PSI were named Merlin. Over the years,
multiple generations of Merlin have been deployed.
### Merlin6
Merlin6 is the official PSI local HPC cluster for development and
mission-critical applications. It was built in 2019 and replaces
the Merlin5 cluster.
Merlin6 is designed to be extensible, so it is technically possible to add
more compute nodes and cluster storage without a significant increase in
manpower and operational costs.
Merlin6 is mostly based on **CPU** resources, but also contains a small amount
of **GPU**-based resources which are mostly used by the BIO Division and Deep Learning projects:
* The Merlin6 CPU nodes are in a dedicated Slurm cluster called [**`merlin6`**](/merlin6/slurm-configuration.html).
* This is the default Slurm cluster configured on the login nodes, and any job submitted without the `--cluster` option will be submitted to this cluster.
* The Merlin6 GPU resources are in a dedicated Slurm cluster called [**`gmerlin6`**](/gmerlin6/slurm-configuration.html).
* Users submitting to the **`gmerlin6`** GPU cluster need to specify the option ``--cluster=gmerlin6``.
### Merlin5
The old Slurm **CPU** *merlin* cluster is still active and is maintained on a best-effort basis.
* The Merlin5 CPU cluster is called [**merlin5**](/merlin5/slurm-configuration.html).
## Merlin Architecture
The following image shows the Slurm architecture design for the Merlin5 & Merlin6 clusters:
![Merlin6 Slurm Architecture Design]({{ "/images/merlin-slurm-architecture.png" }})
### Merlin6 Architecture Diagram
The following image shows the Merlin6 cluster architecture diagram:
![Merlin6 Architecture Diagram]({{ "/images/merlinschema3.png" }})

View File

@@ -0,0 +1,124 @@
---
title: Requesting Accounts
#tags:
#keywords:
last_updated: 28 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/request-account.html
---
Requesting access to the cluster must be done through **[PSI Service Now](https://psi.service-now.com/psisp)** as an
*Incident Request*. AIT and we are working on an integrated ServiceNow form to simplify this process in the future.
Since the ticket *priority* is *Low* for non-emergency requests of this kind, it might take up to 56h in the worst case until access to the cluster is granted (raise the priority if you have strong reasons for faster access).
---
## Requesting Access to Merlin6
Access to Merlin6 is regulated by a PSI user's account being a member of the **svc-cluster_merlin6** group.
Registration for **Merlin6** access *must be done* through **[PSI Service Now](https://psi.service-now.com/psisp)**:
* Please open a ticket as *Incident Request*, with subject:
```
Subject: [Merlin6] Access Request for user xxxxx
```
* Text content (please always use this template and fill in the fields marked by `xxxxx`):
```
Dear HelpDesk,
I would like to request access to the Merlin6 cluster. This is my account information
* Last Name: xxxxx
* First Name: xxxxx
* PSI user account: xxxxx
Please add me to the following Unix groups:
* 'svc-cluster_merlin6'
Thanks,
```
---
## Requesting Access to Merlin5
Merlin5 computing nodes will be available for some time as a **best effort** service.
For accessing the old Merlin5 resources, users should belong to the **svc-cluster_merlin5** Unix Group.
Registration for **Merlin5** access *must be done* through **[PSI Service Now](https://psi.service-now.com/psisp)**:
* Please open a ticket as *Incident Request*, with subject:
```
Subject: [Merlin5] Access Request for user xxxxx
```
* Text content (please always use this template and fill in the fields marked by `xxxxx`):
```
Dear HelpDesk,
I would like to request access to the Merlin5 cluster. This is my account information
* Last Name: xxxxx
* First Name: xxxxx
* PSI user account: xxxxx
Please add me to the following Unix groups:
* 'svc-cluster_merlin5'
Thanks,
```
Alternatively, if you want to request access to both Merlin5 and Merlin6, you can request it in the same ticket as follows:
* Use the template from **[Requesting Access to Merlin6](#requesting-access-to-merlin6)**.
* Add the **``'svc-cluster_merlin5'``** Unix group after the line containing the Merlin6 group **`'svc-cluster_merlin6'`**.
---
## Requesting extra Unix groups
Some users may need to be added to extra specific Unix groups:
* This will grant access to specific resources.
* For example, some BIO users may need to belong to a specific BIO Unix group in order to access that group's project area.
* Supervisors should inform new users which extra groups are needed for their project(s).
When requesting access to **[Merlin6](#requesting-access-to-merlin6)** or **[Merlin5](#requesting-access-to-merlin5)**,
these extra Unix Groups can be added in the same *Incident Request* by supplying additional lines specifying the respective Groups.
Naturally, this step can also be done later when the need arises in a separate **[PSI Service Now](https://psi.service-now.com/psisp)** ticket.
* Please open a ticket as *Incident Request*, with subject:
```
Subject: [Unix Group] Access Request for user xxxxx
```
* Text content (please always use this template):
```
Dear HelpDesk,
I would like to request membership for the Unix Groups listed below. This is my account information
* Last Name: xxxxx
* First Name: xxxxx
* PSI user account: xxxxx
List of unix groups I would like to be added to:
* unix_group_1
* unix_group_2
* ...
* unix_group_N
Thanks,
```
**Important note**: Requesting access to specific Unix groups will require validation from the person responsible for the Unix group. If you ask for inclusion in many groups, it may take longer, since fulfilling the request will depend on more people.

View File

@@ -0,0 +1,72 @@
---
title: Requesting a Project
#tags:
#keywords:
last_updated: 01 July 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/request-project.html
---
A project owns its own storage area, which can be accessed by the project members.
Projects can receive a higher storage quota than user areas and should be the primary way of organizing bigger storage requirements
in a multi-user collaboration.
Access to a project's directories is governed by project members belonging to a common **Unix group**. You may use an existing
Unix group or you may have a new Unix group created especially for the project. The **project responsible** will be the owner of
the Unix group (this is important)!
The **default storage quota** for a project is 1TB (with a maximum *Number of Files* of 1M). If you need a larger assignment, you
need to request this and provide a description of your storage needs.
To request a project, please provide the following information in a **[PSI Service Now ticket](https://psi.service-now.com/psisp)**:
* Please open an *Incident Request* with subject:
```
Subject: [Merlin6] Project Request for project name xxxxxx
```
* and base the text field of the request on this template
```
Dear HelpDesk
I would like to request a new Merlin6 project.
Project Name: xxxxx
UnixGroup: xxxxx # Must be an existing Unix Group
The project responsible is the Owner of the Unix Group.
If you need a storage quota exceeding the defaults, please provide a description
and motivation for the higher storage needs:
Storage Quota: 1TB with a maximum of 1M Files
Reason: (None for default 1TB/1M)
Best regards,
```
**If you need a new Unix group** to be created, you first need to request this group through
a separate **[PSI Service Now ticket](https://psi.service-now.com/psisp)**. Please
use the following template. You can also specify the login names of the initial group
members and the **owner** of the group. The owner of the group is the person who
will be allowed to modify the group.
* Please open an *Incident Request* with subject:
```
Subject: Request for new unix group xxxx
```
* and base the text field of the request on this template
```
Dear HelpDesk
I would like to request a new unix group.
Unix Group Name: unx-xxxxx
Initial Group Members: xxxxx, yyyyy, zzzzz, ...
Group Owner: xxxxx
Best regards,
```