further vibing #2
@@ -37,10 +37,10 @@ independently to ease access for the users and keep independent user accounting.

The following image shows the Merlin6 cluster architecture diagram:

*(Merlin6 cluster architecture diagram image)*

### Merlin5 + Merlin6 Slurm Cluster Architecture Design

The following image shows the Slurm architecture design for the Merlin5 & Merlin6 clusters:

*(Merlin5 & Merlin6 Slurm architecture diagram image)*
@@ -120,15 +120,15 @@ The below table summarizes the hardware setup for the Merlin6 GPU computing node

### Login Nodes

-The login nodes are part of the **[Merlin6](../merlin6/introduction.md)** HPC cluster,
+The login nodes are part of the **[Merlin6](../merlin6/cluster-introduction.md)** HPC cluster,
and are used to compile and to submit jobs to the different ***Merlin Slurm clusters*** (`merlin5`, `merlin6`, `gmerlin6`, etc.).
-Please refer to the **[Merlin6 Hardware Documentation](/merlin6/hardware-and-software.html)** for further information.
+Please refer to the **[Merlin6 Hardware Documentation](../merlin6/hardware-and-software-description.md)** for further information.

### Storage

-The storage is part of the **[Merlin6](/merlin6/introduction.html)** HPC cluster,
+The storage is part of the **[Merlin6](../merlin6/cluster-introduction.md)** HPC cluster,
and is mounted in all the ***Slurm clusters*** (`merlin5`, `merlin6`, `gmerlin6`, etc.).
-Please refer to the **[Merlin6 Hardware Documentation](/merlin6/hardware-and-software.html)** for further information.
+Please refer to the **[Merlin6 Hardware Documentation](../merlin6/hardware-and-software-description.md)** for further information.

### Network
@@ -14,7 +14,7 @@ permalink: /merlin5/cluster-introduction.html
mission-critical applications which was built in 2016-2017. It was an
extension of the Merlin4 cluster and built from existing hardware due
to a lack of central investment in Local HPC Resources. **Merlin5** was
-then replaced by the **[Merlin6](../merlin6/index.md)** cluster in 2019,
+then replaced by the **[Merlin6](../merlin6/cluster-introduction.md)** cluster in 2019,
with an important central investment of ~1.5M CHF. **Merlin5** was mostly
based on CPU resources, but also contained a small amount of GPU-based
resources which were mostly used by the BIO experiments.
@@ -40,5 +40,5 @@ The following image shows the Slurm architecture design for Merlin cluster.
It contains a non-federated multi-cluster setup, with a central Slurm database
and multiple independent clusters (`merlin5`, `merlin6`, `gmerlin6`):

*(Slurm multi-cluster architecture diagram image)*
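Because the clusters share one central Slurm database but are not federated, each cluster is addressed explicitly. A hedged sketch of inspecting the setup from a login node (standard Slurm commands; cluster names taken from the text above):

```
sinfo --clusters=merlin5,merlin6,gmerlin6   # partitions and node states per cluster
sacctmgr list clusters                      # clusters registered in the central Slurm database
```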
@@ -140,9 +140,9 @@ module load paraview
vglrun paraview
```

-Officially, the supported method for running `vglrun` is by using the [NoMachine remote desktop](../../how-to-use-merlin/nomachine.md).
+Officially, the supported method for running `vglrun` is by using the [NoMachine remote desktop](../how-to-use-merlin/nomachine.md).
Running `vglrun` is also possible using SSH with X11 forwarding. However, it is very slow and only recommended when running
-in Slurm (from [NoMachine](../../how-to-use-merlin/nomachine.md)). Please avoid running `vglrun` over SSH from a desktop or laptop.
+in Slurm (from [NoMachine](../how-to-use-merlin/nomachine.md)). Please avoid running `vglrun` over SSH from a desktop or laptop.
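For instance, from a NoMachine session on a login node, a hedged sketch of running ParaView through Slurm with `vglrun` (cluster name and options are illustrative):

```
module load paraview
srun --clusters=merlin6 --x11 vglrun paraview
```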
## Software
@@ -11,9 +11,9 @@ permalink: /merlin6/cluster-introduction.html

## Slurm clusters

* The new Slurm CPU cluster is called [**`merlin6`**](cluster-introduction.md).
-* The new Slurm GPU cluster is called [**`gmerlin6`**](../../gmerlin6/cluster-introduction.md)
+* The new Slurm GPU cluster is called [**`gmerlin6`**](../gmerlin6/cluster-introduction.md)
* The old Slurm *merlin* cluster is still active and best effort support is provided.
-  The cluster was renamed as [**merlin5**](../../merlin5/cluster-introduction.md).
+  The cluster was renamed as [**merlin5**](../merlin5/cluster-introduction.md).

From July 2019, **`merlin6`** became the **default Slurm cluster**: any job submitted from the login nodes is sent to that cluster if no other cluster is specified.
* Users can keep submitting to the old *`merlin5`* computing nodes by using the option ``--cluster=merlin5``, as in the sketch below.
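A hedged example of targeting each cluster at submission time (`myjob.sh` is a placeholder batch script):

```
sbatch myjob.sh                     # goes to the default cluster (merlin6)
sbatch --cluster=merlin5 myjob.sh   # explicitly target the legacy merlin5 cluster
squeue --clusters=merlin5,merlin6   # check jobs on both clusters
```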
@@ -70,7 +70,7 @@ collected at a ***beamline***, you may have been assigned a **`p-group`**
Groups are usually assigned to a PI, and then individual user accounts are added to the group. This must be done
upon user request through PSI Service Now. For existing **a-groups** and **p-groups**, you can follow the standard
central procedures. Alternatively, if you do not know how to do that, follow the Merlin6
-**[Requesting extra Unix groups](../../quick-start-guide/requesting-accounts.md#requesting-extra-unix-groups)** procedure, or open
+**[Requesting extra Unix groups](../quick-start-guide/requesting-accounts.md)** procedure, or open
a **[PSI Service Now](https://psi.service-now.com/psisp)** ticket.

### Documentation
@@ -22,11 +22,11 @@ If they are missing, you can install them using the Software Kiosk icon on the D

2. *[Optional]* Enable ``xterm`` to have similar mouse behaviour as in Linux:

   *(xterm settings screenshot)*

3. Create a session to a Merlin login node and *Open*:

   *(PuTTY session screenshot)*

## SSH with PuTTY with X11 Forwarding
@@ -44,4 +44,4 @@ using the Software Kiosk icon (should be located on the Desktop).

2. Enable X11 Forwarding in your SSH client. For example, in PuTTY:

   *(PuTTY X11 forwarding screenshot)*
@@ -28,7 +28,7 @@ visibility.
## Direct transfer via Merlin6 login nodes

The following methods transfer data directly via the [login
-nodes](../../quick-start-guide/accessing-interactive-nodes.md#login-nodes-hardware-description). They are suitable
+nodes](../quick-start-guide/accessing-interactive-nodes.md). They are suitable
for use from within the PSI network.

### Rsync
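A minimal sketch of a direct transfer with `rsync` through a login node; the host name, paths, and options are illustrative:

```
rsync -avz --progress ./results/ $USER@merlin-l-001.psi.ch:/data/user/$USER/results/
```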
@@ -72,7 +72,7 @@ The purpose of the software is to send a large file to someone, have that file a

From August 2024, Merlin is connected to the **[PSI Data Transfer](https://www.psi.ch/en/photon-science-data-services/data-transfer)** service,
`datatransfer.psi.ch`. This is a central service managed by the **[Linux team](https://linux.psi.ch/index.html)**. However, any problems or questions related to it can be directly
-[reported](../../99-support/contact.md) to the Merlin administrators, who will forward the request if necessary.
+[reported](../99-support/contact.md) to the Merlin administrators, who will forward the request if necessary.

The PSI Data Transfer server supports the following protocols:
* Data Transfer - SSH (scp / rsync)
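For the SSH protocol listed above, a hedged sketch (the remote path is illustrative):

```
scp ./dataset.tar.gz $USER@datatransfer.psi.ch:/data/user/$USER/
```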
@@ -167,4 +167,4 @@ provides a helpful wrapper over the Gnome storage utilities, and provides suppor
- [others](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/managing-storage-volumes-in-gnome_using-the-desktop-environment-in-rhel-8#gvfs-back-ends_managing-storage-volumes-in-gnome)

-[More instructions on using `merlin_rmount`](merlin-rmount.md)
+[More instructions on using `merlin_rmount`](../software-support/merlin-rmount.md)
@@ -36,7 +36,7 @@ recalculate the notebook cells with this new kernel.

These environments are also available for standard work in a shell session. You
can activate an environment in a normal merlin terminal session by using the
-`module` (q.v. [using Pmodules](../../how-to-use-merlin/using-modules.md)) command to load anaconda
+`module` (q.v. [using Pmodules](../how-to-use-merlin/using-modules.md)) command to load anaconda
python, and from there using the `conda` command to switch to the desired
environment:
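A hedged sketch of those two steps (the module and environment names are illustrative):

```
module load anaconda    # load anaconda python via Pmodules
conda env list          # list the available environments
conda activate myenv    # switch to the desired environment
```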
@@ -79,7 +79,7 @@ queue. Additional customization can be implemented using the *'Optional user
defined line to be added to the batch launcher script'* option. This line is
added to the submission script at the end of other `#SBATCH` lines. Parameters can
be passed to SLURM by starting the line with `#SBATCH`, like in [Running Slurm
-Scripts](../../slurm-general-docs/running-jobs.md). Some ideas:
+Scripts](../slurm-general-docs/running-jobs.md). Some ideas:
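For instance, an extra line such as the following (the value is illustrative) raises the job's time limit:

```
#SBATCH --time=08:00:00
```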
### Request additional memory
@@ -119,16 +119,16 @@ salloc --clusters=merlin6 -N 2 -n 2 $SHELL

#### Graphical access

-[NoMachine](../../how-to-use-merlin/nomachine.md) is the officially supported service for graphical
+[NoMachine](../how-to-use-merlin/nomachine.md) is the officially supported service for graphical
access in the Merlin cluster. This service is running on the login nodes. Check the
-document [{Accessing Merlin -> NoMachine}](../../how-to-use-merlin/nomachine.md) for details about
+document [{Accessing Merlin -> NoMachine}](../how-to-use-merlin/nomachine.md) for details about
how to connect to the **NoMachine** service in the Merlin cluster.

For other, not officially supported graphical access methods (X11 forwarding):

-* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](../../how-to-use-merlin/connect-from-linux.md)
-* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](../../how-to-use-merlin/connect-from-windows.md)
-* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](../../how-to-use-merlin/connect-from-macos.md)
+* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](../how-to-use-merlin/connect-from-linux.md)
+* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](../how-to-use-merlin/connect-from-windows.md)
+* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](../how-to-use-merlin/connect-from-macos.md)
### 'srun' with x11 support
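A hedged sketch of what such an interactive, X11-forwarded session can look like (options are illustrative; assumes a working X11 display, e.g. from NoMachine):

```
srun --clusters=merlin6 --pty --x11 $SHELL
```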
@@ -112,7 +112,7 @@ Slurm batch system using allocations:
is not always possible (depending on the usage of the cluster).

Please refer to the documentation **[Running Interactive
-Jobs](../../slurm-general-docs/interactive-jobs.md)** for further information about different
+Jobs](../slurm-general-docs/interactive-jobs.md)** for further information about different
ways of running interactive jobs in the Merlin6 cluster.

### Requirements
@@ -124,7 +124,7 @@ communication between the GUI and the different nodes. For doing that, one must
have a **passphrase-protected** SSH key. To check whether you already have one,
run **`ls $HOME/.ssh/`** and look for **`id_rsa`** files. To deploy SSH keys
for running Fluent interactively, one should
-follow this documentation: **[Configuring SSH Keys](../../how-to-use-merlin/ssh-keys.md)**
+follow this documentation: **[Configuring SSH Keys](../how-to-use-merlin/ssh-keys.md)**
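A hedged sketch of the check and key creation (standard OpenSSH commands; choose a passphrase when prompted):

```
ls $HOME/.ssh/       # check for existing id_rsa / id_rsa.pub
ssh-keygen -t rsa    # create a new key pair; set a passphrase when asked
```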
#### List of hosts
@@ -99,7 +99,7 @@ To setup HFSS RSM for using it with the Merlin cluster, it must be done from the
Running jobs through Slurm from **ANSYS Electronics Desktop** is the recommended way of
running ANSYS HFSS when submitting from an ANSYS HFSS installation on a Merlin
login node. **ANSYS Electronics Desktop** usually needs to be run from the
-**[Merlin NoMachine](../../how-to-use-merlin/nomachine.md)** service, which currently runs
+**[Merlin NoMachine](../how-to-use-merlin/nomachine.md)** service, which currently runs
on:

* `merlin-l-001.psi.ch`
@@ -4,7 +4,7 @@ This document describes generic information of how to load and run ANSYS softwar

## ANSYS software in Pmodules

-The ANSYS software can be loaded through **[PModules](../../how-to-use-merlin/using-modules.md)**.
+The ANSYS software can be loaded through **[PModules](../how-to-use-merlin/using-modules.md)**.

The default ANSYS versions are loaded from the central PModules repository.
However, there are some known problems that can pop up when using some specific ANSYS packages in advanced mode.
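A hedged sketch of loading it (the exact module name and version depend on the PModules repository):

```
module avail ANSYS    # list the ANSYS versions visible in PModules
module load ANSYS     # load a default ANSYS version
```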
@@ -14,7 +14,7 @@ All PSI users can ask for access to the Merlin7 cluster. Access to Merlin7 is re

Requesting **Merlin7** access *has to be done* using the **[Request Linux Group Membership](https://psi.service-now.com/psisp?id=psi_new_sc_cat_item&sys_id=84f2c0c81b04f110679febd9bb4bcbb1)** form, available in [PSI's central Service Catalog](https://psi.service-now.com/psisp) on Service Now.



Mandatory fields you need to fill in:
* **`Order Access for user:`** Defaults to the logged-in user. However, requesting access for another user is also possible.
@@ -70,7 +70,7 @@ Supervisors should inform new users which extra groups are needed for their proj

Requesting membership for a specific Unix group *has to be done* with the corresponding **[Request Linux Group Membership](https://psi.service-now.com/psisp?id=psi_new_sc_cat_item&sys_id=84f2c0c81b04f110679febd9bb4bcbb1)** form, available in the [PSI Service Now Service Catalog](https://psi.service-now.com/psisp).



Once submitted, the person responsible for the Unix group has to approve the request.
@@ -15,11 +15,11 @@ If they are missing, you can install them using the Software Kiosk icon on the D

2. *[Optional]* Enable ``xterm`` to have similar mouse behaviour as in Linux:

   

3. Create a session to a Merlin login node and *Open*:

   

## SSH with PuTTY with X11 Forwarding
@@ -37,4 +37,4 @@ using the Software Kiosk icon (should be located on the Desktop).

2. Enable X11 Forwarding in your SSH client. For example, in PuTTY:

   
@@ -83,12 +83,12 @@ The NoMachine client can be downloaded from [NoMachine's download page](https://
- **Port**: Enter `4000`.
- **Protocol**: Select `NX`.

  

- On the **Configuration** tab ensure:
  - **Authentication**: Select `Use password authentication`.

  

- Click the **Add** button to finish creating the new connection.
@@ -96,7 +96,7 @@ The NoMachine client can be downloaded from [NoMachine's download page](https://

When prompted, use your PSI credentials to authenticate.



## Managing Sessions
@@ -120,7 +120,7 @@ Access to the login node desktops must be initiated through the `merlin7-nx.psi.
When connecting to the `merlin7-nx.psi.ch` front-end, a new session automatically opens if no existing session is found. Users can manage their sessions as follows:

- **Reconnect to an Existing Session**: If you have an active session, you can reconnect to it by selecting the appropriate icon in the NoMachine client interface. This allows you to resume work without losing any progress.
  
- **Create a Second Session**: If you require a separate session, you can select the **`New Desktop`** button. This option creates a second session on another login node, provided the node is available and operational.

### Session Management Considerations
@@ -30,9 +30,9 @@ The different steps and settings required to make it work are the following:

1. Open the RSM Configuration service in Windows for the ANSYS release you want to configure.
2. Right-click the **HPC Resources** icon followed by **Add HPC Resource...**
   
3. In the **HPC Resource** tab, fill in the corresponding fields as follows:
   
   * **"Name"**: Add here the preferred name for the cluster. For example: `Merlin7 cluster`
   * **"HPC Type"**: Select `SLURM`
   * **"Submit host"**: `service03.merlin7.psi.ch`
@@ -43,22 +43,22 @@ The different steps and settings required to make it work are the following:
   * Select **"Able to directly submit and monitor HPC jobs"**.
   * **"Apply"** changes.
4. In the **"File Management"** tab, fill in the corresponding fields as follows:
   
   * Select **"RSM internal file transfer mechanism"** and add **`/data/scratch/shared`** as the **"Staging directory path on Cluster"**
   * Select **"Scratch directory local to the execution node(s)"** and add **`/scratch`** as the **HPC scratch directory**.
   * **Never check** the option "Keep job files in the staging directory when job is complete" if the previous
     option "Scratch directory local to the execution node(s)" was set.
   * **"Apply"** changes.
5. In the **"Queues"** tab, use the left button to auto-discover partitions.
   
   * If no authentication method was configured before, an authentication window will appear. Use your
     PSI account to authenticate. Notice that the **`PSICH\`** prefix **must not be added**.
   
   * From the partition list, select the ones you typically want to use.
     * In general, standard Merlin users must use **`hourly`**, **`daily`** and **`general`** only.
     * Other partitions are reserved for allowed users only.
   * **"Apply"** changes.
   
6. *[Optional]* Test each selected partition by submitting a test job via its **Submit** button.
@@ -84,12 +84,12 @@ For further information, please visit the **[ANSYS RSM](ansys-rsm.md)** section.

### ANSYS Fluent

-For further information, please visit the **[ANSYS RSM](ansys-fluent.md)** section.
+ANSYS Fluent is not currently documented for Merlin7. Please refer to the [Merlin6 documentation](../merlin6/software-support/ansys-fluent.md) for information about ANSYS Fluent on Merlin6.

### ANSYS CFX

-For further information, please visit the **[ANSYS RSM](ansys-cfx.md)** section.
+ANSYS CFX is not currently documented for Merlin7. Please refer to the [Merlin6 documentation](../merlin6/software-support/ansys-cfx.md) for information about ANSYS CFX on Merlin6.

### ANSYS MAPDL

-For further information, please visit the **[ANSYS RSM](ansys-mapdl.md)** section.
+ANSYS MAPDL is not currently documented for Merlin7. Please refer to the [Merlin6 documentation](../merlin6/software-support/ansys-mapdl.md) for information about ANSYS MAPDL on Merlin6.
@@ -70,7 +70,7 @@ Before starting the migration, make sure you:

* are **registered on Merlin7**.

-  * If not yet registered, please do so following [these instructions](../merlin7/request-account.html)
+  * If not yet registered, please do so following [these instructions](../01-Quick-Start-Guide/requesting-accounts.md)

* **have cleaned up your data to reduce migration time and space usage**.
  * **For the user data migration**, ensure your total usage on Merlin6 (`/psi/home` + `/data/user`) is **well below the 1 TB quota** (use the `merlin_quotas` command, as sketched below). Remember:
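A hedged sketch of checking usage before migrating (`merlin_quotas` is the command named above; the `du` paths are illustrative):

```
merlin_quotas                              # current quota usage per filesystem
du -sh /psi/home/$USER /data/user/$USER    # per-directory totals
```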
@@ -7,6 +7,6 @@ tags:

# Merlin 6 documentation available

-Merlin 6 docs are now available at [Merlin6 docs](../../merlin6/index.md)!
+Merlin 6 docs are now available at [Merlin6 docs](../../merlin6/cluster-introduction.md)!

More complete documentation will be coming shortly.