vibed changes #1

2025-12-11 10:32:15 +01:00
parent e64c265280
commit 4a43d69a1a
45 changed files with 116 additions and 147 deletions

View File

@@ -2,6 +2,8 @@
services:
  mkdocs:
    build: .
+    # explicitly force live-reloading per https://github.com/squidfunk/mkdocs-material/issues/8478
+    command: serve --dev-addr=0.0.0.0:8000 --livereload
    security_opt:
      - no-new-privileges:true
    volumes:
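For reference, a hedged sketch of using the service after this change (the service name comes from the hunk above; the host port mapping is outside this hunk and therefore an assumption):

```bash
# rebuild and start the docs container with the new command override
docker compose up --build mkdocs
# the site should then be served with live reload on port 8000,
# assuming the compose file maps that port to the host
```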

View File

@@ -120,7 +120,7 @@ The below table summarizes the hardware setup for the Merlin6 GPU computing node
### Login Nodes
-The login nodes are part of the **[Merlin6](/merlin6/introduction.html)** HPC cluster,
+The login nodes are part of the **[Merlin6](../merlin6/introduction.md)** HPC cluster,
and are used to compile and to submit jobs to the different ***Merlin Slurm clusters*** (`merlin5`,`merlin6`,`gmerlin6`,etc.).
Please refer to the **[Merlin6 Hardware Documentation](/merlin6/hardware-and-software.html)** for further information.

View File

@@ -14,7 +14,7 @@ permalink: /merlin5/cluster-introduction.html
mission-critical applications which was built in 2016-2017. It was an
extension of the Merlin4 cluster and built from existing hardware due
to a lack of central investment in local HPC resources. **Merlin5** was
-then replaced by the **[Merlin6](/merlin6/index.html)** cluster in 2019,
+then replaced by the **[Merlin6](../merlin6/index.md)** cluster in 2019,
with a significant central investment of ~1.5M CHF. **Merlin5** was mostly
based on CPU resources, but also contained a small amount of GPU-based
resources which were mostly used by the BIO experiments.

View File

@@ -12,7 +12,7 @@ On the first Monday of each month the Merlin6 cluster might be subject to interr
Users will be informed at least one week in advance when a downtime is scheduled for the next month.
Downtimes will be announced to users through the <merlin-users@lists.psi.ch> mailing list. Also, a detailed description
-of the next scheduled interventions will be available in [Next Scheduled Downtimes](/merlin6/downtimes.html#next-scheduled-downtimes).
+of the next scheduled interventions will be available in [Next Scheduled Downtimes](#next-scheduled-downtimes).
---

View File

@@ -12,28 +12,28 @@ permalink: /merlin6/faq.html
## How do I register for Merlin?
-See [Requesting Merlin Access](/merlin6/request-account.html).
+See [Requesting Merlin Access](../quick-start-guide/requesting-accounts.md).
## How do I get information about downtimes and updates?
-See [Get updated through the Merlin User list!](/merlin6/contact.html#get-updated-through-the-merlin-user-list)
+See [Get updated through the Merlin User list!](contact.md#get-updated-through-the-merlin-user-list)
## How can I request access to a Merlin project directory?
Merlin projects are placed in the `/data/project` directory. Access to each project is controlled by Unix group membership.
-If you require access to an existing project, please request group membership as described in [Requesting Unix Group Membership](/merlin6/request-project.html#requesting-unix-group-membership).
+If you require access to an existing project, please request group membership as described in [Requesting Unix Group Membership](../quick-start-guide/requesting-projects.md#requesting-unix-group-membership).
Your project leader or project colleagues will know which Unix group you should belong to. Otherwise, you can check which Unix group is allowed to access that project directory (simply run `ls -ltrhd` on the project directory).
## Can I install software myself?
-Most software can be installed in user directories without any special permissions. We recommend using `/data/user/$USER/bin` for software since home directories are fairly small. For software that will be used by multiple groups/users you can also [request the admins](/merlin6/contact.html) install it as a [module](/merlin6/using-modules.html).
+Most software can be installed in user directories without any special permissions. We recommend using `/data/user/$USER/bin` for software since home directories are fairly small. For software that will be used by multiple groups/users you can also [request the admins](contact.md) install it as a [module](../how-to-use-merlin/using-modules.md).
How to install depends a bit on the software itself. There are three common installation procedures:
1. *binary distributions*. These are easy; just put them in a directory (e.g. `/data/user/$USER/bin`) and add that to your PATH.
2. *source compilation* using make/cmake/autoconf/etc. Usually the compilation scripts accept a `--prefix=/data/user/$USER` directory for where to install it. Then they place files under `<prefix>/bin`, `<prefix>/lib`, etc. The exact syntax should be documented in the installation instructions.
-3. *conda environment*. This is now becoming standard for python-based software, including lots of the AI tools. First follow the [initial setup instructions](/merlin6/python.html#anaconda) to configure conda to use /data/user instead of your home directory. Then you can create environments like:
+3. *conda environment*. This is now becoming standard for python-based software, including lots of the AI tools. First follow the [initial setup instructions](../software-support/python.md#anaconda) to configure conda to use /data/user instead of your home directory. Then you can create environments like:
```
module load anaconda/2019.07
@@ -48,5 +48,5 @@ conda activate myenv
## Something doesn't work
-Check the list of [known problems](/merlin6/known-problems.html) to see if a solution is known.
+Check the list of [known problems](known-problems.md) to see if a solution is known.
-If not, please [contact the admins](/merlin6/contact.html).
+If not, please [contact the admins](contact.md).
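For reference, a minimal sketch of the conda workflow this FAQ entry describes (environment name and packages are illustrative; the module name is taken from the snippet above):

```bash
# load the anaconda module mentioned in the FAQ
module load anaconda/2019.07
# create an environment (name and packages are illustrative); with the setup
# instructions applied, it is created under /data/user instead of the home directory
conda create --name myenv python numpy
conda activate myenv
```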

View File

@@ -100,7 +100,7 @@ getent passwd $USER | awk -F: '{print $NF}'
```
If SHELL does not correspond to the one you need to use, you should request a central change for it.
-This is because Merlin accounts are central PSI accounts. Hence, **change must be requested via [PSI Service Now](/merlin6/contact.html#psi-service-now)**.
+This is because Merlin accounts are central PSI accounts. Hence, **change must be requested via [PSI Service Now](contact.md#psi-service-now)**.
Alternatively, if you work on other PSI Linux systems but need a different SHELL type on Merlin, a temporary change can be performed during login startup.
You can update one of the following files:
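As an illustration only (not taken from the elided file list), such a temporary switch could look like this in a login startup file such as `~/.bash_profile`:

```bash
# ~/.bash_profile -- illustrative sketch of a temporary shell change at login
# switch interactive logins to zsh without changing the central PSI account
if [ -x /bin/zsh ] && [ -z "$ZSH_VERSION" ]; then
    exec /bin/zsh -l
fi
```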
@@ -140,9 +140,9 @@ module load paraview
vglrun paraview
```
-Officially, the supported method for running `vglrun` is by using the [NoMachine remote desktop](/merlin6/nomachine.html).
+Officially, the supported method for running `vglrun` is by using the [NoMachine remote desktop](../../how-to-use-merlin/nomachine.md).
Running `vglrun` is also possible using SSH with X11 Forwarding. However, it's very slow and only recommended when running
-in Slurm (from [NoMachine](/merlin6/nomachine.html)). Please avoid running `vglrun` over SSH from a desktop or laptop.
+in Slurm (from [NoMachine](../../how-to-use-merlin/nomachine.md)). Please avoid running `vglrun` over SSH from a desktop or laptop.
## Software

View File

@@ -53,7 +53,7 @@ Merlin6 introduces the concept of a *project* directory. These are the recommend
#### Requesting a *project*
-Refer to [Requesting a project](/merlin6/request-project.html)
+Refer to [Requesting a project](../quick-start-guide/requesting-projects.md)
---
@@ -64,7 +64,7 @@ Refer to [Requesting a project](/merlin6/request-project.html)
* Users keep working on Merlin5
* Merlin5 production directories: ``'/gpfs/home/'``, ``'/gpfs/data'``, ``'/gpfs/group'``
* Users may raise any problems (quota limits, inaccessible files, etc.) to merlin-admins@lists.psi.ch
-* Users can start migrating data (see [Migration steps](/merlin6/migrating.html#migration-steps))
+* Users can start migrating data (see [Migration steps](#migration-steps))
* Users should copy their data from Merlin5 ``/gpfs/data`` to Merlin6 ``/data/user`` (a minimal `rsync` sketch follows this hunk)
* Users should copy their home from Merlin5 ``/gpfs/home`` to Merlin6 ``/psi/home``
* Users should inform the admins when migration is done and which directories were migrated. Deletion of such directories can be requested by admins.
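A hedged sketch of such a copy with `rsync` (source and destination follow the bullets above; the per-user subdirectory layout is an assumption):

```bash
# copy user data from the Merlin5 to the Merlin6 filesystem
rsync -avhP /gpfs/data/$USER/ /data/user/$USER/
# likewise for the home directory
rsync -avhP /gpfs/home/$USER/ /psi/home/$USER/
```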
@@ -76,7 +76,7 @@ Refer to [Requesting a project](/merlin6/request-project.html)
* Merlin5 directories available in RW on the login nodes: ``'/gpfs/home/'``, ``'/gpfs/data'``, ``'/gpfs/group'``
* On Merlin5 computing nodes, Merlin5 directories are mounted in RW: ``'/gpfs/home/'``, ``'/gpfs/data'``, ``'/gpfs/group'``
* On Merlin5 computing nodes, Merlin6 directories are mounted in RW: ``'/psi/home/'``, ``'/data/user'``, ``'/data/project'``
-* Users must migrate their data (see [Migration steps](/merlin6/migrating.html#migration-steps))
+* Users must migrate their data (see [Migration steps](#migration-steps))
* ALL data must be migrated
* Job submissions go by default to Merlin6; submission to Merlin5 computing nodes remains possible.
* Users should inform the admins when migration is done and which directories were migrated. Deletion of such directories can be requested by admins.
@@ -94,7 +94,7 @@ Refer to [Requesting a project](/merlin6/request-project.html)
### Cleanup / Archive files
* Users must clean up and/or archive files, according to the quota limits of the target storage.
-* If extra space is needed, we advise users to request a [project](/merlin6/request-project.html)
+* If extra space is needed, we advise users to request a [project](../quick-start-guide/requesting-projects.md)
* If you need a larger quota with respect to the maximum allowed number of files, you can request an increase of your user quota.
#### File list

View File

@@ -8,12 +8,12 @@ sidebar: merlin6_sidebar
permalink: /merlin6/troubleshooting.html
---
-For troubleshooting, please contact us through the official channels. See [Contact](/merlin6/contact.html)
+For troubleshooting, please contact us through the official channels. See [Contact](contact.md)
for more information.
## Known Problems
-Before contacting us for support, please check the **[Merlin6 Support: Known Problems](/merlin6/known-problems.html)** page to see if there is an existing
+Before contacting us for support, please check the **[Merlin6 Support: Known Problems](known-problems.md)** page to see if there is an existing
workaround for your specific problem.
## Troubleshooting Slurm Jobs

View File

@@ -10,10 +10,10 @@ permalink: /merlin6/cluster-introduction.html
## Slurm clusters
-* The new Slurm CPU cluster is called [**`merlin6`**](/merlin6/cluster-introduction.html).
+* The new Slurm CPU cluster is called [**`merlin6`**](cluster-introduction.md).
-* The new Slurm GPU cluster is called [**`gmerlin6`**](/gmerlin6/cluster-introduction.html)
+* The new Slurm GPU cluster is called [**`gmerlin6`**](../../gmerlin6/cluster-introduction.md)
* The old Slurm *merlin* cluster is still active and best-effort support is provided.
-The cluster was renamed to [**merlin5**](/merlin5/cluster-introduction.html).
+The cluster was renamed to [**merlin5**](../../merlin5/cluster-introduction.md).
Since July 2019, **`merlin6`** has been the **default Slurm cluster**: any job submitted from the login nodes will go to that cluster unless specified otherwise.
* Users can keep submitting to the old *`merlin5`* computing nodes by using the option ``--cluster=merlin5``.

View File

@@ -70,7 +70,7 @@ collected at a ***beamline***, you may have been assigned a **`p-group`**
Groups are usually assigned to a PI, and then individual user accounts are added to the group. This must be done
upon user request through PSI Service Now. For existing **a-groups** and **p-groups**, you can follow the standard
central procedures. Alternatively, if you do not know how to do that, follow the Merlin6
-**[Requesting extra Unix groups](/merlin6/request-account.html#requesting-extra-unix-groups)** procedure, or open
+**[Requesting extra Unix groups](../../quick-start-guide/requesting-accounts.md#requesting-extra-unix-groups)** procedure, or open
a **[PSI Service Now](https://psi.service-now.com/psisp)** ticket.
### Documentation

View File

@@ -14,7 +14,7 @@ ssh $username@merlin-l-002.psi.ch
## SSH with X11 Forwarding
Official X11 Forwarding support is through NoMachine. Please follow the document
-[{Job Submission -> Interactive Jobs}](##/merlin6/interactive-jobs.html#Requirements) and
+[{Job Submission -> Interactive Jobs}](../slurm-general-docs/interactive-jobs.md#requirements) and
[{Accessing Merlin -> NoMachine}](nomachine.md) for more details. However,
we provide a small recipe for enabling X11 Forwarding in Linux.
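A hedged illustration of what such a client-side setup typically involves (the host name comes from the hunk header above; the options are standard OpenSSH, not the elided recipe itself):

```bash
# one-off: request X11 forwarding for a single session
ssh -X $username@merlin-l-002.psi.ch

# or persist it in ~/.ssh/config for the Merlin login nodes
cat >> ~/.ssh/config <<'EOF'
Host merlin-l-*.psi.ch
    ForwardX11 yes
EOF
```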

View File

@@ -10,7 +10,7 @@ permalink: /merlin6/connect-from-macos.html
## SSH without X11 Forwarding
-This is the standard method. Official X11 support is provided through [NoMachine](/merlin6/nomachine.html).
+This is the standard method. Official X11 support is provided through [NoMachine](nomachine.md).
For normal SSH sessions, use your SSH client as follows:
```bash
@@ -30,8 +30,8 @@ you have it running before starting a SSH connection with X11 forwarding.
### SSH with X11 Forwarding in MacOS
Official X11 support is through NoMachine. Please follow the document
-[{Job Submission -> Interactive Jobs}](/merlin6/interactive-jobs.html#Requirements) and
+[{Job Submission -> Interactive Jobs}](../slurm-general-docs/interactive-jobs.md#requirements) and
-[{Accessing Merlin -> NoMachine}](/merlin6/nomachine.html) for more details. However,
+[{Accessing Merlin -> NoMachine}](nomachine.md) for more details. However,
we provide a small recipe for enabling X11 Forwarding in MacOS.
* Ensure that **[XQuartz](https://www.xquartz.org/)** is installed and running in your MacOS.

View File

@@ -14,7 +14,7 @@ PuTTY is one of the most common tools for SSH.
Check if the following software packages are installed on the Windows workstation by
inspecting the *Start* menu (hint: use the *Search* box to save time):
* PuTTY (should be already installed)
-* *[Optional]* Xming (needed for [SSH with X11 Forwarding](/merlin6/connect-from-windows.html#ssh-with-x11-forwarding))
+* *[Optional]* Xming (needed for [SSH with X11 Forwarding](#ssh-with-putty-with-x11-forwarding))
If they are missing, you can install them using the Software Kiosk icon on the Desktop.
@@ -32,8 +32,8 @@ If they are missing, you can install them using the Software Kiosk icon on the D
## SSH with PuTTY with X11 Forwarding
Official X11 Forwarding support is through NoMachine. Please follow the document
-[{Job Submission -> Interactive Jobs}](/merlin6/interactive-jobs.html#Requirements) and
+[{Job Submission -> Interactive Jobs}](../slurm-general-docs/interactive-jobs.md#requirements) and
-[{Accessing Merlin -> NoMachine}](/merlin6/nomachine.html) for more details. However,
+[{Accessing Merlin -> NoMachine}](nomachine.md) for more details. However,
we provide a small recipe for enabling X11 Forwarding in Windows.
Check if **Xming** is installed on the Windows workstation by inspecting the

View File

@@ -101,7 +101,7 @@ aklog
## Slurm jobs accessing AFS
-Some jobs may need to access private areas in AFS. For that, having a valid [**keytab**](/merlin6/kerberos.html#generating-granting-tickets-with-keytab) file is required.
+Some jobs may need to access private areas in AFS. For that, having a valid [**keytab**](#generating-granting-tickets-with-keytab) file is required.
Then, from inside the batch script one can obtain granting tickets for Kerberos and AFS, which can be used for accessing AFS private areas.
The steps should be the following:
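A hedged sketch of what such a batch-script preamble typically looks like (the keytab path and principal are illustrative placeholders, not the elided steps themselves):

```bash
#!/bin/bash
#SBATCH --job-name=afs-job
# obtain a Kerberos ticket from a pre-generated keytab (path/principal are placeholders)
kinit -kt "$HOME/.k5keytab" "$USER"
# convert the Kerberos ticket into an AFS token
aklog
# ... work that accesses the private AFS area goes here ...
```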

View File

@@ -28,7 +28,7 @@ visibility.
## Direct transfer via Merlin6 login nodes
The following methods transfer data directly via the [login
-nodes](/merlin6/interactive.html#login-nodes-hardware-description). They are suitable
+nodes](../../quick-start-guide/accessing-interactive-nodes.md#login-nodes-hardware-description). They are suitable
for use from within the PSI network.
### Rsync
@@ -72,7 +72,7 @@ The purpose of the software is to send a large file to someone, have that file a
From August 2024, Merlin is connected to the **[PSI Data Transfer](https://www.psi.ch/en/photon-science-data-services/data-transfer)** service,
`datatransfer.psi.ch`. This is a central service managed by the **[Linux team](https://linux.psi.ch/index.html)**. However, any problems or questions related to it can be directly
-[reported](/merlin6/contact.html) to the Merlin administrators, who will forward the request if necessary.
+[reported](../../99-support/contact.md) to the Merlin administrators, who will forward the request if necessary.
The PSI Data Transfer servers support the following protocols:
* Data Transfer - SSH (scp / rsync)
@@ -167,4 +167,4 @@ provides a helpful wrapper over the Gnome storage utilities, and provides suppor
- [others](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/managing-storage-volumes-in-gnome_using-the-desktop-environment-in-rhel-8#gvfs-back-ends_managing-storage-volumes-in-gnome)
-[More instructions on using `merlin_rmount`](/merlin6/merlin-rmount.html)
+[More instructions on using `merlin_rmount`](merlin-rmount.md)
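As a hedged illustration of the direct-transfer path described above (host name appears elsewhere in these docs; paths are examples only):

```bash
# push a local results directory to the user data area via a login node
rsync -avhP ./results/ $USER@merlin-l-001.psi.ch:/data/user/$USER/results/
# scp works the same way for single files
scp bigfile.h5 $USER@merlin-l-001.psi.ch:/data/user/$USER/
```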

View File

@@ -1,5 +0,0 @@
-# Merlin6 HPC Cluster
-!!! danger "Decommissioned"
-    This cluster is no longer generally available.

View File

@@ -23,7 +23,7 @@ Especially the following extensions make working with larger notebooks easier
to add and update a TOC at the head of the document.
* **Collapsible Headings**: allows you to fold all the cells below a heading
-It may also be interesting for you to explore the [Jupytext](jupytext.html) server extension.
+It may also be interesting for you to explore the [Jupytext](jupytext.md) server extension.
## Variable Inspector ## Variable Inspector

View File

@@ -55,7 +55,7 @@ Inside of a Merlin6 terminal shell, you can run the standard commands like
### Your user environment is not among the kernels offered for choice
Refer to our documentation about [using your own custom made
-environments with jupyterhub](/merlin6/jupyterhub.html).
+environments with jupyterhub](jupyterhub.md).
### Cannot save notebook - *xsrf argument missing*

View File

@@ -36,7 +36,7 @@ recalculate the notebook cells with this new kernel.
These environments are also available for standard work in a shell session. You
can activate an environment in a normal merlin terminal session by using the
-`module` (q.v. [using Pmodules](using-modules.html)) command to load anaconda
+`module` (q.v. [using Pmodules](../../how-to-use-merlin/using-modules.md)) command to load anaconda
python, and from there using the `conda` command to switch to the desired
environment:
@@ -79,7 +79,7 @@ queue. Additional customization can be implemented using the *'Optional user
defined line to be added to the batch launcher script'* option. This line is
added to the submission script at the end of other `#SBATCH` lines. Parameters can
be passed to SLURM by starting the line with `#SBATCH`, like in [Running Slurm
-Scripts](/merlin6/running-jobs.html). Some ideas:
+Scripts](../../slurm-general-docs/running-jobs.md). Some ideas:
### Request additional memory
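For instance, such an extra line could request more memory than the default (the value is illustrative, not the figure from the elided example):

```bash
# optional user-defined line appended after the generated #SBATCH lines
#SBATCH --mem=16G
```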

View File

@@ -5,13 +5,13 @@
Merlin contains a multi-cluster setup, where multiple Slurm clusters coexist under the same umbrella.
It basically contains the following clusters:
-* The **Merlin6 Slurm CPU cluster**, which is called [**`merlin6`**](/merlin6/slurm-access.html#merlin6-cpu-cluster-access).
+* The **Merlin6 Slurm CPU cluster**, which is called [**`merlin6`**](#merlin6-cpu-cluster-access).
-* The **Merlin6 Slurm GPU cluster**, which is called [**`gmerlin6`**](/merlin6/slurm-access.html#merlin6-gpu-cluster-access).
+* The **Merlin6 Slurm GPU cluster**, which is called [**`gmerlin6`**](#merlin6-gpu-cluster-access).
-* The *old Merlin5 Slurm CPU cluster*, which is called [**`merlin5`**](/merlin6/slurm-access.html#merlin5-cpu-cluster-access), still supported on a best-effort basis.
+* The *old Merlin5 Slurm CPU cluster*, which is called [**`merlin5`**](#merlin5-cpu-cluster-access), still supported on a best-effort basis.
## Accessing the Slurm clusters
-Any job submission must be performed from a **Merlin login node**. Please refer to the [**Accessing the Interactive Nodes documentation**](/merlin6/interactive.html)
+Any job submission must be performed from a **Merlin login node**. Please refer to the [**Accessing the Interactive Nodes documentation**](accessing-interactive-nodes.md)
for further information about how to access the cluster.
In addition, any job *must be submitted from a high performance storage area visible to the login nodes and the computing nodes*. For this, the possible storage areas are the following:
@@ -28,13 +28,13 @@ The **Merlin6 CPU cluster** (**`merlin6`**) is the default cluster configured
in the login nodes. Any job submission will use this cluster by default, unless
the option `--cluster` is specified with another of the existing clusters.
-For further information about how to use this cluster, please visit: [**Merlin6 CPU Slurm Cluster documentation**](/merlin6/slurm-configuration.html).
+For further information about how to use this cluster, please visit: [**Merlin6 CPU Slurm Cluster documentation**](../slurm-configuration.md).
### Merlin6 GPU cluster access
The **Merlin6 GPU cluster** (**`gmerlin6`**) is visible from the login nodes. However, to submit jobs to this cluster, one needs to specify the option `--cluster=gmerlin6` when submitting a job or allocation.
-For further information about how to use this cluster, please visit: [**Merlin6 GPU Slurm Cluster documentation**](/gmerlin6/slurm-configuration.html).
+For further information about how to use this cluster, please visit: [**Merlin6 GPU Slurm Cluster documentation**](../../gmerlin6/slurm-configuration.md).
### Merlin5 CPU cluster access
@@ -46,4 +46,4 @@ available for old users needing extra computational resources or longer jobs.
Keep in mind that this cluster is only supported on a **best-effort basis**,
and it contains very old hardware and configurations.
-For further information about how to use this cluster, please visit the [**Merlin5 CPU Slurm Cluster documentation**](/gmerlin6/slurm-configuration.html).
+For further information about how to use this cluster, please visit the [**Merlin5 CPU Slurm Cluster documentation**](../../merlin5/slurm-configuration.md).
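A short sketch of how the `--cluster` option described on this page is used in practice (the script name is illustrative):

```bash
# default submission goes to the merlin6 CPU cluster
sbatch job.sh
# explicitly target the GPU cluster or the legacy CPU cluster
sbatch --cluster=gmerlin6 job.sh
sbatch --cluster=merlin5 job.sh
```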

View File

@@ -12,12 +12,12 @@ At present, the **Merlin local HPC cluster** contains _two_ generations of it:
* `merlin6` as the Slurm CPU cluster
* `gmerlin6` as the Slurm GPU cluster.
-Access to the different Slurm clusters is possible from the [**Merlin login nodes**](/merlin6/interactive.html),
+Access to the different Slurm clusters is possible from the [**Merlin login nodes**](accessing-interactive-nodes.md),
-which can be accessed through the [SSH protocol](/merlin6/interactive.html#ssh-access) or the [NoMachine (NX) service](/merlin6/nomachine.html).
+which can be accessed through the [SSH protocol](accessing-interactive-nodes.md#ssh-access) or the [NoMachine (NX) service](../how-to-use-merlin/nomachine.md).
The following image shows the Slurm architecture design for the Merlin5 & Merlin6 (CPU & GPU) clusters:
-![Merlin6 Slurm Architecture Design](/images/merlin-slurm-architecture.png)
+![Merlin6 Slurm Architecture Design](../../images/merlin-slurm-architecture.png)
### Merlin6
@@ -41,9 +41,9 @@ by the BIO Division and by Deep Leaning project.
These computational resources are split into **two** different **[Slurm](https://slurm.schedmd.com/overview.html)** clusters:
-* The Merlin6 CPU nodes are in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster called [**`merlin6`**](/merlin6/slurm-configuration.html).
+* The Merlin6 CPU nodes are in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster called [**`merlin6`**](../slurm-configuration.md).
* This is the **default Slurm cluster** configured in the login nodes: any job submitted without the option `--cluster` will be submitted to this cluster.
-* The Merlin6 GPU resources are in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster called [**`gmerlin6`**](/gmerlin6/slurm-configuration.html).
+* The Merlin6 GPU resources are in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster called [**`gmerlin6`**](../../gmerlin6/slurm-configuration.md).
* Users submitting to the **`gmerlin6`** GPU cluster need to specify the option ``--cluster=gmerlin6``.
### Merlin5
@@ -52,5 +52,5 @@ The old Slurm **CPU** _Merlin_ cluster is still active and is maintained in a be
**Merlin5** only contains **computing node** resources in a dedicated **[Slurm](https://slurm.schedmd.com/overview.html)** cluster.
-* The Merlin5 CPU cluster is called [**merlin5**](/merlin5/slurm-configuration.html).
+* The Merlin5 CPU cluster is called [**merlin5**](../../merlin5/slurm-configuration.md).

View File

@@ -119,16 +119,16 @@ salloc --clusters=merlin6 -N 2 -n 2 $SHELL
#### Graphical access
-[NoMachine](/merlin6/nomachine.html) is the officially supported service for graphical
+[NoMachine](../../how-to-use-merlin/nomachine.md) is the officially supported service for graphical
access in the Merlin cluster. This service is running on the login nodes. Check the
-document [{Accessing Merlin -> NoMachine}](/merlin6/nomachine.html) for details about
+document [{Accessing Merlin -> NoMachine}](../../how-to-use-merlin/nomachine.md) for details about
how to connect to the **NoMachine** service in the Merlin cluster.
For other graphical access methods that are not officially supported (X11 forwarding):
-* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](/merlin6/connect-from-linux.html)
+* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](../../how-to-use-merlin/connect-from-linux.md)
-* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](/merlin6/connect-from-windows.html)
+* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](../../how-to-use-merlin/connect-from-windows.md)
-* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](/merlin6/connect-from-macos.html)
+* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](../../how-to-use-merlin/connect-from-macos.md)
### 'srun' with x11 support

View File

@@ -6,18 +6,18 @@ Before starting using the cluster, please read the following rules:
1. To ease and improve *scheduling* and *backfilling*, always try to **estimate** and **define a proper run time** for your jobs:
* Use `--time=<D-HH:MM:SS>` for that.
-* For very long runs, please consider using ***[Job Arrays with Checkpointing](/merlin6/running-jobs.html#array-jobs-running-very-long-tasks-with-checkpoint-files)***
+* For very long runs, please consider using ***[Job Arrays with Checkpointing](#array-jobs-running-very-long-tasks-with-checkpoint-files)***
2. Try to optimize your jobs for running at most within **one day**. Please consider the following:
* Some software can simply scale up by using more nodes while drastically reducing the run time.
-* Some software allows saving a specific state, and a second job can start from that state: ***[Job Arrays with Checkpointing](/merlin6/running-jobs.html#array-jobs-running-very-long-tasks-with-checkpoint-files)*** can help you with that.
+* Some software allows saving a specific state, and a second job can start from that state: ***[Job Arrays with Checkpointing](#array-jobs-running-very-long-tasks-with-checkpoint-files)*** can help you with that.
* Jobs submitted to **`hourly`** get higher priority than jobs submitted to **`daily`**: always use **`hourly`** for jobs shorter than 1 hour.
* Jobs submitted to **`daily`** get higher priority than jobs submitted to **`general`**: always use **`daily`** for jobs shorter than 1 day.
3. It is **forbidden** to run **very short jobs**, as they cause a lot of overhead and can also cause severe problems for the main scheduler.
* ***Question:*** Is my job a very short job? ***Answer:*** If it lasts only a few seconds or a few minutes, yes.
* ***Question:*** How long should my job run? ***Answer:*** As a *rule of thumb*, from 5 minutes it starts being OK; from 15 minutes it is preferred.
-* Use ***[Packed Jobs](/merlin6/running-jobs.html#packed-jobs-running-a-large-number-of-short-tasks)*** for running a large number of short tasks.
+* Use ***[Packed Jobs](#packed-jobs-running-a-large-number-of-short-tasks)*** for running a large number of short tasks.
4. Do not submit hundreds of similar jobs!
-* Use ***[Array Jobs](/merlin6/running-jobs.html#array-jobs-launching-a-large-number-of-related-jobs)*** for gathering jobs instead.
+* Use ***[Array Jobs](#array-jobs-launching-a-large-number-of-related-jobs)*** for gathering jobs instead.
!!! tip
Having a good estimation of the *time* needed by your jobs, a proper way for
@@ -51,7 +51,7 @@ The following settings are the minimum required for running a job in the Merlin
#SBATCH --clusters=<cluster_name> # Possible values: merlin5, merlin6, gmerlin6
```
-Refer to the documentation of each cluster ([**`merlin6`**](/merlin6/slurm-configuration.html), [**`gmerlin6`**](/gmerlin6/slurm-configuration.html), [**`merlin5`**](/merlin5/slurm-configuration.html)) for further information.
+Refer to the documentation of each cluster ([**`merlin6`**](../slurm-configuration.md), [**`gmerlin6`**](../../gmerlin6/slurm-configuration.md), [**`merlin5`**](../../merlin5/slurm-configuration.md)) for further information.
* **Partitions:** except when using the *default* partition for each cluster, one needs to specify the partition:
@@ -59,7 +59,7 @@ The following settings are the minimum required for running a job in the Merlin
#SBATCH --partition=<partition_name> # Check each cluster documentation for possible values
```
-Refer to the documentation of each cluster ([**`merlin6`**](/merlin6/slurm-configuration.html), [**`gmerlin6`**](/gmerlin6/slurm-configuration.html), [**`merlin5`**](/merlin5/slurm-configuration.html)) for further information.
+Refer to the documentation of each cluster ([**`merlin6`**](../slurm-configuration.md), [**`gmerlin6`**](../../gmerlin6/slurm-configuration.md), [**`merlin5`**](../../merlin5/slurm-configuration.md)) for further information.
* **[Optional] Disabling shared nodes**: by default, nodes are not exclusive. Hence, multiple users can run on the same node. One can request exclusive node usage with the following option:
@@ -73,7 +73,7 @@ The following settings are the minimum required for running a job in the Merlin
#SBATCH --time=<D-HH:MM:SS> # Cannot exceed the partition `MaxTime`
```
-Refer to the documentation of each cluster ([**`merlin6`**](/merlin6/slurm-configuration.html), [**`gmerlin6`**](/gmerlin6/slurm-configuration.html), [**`merlin5`**](/merlin5/slurm-configuration.html)) for further information about partition `MaxTime` values.
+Refer to the documentation of each cluster ([**`merlin6`**](../slurm-configuration.md), [**`gmerlin6`**](../../gmerlin6/slurm-configuration.md), [**`merlin5`**](../../merlin5/slurm-configuration.md)) for further information about partition `MaxTime` values.
* **Output and error files**: by default, a Slurm script will generate standard output (`slurm-%j.out`, where `%j` is the job_id) and error (`slurm-%j.err`, where `%j` is the job_id) files in the directory from where the job was submitted. Users can change the default names with the following options:
@@ -91,7 +91,7 @@ The following settings are the minimum required for running a job in the Merlin
#SBATCH --hint=nomultithread # Don't use extra threads with in-core multi-threading.
```
-Refer to the documentation of each cluster ([**`merlin6`**](/merlin6/slurm-configuration.html), [**`gmerlin6`**](/gmerlin6/slurm-configuration.html), [**`merlin5`**](/merlin5/slurm-configuration.html)) for further information about node configuration and Hyper-Threading.
+Refer to the documentation of each cluster ([**`merlin6`**](../slurm-configuration.md), [**`gmerlin6`**](../../gmerlin6/slurm-configuration.md), [**`merlin5`**](../../merlin5/slurm-configuration.md)) for further information about node configuration and Hyper-Threading.
Note that, depending on your job requirements, you might also need to set `--ntasks-per-core` or `--cpus-per-task` (or other options) in addition to the `--hint` option. Please contact us in case of doubt.
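Putting the minimum settings from this page together, a hedged example batch script could look as follows (cluster, partition, run time and file names are illustrative):

```bash
#!/bin/bash
#SBATCH --clusters=merlin6        # merlin5, merlin6 or gmerlin6
#SBATCH --partition=daily         # illustrative; check the cluster documentation
#SBATCH --time=0-01:00:00         # must not exceed the partition MaxTime
#SBATCH --output=myjob-%j.out     # custom stdout file name
#SBATCH --error=myjob-%j.err      # custom stderr file name
#SBATCH --hint=nomultithread      # do not use extra in-core threads

srun my_application               # illustrative payload
```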
!!! tip

View File

@@ -26,7 +26,7 @@ module load ANSYS/2022R1
It is possible to run Fluent through RSM from a remote PSI (Linux or Windows)
workstation having a local installation of ANSYS Fluent and the RSM client. For
-that, please refer to the [ANSYS RSM](/merlin6/ansys-rsm.html) page in the Merlin
+that, please refer to the [ANSYS RSM](ansys-rsm.md) page in the Merlin
documentation for further information on how to set up an RSM client for
submitting jobs to Merlin.
@@ -112,7 +112,7 @@ Slurm batch system using allocations:
is not always possible (depending on the usage of the cluster).
Please refer to the documentation **[Running Interactive
-Jobs](/merlin6/interactive-jobs.html)** for further information about different
+Jobs](../../slurm-general-docs/interactive-jobs.md)** for further information about different
ways of running interactive jobs in the Merlin6 cluster.
### Requirements
@@ -124,7 +124,7 @@ communication between the GUI and the different nodes. For doing that, one must
have a **passphrase protected** SSH key. If the user does not have SSH keys yet
(simply run **`ls $HOME/.ssh/`** to check whether **`id_rsa`** files exist or
not), they need to be created first. For deploying SSH keys for running Fluent interactively, one should
-follow this documentation: **[Configuring SSH Keys](/merlin6/ssh-keys.html)**
+follow this documentation: **[Configuring SSH Keys](../../how-to-use-merlin/ssh-keys.md)**
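A hedged sketch of creating such a passphrase-protected key (the linked SSH Keys page is authoritative; the key type and size here are illustrative):

```bash
# check whether a key pair already exists
ls $HOME/.ssh/
# create a passphrase-protected key if none exists (enter a passphrase when prompted)
ssh-keygen -t rsa -b 4096 -f $HOME/.ssh/id_rsa
```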
#### List of hosts

View File

@@ -99,7 +99,7 @@ To setup HFSS RSM for using it with the Merlin cluster, it must be done from the
Running jobs through Slurm from **ANSYS Electronics Desktop** is the way for
running ANSYS HFSS when submitting from an ANSYS HFSS installation in a Merlin
login node. **ANSYS Electronics Desktop** usually needs to be run from the
-**[Merlin NoMachine](/merlin6/nomachine.html)** service, which currently runs
+**[Merlin NoMachine](../../how-to-use-merlin/nomachine.md)** service, which currently runs
on:
* `merlin-l-001.psi.ch`

View File

@@ -4,7 +4,7 @@ This document describes generic information of how to load and run ANSYS softwar
## ANSYS software in Pmodules
-The ANSYS software can be loaded through **[PModules](/merlin6/using-modules.html)**.
+The ANSYS software can be loaded through **[PModules](../../how-to-use-merlin/using-modules.md)**.
The default ANSYS versions are loaded from the central PModules repository.
However, there are some known problems that can pop up when using some specific ANSYS packages in advanced mode.
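As an illustration of loading it through PModules (the version string appears elsewhere in these docs; the search command assumes the usual Pmodules interface):

```bash
# list ANSYS versions visible through PModules
module search ANSYS
# load a specific version
module load ANSYS/2022R1
```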

View File

@@ -1,12 +1,4 @@
----
-title: Accessing Interactive Nodes
-#tags:
-keywords: How to, HowTo, access, accessing, nomachine, ssh
-last_updated: 07 September 2022
-#summary: ""
-sidebar: merlin7_sidebar
-permalink: /merlin7/interactive.html
----
+# Accessing Interactive Nodes
## SSH Access

View File

@@ -18,7 +18,7 @@ It basically contains the following clusters:
## Accessing the Slurm clusters
-Any job submission must be performed from a **Merlin login node**. Please refer to the [**Accessing the Interactive Nodes documentation**](/merlin7/interactive.html)
+Any job submission must be performed from a **Merlin login node**. Please refer to the [**Accessing the Interactive Nodes documentation**](accessing-interactive-nodes.md)
for further information about how to access the cluster.
In addition, any job *must be submitted from a high performance storage area visible to the login nodes and the computing nodes*. For this, the possible storage areas are the following:

View File

@@ -16,7 +16,7 @@ The Merlin7 cluster is moving toward **production** state since August 2024, thi
but due to some remaining issues with the platform, the schedule of the migration of users and communities has been delayed. You will be notified well in advance
regarding the migration of data.
-All PSI users can request access to Merlin7; please go to the [Requesting Merlin Accounts](/merlin7/request-account.html) page and complete the steps given there.
+All PSI users can request access to Merlin7; please go to the [Requesting Merlin Accounts](requesting-accounts.md) page and complete the steps given there.
In case you identify errors or missing information, please provide feedback through the [merlin-admins mailing list](mailto:merlin-admins@lists.psi.ch) or [submit a ticket using the PSI service portal](https://psi.service-now.com/psisp).

View File

@@ -10,7 +10,7 @@ permalink: /merlin7/connect-from-linux.html
## SSH without X11 Forwarding
-This is the standard method. Official X11 support is provided through [NoMachine](/merlin7/nomachine.html).
+This is the standard method. Official X11 support is provided through [NoMachine](nomachine.md).
For normal SSH sessions, use your SSH client as follows:
```bash
@@ -21,8 +21,8 @@ ssh $username@login002.merlin7.psi.ch
## SSH with X11 Forwarding
Official X11 Forwarding support is through NoMachine. Please follow the document
-[{Job Submission -> Interactive Jobs}](/merlin7/interactive-jobs.html#Requirements) and
+[{Job Submission -> Interactive Jobs}](../03-Slurm-General-Documentation/interactive-jobs.md#requirements) and
-[{Accessing Merlin -> NoMachine}](/merlin7/nomachine.html) for more details. However,
+[{Accessing Merlin -> NoMachine}](nomachine.md) for more details. However,
we provide a small recipe for enabling X11 Forwarding in Linux.
* For enabling client X11 forwarding, add the following to the start of ``~/.ssh/config``

View File

@@ -10,7 +10,7 @@ permalink: /merlin7/connect-from-macos.html
## SSH without X11 Forwarding
-This is the standard method. Official X11 support is provided through [NoMachine](/merlin7/nomachine.html).
+This is the standard method. Official X11 support is provided through [NoMachine](nomachine.md).
For normal SSH sessions, use your SSH client as follows:
```bash
@@ -29,8 +29,8 @@ you have it running before starting a SSH connection with X11 forwarding.
### SSH with X11 Forwarding in MacOS
Official X11 support is through NoMachine. Please follow the document
-[{Job Submission -> Interactive Jobs}](/merlin7/interactive-jobs.html#Requirements) and
+[{Job Submission -> Interactive Jobs}](../03-Slurm-General-Documentation/interactive-jobs.md#requirements) and
-[{Accessing Merlin -> NoMachine}](/merlin7/nomachine.html) for more details. However,
+[{Accessing Merlin -> NoMachine}](nomachine.md) for more details. However,
we provide a small recipe for enabling X11 Forwarding in MacOS.
* Ensure that **[XQuartz](https://www.xquartz.org/)** is installed and running in your MacOS.

View File

@@ -1,11 +1,4 @@
----
-title: Connecting from a Windows Client
-keywords: microsoft, mocosoft, windows, putty, xming, connecting, client, configuration, SSH, X11
-last_updated: 07 September 2022
-summary: "This document describes a recommended setup for a Windows client."
-sidebar: merlin7_sidebar
-permalink: /merlin7/connect-from-windows.html
----
+# Connecting from a Windows Client
## SSH with PuTTY without X11 Forwarding
@@ -14,7 +7,7 @@ PuTTY is one of the most common tools for SSH.
Check if the following software packages are installed on the Windows workstation by
inspecting the *Start* menu (hint: use the *Search* box to save time):
* PuTTY (should be already installed)
-* *[Optional]* Xming (needed for [SSH with X11 Forwarding](/merlin7/connect-from-windows.html#ssh-with-x11-forwarding))
+* *[Optional]* Xming (needed for [SSH with X11 Forwarding](connect-from-windows.md#ssh-with-x11-forwarding))
If they are missing, you can install them using the Software Kiosk icon on the Desktop.
@@ -32,7 +25,7 @@ If they are missing, you can install them using the Software Kiosk icon on the D
## SSH with PuTTY with X11 Forwarding
Official X11 Forwarding support is through NoMachine. Please follow the document
-[{Job Submission -> Interactive Jobs}](/merlin7/interactive-jobs.html#Requirements) and
+[{Job Submission -> Interactive Jobs}](../03-Slurm-General-Documentation/interactive-jobs.md#requirements) and
[{Accessing Merlin -> NoMachine}](/merlin7/nomachine.html) for more details. However,
we provide a small recipe for enabling X11 Forwarding in Windows.

View File

@@ -134,7 +134,7 @@ aklog
## Slurm jobs accessing AFS
-Some jobs may need to access private areas in AFS. For that, having a valid [**keytab**](/merlin7/kerberos.html#generating-granting-tickets-with-keytab) file is required.
+Some jobs may need to access private areas in AFS. For that, having a valid [**keytab**](kerberos.md#creating-a-keytab-file) file is required.
Then, from inside the batch script one can obtain granting tickets for Kerberos and AFS, which can be used for accessing AFS private areas.
The steps should be the following:

View File

@@ -20,7 +20,7 @@ described here are organised by use case and include usage examples.
 This tool is available on all of the login nodes and provides a brief overview of
 a user's filesystem quotas. These are limits which restrict how much storage (or
 number of files) a user can create. A generic table of filesystem quotas can be
-found on the [Storage page](/merlin7/storage.html#dir_classes).
+found on the [Storage page](storage.md#dir_classes).
 #### Example #1: Viewing quotas
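A minimal invocation sketch; the command name is the one described above, and it is assumed to need no arguments for the default per-user overview:

```bash
# Print the current user's filesystem quotas and usage on a login node
merlin_quotas
```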

View File

@@ -23,7 +23,7 @@ Key Features:
 * **Custom Requests:** If a package, version, or feature is missing, users can contact the support team to explore feasibility for installation.
 {{site.data.alerts.tip}}
-For further information about <b>Pmodules</b> on Merlin7 please refer to the <b><a href="/merlin7/pmodules.html">PSI Modules</a></b> chapter.
+For further information about **PModules** on Merlin7 please refer to the [PSI Modules](../05-Software-Support/pmodules.md) chapter.
 {{site.data.alerts.end}}
 ### Spack Modules
@@ -31,7 +31,7 @@ For further information about <b>Pmodules</b> on Merlin7 please refer to the <b>
 Merlin7 also provides Spack modules, offering a modern and flexible package management system. Spack supports a wide variety of software packages and versions. For more information, refer to the **external [PSI Spack](https://gitea.psi.ch/HPCE/spack-psi) documentation**.
 {{site.data.alerts.tip}}
-For further information about <b>Spack</b> on Merlin7 please refer to the <b><a href="/merlin7/spack.html">Spack</a></b> chapter.
+For further information about **Spack** on Merlin7 please refer to the [Spack](../05-Software-Support/spack.md) chapter.
 {{site.data.alerts.end}}
 ### Cray Environment Modules
@@ -45,6 +45,6 @@ Recommendations:
 * **General Use:** For most applications, prefer PModules, which ensure stability, backward compatibility, and long-term support.
 {{site.data.alerts.tip}}
-For further information about <b>CPE</b> on Merlin7 please refer to the <b><a href="/merlin7/cray-module-env.html">Cray Modules</a></b> chapter.
+For further information about **CPE** on Merlin7 please refer to the [Cray Modules](../05-Software-Support/cray-module.env.md) chapter.
 {{site.data.alerts.end}}
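To make the three module systems concrete, a hedged sketch of typical commands; the module names and versions below are placeholders and are not guaranteed to exist on Merlin7:

```bash
# PModules: search for a package and load a specific release
module search openmpi                 # list matching PModules packages
module load gcc/12.3 openmpi/4.1.6    # versions are placeholders

# Spack modules: list Spack-generated modules (exact naming may differ)
module avail 2>&1 | grep -i spack
# module load <spack-module-name>

# Cray PE: switch the programming environment (usual CPE naming scheme assumed)
module load PrgEnv-gnu
```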

View File

@@ -104,12 +104,12 @@ The home directories are mounted in the login and computing nodes under the dire
 Directory policies:
-* Read **[Important: Code of Conduct](/merlin7/code-of-conduct.html)** for more information about Merlin7 policies.
-* Is **forbidden** to use the home directories for IO-intensive tasks, instead use one of the **[scratch](/merlin7/storage.html#scratch-directories)** areas instead!
+* Read **[Important: Code of Conduct](../01-Quick-Start-Guide/code-of-conduct.md)** for more information about Merlin7 policies.
+* Is **forbidden** to use the home directories for IO-intensive tasks, instead use one of the **[scratch](storage.md#scratch-directories)** areas instead!
 * No backup policy is applied for the user home directories: **users are responsible for backing up their data**.
 Home directory quotas are defined in a per Lustre project basis. The quota can be checked using the `merlin_quotas` command described
-[above](/merlin7/storage.html#how-to-check-quotas).
+[above](storage.md#how-to-check-quotas).
 ### Project data directory
@@ -118,7 +118,7 @@ shared by all members of the project (the project's corresponding UNIX group). W
 project related storage spaces, since it allows users to coordinate. Also, project spaces have more flexible policies
 regarding extending the available storage space.
-Scientists can request a Merlin project space as described in **[[Accessing Merlin -> Requesting a Project]](/merlin7/request-project.html)**.
+Scientists can request a Merlin project space as described in **[[Accessing Merlin -> Requesting a Project]](../01-Quick-Start-Guide/requesting-projects.md)**.
 By default, Merlin can offer **general** project space, centrally covered, as long as it does not exceed 10TB (otherwise, it has to be justified).
 General Merlin projects might need to be reviewed after one year of their creation.
@@ -140,7 +140,7 @@ In the future, a list of `projectid` will be provided, so users can check their
 Directory policies:
-* Read **[Important: Code of Conduct](/merlin7/code-of-conduct.html)** for more information about Merlin7 policies.
+* Read **[Important: Code of Conduct](../01-Quick-Start-Guide/code-of-conduct.md)** for more information about Merlin7 policies.
 * It is **forbidden** to use the data directories as `/scratch` area during a job's runtime, i.e. for high throughput I/O for a job's temporary files.
 * Please Use `/scratch`, `/data/scratch/shared` for this purpose.
 * No backups: users are responsible for managing the backups of their data directories.
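A hedged sketch of how a job can keep its temporary I/O on scratch instead of the data directories; the exact scratch layout below is an assumption:

```bash
#!/bin/bash
#SBATCH --job-name=scratch-io

# Keep temporary, high-throughput files on scratch (path layout is an assumption)
SCRATCHDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCHDIR"
export TMPDIR="$SCRATCHDIR"

# ... run the application, writing its temporary files under $SCRATCHDIR ...

# Copy back only the results worth keeping, then clean up
cp -r "$SCRATCHDIR"/results "$SLURM_SUBMIT_DIR"/ 2>/dev/null || true
rm -rf "$SCRATCHDIR"
```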

View File

@@ -41,7 +41,7 @@ The next chapters contain detailed information about the different transfer meth
 ## Direct Transfer via Merlin7 Login Nodes
-The following methods transfer data directly via the [login nodes](/merlin7/interactive.html#login-nodes-hardware-description). They are suitable for use from **within the PSI network**.
+The following methods transfer data directly via the [login nodes](../01-Quick-Start-Guide/accessing-interactive-nodes.md#login-nodes-hardware-description). They are suitable for use from **within the PSI network**.
 ### Rsync (Recommended for Linux/macOS)
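For example, a hedged rsync invocation from a Linux or macOS machine inside the PSI network; the login node name and the target path are assumptions:

```bash
# Push a local directory to Merlin7 over SSH; -a preserves permissions/timestamps,
# -v is verbose, -P shows progress and allows resuming interrupted transfers
rsync -avP ./my_results/ "$USER"@login001.merlin7.psi.ch:/data/user/"$USER"/my_results/
```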
@@ -133,7 +133,7 @@ SWITCHfilesender <b>is not</b> a long-term storage or archiving solution.
 From August 2024, Merlin is connected to the **[PSI Data Transfer](https://www.psi.ch/en/photon-science-data-services/data-transfer)** service,
 `datatransfer.psi.ch`. This is a central service managed by the **[Linux team](https://linux.psi.ch/index.html)**. However, any problems or questions related to it can be directly
-[reported](/merlin7/contact.html) to the Merlin administrators, which will forward the request if necessary.
+[reported](../99-support/contact.md) to the Merlin administrators, which will forward the request if necessary.
 The PSI Data Transfer servers supports the following protocols:
 * Data Transfer - SSH (scp / rsync)
@@ -172,6 +172,6 @@ provides a helpful wrapper over the Gnome storage utilities, and provides suppor
 - [others](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/managing-storage-volumes-in-gnome_using-the-desktop-environment-in-rhel-8#gvfs-back-ends_managing-storage-volumes-in-gnome)
-[More instruction on using `merlin_rmount`](/merlin7/merlin-rmount.html)
+[More instruction on using `merlin_rmount`](merlin-rmount.md)
 {% endcomment %}

View File

@@ -1,12 +1,4 @@
----
-title: Running Interactive Jobs
-#tags:
-keywords: interactive, X11, X, srun, salloc, job, jobs, slurm, nomachine, nx
-last_updated: 07 August 2024
-summary: "This document describes how to run interactive jobs as well as X based software."
-sidebar: merlin7_sidebar
-permalink: /merlin7/interactive-jobs.html
----
+# Running Interactive Jobs
 ### The Merlin7 'interactive' partition
@@ -120,16 +112,16 @@ salloc: Relinquishing job allocation 165
 #### Graphical access
-[NoMachine](/merlin7/nomachine.html) is the official supported service for graphical
+[NoMachine](../02-How-To-Use-Merlin/nomachine.md) is the official supported service for graphical
 access in the Merlin cluster. This service is running on the login nodes. Check the
-document [{Accessing Merlin -> NoMachine}](/merlin7/nomachine.html) for details about
+document [{Accessing Merlin -> NoMachine}](../02-How-To-Use-Merlin/nomachine.md) for details about
 how to connect to the **NoMachine** service in the Merlin cluster.
 For other non officially supported graphical access (X11 forwarding):
-* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](/merlin7/connect-from-linux.html)
-* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](/merlin7/connect-from-windows.html)
-* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](/merlin7/connect-from-macos.html)
+* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](../02-How-To-Use-Merlin/connect-from-linux.md)
+* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](../02-How-To-Use-Merlin/connect-from-windows.md)
+* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](../02-How-To-Use-Merlin/connect-from-macos.md)
 ### 'srun' with x11 support
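As a rough illustration of the idea, a sketch using Slurm's native X11 forwarding; the partition name and the availability of `--x11` on Merlin7 are assumptions:

```bash
# From an X11-forwarded login session (ssh -Y or NoMachine), request an
# interactive allocation that forwards X11 to the compute node
srun --partition=interactive --x11 --pty bash

# Inside the allocation, graphical tools should display back on your desktop
xterm &
```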

View File

@@ -132,7 +132,7 @@ Where:
 * **`cpu_interactive` QoS:** Is restricted to one node and a few CPUs only, and is intended to be used when interactive
 allocations are necessary (`salloc`, `srun`).
-For additional details, refer to the [CPU partitions](/merlin7/slurm-configuration.html#CPU-partitions) section.
+For additional details, refer to the [CPU partitions](slurm-configuration.md#CPU-partitions) section.
 {{site.data.alerts.tip}}
 Always verify QoS definitions for potential changes using the <b>'sacctmgr show qos format="Name%22,MaxTRESPU%35,MaxTRES%35"'</b> command.
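For instance, checking the current QoS limits and then requesting a small interactive CPU allocation; the QoS name is taken from the text above, while the resource sizes are placeholders and the partition is left to Slurm's default:

```bash
# Inspect the QoS definitions, as suggested in the tip above
sacctmgr show qos format="Name%22,MaxTRESPU%35,MaxTRES%35"

# Request a small interactive allocation under the cpu_interactive QoS
# (adjust partition/resources to whatever `sinfo` reports on the cluster)
salloc --qos=cpu_interactive --ntasks=1 --cpus-per-task=4 --time=01:00:00
```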
@@ -308,7 +308,7 @@ Where:
 * **`gpu_a100_interactive` & `gpu_gh_interactive` QoS:** Guarantee interactive access to GPU nodes for software compilation and
 small testing.
-For additional details, refer to the [GPU partitions](/merlin7/slurm-configuration.html#GPU-partitions) section.
+For additional details, refer to the [GPU partitions](slurm-configuration.md#GPU-partitions) section.
 {{site.data.alerts.tip}}
 Always verify QoS definitions for potential changes using the <b>'sacctmgr show qos format="Name%22,MaxTRESPU%35,MaxTRES%35"'</b> command.

View File

@@ -12,7 +12,7 @@ This document describes generic information of how to load and run ANSYS softwar
 ## ANSYS software in Pmodules
-The ANSYS software can be loaded through **[PModules](/merlin7/pmodules.html)**.
+The ANSYS software can be loaded through **[PModules](pmodules.md)**.
 The default ANSYS versions are loaded from the central PModules repository.
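A hedged sketch of loading ANSYS via PModules; the release string follows the `ANSYS/2025R2` example that appears further down this page, and other versions may be listed by the search:

```bash
# Discover which ANSYS releases are published in PModules
module search ANSYS

# Load a specific release (2025R2 is the release referenced on this page)
module load ANSYS/2025R2
```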
@@ -80,16 +80,16 @@ ANSYS/2025R2:
 **ANSYS Remote Solve Manager (RSM)** is used by ANSYS Workbench to submit computational jobs to HPC clusters directly from Workbench on your desktop.
 Therefore, PSI workstations with direct access to Merlin can submit jobs by using RSM.
-For further information, please visit the **[ANSYS RSM](/merlin7/ansys-rsm.html)** section.
+For further information, please visit the **[ANSYS RSM](ansys-rsm.md)** section.
 ### ANSYS Fluent
-For further information, please visit the **[ANSYS RSM](/merlin7/ansys-fluent.html)** section.
+For further information, please visit the **[ANSYS RSM](ansys-fluent.md)** section.
 ### ANSYS CFX
-For further information, please visit the **[ANSYS RSM](/merlin7/ansys-cfx.html)** section.
+For further information, please visit the **[ANSYS RSM](ansys-cfx.md)** section.
 ### ANSYS MAPDL
-For further information, please visit the **[ANSYS RSM](/merlin7/ansys-mapdl.html)** section.
+For further information, please visit the **[ANSYS RSM](ansys-mapdl.md)** section.

View File

@@ -19,7 +19,7 @@ The Merlin cluster supports OpenMPI versions across three distinct stages: stabl
 #### Stable
 Versions in the `stable` stage are fully functional, thoroughly tested, and officially supported by the Merlin administrators.
-These versions are available via [Pmodules](/merlin7/pmodules.html) and [Spack](/merlin7/spack.html), ensuring compatibility and reliability for production use.
+These versions are available via [PModules](pmodules.md) and [Spack](spack.md), ensuring compatibility and reliability for production use.
 #### Unstable
@@ -71,7 +71,7 @@ specific pmix plugin versions available: pmix_v5,pmix_v4,pmix_v3,pmix_v2
 ```
 Important Notes:
 * For OpenMPI, always use `pmix` by specifying the appropriate version (`pmix_$version`).
-When loading an OpenMPI module (via [Pmodules](/merlin7/pmodules.html) or [Spack](/merlin7/spack.html)), the corresponding PMIx version will be automatically loaded.
+When loading an OpenMPI module (via [PModules](pmodules.md) or [Spack](spack.md)), the corresponding PMIx version will be automatically loaded.
 * Users do not need to manually manage PMIx compatibility.
 {{site.data.alerts.warning}}
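To make the PMIx notes above concrete, a hedged example of launching an OpenMPI program through Slurm; the module name/version is a placeholder, and the pmix plugin version should be one of those reported by `srun --mpi=list`:

```bash
# Load an OpenMPI module; the matching PMIx module is pulled in automatically
module load openmpi/4.1.6   # version is a placeholder

# Launch through Slurm using a PMIx plugin reported by `srun --mpi=list`
srun --mpi=pmix_v4 -n 8 ./my_mpi_app
```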

View File

@@ -136,7 +136,7 @@ The PModules system is designed to accommodate the diverse software needs of Mer
 ### Requesting Missing Software
 If a specific software package is not available in PModules and there is interest from multiple users:
-* **[Contact Support](/merlin7/contact.html):** Let us know about the software, and we will assess its feasibility for deployment.
+* **[Contact Support](../99-support/contact.md):** Let us know about the software, and we will assess its feasibility for deployment.
 * **Deployment Timeline:** Adding new software to PModules typically takes a few days, depending on complexity and compatibility.
 * **User Involvement:** If you are interested in maintaining the software package, please inform us. Collaborative maintenance helps
 ensure timely updates and support.

View File

@@ -7,6 +7,6 @@ tags:
 # Merlin 6 documentation available
-Merlin 6 docs are now available at <https://hpce.pages.psi.ch/merlin6>!
+Merlin 6 docs are now available at [Merlin6 docs](../../merlin6/index.md)!
 More complete documentation will be coming shortly.

View File

@@ -10,6 +10,6 @@ tags:
 The Merlin7 cluster is officially in preproduction. This phase will be tested by a few users
 and slowly we will contact other users to be part of it. Keep in mind that access is restricted.
-Merlin7 documentation is now available <https://hpce.pages.psi.ch/merlin7/slurm-configuration.html>.
+Merlin7 documentation is now available at [Slurm configuration](../../merlin7/03-Slurm-General-Documentation/slurm-configuration.md).
 More complete documentation will be coming shortly.

View File

@@ -97,11 +97,6 @@ nav:
 - merlin7/03-Slurm-General-Documentation/slurm-examples.md
 - Jupyterhub:
 - merlin7/04-Jupyterhub/jupyterhub.md
-- merlin7/04-Jupyterhub/jupyter-examples.md
-- merlin7/04-Jupyterhub/jupytext.md
-- merlin7/04-Jupyterhub/jupyter-extensions.md
-- merlin7/04-Jupyterhub/jupyterlab.md
-- merlin7/04-Jupyterhub/jupyterhub-trouble.md
 - Software Support:
 - merlin7/05-Software-Support/pmodules.md
 - merlin7/05-Software-Support/spack.md