further vibing #2
@@ -140,9 +140,9 @@ module load paraview
 vglrun paraview
 ```
 
-Officially, the supported method for running `vglrun` is by using the [NoMachine remote desktop](../../how-to-use-merlin/nomachine.md).
+Officially, the supported method for running `vglrun` is by using the [NoMachine remote desktop](../how-to-use-merlin/nomachine.md).
 Running `vglrun` is also possible using SSH with X11 Forwarding. However, it is very slow and only recommended when running
-in Slurm (from [NoMachine](../../how-to-use-merlin/nomachine.md)). Please avoid running `vglrun` over SSH from a desktop or laptop.
+in Slurm (from [NoMachine](../how-to-use-merlin/nomachine.md)). Please avoid running `vglrun` over SSH from a desktop or laptop.
 
 ## Software
@@ -11,9 +11,9 @@ permalink: /merlin6/cluster-introduction.html
 ## Slurm clusters
 
 * The new Slurm CPU cluster is called [**`merlin6`**](cluster-introduction.md).
-* The new Slurm GPU cluster is called [**`gmerlin6`**](../../gmerlin6/cluster-introduction.md).
+* The new Slurm GPU cluster is called [**`gmerlin6`**](../gmerlin6/cluster-introduction.md).
 * The old Slurm *merlin* cluster is still active and best-effort support is provided.
-  The cluster was renamed as [**merlin5**](../../merlin5/cluster-introduction.md).
+  The cluster was renamed as [**merlin5**](../merlin5/cluster-introduction.md).
 
 From July 2019, **`merlin6`** becomes the **default Slurm cluster**, and any job submitted from the login node will be submitted to that cluster if no other cluster is specified.
 * Users can keep submitting to the old *`merlin5`* computing nodes by using the option `--cluster=merlin5`.
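The cluster-selection option above applies to the usual Slurm submission commands. A brief sketch (the job script name `myjob.sh` is a placeholder):

```bash
# Submit to the default cluster (merlin6 since July 2019):
sbatch myjob.sh

# Submit the same job to the old merlin5 computing nodes instead:
sbatch --cluster=merlin5 myjob.sh

# Inspect the queues of both clusters:
squeue --clusters=merlin5,merlin6
```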
@@ -70,7 +70,7 @@ collected at a ***beamline***, you may have been assigned a **`p-group`**
 Groups are usually assigned to a PI, and then individual user accounts are added to the group. This must be done
 upon user request through PSI Service Now. For existing **a-groups** and **p-groups**, you can follow the standard
 central procedures. Alternatively, if you do not know how to do that, follow the Merlin6
-**[Requesting extra Unix groups](../../quick-start-guide/requesting-accounts.md#requesting-extra-unix-groups)** procedure, or open
+**[Requesting extra Unix groups](../quick-start-guide/requesting-accounts.md)** procedure, or open
 a **[PSI Service Now](https://psi.service-now.com/psisp)** ticket.
 
 ### Documentation
@@ -22,11 +22,11 @@ If they are missing, you can install them using the Software Kiosk icon on the D
 
 2. *[Optional]* Enable `xterm` to have similar mouse behaviour as in Linux:
 
 
 
 3. Create a session to a Merlin login node and *Open*:
 
-
+
 
 ## SSH with PuTTY with X11 Forwarding
@@ -44,4 +44,4 @@ using the Software Kiosk icon (should be located on the Desktop).
 
 2. Enable X11 Forwarding in your SSH client. For example, for PuTTY:
 
 
@@ -28,7 +28,7 @@ visibility.
 ## Direct transfer via Merlin6 login nodes
 
 The following methods transfer data directly via the [login
-nodes](../../quick-start-guide/accessing-interactive-nodes.md#login-nodes-hardware-description). They are suitable
+nodes](../quick-start-guide/accessing-interactive-nodes.md). They are suitable
 for use from within the PSI network.
 
 ### Rsync
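As a sketch of what such a transfer might look like (the local directory, username, and destination path are placeholders; `merlin-l-001.psi.ch` is one of the Merlin login nodes):

```bash
# Copy a local directory to Merlin through a login node.
# -a preserves permissions and timestamps, -v is verbose, -z compresses in transit.
rsync -avz ./mydata/ username@merlin-l-001.psi.ch:/data/user/username/mydata/
```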
@@ -72,7 +72,7 @@ The purpose of the software is to send a large file to someone, have that file a
 
 From August 2024, Merlin is connected to the **[PSI Data Transfer](https://www.psi.ch/en/photon-science-data-services/data-transfer)** service,
 `datatransfer.psi.ch`. This is a central service managed by the **[Linux team](https://linux.psi.ch/index.html)**. However, any problems or questions related to it can be directly
-[reported](../../99-support/contact.md) to the Merlin administrators, who will forward the request if necessary.
+[reported](../99-support/contact.md) to the Merlin administrators, who will forward the request if necessary.
 
 The PSI Data Transfer servers support the following protocols:
 * Data Transfer - SSH (scp / rsync)
@@ -167,4 +167,4 @@ provides a helpful wrapper over the Gnome storage utilities, and provides suppor
 - [others](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/managing-storage-volumes-in-gnome_using-the-desktop-environment-in-rhel-8#gvfs-back-ends_managing-storage-volumes-in-gnome)
 
 
-[More instructions on using `merlin_rmount`](merlin-rmount.md)
+[More instructions on using `merlin_rmount`](../software-support/merlin-rmount.md)
@@ -36,7 +36,7 @@ recalculate the notebook cells with this new kernel.
 
 These environments are also available for standard work in a shell session. You
 can activate an environment in a normal Merlin terminal session by using the
-`module` (q.v. [using Pmodules](../../how-to-use-merlin/using-modules.md)) command to load anaconda
+`module` (q.v. [using Pmodules](../how-to-use-merlin/using-modules.md)) command to load anaconda
 python, and from there using the `conda` command to switch to the desired
 environment:
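A minimal sketch of the step described above (the exact module name and the environment name `myenv` are assumptions; check `module search anaconda` and `conda env list` for the real ones):

```bash
# Load an anaconda distribution from Pmodules
module load anaconda

# List the environments available to you
conda env list

# Switch to the desired environment
conda activate myenv
```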
@@ -79,7 +79,7 @@ queue. Additional customization can be implemented using the *'Optional user
 defined line to be added to the batch launcher script'* option. This line is
 added to the submission script at the end of other `#SBATCH` lines. Parameters can
 be passed to SLURM by starting the line with `#SBATCH`, like in [Running Slurm
-Scripts](../../slurm-general-docs/running-jobs.md). Some ideas:
+Scripts](../slurm-general-docs/running-jobs.md). Some ideas:
 
 ### Request additional memory
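For instance, a line such as the following could be entered in that field to raise the job's memory allocation (the figure of 8000 MB is only an example):

```bash
# Request 8000 MB of memory per node for the job
#SBATCH --mem=8000
```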
@@ -119,16 +119,16 @@ salloc --clusters=merlin6 -N 2 -n 2 $SHELL
 
 #### Graphical access
 
-[NoMachine](../../how-to-use-merlin/nomachine.md) is the officially supported service for graphical
+[NoMachine](../how-to-use-merlin/nomachine.md) is the officially supported service for graphical
 access in the Merlin cluster. This service is running on the login nodes. Check the
-document [{Accessing Merlin -> NoMachine}](../../how-to-use-merlin/nomachine.md) for details about
+document [{Accessing Merlin -> NoMachine}](../how-to-use-merlin/nomachine.md) for details about
 how to connect to the **NoMachine** service in the Merlin cluster.
 
 For other, not officially supported, graphical access (X11 forwarding):
 
-* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](../../how-to-use-merlin/connect-from-linux.md)
-* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](../../how-to-use-merlin/connect-from-windows.md)
-* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](../../how-to-use-merlin/connect-from-macos.md)
+* For Linux clients, please follow [{How To Use Merlin -> Accessing from Linux Clients}](../how-to-use-merlin/connect-from-linux.md)
+* For Windows clients, please follow [{How To Use Merlin -> Accessing from Windows Clients}](../how-to-use-merlin/connect-from-windows.md)
+* For MacOS clients, please follow [{How To Use Merlin -> Accessing from MacOS Clients}](../how-to-use-merlin/connect-from-macos.md)
 
 ### 'srun' with x11 support
@@ -112,7 +112,7 @@ Slurm batch system using allocations:
 is not always possible (depending on the usage of the cluster).
 
 Please refer to the documentation **[Running Interactive
-Jobs](../../slurm-general-docs/interactive-jobs.md)** for further information about different
+Jobs](../slurm-general-docs/interactive-jobs.md)** for further information about different
 ways of running interactive jobs in the Merlin6 cluster.
 
 ### Requirements
@@ -124,7 +124,7 @@ communication between the GUI and the different nodes. For doing that, one must
 have a **passphrase protected** SSH Key. To check whether SSH Keys already
 exist, simply run **`ls $HOME/.ssh/`** and look for **`id_rsa`** files. For
 deploying SSH Keys for running Fluent interactively, one should
-follow this documentation: **[Configuring SSH Keys](../../how-to-use-merlin/ssh-keys.md)**
+follow this documentation: **[Configuring SSH Keys](../how-to-use-merlin/ssh-keys.md)**
 
 #### List of hosts
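That documentation is the authoritative reference; as a rough sketch, assuming no key exists yet, generating a passphrase-protected key pair could look like this (the `authorized_keys` step is standard practice, not taken from this page):

```bash
# Create a new RSA key pair; enter a NON-empty passphrase when prompted,
# since Fluent's multi-node communication requires a passphrase-protected key.
ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa"

# Allow the key to be used for logins between cluster nodes
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```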
@@ -99,7 +99,7 @@ To setup HFSS RSM for using it with the Merlin cluster, it must be done from the
 Running jobs through Slurm from **ANSYS Electronics Desktop** is the way for
 running ANSYS HFSS when submitting from an ANSYS HFSS installation in a Merlin
 login node. **ANSYS Electronics Desktop** usually needs to be run from the
-**[Merlin NoMachine](../../how-to-use-merlin/nomachine.md)** service, which currently runs
+**[Merlin NoMachine](../how-to-use-merlin/nomachine.md)** service, which currently runs
 on:
 
 * `merlin-l-001.psi.ch`
@@ -4,7 +4,7 @@ This document describes generic information of how to load and run ANSYS softwar
 
 ## ANSYS software in Pmodules
 
-The ANSYS software can be loaded through **[PModules](../../how-to-use-merlin/using-modules.md)**.
+The ANSYS software can be loaded through **[PModules](../how-to-use-merlin/using-modules.md)**.
 
 The default ANSYS versions are loaded from the central PModules repository.
 However, there are some known problems that can pop up when using some specific ANSYS packages in advanced mode.