Access to the PSI Data Transfer uses ***multi-factor authentication*** (MFA).
Therefore, the Microsoft Authenticator app is required, as explained [here](https://www.psi.ch/en/computing/change-to-mfa).
!!! tip "Official Documentation"
|
||||
Please follow the [Official PSI Data Transfer](https://www.psi.ch/en/photon-science-data-services/data-transfer) documentation for further instructions.
|
||||
|
||||

### Directories

User data directories are mounted in RW (read/write) mode.
!!! warning "Secure Permissions"
|
||||
Please, **ensure proper secured permissions** in your `/data/user` directory. By default, when directory is created, the system applies the most restrictive permissions. However, this does not prevent users for changing permissions if they wish. At this point, users become responsible of those changes.
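
For instance, you can check and tighten these permissions yourself. A minimal sketch, assuming your directory is named after your login (`$USER`):

```bash
# Show the current permissions on your user data directory
ls -ld /data/user/$USER

# Restrict access to the owner only (the most restrictive setting,
# matching the system default)
chmod 700 /data/user/$USER
```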

#### /merlin/export

Transferring large amounts of data from outside PSI to Merlin is always possible through `/export`.
!!! tip "Export Directory Access"
|
||||
The `/export` directory can be used by any Merlin user. This is configured in Read/Write mode. If you need access, please, contact the Merlin administrators.
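
As an illustration, a transfer from a machine outside PSI might look like the sketch below. The host `datatransfer.psi.ch` and the target path are assumptions based on the PSI Data Transfer service; check the official documentation for the exact hostname and directory layout:

```bash
# Copy a local dataset into the Merlin export area through the
# PSI Data Transfer service (hostname and path are assumptions)
rsync -avP ./my_dataset/ $USER@datatransfer.psi.ch:/export/$USER/my_dataset/
```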
!!! warning "Export Usage Policy"
|
||||
The use **export** as an extension of the quota *is forbidden*.
|
||||
|
||||
Auto cleanup policies in the **export** area apply for files older than 28 days.
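
To see which of your files would currently fall under the cleanup policy, a quick check along these lines can help (the `/export/$USER` layout is an assumption; adjust to your actual path):

```bash
# List files under your export directory older than 28 days
find /export/$USER -type f -mtime +28 -ls
```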

##### Exporting data from Merlin

Ensure that your directories and files are secured with proper permissions.

Optionally, instead of using `/export`, Merlin project owners can request read/write or read-only access to their project directory.
!!! tip "Project Access"
|
||||
Merlin projects can request direct access. This can be configured in Read/Write or Read/Only modes. If your project needs access, please, contact the Merlin administrators.

## Connecting to Merlin6 from outside PSI

### Accessing from a Linux client

Refer to [{How To Use Merlin -> Accessing from Linux Clients}](../how-to-use-merlin/connect-from-linux.md) for **Linux** SSH client and X11 configuration.
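
A minimal sketch of such a login with X11 forwarding enabled (the login node name `merlin-l-001.psi.ch` is illustrative; see the linked page for the actual hostnames):

```bash
# -Y enables trusted X11 forwarding so graphical applications display locally
ssh -Y $USER@merlin-l-001.psi.ch
```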

### Accessing from a Windows client

Refer to [{How To Use Merlin -> Accessing from Windows Clients}](../how-to-use-merlin/connect-from-windows.md) for **Windows** SSH client and X11 configuration.

### Accessing from a MacOS client

Refer to [{How To Use Merlin -> Accessing from MacOS Clients}](../how-to-use-merlin/connect-from-macos.md) for **MacOS** SSH client and X11 configuration.

## NoMachine Remote Desktop Access

X applications are supported on the login nodes and can run efficiently through NoMachine.

### Configuring NoMachine

Refer to [{How To Use Merlin -> Remote Desktop Access}](../how-to-use-merlin/nomachine.md) for further instructions on how to configure the NoMachine client and how to access it from inside and outside PSI.

## Login nodes hardware description

The basic principle is courtesy and consideration for other users.

* It is **forbidden** to use ``/data/user``, ``/data/project`` or ``/psi/home/`` for that purpose.
* Always remove files you no longer need (e.g. core dumps, temporary files) as early as possible, and keep the disk space clean on all nodes; a job-script sketch follows this list.
* Prefer ``/scratch`` over ``/shared-scratch``, and use the latter only when you require the temporary files to be visible from multiple nodes.
* Read the description in **[Merlin6 directory structure](../how-to-use-merlin/storage.md#merlin6-directories)** to learn about the correct usage of each partition type.
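
A minimal sketch of this pattern in a Slurm job script, using a per-job scratch directory that is removed at the end (paths and names are illustrative):

```bash
#!/bin/bash
#SBATCH --job-name=scratch-demo

# Work in a per-job directory on local scratch
WORKDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# ... run the actual computation here, writing temporary files to $WORKDIR ...

# Clean up the temporary files as soon as the job finishes
cd /
rm -rf "$WORKDIR"
```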

## User and project data

In **`merlin6`**, memory is considered a consumable resource, just like CPU, and by default resources cannot be oversubscribed. This is a main difference from the old **`merlin5`** cluster, where only CPUs were accounted for and memory was oversubscribed by default.
!!! tip "Check Configuration"
|
||||
Always check `/etc/slurm/slurm.conf` for changes in the hardware.
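
For example, one quick way to inspect the node and memory definitions (these are standard Slurm configuration keywords; the exact entries on Merlin may differ):

```bash
# Show node definitions and default memory settings from the Slurm config
grep -iE 'NodeName|RealMemory|DefMemPerCPU' /etc/slurm/slurm.conf
```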

### Merlin6 CPU cluster

* For **`hourly`** there are no limits.
* **`asa-general`**, **`asa-daily`**, **`asa-ansys`**, **`asa-visas`** and **`mu3e`** are **private** partitions belonging to the different experiments that own the machines. **Access is restricted** in all cases. However, by agreement with the experiments, these nodes are usually added to the **`hourly`** partition as extra resources for public use.
!!! tip "Partition Selection"
|
||||
Jobs which would run for less than one day should be always sent to **daily**, while jobs that would run for less than one hour should be sent to **hourly**. This would ensure that you have highest priority over jobs sent to partitions with less priority, but also because **general** has limited the number of nodes that can be used for that. The idea behind that, is that the cluster can not be blocked by long jobs and we can always ensure resources for shorter jobs.
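
For instance (a sketch; `myjob.sh` stands for your actual batch script):

```bash
# A job expected to finish within one hour goes to 'hourly'
sbatch --partition=hourly --time=00:45:00 myjob.sh

# A job expected to finish within one day goes to 'daily'
sbatch --partition=daily --time=20:00:00 myjob.sh
```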

### Merlin5 CPU Accounts

Jobs requesting very large amounts of resources from the batch system would drain the entire cluster to fit them.
Hence, there is a need to set wise limits and to ensure fair usage of the resources, optimizing the overall efficiency of the cluster while still allowing jobs of different natures and sizes (that is, **single-core** vs. **parallel jobs** of different sizes) to run.
!!! warning "Resource Limits"
|
||||
Wide limits are provided in the **daily** and **hourly** partitions, while for **general** those limits are more restrictive. However, we kindly ask users to inform the Merlin administrators when there are plans to send big jobs which would require a massive draining of nodes for allocating such jobs. This would apply to jobs requiring the **unlimited** QoS (see below "Per job limits").

!!! tip "Custom Requirements"

    If you have different requirements, please let us know; we will try to accommodate you or propose a solution.

#### Per job limits

```
module load $MODULE_NAME # where $MODULE_NAME is a software package in PModules
srun $MYEXEC             # where $MYEXEC is the path to your binary file
```
!!! tip "Memory Limit"
|
||||
Also, always consider that `--mem-per-cpu` x `--cpus-per-task` can **never** exceed the maximum amount of memory per node (352000MB).
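
A worked example of this constraint (the values are illustrative):

```bash
# 44 CPUs x 8000MB per CPU = 352000MB -> exactly the per-node limit, OK
#SBATCH --cpus-per-task=44
#SBATCH --mem-per-cpu=8000

# 44 CPUs x 9000MB per CPU = 396000MB -> exceeds 352000MB, the job is rejected
```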

### Example 4: Non-hyperthreaded Hybrid MPI/OpenMP job

```
module load $MODULE_NAME # where $MODULE_NAME is a software package in PModules
srun $MYEXEC             # where $MYEXEC is the path to your binary file
```
!!! tip "Memory Limit"
|
||||
Also, always consider that `--mem-per-cpu` x `--cpus-per-task` can **never** exceed the maximum amount of memory per node (352000MB).

## GPU examples