Reorganize merlin6 pages to follow navigation menu
The folders are only used for source organization; URLs remain flat.
@ -0,0 +1,58 @@
---
title: Accessing Interactive Nodes
#tags:
#keywords:
last_updated: 13 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/interactive.html
---

## Login nodes description

The Merlin6 login nodes are the official machines for accessing the resources of Merlin6.
From these machines, users can submit jobs to the Slurm batch system, as well as visualize data or compile their software.

The Merlin6 login nodes are the following:

| Hostname                | SSH | NoMachine | #Cores | #Threads | CPU                   | Memory | Scratch    | Scratch Mountpoint |
| ----------------------- | --- | --------- | ------ |:--------:| :-------------------- | ------ | ---------- | :----------------- |
| merlin-l-001.psi.ch     | yes | yes       | 2 x 22 | 2        | Intel Xeon Gold 6152  | 384GB  | 1.8TB NVMe | ``/scratch``       |
| ~~merlin-l-002.psi.ch~~ | -   | -         | 2 x 22 | 2        | Intel Xeon Gold 6142  | 384GB  | 1.8TB NVMe | ``/scratch``       |
| merlin-l-01.psi.ch      | yes | -         | 2 x 16 | 2        | Intel Xeon E5-2697Av4 | 512GB  | 100GB SAS  | ``/scratch``       |
| merlin-l-02.psi.ch      | yes | -         | 2 x 16 | 2        | Intel Xeon E5-2697Av4 | 512GB  | 100GB SAS  | ``/scratch``       |

* Please note that ``merlin-l-002`` is not yet in production.

---

## Remote Access

### SSH Access

For interactive command shell access, use an SSH client. We recommend activating SSH X11 forwarding so that you can run graphical applications (e.g. a text editor); for more performant graphical access, refer to the NoMachine section below.

E.g. for Linux:

```bash
ssh -XY $username@merlin-l-01.psi.ch
```

X applications are supported on the login nodes, and X11 forwarding can be used by users who have properly configured X11 support on their desktops:
* Merlin6 administrators **do not offer support** for user desktop configuration (Windows, macOS, Linux).
* Hence, Merlin6 administrators **do not offer official support** for X11 client setup.
* However, a generic guide for X11 client setup (Windows, Linux and macOS) will be provided.
* PSI desktop configuration issues must be addressed through **[PSI Service Now](https://psi.service-now.com/psisp)** as an *Incident Request*.
* The ticket will be redirected to the corresponding desktop support group (Windows, Linux).
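For convenience, X11 forwarding can also be enabled per host in the SSH client configuration instead of passing ``-XY`` each time. A minimal sketch for ``~/.ssh/config`` (the username is a placeholder; this snippet is an illustration, not an officially supported configuration):

```
# Hypothetical ~/.ssh/config entry for the Merlin6 login nodes
Host merlin-l-*.psi.ch
    User your_psi_username
    ForwardX11 yes
    ForwardX11Trusted yes
```

With this in place, a plain ``ssh merlin-l-01.psi.ch`` behaves like ``ssh -XY``.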

### More efficient graphical access using a **NoMachine** client

X applications are supported on the login nodes and can run efficiently through a **NoMachine** client. This is the officially supported way to run more demanding X applications on Merlin6. The client software can be downloaded from [the NoMachine website](https://www.nomachine.com/product&p=NoMachine%20Enterprise%20Client).

* NoMachine *client installation* support has to be requested through **[PSI Service Now](https://psi.service-now.com/psisp)** as an *Incident Request*.
* The ticket will be redirected to the corresponding support group (Windows or Linux).
* NoMachine *client configuration* and *connectivity* for Merlin6 is fully supported by the Merlin6 administrators.
* Please contact us through the official channels for any configuration issue with NoMachine.

---

pages/merlin6/02 accessing-merlin6/accessing-slurm.md (new file, 55 lines)
@ -0,0 +1,55 @@
---
title: Accessing Slurm Cluster
#tags:
#keywords:
last_updated: 13 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/slurm-access.html
---

## The Merlin6 Slurm batch system

Clusters at PSI use the [Slurm Workload Manager](http://slurm.schedmd.com/) as the batch system technology for managing and scheduling jobs.
Historically, *Merlin4* and *Merlin5* also used Slurm, and **Merlin6** has been configured with the same batch system.

Slurm has been installed in a **multi-clustered** configuration, which allows multiple clusters to be integrated in the same batch system.
* Two different Slurm clusters exist: **merlin5** and **merlin6**.
* **merlin5** is a cluster with very old hardware (out-of-warranty).
* **merlin5** will exist as long as hardware incidents remain minor and easy to repair/fix (e.g. hard disk replacements).
* **merlin6** is the default cluster when running Slurm commands (e.g. ``sinfo``).

Please follow the section **Merlin6 Slurm** for more details about configuration and job submission.
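The multi-cluster setup can be inspected with the standard ``--clusters`` (``-M``) option of the Slurm commands; for example (a sketch, the output depends on the site configuration):

```bash
sinfo --clusters=merlin5,merlin6   # list partitions of both clusters
squeue --clusters=all              # list jobs across all configured clusters
```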

### Merlin5 Access

Keeping the **merlin5** cluster will allow running jobs on the old computing nodes until users have fully migrated their codes to the new cluster.

From July 2019, **merlin6** becomes the **default cluster**. However, users can keep submitting to the old **merlin5** computing nodes by using
the option ``--clusters=merlin5`` together with the corresponding Slurm partition, ``--partition=merlin``. For example, in a batch script:

```bash
#SBATCH --clusters=merlin5
#SBATCH --partition=merlin
```

Example of how to run a simple command:

```bash
srun --clusters=merlin5 --partition=merlin hostname
sbatch --clusters=merlin5 --partition=merlin myScript.batch
```

### Merlin6 Access

In order to run jobs on the **Merlin6** cluster, you need to specify the following option in your batch scripts:

```bash
#SBATCH --clusters=merlin6
```

Example of how to run a simple command:

```bash
srun --clusters=merlin6 hostname
sbatch --clusters=merlin6 myScript.batch
```
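Putting it together, a minimal ``myScript.batch`` for the **merlin6** cluster could look as follows (the job name, time limit and output file are illustrative placeholders, not site defaults):

```bash
#!/bin/bash
#SBATCH --clusters=merlin6        # submit to the Merlin6 cluster
#SBATCH --job-name=example        # illustrative job name
#SBATCH --time=00:10:00           # illustrative wall-time limit
#SBATCH --output=example-%j.out   # illustrative output file (%j = job ID)

# The payload: Slurm treats the #SBATCH lines above as comments,
# so this file is also a plain shell script.
hostname
```

Submit it with ``sbatch myScript.batch``.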

pages/merlin6/02 accessing-merlin6/merlin6-directories.md (new file, 150 lines)
@ -0,0 +1,150 @@
---
title: Merlin6 Data Directories
#tags:
#keywords:
last_updated: 28 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/data-directories.html
---

## Merlin6 directory structure

Merlin6 offers the following directory classes for users:

* ``/psi/home/<username>``: Private user **home** directory
* ``/data/user/<username>``: Private user **data** directory
* ``/data/project/general/<projectname>``: Shared **project** directory
  * For BIO experiments, a dedicated ``/data/project/bio/<projectname>`` area exists.
* ``/scratch``: Local *scratch* disk (only visible to the node running a job).
* ``/shared-scratch``: Shared *scratch* disk (visible from all nodes).

Properties of the directory classes:

| Directory                          | Block Quota [Soft:Hard] | File Quota [Soft:Hard] | Quota Change Policy: Block        | Quota Change Policy: Files       | Backup | Backup Policy                  |
| ---------------------------------- | ----------------------- | ---------------------- |:--------------------------------- |:-------------------------------- | ------ | :----------------------------- |
| /psi/home/$username                | USR [10GB:11GB]         | *Undef*                | Up to x2 when strictly justified. | N/A                              | yes    | Daily snapshots for 1 week     |
| /data/user/$username               | USR [1TB:1.074TB]       | USR [1M:1.1M]          | Immutable. Needs a project.       | Changeable when justified.       | no     | Users responsible for backup   |
| /data/project/bio/$projectname     | GRP [1TB:1.074TB]       | GRP [1M:1.1M]          | Subject to project requirements.  | Subject to project requirements. | no     | Project responsible for backup |
| /data/project/general/$projectname | GRP [1TB:1.074TB]       | GRP [1M:1.1M]          | Subject to project requirements.  | Subject to project requirements. | no     | Project responsible for backup |
| /scratch                           | *Undef*                 | *Undef*                | N/A                               | N/A                              | no     | N/A                            |
| /shared-scratch                    | *Undef*                 | *Undef*                | N/A                               | N/A                              | no     | N/A                            |

### User home directory

This is the default directory users will land in when logging in to any Merlin6 machine.
It is intended for your scripts, documents, software development, and other files which
you want to have backed up. Do not use it for data or I/O-hungry HPC tasks.

This directory is mounted on the login and computing nodes under the path:

```bash
/psi/home/$username
```

Home directories are part of the PSI NFS Central Home storage provided by AIT and
are managed by the Merlin6 administrators.

Users can check their quota by running the following command:

```bash
quota -s
```

#### Home directory policy

* Read **[Important: Code of Conduct](#important-code-of-conduct)** for more information about Merlin6 policies.
* It is **forbidden** to use the home directories for I/O-intensive tasks.
  * Use ``/scratch``, ``/shared-scratch``, ``/data/user`` or ``/data/project`` for this purpose.
* Users can retrieve up to 1 week of their lost data thanks to the automatic **daily snapshots for 1 week**.
  Snapshots can be accessed at this path:

```bash
/psi/home/.snapshot/$username
```

### User data directory

The user data directory is intended for *fast I/O access* and for keeping large amounts of private data.
This directory is mounted on the login and computing nodes under the path:

```bash
/data/user/$username
```

Users can check their quota by running the following command:

```bash
mmlsquota -u <username> --block-size auto merlin-user
```

#### User data directory policy

* Read **[Important: Code of Conduct](#important-code-of-conduct)** for more information about Merlin6 policies.
* It is **forbidden** to use the data directories as a ``scratch`` area during a job's runtime.
  * Use ``/scratch`` or ``/shared-scratch`` for this purpose.
* No backup policy is applied to user data directories: users are responsible for backing up their data.

### Project data directory

This storage is intended for *fast I/O access* and for keeping large amounts of a project's data, where the data can also be
shared by all members of the project (the project's corresponding Unix group). We recommend keeping most data in
project-related storage spaces, since it allows users to coordinate. Also, project spaces have more flexible policies
regarding extending the available storage space.

You can request a project space by submitting an incident request via **[PSI Service Now](https://psi.service-now.com/psisp)** using the subject line

```
Subject: [Merlin6] Project Request for project name xxxxxx
```

Please state your desired project name and list the accounts that should be part of it. The project will receive a corresponding Unix group.

The project data directory is mounted on the login and computing nodes under the directory:

```bash
/data/project/general/$projectname
```

Project quotas are defined on a per-*group* basis. Users can check the project quota by running the following command:

```bash
mmlsquota -j $projectname --block-size auto -C merlin.psi.ch merlin-proj
```

#### Project directory policy

* Read **[Important: Code of Conduct](#important-code-of-conduct)** for more information about Merlin6 policies.
* It is **forbidden** to use the data directories as a ``scratch`` area during a job's runtime, i.e. for high-throughput I/O on a job's temporary files. Please use ``/scratch`` or ``/shared-scratch`` for this purpose.
* No backups: users are responsible for managing the backups of their data directories.

### Scratch directories

There are two different types of scratch storage: **local** (``/scratch``) and **shared** (``/shared-scratch``).

**Local** scratch should be used for all jobs that do not require the scratch files to be accessible from multiple nodes, which is trivially
true for all jobs running on a single node.
**Shared** scratch is intended for files that need to be accessible by multiple nodes, e.g. by an MPI job whose tasks are spread out over the cluster
and all tasks need to do I/O on the same temporary files.

**Local** scratch on the Merlin6 computing nodes provides a huge number of IOPS thanks to NVMe technology. **Shared** scratch is implemented on a distributed parallel filesystem (GPFS), resulting in higher latency, since it involves remote storage resources and more complex I/O coordination.

``/shared-scratch`` is only mounted on the *Merlin6* computing nodes (i.e. not on the login nodes), and its current size is 50TB. This can be increased in the future.

The properties of the available scratch storage spaces are given in the following table:

| Cluster | Service        | Scratch      | Scratch Mountpoint | Shared Scratch | Shared Scratch Mountpoint | Comments                               |
| ------- | -------------- | ------------ | ------------------ | -------------- | ------------------------- | -------------------------------------- |
| merlin5 | computing node | 50GB / SAS   | ``/scratch``       | ``N/A``        | ``N/A``                   | ``merlin-c-[01-64]``                   |
| merlin6 | login node     | 100GB / SAS  | ``/scratch``       | ``N/A``        | ``N/A``                   | ``merlin-l-0[1,2]``                    |
| merlin6 | computing node | 1.3TB / NVMe | ``/scratch``       | 50TB / GPFS    | ``/shared-scratch``       | ``merlin-c-[001-022,101-122,201-222]`` |
| merlin6 | login node     | 2.0TB / NVMe | ``/scratch``       | ``N/A``        | ``N/A``                   | ``merlin-l-00[1,2]``                   |

#### Scratch directories policy

* Read **[Important: Code of Conduct](#important-code-of-conduct)** for more information about Merlin6 policies.
* By default, *always* use **local** scratch first, and only use **shared** scratch if your specific use case requires it.
* Temporary files *must be deleted at the end of the job by the user*.
  * Remaining files will be deleted by the system if detected.
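A typical pattern is to create a per-job directory on local scratch and remove it before the job ends. A minimal sketch (the ``SCRATCH_BASE`` fallback and directory layout are illustrative assumptions, not a site-mandated convention):

```bash
# Per-job temporary directory on local scratch, cleaned up at job end.
SCRATCH_BASE=/scratch
[ -d "$SCRATCH_BASE" ] || SCRATCH_BASE=${TMPDIR:-/tmp}       # fallback when run outside the cluster
JOB_TMP="$SCRATCH_BASE/${USER:-nouser}/job-${SLURM_JOB_ID:-interactive}"
mkdir -p "$JOB_TMP"

# ... I/O-heavy work against "$JOB_TMP" goes here ...
echo "scratch data" > "$JOB_TMP/tmpfile"

rm -rf "$JOB_TMP"                                            # delete temporary files before the job ends
```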

---

pages/merlin6/02 accessing-merlin6/nomachine.md (new file, 35 lines)
@ -0,0 +1,35 @@
---
title: NoMachine
#tags:
#keywords:
last_updated: 9 July 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/nomachine.html
---

NoMachine is a desktop virtualization tool, similar to VNC, Remote
Desktop, etc. It uses the NX protocol to enable a graphical login to remote
servers.

## Installation

NoMachine is available for PSI Windows computers in the Software Kiosk under the
name **NX Client**. Please use the latest version (at least 6.0). For macOS and
Linux, the NoMachine client can be downloaded from https://www.nomachine.com/.

## Connecting to Merlin

Currently the recommended way of connecting to Merlin5 is through
`merlin-nx-01.psi.ch`. This can be added in NoMachine as a new connection. It is
also available through `remacc.psi.ch`, a NoMachine 'jump point' provided by
Photon Science for access from outside PSI.

The `merlin-nx-01` machine does not directly access Merlin itself. However, it
provides a fully configured Linux environment from which Merlin can be accessed
with `ssh` commands.

It is planned to run NoMachine directly on the Merlin6 login node in the future.
This will enable data to be transferred more easily from Merlin6 through
NoMachine.

pages/merlin6/02 accessing-merlin6/requesting-accounts.md (new file, 124 lines)
@ -0,0 +1,124 @@
---
title: Requesting Accounts
#tags:
#keywords:
last_updated: 28 June 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/request-account.html
---

Requesting access to the cluster must be done through **[PSI Service Now](https://psi.service-now.com/psisp)** as an
*Incident Request*. AIT and the Merlin6 team are working on an integrated ServiceNow form to ease this process in the future.

Since the ticket *priority* is *Low* for non-emergency requests of this kind, it might take up to 56h in the worst case until access to the cluster is granted (raise the priority if you have strong reasons for faster access).

---

## Requesting Access to Merlin6

Access to Merlin6 requires a PSI user's account to be a member of the **svc-cluster_merlin6** Unix group.

Registration for **Merlin6** access *must be done* through **[PSI Service Now](https://psi.service-now.com/psisp)**:

* Please open a ticket as an *Incident Request*, with subject:

```
Subject: [Merlin6] Access Request for user xxxxx
```

* Text content (please always use this template and fill in the fields marked by `xxxxx`):

```
Dear HelpDesk,

I would like to request access to the Merlin6 cluster. This is my account information:
* Last Name: xxxxx
* First Name: xxxxx
* PSI user account: xxxxx

Please add me to the following Unix groups:
* 'svc-cluster_merlin6'

Thanks,
```

---

## Requesting Access to Merlin5

Merlin5 computing nodes will remain available for some time as a **best effort** service.
For accessing the old Merlin5 resources, users must belong to the **svc-cluster_merlin5** Unix group.

Registration for **Merlin5** access *must be done* through **[PSI Service Now](https://psi.service-now.com/psisp)**:

* Please open a ticket as an *Incident Request*, with subject:

```
Subject: [Merlin5] Access Request for user xxxxx
```

* Text content (please always use this template and fill in the fields marked by `xxxxx`):

```
Dear HelpDesk,

I would like to request access to the Merlin5 cluster. This is my account information:
* Last Name: xxxxx
* First Name: xxxxx
* PSI user account: xxxxx

Please add me to the following Unix groups:
* 'svc-cluster_merlin5'

Thanks,
```

Alternatively, if you want to request access to both Merlin5 and Merlin6, you can do so in the same ticket as follows:
* Use the template from **[Requesting Access to Merlin6](#requesting-access-to-merlin6)**
* Add the **``'svc-cluster_merlin5'``** Unix group after the line containing the Merlin6 group **``'svc-cluster_merlin6'``**

---

## Requesting extra Unix groups

Some users may need to be added to extra specific Unix groups.
* This will grant access to specific resources.
* For example, some BIO users may need to belong to a specific BIO group to have access to that group's project area.
* Supervisors should inform new users which extra groups are needed for their project(s).

When requesting access to **[Merlin6](#requesting-access-to-merlin6)** or **[Merlin5](#requesting-access-to-merlin5)**,
these extra Unix groups can be added in the same *Incident Request* by supplying additional lines specifying the respective groups.

Naturally, this step can also be done later, when the need arises, in a separate **[PSI Service Now](https://psi.service-now.com/psisp)** ticket:

* Please open a ticket as an *Incident Request*, with subject:

```
Subject: [Unix Group] Access Request for user xxxxx
```

* Text content (please always use this template):

```
Dear HelpDesk,

I would like to request membership in the Unix groups listed below. This is my account information:
* Last Name: xxxxx
* First Name: xxxxx
* PSI user account: xxxxx

List of Unix groups I would like to be added to:
* unix_group_1
* unix_group_2
* ...
* unix_group_N

Thanks,
```

**Important note**: Requesting access to specific Unix groups requires validation from the person responsible for each group. If you ask for inclusion in many groups, it may take longer, since fulfilling the request will depend on more people.

pages/merlin6/02 accessing-merlin6/requesting-projects.md (new file, 72 lines)
@ -0,0 +1,72 @@
---
title: Requesting a Project
#tags:
#keywords:
last_updated: 01 July 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/request-project.html
---

A project owns its own storage area, which can be accessed by the project members.
Projects can receive a higher storage quota than user areas and should be the primary way of organizing bigger storage requirements
in a multi-user collaboration.

Access to a project's directories is governed by the project members belonging to a common **Unix group**. You may use an existing
Unix group, or you may have a new Unix group created especially for the project. The **project responsible** will be the owner of
the Unix group (this is important)!

The **default storage quota** for a project is 1TB (with a maximum *number of files* of 1M). If you need a larger assignment, you
need to request this and provide a description of your storage needs.

To request a project, please provide the following information in a **[PSI Service Now ticket](https://psi.service-now.com/psisp)**:

* Please open an *Incident Request* with subject:

```
Subject: [Merlin6] Project Request for project name xxxxxx
```

* and base the text field of the request on this template:

```
Dear HelpDesk,

I would like to request a new Merlin6 project.

Project Name: xxxxx
UnixGroup: xxxxx    # Must be an existing Unix group

The project responsible is the owner of the Unix group.
If you need a storage quota exceeding the defaults, please provide a description
and motivation for the higher storage needs:

Storage Quota: 1TB with a maximum of 1M files
Reason: (none for the default 1TB/1M)

Best regards,
```

**If you need a new Unix group** to be created, you first need to request this group through
a separate **[PSI Service Now ticket](https://psi.service-now.com/psisp)**. Please
use the following template. You can also specify the login names of the initial group
members and the **owner** of the group. The owner of the group is the person who
will be allowed to modify the group.

* Please open an *Incident Request* with subject:

```
Subject: Request for new unix group xxxx
```

* and base the text field of the request on this template:

```
Dear HelpDesk,

I would like to request a new Unix group.

Unix Group Name: unx-xxxxx
Initial Group Members: xxxxx, yyyyy, zzzzz, ...
Group Owner: xxxxx

Best regards,
```

pages/merlin6/02 accessing-merlin6/transfer-data.md (new file, 47 lines)
@ -0,0 +1,47 @@
---
title: Transferring Data
#tags:
#keywords:
last_updated: 9 July 2019
#summary: ""
sidebar: merlin6_sidebar
permalink: /merlin6/transfer-data.html
---

## Transferring Data to/from Merlin6 within the PSI Network

### Rsync

Rsync is the preferred method for transferring data from Linux/macOS. It allows
transfers to be easily resumed if they get interrupted. The general syntax is:

```
rsync -avAHXS <src> <dst>
```

For example, to transfer files from your local computer to a Merlin project
directory:

```
rsync -avAHXS ~/localdata user@merlin-l-01.psi.ch:/data/project/general/myproject/
```

You can resume interrupted transfers by simply rerunning the command. Previously
transferred files will be skipped.
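Transfers in the opposite direction use the same syntax; for example, to copy results from a Merlin project directory back to your local machine (``results`` is a hypothetical directory name):

```
rsync -avAHXS user@merlin-l-01.psi.ch:/data/project/general/myproject/results ~/localdata/
```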

### WinSCP

The WinSCP tool can be used for remote file transfer on Windows. It is available
from the Software Kiosk on PSI machines. Add `merlin-l-01.psi.ch` as a host and
connect with your PSI credentials. You can then drag-and-drop files between your
local computer and Merlin.

## Transferring Data to/from outside PSI

Merlin6 is only accessible from within the PSI network. To connect from outside, you can use:

- [VPN](https://www.psi.ch/en/computing/vpn) ([alternate instructions](https://intranet.psi.ch/BIO/ComputingVPN))
- [SSH hop](https://www.psi.ch/en/computing/ssh-hop)
- [NoMachine](nomachine.md)