Add ANSYS HFSS + CSCS

2022-04-13 12:10:51 +02:00
parent 4bcde04fea
commit 9837429824
9 changed files with 186 additions and 0 deletions


@@ -0,0 +1,15 @@
# Follow the pattern here for the URLs -- no slash at the beginning, and include the .html. The link here is rendered exactly as is in the Markdown references.
entries:
- product: PSI HPC@CSCS
folders:
- title: Overview
# URLs for top-level folders are optional. If omitted it is a bit easier to toggle the accordion.
folderitems:
- title: Overview
url: /CSCS/index.html
- title: Operations
folderitems:
- title: Transfer Data
url: /CSCS/transfer-data.html

5 binary image files added (not shown; sizes 22 KiB, 9.6 KiB, 9.7 KiB, 22 KiB, and 67 KiB)

pages/CSCS/index.md Normal file

@@ -0,0 +1,32 @@
---
title: PSI HPC@CSCS Admin Overview
#tags:
#keywords:
last_updated: 22 September 2020
#summary: ""
sidebar: CSCS_sidebar
permalink: /CSCS/index.html
---
## PSI HPC@CSCS
To offer high-end HPC resources to PSI users, AIT has maintained a long-standing collaboration with the national supercomputing centre CSCS since 2005.
Some of the resources are procured with central PSI funds, while users have the option of an additional buy-in at the same rates.
### PSI resources at Piz Daint
The yearly computing resources at CSCS for the PSI projects amount to 320,000 node hours (NH). The yearly storage resources for the PSI projects total 40 TB.
These resources are centrally financed, but experiments can individually purchase additional resources.
### Piz Daint total resources
References:
* [Piz Daint Information](https://www.cscs.ch/computers/piz-daint/)
* [Piz Daint: One of the most powerful supercomputers in the world](https://www.cscs.ch/publications/news/2017/piz-daint-one-of-the-most-powerful-supercomputers-in-the-world)
## Contact information
* Contact person at PSI: Marc Caubet Serrabou <marc.caubet@psi.ch>
* Mail list contact: <psi-hpc-at-cscs-admin@lists.psi.ch>
* Contact person at CSCS: Angelo Mangili <amangili@psi.ch>


@@ -0,0 +1,52 @@
---
title: Transferring Data between PSI and CSCS
#tags:
keywords: CSCS, data-transfer
last_updated: 02 March 2022
summary: "This Document shows the procedure for transferring data between CSCS and PSI"
sidebar: CSCS_sidebar
permalink: /CSCS/transfer-data.html
---
# Transferring Data
This document shows how to transfer data between PSI and CSCS using a Linux workstation.
## Preparing SSH configuration
If the directory **`.ssh`** does not exist in your home directory, create it with **`0700`** permissions:
```bash
mkdir ~/.ssh
chmod 0700 ~/.ssh
```
Then create a new file **`.ssh/config`** if it does not exist, or add the following lines
to the already existing file, replacing **`$cscs_accountname`** with your CSCS `username`:
```bash
Host daint.cscs.ch
    Compression yes
    ProxyJump ela.cscs.ch

Host *.cscs.ch
    User $cscs_accountname
```
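With this configuration in place, logins to `daint.cscs.ch` go transparently through the `ela.cscs.ch` jump host. A quick way to verify it (assuming your CSCS account is active):
```bash
# Should print the remote hostname after jumping through ela.cscs.ch
ssh daint.cscs.ch hostname
```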
### Advanced SSH configuration
Many additional SSH settings are available for more advanced configurations.
Users who already have an existing SSH configuration may need to adapt the lines above accordingly.
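For illustration only, a minimal sketch of such optional settings (keep-alive and connection multiplexing, both standard OpenSSH options; none of them is required by CSCS):
```bash
Host *.cscs.ch
    User $cscs_accountname
    # Keep idle connections from timing out
    ServerAliveInterval 60
    # Reuse one authenticated connection for subsequent sessions
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m
```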
## Transferring files
Once the above configuration is set, you can `rsync` between Merlin and CSCS in either direction:
```bash
# CSCS -> PSI
rsync -azv daint.cscs.ch:<source_path> <destination_path>
# PSI -> CSCS
rsync -azv <source_path> daint.cscs.ch:<destination_path>
```
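For large transfers, standard `rsync` options can keep partially transferred files and report progress; a sketch:
```bash
# Resume-friendly transfer with progress reporting (PSI -> CSCS)
rsync -azv --partial --progress <source_path> daint.cscs.ch:<destination_path>
```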


@@ -0,0 +1,87 @@
---
title: ANSYS HFSS / ElectroMagnetics
#tags:
last_updated: 13 April 2022
keywords: software, ansys, ansysEM, em, slurm, hfss
summary: "This document describes how to run ANSYS HFSS (ElectroMagnetics) in the Merlin6 cluster"
sidebar: merlin6_sidebar
permalink: /merlin6/ansys-hfss.html
---
This document describes the different ways of running **ANSYS HFSS (ElectroMagnetics)** in the Merlin6 cluster.
## ANSYS HFSS (ElectroMagnetics)
This recipe shows how to run ANSYS HFSS (ElectroMagnetics) under Slurm.
Keep in mind that, in general, running ANSYS HFSS means running **ANSYS Electronics Desktop**.
## Running HFSS / Electromagnetics jobs
### PModules
At least **ANSYS/2022R1** is necessary; it is available in PModules:
```bash
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1
```
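To confirm the module is loaded, and, assuming the module puts the Electronics Desktop binary on the `PATH` (an assumption, not verified here), that `ansysedt` resolves:
```bash
module list
# Assumes the ANSYS module adds ansysedt to the PATH;
# the full binary path is given later in this document
which ansysedt
```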
## Remote job submission: HFSS RSM and SLURM
Running jobs through Remote RSM or Slurm is the recommended way of running ANSYS HFSS.
* **HFSS RSM** can be used from ANSYS HFSS installations running on Windows workstations at PSI (as long as they are in the internal PSI network).
* **Slurm** can be used when submitting directly from a Merlin login node (i.e. with the `sbatch` command, as sketched below, or interactively from **ANSYS Electronics Desktop**).
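As a reference for direct `sbatch` submission, below is a minimal batch script sketch; the job name, resources, and project path are placeholders, and `-ng` (non-graphical) with `-batchsolve` are standard **ANSYS Electronics Desktop** command-line options:
```bash
#!/bin/bash
#SBATCH --job-name=hfss_solve   # placeholder job name
#SBATCH --nodes=1               # placeholder resources; adjust to your model
#SBATCH --ntasks=8
#SBATCH --time=01:00:00

# Load ANSYS as described above
module use unstable
module load Pmodules/1.1.6
module use overlay_merlin
module load ANSYS/2022R1

# Non-graphical batch solve of a hypothetical project file
ansysedt -ng -batchsolve /path/to/project.aedt
```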
### HFSS RSM (from remote workstations)
Running jobs through Remote RSM is the way to run ANSYS HFSS when submitting from an ANSYS HFSS installation on a PSI Windows workstation.
An HFSS RSM service is running on each **Merlin login node**:
- `merlin-l-01.psi.ch:32958`
- `merlin-l-001.psi.ch:32958`
- `merlin-l-002.psi.ch:32958`
The service listens on port **`32958`**, the default for ANSYS/2022R1. Windows workstations must ensure that **Electronics Desktop** connects to this port (in general, this requires no configuration changes unless the port was previously modified or a newer ANSYS version changes the default).
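From a Linux host inside the PSI network, reachability of the RSM port can be sanity-checked with a standard TCP probe (illustrative only, not part of the official setup):
```bash
# Succeeds if the HFSS RSM service is listening on the default port
nc -zv merlin-l-001.psi.ch 32958
```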
Setting up HFSS RSM for use with the Merlin cluster is done from the following **ANSYS Electronics Desktop** menus:
1. **[Tools]->[Job Management]->[Select Scheduler]**.
![Select_Scheduler]({{"/images/ANSYS/HFSS/01_Select_Scheduler_Menu.png"}})
2. In the new **[Select scheduler]** window, apply the following settings and press **Refresh**:
![RSM_Remote_Scheduler]({{"/images/ANSYS/HFSS/02_Select_Scheduler_RSM_Remote.png"}})
* **Select Scheduler**: `Remote RSM`.
* **Server**: Add a Merlin login node.
* **User name**: Add your Merlin username.
* **Password**: Add your Merlin password.
Once *refreshed*, the **Scheduler info** box should show **Slurm** information for the server (see the picture above). If it does, you can save the changes (**OK** button).
3. **[Tools]->[Job Management]->[Submit Job...]**.
![Submit_Job]({{"/images/ANSYS/HFSS/04_Submit_Job_Menu.png"}})
4. In the new **[Submit Job]** window, you must specify the location of the **ANSYS Electronics Desktop** binary.
![Product_Path]({{"/images/ANSYS/HFSS/05_Submit_Job_Product_Path.png"}})
* For example, for **ANSYS/2022R1**, the location is `/data/software/pmodules/Tools/ANSYS/2022R1/v221/AnsysEM22.1/v221/Linux64/ansysedt`.
### HFSS Slurm (from login node only)
Running jobs through Slurm from **ANSYS Electronics Desktop** is the way to run ANSYS HFSS when submitting from an ANSYS HFSS installation on a Merlin login node. **ANSYS Electronics Desktop** usually needs to be run from the **[Merlin NoMachine](/merlin6/nomachine.html)** service, which currently runs on:
- `merlin-l-001.psi.ch`
- `merlin-l-002.psi.ch`
Since the Slurm client is present on the login node (where **ANSYS Electronics Desktop** is running), the application is able to detect Slurm and submit to it directly. Therefore, we only have to configure **ANSYS Electronics Desktop** to submit to Slurm. This can be set up as follows:
1. **[Tools]->[Job Management]->[Select Scheduler]**.
![Select_Scheduler]({{"/images/ANSYS/HFSS/01_Select_Scheduler_Menu.png"}})
2. In the new **[Select scheduler]** window, apply the following settings and press **Refresh**:
![RSM_Remote_Scheduler]({{"/images/ANSYS/HFSS/03_Select_Scheduler_Slurm.png"}})
* **Select Scheduler**: `Slurm`.
* **Server**: must point to `localhost`.
* **User name**: must be empty.
* **Password**: must be empty.
The **Server**, **User name** and **Password** boxes cannot be edited directly; if the values do not match the settings above, change them by temporarily selecting another scheduler that allows editing these boxes (i.e. **Remote RSM**).
Once *refreshed*, the **Scheduler info** box should show **Slurm** information for the server (see the picture above). If it does, you can save the changes (**OK** button).
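Since this mode relies on the Slurm client being present on the login node, a quick sanity check before selecting the scheduler (both are standard Slurm commands and should succeed on a Merlin login node):
```bash
which sbatch
sinfo --version
```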