Updates in Slurm

2019-07-01 18:04:15 +02:00
parent 864ef84a0f
commit 5c2ea17076
4 changed files with 23 additions and 88 deletions


@@ -15,7 +15,6 @@ permalink: /merlin6/slurm-examples.html
```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
-#SBATCH --constraint=mc          # Use CPU batch system
#SBATCH --ntasks-per-core=1 # Force no Hyper-Threading, will run 1 task per core
#SBATCH --mem-per-cpu=8000 # Double the default memory per cpu
#SBATCH --time=00:30:00 # Define max time job will run
@@ -35,7 +34,6 @@ hyperthreads), hence we want to use the memory as if we were using 2 threads.
```bash
#!/bin/bash
#SBATCH --partition=hourly # Using 'hourly' will grant higher priority
-#SBATCH --constraint=mc          # Use CPU batch system
#SBATCH --ntasks-per-core=1 # Force no Hyper-Threading, will run 1 task per core
#SBATCH --mem=352000 # We want to use the whole memory
#SBATCH --time=00:30:00 # Define max time job will run
@@ -58,7 +56,6 @@ the job will use. This must be done in order to avoid conflicts with other jobs
#SBATCH --exclusive # Use the node in exclusive mode
#SBATCH --ntasks=88 # Job will run 88 tasks
#SBATCH --ntasks-per-core=2 # Force Hyper-Threading, will run 2 tasks per core
-#SBATCH --constraint=mc          # Use CPU batch system
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err # Define your error file
@@ -81,7 +78,6 @@ per thread is 4000MB, in total this job can use up to 352000MB memory which is the whole memory of the node.
#SBATCH --ntasks=44 # Job will run 44 tasks
#SBATCH --ntasks-per-core=1 # Force no Hyper-Threading, will run 1 task per core
#SBATCH --mem=352000 # Define the whole memory of the node
-#SBATCH --constraint=mc          # Use CPU batch system
#SBATCH --time=00:30:00 # Define max time job will run
#SBATCH --output=myscript.out # Define your output file
#SBATCH --error=myscript.err     # Define your error file
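
For reference, this is how the first example reads once the commit is applied, with the `--constraint=mc` line dropped. This is a sketch for illustration, not part of the changed file: the `srun hostname` payload is a placeholder, and the output/error directives are carried over from the later hunks.

```bash
#!/bin/bash
#SBATCH --partition=hourly       # Using 'hourly' will grant higher priority
#SBATCH --ntasks-per-core=1      # Force no Hyper-Threading, will run 1 task per core
#SBATCH --mem-per-cpu=8000       # Double the default memory per cpu
#SBATCH --time=00:30:00          # Define max time job will run
#SBATCH --output=myscript.out    # Define your output file
#SBATCH --error=myscript.err     # Define your error file

# Placeholder payload (assumption): report where the tasks landed.
srun hostname
```

The memory figures in the hunks are self-consistent: with 44 physical cores running two hyperthreads each, 88 threads at 4000MB per thread give 88 x 4000MB = 352000MB, which matches the `--mem=352000` requested by the whole-node examples.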