Finished basic commands

caubet_m 2019-06-19 16:53:19 +02:00
parent 9417abdd84
commit d6d21c531c
2 changed files with 16 additions and 13 deletions


@@ -53,7 +53,7 @@ Merlin6 contains 3 partitions for general purpose. These are ``general``, ``dail
``general`` will be the default. Partition can be defined with the ``--partition`` option as follows:
```bash
-#SBATCH --partition=<general|daily|hourly> # name of slurm partition to submit. 'general' is the 'default'.
+#SBATCH --partition=<general|daily|hourly> # Partition to use. 'general' is the 'default'.
```
Please check the section [Slurm Configuration#Merlin6 Slurm Partitions] for more information about Merlin6 partition setup.
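
As a quick illustration of the default behaviour described above, submission with and without an explicit partition could look as follows; `myjob.sh` is a hypothetical job script, not part of this commit:
```bash
sbatch --partition=hourly myjob.sh   # explicitly request the 'hourly' partition
sbatch myjob.sh                      # no partition given: the job goes to 'general' (the default)
```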
@@ -89,7 +89,7 @@ The following template should be used by any user submitting jobs to CPU nodes:
```bash
#!/bin/sh
#SBATCH --partition=<general|daily|hourly> # Specify 'general' or 'daily' or 'hourly'
-#SBATCH --time=<D-HH:MM:SS> # Recommended, and strictly recommended when using 'general' partition.
+#SBATCH --time=<D-HH:MM:SS> # Strictly recommended when using 'general' partition.
#SBATCH --output=<output_file> # Generate custom output file
#SBATCH --error=<error_file> # Generate custom error file
#SBATCH --constraint=mc # You must specify 'mc' when using 'cpu' jobs
@@ -97,10 +97,10 @@ The following template should be used by any user submitting jobs to CPU nodes:
##SBATCH --exclusive # Uncomment if you need exclusive node usage
## Advanced options example
-##SBATCH --nodes=1 # Uncomment and specify number of nodes to use
-##SBATCH --ntasks=44 # Uncomment and specify number of nodes to use
-##SBATCH --ntasks-per-node=44 # Uncomment and specify number of tasks per node
-##SBATCH --ntasks-per-core=2 # Uncomment and specifty number of tasks per core (threads)
+##SBATCH --nodes=1 # Uncomment and specify #nodes to use
+##SBATCH --ntasks=44 # Uncomment and specify #tasks to use
+##SBATCH --ntasks-per-node=44 # Uncomment and specify #tasks per node
+##SBATCH --ntasks-per-core=2 # Uncomment and specify #tasks per core (a.k.a. threads)
##SBATCH --cpus-per-task=44 # Uncomment and specify the number of cores per task
```
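
For reference, a filled-in version of the CPU template above could look like the sketch below; the time limit, file names, task count and the `./myapp` command are placeholder values, not taken from this commit:
```bash
#!/bin/sh
#SBATCH --partition=daily        # one of 'general', 'daily' or 'hourly'
#SBATCH --time=0-01:00:00        # D-HH:MM:SS format; 1 hour in this example
#SBATCH --output=myjob.out       # custom output file
#SBATCH --error=myjob.err        # custom error file
#SBATCH --constraint=mc          # 'mc' is required for CPU jobs
#SBATCH --ntasks=44              # example task count; adjust to the job

srun ./myapp                     # './myapp' is a placeholder for the real program
```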
@@ -141,7 +141,7 @@ The following template should be used by any user submitting jobs to GPU nodes:
```bash
#!/bin/sh
#SBATCH --partition=<general|daily|hourly> # Specify 'general' or 'daily' or 'hourly'
-#SBATCH --time=<D-HH:MM:SS> # Recommended, and strictly recommended when using 'general' partition.
+#SBATCH --time=<D-HH:MM:SS> # Strictly recommended when using 'general' partition.
#SBATCH --output=<output_file> # Generate custom output file
#SBATCH --error=<error_file> # Generate custom error file
#SBATCH --constraint=gpu # You must specify 'gpu' for using GPUs
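
Analogously, a filled-in GPU job sketch based only on the directives shown above might read as follows; the concrete values and `./my_gpu_app` are placeholders:
```bash
#!/bin/sh
#SBATCH --partition=hourly       # one of 'general', 'daily' or 'hourly'
#SBATCH --time=0-00:30:00        # D-HH:MM:SS format; 30 minutes in this example
#SBATCH --output=myjob.out       # custom output file
#SBATCH --error=myjob.err        # custom error file
#SBATCH --constraint=gpu         # 'gpu' is required to run on the GPU nodes

srun ./my_gpu_app                # placeholder for the real GPU program
```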


@@ -17,12 +17,14 @@ information about options and examples.
Useful commands for the slurm:
```bash
-sinfo # to see the name of nodes, their occupancy, name of slurm partitions, limits (try out with "-l" option)
-squeue # to see the currently running/waiting jobs in slurm (additional "-l" option may also be useful)
+sinfo # to see the name of nodes, their occupancy,
+      # name of slurm partitions, limits (try out with "-l" option)
+squeue # to see the currently running/waiting jobs in slurm
+       # (additional "-l" option may also be useful)
sbatch Script.sh # to submit a script (example below) to the slurm.
srun <command> # to submit a command to Slurm. Same options as in 'sbatch' can be used.
-salloc # to allocate computing nodes. Useful for running interactive jobs (ANSYS, Python Notebooks, etc.).
-scancel job_id # to cancel slurm job, job id is the numeric id, seen by the squeue
+salloc # to allocate computing nodes. Use for interactive runs.
+scancel job_id # to cancel slurm job, job id is the numeric id, seen by the squeue.
```
---
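
To illustrate the interactive workflow built around `salloc` and `srun` mentioned above, a minimal session could look like this; the resource values and `python myscript.py` are hypothetical:
```bash
salloc --partition=hourly --ntasks=1 --time=00:30:00   # request an interactive allocation
srun python myscript.py                                # run a command inside the allocation
exit                                                   # release the allocation when finished
```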
@@ -30,9 +32,10 @@ scancel job_id # to cancel slurm job, job id is the numeric id, seen by the sq
## Advanced basic commands:
```bash
-sinfo -N -l # list nodes, state, resources (number of CPUs, memory per node, etc.), and other information
+sinfo -N -l # list nodes, state, resources (#CPUs, memory per node, ...), etc.
sshare -a # to list shares of associations to a cluster
-sprio -l # to view the factors that comprise a job's scheduling priority (add -u <username> for filtering user)
+sprio -l # to view the factors that comprise a job's scheduling priority
+         # add '-u <username>' for filtering user
```
---
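
The listing and priority commands above can also be narrowed down per partition or per user; a few examples using standard Slurm filter options (not shown in the commit itself):
```bash
sinfo -p daily -N -l      # node-oriented long listing restricted to the 'daily' partition
squeue -u $USER -l        # long listing of your own running/pending jobs
sprio -l -u $USER         # priority factors for your own pending jobs only
```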