diff --git a/pages/merlin6/merlin6-slurm/running-jobs.md b/pages/merlin6/merlin6-slurm/running-jobs.md
index 479bd8a..43b5cfe 100644
--- a/pages/merlin6/merlin6-slurm/running-jobs.md
+++ b/pages/merlin6/merlin6-slurm/running-jobs.md
@@ -53,7 +53,7 @@ Merlin6 contains 3 partitions for general purpose. These are ``general``, ``daily`` and ``hourly``.
 ``general`` will be the default. Partition can be defined with the ``--partition`` option as follows:
 ```bash
-#SBATCH --partition=   # name of slurm partition to submit. 'general' is the 'default'.
+#SBATCH --partition=   # Partition to use. 'general' is the default.
 ```
 Please check the section [Slurm Configuration#Merlin6 Slurm Partitions] for more information about Merlin6 partition setup.
@@ -89,7 +89,7 @@ The following template should be used by any user submitting jobs to CPU nodes:
 ```bash
 #!/bin/sh
 #SBATCH --partition=   # Specify 'general' or 'daily' or 'hourly'
-#SBATCH --time=        # Recommended, and strictly recommended when using 'general' partition.
+#SBATCH --time=        # Strongly recommended when using the 'general' partition.
 #SBATCH --output=      # Generate custom output file
 #SBATCH --error=       # Generate custom error file
 #SBATCH --constraint=mc            # You must specify 'mc' when using 'cpu' jobs
@@ -97,10 +97,10 @@ The following template should be used by any user submitting jobs to CPU nodes:
 ##SBATCH --exclusive              # Uncomment if you need exclusive node usage
 ## Advanced options example
-##SBATCH --nodes=1                # Uncomment and specify number of nodes to use
-##SBATCH --ntasks=44              # Uncomment and specify number of nodes to use
-##SBATCH --ntasks-per-node=44     # Uncomment and specify number of tasks per node
-##SBATCH --ntasks-per-core=2      # Uncomment and specifty number of tasks per core (threads)
+##SBATCH --nodes=1                # Uncomment and specify #nodes to use
+##SBATCH --ntasks=44              # Uncomment and specify #tasks to use
+##SBATCH --ntasks-per-node=44     # Uncomment and specify #tasks per node
+##SBATCH --ntasks-per-core=2      # Uncomment and specify #tasks per core (a.k.a. threads)
 ##SBATCH --cpus-per-task=44       # Uncomment and specify the number of cores per task
 ```
@@ -141,7 +141,7 @@ The following template should be used by any user submitting jobs to GPU nodes:
 ```bash
 #!/bin/sh
 #SBATCH --partition=   # Specify 'general' or 'daily' or 'hourly'
-#SBATCH --time=        # Recommended, and strictly recommended when using 'general' partition.
+#SBATCH --time=        # Strongly recommended when using the 'general' partition.
 #SBATCH --output=      # Generate custom output file
 #SBATCH --error=
                   # to submit a command to Slurm. Same options as in 'sbatch' can be used.
-salloc            # to allocate computing nodes. Useful for running interactive jobs (ANSYS, Python Notebooks, etc.).
-scancel job_id    # to cancel slurm job, job id is the numeric id, seen by the squeue
+salloc            # to allocate computing nodes. Use for interactive runs.
+scancel job_id    # to cancel a Slurm job; the job id is the numeric id shown by squeue.
 ```
 ---
@@ -30,9 +32,10 @@ scancel job_id  # to cancel slurm job, job id is the numeric id, seen by the squeue
 ## Advanced basic commands:
 ```bash
-sinfo -N -l      # list nodes, state, resources (number of CPUs, memory per node, etc.), and other information
+sinfo -N -l      # list nodes, state, resources (#CPUs, memory per node, ...), etc.
 sshare -a        # to list shares of associations to a cluster
-sprio -l         # to view the factors that comprise a job's scheduling priority (add -u for filtering user)
+sprio -l         # to view the factors that comprise a job's scheduling priority
+                 # add '-u' to filter by user
 ```
 ---
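
A minimal usage sketch for the batch templates and commands above, assuming the CPU template has been saved as `cpu-job.sh` (a hypothetical file name used only for illustration):

```bash
# Submit the batch script; Slurm replies with "Submitted batch job <jobid>"
sbatch cpu-job.sh          # 'cpu-job.sh' is an assumed example name

# Check your own jobs in the queue
squeue -u $USER

# Cancel a job using the numeric id reported by sbatch/squeue
scancel 123456             # 123456 is a placeholder job id
```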