---
title: Running Interactive Jobs
#tags:
keywords: interactive, X11, X, srun, salloc, job, jobs, slurm, nomachine, nx
last_updated: 07 August 2024
summary: "This document describes how to run interactive jobs as well as X based software."
sidebar: merlin7_sidebar
permalink: /merlin7/interactive-jobs.html
---

## The Merlin7 'interactive' partition

On `merlin7`, it is recommended to always run interactive jobs on the `interactive` partition.
This partition oversubscribes CPUs (up to 4 users can share the same CPU) and has the highest priority.
In general, access to this partition should be quick, and it can be used as an extension of the login nodes.

Other interactive partitions are available on the `gmerlin7` cluster; however, their main use is for CPU access only.
Since the GPU resources are very expensive and we don't have many, please do not submit interactive allocations on GPU nodes using GPUs unless strongly justified.

## Running interactive jobs

There are two different ways of running interactive jobs in Slurm, using the ``salloc`` and ``srun`` commands:

* **``salloc``**: obtains a Slurm job allocation (a set of nodes), executes command(s), and then releases the allocation when the command is finished.
* **``srun``**: is used for running parallel tasks.

### srun

``srun`` is used to run parallel jobs in the batch system. It can be used within a batch script (run with ``sbatch``) or within a job allocation (obtained with ``salloc``).
It can also be used as a direct command (for example, from the login nodes).
When used inside a batch script or during a job allocation, ``srun`` is constrained to the amount of resources allocated by the ``sbatch``/``salloc`` commands.
With ``sbatch``, these resources are usually defined inside the batch script with ``#SBATCH`` directives.
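
As an illustration of how ``srun`` inherits its resources from ``#SBATCH`` directives, here is a minimal sketch of a batch script. The partition name and resource values are placeholders, not site recommendations, and ``my_parallel_app`` is a hypothetical executable:

```bash
#!/bin/bash
#SBATCH --partition=general    # placeholder partition name; adjust to your site
#SBATCH --ntasks=4             # request 4 parallel tasks
#SBATCH --time=00:30:00        # 30-minute wall-time limit

# srun launches the tasks within the resources allocated above;
# 'my_parallel_app' is a hypothetical executable
srun my_parallel_app
```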
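
For an interactive session on the ``interactive`` partition described above, a common pattern looks like the following. This is a sketch only; additional options such as ``--time`` or memory limits may be required by site policy:

```bash
# Request a one-task allocation on the 'interactive' partition,
# then launch an interactive shell inside the allocation
salloc --partition=interactive --ntasks=1
srun --pty bash

# Or, in a single step, start the interactive shell directly with srun
srun --partition=interactive --ntasks=1 --pty bash
```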