# Running Interactive Jobs

## The Merlin7 'interactive' partition

On the **`merlin7`** cluster, it is recommended to always run interactive jobs on the **`interactive`** partition. This partition allows CPU oversubscription (up to four users may share the same CPU) and **has the highest scheduling priority**. Access to this partition is typically quick, making it a convenient extension of the login nodes for interactive workloads.

On the **`gmerlin7`** cluster, additional interactive partitions are available, but these are primarily intended for CPU-only workloads (such as compiling GPU-based software, or creating an allocation for submitting jobs to Grace-Hopper nodes).

!!! warning
    Because **GPU resources are scarce and expensive**, interactive allocations that request GPUs should only be submitted when strictly necessary and well justified.

## Running interactive jobs

There are two ways of running interactive jobs in Slurm, using the ``salloc`` and ``srun`` commands:

* **``salloc``**: obtains a Slurm job allocation (a set of nodes), executes command(s), and then releases the allocation once the command finishes.
* **``srun``**: runs parallel tasks.

### srun

``srun`` is used to run parallel jobs in the batch system. It can be used within a batch script (submitted with ``sbatch``), within a job allocation (created with ``salloc``), or as a direct command (for example, from the login nodes).

When used inside a batch script or a job allocation, ``srun`` is constrained to the resources allocated by the ``sbatch``/``salloc`` commands. In ``sbatch``, these resources are usually defined inside the batch script with ``#SBATCH`` directives.
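
As a sketch of the ``salloc`` approach, the following shows how an interactive allocation might be requested on the ``interactive`` partition and how tasks can then be launched inside it with ``srun``. The task count and time limit are placeholders, not recommendations, and should be adapted to the actual workload.

```bash
# Request an interactive allocation on the 'interactive' partition.
# The task count and time limit below are only examples.
salloc --partition=interactive --ntasks=4 --time=01:00:00

# On most Slurm configurations, salloc then opens a shell within the
# allocation. Parallel tasks launched with srun are constrained to the
# resources granted by salloc:
srun hostname

# Leaving the shell releases the allocation:
exit
```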
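
For comparison, a minimal batch-script sketch of the ``sbatch`` case: resources are declared through ``#SBATCH`` directives (again example values only), and ``srun`` is limited to exactly those resources when the script is submitted with ``sbatch``.

```bash
#!/bin/bash
#SBATCH --ntasks=4          # example task count
#SBATCH --time=00:30:00     # example time limit

# srun runs inside the allocation created by sbatch and is constrained
# to the resources declared by the #SBATCH directives above.
srun hostname
```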