Add GPU example
@@ -157,6 +157,31 @@ srun $MYEXEC # where $MYEXEC is a path to your binary file
{{site.data.alerts.tip}} Also, keep in mind that the product **`--mem-per-cpu` x `--cpus-per-task`** can **never** exceed the maximum amount of memory per node (352000MB).
{{site.data.alerts.end}}
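The constraint above is a simple product check. As a sketch, with hypothetical values standing in for `--mem-per-cpu=4000` and `--cpus-per-task=8`, you can verify a request stays under the node limit before submitting:

```bash
#!/bin/bash
# Hypothetical request values, for illustration only
MEM_PER_CPU=4000        # MB, as in '--mem-per-cpu=4000'
CPUS_PER_TASK=8         # as in '--cpus-per-task=8'
NODE_LIMIT=352000       # MB, maximum memory per node

TOTAL=$((MEM_PER_CPU * CPUS_PER_TASK))
if [ "$TOTAL" -le "$NODE_LIMIT" ]; then
    echo "OK: requesting ${TOTAL}MB of ${NODE_LIMIT}MB per node"
else
    echo "Rejected: ${TOTAL}MB exceeds the ${NODE_LIMIT}MB node limit"
fi
```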

## GPU examples
Using GPUs requires two major changes. First, the cluster must be specified
as `gmerlin6`; the same option should also be added to later commands
pertaining to the job, e.g. `scancel --cluster=gmerlin6 <jobid>`. Second, the
number of GPUs should be requested using `--gpus`, `--gpus-per-task`, or a
similar parameter. Here's an example of a simple test job:
```bash
#!/bin/bash
#SBATCH --partition=gpu        # Or 'gpu-short' for higher priority but 2-hour limit
#SBATCH --cluster=gmerlin6     # Required for GPU
#SBATCH --gpus=2               # Total number of GPUs
#SBATCH --cpus-per-gpu=5       # Request CPU resources
#SBATCH --time=1-00:00:00      # Define max time job will run
#SBATCH --output=myscript.out  # Define your output file
#SBATCH --error=myscript.err   # Define your error file

module purge
module load cuda               # Load any needed modules here
srun $MYEXEC                   # where $MYEXEC is a path to your binary file
```
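Because the job runs on a non-default cluster, the `--cluster` flag is also needed when inspecting or cancelling it. A hypothetical session (with `myscript.sh` standing in for your batch script) might look like:

```bash
sbatch myscript.sh                    # submit; prints the assigned job ID
squeue --cluster=gmerlin6 -u $USER    # check the job's status
scancel --cluster=gmerlin6 <jobid>    # cancel it if needed
```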
Slurm will automatically set the GPU visibility for the job (e.g. `$CUDA_VISIBLE_DEVICES`).

## Advanced examples

### Array Jobs: launching a large number of related jobs