GPUs 🔗

PALMA-II offers nodes equipped with graphics processing units (GPUs) for workloads that benefit from these accelerators, such as image processing or machine learning. Below you can find a list of all GPU models currently present in the cluster.

| GPU model | Compute capability | Memory | CUDA cores | GFLOPS (FP32) | GFLOPS (FP64) | Partitions |
| --- | --- | --- | --- | --- | --- | --- |

How to submit a GPU job 🔗

If you want to use a GPU for your computations:

  • Choose a GPU according to your needs: single- or double-precision performance, amount of VRAM, etc.
  • Use one of the gpu[...] partitions for your job (see table above) and specify with --gres=gpu:<number of GPUs> how many GPUs your job needs (see the SLURM documentation on generic resources for more information). Remember to also reserve CPUs and RAM accordingly!
⚠️

Please do not reserve a whole GPU node unless you are using all of its GPUs and need all of its CPU cores! This allows several users to share the GPUs of a node at the same time.
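The same resource flags also work for short interactive tests with srun. The partition name and resource sizes below are only example values, adjust them to your needs:

```shell
# Request an interactive shell with 1 GPU, 4 CPU cores and 16 GB of RAM
# on the gpuv100 partition (example values, not a recommendation).
srun --partition=gpuv100 --gres=gpu:1 --cpus-per-task=4 --mem=16G \
     --time=00:30:00 --pty bash

# Inside the allocation, verify that the GPU is visible:
nvidia-smi
```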

📣
gpuexpress

You can allocate at most 1 GPU, 4 CPU cores and 29 GB of RAM per job on this node (so that one node can accommodate 8 jobs).
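Translated into batch directives, a maximal single-job request on gpuexpress would look like this (a sketch based on the limits above):

```shell
#!/bin/bash
# Maximum per-job allocation on the gpuexpress partition
#SBATCH --partition=gpuexpress
#SBATCH --gres=gpu:1        # at most 1 GPU
#SBATCH --cpus-per-task=4   # at most 4 CPU cores
#SBATCH --mem=29G           # at most 29 GB of RAM
```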

Below you will find an example job script which reserves 6 CPUs and 1 GPU on the gpuv100 partition.

GPU Job Script
#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --partition=gpuv100
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00

#SBATCH --job-name=MyGPUJob
#SBATCH --output=output.dat
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your_account@uni-muenster.de

# load modules with available GPU support (this is an example, modify to your needs!)
module load fosscuda
module load TensorFlow

# run your application
...
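Once the job is running, Slurm typically restricts it to its allocated GPUs via the CUDA_VISIBLE_DEVICES environment variable. A minimal sanity check using only the Python standard library (the value assigned below is simulated for illustration; inside a real job Slurm sets it for you):

```python
import os

def allocated_gpus():
    """Return the list of GPU indices Slurm assigned to this job."""
    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [int(i) for i in visible.split(",") if i.strip()]

# Simulated here for illustration; in a job with --gres=gpu:1 Slurm
# typically exports CUDA_VISIBLE_DEVICES=0 itself.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(allocated_gpus())  # → [0]
```

If the list is empty, the job was probably submitted without --gres=gpu:<n> or to a partition without GPUs.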