To build and execute code on the GPU A100 cluster, please log in to a GPU A100 login node, e.g. bgnlogin.nhr.zib.de.
- See also: Quickstart
Code build
For building code we recommend the NVIDIA HPC SDK together with HPC-X (module nvhpc-hpcx), which combines compilers with powerful libraries such as MPI.
Plain OpenMP for GPU
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles:
... 4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 $ nvc -mp -target=gpu openmp_gpu.c -o openmp_gpu.bin
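For reference, a minimal sketch of what openmp_gpu.c might contain is shown below. The file name matches the compile line above, but the contents are only an illustrative example of OpenMP target offload, not part of the cluster documentation.

/* openmp_gpu.c - illustrative OpenMP target-offload example */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Offload the loop to the GPU; the reduction result is copied back to the host. */
    #pragma omp target teams distribute parallel for reduction(+:sum) map(tofrom:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (double)n;
    }

    printf("sum = %f (GPUs visible to OpenMP: %d)\n", sum, omp_get_num_devices());
    return 0;
}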
MPI and OpenMP for GPU
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles:
... 4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 $ mpicc -mp -target=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
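Similarly, a minimal hybrid sketch of what mpi_openmp_gpu.c could look like is given below: each MPI rank offloads a partial sum to a GPU and the results are combined with MPI_Reduce. The contents are only an assumed example matching the file name in the compile line above.

/* mpi_openmp_gpu.c - illustrative hybrid MPI + OpenMP offload example */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;
    double local = 0.0, global = 0.0;

    /* Each rank offloads its share of the work to a GPU. */
    #pragma omp target teams distribute parallel for reduction(+:local) map(tofrom:local)
    for (int i = 0; i < n; i++) {
        local += 1.0 / ((double)n * size);
    }

    /* Combine the per-rank partial sums on rank 0. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f (computed by %d ranks)\n", global, size);

    MPI_Finalize();
    return 0;
}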
Code execution
Lise's CPU-only partition and the A100 GPU partition share the same SLURM batch system. The main SLURM partition for the A100 GPU nodes is named "gpu-a100". An example job script is shown below.
GPU job script
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --gres=gpu:4

module load openmpi/gcc.11/4.1.4
mpirun ./mycode.bin
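Assuming the script above is saved as, for example, gpu_job.slurm (the file name is arbitrary), it can be submitted from a login node with sbatch:

bgnlogin1 $ sbatch gpu_job.slurm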