To build and execute code on the A100 GPU cluster, please log in to an A100 GPU login node, e.g. bgnlogin.nhr.zib.de (see also Quickstart).
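For example, from your local machine (myaccount is a placeholder for your own user name):

$ ssh myaccount@bgnlogin.nhr.zib.de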
Code build
For compilation we recommend the NVIDIA software package nvhpc-hpcx, which combines the NVIDIA HPC compilers with powerful libraries such as the HPC-X MPI.
Plain OpenMP for GPU
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles:
  ...  4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 $ nvc -mp -target=gpu openmp_gpu.c -o openmp_gpu.bin
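The source file openmp_gpu.c is not part of this guide. A minimal sketch of what such a program could look like, a vector addition offloaded to the GPU via OpenMP target directives, is:

/* openmp_gpu.c -- hypothetical example matching the compile line above */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;
    double *a = malloc(n * sizeof(double));
    double *b = malloc(n * sizeof(double));
    double *c = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* map the inputs to the device, run the loop there, map the result back */
    #pragma omp target teams distribute parallel for \
        map(to: a[0:n], b[0:n]) map(from: c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f, c[n-1] = %f\n", c[0], c[n - 1]);
    free(a); free(b); free(c);
    return 0;
}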
MPI + OpenMP for GPU
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles:
  ...  4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 $ mpicc -mp -target=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
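Again, mpi_openmp_gpu.c is not shown in this guide. A minimal sketch under the assumption that each MPI rank drives one GPU could look like this:

/* mpi_openmp_gpu.c -- hypothetical example matching the compile line above */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* bind each rank to one of the GPUs visible on its node */
    int ngpus = omp_get_num_devices();
    if (ngpus > 0)
        omp_set_default_device(rank % ngpus);

    /* offload a partial reduction to the selected device */
    double sum = 0.0;
    #pragma omp target teams distribute parallel for reduction(+:sum)
    for (int i = 0; i < 1000000; i++)
        sum += 1.0;

    /* combine the per-rank results on rank 0 */
    double total = 0.0;
    MPI_Reduce(&sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("ranks: %d, total: %f\n", size, total);

    MPI_Finalize();
    return 0;
}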
Code execution
All Slurm partitions available for the A100 GPU cluster are listed on Slurm partitions GPU A100.
Job script for plain OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=72

./openmp_gpu.bin
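Assuming the script above is saved as openmp_gpu.job (the filename is arbitrary), submit it from a login node with:

bgnlogin1 $ sbatch openmp_gpu.job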
Job script for MPI + OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

module load nvhpc-hpcx/23.1
mpirun --np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin
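Here the 8 MPI ranks match the 8 GPUs requested in total (2 nodes with 4 GPUs each), and --map-by ppr:2:socket:pe=1 places two ranks on each socket with one core bound per rank, which should correspond to one rank per GPU. Assuming the script is saved as mpi_openmp_gpu.job, submit it with:

bgnlogin1 $ sbatch mpi_openmp_gpu.job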