To build and execute code on the GPU A100 partition, please log in to
- a GPU A100 login node, e.g. bgnlogin.nhr.zib.de.
- see also Quickstart and GPU A100 partition
Code build
For code generation we recommend the NVIDIA software stack nvhpc-hpcx, which combines the NVIDIA compilers with powerful libraries such as the HPC-X MPI.
...
MPI + OpenMP for GPU:
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles: ... 4) hpcx 5) nvhpc-hpcx/23.1
bgnlogin1 $ mpicc -mp=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
Code execution
The CPU-only partitions and the A100 GPU partition share the same SLURM batch system. The main SLURM partition for the A100 GPU partition is named "gpu-a100"; all available SLURM partitions for the A100 GPU partition can be seen on Slurm partition GPU A100. Example job scripts are shown below.
...
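A job script such as the ones below is handed to the batch system with the standard SLURM tools; the script file name here is only an example.

```text
bgnlogin1 $ sbatch openmp_gpu.slurm
bgnlogin1 $ squeue -u $USER
```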
Job script for plain OpenMP:
#!/bin/bash
#SBATCH --partition=gpu-a100:shared
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --ntasks=1
./openmp_gpu.bin
Job script for MPI + OpenMP:
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72
module load nvhpc-hpcx/23.1
mpirun -np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin