To build and execute code on the GPU A100 partition, please use the appropriate login nodes listed in Quickstart and log in to
- a GPU A100 login node, like bgnlogin.nhr.zib.de,
- see also GPU A100 partition.
Code build
For code generation we recommend the software package nvhpc-hpcx, which combines the NVIDIA compilers with access to powerful libraries, e.g. MPI.
Plain OpenMP for GPU
bgnlogin1 ~ $ module load nvhpc-hpcx/23.1
bgnlogin1 ~ $ module list
Currently Loaded Modulefiles: ... 4) hpcx 5) nvhpc-hpcx/23.1
...
bgnlogin1 $ nvc -mp -target=gpu openmp_gpu.c -o openmp_gpu.bin
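The source file openmp_gpu.c itself is not part of this page. As an illustration only, a minimal OpenMP GPU offloading program that would compile with the command above could look like the following sketch; its contents are an assumption, not the centre's reference code.
Example openmp_gpu.c (illustrative sketch)
/*
 * Illustrative sketch (assumption): vector addition offloaded to one GPU
 * via an OpenMP target region, matching the nvc -mp -target=gpu build above.
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = 2.0;
    }

    printf("visible GPU devices: %d\n", omp_get_num_devices());

    /* Offload the loop to the default GPU; map input and output arrays. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %.1f, c[N-1] = %.1f\n", c[0], c[N - 1]);
    return 0;
}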
MPI + OpenMP for GPU
bgnlogin1 ~ $ module load nvhpc-hpcx/23.1
bgnlogin1 ~ $ module list
Currently Loaded Modulefiles: ... 4) hpcx 5) nvhpc-hpcx/23.1
...
bgnlogin1 $ mpicc -mp -target=gpu mpi_openmp_gpu.c -o mpi_openmp_gpu.bin
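Likewise, mpi_openmp_gpu.c is not shown here. The sketch below is an assumption for illustration: each MPI rank selects one of the node's GPUs, sums a local array on the device, and the per-rank results are combined with MPI_Reduce.
Example mpi_openmp_gpu.c (illustrative sketch)
/*
 * Illustrative sketch (assumption): hybrid MPI + OpenMP GPU offload.
 * Each rank sums its own local array on one of the node's GPUs; the
 * per-rank results are combined with MPI_Reduce.
 */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

#define N 1000000

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Assumption: distribute ranks round-robin over the visible GPUs. */
    int ndev = omp_get_num_devices();
    if (ndev > 0)
        omp_set_default_device(rank % ndev);

    static double a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    double local = 0.0, total = 0.0;

    /* Sum the local array on this rank's GPU. */
    #pragma omp target teams distribute parallel for \
            reduction(+:local) map(to: a[0:N]) map(tofrom: local)
    for (int i = 0; i < N; i++)
        local += a[i];

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %.1f from %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}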
Code execution
Lise's CPU-only partition and the A100 GPU partition share the same SLURM batch system. The main SLURM partition for the A100 GPU partition has the name "gpu-a100"; all available SLURM partitions for the A100 GPU system are listed on Slurm partition GPU A100. Example job scripts are shown below.
Job script for plain OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100:shared
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --ntasks=8
./openmp_gpu.bin
Job script for MPI + OpenMP
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72
module load nvhpc-hpcx/23.1
mpirun --np 8 --map-by ppr:2:socket:pe=1 ./mpi_openmp_gpu.bin