CUDA (Compute Unified Device Architecture) is an interface for programming Nvidia GPUs. It supports languages such as C, C++, and Fortran.
To build and execute code on the GPU A100 partition, please log in to
- a GPU A100 login node, e.g. bgnlogin.nhr.zib.de,
- see also the Quickstart for the GPU A100 partition.
Note that code written in the cross-industry standard language SYCL can also be executed on Nvidia (and AMD) hardware.
Code build
For code generation we recommend the software package NVIDIA HPC-X (module nvhpc-hpcx), which combines the compilers with powerful libraries such as CUDA, cuBLAS, and MPI.
```
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles:
... 4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 $ nvc -cuda -gpu=cc8.0 cuda.c -o cuda.bin
bgnlogin1 $ nvc -cuda -gpu=cc8.0 -cudalib=cublas cuda_cublas.c -o cuda_cublas.bin
```
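The compile commands above assume CUDA C source files cuda.c and cuda_cublas.c, which are not reproduced on this page. Purely for illustration, a minimal hypothetical cuda.c could look like the sketch below; it restricts itself to host-side CUDA runtime API calls (device query, device allocation, host/device copies), which the plain nvc -cuda invocation links against.

```c
/* cuda.c - hypothetical minimal example, not the original file:
 * host-side CUDA runtime API only, no custom kernels. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void) {
    int ndev = 0;
    if (cudaGetDeviceCount(&ndev) != cudaSuccess || ndev == 0) {
        fprintf(stderr, "no CUDA device visible\n");
        return 1;
    }

    /* allocate a buffer on the GPU and copy data to the device and back */
    const size_t n = 1 << 20;
    double *host = (double *)malloc(n * sizeof(double));
    double *dev = NULL;
    for (size_t i = 0; i < n; i++) host[i] = (double)i;

    cudaMalloc((void **)&dev, n * sizeof(double));
    cudaMemcpy(dev, host, n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(host, dev, n * sizeof(double), cudaMemcpyDeviceToHost);

    printf("%d GPU(s) found, copied %zu doubles to the device and back\n", ndev, n);

    cudaFree(dev);
    free(host);
    return 0;
}
```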
CUDA can be used in combination with MPI.
```
bgnlogin1 $ module load nvhpc-hpcx/23.1
bgnlogin1 $ module list
Currently Loaded Modulefiles:
... 4) hpcx   5) nvhpc-hpcx/23.1
bgnlogin1 $ mpicc -cuda -gpu=cc8.0 -cudalib=cublas -o cuda_cublas_mpi.bin cuda_cublas_mpi.c
```
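The MPI source file cuda_cublas_mpi.c is likewise not reproduced in this documentation. A hypothetical minimal version, shown only to illustrate a common pattern for combining MPI with CUDA and cuBLAS (each rank selects one of the node's GPUs and issues a cuBLAS call), could look like this:

```c
/* cuda_cublas_mpi.c - hypothetical sketch, not the file referenced above:
 * every MPI rank binds to one GPU of its node and computes a dot product
 * with cuBLAS. */
#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define N 1000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* map ranks round-robin onto the GPUs visible on this node */
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    double x[N], y[N], result = 0.0;
    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    double *dx, *dy;
    cudaMalloc((void **)&dx, N * sizeof(double));
    cudaMalloc((void **)&dy, N * sizeof(double));
    cudaMemcpy(dx, x, N * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, N * sizeof(double), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasDdot(handle, N, dx, 1, dy, 1, &result); /* expect 2.0 * N */
    cublasDestroy(handle);

    printf("rank %d on GPU %d: dot = %.1f\n", rank, rank % ndev, result);

    cudaFree(dx);
    cudaFree(dy);
    MPI_Finalize();
    return 0;
}
```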
Code execution
All Slurm partitions available for the GPU A100 system are listed on the page Slurm partitions GPU A100.
```
#!/bin/bash
#SBATCH --partition=gpu-a100:shared
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

./cuda.bin
./cuda_cublas.bin
```
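The job script is submitted from a login node with sbatch; the file name used here (cuda_job.slurm) is arbitrary and only chosen for illustration:

```
bgnlogin1 $ sbatch cuda_job.slurm
```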
```
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

module load nvhpc-hpcx/23.1
mpirun --np 8 --map-by ppr:2:socket:pe=1 ./cuda_cublas_mpi.bin
```
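The mapping option --map-by ppr:2:socket:pe=1 places 2 ranks per CPU socket; with the two sockets of an A100 node and the 2 requested nodes this matches the 8 ranks started by --np 8, i.e. one rank per GPU (--gres=gpu:4).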