CUDA (Compute Unified Device Architecture) is an interface for programming Nvidia GPUs. It supports languages such as C, C++, and Fortran.
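As a quick illustration of the C/C++ interface, below is a minimal CUDA vector-addition sketch. The file name, kernel name, and compile command are illustrative assumptions, not part of the cluster setup.

```cuda
// vec_add.cu -- minimal, illustrative CUDA C++ example (not cluster-specific).
// A possible compile command on an A100 node (compute capability 8.0),
// assuming nvcc is available after loading a suitable CUDA/NVHPC module:
//   nvcc -arch=sm_80 -o vec_add vec_add.cu
#include <cstdio>

__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch one thread per element, 256 threads per block.
    vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```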
To build and execute code on the GPU A100 cluster, please log in to
...
```bash
#!/bin/bash
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:4
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=72

module load nvhpc-hpcx/23.1

mpirun --np 8 --map-by ppr:2:socket:pe=1 ./mpi_cuda_cublas.bin
```
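This script requests 2 nodes with 4 A100 GPUs each (8 GPUs in total), matching the 8 MPI ranks started by mpirun, i.e. one rank per GPU. Submit it with `sbatch` (the file name is just an example): `sbatch job_gpu_a100.sh`.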
GPU-aware MPI
For efficient use of MPI-distributed GPU codes, a GPU/CUDA-aware installation of Open MPI is available in the openmpi/gcc.11/4.1.4 environment module. Open MPI respects the resource requests made to Slurm, so no special arguments are required for mpiexec/mpirun. Nevertheless, please check that your application is bound correctly to CPU cores and GPUs; use the --report-bindings option of mpiexec/mpirun to verify the binding.
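To illustrate what a GPU/CUDA-aware MPI allows, the following sketch passes a device pointer directly to MPI calls. It assumes a CUDA-aware Open MPI build such as the module above; file name, build command, and include/library paths are assumptions and may need adjusting to the local toolchain.

```cuda
// cuda_aware_ping.cpp -- illustrative sketch of CUDA-aware MPI.
// Hypothetical build/run (CUDA include/library paths may need to be added):
//   mpicxx cuda_aware_ping.cpp -o cuda_aware_ping -lcudart
//   mpirun -np 2 --report-bindings ./cuda_aware_ping
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    // In a real application, each rank would typically select its own GPU,
    // e.g. with cudaSetDevice() based on the node-local rank.

    const int n = 1 << 20;
    float *d_buf;
    cudaMalloc(&d_buf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(float));
        // With a CUDA-aware MPI, device pointers can be passed directly
        // to MPI calls; no staging through host memory is needed.
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", n);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```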