Code Compilation

For code compilation you can choose one of two compilers: Intel or GNU. Both compilers can be used together with the Intel MPI library.

Intel compiler

MPI, intel
module load intel/2024.2
module load impi/2021.13
export I_MPI_CC=icx
mpiicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
MPI, OpenMP, intel
module load intel/2024.2
module load impi/2021.13
export I_MPI_CC=icx
mpiicc -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c

GNU compiler

MPI, gnu
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
MPI, OpenMP, gnu
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
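
The compile examples above assume a source file hello.c. A minimal sketch of such a file is shown below; the program body is illustrative and only prints the MPI rank of each process and, when built with -qopenmp or -fopenmp, the OpenMP thread ID.

hello.c (example)
#include <stdio.h>
#include <mpi.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(int argc, char **argv)
{
    /* illustrative hello world for the compile commands above */
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);

#ifdef _OPENMP
    /* with OpenMP enabled, every thread of every process prints one line */
    #pragma omp parallel
    printf("Hello from rank %d of %d on %s, thread %d of %d\n",
           rank, size, name, omp_get_thread_num(), omp_get_num_threads());
#else
    printf("Hello from rank %d of %d on %s\n", rank, size, name);
#endif

    MPI_Finalize();
    return 0;
}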

Code execution

To execute your code you need to

  1. have a binary (executable, model code), which is the result of code compilation,
  2. create a Slurm job script, and
  3. submit the Slurm job script.
Job submission
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out

Slurm scripts

A Slurm script is submitted to the job scheduler Slurm. It contains

  • directives that control the requested compute nodes and
  • the commands that start your binary.

Using mpirun

When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm's CPU binding by adding export SLURM_CPU_BIND=none.

MPI only

MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
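# 192 MPI processes in total, 96 per node (full node)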
mpirun -np 192 --map-by ppr:96:node ./hello.bin
MPI, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
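# 96 MPI processes in total, 48 per node (half of the 96 cores per node)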
mpirun -np 96 --map-by ppr:48:node ./hello.bin

MPI, OpenMP

You can run a code compiled with both MPI and OpenMP. The example covers the following setup:

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
MPI, OpenMP, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
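# 8 MPI processes in total, 4 per node, each process gets 24 cores for its OpenMP threads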
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin

Using srun

MPI only

MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin

MPI, OpenMP

MPI, OpenMP, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin