Code Compilation
For code compilation you can choose one of two compilers: Intel oneAPI or GNU. Both compilers can be combined with the Intel MPI library.
Intel oneAPI compiler
plain MPI, icx:
module load intel
module load impi
mpiicx -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifx -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpiicpx -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
hybrid MPI/OpenMP, icx:
module load intel
module load impi
mpiicx -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifx -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpiicpx -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
GNU compiler
plain MPI, gcc:
module load gcc
module load impi
mpigcc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpigxx -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
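If you are unsure which underlying compiler and options a wrapper invokes, the Intel MPI compiler wrappers accept -show, which prints the full compile command without executing it. A sketch (module names as used above are site-specific):

```shell
module load gcc
module load impi
# Print the underlying compile command line instead of compiling
mpigcc -show hello.c
```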
hybrid MPI/OpenMP, gcc:
module load gcc
module load impi
mpigcc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpigxx -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
...
Slurm job script
To start the MPI-parallelized code on the system, you can choose between two approaches, namely
...
Using mpirun
When using mpirun, pinning is controlled by the MPI library. Pinning by Slurm must be switched off by adding export SLURM_CPU_BIND=none.
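To verify where Intel MPI actually places the processes, you can raise the library's debug level: with I_MPI_DEBUG set to 4 or higher, Intel MPI prints its pinning map at startup. A sketch for a quick check (the -ppn value is illustrative):

```shell
export SLURM_CPU_BIND=none   # disable Slurm pinning, as described above
export I_MPI_DEBUG=4         # Intel MPI reports its process pinning at startup
mpirun -ppn 4 ./hello.bin
```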
MPI only
MPI, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 96 ./hello.bin
MPI scattered, half node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export I_MPI_PIN_DOMAIN=core
export I_MPI_PIN_ORDER=scatter
mpirun -ppn 48 ./hello.bin
MPI, hyperthreading:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 192 ./hello.bin
MPI, OpenMP
You can run a code compiled with both MPI and OpenMP. The example covers the setup
- 2 nodes,
- 4 processes per node, 24 threads per process.
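For full-node hybrid runs, the product of processes per node and threads per process should match the 96 physical cores of a standard96 node. The relation can be checked with plain shell arithmetic (the numbers are the ones from this example):

```shell
CORES_PER_NODE=96    # physical cores of a standard96 node
THREADS_PER_PROC=24  # value of OMP_NUM_THREADS
PPN=$((CORES_PER_NODE / THREADS_PER_PROC))
echo "processes per node: $PPN"   # -> processes per node: 4
```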
MPI, OpenMP compact, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -ppn 4 ./hello.bin
The example covers the setup
- 2 nodes,
- 4 processes per node, 12 threads per process.
MPI, OpenMP scattered, half node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=12
mpirun -ppn 4 ./hello.bin
The example covers the setup
- 2 nodes,
- 4 processes per node using hyperthreading,
- 48 threads per process.
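With hyperthreading, each of the 96 cores provides 2 hardware threads, so a node offers 192 logical CPUs; 4 processes with 48 threads each fill them exactly:

```shell
CORES_PER_NODE=96
LOGICAL_CPUS=$((CORES_PER_NODE * 2))       # 2 hyperthreads per core -> 192
PPN=4                                      # processes per node
THREADS_PER_PROC=$((LOGICAL_CPUS / PPN))
echo "$THREADS_PER_PROC"                   # -> 48
```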
MPI, OpenMP hyperthreading:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
mpirun -ppn 4 ./hello.bin
Using srun
MPI only
MPI, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
srun --ntasks-per-node=96 ./hello.bin
MPI, half node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
srun --ntasks-per-node=48 ./hello.bin
MPI, OpenMP
You can run a code compiled with both MPI and OpenMP. The example covers the setup
- 2 nodes,
- 4 processes per node, 24 threads per process.
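In the srun examples the node's 192 hyperthreads are visible to Slurm, so --cpus-per-task is twice the number of OpenMP threads whenever each thread should own a full physical core (this doubling is an assumption about how Slurm counts logical CPUs on these nodes):

```shell
OMP_NUM_THREADS=24
HT_PER_CORE=2   # hyperthreads per physical core
CPUS_PER_TASK=$((OMP_NUM_THREADS * HT_PER_CORE))
echo "$CPUS_PER_TASK"   # value for srun --cpus-per-task, here 48
```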
MPI, OpenMP, full node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
The example covers the setup
- 2 nodes,
- 4 processes per node, 12 threads per process.
MPI, OpenMP, half node:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=12
srun --ntasks-per-node=4 --cpus-per-task=24 ./hello.bin
The example covers the setup
- 2 nodes,
- 4 processes per node using hyperthreading,
- 48 threads per process.
MPI, OpenMP, hyperthreading:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
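A quick way to check that the chosen srun options distribute tasks as intended is to run hostname in place of the application and count the tasks per node (standard Slurm usage, shown here as a sketch):

```shell
# Inside the job script, before the real run:
srun --ntasks-per-node=4 --cpus-per-task=48 hostname | sort | uniq -c
# Expect a count of 4 next to each of the 2 node names
```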