...
Code compilation
Intel compiler
Serial code (icc):
module load intel
icc -o hello.bin hello.c
ifort -o hello.bin hello.f90
icpc -o hello.bin hello.cpp
OpenMP parallel code (icc):
module load intel
icc -qopenmp -o hello.bin hello.c
ifort -qopenmp -o hello.bin hello.f90
icpc -qopenmp -o hello.bin hello.cpp
GNU compiler
Serial code (gcc):
module load gcc
gcc -o hello.bin hello.c
gfortran -o hello.bin hello.f90
g++ -o hello.bin hello.cpp
OpenMP parallel code (gcc):
module load gcc
gcc -fopenmp -o hello.bin hello.c
gfortran -fopenmp -o hello.bin hello.f90
g++ -fopenmp -o hello.bin hello.cpp
Code execution
You can run a single OpenMP code. The examples cover the setup
- binary compiled with the Intel compiler, see also Compilation,
- 1 node,
- 1 OpenMP code running.
#!/bin/bash
#SBATCH --nodes=1
./hello.bin
OpenMP, full node:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=96
./hello.bin
OpenMP, half node:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
./hello.bin
OpenMP, hyperthreading:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=192
./hello.bin
You can run different OpenMP codes at the same time. The examples cover the setup
- 2 nodes,
- 4 OpenMP codes running simultaneously.
The codes are not MPI parallel; mpirun is used only to launch them.
OpenMP parallel, simultaneously:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
mpirun -ppn 2 \
 -np 1 ./code1.bin : -np 1 ./code2.bin : -np 1 ./code3.bin : -np 1 ./code4.bin
OpenMP simultaneously, hyperthreading:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=96
mpirun -ppn 2 \
 -np 1 ./code1.bin : -np 1 ./code2.bin : -np 1 ./code3.bin : -np 1 ./code4.bin
...
Intel compiler flags
To make full use of the vectorizing capabilities of the CPUs, AVX-512 instructions and the 512-bit ZMM registers can be used with the following compile flags with the Intel compilers:
...
With GNU compilers (GCC 7.x and later), architecture-specific optimization for Skylake and Cascade Lake CPUs is enabled with
-march=skylake-avx512
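Combined with the OpenMP flag from above, a full GNU compile line could look like the following sketch (hello.c stands in for your source file; the -O3 optimization level is a typical choice, not prescribed by this guide):

```shell
module load gcc
# Generate AVX-512 code for Skylake / Cascade Lake CPUs
gcc -fopenmp -O3 -march=skylake-avx512 -o hello.bin hello.c
```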
Using the Intel MKL
The Intel® Math Kernel Library (Intel® MKL) is designed to run on multiple processors and operating systems. It is also compatible with several compilers and third-party libraries, and provides different interfaces to the functionality. To support these different environments, tools, and interfaces, Intel MKL provides multiple libraries from which to choose.
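As an illustration only (the exact link line depends on the MKL version and threading model; Intel's MKL Link Line Advisor gives the authoritative combination), the classic Intel compilers offer a convenience flag for linking MKL. Here prog.c is a hypothetical source file using MKL routines:

```shell
module load intel
# -mkl=parallel links the threaded MKL layer; -mkl=sequential the single-threaded one
icc -qopenmp -mkl=parallel -o prog.bin prog.c
```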
...