
Code compilation

Intel compiler

Serial icc
module load intel
icc -o hello.bin hello.c
ifort -o hello.bin hello.f90
icpc -o hello.bin hello.cpp
OpenMP icc
module load intel
icc -qopenmp -o hello.bin hello.c
ifort -qopenmp -o hello.bin hello.f90 
icpc -qopenmp -o hello.bin hello.cpp

GNU compiler

Serial gcc
module load gcc
gcc -o hello.bin hello.c
gfortran -o hello.bin hello.f90
g++ -o hello.bin hello.cpp
OpenMP gcc
module load gcc
gcc -fopenmp -o hello.bin hello.c
gfortran -fopenmp -o hello.bin hello.f90
g++ -fopenmp -o hello.bin hello.cpp

Code execution

To execute your code you need to

  1. have a binary, which is the result of code compilation,
  2. create a Slurm job script (a minimal sketch follows this list),
  3. submit the Slurm job script.
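A minimal job script might look like the following sketch. The partition name standard96:test is taken from the examples further below; the time limit is an assumed placeholder.

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=standard96:test
#SBATCH --time=00:10:00   # assumed walltime; adjust to your needs

./hello.bin
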
Submission of a job
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
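
While the job is waiting or running, you can monitor it in the queue, for example:

blogin> squeue -u $USER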

The following examples of Slurm job scripts cover the setup

  • 1 node,
  • 1 OpenMP code running.
Serial
#!/bin/bash
#SBATCH --nodes=1
./hello.bin
OpenMP, full node
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread     # spread threads evenly across the cores
export OMP_NUM_THREADS=96       # one thread per physical core
./hello.bin
OpenMP, half node
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48       # half of the 96 physical cores
./hello.bin
OpenMP, hyperthreading
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=192      # two hyperthreads per physical core
./hello.bin

You can run different OpenMP codes at the same time within one job. The examples cover the setup

  • 2 nodes,
  • 4 OpenMP codes running simultaneously,
  • codes that are not MPI-parallel; mpirun is used only to start them.
OpenMP simultaneously
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none      # let OpenMP handle the pinning, not Slurm
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=48
# 4 codes on 2 nodes: 2 codes per node, each with 48 threads
mpirun -ppn 2 \
       -np 1 ./code1.bin : -np 1 ./code2.bin : -np 1 ./code3.bin : -np 1 ./code4.bin
OpenMP simultaneously hyperthreading
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none      # let OpenMP handle the pinning, not Slurm
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=96       # 96 threads per code: 2 hyperthreads per core
# 4 codes on 2 nodes: 2 codes per node, each with 96 threads
mpirun -ppn 2 \
       -np 1 ./code1.bin : -np 1 ./code2.bin : -np 1 ./code3.bin : -np 1 ./code4.bin

Intel compiler flags

To make full use of the vectorizing capabilities of the CPUs, AVX512 instructions and the 512-bit ZMM registers can be enabled with the following flags of the Intel compilers:

-xCORE-AVX512 -qopt-zmm-usage=high
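
For example, the OpenMP compile line from above, extended with these flags:

module load intel
icc -qopenmp -xCORE-AVX512 -qopt-zmm-usage=high -o hello.bin hello.c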

However, high ZMM usage is not recommended in all cases.

With GNU compilers (GCC 7.x and later), architecture-specific optimization for Skylake and Cascade Lake CPUs is enabled with

-march=skylake-avx512
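
For example, the OpenMP gcc compile line from above, extended with this flag:

module load gcc
gcc -fopenmp -march=skylake-avx512 -o hello.bin hello.c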

Using the Intel MKL

The Intel® Math Kernel Library (Intel® MKL) is designed to run on multiple processors and operating systems. It is also compatible with several compilers and third-party libraries and provides different interfaces to its functionality. To support these different environments, tools, and interfaces, Intel MKL provides multiple libraries from which to choose.

Check the Intel MKL Link Line Advisor to see which libraries are recommended for a particular use case: https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/
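
As a sketch, a typical advisor result for the Intel compiler (dynamic linking, LP64 interface, OpenMP threading) looks as follows; it assumes that hello.c calls MKL routines, and you should verify the details for your own case with the advisor:

module load intel
icc -I${MKLROOT}/include -o hello.bin hello.c \
    -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core \
    -liomp5 -lpthread -lm -ldl

For this common case, the Intel compilers also accept the shorthand flag -mkl=parallel.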
