
Code Compilation

Intel compiler

MPI, icc
module load intel/19.0.5
module load impi/2019.5
mpiicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpiicpc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
MPI, OpenMP, icc
module load intel/19.0.5
module load impi/2019.5
mpiicc -qopenmp  -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90 
mpiicpc -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp

Gnu compiler

MPI, gcc
module load gcc/9.3.0
module load impi/2019.5
mpigcc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpigxx -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
MPI, OpenMP, gcc
module load gcc/9.3.0
module load impi/2019.5
mpigcc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpif90 -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90 
mpigxx -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
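
In all examples above, the option -Wl,-rpath,$LD_RUN_PATH embeds the library search paths collected in LD_RUN_PATH (typically set by the loaded compiler and MPI modules) into the binary, so the matching runtime libraries are found at execution time without reloading the modules. As an optional check you can inspect which shared libraries the binary resolves; this is only a sketch, the exact library names and paths depend on the loaded modules:

Check linking
ldd hello.bin | grep -i -e mpi -e intel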

Code Execution

The MPI-parallelized code has to be started on the compute nodes. You can choose between two approaches, namely mpirun or srun.

Using mpirun

When using mpirun, the pinning is controlled by the MPI library. Pinning by Slurm has to be switched off by adding export SLURM_CPU_BIND=none.
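
The snippets below are fragments of Slurm batch scripts. As an illustration only, a complete submittable script built from the first example could look like the following sketch (the shebang line and the time limit are placeholders, not taken from the examples below):

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
# placeholder walltime, adjust to your needs
#SBATCH --time=00:10:00

module load impi/2019.5

# let the MPI library do the pinning instead of Slurm
export SLURM_CPU_BIND=none

# 96 MPI processes per node
mpirun -ppn 96 ./hello.bin

Submit the script with sbatch, e.g. sbatch myjob.sh.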

MPI only

MPI
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 96 ./hello.bin
MPI scattered
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
# pin each MPI process to one physical core
export I_MPI_PIN_DOMAIN=core
# place consecutive ranks far apart, i.e. distributed over the sockets
export I_MPI_PIN_ORDER=scatter
mpirun -ppn 12 ./hello.bin
MPI hyperthreading
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 192 ./hello.bin

MPI, OpenMP

You can run a code compiled with both MPI and OpenMP. The following examples cover the setup:

  • 2 nodes,
  • 12 processes per node, 2 threads per process.
MPI, OpenMP compact
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=2
mpirun -ppn 12 ./hello.bin
MPI, OpenMP scattered
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
# spread the OpenMP threads of each process over the available cores
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=2
mpirun -ppn 12 ./hello.bin

The next example covers the setup:

  • 2 nodes,
  • 96 processes per node using hyperthreading,
  • 2 threads per process (192 logical CPUs per node in total).
MPI, OpenMP hyperthreading
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=2
mpirun -ppn 96 ./hello.bin

Using srun

MPI only

MPI
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=standard96:test
srun ./hello.bin
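
When using srun, the pinning is done by Slurm, so SLURM_CPU_BIND=none is not needed here. If you want to verify which CPUs each task gets bound to, srun's standard --cpu-bind option can report the binding; this is an optional variant of the example above, not a required setting:

MPI, report binding
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=standard96:test
# print the CPU binding of every task before it starts
srun --cpu-bind=verbose ./hello.bin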

MPI, OpenMP

You can run a code compiled with both MPI and OpenMP. The following example covers the setup:

  • 2 nodes,
  • 12 processes per node, 2 threads per process.
MPI, OpenMP
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12
#SBATCH --cpus-per-task=4
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=2
srun ./hello.bin

The next example covers the setup:

  • 2 nodes,
  • 96 processes per node using hyperthreading,
  • 2 threads per process.
MPI, OpenMP hyperthreading
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --cpus-per-task=2
#SBATCH --partition=standard96:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=2
srun ./hello.bin


