
Code Compilation

MPI
module load intel
module load impi
mpiicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90
mpiicpc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
MPI, OpenMP
module load intel
module load impi
mpiicc -qopenmp  -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
mpiifort -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.f90 
mpiicpc -qopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.cpp
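
The -Wl,-rpath,$LD_RUN_PATH option embeds the library search path of the loaded compiler and MPI modules (presumably exported via LD_RUN_PATH) into the binary, so the matching runtime libraries are found at execution time. As an optional check (a sketch, assuming the binary is named hello.bin as above), you can inspect the result:
ldd hello.bin | grep -i mpi
readelf -d hello.bin | grep -i -e rpath -e runpath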

Code Execution

To start the MPI-parallelized code on the compute nodes, you can choose between two approaches: mpirun or srun.

Using mpirun

With mpirun, process pinning is controlled by the MPI library. Pinning by Slurm must be switched off by adding export SLURM_CPU_BIND=none to the job script.

MPI
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
mpirun -ppn 96 ./hello.bin
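Here -ppn 96 starts 96 MPI processes per node, filling both requested standard96 nodes completely.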
MPI scattered
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=standard96:test
module load impi/2019.5
export SLURM_CPU_BIND=none
export I_MPI_PIN_DOMAIN=core
export I_MPI_PIN_ORDER=scatter
mpirun -ppn 12 ./hello.bin
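In the scattered example, I_MPI_PIN_DOMAIN=core pins each MPI rank to a single core and I_MPI_PIN_ORDER=scatter spreads the 12 ranks per node across the node's sockets instead of packing them next to each other.

Using srun

When launching with srun instead of mpirun, pinning is handled by Slurm, so SLURM_CPU_BIND is left at its default. The following is only a minimal sketch, assuming the Slurm integration of the impi/2019.5 module works out of the box; required PMI settings may differ on your system.

MPI
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=standard96:test
module load impi/2019.5
srun ./hello.bin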



