
Code execution

When using OpenMPI, binding is controlled via the --bind-to parameter. To bind processes to cores, use --bind-to core. Other possible values can be found in the man page.

Codeblock
mpirun --bind-to core ./yourprogram

Our hardware supports hyperthreading, allowing you to start 192 processes on Cascade Lake machines (*96 partitions) and 80 on Skylake machines.
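
If you want to use the hyperthreads with mpirun, hardware threads have to be treated as slots. A minimal sketch (the flag combination is an assumption; ./yourprogram is a placeholder):

Codeblock
# Assumption: use hardware threads as slots on a 96-core / 192-hyperthread
# Cascade Lake node and bind each process to one hardware thread
mpirun --use-hwthread-cpus --bind-to hwthread -np 192 ./yourprogram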

If no specific request regarding the number of tasks has been made, mpirun defaults to hyperthreading and starts cores*2 processes. If a number of tasks has been specified (with -N and/or --tasks-per-node), mpirun honors this via the --map-by flag. For example:

...
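
As a minimal sketch (assuming 2 nodes with --tasks-per-node=24 were requested; ./yourprogram is a placeholder):

Codeblock
# Assumption: 2 nodes with --tasks-per-node=24 requested; mpirun then
# places 24 processes on each node instead of the cores*2 default
mpirun -np 48 --map-by ppr:24:node ./yourprogram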

For examples of code execution, please visit Slurm partition CPU CLX.


Code compilation

For code compilation, please use the GNU compiler.

Codeblock: MPI, gnu
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c


Codeblock: MPI, OpenMP, gnu
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
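
The hello.c source itself is not shown on this page. A minimal sketch of what such a source could look like (an assumption, not the original file); it works with both compile lines above, the OpenMP part only being active when -fopenmp is given:

Codeblock
/* hypothetical hello.c: minimal MPI "hello world"; reports OpenMP
   threads only when compiled with -fopenmp */
#include <stdio.h>
#include <mpi.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
#ifdef _OPENMP
#pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, size, omp_get_thread_num(), omp_get_num_threads());
#else
    printf("rank %d of %d\n", rank, size);
#endif
    MPI_Finalize();
    return 0;
}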

Slurm job script

A Slurm job script is submitted to the job scheduler Slurm (see the submission sketch after the list below). It contains

  • the request for compute nodes of a Slurm partition CPU CLX and
  • commands to start your binary. You have two options to start an MPI binary:
    • using mpirun
    • using srun
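
Once written, the job script is handed to Slurm with sbatch, a sketch (the file name myjob.slurm is a placeholder):

Codeblock
sbatch myjob.slurm      # submit the job script to Slurm
squeue -u $USER         # check the state of your jobs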

Using mpirun

When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none.

Codeblock: MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin
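
In this example, -np 192 corresponds to 2 nodes with 96 processes each; --map-by ppr:96:node (processes per resource) places 96 processes on every node, i.e. one per physical core.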


Codeblock: MPI, half node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin

You can run a code compiled with both MPI and OpenMP. The example covers the following setup:

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock: MPI, OpenMP, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
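
Here --map-by ppr:4:node:pe=24 places 4 processes on each node and assigns 24 processing elements (cores) to each process, matching OMP_NUM_THREADS=24; -np 8 is the total process count over the 2 nodes.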

Using srun

Codeblock: MPI, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin
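
Note that, in contrast to the mpirun examples above, SLURM_CPU_BIND is not touched here: srun distributes and binds the tasks itself.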

You can run a code compiled with both MPI and OpenMP. The example covers the following setup:

  • 2 nodes,
  • 4 processes per node, 24 threads per process.
Codeblock: MPI, OpenMP, full node
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
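
With --ntasks-per-node=4 and --cpus-per-task=48, each node provides 4 × 48 = 192 hardware threads, i.e. the full Cascade Lake node; OMP_NUM_THREADS=24 then runs one thread per physical core within each task's share.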