Code Compilation
For code compilation, please use the GNU compiler.
MPI only
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
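The program hello.c is not shown in these instructions; a minimal MPI test program that the command above would compile into hello.bin could look like this (a sketch for illustration only):
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    /* Initialize MPI and query this process's rank and the total number of processes. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from MPI rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}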
MPI, OpenMP
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
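The -fopenmp flag additionally enables OpenMP. A minimal hybrid MPI/OpenMP test program (again only a sketch, assuming a simple hello.c) that reports both the MPI rank and the OpenMP thread could look like this:
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Every OpenMP thread of every MPI process reports its IDs. */
    #pragma omp parallel
    {
        printf("Hello from MPI rank %d, OpenMP thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }
    MPI_Finalize();
    return 0;
}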
Code execution
To execute your code you need to
- have a binary (executable, model code), which is the result of code compilation,
- create a Slurm job script,
- submit the Slurm job script.
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
Slurm scripts
A Slurm script is submitted to the job scheduler Slurm. It contains
- directives specifying the requested compute nodes and
- the commands to start your binary.
Using mpirun
When using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none to your job script.
MPI only
Two example scripts follow: the first starts 96 MPI processes per node, the second 48 processes per node.
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 192 --map-by ppr:96:node ./hello.bin
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
mpirun -np 96 --map-by ppr:48:node ./hello.bin
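In both scripts, -np gives the total number of MPI processes (2 nodes × processes per node), and the Open MPI option --map-by ppr:<n>:node places <n> processes on each node.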
MPI, OpenMP
You can run a code compiled with both MPI and OpenMP. The example below covers the following setup:
- 2 nodes,
- 4 processes per node, 24 threads per process.
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none
export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
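In this script, -np 8 equals 2 nodes × 4 processes per node, and the Open MPI mapping option pe=24 binds 24 processing elements (cores) to each process, matching OMP_NUM_THREADS=24.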
Using srun
MPI only
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
srun --ntasks-per-node=96 ./hello.bin
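With srun, the processes are started and bound by Slurm itself, so SLURM_CPU_BIND=none is not needed here; --ntasks-per-node=96 launches 96 MPI processes on each of the 2 nodes (192 in total).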
MPI, OpenMP
You can run a code compiled with both MPI and OpenMP. The example below covers the following setup:
- 2 nodes,
- 4 processes per node, 24 threads per process.
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=48 ./hello.bin
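Note on the numbers: --cpus-per-task=48 reserves 48 logical CPUs for each of the 4 tasks per node, while OMP_NUM_THREADS=24 starts 24 OpenMP threads per process; OMP_PROC_BIND=spread then spreads these threads over the reserved CPUs. (The factor of 2 between the two values assumes hyperthreaded cpu-clx nodes, i.e. 2 hardware threads per core.)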