For examples of code execution, please visit Slurm partition CPU CLX.
Code compilation
For code compilation, please use the GNU compiler.
MPI only

```
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
```
MPI, OpenMP

```
module load gcc/13.3.0
module load openmpi/gcc/5.0.3
mpicc -fopenmp -Wl,-rpath,$LD_RUN_PATH -o hello.bin hello.c
```
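The compile commands embed the library search path into the binary via -Wl,-rpath,$LD_RUN_PATH. As a quick sanity check (a generic Linux step, not specific to this system), you can verify that the binary resolves its MPI library through this rpath:

```
# list the shared libraries hello.bin will load at runtime;
# libmpi should resolve to the Open MPI installation baked in via the rpath
ldd hello.bin | grep -i mpi
```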
Code execution
To execute your code you need to
- have a binary (executable, model code), which is the result of code compilation,
- create a Slurm job script (see Slurm scripts below),
- submit the Slurm job script.
```
blogin> sbatch myjobscript.slurm
Submitted batch job 8028673
blogin> ls slurm-8028673.out
slurm-8028673.out
```
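After submission, standard Slurm commands can be used to follow the job and inspect its output, for example:

```
# show your queued and running jobs (ST column: PD = pending, R = running)
squeue -u $USER

# once the job has finished, read the output file written by Slurm
cat slurm-8028673.out
```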
Slurm scripts
Slurm job script
A Slurm job script is submitted to the job scheduler Slurm. It contains
- the request for compute nodes of a Slurm partition (here CPU CLX) and
- commands to start your binary. You have two options to start an MPI binary:
- using mpirun,
- using srun.
Using mpirun
Using mpirun (from the MPI library) to start the binary, you need to switch off Slurm binding by adding export SLURM_CPU_BIND=none to the job script.
MPI only
Example with 96 tasks per node (192 in total):

```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none

mpirun -np 192 --map-by ppr:96:node ./hello.bin
```
Example with 48 tasks per node (96 in total):

```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none

mpirun -np 96 --map-by ppr:48:node ./hello.bin
```
MPI, OpenMP
You can run code compiled with both MPI and OpenMP. The example covers the following setup:
- 2 nodes,
- 4 processes per node,
- 24 threads per process.
```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

module load openmpi/gcc/5.0.3
export SLURM_CPU_BIND=none

export OMP_NUM_THREADS=24
mpirun -np 8 --map-by ppr:4:node:pe=24 ./hello.bin
```
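In the --map-by option, ppr:4:node places 4 processes on each node and pe=24 binds 24 processing elements (cores) to each process, matching OMP_NUM_THREADS=24 (4 x 24 = 96 cores per node). Other splits of the 96 cores follow the same pattern; for example (a sketch, not an example from the original page):

```
# 8 processes per node, 12 OpenMP threads each (8 x 12 = 96 cores per node)
export OMP_NUM_THREADS=12
mpirun -np 16 --map-by ppr:8:node:pe=12 ./hello.bin
```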
Using srun
MPI only
```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

srun --ntasks-per-node=96 ./hello.bin
```
MPI, OpenMP

You can run code compiled with both MPI and OpenMP. The example covers the setup
...
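A minimal sketch of such a job script, assuming the same setup as in the mpirun example (2 nodes, 4 processes per node, 24 threads per process); the exact options of the original example may differ:

```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=cpu-clx:test

# 4 MPI tasks per node, 24 cores per task (4 x 24 = 96 cores per node)
export OMP_NUM_THREADS=24
srun --ntasks-per-node=4 --cpus-per-task=$OMP_NUM_THREADS ./hello.bin
```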