...
SLURM partitions
| Partition | CentOS 7 | Rocky Linux 9 |
|---|---|---|
| … | ● | ● |
| … | ● | ● |
| … | ● | ● |
| … | ● | ● |
| … | ● | |
| … | ● | |
| … | ● | ● |

● available ● closed/not available
What remains unchanged
...
**For users of SLURM’s `srun` job launcher:** Open MPI 5.x has dropped support for the PMI-2 API and relies solely on PMIx to bootstrap MPI processes. For this reason, the environment setting was changed from `SLURM_MPI_TYPE=pmi2` to `SLURM_MPI_TYPE=pmix`, so binaries linked against Open MPI can be started as usual “out of the box” using `srun mybinary`. This also works for binaries linked against Intel-MPI, provided a recent version (≥ 2021.11) of Intel-MPI has been used. If an older version of Intel-MPI has been used and relinking/recompiling is not possible, one can follow the workaround for PMI-2 with `srun` as described in the Q&A section below. Switching from `srun` to `mpirun` instead should also be considered.
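As a minimal sketch, a batch script under the new PMIx default might look like the following; the binary name, node counts, and partition name are placeholders, not taken from the original text:

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --partition=cpu-clx     # hypothetical partition name

# PMIx is the new default bootstrap mechanism; this setting is already
# applied system-wide and is shown here only for illustration
export SLURM_MPI_TYPE=pmix

# works "out of the box" for Open MPI 5.x and Intel-MPI >= 2021.11
srun ./mybinary

# workaround for a binary linked against an older Intel-MPI that still
# needs the PMI-2 interface (see the Q&A section below):
#   srun --mpi=pmi2 ./mybinary
```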
**Using more processes per node than available physical cores (PPN > 96; hyperthreads) with the OPX provider:** The OPX provider currently does not support hyperthreads/PPN > 96 on the clx partitions; doing so may result in segmentation faults in libfabric during process startup. If a high number of PPN is really required, the default libfabric provider has to be changed to PSM2 by setting `FI_PROVIDER=psm2`. Note that the use of hyperthreads may not be advisable; we encourage users to test performance before using more threads than there are physical cores.
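A hedged job-script sketch for this fallback, again with placeholder binary and task counts, could look like this:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=192   # PPN > 96, i.e. hyperthreads in use

# OPX (the default libfabric provider) can segfault during startup
# at PPN > 96; switch to the PSM2 provider instead
export FI_PROVIDER=psm2

srun ./mybinary
```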
...