You can use srun to start multiple job steps concurrently on a single node, e.g. if your job is not big enough to fill a whole node. There are a few details to follow:
- By default, the `srun` command gets exclusive access to all resources of the job allocation and uses all tasks.
  - You therefore need to limit `srun` to only use part of the allocation.
  - This includes implicitly granted resources, i.e. memory and GPUs.
  - The `--exact` flag is needed.
- If running non-MPI programs, use the `-c` option to denote the number of cores each process should have access to.
- `srun` waits for the program to finish, so you need to start concurrent processes in the background, as shown in the sketch below.
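As a minimal sketch, a batch script along the following lines could run two programs side by side on one node. The partition, time limit, core split, and program names (`./program_a`, `./program_b`) are placeholders to adapt to your job:

```bash
#!/bin/bash
#SBATCH --partition=standard96   # example partition
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Limit each job step to part of the allocation (--exact, -n, -c,
# --mem-per-cpu) and start it in the background; "wait" keeps the
# batch job alive until both steps have finished.
srun --exact -n 1 -c 32 --mem-per-cpu=3770M ./program_a &
srun --exact -n 1 -c 64 --mem-per-cpu=3770M ./program_b &
wait
```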
Good default memory per CPU values (without hyperthreading) are usually:

| | standard96 | large96 | huge96 | medium40 | large40/gpu |
| --- | --- | --- | --- | --- | --- |
| `--mem-per-cpu` | 3770M | 7781M | 15854M | 4525M | 19075M |
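The same pattern also works for packing many copies of a serial program into one job; this sketch takes the standard96 `--mem-per-cpu` value from the table, while `./serial_task` and the chosen core counts are again placeholders:

```bash
#!/bin/bash
#SBATCH --partition=standard96   # example partition from the table
#SBATCH --nodes=1

# Start 8 independent serial tasks, 12 cores each (8 x 12 = 96 cores),
# each job step restricted to its own share of cores and memory.
for i in $(seq 1 8); do
    srun --exact -n 1 -c 12 --mem-per-cpu=3770M ./serial_task "$i" &
done
# Wait for all background job steps before the batch job ends.
wait
```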