...
The compute nodes of the CPU cluster of system Lise are organised into the following Slurm partitions:
| Partition name | Node count | CPU | Main memory (GB) | Max. nodes per job | Max. jobs per user (running / queued) | Wall time limit (hh:mm:ss) | Remark |
|---|---|---|---|---|---|---|---|
| cpu-clx | 688 | Cascade 9242 | 362 | 512 | 128 / 500 | 12:00:00 | use blogin3-8.nhr.zib.de |
| cpu-clx:test | 32 dedicated + 128 on demand | Cascade 9242 | 362 | 16 | 1 / 500 | 01:00:00 | test nodes with higher priority but less wall time |
| large96 | 28 | Cascade 9242 | 747 | 8 | 128 / 500 | 12:00:00 | fat memory nodes, use blogin1-2.nhr.zib.de |
| large96:test | 2 dedicated + 2 on demand | Cascade 9242 | 747 | 2 | 1 / 500 | 01:00:00 | fat memory test nodes with higher priority but less wall time |
| large96:shared | 2 dedicated | Cascade 9242 | 747 | 1 | 128 / 500 | 48:00:00 | fat memory nodes for data pre- and post-processing |
| huge96 | 2 | Cascade 9242 | 1522 | 1 | 128 / 500 | 24:00:00 | very fat memory nodes for data pre- and post-processing |
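For example, a batch job targeting the default cpu-clx partition within the limits above could start with a header like the following sketch; the job name, account, and executable are placeholders, not prescribed values.

```bash
#!/bin/bash
#SBATCH --partition=cpu-clx          # partition from the table above
#SBATCH --nodes=4                    # must stay within the 512-nodes-per-job limit
#SBATCH --ntasks-per-node=96         # Cascade 9242 nodes have 96 physical cores
#SBATCH --time=12:00:00              # must not exceed the 12:00:00 wall time limit
#SBATCH --job-name=my_simulation     # placeholder job name
#SBATCH --account=myproject          # placeholder project account

srun ./my_program                    # placeholder executable
```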
See Slurm usage for how to exceed the 12:00:00 wall time limit using job dependencies.
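A minimal sketch of that approach, assuming a restartable batch script named job_step.slurm (a placeholder), is to submit a chain in which each job starts only after the previous one has completed successfully:

```bash
#!/bin/bash
# Submit a chain of four jobs, each staying within the 12:00:00 wall time limit.
# job_step.slurm is a placeholder script that writes a restart file and
# resumes from it on the next run.

previous_id=$(sbatch --parsable job_step.slurm)
for step in 2 3 4; do
    # afterok: the next job starts only if the previous one exits with code 0
    previous_id=$(sbatch --parsable --dependency=afterok:${previous_id} job_step.slurm)
done
echo "Submitted job chain, last job id: ${previous_id}"
```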
Which partition to choose?
If you do not request a partition, your job will be placed in the default partition, which is cpu-clx.
The default partition cpu-clx is suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. They have a higher priority and a few dedicated nodes, but provide only limited resources. The shared nodes are suitable for data pre- and post-processing. A job running on a shared node is accounted only for its core fraction (cores used by the job / total cores per node). All non-shared nodes are exclusive to a single job at a time.
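As an illustration of the core-fraction accounting, a post-processing job on large96:shared that uses 24 of the 96 cores of a node would be accounted for 24/96 = 25 % of that node. A corresponding job header could look like the following sketch; job name, account, and executable are placeholders.

```bash
#!/bin/bash
#SBATCH --partition=large96:shared   # shared fat memory nodes
#SBATCH --nodes=1
#SBATCH --ntasks=24                  # 24 of 96 cores -> accounted as 24/96 of the node
#SBATCH --time=48:00:00              # wall time limit of the shared partition
#SBATCH --job-name=postproc          # placeholder job name
#SBATCH --account=myproject          # placeholder project account

srun ./postprocess_data              # placeholder executable
```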
...