The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organized via the following SLURM partitions:
...
Partition (number holds cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user | Usable memory MB per node | CPU, GPU type | Shared | NPL per node hour | Remark
---|---|---|---|---|---|---|---|---|---|---
standard96 | gcn# | 12:00:00 | 996 | 256 | unlimited | 362 000 | Cascade 9242 | ✘ | 96 | default partition
standard96:test | gcn# | 1:00:00 | 8 dedicated + 128 on demand | 16 | unlimited | 362 000 | Cascade 9242 | ✘ | 96 | test nodes with higher priority but lower walltime
large96 | gfn# | 12:00:00 | 12 | 2 | unlimited | 747 000 | Cascade 9242 | ✘ | 144 | fat memory nodes
large96:test | gfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | unlimited | 747 000 | Cascade 9242 | ✘ | 144 | fat memory test nodes with higher priority but lower walltime
large96:shared | gfn# | 48:00:00 | 2 dedicated + 26 on demand | 1 | unlimited | 747 000 | Cascade 9242 | ✓ | 144 | fat memory nodes for data pre- and postprocessing
huge96 | gsn# | 24:00:00 | 2 | 1 | unlimited | 1 522 000 | Cascade 9242 | ✘ | 192 | very fat memory nodes for data pre- and postprocessing
medium40 | gcn# | 48:00:00 | 424 | 128 | unlimited | 181 000 | Skylake 6148 | ✘ | 40 | 
medium40:test | gcn# | 1:00:00 | 8 dedicated + 64 on demand | 8 | unlimited | 181 000 | Skylake 6148 | ✘ | 40 | test nodes with higher priority but lower walltime
large40 | gfn# | 48:00:00 | 12 | 4 | unlimited | 764 000 | Skylake 6148 | ✘ | 80 | fat memory nodes
large40:test | gfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | unlimited | 764 000 | Skylake 6148 | ✘ | 80 | fat memory test nodes with higher priority but lower walltime
large40:shared | gfn# | 48:00:00 | 2 dedicated + 6 on demand | 1 | unlimited | 764 000 | Skylake 6148 | ✓ | 80 | fat memory nodes for data pre- and postprocessing
gpu | ggpu# | 48:00:00 | 3 | 2 | unlimited | 764 000 MB per node (32GB HBM per GPU) | Skylake 6148 + 4 Nvidia V100 32GB | ✘ | 375 | see GPU Usage
grete | ggpu# | 48:00:00 | 33 | 8 | unlimited | 500 000 MB per node (40GB HBM per GPU) | Zen3 EPYC 7513 + 4 NVidia A100 40GB | ✘ | 600 | 
grete:shared | ggpu# | 48:00:00 | 35 | 1 | unlimited | 500 000 MB and 1 000 000 MB per node (40GB or 80GB HBM per GPU) | Zen3 EPYC 7513 + 4 NVidia A100 40GB and Zen2 EPYC 7662 + 8 NVidia A100 80GB | ✓ | 150 per GPU* | 
grete:interactive | ggpu# | 48:00:00 | 3 | 1 | unlimited | 500 000 MB (10GB or 20GB HBM per MiG slice) | Zen3 EPYC 7513 + 4 NVidia A100 40GB split into 2g.10gb and 3g.20gb slices | ✓ | 47 per MiG slice | GPUs are split into slices via MIG (3 slices per GPU), see GPU Usage
grete:preemptible | ggpu# | 48:00:00 | 3 | 1 | unlimited | 500 000 MB (10GB or 20GB HBM per MiG slice) | Zen3 EPYC 7513 + 4 NVidia A100 40GB split into 2g.10gb and 3g.20gb slices | ✓ | 47 per MiG slice | 
* 150 NPL per GPU corresponds to 600 per node hour for the nodes with 4 GPUs, and 1200 for the nodes with 8 GPUs
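
As a minimal sketch (not taken from this page), a batch script requesting full nodes from one of the partitions above could look like the following; the job name and program binary are placeholders:

```bash
#!/bin/bash
#SBATCH --partition=standard96      # one of the partitions listed above
#SBATCH --nodes=2                   # must not exceed "Max. nodes per job" (256 for standard96)
#SBATCH --ntasks-per-node=96        # standard96 nodes provide 96 cores each
#SBATCH --time=12:00:00             # must not exceed the partition's max. walltime
#SBATCH --job-name=example_job      # placeholder job name

# Launch an MPI program on all allocated cores (placeholder binary).
srun ./my_mpi_program
```

For the gpu and grete partitions, GPUs have to be requested in addition (e.g. via `--gpus-per-node`); see GPU Usage for details.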
...