The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organized into the following SLURM partitions:
Lise (Berlin)
Partition (the number in the name gives the cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user (running / queued) | Usable memory MB per node | CPU | Shared | Charged core-hours per node | Remark |
---|---|---|---|---|---|---|---|---|---|---|
standard96 | bcn# | 12:00:00 | 1204 | 512 | 16 / 500 | 362 000 | Cascade 9242 | ✘ | 96 | default partition |
standard96:test | bcn# | 1:00:00 | 32 dedicated + 128 on demand | 16 | 1 / 500 | 362 000 | Cascade 9242 | ✘ | 96 | test nodes with higher priority but lower walltime |
large96 | bfn# | 12:00:00 | 28 | 8 | 16 / 500 | 747 000 | Cascade 9242 | ✘ | 144 | fat memory nodes |
large96:test | bfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | 1 / 500 | 747 000 | Cascade 9242 | ✘ | 144 | fat memory test nodes with higher priority but lower walltime |
large96:shared | bfn# | 48:00:00 | 2 dedicated | 1 | 16 / 500 | 747 000 | Cascade 9242 | ✓ | 144 | fat memory nodes for data pre- and postprocessing |
huge96 | bsn# | 24:00:00 | 2 | 1 | 16 / 500 | 1522 000 | Cascade 9242 | ✓ | 192 | very fat memory nodes for data pre- and postprocessing |
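For illustration only, a minimal batch script for the default standard96 partition could look like the following sketch; the project account `myaccount` and the executable `./my_program` are placeholders, and the actual module and MPI setup depends on your application.

```bash
#!/bin/bash
#SBATCH --partition=standard96    # default partition on Lise, 96 cores per node
#SBATCH --nodes=2                 # must stay within the per-job node limit of the partition
#SBATCH --ntasks-per-node=96      # one MPI task per physical core
#SBATCH --time=12:00:00           # must not exceed the 12 h walltime limit
#SBATCH --account=myaccount       # placeholder: replace with your project account

srun ./my_program                 # placeholder executable
```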
Are 12 hours too short? See here how to exceed the 12-hour walltime limit with job dependencies.
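As a rough sketch of such a dependency chain (the job script name `chain_job.slurm` is a placeholder), each chunk is submitted so that it only starts after the previous one has finished successfully:

```bash
# Submit the first 12-hour chunk and capture its job ID
jobid=$(sbatch --parsable chain_job.slurm)

# Queue three more chunks; each starts only after the previous one completed successfully
for i in 2 3 4; do
    jobid=$(sbatch --parsable --dependency=afterok:${jobid} chain_job.slurm)
done
```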
List of CPUs and GPUs at HLRN
Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed (GHz) |
---|---|---|---|---|---|
Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | Lise and Emmy compute partitions | 2 | 48 | 2.3 |
Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-8], glogin[3-8] | 2 | 10 | 2.2 |
Skylake 6148 | Intel Skylake Gold 6148 | Emmy compute partitions | 2 | 20 | 2.4 |
Skylake 4110 | Intel Skylake Silver 4110 | glogin[1-2] | 2 | 8 | 2.1 |
Tesla V100 | NVIDIA Tesla V100 32GB | Emmy grete partitions | 4 | 640/5120* | |
Tesla A100 | NVIDIA Tesla A100 40GB and 80GB | Emmy grete partitions | 4 or 8 | 432/6912* | |
*Tensor Cores / CUDA Cores
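To check which hardware SLURM reports for a partition or a single node, generic SLURM queries like the following can be used; the node name `bcn1001` is only an example of the bcn# naming scheme.

```bash
# Node count, cores per node and memory per node of a partition
sinfo -p standard96 --format="%P %D %c %m"

# Full hardware details (sockets, cores per socket, memory, GRES) of one node
scontrol show node bcn1001
```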