The compute nodes of the CPU cluster of system Lise are organised via the following Slurm partitions:
| Partition name | Node count | CPU | Main memory (GB) | Max. nodes per job | Max. jobs per user (running / queued) | Wall time limit (hh:mm:ss) | Remark |
|---|---|---|---|---|---|---|---|
| cpu-clx | 688 | Cascade 9242 | 362 | 512 | 128 / 500 | 12:00:00 | use blogin3-8.nhr.zib.de |
| cpu-clx:test | 32 dedicated + 128 on demand | Cascade 9242 | 362 | 16 | 1 / 500 | 01:00:00 | test nodes with higher priority but less wall time |
| large96 | 28 | Cascade 9242 | 747 | 8 | 128 / 500 | 12:00:00 | fat memory nodes, use blogin1-2.nhr.zib.de |
| large96:test | 2 dedicated + 2 on demand | Cascade 9242 | 747 | 2 | 1 / 500 | 01:00:00 | fat memory test nodes with higher priority but less wall time |
| large96:shared | 2 dedicated | Cascade 9242 | 747 | 1 | 128 / 500 | 48:00:00 | fat memory nodes for data pre- and post-processing |
| huge96 | 2 | Cascade 9242 | 1522 | 1 | 128 / 500 | 24:00:00 | very fat memory nodes for data pre- and post-processing |
12 hours too short? See Slurm usage for how to pass the 12 h wall time limit with job dependencies.
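The usual pattern is a chain of batch jobs in which each job resumes from the checkpoint written by the previous one. The following is only a minimal sketch: the script name job_step.slurm and the number of chain links are placeholders, and your application must support checkpoint/restart.

```bash
#!/bin/bash
# Submit a chain of dependent jobs, each staying within the 12 h wall time limit.
# job_step.slurm is a placeholder for your own batch script; it must resume
# from the restart files written by the previous step.

NSTEPS=4      # example: four 12 h chunks
DEP=""        # the first job has no dependency

for i in $(seq 1 "$NSTEPS"); do
    # --parsable makes sbatch print only the job ID, so it can be captured
    JOBID=$(sbatch --parsable $DEP job_step.slurm)
    echo "step $i submitted as job $JOBID"
    # the next job may start only after this one has completed successfully
    DEP="--dependency=afterok:${JOBID}"
done
```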
Which partition to choose?
If you do not request a partition, your job will be placed in the default partition cpu-clx, which is suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. They have a higher priority and a few dedicated nodes, but provide only limited resources (wall time and number of nodes). Shared nodes are suitable for pre- and post-processing. A job running on a shared node is accounted only for its core fraction (cores of the job / all cores of the node). All non-shared nodes are exclusive to one job at a time, which implies that full NPL are paid.
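For illustration, a job script header requesting the test partition could look like the sketch below; the account name, task counts, and executable are placeholders, not site defaults.

```bash
#!/bin/bash
#SBATCH --partition=cpu-clx:test   # higher priority, but max. 16 nodes and 1 h wall time
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96       # Cascade 9242 nodes offer 2 x 48 = 96 cores
#SBATCH --time=00:30:00            # must stay below the 01:00:00 limit of the partition
#SBATCH --account=myproject        # placeholder: replace with your project account

srun ./my_program                  # placeholder executable
```

On large96:shared you would instead request only the part of the node you need, e.g. --ntasks=24 and a matching --mem value; with 24 of the 96 cores the job is then accounted for a quarter of the node.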
Details about the CPU types can be found below.
The network topology is described here.
The available home/local-ssd/work/perm file systems are discussed under File Systems.
For an overview of all Slurm partitions and the status of nodes: sinfo -r
For detailed information about a particular node: scontrol show node <nodename>
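Both commands are standard Slurm; the configured limits of a partition can be queried the same way, shown here for cpu-clx as an example.

```bash
# overview of all partitions and the state of their nodes
sinfo -r

# configured limits (wall time, node counts) and member nodes of one partition
scontrol show partition cpu-clx

# hardware details of a single node; substitute a real node name
scontrol show node <nodename>
```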
Charge rates
Charge rates for the Slurm partitions can be found under Accounting.
Fat-Tree Communication Network of Lise
See OPA Fat Tree network of Lise
List of CPUs
...
| Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed (GHz) |
|---|---|---|---|---|---|
| Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | CPU partition "Lise" | 2 | 48 | 2.3 |
| Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-8] | 2 | 10 | 2.2 |
...