
The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organised in the following Slurm partitions:

Lise (Berlin)

| Partition | Node name | Nodes | Max. nodes per job | Max. jobs per user (running/queued) | Wall time limit (hh:mm:ss) | Usable memory (MB per node) | CPU | Charged core-hours per node | Remark |
|---|---|---|---|---|---|---|---|---|---|
| cpu-clx | bcn# | 688 | 512 | 128 / 500 | 12:00:00 | 362 000 | Cascade 9242 | 96 | default partition; use blogin3-8.nhr.zib.de |
| cpu-clx:test | bcn# | 32 dedicated +128 on demand | 16 | 1 / 500 | 1:00:00 | 362 000 | Cascade 9242 | 96 | test nodes with higher priority but less wall time |
| large96 | bfn# | 28 | 8 | 16 / 500 | 12:00:00 | 747 000 | Cascade 9242 | 144 | fat memory nodes; use blogin1-2.nhr.zib.de |
| large96:test | bfn# | 2 dedicated +2 on demand | 2 | 1 / 500 | 1:00:00 | 747 000 | Cascade 9242 | 144 | fat memory test nodes with higher priority but less wall time |
| large96:shared | bfn# | 2 dedicated | 1 | 16 / 500 | 48:00:00 | 747 000 | Cascade 9242 | 144 | fat memory nodes for data pre- and post-processing |
| huge96 | bsn# | 2 | 1 | 16 / 500 | 12:00:00 | 1 522 000 | Cascade 9242 | 192 | very fat memory nodes for data pre- and post-processing |

Emmy (Göttingen)

| Partition | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user | Usable memory (MB per node) | CPU, GPU type | NPL per node hour | Remark |
|---|---|---|---|---|---|---|---|---|---|
| standard96 | gcn# | 12:00:00 | 924 | 256 | unlimited | 362 000 | Cascade 9242 | 96 | default partition |
| standard96:test | gcn# | 1:00:00 | 16 dedicated +48 on demand | 16 | unlimited | 362 000 | Cascade 9242 | 96 | test nodes with higher priority but less wall time |
| large96 | gfn# | 12:00:00 | 12 | 2 | unlimited | 747 000 | Cascade 9242 | 144 | fat memory nodes |
| large96:test | gfn# | 1:00:00 | 2 dedicated +2 on demand | 2 | unlimited | 747 000 | Cascade 9242 | 144 | fat memory test nodes with higher priority but less wall time |
| large96:shared | gfn# | 48:00:00 | 2 dedicated +2 on demand | 1 | unlimited | 747 000 | Cascade 9242 | 144 | fat memory nodes for data pre- and post-processing |
| huge96 | gsn# | 24:00:00 | 2 | 1 | unlimited | 1 522 000 | Cascade 9242 | 192 | very fat memory nodes for data pre- and post-processing |
| medium40 | gcn# | 48:00:00 | 368 | 128 | unlimited | 181 000 | Skylake 6148 | 40 | |
| medium40:test | gcn# | 1:00:00 | 32 dedicated +96 on demand | 8 | unlimited | 181 000 | Skylake 6148 | 40 | test nodes with higher priority but less wall time |
| large40 | gfn# | 48:00:00 | 11 | 4 | unlimited | 764 000 | Skylake 6148 | 80 | fat memory nodes |
| large40:test | gfn# | 1:00:00 | 3 | 2 | unlimited | 764 000 | Skylake 6148 | 80 | fat memory test nodes with higher priority but less wall time |
| large40:shared | gfn# | 48:00:00 | 2 | 1 | unlimited | 764 000 | Skylake 6148 | 80 | fat memory nodes for data pre- and post-processing |
| gpu | ggpu# | 12:00:00 | 3 | 3 | unlimited | 764 000 | Skylake 6148 + Tesla V100 | 80 | see GPU Usage |

A100 GPU nodes: 4 NVidia A100 per node (40 GB per GPU), unlimited jobs per user, 600 NPL per node hour; see GPU Usage.

12 hours are too short? See Slurm usage for how to pass the 12h wall time limit with job dependencies.
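Such a dependency chain can be sketched with sbatch's --parsable and --dependency options (job.slurm is a placeholder for your own batch script; the exact restart logic depends on your application):

```shell
#!/bin/bash
# Chain two jobs, each within the 12 h wall time limit.
# --parsable makes sbatch print only the job ID.
first=$(sbatch --parsable job.slurm)

# The second job starts only after the first finished successfully.
sbatch --dependency=afterok:${first} job.slurm
```

With afterok the chain stops if a step fails; use afterany instead if the follow-up job should run regardless of the first job's exit status.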

Which partition to choose?

If you do not request a partition, your job will be placed in the default partition (cpu-clx on Lise, standard96 on Emmy).

The default partitions are suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs; they have a higher priority and a few dedicated nodes, but provide only limited resources (wall time and number of nodes). Shared nodes are suitable for pre- and post-processing. A job running on a shared node is accounted only for its core fraction (cores of the job / all cores of the node). All non-shared nodes are exclusive to one job at a time, which implies that the full NPL are paid.

Details about the CPU/GPU types can be found below.
The network topology is described here.
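For illustration, a minimal batch script that explicitly requests one of the partitions above could look like this (partition, limits, and the program ./my_program are placeholders to adapt to your project):

```shell
#!/bin/bash
#SBATCH --partition=standard96:test   # test partition: higher priority, less wall time
#SBATCH --nodes=2                     # stay within the partition's max. nodes per job
#SBATCH --time=00:30:00               # must not exceed the partition's wall time limit
#SBATCH --job-name=mytest

srun ./my_program                     # placeholder executable
```

Submitting without the --partition line would place the same job in the default partition instead.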

The available file systems (home, local-ssd, work, perm) are discussed under File Systems.

For an overview of all Slurm partitions and the status of nodes: sinfo -r
For detailed information about a particular node: scontrol show node <nodename>
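For example (the -o format string is one possible choice, showing partition, time limit, and node count; the node name is a placeholder that depends on the system):

```shell
# List all partitions with their time limits and node counts
sinfo -o "%P %l %D"

# Show only partitions whose nodes are responding
sinfo -r

# Inspect one node in detail (replace bcn1001 with a real node name)
scontrol show node bcn1001
```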

Charge rates

Charge rates for the Slurm partitions can be found under Accounting.
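As a rough illustration of how such rates combine with job size (formula assumed from the accounting rules above: charged core-hours = nodes × charge rate × hours, scaled by the core fraction on shared nodes):

```shell
# Exclusive job: 10 Cascade 9242 nodes (rate 96 core-hours per node hour) for 2 hours
echo $((10 * 96 * 2))          # 1920 core-hours

# Shared node: 24 of 96 cores on one fat node (rate 144) for 2 hours
echo $((144 * 2 * 24 / 96))    # 72 core-hours
```

The actual rates per partition are listed under Accounting; the numbers here are only the examples from the tables above.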

Fat-Tree Communication Network of Lise

See OPA Fat Tree network of Lise

List of CPUs

...


| Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed [GHz] |
|---|---|---|---|---|---|
| Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | Lise and Emmy compute partitions | 2 | 48 | 2.3 |
| Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-8], glogin[3-8] | 2 | 10 | 2.2 |
| Skylake 6148 | Intel Skylake Gold 6148 | Emmy compute partitions | 2 | 20 | 2.4 |
| Skylake 4110 | Intel Skylake Silver 4110 | glogin[1-2] | 2 | 8 | 2.1 |
| Tesla V100 | NVIDIA Tesla V100 32GB | Emmy gpu partition | 4 | 640/5120* | |

...