The compute nodes of Lise in Berlin (blogin.hlrn.de) and Emmy in Göttingen (glogin.hlrn.de) are organized into the following SLURM partitions:

Lise (Berlin)

Partition (number holds cores per node) | Node name | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user (running/queued) | Usable memory per node [MB] | CPU | Shared | Charged core-hours per node | Remark
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
standard96 | bcn# | 12:00:00 | 1204 | 512 | 16 / 500 | 362 000 | Cascade 9242 | | 96 | default partition
standard96:test | bcn# | 1:00:00 | 32 dedicated + 128 on demand | 16 | 1 / 500 | 362 000 | Cascade 9242 | | 96 | test nodes with higher priority but lower walltime
large96 | bfn# | 12:00:00 | 28 | 8 | 16 / 500 | 747 000 | Cascade 9242 | | 144 | fat memory nodes
large96:test | bfn# | 1:00:00 | 2 dedicated + 2 on demand | 2 | 1 / 500 | 747 000 | Cascade 9242 | | 144 | fat memory test nodes with higher priority but lower walltime
large96:shared | bfn# | 48:00:00 | 2 dedicated | 1 | 16 / 500 | 747 000 | Cascade 9242 | ✓ | 144 | fat memory nodes for data pre- and postprocessing
huge96 | bsn# | 24:00:00 | 2 | 1 | 16 / 500 | 1 522 000 | Cascade 9242 | ✓ | 192 | very fat memory nodes for data pre- and postprocessing
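
As an illustration of the "Charged core-hours per node" column: a job on non-shared nodes occupies its nodes exclusively, so, for example, a 3-hour job on 10 standard96 nodes would be charged 10 × 96 × 3 = 2880 core-hours.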

Are 12 hours too short? See here how to get around the 12 h walltime limit with job dependencies.
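
A minimal sketch of such a dependency chain, assuming two hypothetical job scripts part1.slurm and part2.slurm, where the second job must not start before the first one has finished successfully:

  # submit the first 12-hour chunk and capture its job ID
  JOBID=$(sbatch --parsable part1.slurm)
  # the follow-up chunk stays pending until the first job completes successfully
  sbatch --dependency=afterok:${JOBID} part2.slurm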

Emmy (Göttingen)

* Charged core-hours per node: 600 for the nodes with 4 GPUs, and 1200 for the nodes with 8 GPUs


Which partition to choose?

If you do not request a partition, your job will be placed in the default partition, which is standard96.

The default partitions are suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. They have a higher priority and a few dedicated nodes, but are limited in walltime and number of nodes. Shared nodes are suitable for pre- and postprocessing. A job running on a shared node is only charged for its core fraction (cores used by the job / all cores of the node). All non-shared nodes are exclusive to one job, which implies that the full NPL per node are charged.
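
As a sketch only (program name, core count and runtime are placeholders), a pre-/postprocessing job on a shared node could look like this:

  #!/bin/bash
  #SBATCH --partition=large96:shared   # without --partition the job would go to standard96
  #SBATCH --ntasks=24                  # request only 24 of the 96 cores of the node
  #SBATCH --time=02:00:00
  srun ./postprocess                   # placeholder executable

With these values the job would be charged 24/96 × 144 × 2 = 72 core-hours, whereas an exclusive large96 node would cost 144 × 2 = 288 core-hours for the same two hours.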

Details about the CPU/GPU types can be found below.
The network topology is described here.

The available home/local-ssd/work/perm storages are discussed in File Systems.

An overview of all partitions and node states is provided by: sinfo -r
To see detailed information about a particular node: scontrol show node <nodename>
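
For example (the node name below is only a placeholder following the bcn# naming scheme of the table above):

  sinfo -r                      # partitions and the state of their (responding) nodes
  sinfo -p large96:shared       # restrict the overview to a single partition
  scontrol show node bcn1001    # detailed information about one node (placeholder name)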

List of CPUs and GPUs at HLRN


Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed [GHz]
--- | --- | --- | --- | --- | ---
Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | Lise and Emmy compute partitions | 2 | 48 | 2.3
Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-8], glogin[3-8] | 2 | 10 | 2.2
Skylake 6148 | Intel Skylake Gold 6148 | Emmy compute partitions | 2 | 20 | 2.4
Skylake 4110 | Intel Skylake Silver 4110 | glogin[1-2] | 2 | 8 | 2.1
Tesla V100 | NVIDIA Tesla V100 32GB | Emmy grete partitions | 4 | 640 / 5120* | 
Tesla A100 | NVIDIA Tesla A100 40GB and 80GB | Emmy grete partitions | 4 or 8 | 432 / 6912* | 

* Tensor Cores / CUDA Cores
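
To check which of these CPUs or GPUs a node actually provides, generic Linux/NVIDIA tools can be run on the node itself, for example:

  lscpu | grep 'Model name'                   # prints the CPU model, e.g. an Intel Xeon Platinum 9242
  nvidia-smi --query-gpu=name --format=csv    # lists the installed GPUs (GPU nodes only)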
