
The compute nodes of the CPU cluster of system Lise are organised in the following Slurm partitions.

| Partition name | Node count | CPU | Main memory (GB) | Max. nodes per job | Max. jobs per user (running / queued) | Wall time limit (hh:mm:ss) | Remark |
|---|---|---|---|---|---|---|---|
| cpu-clx | 688 | Cascade 9242 | 362 | 512 | 128 / 500 | 12:00:00 | default partition |
| cpu-clx:test | 32 dedicated +128 on demand | Cascade 9242 | 362 | 16 | 1 / 500 | 01:00:00 | test nodes with higher priority but less wall time |
| large96 | 28 | Cascade 9242 | 747 | 8 | 128 / 500 | 12:00:00 | fat memory nodes |
| large96:test | 2 dedicated +2 on demand | Cascade 9242 | 747 | 2 | 1 / 100 | 01:00:00 | fat memory test nodes with higher priority but less wall time |
| large96:shared | 2 dedicated | Cascade 9242 | 747 | 1 | 128 / 500 | 48:00:00 | fat memory nodes for data pre- and post-processing |
| huge96 | 2 | Cascade 9242 | 1522 | 1 | 128 / 500 | 24:00:00 | very fat memory nodes for data pre- and post-processing |

See Slurm usage for how to exceed the 12 h wall time limit with job dependencies.
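One common pattern (a sketch; the script name job.slurm is a placeholder for your own restartable batch script) is to submit the continuation with a dependency on the previous job:

```shell
# Submit the first 12-hour chunk and capture its job ID.
first=$(sbatch --parsable job.slurm)

# Submit the continuation; afterok delays it until the first
# chunk has finished successfully.
sbatch --dependency=afterok:"$first" job.slurm
```

This can be repeated for as many chunks as needed, provided the job script writes and resumes from checkpoints.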


Emmy (Göttingen)

...

Partition (number holds cores per node)

...

Max. walltime

...


...

Usable memory MB per node

...

CPU, GPU type

...

gcn#

...

2 dedicated

+6 on demand

...

747 000

...

1522 000

...

very fat memory nodes for data pre- and postprocessing

...

...

8 dedicated

+64 on demand

...

181 000

...

764 000

...

2 dedicated

+2 on demand

...

764 000

...

2 dedicated

+6 on demand

...

500 000 MB per node

(40GB HBM per GPU)

...

see /wiki/spaces/PUB/pages/428683

...

Skylake 6148 + 4 NVIDIA V100 32GB,

Zen3 EPYC 7513 + 4 NVIDIA A100 40GB,

and Zen2 EPYC 7662 + 8 NVIDIA A100 80GB

...

764 000 MB (32 GB per GPU)

or 500 000 MB (10GB or 20GB HBM per MiG slice)

...

Skylake 6148 + 4 NVIDIA V100 32GB,

Zen3 EPYC 7513 + 4 NVIDIA A100 40GB split into 2g.10gb and 3g.20gb slices

...

150 per GPU (V100)

or 47 per MiG slice (A100)

see /wiki/spaces/PUB/pages/428683

A100 GPUs are split into slices via MIG (3 slices per GPU)

...

764 000 MB (32 GB per GPU)

or 500 000 MB (10GB or 20GB HBM per MiG slice)

...

Skylake 6148 + 4 NVIDIA V100 32GB,

Zen3 EPYC 7513 + 4 NVIDIA A100 40GB split into 2g.10gb and 3g.20gb slices

...

150 per GPU (V100)

or 47 per MiG slice (A100)

* 600 for the nodes with 4 GPUs, and 1200 for the nodes with 8 GPUs

Which partition to choose?

If you do not request a partition, your job will be placed in the default partition, which is cpu-clx.

The default partition is suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. These have a higher priority and a few dedicated nodes, but provide only limited resources. Shared nodes are suitable for pre- and post-processing. A job running on a shared node is accounted only for its core fraction (cores of the job / all cores of the node); for example, a job using 24 of 96 cores is charged a quarter of the node. All non-shared nodes are exclusive to one job at a time.
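As an illustration, a minimal batch script might look like this (a sketch; the job name and program are placeholders, and the requested nodes and time must stay within the limits of the chosen partition):

```shell
#!/bin/bash
#SBATCH --partition=cpu-clx   # or e.g. large96:shared for post-processing
#SBATCH --nodes=2             # must not exceed the partition's max. nodes per job
#SBATCH --time=12:00:00       # must not exceed the partition's wall time limit
#SBATCH --job-name=myjob      # placeholder name

srun ./my_program             # placeholder executable
```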

The available home/local-ssd/work/perm file systems are discussed under File Systems.

For an overview of all Slurm partitions and the status of nodes: sinfo -r
For detailed information about a particular node: scontrol show node <nodename>
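For example, the configured limits of a partition can be queried directly from Slurm (the partition name here is illustrative):

```shell
# Per-partition limits such as MaxTime and MaxNodes
scontrol show partition cpu-clx

# State of your own pending and running jobs
squeue -u "$USER"
```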

Charge rates

Charge rates for the Slurm partitions can be found under Accounting.

Fat-Tree Communication Network of Lise

See OPA Fat Tree network of Lise

List of CPUs

...


| Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed [GHz] |
|---|---|---|---|---|---|
| Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | CPU partition "Lise" | 2 | 48 | 2.3 |
| Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-8], glogin[3-6] | 2 | 10 | 2.2 |
| Skylake 6148 | Intel Skylake Gold 6148 | Emmy compute partitions | 2 | 20 | 2.4 |
| Skylake 4110 | Intel Skylake Silver 4110 | glogin[1-2] | 2 | 8 | 2.1 |
| Tesla V100 | NVIDIA Tesla V100 32GB | grete partitions | 4 | 640/5120* |  |
| Tesla A100 | NVIDIA Tesla A100 40GB and 80GB | grete partitions | 4 or 8 | 432/6912* |  |

...