
The compute nodes of the CPU cluster of the system Lise are organised into the following Slurm partitions.

| Partition name | Node number | CPU | Main memory (GB) | Max. nodes per job | Max. jobs per user (running / queued) | Walltime (hh:mm:ss) | Remark |
|---|---|---|---|---|---|---|---|
| standard96 | 1204 | Cascade 9242 | 362 | 512 | 16 / 500 | 12:00:00 | default partition |
| standard96:test | 32 dedicated +128 on demand | Cascade 9242 | 362 | 16 | 1 / 500 | 01:00:00 | test nodes with higher priority but lower walltime |
| large96 | 28 | Cascade 9242 | 747 | 8 | 16 / 500 | 12:00:00 | fat memory nodes |
| large96:test | 2 dedicated +2 on demand | Cascade 9242 | 747 | 2 | 1 / 500 | 10:00:00 | fat memory test nodes with higher priority but lower walltime |
| large96:shared | 2 dedicated | Cascade 9242 | 747 | 1 | 16 / 500 | 48:00:00 | fat memory nodes for data pre- and postprocessing |
| huge96 | 2 | Cascade 9242 | 1522 | 1 | 16 / 500 | 24:00:00 | very fat memory nodes for data pre- and postprocessing |

See Slurm usage for how to get past the 12 h walltime limit using job dependencies.
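As a minimal sketch of such a chain (the script names job_part1.slurm etc. are placeholders), each segment is submitted with a dependency on the previous one, so it only starts after that job has completed successfully:

    # Submit the first segment and capture its job ID.
    jobid=$(sbatch --parsable job_part1.slurm)

    # Each further segment starts only after the previous one finished successfully.
    jobid=$(sbatch --parsable --dependency=afterok:${jobid} job_part2.slurm)
    jobid=$(sbatch --parsable --dependency=afterok:${jobid} job_part3.slurm)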

Which partition to choose?

If you do not request a partition, your job will be placed in the default partition, which is standard96.

The default partition is suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. They have a higher priority and a few dedicated nodes, but are limited in walltime and node count. Shared nodes are suitable for pre- and postprocessing. A job running on a shared node is accounted only for its core fraction (cores used by the job / total cores per node). All non-shared nodes are exclusive to one job.
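As an illustration (the program name is a placeholder), a minimal job script for the default partition could look like this; on large96:shared, a job requesting, say, 24 of the 96 cores of a node would be accounted for 24/96 = 25% of the node:

    #!/bin/bash
    #SBATCH --partition=standard96    # may be omitted, standard96 is the default
    #SBATCH --nodes=4                 # standard96 allows up to 512 nodes per job
    #SBATCH --ntasks-per-node=96      # one task per core on the 96-core nodes
    #SBATCH --time=12:00:00           # the partition's walltime limit

    srun ./my_program                 # my_program is a placeholder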

Details about the CPU/GPU types can be found below.
The network topology is described here.

The available home/local-ssd/work/perm storage systems are discussed in Storage Systems.

An overview of all partitions and node states is provided by: sinfo -r
To see detailed information about a node: scontrol show node <nodename>
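For example, the walltime limit and node count of a single partition can be queried with standard sinfo format options:

    # Partition name, availability, walltime limit, and node count
    sinfo -p standard96 -o "%P %a %l %D"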

Charge rates

Charge rates for the Slurm partitions can be found in Accounting.

Fat-Tree Network of Lise

See OPA Fat Tree network of Lise

List of CPUs and GPUs


| Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed [GHz] |
|---|---|---|---|---|---|
| Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | CPU partition "Lise" | 2 | 48 | 2.3 |
| Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-6] | 2 | 10 | 2.2 |
| Tesla A100 | NVIDIA Tesla A100 40GB and 80GB | GPU A100 partition | 4 | 432/6912* | |

*Tensor Cores / CUDA FP64 Cores
