
Code execution

To execute your code you need to

  1. build a binary (executable, model code), see Compilation CPU CLX,
  2. create a Slurm job script as shown in Workflow compiler, Workflow OpenMPI, or Workflow Intel MPI, using the Slurm partitions listed in Partition for CPU CLX,
  3. submit the Slurm job script.
> sbatch myjobscript.slurm
Submitted batch job 8028673
> ls slurm-8028673.out
slurm-8028673.out
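The steps above come together in a single job script. The following is a minimal sketch, not a verbatim template from this guide: the binary name ./mybinary is a placeholder for the executable built in step 1, and the partition and core counts are taken from the tables in this page.

```shell
#!/bin/bash
#SBATCH --partition=cpu-clx        # default partition, see "Partition for CPU CLX"
#SBATCH --nodes=2                  # number of CLX nodes
#SBATCH --ntasks-per-node=96       # 2 units x 48 cores per node, see "List of CPUs"
#SBATCH --time=12:00:00            # wall time limit of the default partition

# ./mybinary is a placeholder for the MPI executable built in step 1
mpirun ./mybinary
```

Saved as myjobscript.slurm, this file is what `sbatch myjobscript.slurm` submits; stdout ends up in slurm-<jobid>.out as shown above.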

Partition for CPU CLX

The compute nodes of the CPU cluster of system Lise are organised via the following Slurm partitions.

Partition name | Node count | CPU | Main memory (GB) | Max. nodes per job | Max. jobs per user (running/queued) | Wall time limit (hh:mm:ss) | Remark
cpu-clx | 948 | Cascade 9242 | 362 | 512 | 128 / 500 | 12:00:00 | default
cpu-clx:test | 16 dedicated + 128 on demand | Cascade 9242 | 362 | 16 | 1 / 500 | 01:00:00 | test nodes with higher priority but less wall time
cpu-clx:ssd | 50 | Cascade 9242 | 362 | | 128 / 500 | 12:00:00 | local 2 TB SSD for IO
cpu-clx:large | 28 | Cascade 9242 | 747 | 8 | 128 / 500 | 12:00:00 | fat memory nodes
blogin1-2.nhr.zib.de
cpu-clx:huge | 2 | Cascade 9242 | 1522 | 1 | 128 / 500 | 24:00:00 | very fat memory nodes for data pre- and post-processing

See Slurm usage for how to get around the 12 h wall time limit with job dependencies.
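Job dependencies can be sketched as a chain of submissions; the script names part1.slurm and part2.slurm are hypothetical, and the exact pattern is described under Slurm usage.

```shell
# Hypothetical chain: part2 starts only after part1 has finished successfully,
# so a computation longer than 12 h is split into restartable chunks.
JOBID=$(sbatch --parsable part1.slurm)            # --parsable prints only the job id
sbatch --dependency=afterok:${JOBID} part2.slurm  # held until job $JOBID exits with 0
```

Each chunk must checkpoint its state so the next job in the chain can resume from it.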

Which partition to choose?

The default partition cpu-clx is suitable for most calculations. The :test partitions are, as the name suggests, intended for short and small test runs; they have a higher priority and a few dedicated nodes, but provide only limited resources. Shared nodes are suitable for pre- and post-processing. A job running on a shared node is charged only for its core fraction (cores used by the job / all cores of the node). All non-shared nodes are exclusive to one job at a time.
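As a worked example of the core fraction, assume a hypothetical shared-node job that requests 24 cores; the 96 cores per node follow from the List of CPUs below (2 units x 48 cores).

```shell
# Hypothetical shared-node job: 24 cores requested on a 96-core CLX node
JOB_CORES=24
NODE_CORES=96
# charged fraction = cores of job / all cores per node
awk -v j="$JOB_CORES" -v n="$NODE_CORES" 'BEGIN { printf "%.2f\n", j/n }'
# prints 0.25 -> the job is charged a quarter of the node
```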

The available home/local-ssd/work/perm file systems are discussed under File Systems.

For an overview of all Slurm partitions and the status of their nodes: sinfo -r
For detailed information about a particular node: scontrol show node <nodename>

Charge rates for accounting

Charge rates for the Slurm partitions can be found under Accounting.

Fat-Tree Communication Network of Lise

See OPA Fat Tree network of Lise

List of CPUs


Short name | Link to manufacturer specifications | Where to find | Units per node | Cores per unit | Clock speed [GHz]
Cascade 9242 | Intel Cascade Lake Platinum 9242 (CLX-AP) | CPU partition "Lise" | 2 | 48 | 2.3
Cascade 4210 | Intel Cascade Lake Silver 4210 (CLX) | blogin[1-8] | 2 | 10 | 2.2