
Project account

The NHR center NHR@ZIB follows the NHR-wide regulations.

  • A user account accesses a project account containing units of core hours. A project account can be a Test Project or a Compute Project.
  • A batch job on the compute system is charged a number of core hours to measure its usage.
  • Usage of persistent storage, including the tape library, is currently not accounted.

Charge rates for NHR@ZIB

NHR@ZIB operates the system Lise, which comprises different compute clusters with different Slurm partitions representing the specific hardware. The charge rates for the partitions are given in the table below.

Cluster          | Partition                             | Charge (core hours) per 1 node per 1 h occupancy time | Increased charge rate due to
CPU cluster      | standard96, standard96:test           | 96                  | -
CPU cluster      | large96, large96:test, large96:shared | 144                 | high memory layout
CPU cluster      | huge96                                | 192                 | high memory layout
GPU A100 cluster | gpu-a100                              | 600                 | four NVidia A100 (80 GB) per compute node
GPU A100 cluster | gpu-a100:shared                       | 150 per GPU         | 600 for four NVidia A100 (80 GB) per node
GPU A100 cluster | gpu-a100:shared:mig                   | 21.43 per MiG slice | four NVidia A100 (80 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total)
GPU PVC cluster  | gpu-pvc                               | free of charge      | test phase
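
For illustration, a minimal Slurm batch script requesting one of the partitions above might look as follows (the program name is hypothetical; --partition, --nodes, and --time are standard Slurm options). A job like this running for the full 2 hours would be charged 4 * 2 * 96 = 768 core hours:

    #!/bin/bash
    #SBATCH --partition=standard96   # charge rate 96 core hours per node-hour (see table)
    #SBATCH --nodes=4                # 4 nodes
    #SBATCH --time=02:00:00          # wallclock limit of 2 hours
    srun ./my_program                # hypothetical executable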

Charge rates for NHR@Göttingen

NHR@Göttingen operates the system Emmy, which comprises different compute clusters, each containing different types of compute nodes. The charge rates for the partitions are given in the table below.

Partition                             | Charge (core hours) per 1 node per 1 h occupancy time | Increased charge rate due to
standard96, standard96:test           | 96                | -
large96, large96:test, large96:shared | 144               | high memory layout
huge96                                | 192               | high memory layout
medium40, medium40:test               | 40                | -
large40, large40:test                 | 80                | high memory layout
gpu                                   | 375               | four NVidia V100 (32 GB) GPUs per node
grete                                 | 600               | four NVidia A100 (40 GB) GPUs per node
grete:shared                          | 150 per GPU       | 600 for four NVidia A100 (40 GB) per node; 1200 for eight NVidia A100 (80 GB) per node
grete:interactive, grete:preemptible  | 47 per MiG slice  | four NVidia A100 (40 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total)

Job charge

The charge in core hours for a batch job depends on the number of nodes, the wallclock time used by the job, and the charge rate of the partition used. For a batch job with

  • a number of nodes n,

  • running with a wallclock time of t hours, and

  • on a partition with a charge rate charge_p

the job charge charge_j is given by

charge_j = n * t * charge_p
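
A minimal sketch of this formula in Python (the function name job_charge is hypothetical and not part of any NHR tooling):

    def job_charge(nodes, hours, rate):
        # nodes may be a node fraction for shared partitions (see below)
        return nodes * hours * rate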
Example 1: charge for a node reservation

A job on 10 nodes running for 3 hours on the partition huge96 (charge rate 192 core hours) yields a job charge of 10 * 3 * 192 = 5760 core hours.
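
Using the hypothetical job_charge sketch from above, this example reads:

    job_charge(nodes=10, hours=3, rate=192)   # 10 * 3 * 192 = 5760 core hours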

Batch jobs running in the partition large96:shared access a subset of the cores on a node. For a reservation of cores, the number of nodes in the formula is the corresponding node fraction.

Example 2: charge for a core reservation

A job on 48 cores on the partition large96:shared (96 cores per node, charge rate 144 core hours) has a reservation of n = 48/96 = 0.5 nodes. Assuming a wallclock time of 3 hours, this yields a job charge of 0.5 * 3 * 144 = 216 core hours.
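
With the node fraction as input, the same hypothetical sketch reproduces this result:

    job_charge(nodes=48/96, hours=3, rate=144)   # 0.5 * 3 * 144 = 216.0 core hours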
