
Project account

The NHR center NHR@ZIB follows the NHR-wide regulations.

  • A User Account accesses a project account, which contains units of core hours. A project account can be a Test Project or a Compute Project.
  • A batch job on the compute system is charged a number of core hours, which measures the compute time usage.
  • Usage of persistent storage, including the tape library, is currently not accounted.


...

Charge rates for NHR@ZIB

NHR@ZIB operates the system Lise, which consists of different compute clusters, each containing different partitions. The properties of the available (Slurm) partitions, which represent the specific hardware, can be found on the following pages:

...

Each partition has its own charge rate.

Compute partition | Slurm partition | Charge (core hours per node per hour of occupancy time) | Increased charge rate due to | Remark
CPU cluster, partition "Lise" | standard96, standard96:test | 96 | |
CPU cluster, partition "Lise" | large96, large96:test, large96:shared | 144 | high memory layout |
CPU cluster, partition "Lise" | huge96 | 192 | high memory layout |
GPU A100 cluster partition | gpu-a100 | 600 | four NVidia A100 (80 GB) per compute node |
GPU A100 cluster partition | gpu-a100:shared | 150 per GPU | 600 for four NVidia A100 (80 GB) per node |
GPU A100 cluster partition | gpu-a100:shared:mig | 21.43 per MIG slice | four NVidia A100 (80 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total) |
GPU PVC cluster partition | gpu-pvc | free of charge | | test phase
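
The partitions that are configured on the system can also be listed directly with Slurm; a minimal sketch (the output columns are chosen here only for illustration):

Codeblock
titleExample: list the available partitions
# %P = partition name, %D = number of nodes, %l = time limit
sinfo -o "%.20P %.6D %.12l"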

Charge Rates for NHR@Göttingen

NHR@Göttingen operates the system Emmy, which consists of different compute clusters, each containing different types of compute nodes. The charge rates for the partitions are given in the table below.

...

Slurm partition | Charge (core hours per node per hour of occupancy time) | Increased charge rate due to
standard96, standard96:test | ... |
large96, large96:test, large96:shared | ... |
... | 192 | ...
medium40, medium40:test | ... |
large40, large40:test | ... |
gpu | ... | four NVidia V100 (32 GB) GPUs per node
grete | ... | four NVidia A100 (40 GB)
grete:shared | ... | 600: four NVidia A100 (40 GB) per node; 1200: eight NVidia A100 (80 GB) GPUs per node
grete:interactive, grete:preemptible | ... | four NVidia A100 (40 GB), each split into two 2g.10gb slices (8 per node, currently 24 in total) and one 3g.20gb slice (4 per node, currently 12 in total)

Job charge

The charge in core hours for a batch job depends on the number of nodes, the wallclock time used by the job, and the charge rate of the partition used. For a batch job with

...

Panel
titleExample 2: charge for a core reservation

A job using 48 cores on the partition large96:shared (96 cores per node, charge rate 144 core hours) has a reservation for

num = 48/96 = 0.5 nodes. Assuming a wallclock time of 3 hours yields a job charge of 0.5 · 3 · 144 = 216 core hours.
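
The same calculation can be written out with the values from Example 2 (0.5 nodes, 3 hours of wallclock time, charge rate 144); a minimal sketch using bc:

Codeblock
titleExample: computing a job charge
# charge = nodes * wallclock hours * charge rate of the partition
nodes=0.5      # 48 of 96 cores on large96:shared
hours=3
rate=144       # core hours per node per hour of occupancy
echo "$nodes * $hours * $rate" | bc    # prints 216.0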

Select the account in your batch job

Batch jobs are submitted to the compute system by a User Account.

...


...

For the User Account, the default project for computing time can be changed under the link User Data in the Portal NHR@ZIB.

To charge the account myaccount, add the following line to the job script:

Codeblock
titleExample: account for one job
#SBATCH --account=myaccount
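
The account can also be selected on the command line at submission time, equivalent to the directive in the job script; a minimal sketch, where myaccount and jobscript.slurm stand for your project account and job script:

Codeblock
titleExample: account on the command line
# myaccount and jobscript.slurm are placeholders
sbatch --account=myaccount jobscript.slurm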

After job script submission, the batch system checks the project account for coverage and authorizes the job for scheduling. Otherwise the job is rejected; please note the error message:

You can check the account of a job that is out of core hours:

Codeblock
titleExample: out of core hour
> squeue
... myaccount ... AccountOutOfNPL ...
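
To show the account and the pending reason of your own jobs explicitly, the squeue output format can be adjusted; a minimal sketch (the format string is chosen here only for illustration):

Codeblock
titleExample: show account and pending reason
# %i = job id, %a = account, %r = reason (e.g. AccountOutOfNPL)
squeue -u $USER -o "%.10i %.12a %.20r"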