
To match your job requirements to the hardware, you can choose among various partitions. Each partition has its own job queue. All available partitions with their corresponding wall-time limits, core counts, memory, and CPU/GPU types are listed here.
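You can also query Slurm directly on a login node to see the partitions that are currently configured and their limits. The following sinfo calls are generic Slurm examples (the partition name standard96 is only used for illustration):

  sinfo -s                                  # one summary line per partition: availability, time limit, node counts
  sinfo -p standard96 -o "%P %l %D %c %m"   # partition, time limit, node count, cores and memory per node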

[Partition table: columns include Partition, Location, and Usable memory per node; visible entries include 362 GB, 747 GB, and 1522 GB of usable memory per node, a node count of 16 dedicated + 48 on demand, and very fat nodes for data pre- and postprocessing.]

Which partition to choose?

If you do not request a partition, your job will be placed in the default partition, which is standard96 in Berlin and medium40 in Göttingen.

The default partitions are suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs; they have a higher priority and a few dedicated nodes, but are limited in wall time and number of nodes. Shared nodes are suitable for post-processing. A job running on a shared node is charged only for its core fraction (cores used by the job / total cores of the node); for example, a job using 24 cores of a 96-core shared node is charged one quarter of that node. All non-shared nodes are allocated exclusively to one job, which means that the full NPL rate of each node is charged.
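For illustration, a small test run on Lise could request the test partition as in the following sketch; the partition name standard96:test follows the :test naming scheme described above, and the node count, wall time, and program name are placeholders:

  #!/bin/bash
  #SBATCH -p standard96:test   # :test partition: higher priority, few dedicated nodes, strict limits
  #SBATCH -N 2                 # small node count, within the test limits
  #SBATCH -t 01:00:00          # short wall time
  srun ./my_test_program       # placeholder executable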

Details about the CPU/GPU types and network topology can be found here.

Parameters

Parameter          SBATCH flag               Comment
# nodes            -N <#>
# tasks            -n <#>
# tasks per node   --ntasks-per-node <#>     Different defaults between mpirun and srun
Partition          -p <name>                 standard96 (Lise), medium40 (Emmy)
# cores per task   -c <#>                    Default 1, relevant for OpenMP/hybrid jobs
Wall time limit    -t hh:mm:ss
Mail               --mail-type=ALL           See the sbatch manpage for other types
Project/Account    -A <project>              Specify the project for NPL accounting
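Putting the flags from this table together, a minimal job script for the Berlin default partition could look as follows; the project name, node count, and executable are placeholders, and 96 tasks per node assumes the 96-core nodes of standard96:

  #!/bin/bash
  #SBATCH -p standard96          # partition (default on Lise)
  #SBATCH -N 4                   # number of nodes
  #SBATCH --ntasks-per-node 96   # MPI tasks per node (assuming 96-core nodes)
  #SBATCH -t 12:00:00            # wall time limit hh:mm:ss
  #SBATCH -A myproject           # placeholder project for NPL accounting
  #SBATCH --mail-type=ALL        # mail notifications for all job events

  srun ./my_parallel_program     # placeholder executable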

...