...

The number in a partition name gives the cores per node.

| Partition | Location | Max. walltime | Nodes | Max. nodes per job | Max. jobs per user | Max. memory per node | Shared | NPL per node hour | Remark |
|---|---|---|---|---|---|---|---|---|---|
| standard96 | Lise | 12:00:00 | 952 | 256 | (var) | 362 GB | | 14 | default partition |
| standard96:test | Lise | 1:00:00 | 32 dedicated +128 on demand | 16 | 1 | 362 GB | | 14 | test nodes with higher priority but lower walltime |
| large96 | Lise | 12:00:00 | 28 | 4 | (var) | 747 GB | | 21 | fat nodes |
| large96:test | Lise | 1:00:00 | 2 dedicated +2 on demand | 2 | 1 | 747 GB | | 21 | fat test nodes with higher priority but lower walltime |
| large96:shared | Lise | 48:00:00 | 2 dedicated | 1 | (var) | 747 GB | ✓ | 21 | fat nodes for data pre- and postprocessing |
| huge96 | Lise | 24:00:00 | 2 | 1 | (var) | 1522 GB | ✓ | 28 | very fat nodes for data pre- and postprocessing |
| medium40 | Emmy | 12:00:00 | 368 | 128 | unlimited | 362 GB | | 6 | default partition |
| medium40:test | Emmy | 1:00:00 | 16 dedicated +48 on demand | 8 | unlimited | 362 GB | | 6 | test nodes with higher priority but lower walltime |
| large40 | Emmy | 12:00:00 | 11 | 4 | unlimited | 747 GB | | 12 | fat nodes |
| large40:test | Emmy | 1:00:00 | 3 | 2 | unlimited | 747 GB | | 12 | fat test nodes with higher priority but lower walltime |
| large40:shared | Emmy | 24:00:00 | 2 | 1 | unlimited | 747 GB | ✓ | 12 | for data pre- and postprocessing |
| gpu | Emmy | 12:00:00 | 1 | 1 | unlimited | | | | equipped with 4 x NVIDIA Tesla V100 32GB |

...

The default partitions are suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs: they have a higher priority and a few dedicated nodes, but are limited in walltime and number of nodes.

Shared nodes, found in the :shared partitions, are mainly suitable for data pre- and postprocessing. A job running on a shared node is accounted only for its core fraction (cores used by the job / all cores of the node). All other nodes are exclusive to one job, which implies that the full NPL per node hour are paid.
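The core-fraction accounting on shared nodes can be illustrated with a small calculation (the job size is a hypothetical example):

```shell
#!/bin/sh
# Hypothetical example: NPL charged per hour for a job on a shared node.
# A job using 24 of the 96 cores of a large96:shared node
# (21 NPL per node hour) pays only its core fraction: 24/96 * 21.
job_cores=24
node_cores=96
npl_per_node_hour=21
awk -v j="$job_cores" -v n="$node_cores" -v p="$npl_per_node_hour" \
    'BEGIN { printf "%.2f NPL per hour\n", j / n * p }'
# prints "5.25 NPL per hour"
```

On an exclusive (non-shared) node the same job would be charged the full 21 NPL per node hour regardless of how many cores it uses.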

Parameters

| Parameter | SBATCH flag | Comment |
|---|---|---|
| # nodes | -N <#> | |
| # tasks | -n <#> | |
| # tasks per node | --tasks-per-node <#> | Different defaults between mpirun and srun |
| partition | -p <name> | standard96 (Lise), medium40 (Emmy) |
| # CPU cores per task | -c <#> | Default 1, interesting for OpenMP/hybrid jobs |
| Wall time limit | -t hh:mm:ss | |
| Mail | --mail-type=ALL | See sbatch manpage for different types |
| Project/Account | -A <project> | Specify project for NPL accounting |
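Each of these flags can be given either on the sbatch command line or as an #SBATCH directive at the top of the job script. A sketch of both forms (the script name and project are placeholders):

```shell
# On the command line (job.sh and myproject are hypothetical names):
#   sbatch -p standard96 -N 4 -t 12:00:00 -A myproject job.sh
#
# ...or equivalently inside job.sh:
#SBATCH -p standard96   # partition
#SBATCH -N 4            # number of nodes
#SBATCH -t 12:00:00     # wall time limit
#SBATCH -A myproject    # project for NPL accounting
```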

Job Scripts

A job script can be any script that contains special instructions for Slurm. The most commonly used forms are shell scripts, such as bash or plain sh, but other scripting languages (e.g. Python, Perl, R) are also possible.
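A minimal bash job script might look like the following sketch (partition, task counts, and program name are placeholders; the actual launch line is shown as a comment so the body also runs outside Slurm):

```shell
#!/bin/bash
#SBATCH -p standard96        # partition (use medium40 on Emmy)
#SBATCH -N 2                 # 2 nodes
#SBATCH --tasks-per-node 96  # one task per core on a 96-core node
#SBATCH -t 01:00:00          # wall time limit
#SBATCH --mail-type=ALL      # mail on all job state changes

# Under Slurm the #SBATCH directives above are parsed by sbatch; the body
# below is ordinary bash. Replace the echo with the real launch line, e.g.:
#   srun ./my_program
echo "Job running on ${SLURM_JOB_NUM_NODES:-unknown} nodes"
```

Submit it with `sbatch <scriptname>`; Slurm reads the directives, queues the job on the requested partition, and executes the script body on the first allocated node.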

...