...
Partition (number holds cores per node) | Location | Max. walltime | Nodes | Max nodes per job | Max jobs per user | Max memory per node | Shared | NPL per node hour | Remark |
---|---|---|---|---|---|---|---|---|---|
standard96 | Lise | 12:00:00 | 952 | 256 | (var) | 362 GB | ✘ | 14 | default partition |
standard96:test | Lise | 1:00:00 | 32 dedicated +128 on demand | 16 | 1 | 362 GB | ✘ | 14 | test nodes with higher priority but lower walltime |
large96 | Lise | 12:00:00 | 28 | 4 | (var) | 747 GB | ✘ | 21 | fat nodes |
large96:test | Lise | 1:00:00 | 2 dedicated +2 on demand | 2 | 1 | 747 GB | ✘ | 21 | fat test nodes with higher priority but lower walltime |
large96:shared | Lise | 48:00:00 | 2 dedicated | 1 | (var) | 747 GB | ✓ | 21 | fat nodes for data pre- and postprocessing |
huge96 | Lise | 24:00:00 | 2 | 1 | (var) | 1522 GB | ✓ | 28 | very fat nodes for data pre- and postprocessing |
medium40 | Emmy | 12:00:00 | 368 | 128 | unlimited | 362 GB | ✘ | 6 | default partition |
medium40:test | Emmy | 1:00:00 | 16 dedicated +48 on demand | 8 | unlimited | 362 GB | ✘ | 6 | test nodes with higher priority but lower walltime |
large40 | Emmy | 12:00:00 | 11 | 4 | unlimited | 747 GB | ✘ | 12 | fat nodes |
large40:test | Emmy | 1:00:00 | 3 | 2 | unlimited | 747 GB | ✘ | 12 | fat test nodes with higher priority but lower walltime |
large40:shared | Emmy | 24:00:00 | 2 | 1 | unlimited | 747 GB | ✓ | 12 | for data pre- and postprocessing |
gpu | Emmy | 12:00:00 | 1 | 1 | unlimited | | ✘ | | equipped with 4 x NVIDIA Tesla V100 32GB |
...
The default partitions are suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. They have a higher priority and a few dedicated nodes, but are limited in walltime and number of nodes.
The :shared partitions are mainly suitable for data pre- and postprocessing. A job running on a shared node is accounted only for its core fraction (cores of the job / all cores of the node). All nodes outside the :shared partitions are exclusive to one job, which implies that the full NPL per node hour are paid.
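As a minimal sketch of how the core fraction affects accounting: a job requesting 24 of the 96 cores on large96:shared is charged 24/96 of the node's 21 NPL per hour. The project name and executable below are placeholders.

```bash
#!/bin/bash
#SBATCH -p large96:shared   # shared fat-node partition on Lise
#SBATCH -n 24               # 24 of 96 cores -> accounted for 24/96 of the node
#SBATCH -t 04:00:00         # well below the 48 h limit of large96:shared
#SBATCH -A myproject        # placeholder project for NPL accounting

srun ./postprocess          # placeholder postprocessing executable
```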
Parameters
Parameter | SBATCH flag | Comment |
---|---|---|
# nodes | -N <#> | |
# tasks | -n <#> | |
# tasks per node | --tasks-per-node <#> | Different defaults between mpirun and srun |
partition | -p <name> | standard96 (Lise), medium40 (Emmy) |
# CPU cores per task | -c <#> | Default 1, interesting for OpenMP/hybrid jobs |
Wall time limit | -t hh:mm:ss | |
Mail notifications | --mail-type=ALL | See sbatch manpage for different types |
Project/Account | -A <project> | Specify project for NPL accounting |
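These parameters can either be passed on the sbatch command line or embedded in the job script as #SBATCH directives; command-line flags take precedence over directives in the script. A hedged example, with the project and script names as placeholders:

```bash
# submit a 2-node job to the default Lise partition with a 1 h wall time limit
sbatch -N 2 -p standard96 -t 01:00:00 -A myproject job.sh

# equivalent directives inside job.sh:
#SBATCH -N 2
#SBATCH -p standard96
#SBATCH -t 01:00:00
#SBATCH -A myproject
```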
Job Scripts
A job script can be any script that contains special instructions for Slurm. The most commonly used form is a shell script, such as `bash` or plain `sh`, but other scripting languages (e.g. Python, Perl, R) are also possible.
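A minimal bash job script could look like the sketch below; the partition, module, project name, and MPI binary are assumptions chosen for illustration.

```bash
#!/bin/bash
#SBATCH -p medium40           # default partition on Emmy
#SBATCH -N 2                  # two nodes
#SBATCH --ntasks-per-node 40  # one MPI task per core
#SBATCH -t 02:00:00           # wall time limit
#SBATCH -A myproject          # placeholder project for NPL accounting

module load impi              # load an MPI environment (module name is an assumption)
srun ./my_mpi_program         # placeholder MPI binary
```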
...