...

The default partitions are suitable for most calculations. The :test partitions are, as the name suggests, intended for shorter and smaller test runs. They have a higher priority and a few dedicated nodes, but are limited in walltime and number of nodes. The :shared partitions are mainly intended for postprocessing. Nearly all nodes are allocated exclusively to a single job; only the nodes in the :shared partitions are shared between jobs.
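For example, a short test run could request one of the test partitions directly in its batch script. This is only a sketch; it assumes the standard96:test partition name and a 15 minute walltime:

#SBATCH -p standard96:test
#SBATCH -t 00:15:00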

Parameters


Parameter          Option                 Default               Comment
# nodes            -N #
# tasks            -n #
# tasks per node   --ntasks-per-node #    96                    Hyperthreading active by default! See below
partition          -p <name>              standard96/medium40
# CPUs per task    -c #                   1                     Interesting for OpenMP/Hybrid jobs
Timelimit          -t hh:mm:ss            12:00:00
Mail               --mail-type=ALL                              See sbatch manpage for different types
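All of these parameters can be set either as #SBATCH lines in the batch script or directly on the sbatch command line, where they override the values given in the script. A sketch (job.sh stands for your own batch script):

sbatch -p medium40 -N 4 -t 12:00:00 --mail-type=ALL job.sh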

...

Example Batch Script
#!/bin/bash

#SBATCH -p medium40
#SBATCH -N 16
#SBATCH -t 06:00:00

module load impi    # load the Intel MPI module
srun mybinary       # start the program on the allocated nodes
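The script is submitted with sbatch. Assuming it was saved as job.sh (the name is just an example), the job output ends up in slurm-<jobid>.out by default:

sbatch job.sh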


Tasks, CPUs and Hyperthreading

By default, hyperthreading is active. Our nodes have 40 or 96 cores with two hardware threads each. Slurm does not differentiate between hyperthreads and cores and calls every single hyperthread a "CPU", so don't be confused by this nomenclature. If you do not specify anything, 80 or 192 processes per node will be started. If you want to disable hyperthreading, use the --ntasks-per-node option and set it to 40 or 96. If your software uses shared-memory parallelization (e.g. OpenMP), you only need a single task per node but more CPUs per task, which is set with -c. Take a look at the examples for more information.
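As a rough illustration of such an OpenMP job, here is a minimal sketch. It assumes a pure shared-memory binary called myomp_binary and the standard96 partition; adjust the numbers to your application:

#!/bin/bash

#SBATCH -p standard96
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH -c 96
#SBATCH -t 01:00:00

# a single task with 96 Slurm CPUs (i.e. hyperthreads);
# request -c 192 instead to reserve every hyperthread of a standard96 node
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./myomp_binary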

Getting Information about Jobs

Using the Shared Nodes

Advanced Options

Job Arrays

Job Dependencies