...
To match your job requirements with the hardware, you can choose among the Slurm partitions of the compute cluster of Lise. The compute partitions are linked to their charge rates on the page Accounting.
...
```bash
#!/bin/bash
#SBATCH -p standard96:test
#SBATCH -N 16
#SBATCH -t 06:00:00

module load impi
srun mybinary
```
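Such a script is submitted with sbatch; the file name below is just a placeholder for your own script:

```bash
sbatch myjobscript.slurm
```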
...
Parameter | SBATCH flag | Comment |
---|---|---|
# nodes | -N <#> | |
# tasks | -n <#> | |
# tasks per node | --ntasks-per-node <#> | Different defaults between mpirun and srun |
partition | -p <name> | e.g. standard96 (Lise) or medium40 (Emmy), overview: Slurm partition CPU CLX |
# CPUs per task | -c <#> | relevant for OpenMP/hybrid jobs |
Wall time limit | -t hh:mm:ss | |
Mail notifications | --mail-type=ALL | See sbatch manpage for different types |
Project/Account | -A <project> | Specify project for NPL core hour accounting |
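Putting several of these flags together, a hybrid MPI/OpenMP batch script might look like the following sketch. The partition, project name, task/thread counts, and binary name are placeholders chosen for illustration and should be adapted to your own job:

```bash
#!/bin/bash
#SBATCH -p standard96            # partition, see the partition overview
#SBATCH -N 2                     # 2 nodes
#SBATCH --ntasks-per-node 4      # 4 MPI tasks per node
#SBATCH -c 24                    # 24 CPUs (OpenMP threads) per task
#SBATCH -t 12:00:00              # wall time limit
#SBATCH --mail-type=ALL          # mail notifications
#SBATCH -A myproject             # placeholder project/account for accounting

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
module load impi
srun mybinary                    # placeholder binary name
```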
Job Walltime
The maximum runtime is set per partition and can be viewed either on the system with sinfo
or here. There is no minimum walltime (we cannot stop your jobs from finishing, obviously), but a walltime of at least 1 hour is encouraged. A large number of small, short jobs can cause problems with our accounting system. The occasional short job is fine, but if you submit large numbers of jobs that finish (or crash) quickly, we might have to intervene and temporarily suspend your account. If you have many small workloads, please consider combining them into a single job that runs for at least 1 hour.
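For example, the configured time limits can be listed per partition with sinfo's format options; this is a minimal sketch, and the exact output depends on the local Slurm configuration:

```bash
# List each partition together with its maximum walltime
sinfo -o "%P %l"
```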
...
See the corresponding section in the Quick Start Guide.
Using the Shared Nodes
We provide a varying number of nodes from the large40 and large96 partitions as post-processing nodes in a shared mode, so that multiple jobs can run on a single node at the same time. You can request CPUs and memory, and you should take care that you do not exceed your limits. For each CPU/hyperthread, there is about 9.6 GB of memory on the large40:shared partition and about 4 GB on the large96:shared partition.
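As an illustration, a job on a shared node could look like the following sketch. The partition name is taken from the text above; the core count, memory request, and program name are placeholders you would adapt to your own workload:

```bash
#!/bin/bash
#SBATCH -p large96:shared        # shared partition, multiple jobs per node
#SBATCH -n 1                     # one task
#SBATCH -c 8                     # 8 CPUs for this task
#SBATCH --mem=32G                # stays at roughly 4 GB per CPU on large96:shared
#SBATCH -t 02:00:00

srun ./my_postprocessing_tool    # placeholder binary name
```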
...