...

Slurm offers a lot of options for job allocation, process placement, job dependencies, job arrays, and much more. We cannot cover all topics exhaustively here. As mentioned at the top of the page, please consult the official documentation and the man pages for an in-depth description of all parameters.

Dependent Jobs - How to get past the 12h walltime limit

If your simulation is restartable, a follow-up job can be triggered automatically; you only need to pass on the ID of the previous job. Simply copy, paste, and execute the following lines:

Example of a job chain with 3 parts:
# submit the first job and extract the job ID
# (sbatch prints "Submitted batch job <id>"; the expansion keeps only the last word)
sbatch_output=$(sbatch job1.sbatch)
jobid=${sbatch_output##* }

# submit the second job with a dependency: it starts only if the previous job terminates successfully
sbatch_output=$(sbatch --dependency=afterok:$jobid job2.sbatch)
jobid=${sbatch_output##* }

# submit the third job with a dependency: it starts only if the previous job terminates successfully
sbatch_output=$(sbatch --dependency=afterok:$jobid job3.sbatch)
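
One way to keep an eye on such a chain (using standard Slurm tools) is squeue: jobs that are still waiting for their predecessor to finish are listed with the reason (Dependency). For example:

# list your own jobs; chained jobs that are still waiting show the reason (Dependency)
squeue -u $USER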


Job Arrays

Job arrays are the preferred way to submit many similar jobs, for instance if you need to run the same program on a number of input files, with different settings, or with a range of parameters. Arrays are created with the sbatch parameter -a start-finish; e.g. sbatch -a 0-19 will create 20 jobs indexed from 0 to 19. There are different ways to index the arrays, which are described below.
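
As a minimal sketch of such an array job (the script name, time limit, and the input-file naming input_<index>.dat are assumptions, not site defaults), each array task can read its own index from the environment variable SLURM_ARRAY_TASK_ID and use it to pick its input:

Example of a job array script (job_array.sbatch):
#!/bin/bash
#SBATCH --job-name=array-example
#SBATCH --time=00:30:00
#SBATCH --array=0-19

# each task of the array gets its own index in SLURM_ARRAY_TASK_ID
# and uses it here to select the matching input file (naming scheme assumed)
./my_program input_${SLURM_ARRAY_TASK_ID}.dat

Submitting this script once with sbatch job_array.sbatch creates all 20 tasks; the range given in the script can also be overridden on the command line, e.g. sbatch -a 0-9 job_array.sbatch.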

...