...
OpenMP support is built into the Intel and GNU compilers.
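For example, OpenMP is switched on with a compiler flag; the file names below are only placeholders:

```bash
# GNU compilers: enable OpenMP with -fopenmp
gcc -fopenmp -O2 -o hello_omp hello_omp.c

# Intel compilers (classic): enable OpenMP with -qopenmp
icc -qopenmp -O2 -o hello_omp hello_omp.c
```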
Using the Batch System
To run your applications on the HLRN, you need to go through our batch system/scheduler: Slurm. The scheduler uses meta-information about the job (requested node and core count, wall time, etc.) and runs your program on the compute nodes as soon as the resources are available and your job is next in line. For a more in-depth introduction, visit our Slurm documentation.
We distinguish two kinds of jobs:
- Interactive job execution
- Job script execution
Resource specification
To request resources, several flags can be used when submitting the job.
Parameter | Flag | Default Value
---|---|---
# tasks | -n # | 1
# nodes | -N # | 1
# tasks per node | --tasks-per-node # |
partition | -p <name> | standard96/medium40
time limit | -t hh:mm:ss | 12:00:00
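For illustration, the flags from the table can be combined on the sbatch command line as follows; the node count, partition, time limit, and script name are only examples:

```bash
# 2 nodes, 96 tasks per node, standard96 partition, 30 minutes wall time
sbatch -N 2 --tasks-per-node 96 -p standard96 -t 00:30:00 jobscript.sh
```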
Interactive jobs
Interactive MPI programs are executed by applying the following steps (example for the default medium partition):
- Ask for an interactive shell with the command srun <…> --pty bash. We advise using one of the test partitions for interactive jobs.
- In the interactive shell, execute the parallel program with the MPI starter mpirun or srun.
    blogin1:~ > srun -t 00:10:00 -p medium40:test -N2 --tasks-per-node 24 --pty bash
    bash-4.2$ mpirun hello_world >> hello_world.out
    bash-4.2$ exit
    blogin1:~ >
Job scripts
Please see our webpage MPI start Guide for more details about job scripts. As an introduction, standard batch system jobs are executed by applying the following steps:
- Provide (write) a batch job script, see the examples below.
- Submit the job script with the command sbatch (sbatch jobscript.sh).
- Monitor and control the job execution, e.g. with the commands squeue and scancel (cancel the job); see the example session after this list.
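A typical submit, monitor, and cancel sequence could look like this sketch (the job ID is whatever sbatch reports for your job):

```bash
blogin1:~ > sbatch jobscript.sh        # submit; sbatch prints the job ID
blogin1:~ > squeue -u $USER            # list your own pending and running jobs
blogin1:~ > scancel <jobid>            # cancel the job with the reported ID
```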
A job script is a script (written in bash, ksh or csh syntax) containing Slurm keywords, which are used as arguments for the command sbatch.
Requesting 4 nodes in the medium partition with 96 cores (no hyperthreading) for 10 minutes, using Intel MPI.
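A job script along these lines could look like the following sketch. The partition name standard96 and the module name impi are assumptions and may have to be adapted to your site; hello_world stands for your own MPI binary.

```bash
#!/bin/bash
#SBATCH -N 4                    # 4 nodes
#SBATCH --tasks-per-node 96     # 96 MPI tasks per node (physical cores, no hyperthreading)
#SBATCH -p standard96           # partition with 96-core nodes (assumed name, adjust to your site)
#SBATCH -t 00:10:00             # 10 minutes wall time

module load impi                # Intel MPI (module name is an assumption)

mpirun hello_world              # 4 x 96 = 384 MPI tasks in total
```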
Requesting 1 large node with 96 CPUs (physical cores) for 20 minutes, and then using 192 hyperthreads
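A possible sketch of such a job script is shown below. The partition name large96 and the module name impi are assumptions, and the exact way to place tasks on hyperthreads depends on the site's Slurm configuration; srun --overcommit is used here only as one option.

```bash
#!/bin/bash
#SBATCH -N 1                    # one large node
#SBATCH --tasks-per-node 96     # allocate one task per physical core
#SBATCH -p large96              # large-node partition (assumed name, adjust to your site)
#SBATCH -t 00:20:00             # 20 minutes wall time

module load impi                # Intel MPI (module name is an assumption)

# Use both hardware threads of every core: launch 192 tasks on the node.
# --overcommit allows the step to start more tasks than were allocated at
# submit time; depending on the site's configuration, --hint=multithread or
# a larger --tasks-per-node request can serve the same purpose.
srun --tasks-per-node 192 --overcommit hello_world
```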