...
The compute system Lise at NHR@ZIB provides different Compute partitions for CPUs and GPUs. Your choice of partition determines the specific configuration of
- login nodes,
- Slurm partitions (the compute nodes of a Compute partition and their accounting), and
- software.
Login nodes
...
- choose a login node associated with your Compute partition and
- authenticate via SSH Login.
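As a minimal sketch, a login via SSH could look like the following; the host name blogin.nhr.zib.de, the key file, and the account name are illustrative assumptions, the actual values are given on the SSH Login page.
myworkstation ~ $ ssh -i ~/.ssh/id_nhr_ed25519 myaccount@blogin.nhr.zib.de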
File systems
Each complex has the following file systems available. More information about quota, usage, and best practices is available on Fixing Quota Issues. Hints for data transfer are given here.
- Home file system with 340 TiByte capacity containing
  - $HOME directories /home/${USER}/
- Lustre parallel file system with 8.1 PiByte capacity containing
  - $WORK directories /scratch/usr/${USER}/
  - $TMPDIR directories /scratch/tmp/${USER}/
  - project data directories /scratch/projects/<projectID>/ (not yet available)
- Tape archive with 120 TiByte capacity (accessible on the login nodes only)
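On a login node you can check where these directories point and how much of your quota is used; the account name in the output is a placeholder, and the lfs quota call is the generic Lustre utility rather than a Lise-specific command.
blogin1 ~ $ echo $HOME $WORK $TMPDIR
/home/myaccount /scratch/usr/myaccount /scratch/tmp/myaccount
blogin1 ~ $ lfs quota -u $USER -h $WORK    # Lustre usage and quota for your user, see Fixing Quota Issues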
Info: Best practices for using WORK as a Lustre file system: https://www.nas.nasa.gov/hecc/support/kb/lustre-best-practices_226.html
Info: Hints for fair usage of the shared WORK resource: Metadata Usage on WORK
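One of the Lustre best practices is to choose an appropriate stripe layout for large files. The following sketch shows how striping can be inspected and set with the standard Lustre tools; the directory name and the stripe count of 4 are only example values and depend on your I/O pattern.
blogin1 ~ $ lfs getstripe $WORK/mydata        # show the current stripe settings of a directory
blogin1 ~ $ lfs setstripe -c 4 $WORK/mydata   # stripe new files in this directory over 4 OSTs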
Software and environment modules
...
To avoid conflicts between different compilers and compiler versions, builds of the most important libraries are provided for all compilers and major release numbers.
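A typical module workflow on a login node is sketched below; the module names and version numbers are assumptions, the actually installed modules are listed by module avail.
blogin1 ~ $ module avail                  # list the installed modules
blogin1 ~ $ module load intel/2024.2      # load a compiler (example version)
blogin1 ~ $ module load impi/2021.13      # load the matching MPI library (example version)
blogin1 ~ $ module list                   # show the currently loaded modules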
Program build
Please visit the specific workflow pages of our Compute partitions.
Using slurm batch system
To run your applications on the systems, you need to go through our batch system/scheduler, Slurm. The scheduler uses meta information about the job (requested node and core count, wall time, etc.) and runs your program on the compute nodes once the resources are available and your job is next in line. For a more in-depth introduction, visit our Slurm documentation.
We distinguish two kinds of jobs:
- Interactive job execution
- Job script execution
Resource specification
To request resources, several flags can be used when submitting a job.
...
-p <name>
...
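As an illustration, the most common flags can be combined in a single submission; the partition name cpu-clx:test is taken from the interactive example below, and the script name is a placeholder.
blogin1 ~ $ sbatch -p cpu-clx:test -t 00:10:00 -N 2 --ntasks-per-node=24 myjob.sh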
To use compute resources interactively, e.g. to follow the execution of MPI programs, the following steps are required. Note that non-interactive batch jobs via job scripts (see below) are the primary way of using the compute resources.
- First, request a resource allocation for interactive usage with the salloc --interactive command, which should also include your resource requirements.
- Once salloc has successfully allocated the requested resources, issue an additional srun command to get a shell on one of the allocated nodes (see the example below) if you want to work on a compute node.
- Afterwards, srun or MPI launch commands such as mpirun or mpiexec can be used to start parallel programs (see the corresponding user guides).
blogin1 ~ $ salloc -t 00:10:00 -p cpu-clx:test -N2 --tasks-per-node 24
salloc: Granted job allocation [...]
salloc: Waiting for resource configuration
salloc: Nodes bcn[1001,1003] are ready for job
# To get a shell on one of the allocated nodes
blogin1 ~ $ srun --pty --interactive --preserve-env ${SHELL}
bcn1001 ~ $ srun hostname | sort | uniq -c
24 bcn1001
24 bcn1003
bcn1001 ~ $ exit
# Exit a second time for Berlin/Lise
blogin1:~ > exit
salloc: Relinquishing job allocation [...]
Job scripts
Please go to our webpage CPU CLX partition for more details about job scripts. As an introduction, standard batch system jobs are executed in the following steps:
- Write a batch job script, see the example below.
- Submit the job script with the command sbatch (sbatch jobscript.sh).
- Monitor and control the job execution, e.g. with the commands squeue and scancel (cancel the job).
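A minimal sketch of such a job script is shown below; the partition name cpu-clx:test is taken from the interactive example above, and the binary ./myprog is a placeholder for your own program.
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=cpu-clx:test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --time=00:10:00

srun ./myprog    # launch the MPI program on all allocated tasks

Submit it with sbatch jobscript.sh and monitor it with squeue -u $USER.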
Job Accounting
The page Accounting gives you more information about job accounting.
...