...

The HLRN-IV system consists of two independent systems, Lise (named after Lise Meitner) and Emmy (named after Emmy Noether). The systems are located at the Zuse Institute Berlin (Lise) and the University of Göttingen (Emmy). Overall, the HLRN-IV system comprises 1270 compute nodes with 121,920 cores in total. You can learn more about the system and the differences between the sites on the HLRN-IV website.

...

Please log in to the gateway nodes using the Secure Shell ssh (protocol version 2); see the example below. The standard gateways are called

blogin.hlrn.de (Berlin)
and
glogin.hlrn.de (Göttingen).

Please note that there is a per-user memory limit (currently 64 GByte) on the login nodes. Memory- and CPU-intensive tasks should be submitted as jobs to our Slurm batch system.

Login authentication is possible via SSH keys only. For information and instructions, please see our SSH Pubkey tutorial.
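A minimal login sketch (the user name and key file name are placeholders; replace them with your own HLRN account details):

    # Log in to the Berlin gateway with your SSH key
    ssh -i ~/.ssh/id_hlrn myaccount@blogin.hlrn.de

    # The Göttingen gateway works the same way
    ssh -i ~/.ssh/id_hlrn myaccount@glogin.hlrn.de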

...

  • Home file system with 340 TiByte capacity containing $HOME directories /home/${USER}/
  • Lustre parallel file system with 8.1 PiByte capacity containing
    • $WORK directories /scratch/usr/${USER}/
    • $TMPDIR directories /scratch/tmp/${USER}/
    • project data directories /scratch/projects/<projectID>/ (not yet available)
  • Tape archive with 120 TiByte capacity (accessible on the login nodes only)
  • On Emmy: SSD for temporary data at $LOCAL_TMPDIR (400 GB shared among all jobs running on the node)
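The file systems listed above are exposed through environment variables. A brief sketch of how they might be used in practice (the variables come from the list above; the directory and file names are purely illustrative):

    # Inspect where the variables point on a login node
    echo $HOME $WORK $TMPDIR

    # Keep large, frequently written data on the Lustre file system
    mkdir -p $WORK/my_project
    cp input.dat $WORK/my_project/

    # On Emmy, node-local SSD space available inside a running job
    echo $LOCAL_TMPDIR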

...


Info
Best practices for using WORK as a Lustre file system: https://www.nas.nasa.gov/hecc/support/kb/lustre-best-practices_226.html
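One common recommendation from that guide, shown as a sketch (the stripe count is an example value; check the linked page and your actual file sizes before changing striping):

    # Show the current stripe settings of a directory in $WORK
    lfs getstripe $WORK/my_project

    # Stripe large files in this directory across 4 OSTs (example value)
    lfs setstripe -c 4 $WORK/my_project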

...

As Intel MPI is the communication library recommended by the system vendor, currently only documentation for Intel MPI is provided, apart from application-specific documentation.

OpenMP support is available with the compilers from Intel and GNU.
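A minimal build sketch for a hybrid MPI/OpenMP code, assuming the Intel compiler and Intel MPI environment modules are loaded (the module names below are placeholders; check module avail for the names used on the system):

    # Load compiler and MPI (module names are examples)
    module load intel
    module load impi

    # Intel compiler: Intel MPI wrapper plus the Intel OpenMP flag
    mpiicc -qopenmp -O2 -o hello_hybrid hello_hybrid.c

    # GNU alternative: generic MPI wrapper plus the GNU OpenMP flag
    mpicc -fopenmp -O2 -o hello_hybrid hello_hybrid.c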

...

To run your applications on the HLRN, you need to go through our batch system/scheduler: Slurm. The scheduler uses meta-information about the job (requested node and core count, wall time, etc.) and runs your program on the compute nodes once the resources are available and your job is next in line. For a more in-depth introduction, visit our Slurm documentation.
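A minimal batch-script sketch (the partition name and resource values are placeholders; consult the Slurm documentation mentioned above for the partitions and limits that actually apply):

    #!/bin/bash
    #SBATCH --job-name=hello_hybrid
    #SBATCH --nodes=2                  # requested node count
    #SBATCH --ntasks-per-node=4        # MPI ranks per node
    #SBATCH --cpus-per-task=8          # OpenMP threads per rank
    #SBATCH --time=00:30:00            # wall time limit
    #SBATCH --partition=standard96     # partition name is an example

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    srun ./hello_hybrid

Submit the script with sbatch jobscript.sh and check its state with squeue -u $USER.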

...