...

  • Phase 1 nodes (partitions medium40 and large40): Local SSD for temporary data at $LOCAL_TMPDIR (400 GiB shared among all jobs running on the node). The environment variable $LOCAL_TMPDIR is available on all nodes, but on the phase 2 systems it points to a ramdisk; see the sketch after this list.
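
A minimal sketch of staging a job through $LOCAL_TMPDIR, assuming a Slurm batch script; the project paths and the solver binary are placeholders:

  #!/bin/bash
  #SBATCH --partition=medium40
  #SBATCH --nodes=1
  #SBATCH --time=01:00:00

  # Stage input onto the node-local SSD (phase 1) or ramdisk (phase 2),
  # using a per-job subdirectory since the space is shared between jobs.
  WORKDIR="$LOCAL_TMPDIR/$SLURM_JOB_ID"
  mkdir -p "$WORKDIR"
  cp /scratch/myproject/input.dat "$WORKDIR/"

  # Run in the fast local directory (hypothetical application binary).
  cd "$WORKDIR"
  ./my_solver input.dat

  # Copy results back to a permanent filesystem before the job ends.
  cp result.dat /scratch/myproject/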

HOME

Each user holds one HOME directory at each of the compute sites, Emmy and Lise.

...

The home filesystem and /sw are mounted via NFS, so their performance is moderate. We take daily snapshots of the filesystem, which can be used to restore a former state of a file or directory. These snapshots can be accessed through the paths /home/.snapshots and /sw/.snapshots. In addition, there are regular backups to restore the filesystem in case of a catastrophic failure.
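
A quick way to recover an older version of a file from a snapshot; the dated snapshot directory name below is only an assumption, as the actual names are site-dependent:

  # List the available snapshots of the home filesystem
  ls /home/.snapshots

  # Copy yesterday's version of a file back into place
  # (snapshot name and project paths are illustrative)
  cp /home/.snapshots/daily_2024-01-15/$USER/project/config.yaml ~/project/config.yaml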

WORK

The Lustre-based work filesystem /scratch is the main work filesystem for the HLRN clusters. Each user can distribute data across different directories.

...

A general recommendation for network filesystems is to keep the number of metadata operations, such as opening and closing files or checking for file existence or changes, as low as possible. These operations often become a bottleneck for the I/O of your job and, on large clusters such as those operated by HLRN, can easily overload the file servers.
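
As an illustration of reducing open/close metadata traffic in a shell script (the paths are placeholders):

  # Metadata-heavy: ">>" reopens and closes the output file on every
  # iteration, causing one open and one close per record.
  for i in $(seq 1 10000); do
      echo "record $i" >> /scratch/myproject/log.txt
  done

  # Lighter: redirect the whole loop, so the file is opened once and
  # closed once regardless of the number of records.
  for i in $(seq 1 10000); do
      echo "record $i"
  done > /scratch/myproject/log.txt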

Separate GPU-scratch for grete Partitions on Emmy

On Emmy, glogin9.hlrn.de and the Grete nodes (ggpu[101-134,146-147,201-202]) have their own separate /scratch that is different from the one on the other frontend and compute nodes. On these nodes, you can still access the /scratch of the other frontend and compute nodes at /scratch-emmy.
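
For example, to stage a dataset from the main Emmy scratch onto the GPU-local scratch while on a Grete node (the project paths are illustrative):

  # Copy data from the main Emmy scratch to the separate Grete scratch
  cp -r /scratch-emmy/myproject/dataset /scratch/myproject/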

PERM, tape archive

The magnetic tape archive provides additional storage for inactive data, freeing up space on the WORK or HOME filesystems. It is directly accessible on the login nodes.
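
Tape systems handle a few large files far better than many small ones, so it usually pays to pack data before archiving it. A minimal sketch; the mount point /perm/$USER is an assumption, so check the site documentation for the actual archive path:

  # Pack many small result files into a single archive file
  tar -czf results.tar.gz /scratch/myproject/results/

  # Move the single archive file to the tape archive
  # (/perm/$USER is an assumed mount point)
  cp results.tar.gz /perm/$USER/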

...