HLRN operates three central storage systems with global file systems at each site:
File System | Capacity | Storage Technology and Function |
---|---|---|
HOME | 340 TiB | IBM Spectrum Scale file system, exported via NFS to compute and login nodes |
WORK | 8 PiB | DDN ExaScaler with Lustre parallel file system |
PERM | multiple PiB | Tape archive with additional hard-disk caches |
The system Emmy has additional storage options for high IO demands:
- Phase 1 nodes (partitions `medium40` and `large40`): local SSD for temporary data at `$LOCAL_TMPDIR` (400 GiB shared among all jobs running on the node). The environment variable `$LOCAL_TMPDIR` is available on all nodes, but on the phase 2 systems it points to a ramdisk; see the sketch after this list.
- DDN IME based burst buffer with 48 TiB NVMe storage (general availability together with the phase 2 nodes)
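A minimal job-step sketch using the node-local storage; the application and file names below are placeholders:

```bash
# Stage input on the node-local SSD (phase 1 nodes) or ramdisk (phase 2 nodes),
# run there, and copy the results back to the Lustre work filesystem.
# "my_app", "input.dat" and "results" are placeholders.
cp /scratch/usr/${USER}/input.dat ${LOCAL_TMPDIR}/
cd ${LOCAL_TMPDIR}
/scratch/usr/${USER}/bin/my_app input.dat > output.dat
cp output.dat /scratch/usr/${USER}/results/
```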
Login and copying data between HLRN sites
Inter-complex login (`ssh`) as well as data copy (`rsync`/`sftp`) between both sites (Berlin and Göttingen) should work right out of the box. The same is true for intra-complex `ssh` and `scp` between nodes of one site. This is enabled through host-based authentication.
Always use the short hostname for `ssh`/`rsync`: either the generic names blogin and glogin, or specific names like blogin5 and glogin2. This allows the use of the direct intersite connection HLRN Link, which is much faster than the internet connection used when you access the nodes of the other site via the hostnames blogin.hlrn.de and glogin.hlrn.de.
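For example, copying results from Lise (Berlin) to Emmy (Göttingen) might look like this; the paths and the choice of login node are examples:

```bash
# Using the short hostname ensures the transfer goes over the direct HLRN Link
rsync -av /scratch/usr/${USER}/results/ glogin5:/scratch/usr/${USER}/results/
```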
HOME
Each user holds one home directory at each of the compute sites Emmy and Lise.
- user directory `HOME=/home/${USER}`
  - configuration files
  - source code and executables
The home filesystem and `/sw` are mounted via NFS, so performance is moderate. We take daily snapshots of the filesystem, which can be used to restore a former state of a file or directory. These snapshots can be accessed through the paths `/home/.snapshots` and `/sw/.snapshots`. In addition, regular backups allow restoring the filesystem in case of a catastrophic failure.
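A minimal sketch of restoring a file from a snapshot; the snapshot name and the file path are placeholders, so list the snapshot directory first:

```bash
# List the available daily snapshots of the home filesystem
ls /home/.snapshots
# Copy a file from a chosen snapshot back into your home directory
# (snapshot name and path are placeholders)
cp /home/.snapshots/<snapshot>/${USER}/project/settings.cfg ${HOME}/project/
```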
WORK
The Lustre-based work filesystem `/scratch` is the main work filesystem for the HLRN clusters. Each user can distribute data to the following directories.
- user directory `WORK=/scratch/usr/${USER}`
  - large input/output data for production jobs
  - intended to be used by the owning user only
- project directories `/scratch/projects/<projectID>`
  - large input/output data for production jobs
  - intended to be shared within the project group
- temporary directory `TMPDIR=/scratch/tmp/${USER}`
  - applications and compilers store data here temporarily
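The listings above suggest these locations are exported as environment variables, so job scripts can refer to them directly; a quick check might look like this:

```bash
# Print the directory locations as listed above
echo "work dir:      ${WORK}"     # /scratch/usr/<user>
echo "temporary dir: ${TMPDIR}"   # /scratch/tmp/<user>
echo "home dir:      ${HOME}"     # /home/<user>
```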
We provide no backup of this filesystem. During the acceptance tests, the storage system of Emmy delivered around 65 GiB/s of streaming bandwidth and that of Lise around 85 GiB/s. With higher occupancy, the effective (write) streaming bandwidth is reduced.
The storage system is hard-disk based (with SSDs for metadata), so the best performance is reached with sequential IO of large files that is aligned to the full-stripe size of the underlying RAID6 (Emmy 1 MiB, Lise 16 MiB).
If you access a large file (1 GiB+) from multiple nodes in parallel, please consider activating striping for the file with the Lustre command `lfs setstripe` (for a specific file or for a whole directory; changes apply only to new files, so applying a new striping to an existing file requires a file copy). Choose a sensible `stripe_count` (recommendation: up to 32 on Emmy, up to 8 on Lise) and a `stripe_size` that is a multiple of the RAID6 full-stripe size and matches the IO sizes of your job.
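A sketch of how this could look for a directory on Lise; the directory path is an example, and the values follow the recommendations above:

```bash
# Set a default layout for a directory: new files in it are striped
# over 8 OSTs with a 16 MiB stripe size (a multiple of Lise's full-stripe size)
lfs setstripe -c 8 -S 16M /scratch/usr/${USER}/large_io
# Verify the layout
lfs getstripe /scratch/usr/${USER}/large_io
# Existing files keep their old layout; copy them to pick up the new striping
cp /scratch/usr/${USER}/bigfile.dat /scratch/usr/${USER}/large_io/
```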
A general recommendation for network filesystems is to keep the number of metadata operations, such as opening and closing files or checking for file existence or changes, as low as possible. These operations often become a bottleneck for the IO of your job and, on large clusters such as those operated by HLRN, can easily overload the file servers.
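As a minimal illustration (file name and loop contents are placeholders), redirecting once around a loop avoids reopening the output file in every iteration:

```bash
# Avoid: the output file is opened and closed once per iteration
for i in $(seq 1 10000); do
    echo "step $i" >> ${WORK}/steps.log
done

# Better: the output file is opened once for the whole loop
for i in $(seq 1 10000); do
    echo "step $i"
done >> ${WORK}/steps.log
```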
PERM, tape archive
The magnetic tape archive provides additional storage for inactive data to free up space on the WORK or HOME filesystems. It is directly accessible on the login nodes at the mountpoint `/perm/${USER}/`.
Emmy provides the additional option to access the PERM archive via `ssh` to the archive nodes `gperm1` and `gperm2`, so you can use `rsync`, `scp`, and `sftp` for file transfer.
For reasons of efficiency and performance, small files and/or complex directory structures should not be transferred to the archive directly. Please aggregate your data into compressed tarballs or other archive containers with a maximum size of 5.5 TiB before copying it to the archive.
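A sketch of this workflow on a login node; the directory and file names are examples, and the target path on the archive nodes is assumed to match the mountpoint above:

```bash
# Aggregate a finished run directory into one compressed tarball
tar -czf run_2021.tar.gz /scratch/usr/${USER}/run_2021
# Copy the tarball to the tape archive via the mountpoint on the login node
cp run_2021.tar.gz /perm/${USER}/
# On Emmy, the archive nodes can be used instead, e.g. with rsync
rsync -av run_2021.tar.gz gperm1:/perm/${USER}/
```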
Quota
Important Note: The following table presents the default quotas for each file system. These values are reasonable for average use cases. We are aware that certain projects need larger quotas for their workflows; if yours does, please contact your consultant to discuss your needs and how we can help you.
You can read the quota information for your user account with the command `hlrnquota`. More details about quotas can be found on the page Fixing Quota Issues.
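For example, running it without arguments on a login node should print your current usage and limits:

```bash
# Report block and inode usage and limits for your account
hlrnquota
```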
| | Home block (soft / hard) | Home inodes (soft / hard) | Work block (soft / hard) | Work inodes (soft / hard) | Perm block (soft / hard) | Perm inodes (soft / hard) |
|---|---|---|---|---|---|---|
| Users | 40 GiB / 100 GiB | unlim. / unlim. | 3 TiB / 30 TiB | 250,000 / 1,500,000 | 2 TiB / 3 TiB | 2,000 / 2,200 |
| Projects | 40 GiB / 100 GiB | unlim. / unlim. | 12 TiB / 120 TiB | 1,000,000 / 6,000,000 | 8 TiB¹ / 12 TiB¹ | 8,000¹ / 8,800¹ |
Quotas are in place on the three file systems HOME, WORK, and PERM. On each file system we distinguish
- quota for blocks, that is the disk space your files allocate, and
- quota for inodes, that is the number of files and directories.
Each quota consists of two numbers, the soft limit and the hard limit.
- Once your usage exceeds the soft limit, a grace period of 2 weeks starts to count. After the grace period has expired, you are no longer able to write files. As soon as you drop below your soft limit again, the grace period is reset.
- Once your usage reaches the hard limit, you are immediately unable to write files.
On Feb 1st 2021 all members of a project will be added to the matching UNIX group and gain access to the project's files. Please adjust your project members / files accordingly by then. If you want to grant project members access to the files before that date, simply re-add them under https://zulassung.hlrn.de/.
¹ Project quotas on Perm are only available on Emmy in Göttingen.