HLRN/NHR provides tailored WORK file systems for improved IO throughput of IO-intensive job workloads.
...
WORK is the default shared file system for all jobs and can be accessed using the $WORK
variable. WORK is accessible to all users. It consists of 8 Metadata Targets (MDTs) with NVMe SSDs and 28 Object Storage Targets (OSTs) on Lise or 96 OSTs on Emmy, both using classical hard drives.
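You can inspect the MDTs and OSTs that make up WORK yourself; a minimal sketch, assuming $WORK points to a Lustre mount on a login or compute node:
lfs df -h $WORK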
...
Some workloads will benefit from striping: files are split transparently across 2 up to 28 OSTs on Lise and up to 96 OSTs on Emmy.
Large shared-file IO patterns in particular benefit from striping. Up to 28 OSTs can be used on Lise; we recommend up to 8. We have preconfigured a progressive file layout (PFL), which sets automatic striping based on the file size.
Access: create a new directory in $WORK
and set lfs setstripe -c <stripecount> <dir> (see the example below)
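For example, a minimal sketch (the directory name and a stripe count of 8 are illustrative values):
mkdir $WORK/striped_data
lfs setstripe -c 8 $WORK/striped_data   # new files in this directory inherit 8-way striping
lfs getstripe $WORK/striped_data        # verify the layout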
...
Local SSDs
Some compute nodes are equipped with local SSD storage: up to 2 TB on Lise and 480 GB or 1 TB (depending on the node) on Emmy.
Info: Data on local SSDs cannot be shared across nodes and will be deleted after the job is finished.
For unshared local IO, this is the best-performing file system to use.
| | Lise: SSD | Lise: CAS | Emmy: SSD |
|---|---|---|---|
| Access | via partition | via partition | via partition |
| Type and size | Intel NVMe SSD DC P4511 (2 TB) | Intel NVMe SSD DC P4511 (2 TB) with Intel Optane SSD DC P4801X (200 GB) as write-through cache | Intel DC S4500 (480 GB) |
FastIO
...
WORK is extended with 4 additional OSTs using NVMe SSDs to accelerate heavy (random) IO demands. To accelerate specific IO demands further, striping across up to these 4 OSTs is available.
Access: ask support@nhr.zib.de for access,
create a new directory in $WORK,
and set lfs setstripe -p flash <dir> (see the example below)
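A minimal sketch combining the flash pool with 4-way striping (the directory name is illustrative):
mkdir $WORK/fastio_data
lfs setstripe -p flash -c 4 $WORK/fastio_data   # place new files on the NVMe OST pool, striped over 4 OSTs
lfs getstripe $WORK/fastio_data                 # verify pool and stripe settings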
Size:
55 TiB (subject to quota)
...
IME - Emmy only
The DDN Infinite Memory Engine (IME) provides burst buffers and a file system cache as a fast data tier between the compute nodes and the WORK file system.
This helps avoid overload on the system when a program tries to write a large amount of data to the global parallel file system within a short period of time.
IME servers consist of solid-state disks (SSDs) that act as a cache and burst buffer to improve global file system performance. IME servers are currently available for use on Emmy (Göttingen).
Access: IME Burst Buffer, File System Cache
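A minimal sketch of typical burst-buffer usage with the DDN client tools, assuming the ime-ctl utility is available on Emmy and that WORK is reachable through an IME mount point (the /mnt/ime path is an assumption):
ime-ctl --prestage /mnt/ime/$USER/input.dat    # pull input from WORK into the IME cache before reading
ime-ctl --sync /mnt/ime/$USER/output.dat       # flush output from the IME cache back to WORK
ime-ctl --frag-stat /mnt/ime/$USER/output.dat  # check the cache and sync state of a file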
...
Finding the right File System
If your jobs have a significant IO part, we recommend asking your consultant via support@nhr.zib.de to recommend the right file system for you.
...
If you have a significant amount of node-local IO which does not need to be accessed after the job ends and will be smaller than 2 TB on Lise or 480 GB on Emmy, we recommend using $LOCAL_TMPDIR. Depending on your IO pattern, this may accelerate IO by up to 100% (see the job sketch below).
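A minimal job sketch, assuming a single-node job on nodes with local SSDs and a placeholder program my_app (partition, file, and program names are illustrative):
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00
# request a partition whose nodes have local SSDs; the partition name depends on the system

cp $WORK/input.dat $LOCAL_TMPDIR/   # stage input onto the node-local SSD
cd $LOCAL_TMPDIR
./my_app input.dat output.dat       # my_app is a placeholder for your application
cp output.dat $WORK/results/        # copy results back; $LOCAL_TMPDIR is wiped after the job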
...
Global IO is defined as shared IO which can be accessed from multiple nodes at the same time and is persistent after the job ends.
Especially random IO on small files can be accelerated by up to 200% using FastIO on Lise or IME on Emmy.
Recommendation Matrix:
Maximum performance gain on IO versus the default $WORK is shown in parentheses.
...
FastIO stripe=4 (+200%) or
$WORK stripe=4-8 (+200%)
...
FastIO stripe=4 (+80%)
$WORK stripe=4-8 (+70%)
...
FastIO stripe=4 (+120%)
$WORK stripe=MAX (+90%)
...
FastIO stripe=4 (+200%) or
$WORK stripe=MAX (+150%)
...
FastIO stripe=4 (+100%)
FastIO (+50%)
Commands
Set up striping:
mkdir <dirname>
lfs setstripe -c <count> <dirname>
Set up Data-on-MDT (DoM):
mkdir <dirname>
lfs setstripe -E 64K -L mdt -E -1 -p work.rotational <dirname>
Set up FastIO:
mkdir <dirname>
lfs setstripe -p flash <dirname>
Set up FastIO with stripe count 4:
mkdir <dirname>
lfs setstripe -p flash -c 4 <dirname>
Check:
lfs getstripe <dirname>
Proposed announcement
...
...