
HLRN provides tailored WORK filesystems for improved IO throughput in IO-intensive job workloads.


Filesystem Types

Default Lustre (WORK)

This is the default shared filesystem for all jobs and can be accessed via the $WORK environment variable.

WORK consists of 8 Metadata Targets (MDTs) on NVMe SSDs, plus 28 Object Storage Targets (OSTs) on Lise and 96 OSTs on Emmy, both using rotational hard drives.

Access: $WORK

Size: 8 PiB, quota enforced
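
For example, a batch job can write its results directly below $WORK; a minimal sketch (the output directory, program name, and option are placeholders):

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --time=01:00:00

    # Create a job-specific output directory on the shared WORK filesystem
    OUTDIR=$WORK/myrun_${SLURM_JOB_ID}        # "myrun" is a placeholder name
    mkdir -p "$OUTDIR"

    # Let the application write its results directly to WORK
    srun ./my_program --output "$OUTDIR"      # program and option are placeholders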


Lustre with striping (WORK)

Some workloads benefit from striping. Files are split across 2 to 28 OSTs on Lise and up to 96 OSTs on Emmy.

Large shared IO patterns in particular perform well with striping.

Access: create a directory with striping using "lfs setstripe -c <stripe_count> <dir>"

Size: 8 PiB, same as $WORK
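
For example, a directory striped across 4 OSTs (the stripe count here is only an illustration) can be created and inspected as follows:

    # Create a directory on WORK whose new files are striped across 4 OSTs
    mkdir -p $WORK/striped_dir
    lfs setstripe -c 4 $WORK/striped_dir

    # A stripe count of -1 stripes across all available OSTs
    # lfs setstripe -c -1 $WORK/striped_dir

    # Verify the striping layout of the directory
    lfs getstripe $WORK/striped_dir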


Local SSDs

Some compute nodes have local SSD storage. Files on local SSDs cannot be shared across nodes and are deleted after the job ends.

For single-node IO this is the best-performing filesystem to use.



Lise (SSD nodes)

Access: via queue standard96:ssd, using $LOCAL_TMPDIR

Type and size: Intel NVMe SSD DC P4511 (2 TB)

Lise (CAS nodes)

Access: via queues large96:ssd and huge96:ssd, using $LOCAL_TMPDIR

Type and size: Intel NVMe SSD DC P4511 (2 TB) with Intel Optane SSD DC P4801X (200 GB) as write-through cache

Emmy

Access: via queue medium96, using $LOCAL_TMPDIR

Type and size: Intel DC S4500 (400 GB)
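
A minimal job-script sketch for a single-node job on a Lise SSD node (queue name taken from the table above; input, output, and program names are placeholders):

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --partition=standard96:ssd     # queue with node-local SSDs, see table above
    #SBATCH --time=02:00:00

    # Stage input data from WORK to the node-local SSD
    cp $WORK/input.dat $LOCAL_TMPDIR/

    # Run the application with its scratch IO on the local SSD
    cd $LOCAL_TMPDIR
    srun ./my_program input.dat            # placeholder program

    # Copy results back to WORK; local files are deleted when the job ends
    cp $LOCAL_TMPDIR/result.dat $WORK/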


FastIO - Lise only

Four additional OSTs with NVMe SSDs, integrated into WORK.

Access: ask support@hlrn.de for access

Size: 55 TiB, quota enforced
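
Once access has been granted, a directory can be placed on the FastIO OSTs; a hypothetical sketch assuming FastIO is exposed as a Lustre OST pool (the pool name 'fastio' and the stripe count are assumptions):

    # Direct new files in this directory to the FastIO NVMe OSTs
    mkdir -p $WORK/fastio_dir
    lfs setstripe -p fastio -c 4 $WORK/fastio_dir   # pool name 'fastio' is an assumption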


IME - Emmy only

DDN Infinite Memory Engine (IME) based burst buffers and filesystem cache - a fast data tier between the compute nodes and the Lustre filesystem /scratch - are used as an IO accelerator for IO-bound problems. This helps avoid overloading the system when a program tries to write large amounts of data to the global parallel filesystem within a short period of time. IME servers consist of solid state disks (SSDs) that act as a cache and burst buffer to improve global filesystem performance. IME servers are currently available for use on Emmy.

Access: IME Burst Buffer, File System Cache
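
If the DDN ime-ctl utility is available on Emmy (an assumption; flags as documented by DDN and to be verified for this installation), staging data in and out of the IME tier can look like this; the paths are placeholders inside the IME-FUSE mount:

    # Pull an existing Lustre file into the IME cache before the compute phase
    ime-ctl --prestage /ime-fuse-mount/input.dat    # placeholder path

    # Flush results written through IME back to the Lustre filesystem
    ime-ctl --sync /ime-fuse-mount/result.dat

    # Release the cached data from the IME tier once it is no longer needed
    ime-ctl --purge /ime-fuse-mount/result.dat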


Finding the right filesystem

If your jobs have a significant IO share, we recommend contacting your consultant via support@hlrn.de for advice on the right filesystem for your workload.


Local IO

If you have a significant amount of node-local IO that does not need to be accessed after the job ends and is smaller than 2 TB on Lise or 400 GB on Emmy, we recommend using $LOCAL_TMPDIR. Depending on your IO pattern, this may accelerate IO by up to 100%.


Global IO

Global IO is defined as shared IO that can be accessed from multiple nodes at the same time and persists after the job ends.

Random IO on small files in particular can be accelerated by up to 200% using FastIO on Lise or IME on Emmy.



INTERNAL

Recommendation Matrix:

Max performance gain on IO versus default $WORK in brackets.


Code e.g.: OpenFOAM?, FESOM?

write IO
  small random IO:                        local: Local SSDs (+100%)   global: FastIO stripe=4 (+30%)
  lots of large IO per process:           local: Local SSDs (+15%)    global: $WORK
  few large IO accessed from many nodes:  local: $WORK                global: FastIO stripe=4 (+200%) or $WORK stripe=4-8 (+200%)
  unknown IO:                             local: Local SSDs (+40%)    global: FastIO stripe=4 (+80%) or $WORK stripe=4-8 (+70%)

read IO
  small random IO:                        local: Local SSDs (+30%)    global: FastIO (+140%)
  lots of large IO per process:           local: Local SSDs (+30%)    global: FastIO stripe=4 (+20%)
  few large IO accessed from many nodes:  local: Local SSDs (+35%)    global: FastIO stripe=4 (+200%)
  unknown IO:                             local: Local SSDs (+30%)    global: FastIO stripe=4 (+120%) or $WORK stripe=MAX (+90%)

balanced IO
  small random IO:                        local: Local SSDs (+60%)    global: FastIO stripe=4 (+90%)
  lots of large IO per process:           local: Local SSDs (+25%)    global: FastIO (+15%)
  few large IO accessed from many nodes:  local: Local SSDs (+20%)    global: FastIO stripe=4 (+200%) or $WORK stripe=MAX (+150%)
  unknown IO:                             local: Local SSDs (+35%)    global: FastIO stripe=4 (+100%) or FastIO (+50%)

Proposed announcement

Subj: additional WORK filesystems available

Dear HLRN Users

To achieve better IO performance, HLRN has installed additional filesystems tailored for dedicated IO demands.

Lise now has node-local SSDs using NVMe and Optane to accelerate local IO. Furthermore, we have installed an SSD-based Lustre filesystem called 'FastIO'.

Emmy has been upgraded with an IME.....

If your jobs have a significant amount of IO that can be accelerated, visit: _________ or contact our support.

Kind Regards

HLRN-Team




-----
TODO:
- Links
- CMDs


