...
- Home file system with 340 TiByte capacity containing `$HOME` directories `/home/${USER}/`
- Lustre parallel file system with 8.1 PiByte capacity containing
  - `$WORK` directories `/scratch/usr/${USER}/`
  - `$TMPDIR` directories `/scratch/tmp/${USER}/`
  - project data directories `/scratch/projects/<projectID>/` (not yet available)
- Tape archive with 120 TiByte capacity HDD cache (on Lise accessible from the login nodes only; on Emmy access via gperm1/2)
- On Emmy: SSD for temporary data at `$LOCAL_TMPDIR` (400 GB shared among all jobs running on the node); a usage sketch follows below
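As a minimal sketch of how these locations are typically used together in a batch job (assuming a Slurm environment; the job name, input file, and result file are illustrative, not taken from this documentation):

```bash
#!/bin/bash
#SBATCH --job-name=io-example   # hypothetical job name

# Persistent data lives under $HOME or $WORK.
INPUT="$WORK/input.dat"          # hypothetical input file

# Intermediate files go to the Lustre scratch area via $TMPDIR.
scratch="$TMPDIR/run-$SLURM_JOB_ID"

# On Emmy, prefer the node-local SSD for I/O-intensive temporaries.
if [ -n "$LOCAL_TMPDIR" ]; then
    scratch="$LOCAL_TMPDIR/run-$SLURM_JOB_ID"
fi
mkdir -p "$scratch"

# ... run the computation, writing temporaries into "$scratch" ...

# Copy results back to $WORK before the job ends;
# node-local SSD contents are not preserved after the job.
cp "$scratch/result.out" "$WORK/"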
...