WORK is a Lustre filesystem with metadata servers and object storage servers managing the user data. WORK is a single, shared resource; jobs with unfavourable I/O patterns can therefore degrade the performance of other jobs running concurrently on WORK. Our goal is that this resource is used fairly across the wide spectrum of applications and users.
Jobs that issue hundreds of thousands of metadata operations (open, close, stat, ...) can make the filesystem "slow" or unresponsive, even though the metadata are stored on SSDs in our Lustre configuration.
Therefore, we provide here some general advice to help you avoid critical conditions of WORK, including unresponsiveness:
- Write intermediate results and checkpoints as seldom as possible.
- Try to write/read larger data volumes (>1 MiB) and reduce the number of files concurrently managed in WORK.
- For inter-process communication use proper protocols (e.g. MPI) instead of files in WORK.
- If you want to control your jobs externally, consider using POSIX signals instead of files that your program frequently opens, reads, and closes. You can send signals to batch jobs e.g. via "scancel --signal..." (see the sketch after this list).
- Use MPI-IO to coordinate your I/O instead of each MPI task doing individual POSIX I/O (HDF5 and netCDF may help you with this).
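As an illustration of the signal-based job control mentioned above, the following sketch sends SIGUSR1 to a running batch job. The job ID 1234567 and the choice of SIGUSR1 are placeholders; your application (or job script) must install a matching signal handler for this to have any effect.

# send SIGUSR1 to all running job steps of job 1234567
scancel --signal=USR1 1234567

# send the signal to the batch shell script instead of the job steps,
# e.g. so that a trap handler in the script can react
scancel --signal=USR1 --batch 1234567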
Analysis of metadata
An existing application can be investigated with respect to its metadata usage. As an example, consider the following job script for an MPI-parallel application myexample.bin, which runs 16 MPI tasks.
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00
#SBATCH --partition=standard96

srun ./myexample.bin
Once you add the Linux command strace to the job, two trace files are created per Linux process (MPI task); for this example, 32 trace files are created. Large MPI jobs can create a huge number of trace files, e.g. a 128-node job with 128 x 96 MPI tasks created 24576 files. For this kind of investigation we therefore strongly recommend reducing the number of MPI tasks as far as possible.
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00
#SBATCH --partition=standard96

srun strace -ff -t -o trace -e open,openat ./myexample.bin
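After the job has finished, a quick look at the trace files gives a first overview of the metadata load. The following commands are a minimal sketch assuming the output prefix trace from the job script above:

# number of trace files written (one per traced process)
ls trace.* | wc -l

# open/openat calls per trace file (lines containing "open"), sorted by count
grep -c 'open' trace.* | sort -t: -k2 -nr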
Analysing one trace file shows all file open activity of one process.
> ls -l trace.*
-rw-r----- 1 bzfbml bzfbml 21741 Mar 10 13:10 trace.445215
...
> wc -l trace.445215
258 trace.445215
> cat trace.445215
13:10:37 open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
13:10:37 open("/lib64/libfabric.so.1", O_RDONLY|O_CLOEXEC) = 3
...
13:10:38 open("/scratch/usr/bzfbml/mpiio_zxyblock.dat", O_RDWR) = 8
13:10:43 +++ exited with 0 +++
The example code creates a single file named mpiio_zxyblock.dat. Expect a number of open entries that originate from the system (shared libraries and other runtime files); of the 258 open statements in the example above, only one comes from the application code itself.
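To separate application I/O from system activity, you can filter the trace for paths in WORK. This is a sketch only; the file name trace.445215 and the /scratch path are taken from the example above and will differ in your setup.

# open calls issued by the application on files in WORK
grep '/scratch/' trace.445215

# open calls on shared libraries and other system files
grep -E 'open(at)?\(' trace.445215 | grep -v '/scratch/'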
Known issues
For some codes we are aware of specific issues:
- OpenFOAM: always set ‘runTimeModifiable false’ and use the collated fileHandler with sensible values for purgeWrite and writeInterval (see: https://www.hlrn.de/doc/display/PUB/OpenFOAM)
- NAMD: special considerations apply during replica-exchange runs: https://www.hlrn.de/doc/display/PUB/NAMD
If you have questions or are unsure about your individual scenario, please get in contact with your consultant.