OpenFOAM is an object-oriented open source CFD toolkit.
...
The next example is derived from https://develop.openfoam.com/committees/hpc/-/wikis/HPC-motorbike. It utilizes two full nodes and uses collated file I/O (see the note after the script). All required input/case files can be downloaded here: motorbike_with_parallel_slurm_script.tar.gz.example.zip. To run this example you may use the SLURM script provided below:
```bash
#!/bin/bash
#SBATCH --time 1:00:00
#SBATCH --nodes 2
#SBATCH --tasks-per-node 96
#SBATCH --partition cpu-clx:test
#SBATCH --job-name foam_test_job
#SBATCH --output ol-%x.%j.out
#SBATCH --error ol-%x.%j.err

module load openfoam/v2406
. $WM_PROJECT_DIR/etc/bashrc $FOAM_INIT      # initialize OpenFOAM environment
. $WM_PROJECT_DIR/bin/tools/RunFunctions     # source OpenFOAM helper functions (wrappers)

tasks_per_node=${SLURM_TASKS_PER_NODE%\(*}
ntasks=$(($tasks_per_node*$SLURM_JOB_NUM_NODES))
foamDictionary -entry "numberOfSubdomains" -set "$ntasks" system/decomposeParDict   # number of geometry fractions after decomposition will be the number of tasks provided by slurm

date "+%T"
runApplication blockMesh                     # create coarse master mesh (here one block)
date "+%T"
runApplication decomposePar                  # decompose coarse master mesh over processors
mv log.decomposePar log.decomposePar_v0
date "+%T"
runParallel snappyHexMesh -overwrite         # parallel: refine mesh for each processor (slow if large np) matching surface geometry (of the motorbike)
date "+%T"
runApplication reconstructParMesh -constant  # reconstruct fine master mesh 1/2 (super slow if large np)
runApplication reconstructPar -constant      # reconstruct fine master mesh 2/2
date "+%T"
rm -fr processor*                            # delete decomposed coarse master mesh
cp -r 0.org 0                                # provide start field
date "+%T"
runApplication decomposePar                  # parallel: decompose fine master mesh and start field over processors
date "+%T"
runParallel potentialFoam                    # parallel: run potentialFoam
date "+%T"
runParallel simpleFoam                       # parallel: run simpleFoam
date "+%T"
```
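As noted above, this case uses OpenFOAM's collated file I/O, which writes each time step into a single processors&lt;N&gt; directory instead of one processor&lt;i&gt; directory per MPI rank, and thereby drastically reduces the file count and the metadata load on Lustre. The following is only a sketch of how collated I/O is commonly enabled in recent OpenFOAM.com releases; check the documentation of your installed version before relying on it:

```bash
# Sketch: enabling collated file I/O (available in OpenFOAM.com since v1712).

# a) for every OpenFOAM tool in the job -- export before the run* calls:
export FOAM_FILEHANDLER=collated

# b) or per application call inside the script above:
runParallel simpleFoam -fileHandler collated
```

Depending on the version, the maxThreadFileBufferSize optimisation switch may also have to be increased to allow threaded collated writing.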
...
If you cannot use our local 2 TB SSDs (see Special Filesystems), which are available at $LOCAL_TMPDIR on the #SBATCH --partition={standard,large,huge}96:ssd partitions, please refer to our general advice on how to reduce Metadata Usage on WORK (=Lustre).
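If the local SSDs are available to your job, a common pattern is to stage the case from $WORK to $LOCAL_TMPDIR, run there, and copy the results back before the job ends. The snippet below is only a sketch, assuming a single-node job ($LOCAL_TMPDIR is node-local and not shared between nodes) and an illustrative case directory named motorbike; it would replace the corresponding run steps in a job script like the one above:

```bash
# Sketch only: run the case on the node-local SSD instead of $WORK (Lustre).
# Assumes a single-node job; $LOCAL_TMPDIR is not shared between nodes.
# "motorbike" is an illustrative case directory name.
cp -a $WORK/motorbike $LOCAL_TMPDIR/                           # stage the case in
cd $LOCAL_TMPDIR/motorbike

runParallel simpleFoam                                         # solver I/O now goes to the local SSD

cp -a $LOCAL_TMPDIR/motorbike $WORK/motorbike_$SLURM_JOB_ID    # stage results back before the job ends
```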
To adapt/optimize your OpenFOAM job specifically for I/O operations on $WORK (=Lustre), we strongly recommend the following steps:
...