General computational fluid dynamics solver (cell-centered FVM). GPUs are supported.
General Information
To obtain and check out a product license, please read the Ansys Suite page first.
Documentation and Tutorials
The official documentation includes, for example, all text commands needed to write journal files: /sw/eng/ansys_inc/v231/doc_manuals/v231/Ansys_Fluent_Text_Command_List.pdf
Example Jobscripts
The underlying test cases are:
- natural convection / circulation: described here, cas file: NaturalConvection_SimulationFiles.zip
- steady nozzle flow: described in Fluent tutorial guide (2023 R1, Ch. 8) "Modeling Transient Compressible Flow", cas file: nozzle_gpu_supported.cas.h5
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH -L ansys
#SBATCH -p medium
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity_on_cpu

module load ansys/2019r2

srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"

fluent 2d -g -t${SLURM_NTASKS} -ssh -mpi=intel -pib -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
; this is an Ansys journal file aka text user interface (TUI) file
file/read-case initial_run.cas.h5
parallel/partition/method/cartesian-axes 2
file/auto-save/append-file-name time-step 6
file/auto-save/case-frequency if-case-is-modified
file/auto-save/data-frequency 10
file/auto-save/retain-most-recent-files yes
solve/initialize/initialize-flow
solve/iterate 100
exit yes
EOFluentInput

echo '#################### Fluent finished ############'
#!/bin/bash
#SBATCH -t 00:59:00
#SBATCH --nodes=1
#SBATCH --partition=gpu-a100:shared
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1          # number of GPUs per node - ignored if exclusive partition with 4 GPUs
#SBATCH --gpu-bind=single:1   # bind each process to its own GPU (single:<tasks_per_gpu>)
#SBATCH -L ansys
#SBATCH --output="slurm-log.%j"

module add gcc openmpi/gcc.11 ansys/2023r2_mlx_openmpiCUDAaware   # external OpenMPI is CUDA-aware

hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"

cat <<EOF >tui_input.jou
file/read-cas nozzle_gpu_supported.cas.h5
solve/initialize/hyb-initialization
solve/iterate 1000 yes
file/write-case-data outputfile1
file/export cgns outputfile2 full-domain yes yes pressure temperature x-velocity y-velocity mach-number
quit
exit
EOF

fluent 3ddp -g -cnf=$hostlist -t${SLURM_NTASKS} -gpu -nm -i tui_input.jou \
       -mpi=openmpi -pib -mpiopt="--report-bindings --rank-by core" >/dev/null 2>&1

echo '#################### Fluent finished ############'
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100   ### on emmy -p is simply called gpu
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu

echo "Running on nodes: ${SLURM_JOB_NODELIST}"
srun hostname -s > hostfile

module add gcc openmpi/gcc.11   # external OpenMPI is CUDA aware
module add ansys/2023r2_mlx_openmpiCUDAaware

cat <<EOF >fluent.jou
; this is an Ansys journal file aka text user interface (TUI) file
parallel/gpgpu/show
file/read-case initial_run.cas.h5
solve/iterate 100
file/write-case-data outputfile
ok
exit
EOF

fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=hostfile -i fluent.jou >/dev/null 2>&1

echo '#################### Fluent finished ############'
Your job can be offloaded if parallel/gpgpu/show marks the selected devices with a "(*)".
Your job was offloaded successfully if the actual solver call prints "AMG on GPGPU".
In these cases, your .trn output file contains device_list and amgx_and_runtime, respectively.
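As a quick check after the run, the transcript can be searched for these markers. This is a minimal sketch; the file name run.trn is an assumption, so adjust it to the actual name of your transcript file.

# sketch: search the run transcript for the offload markers mentioned above
# (run.trn is a placeholder for your actual transcript file name)
grep -i "AMG on GPGPU" run.trn
grep -i -E "device_list|amgx_and_runtime" run.trn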
Ansys only supports certain GPU vendors/models:
https://www.ansys.com/it-solutions/platform-support/previous-releases
Look there for the PDF called "Graphics Cards Tested" for your version (most Nvidia models, some AMD).
The number of CPU cores per node (e.g. ntasks-per-node=Integer*GPUnr) must be an integer multiple of the number of GPUs per node (e.g. gpgpu=GPUnr).
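For illustration only, a consistent pairing on a node with 4 GPUs could look like the following sketch; the concrete values are assumptions, not recommendations:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8                   # 8 CPU cores = 2 * 4 GPUs, i.e. an integer multiple
fluent 3ddp -g -t${SLURM_NTASKS} -gpgpu=4 -cnf=hostfile -i fluent.jou   # 4 GPUs per node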
Fluent GUI: setting up your case on your local machine
Unfortunately, the case setup is most convenient in the Fluent GUI. Therefore, we recommend doing all necessary GUI interactions on your local machine beforehand. As soon as the case setup is complete (geometry, materials, boundaries, solver method, etc.), save it as a *.cas file. After copying the *.cas file to the working directory of the supercomputer, this prepared case (incl. the geometry) just needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]; a minimal journal for this sequence is sketched below. Above, you will find examples of *.jou (TUI) files in the job scripts.
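A minimal journal (TUI) file implementing exactly this sequence could look like the following sketch; the file names and the iteration count are placeholders, not taken from any of the examples above:

; minimal TUI journal: read the prepared case, initialize, iterate, save results, exit
file/read-case prepared_case.cas.h5
solve/initialize/initialize-flow
solve/iterate 500
file/write-case-data result_file
exit yes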
If you cannot set up your case input files (*.cas) by any other means, you may, as a last resort, start the Fluent GUI on our compute nodes.
But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interactions involve high latency.
srun -N 1 -p standard96:test -L ansys --x11 --pty bash
# wait for node allocation, then run the following on the compute node
export XDG_RUNTIME_DIR=$TMPDIR/$(basename $XDG_RUNTIME_DIR); mkdir -p $XDG_RUNTIME_DIR
module add ansys/2023r1
fluent &