General computational fluid dynamics solver (cell-centered FVM). GPUs are supported.

General Information

To obtain and check out a product license, please read Ansys Suite first.

Documentation and Tutorials

Besides the official documentation and tutorials (see Ansys Suite), an alternative source is: https://cfd.ninja/tutorials
The official documentation also includes, for example, the full list of text commands used to write journal files: /sw/eng/ansys_inc/v231/doc_manuals/v231/Ansys_Fluent_Text_Command_List.pdf
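A journal file is simply a plain-text sequence of such TUI commands that Fluent executes in order; a minimal sketch (file and case names are illustrative, the commands are the same ones used in the job scripts below):
; example.jou -- minimal journal sketch
file/read-case my_case.cas.h5
solve/initialize/initialize-flow
solve/iterate 50
exit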

Example Jobscripts

The test case used in the examples below can be downloaded here: NaturalConvection_SimulationFiles.zip

Job for 2 nodes with 40 tasks (on 40 CPU cores) per node
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH -L ansys
#SBATCH -p medium
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity_on_cpu

module load ansys/2019r2
srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"

fluent 2d -g -t${SLURM_NTASKS} -ssh -mpi=intel -pib -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
      ; this is an Ansys journal file
      file/read-case initial_run.cas.h5
      parallel/partition/method/cartesian-axes 2
      file/auto-save/append-file-name time-step 6 
      file/auto-save/case-frequency if-case-is-modified
      file/auto-save/data-frequency 10
      file/auto-save/retain-most-recent-files yes
      solve/initialize/initialize-flow 
      solve/iterate 100
      exit
      yes
EOFluentInput

echo '#################### Fluent finished ############'
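
Assuming the script above is saved to a file, it is submitted with sbatch; a minimal sketch (the file name cavity_cpu.slurm is illustrative):
sbatch cavity_cpu.slurm
squeue -u $USER     # check the status of your jobs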


Job for 2 nodes running with 4 GPUs per node and 4 host tasks per node
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100
### on Emmy, the partition (-p) is simply called gpu
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu

echo "Running on nodes: ${SLURM_JOB_NODELIST}"
srun hostname -s > hostfile

module load ansys

cat <<EOF > fluent.jou
; this is an Ansys journal file aka text user interface (TUI) file
parallel/gpgpu/show
file/read-case initial_run.cas.h5
solve/iterate 100
file/write-case-data outputfile
ok
exit
EOF

fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=intel -pib -cnf=hostfile -i fluent.jou  >/dev/null 2>&1
echo '#################### Fluent finished ############'

Your job can be offloaded if parallel/gpgpu/show marks the selected devices with a "(*)".
Your job was offloaded successfully if the actual solver call prints "AMG on GPGPU".
In that case, your .trn output file contains device_list and amgx_and_runtime, respectively.
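To verify the offload after the run, you can search the transcript for these markers; a sketch assuming a transcript (*.trn) was written to the working directory, as described above:
grep -l "AMG on GPGPU" *.trn && echo "GPU offload was active"
grep -A2 "device_list" *.trn     # show the devices that were selected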

Ansys only supports certain GPU vendors/models:
https://www.ansys.com/it-solutions/platform-support/previous-releases
Look there for the PDF called "Graphics Cards Tested" for your version (most Nvidia, some AMD).

The number of CPU cores per node (ntasks-per-node) must be an integer multiple of the number of GPUs per node (gpgpu).
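For example, with 4 GPUs per node, 4, 8, or 12 host tasks per node are valid choices; a sketch of the two relevant lines (the values are illustrative):
#SBATCH --ntasks-per-node=8                     # 8 host tasks per node = 2 x 4 GPUs per node
fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 ...     # 4 GPUs per node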

Fluent GUI: setting up your case on your local machine

Unfortunately, case setup is most convenient in the Fluent GUI. Therefore, we recommend doing all necessary GUI interactions on your local machine beforehand. As soon as the case setup is complete (geometry, materials, boundaries, solver method, etc.), save it as a *.cas file. After copying the *.cas file to the working directory on the supercomputer, this prepared case (incl. the geometry) just needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]. Above, you will find examples of *.jou (TUI) files in the job scripts.
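Copying the prepared case file to the supercomputer can be done, e.g., with scp; a sketch with an illustrative login host and target path (the case file name matches the examples above):
scp initial_run.cas.h5 your_user@login.hpc.example.org:/scratch/your_user/cavity/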

If you cannot set up your case input files (*.cas) by other means, you may start a Fluent GUI on our compute nodes as a last resort.
But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interactions involve high latency.

Interactive Fluent GUI run (not recommended for supercomputer use)
srun -N 1 -p standard96:test -L ansys --x11 --pty bash

# wait for node allocation, then run the following on the compute node 

export XDG_RUNTIME_DIR=$TMPDIR/$(basename $XDG_RUNTIME_DIR); mkdir -p $XDG_RUNTIME_DIR
module add ansys/2023r1
fluent &