
Ansys Fluent is a general-purpose computational fluid dynamics solver (cell-centered FVM). GPUs are supported.

General Information

To obtain and check out a product license, please read Ansys Suite first.
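If Ansys licenses are tracked by Slurm on your system (the jobscripts below request them with -L ansys), you can check their availability before submitting. A minimal sketch, assuming the Slurm license is indeed named "ansys":

# list Slurm-tracked licenses and filter for the (assumed) name "ansys"
scontrol show licenses | grep -i ansys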

Documentation and Tutorials

Besides the official documentation and tutorials (see Ansys Suite), an alternative source is: https://cfd.ninja/tutorials

Example Jobscripts

The test case used in the examples below can be downloaded here: NaturalConvection_SimulationFiles.zip
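After downloading, unpack the archive in your working directory. A minimal sketch; that the archive contains the case file initial_run.cas.h5 read by the jobscripts below is an assumption:

unzip NaturalConvection_SimulationFiles.zip
ls initial_run.cas.h5   # case file read by the jobscripts below (assumed to be in the archive)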

Job for 2 nodes with 40 tasks (on 40 CPU cores) per node
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH -L ansys
#SBATCH -p medium
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity_on_cpu

module load ansys/2019r2
srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"

fluent 2d -g -t${SLURM_NTASKS} -ssh -mpi=intel -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
      file/read-case initial_run.cas.h5
      parallel/partition/method/cartesian-axes 2
      solve/initialize/initialize-flow
      solve/iterate 100
      exit
      yes
EOFluentInput

echo '#################### Fluent finished ############'
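
Submit the jobscript with sbatch as usual; the filename used here is only an assumed example:

sbatch fluent_cpu_job.sh     # jobscript filename is hypothetical
squeue -u $USER              # verify the job is queued or running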


Job for 2 nodes running with 4 GPUs per node and 4 host tasks per node
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100
### on Emmy the partition (-p) is simply called gpu
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu

echo "Running on nodes: ${SLURM_JOB_NODELIST}"
srun hostname -s > hostfile

module load ansys

cat <<EOF > fluent.jou
; this is an Ansys journal file, aka a text user interface (TUI) file
parallel/gpgpu/show
file/read-cas initial_run.cas.h5
solve/iterate 100
file/write-case-data outputfile
ok
exit
EOF

fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=intel -cnf=hostfile -i fluent.jou > /dev/null 2>&1
echo '#################### Fluent finished ############'

Your job can be offloaded if parallel/gpgpu/show marks the selected devices with a "(*)".
Your job was offloaded successfully if the actual solver call prints "AMG on GPGPU".
In this case, your .trn transcript file contains the device list and the AMG runtime message, respectively.
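A quick way to check both markers after the run is to search Fluent's transcript file; a sketch, assuming a .trn transcript was written in the working directory:

# look for the device selection marker "(*)" and the GPU solver message
grep -e '(\*)' -e 'AMG on GPGPU' *.trn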

Ansys supports only certain GPU vendors/models:
https://www.ansys.com/it-solutions/platform-support/previous-releases
Look there for the PDF called "Graphics Cards Tested" for your version (most Nvidia models, some AMD).

The number of CPU cores per node (ntasks-per-node=Integer*GPUnr) must be an integer multiple of the number of GPUs per node (gpgpu=GPUnr); see the sketch below.
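For example, with -gpgpu=4, valid values for ntasks-per-node are 4, 8, 12, and so on. A minimal sanity check that could be added to a jobscript; GPUS_PER_NODE is a hypothetical variable that must match the -gpgpu value:

GPUS_PER_NODE=4   # hypothetical; must match the -gpgpu value
if (( SLURM_NTASKS_PER_NODE % GPUS_PER_NODE != 0 )); then
    echo "ntasks-per-node must be a multiple of GPUs per node" >&2
    exit 1
fi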