Excerpt
General computational fluid dynamics solver (cell-centered FVM). GPUs are supported.

...

Info

To obtain and check out a product license, please read Ansys Suite first.
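
Since the job scripts below request the license through Slurm (-L ansys), you can check which license resources Slurm knows about before submitting. A minimal sketch, assuming the licenses are tracked by Slurm under the name ansys:

Code Block
language: bash
title: Checking Slurm-managed licenses (sketch)
# list the license names and counts configured in Slurm (output is site-specific);
# jobs then request a token via "#SBATCH -L ansys" as in the examples below
scontrol show licenses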

Documentation and Tutorials

...

Code Block
language: bash
title: Convection - 2 CPU-nodes each with 96 cores (IntelMPI)
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH -L ansys
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity_on_cpu
 
module load ansys/2023r2
srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
 
fluent 2d -g -t${SLURM_NTASKS} -ssh  -mpi=intel -pib -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
      ; this is an Ansys journal file aka text user interface (TUI) file
      file/read-case initial_run.cas.h5
      parallel/partition/method/cartesian-axes 2
      file/auto-save/append-file-name time-step 6
      file/auto-save/case-frequency if-case-is-modified
      file/auto-save/data-frequency 10
      file/auto-save/retain-most-recent-files yes
      solve/initialize/initialize-flow
      solve/iterate 100
      exit
      yes
EOFluentInput
 
echo '#################### Fluent finished ############'
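
Assuming the script above is saved as cavity_cpu.slurm (a placeholder name), it is submitted and monitored like any other Slurm batch job:

Code Block
language: bash
title: Submitting and monitoring the job (sketch)
sbatch cavity_cpu.slurm        # submit; Slurm prints the job ID
squeue -u $USER                # check whether the job is pending or running
tail -f cavity.out.<jobid>     # follow Fluent's console output; replace <jobid> with the actual job ID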

...

Code Block
language: bash
title: Nozzle flow - 1 GPU-node with 1 host CPU and 1 GPU (new GPU-native mode, OpenMPI)
#!/bin/bash
#SBATCH -t 00:59:00
#SBATCH --nodes=1
#SBATCH --partition=gpu-a100:shared ### on GPU-cluster of NHR@ZIB
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1             # number of GPUs per node - ignored if exclusive partition with 4 GPUs
#SBATCH --gpu-bind=single:1      # bind each process to its own GPU (single:<tasks_per_gpu>)
#SBATCH -L ansys
#SBATCH --output="slurm-log.%j"

module add gcc openmpi/gcc.11 ansys/2023r2_mlx_openmpiCUDAaware   # this Ansys build uses the external, CUDA-aware OpenMPI
hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"

cat <<EOF > tui_input.jou
; this is an Ansys journal file aka text user interface (TUI)
file
file/read-cas nozzle_gpu_supported.cas.h5
solve/initialize/hyb-initialization
solve/iterate 100 yes
file/write-case-data outputfile1
file/export cgns outputfile2 full-domain yes yes
pressure temperature x-velocity y-velocity mach-number
quit
exit
EOF

fluent 3ddp -g -cnf=$hostlist -t${SLURM_NTASKS} -gpu -nm -i tui_input.jou \
       -mpi=openmpi -pib -mpiopt="--report-bindings --rank-by core" >/dev/null 2>&1
echo '#################### Fluent finished ############'
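
To verify that the job actually received a GPU, a quick check can be added to the script (or run interactively inside the allocation). nvidia-smi and the CUDA_VISIBLE_DEVICES variable are standard NVIDIA/Slurm tooling, not something specific to Fluent:

Code Block
language: bash
title: Checking the allocated GPU (sketch)
echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"   # GPU index/UUID assigned by Slurm
nvidia-smi                                            # should list exactly the allocated A100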

...

Code Block
language: bash
title: Convection - 2 GPU-nodes each with 4 CPUs/GPUs (old gpgpu mode, OpenMPI)
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100  ### on GPU-cluster of NHR@ZIB
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu

module add gcc openmpi/gcc.11 
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
# external OpenMPI is CUDA aware
module add ansys/2023r2_mlx_openmpiCUDAaware

hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"

cat <<EOF > fluent.jou
; this is an Ansys journal file aka text user interface (TUI) file
parallel/gpgpu/show
file/read-case initial_run.cas.h5
solve/set/flux-type yes
solve/iterate 100
file/write-case-data outputfile
ok
exit
EOF

fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist -i fluent.jou  >/dev/null 2>&1
echo '#################### Fluent finished ############'

...

Info

Ansys only supports certain GPU vendors/models:
https://www.ansys.com/it-solutions/platform-support/previous-releases
Look there for the PDF called "Graphics Cards Tested" for your version (mostly Nvidia, some AMD).


Info
The number of CPU cores per node (e.g. ntasks-per-node=Integer*GPUnr) must be an integer multiple of the number of GPUs per node (e.g. gpgpu=GPUnr).
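
For example (an illustrative combination of numbers, reusing $hostlist and fluent.jou from the gpgpu script above): with 4 GPUs per node, 8 tasks per node gives 2 MPI ranks per GPU and satisfies this rule:

Code Block
language: bash
title: CPU-core / GPU ratio example (sketch)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8      # 8 = 2 * 4, an integer multiple of the 4 GPUs per node
#SBATCH --gres=gpu:4             # ignored if the partition is exclusive with 4 GPUs

# 16 tasks in total across both nodes, 4 GPUs used per node
fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist -i fluent.jou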

...

Unfortunately, case setup is really convenient only in the Fluent GUI. Therefore, we recommend doing all necessary GUI interactions on your local machine beforehand. As soon as the case setup is complete (geometry, materials, boundaries, solver method, etc.), save it as a *.cas file. After copying the *.cas file to the working directory on the supercomputer, this prepared case (incl. the geometry) just needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]. Examples of such *.jou (TUI) files are embedded in the job scripts above.
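
A minimal journal file covering exactly these three steps might look like this (the case file name and iteration count are placeholders; adjust them to your case):

Code Block
language: bash
title: Minimal TUI journal for a prepared case (sketch)
cat <<EOF > prepared_case.jou
; this is an Ansys journal file (TUI); file name and iteration count are examples
file/read-case my_prepared_case.cas.h5
solve/initialize/initialize-flow
solve/iterate 500
exit
yes
EOF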

If you cannot set up your case input files (*.cas) by any other means, you may start a Fluent GUI on our compute nodes as a last resort.
But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interaction involves high latency.
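
A minimal sketch of such an interactive session (assuming X11 forwarding is enabled via ssh -X and Slurm's --x11 option; partition, time limit and core count are only examples):

Code Block
language: bash
title: Interactive Fluent GUI on a compute node (last resort, sketch)
# log in to the system with X forwarding (ssh -X), then request an interactive allocation
srun -p standard96:test -N 1 -n 4 -L ansys -t 01:00:00 --x11 --pty bash

# on the compute node: load the module and start Fluent without the -g (no-GUI) flag
module load ansys/2023r2
fluent 2d -t4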

...