Excerpt |
---|
General computational fluid dynamics solver (cell-centered FVM). GPUs are supported. |
General Information
The ANSYS software package is developed and distributed by ANSYS, Inc.
Info |
---|
This documentation describes the specifics of installation and usage of ANSYS at HLRN. Introductory courses for ANSYS as well as courses for special topics are offered by ANSYS Inc. and their regional offices, e.g. in Germany. It is recommended to take at least an introductory course (see the CAD-FEM GmbH homepage). Good (free) starting points for self-study are https://students.cadfem.net/de/ansys-gratis-lernen.html and https://courses.ansys.com |
Details of the HLRN Installation of ANSYS
The ANSYS versions currently installed are
...
Info |
---|
The module name is ansys. Other versions may be installed. Inspect the output of: module avail ansys |
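The listing below is a minimal sketch of how to inspect and load the module; the version string is only an example and must be one of those reported by module avail ansys.
Code Block |
---|
language | bash |
---|
title | Listing and loading the Ansys module (version string is an example) |
---|
|
module avail ansys      # list all installed Ansys versions
module load ansys       # load the default version
# or pin a specific version, e.g.:
# module load ansys/2023r2 |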
Usage and Licensing at HLRN
The use of Ansys is restricted to members of the ansys user group. To become a group member, please contact support[at]hlrn.de.
Please note the license conditions: Our academic licenses are restricted to students, PhD students, teachers and trainers of public institutions. They cannot be used in projects that are financed by industrial partners.
Warning |
---|
|
Important: Always add
#SBATCH -L ansys to your job script. |
The flag "#SBATCH -L ansys" ensures that the scheduler starts jobs only, when licenses are available.
You can check the availability yourself: scontrol show lic
...
aa_t_a
is a "ANSYS Academic Teaching License" with a maximum of 4 tasks. These may be used only for student projects, student instruction and student demonstrations. Eligible users are allowed to activate these, by adding the flag
Code Block |
---|
title | Activation of the aa_t_a license |
---|
|
-lpf $ANSYSLIC_DIR/prodord/license.preferences_for_students_and_teaching.xml |
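As an illustration of where this flag goes, the following is a hedged sketch of a Fluent call with the license preference file appended; the solver mode, task count, and journal file name are placeholders and not prescribed by this page.
Code Block |
---|
language | bash |
---|
title | Sketch: appending the -lpf flag to a Fluent call (placeholders) |
---|
|
# Placeholders: adapt solver mode (2d/3d/3ddp), task count and journal file to your case.
fluent 2d -g -t${SLURM_NTASKS} \
       -lpf $ANSYSLIC_DIR/prodord/license.preferences_for_students_and_teaching.xml \
       -i my_journal.jou |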
...
Info |
---|
To obtain and checkout a product license please read Ansys Suite first. |
Documentation and Tutorials
Info |
---|
Besides the official documentation and tutorials (see Ansys Suite), an alternative source is: https://cfd.ninja/tutorials As part of the official documentation you will find, for example, all text commands needed to write journal files: /sw/eng/ansys_inc/v231/doc_manuals/v231/Ansys_Fluent_Text_Command_List.pdf |
Example Jobscripts
The underlying test cases described here can be downloaded here:
- natural convection / circulation: described here, cas file: NaturalConvection_SimulationFiles.zip
- steady nozzle flow: described in Fluent tutorial guide (2023 R1, Ch. 8) "Modeling Transient Compressible Flow", cas file: nozzle_gpu_supported.cas.h5
Code Block |
---|
language | bash |
---|
title | Job for Convection - 2 CPU-nodes each with 96 cores (IntelMPI) |
---|
|
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH -L ansys
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity_on_cpu
module load ansys/2023r2
srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
fluent 2d -g -t${SLURM_NTASKS} -ssh -mpi=intel -pib -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
; this is an Ansys journal file aka text user interface (TUI) file
file/read-case initial_run.cas.h5
parallel/partition/method/cartesian-axes 2
file/auto-save/append-file-name time-step 6
file/auto-save/case-frequency if-case-is-modified
file/auto-save/data-frequency 10
file/auto-save/retain-most-recent-files yes
solve/initialize/initialize-flow
solve/iterate 100
exit
yes
EOFluentInput
echo '#################### Fluent finished ############' |
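Assuming the script above is saved as, e.g., cavity_cpu.slurm (the file name is an arbitrary choice), it is submitted and monitored with the usual Slurm commands:
Code Block |
---|
language | bash |
---|
title | Submitting and monitoring the job (file name is an example) |
---|
|
sbatch cavity_cpu.slurm   # submit the job script; prints the job ID
squeue -u $USER           # check the state of your jobs
scontrol show lic         # verify that ansys licenses are currently available |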
Code Block |
---|
language | bash |
---|
title | Nozzle flow - 1 GPU-node with 1 host-cpu and 1 GPU (new gpu native mode, OpenMPI) |
---|
|
#!/bin/bash
#SBATCH -t 00:59:00
#SBATCH --nodes=1
#SBATCH --partition=gpu-a100:shared ### on GPU-cluster of NHR@ZIB
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1 # number of GPUs per node - ignored if exclusive partition with 4 GPUs
#SBATCH --gpu-bind=single:1 # bind each process to its own GPU (single:<tasks_per_gpu>)
#SBATCH -L ansys
#SBATCH --output="slurm-log.%j"
module add gcc openmpi/gcc.11 ansys/2023r2_mlx_openmpiCUDAaware # external OpenMPI is CUDA-aware
hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"
cat <<EOF >tui_input.jou
file/read-cas nozzle_gpu_supported.cas.h5
solve/initialize/hyb-initialization
solve/iterate 100 yes
file/write-case-data outputfile1
file/export cgns outputfile2 full-domain yes yes
pressure temperature x-velocity y-velocity mach-number
quit
exit
EOF
fluent 3ddp -g -cnf=$hostlist -t${SLURM_NTASKS} -gpu -nm -i tui_input.jou \
       -mpi=openmpi -pib \
       -mpiopt="--report-bindings --rank-by core" >/dev/null 2>&1
echo '#################### Fluent finished ############'
|
Code Block |
---|
language | bash |
---|
title | Job for Convection - 2 GPU-nodes each with 4 GPUs and 4 host CPUs (old gpgpu mode, OpenMPI) |
---|
|
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100 ### on GPU-cluster of NHR@ZIB
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu
module add gcc openmpi/gcc.11 # external OpenMPI is CUDA aware
module add ansys/2023r2_mlx_openmpiCUDAaware
hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
module load ansys/2020r2
$hostlist"
cat <<EOF >fluent.jou
; this is an Ansys journal file aka text user interface (TUI) file
parallel/gpgpu/show
file/read-case initial_run.cas.h5
solve/set/flux-type yes
solve/iterate 100
file/write-case-data outputfile
ok
exit
EOF
fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist -i fluent.jou >/dev/null 2>&1
echo '#################### Fluent finished ############' |
Your job can be offloaded successfully if parallel/gpgpu/show denotes the selected devices with a "(*)".
Your job was offloaded successfully if the actual call of the solver prints "AMG on GPGPU".
In this case, your .trn output contains the corresponding messages (compare device_list.trn and amgx_and_runtime.trn, respectively).
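As a quick check after the run, you can search the transcript/output of your job for these markers; the file name below is a placeholder, so point grep at whatever transcript or log file your run actually produced.
Code Block |
---|
language | bash |
---|
title | Checking GPU offloading in the run transcript (file name is a placeholder) |
---|
|
grep -F '(*)' my_run_transcript.trn            # GPU devices selected by parallel/gpgpu/show
grep -F 'AMG on GPGPU' my_run_transcript.trn   # confirms the AMG solver was offloaded to the GPU |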
Info |
---|
Ansys only supports certain GPU vendors/models: https://www.ansys.com/it-solutions/platform-support/previous-releases Look there for the PDF called "Graphics Cards Tested" for your version (most Nvidia, some AMD). |
Info |
---|
The number of CPU cores per node (e.g. ntasks-per-node=Integer*GPUnr) must be an integer multiple of the number of GPUs per node (e.g. gpgpu=GPUnr). |
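For illustration, the following hedged Slurm header sketch satisfies this rule (the values are examples only): with 4 GPUs per node, 4 or 8 tasks per node are valid, whereas 6 would not be.
Code Block |
---|
language | bash |
---|
title | Example of a consistent tasks/GPUs ratio (values are examples) |
---|
|
# 8 tasks per node is an integer multiple (2x) of 4 GPUs per node.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gpus-per-node=4
# ... later in the script: fluent ... -t${SLURM_NTASKS} -gpgpu=4 ... |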
Fluent GUI: set up your case on your local machine
Unfortunately, case setup is really convenient only in the Fluent GUI. Therefore, we recommend doing all necessary GUI interactions on your local machine beforehand. As soon as the case setup is complete (geometry, materials, boundaries, solver method, etc.), save it as a *.cas file. After copying the *.cas file to the working directory on the supercomputer (as sketched below), this prepared case (incl. the geometry) just needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]. Examples of such *.jou (TUI) files can be found in the job scripts above.
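The copy step itself is ordinary file transfer; a minimal sketch (user name, login node, and target directory are placeholders for your own) is:
Code Block |
---|
language | bash |
---|
title | Copying the prepared case file to the supercomputer (placeholders) |
---|
|
# Placeholders: replace the user name, login node and target directory with your own.
scp my_case.cas.h5 myusername@<login-node>:/path/to/your/workdir/ |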
If you cannot set up your case input files (*.cas) by other means, you may start the Fluent GUI on our compute nodes as a last resort.
But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interaction involves high latency.
Code Block |
---|
language | bash |
---|
title | Interactive Fluent GUI run (not recommended for supercomputer use) |
---|
|
srun -N 1 -p standard96:test -L ansys --x11 --pty bash
# wait for node allocation, then run the following on the compute node
export XDG_RUNTIME_DIR=$TMPDIR/$(basename $XDG_RUNTIME_DIR); mkdir -p $XDG_RUNTIME_DIR
module add ansys/2023r1
fluent & |