Excerpt |
---|
General computational fluid dynamics solver (cell-centered FVM). GPUs are supported. |
...
Info |
---|
To obtain and check out a product license, please read Ansys Suite first. |
Documentation and Tutorials
...
Codeblock |
---|
language | bash |
---|
title | Job for 2 nodes running with 4 GPUs per node and 4 host tasks per node |
---|
|
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100
### on Emmy the GPU partition is simply called "gpu"
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
srun hostname -s > hostfile
module load ansys
cat <<EOF > fluent.jou
; this is an Ansys journal file, aka a text user interface (TUI) file
parallel/gpgpu/show
file/read-case initial_run.cas.h5
solve/iterate 100
file/write-case-data outputfile
; "ok" answers a possible overwrite confirmation prompt
ok
exit
EOF
fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=intel -cnf=hostfile -i fluent.jou >/dev/null 2>&1
echo '#################### Fluent finished ############' |
...
Info |
---|
The number of CPU cores per node (ntasks-per-node) must be an integer multiple of the number of GPUs per node (gpgpu), i.e. ntasks-per-node = Integer * GPUnr. |
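This constraint can be verified inside the batch script before Fluent is launched; a minimal sketch, assuming the GPU count is kept in a helper variable (`GPUS_PER_NODE` is illustrative, not a Slurm-provided variable):

```shell
#!/bin/bash
# Illustrative consistency check: host tasks per node must be an
# integer multiple of the GPUs per node (matches -gpgpu=4 above).
GPUS_PER_NODE=4
TASKS_PER_NODE=${SLURM_NTASKS_PER_NODE:-4}

if [ $(( TASKS_PER_NODE % GPUS_PER_NODE )) -ne 0 ]; then
    echo "Error: ntasks-per-node (${TASKS_PER_NODE}) is not a multiple of gpgpu (${GPUS_PER_NODE})" >&2
    exit 1
fi
echo "OK: ${TASKS_PER_NODE} host tasks drive ${GPUS_PER_NODE} GPUs per node"
```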
Fluent GUI - not recommended for supercomputer use
Unfortunately, the case setup (geometry, materials, boundaries, solver method, etc.) is most convenient with the Fluent GUI. Therefore, we recommend doing all necessary GUI interactions on your local machine beforehand. After copying the *.cas file to the working directory on the supercomputer, this fully prepared case (incl. the geometry) just needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]. Above, you will find examples of *.jou (TUI) files in the job scripts.
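The three TUI steps just mentioned can be collected into a minimal journal file, e.g. generated via a heredoc as in the job script above; a sketch, where the case name `prepared.cas.h5` and output name `result` are illustrative:

```shell
#!/bin/bash
# Minimal sketch of a Fluent journal (TUI) file for a fully prepared case.
# File names "prepared.cas.h5" and "result" are placeholders.
cat <<'EOF' > minimal.jou
; read the fully prepared case (geometry, materials, boundaries, solver)
file/read-case prepared.cas.h5
; initialize the flow field
solve/initialize/initialize-flow
; run 100 iterations, then save case and data
solve/iterate 100
file/write-case-data result
exit
EOF
```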
If you cannot set up your case input files (*.cas) by other means, you may start a Fluent GUI on our compute nodes as a last resort.
But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interaction involves high latency.
Codeblock |
---|
language | bash |
---|
title | Interactive Fluent GUI run (not recommended) |
---|
|
srun -N 1 -p standard96:test -L ansys --x11 --pty bash
# wait for node allocation, then run the following on the compute node
export XDG_RUNTIME_DIR=$TMPDIR/$(basename $XDG_RUNTIME_DIR); mkdir -p $XDG_RUNTIME_DIR
module add ansys/2023r1
fluent & |