Excerpt |
---|
General computational fluid dynamics solver (cell-centered FVM). GPUs are supported. |
...
Info |
---|
To obtain and check out a product license, please read Ansys Suite first. |
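In practice this means that every batch job has to request an Ansys license token from Slurm, which all examples below do with `#SBATCH -L ansys`. A minimal sketch, assuming Ansys licenses are configured as Slurm license resources on this system, for checking them before submitting:
Codeblock |
---|
language | bash |
---|
title | Checking the Slurm license resource (sketch) |
---|
 |
# On a login node: list the license resources known to Slurm (name, total and used counts).
scontrol show licenses
|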
Documentation and Tutorials
...
Codeblock |
---|
language | bash |
---|
title | Convection - 2 CPU nodes with 96 cores each (Intel MPI) |
---|
|
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH -L ansys
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity_on_cpu
module load ansys/2023r2
srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
fluent 2d -g -t${SLURM_NTASKS} -ssh -mpi=intel -pib -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
; this is an Ansys journal file aka text user interface (TUI) file
file/read-case initial_run.cas.h5
parallel/partition/method/cartesian-axes 2
file/auto-save/append-file-name time-step 6
file/auto-save/case-frequency if-case-is-modified
file/auto-save/data-frequency 10
file/auto-save/retain-most-recent-files yes
solve/initialize/initialize-flow
solve/iterate 100
exit
yes
EOFluentInput
echo '#################### Fluent finished ############' |
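The following is a minimal sketch of how to submit and monitor this example; the script file name cavity.slurm is only a placeholder, and <jobid> stands for the job ID reported by sbatch:
Codeblock |
---|
language | bash |
---|
title | Submitting and monitoring the CPU example (sketch) |
---|
 |
# Submit the job script (the file name is a placeholder).
sbatch cavity.slurm
# Check the state of your jobs in the queue.
squeue -u $USER
# Follow the Slurm log and the Fluent transcript written by the script above.
tail -f cavity.log.<jobid> cavity.out.<jobid>
|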
Codeblock |
---|
language | bash |
---|
title | Nozzle flow - 1 GPU node with 1 host CPU and 1 GPU (new GPU-native mode, OpenMPI) |
---|
|
#!/bin/bash
#SBATCH -t 00:59:00
#SBATCH --nodes=1
#SBATCH --partition=gpu-a100:shared ### on GPU-cluster of NHR@ZIB
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1 # number of GPUs per node - ignored if exclusive partition with 4 GPUs
#SBATCH --gpu-bind=single:1 # bind each process to its own GPU (single:<tasks_per_gpu>)
#SBATCH -L ansys
#SBATCH --output="slurm-log.%j"
module add gcc openmpi/gcc.11 ansys/2023r2_mlx_openmpiCUDAaware # external OpenMPI is CUDA-aware
hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"
cat <<EOF >tui_input.jou
file/read-cas nozzle_gpu_supported.cas.h5
solve/initialize/hyb-initialization
solve/iterate 100 yes
file/write-case-data outputfile1
file/export cgns outputfile2 full-domain yes yes
pressure temperature x-velocity y-velocity mach-number
quit
exit
EOF
fluent 3ddp -g -cnf=$hostlist -t${SLURM_NTASKS} -gpu -nm -i tui_input.jou \
-mpi=openmpi -pib -mpiopt="--report-bindings --rank-by core" >/dev/null 2>&1
echo '#################### Fluent finished ############'
|
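The hostlist variable assembled above is a comma-separated list of host:ntasks pairs, which is the form passed to Fluent via -cnf in this example. A minimal sketch to inspect it inside an allocation (the node name in the comment is only illustrative):
Codeblock |
---|
language | bash |
---|
title | Inspecting the generated host list (sketch) |
---|
 |
# Same construction as in the job script above.
hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
# For one task on one node this prints something like: ga0123:1,
echo "Running on nodes: $hostlist"
|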
Codeblock |
---|
language | bash |
---|
title | Convection - 2 GPU nodes, each with 4 CPUs/GPUs (old gpgpu mode, OpenMPI) |
---|
|
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH -L ansys
#SBATCH -p gpu-a100 ### on the GPU cluster of NHR@ZIB
#SBATCH --output="slurm.log.%j"
#SBATCH --job-name=cavity_on_gpu
echo "Running on nodes: ${SLURM_JOB_NODELIST}"
module add gcc openmpi/gcc.11 # external OpenMPI is CUDA aware
module add ansys/2023r2_mlx_openmpiCUDAaware
hostlist=$(srun hostname -s | sort | uniq -c | awk '{printf $2":"$1","}')
echo "Running on nodes: $hostlist"
cat <<EOF >fluent.jou
; this is an Ansys journal file aka text user interface (TUI) file
parallel/gpgpu/show
file/read-case initial_run.cas.h5
solve/set/flux-type yes
solve/iterate 100
file/write-case-data outputfile
ok
exit
EOF
fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist -i fluent.jou >/dev/null 2>&1
echo '#################### Fluent finished ############' |
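Both GPU examples redirect Fluent's console output to /dev/null. While debugging, e.g. to see the output of parallel/gpgpu/show, it can be useful to keep the transcript instead; a minimal variation of the last Fluent line (the transcript file name is only a suggestion):
Codeblock |
---|
language | bash |
---|
title | Keeping the Fluent transcript for debugging (sketch) |
---|
 |
# Keep the Fluent transcript in a file instead of discarding it.
fluent 2d -g -t${SLURM_NTASKS} -gpgpu=4 -mpi=openmpi -pib -cnf=$hostlist \
       -i fluent.jou > fluent.out.$SLURM_JOB_ID 2>&1
|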
...
Info |
---|
The number of CPU cores per node (ntasks-per-node) must be an integer multiple of the number of GPUs per node (gpgpu=GPUnr), i.e. ntasks-per-node = Integer * GPUnr. |
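For example, with 4 GPUs per node (GPUnr=4) the following combinations satisfy this rule; the job script above uses the first one, the value 8 is only an illustration:
Codeblock |
---|
language | bash |
---|
title | Consistent CPU/GPU counts per node (sketch) |
---|
 |
# 1 solver process per GPU (as in the example above):
#   #SBATCH --ntasks-per-node=4      # 4 = 1 * GPUnr
#   fluent ... -gpgpu=4
# 2 solver processes per GPU would also be allowed:
#   #SBATCH --ntasks-per-node=8      # 8 = 2 * GPUnr
#   fluent ... -gpgpu=4
|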
...
Unfortunately, case setup is most convenient in the Fluent GUI. We therefore recommend doing all necessary GUI interactions on your local machine beforehand. As soon as the case setup is complete (geometry, materials, boundaries, solver method, etc.), save it as a *.cas file. After copying the *.cas file to the working directory on the supercomputer, this prepared case (including the geometry) just needs to be read [file/read-case], initialized [solve/initialize/initialize-flow], and finally executed [solve/iterate]. Above, you will find examples of *.jou (TUI) files embedded in the job scripts; see also the sketch below.
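As a compact illustration, such a prepared case can be driven by a journal as short as the following; the file names are placeholders and the TUI commands are the same as in the job scripts above:
Codeblock |
---|
language | bash |
---|
title | Minimal TUI journal for a prepared case (sketch) |
---|
 |
# Write a minimal TUI journal for a case prepared in the GUI (file names are placeholders).
cat <<EOF > prepared_case.jou
; this is an Ansys journal file aka text user interface (TUI) file
file/read-case my_prepared_case.cas.h5
solve/initialize/initialize-flow
solve/iterate 100
exit
yes
EOF
|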
If you cannot set up your case input files (*.cas) by any other means, you may, as a last resort, start a Fluent GUI on our compute nodes.
But be warned: to keep the OS images on the compute nodes small and fast, only a minimal set of graphics drivers/libraries is installed, and X-window interaction involves high latency.
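A minimal sketch of such a last-resort interactive session, assuming you logged in with X11 forwarding (ssh -X) and that Slurm's X11 support (srun --x11) is enabled on this system; the partition and time limit are only examples:
Codeblock |
---|
language | bash |
---|
title | Interactive Fluent GUI on a compute node (sketch) |
---|
 |
# Request an interactive shell on a compute node with X11 forwarding.
srun -p standard96:test -L ansys -N 1 -t 01:00:00 --x11 --pty bash
# On the compute node: load the module and start Fluent without -g, so the GUI comes up.
module load ansys/2023r2
fluent 2d
|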
...