
Usage and Licensing at HLRN

The use of Ansys is restricted to members of the ansys user group. You can apply for group membership at support[at]hlrn.de.
Please note the license conditions: our academic licenses are restricted to students, PhD students, teachers, and trainers at public institutions. They may not be used in projects financed by industrial partners.

Warning
titleLicenses

Important: Always add
#SBATCH -L ansys
to your job script.

This flag ensures that the scheduler starts a job only when the required number of licenses is available.
You can check the availability yourself: scontrol show lic
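As a small sketch, you could wrap this check in a function; the grep filter and the fallback message are illustrative only, and the exact output format of "scontrol show lic" varies between Slurm versions:

```shell
# Sketch: show Slurm's view of the Ansys licenses.
# The grep filter and the fallback message are illustrative only.
list_ansys_licenses() {
    if command -v scontrol >/dev/null 2>&1; then
        scontrol show lic | grep -i ansys || true
    else
        # Fallback for hosts without Slurm (e.g. your workstation)
        echo "scontrol not found - run this on a login node"
    fi
}

list_ansys_licenses
```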

  • aa_r is an "ANSYS Academic Research License" with 16 inclusive tasks. Research jobs with more than 16 tasks require additional "aa_r_hpc" licenses.


  • aa_t_a is an "ANSYS Academic Teaching License" with a maximum of 16 tasks. It may be used only for student projects, student instruction, and student demonstrations. Eligible users can activate this license by adding the flag

    Codeblock
    titleActivation of the aa_t_a license
    -lpf $ANSYSLIC_DIR/prodord/license.preferences_for_students_and_teaching.xml

    to the Ansys executable, such as "cfx5solve". The variable $ANSYSLIC_DIR is set after loading any Ansys module. Alternatively, eligible users may redefine the command "cfx5solve" with

    Codeblock
    titleRedefining cfx5solve for students
    source $ANSYSLIC_DIR/cfx5solve_redef_with_student_lic

    after loading the Ansys module inside the job script. This prepends the flag "-lpf ..." to "cfx5solve" automatically if fewer than 17 tasks are requested.
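The 16-task limit described above can be expressed as a small helper function. This is only an illustrative sketch of the rule, not the contents of the official redefinition script; the XML path is the one shown in the activation snippet:

```shell
# Sketch of the rule described above: the aa_t_a teaching license
# covers at most 16 tasks, so the "-lpf" flag is only emitted when
# fewer than 17 tasks are requested.
student_license_flag() {
    local ntasks="$1"
    if [ "$ntasks" -lt 17 ]; then
        echo "-lpf \$ANSYSLIC_DIR/prodord/license.preferences_for_students_and_teaching.xml"
    fi
}

student_license_flag 16   # prints the -lpf flag
student_license_flag 40   # prints nothing: 40 tasks need the research license
```

In a job script you would typically call this with ${SLURM_NTASKS} and append the result to the solver command line.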

Example Jobscripts

Codeblock
languagebash
titleExample of a parallel distributed-memory job on 2 nodes with 40 tasks per node
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH -L ansys
#SBATCH -p medium
#SBATCH --mail-type=ALL
#SBATCH --output="cavity.log.%j"
#SBATCH --job-name=cavity

module load ansys/2019r2
srun hostname -s > hostfile
echo "Running on nodes: ${SLURM_JOB_NODELIST}"

fluent 2d -g -t${SLURM_NTASKS}  -ssh  -mpi=intel -cnf=hostfile << EOFluentInput >cavity.out.$SLURM_JOB_ID
      file/read-case cavity.cas
      parallel/partition/method/cartesian-axes 2
      solve/initialize/initialize-flow
      solve/iterate 100
      exit
      yes
EOFluentInput

echo '#################### Fluent finished ############'
