A Finite Element Analysis Package for Engineering Application
Details of the HLRN Installation of ABAQUS
The ABAQUS versions currently installed are
- ABAQUS 2020
- ABAQUS 2019 (default)
- ABAQUS 2018 (first version with multi-node support)
- ABAQUS 2017
- ABAQUS 2016 (last version including Abaqus/CFD)
The module name is abaqus. Other versions may be installed; to list them, inspect the output of: module avail abaqus
Conditions for Usage and Licensing at HLRN
All usage of ABAQUS at HLRN is strictly limited to teaching and academic research for non-industry funded projects only.
Access to and usage of the software is regionally limited:
- Users from Berlin (account names "be*") are allowed to use the ZIB license. Add the following line to your own ABAQUS environment file $HOME/abaqus_v6.env:
  abaquslm_license_file="1700@130.73.232.72"
- Users from other German states can use the software installed at HLRN, but they have to use their own license from their own license server.
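For Berlin users, the license line can also be appended from the shell. A minimal sketch, assuming the file lives at $HOME/abaqus_v6.env as described above (the grep guard against duplicate entries is our addition, not part of the official instructions):

```shell
# Append the ZIB license-server line to the per-user ABAQUS environment file.
# The grep guard keeps the line from being added twice on repeated runs.
ENV_FILE="$HOME/abaqus_v6.env"
LICENSE_LINE='abaquslm_license_file="1700@130.73.232.72"'
grep -qxF "$LICENSE_LINE" "$ENV_FILE" 2>/dev/null || echo "$LICENSE_LINE" >> "$ENV_FILE"
```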
To use Abaqus, you need to mail support[at]hlrn.de and ask to become a member of the UNIX group abaqus. You can check your group membership by calling groups.
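The membership check can also be done non-interactively, e.g. at the top of a jobscript. A small sketch; the in_abaqus_group helper name is hypothetical:

```shell
# Return success if "abaqus" appears in the current user's group list.
# `groups` prints the group names separated by spaces.
in_abaqus_group() {
    groups | tr ' ' '\n' | grep -qx abaqus
}

if in_abaqus_group; then
    echo "member of abaqus group"
else
    echo "not a member yet -- mail support[at]hlrn.de"
fi
```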
Usually, there are sufficient licenses for command-line based Abaqus/Standard and Abaqus/Explicit jobs. In contrast, we offer only 4 licenses for the interactive Abaqus/CAE (GUI). If you add the flag "#SBATCH -L cae" to your job script, the SLURM scheduler starts your job only if CAE licenses are available. You can check the available CAE licenses yourself with: scontrol show lic
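If you want the free CAE license count as a plain number, the scontrol output can be parsed. A sketch, assuming the usual `scontrol show lic` key=value format (LicenseName=cae Total=N Used=M ...); the free_cae_licenses helper name is hypothetical:

```shell
# Print the number of currently free "cae" licenses, parsed from
# `scontrol show lic` (assumed format: LicenseName=cae Total=N Used=M ...).
free_cae_licenses() {
    scontrol show lic 2>/dev/null | awk '
        /LicenseName=cae/ {
            for (i = 1; i <= NF; i++) {
                split($i, kv, "=")
                if (kv[1] == "Total") total = kv[2]
                if (kv[1] == "Used")  used  = kv[2]
            }
            print total - used
        }'
}

free_cae_licenses
```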
Example Jobscripts
The input file of the test case (Large Displacement Analysis of a beam plate) is: c2.inp
Distributed Memory Parallel Processing
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --job-name=abaqus.c2

module load abaqus/2020

# host list:
echo "SLURM_NODELIST: $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env exists in the case folder, it will append the line with the hostlist.

### ABAQUS parallel execution
abq2020 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double
echo '#################### ABAQUS finished ############'
SLURM logs to: slurm-<your job id>.out
The log of the solver is written to: c2.msg
The small number of elements in this example does not allow all cores of two nodes (2x96) to be used. Typically, if there is sufficient memory per core, we recommend using all physical cores, e.g. in the case of standard96:
#SBATCH --ntasks-per-node=96
Please refer to the system documentation to see the number of cores on your machine (Lise, Emmy) and selected partition.
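The cpus value handed to ABAQUS in the jobscripts comes from SLURM: SLURM_NTASKS is the product of the node count and --ntasks-per-node, and that is what cpus=${SLURM_NTASKS} expands to. A sketch with the numbers of the two-node example (values hard-coded here for illustration; SLURM sets them in a real job):

```shell
# With --nodes=2 and --ntasks-per-node=48, SLURM exports SLURM_NTASKS=96,
# so ABAQUS is started with cpus=96.
nodes=2
ntasks_per_node=48
echo "cpus=$(( nodes * ntasks_per_node ))"
```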
Single Node Processing
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=1   ## 2016 and 2017 do not run on more than one node
#SBATCH --ntasks-per-node=96
#SBATCH -p standard96:test
#SBATCH --job-name=abaqus.c2

module load abaqus/2016

# host list:
echo "SLURM_NODELIST: $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env exists in the case folder, it will append the line with the hostlist.

### ABAQUS parallel execution
abq2016 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double
echo '#################### ABAQUS finished ############'