
ABAQUS - a Finite Element Analysis Package for Engineering Applications

Details of the HLRN Installation of ABAQUS

The ABAQUS versions currently installed are

  • ABAQUS 2019 (default)
  • ABAQUS 2018
  • ABAQUS 2017
  • ABAQUS 2016

Products installed for all versions: ABAQUS/Standard, ABAQUS/Explicit, ABAQUS/CAE, ABAQUS/CFD


...

Code block: To see our provided versions, type:
module avail abaqus

ABAQUS 2018 is the first version with multi-node support.
ABAQUS 2016 is the last version including Abaqus/CFD.

Documentation

Info

To access the official documentation (starting with version 2017) you can register for free at:
http://help.3ds.com

Conditions for Usage and Licensing

...

The Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) owns licenses to install and use ABAQUS on computers owned and operated by them.

All usage of ABAQUS at HLRN is strictly limited to teaching and academic research for non-industry funded projects only.

Access to and usage of the software is regionally limited:

...

Warning

You can only work with our installed Abaqus products if you bring your own license (see details below). Alternatively, you might consider other Finite Element Analysis (FEA) tools such as Ansys Mechanical or LS-DYNA. Open-source projects such as CalculiX (www.calculix.de), which follow the ABAQUS naming conventions and input file format, may also be an option (OpenMP parallel; MPI support is experimental, see https://www.feacluster.com/calculix.php).

To bring your own license to our systems, first follow the steps described in Bring your own license (it should be sufficient if our login nodes can access your license server). Second, place a file called abaqus_v6.env in your home directory ~ (or in your current working directory). Inside that file, add the following line:
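The file can be created directly from the shell; a minimal sketch, using the placeholder from this page (substitute the port and IP/hostname of your own license server):

```shell
# Create abaqus_v6.env in the current working directory.
# "port@ip_number_of_your_license_server" is a placeholder - replace it
# with your license server's values, e.g. 27000@lic.example.org.
echo 'abaquslm_license_file="port@ip_number_of_your_license_server"' > abaqus_v6.env
cat abaqus_v6.env
```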

...

  • Users from other German states can use the software installed at HLRN but have to use their own license from their own license server.

Access to the binary code is given to users who are members of the UNIX group abaqus. Users can request to become a member of this group by contacting their local HLRN project consultant.

The license line in abaqus_v6.env has the form:

Code block
abaquslm_license_file="port@ip_number_of_your_license_server"

Example Jobscripts

The input file of the test case (Large Displacement Analysis of a linear beam in a plane) is: c2.inp

Distributed Memory Parallel Processing

Code block (bash): This is an example of an Abaqus 2020 job on 2 nodes with 48 tasks each.

#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=2  
#SBATCH --ntasks-per-node=48
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --job-name=abaqus.c2

module load abaqus/2020

# host list:
echo "SLURM_NODELIST:  $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env exists already in the case folder, it will append the line with the hostlist.

### ABAQUS parallel execution
abq2020 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double

echo '#################### ABAQUS finished ############'

SLURM logs to: slurm-<your job id>.out

The log of the solver is written to: c2.msg
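After the job has finished, the solver log can be scanned for problems; a small sketch, assuming the job name c2 from the example above:

```shell
# Look for errors and warnings in the solver message file (job=c2 writes c2.msg).
msg=c2.msg
if [ -f "$msg" ]; then
  grep -icE "error|warning" "$msg"   # count of flagged lines
else
  echo "no $msg yet - job still queued or running?"
fi
```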

Warning

The small number of elements in this example does not allow the use of 2x96 cores; hence, 2x48 are utilized here. Typically, if there is sufficient memory per core, we recommend using all physical cores per node (e.g., for standard96: #SBATCH --ntasks-per-node=96). Please refer to Slurm partition CPU CLX to see the number of cores on your selected partition and machine.
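The relation between the SBATCH settings and the cpus= value can be sketched with plain shell arithmetic (values taken from the two-node example above):

```shell
# SLURM_NTASKS = nodes * ntasks-per-node; this is the value that
# cpus=${SLURM_NTASKS} passes to the ABAQUS run.
nodes=2
ntasks_per_node=48
echo $(( nodes * ntasks_per_node ))   # prints 96
```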

Single Node Processing

Code block (bash): This is an example of an Abaqus 2016 single-node job with 96 tasks.

Warning

Abaqus 2016 and 2017 do not run on more than one node.

Code block (bash)
#!/bin/bash
#SBATCH -t 00:10:00
#SBATCH --nodes=1  ## 2016 and 2017 do not run on more than one node
#SBATCH --ntasks-per-node=96
#SBATCH -p standard96:test
#SBATCH --mail-type=ALL
#SBATCH --job-name=abaqus.c2

module load abaqus/2016

# host list:
echo "SLURM_NODELIST:  $SLURM_NODELIST"
create_abaqus_hostlist_for_slurm
# This command will create the file abaqus_v6.env for you.
# If abaqus_v6.env exists already in the case folder, it will append the line with the hostlist.

### ABAQUS parallel execution
abq2016 analysis job=c2 cpus=${SLURM_NTASKS} standard_parallel=all mp_mode=mpi interactive double

echo '#################### ABAQUS finished ############'

Abaqus CAE GUI - not recommended for supercomputer use!

If you cannot set up your case input files (*.inp) by other means, you may start a CAE GUI as a last resort on our compute nodes.
But be warned: to keep the OS images on the compute nodes fast and small, only a minimal set of graphics drivers/libraries is installed, and X-window interaction involves high latency.
If you comply with our license terms (discussed above), you can use one of our four CAE licenses. In this case, please always add

Code block
#SBATCH -L cae

to your job script. This ensures that the SLURM scheduler starts your job only if a CAE license is available.
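A minimal sketch of a batch script carrying this directive (file name, walltime, and partition are hypothetical examples; here we only generate the script and count its directives):

```shell
# Write a minimal job script that requests one CAE license token (-L cae),
# so the scheduler holds the job until a license is free.
cat > cae_job.sh <<'EOF'
#!/bin/bash
# Example values only: adjust walltime and partition for your case.
#SBATCH -t 00:30:00
#SBATCH -p standard96:test
#SBATCH -L cae
module load abaqus/2019
# ... your CAE commands here ...
EOF
grep -c '^#SBATCH' cae_job.sh   # prints 3 (the three directives above)
```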

Code block (bash): Interactive CAE GUI run (not recommended)
srun -p standard96:test -L cae --x11 --pty bash

# wait for node allocation (a single node is the default), then run the following on the compute node 

module load abaqus/2022
abaqus cae -mesa