The Vienna Ab initio Simulation Package (VASP) is a program for atomic-scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
More information is available on the VASP website and from the VASP wiki.
Usage Conditions
Access to the VASP executables at HLRN is restricted to users satisfying the following criteria. The user must
- be member of a research group owning a VASP license,
- be registered in Vienna as a VASP user of this research group,
- employ VASP only for work on projects of this research group.
Only members of the UNIX groups vasp5_2 or vasp6 have access to the VASP executables provided by HLRN. To have their user ID included in these groups, users can submit a support request. Users should make sure that they are already registered in Vienna beforehand, as this will be verified with the VASP support team first. Users whose research group did not upgrade its VASP license to version 6.x cannot become members of the vasp6 group.
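To check whether your account is already a member of one of these groups, you can list your UNIX group memberships on a login node. This is a minimal sketch using standard tools; the group names are those given above:

```bash
# List the UNIX groups of the current user and filter for the VASP groups.
# No output means you are not yet a member of either group.
id -nG | tr ' ' '\n' | grep -E '^vasp(5_2|6)$'
```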
Modules
VASP is an MPI-parallel application. We recommend using mpirun as the job starter for VASP at HLRN. The MPI environment module providing the mpirun command associated with a particular VASP installation must be loaded before the VASP environment module; a minimal interactive example follows the table below.
VASP Version | User Group | VASP Modulefile | MPI Requirement | CPU/GPU | Lise/Emmy |
---|---|---|---|---|---|
5.4.4 with patch 16052018 | vasp5_2 | vasp/5.4.4.p1 | impi/2019.5 | / | / |
6.4.1 | vasp6 | vasp/6.4.1 | impi/2021.7.1 | / | / |
6.4.1 | vasp6 | vasp/6.4.1 | nvhpc-hpcx/23.1 | / | / |
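As a minimal interactive sketch of the required load order (using the 6.4.1 CPU pairing from the table above; pick the module pair that matches your case):

```bash
# Load the MPI module first, then the matching VASP module
module load impi/2021.7.1
module load vasp/6.4.1

# The VASP executables should now be found on the PATH
which vasp_std
```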
Executables
Our installations of VASP comprise the regular executables (vasp_std, vasp_gam, vasp_ncl) and, optionally, community-driven modifications to VASP as shown in the table below. They are available in the directory added to the PATH environment variable by one of the vasp environment modules.
Executable | Description |
---|---|
vasp_std | multiple k-points (formerly vasp_cd) |
vasp_gam | Gamma-point only (formerly vasp_gamma_cd) |
vasp_ncl | non-collinear calculations, spin-orbit coupling (formerly vasp) |
vaspsol_[std|gam|ncl] | set of VASPsol-enabled executables (only for v. 5.4.4) |
vasptst_[std|gam|ncl] | set of VTST-enabled executables (only for v. 5.4.4) |
vasptstsol_[std|gam|ncl] | set of executables combining these modifications (only for v. 5.4.4) |
N.B.: The VTST script collection is not available from the vasp environment modules. Instead, it is provided by the vtstscripts environment modules.
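A minimal sketch of making the VTST scripts available (the exact module versions may differ; check module avail vtstscripts):

```bash
# The VTST helper scripts are provided by their own environment module;
# after loading, the scripts are available on the PATH
module load vtstscripts
```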
N.B.: Version 6.4.1 has been compiled with support for OpenMP, HDF5, and Wannier90. The CPU version additionally supports Libxc.
Example Jobscripts
```bash
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --nodes 2
#SBATCH --tasks-per-node 40
export SLURM_CPU_BIND=none
module load impi/2019.5
module load vasp/5.4.4.p1
mpirun vasp_std
```
The same job on nodes providing 96 CPU cores per node:
```bash
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --nodes 2
#SBATCH --tasks-per-node 96
export SLURM_CPU_BIND=none
module load impi/2019.5
module load vasp/5.4.4.p1
mpirun vasp_std
```
In many cases, running VASP with parallelization over MPI ranks alone gives good performance. However, certain application cases can benefit from hybrid parallelization over MPI and OpenMP. A detailed discussion is found here. If you opt for hybrid parallelization, pay attention to the pinning, as shown in the example below. The following job script exemplifies how to run VASP 6.4.1 with OpenMP threads: here, we have 2 OpenMP threads and 48 MPI processes per node (the product of these two numbers should ideally equal the number of CPU cores per node).
```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=2
#SBATCH --tasks-per-node=48
#SBATCH --cpus-per-task=2
#SBATCH --partition=standard96
#SBATCH -A your_project_account
export SLURM_CPU_BIND=none
# Set the number of OpenMP threads as given by the slurm parameter "cpus-per-task"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
# Binding MPI ranks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core
module load impi/2021.7.1 vasp/6.4.1
mpirun vasp_std
```
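If you want to verify that the MPI ranks and OpenMP threads actually end up pinned as intended, one option with Intel MPI is to raise its debug output for a short test run. This addition is a sketch and not part of the original script:

```bash
# Print the process pinning chosen by Intel MPI
# (for testing only; remove again for production runs)
export I_MPI_DEBUG=4
mpirun vasp_std
```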
The following example shows a job script that runs on GPUs. By default, VASP uses one GPU per MPI process. If you plan to use 4 GPUs per node, you need to request 4 tasks per node. Setting the number of OpenMP threads to 18 (such that 4 x 18 = 72, the number of CPU cores in one node) may then bring additional speedup to your calculation. However, this happens only with proper pinning.
```bash
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=2
#SBATCH --tasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --partition=gpu-a100
#SBATCH -A your_project_account
export SLURM_CPU_BIND=none
module load nvhpc-hpcx/23.1 vasp/6.4.1
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=close
# Avoid hcoll as MPI collective algorithm
export OMPI_MCA_coll="^hcoll"
# You may need to adjust this limit, depending on the case
export OMP_STACKSIZE=512m
# Carefully adjust ppr:2, if you don't use 4 MPI processes per node
mpirun --bind-to core --map-by ppr:2:socket:PE=${SLURM_CPUS_PER_TASK} vasp_std
```
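Any of the job scripts above is submitted with sbatch in the usual way; the file name here is just a placeholder:

```bash
# Submit the job script (placeholder file name) and note the returned job ID
sbatch vasp_job.slurm
```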