...
Access to VASP executables is restricted to users who satisfy the following criteria:
...
- The user must be a member of a research group owning a VASP license.
- The user must employ VASP only for work on projects of this research group.
- The user must be registered in Vienna as a VASP user of this research group. Registration is done by logging into the VASP Portal at https://www.vasp.at/vasp-portal/ and registering the user's institutional email address.
...
VASP Version | User Group | VASP Modulefile | Compute Partitions | MPI Requirement | Supported Features
---|---|---|---|---|---
5.4.4 with patch 16052018 | vasp5_2 | vasp/5.4.4.p1 | CentOS 7 | impi/2019.5 |
6.4.1 | vasp6 | vasp/6.4.1 | CentOS 7 | impi/2021.7.1 | OpenMP, HDF5, Wannier90, Libxc
6.4.2 | vasp6 | vasp/6.4.2 | CentOS 7 | impi/2021.7.1 | OpenMP, HDF5, Wannier90, Libxc, DFTD4 van-der-Waals functional
6.4.3 | vasp6 | vasp/6.4.3 | Rocky Linux 9 | impi/2021.13, nvhpc-hpcx/23.1 | OpenMP, HDF5, ...
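As a quick check (a minimal sketch; the group names vasp5_2 and vasp6 are taken from the table above, and the exact group setup on the system may differ), you can verify that your account belongs to one of the VASP user groups:

# List the Unix groups of the current user and look for a VASP user group
id -nG | tr ' ' '\n' | grep vasp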
...
...
Executables
Our installations of VASP comprise the regular executables (vasp_std, vasp_gam, vasp_ncl) and, optionally, community-driven modifications to VASP as shown in the table below. They are available in the directory added to the PATH environment variable by one of the vasp environment modules.
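For illustration (a minimal sketch; the module versions are taken from the table above and availability depends on the compute partition), loading one of the vasp environment modules makes the executables discoverable via PATH:

# Load a matching MPI module and the VASP module
module load impi/2021.13
module load vasp/6.4.3
# The regular executables are now found on the PATH
which vasp_std vasp_gam vasp_ncl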
...
Example Jobscripts
...
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --nodes 2
#SBATCH --tasks-per-node 96
export SLURM_CPU_BIND=none
module load impi/2019.5
module load vasp/5.4.4.p1
mpirun vasp_std
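A job script like the one above is submitted with sbatch (the file name vasp_job.slurm is only a placeholder):

sbatch vasp_job.slurm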
...
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=2
#SBATCH --tasks-per-node=48
#SBATCH --cpus-per-task=2
#SBATCH --partition=cpu-clx
export SLURM_CPU_BIND=none
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core
module load impi/2021.13
module load vasp/6.4.3
# This is to avoid the (harmless) warning message "MPI startup(): warning I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found"
unset I_MPI_PMI_LIBRARY
# Our tests have shown that VASP has better performance with psm2 as the libfabric provider
# Check whether this also applies to your system
# To stick to the default provider, comment out the following line
export FI_PROVIDER=psm2
mpirun vasp_std
Here is the same example, but for the compute nodes running CentOS 7:
#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=2
#SBATCH --tasks-per-node=48
#SBATCH --cpus-per-task=2
#SBATCH --partition=standard96
export SLURM_CPU_BIND=none
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core
module load impi/2021.7.1
module load vasp/6.4.1
mpirun vasp_std
The following example shows a job script that will run on the Nvidia A100 GPU nodes (Berlin). By default, VASP uses one GPU per MPI task. If you plan to use 4 GPUs per node, you need to set 4 MPI tasks per node. Then, set the number of OpenMP threads to 18 (because 4x18=72, which is the number of CPU cores per node on the A100 GPU partition) to speed up your calculation. This, however, also requires proper process pinning.
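As a rough sketch only (the partition name, GPU request syntax, and module choices below are assumptions and may differ from the actual GPU example for this system), such a setup could look like:

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=1
#SBATCH --tasks-per-node=4      # one MPI task per GPU, assuming 4 GPUs per node
#SBATCH --cpus-per-task=18      # 4 x 18 = 72 CPU cores per node
#SBATCH --gres=gpu:4            # assumed GPU request syntax
#SBATCH --partition=gpu-a100    # assumed partition name
export SLURM_CPU_BIND=none
# 18 OpenMP threads per MPI task, pinned to cores
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores
export OMP_PROC_BIND=close
# Assumed modules for the GPU build (see table above)
module load nvhpc-hpcx/23.1
module load vasp/6.4.3
mpirun vasp_std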
...