exciting is a full-potential all-electron code, employing linearized augmented planewaves (LAPW) plus local orbitals (lo) as its basis set.
Description

exciting is an ab initio code that implements density-functional theory (DFT), capable of reaching micro-Hartree precision. As its name suggests, exciting has a strong focus on excited-state properties. Among its features are:
...
The most recent compiled version is neon, built with the intel-oneapi compiler (v. 2021.2) and linked against Intel MKL (including FFTW). N.B.: exciting fluorine is also available. The exciting module depends on intel/2021.2 and impi/2021.7.1.
| exciting | Module file | Requirement | Compute Partitions | Features | CPU/GPU | Lise/Emmy |
|---|---|---|---|---|---|---|
| fluorine | exciting/009-fluorine | impi/2021.7.1 | CentOS 7 | MPI, OpenMP, MKL (including FFTW) | / | / |
| neon-20 | exciting/010-neon | impi/2021.7.1 | CentOS 7 | MPI, OpenMP, MKL (including FFTW) | / | / |
| neon-21 | exciting/010-neon-21 | impi/2021.7.1 | CentOS 7 | MPI, OpenMP, MKL (including FFTW) | / | / |
| neon-21 | exciting/010-neon-21 | impi/2021.13 | Rocky Linux 9 | MPI, OpenMP, MKL (including FFTW) | / | / |

The Rocky Linux 9 build is currently under construction.
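To use one of the builds above, the matching Intel MPI module has to be loaded first. A minimal interactive sketch, with module names taken from the table (which modules are visible depends on the operating system of the node you are logged in to):

```shell
# List the installed exciting builds (Lmod/Environment Modules syntax)
module avail exciting

# Example: the neon-21 build on Rocky Linux 9 requires impi/2021.13
module load impi/2021.13
module load exciting/010-neon-21
```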
Example Jobscripts
```bash
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition=cpu-clx
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=exciting

module load impi/2021.13

# Load exciting neon
# Check the table above to find which module to load, depending on the version to be used
module load exciting/010-neon-21

# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m

# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

# Important: Do not use srun when SLURM_CPU_BIND=none in combination with the pinning settings defined above
mpirun exciting
```
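The hybrid MPI/OpenMP layout in the jobscript is chosen so that ntasks-per-node times cpus-per-task fills the whole node. A quick arithmetic check (the 96-core node size is an assumption based on the values in the script):

```shell
# 24 MPI ranks x 4 OpenMP threads each = 96 cores,
# i.e. a fully populated 96-core node (assumed core count)
ntasks_per_node=24
cpus_per_task=4
echo $((ntasks_per_node * cpus_per_task))
```

If you change one of the two SBATCH parameters, adjust the other so the product still matches the core count of the target partition's nodes.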
```bash
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition standard96
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=exciting

module load impi/2021.7.1

# Load exciting neon
# Check the table above to find which module to load, depending on the version to be used
module load exciting/010-neon-21

# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m

# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

# Important: Do not use srun when SLURM_CPU_BIND=none in combination with the pinning settings defined above
mpirun exciting
```
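To verify that the pinning settings actually take effect, Intel MPI can print its rank-to-core mapping at startup. One way to check is to raise the standard Intel MPI debug level before launching (this only adds diagnostic output, it does not change the binding):

```shell
# Debug level 4 makes Intel MPI report process pinning at startup;
# inspect the "MPI startup" lines in the job's output file
export I_MPI_DEBUG=4
mpirun exciting
```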