exciting

Description

exciting is an ab initio code implementing density-functional theory (DFT), capable of reaching microhartree precision. As its name suggests, exciting has a strong focus on excited-state properties. Among its features are:

  • G0W0 approximation;
  • Solution of the Bethe-Salpeter equation (BSE) to compute optical properties;
  • Time-dependent DFT (TDDFT) in both frequency and time domains;
  • Density-functional perturbation theory for lattice vibrations.

exciting is an open-source code, released under the GPL license.

More information can be found on the official website: https://exciting-code.org/

Modules

exciting is currently available only on Lise. The standard species files deployed with exciting are located in $EXCITING_SPECIES. If you wish to use a different set, please refer to the manual.
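
The snippet below is a minimal sketch of how an input file can be pointed at the deployed species files. The file name input.xml and its speciespath attribute follow exciting's conventions; the sed substitution itself is only a hypothetical illustration, so adapt it to your own setup.

module load exciting/010-neon-21           # defines $EXCITING_SPECIES
# Replace the current speciespath in input.xml with the deployed species directory
sed -i "s|speciespath=\"[^\"]*\"|speciespath=\"${EXCITING_SPECIES}/\"|" input.xml
grep speciespath input.xml                 # verify the substitution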

The most recent compiled version is neon; it has been built with the Intel oneAPI compiler and linked against Intel MKL (including FFTW).

exciting | Module file           | Requirement       | Compute Partitions        | Features                          | CPU / GPU
fluorine | exciting/009-fluorine | impi/2021.7.1     | CentOS 7                  | MPI, OpenMP, MKL (including FFTW) | ✓ / ✗
neon-20  | exciting/010-neon     | impi/2021.7.1     | CentOS 7                  | MPI, OpenMP, MKL (including FFTW) | ✓ / ✗
neon-21  | exciting/010-neon-21  | impi/2021.7.1     | CentOS 7                  | MPI, OpenMP, MKL (including FFTW) | ✓ / ✗
neon-21  | exciting/010-neon-21  | impi/2021.13      | CPU CLX - Rocky Linux 9   | MPI, OpenMP, MKL (including FFTW) | ✓ / ✗
neon-21  | exciting/010-neon-21  | openmpi/gcc/5.0.3 | CPU Genoa - Rocky Linux 9 | MPI, OpenMP, MKL (including FFTW) | ✓ / ✗
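
Which modules are visible depends on the login node and partition. The standard environment-modules commands below can be used to list and inspect them; these are generic commands, shown here with the neon-21 module as an example.

module avail exciting                        # list the installed exciting modules
module show exciting/010-neon-21             # inspect paths and variables such as EXCITING_SPECIES
module load impi/2021.13 exciting/010-neon-21  # load the requirement first (see the table)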

Example Jobscripts

For compute nodes - CPU CLX - Rocky Linux 9
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition=cpu-clx
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=exciting
 
module load impi/2021.13
# Load exciting neon 
# Check the table above to find which module to load, depending on the version to be used
module load exciting/010-neon-21
 
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
  
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
 
# Do not use the CPU binding provided by Slurm
export SLURM_CPU_BIND=none
  
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
  
# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

# Important: do not use srun while SLURM_CPU_BIND=none is set together with the pinning settings defined above
mpirun exciting
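
Assuming the script above has been saved as exciting.slurm (a placeholder name), submission and monitoring work as usual with Slurm:

sbatch exciting.slurm        # submit the job script
squeue -u $USER              # monitor the job state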


For compute nodes - CPU Genoa - Rocky Linux 9
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition=cpu-genoa
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=12
#SBATCH --cpus-per-task=16
#SBATCH --job-name=exciting
 
module load openmpi/gcc/5.0.3
# Load exciting neon 
# Check the table above to find which module to load, depending on the version to be used
module load exciting/010-neon-21
 
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
  
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
 
# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none
  
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Do not use srun in combination with SLURM_CPU_BIND=none
# Important: mpirun starts the MPI processes here; task pinning is controlled by the --bind-to and --map-by options on the following line
mpirun --bind-to core --map-by ppr:${SLURM_NTASKS_PER_NODE}:node:pe=${OMP_NUM_THREADS} exciting
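
As a quick sanity check of the layout requested above: 12 tasks per node × 16 threads per task = 192 cores per node, and this product should match the number of physical cores on the target node type. The lines below are an optional sketch that can be placed in either job script before mpirun to print this product.

# Optional sanity check: cores requested per node = tasks per node x threads per task
cores_per_node=$(( SLURM_NTASKS_PER_NODE * SLURM_CPUS_PER_TASK ))
echo "Requesting ${cores_per_node} cores per node (${SLURM_NTASKS_PER_NODE} MPI tasks x ${SLURM_CPUS_PER_TASK} threads each)"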
