...
Octopus is currently available only on Lise. The standard pseudopotentials deployed with Octopus are located in `$OCTOPUS_ROOT/share/octopus/pseudopotentials/PSF/`. If you need to use a different set, please refer to the manual.
Version 12.1 has been built with the Intel oneAPI compiler (v. 2021.2) and linked against Intel MKL (including its FFTW interface).
...
Octopus version | Module files | Requirements | Optional features supported | Compute partitions | CPU/GPU |
---|---|---|---|---|---|
12.1 | octopus/12.1 | | ... | CentOS 7 | / |
14.1 | octopus/14.1 | | NetCDF | Rocky Linux 9 | / |
Example Job Scripts
Assuming that your input file `inp` is located in the directory from which you submit the job script, and that the output is written to `out`, an example job script is given below:
```bash
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition cpu-clx
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=octopus

module load impi/2021.13
module load octopus/14.1

# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m

# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

mpirun octopus
```
Please check carefully which parallelization strategy works best for your use case, e.g. the number of MPI processes and OpenMP threads. Note that the variables `ParStates`, `ParDomains`, and `ParKPoints` defined in the input file also impact the parallel performance.
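As an illustration, these variables can be set directly in the `inp` file. The fragment below is only a sketch: the calculation mode and the values chosen are hypothetical, not recommendations, and must be tuned together with the MPI/OpenMP settings of your job script.

```
# Hypothetical ground-state calculation; system definition omitted for brevity
CalculationMode = gs

# Parallelization strategy (illustrative values only)
ParStates  = auto   # distribute Kohn-Sham states automatically
ParDomains = 4      # split the real-space mesh into 4 domain groups
ParKPoints = auto   # distribute k-points automatically
```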
A similar example, valid for the CPU partitions running CentOS 7, is:
```bash
#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --partition standard96
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=4
#SBATCH --job-name=octopus

module load intel/2021.2 impi/2021.7.1 octopus/12.1

# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m

# Do not use the CPU binding provided by slurm
export SLURM_CPU_BIND=none

# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close

# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core

mpirun octopus
```