Gaussian is a computational chemistry application provided by Gaussian Inc.
License agreement
To use Gaussian, you must agree to the following conditions:
1. I am not a member of a research group developing software competitive to Gaussian.
2. I will not copy the Gaussian software, or make it available to anyone else.
3. I will properly acknowledge Gaussian Inc. in publications.
Please contact support with a statement that you agree to the above conditions, so that your user ID can be added to the Gaussian UNIX group.
Limitations
Gaussian 16 is available at NHR@ZIB.
"Linda parallelism", Cluster/network parallel execution of Gaussian, is not supported at any of our systems. Only "shared-memory multiprocessor parallel execution" is supported, therefore no Gaussian job can use more than a single compute node.
Description
Gaussian 16 is the latest in the Gaussian series of programs. It provides state-of-the-art capabilities for electronic structure modeling.
QuickStart
Environment modules
The following versions have been installed:
Version | Installation Path | modulefile |
---|---|---|
Modules for running on CPUs | | |
Gaussian 16 Rev. C.02 | /sw/chem/gaussian/g16_C02/skl/g16 | gaussian/16.C02 |
Modules for running on GPUs | | |
Gaussian 16 Rev. C.02 | /sw/chem/gaussian/g16_C02/a100/g16 | |
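To use one of these installations, load the corresponding modulefile in your shell or job script. A minimal sketch for the CPU build (the module name is taken from the table above; the exact module avail output depends on the system):

module avail gaussian        # list the installed Gaussian modules
module load gaussian/16.C02  # load the CPU build listed in the table above
which g16                    # verify that the g16 executable is now in the PATH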
GPU Job Performance
GPUs are effective for DFT calculations, for both ground and excited states of larger molecules. They are not effective for small jobs, nor are they used effectively by post-SCF calculations such as MP2 or CCSD.
Job submissions
Besides your Gaussian input file, you have to prepare a job script that defines the compute resources for the job; both the input file and the job script have to be in the same directory.
Default runtime files (.rwf, .inp, .d2e, .int, .skr) are stored only temporarily in $LOCAL_TMPDIR on the compute node to which the job was scheduled. These files are removed by the scheduler when the job is done.
If you wish to restart your calculation after a job has finished (successfully or not), please define a checkpoint file (file_name.chk) in your G16 input file (%Chk=path/name.chk).
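A short sketch of where the %Chk line goes in the input file; filename.chk is a placeholder and may include a directory path:

! first Link 0 line of the input file; the checkpoint is written here during the run
%chk=filename.chk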
CPU jobs
Since only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and up to 96 maximum cores per node.
CPU job script example
#!/bin/bash
#SBATCH --time=12:00:00            # expected run time (hh:mm:ss)
#SBATCH --partition=standard96:ssd # compute nodes with local SSD storage installed
#SBATCH --mem=16G                  # memory, roughly 2 times the %mem defined in the input file filename.com
#SBATCH --cpus-per-task=16         # number of CPUs, same amount as defined by %nprocs in the input file filename.com

module load gaussian/16.C02

g16 filename.com                   # g16 command, input: filename.com
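Assuming the script above is saved as, for example, g16_cpu.slurm (the file name is arbitrary), it is submitted from the directory that also contains filename.com:

sbatch g16_cpu.slurm   # submit the job to the batch system
squeue -u $USER        # check the status of your jobs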
GPU jobs
Since only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node up to 4 GPUs per node.
#!/bin/bash
#SBATCH --time=12:00:00      # expected run time (hh:mm:ss)
#SBATCH --partition=gpu-a100 # compute nodes equipped with A100 GPUs
#SBATCH --nodes=1            # number of compute nodes
#SBATCH --mem=32G            # memory, roughly 2 times the %mem defined in the input file filename.com
#SBATCH --ntasks=32          # number of CPU processes including the control CPUs, same amount as defined by %cpu in the input file filename.com
#SBATCH --gres=gpu:4         # number of GPUs, same amount as defined by %GPUCPU in the input file filename.com

module load cuda/11.8
module load gaussian/16.C02

g16 filename.com             # g16 command, input: filename.com
Specifying GPUs & Control CPUs for a Gaussian Job
The GPUs to use for a calculation and their controlling CPUs are specified with the %GPUCPU Link 0 command. This command takes one parameter:
%GPUCPU=gpu-list=control-cpus
For example, a job that uses 2 GPUs and 2 control CPU processes would use the following Link 0 commands:
%CPU=0-1 #Control CPUs are included in this list.
%GPUCPU=0,1=0,1
Using 4 GPUs and 4 control CPU processes:
%CPU=0-3 #Control CPUs are included in this list.
%GPUCPU=0,1,2,3=0,1,2,3
Using 4 GPUs and a total of 32 CPU processes, including 4 control CPU processes:
%CPU=0-31 #Control CPUs are included in this list.
%GPUCPU=0,1,2,3=0,1,2,3
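Putting this together with the GPU job script above, the Link 0 section of the input file for such a 32-process, 4-GPU job could look like the following sketch (the checkpoint file name and the route line are placeholders):

! matches #SBATCH --mem=32G (roughly 2 times %mem)
%mem=16GB
! 32 CPU processes, matching #SBATCH --ntasks=32; control CPUs 0-3 are included
%CPU=0-31
! 4 GPUs controlled by CPUs 0-3, matching #SBATCH --gres=gpu:4
%GPUCPU=0,1,2,3=0,1,2,3
%chk=filename.chk
! route line: method and basis set are placeholders
# method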
Interactive jobs
Example for CPU calculations:
~ $ salloc -t 00:10:00 -p standard96:ssd -N1 --tasks-per-node 24
~ $ module load gaussian/16.C02
~ $ g16 filename.com
Example for GPU calculations:
~ $ salloc -t 00:10:00 -p gpu-a100 -N1 --ntasks=32
~ $ module load cuda/11.8
~ $ module load gaussian/16.C02
~ $ g16 filename.com
Restart calculations from checkpoint files
opt=restart
Restarts a molecular geometry optimization from the checkpoint file. All existing information (basis sets, wavefunction, and the molecular structures generated during the geometry optimization) can be read from the checkpoint file.
%chk=filename.chk
%mem=16GB
%nprocs=16
# method chkbasis guess=read geom=allcheck opt=restart
#restart
Restarts a vibrational frequency computation from the checkpoint file.
%chk=filename.chk
%mem=16GB
%nprocs=16
# restart
Input file examples
Example for CPU calculations: water.com
Example for GPU calculations: DeOxyThymidine.com
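For illustration only (this is not necessarily the content of the linked water.com), a complete minimal CPU input could look like the sketch below; the method, basis set, and geometry are freely chosen, and Gaussian expects a blank line at the end of the file:

%chk=water.chk
%mem=8GB
%nprocs=16
# HF/6-31G(d) opt

water geometry optimization

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.471200
H    0.000000   -0.757200   -0.471200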