Gaussian is a computational chemistry application provided by Gaussian, Inc.
License agreement
In order to use Gaussian, you have to agree to the following conditions:
1. I am not a member of a research group developing software competitive to Gaussian.
2. I will not copy the Gaussian software, or make it available to anyone else.
3. I will properly acknowledge Gaussian Inc. in publications.
To have your user ID added to the Gaussian UNIX group, please contact support and include a copy of the statement above.
Limitations
Gaussian 16 is available at NHR@ZIB.
"Linda parallelism", Cluster/network parallel execution of Gaussian, is not supported at any of our systems. Only "shared-memory multiprocessor parallel execution" is supported, therefore no Gaussian job can use more than a single compute node.
Description
Gaussian 16 is the latest in the Gaussian series of programs. It provides state-of-the-art capabilities for electronic structure modeling.
QuickStart
Environment modules
The following versions have been installed:
| Version | Installation Path | modulefile |
|---|---|---|
| Modules for running on CPUs | | |
| Gaussian 16 Rev. C.02 | /sw/chem/gaussian/g16_C02/skl/g16 | gaussian/16.C02 |
| Modules for running on GPUs | | |
| Gaussian 16 Rev. C.02 | /sw/chem/gaussian/g16_C02/a100/g16 | |
Job submissions
Besides your Gaussian input file, you have to prepare a job script to define the compute resources for the job; both the input file and the job script have to be in the same directory.
Default runtime files (.rwf, .inp, .d2e, .int, .skr files) are saved only temporarily in $LOCAL_TMPDIR on the compute node the job was scheduled to. These files are removed by the scheduler when the job is done.
If you wish to restart your calculation after a job has finished (successfully or not), define a checkpoint file (file_name.chk) in your G16 input file (e.g. %Chk=path/name.chk).
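As an illustration (the method, memory setting, and file names below are placeholders, not taken from this page), the header of an input file that writes such a checkpoint file could look like this:

```
%Chk=name.chk
%mem=16GB
%nprocs=16
# B3LYP/6-31G(d) opt
```

The remainder of the input (title line, charge and multiplicity, molecular geometry) follows as usual.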
CPU jobs
Since only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node and up to 96 maximum cores per node.
CPU job script example
```bash
#!/bin/bash
#SBATCH --time=12:00:00             # expected run time (hh:mm:ss)
#SBATCH --partition=standard96:ssd  # compute nodes with installed local SSD storage
#SBATCH --mem=16G                   # memory, roughly 2 times %mem defined in the input filename.com file
#SBATCH --cpus-per-task=16          # no. of CPUs, same amount as defined by %nprocs in the filename.com input file

module load gaussian/16.C02

g16 filename.com                    # g16 command, input: filename.com
```
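Assuming the script above is saved as, for example, g16_cpu.slurm (the file name is just an example), it can be submitted from the directory containing filename.com with:

```bash
sbatch g16_cpu.slurm
```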
GPU jobs
Since only the "shared-memory multiprocessor" parallel version is supported, your jobs can use only one node up to 4 GPUs per node.
GPU job script example
```bash
#!/bin/bash
#SBATCH --time=12:00:00       # expected run time (hh:mm:ss)
#SBATCH --partition=gpu-a100  # compute nodes with NVIDIA A100 GPUs
#SBATCH --nodes=1             # number of compute nodes
#SBATCH --mem=32G             # memory, roughly 2 times %mem defined in the input filename.com file
#SBATCH --cpus-per-task=1     # no. of CPUs plus control CPUs, same amount as defined by %cpu plus %GPUCPU in the filename.com input file
#SBATCH --gpus-per-task=4     # no. of GPUs, same amount as defined by %GPUCPU in the filename.com input file

module load gaussian/16.C02

g16 filename.com              # g16 command, input: filename.com
```
Specifying GPUs & Control CPUs for a Gaussian Job
The GPUs to use for a calculation and their controlling CPUs are specified with the %GPUCPU Link 0 command. This command takes one parameter:
%GPUCPU=gpu-list=control-cpus
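As an illustration (the core and GPU numbers below are examples, not a recommendation from this page), a job using 16 CPU cores in total, with the first four of them controlling GPUs 0-3, could specify:

```
%cpu=0-15
%GPUCPU=0-3=0-3
```

Here GPUs 0, 1, 2, and 3 are controlled by CPU cores 0, 1, 2, and 3; note that the controlling cores are part of the %cpu list.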
Restart calculations from checkpoint files
opt=restart
Restart a molecular geometry optimization from the checkpoint file. All existing information (basis sets, wavefunction, and the molecular structures generated during the geometry optimization) can be read from the checkpoint file.
```
%chk=filename.chk
%mem=16GB
%nprocs=16
# method chkbasis guess=read geom=allcheck opt=restart
```
#restart
Restart a vibrational frequency computation from the checkpoint file.
```
%chk=filename.chk
%mem=16GB
%nprocs=16
# restart
```