...
Partition name | Nodes | CPU | Main memory (GB) | GPUs per node | GPU hardware | Walltime (hh:mm:ss) | Description |
---|---|---|---|---|---|---|---|
gpu-a100 | 36 | Ice Lake 8360Y | 1000 | 4 | NVIDIA Tesla A100 80GB | 24:00:00 | full node exclusive |
gpu-a100:shared | 5 | Ice Lake 8360Y | 1000 | 4 | NVIDIA Tesla A100 80GB | 24:00:00 | shared node access, exclusive use of the requested GPUs |
gpu-a100:shared:mig | 1 | Ice Lake 8360Y | 1000 | 28 (4 x 7) | 1 to 28 1g.10gb A100 MIG slices | 24:00:00 | shared node access, shared GPU devices via Multi-Instance GPU (MIG); each of the four physical GPUs is logically split into seven usable slices, each with 10 GB of GPU memory |
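
To make the partition usage concrete, here is a minimal sketch of a batch script requesting a single A100 on the shared partition. The job name is a placeholder, and the plain `--gres=gpu:1` request assumes the default GRES naming; the site may require a typed request such as `gpu:A100:1`.

```bash
#!/bin/bash
#SBATCH --job-name=a100-example       # placeholder job name
#SBATCH --partition=gpu-a100:shared   # shared node access, exclusive GPUs
#SBATCH --gres=gpu:1                  # one of the four A100s; a typed request
                                      # such as gpu:A100:1 may be required
#SBATCH --time=12:00:00               # stay within the 24:00:00 partition limit

# Hypothetical payload: report the GPU(s) visible to this job.
srun nvidia-smi
```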
See the Slurm usage section for how to work around the 12 h and 24 h walltime limits with job dependencies.
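
As a sketch of that job-dependency pattern (`job.sh` is a hypothetical batch script that is assumed to checkpoint its state and resume on restart), a chain of jobs can be submitted in which each job starts only after its predecessor has terminated:

```bash
#!/bin/bash
# Submit a chain of 5 restartable jobs whose combined runtime
# exceeds a single job's walltime limit.
jobid=$(sbatch --parsable job.sh)   # first job, no dependency
for i in $(seq 2 5); do
    # afterany: start once the previous job has ended, whether or not
    # it exited cleanly (a job killed at the walltime limit is not "ok")
    jobid=$(sbatch --parsable --dependency=afterany:${jobid} job.sh)
done
echo "Last job in chain: ${jobid}"
```

If each step is expected to finish cleanly before the limit, `--dependency=afterok` is the stricter alternative: the chain then stops as soon as one job fails.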
...