3. Slurm partitions
Compute nodes in the cluster are organized into partitions based on their hardware characteristics. The mysinfo command provides detailed information about all partitions in the cluster. The main feature of a partition can be recognized by its suffix:
- _mpi: nodes connected with a fast InfiniBand interconnect, designed for multi-node (MPI) jobs
- _gpu: nodes with high-end NVIDIA data center GPUs
- _himem: nodes with large amounts of CPU memory (RAM)
- partitions without a suffix: high-core-count nodes for generic compute
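Besides the site-specific mysinfo command, the standard Slurm sinfo command can also give a per-partition overview. The sketch below uses standard sinfo format fields (partition, node count, CPUs per node, memory in MB, and generic resources such as GPUs):

```bash
# List all partitions with node count, CPUs per node, memory (MB) and GRES
sinfo --format="%P %D %c %m %G"
```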
The part of the partition name that comes before the suffix indicates the
hardware generation in that partition. For example, ampere_gpu has NVIDIA
Ampere (A100) GPUs, while zen5_mpi has nodes with AMD Zen 5 CPUs and a fast
interconnect.
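As an illustration, a short interactive test on an A100 node could target the ampere_gpu partition directly. This is a minimal sketch using standard Slurm options; check your site's documentation for the preferred GPU request syntax:

```bash
# Run nvidia-smi on one GPU of a node in the ampere_gpu partition
srun --partition=ampere_gpu --gpus=1 nvidia-smi
```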
See also
A full hardware description of the partitions in Hydra can be found in the VSC documentation: Hydra Hardware.
In most cases, specifying a partition is not necessary: Slurm automatically
determines which partitions are suitable for your job based on the requested
resources, such as the number of tasks or GPUs. If really needed, you can
submit your job to a specific partition. It is also possible to request a
comma-separated list of partitions. For example, to indicate that your job may
run in partition zen5_mpi or zen4, use --partition=zen5_mpi,zen4. Note,
however, that a job will only run in a single partition; Slurm decides the
partition based on priority and availability.
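For example, a job script requesting either of those partitions could look like the following minimal sketch; the job name, resource amounts, and executable are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=my_mpi_job      # placeholder job name
#SBATCH --ntasks=64                # number of MPI tasks (adjust to your job)
#SBATCH --time=01:00:00            # walltime limit
#SBATCH --partition=zen5_mpi,zen4  # job may start in either partition

# Slurm places the whole job in a single one of the listed partitions,
# chosen based on priority and availability.
srun ./my_app                      # placeholder executable
```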