Hydra is a general-purpose cluster and a member of the VSC.

The cluster is capable of running all workload types (single core, SMP, MPI, HTC) and evolves under a continuous upgrade model.

Hydra description

Hydra is a cluster that is continuously upgraded. Traditional clusters are purchased in one go and remain fixed for their lifetime (generally 4-5 years) until replaced by a new one. Research, however, is changing and evolving all the time. We therefore invest in Hydra regularly, extending or adapting the cluster on a yearly basis to better meet users' needs.

Resources overview

The cluster comprises about 160 compute nodes covering several CPU generations. Part of the cluster has an InfiniBand network for MPI jobs. A GPFS storage system of ~800 TB provides fast, high-capacity storage for I/O-demanding jobs. Nodes have 16, 20 or 24 cores and are fitted with 64 GB to 1.5 TB of RAM. Several master nodes run the core services, such as Moab, Torque and the LDAP servers.


Hydra is freely accessible to all VUB and ULB users. You need to activate your NetID for Hydra before you can access it; see our documentation for details. Since this is a shared environment, we impose limits on the number and type of jobs that can be executed, so that everyone gets a chance to have their jobs started.


The HPC offering is not limited to simple access to the Hydra cluster. The HPC team provides several services to help researchers develop and perform scientific computing as efficiently as possible. With a team fully formed in 2017, we can now officially present the following services.


The most critical one! The whole team is involved in support, from answering simple practical questions to co-working on complex requests.

Learn More


Most of the software is available as modules. If you need additional freely available software, let us know.

Learn More
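As an illustration, a typical session with the module system looks like the following (these are standard Environment Modules commands; the module name shown is purely an example, not necessarily one installed on Hydra):

```shell
module avail            # list all software available as modules
module load GCC/8.3.0   # load a specific version (example module name)
module list             # show the modules currently loaded
module purge            # unload everything before switching toolchains
```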


For new research projects requiring HPC, new funding requests, or simply to improve your research efficiency, please contact us for advice or hands-on collaboration.

Learn More

System Administration

The cluster is continuously maintained: security patches, OS upgrades, hardware replacements, etc. The HPC team works on it on a daily basis.

Learn More


We are developing this new competence, with new hardware coming soon.

Learn More

Paid services

Extra services we offer at a cost: additional computing power, storage, or hosting of your own compute nodes.

Learn More
  • 160

    Compute nodes

  • 800

    TB of storage

  • 2600

    Cores
  • 800000

    Jobs per year

Limits on Hydra

User limits

Name                                   Limit
Number of running jobs                 Maximum 500 jobs
Number of concurrently usable cores    From 1000 to 2500
Number of queued jobs per user         2500 jobs
Job array size                         2500 jobs per array
Memory per job                         Default 1 GB; maximum 1500 GB
Home space                             Quota (soft and hard) set to 100 GB
Work space                             Quota of 400 GB soft and 1 TB hard, with a grace period of 14 days
Walltime per job                       Maximum allowed is 120 hours
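As a sketch, a minimal Torque batch script staying within these limits might look like the following (the job name, resource values and executable are illustrative examples, not site defaults):

```shell
#!/bin/bash
#PBS -N example_job        # job name (example)
#PBS -l nodes=1:ppn=4      # 1 node, 4 cores: well under the per-user core limit
#PBS -l mem=8gb            # request 8 GB of RAM (the default is only 1 GB)
#PBS -l walltime=24:00:00  # must not exceed the 120-hour maximum

cd "$PBS_O_WORKDIR"        # start in the directory the job was submitted from
./my_program               # placeholder for your actual executable
```

Such a script would be submitted with `qsub`, and jobs exceeding the limits above are rejected or held by the scheduler.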

Hydra latest news

11 December 2019

Storage migration on December 16th


We have planned a short Hydra maintenance window for Monday, December 16th, starting at 9:00. The job scheduler has been configured to ensure that no jobs will be running by Monday the 16th. Pending jobs will be started once the maintenance ...

16 September 2019

Annual Maintenance of Hydra is complete


The Hydra maintenance is over. We have successfully finished all scheduled tasks. All users can now submit and run jobs again. Some important considerations:

  • Software in /apps/brussels/CO7/magnycours has been deleted.
  • ...

02 September 2019

Annual Maintenance of Hydra


Hydra will be shut down for its annual maintenance on September 9th at 9 AM. The maintenance will last the whole week, until the end of September 13th. A reservation is already in place to make sure that all jobs finish on ...

Hydra in pictures

Want to know what Hydra looks like? Here are some pictures of the cluster.

Our main suppliers