1. Basic use of the HPC¶
1.1. How can I connect to Hydra?¶
Connecting to Hydra requires an active VSC account. If you are eligible, you can already create your account.
Once you have your VSC account, you need client software on your computer to connect to Hydra. We provide instructions for several operating systems to set up your connection.
If you cannot connect after following the previous instructions, here are a few things you can check:
Are you using the correct SSH key?
Make sure that your private key file is in the right place and has the right permissions: it must be readable by the owner of the file only (e.g. mode 600). Please check VSC Docs: Reset your SSH keys if you need to start over.
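As a sketch, the permissions can be fixed from a terminal. The key file name below is a placeholder; use the path of your own private key:

```shell
# Restrict the private key to owner read/write only.
# ~/.ssh/id_rsa_vsc is a placeholder path; adjust to your own key file.
chmod 600 ~/.ssh/id_rsa_vsc

# Verify: the listing should show -rw------- (mode 600)
ls -l ~/.ssh/id_rsa_vsc
```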
Is the passphrase of your SSH key correct?
Upon login you will be asked to re-enter the passphrase that you entered during generation of your SSH key pair. If you forgot this passphrase, you will have to reset your keys following the instructions in VSC Docs: Reset your SSH keys. Note that it can take up to 1 hour before you will be able to use your new SSH key.
Helpdesk: We can assist you with the login process.
1.2. What can I do in the login node?¶
The login node is your main interface with the compute nodes. This is where you can access your data, write the scripts and input files for your calculations, and submit them to the job scheduler of Hydra. It is also possible to run small scripts on the login node, for instance to process the resulting data from your calculations or to test your scripts before submission to the scheduler. However, the login node is not the place to run your calculations, and hence the following restrictions apply:
Any single user can use a maximum of 12GB of memory in the login node.
The amount of CPU time that can be used is always fairly divided over all users. A single user cannot occupy all CPU cores.
The allowed network connections to the outside are limited to SSH, FTP, HTTP and HTTPS.
Lightweight graphical applications can be launched through X11 forwarding. For more complex programs and visualization tools, a graphical desktop environment is available through VNC. More information in the Software section: How can I run graphical applications?
Jobs submitted to the scheduler are preprocessed before placement in the queues to ensure that their resource requirements are correct. For instance, the system automatically assigns a memory limit to your job if you did not specify one manually. Detailed information can be found in the section Job Submission.
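As an illustration, a minimal job script can set these limits explicitly rather than relying on the automatic defaults. This is a sketch only; the module and program names are placeholders, not software provided on Hydra:

```shell
#!/bin/bash
#PBS -l walltime=01:00:00      # expected duration plus some margin
#PBS -l nodes=1:ppn=1          # 1 core on 1 node
#PBS -l mem=4gb                # explicit memory request (otherwise a default is assigned)

cd $PBS_O_WORKDIR              # start in the directory the job was submitted from
module load my_app             # placeholder module name
./my_program input.dat         # placeholder program and input file
```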
Users compiling their own software should check How can I build/install additional software/packages?
1.3. What software is available?¶
Software in Hydra is provided with modules that can be dynamically loaded by the users. Please, read our documentation on the Module System.
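A typical module workflow looks like the sketch below. R is used as an example because it is available on Hydra; in practice you should check the exact module names with the first command:

```shell
module avail        # list all available modules
module load R       # load a module (a specific version can be appended)
module list         # show currently loaded modules
module purge        # unload all modules to start from a clean environment
```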
1.4. How can I check my disk quota?¶
You can check your disk quota with the following command in the login node:
myquota
myquota provides up-to-date information (updated every 5 minutes) about your data usage and quota of your $VSC_SCRATCH. Users receive warning emails whenever they reach 90% of their quota in any of their partitions in Hydra.
To prevent your account from becoming unusable, you should regularly check your disk quota and cleanup any files that are no longer necessary for your active projects.
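A sketch for finding cleanup candidates: the command below lists files in your scratch partition that have not been modified for more than 90 days, largest first. The 90-day threshold is an arbitrary example, not a site policy:

```shell
# List files under $VSC_SCRATCH untouched for over 90 days,
# with their size in bytes, largest first (90 days is an example threshold)
find "$VSC_SCRATCH" -type f -mtime +90 -printf '%s\t%p\n' | sort -rn | head
```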
Long-term archival of data in Hydra is forbidden.
For more information, please check VSC Docs: Disk space usage
1.5. How can I check my resource usage?¶
You can check resource usage of running and recently finished jobs with the command:
The 3 main resources to keep an eye on are:
- walltime
The maximum is 5 days. Always set a walltime that is close to the duration of your job (with some margin). Longer walltimes cause longer wait times in the queue, as it is more difficult for the scheduler to find a hole in the schedule for your job.
- memory
Jobs get 4 GB per core by default. Small jobs requesting less than 1 GB are always considered good in terms of resources, regardless of their memory use. Always request slightly more memory (about 10%) than your job needs. Requesting excessive memory will not make your job any faster, but it will make it wait longer in the queue.
- core activity
Also known as CPU time. It is only reported for jobs running more than 5 minutes. Keep in mind that not all software can exploit an arbitrary number of cores. For instance, R is limited to 1 core by default and any additional cores will be ignored.
Making efficient use of the HPC cluster is important for you and your colleagues. Requesting too many resources is detrimental in several ways:
Your jobs will stay longer in the queue: the larger the pool of requested resources, the more difficult it is for the resource manager to free them
Big jobs will impact the whole queue and can slow it down beyond their own waiting time in it
Any computational resources requested but not used are a waste of energy, which directly translates to carbon emissions
You can also check the resource usage of your jobs once they finish. The last few lines of the job output file show the exact requested and used resources:
Resources Requested: walltime=00:05:00,nodes=1:ppn=1,mem=1gb,neednodes=1:ppn=1
Resources Used: cput=00:01:19,vmem=105272kb,walltime=00:01:22,mem=17988kb,energy_used=0
1.6. How can I get more storage?¶
The storage provided in the individual partitions, such as $VSC_SCRATCH, is relatively limited on purpose. Users needing a larger storage space are expected to be part of a Virtual Organization (VO) and use its shared storage.
1.7. How can I use GPUs in my jobs?¶
The available GPUs in Hydra are listed in VSC Docs: Hydra hardware
Use the following PBS directive in your job to request a single GPU:
#PBS -l nodes=1:ppn=1:gpus=1
If you want to run your job on a specific GPU type, add one of the following features:
#PBS -l feature=kepler  (for the K20Xm)
#PBS -l feature=pascal  (for the P100)
#PBS -l feature=geforce (for the 1080Ti)
Keep in mind that more specific job requests will probably have to wait longer in the queue.
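Putting these directives together, a GPU job script might look like the following sketch. The module and program names are placeholders, not software provided on Hydra:

```shell
#!/bin/bash
#PBS -l walltime=04:00:00
#PBS -l nodes=1:ppn=1:gpus=1   # request 1 core and a single GPU
#PBS -l feature=pascal         # optional: restrict the job to P100 GPUs

cd $PBS_O_WORKDIR
module load my_cuda_app        # placeholder: a module built with CUDA support
./my_gpu_program               # placeholder GPU-enabled program
```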
Before you submit a GPU job, make sure that you use software that is optimized for running on GPUs with support for CUDA. You can check this by looking at the name of the module: if the module name contains the word gompic, the software is built with CUDA support.
1.8. Where can I find public datasets and databases?¶
We provide storage for datasets and databases that are public and free to use in the shared directory /databases. The data in there is accessible by all users of the HPC cluster. Users who need public data to run their calculations should always check first if it is already available in /databases.
Helpdesk: We can add new databases to the HPC cluster upon request.
The PDB database can be found in /databases/bio/PDB and is automatically updated on a weekly basis.
1.9. Can I run containers on Hydra?¶
We support Singularity containers in Hydra. Singularity allows any user to run a container without root privileges. You can use any of the containers already installed in Hydra, located in /apps/brussel/singularity/, use your own container, or request the installation of a container from VUB-HPC Support.
Users can create their own Singularity image on their personal computer (with root privileges), either manually or by importing an existing Docker image. The resulting container can then be transferred to Hydra and run with your VSC user account. We recommend using Singularity containers for software with very specific and complex requirements (dependencies, installation, …) and for quickly testing any software.
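This workflow can be sketched as follows. The image is a public Docker Hub example, and the account name and login host are placeholders for your own:

```shell
# On your own machine, with root privileges:
# build a Singularity image from an existing Docker image
sudo singularity build lolcow.sif docker://godlovedc/lolcow

# Transfer the image to Hydra (account and host are placeholders)
scp lolcow.sif <vsc-account>@<hydra-login-node>:~/

# On Hydra, run a command inside the container without root privileges
singularity exec lolcow.sif cowsay hello
```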
See also the documentation by Sylabs, the developers of Singularity.
The Module System is still the preferred method to run software in Hydra. Singularity containers are usually not optimized for the different CPU architectures present in the cluster and put more network pressure on the system.