Frequently asked questions (FAQ)
- 1. How can I connect to Hydra?
- 2. What can I do in the login node?
- 3. Where can I find installed software?
- 4. How can I check my disk quota?
- 5. I have exceeded my $HOME disk quota, what now?
- 6. How can I check my resource usage?
- 7. I have accidentally deleted some data, how can I restore it?
- 8. How can I share files with other users?
- 9. Why is my job not starting?
- 10. Why has my job crashed?
- 11. How can I use the GPUs to run my jobs?
- 12. Jobs on the GPU nodes fail with “all CUDA-capable devices are busy or unavailable”
- 13. How can I run a job longer than the maximum allowed wall time?
- 14. Where can I find public datasets and databases?
- 15. Can I run containers on Hydra?
First check the page Creating an account, to make sure that your netID is activated for Hydra or that you have a valid VSC account.
Next, follow the instructions in the HPC tutorial.
If you cannot connect after following the instructions, there are a few things you can check:
Is your password correct?
You can reset your password in your personal account manager (PAM). Note that it can take up to 1 hour before the new password is propagated to our password database and you can use it to log in.
Are you using an SSH key?
Make sure that your private key file is in the right place and has the right permissions.
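For example, assuming your private key is stored at ~/.ssh/id_rsa (the key path and the login hostname below are only illustrative; check the HPC tutorial for the exact values), you can fix the permissions and test the connection as follows:

```
# restrict access to your SSH directory and private key
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa

# test the connection with verbose output to see which key is offered
ssh -v -i ~/.ssh/id_rsa <netID>@<hydra-login-node>
```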
If you still cannot connect, please contact us at firstname.lastname@example.org.
The login node is your main interface with the compute nodes. This is where you can access your data, write the scripts and input files for your calculations, and submit them to the job scheduler of Hydra. It is also possible to run small scripts on the login node, for instance to process the results of your calculations or to test your scripts before submitting them to the scheduler. However, the login node is not the place to run your calculations, so the following restrictions apply:
- Any single user can use a maximum of 12GB of memory on the login node.
- The amount of CPU time that can be used is always fairly divided over all users. A single user cannot occupy all CPU cores.
- The allowed network connections to the outside are limited to SSH, FTP, HTTP and HTTPS.
A graphical desktop environment is available through VNC, allowing the use of complex graphical programs and visualization tools. More information can be found in the Software section: 4. How can I run graphical applications?
Jobs submitted to the scheduler are preprocessed before placement in the queues to ensure that their resource requirements are valid. For instance, the system automatically assigns memory limits to your job if you did not specify them manually. Detailed information can be found in the section Job Submission: Hydra.
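As a sketch (the values below are only examples), you can set the resource limits yourself in the job script instead of relying on the automatic defaults:

```
#PBS -l nodes=1:ppn=1
#PBS -l mem=4gb
#PBS -l walltime=02:00:00
```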
Users compiling their own software should check 3. How can I install additional software/packages?
To prevent your account from becoming unusable, you should regularly check your disk quota and clean up your $HOME ($VSC_HOME), $VSC_DATA (VSC accounts only) and $VSC_SCRATCH.
You can check your disk quota with the following command:

```
myquota
```
Users with a VSC account will regularly receive warning emails when they have reached 90% of their quota.
When your $HOME ($VSC_HOME) is full, your account will become unusable.
In that case you have to delete files from your $HOME, such as temporary files and checkpoint files, until there is enough free space.
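To see which files or directories are taking up the space, you could for example run the following (a minimal sketch that lists the size of every item in your $HOME, including hidden ones):

```
# report the size of each file and directory in $HOME (hidden ones included), largest last
du -sh $HOME/.[!.]* $HOME/* 2>/dev/null | sort -h
```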
If you need more backup storage, users with a VSC account can use $VSC_DATA.
If you need large temporary storage for your running jobs, you can use $VSC_SCRATCH. Files will not be deleted there, but there is also no backup of your data on $VSC_SCRATCH.
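A common pattern (only a sketch; the paths are illustrative) is to let the job work in a scratch directory and copy the results back to a backed-up location afterwards:

```
# create a per-job working directory on scratch
WORKDIR=$VSC_SCRATCH/$PBS_JOBID
mkdir -p $WORKDIR
cp $HOME/myproject/input.dat $WORKDIR/
cd $WORKDIR

# ... run the calculation here ...

# copy the results back to storage with backups
cp results.out $VSC_DATA/myproject/   # or $HOME if you do not have a VSC account
```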
Remember that the HPC team is not responsible for data loss (although we will do everything we can to avoid it). You should regularly back up important data and clean up your $HOME, $VSC_DATA and $VSC_SCRATCH.
If you need more storage than is available by default, please contact us at email@example.com.
Making efficient use of the HPC cluster is important for you and your colleagues:
- Your jobs will start faster.
- Better usage means the HPC team can buy more/faster hardware.
The 3 main resources to keep an eye on are:
- memory usage
- wall time usage
- cores usage (CPU time): how many cores are doing actual work?
You can check resource usage of running and recently finished jobs with the command:

```
myresources
```
- Cores usage is only reported for jobs that have been running for longer than 5 minutes.
- If your requested memory is 1GiB or less, this is always considered good (you get 2GiB ‘for free’).
You can also check the resource usage of finished jobs at the end of your job output file. The last few lines of this file show the requested and used resources, for example:

```
Resources Requested: walltime=00:05:00,nodes=1:ppn=1,mem=1gb,neednodes=1:ppn=1
Resources Used: cput=00:01:19,vmem=105272kb,walltime=00:01:22,mem=17988kb,energy_used=0
```
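In this example the job requested 1 core and 1gb of memory; it used cput=00:01:19 of CPU time over walltime=00:01:22, so the CPU efficiency is roughly 79 s / (82 s × 1 core) ≈ 96%, and the memory used (mem=17988kb, about 18 MB) is well within the request.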
First of all, remember that you are responsible for your own backups: the HPC team does not guarantee the safety of your data on Hydra. Nevertheless, we will do everything we can to avoid any data loss.
If your deleted data was on your $VSC_SCRATCH, the data is permanently lost, as we do not make any backups there.
For netID users, we make daily backups of $HOME, and we keep them for 7 days.
For VSC account users, we make backups of $HOME and $VSC_DATA, and we keep them for 1 month.
To access the backups of your $HOME, you first need to find out the directory your $HOME is mounted on:

```
df $HOME
```
The above command will return output similar to this:

```
Filesystem                       1K-blocks      Used Available Use% Mounted on
172.31.244.238:/export/fs-svub1 1073741824 814717952 259023872  76% /svub1
```
In the example output above, $HOME is mounted on ‘/svub1’. Go to the snapshot directory (replace ‘/svub1’ with the directory your $HOME is mounted on):

```
cd /svub1/.zfs/snapshot
```
List all backups currently available:

```
ls -ahl
```
Each directory name contains a timestamp (i.e. the date and time when the backup was created). For example, the backup ‘.auto-20181118T020500UTC’ was created on November 18th at 02:05. To access one of the backups, do (replace the timestamp with the one you need):

```
cd .auto-20181118T020500UTC/<netID>
```
In this directory you will find all your files and directories in the state they were in at the time of that snapshot. You can browse this directory and copy back the data you need to your $HOME.
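For example (the paths are only illustrative), to copy a deleted directory back from that snapshot into your $HOME:

```
# -a preserves permissions and timestamps; adjust the paths to your situation
cp -a /svub1/.zfs/snapshot/.auto-20181118T020500UTC/<netID>/myproject $HOME/
```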
First of all, do not panic if your job does not start immediately. Depending on the load on Hydra, the number of jobs you have submitted recently, and the resources requested in your job script, waiting times can range from a few hours to several days. Usually, the load on Hydra is higher on weekdays and lower during weekends and holidays. Remember that the more resources you request, the longer you may have to wait in the queue.
To get an overview of the available hardware and its current load, you can issue the command:

```
nodestat
```
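To follow up on your own jobs in the queue, you can for example use the standard Torque/PBS command below (a sketch; the Q and R job states denote queued and running):

```
# list your own queued (Q) and running (R) jobs
qstat -u $USER
```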
There are many possible causes for a job crash. Here are just a few of the more common ones:
Requested resources (memory or walltime) exceeded
Check the last few lines of your job output file. If the used and requested memory or wall time are very similar, you have probably exceeded your requested resource. Increase the resource in your job script and resubmit.
Wrong module(s) loaded
Check the syntax (case sensitive) and version of the modules. If you are using more than one module, check that they are compiled with the same toolchain.
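For example (the module name below is only illustrative), you can verify which modules are loaded and look for versions built with a matching toolchain suffix:

```
# show the currently loaded modules; the toolchain is part of the version suffix (e.g. intel-2017b)
module list

# search for available versions of a package built with a matching toolchain
module avail <software_name>
```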
Wrong input filetype
Convert from DOS to Unix format with the command:

```
dos2unix <input_file>
```
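You can check beforehand whether a file actually has DOS line endings with, for example:

```
# 'file' reports "with CRLF line terminators" for DOS-formatted text files
file <input_file>
```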
We currently have three types of GPGPUs on Hydra:
- 12x Tesla K20Xm (6GB) (6 nodes, 2 GPUs each): ivybridge, 20 cores, 128 GB RAM
- 8x Tesla P100 (16GB) (4 nodes, 2 GPUs each): broadwell, 24 cores, 256 GB RAM
- 4x GeForce GTX 1080 Ti (11GB) (1 node, 4 GPUs): broadwell, 32 cores, 512 GB RAM
If you want to run a GPU job, submit with the following PBS directives:

```
#PBS -l nodes=1:ppn=1:gpus=1
```
If you want to run your job on a specific GPU type, add one of the following features:

```
#PBS -l feature=kepler     (for the K20Xm)
#PBS -l feature=pascal     (for the P100)
#PBS -l feature=geforce    (for the 1080Ti)
```
Note however that if you make more specific job requests you may have to wait longer in the queue.
Important: before you submit a GPU job, make sure that you use software that is optimized for running on GPUs with support for CUDA. You can check this by looking at the name of the module: if the module name contains gompic, for example, the software is built with CUDA support.
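Putting this together, a minimal GPU job script could look as follows (a sketch only; the module name and input file are hypothetical, so check which CUDA-enabled modules are actually installed):

```
#!/bin/bash
#PBS -l nodes=1:ppn=1:gpus=1
#PBS -l feature=pascal          # optional: request a P100 specifically
#PBS -l walltime=24:00:00
#PBS -l mem=16gb

cd $PBS_O_WORKDIR

# hypothetical CUDA-enabled module; pick one whose name indicates CUDA support (e.g. a gompic build)
module load MySoftware/1.0-gompic-2017b

mysoftware --input input.dat
```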
The maximum wall time for calculations on Hydra is 5 days. If your job requires a longer wall time, there are a few options to consider.
- If the software supports multiprocessing and/or multithreading, you can request more CPU cores for your job. Consult the documentation of the software for parallel execution instructions. However, to prevent wasting computational resources, you have to make sure that adding more cores gives you the expected speedup. It is very important that you perform a parallel efficiency test. For more info on parallel efficiency, see the HPC training slides. Note also that you may have to increase the requested RAM memory proportional to the number of processes.
- Long running calculations can often be split into restartable chunks that can be submitted one after the other. First consult the documentation of the software itself for restarting options.
- If the software does not support restarting, you can use checkpointing. Checkpointing means making a snapshot of the current state of your calculation and saving it to disk. The snapshot can then be used to restore the state and continue your calculation in a new job. Checkpointing on Hydra can be done conveniently with csub, a tool that automates the process of:
- halting the job
- checkpointing the job
- killing the job
- re-submitting the job to the queue
- reloading the checkpointed state into memory
For example, to submit the job script ‘myjob.pbs’ with checkpointing and re-submitting every 24 hours, type:

```
csub -s myjob.pbs --shared --job_time=24:00:00
```
This checkpointing and re-submitting cycle will be repeated until your calculation has completed.
Checkpointing and reloading is done as part of the job and typically takes up to 15 minutes, depending on the amount of RAM being used. Thus, in the example above you should specify the following PBS directive in your job script:

```
#PBS -l walltime=24:15:00
```
Job output and error files are written to the directory $VSC_SCRATCH/chkpt (along with checkpoint files and csub log files). Other output files created by your job may also be written there.
csub uses DMTCP (Distributed MultiThreaded CheckPointing). Users who want full control can also use DMTCP directly; example launch/restart job scripts are available for download.
csub/DMTCP is not yet tested with all installed software on Hydra. It has been successfully used with software written in Python, R, and with Gaussian. For more info on DMTCP-supported applications, see http://dmtcp.sourceforge.net/supportedApps.html
If you are running a Gaussian 16 job with csub, a few extra lines must be added to your job script:

```
module load Gaussian/G16.A.03-intel-2017b
unset LD_PRELOAD
module unload jemalloc/4.5.0-intel-2017b
export G09NOTUNING=1
export GAUSS_SCRDIR=$VSC_SCRATCH/<my_unique_gauss_scrdir>  # make sure this directory is present
```
If you run into issues with checkpointing/restarting, please contact us at firstname.lastname@example.org.
To avoid each user having to download their own private copy of publicly accessible data, a shared directory /databases is available on Hydra.
Users who need public data to run their calculations should always first check whether it is already available in /databases. If the data is not available, the HPC team can add it upon request at email@example.com.
Singularity containers are supported on Hydra for quickly testing software before creating an optimized build, and for some very complex installations.
Singularity allows running a container without root privileges. Users with root access on a Linux machine can create their own Singularity image, either manually or by importing an existing Docker image. The HPC team can also create images upon request at firstname.lastname@example.org.
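As an illustration (the image and command names are hypothetical, and the exact syntax can differ between Singularity versions), an image can be built from a Docker base on a machine where you have root access and then executed on Hydra without root:

```
# on your own Linux machine (root required): build an image from a Docker base
sudo singularity build myimage.sif docker://python:3.9

# on Hydra (no root required): run a command inside the container
singularity exec myimage.sif python3 myscript.py
```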
Note that our module system is still the preferred method of running software, because Singularity containers are usually not optimized for the different CPU architectures and put more network pressure on the cluster.
For more information on Singularity containers, see: https://www.sylabs.io/docs