
Open OnDemand

Open OnDemand (official page) is a web portal providing a single point of access to the UTHPC cluster. It can be used to view your files, submit jobs and much more.

Note

There are two ways to gain access:

  1. University of Tartu login - this can be used with a University of Tartu Rocket cluster account.

  2. ETAIS login - available for ETAIS users.

Authenticating Requirements

Regardless of the authentication method, it is necessary to have these three things:

  • An active HPC allocation.
  • Have logged in to the cluster via SSH at least once so that your home directory is created. This can also be done in the OnDemand portal.
  • Only for ETAIS login: An ETAIS account with a freeIPA user.
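The one-time SSH login mentioned above can be done from any terminal. A sketch, assuming your cluster username and the login1.hpc.ut.ee host described later in this guide:

```shell
# Log in once via SSH so that your home directory gets created.
# Replace <username> with your UT or ETAIS cluster username.
ssh <username>@login1.hpc.ut.ee
```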

Note

You are currently unable to log out of the authenticated session. If you want to log out after finishing your work, it's best to clear your cookies afterwards or use incognito mode.

Basic usage

The site can be found at ondemand.hpc.ut.ee. Upon logging in, you will be greeted with the dashboard. It is currently very basic, but more functionality will be added in the future.

Submitting jobs

The job composer allows you to generate and run job scripts. You can either create a job from scratch or use one of our predefined templates, which can be found in the Templates tab.

Warning

OnDemand currently has no way of knowing your allocations. You will need to find them in your UTHPC allocation info in ETAIS and apply them yourself: put the allocation in the job script, or set it under the Job Options button.
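For example, an allocation can be applied in the job script itself via Slurm's --account directive. A minimal sketch, where my_allocation is a placeholder for the account name found in your ETAIS allocation info:

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --account=my_allocation   # placeholder: your UTHPC allocation from ETAIS
#SBATCH --partition=main          # default partition on the Rocket cluster
#SBATCH --time=01:00:00           # time limit (hh:mm:ss)
#SBATCH --mem=2G                  # requested memory

echo "Running on $(hostname)"
```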

The file navigator gives you the same access permissions as your Unix user on the cluster. You can freely navigate your home directory and the /gpfs/space/projects project directories, as long as you have permissions.

Interactive terminal

At the top of the screen, you can use Clusters → Rocket Shell Access to open a terminal inside the browser. This gives you an interactive view of the cluster when needed. The shell opens on login1.hpc.ut.ee and is equivalent to SSHing there from your own terminal.

RStudio

To use RStudio, click the RStudio icon after logging in; it is located in the second row of the home screen. This opens a form where Slurm job values can be filled in.

Launching the App

When launching the RStudio app, you will need to configure several settings:

  • Account:
    • If using a UT login, the default account uthpc should be used.
    • If using an ETAIS login, choose the appropriate Account allocation under which you would like to use the resources.
  • Partition:
    • This determines which partition on the Rocket cluster your job will run on.
    • In most cases, the main partition should be used. More information about partitions.
  • CPU cores:
    • Select the number of CPU cores needed for your RStudio instance.
    • Generally, 1 core is sufficient. Consider parallel processing with R to utilize multiple cores.
  • Memory (GB):
    • Choose the amount of memory required by the RStudio instance.
    • On your very first run, allocate more memory (12 GB); otherwise the job may run out of memory, because on first startup the container image is built for your user.
    • Typically, 4-6 GB is sufficient, but larger datasets may require more.
  • R version:
    • Select the desired R version.
  • Number of hours:
    • Since RStudio runs within a Slurm job, you must specify a time limit.
    • The job does not stop automatically when idle; it only stops when the time limit is reached.
  • Data location:
    • This specifies where the RStudio session data is saved.
    • It can be left blank, but if collaborating on a project, a shared folder might be convenient.
    • Using a project folder allows another user to start an RStudio instance referencing the same folder, enabling shared session history and environment variables.
    • Note: this is not the default home folder where RStudio runs your code; the data location must be navigated to manually from within RStudio.

Job states

After clicking Launch, the job is submitted to the cluster. You will be redirected to a job status screen where the job state is displayed. The Slurm job ID is shown in brackets next to RStudio, which may be useful for debugging.

  • Queued: The job is waiting in the Slurm queue.
  • Starting: The Singularity container for RStudio is being built. The first startup may take up to 5 minutes.
    • To monitor progress, click on the Session ID number.
    • In the session’s filesystem view, check output.log for real-time updates.
    • Additional files in this directory may provide insight into the app's background processes.
  • Running: The container is ready, and you can connect to RStudio by clicking Connect to RStudio Server at the bottom of the job screen.
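If you prefer a terminal, the same job state can be checked with standard Slurm commands, using the job ID shown in brackets:

```shell
# Show the current state of the RStudio job (replace <job_id>).
squeue -j <job_id>

# More detail, including the reason a job is still pending:
scontrol show job <job_id>
```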

Tips and Best Practices

Debugging

If you are having issues starting up your RStudio instance, check the logs from your Rocket user at the path ondemand/data/sys/dashboard/batch_connect/sys/bc_rstudio/output/<session_id>/output.log. Any errors should be shown there.
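From a shell on the cluster, that log can be followed live. A sketch, assuming the path above relative to your home directory:

```shell
# Follow the session log in real time (replace <session_id>).
tail -f ~/ondemand/data/sys/dashboard/batch_connect/sys/bc_rstudio/output/<session_id>/output.log
```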

If the first startup of your RStudio instance failed, clean up the RStudio session folder before trying again. It is located at the path you specified, or by default at ~/.local/share/rstudio.
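A sketch of that cleanup, assuming the default location; double-check the path before running, since rm -rf is destructive:

```shell
# Remove stale RStudio session state before retrying the first startup.
rm -rf ~/.local/share/rstudio
```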

Change the Working Directory

To set the working directory, run the following command:

setwd("/intended_working_directory/")

Then in the Files tab, navigate to More and click Go To Working Directory. Now, your workspace operates from the specified directory, allowing you to browse files conveniently.

The RStudio instance provides access to project folders located at /gpfs/space/projects and /gpfs/helios/projects.

Adding a Library Path

To add a library path, run:

.libPaths(c("~/userLibrary", .libPaths()))

This adds the user library to the top of the path list, making it the first place packages are searched for. Verify the library paths using .libPaths().

Monitoring Memory Usage

To check memory usage in RStudio, navigate to Tools → Memory → Memory Usage Report. This shows the memory usage of R inside the container.

For broader monitoring, visit elk.hpc.ut.ee to track job-level resource usage. Note that this displays overall job metrics rather than the specific resources utilized within RStudio.

Cleaning Up Resources

Once you have finished working in RStudio, consider terminating the job if significant time remains. This helps free up cluster resources for other users.

To delete a running job:

  1. Go to My Interactive Sessions.
  2. Click Delete next to the RStudio job.

This ensures efficient resource utilization within the cluster.
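The same cleanup can also be done from a terminal with Slurm's scancel, using the job ID shown in the session view:

```shell
# Cancel the RStudio job manually to free its resources (replace <job_id>).
scancel <job_id>
```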

Important notice

Billing is based on the RStudio instance being active as a Slurm job with the resource specification requested at launch. It does not account for the individual processes running inside your RStudio instance. Accounting stops when the time limit expires or when you shut the instance down manually.