RFantibody
This page documents the usage of the RFantibody pipeline on the UTHPC cluster.
Please also refer to the RFantibody GitHub documentation for more in-depth information.
If you encounter any issues or have ideas for improvements, please let us know at support@hpc.ut.ee
First steps
RFantibody is used by interacting with a container for maximum portability. The Singularity image is provided at /gpfs/space/software/cluster_software/containers/rfantibody/rfantibody.sif. All other prerequisites need to be installed by the user.
The following steps need to be done only once per user and will set up the necessary files and directories.
To start, clone the RFantibody repository to your desired location with git clone https://github.com/RosettaCommons/RFantibody.git. Throughout this documentation, we will be working in that directory.
Install the model weights with bash include/download_weights.sh. This will automatically download all needed files and place them in a directory called weights. This step should take a couple of minutes at most.
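Put together, the one-time preparation up to this point looks like this (the clone location is up to you):
git clone https://github.com/RosettaCommons/RFantibody.git
cd RFantibody
bash include/download_weights.sh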
In the cloned directory, run the following commands to finalize the installation:
module load singularity
singularity run --nv -B .:/home /gpfs/space/software/cluster_software/containers/rfantibody/rfantibody.sif
bash /home/include/setup.sh
The second command places you inside the container. The third command installs the required Python packages and other dependencies. This step should take no more than five minutes to complete. Finally, you can execute exit to close the container.
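As a quick sanity check (not part of the official guide), the Poetry environment created by setup.sh should let you import the package from inside the container; if the import below fails, re-run the setup step:
cd /home
poetry run python -c "import rfantibody"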
Bug fixes
Some issues with the default installation were discovered during testing.
According to this issue, to get RFdiffusion to work, a script needs to be copied as follows:
cp scripts/rfdiffusion_inference.py src/rfantibody/rfdiffusion/
The final RF2 step fails due to missing weight names. According to this issue, you should modify the file at src/rfantibody/rf2/config/base.yml and change the model_weights parameter to /home/weights/RF2_ab.pt instead.
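If you prefer to make that change non-interactively, a sed one-liner along these lines should work from the repository root, assuming the parameter appears on its own line as model_weights: in the file:
sed -i 's|model_weights:.*|model_weights: /home/weights/RF2_ab.pt|' src/rfantibody/rf2/config/base.yml
Afterwards, the relevant line should read model_weights: /home/weights/RF2_ab.pt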
Usage
The official guide assumes a live terminal session inside the container. Our attempts to get it working inside a Slurm batch script have been unsuccessful so far. To start an interactive job on the cluster with a GPU, you can use the following:
srun -p gpu --cpus-per-task=2 --mem=12G --gres=gpu:tesla:1 --pty bash
Please refer to our quickstart guide for more information about the options here. Do note that this job runs only for as long as you keep the terminal session open.
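If a design run needs more time than the default allows, you can request a time limit explicitly with the standard Slurm --time option (within the limits of the gpu partition), for example:
srun -p gpu --cpus-per-task=2 --mem=12G --gres=gpu:tesla:1 --time=08:00:00 --pty bash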
Activate the container with the following commands:
module load singularity
singularity run -H /home --nv -B .:/home /gpfs/space/software/cluster_software/containers/rfantibody/rfantibody.sif
The following steps are identical to the RFantibody GitHub documentation listed at the top of this page. The only difference is with RFdiffusion, where you need to replace /home/src/rfantibody/scripts/rfdiffusion_inference.py with /home/src/rfantibody/rfdiffusion/rfdiffusion_inference.py.
Most of the steps also have an example bash script provided. You can modify those to ease the running of longer commands.
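For illustration, an RFdiffusion invocation with the corrected script path could look like the sketch below; the input files and output prefix are placeholders, and the option names are taken from the upstream examples, so double-check them against the GitHub documentation:
poetry run python /home/src/rfantibody/rfdiffusion/rfdiffusion_inference.py \
    --config-name antibody \
    antibody.target_pdb=/home/my_inputs/target.pdb \
    antibody.framework_pdb=/home/my_inputs/framework.pdb \
    inference.num_designs=20 \
    inference.output_prefix=/home/my_outputs/design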
File paths
You might want to have inputs and outputs in a different directory. By default, Singularity also mounts your home directory, so you are free to use that when executing parts of the pipeline. If you wish to use a project directory, you need to bind it when starting the container, for example singularity run -H /home --nv -B .:/home -B /gpfs/space/projects/myproject:/myproject, which makes it available in your workflows as /myproject.
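Put together with the image path from above, the full command with an extra project bind could look like this (replace myproject with your actual project directory):
singularity run -H /home --nv -B .:/home -B /gpfs/space/projects/myproject:/myproject /gpfs/space/software/cluster_software/containers/rfantibody/rfantibody.sif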