|
ABOUT_TEXT = """ |
|
## About this challenge |
|
|
|
We're inviting the ML/bio community to predict developability properties for 244 antibodies from the [GDPa1 dataset](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1). |
|
|
|
**What is antibody developability?** |
|
|
|
Antibodies must be manufacturable, stable at high concentrations, and have minimal off-target effects.

Shortcomings in properties such as these often hinder an antibody's progression to the clinic; collectively, they are referred to as 'developability'.

Here we measure five of these properties and invite the community to develop and submit better predictors, which will be evaluated on a held-out private set to assess model generalization.
|
|
|
**How to submit?** |
|
|
|
1. Download the [GDPa1 dataset](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1) |
|
2. Make predictions for all the antibody sequences for your property of interest. |
|
3. Submit a CSV file containing the `"antibody_name"` column and a column matching the property name you are predicting (e.g. `"antibody_name,Titer"` if you are predicting Titer). |
|
There is an example submission file on the "✉️ Submit" tab. |
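As a minimal sketch, assuming pandas and placeholder antibody names and values (real names come from GDPa1 and real values from your model), a submission file could be assembled like this:

```python
import pandas as pd

# Placeholder predictions for illustration only; use the real antibody
# names from the GDPa1 dataset and your model's predicted values.
submission = pd.DataFrame({
    "antibody_name": ["ab_001", "ab_002", "ab_003"],
    "Titer": [210.5, 180.2, 305.9],
})
submission.to_csv("submission.csv", index=False)
```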
|
|
|
For the cross-validation metrics (if training only on the GDPa1 dataset), use the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column to split the dataset into folds and make out-of-fold predictions for each fold.
|
Submit a CSV file in the same format but also containing the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column. |
|
There is also an example cross-validation submission file on the "✉️ Submit" tab, and we will be releasing a full code tutorial shortly. |
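A minimal cross-validation sketch, with a toy DataFrame standing in for GDPa1 and a training-set mean as a placeholder model (only the fold column name is taken from the dataset):

```python
import pandas as pd

fold_col = "hierarchical_cluster_IgG_isotype_stratified_fold"

# Toy stand-in for GDPa1; in practice, load the real dataset from Hugging Face.
df = pd.DataFrame({
    "antibody_name": [f"ab_{i:03d}" for i in range(6)],
    "Titer": [200.0, 180.0, 310.0, 150.0, 260.0, 220.0],
    fold_col: [0, 0, 1, 1, 2, 2],
})

parts = []
for fold in sorted(df[fold_col].unique()):
    train = df[df[fold_col] != fold]
    held_out = df[df[fold_col] == fold].copy()
    # Placeholder "model": predict the training-set mean for the held-out fold.
    held_out["Titer"] = train["Titer"].mean()
    parts.append(held_out[["antibody_name", "Titer", fold_col]])

pd.concat(parts).to_csv("cv_submission.csv", index=False)
```

Replace the mean baseline with your actual model's out-of-fold predictions; the resulting CSV has the same columns as a standard submission plus the fold column.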
|
|
|
**How to evaluate?** |
|
|
|
You can calculate the Spearman correlation coefficient on the GDPa1 dataset yourself before uploading to the leaderboard. |
|
Simply use the `spearmanr(predictions, targets, nan_policy='omit')` function from `scipy.stats`. |
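For example, with hypothetical prediction and target values (`nan_policy='omit'` drops pairs where either value is missing before ranking):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical values for illustration; the final pair contains a NaN
# prediction and is omitted from the correlation.
predictions = np.array([0.1, 0.4, 0.2, 0.9, np.nan])
targets = np.array([10.0, 30.0, 20.0, 50.0, 40.0])

rho, pvalue = spearmanr(predictions, targets, nan_policy="omit")
```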
|
For the held-out private set, we will calculate results privately at the end of the competition (and possibly at other points during it), but there will be no "rolling results" on the private test set.
|
|
|
**How to contribute?** |
|
|
|
We'd like to add some more existing models to the leaderboard. Some examples of models we'd like to add: |
|
- ESM embeddings + ridge regression |
|
- Absolute folding stability models |
|
- AbLEF |
|
|
|
If you would like to collaborate with others, start a discussion on the "Community" tab at the top of this page. |
|
|
|
### FAQs |
|
|
|
""" |
|
|
|
FAQS = { |
|
"Is there a fee to enter?": "No. Participation is free of charge.", |
|
"Who can participate?": "Anyone. We encourage academic labs, individuals, and especially industry teams who use developability models in production.", |
|
"Where can I find more information about the methods used to generate the data?": ( |
|
"Our [PROPHET-Ab preprint](https://www.biorxiv.org/content/10.1101/2025.05.01.651684v1) described in detail the methods used to generate the training dataset. " |
|
"Note: These assays may differ from previously published methods, and these correlations between literature data and experimental data are also described in the preprint. " |
|
"These same methods are used to generate the heldout test data." |
|
), |
|
"How were the heldout sequences designed?": ( |
|
"We sampled 80 paired antibody sequences from [OAS](https://opig.stats.ox.ac.uk/webapps/oas/). We tried to represent the range of germline variants, sequence identities to germline, and CDR3 lengths. " |
|
"The sequences in the dataset are quite diverse as measured by pairwise sequence identity." |
|
), |
|
"Do I need to design new proteins?": ( |
|
"No. This is just a predictive competition, which will be judged according to the correlation between predictions and experimental values. There may be a generative round in the future." |
|
), |
|
"Can I participate anonymously?": ( |
|
"Yes! Please create an anonymous Hugging Face account so that we can uniquely associate submissions. Note that top participants will be contacted to identify themselves at the end of the tournament." |
|
), |
|
"How is intellectual property handled?": ( |
|
"Participants retain IP rights to the methods they use and develop during the tournament. Read more details in our terms here [link]." |
|
), |
|
"Do I need to submit my code / methods in order to participate?": ( |
|
"No, there are no requirements to submit code / methods and submitted predictions remain private. " |
|
"We also have an optional field for including a short model description. " |
|
"Top performing participants will be requested to identify themselves at the end of the tournament. " |
|
"There will be one prize for the best open-source model, which will require code / methods to be available." |
|
), |
|
"How often does the leaderboard update?": ( |
|
"The leaderboard should reflect new submissions within a minute of submitting. Note that the leaderboard will not show the results on the private test set, these will be calculated once at the end of the tournament (and possibly at another occasion before that)." |
|
), |
|
"How many submissions can I make?": ( |
|
"You can currently make unlimited submissions, but we may choose to limit the number of possible submissions per user. For the private test set evaluation the latest submission will be used." |
|
), |
|
"How are winners determined?": ( |
|
'There will be 6 prizes (one for each of the assay properties plus an "open-source" prize). ' |
|
'For the property-specific prizes, winners will be determined by the submission with the highest Spearman rank correlation coefficient on the private holdout set. ' |
|
'For the "open-source" prize, this will be determined by the highest average Spearman across all properties. ' |
|
"We reserve the right to award the open-source prize to a predictor with competitive results for a subset of properties (e.g. a top polyreactivity model)." |
|
), |
|
"How does the open-source prize work?": ( |
|
"Participants who open-source their code and methods will be eligible for the open-source prize (as well as the other prizes)." |
|
), |
|
"What do I need to submit?": ( |
|
'There is a tab on the Hugging Face competition page for uploading predictions. For each dataset, participants submit a CSV containing a column for each property they would like to predict (e.g. "HIC"), '

'with one row per antibody whose `"antibody_name"` matches the input file. These predictions are evaluated in the backend using the Spearman rank correlation between predictions and experimental values, and the resulting metrics are added to the leaderboard. '

'Predictions remain private and are not seen by other contestants.'
|
), |
|
} |
|
|