File size: 10,736 Bytes
8dcd98f 3edbc93 8dcd98f 3edbc93 fafd10e 8dcd98f 3edbc93 fafd10e 3edbc93 fafd10e 8dcd98f fafd10e 8dcd98f fafd10e 8dcd98f 21f87d6 8dcd98f 21f87d6 8dcd98f fafd10e 21f87d6 fafd10e 0768e70 3edbc93 8dcd98f 89d69bf 62b6599 3edbc93 8dcd98f 3edbc93 8dcd98f d00113e 3edbc93 8dcd98f 813ce52 58db0a0 21f87d6 d00113e 58db0a0 d00113e 3edbc93 10e69e7 fafd10e 3edbc93 10e69e7 89d69bf 10e69e7 8dcd98f 10e69e7 89d69bf 10e69e7 89d69bf 10e69e7 89d69bf 10e69e7 89d69bf 10e69e7 fafd10e 8f9985e |
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 |
ABOUT_INTRO = f"""
## About this challenge
### Register [here](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition)!
- Task: Predict 5 antibody developability properties based on the assays in the [GDPa1 dataset](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1)
- Submissions are scored on a private held-out test set.
#### What is antibody developability and why is it important?
Antibodies have to be manufacturable, stable in high concentrations, and have low off-target effects.
Properties such as these can often hinder the progression of an antibody to the clinic, and are collectively referred to as 'developability'.
Here we invite the community to submit and develop better predictors, which will be tested out on a heldout private set to assess model generalization.
#### 🏆 Prizes
For each of the 5 properties in the competition, there is a prize for the model with the highest performance for that property on the private test set.
There is also an 'open-source' prize for the best model trained on the GDPa1 dataset (reporting cross-validation results) and assessed on the private test set where authors provide all training code and data.
For each of these 6 prizes, participants have the choice between **$10k in data generation credits** with [Ginkgo Datapoints](https://datapoints.ginkgo.bio/) or a **cash prize** with a value of $2000.
See the FAQ below for more details.
"""
# TODO include link to competition terms on datapoints website
ABOUT_TEXT = """
#### How to participate?
1. **Create a Hugging Face account** [here](https://huggingface.co/join) if you don't have one yet (this is used to track unique submissions and to access the GDPa1 dataset).
2. **Register your team** on the [Competition Registration](https://datapoints.ginkgo.bio/ai-competitions/2025-abdev-competition) page.
3. **Build a model** or validate it on the [GDPa1](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1) dataset.
4. **Choose a validation track**. You must first upload predictions on the validation set before submitting predictions on the private test set:
- **Track 1**: If you already have a developability model, you can submit your predictions for the GDPa1 public dataset.
- **Track 2**: If you don't have a model, train one using cross-validation on the GDPa1 dataset and submit your predictions under the "Cross-validation" option.
5. **Submit your predictions** as a CSV with `antibody_name` + one column per property you are predicting (e.g. `"antibody_name,Titer,PR_CHO"` if your model predicts Titer and Polyreactivity).
You do **not** need to predict all 5 properties — each property has its own leaderboard and prize.
If you click the "Anonymous" checkbox, your predictions will not be displayed alongside your Hugging Face username but with a random ID.
6. **Final test submission**: Download test sequences from the "✉️ Submit" tab and upload predictions.
The validation set results will appear on the leaderboard after a few minutes. The private test set results will not appear on the leaderboard, and will be used to determine the winners at the close of the competition.
We may release private test set results at intermediate points during the competition.
There is an example submission file on the "✉️ Submit" tab. When you are ready to submit your predictions for the private test set, download the test set sequences from the "✉️ Submit" tab and follow the same process.
For the cross-validation metrics (if training only on the GDPa1 dataset), use the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column to split the dataset into folds and make predictions for each of the folds.
Submit a CSV file in the same format but also containing the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column.
There is also an example cross-validation submission file on the "✉️ Submit" tab, and we will be releasing a full cross-validation code tutorial shortly.
#### How to evaluate?
You can easily calculate the Spearman correlation coefficient on the GDPa1 dataset yourself before uploading to the leaderboard.
Simply use the `spearmanr(predictions, targets, nan_policy='omit')` function from `scipy.stats`.
For the heldout private set, we will calculate these results privately at the end of the competition (and possibly at other points throughout the competition) - but there will not be "rolling results" on the private test set.
#### How to contribute?
We'd like to add some more existing models to the leaderboard. Some examples of models we'd like to add:
- ESM embeddings + ridge regression
- Absolute folding stability models (for Thermostability)
- AbLEF
If you would like to collaborate with others, start a discussion on the "Community" tab at the top of this page.
"""
# Note(Lood): Let's track these FAQs in the main Google Doc and have that remain the source of truth.
# Note(Lood): Add another note of "many models are trained on different datasets, and differing train/test splits, so this is a consistent way of comparing for a heldout set"
FAQS = {
"Is there a fee to enter?": "No. Participation is free of charge.",
"Who can participate?": "Anyone. We encourage academic labs, individuals, and especially industry teams who use developability models in production.",
"Where can I find more information about the methods used to generate the data?": (
"Our [PROPHET-Ab preprint](https://www.biorxiv.org/content/10.1101/2025.05.01.651684v1) described in detail the methods used to generate the training dataset. "
"Note: These assays may differ from previously published methods, and these correlations between literature data and experimental data are also described in the preprint. "
"These same methods are used to generate the heldout test data."
),
"How were the heldout sequences designed?": (
"We sampled 80 paired antibody sequences from [OAS](https://opig.stats.ox.ac.uk/webapps/oas/). We tried to represent the range of germline variants, sequence identities to germline, and CDR3 lengths. "
"The sequences in the dataset are quite diverse as measured by pairwise sequence identity."
),
"Do I need to design new proteins?": (
"No. This is just a predictive competition, which will be judged according to the correlation between predictions and experimental values. There may be a generative round in the future."
),
"Can I participate anonymously?": (
"Yes! Please still create an anonymous Hugging Face account so that we can uniquely associate submissions. Note that top participants will be contacted to identify themselves at the end of the tournament."
),
"How is intellectual property handled?": (
"Participants retain IP rights to the methods they use and develop during the tournament. Read more details in our terms here [link]."
),
"Do I need to submit my code / methods in order to participate?": (
"No, there are no requirements to submit code / methods and submitted predictions remain private. "
"We also have an optional field for including a short model description. "
"Top performing participants will be requested to identify themselves at the end of the tournament. "
"There will be one prize for the best open-source model, which will require code / methods to be available."
),
"How often does the leaderboard update?": (
"The leaderboard should reflect new submissions within a minute of submitting. Note that the leaderboard will not show the results on the private test set, these will be calculated once at the end of the tournament (and possibly at another occasion before that)."
),
"How many submissions can I make?": (
"You can currently make unlimited submissions, but we may choose to limit the number of possible submissions per user. For the private test set evaluation the latest submission will be used."
),
"How are winners determined?": (
'There will be 6 prizes (one for each of the assay properties plus an "open-source" prize). '
'For the property-specific prizes, winners will be determined by the submission with the highest Spearman rank correlation coefficient on the private holdout set. '
'For the "open-source" prize, this will be determined by the highest average Spearman across all properties. '
"We reserve the right to award the open-source prize to a predictor with competitive results for a subset of properties (e.g. a top polyreactivity model)."
),
"How does the open-source prize work?": (
"Participants who open-source their code and methods will be eligible for the open-source prize (as well as the other prizes)."
),
"What do I need to submit?": (
'There is a tab on the Hugging Face competition page to upload predictions for datasets - for each dataset participants need to submit a CSV containing a column for each property they would like to predict (e.g. called "HIC"), '
'and a row with the sequence matching the sequence in the input file. These predictions are then evaluated in the backend using the Spearman rank correlation between predictions and experimental values, and these metrics are then added to the leaderboard. '
'Predictions remain private and are not seen by other contestants.'
),
"Can I submit predictions for only one property?": (
"Yes. You do not need to predict all 5 properties to participate. Each property has its own leaderboard and prize, so you may submit models for a subset of the assays if you wish."
),
"Can I switch between Track 1 and Track 2 during the competition?": (
"Yes. You may submit to both tracks. For example, you can benchmark an existing model on the GDPa1 dataset (Track 1) and later also train and submit a cross-validation model on GDPa1 (Track 2)."
),
"Are participants required to use the provided cross-validation splits?": (
"Yes, if submitting cross-validation results, to ensure fair comparison. The results will be calculated by taking the average Spearman correlation coefficient across all folds."
),
"Are there any country restrictions for prize eligibility?": (
"Yes. Due to applicable laws, prizes cannot be awarded to participants from countries under U.S. sanctions. See the competition terms for details."
),
"How are private test set submissions handled?": (
"We will use the private test set submission at the close of the competition to determine the winners. "
"If there are any intermediate releases of private test set results, these will not affect the final ranking."
),
}
|