Vision encoder used for digital ST

#1
by andrewsong90 - opened

Dear authors,

Thank you very much for the wonderful work!
For all the spots in TCGA digital ST, which vision encoder was used? The preprint says UNI/Phikon/H-optimus, but I was curious which one in particular was used as a backbone to generate this dataset.

Thank you,
Andrew

Hello Andrew,

Thank you for your interest. Here, we used the base models that were introduced in the corresponding publications. Let me know if you need the exact commits.
That said, I also ran experiments with more recent versions and saw some performance gains.

Please let me know if you have further questions.

Best,

Thank you for the quick response - maybe I wasn't clear with my question.
When you predicted ST for each patch of the TCGA WSIs (released here on HF), which vision encoder did you use?
You must have used one of the vision encoders you mentioned as the final feature extractor, rather than all of them.
I couldn't find this exact information in the preprint.

Best,

Hello Andrew,

Ah, I see. In our analysis, we treat the pathology foundation model as a hyperparameter that we select on the validation dataset. You can find the optimal hyperparameters and pretrained DeepSpot weights on Zenodo at https://zenodo.org/records/15322099 in top_param_overall.yaml. These are the models we used to generate the digital TCGA spatial transcriptomics data.

For TCGA LUSC and TCGA LUAD we used Lung_LUSC_LUAD_Visium; for TCGA SKCM, Melanoma_TuPro; and for TCGA KIRC, Kidney_HEST1K.
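For reference, the cohort-to-model mapping above can be written as a small lookup table. This is only an illustrative sketch; the dictionary and helper function are hypothetical and not part of DeepSpot's actual API:

```python
# Sketch: map each TCGA cohort to the pretrained DeepSpot model
# (with its validation-selected foundation-model backbone) used to
# generate the digital ST data, per the reply above.
COHORT_TO_MODEL = {
    "TCGA-LUSC": "Lung_LUSC_LUAD_Visium",
    "TCGA-LUAD": "Lung_LUSC_LUAD_Visium",
    "TCGA-SKCM": "Melanoma_TuPro",
    "TCGA-KIRC": "Kidney_HEST1K",
}

def model_for_cohort(cohort: str) -> str:
    """Return the pretrained DeepSpot model name for a TCGA cohort."""
    try:
        return COHORT_TO_MODEL[cohort]
    except KeyError:
        raise ValueError(f"No digital ST model listed for cohort {cohort!r}")
```

The exact backbone chosen for each of these models is recorded in top_param_overall.yaml on the Zenodo record linked above.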

Please let me know if you have further questions.

Best,
