Update README.md
README.md
CHANGED


## Overview

This repository hosts pre-trained checkpoints from the **OpenMind** benchmark:
**An OpenMind for 3D medical vision self-supervised learning** (Wald, T., Ulrich, C., Suprijadi, J., Ziegler, S., Nohel, M., Peretzke, R., ... & Maier-Hein, K. H., 2024)
([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)), the first extensive benchmark study for **self-supervised learning (SSL)** on **3D medical imaging** data.

Each model was pre-trained using a particular SSL method on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.

**These models are not recommended for use as-is for feature extraction.** Instead, we recommend using the downstream fine-tuning frameworks for **segmentation** and **classification** adaptation, available in the [adaptation repository](https://github.com/TaWald/nnUNet).
*While manual download is possible, we recommend using the auto-download feature of the fine-tuning repository: provide the Hugging Face repository URL instead of a local checkpoint path.*
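
For reference, a manual download can also be scripted with the `huggingface_hub` client, as sketched below. This is illustrative only: the repository id is a placeholder, not the confirmed id of this checkpoint collection.

```python
# Illustrative sketch of a manual checkpoint download (not the recommended workflow).
# Assumption: "AnonRes/<checkpoint-repo>" is a placeholder repo id; replace it with
# the actual Hugging Face repository hosting the checkpoint you want.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="AnonRes/<checkpoint-repo>")
print(f"Checkpoint files downloaded to: {local_dir}")
```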

---

## Model Variants

We release SSL checkpoints for two backbone architectures:

- **ResEnc-L**: A CNN-based encoder [[a](https://arxiv.org/abs/2410.23132), [b](https://arxiv.org/abs/2404.09556)]
- **Primus-M**: A transformer-based encoder [[Primus paper](https://arxiv.org/abs/2503.01835)]

Each encoder has been pre-trained using one of the following SSL techniques:

| Method | Description |
|---------------|-------------|
| [Volume Contrastive (VoCo)](https://arxiv.org/abs/2402.17300) | Contrastive pre-training method for 3D volumes |
| [VolumeFusion (VF)](https://arxiv.org/abs/2306.16925) | Spatial volume-fusion-based segmentation SSL method |
| [Models Genesis (MG)](https://www.sciencedirect.com/science/article/pii/S1361841520302048) | Reconstruction- and denoising-based pre-training method |
| [Masked Autoencoders (MAE)](https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper) | Default reconstruction-based pre-training method |
| [Spark 3D (S3D)](https://arxiv.org/abs/2410.23132) | Sparse reconstruction-based pre-training method (CNN only) |
| [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked-reconstruction pre-training method (Transformer only) |
| [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Rotation-, contrastive-, and reconstruction-based pre-training method |
| [SimCLR](https://arxiv.org/abs/2002.05709) | 2D contrastive-learning baseline transferred to 3D |