DarthReca committed · Commit a5c78f1 · verified · 1 Parent(s): 7511e84

Update README.md

Files changed (1)
  1. README.md +37 -17
README.md CHANGED
@@ -4,29 +4,40 @@ datasets:
  - DarthReca/crisislandmark
  language:
  - en
- library_name: torchgeo
+ library_name: transformers
  tags:
- - climate
+ - remote-sensing
+ - text-to-image-retrieval
+ - multimodal
+ - geospatial
+ - SAR
+ - multispectral
+ - crisis-management
+ - earth-observation
+ - contrastive-learning
+ base_model:
+ - sentence-transformers/all-MiniLM-L6-v2
+ - torchgeo/resnet50_sentinel2_all_moco
+ - torchgeo/resnet50_sentinel1_all_moco
  ---
  # CLOSP-RN

- <!-- Provide a quick summary of what the model is/does. -->
+ CLOSP (Contrastive Language Optical SAR Pretraining) is a multimodal architecture designed for text-to-image retrieval.
+ It creates a unified embedding space for text, Sentinel-2 (MSI), and Sentinel-1 (SAR) data.
+ The CLOSP-RN variant uses a ResNet-50 vision backbone.

  ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
+ The model uses three separate encoders: one for text, one for Sentinel-1 (SAR) data, and one for Sentinel-2 (MSI) data.
+ During training, it uses a contrastive objective to align the textual embeddings with the corresponding visual embeddings (either SAR or MSI).


  - **Developed by:** Daniele Rege Cambrin
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
- - **Repository:** [More Information Needed]
- - **Paper:** [More Information Needed]
+ - **Model type:** CLOSP
+ - **Language(s) (NLP):** English
+ - **License:** OpenRAIL
+ - **Finetuned from model:** [More Information Needed]
+ - **Repository:** [GitHub](https://github.com/DarthReca/closp)
+ - **Paper:** [ArXiv](https://arxiv.org/abs/2507.10403)

  ## How to Get Started with the Model

@@ -34,7 +45,16 @@ Use the code below to get started with the model.

  [More Information Needed]

- ## Training Details
-
-
  ## Citation
+
+ ```bibtex
+ @misc{cambrin2025texttoremotesensingimageretrievalrgbsources,
+ title={Text-to-Remote-Sensing-Image Retrieval beyond RGB Sources},
+ author={Daniele Rege Cambrin and Lorenzo Vaiani and Giuseppe Gallipoli and Luca Cagliero and Paolo Garza},
+ year={2025},
+ eprint={2507.10403},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV},
+ url={https://arxiv.org/abs/2507.10403},
+ }
+ ```
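
The updated "Model Details" paragraph describes a CLIP-style setup: a text encoder plus separate SAR and MSI encoders, aligned with a contrastive objective. The sketch below is illustrative only and is not taken from the CLOSP repository; the symmetric InfoNCE formulation, the temperature value, and the commented encoder names (`text_encoder`, `sar_encoder`, `msi_encoder`) are assumptions about how such an objective is commonly implemented.

```python
# Illustrative sketch (not the official CLOSP code) of a contrastive objective
# that aligns text embeddings with either SAR or MSI embeddings in one space.
import torch
import torch.nn.functional as F


def contrastive_loss(text_emb: torch.Tensor,
                     image_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired text/image embeddings."""
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature  # (B, B) pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Each caption should match its own acquisition, and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Assumed usage: the SAR encoder handles Sentinel-1 samples and the MSI encoder
# handles Sentinel-2 samples, while a shared text encoder embeds the captions.
# text_emb = text_encoder(captions)                                  # hypothetical
# img_emb  = sar_encoder(s1_batch) if is_sar else msi_encoder(s2_batch)  # hypothetical
# loss = contrastive_loss(text_emb, img_emb)
```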
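Because text and imagery share one embedding space, the text-to-image retrieval the card summary describes reduces to ranking image embeddings by cosine similarity to the query embedding. The card's "How to Get Started" section is still marked [More Information Needed], so the following is only a generic sketch of that ranking step, assuming precomputed embeddings; the tensor shapes and variable names are placeholders, not the model's official API.

```python
# Generic text-to-image retrieval in a shared embedding space: rank a gallery of
# precomputed image embeddings by cosine similarity to one text query embedding.
import torch
import torch.nn.functional as F


def retrieve_top_k(query_emb: torch.Tensor, image_embs: torch.Tensor, k: int = 5):
    """Return indices and similarity scores of the k closest images to the query."""
    query = F.normalize(query_emb, dim=-1)     # shape (D,)
    gallery = F.normalize(image_embs, dim=-1)  # shape (N, D)
    scores = gallery @ query                   # cosine similarities, shape (N,)
    top = torch.topk(scores, k)
    return top.indices, top.values


# Placeholder tensors stand in for real CLOSP text/image embeddings.
indices, scores = retrieve_top_k(torch.randn(512), torch.randn(1000, 512), k=5)
```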