lcybuaa committed
Commit 50ef85d · verified · 1 Parent(s): 06004b9

Update README.md

Files changed (1):
  1. README.md +43 -29

README.md CHANGED
@@ -14,38 +14,52 @@ The **Git-10M** dataset is a global-scale remote sensing image-text pair dataset
  <img src="https://github.com/Chen-Yang-Liu/Text2Earth/raw/main/images/dataset.png" width="1000"/>
  </div>
 
- ## Load Dataset
- ```python
- from modelscope.msdatasets import MsDataset
- ds = MsDataset.load('lcybuaa/Git-10M')
- ```
-
 
  ## View samples from the dataset
  ```python
  from datasets import load_dataset
+ import math
+
+ def XYZToLonLat(x, y, z):
+     # Transform a tile location (x, y, zoom level z) to (longitude, latitude)
+     n = 2 ** z * 1.0
+     lon = x / n * 360.0 - 180.0  # longitude
+     lat = math.atan(math.sinh(math.pi * (1 - 2.0 * y / n)))
+     lat = math.degrees(lat)  # latitude
+     return lon, lat
+
+
+ # load dataset
  save_path = 'xxxxx'
  ds = load_dataset('lcybuaa/Git-10M', cache_dir=save_path)
  train_dataset = ds["train"]
 
  for i, example in enumerate(train_dataset):
+     # PIL image:
      image = example["image"]
-     # Text Description
-     text = example["text"].split('_GOOGLE_LEVEL_')[-1]
-     # Image Resolution
-     Level = int(example["text"].split('_GOOGLE_LEVEL_')[0])
-     if Level != 0:
-         Resolution = 2 ** (17 - Level)
-     else:
-         print('This image comes from a public dataset. There is no available resolution metadata.')
-     # save image
-     image.save(f"image_{i}.png")
-     print('text:', text)
-
+     # filename of the image:
+     img_name = example["img_name"]
+     # visual quality score as shown in Fig. 5 of the paper.
+     img_quality_score = example['img_quality_score']
+     # caption of the image
+     caption = example['caption']
+     # word length of the caption as shown in Fig. 6 of the paper.
+     caption_length = example['caption_length']
+     # image spatial resolution as shown in Fig. 4 of the paper.
+     resolution = example['resolution']
+     # image geolocation as shown in Fig. 3 of the paper.
+     Google_location = example['Google_location']
+     Level_TileZ, TileX, TileY = Google_location.split('_')
+     longitude, latitude = XYZToLonLat(int(TileX), int(TileY), int(Level_TileZ))
+
+     # More tips:
+     # Resolution = 2 ** (17 - int(Level_TileZ))
+
  ```
 
  ## Git-RSCLIP: Remote Sensing Vision-Language Contrastive Pre-training Foundation Model
- Git-RSCLIP is pre-trained using the contrastive learning framework on the Git-10M dataset.
+ Git-RSCLIP is pre-trained using the contrastive learning framework on the **Git-10M dataset**.
  Git-RSCLIP is here: [[Huggingface](https://huggingface.co/lcybuaa/Git-RSCLIP) | [Modelscope](https://modelscope.cn/models/lcybuaa1111/Git-RSCLIP)]
 
  Compare the Top1-Acc of Zero-shot classification on multiple image classification datasets:
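For the zero-shot comparison referenced just above, a minimal classification sketch follows. It assumes the Git-RSCLIP checkpoint loads through transformers' generic `AutoModel`/`AutoProcessor` classes and returns CLIP/SigLIP-style `logits_per_image`; the prompt list and image path are hypothetical, and the exact preprocessing should follow the model card.

```python
# A hedged sketch of zero-shot classification with Git-RSCLIP.
# Assumption: the checkpoint is compatible with transformers' Auto* classes and a
# CLIP/SigLIP-style output; see https://huggingface.co/lcybuaa/Git-RSCLIP for the
# authoritative usage.
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("lcybuaa/Git-RSCLIP")
processor = AutoProcessor.from_pretrained("lcybuaa/Git-RSCLIP")

# Hypothetical class prompts and a locally saved remote sensing image.
labels = [
    "a remote sensing image of farmland",
    "a remote sensing image of an airport",
    "a remote sensing image of a residential area",
]
image = Image.open("example_tile.png")

# The padding strategy should match the model card ("max_length" is typical for SigLIP-style models).
inputs = processor(text=labels, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each prompt, shape (1, len(labels)).
probs = outputs.logits_per_image.softmax(dim=-1)
print(labels[probs[0].argmax().item()])
```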
@@ -63,13 +77,13 @@ Compare the Top1-Acc of Zero-shot classification on multiple image classification datasets:
  # BibTeX entry and citation info
 
  ```bibtex
- @misc{liu2025text2earthunlockingtextdrivenremote,
-       title={Text2Earth: Unlocking Text-driven Remote Sensing Image Generation with a Global-Scale Dataset and a Foundation Model},
-       author={Chenyang Liu and Keyan Chen and Rui Zhao and Zhengxia Zou and Zhenwei Shi},
-       year={2025},
-       eprint={2501.00895},
-       archivePrefix={arXiv},
-       primaryClass={cs.CV},
-       url={https://arxiv.org/abs/2501.00895},
- }
+ @ARTICLE{Text2Earth,
+   author={Liu, Chenyang and Chen, Keyan and Zhao, Rui and Zou, Zhengxia and Shi, Zhenwei},
+   journal={IEEE Geoscience and Remote Sensing Magazine},
+   title={Text2Earth: Unlocking text-driven remote sensing image generation with a global-scale dataset and a foundation model},
+   year={2025},
+   volume={},
+   number={},
+   pages={2-23},
+   doi={10.1109/MGRS.2025.3560455}}
  ```
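Complementing the loading snippet in the diff above, here is a minimal sketch for browsing a few samples without caching the whole corpus. It assumes the Hub repository can be streamed by the `datasets` library; the field names follow the README shown above, and `tile_to_lonlat` mirrors the `XYZToLonLat` helper.

```python
# A hedged sketch: stream a handful of Git-10M samples instead of downloading everything.
# Assumption: the Hub repo supports streaming via the datasets library; field names
# (image, img_name, caption, caption_length, resolution, img_quality_score,
# Google_location) follow the README above.
import math
from datasets import load_dataset

def tile_to_lonlat(x: int, y: int, z: int):
    # Google tile indices (x, y) at zoom level z -> (longitude, latitude).
    n = float(2 ** z)
    lon = x / n * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2.0 * y / n))))
    return lon, lat

ds = load_dataset("lcybuaa/Git-10M", split="train", streaming=True)

for i, example in enumerate(ds):
    z, x, y = (int(v) for v in example["Google_location"].split("_"))
    lon, lat = tile_to_lonlat(x, y, z)
    print(example["img_name"], example["resolution"], f"({lon:.4f}, {lat:.4f})")
    print(example["caption"][:80])
    if i == 4:  # look at five samples only
        break
```

Streaming keeps only the samples actually iterated over on disk and in memory, which is convenient for a quick look at a 10M-scale dataset.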