rsi committed (verified) · Commit 29263f0 · Parent: 5808027

Upload README.md with huggingface_hub

Files changed (1): README.md (+126 −3)
README.md CHANGED
@@ -1,3 +1,126 @@
- ---
- license: cc-by-4.0
- ---
<div align="center">
<h2 align="center">Pixels, Points, Polygons: A Global Dataset and Baseline for Multimodal Building Vectorization</h2>
<!-- <h3 align="center">Arxiv</h3> -->
<!-- <h3 align="center"><a href="https://raphaelsulzer.de/">Raphael Sulzer<sup>1,2</sup></a><br></h3> -->
<h3 align="center">Raphael Sulzer<sup>1,2</sup></h3>
<p align="center"><sup>1</sup>LuxCarta <sup>2</sup>Inria</p>
<img src="./media/teaser.jpg" width=100% height=100%>
<b>Figure 1</b>: A view of our dataset of Zurich, Switzerland
</div>


<!-- [[Project Webpage]()] [[Paper](https://arxiv.org/abs/2412.07899)] [[Video]()] -->

## Abstract

A global, multimodal dataset of aerial images, aerial lidar point clouds and building polygons, together with a library for training and evaluating state-of-the-art deep learning methods for building vectorization.

## Highlights

- A global, multimodal dataset of aerial images, aerial lidar point clouds and building polygons
- A library for training and evaluating state-of-the-art deep learning methods on the dataset


## Dataset

### Numbers

<!-- TODO: add images and numbers about the dataset -->

<!-- ### Properties -->

We provide train and val splits of the dataset at two tile sizes, 224 $\times$ 224 and 512 $\times$ 512 pixels. Both versions cover the same areas. The tiles of the test split have a fixed size of 2000 $\times$ 2000 pixels.
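The tiling scheme above can be sketched as follows. This is a hypothetical helper for illustration only, not the repository's preprocessing code:

```python
# Hypothetical sketch (not the repository's preprocessing code) of cutting a
# large raster into fixed-size, non-overlapping tiles such as the 224x224 and
# 512x512 versions described above.
def tile_origins(width, height, tile_size):
    """Return the top-left pixel corner of every tile that fits fully inside."""
    return [
        (x, y)
        for y in range(0, height - tile_size + 1, tile_size)
        for x in range(0, width - tile_size + 1, tile_size)
    ]

# A 2000 x 2000 test tile contains 8 x 8 = 64 full 224 x 224 crops
origins_224 = tile_origins(2000, 2000, 224)
# and 3 x 3 = 9 full 512 x 512 crops.
origins_512 = tile_origins(2000, 2000, 512)
```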

### Download

The dataset can be downloaded from the Hugging Face Hub.

### Prepare custom tile size

See [dataset preprocessing](data_preprocess) for instructions on preparing the dataset with different tile sizes.
42
+
43
+ ## Requirements
44
+
45
+ To create a conda environment named `ppp` and install the repository as a python package with all dependencies run
46
+ ```
47
+ bash install.sh
48
+ ```
49
+
50
+ or, if you want to manage the environment yourself run
51
+ ```
52
+ pip install -r requirements-torch-cuda.txt
53
+ pip install .
54
+ ```
55
+ ⚠️ **Warning**: The implementation of the LiDAR point cloud encoder uses Open3D-ML. Currently, Open3D-ML officially only supports the PyTorch version specified in `requirements-torch-cuda.txt`.
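Because the PyTorch version is pinned, it can be useful to read the pin out of `requirements-torch-cuda.txt` before installing. A minimal sketch of such a check (the parsing helper is hypothetical, not part of the repository):

```python
# Hypothetical helper (not part of the repository): extract the package name
# and pinned version from a requirements line such as "torch==2.0.1+cu117".
import re

def pinned_version(requirement_line):
    """Return (package, version) for a 'pkg==version' line, else None."""
    match = re.match(r"\s*([A-Za-z0-9_.-]+)\s*==\s*(\S+)", requirement_line)
    return (match.group(1), match.group(2)) if match else None

pinned_version("torch==2.0.1+cu117")  # -> ("torch", "2.0.1+cu117")
pinned_version("# a comment line")    # -> None
```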



## Model Zoo

| Model                | \<model>    | Encoder                  | \<encoder>        | Image | LiDAR | IoU  | C-IoU |
|----------------------|-------------|--------------------------|-------------------|-------|-------|------|-------|
| Frame Field Learning | \<ffl>      | Vision Transformer (ViT) | \<vit_cnn>        | ✅    |       | 0.85 | 0.90  |
| Frame Field Learning | \<ffl>      | PointPillars (PP) + ViT  | \<pp_vit_cnn>     |       | ✅    | 0.80 | 0.88  |
| Frame Field Learning | \<ffl>      | PP+ViT & ViT             | \<fusion_vit_cnn> | ✅    | ✅    | 0.78 | 0.85  |
| HiSup                | \<hisup>    | Vision Transformer (ViT) | \<vit_cnn>        | ✅    |       | 0.85 | 0.90  |
| HiSup                | \<hisup>    | PointPillars (PP) + ViT  | \<pp_vit_cnn>     |       | ✅    | 0.80 | 0.88  |
| HiSup                | \<hisup>    | PP+ViT & ViT             | \<fusion_vit>     | ✅    | ✅    | 0.78 | 0.85  |
| Pix2Poly             | \<pix2poly> | Vision Transformer (ViT) | \<vit>            | ✅    |       | 0.85 | 0.90  |
| Pix2Poly             | \<pix2poly> | PointPillars (PP) + ViT  | \<pp_vit>         |       | ✅    | 0.80 | 0.88  |
| Pix2Poly             | \<pix2poly> | PP+ViT & ViT             | \<fusion_vit>     | ✅    | ✅    | 0.78 | 0.85  |

## Configuration

The project supports Hydra configuration, which allows modifying any parameter from the command line, such as the model and encoder types from the table above.
To view all available options, run

```
python train.py --help
```

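To make the override syntax concrete, here is an illustrative sketch (not Hydra itself) of how dotted command-line overrides such as `model.batch_size=8` map onto a nested configuration; the config keys are examples taken from the commands in this README:

```python
# Illustrative sketch of Hydra-style dotted overrides applied to a nested
# config dict. This is NOT Hydra's implementation, only the idea behind it.
def apply_overrides(cfg, overrides):
    for override in overrides:
        key, value = override.split("=", 1)
        *parents, leaf = key.split(".")
        node = cfg
        for part in parents:
            node = node.setdefault(part, {})  # descend, creating groups as needed
        node[leaf] = value
    return cfg

cfg = {"model": {"name": "ffl", "batch_size": "4"}, "encoder": "vit_cnn"}
apply_overrides(cfg, ["model.batch_size=8", "encoder=pp_vit"])
```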
## Training

Start training with the following command:

```
torchrun --nproc_per_node=<num GPUs> train.py model=<model> encoder=<encoder> model.batch_size=<batch size> ...
```

## Prediction

```
torchrun --nproc_per_node=<num GPUs> predict.py model=<model> checkpoint=best_val_iou ...
```

## Evaluation

```
python evaluate.py model=<model> checkpoint=best_val_iou
```
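The IoU numbers in the Model Zoo table are intersection-over-union scores between predicted and ground-truth building masks. A minimal sketch of plain mask IoU, for illustration only and not the repository's evaluation code:

```python
# Illustrative sketch of mask IoU (intersection over union), the kind of
# metric reported in the Model Zoo table. Not the repository's evaluation code.
def mask_iou(pred, gt):
    """pred and gt are equal-length sequences of 0/1 pixel labels."""
    intersection = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return intersection / union if union else 1.0  # two empty masks agree fully

pred = [1, 1, 1, 0, 0, 0]
gt   = [0, 1, 1, 1, 0, 0]
# Two shared foreground pixels out of four in the union -> IoU = 0.5
score = mask_iou(pred, gt)
```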
<!-- ## Trained models

asd -->


<!-- ## Results

#TODO Put paper main results table here -->

## Citation

If you find our work useful, please consider citing:

```bibtex
...
```

## Acknowledgements

This repository benefits from the following open-source work. We thank the authors for their great work.

1. [Frame Field Learning](https://github.com/Lydorn/Polygonization-by-Frame-Field-Learning)
2. [HiSup](https://github.com/SarahwXU/HiSup)
3. [Pix2Poly](https://github.com/yeshwanth95/Pix2Poly)