Upload README.md with huggingface_hub
README.md
CHANGED
```diff
@@ -2,22 +2,16 @@
 license: cc-by-4.0
 pretty_name: Pixels Point Polygons
 size_categories:
--
+- 10B<n<100B
 task_categories:
 - image-segmentation
-- object-detection
 tags:
+- IGN
 - Aerial
+- Satellite
 - Environement
 - Multimodal
--
-- Polygon
-- Vectorization
-- LIDAR
-- ALS
-- Image
-language:
-- en
+- Earth Observation
 ---
 
 
@@ -32,9 +26,6 @@ language:
 <b>Figure 1</b>: A view of our dataset of Zurich, Switzerland
 </div>
 
-
-<!-- [[Project Webpage]()] [[Paper](https://arxiv.org/abs/2412.07899)] [[Video]()] -->
-
 ## Abstract:
 
 We present the P<sup>3</sup> dataset, a large-scale multimodal benchmark for building vectorization, constructed from aerial LiDAR point clouds, high-resolution aerial imagery, and vectorized 2D building outlines, collected across three continents. The dataset contains over 10 billion LiDAR points with decimeter-level accuracy and RGB images at a ground sampling distance of 25 cm. While many existing datasets primarily focus on the image modality, P$^3$ offers a complementary perspective by also incorporating dense 3D information. We demonstrate that LiDAR point clouds serve as a robust modality for predicting building polygons, both in hybrid and end-to-end learning frameworks. Moreover, fusing aerial LiDAR and imagery further improves accuracy and geometric quality of predicted polygons. The P<sup>3</sup> dataset is publicly available, along with code and pretrained weights of three state-of-the-art models for building polygon prediction at https://github.com/raphaelsulzer/PixelsPointsPolygons.
@@ -147,4 +138,4 @@ This repository benefits from the following open-source work. We thank the autho
 
 1. [Frame Field Learning](https://github.com/Lydorn/Polygonization-by-Frame-Field-Learning)
 2. [HiSup](https://github.com/SarahwXU/HiSup)
-3. [Pix2Poly](https://github.com/yeshwanth95/Pix2Poly)
+3. [Pix2Poly](https://github.com/yeshwanth95/Pix2Poly)
```
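The commit title indicates the card was pushed with the `huggingface_hub` client library. A minimal sketch of such an upload via `HfApi.upload_file`; the repo id below is a hypothetical placeholder, not the dataset's actual id, and the call requires authentication and network access:

```python
# Sketch of the kind of call behind "Upload README.md with huggingface_hub".
# The repo id is a hypothetical placeholder, not the real dataset repository.
REPO_ID = "your-username/PixelsPointsPolygons"

upload_kwargs = dict(
    path_or_fileobj="README.md",  # local dataset card to push
    path_in_repo="README.md",     # destination path inside the repo
    repo_id=REPO_ID,
    repo_type="dataset",          # a dataset card, not a model repo
    commit_message="Upload README.md with huggingface_hub",
)


def upload_readme(token=None):
    """Push the local README.md; needs an HF token and network access."""
    from huggingface_hub import HfApi  # pip install huggingface_hub
    return HfApi(token=token).upload_file(**upload_kwargs)
```

Setting `repo_type="dataset"` is what routes the commit to a dataset repository; omitting it would target a model repo of the same name.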