Upload README.md with huggingface_hub
---
license: cc-by-4.0
pretty_name: Pixels Point Polygons
size_categories:
- 10B<n<100B
task_categories:
- image-segmentation
tags:
- IGN
- Aerial
- Satellite
- Environment
- Multimodal
- Earth Observation
---

<div align="center">
<h2 align="center">The P<sup>3</sup> dataset: Pixels, Points and Polygons <br> for Multimodal Building Vectorization</h2>
<!-- <h3 align="center">Arxiv</h3> -->
<!-- <h3 align="center"><a href="https://raphaelsulzer.de/">Raphael Sulzer<sup>1,2</sup></a><br></h3> -->
<h3 align="center">Raphael Sulzer<sup>1,2</sup> Liuyun Duan<sup>1</sup> Nicolas Girard<sup>1</sup> Florent Lafarge<sup>2</sup></h3>
<p align="center"><sup>1</sup>LuxCarta Technology <br> <sup>2</sup>Centre Inria d'Université Côte d'Azur</p>
<img src="./media/teaser.jpg" width=100% height=100%>
<b>Figure 1</b>: A view of our dataset of Zurich, Switzerland
</div>

## Abstract

We present the P<sup>3</sup> dataset, a large-scale multimodal benchmark for building vectorization, constructed from aerial LiDAR point clouds, high-resolution aerial imagery, and vectorized 2D building outlines, collected across three continents. The dataset contains over 10 billion LiDAR points with decimeter-level accuracy and RGB images at a ground sampling distance of 25 cm. While many existing datasets primarily focus on the image modality, P<sup>3</sup> offers a complementary perspective by also incorporating dense 3D information. We demonstrate that LiDAR point clouds serve as a robust modality for predicting building polygons, both in hybrid and end-to-end learning frameworks. Moreover, fusing aerial LiDAR and imagery further improves the accuracy and geometric quality of the predicted polygons. The P<sup>3</sup> dataset is publicly available, along with code and pretrained weights of three state-of-the-art models for building polygon prediction, at https://github.com/raphaelsulzer/PixelsPointsPolygons.
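Since the dataset is hosted on the Hugging Face Hub, a local copy can be fetched with `huggingface_hub`. The sketch below is a minimal example; the `repo_id` default is a placeholder assumption (this README does not state the exact repository id), so substitute the id shown on the dataset page:

```python
def download_p3(local_dir: str, repo_id: str = "raphaelsulzer/PixelsPointsPolygons") -> str:
    """Download a snapshot of the dataset repository into local_dir.

    NOTE: the default repo_id is a placeholder assumption, not confirmed by
    this README; pass the actual dataset repository id from the Hub page.
    Returns the local path of the downloaded snapshot.
    """
    # Lazy import so the sketch can be inspected without huggingface_hub installed.
    from huggingface_hub import snapshot_download

    return snapshot_download(repo_id=repo_id, repo_type="dataset", local_dir=local_dir)
```

For example, `download_p3("./p3_data")` would mirror the full repository; note the point-cloud portion is large (over 10 billion LiDAR points), so downloading selected files via `hf_hub_download` may be preferable.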

## Highlights