# - split: test
#   path: "data/224/annotations/annotations_NZ_test.json"
---

<div align="center">

<h1 align="center">The P<sup>3</sup> Dataset: Pixels, Points and Polygons <br> for Multimodal Building Vectorization</h1>
<h3 align="center">Raphael Sulzer<sup>1,2</sup> Liuyun Duan<sup>1</sup> Nicolas Girard<sup>1</sup> Florent Lafarge<sup>2</sup></h3>
<h3 align="center"><sup>1</sup>LuxCarta Technology <br> <sup>2</sup>Centre Inria d'Université Côte d'Azur</h3>
We present the P<sup>3</sup> dataset, a large-scale multimodal benchmark for building vectorization.

- A global, multimodal dataset of aerial images, aerial LiDAR point clouds and building outline polygons, available at [huggingface.co/datasets/rsi/PixelsPointsPolygons](https://huggingface.co/datasets/rsi/PixelsPointsPolygons)
- A library for training and evaluating state-of-the-art deep learning methods on the dataset, available at [github.com/raphaelsulzer/PixelsPointsPolygons](https://github.com/raphaelsulzer/PixelsPointsPolygons)
- Pretrained model weights, available at [huggingface.co/rsi/PixelsPointsPolygons](https://huggingface.co/rsi/PixelsPointsPolygons)
- A paper with an extensive experimental validation, available at [arxiv.org/abs/2505.15379](https://arxiv.org/abs/2505.15379)

## Dataset
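The annotation paths referenced in the metadata above (e.g. `data/224/annotations/annotations_NZ_test.json`) suggest the building polygons ship as COCO-style JSON. As an illustrative sketch only — `load_building_polygons` is a hypothetical helper, and the actual schema should be checked against the dataset card — such a file could be parsed like this:

```python
import json


def load_building_polygons(path):
    """Parse a COCO-style annotation file into {image_id: [polygon, ...]}.

    Assumes the standard COCO layout: a top-level "annotations" list whose
    entries carry an "image_id" and a "segmentation" list of flat
    [x1, y1, x2, y2, ...] rings. Verify against the real P^3 files.
    """
    with open(path) as f:
        coco = json.load(f)

    polygons = {}
    for ann in coco["annotations"]:
        for ring in ann["segmentation"]:
            # De-interleave the flat coordinate list into (x, y) vertex pairs.
            pts = list(zip(ring[0::2], ring[1::2]))
            polygons.setdefault(ann["image_id"], []).append(pts)
    return polygons
```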
If you use our work, please cite:

```bibtex
@misc{sulzer2025p3datasetpixelspoints,
  title={The P$^3$ dataset: Pixels, Points and Polygons for Multimodal Building Vectorization},
  author={Raphael Sulzer and Liuyun Duan and Nicolas Girard and Florent Lafarge},
  year={2025},
  eprint={2505.15379},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.15379},
}
```

## Acknowledgements