GridNet-HD dataset
1. Introduction
This dataset was developed for 3D semantic segmentation tasks using both images and 3D point clouds, with a focus on electrical infrastructure. GridNet-HD (Grid (electrical) Network at High Density and High Resolution) is the first image+LiDAR dataset accurately co-referenced in the electrical infrastructure domain. The dataset is associated with a public leaderboard hosted on Hugging Face Spaces, available at: leaderboard.
The dataset is associated with the following paper:
Title: GridNet-HD: A High-Resolution Multi-Modal Dataset for LiDAR-Image Fusion on Power Line Infrastructure
Authors: Masked for review
Conference: Submitted to NeurIPS 2025
This repository hosts the official data splits and resources used in the experiments reported in the paper.
2. Dataset Structure
This dataset consists of 36 geographic zones, each represented by a folder named after its area code (e.g., t1z4, t1z5a).
Each zone contains aligned multimodal data (images, segmentation masks, a LiDAR point cloud, and camera parameters), enabling high-precision image-to-3D projection for the multimodal-fusion 3D semantic segmentation task.
A split.json file at the root of the dataset defines the official train/test partition of the zones.
To ensure fair evaluation on the official test set, ground truth annotations are not provided for either the images or the LiDAR point clouds. Instead, participants must submit their predictions to the leaderboard, where the official metric (mIoU) is automatically computed against the hidden labels.
Folder layout
dataset-root/
├── t1z5b/
│   ├── images/       # RGB images (.JPG)
│   ├── masks/        # Semantic segmentation masks (.png, single-channel labels)
│   ├── lidar/        # LiDAR point cloud (.las format with field "ground_truth")
│   └── pose/         # Camera poses and intrinsics (text files)
├── t1z6a/
│   └── ...
├── ...
├── split.json        # JSON file specifying the train/test split
└── README.md
Contents per zone
Inside each zone folder, you will find:
images/
- High-resolution RGB images (.JPG)
- Captured from a UAV
masks/
- One .png mask per image, same filename as image
- Label-encoded masks (1 channel)
lidar/
- Single .las file for the entire zone captured from a UAV
- Contains high-density 3D point cloud data with semantic labels stored in a field named "ground_truth" (see the loading sketch below)
pose/
- camera_pose.txt: Camera positions and orientations per image (using the Agisoft Metashape convention; more details in the paper)
- camera_calibration.xml: Camera calibration parameters (using the Agisoft Metashape calibration model)
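As an illustration, the snippet below is a minimal loading sketch (assuming the laspy and Pillow packages; the zone and file names are hypothetical) showing how a zone's mask and LiDAR labels can be read:

```python
import numpy as np
import laspy                      # assumption: laspy >= 2.x installed
from PIL import Image             # assumption: Pillow installed

zone = "GridNet-HD/t1z5b"         # hypothetical local path after download (Section 6)

# Single-channel mask: pixel values are label IDs (see the class grouping in Section 3).
mask = np.array(Image.open(f"{zone}/masks/example.png"))   # hypothetical filename
print("mask shape:", mask.shape, "labels:", np.unique(mask))

# LiDAR point cloud with per-point labels stored in the extra field "ground_truth".
las = laspy.read(f"{zone}/lidar/example.las")               # hypothetical filename
xyz = np.vstack([las.x, las.y, las.z]).T
labels = np.asarray(las["ground_truth"])
print("points:", xyz.shape[0], "labels:", np.unique(labels))
```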
3. Class Grouping
Original classes have been grouped into 12 semantic groups as follows:
Group ID | Original Classes | Description |
---|---|---|
0 | 0,1,2,3,4 | Pylon |
1 | 5 | Conductor cable |
2 | 6,7 | Structural cable |
3 | 8,9,10,11 | Insulator |
4 | 14 | High vegetation |
5 | 15 | Low vegetation |
6 | 16 | Herbaceous vegetation |
7 | 17,18 | Rock, gravel, soil |
8 | 19 | Impervious soil (Road) |
9 | 20 | Water |
10 | 21 | Building |
255 | 12,13,255 | Unassigned-Unlabeled |
If interested, the original classes are described in the appendices of the paper.
Note: group 255 (original classes 12, 13, 255) is ignored during official evaluation.
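As an illustration, the sketch below (assuming numpy, and assuming labels are stored as original class IDs; if your labels are already grouped, no remapping is needed) transcribes the table above into a lookup table that converts original class IDs to group IDs:

```python
import numpy as np

# Lookup table transcribing the grouping table above:
# original class ID (0..21, 255) -> group ID (0..10, 255 = ignored).
GROUP_OF_CLASS = np.full(256, 255, dtype=np.uint8)
GROUP_OF_CLASS[[0, 1, 2, 3, 4]] = 0     # Pylon
GROUP_OF_CLASS[5] = 1                   # Conductor cable
GROUP_OF_CLASS[[6, 7]] = 2              # Structural cable
GROUP_OF_CLASS[[8, 9, 10, 11]] = 3      # Insulator
GROUP_OF_CLASS[14] = 4                  # High vegetation
GROUP_OF_CLASS[15] = 5                  # Low vegetation
GROUP_OF_CLASS[16] = 6                  # Herbaceous vegetation
GROUP_OF_CLASS[[17, 18]] = 7            # Rock, gravel, soil
GROUP_OF_CLASS[19] = 8                  # Impervious soil (Road)
GROUP_OF_CLASS[20] = 9                  # Water
GROUP_OF_CLASS[21] = 10                 # Building
# Original classes 12, 13 and 255 stay at 255 (Unassigned-Unlabeled).

def remap_to_groups(labels: np.ndarray) -> np.ndarray:
    """Map an array of original class IDs to the 12 semantic groups."""
    return GROUP_OF_CLASS[labels.astype(np.int64)]
```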
4. Dataset Splits
The dataset is split into two parts:
- Train (~70% of LiDAR points)
- Test (~30% of LiDAR points)
The splits were carefully constructed to guarantee:
- Full coverage of all semantic groups (except the ignored group)
- No project overlap between train and test
- Balanced distribution in terms of class representation
Zone assignments are listed in split.json, together with a proposed train/val split.
Note that the test set provides only unlabeled LiDAR (no ground_truth field) and no labeled masks for the images; this label part is kept private by us for leaderboard management. To submit results on the test set and obtain an mIoU score on the leaderboard, please follow the instructions here: leaderboard, using the remapped classes presented above.
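The exact schema of split.json can be inspected directly; as a hedged sketch, assuming it maps split names to lists of zone codes (hypothetical keys such as "train", "val" and "test"), it can be read as follows:

```python
import json
from pathlib import Path

# Hypothetical schema: {"train": ["t1z4", ...], "val": [...], "test": [...]}.
# Check the actual file to confirm the key names before relying on them.
with open(Path("GridNet-HD") / "split.json") as f:
    split = json.load(f)

for name, zones in split.items():
    print(f"{name}: {len(zones)} zones")
```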
5. Dataset Statistics
Class Distribution
The table below summarizes the number of points per semantic group across the train and test splits, including the total number of points, the proportion of each class in the test set (% test/total), and the relative class distribution within each split.
Group ID | Train points | Test points | Total points | % test/total | Class distribution in train set (%) | Class distribution in test set (%) |
---|---|---|---|---|---|---|
0 | 11'490'104 | 3'859'573 | 15'349'677 | 25.1 | 0.7 | 0.5 |
1 | 7'273'270 | 3'223'720 | 10'496'990 | 30.7 | 0.4 | 0.4 |
2 | 1'811'422 | 903'089 | 2'714'511 | 33.3 | 0.1 | 0.1 |
3 | 821'712 | 230'219 | 1'051'931 | 21.9 | 0.05 | 0.03 |
4 | 278'527'781 | 135'808'699 | 414'336'480 | 32.8 | 16.5 | 17.9 |
5 | 78'101'152 | 37'886'731 | 115'987'883 | 32.7 | 4.6 | 5.0 |
6 | 1'155'217'319 | 461'212'378 | 1'616'429'697 | 28.5 | 68.4 | 60.7 |
7 | 135'026'058 | 99'817'139 | 234'843'197 | 42.5 | 8.0 | 13.1 |
8 | 13'205'411 | 12'945'414 | 26'150'825 | 49.5 | 0.8 | 1.7 |
9 | 1'807'216 | 1'227'892 | 3'035'108 | 40.5 | 0.1 | 0.2 |
10 | 6'259'260 | 2'107'391 | 8'366'651 | 25.2 | 0.4 | 0.3 |
TOTAL | 1'689'540'705 | 759'222'245 | 2'448'762'950 | 31.0 | 100 | 100 |
The table below reports the same statistics for the proposed train/val split:
Group ID | Train points | Val points | Total points | % val/total | Class distribution in train set (%) | Class distribution in val set (%) |
---|---|---|---|---|---|---|
0 | 8'643'791 | 2'846'313 | 11'490'104 | 24.8 | 0.7 | 0.7 |
1 | 5'782'668 | 1'490'602 | 7'273'270 | 20.5 | 0.4 | 0.4 |
2 | 1'370'331 | 441'091 | 1'811'422 | 24.4 | 0.1 | 0.1 |
3 | 625'937 | 195'775 | 821'712 | 23.8 | 0.05 | 0.05 |
4 | 160'763'512 | 117'764'269 | 278'527'781 | 42.3 | 12.4 | 29.7 |
5 | 43'442'079 | 34'659'073 | 78'101'152 | 44.4 | 3.4 | 8.7 |
6 | 968'689'542 | 186'527'777 | 1'155'217'319 | 16.1 | 74.9 | 47.0 |
7 | 87'621'550 | 47'404'508 | 135'026'058 | 35.1 | 6.8 | 11.9 |
8 | 10'420'302 | 2'785'109 | 13'205'411 | 21.1 | 0.8 | 0.7 |
9 | 310'240 | 1'496'976 | 1'807'216 | 82.8 | 0.02 | 0.4 |
10 | 4'793'225 | 1'466'035 | 6'259'260 | 23.4 | 0.4 | 0.4 |
TOTAL | 1'292'463'177 | 397'077'528 | 1'689'540'705 | 23.5 | 100 | 100 |
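Per-class point counts such as those above can be recomputed from the labeled LAS files; below is a minimal sketch (assuming laspy, the remap_to_groups helper sketched in Section 3, and a local copy of the labeled zones):

```python
from collections import Counter
from pathlib import Path

import laspy
import numpy as np

def count_group_points(las_path: Path) -> Counter:
    """Count LiDAR points per semantic group in one zone's LAS file."""
    las = laspy.read(str(las_path))
    labels = np.asarray(las["ground_truth"])
    groups = remap_to_groups(labels)   # helper from the Section 3 sketch
    return Counter(groups.tolist())

# Aggregate counts over all locally available labeled zones.
totals = Counter()
for las_file in Path("GridNet-HD").glob("*/lidar/*.las"):
    totals += count_group_points(las_file)
print(totals)
```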
Class Distribution Visualisation (figure)
6. How to Use
Download via Hugging Face Hub
Warning: This dataset is large; the full download size is approximately 170 GB. Make sure you have sufficient disk space and a stable internet connection before downloading.
To download the full dataset, please do not use datasets.load_dataset(); the Parquet version automatically generated by Hugging Face is not suited to a dataset of this type and size (>5 GB). Use instead:
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="heig-vd-geo/GridNet-HD",
    repo_type="dataset",
    local_dir="GridNet-HD",  # where to replicate the file tree
)
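If only part of the dataset is needed (e.g., a single zone plus split.json), snapshot_download also accepts an allow_patterns argument in recent versions of huggingface_hub; a hedged sketch with a hypothetical zone selection:

```python
from huggingface_hub import snapshot_download

# Fetch only one zone and the split file (zone name is an example).
local_dir = snapshot_download(
    repo_id="heig-vd-geo/GridNet-HD",
    repo_type="dataset",
    local_dir="GridNet-HD",
    allow_patterns=["t1z5b/*", "split.json"],
)
```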
7. Running baselines
Please follow the instructions in the dedicated Git repositories to run models on this dataset:
- Baseline based on image segmentation and reprojection into the LiDAR: ImageVote baseline
- Baseline based on direct LiDAR 3D segmentation using Superpoint Transformer (SPT): SPT baseline
- Baseline based on late fusion of the softmax logits from SPT and ImageVote: LateFusionMLP baseline
Results for the three baselines are shown below:
Class | ImageVote IoU, Val (%) | ImageVote IoU, Test (%) | SPT IoU, Val (%) | SPT IoU, Test (%) | LateFusionMLP IoU, Val (%) | LateFusionMLP IoU, Test (%) |
---|---|---|---|---|---|---|
Pylon | 86.43 | 85.09 | 93.92 | 92.75 | ||
Conductor cable | 58.22 | 64.82 | 89.36 | 91.05 | ||
Structural cable | 48.84 | 45.06 | 68.44 | 70.51 | ||
Insulator | 81.16 | 71.07 | 85.79 | 80.60 | ||
High vegetation | 90.89 | 83.86 | 87.60 | 85.15 | ||
Low vegetation | 71.89 | 63.43 | 57.95 | 55.91 | ||
Herbaceous vegetation | 93.87 | 84.45 | 91.40 | 84.64 | ||
Rock, gravel, soil | 91.40 | 38.62 | 82.42 | 40.63 | ||
Impervious soil (Road) | 85.84 | 80.69 | 74.12 | 73.57 | ||
Water | 44.14 | 74.87 | 50.94 | 3.69 | ||
Building | 86.76 | 68.09 | 73.97 | 57.38 | ||
Mean IoU (mIoU) | 76.31 | 69.10 | 77.81 | 66.90 |
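The IoU and mIoU values above follow the standard definitions; below is a minimal sketch (assuming numpy, the 11 evaluated groups, and label 255 ignored) of how they can be computed from predicted and ground-truth group labels. The official leaderboard implementation may differ in its details:

```python
import numpy as np

NUM_GROUPS = 11        # groups 0-10; group 255 is ignored
IGNORE_LABEL = 255

def per_class_iou(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """IoU per semantic group, excluding unlabeled points/pixels."""
    valid = gt != IGNORE_LABEL
    pred, gt = pred[valid], gt[valid]
    ious = np.full(NUM_GROUPS, np.nan)
    for c in range(NUM_GROUPS):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious[c] = inter / union
    return ious

def miou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean IoU over the groups that appear in prediction or ground truth."""
    return float(np.nanmean(per_class_iou(pred, gt)))
```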
8. License and Citation
This dataset is released under the CC-BY-4.0 license.
If you use this dataset, please cite the following paper:
GridNet-HD: A High-Resolution Multi-Modal Dataset for LiDAR-Image Fusion on Power Line Infrastructure
Masked Authors
Submitted to NeurIPS 2025.