---
license: cc-by-nc-4.0
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- image-to-3d
- image-to-image
- object-detection
- keypoint-detection
tags:
- nerf
- aerial
- uav
- 6-dof
- multi-view
- pose-estimation
- neural-rendering
- 3d-reconstruction
- gps
- imu
pretty_name: AeroGrid100
dataset_info:
  title: >-
    AeroGrid100: A Real-World Multi-Pose Aerial Dataset for Implicit Neural
    Scene Reconstruction
  authors:
    - Qingyang Zeng
    - Adyasha Mohanty
  paper: https://im4rob.github.io/attend/papers/7_AeroGrid100_A_Real_World_Mul.pdf
  workshop: RSS 2025 Workshop on Leveraging Implicit Methods in Aerial Autonomy
  bibtex: |
    @inproceedings{zeng2025aerogrid100,
      title     = {AeroGrid100: A Real-World Multi-Pose Aerial Dataset for Implicit Neural Scene Reconstruction},
      author    = {Zeng, Qingyang and Mohanty, Adyasha},
      booktitle = {RSS Workshop on Leveraging Implicit Methods in Aerial Autonomy},
      year      = {2025},
      url       = {https://im4rob.github.io/attend/papers/7_AeroGrid100_A_Real_World_Mul.pdf}
    }
---
# AeroGrid100

AeroGrid100 is a large-scale, structured aerial dataset collected with a UAV to support implicit neural scene reconstruction methods such as NeRF. It consists of 17,100 high-resolution images with accurate 6-DoF camera poses, captured over a 10 × 10 geospatial grid at 5 altitude levels with multi-angle views per grid point.
## Access

To access the full dataset, open the Google Drive folder linked from this page.
## Dataset Overview
- Platform: DJI Air 3 drone with wide-angle lens
- Region: Urban site in Claremont, California (~0.209 km²)
- Image Resolution: 4032 × 2268 (JPEG, 24mm wide-angle lens)
- Total Images: 17,100
- Grid Layout: 10 × 10 spatial points
- Altitudes: 20m, 40m, 60m, 80m, 100m
- Viewpoints per Altitude: up to 8 yaw × 5 pitch combinations
- Pose Metadata: provided in JSON (extrinsics, GPS, IMU); see the loading sketch below
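
A minimal sketch of how the per-image pose JSON might be consumed, assuming a NeRF-style `transforms.json` with `frames`, `file_path`, and `transform_matrix` fields. These names are placeholders, not the confirmed schema; check the released metadata for the exact layout.

```python
import json
import numpy as np

# Hypothetical file name and field layout: the AeroGrid100 pose metadata is JSON,
# but the exact schema may differ from this sketch.
POSE_FILE = "poses/transforms.json"

with open(POSE_FILE, "r") as f:
    meta = json.load(f)

# Assumed layout: one entry per image with a file path and a 4x4 camera-to-world
# matrix in the NeRF/OpenGL convention, plus raw GPS/IMU tags if present.
for frame in meta["frames"]:
    image_path = frame["file_path"]
    c2w = np.asarray(frame["transform_matrix"], dtype=np.float64)  # 4x4 extrinsics
    gps = frame.get("gps")  # e.g. latitude / longitude / altitude
    imu = frame.get("imu")  # e.g. roll / pitch / yaw readings

    # The camera position in world coordinates is the translation column.
    position = c2w[:3, 3]
    print(image_path, position)
```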
## What's Included
- High-resolution aerial images
- Per-image pose metadata in NeRF-compatible OpenGL format (see the convention-conversion sketch below)
- Full drone flight log
- Scene map and sampling diagrams
- Example reconstruction using NeRF
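
Because the poses follow the NeRF-compatible OpenGL camera convention (x right, y up, z backward), tools that expect the OpenCV/COLMAP convention (x right, y down, z forward) need the camera axes flipped. A small sketch of that conversion, under the assumption that each pose is a 4×4 camera-to-world matrix:

```python
import numpy as np

def opengl_to_opencv_c2w(c2w_gl: np.ndarray) -> np.ndarray:
    """Convert a 4x4 camera-to-world pose from the OpenGL/NeRF camera convention
    (x right, y up, z backward) to the OpenCV/COLMAP convention
    (x right, y down, z forward) by flipping the camera's y and z axes."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w_gl @ flip

# Example: an identity pose in OpenGL convention becomes a y/z-flipped pose.
c2w_gl = np.eye(4)
print(opengl_to_opencv_c2w(c2w_gl))
```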
## Key Features
- Dense and structured spatial-angular coverage
- Real-world variability (lighting, pedestrians, cars, vegetation)
- Precise pose annotations from onboard GNSS + IMU
- Designed for photorealistic NeRF reconstruction and benchmarking
- Supports pose estimation, object detection, keypoint detection, and novel view synthesis
## Use Cases
- Neural Radiance Fields (NeRF)
- View synthesis and novel view generation
- Pose estimation and camera localization
- Multi-view geometry and reconstruction benchmarks (a held-out-view split sketch follows this list)
- UAV scene understanding in complex environments
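
For reconstruction benchmarks, the grid structure makes it natural to hold out all views from a subset of grid points as unseen test poses. A rough sketch, assuming hypothetical `grid_row`, `grid_col`, and `altitude_m` fields in the per-image metadata (placeholders for whatever the released files actually provide):

```python
import random
from collections import defaultdict

def holdout_split(frames, holdout_fraction=0.1, seed=0):
    """Hold out every view from a random subset of (grid point, altitude) cells
    so that test poses are spatially disjoint from training poses."""
    by_cell = defaultdict(list)
    for frame in frames:
        key = (frame["grid_row"], frame["grid_col"], frame["altitude_m"])
        by_cell[key].append(frame)

    keys = sorted(by_cell)
    random.Random(seed).shuffle(keys)
    n_test = max(1, int(len(keys) * holdout_fraction))

    test = [f for k in keys[:n_test] for f in by_cell[k]]
    train = [f for k in keys[n_test:] for f in by_cell[k]]
    return train, test
```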
## Citation
If you use AeroGrid100 in your research, please cite:
```bibtex
@inproceedings{zeng2025aerogrid100,
  title     = {AeroGrid100: A Real-World Multi-Pose Aerial Dataset for Implicit Neural Scene Reconstruction},
  author    = {Zeng, Qingyang and Mohanty, Adyasha},
  booktitle = {RSS Workshop on Leveraging Implicit Methods in Aerial Autonomy},
  year      = {2025},
  url       = {https://im4rob.github.io/attend/papers/7_AeroGrid100_A_Real_World_Mul.pdf}
}
```