YkiWu and nielsr (HF Staff) committed
Commit 936b392 · verified · 1 Parent(s): 6f1a396

Improve dataset card: Add task category, links, abstract, sample usage, and citation (#1)


- Improve dataset card: Add task category, links, abstract, sample usage, and citation (349f9159035e6953f1938a0a764c0e9a62a83af1)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
  1. README.md +97 -3
README.md CHANGED
@@ -1,3 +1,97 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - image-to-3d
+ tags:
+ - 3d-occupancy-prediction
+ - robotics
+ - scene-understanding
+ - computer-vision
+ ---
+
+ This repository contains the EmbodiedOcc-ScanNet dataset, a benchmark reorganized from local annotations to facilitate evaluation of the embodied 3D occupancy prediction task. It accompanies the paper [EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding](https://huggingface.co/papers/2412.04380).
+
+ Project page: https://ykiwu.github.io/EmbodiedOcc/
+ Code: https://github.com/YkiWu/EmbodiedOcc
+
+ ## Paper Abstract
+
+ 3D occupancy prediction provides a comprehensive description of the surrounding scenes and has become an essential task for 3D perception. Most existing methods focus on offline perception from one or a few views and cannot be applied to embodied agents that demand to gradually perceive the scene through progressive embodied exploration. In this paper, we formulate an embodied 3D occupancy prediction task to target this practical scenario and propose a Gaussian-based EmbodiedOcc framework to accomplish it. We initialize the global scene with uniform 3D semantic Gaussians and progressively update local regions observed by the embodied agent. For each update, we extract semantic and structural features from the observed image and efficiently incorporate them via deformable cross-attention to refine the regional Gaussians. Finally, we employ Gaussian-to-voxel splatting to obtain the global 3D occupancy from the updated 3D Gaussians. Our EmbodiedOcc assumes an unknown (i.e., uniformly distributed) environment and maintains an explicit global memory of it with 3D Gaussians. It gradually gains knowledge through the local refinement of regional Gaussians, which is consistent with how humans understand new scenes through embodied exploration. We reorganize an EmbodiedOcc-ScanNet benchmark based on local annotations to facilitate the evaluation of the embodied 3D occupancy prediction task. Our EmbodiedOcc outperforms existing methods by a large margin and accomplishes the embodied occupancy prediction with high accuracy and efficiency.
+
+ ## Getting Started
+
+ To utilize this dataset with the EmbodiedOcc framework, follow the data preparation and usage instructions below, derived from the [official GitHub repository](https://github.com/YkiWu/EmbodiedOcc).
+
+ ### Data Preparation
+
+ 1. Prepare **posed_images** and **gathered_data** following the [Occ-ScanNet dataset](https://huggingface.co/datasets/hongxiaoy/OccScanNet) and move them to **data/occscannet**.
+ 2. Download **global_occ_package** and **streme_occ_new_package** from this dataset repository (`EmbodiedOcc-ScanNet`). Unzip and move them to **data/scene_occ**.
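+
+ The archives can also be fetched from the Hugging Face Hub on the command line. The following is a minimal sketch, assuming the `huggingface_hub` CLI is installed, that the repository id is `YkiWu/EmbodiedOcc-ScanNet`, and that the archives are `.zip` files named after the packages above; adjust the repository id and file names to what this repository actually contains:
+
+ ```bash
+ # Sketch only: repository id and archive names below are assumptions, not verified.
+ $ pip install -U "huggingface_hub[cli]"
+ $ huggingface-cli download YkiWu/EmbodiedOcc-ScanNet --repo-type dataset --local-dir ./EmbodiedOcc-ScanNet
+ $ mkdir -p data/scene_occ
+ $ unzip ./EmbodiedOcc-ScanNet/global_occ_package.zip -d data/scene_occ/
+ $ unzip ./EmbodiedOcc-ScanNet/streme_occ_new_package.zip -d data/scene_occ/
+ ```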
+
+ The expected folder structure within your `EmbodiedOcc` project directory should be:
+
+ ```
+ EmbodiedOcc
+ ├── ...
+ ├── data/
+ │   ├── occscannet/
+ │   │   ├── gathered_data/
+ │   │   ├── posed_images/
+ │   │   ├── train_final.txt
+ │   │   ├── train_mini_final.txt
+ │   │   ├── test_final.txt
+ │   │   ├── test_mini_final.txt
+ │   ├── scene_occ/
+ │   │   ├── global_occ_package/
+ │   │   ├── streme_occ_new_package/
+ │   │   ├── train_online.txt
+ │   │   ├── train_mini_online.txt
+ │   │   ├── test_online.txt
+ │   │   ├── test_mini_online.txt
+ ```
+
+ ### Train
+
+ After installing the environment as described in the [GitHub repository](https://github.com/YkiWu/EmbodiedOcc), you can train the models:
+
+ 1. Train the local occupancy prediction module using 8 GPUs on Occ-ScanNet and Occ-ScanNet-mini2:
+ ```bash
+ $ cd EmbodiedOcc
+ $ torchrun --nproc_per_node=8 train_mono.py --py-config config/train_mono_config.py
+ $ torchrun --nproc_per_node=8 train_mono.py --py-config config/train_mono_mini_config.py
+ ```
+ 2. Train EmbodiedOcc using 8 GPUs on EmbodiedOcc-ScanNet and 4 GPUs on EmbodiedOcc-ScanNet-mini:
+ ```bash
+ $ cd EmbodiedOcc
+ $ torchrun --nproc_per_node=8 train_embodied.py --py-config config/train_embodied_config.py
+ $ torchrun --nproc_per_node=4 train_embodied.py --py-config config/train_embodied_mini_config.py
+ ```
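+
+ If fewer GPUs are available, `--nproc_per_node` can be reduced to match your machine. This is a hedged example that has not been verified against the released configs, so the per-GPU batch size and learning rate in the config may need a corresponding adjustment:
+ ```bash
+ # Assumed variant: same entry point and config as above, run on 2 GPUs.
+ $ cd EmbodiedOcc
+ $ torchrun --nproc_per_node=2 train_embodied.py --py-config config/train_embodied_mini_config.py
+ ```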
+
+ ### Visualize
+
+ 1. Local occupancy prediction:
+ ```bash
+ $ cd EmbodiedOcc
+ $ torchrun --nproc_per_node=1 vis_mono.py --work-dir workdir/train_mono
+ $ torchrun --nproc_per_node=1 vis_mono.py --work-dir workdir/train_mono_mini
+ ```
+
+ 2. Embodied occupancy prediction:
+ ```bash
+ $ cd EmbodiedOcc
+ $ torchrun --nproc_per_node=1 vis_embodied.py --work-dir workdir/train_embodied
+ $ torchrun --nproc_per_node=1 vis_embodied.py --work-dir workdir/train_embodied_mini
+ ```
+ Please pass the same `--work-dir` path that was used for the corresponding training run.
+
+ ## Citation
+
+ If you find this project helpful, please consider citing the following paper:
+
+ ```bibtex
+ @article{wu2024embodiedoccembodied3doccupancy,
+   title={EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding},
+   author={Yuqi Wu and Wenzhao Zheng and Sicheng Zuo and Yuanhui Huang and Jie Zhou and Jiwen Lu},
+   journal={arXiv preprint arXiv:2412.04380},
+   year={2024}
+ }
+ ```