# 3D MLLM Dataset and Codebooks Details

This document describes the dataset and codebooks provided in the `3d-mllm-datasets-and-codebooks` repository, covering each folder and its contents.

## Data Generation Pipeline

The pipeline we follow to generate the pre-tokenized data is:

* **3D Scenes**: 3D Scene JSON --> Serialized 3D Scene --> Tokenized 3D Scene
* **Images**: Image --> VQGAN Codebook Indices --> Tokenized Image
* **Text**: Text --> Tokenized Text
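For intuition, the 3D-scene branch of the pipeline above can be sketched as a function that serializes a scene JSON into a flat string and then maps it to token IDs. This is a purely illustrative sketch; the serialization format, attribute names, and vocabulary handling here are assumptions, not the repository's actual implementation:

```python
def serialize_scene(scene: dict) -> str:
    """Serialize a 3D scene JSON into a flat attribute string (illustrative format)."""
    parts = []
    for obj in scene["objects"]:
        parts.append(f"{obj['shape']} {obj['color']} at {obj['position']}")
    return "; ".join(parts)

def tokenize_scene(scene: dict, vocab: dict) -> list:
    """3D Scene JSON -> serialized 3D scene -> tokenized 3D scene (list of IDs)."""
    words = serialize_scene(scene).split()
    # Assign a fresh ID to each unseen word (stand-in for a real tokenizer).
    return [vocab.setdefault(w, len(vocab)) for w in words]

vocab = {}
scene = {"objects": [{"shape": "cube", "color": "red", "position": "(0,1,0)"}]}
ids = tokenize_scene(scene, vocab)
# ids == [0, 1, 2, 3]
```

The image and text branches follow the same shape: images go through a VQGAN encoder to codebook indices, and text goes through the language model's tokenizer.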
## Pre-tokenized Data

The `pretokenized-data` folder contains all the pre-tokenized data for the datasets used in the 3D MLLM project, stored in the following structure:

```
pretokenized-data/
|-- clevr/
|   |-- 3d-scenes/   # all pre-tokenized 3D scenes for CLEVR, for all tasks
|   |-- images/      # all pre-tokenized images for CLEVR, for all tasks
|   |-- text/        # all pre-tokenized text for CLEVR, for all tasks
|-- objaworld/
|   |-- 3d-scenes/   # all pre-tokenized 3D scenes for ObjaWorld, for all tasks
|   |-- images/      # all pre-tokenized images for ObjaWorld, for all tasks
|-- objectron/
|   |-- 3d-scenes/   # all pre-tokenized 3D scenes for Objectron, for all tasks
|   |-- images/      # all pre-tokenized images for Objectron, for all tasks
```
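Conceptually, a training example for a given task is built by combining the pre-tokenized streams of its input and output modalities. The sketch below is illustrative only; the separator ID and concatenation order are assumptions, not the repository's actual sequence format:

```python
def build_example(input_streams: list, output_streams: list, sep_id: int = -1) -> list:
    """Concatenate pre-tokenized modality streams into one training sequence.

    input_streams / output_streams: lists of token-ID lists, one per modality
    (3D scene, image, or text). sep_id is a hypothetical separator token
    between the input and output halves of the sequence.
    """
    seq = []
    for stream in input_streams:
        seq.extend(stream)
    seq.append(sep_id)
    for stream in output_streams:
        seq.extend(stream)
    return seq

# e.g. a recognition-style example: image tokens in, 3D-scene tokens out
image_tokens = [5, 9, 2]
scene_tokens = [7, 1]
seq = build_example([image_tokens], [scene_tokens])
# seq == [5, 9, 2, -1, 7, 1]
```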
For a given task, the input can be any combination of 3D scenes, images, and text, and the output can likewise be any combination of the three. The table below lists the tasks for each dataset and the input and output data each task requires (✓ = required, ✗ = not used):

| **Task** | **Input Image** | **Input 3D Scene** | **Input Text** | **Output Image** | **Output 3D Scene** | **Output Text** |
|:----------------------:|:------------------:|:----------------------:|:-----------------:|:------------------:|:-----------------------:|:-----------------:|
| **CLEVR** | | | | | | |
| Rendering | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Recognition | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ |
| Instruction-Following | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ |
| Question-Answering | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
| | | | | | | |
| **ObjaWorld** | | | | | | |
| Rendering | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Recognition | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ |
| | | | | | | |
| **Objectron** | | | | | | |
| Recognition | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ |

For the exact files that correspond to the input and output data for each task, please refer to the corresponding configuration files in the `configs/llama3_2/train` folder.
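The table can also be read as a simple task-to-modality mapping. The registry below mirrors it in code; the dictionary structure and key names are hypothetical, introduced only for illustration (the actual file lists live in the `configs/llama3_2/train` configuration files):

```python
# Hypothetical registry mirroring the modality table above.
TASKS = {
    ("clevr", "rendering"):     {"in": {"3d-scene"}, "out": {"image"}},
    ("clevr", "recognition"):   {"in": {"image"}, "out": {"3d-scene"}},
    ("clevr", "instruction-following"):
        {"in": {"image", "3d-scene", "text"}, "out": {"image", "3d-scene"}},
    ("clevr", "question-answering"):
        {"in": {"image", "3d-scene", "text"}, "out": {"text"}},
    ("objaworld", "rendering"):   {"in": {"3d-scene"}, "out": {"image"}},
    ("objaworld", "recognition"): {"in": {"image"}, "out": {"3d-scene"}},
    ("objectron", "recognition"): {"in": {"image"}, "out": {"3d-scene"}},
}

def modalities(dataset: str, task: str):
    """Return the (input, output) modality sets for a dataset/task pair."""
    spec = TASKS[(dataset, task)]
    return spec["in"], spec["out"]

# e.g. modalities("clevr", "rendering") -> ({"3d-scene"}, {"image"})
```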
## VQGAN Models and Codebooks

The `vqgan-models-and-codebooks` folder contains all the VQGAN model checkpoints and codebooks for the datasets used in the 3D MLLM project, stored in the following structure:

```
vqgan-models-and-codebooks/
|-- clevr/
|   |-- 2024-10-10T09-21-36_custom_vqgan_CLEVR-LARGE/                               # VQGAN model checkpoint for CLEVR
|   |-- custom_vqgan_embedding_1024CLEVRLARGE_256dim.npy                            # VQGAN codebook for CLEVR
|-- domain-agnostic/
|   |-- vqgan_gumbel_f8/                                                            # domain-agnostic VQGAN checkpoint (provided by taming-transformers)
|   |-- quantize_weight_8192.npy                                                    # domain-agnostic VQGAN codebook
|-- objaworld/
|   |-- 2025-01-17T09-02-22_custom_vqgan_SYNTHETIC_LIVINGROOM_PARK_LARGE_EP100/     # VQGAN model checkpoint for ObjaWorld
|   |-- custom_vqgan_embedding_256SYNTHETIC_LIVINGROOM_PARK_LARGE_EP100_256dim.npy  # VQGAN codebook for ObjaWorld
|-- objectron/
|   |-- 2024-11-03T05-41-42_custom_vqgan_OMNI3D_OBJECTRON_ep200/                    # VQGAN model checkpoint for Objectron
|   |-- custom_vqgan_embedding_256Omni3D-OBJECTRON_256dim.npy                       # VQGAN codebook for Objectron
```
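Each codebook is a NumPy array of embedding vectors; reading the CLEVR filename, it holds 1024 entries of dimension 256. A minimal sketch of mapping VQGAN codebook indices back to their embeddings, using a random stand-in array instead of loading the actual `.npy` file (the 16x16 index-grid size is an assumption for illustration):

```python
import numpy as np

# Stand-in for np.load("custom_vqgan_embedding_1024CLEVRLARGE_256dim.npy"):
# 1024 codebook entries, each a 256-dim embedding (sizes read off the filename).
codebook = np.random.default_rng(0).standard_normal((1024, 256)).astype(np.float32)

# A tokenized image is a grid of codebook indices; 16x16 here is an assumption.
indices = np.random.default_rng(1).integers(0, 1024, size=(16, 16))

# Fancy indexing looks up the embedding for every index -> (16, 16, 256) feature map.
embeddings = codebook[indices]
print(embeddings.shape)  # (16, 16, 256)
```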
## Images and Scenes for Evaluation

The `images-and-scenes-for-evaluation` folder contains all the ground-truth images and scenes for the datasets used in the 3D MLLM project; they are used to compute the evaluation metrics for the different tasks. They are stored in the following structure:

```
images-and-scenes-for-evaluation/
|-- clevr/       # all images and scenes for evaluation for CLEVR
|-- objaworld/   # all images and scenes for evaluation for ObjaWorld
|-- objectron/   # all scenes for evaluation for Objectron
```
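As one illustration of how a generated image could be scored against its ground-truth counterpart, a simple per-pixel metric such as PSNR can be computed. This particular metric is an assumption chosen for illustration; the project's actual evaluation metrics may differ:

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth and a generated image."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.zeros((64, 64, 3), dtype=np.uint8)   # stand-in ground-truth image
pred = gt.copy()
pred[0, 0, 0] = 255                          # one wrong pixel
score = psnr(gt, pred)                       # high PSNR: images nearly identical
```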