Commit 4e55943 (verified) · Parent: 85078c3 · committed by Factral

Create README.md

Files changed (1): README.md (+98 −0)
---
license: mit
task_categories:
- image-segmentation
pretty_name: nespof
---


# Extended NeSpoF Dataset

<div align="center">
<img src="https://i.imgur.com/D3SaEU8.png" alt="UnMix-NeRF Overview" width="40%">

<div align="center">

**[Fabian Perez](https://github.com/Factral)¹² · [Sara Rojas](https://sararoma95.github.io/sr/)² · [Carlos Hinojosa](https://carloshinojosa.me/)² · [Hoover Rueda-Chacón](http://hfarueda.com/)¹ · [Bernard Ghanem](https://www.bernardghanem.com/)²**

¹Universidad Industrial de Santander · ²King Abdullah University of Science and Technology (KAUST)

</div>

This dataset is an extension of the NeSpoF dataset, enriched with ground-truth material labels for evaluating material segmentation in synthetic multi-view settings. The annotations provide consistent material labeling across different viewpoints for comprehensive scene analysis.

</div>


### Dataset Sources

* **GitHub:** [Official Code](https://github.com/Factral/UnMix-NeRF)
* **Paper:** [UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields (ICCV 2025)](https://arxiv.org/pdf/2506.21884)
* **Repository:** [Original NeSpoF Repository](https://github.com/youngchan-k/nespof)


## Direct Use

This dataset is intended for training and evaluating material segmentation models, and is particularly useful for multi-view segmentation scenarios and NeRF-based material analysis.
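
Since the labels are meant for segmentation evaluation, a typical workflow compares predicted material maps against the ground-truth label images using per-class intersection-over-union (IoU). The sketch below is a minimal illustration only: it assumes the material index can be read from one channel of the `raw/` PNGs (the actual label encoding may differ) and that the prediction is an integer label map of the same resolution.

```python
# Minimal IoU sketch (assumption: ground-truth indices are readable from one
# channel of the raw/ PNGs; adapt the decoding to the actual label encoding).
import numpy as np
from PIL import Image

def load_material_ids(path: str) -> np.ndarray:
    """Read a label image and return an (H, W) array of material indices."""
    img = np.array(Image.open(path))
    return img[..., 0] if img.ndim == 3 else img  # assume indices in the first channel

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """Per-class IoU between two integer label maps of equal shape."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

# Example usage (paths and class count are placeholders):
# gt = load_material_ids("scene/raw/eval/r_0.png")
# pred = your_model_output  # (H, W) integer label map
# print("mIoU:", np.nanmean(per_class_iou(pred, gt, num_classes=8)))
```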

## Dataset Structure

The dataset has the following directory structure:

```
scene/
├── color/
│   ├── eval/
│   └── train/
│       └── r_x.png
└── raw/
    ├── eval/
    └── train/
        └── r_x.png
```

Here, `x` corresponds to the matching frame ID from the original NeSpoF dataset.
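
As a concrete illustration of this layout, the following sketch enumerates the frames of one split and pairs each `color/` image with its `raw/` label counterpart by frame ID. The `r_*.png` glob pattern and the scene root path are assumptions taken from the structure above.

```python
# Sketch of pairing color and raw label frames by frame ID (paths/patterns assumed).
from pathlib import Path
import re

def list_frames(scene_root: str, split: str = "train") -> dict[str, dict[str, Path]]:
    """Map frame ID -> {'color': path, 'raw': path} for one split."""
    frames: dict[str, dict[str, Path]] = {}
    for kind in ("color", "raw"):
        for path in sorted(Path(scene_root, kind, split).glob("r_*.png")):
            match = re.fullmatch(r"r_(\d+)", path.stem)
            if match:
                frames.setdefault(match.group(1), {})[kind] = path
    return frames

# Example usage (scene path is a placeholder):
# for frame_id, paths in list_frames("scene", split="eval").items():
#     print(frame_id, paths.get("color"), paths.get("raw"))
```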

## Dataset Creation

### Source Data

#### Who are the source data producers?

The dataset extension was produced by the authors of the paper "UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields," accepted at ICCV 2025.

### Annotations

#### Annotation process

Annotations were generated automatically by rendering the ground-truth material indices of each scene, producing labels that are consistent across views and aligned with the original NeSpoF frames.

#### Who are the annotators?

The annotations were generated automatically through rendering with Mitsuba 3; no manual labeling was involved.
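
The exact rendering setup behind these labels is not documented in this card. As a rough illustration only, the sketch below shows how a per-shape index channel could be rendered with Mitsuba 3's `aov` integrator; the scene path, the choice of the `shape_index` AOV, and the mapping from shape indices to material labels are all assumptions rather than the authors' actual pipeline.

```python
# Hedged sketch: rendering a per-shape index pass with Mitsuba 3's "aov" integrator.
# Scene file, channel naming, and the shape-index -> material-label mapping are
# illustrative assumptions; the pipeline used to build this dataset may differ.
import mitsuba as mi

mi.set_variant("scalar_rgb")

scene = mi.load_file("scene.xml")  # placeholder path to a NeSpoF-style scene description

# "aov" integrator with a nested path tracer: renders color plus a shape-index channel.
integrator = mi.load_dict({
    "type": "aov",
    "aovs": "idx:shape_index",
    "color": {"type": "path"},
})

image = mi.render(scene, integrator=integrator, spp=16)

# Write all channels (color + index AOV) to a multichannel EXR; the index channel
# can then be remapped to material labels and saved as the raw/ label images.
mi.util.write_bitmap("r_0_aovs.exr", image)
```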

## Bias, Risks, and Limitations

No known biases or risks have been identified in this synthetic dataset. However, its synthetic nature may limit direct applicability to real-world scenarios without additional adaptation or fine-tuning.

### Recommendations

Users should be aware that performance on this synthetic dataset may not fully generalize to real-world data without further adaptation.

## Citation

If you use this dataset, please cite the following paper:

```bibtex
@inproceedings{perez2025unmix,
  title={UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields},
  author={Perez, Fabian and Rojas, Sara and Hinojosa, Carlos and Rueda-Chac{\'o}n, Hoover and Ghanem, Bernard},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}
```


## Dataset Card Contact

For inquiries regarding the dataset, please contact the corresponding authors listed in the referenced paper.