Commit 4c47bcc (verified) · parent e34ac59
Zhong committed: Update README.md
Files changed (1): README.md (+18, -15)

Referring Expression Generation (REG)—the task of producing a concise and unambiguous …

1. **Data leakage in RefCOCO/RefCOCO+**, which raises concerns about evaluation contamination, especially for VLMs trained on MSCOCO.
2. **Lack of spoken data**, despite the fact that real-world referring is often **real-time** and **spontaneous**, unlike written language, which benefits from planning and revision.

To address these gaps, we introduce **RefOI**, a curated dataset built from the [OpenImages V7](https://storage.googleapis.com/openimages/web/index.html) Instance Segmentation validation set.

**Key features:**
- 1,485 real-world object instances, distributed nearly evenly across **COCO** (744) and **non-COCO** (741) classes.
- Includes **single presence** and **co-occurrence** images for each class.
- Each instance annotated with **3 written** and **2 spoken** human referring expressions.
 
Using RefOI, we evaluate several state-of-the-art VLMs and uncover **three tiers** …

- …
- **Redundancy**: Models include excessive or irrelevant details, violating principles of informativeness and efficiency.
- **Misalignment**: Model preferences diverge from human pragmatics, favoring visual complexity over minimal spatial cues.
 
Our results also highlight the inadequacy of standard automatic metrics (e.g., BLEU, CIDEr) and listener-based scores (e.g., REC), which fail to capture these pragmatic shortcomings—emphasizing the need for more cognitively grounded evaluation protocols.
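
To make this concrete: surface n-gram metrics combine overlap with a brevity penalty, so a minimal expression that a listener resolves instantly can score far below a verbose one. A toy sketch with NLTK's sentence-level BLEU (the strings are invented for illustration, not drawn from RefOI):

```python
from nltk.translate.bleu_score import sentence_bleu

# Invented reference and candidates, purely for illustration.
reference = "the small red cup on the left side of the wooden table".split()
minimal = "the red cup".split()  # concise, human-preferred style
verbose = ("the small red cup on the left side of the wooden table "
           "near the window").split()

# Unigram + bigram BLEU: the brevity penalty drives the minimal expression
# toward zero (~0.05), while the verbose candidate scores far higher (~0.8),
# regardless of which one a human listener would actually prefer.
print(sentence_bleu([reference], minimal, weights=(0.5, 0.5)))
print(sentence_bleu([reference], verbose, weights=(0.5, 0.5)))
```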

![Overview](vlm-reg.png)

## Dataset Schema and Split

### Data Fields

Each entry in the dataset contains the following fields:

- `image`: The original image file.
- …
- `written_descriptions`: Three human‑typed referring expressions.
- `spoken_descriptions`: Two human‑spoken expressions (transcribed and optionally corrected by annotators).
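
Because the excerpt above elides part of the field list, a quick way to see the complete schema is to inspect the split's `features` mapping. A minimal sketch (the split names are defined in the next subsection):

```python
from datasets import load_dataset

ds = load_dataset("Seed42Lab/RefOI", split="single_presence")

# Maps every field name to its declared type, including fields
# elided from the list above.
print(ds.features)

# Fields documented above.
print(ds[0]["written_descriptions"])
print(ds[0]["spoken_descriptions"])
```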
 
### Dataset Split

- `single_presence` (`co_occurrence = 1`): only one object of the target class appears (no same‑class distractors in the image).
- `co_occurrence` (`co_occurrence > 1`): multiple objects of the same class appear in the image, introducing potential referential ambiguity.
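
The definitions above imply that each record carries a `co_occurrence` count. A small sanity-check sketch, assuming `co_occurrence` is an integer field on every record (the field declaration itself is not shown in this excerpt):

```python
from collections import Counter

from datasets import load_dataset

ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")
ds_multi = load_dataset("Seed42Lab/RefOI", split="co_occurrence")

# Expected: every single_presence record has co_occurrence == 1,
# and every co_occurrence record has a value greater than 1.
print(Counter(ds_single["co_occurrence"]))
print(Counter(ds_multi["co_occurrence"]))
```
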
## Usage

```python
from datasets import load_dataset

# Images with only one object of the target class.
ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")
# Images with multiple objects of the same class.
ds_multi = load_dataset("Seed42Lab/RefOI", split="co_occurrence")

print(ds_single[0])
print(ds_multi[0])
```
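
Building on the snippet above, the per-record description fields can be read directly (assuming each holds a list of strings, three written and two spoken, as the schema section describes):

```python
from datasets import load_dataset

ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")
sample = ds_single[0]

# Three human-typed and two human-spoken expressions per instance.
for text in sample["written_descriptions"]:
    print("written:", text)
for text in sample["spoken_descriptions"]:
    print("spoken:", text)
```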

…

Model performance under different **Instr.** (Instruction) settings: **Dft.** (D…)

## Recommended Use of Our Dataset

The `RefOI` dataset is designed for fine-grained REG/REC analysis. It distinguishes between **COCO** and **non-COCO classes**, and between scenes with **single presence vs. co-occurrence** of the same class.
We encourage users to leverage these distinctions for deeper insights and invite community contributions to expand non-COCO annotations.
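
As one example, the COCO vs. non-COCO contrast can be split into two evaluation pools. The class field is not among the fields shown in this excerpt, so the `label` name and the `coco_classes` set below are hypothetical placeholders to adapt to the actual schema:

```python
from datasets import load_dataset

ds = load_dataset("Seed42Lab/RefOI", split="single_presence")

# Hypothetical: `label` stands in for whatever field stores the class name,
# and `coco_classes` for the real COCO category list (truncated here).
coco_classes = {"person", "car", "dog"}

ds_coco = ds.filter(lambda ex: ex["label"] in coco_classes)
ds_non_coco = ds.filter(lambda ex: ex["label"] not in coco_classes)
print(len(ds_coco), len(ds_non_coco))
```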