  download_size: 429876515
  dataset_size: 542102506.0
---

<h1 align="center">💬 VLM-REG: Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation</h1>

<p align="center">
📃 <a href="https://openreview.net/pdf?id=oj3ETSitjb" target="_blank">Paper</a> | 🏠 <a href="https://vlm-reg.github.io" target="_blank">Homepage</a>
</p>

## Overview

Referring Expression Generation (REG), the task of producing a concise and unambiguous description that allows a listener to identify a target object, lies at the heart of pragmatic communication in vision-language systems. However, existing benchmarks suffer from two major limitations:

1. **Data leakage in RefCOCO/RefCOCO+**, which raises concerns about evaluation contamination, especially for VLMs trained on MSCOCO.
2. **Lack of spoken data**, even though real-world referring is often **real-time** and **spontaneous**, unlike written language, which benefits from planning and revision.

**To address these gaps**, we introduce **RefOI**, a curated dataset built from the [OpenImages V7](https://storage.googleapis.com/openimages/web/index.html) Instance Segmentation validation set.

**Key features:**
- 1,485 real-world object instances, split nearly evenly between **COCO** (744) and **non-COCO** (741) classes.
- Includes both **single-presence** and **co-occurrence** images for each class.
- Each instance is annotated with **3 written** and **2 spoken** human referring expressions.

Using RefOI, we evaluate several state-of-the-art VLMs and uncover **three tiers of pragmatic failure**:

- **Ambiguity**: Generated expressions often fail to uniquely identify the referent.
- **Redundancy**: Models include excessive or irrelevant details, violating principles of informativeness and efficiency.
- **Misalignment**: Model preferences diverge from human pragmatics, favoring visual complexity over minimal spatial cues.

Our results also highlight the inadequacy of standard automatic metrics (e.g., BLEU, CIDEr) and listener-based scores (e.g., REC), which fail to capture these pragmatic shortcomings, underscoring the need for more cognitively grounded evaluation protocols.

![Overview](vlm-reg.png)

## Dataset Structure

Each entry in the dataset contains the following fields:

- `image`: The original image file.
- `mask`: A binary segmentation mask isolating the target object.
- `boxed_image`: The original image overlaid with a red bounding box highlighting the target object.
- `box_xmin`, `box_xmax`, `box_ymin`, `box_ymax`: The normalized bounding-box coordinates.
- `is_coco`: A binary flag (1 for COCO classes, 0 for non-COCO classes).
- `label_name`: The object's category label (e.g., "muffin", "giraffe").
- `co_occurrence`: The number of same-class instances in the image (1 = no distractors; >1 = multiple).
- `written_descriptions`: Three human-typed referring expressions.
- `spoken_descriptions`: Two human-spoken expressions (transcribed and optionally corrected by annotators).

**Dataset splits:**
- `single_presence`: `co_occurrence` = 1 (the target is the only instance of its class in the image).
- `co_occurence`: `co_occurrence` > 1 (the image contains same-class distractors).
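
To make these fields concrete, here is a minimal sketch that loads one record and crops the target object from the image. It assumes the `image` column decodes to a `PIL.Image` (the usual behavior of the 🤗 Datasets `Image` feature) and that the `box_*` values are normalized to the [0, 1] range relative to image width and height; check the actual feature types after loading.

```python
from datasets import load_dataset

# Minimal sketch (assumptions: `image` decodes to a PIL.Image via the Datasets
# Image feature, and box_* coordinates are normalized to [0, 1]).
ds = load_dataset("Seed42Lab/RefOI", split="single_presence")
ex = ds[0]

img = ex["image"]
width, height = img.size

# Convert normalized box coordinates to absolute pixel coordinates.
left = int(ex["box_xmin"] * width)
right = int(ex["box_xmax"] * width)
top = int(ex["box_ymin"] * height)
bottom = int(ex["box_ymax"] * height)

# Crop the target region and inspect the annotations.
target = img.crop((left, top, right, bottom))
print(ex["label_name"], ex["co_occurrence"])
print(ex["written_descriptions"])
print(ex["spoken_descriptions"])
```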

## Usage

```python
from datasets import load_dataset

# Split with a single instance of the target class (no same-class distractors)
ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")
# Split with multiple instances of the target class (same-class distractors present)
ds_cooc = load_dataset("Seed42Lab/RefOI", split="co_occurence")

print(ds_single[0])
print(ds_cooc[0])
```
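
Note that the two splits separate single-presence scenes from co-occurrence scenes; neither split corresponds to COCO vs. non-COCO classes. To build class-based subsets, one option (a sketch, assuming `is_coco` is stored as a 0/1 integer as described above) is to filter within a split:

```python
from datasets import load_dataset

# Sketch: carve one split into COCO and non-COCO subsets using the `is_coco` flag.
ds_single = load_dataset("Seed42Lab/RefOI", split="single_presence")

coco_subset = ds_single.filter(lambda ex: ex["is_coco"] == 1)
non_coco_subset = ds_single.filter(lambda ex: ex["is_coco"] == 0)

print(len(coco_subset), len(non_coco_subset))
```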

## Experiments

We compare multiple models across standard automatic metrics, listener-based accuracy, and human judgment. Humans outperform all models by large margins (over 90% human-judged accuracy versus roughly 50% for the stronger models).
Automatic metrics such as BLEU and CIDEr correlate poorly with human judgment and frequently rank verbose models higher.
Even listener-based scores (REC) fail to consistently match human preferences, indicating that existing metrics do not capture pragmatic competence effectively.

| Model | Instr. | BLEU-1 | BLEU-4 | ROUGE-1 | ROUGE-L | METEOR | CIDEr | SPICE | BERT | CLIP | REC | Human | Irrel% |
| --------- | ------ | --------- | -------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| LLaVA-7B | Dft. | 13.27 | 1.60 | 18.09 | 16.30 | 19.29 | 2.10 | 10.50 | 85.51 | 79.02 | 17.28 | 39.46 | 87.30 |
| | Brf. | 28.74 | 6.05 | **36.46** | 35.50 | 19.15 | 10.80 | 24.59 | 89.02 | 70.72 | 13.58 | 30.57 | 41.95 |
| LLaVA-13B | Dft. | 8.17 | 1.07 | 11.98 | 10.94 | 16.89 | 0.77 | 7.92 | 84.61 | 79.85 | 15.27 | 46.40 | 91.85 |
| | Brf. | 28.96 | 5.81 | 36.44 | **35.64** | 20.13 | 8.14 | 21.63 | 88.42 | 72.99 | 15.33 | 32.53 | 49.65 |
| LLaVA-34B | Dft. | 6.29 | 0.78 | 9.82 | 9.11 | 16.15 | 0.07 | 7.61 | 84.39 | 79.86 | 16.21 | 46.53 | 92.90 |
| | Brf. | 28.55 | 6.38 | 32.99 | 31.67 | 20.48 | 9.60 | 16.50 | 88.50 | 74.95 | 17.22 | 36.77 | 56.11 |
| XComposer | Dft. | 5.25 | 0.65 | 8.38 | 7.81 | 14.58 | 3.10 | 6.37 | 84.11 | 79.86 | 18.56 | 52.19 | 92.81 |
| | Brf. | 13.59 | 2.17 | 17.77 | 16.69 | 19.95 | 5.52 | 10.63 | 85.52 | 79.66 | 18.36 | 51.65 | 80.36 |
| MiniCPM-V | Dft. | 6.38 | 0.67 | 9.86 | 8.78 | 15.28 | 0.05 | 6.30 | 84.29 | 80.38 | 19.10 | 45.12 | 92.97 |
| | Brf. | 16.03 | 3.15 | 19.56 | 18.19 | 18.77 | 6.36 | 11.16 | 86.29 | 78.55 | 17.15 | 45.79 | 72.87 |
| GLaMM | Dft. | 15.01 | 3.32 | 16.69 | 16.29 | 11.49 | 9.08 | 3.90 | 86.42 | 58.26 | 3.70 | 3.84 | 74.68 |
| | Brf. | 18.46 | 4.45 | 20.92 | 20.46 | 14.18 | 10.48 | 4.44 | 86.65 | 58.60 | 3.77 | 4.85 | 70.52 |
| CogVLM | Dft. | 31.13 | **8.70** | 33.89 | 32.32 | 23.50 | **41.62** | 24.09 | 89.78 | 66.54 | 15.97 | 26.67 | **26.39** |
| | Brf. | **31.39** | 8.69 | 34.70 | 32.94 | **24.87** | 41.41 | **24.74** | **90.00** | 69.15 | 18.06 | 33.53 | 29.88 |
| GPT-4o | Dft. | 7.47 | 0.85 | 11.61 | 10.43 | 17.39 | 0.03 | 7.21 | 84.57 | **80.81** | **21.65** | **59.80** | 89.81 |
| | Brf. | 25.30 | 5.78 | 28.76 | 27.36 | 19.02 | 8.17 | 15.31 | 88.11 | 76.58 | 19.03 | 51.72 | 52.75 |
| Human | Spk. | 66.18 | 22.58 | 70.15 | 66.45 | 48.28 | 112.04 | 42.35 | 93.89 | 71.60 | 30.46 | 92.20 | 9.15 |
| | Wrt. | - | - | - | - | - | - | - | - | 70.43 | 30.06 | 89.29 | 7.29 |

Model performance under different **Instr.** (Instruction) settings: the **Dft.** (Default) prompt and the **Brf.** (Brief) prompt. All model predictions are evaluated against the human **Wrt.** (Written) expressions as reference texts; human **Spk.** (Spoken) expressions are likewise scored against the written references. **Irrel%** is the percentage of irrelevant words in the referring expressions of examples judged successful.
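
For illustration, the snippet below sketches how an n-gram metric such as BLEU-1 can be computed against the three written references per instance, here with NLTK's corpus BLEU. It is a rough sketch of the general setup rather than the paper's exact evaluation pipeline; `model_outputs` and the reference strings are hypothetical placeholders, and tokenization is plain whitespace splitting.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical model outputs, one generated referring expression per instance.
model_outputs = ["the muffin on the left", "a giraffe near the fence"]
# Three human-written references per instance (placeholders standing in for
# the `written_descriptions` field).
references = [
    ["the left muffin", "muffin closest to the cup", "small muffin on the left"],
    ["the giraffe by the fence", "giraffe on the right", "the tall giraffe"],
]

# Whitespace tokenization; the paper's preprocessing may differ.
hyps = [h.split() for h in model_outputs]
refs = [[r.split() for r in rs] for rs in references]

# BLEU-1 weights only the unigram precision term.
bleu1 = corpus_bleu(refs, hyps, weights=(1.0, 0, 0, 0),
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU-1: {bleu1:.4f}")
```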

## Recommended Use of Our Dataset

The `RefOI` dataset is designed for fine-grained REG/REC analysis. It distinguishes between **COCO** and **non-COCO** classes, and between scenes with **single presence** vs. **co-occurrence** of the same class.
We encourage users to leverage these distinctions for deeper insights and invite community contributions to expand the non-COCO annotations.
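
As a starting point for that analysis, the sketch below tallies the four cells of the COCO/non-COCO by single-presence/co-occurrence grid (again assuming `is_coco` is stored as a 0/1 integer, as described in the Dataset Structure section).

```python
from collections import Counter
from datasets import load_dataset

# Sketch: count instances per analysis cell
# (COCO vs. non-COCO classes, single-presence vs. co-occurrence scenes).
for split in ["single_presence", "co_occurence"]:
    ds = load_dataset("Seed42Lab/RefOI", split=split)
    flags = Counter(ds["is_coco"])  # column access, so images are not decoded
    print(f"{split}: COCO = {flags[1]}, non-COCO = {flags[0]}")
```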

## Citation
If you find our data useful, please consider citing our work:

```bibtex
@misc{vlmreg2025,
  title = {Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation},
  year  = {2025},
  note  = {Under review at COLM 2025, ACL GEM 2025, CVinW CVPR 2025},
  url   = {https://vlm-reg.github.io},
}
```