nielsr (HF Staff) committed · verified
Commit 7d98db6 · 1 Parent(s): 583b29b

Add link to paper and code repository, update task category


This PR links the dataset to the paper https://huggingface.co/papers/2412.14133 and to the code repository, and updates the task category to `image-text-to-text`.
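For anyone applying this kind of metadata edit outside the web editor, here is a minimal sketch using `huggingface_hub`'s dataset-card helpers. It covers only the task-category part of the change; the repo id is a hypothetical placeholder, and this PR itself may well have been made directly through the Hub UI.

```python
# Sketch only: apply the same task-category update programmatically.
# "your-namespace/PopVQA" is a hypothetical repo id, not taken from this PR.
from huggingface_hub import DatasetCard

card = DatasetCard.load("your-namespace/PopVQA")
card.data.task_categories = ["image-text-to-text"]  # was: visual-question-answering
card.push_to_hub("your-namespace/PopVQA", commit_message="Update task category")
```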

Files changed (1)
1. README.md +7 -6
README.md CHANGED
````diff
@@ -1,26 +1,26 @@
 ---
-license: mit
-task_categories:
-- visual-question-answering
 language:
 - en
-pretty_name: PopVQA
+license: mit
 size_categories:
 - 10K<n<100K
+task_categories:
+- image-text-to-text
+pretty_name: PopVQA
 ---
 
 # PopVQA: Popular Entity Visual Question Answering
 
 PopVQA is a dataset designed to study the performance gap in vision-language models (VLMs) when answering factual questions about entities presented in **images** versus **text**.
 
+Paper: https://huggingface.co/papers/2412.14133
+Code: https://github.com/idocohen/vlm-modality-gap
 
 ![PopVQA Teaser](./popvqa_teaser.png)
 
-
 ## 🔍 Motivation
 <img src="./paper_teaser.png" alt="Motivation" width="700">
 
-
 PopVQA was curated to explore the disparity in model performance when answering factual questions about an entity described in text versus depicted in an image. This is achieved by asking the same questions twice, once with the textual representation (the entity's name), then, with the visual representation (entity image). We include several questions about every entity to allow a more fine grained evaluation.
 This dataset was introduced in the paper:
 
@@ -85,3 +85,4 @@ To build the dataset, run:
 
 ```bash
 python scripts/build_dataset.py --base-df path/to/base_entities.csv
+```
````
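With the updated metadata merged, the dataset surfaces under the `image-text-to-text` task filter. Below is a minimal loading sketch; the repo id is a placeholder and no column names are assumed, so inspect `ds.features` for the actual schema.

```python
# Minimal sketch: the repo id below is hypothetical, not confirmed by this PR.
from datasets import load_dataset

ds = load_dataset("your-namespace/PopVQA", split="train")  # hypothetical repo id
print(ds.features)  # inspect the real schema (entity, image, question fields, etc.)
print(ds[0])        # one record; per the README, each entity has several questions
```

As the README describes, each question is posed both with the entity's name (text) and with the entity's image, so downstream evaluation code typically builds a text-only and an image+text prompt from each record.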