nielsr (HF Staff) committed
Commit 598fd6c · verified · 1 parent: cdc5f74

Improve dataset card: Update task category, add tags, and add sample usage


This PR updates the dataset card for "Forgotten Polygons: Multimodal Large Language Models are Shape-Blind" to enhance its accuracy and usability.

Key changes include:
- Updating the `task_categories` metadata from `image-classification` to `image-text-to-text`, which more accurately reflects the dataset's use in evaluating Multimodal Large Language Models (MLLMs) on visual-mathematical reasoning tasks.
- Adding relevant `tags` to improve discoverability, such as `mllm`, `multimodal`, `geometric-reasoning`, `shape-recognition`, `visual-question-answering`, `chain-of-thought`, `mathematics`, and `reasoning`.
- Adding `language: en` to the metadata to declare the dataset's language.
- Introducing a "Sample Usage" section with a code snippet directly from the associated GitHub repository, demonstrating how to run an evaluation using the dataset.

These updates provide clearer context for users and improve the dataset's overall documentation on the Hugging Face Hub.
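As a quick sanity check, the front matter introduced by this PR is plain YAML and can be validated before committing. A minimal sketch, assuming PyYAML is installed (the excerpt below reproduces only the fields added by this PR):

```python
import yaml  # PyYAML; assumed available

# Excerpt of the README front matter added by this PR.
front_matter = """
task_categories:
- image-text-to-text
tags:
- multimodal
- mllm
- geometric-reasoning
- visual-question-answering
- shape-recognition
- chain-of-thought
- mathematics
- reasoning
language:
- en
"""

meta = yaml.safe_load(front_matter)
print(meta["task_categories"])  # ['image-text-to-text']
print(len(meta["tags"]))        # 8
```

Running a check like this catches indentation or list-syntax mistakes that would otherwise break the card's metadata parsing on the Hub.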

Files changed (1): README.md (+26 −2)
README.md CHANGED

```diff
@@ -1,4 +1,17 @@
 ---
+task_categories:
+- image-text-to-text
+tags:
+- multimodal
+- mllm
+- geometric-reasoning
+- visual-question-answering
+- shape-recognition
+- chain-of-thought
+- mathematics
+- reasoning
+language:
+- en
 dataset_info:
   features:
   - name: shape
@@ -40,8 +53,6 @@ configs:
     path: data/heptagons_with_visual_cues-*
   - split: arrow_on_plus_with_visual_cues
     path: data/arrow_on_plus_with_visual_cues-*
-task_categories:
-- image-classification
 library_name:
 - pytorch
 ---
@@ -56,6 +67,19 @@ This dataset is part of the work **"Forgotten Polygons: Multimodal Large Languag
 
 This dataset is designed to evaluate the shape understanding capabilities of Multimodal Large Language Models (MLLMs).
 
+## Sample Usage
+
+This dataset is designed to be used with the evaluation code provided in the [GitHub Repository](https://github.com/rsinghlab/Shape-Blind/tree/main). To evaluate MLLMs on various tasks using this dataset, follow the instructions in the `evaluation` folder of the repository.
+
+For example, to run a shape identification task using LLaVA-1.5:
+
+```bash
+# Navigate to the 'evaluation' folder in the cloned GitHub repository
+cd Shape-Blind/evaluation
+# Run the evaluation script
+python3 evaluate_MLLMs.py --model_version llava-1.5 --task shape_id --dataset_size full
+```
+
 ## Dataset Splits
 
 Each split corresponds to a different reasoning task and shape identification challenge.
```