Improve dataset card: Update task category, add tags, and add sample usage

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +26 -2
README.md CHANGED
@@ -1,4 +1,17 @@
  ---
+ task_categories:
+ - image-text-to-text
+ tags:
+ - multimodal
+ - mllm
+ - geometric-reasoning
+ - visual-question-answering
+ - shape-recognition
+ - chain-of-thought
+ - mathematics
+ - reasoning
+ language:
+ - en
  dataset_info:
  features:
  - name: shape
@@ -40,8 +53,6 @@ configs:
  path: data/heptagons_with_visual_cues-*
  - split: arrow_on_plus_with_visual_cues
  path: data/arrow_on_plus_with_visual_cues-*
- task_categories:
- - image-classification
  library_name:
  - pytorch
  ---
@@ -56,6 +67,19 @@ This dataset is part of the work **"Forgotten Polygons: Multimodal Large Languag

  This dataset is designed to evaluate the shape understanding capabilities of Multimodal Large Language Models (MLLMs).

+ ## Sample Usage
+
+ This dataset is designed to be used with the evaluation code provided in the [GitHub Repository](https://github.com/rsinghlab/Shape-Blind/tree/main). To evaluate MLLMs on various tasks using this dataset, follow the instructions in the `evaluation` folder of the repository.
+
+ For example, to run a shape identification task using LLaVA-1.5:
+
+ ```bash
+ # Navigate to the 'evaluation' folder in the cloned GitHub repository
+ cd Shape-Blind/evaluation
+ # Run the evaluation script
+ python3 evaluate_MLLMs.py --model_version llava-1.5 --task shape_id --dataset_size full
+ ```
+
  ## Dataset Splits

  Each split corresponds to a different reasoning task and shape identification challenge.
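
For reference, the splits named in the configs above can also be loaded directly with the Hugging Face `datasets` library, independently of the evaluation scripts. The snippet below is a minimal sketch and not part of this card change: the repository ID is a placeholder, while the split name and the `shape` feature come from the metadata shown in the diff.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the actual dataset repo this card belongs to.
REPO_ID = "your-org/shape-blind-dataset"

# Load one of the splits listed in the configs section of the card.
ds = load_dataset(REPO_ID, split="arrow_on_plus_with_visual_cues")

print(ds)              # features and number of rows
print(ds[0]["shape"])  # the 'shape' feature declared in dataset_info
```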