williamium committed
Commit a819561 · verified · 1 Parent(s): 9092179

Update README.md

Files changed (1):
  1. README.md +7 -8
README.md CHANGED
@@ -52,22 +52,21 @@ configs:
 
 ## Dataset Description
 
-**CoreCognition** is a large-scale benchmark encompassing **12 core knowledge concepts** grounded in developmental cognitive science, designed to evaluate the fundamental cognitive abilities of Multi-modal Large Language Models (MLLMs).
+**CoreCognition** is a large-scale benchmark encompassing **12 core knowledge concepts** grounded in developmental cognitive science, designed to evaluate the fundamental core abilities of Multi-modal Large Language Models (MLLMs).
 
-While MLLMs demonstrate impressive abilities over high-level perception and reasoning, their robustness in the wild remains limited, often falling short on tasks that are intuitive and effortless for humans. We examine the hypothesis that these deficiencies stem from the absence of **core knowledge**—rudimentary cognitive abilities innate to humans from early childhood.
+While MLLMs demonstrate impressive abilities over high-level perception and reasoning, their robustness in the wild remains limited, often falling short on tasks that are intuitive and effortless for humans. We examine the hypothesis that these deficiencies stem from the absence of **core knowledge**—rudimentary core abilities innate to humans.
 
-This dataset contains **1,423** multimodal cognitive assessment samples with images and questions, covering fundamental concepts like object permanence, spatial reasoning, counting, and other core cognitive abilities that emerge in human development.
+This dataset contains **1,423** multimodal samples with images/videos and questions, covering fundamental concepts like object permanence, spatial reasoning, counting, and other core abilities that emerge in human development.
 
 (Additional **80 Concept Hacking** questions in our paper will be released separately)
 
-🔗 **Project Website**: [https://williamium3000.github.io/core-knowledge/](https://williamium3000.github.io/core-knowledge/)
+🔗 **Website**: [https://williamium3000.github.io/core-knowledge/](https://williamium3000.github.io/core-knowledge/)
 🔗 **Paper**: [https://arxiv.org/abs/2410.10855](https://arxiv.org/abs/2410.10855)
+🔗 **GitHub**: [https://github.com/williamium3000/core-knowledge](https://github.com/williamium3000/core-knowledge)
 
-## Repository Formats
+## Formats
 
-This repository provides **2 formats**:
-
-1. **HuggingFace Preview** - For browsing and exploration (visible in HuggingFace viewer, contains embedded 448*448-pixel image preview but not videos)
+1. **HuggingFace Preview** - For browsing and exploration (visible in the HuggingFace viewer, contains embedded 448*448-pixel image previews but no videos)
 2. **Complete Dataset ZIP (Recommended)** - Full data with all images and videos before resizing, 6.41GB
 
 ```
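
For orientation, here is a minimal sketch of how the two formats described in the updated README might be consumed. The dataset id `williamium/CoreCognition`, the split name, the column name, and the ZIP filename are assumptions for illustration; none of them is stated in this diff, so check the repository's file listing for the actual values.

```python
# Sketch of consuming the two formats described in the README.
# Assumes the dataset id "williamium/CoreCognition", the split "test",
# the column "question", and a ZIP named "CoreCognition.zip" in the repo
# root -- none of these names is confirmed by this commit.
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# 1. HuggingFace Preview: the viewer-friendly split with embedded
#    448*448 image previews (no videos).
preview = load_dataset("williamium/CoreCognition", split="test")
print(preview[0]["question"])  # hypothetical column name

# 2. Complete Dataset ZIP (recommended): full-resolution images and
#    videos before resizing (~6.41GB), fetched as a single file.
zip_path = hf_hub_download(
    repo_id="williamium/CoreCognition",
    filename="CoreCognition.zip",  # hypothetical filename
    repo_type="dataset",
)
print(zip_path)
```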