Update README.md
README.md changed:

```diff
@@ -38,4 +38,4 @@ The instruction-tuning phase employs [4M samples](https://huggingface.co/dataset
 See [here](https://github.com/OpenGVLab/all-seeing/tree/main/all-seeing-v2#training) for more details.
 
 ## Evaluation dataset
-A collection of 20 benchmarks, including 5 academic VQA benchmarks, 7 multimodal benchmarks specifically proposed for instruction-following LMMs, 3 referring expression comprehension
+A collection of 20 benchmarks, including 5 academic VQA benchmarks, 7 multimodal benchmarks specifically proposed for instruction-following LMMs, 3 referring expression comprehension benchmarks, 2 region captioning benchmarks, 1 referring question answering benchmark, 1 scene graph generation benchmark, and 1 relation comprehension benchmark.
```