---
license: apache-2.0
task_categories:
- text-retrieval
- text-classification
- token-classification
language:
- en
tags:
- multimodal
pretty_name: MMEB-V2
size_categories:
- 1M<n<10M
viewer: false
---
# MMEB-V2 (Massive Multimodal Embedding Benchmark)
[**Website**](https://tiger-ai-lab.github.io/VLM2Vec/) | [**Github**](https://github.com/TIGER-AI-Lab/VLM2Vec) | [**πŸ†Leaderboard**](https://huggingface.co/spaces/TIGER-Lab/MMEB) | [**πŸ“–MMEB-V2/VLM2Vec-V2 Paper**](https://arxiv.org/abs/2507.04590) | [**πŸ“–MMEB-V1/VLM2Vec-V1 Paper**](https://arxiv.org/abs/2410.05160)
## Introduction
Building upon our original [**MMEB**](https://arxiv.org/abs/2410.05160), **MMEB-V2** expands the evaluation scope with five new tasks: four video-based tasks (Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering) and one visual-document task, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings.
**This Hugging Face repository contains only the image and video files used in MMEB-V2, which need to be downloaded in advance.**
**For the video component, please note that this repository contains both sampled frames and raw videos. The sampled frames are typically sufficient for our evaluation suite; we also include the raw videos to accommodate any specific needs from users. If you prefer not to download the large raw video files, take care when cloning with `git lfs` (for example, exclude the raw video paths).**
The test data for each task in MMEB-V2 is available [here](https://huggingface.co/VLM2Vec) and will be automatically downloaded and used by our code.
## πŸš€ What's New
- **\[2025.07\]** Released the [tech report](https://arxiv.org/abs/2507.04590).
- **\[2025.05\]** Initial release of MMEB-V2/VLM2Vec-V2.
## Dataset Overview
We present an overview of the MMEB-V2 dataset below:
<img width="900" alt="abs" src="overview.png">
## Dataset Structure
The directory structure of this Hugging Face repository is shown below.
For video tasks, we provide both sampled frames and raw videos (the latter will be released later). For image tasks, we provide the raw images.
Files from each meta-task are zipped together, resulting in six files. For example, ``video_cls.tar.gz`` contains the sampled frames for the video classification task.
```
β†’ video-tasks/
β”œβ”€β”€ frames/
β”‚   β”œβ”€β”€ video_cls.tar.gz
β”‚   β”œβ”€β”€ video_qa.tar.gz
β”‚   β”œβ”€β”€ video_ret.tar.gz
β”‚   └── video_mret.tar.gz
└── raw videos/ (To be released)
β†’ image-tasks/
β”œβ”€β”€ mmeb_v1.tar.gz
└── visdoc.tar.gz
```
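As a convenience, the archives can also be fetched selectively with the `huggingface_hub` Python client. The snippet below is a minimal sketch, not the official download path: the repo id is an assumption (substitute this repository's actual id), and the pattern list deliberately skips the raw videos.

```python
# Hypothetical selective download of the MMEB-V2 archives.
# Assumption: the dataset repo id is "TIGER-Lab/MMEB-V2"; substitute the
# actual repo id of this repository if it differs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TIGER-Lab/MMEB-V2",
    repo_type="dataset",
    local_dir="MMEB_downloads",
    # Fetch only the frame and image archives; the raw videos are skipped.
    allow_patterns=["video-tasks/frames/*.tar.gz", "image-tasks/*.tar.gz"],
)
```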
After downloading and unzipping these files locally, you can organize them as shown below. (You may choose to use ``Git LFS`` or ``wget`` for downloading.)
Then, simply specify the correct file path in the configuration file used by your code.
```
β†’ MMEB
β”œβ”€β”€ video-tasks/
β”‚   └── frames/
β”‚       β”œβ”€β”€ video_cls/
β”‚       β”‚   β”œβ”€β”€ UCF101/
β”‚       β”‚   β”‚   └── video_1/        # video ID
β”‚       β”‚   β”‚       β”œβ”€β”€ frame1.png  # frame from video_1
β”‚       β”‚   β”‚       β”œβ”€β”€ frame2.png
β”‚       β”‚   β”‚       └── ...
β”‚       β”‚   β”œβ”€β”€ HMDB51/
β”‚       β”‚   β”œβ”€β”€ Breakfast/
β”‚       β”‚   └── ...                 # other datasets from the video classification category
β”‚       β”œβ”€β”€ video_qa/
β”‚       β”‚   └── ...                 # video QA datasets
β”‚       β”œβ”€β”€ video_ret/
β”‚       β”‚   └── ...                 # video retrieval datasets
β”‚       └── video_mret/
β”‚           └── ...                 # moment retrieval datasets
└── image-tasks/
    β”œβ”€β”€ mmeb_v1/
    β”‚   β”œβ”€β”€ OK-VQA/
    β”‚   β”‚   β”œβ”€β”€ image1.png
    β”‚   β”‚   β”œβ”€β”€ image2.png
    β”‚   β”‚   └── ...
    β”‚   β”œβ”€β”€ ImageNet-1K/
    β”‚   └── ...                     # other datasets from the MMEB-V1 category
    └── visdoc/
        └── ...                     # visual document retrieval datasets
```
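To reproduce this layout, the archives can be unpacked with a short script. The sketch below is hypothetical: it assumes the archives sit under `MMEB_downloads/` (as in the download sketch above), that each archive contains its task directory at the top level, and that you want the result under a local `MMEB/` root.

```python
# Hypothetical unpacking script: mirror the repo structure locally, e.g.
# video-tasks/frames/video_cls.tar.gz unpacks into MMEB/video-tasks/frames/.
import tarfile
from pathlib import Path

src_root = Path("MMEB_downloads")  # where the downloaded .tar.gz files live
dst_root = Path("MMEB")            # root of the layout shown above

for archive in sorted(src_root.rglob("*.tar.gz")):
    # Keep the archive's parent path (e.g. video-tasks/frames/) under MMEB/.
    target = dst_root / archive.parent.relative_to(src_root)
    target.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tf:
        tf.extractall(path=target)
    print(f"extracted {archive} -> {target}")
```

After unpacking, point the file paths in your configuration file at the resulting directories.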