---
license: apache-2.0
task_categories:
- text-retrieval
language:
- zh
tags:
- text
- retrieval
size_categories:
- 1K<n<10K
configs:
- config_name: passages
  data_files:
  - split: test
    path: passages/test*
- config_name: queries
  data_files:
  - split: test
    path: queries/test*
---
# CapRetrieval

CapRetrieval is the dataset introduced in the paper [Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings](https://arxiv.org/abs/2506.08592).
CapRetrieval evaluates fine-grained embedding matching (dense passage retrieval) in Chinese, tailored towards a practical image search scenario:

- Candidate passages are image captions, and queries are short phrases describing entities or events reflected in the captions.
- Overall, the dataset comprises seemingly simple queries and captions; however, text encoders are shown to have clear limitations in resolving these cases.
- The evaluation results call for attention to embedding training strategies at different levels of granularity.
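With the configs above, both subsets can be loaded directly through the `datasets` library. Below is a minimal sketch; the repo id is an assumption, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Repo id assumed for illustration; replace with the dataset's actual Hub path.
passages = load_dataset("lxucs/CapRetrieval", "passages", split="test")
queries = load_dataset("lxucs/CapRetrieval", "queries", split="test")

print(passages[0])  # one candidate image caption
print(queries[0])   # one short query phrase
```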
## Format
CapRetrieval follows the same retrieval task format as MTEB, with a relevance label in [0, 1, 2] for each query-passage pair. Note that unlike prior datasets, we annotate full labels for every query-passage pair (1.3 million pairs in total), minimizing false negatives for more accurate evaluation.

A small number of queries do not have any relevant captions; they are excluded from the computation of retrieval metrics (e.g. nDCG), but can be useful for other analyses, e.g. in a classification setting.
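To make the metric concrete, here is a minimal nDCG@10 sketch over graded labels that mirrors the exclusion rule above by skipping queries without any relevant caption. It assumes each query's list holds the labels of all ranked candidates; the official script linked in the next section should be preferred for reported numbers:

```python
import math

def dcg_at_10(gains):
    # DCG@10 with linear gains over graded labels in {0, 1, 2};
    # the exponential variant (2**g - 1) is another common choice.
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains[:10]))

def ndcg_at_10(run):
    """run: {query_id: labels of retrieved passages, in ranked order}."""
    scores = []
    for labels in run.values():
        ideal = sorted(labels, reverse=True)
        if not ideal or ideal[0] == 0:
            continue  # no relevant caption: excluded from retrieval metrics
        scores.append(dcg_at_10(labels) / dcg_at_10(ideal))
    return sum(scores) / len(scores)

# Example: a query whose top-3 retrieved captions carry labels 2, 0, 1.
print(ndcg_at_10({"q1": [2, 0, 1]}))  # ~0.95
```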
## Evaluation
Please see the evaluation script and results at https://github.com/lxucs/CapRetrieval.
| Type | Model | nDCG@10 |
|---|---|---|
| BM25 | Basic BM25 | 66.54 |
| 0.1B | bge-base-zh-v1.5 | 78.86 |
| | gte-multilingual-base | 79.67 |
| | multilingual-e5-base | 76.33 |
| 0.3B | bge-large-zh-v1.5 | 79.15 |
| | multilingual-e5-large | 81.01 |
| | Conan-embedding-v1 | 77.04 |
| 0.6B | Qwen3-Embedding-0.6B | 81.04 |
| >1B | gte-Qwen2-1.5B-instruct | 77.35 |
| | gte-Qwen2-7B-instruct | 86.55 |
| | e5-mistral-7b-instruct | 76.40 |
| | Qwen3-Embedding-8B | 84.61 |
| Trained | Out-of-Domain | 87.23 |
| | In-Domain | 91.83 |
The trained models (based on `bge-base-zh-v1.5`) are trained with queries produced by our data generation strategies described in the paper. The in-domain model can be downloaded from Google Drive.
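For orientation, a bare-bones dense retrieval run with one of the baselines above could look like the sketch below (using `sentence-transformers`; the caption and query texts are placeholders, and each model card should be checked for its recommended query instruction or prefix):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-zh-v1.5")  # the 0.1B baseline above

caption_texts = ["...", "..."]  # captions from the `passages` config (placeholders)
query_texts = ["..."]           # short phrases from the `queries` config (placeholders)

# Normalized embeddings make the dot product equal to cosine similarity.
p_emb = model.encode(caption_texts, normalize_embeddings=True)
q_emb = model.encode(query_texts, normalize_embeddings=True)

scores = q_emb @ p_emb.T                     # (num_queries, num_passages)
top10 = np.argsort(-scores, axis=1)[:, :10]  # top-10 caption indices per query
```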
## Citation
```bibtex
@misc{xu2025denseretrieversfailsimple,
      title={Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings},
      author={Liyan Xu and Zhenlin Su and Mo Yu and Jiangnan Li and Fandong Meng and Jie Zhou},
      year={2025},
      eprint={2506.08592},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.08592},
}
```