Datasets: mteb /
Modalities: Text · Formats: parquet · Libraries: Datasets, Dask · License: cc-by-sa-4.0
Samoed committed · verified · Commit 9c09abc · 1 parent: e2ce602

Add dataset card

Files changed (1): README.md (+162, −0)
README.md CHANGED
@@ -1,4 +1,32 @@
 ---
+annotations_creators:
+- expert-annotated
+language:
+- ara
+- ben
+- deu
+- eng
+- fas
+- fin
+- fra
+- hin
+- ind
+- jpn
+- kor
+- rus
+- spa
+- swa
+- tel
+- tha
+- yor
+- zho
+license: cc-by-sa-4.0
+multilinguality: multilingual
+source_datasets:
+- RSamoed/MIRACLRetrieval
+task_categories:
+- text-retrieval
+task_ids: []
 dataset_info:
 - config_name: ar-corpus
   features:
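The YAML front matter above tags MIRACL's 18 languages with ISO 639-3 codes, while the dataset configs use two-letter prefixes (`ar-corpus`, `zh-queries` appear in this diff). A small sketch, assuming each language gets one `{lang}-corpus` and one `{lang}-queries` config named by its ISO 639-1 code — the full code list is an inference from the front matter, not stated on the card:

```python
# ISO 639-1 codes for MIRACL's 18 languages, inferred from the ISO 639-3
# tags in the front matter (ara -> ar, zho -> zh, ...). Treat as an assumption.
LANGS = ["ar", "bn", "de", "en", "es", "fa", "fi", "fr", "hi",
         "id", "ja", "ko", "ru", "sw", "te", "th", "yo", "zh"]

# One corpus config and one queries config per language.
configs = [f"{lang}-{kind}" for lang in LANGS for kind in ("corpus", "queries")]
print(len(configs))  # 36 configs in total
```

Each pair can then be loaded with `datasets.load_dataset`, passing this repository's id and one of the config names.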
@@ -937,4 +965,138 @@ configs:
   data_files:
   - split: dev
     path: zh-queries/dev-*
+tags:
+- mteb
+- text
 ---
+<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
+
+<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
+    <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">MIRACLRetrieval</h1>
+    <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
+    <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
+</div>
+
+MIRACL (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages.
+
+|               |                        |
+|---------------|------------------------|
+| Task category | t2t                    |
+| Domains       | Encyclopaedic, Written |
+| Reference     | http://miracl.ai/      |
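As a t2t retrieval task, each query carries expert relevance judgments over a document corpus. A toy sketch with invented ids and text (not real MIRACL data) showing the usual corpus/queries/qrels shape and a recall@1 computation:

```python
# Hypothetical miniature of the retrieval data layout; ids and text are invented.
corpus = {"d1": "The Nile is a river in Africa.",
          "d2": "Mount Fuji is the highest peak in Japan."}
queries = {"q1": "Which river flows through Africa?"}
qrels = {"q1": {"d1": 1}}  # query id -> {doc id: relevance grade}

# A retrieval system returns ranked doc ids per query; score recall@1.
ranked = {"q1": ["d1", "d2"]}
hits = sum(1 for q, docs in ranked.items() if docs[0] in qrels[q])
recall_at_1 = hits / len(queries)
print(recall_at_1)  # 1.0
```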
+
+## How to evaluate on this task
+
+You can evaluate an embedding model on this dataset using the following code:
+
+```python
+import mteb
+
+tasks = mteb.get_tasks(["MIRACLRetrieval"])
+evaluator = mteb.MTEB(tasks=tasks)
+
+model = mteb.get_model(YOUR_MODEL)
+evaluator.run(model)
+```
+
+<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
+To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
+
+## Citation
+
+If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
+
+```bibtex
+@article{10.1162/tacl_a_00595,
+  abstract = {{MIRACL is a multilingual dataset for ad hoc retrieval across 18 languages that collectively encompass over three billion native speakers around the world. This resource is designed to support monolingual retrieval tasks, where the queries and the corpora are in the same language. In total, we have gathered over 726k high-quality relevance judgments for 78k queries over Wikipedia in these languages, where all annotations have been performed by native speakers hired by our team. MIRACL covers languages that are both typologically close as well as distant from 10 language families and 13 sub-families, associated with varying amounts of publicly available resources. Extensive automatic heuristic verification and manual assessments were performed during the annotation process to control data quality. In total, MIRACL represents an investment of around five person-years of human annotator effort. Our goal is to spur research on improving retrieval across a continuum of languages, thus enhancing information access capabilities for diverse populations around the world, particularly those that have traditionally been underserved. MIRACL is available at http://miracl.ai/.}},
+  author = {Zhang, Xinyu and Thakur, Nandan and Ogundepo, Odunayo and Kamalloo, Ehsan and Alfonso-Hermelo, David and Li, Xiaoguang and Liu, Qun and Rezagholizadeh, Mehdi and Lin, Jimmy},
+  doi = {10.1162/tacl_a_00595},
+  eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00595/2157340/tacl\_a\_00595.pdf},
+  issn = {2307-387X},
+  journal = {Transactions of the Association for Computational Linguistics},
+  month = {09},
+  pages = {1114-1131},
+  title = {{MIRACL: A Multilingual Retrieval Dataset Covering 18 Diverse Languages}},
+  url = {https://doi.org/10.1162/tacl\_a\_00595},
+  volume = {11},
+  year = {2023},
+}
+
+@article{enevoldsen2025mmtebmassivemultilingualtext,
+  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
+  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
+  publisher = {arXiv},
+  journal = {arXiv preprint arXiv:2502.13595},
+  year = {2025},
+  url = {https://arxiv.org/abs/2502.13595},
+  doi = {10.48550/arXiv.2502.13595},
+}
+
+@article{muennighoff2022mteb,
+  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+  title = {MTEB: Massive Text Embedding Benchmark},
+  publisher = {arXiv},
+  journal = {arXiv preprint arXiv:2210.07316},
+  year = {2022},
+  url = {https://arxiv.org/abs/2210.07316},
+  doi = {10.48550/ARXIV.2210.07316},
+}
+```
+
+# Dataset Statistics
+<details>
+  <summary>Dataset Statistics</summary>
+
+The following JSON contains the descriptive statistics of the task. They can also be obtained using:
+
+```python
+import mteb
+
+task = mteb.get_task("MIRACLRetrieval")
+desc_stats = task.metadata.descriptive_stats
+```
+
+```json
+{
+    "dev": {
+        "num_samples": 106345647,
+        "number_of_characters": 37176781172,
+        "num_documents": 106332152,
+        "min_document_length": 2,
+        "average_document_length": 349.6241542163089,
+        "max_document_length": 84930,
+        "unique_documents": 106332152,
+        "num_queries": 13495,
+        "min_query_length": 5,
+        "average_query_length": 36.49225639125602,
+        "max_query_length": 176,
+        "unique_queries": 13495,
+        "none_queries": 0,
+        "num_relevant_docs": 130408,
+        "min_relevant_docs_per_query": 1,
+        "average_relevant_docs_per_query": 2.3059651722860317,
+        "max_relevant_docs_per_query": 20,
+        "unique_relevant_docs": 119924,
+        "num_instructions": null,
+        "min_instruction_length": null,
+        "average_instruction_length": null,
+        "max_instruction_length": null,
+        "unique_instructions": null,
+        "num_top_ranked": null,
+        "min_top_ranked_per_query": null,
+        "average_top_ranked_per_query": null,
+        "max_top_ranked_per_query": null
+    }
+}
+```
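Several of the reported aggregates are mutually consistent and can be cross-checked. A quick verification, with the numbers copied from the JSON above (the consistency relations are an observation, not something the card states):

```python
import math

# Figures copied verbatim from the descriptive statistics JSON.
num_samples = 106_345_647
num_documents = 106_332_152
num_queries = 13_495
number_of_characters = 37_176_781_172
avg_doc_len = 349.6241542163089
avg_query_len = 36.49225639125602

# num_samples is the document count plus the query count.
assert num_documents + num_queries == num_samples

# Total characters ~= docs * avg doc length + queries * avg query length
# (approximate because the averages are rounded floats).
approx = num_documents * avg_doc_len + num_queries * avg_query_len
assert math.isclose(approx, number_of_characters, rel_tol=1e-6)
print("stats are internally consistent")
```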
+
+</details>
+
+---
+*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*