
MultiLoKo: a multilingual local knowledge benchmark for LLMs

MultiLoKo is a multilingual knowledge benchmark, covering 30 languages plus English. The questions are separately sourced for each language, with an annotation protocol designed to target locally relevant topics for the respective language. MultiLoKo contains the original data for each language, as well as both human and machine-authored translations of each non-English subset into English and vice versa, facilitating studies into a variety of research questions relating to multilinguality. More information about the benchmark design can be found in its release paper:

@article{hupkes2025multiloko,
      title = {MultiLoKo: a multilingual local knowledge benchmark for LLMs spanning 31 languages},
      author = {Dieuwke Hupkes and Nikolay Bogoychev},
      year = {2025},
      journal = {CoRR},
      volume = {abs/2504.10356},
      eprinttype = {arXiv},
      eprint = {2504.10356},
      url = {https://doi.org/10.48550/arXiv.2504.10356},
}

Data

Each language in MultiLoKo has its own subdirectory, containing:

  • A jsonl file dev.jsonl containing the 250 locally sourced questions for the respective language.
  • A jsonl file knowledge_fewshot.jsonl, containing five fewshot examples for the language.
  • A jsonl file dev_translated_human_english.jsonl with the human translations of dev.jsonl into English (for English, there are instead 30 files dev_translated_human_{language}.jsonl, one for each of the other languages).
  • A jsonl file dev_translated_machine_english.jsonl with the machine translations of dev.jsonl into English (for English, there are instead 30 files dev_translated_machine_{language}.jsonl, one for each of the other languages).

Multilingual prompts can be found in our github repository, in the file examples/prompts.py.
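
Any jsonl reader works for these files. Below is a minimal loading sketch in Python, assuming the benchmark has been downloaded to a local directory called benchmark_data and that every subdirectory of it is a language folder; both of these are illustrative assumptions, not part of the card.

import json
from pathlib import Path

def read_jsonl(path):
    # Read one json object per line, skipping blank lines.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# "benchmark_data" is an assumed local path; each language has its own subdirectory.
root = Path("benchmark_data")
data = {}
for lang_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    data[lang_dir.name] = {
        "dev": read_jsonl(lang_dir / "dev.jsonl"),
        "fewshot": read_jsonl(lang_dir / "knowledge_fewshot.jsonl"),
    }

# Sanity check: each language should have 250 dev questions.
print({lang: len(splits["dev"]) for lang, splits in data.items()})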

Data format

The benchmark data is stored in jsonl files, containing a separate json object for each question. Each such object has the following fields:

  • text: the paragraph from which the question was created. The answer to the question can be derived from this text. It is possible to transform the benchmark into a reading comprehension benchmark by preceding the question with this text (see the prompt sketch after this list).
  • question: the question itself.
  • targets: a list of acceptable (short) answers to the question.
  • target: a long answer to the question, which could potentially be used for CoT. Note that the long answers have not been checked as extensively as the questions and short answers.
  • id: the Wikipedia page that the text is sourced from, along with the rank of that page in the relevant locale.
  • output_type: the expected type of the output (e.g. number, date, year, word), in the question language.
  • source_language: the language in which the question was originally sourced.
  • question_language: the language in which the question is asked.
  • translated: whether the question is translated.
  • translation_type: whether the translation is machine- or human-authored.
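
To illustrate how the fields combine, the sketch below builds a closed-book prompt and the reading-comprehension variant mentioned above from a single record. The prompt wording here is an illustrative assumption; the official multilingual prompts are in examples/prompts.py in the github repository.

def closed_book_prompt(example):
    # Standard setting: the model answers from its own knowledge.
    return f"Question: {example['question']}\nAnswer:"

def reading_comprehension_prompt(example):
    # Open-book variant: precede the question with its source paragraph.
    return (f"Passage: {example['text']}\n"
            f"Question: {example['question']}\n"
            f"Answer:")

Both functions expect one of the json objects described above; the short gold answers for scoring the resulting completions are in example['targets'].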

MultiLoKo-test

MultiLoKo also has a secret test set with 250 examples for each language, which will be kept blind and released at a later date. The split is similar to the dev split, except that the topics it covers are more obscure. If you would like to obtain scores for your model on this set, please upload your model to Hugging Face and request scores by opening an issue on the MultiLoKo github page.

We will return the full set of results to you and add your model's test results to the leaderboard on this page. If it is not possible to upload your model to Hugging Face, please reach out to either of the authors to discuss other options.

Evaluation

On our github page, you can find an evaluation script eval.py that can be run directly on a model's outputs, provided in jsonl or csv format. The script post-processes and normalises the model answers and computes the F1 score, exact match score, sentence chrF, edit distance to the closest target, edit distance to the first target, and maximum edit similarity across the targets. It also computes aggregate statistics for each of the metrics above: the average, maximum, and minimum scores across languages, and the gap between the best- and worst-performing language.
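
eval.py is the reference scorer; the snippet below is only a minimal sketch of the exact-match part of that pipeline, with a simplified normalisation step standing in for the script's actual post-processing.

import string

def normalise(text):
    # Simplified stand-in for eval.py's post-processing:
    # lowercase, strip punctuation and surrounding whitespace.
    text = text.lower().strip()
    return text.translate(str.maketrans("", "", string.punctuation)).strip()

def exact_match(prediction, targets):
    # 1.0 if the normalised prediction equals any normalised gold answer.
    return float(normalise(prediction) in {normalise(t) for t in targets})

def average_em(predictions, examples):
    # Mean exact match over one language's dev set;
    # `predictions` is a list of model answers aligned with `examples`.
    scores = [exact_match(p, ex["targets"]) for p, ex in zip(predictions, examples)]
    return sum(scores) / len(scores)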

Usage

$ ./eval.py --help
usage: eval.py [-h] --dataset_location DATASET_LOCATION --subset SUBSET --predictions PREDICTIONS [--output OUTPUT]

options:
  -h, --help            show this help message and exit
  --dataset_location, -d DATASET_LOCATION
                        Dataset root directory where all language folders are located
  --subset, -s SUBSET   subset, ie dev, test, etc. Should match the jsonl filename inside the individual dataset language folders
  --predictions, -p PREDICTIONS
                        Prediction file to evaluate. Could be CSV or JSONL
  --output, -o OUTPUT   Output file (json) to write results to. If not specified, will print to stdout
./eval.py -d benchmark_data -s dev -p examples/test.json -o testscore.json

Metric cheat sheet

  • Average EM: the first main metric we use to quantify performance on MultiLoKo is the average Exact Match score across languages, which expresses how many answers match one of the gold-standard answers verbatim (after post-processing).
  • Gap: the second main metric is the gap between a model's best and worst performing language. We use the gap to quantify the extent to which a model has achieved parity across languages. Because a small gap can be achieved both through parity at high scores and through parity at low scores, it is most informative in combination with average benchmark performance.
  • Mother tongue effect (MTE): MTE expresses the impact of asking questions in the language in which the requested information is locally salient, compared to asking them in English. A positive MTE indicates that the information is more readily available in the language in which it was (likely) present in the training data, whereas a negative MTE indicates that the information is more easily accessible in English.
  • Locality effect (LE): LE quantifies the effect of using locally sourced vs translated data. It is measured as the difference between scores on locally sourced data and on translated English-sourced data. A positive LE implies that using translated English data underestimates performance on a language; a negative LE implies that it overestimates performance (see the sketch below).
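
For concreteness, here is a minimal sketch of how these headline numbers could be derived from per-language exact-match scores, following the verbal definitions above. The exact aggregation details live in eval.py and the paper, so treat the function names and inputs as illustrative assumptions.

def average_and_gap(scores_by_language):
    # scores_by_language: e.g. {"french": 0.62, "japanese": 0.41, ...}
    values = list(scores_by_language.values())
    return {
        "average_em": sum(values) / len(values),
        "gap": max(values) - min(values),  # best minus worst language
    }

def mother_tongue_effect(score_original_language, score_english_translation):
    # Positive: locally salient information is more accessible when the question
    # is asked in the original language than in its English translation.
    return score_original_language - score_english_translation

def locality_effect(score_locally_sourced, score_translated_from_english):
    # Positive: translated English-sourced data underestimates performance on
    # the language; negative: it overestimates performance.
    return score_locally_sourced - score_translated_from_english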