---
dataset_name: fineweb2-llm-annotated
pretty_name: JQL LLMs Multilingual Educational Quality Annotations
license: odc-by
source_license: Same as FineWeb2 (see upstream dataset)
size_categories:
- 10M<n<100M
language:
- bg
- cs
- hr
- mk
- pl
- sl
- sk
- sr
- uk
- da
- de
- is
- nl
- nn
- nb
- sv
- ca
- es
- fr
- ga
- gl
- it
- pt
- ro
- et
- fi
- hu
- lt
- lv
- el
- mt
- tr
- sq
- eu
- hy
- en
---
# 📚 JQL Educational Quality Annotations from LLMs
This dataset provides 17,186,606 documents with high-quality LLM annotations for evaluating the **educational value of web documents**, and serves as a benchmark for training and evaluating **multilingual LLM annotators** as described in the JQL [paper](https://arxiv.org/abs/2505.22232).
---
## 📝 Dataset Summary
Multilingual document-level quality annotations scored on a 0–5 educational value scale by three state-of-the-art LLMs:
Gemma-3-27B-it, Mistral-3.1-24B-it, and LLaMA-3.3-70B-it. Up to 500k documents per language from FineWeb2 are included.
Annotations are aligned with human ratings and intended for quality estimation, distillation, and multilingual benchmarking research.
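The `aggregation` field below records, per model, whether its final score came from a majority vote or an average over sampled generations. A minimal sketch of such an aggregation (the exact procedure is described in the JQL paper; function and variable names here are illustrative):

```python
from collections import Counter
from statistics import mean


def aggregate_scores(samples: list[int]) -> tuple[float, str]:
    """Aggregate several sampled 0-5 scores from one model.

    Returns the aggregated score and the aggregation type:
    a strict majority vote if one score dominates, otherwise
    the average of the sampled scores.
    """
    counts = Counter(samples)
    score, freq = counts.most_common(1)[0]
    if freq > len(samples) / 2:
        return float(score), "majority"
    return mean(samples), "average"
```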
## 🌐 Languages
In total, 35 European languages are included. Input documents are in their native language, while the models were prompted in English and also responded in English.
## 🧱 Dataset Structure
| Name | Description |
|------------------|-----------------------------------------------------|
| id | Unique FW2 identifier for the document |
| text | Full textual content extracted from the webpage |
| dump | Common Crawl dump identifier from which the data originates |
| url | Source URL of the document |
| date | Timestamp indicating when the document was crawled (ISO 8601 format) |
| file_path | Path to the WARC file in the Common Crawl S3 bucket |
| language | ISO 639-3 language code of the document (e.g., deu) |
| language_script | Script used in the document (e.g., Latn for Latin script) |
| language_score | Confidence score of the language identification (float between 0 and 1) |
| top_langs | JSON string mapping detected language-script pairs to their scores |
| minhash_cluster_size | Number of documents in the deduplication cluster |
| filter_reason | Reason the document would have been filtered or deduplicated (e.g., duplicated_5_n_grams); NaN if it would not have been filtered |
| edu_score | Dictionary with per-model aggregated scores (modelname_score), **-1 if an invalid score was generated** |
| aggregation | Dictionary with per-model aggregation type (modelname_type), either majority or average |
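Note that `top_langs` is stored as a JSON string, while `edu_score` holds one entry per annotator model. A small sketch of reading these fields from a record (the record below is a made-up example; real ids and model key names may differ):

```python
import json

# Hypothetical record mirroring the schema above.
record = {
    "language": "deu",
    "top_langs": '{"deu_Latn": 0.99, "eng_Latn": 0.01}',
    "edu_score": {"gemma_score": 4.0, "mistral_score": 3.0, "llama_score": -1},
}

# Decode the JSON-encoded language detection scores.
top_langs = json.loads(record["top_langs"])

# Keep only valid model scores (-1 marks an invalid generation).
valid = {k: v for k, v in record["edu_score"].items() if v != -1}
mean_score = sum(valid.values()) / len(valid)
```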
## ✂️ Data Splits
This dataset is not pre-split. Users can generate custom splits by:
- Language
- Model agreement
- Prediction validity
- Document length or other features
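A language- and validity-based split could be built along these lines (shown on an in-memory list of records with made-up contents; for the full dataset, the same predicate can be applied while streaming):

```python
# Hypothetical records following the schema in this card.
records = [
    {"language": "deu", "edu_score": {"gemma_score": 4.0}, "text": "..."},
    {"language": "fra", "edu_score": {"gemma_score": -1}, "text": "..."},
]


def is_valid(rec: dict) -> bool:
    """Keep records whose model scores are all valid (no -1 entries)."""
    return all(v != -1 for v in rec["edu_score"].values())


# Custom split: valid German-language documents only.
german_valid = [r for r in records if r["language"] == "deu" and is_valid(r)]
```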
## 🎯 Intended Use
- Training multilingual document quality models
- Benchmarking multilingual LLM performance
- Distillation and teacher-student LLM training
- Creating filters for noisy web-scale data
## ⚠️ Limitations
- LLM-generated scores, not human-authored
- Some predictions may be invalid or inconsistent
- No domain control across documents
- Educational value is a subjective, task-specific metric
## 📖 Citation
```bibtex
@article{ali2025judging,
title = {Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models},
  author = {Mehdi Ali and Manuel Brack and Max Lübbering and Elias Wendt and Abbas Goher Khan and Richard Rutmann and Alex Jude and Maurice Kraus and Alexander Arno Weber and Felix Stollenwerk and David Kaczér and Florian Mai and Lucie Flek and Rafet Sifa and Nicolas Flores-Herr and Joachim Köhler and Patrick Schramowski and Michael Fromm and Kristian Kersting},
year = {2025},
journal = {arXiv preprint arXiv:2505.22232}
}
```
## 🔗 Links
- Base Dataset: [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
- Related Work: [FineWeb2 LLM Judging Section](https://huggingface.co/papers/llm-quality-judging-fineweb2)