---
dataset_name: fineweb2-llm-annotated
pretty_name: JQL LLMs Multilingual Educational Quality Annotations
license: odc-by
source_license: Same as FineWeb2 (see upstream dataset)
size_categories:
- 100K<n<1M
language:
- bg
- cs
- hr
- mk
- pl
- sl
- sk
- sr
- uk
- da
- de
- is
- nl
- nn
- nb
- sv
- ca
- es
- fr
- ga
- gl
- it
- pt
- ro
- et
- fi
- hu
- lt
- lv
- el
- mt
- tr
- sq
- eu
- hy
- en
---

# 📚 JQL Educational Quality Annotations from LLMs

This dataset provides high-quality LLM annotations for evaluating the **educational value of web documents**, and serves as a benchmark for training and evaluating **multilingual LLM annotators**.

---

## 📝 Dataset Summary

Multilingual document-level quality annotations scored on a 0–5 educational value scale by three state-of-the-art LLMs:
Gemma-3-27B-it, Mistral-3.1-24B-it, and LLaMA-3.3-70B-it. Up to 500k documents per language from FineWeb2 are included.
Annotations are aligned with human ratings and intended for quality estimation, distillation, and multilingual benchmark research.

## 🌐 Languages

Total: 35

The dataset includes both high-resource and low-resource languages. Input documents are in their native language, while the models were prompted in English and produced English responses.

## 🧱 Dataset Structure

| Name | Description |
|------------------|-----------------------------------------------------|
| id | Unique FW2 identifier for the document |
| text | Full textual content extracted from the webpage |
| dump | Common Crawl dump identifier from which the data originates |
| url | Source URL of the document |
| date | Timestamp indicating when the document was crawled (ISO 8601 format) |
| file_path | Path to the WARC file in the Common Crawl S3 bucket |
| language | ISO 639-3 language code of the document (e.g., deu) |
| language_script | Script used in the document (e.g., Latn for Latin script) |
| language_score | Confidence score of the language identification (float between 0 and 1) |
| top_langs | JSON string mapping detected language-script pairs to their scores |
| minhash_cluster_size | Number of documents in the deduplication cluster |
| filter_reason | Reason for filtering or deduplication (e.g., duplicated_5_n_grams); NaN if the document would not have been filtered |
| edu_score | Dictionary with per-model aggregated scores (modelname_score), **-1 if an invalid score was generated** |
| aggregation | Dictionary with per-model aggregation type (modelname_type), either majority or average |

## ✂️ Data Splits

This dataset is not pre-split. Users can generate custom splits by:
- Language
- Model agreement
- Prediction validity
- Document length or domain metadata (if available)
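
A minimal sketch of two of these split criteria (language and prediction validity), operating on plain record dicts; the records and model key names are illustrative placeholders:

```python
from collections import defaultdict

# Illustrative records with only the fields relevant to splitting.
records = [
    {"id": "a", "language": "deu", "edu_score": {"m1_score": 4, "m2_score": 4}},
    {"id": "b", "language": "fra", "edu_score": {"m1_score": -1, "m2_score": 3}},
    {"id": "c", "language": "deu", "edu_score": {"m1_score": 2, "m2_score": 3}},
]

def split_by_language(rows, require_valid=True):
    """Bucket rows by language; optionally drop rows with any invalid (-1) score."""
    buckets = defaultdict(list)
    for row in rows:
        if require_valid and any(s == -1 for s in row["edu_score"].values()):
            continue  # at least one model produced an invalid prediction
        buckets[row["language"]].append(row)
    return dict(buckets)

splits = split_by_language(records)  # the "fra" row is dropped as invalid
```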

## 🎯 Intended Use

- Training multilingual document quality models
- Benchmarking multilingual LLM performance
- Distillation and teacher-student LLM training
- Creating filters for noisy web-scale data
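
For the filtering use case, a simple threshold rule over the mean of valid per-model scores might look like this; the threshold of 3.0 is an arbitrary example, not a recommendation from the dataset authors:

```python
def passes_filter(edu_score, threshold=3.0):
    """Keep a document if the mean of its valid (non -1) model scores
    meets the threshold; reject if no model produced a valid score."""
    valid = [s for s in edu_score.values() if s != -1]
    return bool(valid) and sum(valid) / len(valid) >= threshold

# Example: two valid scores averaging 3.5 pass a 3.0 threshold.
keep = passes_filter({"m1_score": 4, "m2_score": 3})
```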

## ⚠️ Limitations

- LLM-generated scores, not human-authored
- Some predictions may be invalid or inconsistent
- No domain control across documents
- Educational value is a subjective, task-specific metric

## 📖 Citation

```bibtex
@article{mehdi2025judging,
  title   = {Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models},
  author  = {Mehdi Ali and Manuel Brack and Max Lübbering and Elias Wendt and Abbas Goher Khan and Richard Rutmann and Alex Jude and Maurice Kraus and Alexander Arno Weber and Felix Stollenwerk and David Kaczér and Florian Mai and Lucie Flek and Rafet Sifa and Nicolas Flores-Herr and Joachim Köhler and Patrick Schramowski and Michael Fromm and Kristian Kersting},
  year    = {2025},
  journal = {arXiv preprint arXiv:25XX:XXXX}
}
```

## 🔗 Links

- Base Dataset: [FineWeb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
- Related Work: [FineWeb2 LLM Judging Section](https://huggingface.co/papers/llm-quality-judging-fineweb2)