Jinruiy committed
Commit 3828178 · verified · Parent: 43ac7fa

Create README.md

Files changed (1): README.md (+164, -0)

README.md (new file):

---
pretty_name: "Multi-EuP v2: European Parliament Debates with MEP Metadata (24 languages)"
dataset_name: multi-eup-v2
configs:
- config_name: default
  data_files: "clean_all_with_did_qid.MEP.csv"
license: cc-by-4.0
multilinguality: multilingual
task_categories:
- text-classification
- text-retrieval
- text-generation
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
size_categories:
- 10K<n<100K
homepage: ""
repository: ""
paper: "https://aclanthology.org/2024.mrl-1.23/"
tags:
- multilingual
- european-parliament
- political-discourse
- metadata
- mep
---

# Multi-EuP-v2

This dataset card documents **Multi-EuP-v2**, a multilingual corpus of European Parliament debate speeches enriched with Member of the European Parliament (MEP) metadata and multilingual debate titles/IDs. It supports research on political text analysis, speaker-attribute prediction, stance/vote prediction, multilingual NLP, and retrieval.

## Dataset Details

### Dataset Description

**Multi-EuP-v2** aggregates **50,337** debate speeches (each a unique `did`) in **24 languages**. Each row contains the speech text (`TEXT`), speaker identity (`NAME`, `MEPID`), language (`LANGUAGE`), political group (`PARTY`), country and gender of the MEP, date, video timestamps, plus **multilingual debate titles `title_<LANG>`** and **per-language debate/vote linkage IDs `qid_<LANG>`**.

- **Curated by:** Jinrui Yang, Fan Jiang, Timothy Baldwin
- **Funded by:** Melbourne Research Scholarship; LIEF HPC-GPGPU Facility (LE170100200)
- **Shared by:** University of Melbourne
- **Language(s) (NLP):** `bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv` (24 total)
- **License:** cc-by-4.0
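
Given the single-CSV layout described above, here is a minimal loading sketch. It assumes the file named in the YAML config is available locally; the expected counts come from the statistics reported further down this card.

```python
import pandas as pd
from datasets import load_dataset  # optional: Hugging Face `datasets`

# Option 1: plain pandas over the single CSV named in the YAML config above.
df = pd.read_csv("clean_all_with_did_qid.MEP.csv")
print(df.shape)                  # rows should match the 50,337 reported below
print(df["LANGUAGE"].nunique())  # 24 languages

# Option 2: the generic CSV builder of the `datasets` library.
ds = load_dataset("csv", data_files="clean_all_with_did_qid.MEP.csv", split="train")
print(ds.column_names)
```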

### Dataset Sources

- **Repository:** https://github.com/jrnlp/MLIR_language_bias
- **Paper:** https://aclanthology.org/2024.mrl-1.23/

## Uses

### Direct Use
- **Text classification:** predict `gender`, `PARTY`, or `country` from `TEXT` (a minimal sketch follows this list).
- **Stance/vote prediction:** link `qid_<LANG>` to external roll-call vote labels.
- **Multilingual representation learning:** train/evaluate models across 24 EU languages.
- **Information retrieval:** index `TEXT` and use `title_*`/`qid_*` as multilingual query anchors.
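
As an illustration of the classification use case, here is a minimal sketch of a TF-IDF baseline that predicts `gender` from `TEXT`. The file and column names come from this card; the model choice, character n-gram features, and split ratio are arbitrary assumptions, not the authors' setup.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("clean_all_with_did_qid.MEP.csv").dropna(subset=["TEXT", "gender"])
df = df[df["gender"].isin(["Female", "Male"])]  # drop the small Unknown class

X_train, X_test, y_train, y_test = train_test_split(
    df["TEXT"], df["gender"], test_size=0.2, random_state=0, stratify=df["gender"]
)

# Character n-grams avoid per-language tokenization across the 24 languages.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5), max_features=200_000),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```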

### Out-of-Scope Use
- Inferring private attributes beyond public MEP metadata.
- Automated profiling for sensitive decisions.
- Misrepresenting model outputs as factual statements.

## Dataset Structure

Each row corresponds to a single speech/document.

**Core fields:**
- `did` *(string)* – unique speech ID
- `TEXT` *(string)* – speech text
- `DATE` *(string/date)* – debate date
- `LANGUAGE` *(string)* – language code
- `NAME` *(string)* – MEP name
- `MEPID` *(string)* – MEP ID
- `PARTY` *(string)* – political group
- `country` *(string)* – MEP's country
- `gender` *(string)* – `Female`, `Male`, or `Unknown`
- Additional provenance fields: `PRESIDENT`, `TEXTID`, `CODICT`, `VOD-START`, `VOD-END`

**Multilingual metadata** (used as query anchors in the retrieval sketch below):
- `title_<LANG>` *(string)* – debate title in that language
- `qid_<LANG>` *(string)* – debate/vote linkage ID in that language
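
A minimal retrieval sketch that uses a multilingual debate title as a query over the speech texts. The exact casing of the per-language column (written here as `title_EN`) is an assumption and should be checked against the CSV header.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

df = pd.read_csv("clean_all_with_did_qid.MEP.csv").dropna(subset=["TEXT"]).reset_index(drop=True)

# Assumed column name "title_EN": adjust to the actual per-language title column.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5), max_features=200_000)
doc_matrix = vectorizer.fit_transform(df["TEXT"])

query = df["title_EN"].dropna().iloc[0]              # one English debate title as the query
scores = linear_kernel(vectorizer.transform([query]), doc_matrix).ravel()
top = scores.argsort()[::-1][:5]                     # five highest-scoring speeches
print(df.iloc[top][["did", "LANGUAGE", "NAME"]])
```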

**Splits:** Single CSV, no predefined splits.

**Basic stats** (reproducible with the sketch below):
- Rows: 50,337
- Languages: 24
- Top political groups: PPE 8,869; S-D 8,468; Renew 5,313; ECR 4,130; Verts/ALE 4,001; ID 3,286; The Left 2,951; NI 2,539; GUE/NGL 468
- Gender counts: Female 25,536; Male 23,461; Unknown 349
- Top countries: Germany 7,226; France 6,158; Poland 3,706; Spain 3,312; Italy 3,222; Netherlands 1,924; Greece 1,756; Romania 1,701; Czechia 1,661; Portugal 1,150; Belgium 1,134; Hungary 1,106
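
A short sketch for reproducing the counts above; the column names (`LANGUAGE`, `PARTY`, `gender`, `country`) are as documented in this card.

```python
import pandas as pd

df = pd.read_csv("clean_all_with_did_qid.MEP.csv")

print(len(df))                                # rows: 50,337
print(df["LANGUAGE"].nunique())               # languages: 24
print(df["PARTY"].value_counts().head(10))    # top political groups
print(df["gender"].value_counts())            # gender distribution
print(df["country"].value_counts().head(12))  # top countries
```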

## Dataset Creation

### Curation Rationale
The dataset was built to support multilingual political text research, enabling standardized tasks in gender/group prediction, stance/vote prediction, and information retrieval.

### Source Data

#### Data Collection and Processing
- **Source:** official European Parliament plenary debates.
- **Processing:** metadata linking, language verification, deduplication, multilingual title extraction.
- **Quality checks:** consistency of language tags and IDs.

#### Who are the source data producers?
MEPs speaking in plenary debates; titles come from official EP records.

### Annotations

#### Annotation process
Metadata was compiled from public records; there are no manual stance labels.

#### Personal and Sensitive Information
The dataset contains names and political opinions of public officials.

## Bias, Risks, and Limitations
- Domain bias: formal political discourse only.
- Demographic inference tasks (e.g., gender or nationality prediction) carry misuse and fairness risks.
- Language and script differences affect cross-language comparability.

### Recommendations
- Report per-language metrics.
- Avoid over-claiming causal interpretations.

## Citation

**BibTeX:**
```bibtex
@inproceedings{yang-etal-2024-language-bias,
  title     = {Language Bias in Multilingual Information Retrieval: The Nature of the Beast and Mitigation Methods},
  author    = {Yang, Jinrui and Jiang, Fan and Baldwin, Timothy},
  booktitle = {Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)},
  year      = {2024},
  pages     = {280--292},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2024.mrl-1.23/},
  doi       = {10.18653/v1/2024.mrl-1.23}
}
```

**APA:** Yang, J., Jiang, F., & Baldwin, T. (2024). Language bias in multilingual information retrieval: The nature of the beast and mitigation methods. In *Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)* (pp. 280–292). Association for Computational Linguistics.

## Contact