|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:43:00.012224Z" |
|
}, |
|
"title": "How Universal is Genre in Universal Dependencies?", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "M\u00fcller-Eberstein", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Van Der Goot", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IT University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This work provides the first in-depth analysis of genre in Universal Dependencies (UD). In contrast to prior work on genre identification which uses small sets of well-defined labels in mono-/bilingual setups, UD contains 18 genres with varying degrees of specificity spread across 114 languages. As most treebanks are labeled with multiple genres while lacking annotations about which instances belong to which genre, we propose four methods for predicting instance-level genre using weak supervision from treebank metadata. The proposed methods recover instancelevel genre better than competitive baselines as measured on a subset of UD with labeled instances and adhere better to the global expected distribution. Our analysis sheds light on prior work using UD genre metadata for treebank selection, finding that metadata alone are a noisy signal and must be disentangled within treebanks before it can be universally applied.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This work provides the first in-depth analysis of genre in Universal Dependencies (UD). In contrast to prior work on genre identification which uses small sets of well-defined labels in mono-/bilingual setups, UD contains 18 genres with varying degrees of specificity spread across 114 languages. As most treebanks are labeled with multiple genres while lacking annotations about which instances belong to which genre, we propose four methods for predicting instance-level genre using weak supervision from treebank metadata. The proposed methods recover instancelevel genre better than competitive baselines as measured on a subset of UD with labeled instances and adhere better to the global expected distribution. Our analysis sheds light on prior work using UD genre metadata for treebank selection, finding that metadata alone are a noisy signal and must be disentangled within treebanks before it can be universally applied.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Identifying document genre automatically has long been of interest to the NLP community due to its immediate applications both in document grouping (Petrenz, 2012) as well as task-specific data selection (Ruder and Plank, 2017; Sato et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 163, |
|
"text": "(Petrenz, 2012)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 227, |
|
"text": "(Ruder and Plank, 2017;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 246, |
|
"text": "Sato et al., 2017)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Cross-lingual genre identification has however remained a challenge, mainly due to the lack of stable cross-lingual representations (Petrenz, 2012) . Recent work has shown that pre-trained masked language models (MLMs) capture monolingual genre (Aharoni and Goldberg, 2020) . Do such distinctions manifest in highly multilingual spaces as well? In this work, we investigate whether this property holds for the genre distribution in the 114 language Universal Dependencies corpus (UD version 2.8; Zeman et al., 2021) using the multilingual mBERT MLM (Devlin et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 147, |
|
"text": "(Petrenz, 2012)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 273, |
|
"text": "(Aharoni and Goldberg, 2020)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 495, |
|
"text": "(UD version 2.8;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 515, |
|
"text": "Zeman et al., 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 570, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In absence of an exact definition of textual genre (Kessler et al., 1997; Webber, 2009; Plank, 2016) , this work will focus on the information specifically denoted by the genres metadata tag in UD. We hope that an in-depth, cross-lingual analysis of what this label represents will enable practitioners to better control for the effects of domain shift in their experiments. Previous work using these UD metadata for proxy training data selection have produced mixed results (Stymne, 2020) . We investigate possible reasons and identify inconsistencies in genre annotation. The fact that genre labels are only available at the level of treebanks makes it difficult to gather a clear picture of the sentence-level genre distribution -especially with some treebanks having up to 10 genre labels. We therefore investigate the degree to which instance-level genre is recoverable using only the treebank-level metadata as weak supervision.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 73, |
|
"text": "(Kessler et al., 1997;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 74, |
|
"end": 87, |
|
"text": "Webber, 2009;", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 100, |
|
"text": "Plank, 2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 489, |
|
"text": "(Stymne, 2020)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions entail the, to our knowledge, first detailed definition of all UD metadata genre labels (Section 3), four weakly supervised methods for extracting instance-level genre across 114 languages (Section 4) as well as genre identification experiments which show that our proposed two-step procedure allows for effective genre recovery in multilingual setups where language relatedness typically outweighs genre similarities (Section 5). 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The largest hurdle for cross-lingual genre classification is the lack of shared representational spaces. Sharoff (2007) use shared POS n-grams in order to jointly classify the genre of English and Russian documents. Petrenz (2012) similarly seek out features which are stable across languages in order to classify English and Chinese documents into four shared genres. A recent data-driven approach finds that monolingual MLM embeddings can be clustered into five groups closely representing the data sources of the original corpus (Aharoni and Goldberg, 2020) . In this work, we investigate whether this holds for multilingual settings as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 119, |
|
"text": "Sharoff (2007)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 230, |
|
"text": "Petrenz (2012)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 560, |
|
"text": "(Aharoni and Goldberg, 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Being able to identify textual genre has been crucial for domain-specific fine-tuning (Dai et al., 2020; Gururangan et al., 2020) including dependency parsing. For parser training, in-genre data is typically selected by proxy of the data source (Plank and van Noord, 2011; Rehbein and Bildhauer, 2017; Sato et al., 2017) . Data-driven approaches which include automatically inferred topics based on word and embedding distributions (Ruder and Plank, 2017) as well as POS-based approaches (S\u00f8gaard, 2011; Rosa, 2015; Vania et al., 2019) have also been found effective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 104, |
|
"text": "(Dai et al., 2020;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 129, |
|
"text": "Gururangan et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 272, |
|
"text": "(Plank and van Noord, 2011;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 301, |
|
"text": "Rehbein and Bildhauer, 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 320, |
|
"text": "Sato et al., 2017)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 455, |
|
"text": "(Ruder and Plank, 2017)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 503, |
|
"text": "(S\u00f8gaard, 2011;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 504, |
|
"end": 515, |
|
"text": "Rosa, 2015;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 535, |
|
"text": "Vania et al., 2019)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Universal Dependencies (Nivre et al., 2020) aims to consolidate syntactic annotations for a wide variety of languages and genres under a single scheme. The latest release contains 114 languages -many with fewer than 100 sentences. In order for languages at all resource levels to benefit from domain adaptation, it will continue to be important to identify cross-lingually stable signals for genre. While language labels are generally agreed upon, differences in genre are more subtle. Metadata at the treebank level provides some insights into genres of original data sources, however these are \"neither mutually exclusive nor based on homogeneous criteria, but [are] currently the best documentation that can be obtained\" (Nivre et al., 2020) . Stymne (2020) performs an initial study on using these treebank metadata labels for the selection of spoken and Twitter data. Results show that training on out-of-language/in-genre data is superior to out-of-language/out-of-genre data. However the best results are obtained using in-language data regardless of genre-adherence. This holds across multiple methods of proxy dataset selection (e.g. treebank embeddings; Smith et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 43, |
|
"text": "(Nivre et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 744, |
|
"text": "(Nivre et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 760, |
|
"text": "Stymne (2020)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 1164, |
|
"end": 1183, |
|
"text": "Smith et al., 2018)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recently, M\u00fcller-Eberstein et al. (2021) have shown that combining UD genre metadata and MLM embeddings can improve proxy training data selection for zero-shot parsing of low-resource languages. The use of genre in their work is more implicit as it is mainly driven by the genre of the target data. In contrast, this work takes a holistic view and explicitly examines the classification of instance-level genre for all sentences in UD.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 40, |
|
"text": "M\u00fcller-Eberstein et al. (2021)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As genre appears to be a valuable signal, we set out to investigate how it is defined and distributed within UD. Due to the coarse, treebank-level nature of current genre annotations, we hypothesize that a clearer picture can only be obtained by moving to the sentence level. We therefore transition from prior supervised document genre prediction to weakly supervised instance genre prediction. Additionally, we expand the linguistic scope from mono-or bilingual corpora to all 114 languages currently in UD. More generally, this task can be viewed as predicting genre labels for all sentences in all corpora of a collection while only being given the set of labels said to be contained in each corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We analyze genre as currently used in the genres metadata of 200 treebanks from Universal Dependencies version 2.8 (Zeman et al., 2021) . Section 3.1 provides an overview of all UD genre types and Section 3.2 analyzes how these global labels relate to the subset of treebanks which do provide treebank-specific, instance genre annotations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 135, |
|
"text": "(Zeman et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "UD-level Genre", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "UD 2.8 (Zeman et al., 2021) contains 18 genres which are denoted in each treebank's accompanying metadata. Around 36% of treebanks contain a single genre while the remaining majority can contain between 2-10 which are not further labeled at the instance level. There is no official description of each genre label, however they can be roughly categorized as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 27, |
|
"text": "(Zeman et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "academic Collections of scientific articles covering multiple disciplines. Note that this label may subsume others such as medical.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "bible Passages from the bible, frequently from older languages (e.g. Old Church Slavonic-PROIEL by Haug and J\u00f8hndal, 2008) . Largely non-overlapping passages are used across treebanks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 122, |
|
"text": "Haug and J\u00f8hndal, 2008)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "blog Internet documents on various topics which may overlap with other genres such as news. They are typically more informal in register. Some treebanks group social media content and reviews under this category (e.g. Russian-Taiga by Shavrina and Shapovalova, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 266, |
|
"text": "Shavrina and Shapovalova, 2017)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "email Formal, written communication. This includes English-EWT's (Silveira et al., 2014) subsection based on the Enronsent Corpus (Styler, 2011) as well as letters attributed to Dante Alighieri as part of Latin-UDante (Cecchini et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 88, |
|
"text": "(Silveira et al., 2014)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 144, |
|
"text": "(Styler, 2011)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 241, |
|
"text": "Latin-UDante (Cecchini et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "fiction Mostly paragraphs from diverse sets of fiction books and magazines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "government The least represented genre, mainly denoting texts from governmental sources. These include political speeches (English-GUM by Zeldes, 2017) as well as inscriptions from Neo-Assyrian kings from around 900 BCE (Akkadian-RIAO by Luukko et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 151, |
|
"text": "Zeldes, 2017)", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 258, |
|
"text": "Luukko et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "grammar-examples Sentences from teaching or grammatical reference books which are typically short, but cover a wide range of dependency relations (e.g. Tagalog-TRG by Samson and C\u00f6ltekin, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 193, |
|
"text": "Samson and C\u00f6ltekin, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "learner-essays Small genre occurring in three single-genre treebanks. Sentences were written by second-language learners and either contain original errors (English-ESL by Berzak et al., 2016) , manual corrections (IT-Valico by Di Nuovo et al., 2019) or both (Chinese-CFL by Lee et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 192, |
|
"text": "Berzak et al., 2016)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 292, |
|
"text": "Lee et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "legal Relatively frequent genre based mostly on laws and legal corpora within the public domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "medical Scientific articles/books in the field of medicine (e.g. cardiology, diabetes, endocrinology for Romanian-SiMoNERo by Mitrofan et al., 2019) . It is subsumed by academic for some treebanks (e.g. Czech-CAC by Hladk\u00e1 et al., 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 148, |
|
"text": "Mitrofan et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 236, |
|
"text": "Hladk\u00e1 et al., 2008)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "news The highest-resource genre by a large margin corresponding to news-wire texts as well as online newspapers on specific topics (e.g. IT-news in German-HDT by Borges V\u00f6lker et al., 2019).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "nonfiction Second most frequent genre with a high degree of variance, subsuming e.g. academic and legal. German-LIT (Salomoni, 2019) contains three philosophical books from the 18th century. Other non-fiction treebanks can originate from multiple sources (e.g. books and internet) and time spans.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 132, |
|
"text": "(Salomoni, 2019)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "poetry Smaller, yet distinct genre covering mostly older texts and language variations (e.g. Old French-SRCMF by Stein and Pr\u00e9vost, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 137, |
|
"text": "Stein and Pr\u00e9vost, 2013)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "reviews Medium-resource genre covering informal online reviews with unnormalized orthography (e.g. English-EWT) as well as formal reviews (e.g. newspaper film reviews in Czech-CAC).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "social Encompasses social media data such as tweets (e.g. Italian-TWITTIR\u00d2 by Cignarella et al., 2019) as well as newsgroups (e.g. English-EWT). Some spoken data is co-labeled with this genre when it refers to colloquial speech (e.g. South Levantine Arabic-MADAR by Zahra, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 102, |
|
"text": "Cignarella et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 278, |
|
"text": "Zahra, 2020)", |
|
"ref_id": "BIBREF55" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "spoken Distinct genre which typically consists of spoken language transcriptions. Sentences contain filler words and may have abrupt boundaries. Sources range from elicited speech of native speakers (Komi Zyrian-IKDP by Partanen et al., 2018) to radio program transcriptions (Frisian Dutch-Fame by Braggaar and van der Goot, 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 242, |
|
"text": "Partanen et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 330, |
|
"text": "Braggaar and van der Goot, 2021)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "web Similarly ambiguous genre as non-fiction. It occurs in conjunction with specific genres such as blog and social and never appears alone (e.g. Persian-PerDT by Sadegh Rasooli et al., 2020).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "wiki Denotes data from Wikipedia for which cross-lingual authoring guidelines exist. Figure 1 shows the approximated distribution of these genres in UD. Maximum/minimum sentence counts are inferred from the size of single-genre treebanks plus the size of all treebanks in which a genre is said to occur. The center line denotes the distribution under the assumption that genres are uniformly distributed within each treebank.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 93, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "It is clear that news and non-fiction constitute more than half of the entire dataset. Specialized genres such as medical are less represented. For broader genres such as web, which frequently co-occurs with others, the exact number of sentences is hard to estimate, but must lie between 0-20%. Considering these large variances, access to instance-level genre will likely be crucial for effective proxy data selection and downstream domain adaptation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Available Metadata", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In addition to the aforementioned 18 treebank-level genre labels, some treebanks provide instance-level genre annotations in the comment-metadata before each sentence. We find such annotations in 26 out of 200 treebanks in UD 2.8 amounting to 124k or 8.25% of all sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance-level Annotations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Out of this set, 20 treebanks belong to the Parallel Universal Dependencies (PUD; Nivre et al., 2017) . They are split 500/500 between news and wiki, as denoted by sentence IDs beginning with n and w respectively. The parallel nature of PUD makes it interesting for analyzing cross-lingual genre identification performance. However these two genres only represent a small fraction of non-fiction texts and furthermore, each PUD-treebank is test-split-only. Note also that Polish-PUD as an exception has the metadata labels news and non-fiction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "Nivre et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance-level Annotations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The remaining six treebanks for which we were able to identify instance-level genre annotations are Belarusian-HSE (Lyashevskaya et al., 2017) , Czech-CAC (Hladk\u00e1 et al., 2008) , English-EWT (Silveira et al., 2014) , German-LIT (Salomoni, 2019) , Polish-LFG (Patejuk and Przepi\u00f3rkowski, 2018) and Russian-Taiga (Shavrina and Shapovalova, 2017) . They cover a wider set of 12 genres. Annotation schema vary across treebanks and are neither fully compatible amongst each other nor with the 18 UD labels. Approximate mappings can however be drawn thanks to source data documentation by the respective authors (Section 4.2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 142, |
|
"text": "(Lyashevskaya et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 176, |
|
"text": "(Hladk\u00e1 et al., 2008)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 214, |
|
"text": "(Silveira et al., 2014)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 244, |
|
"text": "(Salomoni, 2019)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 292, |
|
"text": "(Patejuk and Przepi\u00f3rkowski, 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 343, |
|
"text": "(Shavrina and Shapovalova, 2017)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance-level Annotations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Further comment-metadata which may guide genre separation within treebanks includes document, paragraph and source identifiers. Again, these are unfortunately not available for all sentences (although coverage of these metadata reaches up to 45%) and their values do not provide further indications about genre adherence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance-level Annotations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "From the previous analysis, it is evident that finer-grained genre labels are needed before domain adaptation can be successful across all languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance-level Annotations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Formally, the task of predicting instance-level UD genre can be defined as assigning a set of labels L = {l 0 , l 1 , . . . , l K } (i.e. genres) to all instances x n of a corpus X (i.e. UD). The corpus consists of S distinct subsets X = {X 0 \u222a X 1 \u222a . . . \u222a X S } (i.e. treebanks) each with a subset of labels L s \u2286 L. As no instance-level labels x n \u2192 l are available, models must learn this mapping based solely on the subset of labels said to be contained in each data subset X s \u2192 L s .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance-level Annotations", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As instance-level labels are noisy and sparse, we investigate two classification-based and two clusteringbased approaches for inferring instance genre labels from the treebank metadata L s alone. Building on M\u00fcller-Eberstein et al. 2021, our proposed methods leverage latent genre information in the pre-trained mBERT language model (Devlin et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 354, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "BOOT In order to select proxy training data which matches the genre of an unseen target, M\u00fcller-Eberstein et al. 2021propose a bootstrapping-based approach to genre classification (BOOT). An mBERT-based classifier (Devlin et al., 2019) is initially trained on sentences from single-genre treebanks, corresponding to standard supervised classification. Above a confidence threshold (i.e. softmax probability of 0.99), sentences from treebanks containing a known genre in mixture are bootstrapped as single-genre training data for the next round. After bootstrapping sentences from all known genres, the remaining unclassified instances of any treebank containing a single unknown genre are inferred to be of that last genre. While this method was previously used for targeted data selection, we investigate the degree to which it actually recovers instance-level genre.", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 235, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "CLASS With approximate classification (CLASS), we simplify BOOT to naively learn instance genre labels from weak supervision. It fine-tunes the same mBERT MLM with a 18-genre classification layer on the [CLS]-token. For single-genre treebanks it is possible to measure the exact cross entropy between the predicted probability and the target (i.e. x n \u2192 l with l \u2208 L s and |L s | = 1). For multi-genre treebanks with |L s | > 1, this is not possible as the gold label is unknown. For the CLASS approach, each sentence from a k-genre treebank is therefore classified k times -once for each class in L s .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "GMM In addition to classification, we also evaluate two common clustering algorithms. First we investigate whether clusters formed by untuned MLM sentence embeddings (mean over sentence subwords) represent genre to such a degree that Gaussian Mixture Models can recover the 18 UD genre groups. For monolingual data from five genres, such clusters were shown to be recoverable (Aharoni and Goldberg, 2020) . We extend this approach to the 114 language setting of UD.", |
|
"cite_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 404, |
|
"text": "(Aharoni and Goldberg, 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "LDA As all methods so far are to some degree dependent on the pre-trained MLM representations, we also evaluate the recoverability of genre using Latent Dirichlet Allocation (Blei et al., 2003) with lexical features. Feature vectors are constructed using the frequency of character 3-6-grams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 193, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
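A minimal scikit-learn sketch of the lexical variant: character 3-6-gram counts fed into LDA. The sentences are invented and the topic count is reduced; the paper's exact vectorizer additionally filters by document frequency.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented toy sentences, repeated so LDA has enough counts to fit.
sentences = [
    "The president met the press on Tuesday.",
    "Once upon a time there was a dragon.",
    "The committee approved the budget bill.",
    "The knight rode into the dark forest.",
] * 5

# Feature vectors from character 3-6-gram frequencies, as described above.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 6))
X = vectorizer.fit_transform(sentences)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-sentence topic ("genre cluster") distribution
```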
|
{ |
|
"text": "Cluster Labeling Both clustering methods produce 18 groups of sentences from UD; however, these will not carry meaningful labels as in classification. While labels could be assigned manually post hoc by matching representative sentences in each cluster to one of the 18 global UD genres, this process is bound to be subjective and also requires the annotator to be fluent in most of the 114 languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In order to automate this procedure, we propose GMM+L and LDA+L which combine clustering and classification. Both methods start by clustering each treebank X s into the number of genres specified by its metadata (note that standard GMM and LDA cluster all of UD at once, i.e. X ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Next, the mean embedding of each cluster is computed so that clusters can be compared in a single representational space. Note that this would not be possible using monolingual models, as their latent spaces are not cross-lingually aligned. Analogous to BOOT, single-genre treebanks can then be used as a single-label signal such that the closest cluster from each treebank containing the respective genre can be extracted. Newly identified clusters are added to the pool of single-genre clusters. This process need only be repeated for three rounds before all sentences in UD can be assigned a single label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
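The nearest-single-genre-cluster assignment can be sketched as follows; the 2-dimensional centroids and genre names are hypothetical stand-ins for mean mBERT cluster embeddings in the shared space.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical mean embeddings of already-labeled single-genre clusters.
labeled = {"news": np.array([1.0, 0.0]), "fiction": np.array([0.0, 1.0])}

def label_cluster(centroid, labeled_centroids):
    # Assign the genre of the closest labeled cluster by cosine similarity;
    # newly labeled clusters would then join the pool for the next round.
    return max(labeled_centroids, key=lambda g: cosine(centroid, labeled_centroids[g]))

unlabeled = np.array([0.9, 0.2])  # a cluster from a multi-genre treebank
genre = label_cluster(unlabeled, labeled)
```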
|
{ |
|
"text": "Using these four methods, we aim to assign a single genre label to each sentence in UD. By comparing model ablations, we further depart from prior work and explicitly quantify the genre information in MLM embeddings as well as how it manifests within and across treebanks in UD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Genre Prediction Methods", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For the 26 treebanks with instance genre labels, we are able to measure standard F1 after applying a mapping from the treebank-specific labels to the 18 global UD genre labels. The mapping was created according to the following criteria.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "First, we only allowed treebank-specific genre labels to be mapped to the set of UD genre labels specified in each treebank's metadata.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Second, where possible, treebank labels are mapped to UD labels of the same name (e.g. fiction \u2192 fiction) or to the closest subsuming category (e.g. spoken (prepared) \u2192 spoken).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Third, decisions involving subjective uncertainty were based on the label which covers the majority of data sources. For example, Czech-CAC has the metadata label set {legal, medical, news, non-fiction, reviews} and only three types of instance labels (aw, nw, sw). The sw (scientific-written) label is attached to many medical articles, but also to articles on philosophy or music. While academic may be the most fitting label, it is not in the metadata, so we chose the broader non-fiction as the target label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The full mapping is in Appendix A and we hope future work will be able to expand upon it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For the remaining 174 treebanks without sentence-level gold labels it is difficult to measure the exact quality of the predicted genre distributions. Nonetheless, treebank annotations provide enough information for approximate, global comparisons. Based on label/cluster assignments, it is possible to compute the standard cluster purity measure (PUR; Sch\u00fctze et al., 2008). Across treebanks of the same genre, the majority of sentences should belong to the same label/cluster. We measure this using the ratio of cross-treebank label agreement (AGR). As in prior work (Aharoni and Goldberg, 2020), it is important to note that these metrics can be misleading when taken on their own: a perfect score can, for example, be achieved by simply assigning all instances to the same genre.", |
|
"cite_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 373, |
|
"text": "Sch\u00fctze et al., 2008)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 597, |
|
"text": "(Aharoni and Goldberg, 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
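The purity measure referenced above reduces to a majority count per cluster; this stdlib sketch uses toy cluster ids and gold labels.

```python
from collections import Counter

def purity(cluster_ids, gold_labels):
    # Standard cluster purity: each cluster is credited with its majority gold label.
    clusters = {}
    for c, g in zip(cluster_ids, gold_labels):
        clusters.setdefault(c, []).append(g)
    correct = sum(Counter(m).most_common(1)[0][1] for m in clusters.values())
    return correct / len(gold_labels)

# Toy check: cluster 0 is mixed (2 news, 1 wiki), cluster 1 is pure fiction.
pur = purity([0, 0, 0, 1, 1, 1],
             ["news", "news", "wiki", "fiction", "fiction", "fiction"])
```

Here 5 of 6 sentences fall under their cluster's majority label, so purity is 5/6; assigning all sentences to one genre would trivially maximize it, which is why purity is paired with the other metrics.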
|
{ |
|
"text": "To mitigate this issue we turn to the expected overlap of inter-treebank genre distributions. For multi-genre treebanks, it is known which genres are present, but not how they are distributed. Since treebanks are expected to have a certain amount of overlap, we can however estimate a global error. A {fiction, spoken, wiki} treebank should for example have no clusters in common with a {news} treebank, but should have many sentences in the same clusters as a {fiction, medical, spoken} one. Assuming that genres are uniformly distributed within each treebank, the first pair would share 0 mass between distributions while the second pair would share 2/3. Intuitively, a good prediction would produce a global genre distribution that falls precisely between the metadata range bars of Figure 1, close to the center markers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 786, |
|
"end": 794, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To quantify the overlap between two treebank genre distributions p and q over the genres in L s , we use the discrete Bhattacharyya coefficient:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "BC(p, q) = \\sum_{l \\in L_s} \\sqrt{p(l) q(l)}", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "which has often been applied to distributional comparisons (Choi and Lee, 2003; Ruder and Plank, 2017). It is computed for all pairs of treebanks such that the overlap error \u2206BC \u2208 [0, 100] is the mean absolute difference between the expected distributional overlap of each treebank pair and the predicted one (i.e. lower is better). While none of these metrics can individually provide an exact measure of a prediction method's fit to the UD-specified distribution, they complement each other so as to allow for global comparisons in the absence of any sentence-level annotations. Next, all original training and development splits are concatenated and split 10/90 into a global training split of 102k sentences and a development split of 915k sentences. This \"training\" split is small because it is only required for training CLASS and BOOT. Within it, we again split the data 70/30 into 71k sentences for classifier training and 31k held-out sentences for early stopping. All exact splits are provided in Appendix A.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 79, |
|
"text": "(Choi and Lee, 2003;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 102, |
|
"text": "Ruder and Plank, 2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
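The coefficient and the worked example from above (no shared mass versus 2/3 under uniform within-treebank distributions) can be verified with a short stdlib sketch.

```python
import math

def bhattacharyya(p, q, genres):
    # Discrete Bhattacharyya coefficient: BC(p, q) = sum over l of sqrt(p(l) * q(l)).
    return sum(math.sqrt(p.get(l, 0.0) * q.get(l, 0.0)) for l in genres)

genres = ["fiction", "medical", "news", "spoken", "wiki"]
# Uniform within-treebank genre distributions, matching the stated assumption.
p = {g: 1 / 3 for g in ["fiction", "spoken", "wiki"]}
q = {g: 1 / 3 for g in ["fiction", "medical", "spoken"]}
r = {"news": 1.0}

overlap = bhattacharyya(p, q, genres)   # shares fiction and spoken
disjoint = bhattacharyya(p, r, genres)  # no genres in common
```

The shared genres each contribute sqrt(1/3 * 1/3) = 1/3, giving the expected overlap of 2/3 for the second pair and 0 for the first.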
|
{ |
|
"text": "Baselines For our comparisons, we use a maximum frequency baseline (FREQ) which labels all sentences within a treebank with the metadata genre label that is most frequent overall. For example, in any treebank containing news, all instances are labeled as such.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
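The FREQ baseline amounts to a global frequency count over treebank metadata. A small sketch with hypothetical treebank names and label sets; the alphabetical tie-break is an assumption not specified in the text.

```python
from collections import Counter

# Hypothetical treebank metadata: treebank name -> set of UD genre labels.
metadata = {
    "tb_a": {"news", "fiction"},
    "tb_b": {"news", "spoken"},
    "tb_c": {"fiction"},
}

# Global genre frequency counted over treebank metadata.
freq = Counter(g for genres in metadata.values() for g in genres)

def freq_label(treebank):
    # FREQ baseline: every sentence of a treebank gets the globally most
    # frequent genre from its metadata (alphabetical tie-break is an assumption).
    return max(metadata[treebank], key=lambda g: (freq[g], g))

labels = {tb: freq_label(tb) for tb in metadata}
```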
|
{ |
|
"text": "In order to measure the untuned classification performance of mBERT, we propose an additional zero-shot classification baseline (ZERO). Prior research has found that classifying sentences based solely on their cosine similarity to genre label strings in MLM embedding space can be remarkably effective (Veeranna et al., 2016; Yin et al., 2019; Davison, 2020). For example, a sentence is labeled as academic if this is the closest embedded label out of all 18 genre strings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 324, |
|
"text": "(Veeranna et al., 2016;", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 342, |
|
"text": "Yin et al., 2019;", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 357, |
|
"text": "Davison, 2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
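The nearest-label assignment behind ZERO can be sketched as follows; random vectors stand in for mBERT embeddings of the label strings, and the sentence embedding is constructed to lie near the "news" label, so all quantities here are toy assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random vectors stand in for mBERT embeddings of the genre label strings.
rng = np.random.default_rng(0)
label_emb = {g: rng.normal(size=16) for g in ["academic", "email", "news"]}
sentence_emb = label_emb["news"] + 0.1 * rng.normal(size=16)  # near "news" by design

def zero_shot(sent_vec, label_vecs):
    # ZERO baseline: pick the genre whose embedded label string is closest.
    return max(label_vecs, key=lambda g: cosine(sent_vec, label_vecs[g]))

pred = zero_shot(sentence_emb, label_emb)
```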
|
{ |
|
"text": "Training Every method from Section 4.1 is run with three initializations. CLASS and BOOT are trained for a maximum of 30 epochs with an early stopping patience of 3. ZERO, GMM+L and LDA+L (by extension GMM, LDA) do not require training and can be directly applied to the target data. Implementation details and development results are reported in Appendices B and C.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Using the 8% subset of annotated instances (Section 4.2) in addition to the unsupervised metrics from Section 4.3, we can gather an estimate of each method's performance in Table 1 . UD-level genre predictions in addition to instance-level confusions are further visualized in Figures 2 and 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 180, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 292, |
|
"text": "Figures 2 and 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Baselines The FREQ baseline highlights the issue of using individual unsupervised metrics for estimating performance. As it assigns all sentences per treebank to the same genre, it automatically achieves 100% single-genre treebank purity and agreement. Considering that the instance-level F1 covers 12 genres, a baseline score of 47 is also competitive; note that this is mostly due to the data imbalance towards news. This unlikely distribution predicted by FREQ is also reflected in Figure 2. ZERO-shot classification is not fine-tuned on UD-specific signals and as such predicts a genre distribution that does not adhere to the metadata at all (see Figure 2). It severely underpredicts high-frequency genres such as news and overpredicts less frequent genres such as email. This is reflected in our metrics, with ZERO obtaining the lowest PUR, AGR and F1 while having the highest \u2206BC of 47.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 485, |
|
"end": 493, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 653, |
|
"end": 661, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Classification With regard to explicit genre fine-tuning, CLASS increases purity by 38 points compared to ZERO. Agreement across treebanks also improves, while overlap error decreases. These differences are also reflected in Figure 2 in that the predicted distribution is more within the range that would be expected given the metadata.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 233, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "BOOT fits the UD genre distribution more closely, resulting in a purity that is 4 points higher and agreement that is 11 points higher than CLASS. F1 also increases by 6 points while overlap error decreases by 4 points, indicating that these improvements are not merely due to e.g. assigning all sentences to the same genre. While instance-level F1 is below the FREQ baseline, both methods improve upon the untuned ZERO by a factor of 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The benefits of the less noisy training signal are visible in Figure 2: Compared to CLASS, BOOT predicts labels in a way that more closely resembles the expected distribution even when the label only occurs in multi-genre treebanks and is ambiguous (e.g. web). While BOOT agrees upon the same genre label across languages (e.g. all social treebanks are labeled as such), CLASS tends to overassign the globally most frequent labels (e.g. half of social treebanks are labeled wiki) and has a larger variance in its assignments across initializations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 70, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Clustering GMM clusters from untuned mBERT embeddings follow the distribution specified by UD metadata more than the LDA clusters produced from lexical information. Although sentence representations are gathered using a naive mean-pooling approach, the resulting clusters reach 90% PUR compared to 77% for LDA. AGR follows a similar pattern and \u2206BC is equivalent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Turning to our cluster labelling approaches, both GMM+L and LDA+L obtain the highest overall F1 scores, outperforming both baselines. They achieve 100% PUR and AGR by the same process as the FREQ baseline, while their overlap error is substantially lower at 4 and 2 points respectively. Figure 2 reflects this, as GMM+L and LDA+L are always closest to the expected genre distribution, regardless of overall genre frequency. This shows how focusing on treebank-internal differences before applying a global labelling procedure combines the benefits of local clustering with those of bootstrapped classification, resulting in an effective overall method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 294, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "From the F1 scores in Table 1 it is clear that predicting instance genre based on treebank metadata alone, while accounting for its skewed distribution and inter-treebank shifts of genre definitions, is a difficult task. In the following we analyze the performance characteristics of each method. Overall, the trends of the unsupervised metrics follow the supervised F1, leading us to believe that the methods would behave comparably should labels for all instances in UD be available. The confusion matrices with prediction ratios per gold label in Figure 3 reflect our previous observations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 29, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 557, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Baselines The FREQ baseline's predictions are clearly dominated by the most frequent news genre, followed by the similarly high-frequency non-fiction and blog (see Figure 3d). ZERO appears to follow a pattern similar to BOOT (e.g. blog and email); however, it also makes more predictions away from the diagonal (see Figure 3a).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 173, |
|
"text": "Figure 3d", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 326, |
|
"text": "Figure 3a)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Classification Both CLASS ( Figure 3c ) and BOOT ( Figure 3b ) assign most instances of a genre to a single prediction label, often strongly aligning with the target diagonal. CLASS more often assigns a single label per target instead of spreading out predictions across multiple labels as in BOOT. Nonetheless, both methods make some unintuitive errors such as BOOT classifying parts of poetry as wiki. For these 68 samples from Russian-Taiga, BOOT likely overfits the language signal from Russian-GSD (McDonald et al., 2013; wiki) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 503, |
|
"end": 526, |
|
"text": "(McDonald et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 532, |
|
"text": "wiki)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 37, |
|
"text": "Figure 3c", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 51, |
|
"end": 60, |
|
"text": "Figure 3b", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Compared to ZERO which approximates the predictions of an untuned mBERT model, BOOT and CLASS fine-tuning appears to amplify existing patterns and shifts some predictions to better align with genres as defined in UD (e.g. fiction and legal in BOOT).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Clustering Grouping all 1.5 million sentences of UD into 18 unlabeled clusters using GMM and LDA results in purity and \u2206BC comparable to CLASS and BOOT. However, looking into the cluster contents of the former reveals that they are oversaturated with large treebanks such as German-HDT. Cosine similarities of cluster centroids from the mBERT-based GMM further indicate that proximity corresponds foremost to language similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Some clusters predominantly contain news, wiki or social. This corresponds to cases such as the Italian Twitter treebank TWITTIR\u00d2 in which specific tokens (e.g. \"@user\") are distinct enough to override the language signal. Overall, most UD-level clusters do not have clear genre distinctions and are influenced more strongly by language than genre, resulting in high treebank purity while having low intra-treebank agreement. Attempting to cross-lingually cluster all sentences in UD directly is therefore not as effective for recovering instance-level genre as it was in the monolingual setting (Aharoni and Goldberg, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 596, |
|
"end": 624, |
|
"text": "(Aharoni and Goldberg, 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Initially constructing clusters within each treebank as in the GMM+L and LDA+L methods appears to restore the benefits observed in the monolingual setting. A qualitative analysis of the treebank-level LDA clusters reveals that wiki clusters often contain lexical indicators for the genre, such as brackets, while news features often contain n-grams which may be related to spoken quotes such as \"said\", \"Ik \" (first person pronoun).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Attaching labels to these clusters using the globally shared mBERT space yields confusion plots for GMM+L and LDA+L which most closely follow the diagonal (see Figures 3e and 3f ). Overall, their predictions follow a similar pattern indicating that clustering at the treebank-level using either mBERT embeddings or lexical features results in similar sentence groups.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 177, |
|
"text": "Figures 3e and 3f", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Within the instance-labeled subset, all models share confusions between news and wiki (mainly from PUD). While wiki is often predicted as news, both GMM+L and LDA+L substantially improve upon this \"news-bias\" with a confusion ratio that is 13%-56% lower compared to all other methods. The sentence-bounded context from which all models must make their genre predictions nonetheless limits the amount of improvement possible. For example, using the aforementioned LDA features the algorithm would very likely be unable to distinguish between news and wiki (both non-fiction, edited texts describing facts) for cases such as \"Weiss was honored with the literature prizes from the cities of Cologne and Bremen.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "This work provided an in-depth analysis of the 18 genres in Universal Dependencies (UD) and identified challenges for projecting this treebank metadata to the instance level. As these genre labels were not part of the first UD releases, but were added in later versions, we identified large variations in the way they are interpreted and applied, resulting in far less universal definitions of genre than for syntactic dependencies. Most treebanks furthermore contain multiple genres while not providing finer-grained instance-level annotations thereof. This also sheds light on prior work which used UD metadata for training data selection, where treebank-level genre improved in-language parsing performance (Stymne, 2020) and where moving to instance-level genre signals led to additional increases even across languages (M\u00fcller-Eberstein et al., 2021).", |
|
"cite_spans": [ |
|
{ |
|
"start": 710, |
|
"end": 724, |
|
"text": "(Stymne, 2020)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 825, |
|
"end": 856, |
|
"text": "(M\u00fcller-Eberstein et al., 2021)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Building on the latent genre information stored in MLM embeddings, we investigated four methods for projecting treebank-level labels to the instance level. In contrast to prior monolingual work, immediately clustering multilingual embeddings yielded clusters dominated by language similarity instead of genre (Section 5.3). Similarly, zero-shot labelling using the untuned mBERT latent space proved to be insufficient for producing a genre distribution which adheres to the UD metadata. The classification-based CLASS and BOOT methods are able to extract a stronger genre signal from mBERT than ZERO.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our proposed GMM+L and LDA+L methods which combine local treebank clusters with the global, cross-lingual representation space reach the best overall performance, outperforming both baselines as well as both classification methods at a much lower computational cost (Section 5.2; Appendix B). This highlights how the current genre annotations are far from universal, yet can still guide our local-to-global instance-level genre predictors in identifying cross-lingually consistent, data-driven notions of genre.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Future work may be able to improve instance genre prediction by using a more consistent label set or human annotations. The definition of genre macro-classes or a broader taxonomy covering existing annotations could also guide further investigations into cross-lingual language variation. Nonetheless, we expect the task of predicting sentence genre to remain difficult due to the short context within which both annotators and models must make their predictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Within the complex scenario of highly cross-lingual, instance-level genre classification, our methods have nonetheless demonstrated that genre is recoverable across the 114 languages in UD, shedding light on prior genre-driven work as well as enabling future research to more deliberately control for additional dimensions of language variation in their data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the NLPnorth group for insightful discussions on this work -in particular Elisa Bassignana and Mike Zhang. Thanks to H\u00e9ctor Mart\u00ednez Alonso for feedback on an early draft as well as ITU's High-performance Computing Cluster team. Finally, we thank the anonymous reviewers for their helpful feedback. This research is supported by the Independent Research Fund Denmark (DFF) grant 9063-00077B and an Amazon Faculty Research Award (ARA).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All experiments make use of Universal Dependencies v2.8 (Zeman et al., 2021) . From the total set of 202 treebanks, we use all except for the following two (due to licensing restrictions): Arabic-NYUAD and Japanese-BCCWJ. In total 1.51 million sentences are used in our experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 76, |
|
"text": "(Zeman et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Appendix A Universal Dependencies Setup", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The experiments in Section 5 use the 204k global test split. Initial comparisons were performed on the 915k dev set. The 102k training split was used to fine-tune CLASS and BOOT. For early stopping, 31k sentences from the latter split were used as a held-out set. The exact instances are available in the associated code repository for future reproducibility.Genre Mapping For 26 treebanks with instance-level genre labels in the metadata comments before each sentence, we created mappings from the treebank genre labels to the UD genre label set according to the guidelines described in Section 4.2. The genre metadata typically either follow the format genre = X or are implied by the document source specified in the sentence ID (e.g. sent_id = genre-...). There are a total of 91 mappings which will be made available with the codebase upon publication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Splits", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following describes architecture and training details for all methods. When not further defined, default hyperparameters are used. Implementations and predictions are available in the code repository at https://personads.me/x/syntaxfest-2021-code. Infrastructure Neural models are trained on an NVIDIA A100 GPU with 40 GB of VRAM. Language Model This work uses mBERT (Devlin et al., 2019) as implemented in the Transformers library (Wolf et al., 2020) as bert-base-multilingual-cased. Embeddings are of size d emb = 768 and the model has 178 million parameters. To create sentence embeddings, we use the mean-pooled WordPiece embeddings (Wu et al., 2016) of the final layer. Classification CLASS and BOOT build on the standard mBERT architecture as follows: mBERT \u2192 CLS-token \u2192 linear layer (d emb \u00d7 18) \u2192 softmax. Training has an epoch limit of 30 with early stopping after 3 iterations without improvements on the development set. Backpropagation is performed using AdamW (Loshchilov and Hutter, 2017) with a learning rate of 10 \u22127 on batches of size 16. The fine-tuning procedure requires GPU hardware which can host mBERT, corresponding to 10 GB of VRAM. Training on the 71k relevant instances takes approximately 10 hours. Clustering Both Gaussian Mixture Models (GMM) and Latent Dirichlet Allocation (Blei et al., 2003; LDA) use scikit-learn v0.23 (Pedregosa et al., 2011). LDA uses bags of character 3-6-grams which occur in at least 2 and in at most 30% of sentences. GMMs use the mBERT sentence embeddings as input. Both methods are CPU-bound and cluster all treebanks in UD in under 45 minutes. Random Initializations Each experiment is run thrice using the seeds 41, 42 and 43. Table 2 shows results on the 915k development split of UD. Performance patterns are similar to those on the test split: the labeled clustering methods GMM+L and LDA+L perform best out of our proposed methods and outperform the baselines on the majority of metrics. 
With respect to classification, BOOT outperforms both the noisier CLASS and ZERO. Note that the frequency baseline FREQ performs especially well on the dev set, since only 5 of 26 instance-labeled treebanks are included and 4 of these have news as the majority genre. Table 2 : Results of Genre Prediction on UD (Dev). Purity (PUR \u2191), agreement (AGR \u2191), overlap error (\u2206BC \u2193) and micro-F1 over instance-labeled TBs (F1 \u2191) for FREQ, ZERO, CLASS, BOOT and GMM, LDA with/without labels. Standard deviation denoted \u00b1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 390, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 453, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 656, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 979, |
|
"end": 1007, |
|
"text": "(Loshchilov and Hutter, 2017", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1310, |
|
"end": 1328, |
|
"text": "(Blei et al., 2003", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1359, |
|
"end": 1383, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1694, |
|
"end": 1701, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2253, |
|
"end": 2260, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B Model and Training Details", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Unsupervised Domain Clusters in Pretrained Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7747--7763", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni and Yoav Goldberg. 2020. Unsupervised Domain Clusters in Pretrained Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747-7763, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Universal Dependencies for learner English", |
|
"authors": [ |
|
{ |
|
"first": "Yevgeni", |
|
"middle": [], |
|
"last": "Berzak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jessica", |
|
"middle": [], |
|
"last": "Kenney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolyn", |
|
"middle": [], |
|
"last": "Spadine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing Xian", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keiko", |
|
"middle": [ |
|
"Sophie" |
|
], |
|
"last": "Mori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Garza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "737--746", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yevgeni Berzak, Jessica Kenney, Carolyn Spadine, Jing Xian Wang, Lucia Lam, Keiko Sophie Mori, Sebastian Garza, and Boris Katz. 2016. Universal Dependencies for learner English. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 737-746, Berlin, Germany, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "HDT-UD: A very large Universal Dependencies treebank for German", |
|
"authors": [ |
|
{ |
|
"first": "Emanuel", |
|
"middle": [], |
|
"last": "Borges V\u00f6lker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maximilian", |
|
"middle": [], |
|
"last": "Wendt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hennig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arne", |
|
"middle": [], |
|
"last": "K\u00f6hn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Third Workshop on Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "46--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emanuel Borges V\u00f6lker, Maximilian Wendt, Felix Hennig, and Arne K\u00f6hn. 2019. HDT-UD: A very large Uni- versal Dependencies treebank for German. In Proceedings of the Third Workshop on Universal Dependencies (UDW, SyntaxFest 2019), pages 46-57, Paris, France, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Challenges in annotating and parsing spoken, code-switched, Frisian-Dutch data", |
|
"authors": [ |
|
{ |
|
"first": "Anouck", |
|
"middle": [], |
|
"last": "Braggaar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Van Der Goot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Second Workshop on Domain Adaptation for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anouck Braggaar and Rob van der Goot. 2021. Challenges in annotating and parsing spoken, code-switched, Frisian-Dutch data. In Proceedings of the Second Workshop on Domain Adaptation for NLP, pages 50-58, Kyiv, Ukraine, April. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Udante: First steps towards the universal dependencies treebank of dante's latin works", |
|
"authors": [ |
|
{ |
|
"first": "Flavio", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Cecchini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachele", |
|
"middle": [], |
|
"last": "Sprugnoli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni", |
|
"middle": [], |
|
"last": "Moretti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Passarotti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Seventh Italian Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Flavio M Cecchini, Rachele Sprugnoli, Giovanni Moretti, and Marco Passarotti. 2020. Udante: First steps towards the universal dependencies treebank of dante's latin works. In Seventh Italian Conference on Computational Linguistics, pages 1-7. CEUR-WS. org.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Feature extraction based on the Bhattacharyya distance", |
|
"authors": [ |
|
{ |
|
"first": "Euisun", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chulhee", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Pattern Recognition", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Euisun Choi and Chulhee Lee. 2003. Feature extraction based on the Bhattacharyya distance. Pattern Recognition, 36:1703-1709, 08.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Presenting TWITTIR\u00d2-UD: An Italian Twitter treebank in Universal Dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Alessandra", |
|
"middle": [ |
|
"Teresa" |
|
], |
|
"last": "Cignarella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Bosco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "190--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alessandra Teresa Cignarella, Cristina Bosco, and Paolo Rosso. 2019. Presenting TWITTIR\u00d2-UD: An Italian Twitter treebank in Universal Dependencies. In Proceedings of the Fifth International Conference on Depen- dency Linguistics (Depling, SyntaxFest 2019), pages 190-197, Paris, France, August. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Cost-effective selection of pretraining data: A case study of pretraining BERT on social media", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarvnaz", |
|
"middle": [], |
|
"last": "Karimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Hachey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cecile", |
|
"middle": [], |
|
"last": "Paris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1675--1681", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. Cost-effective selection of pretraining data: A case study of pretraining BERT on social media. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1675-1681, Online, November. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Zero-Shot Learning in Modern NLP", |
|
"authors": [ |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joe Davison. 2020. Zero-Shot Learning in Modern NLP, May. Accessed December 4th, 2020.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Towards an italian learner treebank in universal dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Elisa", |
|
"middle": [], |
|
"last": "Di Nuovo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Bosco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Mazzei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuela", |
|
"middle": [], |
|
"last": "Sanguinetti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "6th Italian Conference on Computational Linguistics, CLiC-it 2019", |
|
"volume": "2481", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elisa Di Nuovo, Cristina Bosco, Alessandro Mazzei, and Manuela Sanguinetti. 2019. Towards an italian learner treebank in universal dependencies. In 6th Italian Conference on Computational Linguistics, CLiC-it 2019, volume 2481, pages 1-6. CEUR-WS.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Don't stop pretraining: Adapt language models to domains and tasks", |
|
"authors": [ |
|
{ |
|
"first": "Suchin", |
|
"middle": [], |
|
"last": "Gururangan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "Marasovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Downey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Annual Meeting of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8342--8360", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Creating a parallel treebank of the old Indo-European Bible translations", |
|
"authors": [ |
|
{ |
|
"first": "Dag", |
|
"middle": [ |
|
"T", |
|
"T" |
|
], |
|
"last": "Haug", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "J\u00f8hndal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the second workshop on language technology for cultural heritage data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dag TT Haug and Marius J\u00f8hndal. 2008. Creating a parallel treebank of the old Indo-European Bible translations. In Proceedings of the second workshop on language technology for cultural heritage data (LaTeCH 2008), pages 27-34.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The Czech academic corpus 2.0 guide. The Prague Bulletin of Mathematical Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Barbora", |
|
"middle": [], |
|
"last": "Hladk\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jirka", |
|
"middle": [], |
|
"last": "Hana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaroslava", |
|
"middle": [], |
|
"last": "Hlav\u00e1\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "M\u00edrovsk\u1ef3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "89", |
|
"issue": "", |
|
"pages": "41--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbora Hladk\u00e1, Jan Haji\u010d, Jirka Hana, Jaroslava Hlav\u00e1\u010dov\u00e1, Ji\u0159\u00ed M\u00edrovsk\u1ef3, and Jan Raab. 2008. The Czech academic corpus 2.0 guide. The Prague Bulletin of Mathematical Linguistics, 89(1):41-96.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automatic detection of text genre", |
|
"authors": [ |
|
{ |
|
"first": "Brett", |
|
"middle": [], |
|
"last": "Kessler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Nunberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Schutze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brett Kessler, Geoffrey Nunberg, and Hinrich Schutze. 1997. Automatic detection of text genre. In 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 32-38, Madrid, Spain, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Towards Universal Dependencies for learner Chinese", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herman", |
|
"middle": [], |
|
"last": "Leung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keying", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Lee, Herman Leung, and Keying Li. 2017. Towards Universal Dependencies for learner Chinese. In Pro- ceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 67-71, Gothenburg, Sweden, May. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Fixing weight decay regularization in adam", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computing Research Repository", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.05101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. Computing Research Repository, arXiv: 1711.05101. version 3.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Akkadian treebank for early neoassyrian royal inscriptions", |
|
"authors": [ |
|
{ |
|
"first": "Mikko", |
|
"middle": [], |
|
"last": "Luukko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aleksi", |
|
"middle": [], |
|
"last": "Sahala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Hardwick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krister", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "124--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikko Luukko, Aleksi Sahala, Sam Hardwick, and Krister Lind\u00e9n. 2020. Akkadian treebank for early neo- assyrian royal inscriptions. In Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories, pages 124-134, D\u00fcsseldorf, Germany, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Universal Dependency annotation for multilingual parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Quirmbach-Brundage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oscar", |
|
"middle": [], |
|
"last": "T\u00e4ckstr\u00f6m", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Bedini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N\u00faria", |
|
"middle": [], |
|
"last": "Bertomeu Castell\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jungmee", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "92--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Universal Dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92-97, Sofia, Bulgaria, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "MoNERo: a biomedical gold standard corpus for the Romanian language", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Mitrofan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verginica", |
|
"middle": [], |
|
"last": "Barbu Mititelu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grigorina", |
|
"middle": [], |
|
"last": "Mitrofan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Mitrofan, Verginica Barbu Mititelu, and Grigorina Mitrofan. 2019. MoNERo: a biomedical gold standard corpus for the Romanian language. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 71-79, Florence, Italy, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Genre as weak supervision for cross-lingual dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "M\u00fcller-Eberstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Van Der Goot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4786--4802", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Max M\u00fcller-Eberstein, Rob van der Goot, and Barbara Plank. 2021. Genre as weak supervision for cross-lingual dependency parsing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Pro- cessing, pages 4786-4802, Online and Punta Cana, Dominican Republic, November. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Tyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4034--4043", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France, May. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The first Komi-Zyrian Universal Dependencies treebanks", |
|
"authors": [ |
|
{ |
|
"first": "Niko", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rogier", |
|
"middle": [], |
|
"last": "Blokland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyungtae", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thierry", |
|
"middle": [], |
|
"last": "Poibeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Rie\u00dfler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Second Workshop on Universal Dependencies (UDW 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "126--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Niko Partanen, Rogier Blokland, KyungTae Lim, Thierry Poibeau, and Michael Rie\u00dfler. 2018. The first Komi- Zyrian Universal Dependencies treebanks. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 126-132, Brussels, Belgium, November. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "From Lexical Functional Grammar to Enhanced Universal Dependencies: Linguistically informed treebanks of Polish", |
|
"authors": [ |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Patejuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Przepi\u00f3rkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agnieszka Patejuk and Adam Przepi\u00f3rkowski. 2018. From Lexical Functional Grammar to Enhanced Universal Dependencies: Linguistically informed treebanks of Polish. Institute of Computer Science, Polish Academy of Sciences, Warsaw.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Scikit-learn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertrand", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Matthieu Brucher, Matthieu Perrot, and \u00c9douard Duchesnay", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Math- ieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cour- napeau, Matthieu Brucher, Matthieu Perrot, and \u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Cross-lingual genre classification", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Petrenz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Petrenz. 2012. Cross-lingual genre classification. In Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 11-21, Avignon, France, April. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Effective measures of domain similarity for parsing", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1566--1576", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank and Gertjan van Noord. 2011. Effective measures of domain similarity for parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1566-1576, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "What to do about non-standard (or non-canonical) language in nlp", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "KONVENS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank. 2016. What to do about non-standard (or non-canonical) language in nlp. In KONVENS, Bochum, Germany, September.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Data point selection for genre-aware parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Rehbein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Bildhauer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--105", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ines Rehbein and Felix Bildhauer. 2017. Data point selection for genre-aware parsing. In Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, pages 95-105, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Parsing natural language sentences by semi-supervised methods", |
|
"authors": [ |
|
{ |
|
"first": "Rudolf", |
|
"middle": [], |
|
"last": "Rosa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1506.04897" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudolf Rosa. 2015. Parsing natural language sentences by semi-supervised methods. Computing Research Repos- itory, arXiv: 1506.04897. version 1.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Learning to select data for transfer learning with Bayesian optimization", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "372--382", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372-382, Copenhagen, Denmark, September. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Amirsaeid Moloodi, and Alireza Nourian. 2020. The Persian dependency treebank made universal", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Sadegh Rasooli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pegah", |
|
"middle": [], |
|
"last": "Safari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Sadegh Rasooli, Pegah Safari, Amirsaeid Moloodi, and Alireza Nourian. 2020. The Persian depen- dency treebank made universal. arXiv e-prints, pages arXiv-2009.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "UD_German-LIT", |
|
"authors": [ |
|
{ |
|
"first": "Alessio", |
|
"middle": [], |
|
"last": "Salomoni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alessio Salomoni. 2019. UD_German-LIT. https://github.com/UniversalDependencies/UD_ German-LIT.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Adversarial training for cross-domain universal dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Motoki", |
|
"middle": [], |
|
"last": "Sato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hitoshi", |
|
"middle": [], |
|
"last": "Manabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Noji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Motoki Sato, Hitoshi Manabe, Hiroshi Noji, and Yuji Matsumoto. 2017. Adversarial training for cross-domain universal dependency parsing. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Introduction to information retrieval", |
|
"authors": [ |
|
{ |

"first": "Hinrich", |

"middle": [], |

"last": "Sch\u00fctze", |

"suffix": "" |

}, |

{ |

"first": "Christopher", |

"middle": [ |

"D" |

], |

"last": "Manning", |

"suffix": "" |

}, |

{ |

"first": "Prabhakar", |

"middle": [], |

"last": "Raghavan", |

"suffix": "" |

} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "39", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hinrich Sch\u00fctze, Christopher D Manning, and Prabhakar Raghavan. 2008. Introduction to information retrieval, volume 39. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Classifying web corpora into domain and genre using automatic feature identification", |
|
"authors": [ |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Sharoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 3rd Web as Corpus Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Serge Sharoff. 2007. Classifying web corpora into domain and genre using automatic feature identification. In Proceedings of the 3rd Web as Corpus Workshop, pages 83-94.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "To the methodology of corpus construction for machine learning:\"Taiga\" syntax tree corpus and parser", |
|
"authors": [ |
|
{ |
|
"first": "Tatiana", |
|
"middle": [], |
|
"last": "Shavrina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Shapovalova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of \"CORPORA-2017\" International Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "78--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tatiana Shavrina and Olga Shapovalova. 2017. To the methodology of corpus construction for machine learn- ing:\"Taiga\" syntax tree corpus and parser. In Proceedings of \"CORPORA-2017\" International Conference, pages 78-84.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A gold standard dependency corpus for English", |
|
"authors": [ |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Silveira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Dozat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth Inter- national Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "82 treebanks, 34 models: Universal Dependency parsing with multi-treebank models", |
|
"authors": [ |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Miryam", |

"middle": [], |

"last": "de Lhoneux", |

"suffix": "" |

}, |

{ |

"first": "Joakim", |

"middle": [], |

"last": "Nivre", |

"suffix": "" |

}, |

{ |

"first": "Yan", |

"middle": [], |

"last": "Shao", |

"suffix": "" |

}, |

{ |

"first": "Sara", |

"middle": [], |

"last": "Stymne", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018. 82 tree- banks, 34 models: Universal Dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113-123, Brussels, Belgium, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Data point selection for cross-language adaptation of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "682--686", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 682-686, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Syntactic annotation of medieval texts", |
|
"authors": [ |
|
{ |
|
"first": "Achim", |
|
"middle": [], |
|
"last": "Stein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophie", |
|
"middle": [], |
|
"last": "Pr\u00e9vost", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "New methods in historical corpora", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Achim Stein and Sophie Pr\u00e9vost. 2013. Syntactic annotation of medieval texts. New methods in historical corpora, 3:275.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "The Enronsent Corpus", |
|
"authors": [ |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Styler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Will Styler. 2011. The Enronsent Corpus.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Cross-lingual domain adaptation for dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Stymne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 19th International Workshop on Treebanks and Linguistic Theories", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "62--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sara Stymne. 2020. Cross-lingual domain adaptation for dependency parsing. In Proceedings of the 19th Interna- tional Workshop on Treebanks and Linguistic Theories, pages 62-69, D\u00fcsseldorf, Germany, October. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages", |
|
"authors": [ |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Vania", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yova", |
|
"middle": [], |
|
"last": "Kementchedjhieva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1105--1116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clara Vania, Yova Kementchedjhieva, Anders S\u00f8gaard, and Adam Lopez. 2019. A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Con- ference on Natural Language Processing (EMNLP-IJCNLP), pages 1105-1116, Hong Kong, China, November. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Using semantic similarity for multi-label zero-shot classification of text documents", |
|
"authors": [ |
|
{ |

"first": "Sappadla Prateek", |

"middle": [], |

"last": "Veeranna", |

"suffix": "" |

}, |

{ |

"first": "Jinseok", |

"middle": [], |

"last": "Nam", |

"suffix": "" |

}, |

{ |

"first": "Eneldo", |

"middle": [], |

"last": "Loza Menc\u0131a", |

"suffix": "" |

}, |

{ |

"first": "Johannes", |

"middle": [], |

"last": "F\u00fcrnkranz", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Proceeding of European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--428", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sappadla Prateek Veeranna, Jinseok Nam, Eneldo Loza Menc\u0131a, and Johannes F\u00fcrnkranz. 2016. Using semantic similarity for multi-label zero-shot classification of text documents. In Proceeding of European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges, Belgium: Elsevier, pages 423-428.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Genre distinctions for discourse in the Penn TreeBank", |
|
"authors": [ |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "674--682", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bonnie Webber. 2009. Genre distinctions for discourse in the Penn TreeBank. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 674-682, Suntec, Singapore, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Patrick", |

"middle": [], |

"last": "von Platen", |

"suffix": "" |

}, |

{ |

"first": "Clara", |

"middle": [], |

"last": "Ma", |

"suffix": "" |

}, |

{ |

"first": "Yacine", |

"middle": [], |

"last": "Jernite", |

"suffix": "" |

}, |

{ |

"first": "Julien", |

"middle": [], |

"last": "Plu", |

"suffix": "" |

}, |

{ |

"first": "Canwen", |

"middle": [], |

"last": "Xu", |

"suffix": "" |

}, |

{ |

"first": "Teven", |

"middle": [ |

"Le" |

], |

"last": "Scao", |

"suffix": "" |

}, |

{ |

"first": "Sylvain", |

"middle": [], |

"last": "Gugger", |

"suffix": "" |

}, |

{ |

"first": "Mariama", |

"middle": [], |

"last": "Drame", |

"suffix": "" |

}, |

{ |

"first": "Quentin", |

"middle": [], |

"last": "Lhoest", |

"suffix": "" |

}, |

{ |

"first": "Alexander", |

"middle": [ |

"M" |

], |

"last": "Rush", |

"suffix": "" |

} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexan- der M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Klingner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Apurva", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshikiyo", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kurian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nishant", |
|
"middle": [], |
|
"last": "Patil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |

{ |

"first": "Cliff", |

"middle": [], |

"last": "Young", |

"suffix": "" |

}, |

{ |

"first": "Jason", |

"middle": [], |

"last": "Smith", |

"suffix": "" |

}, |

{ |

"first": "Jason", |

"middle": [], |

"last": "Riesa", |

"suffix": "" |

}, |

{ |

"first": "Alex", |

"middle": [], |

"last": "Rudnick", |

"suffix": "" |

}, |

{ |

"first": "Oriol", |

"middle": [], |

"last": "Vinyals", |

"suffix": "" |

}, |

{ |

"first": "Greg", |

"middle": [], |

"last": "Corrado", |

"suffix": "" |

}, |

{ |

"first": "Macduff", |

"middle": [], |

"last": "Hughes", |

"suffix": "" |

}, |

{ |

"first": "Jeffrey", |

"middle": [], |

"last": "Dean", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Computing Research Repository", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.08144" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. Computing Research Repository, arXiv: 1609.08144. version 2.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach", |
|
"authors": [ |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamaal", |
|
"middle": [], |
|
"last": "Hay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3914--3923", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914-3923, Hong Kong, China, November. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Parsing low-resource Levantine Arabic: Annotation projection versus small-sized annotated data", |
|
"authors": [ |
|
{ |
|
"first": "Shorouq", |
|
"middle": [], |
|
"last": "Zahra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shorouq Zahra. 2020. Parsing low-resource Levantine Arabic: Annotation projection versus small-sized annotated data.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "The GUM corpus: Creating multilayer resources in the classroom", |
|
"authors": [ |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zeldes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Language Resources and Evaluation", |
|
"volume": "51", |
|
"issue": "3", |
|
"pages": "581--612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Emeka Onwuegbuzia, Petya Osenova, Robert \u00d6stling, Lilja \u00d8vrelid,\u015eaziye Bet\u00fcl \u00d6zate\u015f, Merve \u00d6z\u00e7elik, Arzucan \u00d6zg\u00fcr, Balk\u0131z \u00d6zt\u00fcrk Ba\u015faran, Hyunji Hayley Park, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-\u0141api\u0144ska, Siyao Peng, Cenel-Augusto Perez", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Abrams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elia", |
|
"middle": [], |
|
"last": "Ackermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "No\u00ebmi", |
|
"middle": [], |
|
"last": "Aepli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hamid", |
|
"middle": [], |
|
"last": "Aghaei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u017deljko", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Ahmadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Ahrenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chika", |
|
"middle": [], |
|
"last": "Kennedy Ajede", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel\u0117", |
|
"middle": [], |
|
"last": "Aleksandravi\u010di\u016bt\u0117", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ika", |
|
"middle": [], |
|
"last": "Alfina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lene", |
|
"middle": [], |
|
"last": "Antonsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katya", |
|
"middle": [], |
|
"last": "Aplonova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angelina", |
|
"middle": [], |
|
"last": "Aquino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Aragon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [ |
|
"Jesus" |
|
], |
|
"last": "Aranzabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bilge", |
|
"middle": [], |
|
"last": "Nas Ar\u0131can", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H\u00f3runn", |
|
"middle": [], |
|
"last": "Arnard\u00f3ttir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gashaw", |
|
"middle": [], |
|
"last": "Arutie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jessica", |
|
"middle": [ |
|
"Naraiswari" |
|
], |
|
"last": "Arwidarasti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masayuki", |
|
"middle": [], |
|
"last": "Asahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Baran Aslan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luma", |
|
"middle": [], |
|
"last": "Ateyah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furkan", |
|
"middle": [], |
|
"last": "Atmaca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammed", |
|
"middle": [], |
|
"last": "Attia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitziber", |
|
"middle": [], |
|
"last": "Atutxa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liesbeth", |
|
"middle": [], |
|
"last": "Augustinus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Badmaeva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keerthana", |
|
"middle": [], |
|
"last": "Balasubramani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Esha", |
|
"middle": [], |
|
"last": "Banerjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Bank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Starka\u00f0ur", |
|
"middle": [], |
|
"last": "Verginica Barbu Mititelu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victoria", |
|
"middle": [], |
|
"last": "Barkarson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Basmov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Batchelor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kepa", |
|
"middle": [], |
|
"last": "Seyyit Talha Bedir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00f6zde", |
|
"middle": [], |
|
"last": "Bengoetxea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yevgeni", |
|
"middle": [], |
|
"last": "Berk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Berzak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmad", |
|
"middle": [], |
|
"last": "Irshad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Riyaz", |
|
"middle": [ |
|
"Ahmad" |
|
], |
|
"last": "Bhat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erica", |
|
"middle": [], |
|
"last": "Bhat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eckhard", |
|
"middle": [], |
|
"last": "Biagetti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Agn\u0117", |
|
"middle": [], |
|
"last": "Bick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krist\u00edn", |
|
"middle": [], |
|
"last": "Bielinskien\u0117", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rogier", |
|
"middle": [], |
|
"last": "Bjarnad\u00f3ttir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victoria", |
|
"middle": [], |
|
"last": "Blokland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Bobicev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emanuel", |
|
"middle": [ |
|
"Borges" |
|
], |
|
"last": "Boizou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "V\u00f6lker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "B\u00f6rstell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gosse", |
|
"middle": [], |
|
"last": "Bosco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Bouma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adriane", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Zeman, Joakim Nivre, Mitchell Abrams, Elia Ackermann, No\u00ebmi Aepli, Hamid Aghaei, \u017deljko Agi\u0107, Amir Ahmadi, Lars Ahrenberg, Chika Kennedy Ajede, Gabriel\u0117 Aleksandravi\u010di\u016bt\u0117, Ika Alfina, Lene Antonsen, Katya Aplonova, Angelina Aquino, Carolina Aragon, Maria Jesus Aranzabe, Bilge Nas Ar\u0131can, \u00de\u00f3runn Arnard\u00f3ttir, Gashaw Arutie, Jessica Naraiswari Arwidarasti, Masayuki Asahara, Deniz Baran Aslan, Luma Ateyah, Furkan Atmaca, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Keerthana Balasubramani, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Starka\u00f0ur Barkarson, Victoria Basmov, Colin Batchelor, John Bauer, Seyyit Talha Bedir, Kepa Bengoetxea, G\u00f6zde Berk, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agn\u0117 Bielinskien\u0117, Krist\u00edn Bjarnad\u00f3ttir, Rogier Blokland, Victoria Bobicev, Lo\u00efc Boizou, Emanuel Borges V\u00f6lker, Carl B\u00f6rstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Anouck Braggaar, Kristina Brokait\u0117, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Lauren Cassidy, Tatiana Cavalcanti, G\u00fcl\u015fen Cebiroglu Eryigit, Flavio Massimiliano Cecchini, Giuseppe G. A. Celano, Slavom\u00edr \u010c\u00e9pl\u00f6, Neslihan Cesur, Savas Cetin, \u00d6zlem \u00c7etinoglu, Fabricio Chalub, Shweta Chauhan, Ethan Chi, Taishi Chika, Yongseok Cho, Jinho Choi, Jayeol Chun, Alessandra T. Cignarella, Silvie Cinkov\u00e1, Aur\u00e9lie Collomb, \u00c7agr\u0131 \u00c7\u00f6ltekin, Miriam Connor, Marine Courtin, Mihaela Cristescu, 
Philemon Daniel, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Mehmet Oguz Derin, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Arawinda Dinakaramani, Elisa Di Nuovo, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Sandra Eiche, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Toma\u017e Erjavec, Aline Etienne, Wograine Evelyn, Sidney Facundes, Rich\u00e1rd Farkas, Mar\u00edlia Fernanda, Hector Fernandez Alcalde, Jennifer Foster, Cl\u00e1udia Freitas, Kazunori Fujita, Katar\u00edna Gajdo\u0161ov\u00e1, Daniel Galbraith, Marcos Garcia, Moa G\u00e4rdenfors, Sebastian Garza, Fabr\u00edcio Ferraz Gerardi, Kim Gerdes, Filip Ginter, Gustavo Godoy, Iakes Goenaga, Koldo Gojenola, Memduh G\u00f6k\u0131rmak, Yoav Goldberg, Xavier G\u00f3mez Guinovart, Berta Gonz\u00e1lez Saavedra, Bernadeta Grici\u016bt\u0117, Matias Grioni, Lo\u00efc Grobol, Normunds Gr\u016bz\u012btis, Bruno Guillaume, C\u00e9line Guillot-Barbance, Tunga G\u00fcng\u00f6r, Nizar Habash, Hinrik Hafsteinsson, Jan Haji\u010d, Jan Haji\u010d jr., Mika H\u00e4m\u00e4l\u00e4inen, Linh H\u00e0 M\u1ef9, Na-Rae Han, Muhammad Yudistira Hanifmuti, Sam Hardwick, Kim Harris, Dag Haug, Johannes Heinecke, Oliver Hellwig, Felix Hennig, Barbora Hladk\u00e1, Jaroslava Hlav\u00e1\u010dov\u00e1, Florinel Hociung, Petter Hohle, Eva Huber, Jena Hwang, Takumi Ikeda, Anton Karl Ingason, Radu Ion, Elena Irimia, 
\u1eccl\u00e1j\u00edd\u00e9 Ishola, Kaoru Ito, Tom\u00e1\u0161 Jel\u00ednek, Apoorva Jha, Anders Johannsen, Hildur J\u00f3nsd\u00f3ttir, Fredrik J\u00f8rgensen, Markus Juutinen, Sarveswaran K, H\u00fcner Ka\u015f\u0131kara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Neslihan Kara, Boris Katz, Tolga Kayadelen, Jessica Kenney, V\u00e1clava Kettnerov\u00e1, Jesse Kirchner, Elena Klementieva, Arne K\u00f6hn, Abdullatif K\u00f6ksal, Kamil Kopacewicz, Timo Korkiakangas, Natalia Kotsyba, Jolanta Kovalevskait\u0117, Simon Krek, Parameswari Krishnamurthy, Oguzhan Kuyruk\u00e7u, Asl\u0131 Kuzgun, Sookyoung Kwak, Veronika Laippala, Lucia Lam, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phuong L\u00ea H\u1ed3ng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Maria Levina, Cheuk Ying Li, Josie Li, Keying Li, Yuan Li, KyungTae Lim, Bruna Lima Padovani, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Olga Loginova, Andry Luthfi, Mikko Luukko, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, B\u00fc\u015fra Mar\u015fan, C\u0203t\u0203lina M\u0203r\u0203nduc, David Mare\u010dek, Katrin Marheinecke, H\u00e9ctor Mart\u00ednez Alonso, Andr\u00e9 Martins, Jan Ma\u0161ek, Hiroshi Matsuda, Yuji Matsumoto, Alessandro Mazzei, Ryan McDonald, Sarah McGuinness, Gustavo Mendon\u00e7a, Niko Miekka, Karina Mischenkova, Margarita Misirpashayeva, Anna Missil\u00e4, C\u0203t\u0203lin Mititelu, Maria Mitrofan, Yusuke Miyao, AmirHossein Mojiri Foroushani, Judit Moln\u00e1r, Amirsaeid Moloodi, Simonetta Montemagni, Amir More, Laura Moreno Romero, Giovanni Moretti, Keiko Sophie Mori, Shinsuke Mori, Tomohiko Morioka, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili M\u00fc\u00fcrisep, Pinkey Nainwani, Mariam Nakhl\u00e9, Juan Ignacio Navarro Hor\u00f1iacek, Anna Nedoluzhko, 
Gunta Ne\u0161pore-B\u0113rzkalne, Manuela Nevaci, Luong Nguy\u1ec5n Th\u1ecb, Huy\u1ec1n Nguy\u1ec5n Th\u1ecb Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Alireza Nourian, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Ad\u00e9day\u1ecd Ol\u00fa\u00f2kun, Mai Omura, Emeka Onwuegbuzia, Petya Osenova, Robert \u00d6stling, Lilja \u00d8vrelid, \u015eaziye Bet\u00fcl \u00d6zate\u015f, Merve \u00d6z\u00e7elik, Arzucan \u00d6zg\u00fcr, Balk\u0131z \u00d6zt\u00fcrk Ba\u015faran, Hyunji Hayley Park, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-\u0141api\u0144ska, Siyao Peng, Cenel-Augusto Perez, Natalia Perkova, Guy Perrier, Slav Petrov, Daria Petrova, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalni\u0146a, Sophie Pr\u00e9vost, Prokopis Prokopidis, Adam Przepi\u00f3rkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela R\u00e4\u00e4bis, Alexandre Rademaker, Taraka Rama, Loganathan Ramasamy, Carlos Ramisch, Fam Rashel, Mohammad Sadegh Rasooli, Vinit Ravishankar, Livy Real, Petru Rebeja, Siva Reddy, Georg Rehm, Ivan Riabov, Michael Rie\u00dfler, Erika Rimkut\u0117, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Eir\u00edkur R\u00f6gnvaldsson, Mykhailo Romanenko, Rudolf Rosa, Valentin Ro\u0219ca, Davide Rovati, Olga Rudina, Jack Rueter, Kristj\u00e1n R\u00fanarsson, Shoval Sadde, Pegah Safari, Beno\u00eet Sagot, Aleksi Sahala, Shadi Saleh, Alessio Salomoni, Tanja Samard\u017ei\u0107, Stephanie Samson, Manuela Sanguinetti, Ezgi San\u0131yar, Dage S\u00e4rg, Baiba Saul\u012bte, Yanin Sawanakunanon, Shefali Saxena, Kevin Scannell, Salvatore Scarlata, Nathan Schneider, Sebastian Schuster, Lane Schwartz, Djam\u00e9 Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Yana Shishkina, Muh Shohibussirri, Dmitry Sichinava, Janine Siewert, Einar Freyr Sigur\u00f0sson, Aline Silveira, 
Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk\u00f3, M\u00e1ria \u0160imkov\u00e1, Kiril Simov, Maria Skachedubova, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Rachele Sprugnoli, Stein\u00fe\u00f3r Steingr\u00edmsson, Antonio Stella, Milan Straka, Emmett Strickland, Jana Strnadov\u00e1, Alane Suhr, Yogi Lesmana Sulestio, Umut Sulubacak, Shingo Suzuki, Zsolt Sz\u00e1nt\u00f3, Dima Taji, Yuta Takahashi, Fabio Tamburini, Mary Ann C. Tan, Takaaki Tanaka, Samson Tella, Isabelle Tellier, Marinella Testori, Guillaume Thomas, Liisi Torga, Marsida Toska, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Utku T\u00fcrk, Francis Tyers, Sumire Uematsu, Roman Untilov, Zde\u0148ka Ure\u0161ov\u00e1, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Sowmya Vajjala, Rob van der Goot, Martine Vanhove, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Natalia Vlasova, Aya Wakasa, Joel C. Wallenberg, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Paul Widmer, Seyi Williams, Mats Wir\u00e9n, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wr\u00f3blewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Arife Bet\u00fcl Yenice, Olcay Taner Y\u0131ld\u0131z, Zhuoran Yu, Zden\u011bk \u017dabokrtsk\u00fd, Shorouq Zahra, Amir Zeldes, Hanzhi Zhu, Anna Zhuravleva, and Rayan Ziane. 2021. Universal dependencies 2.8.1. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Genre Distribution in UD Version 2.8. Ranges indicate upper/lower bounds for sentences per genre inferred from UD metadata. Center marker reflects the distribution under the assumption that genres within treebanks are uniformly distributed. Labels above the bars indicate the number of treebanks which contain each genre.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Genre Predictions on UD (Test). Ranges indicate upper/lower bounds inferred from UD metadata and the distribution under treebank-level uniformity at the center marker. Bars show averaged distribution predictions with standard deviations by FREQ, ZERO, BOOT, CLASS, GMM+L and LDA+L.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Confusions of Instance-level Genre. Ratios of predicted labels (columns) per target (row) for ZERO, BOOT, CLASS, FREQ, GMM+L, LDA+L on test splits of 26 instance-annotated treebanks.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |