{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:42:55.140363Z"
},
"title": "Typological Approach to Improve Dependency Parsing for Croatian Language",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Alves",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zagreb Zagreb",
"location": {
"country": "Croatia"
}
},
"email": ""
},
{
"first": "Boke",
"middle": [],
"last": "Bekavac",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zagreb Zagreb",
"location": {
"country": "Croatia"
}
},
"email": "[email protected]"
},
{
"first": "Marko",
"middle": [],
"last": "Tadi\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zagreb Zagreb",
"location": {
"country": "Croatia"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article presents the results of experiments with different typological approaches to syntactic structure, aimed at identifying similar languages that can be combined with Croatian to improve UAS and LAS metrics when using a deep learning tool. Of the eight selected languages, coming from different linguistic families and genera, we show that Slovene and Irish are the best candidates, significantly improving dependency parsing results. Slovak is the only language presenting negative synergy when combined with Croatian. Neither of the two typological approaches presented in this study, one using quantitative data on context-free grammar rules extracted from corpora with the Marsagram tool and the other using syntactic features from lang2vec language vectors, allowed us to explain the synergy observed when the different languages were combined. The traditional genealogical classification explains neither the improvement provided by Irish nor the negative impact of the Slovak language on both considered metrics.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This article presents the results of experiments with different typological approaches to syntactic structure, aimed at identifying similar languages that can be combined with Croatian to improve UAS and LAS metrics when using a deep learning tool. Of the eight selected languages, coming from different linguistic families and genera, we show that Slovene and Irish are the best candidates, significantly improving dependency parsing results. Slovak is the only language presenting negative synergy when combined with Croatian. Neither of the two typological approaches presented in this study, one using quantitative data on context-free grammar rules extracted from corpora with the Marsagram tool and the other using syntactic features from lang2vec language vectors, allowed us to explain the synergy observed when the different languages were combined. The traditional genealogical classification explains neither the improvement provided by Irish nor the negative impact of the Slovak language on both considered metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Since the 1980s, the NLP field has increasingly relied on statistics, probability, and machine learning methods, which require large amounts of linguistic data. Furthermore, from 2015 onward, the use of deep learning techniques has been dominant in this field (Otter et al., 2018). These approaches require large amounts of annotated data, which can be problematic for languages considered low-resourced.",
"cite_spans": [
{
"start": 258,
"end": 278,
"text": "(Otter et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Manual linguistic annotation of texts can be very costly (Fort et al., 2014); therefore, other solutions for improving PoS-MSD (Part-of-Speech and Morphosyntactic Descriptors) and dependency parsing scores have been proposed. One way to overcome this issue is to combine data from similar languages according to established typological classifications (Smith et al., 2018) (Alzetta et al., 2020). Although some improvement can be observed, most of these studies do not present a deep analysis of the typological features which may play a significant role when corpora are combined. Furthermore, none has considered statistics on possible (or impossible) syntactic constructions inside the available training datasets as a basis for typological classification.",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "(Fort et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 361,
"end": 381,
"text": "(Smith et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 382,
"end": 404,
"text": "(Alzetta et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, our aim in this paper is to propose an innovative way of taking typological aspects into account when combining datasets to improve dependency parsing. The study focuses on the Croatian language and its association with several European languages from different linguistic families. Our hypothesis is that, by comparing syntactic rules automatically extracted from Universal Dependencies datasets through inferred context-free grammars (together with their statistics), we can classify languages according to these syntactic criteria. Combining languages that are closer in syntactic structure when training deep learning parsing models should then improve the final LAS and UAS metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organised as follows: Section 2 presents work related to this topic. Section 3 describes the campaign design: dataset selection, typological classification strategies, and extrinsic evaluation using trained models. Section 4 presents the obtained results, which are discussed in Section 5. In Section 6 we provide conclusions and possible future directions for research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Combining data from multiple languages has the ultimate aim of creating universal morphological and dependency parsing systems by considering the relationships between the morphology and syntactic structure of different languages (Otter et al., 2018). The Universal Dependencies (UD) framework (Nivre et al., 2020) proposes a robust set of rules for annotating parts of speech, morphological features, and syntactic dependencies across different human languages, and fits into this strategy as it allows multilingual data to be annotated with the same set of tags.",
"cite_spans": [
{
"start": 222,
"end": 242,
"text": "(Otter et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 287,
"end": 307,
"text": "(Nivre et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The Udify tool (Kondratyuk and Straka, 2019) proposes an architecture for PoS-MSD tagging and dependency parsing that integrates the Multilingual BERT language model 1 (104 languages) (Pires et al., 2019). It can be fine-tuned using specific corpora (mono- or multilingual) to enhance overall results. The authors showed that, by using a corpus composed of all Universal Dependencies training sets together, there is a considerable improvement in parsing results for low-resourced languages. Nevertheless, the authors did not conduct an experiment based on typological features to test the potential of the model when only similar languages are combined.",
"cite_spans": [
{
"start": 11,
"end": 40,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 178,
"end": 198,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An interesting example of the use of typological features to improve the results of NLP methods was presented by (\u00dcst\u00fcn et al., 2020). They proposed UDapter, a tool that uses a mix of automatically curated and predicted typological features obtained via the URIEL language typology database (Littell et al., 2017). These features were used as direct input to a neural parser as language-typology vectors, and the results showed that they were crucial for improving dependency parsing accuracy for low-resourced languages. A similar study, using a different deep learning architecture, had been performed by (Ammar et al., 2016); however, in both cases there is no detailed analysis of which features were the most relevant.",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(\u00dcst\u00fcn et al., 2020)",
"ref_id": null
},
{
"start": 286,
"end": 308,
"text": "(Littell et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 607,
"end": 627,
"text": "(Ammar et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The above-mentioned language typology database offers the lang2vec tool (Littell et al., 2017), which provides uniform, consistent, and standardized information about languages drawn from typological, geographical, and phylogenetic databases. Its sources include WALS (Dryer and Haspelmath, 2013), PHOIBLE (Moran and McCloy, 2019), Ethnologue (Lewis, 2009), and Glottolog (Hammarstr\u00f6m et al., 2020). While (\u00dcst\u00fcn et al., 2020) used lang2vec in an automated way to cluster languages, (Naseem et al., 2012) selected specific typological features to fine-tune effective automatic annotation of data from languages with no available training sets.",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "(Littell et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 265,
"end": 293,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": null
},
{
"start": 296,
"end": 328,
"text": "PHOIBLE (Moran and McCloy, 2019)",
"ref_id": null
},
{
"start": 342,
"end": 355,
"text": "(Lewis, 2009)",
"ref_id": null
},
{
"start": 372,
"end": 398,
"text": "(Hammarstr\u00f6m et al., 2020)",
"ref_id": null
},
{
"start": 486,
"end": 507,
"text": "(Naseem et al., 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another proposed strategy concerns sharing 27 parameters of the Uppsala parser across pairs of languages from the same linguistic family, showing that general typological classifications can already contribute to enhancing final results for low-resourced languages. The authors also observed that, by combining features even from unrelated languages, overall scores can be improved in some specific cases. Nevertheless, as is the case for most similar studies, no specific linguistic analysis was presented to explain why languages coming from different families can improve overall results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An interesting and detailed experiment concerning the Irish language was conducted by (Lynn et al., 2014). The authors performed a series of cross-lingual direct-transfer parsing experiments for Irish, and the best results were achieved when using Indonesian, a language from the Austronesian family. They also proposed some analysis of similarities between the treebanks of the two languages in terms of dependency parsing labels; however, a detailed statistical analysis of the corpora and a complete comparison of specific typological features were not carried out.",
"cite_spans": [
{
"start": 56,
"end": 75,
"text": "(Lynn et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Concerning syntax more specifically, (Alzetta et al., 2020) presented a study whose main objective was to identify cross-lingual quantitative trends in the distribution of dependency relations in annotated corpora from distinct languages, using an algorithm (LISCA - LInguiStically-driven Selection of Correct Arcs) (Dell'Orletta et al., 2013) capable of detecting patterns of syntactic constructions in large datasets. Only four Indo-European languages were scrutinised, but some interesting insights concerning language peculiarities were observed.",
"cite_spans": [
{
"start": 37,
"end": 59,
"text": "(Alzetta et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another approach to extracting and comparing syntactic information from treebanks was proposed by (Blache et al., 2016), based on the analysis of syntactic structures inside annotated corpora. Their analysis comparing 10 different languages showed the potential of the proposed tool (MarsaGram); however, like (Alzetta et al., 2020), the authors do not explore how this information can be used to improve existing NLP tools, which is the main objective of this paper.",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "(Blache et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 264,
"end": 286,
"text": "(Alzetta et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the corpora that were selected, the typological classification methods that were considered, and the experimental design used to evaluate the effects of combining different training datasets on dependency parsing metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Campaign Design",
"sec_num": "3"
},
{
"text": "As mentioned before, the focus of this study is the Croatian language. The main idea is to combine its training dataset with those of other European languages to improve UAS and LAS scores. From all 24 official European Union languages, we have chosen the following for our experiments: Bulgarian, Greek, Hungarian, Irish, Latvian, Maltese, Slovak, and Slovene. We decided to work with European languages as this ensemble already provides languages from diverse linguistic families and allows us to test our hypothesis. All the selected languages have Universal Dependencies datasets (version 2.7) and were chosen because they have only one UD corpus. Slovene is the exception: it has two different UD datasets, but one is composed of spoken language, so the other available corpus (written language) was used. The choice of including Slovene is also due to its genealogical proximity to Croatian. Table 1 presents the languages involved in the experiment, with their respective linguistic family and genus (from the World Atlas of Language Structures Online 2) and the size of their UD corpora (version 2.7).",
"cite_spans": [],
"ref_spans": [
{
"start": 901,
"end": 908,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Languages and Datasets selection",
"sec_num": "3.1"
},
{
"text": "In this study, we compare the chosen languages using two different typological approaches. One considers the statistical analysis of context-free grammar rules extracted from dependency parsing trees using the Marsagram software, while the other uses information from lang2vec language vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological Analysis",
"sec_num": "3.2"
},
{
"text": "Marsagram is a tool for exploring treebanks: it extracts context-free grammars (CFG) from annotated datasets, allowing statistical comparison between languages as proposed by (Blache et al., 2016). We have used the latest release of this software 3, developed by ORTOLANG. This software was chosen as it allows easy extraction and analysis of surface word-order patterns, which have never been used before as a way to interpret the results of language combination for training deep learning models.",
"cite_spans": [
{
"start": 176,
"end": 197,
"text": "(Blache et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical comparison of Dependency Parsing Trees",
"sec_num": "3.2.1"
},
{
"text": "All rules, all properties: 714 399; all rules, only linear properties: 96 789; common rules, all properties: 1 912; common rules, only linear properties: 247. Table 2: Different approaches for the statistical typological comparison and the respective number of syntactic rules considered.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Number of Rules",
"sec_num": null
},
{
"text": "For this analysis, we combined the train, development, and test sets of each language and extracted quantitative information about their syntactic properties. Distance matrices were then generated using R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Rules",
"sec_num": null
},
{
"text": "This software identifies four types of properties: precede, require, exclude, and unicity. The extracted syntactic rules contain information concerning the part-of-speech and dependency parsing label, as well as the associated property type. For example: NOUN-conj precede CCONJ-cc DET-det, which means that a CCONJ with the dependency relation cc precedes a DET with the dependency label det, in the context of a node having NOUN as head. Marsagram also indicates the frequency of each rule inside the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Rules",
"sec_num": null
},
{
"text": "In previous work (Blache et al., 2016), the authors proposed two different analyses: considering all possible properties, or taking into account only the linear property (precede). They showed that the linear approach was better for classifying languages typologically, as its results were closer to classic genealogical classifications. Nevertheless, in our study we still consider both scenarios, in order to analyse which one is better when the aim is to combine languages to improve dependency parsing metrics.",
"cite_spans": [
{
"start": 21,
"end": 42,
"text": "(Blache et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Rules",
"sec_num": null
},
{
"text": "For each language, Marsagram generates a specific set of rules together with the percentage corresponding to each rule's frequency inside the corpus. Some rules are common to all languages, and some appear only in one or a few corpora. Therefore, the typological classification can be done either by considering all identified rules (with a frequency of zero for languages in which a rule does not appear) or by considering only the rules present in all corpora (common rules).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Rules",
"sec_num": null
},
{
"text": "Thus, we have four different possible comparisons, which are presented in Table 2 together with the number of syntactic rules and the properties considered in each one.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Number of Rules",
"sec_num": null
},
{
"text": "Lang2vec is a library 4 that allows simple queries of the URIEL database, whose results are presented as language vectors (Littell et al., 2017). For this study, we considered syntactic information (the syntax average option). For example: S_NEGATIVE_SUFFIX has a value of 1 if the language has a negative suffix and 0 if it does not, and S_SUBJECT_AFTER_VERB is 1 for languages in which the subject appears after the verb and 0 otherwise.",
"cite_spans": [
{
"start": 113,
"end": 135,
"text": "(Littell et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison using Language Vectors",
"sec_num": "3.2.2"
},
{
"text": "One disadvantage of this tool is that, for some languages, not all information is available. If all official European Union languages are considered, the number of syntactic properties available in lang2vec is 103; however, Croatian has values for only 12 of them. As our focus is on this language, we considered the syntactic features for which Croatian has associated values 5. The distance between languages was calculated using cosine similarity. Among the other selected languages, only Maltese and Slovak do not have values for all these features and were therefore discarded from this specific analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison using Language Vectors",
"sec_num": "3.2.2"
},
{
"text": "We selected the Udify tool to train dependency parsing models on the combined corpora, as it allows fine-tuning of the Multilingual BERT language model, and its authors showed that a multilingual corpus can potentially enhance overall results, especially for under-resourced languages (Kondratyuk and Straka, 2019). The training parameters were defined as:",
"cite_spans": [
{
"start": 292,
"end": 321,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "\u2022 Number of epochs: 80",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "\u2022 Warmup: 500",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "\u2022 Baseline training set: Croatian SET",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "\u2022 Development and test sets: Croatian SET Our baseline is the result obtained by training Udify on the Croatian Universal Dependencies training set (SET), which contains 6 914 sentences. To assess statistical significance, for each test using a specific dataset we conducted six experiments, varying the Random Seed value in the Udify configuration file: the standard value, 13370 (proposed by the developers), 10, 100, 1000, and 100000. For each test, we calculated the standard deviation and the p-value relative to the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "As explained before, the objective is to combine the Croatian dataset with annotated data from the other selected languages. We combined its training set with three different amounts of the other languages' annotated data, as detailed in Table 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "One problem is that each training set has a different size; thus, to ensure homogeneity in size and allow results to be compared, we decided to add the first 909 sentences of the second language's training corpus to the Croatian one. This value corresponds to the size of the Hungarian training set, the smallest among the chosen languages and therefore used in its entirety; this limitation concerning Hungarian is what determined the ratio for all language combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "The final size of the combined training sets is 7 823 sentences (88% Croatian and 12% from the other language).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models",
"sec_num": "3.3"
},
{
"text": "In this section, we present the typological classification of the languages obtained using the methods presented previously followed by the results of the combination of the different datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Table 4 shows the distance between each language and Croatian for the different choices of rule and property selection using Marsagram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological classification using statistics from syntactic trees",
"sec_num": "4.1"
},
{
"text": "In the scenario considered in the second column of Table 4 (all rules and all properties), we observe that Slovene and Slovak are the closest to Croatian (all three being Slavic languages); however, Bulgarian, which is also Slavic, comes after Greek, Maltese, Hungarian, and Latvian, which are from different genealogical families.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological classification using statistics from syntactic trees",
"sec_num": "4.1"
},
{
"text": "The third column of Table 4 shows the results of the analysis of all rules, but considering only the linear property (precede). Again, Slovene and Slovak are the most similar to Croatian, followed by Greek. When only linear properties are considered, Latvian and Irish are classified as closer to Croatian than in the previous scenario. Bulgarian, despite being Slavic, again occupies the second-to-last position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological classification using statistics from syntactic trees",
"sec_num": "4.1"
},
{
"text": "When only common rules are considered (fourth column of Table 4), Slovene is still the closest to Croatian; however, in this case Bulgarian is classified as much closer, and Slovak loses the second position to Latvian. Maltese, Greek, and Hungarian are the most distant languages. Finally, when only common rules and linear properties are taken into account (fifth column of Table 4), we observe important changes in the classification: Slovene is no longer the closest to Croatian, and Maltese and Bulgarian are the closest ones (second and third positions), behind only Latvian.",
"cite_spans": [],
"ref_spans": [
{
"start": 376,
"end": 384,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Typological classification using statistics from syntactic trees",
"sec_num": "4.1"
},
{
"text": "The typological classification differs when different sets of rules and properties are considered. Slovene and Slovak are most of the time the closest languages to Croatian, which was expected considering that they are all Slavic languages. These results show that it is difficult to determine which choice of rules and properties is best suited for syntactic language classification. Results may be biased by the size, genre, and type of sentences composing the corpora (for example, sentence length and syntactic complexity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological classification using statistics from syntactic trees",
"sec_num": "4.1"
},
{
"text": "By using the cosine distance between the language vectors built with syntactic features from lang2vec, we obtain the classification presented in Table 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Typological classification using similarity between language vectors",
"sec_num": "4.2"
},
{
"text": "Both Slavic languages (Slovene and Bulgarian) are the most similar to Croatian, which is more coherent with the typical genealogical classification of languages. As mentioned before, Slovak, although also Slavic, does not have values for the analysed features and was therefore excluded from this comparison. Latvian, Greek, and Hungarian have similar distances, but much larger than those of the Slavic languages, and Irish is the most distant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological classification using similarity between language vectors",
"sec_num": "4.2"
},
{
"text": "In Tables 6, 7, and 8 we present the UAS and LAS values obtained when Udify was trained on the Croatian training set alone (baseline) and on the combined datasets (Croatian associated with another language) at three different ratios, as well as the delta compared to the baseline. Each result corresponds to the mean value over the six trials with different Random Seed initial values. Highlighted results concern experiments for which the p-value is below 0.05. The development and test sets were purely Croatian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing results with combined corpora",
"sec_num": "4.3"
},
{
"text": "When the smaller ratio is used to train Udify (94% Croatian, 6% other language), we observe that only Bulgarian, Greek, and Irish contribute positively to both UAS and LAS metrics, with the association of Croatian and Irish providing the highest increase. Negative synergy is observed only for the LAS metric when Croatian is combined with Slovak. For the medium ratio (88% Croatian, 12% other language), the combinations of Croatian with Irish and with Slovene provide a positive synergy. As with the smaller ratio, when Croatian is combined with Slovak there is a negative synergy, this time observed for both UAS and LAS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing results with combined corpora",
"sec_num": "4.3"
},
{
"text": "Concerning the larger ratio (81% Croatian, 19% other language), the combination of Croatian and Slovak again significantly decreases both UAS and LAS metrics. The corpus composed of Croatian and Irish no longer provides a positive synergy. The only significant increase is obtained for the LAS metric when Croatian is combined with Slovene.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing results with combined corpora",
"sec_num": "4.3"
},
{
"text": "By analysing the UAS and LAS results presented in the previous section, it is possible to observe that the Bulgarian, Greek, Irish, and Slovene training corpora have the potential to improve UAS and LAS metrics when combined with the Croatian training dataset. However, results strongly depend on the ratio between Croatian sentences and the other combined language. Bulgarian and Greek provided a positive synergy only for the smaller ratio, while the combination with Irish was positive for both the smaller and medium ratios. Slovene did not improve the metrics for the smaller ratio but had a positive impact for both the medium and larger ones. What is clear for all three ratios is the strong negative impact of Slovak when this language is associated with Croatian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In their article, (Kondratyuk and Straka, 2019) presented results for Croatian from a model trained by combining 124 languages. The obtained UAS and LAS values are 91.10 and 86.78, respectively. All the models presented in this study score higher than these values, even our baseline and the combination with Slovak. Thus, it seems that finding typological ways to combine languages wisely, and on a smaller scale, is more effective. Moreover, another study that included the Croatian language obtained a LAS of 77.9, also inferior to the values in our experiments; however, the combined languages were not the same.",
"cite_spans": [
{
"start": 18,
"end": 47,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In terms of typology, if we consider the traditional genealogical classification of languages, we can state that being part of the same linguistic family and genus does not guarantee a positive synergy when corpora are combined. Even though Bulgarian and, especially, Slovene can improve the final results when combined with Croatian, Slovak, which is from the same genus, is the only language with a negative influence in all tested scenarios. Moreover, Irish, which is from a different genus, is a good candidate for improving UAS and LAS results when combined with Croatian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "If we consider the classifications established using Marsagram, it is not possible to find any correlation between the classification lists considering the syntactic criteria with the observed results from Udify. Slovene is the closest language to Croatian when all rules are considered (with all properties considered and only linear ones too) and also when only common rules are compared. However, the calculated distances between Irish and Croatian do not explain the improvement obtained by associating both languages. Also, Slovak does not appear as being the most distant language when compared to Croatian, a result that would explain the negative synergy observed when its corpus is combined with the Croatian dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "One possible explanation for this lack of correlation may come from the fact that the distances were calculated using the results obtained by Marsagram which were composed of rules coming from the whole Universal Dependency datasets for each language. However, when Udify experiments were conducted, only a small part of the respective corpora have been used. Therefore, a more precise correlation may be possible if distances are calculated using only the sentences that have been added to the combined training corpus. Another aspect that may need further research concern the homogeneity of extracted rules using Marsagram from subcorpora of a dataset from a single language. It may be possible that the variation inside a corpus may be higher than when two different languages are compared. This case could be accommodated with the usage of controlled content, i.e. parallel corpora of languages investigated. However, this is not always available, particularly for under-resourced languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Furthermore, the selected corpora have different sizes and different contents. It may impact heavily the type of syntactic patterns that were extracted using Marsagram. The number of patterns obtained seems to be correlated with the size of the corpus. A comparison using parallel corpora could avoid this bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Moreover, positive synergies may not be caused by the whole ensemble of extracted rules but maybe by specific syntactic relations which are shared by the associated languages. Further qualitative analysis of similarities between Irish and Croatian Marsagram results should be conducted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "When analysing the typological classification using lang2Vec, Slovene and Bulgarian are the closest to Croatian, which we can relate to the positive synergy observed in Udify results. However, Irish is the most distant one which is contradictory with the improvement obtained for both UAS and LAS in two different scenarios. Also, as Slovak does not have values for the selected syntactic features, it was impossible to check whether the combination with Croatian has any negative impact. Thus, even though this tool is a powerful instrument to compare languages, in the approach described here, it seems limited. The idea of combining corpora to improve parsing is most useful for under-resourced languages, and, unfortunately, some of these languages are also under-resourced in terms of language vector information in lang2vec. For example, from the 103 possible syntactic features, the Croatian language only has values for 12 of them. Considering all the aspects presented above, we can affirm that none of the genealogical and typological approaches were able to explain precisely what was observed when different languages were combined to Croatian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this article, we presented different approaches to identify languages that can be combined with Croatian to improve dependency parsing evaluation metrics (UAS and LAS) when using Udify deep learning tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "The possible typological classifications were compared to the results obtained when combining the Croatian training dataset to other European languages from different linguistic families to train Udify models. Three different association ratios were used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "We showed that the association of Croatian with Irish and Slovene languages showed the best positive synergy, increasing UAS and LAS for at least two different combination ratios. Moreover, from all selected languages, the only one which decreased significantly in both metrics is Slovak.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "These results show that the classical genealogical classification of languages is not enough to explain the observed phenomena. Slovak and Slovene are from the same linguistic family and genus as Croatian but with totally different impacts on the final results. Also, the Irish language does not belong to the same genus as Croatian, nevertheless, it helped improve UAS and LAS significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "The two typological approaches proposed in this paper, using rules from a context-free grammar with Marsagram and comparing lang2vec syntactic features of language vectors, also did not allow us to predict the results obtained when languages were combined. Slovene is identified as the closest language to Croatian in three out of four different analysed Marsagram scenarios. However, the classification of Irish and Slovak does not correspond to the influence these languages have when combined with Croatian. Moreover, the lang2vec classification shows Irish as being the least similar to Croatian, and, unfortunately, Slovak was not analysed due to the lack of syntactic information of this language in this tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "The study presented in this article was conducted only for Croatian, therefore, we intend to test this approach with other under-resourced languages, also enlarging the selection of languages to be combined to understand better the existing synergies and, also, possible exceptions as the one that has been identified in this article concerning the association between Croatian and Irish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "For future research we will check the quality of Slovak data because it consistently differ from other Slavic languages although genealogically and culturally Slovak is closely connected to Croatian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "Furthermore, our aim is to conduct a more detailed analysis concerning Marsagram results, first, checking the homogeneity of rules extracted from different subcorpora of the same language, and, secondly, using only the sentences that were appended to the combined training corpora to calculate the distances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Perspectives",
"sec_num": "6"
},
{
"text": "https://github.com/google-research/bert/blob/master/multilingual.md",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://wals.info/languoid/genealogy 3 Available at: https://www.ortolang.fr/market/tools/ortolang-000917",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.org/project/lang2vec/ 5 Selected syntactic features:S SVO, S SOV, S VSO, S VOS, S OVS, S OSV, S SUBJECT BEFORE VERB, S SUBJECT AFTER VERB, S OBJECT AFTER VERB, S OBJECT BEFORE VERB, S SUBJECT BEFORE OBJECT, and S SUBJECT AFTER OBJECT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work presented in this paper has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sk\u0142odowska-Curie grant agreement no. 812997 and under the name CLEOPATRA (Cross-lingual Event-centric Open Analytics Research Academy).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quantitative linguistic investigations across universal dependencies treebanks",
"authors": [
{
"first": "Chiara",
"middle": [],
"last": "Alzetta",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "Petya",
"middle": [],
"last": "Osenova",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Simov",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Venturi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Seventh Italian Conference on Computational Linguistics, CLiC-it 2020",
"volume": "2769",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiara Alzetta, Felice Dell'Orletta, Simonetta Montemagni, Petya Osenova, Kiril Simov, and Giulia Venturi. 2020. Quantitative linguistic investigations across universal dependencies treebanks. In Johanna Monti, Felice Dell'Orletta, and Fabio Tamburini, editors, Proceedings of the Seventh Italian Conference on Computational Linguistics, CLiC-it 2020, Bologna, Italy, March 1-3, 2021, volume 2769 of CEUR Workshop Proceedings. CEUR-WS.org.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Many languages, one parser",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "431--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "MarsaGram: an excursion in the forests of parsing trees",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Rauzy",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montcheuil",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "2336--2342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Blache, St\u00e9phane Rauzy, and Gr\u00e9goire Montcheuil. 2016. MarsaGram: an excursion in the forests of parsing trees. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2336-2342, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parameter sharing between dependency parsers for related languages",
"authors": [
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4992--4997",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miryam de Lhoneux, Johannes Bjerva, Isabelle Augenstein, and Anders S\u00f8gaard. 2018. Parameter sharing be- tween dependency parsers for related languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4992-4997, Brussels, Belgium, October-November. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Linguistically-driven selection of correct arcs for dependency parsing",
"authors": [
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Venturi",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Montemagni",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felice Dell'Orletta, Giulia Venturi, and Simonetta Montemagni. 2013. Linguistically-driven selection of correct arcs for dependency parsing. Computaci\u00f3n y Sistemas, 17.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Creating zombilingo, a game with a purpose for dependency syntax annotation",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Fort",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "Hadrien",
"middle": [],
"last": "Chastant",
"suffix": ""
}
],
"year": 2014,
"venue": "Gamification for Information Retrieval (GamifIR'14) Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Fort, Bruno Guillaume, and Hadrien Chastant. 2014. Creating zombilingo, a game with a purpose for dependency syntax annotation. In Gamification for Information Retrieval (GamifIR'14) Workshop.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "75 languages, 1 model: Parsing universal dependencies universally",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Kondratyuk",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Kairis",
"suffix": ""
},
{
"first": "Carlisle",
"middle": [],
"last": "Turner",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "8--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8-14, Valencia, Spain, April. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Cross-lingual transfer parsing for lowresourced languages: An Irish case study",
"authors": [
{
"first": "Teresa",
"middle": [],
"last": "Lynn",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Lamia",
"middle": [],
"last": "Tounsi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Celtic Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "41--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teresa Lynn, Jennifer Foster, Mark Dras, and Lamia Tounsi. 2014. Cross-lingual transfer parsing for low- resourced languages: An Irish case study. In Proceedings of the First Celtic Language Technology Workshop, pages 41-49, Dublin, Ireland, August. Association for Computational Linguistics and Dublin City University.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "PHOIBLE 2.0. Max Planck Institute for the Science of Human History",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Moran and Daniel McCloy, editors. 2019. PHOIBLE 2.0. Max Planck Institute for the Science of Human History, Jena.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Selective sharing for multilingual dependency parsing",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "629--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629-637, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Universal Dependencies v2: An evergrowing multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4034--4043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France, May. European Language Resources Association.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A survey of the usages of deep learning in natural language processing",
"authors": [
{
"first": "Daniel",
"middle": [
"W"
],
"last": "Otter",
"suffix": ""
},
{
"first": "Julian",
"middle": [
"R"
],
"last": "Medina",
"suffix": ""
},
{
"first": "Jugal",
"middle": [
"K"
],
"last": "Kalita",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel W. Otter, Julian R. Medina, and Jugal K. Kalita. 2018. A survey of the usages of deep learning in natural language processing. CoRR, abs/1807.10854.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "82 treebanks, 34 models: Universal Dependency parsing with multi-treebank models",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "113--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018. 82 tree- banks, 34 models: Universal Dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113-123, Brussels, Belgium, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "UDapter: Language adaptation for truly Universal Dependency parsing",
"authors": [
{
"first": "Ahmet",
"middle": [],
"last": "\u00dcst\u00fcn",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Gosse",
"middle": [],
"last": "Bouma",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2302--2315",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmet\u00dcst\u00fcn, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language adaptation for truly Universal Dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2302-2315, Online, November. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "Information concerning the different combinations of the Croatian training set and other languages.",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"text": "Distance from Croatian using Marsagram results, first word correspond to the type of rules considered and the second word to the type of properties.",
"content": "<table><tr><td colspan=\"5\">Language d(All/All) d(All/Linear) d(Common/All) d(Common/Linear)</td></tr><tr><td>Slovene</td><td>68.0</td><td>21.4</td><td>4.0</td><td>1.1</td></tr><tr><td>Slovak</td><td>69.4</td><td>24.0</td><td>4.9</td><td>1.2</td></tr><tr><td>Greek</td><td>70.0</td><td>24.5</td><td>5.9</td><td>1.6</td></tr><tr><td>Maltese</td><td>73.7</td><td>24.8</td><td>5.8</td><td>1.1</td></tr><tr><td>Hungarian</td><td>77.2</td><td>26.4</td><td>6.2</td><td>1.5</td></tr><tr><td>Latvian</td><td>78.5</td><td>24.5</td><td>4.3</td><td>1.0</td></tr><tr><td>Bulgarian</td><td>80.0</td><td>25.5</td><td>4.4</td><td>1.1</td></tr><tr><td>Irish</td><td>80.6</td><td>25.2</td><td>5.3</td><td>1.7</td></tr><tr><td colspan=\"4\">Table 4: Language Distance</td><td/></tr><tr><td/><td/><td>Slovene</td><td>0.01</td><td/></tr><tr><td/><td/><td>Bulgarian</td><td>0.03</td><td/></tr><tr><td/><td/><td>Latvian</td><td>0.11</td><td/></tr><tr><td/><td/><td>Greek</td><td>0.11</td><td/></tr><tr><td/><td/><td>Hungarian</td><td>0.12</td><td/></tr><tr><td/><td/><td>Irish</td><td>0.51</td><td/></tr></table>"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"html": null,
"type_str": "table",
"text": "UAS and LAS metrics obtained by training Udify with different training datasets: Croatian alone and associated with other languages (94% Croatian, 6% other language).",
"content": "<table><tr><td>Training Corpus</td><td colspan=\"4\">UAS delta UAS LAS delta LAS</td></tr><tr><td>Croatian (baseline)</td><td>92.32</td><td>-</td><td>88.99</td><td>-</td></tr><tr><td colspan=\"2\">Croatian + Bulgarian (Medium) 92.35</td><td>0.03</td><td>89.02</td><td>0.03</td></tr><tr><td>Croatian + Greek (Medium)</td><td>92.35</td><td>0.03</td><td>89.98</td><td>-0.01</td></tr><tr><td colspan=\"2\">Croatian + Hungarian (Medium) 92.33</td><td>0.02</td><td>89.01</td><td>0.02</td></tr><tr><td>Croatian + Irish (Medium)</td><td>92.43</td><td>0.12</td><td>89.07</td><td>0.08</td></tr><tr><td>Croatian + Latvian (Medium)</td><td>92.26</td><td>-0.06</td><td>88.92</td><td>-0.06</td></tr><tr><td>Croatian + Maltese (Medium)</td><td>92.36</td><td>0.04</td><td>88.97</td><td>-0.01</td></tr><tr><td>Croatian + Slovak (Medium)</td><td>92.21</td><td>-0.11</td><td>88.89</td><td>-0,09</td></tr><tr><td>Croatian + Slovene (Medium)</td><td>92.42</td><td>0.10</td><td>89.09</td><td>0.10</td></tr></table>"
},
"TABREF7": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>"
},
"TABREF9": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: UAS and LAS metrics obtained by training Udify with different training datasets: Croatian</td></tr><tr><td>alone and associated with other languages (81% Croatian, 19% other language). Hungarian and Maltese</td></tr><tr><td>training corpora do not have enough annotated sentences to be combined with Croatian in this specific</td></tr><tr><td>ratio.</td></tr></table>"
}
}
}
}