{
"paper_id": "C16-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:03:48.682093Z"
},
"title": "Zero-resource Dependency Parsing: Boosting Delexicalized Cross-lingual Transfer with Linguistic Knowledge",
"authors": [
{
"first": "Lauriane",
"middle": [],
"last": "Aufrant",
"suffix": "",
"affiliation": {
"laboratory": "LIMSI",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "91 405",
"settlement": "Orsay",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": "",
"affiliation": {
"laboratory": "LIMSI",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "91 405",
"settlement": "Orsay",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": "",
"affiliation": {
"laboratory": "LIMSI",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "91 405",
"settlement": "Orsay",
"country": "France"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper studies cross-lingual transfer for dependency parsing, focusing on very low-resource settings where delexicalized transfer is the only fully automatic option. We show how to boost parsing performance by rewriting the source sentences so as to better match the linguistic regularities of the target language. We contrast a data-driven approach with an approach relying on linguistically motivated rules automatically extracted from the World Atlas of Language Structures. Our findings are backed up by experiments involving 40 languages. They show that both approaches greatly outperform the baseline, the knowledge-driven method yielding the best accuracies, with average improvements of +2.9 UAS, and up to +90 UAS (absolute) on some frequent PoS configurations.",
"pdf_parse": {
"paper_id": "C16-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper studies cross-lingual transfer for dependency parsing, focusing on very low-resource settings where delexicalized transfer is the only fully automatic option. We show how to boost parsing performance by rewriting the source sentences so as to better match the linguistic regularities of the target language. We contrast a data-driven approach with an approach relying on linguistically motivated rules automatically extracted from the World Atlas of Language Structures. Our findings are backed up by experiments involving 40 languages. They show that both approaches greatly outperform the baseline, the knowledge-driven method yielding the best accuracies, with average improvements of +2.9 UAS, and up to +90 UAS (absolute) on some frequent PoS configurations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The need to automatically process an increasing number of languages has made obvious the extreme dependency of standard development pipelines on in-domain, annotated resources that are required to train efficient statistical models. However, for most languages, annotated corpora only exist for a restricted number of domains, when they exist at all. In response to such low-resource scenario, four main strategies have been considered. The first is to hire experts and handcraft these resources, possibly with the help of active learning techniques: Garrette and Baldridge (2013) show that this strategy can be effective and probably cheaper than expected. An alternative is to use models learned on some resource-rich source language(s) to process a low-resource target language; note that this is only possible once the source and target data have been mapped into a shared representation space (Zeman and Resnik, 2008) . When source-target parallel corpora are available, a third approach projects annotations across languages via alignment links (Yarowsky and Ngai, 2001; Hwa et al., 2005; Lacroix et al., 2016) . A variant using artificial parallel corpora, obtained via Machine Translation, is suggested and discussed by Tiedemann et al. (2014) .",
"cite_spans": [
{
"start": 551,
"end": 580,
"text": "Garrette and Baldridge (2013)",
"ref_id": "BIBREF7"
},
{
"start": 898,
"end": 922,
"text": "(Zeman and Resnik, 2008)",
"ref_id": "BIBREF22"
},
{
"start": 1051,
"end": 1076,
"text": "(Yarowsky and Ngai, 2001;",
"ref_id": "BIBREF21"
},
{
"start": 1077,
"end": 1094,
"text": "Hwa et al., 2005;",
"ref_id": "BIBREF9"
},
{
"start": 1095,
"end": 1116,
"text": "Lacroix et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 1228,
"end": 1251,
"text": "Tiedemann et al. (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we focus on the problem of learning dependency parsers for an under-resourced language and consider the delexicalized transfer approach of Zeman and Resnik (2008) , in which the shared source-target representation is obtained by replacing all tokens by their PoS (assuming a common tagset). Thanks to this language-independent representation, a model trained with annotated sentences in a source language can be readily applied to parse sentences in any other language. Delexicalized techniques are especially useful in very low-resource settings, in which existing parallel corpora are likely to be too small or even non-existing. The development of cross-linguistically homogeneous and consistent schemes for PoS labels (Petrov et al., 2012) and, more recently, for dependency trees (McDonald et al., 2013) has been of great help to improve the applicability and effectiveness of delexicalized transfer methods. We contend, however, that having a universal PoS inventory is only a first step towards making the source and target languages more alike. In particular, these shared representations may hide fundamental differences in word order between source and target languages. As explained in \u00a7 2, these divergences introduce systematic biases in parsers: since many features rely on word linear sequence, their distribution across languages varies in great proportions, preventing useful generalizations to be effectively transferred cross-linguistically.",
"cite_spans": [
{
"start": 153,
"end": 176,
"text": "Zeman and Resnik (2008)",
"ref_id": "BIBREF22"
},
{
"start": 736,
"end": 757,
"text": "(Petrov et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 799,
"end": 822,
"text": "(McDonald et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remaining sections, we study ways to improve the performance of delexicalized techniques by making the source word sequence more similar to target sentences, prior to transferring information. Two extensions are contrasted: a data-driven approach and a knowledge-driven approach ( \u00a7 3). The former uses PoS-based statistical language models estimated on target data while the latter relies only on the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013) , which contains a series of linguistic typological features documenting 2,679 languages. Experiments on 40 languages exhibiting very different characteristics and covering several language families show that both methods outperform standard delexicalized transfer by a wide margin ( \u00a7 4), with the knowledge-based approach having the additional benefit to even dispense with the need of unlabeled target data and consequently to be readily usable for more than thousand languages. Incidentally, our experiments thoroughly re-evaluates previous proposals for improving baseline delexicalized transfer.",
"cite_spans": [
{
"start": 451,
"end": 479,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transition-based dependency parsers (Nivre, 2008) are among the most popular methods for computing a syntactic structure. For clarity, we illustrate our work on greedy ARCEAGER parsers which have achieved state-of-the-art performance for many languages. However, our motivations hold regardless of the chosen parsing system, and exploratory experiments with our methods have shown similar improvements with other parsers (including graph-based parsers).",
"cite_spans": [
{
"start": 36,
"end": 49,
"text": "(Nivre, 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations 2.1 Principles of Transition-Based Dependency Parsing",
"sec_num": "2"
},
{
"text": "In an ARCEAGER parser, the parse tree is built incrementally while traversing the sentence from left to right, by executing elementary actions that either move words in a buffer and a stack (via SHIFT and REDUCE actions) or create dependency relationships between the word on top of the stack and the leftmost word in the buffer (using the LEFT or RIGHT actions depending whether the head is in the buffer or on the stack).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations 2.1 Principles of Transition-Based Dependency Parsing",
"sec_num": "2"
},
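To make the transition system concrete, the following is a minimal sketch of the four ARCEAGER actions described above. All names and data structures are illustrative assumptions; this is not the authors' PanParser implementation.

```python
# Minimal sketch of the four ARCEAGER transitions (illustrative only).
# `stack` and `buffer` hold token indices; `arcs` collects (head, dependent).

def shift(stack, buffer, arcs):
    """Move the leftmost buffer token onto the stack."""
    stack.append(buffer.pop(0))

def reduce_(stack, buffer, arcs):
    """Pop the stack top (assumed to already have a head)."""
    stack.pop()

def left_arc(stack, buffer, arcs):
    """Head in the buffer: attach the stack top to the leftmost buffer token."""
    arcs.append((buffer[0], stack.pop()))

def right_arc(stack, buffer, arcs):
    """Head on the stack: attach the leftmost buffer token to the stack top,
    then move the new dependent onto the stack."""
    arcs.append((stack[-1], buffer[0]))
    stack.append(buffer.pop(0))
```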
{
"text": "The actions performed during parsing are predicted by a feature-based classifier, a common choice being the averaged perceptron of Collins and Roark (2004) . It is custom to base the classifier decisions on a limited window centered on the two tokens which could be moved or attached; the following features 1 are typically extracted from these neighborhoods and combined together to yield feature tuples: top of the stack (generally denoted s 0 ) and deeper stack elements (s 1 , s 2 ) to its left, head of the buffer (n 0 ) and additional tokens (n 1 , n 2 ) on its right.",
"cite_spans": [
{
"start": 131,
"end": 155,
"text": "Collins and Roark (2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations 2.1 Principles of Transition-Based Dependency Parsing",
"sec_num": "2"
},
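As an illustration of how such window-based features are built in the delexicalized case, here is a simplified sketch. The paper uses the full Zhang and Nivre (2011) templates; the tuples below are only a small, hypothetical subset.

```python
# Simplified delexicalized feature extraction for one parser configuration.
# The paper uses the full Zhang and Nivre (2011) templates; this sketch only
# illustrates the window (s_2, s_1, s_0, n_0, n_1, n_2) and a few tuples.

def extract_features(stack, buffer, pos):
    """`stack`/`buffer` hold token indices; `pos` maps an index to its PoS."""
    def tag(tokens, i):
        return pos[tokens[i]] if -len(tokens) <= i < len(tokens) else "NULL"

    s0, s1, s2 = tag(stack, -1), tag(stack, -2), tag(stack, -3)
    n0, n1, n2 = tag(buffer, 0), tag(buffer, 1), tag(buffer, 2)
    return [
        f"s0={s0}", f"n0={n0}",          # unigram features
        f"s0={s0}^n0={n0}",              # the pair discussed in Section 2.2
        f"s1={s1}^s0={s0}^n0={n0}",      # a few higher-order tuples
        f"s0={s0}^n0={n0}^n1={n1}",
        f"n0={n0}^n1={n1}^n2={n2}",
        f"s2={s2}^s1={s1}^s0={s0}",
    ]
```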
{
"text": "Transition-based parsers heavily rely on word order: for instance, as shown in Figure 1 , in an ARCEA-GER system, the dependency between two words will be predicted by two different actions depending whether the head occurs before or after its dependent. More importantly, most features used in a dependency parser (no matter the transition system) are sensitive to the word order, as they encode the position of the word in the stack or in the buffer which, in turn, depends on the position of the word in the sentence. ",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Motivations 2.1 Principles of Transition-Based Dependency Parsing",
"sec_num": "2"
},
{
"text": "Delexicalized transfer has proven to be an effective method to transfer parsers between languages (Zeman and Resnik, 2008; McDonald et al., 2013) . However, while delexicalized transfer extracts useful language-independent knowledge from training instances in the source language, we claim that this knowledge is often not encoded in the right form to be effectively used to process target sentences, due to divergences in word ordering. 3 We illustrate this on delexicalized transfer from English to French. Let us assume that we have a delexicalized English parser that is able to perfectly predict the dependency structure of the noun phrase the following question and we use it to annotate the corresponding French phrase la question suivante (literally, the question following 4 ). Thanks to recent efforts in defining universal annotation schemes for syntactic information, notably the Universal Dependencies (UD) project (Nivre et al., 2016) , these phrases can be represented in a unified manner by mapping word forms into the corresponding PoS, yielding respectively DET ADJ NOUN and DET NOUN ADJ. As the English parser has learned that 'DETs depend on NOUNs' and that 'ADJs depend on NOUNs', the appropriate parse for the French phrase should be obvious, as these rules apply cross-linguistically. PoS sequences thus seem to provide an appropriate level of abstraction for cross-lingual transfer.",
"cite_spans": [
{
"start": 109,
"end": 122,
"text": "Resnik, 2008;",
"ref_id": "BIBREF22"
},
{
"start": 123,
"end": 145,
"text": "McDonald et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 438,
"end": 439,
"text": "3",
"ref_id": null
},
{
"start": 928,
"end": 948,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Waste of Cross-Lingual Knowledge",
"sec_num": "2.2"
},
{
"text": "However, contrary to what this intuition suggests, the transfer of the ADJ-NOUN dependency often fails in practice. This is because the features underlying the high-level rules stated above are in fact order-dependent. Indeed, when parsing the French phrase, the parser configuration will be mainly described by the feature pair 's 0 =NOUN \u2227 n 0 =ADJ' (as question appears before suivante, it will be put on the stack first) while for the English phrase the relevant parser configuration would look like 's 0 =ADJ \u2227 n 0 =NOUN'. For lack of connecting these two situations, the parser has no way to predict the correct attachment in French using only English training instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Waste of Cross-Lingual Knowledge",
"sec_num": "2.2"
},
{
"text": "Experimentally, 5 and denoting UAS C 2 C 1 the percentage of C 1 tokens depending on a C 2 token that are correctly attached by the parser, while the English delexicalized model has a UAS NOUN ADJ of 91.1% on English, it drops down to 50.8% for French. This decrease results directly from the word order difference between French and English, as English adjectives are almost always preposed 6 while their position in French is less deterministic: in the French UD, 28% of the adjectives occur before their head noun and 72% after it. As a result, the UAS NOUN ADJ score on French actually decomposes as 96.8% for UAS NOUN preposed ADJ and 34.5% for UAS NOUN postposed ADJ . 7 These observations highlight the impact of word order on delexicalized transfer: attachment patterns are not robust to variations in word ordering. Note that transfer from French to English is much more successful, with a UAS NOUN ADJ of 80.5%. This is because the source language (here French) contains a sufficiently large number of preposed adjectives, which makes it possible to learn the patterns that are useful for English.",
"cite_spans": [
{
"start": 618,
"end": 622,
"text": "NOUN",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Waste of Cross-Lingual Knowledge",
"sec_num": "2.2"
},
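The per-pair score used throughout this section can be computed directly from gold and predicted head indices, as in the following sketch (function and variable names are ours, not the paper's):

```python
# Computing the per-pair score UAS^{C2}_{C1} defined above: among tokens
# tagged C1 whose gold head is tagged C2, the fraction whose predicted head
# matches the gold head. All names here are ours, not the paper's.

def pair_uas(pos, gold_heads, pred_heads, c1, c2):
    total = correct = 0
    for i, tag in enumerate(pos):
        head = gold_heads[i]                  # -1 marks the root
        if tag == c1 and head >= 0 and pos[head] == c2:
            total += 1
            correct += pred_heads[i] == head
    return 100.0 * correct / total if total else float("nan")

# e.g. pair_uas(pos, gold, pred, "ADJ", "NOUN") yields UAS^{NOUN}_{ADJ}
```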
{
"text": "The discrepancies in word order can have an even more dramatic effect when transferring parsers between languages in which adjectives have a fixed position. This, for instance, happens when the source is Bulgarian (almost only preposed adjectives) and the target is Hebrew (only postposed): the resulting UAS NOUN ADJ is as low as 28.7% (compared to an overall UAS of 60.1%). In the reverse direction, it drops down to 2.8% (UAS NOUN preposed ADJ of 0.7%, UAS NOUN postposed ADJ of 54.5%, with an overall UAS of 50.6%). 8 The impact of differences in word order on cross-lingual transfer is not limited to the attachment of adjectives. Consider, for instance, the English phrase the neighbor's car (DET NOUN PART NOUN) and its French translation la voiture du voisin (DET NOUN ADP NOUN). After attaching function words, all that remains for the parser to process is the bigram NOUN NOUN: while the English parser has been trained to predict a left dependency (car being the head of neighbor), for French it must predict a right dependency (voiture being the head of voisin). Here the discrepancy of genitives' position across languages does not involve unseen features, but still leads the model to predict a wrong dependency with high confidence.",
"cite_spans": [
{
"start": 429,
"end": 433,
"text": "NOUN",
"ref_id": null
},
{
"start": 520,
"end": 521,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Waste of Cross-Lingual Knowledge",
"sec_num": "2.2"
},
{
"text": "Our work aims at addressing such scenarios in which knowledge transfer is impeded by the word order of the source language. While current state-of-the-art models learn that 'an ADJ followed by a NOUN depends on that NOUN' and 'the first NOUN depends on the second NOUN', we would like them to transfer more abstract patterns such as 'ADJs depend on NOUNs' and 'genitives depend on NOUNs', leaving it up to the target side to decide which of both NOUNs plays the role of genitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Waste of Cross-Lingual Knowledge",
"sec_num": "2.2"
},
{
"text": "In this work, we propose to preprocess the source data before they are used to train a delexicalized parser, that will then be directly applied on target sentences. This preprocessing aims at making the source word sequences more similar to target sentences, with the goal to make the cross-lingual knowledge more accessible after transfer. The available information is the same before and after preprocessing (no dependency is ever added), but is presented at training time in a form that should make it more useful at test time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reshaping Training Instances",
"sec_num": "3.1"
},
{
"text": "In the following, we introduce two ways of generating such transformations, by removing or permuting tokens. The first approach uses a language model estimated on target PoS sequences to find the most similar word order between the source and target languages in a lattice containing local reorderings of the source sentence. The second strategy relies on a data bank of linguistic typological features, the WALS (Dryer and Haspelmath, 2013) , to generate a series of heuristic transformation operations.",
"cite_spans": [
{
"start": 413,
"end": 441,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reshaping Training Instances",
"sec_num": "3.1"
},
{
"text": "The problem of finding good reorderings of a source sentence is closely related to the problem of word (p)reordering in Statistical Machine Translation (SMT) (Bisazza and Federico, 2016) . However, where preordering aims to find an optimal (for SMT) permutation of source words for each source sentence, our objective is less ambitious, as we only intend to 'fix' a sufficiently large number of divergent patterns between the source and target languages, so as to increase the effectiveness of transfer at the model level.",
"cite_spans": [
{
"start": 158,
"end": 186,
"text": "(Bisazza and Federico, 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reshaping Training Instances",
"sec_num": "3.1"
},
{
"text": "Our first resource-light approach consists of two steps. We first generate a small subset of possible token permutations, compactly encoded in a finite-state graph. In our experiments, we consider all the permutations licensed by the MJ-2 reordering scheme (Kumar and Byrne, 2005) , which generates all possible local permutations within a window of three words. Machine Translation experiments have shown that the MJ-2 constraints capture lots of plausible reorderings (Dreyer et al., 2007) . In the context of cross-lingual transfer, its local nature allows to correct several important divergences in word order (e.g. the adjective-noun divergence described in \u00a7 2.2), while keeping the size of the reordering lattice polynomial with respect to the sentence length (Lopez, 2009) .",
"cite_spans": [
{
"start": 257,
"end": 280,
"text": "(Kumar and Byrne, 2005)",
"ref_id": "BIBREF10"
},
{
"start": 470,
"end": 491,
"text": "(Dreyer et al., 2007)",
"ref_id": "BIBREF5"
},
{
"start": 768,
"end": 781,
"text": "(Lopez, 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimally Reordering the Training Corpus with a Language Model",
"sec_num": "3.2"
},
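As a toy illustration of the permutation space, the sketch below naively enumerates the orders reachable by permuting disjoint windows of at most three tokens, one simple reading of the MJ-2 constraint; the actual scheme of Kumar and Byrne (2005) encodes these reorderings compactly as a polynomial-size lattice rather than enumerating them.

```python
# Naive enumeration of local reorderings (illustrative only; exponential).
from itertools import permutations

def mj2_permutations(seq):
    """All distinct orders reachable by permuting the tokens inside disjoint
    windows of at most three consecutive positions. `seq` is a tuple."""
    if not seq:
        return {()}
    out = set()
    for w in (1, 2, 3):                       # window of up to three words
        if w <= len(seq):
            for head in permutations(seq[:w]):
                for tail in mj2_permutations(seq[w:]):
                    out.add(head + tail)
    return out

# mj2_permutations(("DET", "ADJ", "NOUN")) contains, among others,
# ("DET", "NOUN", "ADJ"), the French-like order from Section 2.2.
```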
{
"text": "The permutation lattice is then searched for a reordering that (a) corresponds to a high probability target PoS sequence and (b) preserves the projectivity constraint. In practice, we first generate the lattice of MJ-2 reorderings, score it with a language model estimated on the target PoS sequences, and extract the 1,000-best sequences. After filtering non-projective trees, we retain the one-best sequence (if one projective tree exists), or the original sequence otherwise. We expect the word order of this transformed source to be very close to the word order of a typical target sentence. We then transform the gold dependency tree according to this permutation and use it to train a target-adapted model. This approach can be viewed as an extension of the data selection technique of S\u00f8gaard (2011) in which the delexicalized model is trained only on the source examples that are the most relevant for the target at hand. The similarity between the source and target languages is based on the similarity between their PoS sequences: experimentally, the author retains the 90% sentences with lowest perplexity according to a target PoS language model (PoSLM). We add here an extra degree of freedom by allowing changes in the word order of the source PoS sequence, rather than simply discarding sentences.",
"cite_spans": [
{
"start": 792,
"end": 806,
"text": "S\u00f8gaard (2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimally Reordering the Training Corpus with a Language Model",
"sec_num": "3.2"
},
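A sketch of this selection step, under stated assumptions: candidate orders come from an enumeration such as the MJ-2 sketch above, and `lm_logprob` is an assumed callable scoring a PoS sequence with the target 3-gram PoSLM (the paper works on a lattice and extracts the 1,000-best paths instead).

```python
def is_projective(heads):
    """heads[i] = index of token i's head, or -1 for the root."""
    for dep, head in enumerate(heads):
        if head < 0:
            continue
        for k in range(min(dep, head) + 1, max(dep, head)):
            a = k
            while a >= 0 and a != head:       # climb k's head chain
                a = heads[a]
            if a != head:                     # k is not dominated by head
                return False
    return True

def permute_heads(heads, order):
    """`order[new_i]` = old index; remap gold heads onto the new positions."""
    new_pos = {old: new for new, old in enumerate(order)}
    return [new_pos[heads[old]] if heads[old] >= 0 else -1 for old in order]

def pick_reordering(pos, heads, orders, lm_logprob, k=1000):
    """Keep the best-scoring candidate order whose permuted gold tree stays
    projective; otherwise fall back to the original order."""
    ranked = sorted(orders, key=lambda o: lm_logprob([pos[i] for i in o]),
                    reverse=True)[:k]
    for order in ranked:
        if is_projective(permute_heads(heads, order)):
            return order
    return tuple(range(len(pos)))
```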
{
"text": "Our second proposal takes advantage of the linguistic knowledge that is now available for many languages. We use here the WALS, which contains a series of linguistic features documenting 2,679 languages. Some of these features are of prime interest for our study, and express general properties related to word order. In this work we focus on the following seven features that describe whether some PoS classes exist in a language and their relative position (preposed or postposed to the noun, or no dominant order): 37A (definite articles), 38A (indefinite articles), 85A (order of adposition and noun), 86A (order of genitive and noun), 87A (order of adjective and noun), 88A (order of demonstrative and noun) and 89A (order of numeral and noun). 9 We first extract the relevant features for each language considered in our study, quantify their value and automatically transform them to relate to the raw PoS sequences found in our corpora. We extrapolate the order of DET and NOUN from feature 88A and identify the genitives mentioned by feature 86A as NOUNs or PROPNs depending on a NOUN. With an otherwise straightforward mapping, this results in the following set of properties: no definite DET, no indefinite DET (including the affix cases), and a precedence rate (denoted PR) of 0% (postposed), 50% (no dominant order) or 100% (preposed) for ADPs (resp. genitives, ADJs, DETs, NUMs) depending on a NOUN. 10 The 'No dominant order' feature value of WALS provides very useful quantitative information: contrary to the PoSLM-based approach, which puts hard constraints on each phenomenon by choosing a reordering even when several choices would be almost equally likely, WALS features indicate when and how to balance our transformed treebanks.",
"cite_spans": [
{
"start": 750,
"end": 751,
"text": "9",
"ref_id": null
},
{
"start": 1414,
"end": 1416,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting the Training Corpus with Rewrite Rules",
"sec_num": "3.3"
},
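The mapping from WALS values to these properties can be pictured as follows; the value labels are deliberately simplified stand-ins for the actual WALS category names, and "GEN" is our shorthand for the genitive NOUNs/PROPNs identified via feature 86A.

```python
# Sketch of turning the seven WALS features into the properties used by the
# rewrite rules. Value labels are simplified stand-ins for the real WALS
# category names; "GEN" denotes the genitives identified via feature 86A.

PR_OF = {"preposed": 100, "no dominant order": 50, "postposed": 0}

def language_properties(wals):
    """`wals` maps a feature id (e.g. "87A") to a simplified value string."""
    return {
        "has_definite_det": wals["37A"] != "none",
        "has_indefinite_det": wals["38A"] != "none",
        # precedence rates of each child class relative to the NOUN
        "pr": {child: PR_OF[wals[feat]]
               for feat, child in [("85A", "ADP"), ("86A", "GEN"),
                                   ("87A", "ADJ"), ("88A", "DET"),
                                   ("89A", "NUM")]},
    }
```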
{
"text": "By comparing two languages based on their feature values, it is then possible to define actionable transformation rules that remove or permute tokens and their associated subtrees. Table 1 lists the transformation rules derived from each pair of features. We preferred smooth transformations (with mean PR objectives and error margins) to prevent a full transformation of the corpus and a risk of information losses if the child position is less deterministic than expected. For instance, in the case of transfer from English (ADP-NOUN) to Japanese (NOUN-ADP) and according to the fourth transformation rule, we target a precedence rate of ADPs to NOUNs between 45% and 55%. This means that during source treebank traversal, while the precedence rate in previous sentences is above 55% (resp. below 45%), any encountered ADP-NOUN (resp. NOUN-ADP) bigram holding a dependency is switched, along with their dependents to preserve projectivity. According to first rule, for transfer to Czech (no definite article) from any source, all definite articles are systematically removed from source data. Target feature Transformation rule any no DEF-DET remove all definite DETs any no IND-DET remove all indefinite DETs PR = 0% PR \u2265 50% switch subtrees to reach PR = 50% (with 5% error margin) PR = 100% PR \u2264 50% switch subtrees to reach PR = 50% (with 5% error margin) PR = 50% PR = 100% switch subtrees to reach PR = 75% (with 5% error margin) PR = 50% PR = 0% switch subtrees to reach PR = 25% (with 5% error margin) For each sentence, we apply each rule on the whole sequence (and then iterate 3 times to capture recur-sive phenomena). Such heuristic rule application strategy is undoubtedly sensitive to the rule ordering, but we have not yet investigated this aspect and simply apply rules according to the lexicographic order of the child tag, breaking ties using word position. In comparison to the PoSLM-based approach, the WALS-based approach suffers from a lack of exhaustivity regarding word order; by design, less phenomena will be captured. However, since the objective is not the best possible reordering but only more compatible PoS sequences, exhaustivity is probably not a big issue. Besides, working with a discrete and reduced set of transformation operations gives us a better control on the rewriting of dependencies. It also allows us to use extra operations such as word deletion, a transformation that may be difficult to control in the approach described in \u00a7 3.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Adapting the Training Corpus with Rewrite Rules",
"sec_num": "3.3"
},
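The following sketch shows the precedence-rate logic of the switching rules; `swap_subtree` is a hypothetical helper (the actual subtree displacement, including index updates after a swap, is glossed over here).

```python
def child_head_pairs(pos, heads, child, head):
    """Positions (c, h) of dependents tagged `child` headed by a `head`."""
    return [(i, heads[i]) for i, t in enumerate(pos)
            if t == child and heads[i] >= 0 and pos[heads[i]] == head]

def swap_subtree(sent, c, h):
    """Hypothetical helper (omitted): moves token c and its whole subtree to
    the other side of its head h, preserving projectivity."""
    raise NotImplementedError

def apply_switch_rule(sentences, child, head, target_pr, margin=5.0):
    """Traverse the treebank; flip a child-head pair whenever the running
    precedence rate (PR) of `child` before `head` drifts outside the
    target margin, as in Table 1."""
    pre = post = 0                            # running precedence counts
    for sent in sentences:                    # sent.pos / sent.heads assumed
        for c, h in child_head_pairs(sent.pos, sent.heads, child, head):
            seen = pre + post
            rate = 100.0 * pre / seen if seen else target_pr
            if c < h and rate > target_pr + margin:
                swap_subtree(sent, c, h)      # preposed -> postposed
                post += 1
            elif c > h and rate < target_pr - margin:
                swap_subtree(sent, c, h)      # postposed -> preposed
                pre += 1
            else:
                pre += c < h
                post += c > h
```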
{
"text": "Altogether, this linguistically rich method presents a notable upside: since all the required information is available in WALS, it is readily usable for more than thousand languages. Provided that PoS tags can be generated for the target data to parse, no extra resource is required, while estimating a PoSLM requires a sufficiently large corpus of reliably PoS-tagged target data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source feature",
"sec_num": null
},
{
"text": "We evaluate our proposals on the Universal Dependencies corpus 11 (Nivre et al., 2016) and compare them with three baselines: (a) standard delexicalized transfer, (b) the data point selection method of S\u00f8gaard (2011) and (c) the weighted multi-source combination of Rosa and Zabokrtsky (2015) , that weights and combines the hypotheses of several delexicalized source models using KL cpos 3 (Kullback-Leibler divergence of coarse PoS trigram distributions) as a syntactic similarity metric between languages. We also include the UAS of KL cpos 3 multi-source combination built on top of our knowledge-based model.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 202,
"end": 216,
"text": "S\u00f8gaard (2011)",
"ref_id": "BIBREF19"
},
{
"start": 266,
"end": 292,
"text": "Rosa and Zabokrtsky (2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Experimental Setup",
"sec_num": "4"
},
{
"text": "In all our experiments, we consider 3-gram PoS language models estimated on the training sets of UD. The KL cpos 3 metric is estimated on the same PoS sequences. From WALS, we extract and use the 37A, 38A and 85A to 89A features. For some languages, this information was incomplete. We completed missing features with a majority vote of the languages of same genus if available in the database; otherwise (i.e. for ancient languages, all absent from WALS) we assumed that there were separate article tokens and that there was no dominant order for word order features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Experimental Setup",
"sec_num": "4"
},
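For reference, the KL_cpos^3 similarity of Rosa and Zabokrtsky (2015) can be sketched as the Kullback-Leibler divergence between coarse PoS trigram distributions; the smoothing below is our simplifying assumption, not necessarily theirs.

```python
# Sketch of KL_cpos^3: KL divergence between the coarse PoS trigram
# distributions of a target and a source treebank. Each sentence is a
# list of PoS tags; `eps` is an assumed smoothing constant.
import math
from collections import Counter

def trigram_dist(sentences):
    counts = Counter(t for sent in sentences
                     for t in zip(sent, sent[1:], sent[2:]))
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def kl_cpos3(target, source, eps=1e-9):
    p, q = trigram_dist(target), trigram_dist(source)
    return sum(pt * math.log(pt / q.get(t, eps)) for t, pt in p.items())
```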
{
"text": "For each component of the algorithms, we use the universal PoS tagset and gold PoS tags. While this scenario is probably unrealistic, it allows us to get a clearer picture of the net effect of a better syntactic knowledge transfer, since possible sources of discrepancies between languages (e.g. more or less noisy tag labels) have been removed. The parser is a greedy ARCEAGER transition-based parser trained with a dynamic oracle (Goldberg and Nivre, 2012) , an averaged perceptron classifier (Collins and Roark, 2004) and Zhang and Nivre (2011) 's feature templates (assuming fully delexicalized representations and unlabeled arcs). We use the PanParser implementation and all the code used in this work is available at https://perso.limsi.fr/aufrant/. Table 2 presents UAS results for the various transfer methods considered. As these experiments amount to 6,320 evaluated parsers, we provide the results in a compacted form as follows. For each target language, for mono-source transfer, we report the scores of the worst, median, best sources and the average score (or average gain) over all sources.",
"cite_spans": [
{
"start": 432,
"end": 458,
"text": "(Goldberg and Nivre, 2012)",
"ref_id": "BIBREF8"
},
{
"start": 495,
"end": 520,
"text": "(Collins and Roark, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 525,
"end": 547,
"text": "Zhang and Nivre (2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 756,
"end": 763,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments 4.1 Experimental Setup",
"sec_num": "4"
},
{
"text": "Overall, both preprocessing techniques outperform the direct transfer method of Zeman and Resnik (2008) as well as the selection strategy of S\u00f8gaard (2011) . The WALS-based rewriting approach yields higher improvements (+2.9 UAS on average) than the PoSLM-based reordering strategy (+2.3 UAS on average). Thanks to the variety and the large number of sources, the multi-source methods have here much higher accuracies, often better than the best source; even in this setting, using WALS provides us with a slight improvement over the baseline multi-source parser. Table 2 : UAS of the various mono-source and multi-source transfer methods, on each UD target language (using UD language codes). The first line reads as follows: for delexicalized transfer to Arabic, the worst, median and best sources yield UAS scores of 5.1, 43.2 and 56.9, and the average score over all 39 sources is 36.1, which the WALS-based method improves by 5.7 points.",
"cite_spans": [
{
"start": 80,
"end": 103,
"text": "Zeman and Resnik (2008)",
"ref_id": "BIBREF22"
},
{
"start": 141,
"end": 155,
"text": "S\u00f8gaard (2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 564,
"end": 571,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Our experiments also show that the selection baseline method does not perform as well on Universal Dependencies (Nivre et al., 2016) as it did on the CoNLL 2006 Shared Task. Those differences can be explained in two ways. First, we experiment with cleaner treebanks and benefit from the availability of unified tagsets and annotation schemes. This is in contrast with previous experiments, which were using a tagset mapping as a preprocessing step, making the net effect of data selection more difficult to single out and evaluate precisely. Second, the data selection method was primarily intended for distantly related languages, whereas the UD corpus now offers a wide language diversity and often a few good sources for which data size reduction is only detrimental.",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "In general, our methods do not improve the best source but have a large effect on bad and average sources, often turning them into competitive sources. This is particularly true with PoSLM reordering, which improves the worst sources by 8.1 points and degrades the best ones by 2.6 points. By contrast, the WALS-based method is more conservative and offers lower but more reliable improvements, which in average proves successful. Table 3 reports the average over some language families 12 of the UAS of the baseline, reordered and WALS-based mono-source models. It shows that accuracies of related sources are only marginally mod- ified when source sentences are transformed according to WALS, which could be expected as related languages share most of their typological features. On the contrary, large gains are obtained for distantly related languages. Such languages are typically poor sources in direct delexicalized transfer due to systematic labeling errors that mostly concern few frequent word classes (in correlation with their typological features). We have found that such errors can often be corrected by transforming the source sentences. With those errors handled, the now competitive sources can in turn contribute with valuable knowledge in multi-source settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We have also investigated the improvements made over the baseline by our best method, the WALSbased rewriting rules, by analyzing the gain in accuracy separately for various PoS. It appears that, in most cases, improvements mostly concern PoS classes covered by the WALS features. For instance, the issue mentioned in \u00a7 2.2 for the English-French pair is almost solved with source reordering: 90.4% of the postposed ADJs are correctly predicted by the WALS-based method (34.5% in the baseline), without any detrimental impact on the preposed ones. The same holds for the Hebrew-Bulgarian textbook case, where the UAS NOUN ADJ raises from 0.7% to 95.1%. We observe similar behaviors across the board for all the classes targeted by transformations: transfer from Czech to Danish had UAS NOUN preposed NOUN and UAS NOUN postposed NOUN scores of 2.8% and 78.4%, with WALS-based preprocessing they are respectively 61.1% and 80.4%. In Finnish-Arabic, scores of 6.3% and 30.9% on ADJs and ADPs become 65.8% and 61.4%, etc. In whole, 21% of the considered language pairs present very large gains (over +50 points) for at least one frequent tag pair (over 30 dependency occurrences in test data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Fine-grained Analysis",
"sec_num": "4.3"
},
{
"text": "Careful comparison of results for both PoSLM-based methods shows that reordering improves ADJs' attachment for instance, when data selection does not. This can be explained in two ways. First, if the source corpus contains a very limited number of preposed ADJs, even with perfect selection the ADJs in final data cannot be mostly preposed. Second, data selection mostly targets sequences very far from the target syntax: sentences that only disrespect a local preference of child position are less fluent, and consequently have a lower rank, than their hypothetical counterpart with switched positions; but they are not ungrammatical enough to be pushed into the 10% worst territory. On the other hand, the data transformation approach is not restricted to preexisting n-grams, and it directly confronts the given sequence with its counterpart to keep only the most fluent, thus acknowledging local preferences. These key differences are confirmed experimentally on English-French data: PoSLM-based reordering lowers the precedence rate of ADJs to NOUNs from 93% to 60%, while the rate varies by less than 1% in S\u00f8gaard (2011)'s approach, leaving the majority case adjectives still under-trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Fine-grained Analysis",
"sec_num": "4.3"
},
{
"text": "Finally, detailed analyses reveal that the PoSLM reordering approach has lower improvements than the WALS-based one on easy reorderings such as the nearly deterministic Hebrew-Bulgarian adjectives (UAS NOUN ADJ of 93.2% with WALS, versus 86.1% with the PoSLM). This suggests that the data-driven technique still wastes part of the available knowledge: indeed, the use of a probabilistic model to rank reorderings does not guarantee that any interesting reordering is in fact selected. Another advantage of the knowledge-rich approach is that the distribution of local word orderings is easier to control, since it explicitly regulates the balance between co-existing word orders. Indeed, when two structures are possible and fluent, the PoSLM-based method will always prefer the majority class. While the projectivity requirement generally softens this hard constraint by discarding many reordering candidates, the effect holds for instance on typologically close languages: during transfer from French (PR = 28% for ADJ-NOUN) to Italian (PR = 32%), PoSLM-based reorderings harden the preference down to PR = 16%, and end up under-training the ADJ-NOUN minority class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Fine-grained Analysis",
"sec_num": "4.3"
},
{
"text": "In spite of this, the PoSLM reordering is still competitive, since it covers more diverse phenomena. For instance, when transferring from English to Tamil, the UAS VERB AUX only raises from 31.4% to 35.0% with the WALS-based method, but achieves a nice 91.2% with the PoSLM. Such improvements are however less predictable and unexplained losses also occur, as for the UAS VERB AUX in Hungarian-Tamil (98.5% for the baseline and WALS, 66.4% with PoSLM reordering).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Fine-grained Analysis",
"sec_num": "4.3"
},
{
"text": "These results suggest that both approaches have their upsides and downsides, which remain to be combined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Fine-grained Analysis",
"sec_num": "4.3"
},
{
"text": "The observation that cross-lingual transfer works better with typologically close or related languages has been already made by many. Indeed, several works have already pointed out that unified annotation schemes cannot compensate for syntactic divergences between source and target languages and that reducing these divergences was likely to improve the performance of transfer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "When several sources are available, a natural approach is to give more weight to the instances observed in related languages, where relatedness can be measured either based on linguistic description (Berg-Kirkpatrick and Klein, 2010) or empirically (Cohen et al., 2011) . S\u00f8gaard (2011) follows similar intuitions but binarizes the weights to apply instance selection. Thus, the delexicalized model is trained only on the source examples that are the most relevant for the target at hand, using PoSLM perplexity as a relevance metric. Note that this strategy can be applied both in mono-source and multi-source settings.",
"cite_spans": [
{
"start": 199,
"end": 233,
"text": "(Berg-Kirkpatrick and Klein, 2010)",
"ref_id": "BIBREF1"
},
{
"start": 249,
"end": 269,
"text": "(Cohen et al., 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In Rosa and Zabokrtsky (2015) 's work, the syntactic similarity between languages is also based on the similarity between their PoS sequences. They show how the KL cpos 3 measure can be used to improve cross-lingual transfer either by selecting the best source language, or by weighting the source contribution to the output in a multi-source setting.",
"cite_spans": [
{
"start": 3,
"end": 29,
"text": "Rosa and Zabokrtsky (2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Both multi-source combination and data selection follow the same intuition that any source sentence or part of it can provide useful information on the syntax of the target language, even when the divergence between the source and the target is large. Indeed, a language is subject to many influences throughout its evolution and can borrow a phenomenon from a very distant language. This is for instance the case of Romanian, which belongs to the Romance family but has also strong Slavic influences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "As a result, both works aim at extracting useful knowledge even from poor sources, and our proposal can be viewed as an extension that pushes further this intuition, to a finer grain: Rosa and Zabokrtsky (2015) reward good source languages, S\u00f8gaard (2011) rewards target-relevant sentences, and our method rewards relevant local patterns, by performing a local reordering of target-irrelevant parse subtrees rather than ignoring the whole sentence. This has the effect of using the knowledge embedded in these subtrees as well as in the rest of the sentence more effectively. To see this, consider the case of transferring an English parser to a language in which no verb is labeled as auxiliary. 13 In this case, the method of (S\u00f8gaard, 2011) is likely to discard all the English sentences containing auxiliaries and the parser will hardly see, in training, sentences involving passive constructions or past participles; by contrast, methods based on data transformation would not remove the full sentence, but just the auxiliary -all the other dependencies, e.g. the by-agent, can still contribute to learning.",
"cite_spans": [
{
"start": 697,
"end": 699,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Thus, in comparison to previous works favoring close word orders at the cost of discarding some training examples or reducing source contribution, our method differs by improving cross-lingual transfer without knowledge loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In another line of work, Naseem et al. (2012) also distinguish the knowledge 'ADJs depend on NOUNs' from the ordering of both tokens, and use WALS to predict the latter. However, where our methods compensate for word order divergences at the data level, their work aims at abstracting the dependency prediction from word order, by designing a new parsing algorithm from scratch: their parser decomposes as a dependent selection component, shared among languages, and an ordering component that is specific to the target language. Even though it does not provide full order abstraction, our approach has the double advantage of wrapping any state-of-the-art parsing system, and allowing an extra degree of flexibility by manipulating the data, e.g. to handle PoS classes existing in only one language.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "Naseem et al. (2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The contribution of this work is twofold. First, we have updated earlier results on delexicalized crosslingual model transfer by reproducing them on the recent Universal Dependencies treebank. This collection of treebanks contains more languages than were previously available. Furthermore, the consistency of annotation schemes makes the analysis of results more reliable and enables to draw firmer conclusions. Second, based on a thorough analysis of the weaknesses of delexicalized transfer, we have proposed two strategies that aim to compensate for word sequence biases when transferring models across languages: a data-driven method using PoSLMs for reordering and a knowledge-based method exploiting heuristic rewrite rules extracted from WALS. The latter method proved to be the most effective of the two, with the additional benefit of being entirely resource-free and thus readily usable for the over thousand languages whose word order is specified in WALS. For the frequent PoS classes targeted by this method, we were able to obtain huge improvements, often 30 and up to 90 points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "A first natural continuation of this work will be to complete our repertoire of preprocessing rules with article insertions, PoS substitutions and patterns involving verbs, which were not considered so far. Another direction we would like to investigate is to contrast our techniques with annotation projection, which is another way to compensate for word order biases in cross-lingual transfer: by analyzing the pros and cons of each method we might find ways to combine them so that we can also use parallel data when available. We finally also aim at generalizing our WALS approach to other order-dependent tasks. Indeed, from a higher-level point of view, the aforementioned issues are not specific to dependency parsing, but occur theoretically with all data-driven NLP methods: however general it is, the linguistic knowledge is always only available as instantiated on a given word sequence and through the proxy of a particular data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This work is licenced under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For lexicalized parsers: the word forms and the PoS, for delexicalized: only the PoS. 2 Graph-based parsing with standard feature templates is slightly less order-dependent, since the classification task and the features of the candidate dependent and head are already abstracted from the linear sequence. However, many features, related for instance to words located between these two tokens, remain sensitive to word order and our statement still holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The issues described in this section are at least partially solved by transfer with annotation projection but these techniques require parallel data that are not always available.4 Keeping the original order (la suivante question) would be wrong in French.5 Our experimental data and protocols are presented in Section 4.6 In the English UD corpus, 93% of the adjectives come before the noun they depend on.7 This observation is consistent with the English monolingual scores (93.2% for the UAS NOUN preposed ADJ majority case, and 55.0% for the much rarer UAS NOUN postposed ADJ case). 8 Source data quality cannot be the only cause of such poor results: when delexicalized models apply monolingually, UAS NOUN ADJ is 97.4% in Bulgarian and 88.4% in Hebrew.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We do not consider here features (81A, 82A, etc.) describing the relative position of a head VERB and its dependents. Their use would require us to condition our preprocessing patterns on labeled dependency relationship in the source, a task we leave for future work.10 For ADPs, and for resilience to annotation inconsistencies, we also include ADPs that are heads of NOUNs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We consider the version 1.3 of the UD treebank. In order to present only fair sources, for languages where several treebanks are available, we retain only the main treebank. This is the case for the following languages: cs, en, es, fi, grc, la, nl, pt, ru, sl and sv.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While the considered ancient languages belong to some of those families, we chose to gather them into a separate category, since they rely on the same WALS completion heuristic, instead of their actual typological features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the UD treebank this is, for instance, the case for Greek, Latvian or Galician.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partly funded by the French Direction g\u00e9n\u00e9rale de l'armement and by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 645452 (QT21). We thank the anonymous reviewers for their detailed and inspiring comments on the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "PanParser: a Modular Implementation for Efficient Transition-Based Dependency Parsing",
"authors": [
{
"first": "Lauriane",
"middle": [],
"last": "Aufrant",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauriane Aufrant and Guillaume Wisniewski. 2016. PanParser: a Modular Implementation for Efficient Transition-Based Dependency Parsing. Technical report, LIMSI-CNRS, March.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Phylogenetic grammar induction",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "The 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1288--1297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Berg-Kirkpatrick and Dan Klein. 2010. Phylogenetic grammar induction. In The 48th Annual Meeting of the Association for Computational Linguistics, pages 1288-1297, Uppsala, Sweden.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena",
"authors": [
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "2",
"pages": "163--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arianna Bisazza and Marcello Federico. 2016. A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena. Computational Linguistics, 42(2):163-205.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised Structure Prediction with Non-Parallel Multilingual Guidance",
"authors": [
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP 2011, the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "50--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised Structure Prediction with Non-Parallel Multilingual Guidance. In Proceedings of EMNLP 2011, the Conference on Empirical Methods in Natural Language Processing, pages 50-61, Edinburgh, Scotland, UK., July.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics, ACL '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics, ACL '04.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Comparing reordering constraints for smt using efficient bleu oracle computation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer, Keith Hall, and Sanjeev Khudanpur. 2007. Comparing reordering constraints for smt using efficient bleu oracle computation. In Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation, pages 103-110, Rochester, New York, April. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning a part-of-speech tagger from two hours of annotation",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Garrette and Jason Baldridge. 2013. Learning a part-of-speech tagger from two hours of annotation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 138-147, Atlanta, Georgia.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Dynamic Oracle for Arc-Eager Dependency Parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012, the International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "959--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A Dynamic Oracle for Arc-Eager Dependency Parsing. In Proceedings of COLING 2012, the International Conference on Computational Linguistics, pages 959-976, Bombay, India.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bootstrapping Parsers via Syntactic Projection accross Parallel Texts",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Weinberg",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cabezas",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Kolak",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural language engineering",
"volume": "11",
"issue": "",
"pages": "311--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Hwa, Philip Resnik, A.Weinberg, C. Cabezas, and O. Kolak. 2005. Bootstrapping Parsers via Syntactic Projection accross Parallel Texts. Natural language engineering, 11:311-325.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Local phrase reordering models for statistical machine translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "161--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar and William Byrne. 2005. Local phrase reordering models for statistical machine translation. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 161-168, Vancouver, British Columbia, Canada.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Frustratingly Easy Cross-Lingual Transfer for Transition-Based Dependency Parsing",
"authors": [
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Lauriane",
"middle": [],
"last": "Aufrant",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1058--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oph\u00e9lie Lacroix, Lauriane Aufrant, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2016. Frustratingly Easy Cross- Lingual Transfer for Transition-Based Dependency Parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1058-1063.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Translation as weighted deduction",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL '09",
"volume": "",
"issue": "",
"pages": "532--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Lopez. 2009. Translation as weighted deduction. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL '09, pages 532-540, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Universal Dependency Annotation for Multilingual Parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Bedini",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Bertomeu Castell\u00f3",
"suffix": ""
},
{
"first": "Jungmee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL 2013, the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Universal Dependency Annotation for Multilingual Parsing. In Proceedings of ACL 2013, the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92-97, Sofia, Bulgaria, August.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Selective sharing for multilingual dependency parsing",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "629--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers -Volume 1, ACL '12, pages 629-637.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513-553.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A universal part-of-speech tagset",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Nicoletta Cal- zolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Ugur Dogan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Eight International Conference on Lan- guage Resources and Evaluation (LREC'12), Istanbul, Turkey, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "KL cpos 3 -a Language Similarity Measure for Delexicalized Parser Transfer",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Zdenek",
"middle": [],
"last": "Zabokrtsky",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "243--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolf Rosa and Zdenek Zabokrtsky. 2015. KL cpos 3 -a Language Similarity Measure for Delexicalized Parser Transfer. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 243-249, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Data point selection for cross-language adaptation of dependency parsers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL 2011, the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "682--686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In Proceedings of ACL 2011, the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 682-686, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Treebank translation for cross-lingual parser induction",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "130--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann,\u017deljko Agi\u0107, and Joakim Nivre. 2014. Treebank translation for cross-lingual parser induction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning (CoNLL 2014), pages 130-140, Ann Arbor, Michigan.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Ngai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL '01",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky and Grace Ngai. 2001. Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL '01, pages 1-8, Stroudsburg, PA, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Cross-Language Parser Adaptation between Related Languages",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages",
"volume": "",
"issue": "",
"pages": "35--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman and Philip Resnik. 2008. Cross-Language Parser Adaptation between Related Languages. In Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages, pages 35-42, Hyderabad, India, January. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Transition-based dependency parsing with rich non-local features",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "188--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Pro- ceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies, pages 188-193, Portland, Oregon, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"uris": null,
"text": "An order-sensitive sequence of transitions computing a dependency tree.",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "Transformation rules extracted from the comparison of the feature values of a language pair. All other feature pairs result in a no-op.",
"content": "<table/>",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "65.6 67.2 60.4 60.4 61.7 63.1 63.5 63.0 46.4 50.8 52.5 54.1 52.1 52.9 56.7 56.5 54.9 Germanic 61.2 63.5 65.8 65.9 63.1 65.8 61.3 62.2 63.2 57.2 58.6 58.5 41.2 48.2 49.8 54.5 57.1 56.7 Slavic 63.5 61.7 66.0 63.8 60.5 64.3 72.6 68.4 71.8 53.2 57.0 58.4 54.7 53.6 56.8 59.0 59.2 60.1 Finno-Ugric 46.3 51.9 52.3 57.1 56.2 57.6 53.8 58.6 56.9 64.1 63.0 64.2 30.0 43.6 41.5 50.8 55.7 56.1 Semitic 54.1 54.2 54.1 40.6 48.2 51.1 42.5 54.6 56.1 30.8 41.2 44.1 55.4 55.6 54.8 53.7 55.9 54.4",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Target language</td><td/></tr><tr><td/><td>Romance</td><td>Germanic</td><td>Slavic</td><td>Finno-Ugric</td><td>Semitic</td><td>Ancient</td></tr><tr><td>Source language</td><td colspan=\"6\">Romance 67.1 Ancient 56.1 49.2 55.9 56.7 51.5 56.1 60.9 57.5 60.6 52.2 54.9 56.0 51.1 47.0 50.6 62.7 60.0 62.6</td></tr></table>",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Delexicalized, PoSLM-based reordered and WALS-based UAS aggregated over language family</td></tr><tr><td>pairs.</td></tr><tr><td>The first column reads as follows: the average UAS over all pairs of two Romance languages is 67.1</td></tr><tr><td>for mono-source delexicalized transfer; it is slightly improved (67.2) by the WALS-based method. Over</td></tr><tr><td>all pairs of a Germanic source and a Romance target, the average mono-source UAS raises from 61.2</td></tr><tr><td>(delexicalized baseline) to 63.5 (PoSLM-based reordering) and 65.8 (WALS-based rules).</td></tr></table>",
"html": null
}
}
}
}