|
{ |
|
"paper_id": "Q16-1022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:06:52.160832Z" |
|
}, |
|
"title": "Multilingual Projection for Parsing Truly Low-Resource Language\u0161", |
|
"authors": [ |
|
{ |
|
"first": "Zeljko", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Johannsen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "\u2665", |
|
"middle": [ |
|
"\u2663" |
|
], |
|
"last": "H\u00e9ctor Mart\u00ednez", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Natalie", |
|
"middle": [], |
|
"last": "Schluter", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": { |
|
"country": "Denmark" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose a novel approach to cross-lingual part-of-speech tagging and dependency parsing for truly low-resource languages. Our annotation projection-based approach yields tagging and parsing models for over 100 languages. All that is needed are freely available parallel texts, and taggers and parsers for resource-rich languages. The empirical evaluation across 30 test languages shows that our method consistently provides top-level accuracies, close to established upper bounds, and outperforms several competitive baselines.", |
|
"pdf_parse": { |
|
"paper_id": "Q16-1022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose a novel approach to cross-lingual part-of-speech tagging and dependency parsing for truly low-resource languages. Our annotation projection-based approach yields tagging and parsing models for over 100 languages. All that is needed are freely available parallel texts, and taggers and parsers for resource-rich languages. The empirical evaluation across 30 test languages shows that our method consistently provides top-level accuracies, close to established upper bounds, and outperforms several competitive baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "State-of-the-art approaches to inducing part-ofspeech (POS) taggers and dependency parsers only scale to a small fraction of the world's \u223c6,900 languages. The major bottleneck is the lack of manually annotated resources for the vast majority of these languages, including languages spoken by millions, such as Marathi (73m), Hausa (50m), and Kurdish (30m). Cross-lingual transfer learning-or simply cross-lingual learning-refers to work on using annotated resources in other (source) languages to induce models for such low-resource (target) languages. Even simple cross-lingual learning techniques outperform unsupervised grammar induction by a large margin.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most work in cross-lingual learning, however, makes assumptions about the availability of linguistic resources that do not hold for the majority of low-resource languages. The best cross-lingual dependency parsing results reported to date were pre-sented by Rasooli and Collins (2015) . They use the intersection of languages covered in the Google dependency treebanks project and those contained in the Europarl corpus. Consequently, they only consider closely related Indo-European languages for which high-quality tokenization can be obtained with simple heuristics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 284, |
|
"text": "Rasooli and Collins (2015)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In other words, we argue that recent approaches to cross-lingual POS tagging and dependency parsing are biased toward Indo-European languages, in particular the Germanic and Romance families. The bias is not hard to explain: treebanks, as well as large volumes of parallel data, are readily available for many Germanic and Romance languages. Several factors make cross-lingual learning between these languages easier: (i) We have large volumes of relatively representative, translated texts available for all language pairs; (ii) It is relatively easy to segment and tokenize Germanic and Romance texts; (iii) These languages all have very similar word order, making the alignments much more reliable. Therefore, it is more straightforward to train and evaluate cross-lingual transfer models for these languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, this bias means that we possibly overestimate the potential of cross-lingual learning for truly low-resource languages, i.e., languages with no supporting tools or resources for segmentation, POS tagging, or dependency parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The aim of this work is to experiment with crosslingual learning via annotation projection, making minimal assumptions about the available linguistic resources. We only want to assume what we can in fact assume for truly low-resource languages. Thus, for the target languages, we do not assume the avail-ability of any labeled data, tag dictionaries, typological information, etc. For annotation projection, we need a parallel corpus, and we therefore have to rely on resources such as the Bible (parts of which are available in 1,646 languages), and publications from the Watchtower Society (up to 583 languages). These texts have the advantage of being translated both conservatively and into hundreds of languages (massively multi-parallel). However, the Bible and the Watchtower are religious texts and are more biased than the corpora that have been assumed to be available in most previous work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to induce high-quality cross-lingual transfer models from noisy and very limited data, we exploit the fact that the available resources are massively multi-parallel. We also present a novel multilingual approach to the projection of dependency structures, projecting edge weights (rather than edges) via word alignments from multiple sources (rather than a single source). Our approach enables us to project more information than previous approaches: (i) by postponing dependency tree decoding to after the projection, and (ii) by exploiting multiple information sources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions are as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(i) We present the first results on cross-lingual learning of POS taggers and dependency parsers, assuming only linguistic resources that are available for most of the world's written languages, specifically, Bible excerpts and translations of the Watchtower.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(ii) We extend annotation projection of syntactic dependencies across parallel text to the multisource scenario, introducing a new, heuristicsfree projection algorithm that projects weight matrices from multiple sources, rather than dependency trees or individual dependencies from a single source.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(iii) We show that our approach performs significantly better than commonly used heuristics for annotation projection, as well as than delexicalized transfer baselines. Moreover, in comparison to these systems, our approach performs particularly well on truly low-resource non-Indo-European languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "All code and data are made freely available for general use. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Motivation Our approach is based on the general idea of annotation projection (Yarowsky et al., 2001 ) using parallel sentences. The goal is to augment an unannotated target sentence with syntactic annotations projected from one or more source sentences through word alignments. The principle is illustrated in Figure 1 , where the source languages are German and Croatian, and the target is English.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 100, |
|
"text": "(Yarowsky et al., 2001", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 319, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weighted annotation projection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The simplest case is projecting POS labels, which are observed in the source sentences but unknown in the target language. In order to induce the grammatical category of the target word beginning, we project POS from the aligned words Anfang and po\u010detku, both of which are correctly annotated as NOUN. Projected POS labels from several sources might disagree for various reasons, e.g., erroneous source annotations, incorrect word alignments, or legitimate differences in POS between translation equivalents. We resolve such cases by taking a majority vote, weighted by the alignment confidences. By letting several languages vote on the correct tag of each word, our projections become more robust, less sensitive to the noise in our source-side predictions and word alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weighted annotation projection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We can also project syntactic dependencies across word alignments. If (u s , v s ) is a dependency edge in a source sentence, say the ingoing dependency from das to Wort, u s (Wort) is aligned to u t (word), and v s (das) is aligned to v t (the), we can project the dependency such that (u t , v t ) becomes a dependency edge in the target sentence, making the a dependent of word. Obviously, dependency annotation projection is more challenging than projecting POS, as there is a structural constraint: the projected edges must form a dependency tree on the target side. Hwa et al. (2005) were the first to consider this problem, applying heuristics to ensure well-formed trees on the target side. The heuristics were not perfect, as they have been shown to result in excessive non-projectivity and the introduction of spurious relations and tokens Tiedemann, 2014) . These design choices all lead to di- Figure 1 : An outline of dependency annotation projection, voting, and decoding in our method, using two sources i (German) and j (Croatian) and a target t (English). Part 1 represents the multi-parallel corpus preprocessing, while parts 2 and 3 relate to our projection method. The graphs are represented as adjacency matrices with column indices encoding dependency heads. We highlight how the weight of target edge (u t = was, v t = beginning) is computed from the two contributing sources. minished parsing quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 572, |
|
"end": 589, |
|
"text": "Hwa et al. (2005)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 850, |
|
"end": 866, |
|
"text": "Tiedemann, 2014)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 906, |
|
"end": 914, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weighted annotation projection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We introduce a heuristics-free projection algorithm. The key difference from most previous work is that we project the whole set of potential syntactic relations with associated weights-rather than binary dependency edges-from a large number of multiple sources. Instead of decoding the best tree on the source side-or for a single source-target sentence pair-we project weights prior to decoding, only decoding the aggregated multi-source weight matrix after the individual projections are done. This means that we do not lose potentially relevant information, but rather project dense information about all candidate edges.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weighted annotation projection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We assume the existence of n source languages and a target language t. For each tuple of translations in our multi-parallel corpus, our algorithm projects syntactic annotations from the n source sentences to the target sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Projection happens at the sentence-level, taking a tuple of n annotated sentences and an unannotated sentence as input. We formalize the projection step as label propagation in a graph structure where the words of the target and source sentences are vertices, while edges represent dependency edge candidates between words within a sentence (a parse), as well as similarity relations between words of sentences in different languages (word alignments).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Formally, a projection graph is a graph G = (V, E). All edges are weighted by the function w e : E \u2192 R. The vertices can be decomposed into", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "sets V = V 0 \u222a \u2022 \u2022 \u2022 \u222a V n ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where V i is the set of words in sentence i.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We often need to identify the target sentence V t = V 0 and the source sentences ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "V s = V 1 \u222a \u2022 \u2022 \u2022 \u222a V n sep- arately. Edges between V s and V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A = (V s , V t , E A ), i.e., the subgraph of G induced by all (alignment) edges, E A , connecting V s and V t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The subgraph induced by the set of vertices V i , written as G[V i ], represents the dependency edge candidates between the words of the sentence i. In general these subgraphs are dense, i.e., they encode weight matrices of edge scores and not just the single best parse. For the source sentences, we assume that the weights are provided by a parser, while the weights for the syntactic relations of the target sentence are unknown.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "With the above definitions, the dependency projection problem amounts to assigning weights to the edges of G[V t ] by transferring the syntactic parse graphs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "G[V 1 ], . . . , G[V n ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "from the source languages through the alignments A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-source sentence graph", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Our annotation projection for POS tagging is similar to the one proposed by Agi\u0107 et al. (2015) . The algorithm is presented in Algorithm 1. We first introduce a conditional probability distribution p(l|v) Algorithm 1: Project POS tags Data:", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 94, |
|
"text": "Agi\u0107 et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-speech projection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A projection graph G = (V s \u222a V t , E); a set of POS labels L; a function p(l|v) assigning probabilities to labels l for word vertices v. Result: A labeling of V t p \u2190 empty probability table label \u2190 empty label-to-vertex mapping for v t \u2208 V t do for l \u2208 L d\u00f5 p(l|v t ) \u2190 vs\u2208Vs p(l|v s )w a (v s , v t ) label(v t ) \u2190 arg max lp (l|v t ) return label Algorithm 2: Project dependencies Data: A projection graph G = (V s \u222a V t , E).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-speech projection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Result: A dependency tree covering the target", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-speech projection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "vertices V t . if project from trees then for i=1 to n do G[V i ] \u2190 DMST(G[V i ]) for (u t , v t ) \u2208 G[V t ] do w e (u t , v t ) \u2190 \u2212\u221e if (\u2022, u t ) or (\u2022, v t ) / \u2208 E A then continue w e (u t , v t ) \u2190 n i=1 max us,vs\u2208V i w e (u s , v s ) w a (u s , u t ) w a (v s , v t ) G[V t ] \u2190 normalize(G[V t ]) return DMST(G[V t ])", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-speech projection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "over POS tags l \u2208 L for each vertex v in the graph. For all source vertices, the probability distributions are obtained by tagging the corresponding sentences in our multilingual corpus with POS taggers, assigning a probability of one to the best tag for each word, and zero for all other tags. For each target token, i.e., each vertex v, the projection works by gathering evidence for each tag from all source tokens aligned to v, weighted by the alignment score:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-speech projection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "p(l|v t ) \u221d vs\u2208Vs p(l|v s ) w a (v s , v t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-speech projection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The projected tag for a target vertex v t is then arg max l p(l|v t ). When both the alignment weights and the source tag probabilities are in {0, 1}, this reduces to a simple voting scheme that assigns the most frequent POS tag among the aligned words to each target word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-speech projection", |
|
"sec_num": "2.2" |
|
}, |
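{

"text": "The weighted vote described above is straightforward to implement. The following is a minimal Python sketch of the voting step (illustrative only; the function and variable names are ours and do not come from the released code):\n\nfrom collections import defaultdict\n\ndef project_pos(target_len, source_tags, alignments):\n    # source_tags: one list of predicted POS tags per source sentence\n    # alignments: one dict {(source_idx, target_idx): weight} per source sentence\n    votes = [defaultdict(float) for _ in range(target_len)]\n    for tags, align in zip(source_tags, alignments):\n        for (s_idx, t_idx), w in align.items():\n            # p(l|v_s) is 1.0 for the single predicted tag, so each vote is just the alignment weight\n            votes[t_idx][tags[s_idx]] += w\n    # arg max_l over the accumulated, alignment-weighted votes; None marks unaligned target tokens\n    return [max(v, key=v.get) if v else None for v in votes]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Part-of-speech projection",

"sec_num": "2.2"

},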
|
{ |
|
"text": "While in POS projection, we project vertex labels, in dependency projection we project edge scores. Our procedure for dependency annotation projection is given in Algorithm 2. For each source language, we parse the corresponding side of our multi-parallel corpus using a dependency parser trained on the source language treebank. However, instead of decoding to dependency trees, we extract the weights for all potential syntactic relations, importing them into G as edge weights.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The parser we use in our experiments assigns scores w e \u2208 R to possible edges. Since the ranges and values of these scores are dependent on the training set size and the number of model updates, we standardize the scores to make them comparable across languages. Standardization centers the scores around zero with a standard deviation of one by subtracting the mean and dividing by the standard deviation. We apply this normalization per sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
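{

"text": "For illustration, the per-sentence standardization can be sketched as follows (assuming the arc scores of one sentence are held in a NumPy matrix; not the authors' implementation):\n\nimport numpy as np\n\ndef standardize(scores):\n    # scores: matrix of raw arc scores for one sentence (rows: candidate heads, columns: dependents)\n    return (scores - scores.mean()) / scores.std()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency projection",

"sec_num": "2.3"

},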
|
{ |
|
"text": "Scores are then projected from source edges to target edges via word alignments w a \u2208 [0, 1]. Instead of voting among the incoming projections from multiple sources, we sum the projected edge scores. Because alignments vary in quality, we scale the score of the projected source edge by the corresponding alignment probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A target edge", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(u t , v t ) \u2208 G[V t ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "can originate from multiple source edges even from a single source sentence, due to m : n alignments. In such cases, we only project the source edge", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(u s , v s ) \u2208 G[V i>0 ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "with the maximum score, provided the words are aligned, i.e., (u s , u t ) and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(v s , v t ) \u2208 E A .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In the case of a single source sentence pair, the target edge scores are set as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "w e (u t , v t ) \u2190 max us,vs\u2208V i edge w e (u s , v s ) alignment w a (u s , u t ) w a (v s , v t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We note the distinction between edge weights w e and alignment weights w a . With multiple sources, the target edge scores w e (u t , v t ) are computed as a sum over the individual sources:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "n i=1 max us,vs\u2208V i w e (u s , v s ) w a (u s , u t ) w a (v s , v t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
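{

"text": "A minimal sketch of this multi-source projection of edge scores (our own illustrative code, not the released implementation; it assumes one standardized score matrix and one alignment dictionary per source, with index 0 reserved for the artificial root):\n\nimport numpy as np\n\ndef project_edge_scores(target_len, source_scores, alignments):\n    # source_scores[i]: (m_i+1) x (m_i+1) standardized arc-score matrix for source i; entry [h, d] scores head h for dependent d\n    # alignments[i]: dict {(source_idx, target_idx): weight} with 1-based word indices\n    n = target_len + 1\n    summed = np.zeros((n, n))\n    seen = np.zeros((n, n), dtype=bool)\n    for scores, align in zip(source_scores, alignments):\n        align = dict(align)\n        align[(0, 0)] = 1.0  # the artificial roots are treated as trivially aligned\n        best = np.full((n, n), -np.inf)\n        for (us, ut), w_u in align.items():\n            for (vs, vt), w_v in align.items():\n                if us == vs or ut == vt:\n                    continue\n                # candidate source edge (u_s, v_s), scaled by both alignment weights\n                cand = scores[us, vs] * w_u * w_v\n                best[ut, vt] = max(best[ut, vt], cand)\n        aligned = best > -np.inf\n        summed[aligned] += best[aligned]  # sum the per-source maxima\n        seen |= aligned\n    summed[~seen] = -np.inf  # edges with no projected evidence keep a minus-infinity score\n    return summed",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency projection",

"sec_num": "2.3"

},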
|
{ |
|
"text": "After projection we have a dense set of weighted edges in the target sentence representing possible syntactic relations. This structure is equivalent to the n \u00d7 n edge matrix used in ordinary first-order graph-based dependency parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Before decoding, the weights are softmaxnormalized to form a distribution over each possible head decision. The normalization balances out the contributions of the individual head decisions; and in our development setup, we found that omitting this step resulted in a substantial (\u223c10%) decrease in parsing performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
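{

"text": "As a small illustration, the per-dependent softmax normalization can be sketched like this (assuming a NumPy matrix whose entry [h, d] holds the projected weight of head h for dependent d, and that every dependent has at least one candidate head with a finite score):\n\nimport numpy as np\n\ndef softmax_per_dependent(scores):\n    # each column (the head decision for one dependent) becomes a probability distribution\n    shifted = scores - scores.max(axis=0, keepdims=True)  # numerical stability\n    exp = np.exp(shifted)\n    return exp / exp.sum(axis=0, keepdims=True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency projection",

"sec_num": "2.3"

},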
|
{ |
|
"text": "We then follow McDonald et al. (2005) in using directed maximum spanning tree (DMST) decoding to identify the best dependency tree in the matrix. We note that DMST decoding on summed projected weight matrices is similar to the idea of re-parsing with DMST decoding of the output on an ensemble of parsers (Sagae and Lavie, 2006 ), which we use as a baseline in our experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 37, |
|
"text": "McDonald et al. (2005)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 327, |
|
"text": "(Sagae and Lavie, 2006", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency projection", |
|
"sec_num": "2.3" |
|
}, |
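{

"text": "The decoding step itself can be delegated to any Chu-Liu/Edmonds implementation; a sketch using networkx (an implementation choice of ours, not a description of the paper's setup) could look as follows:\n\nimport networkx as nx\n\ndef dmst_decode(weights):\n    # weights: (n+1) x (n+1) matrix of normalized edge scores; node 0 is the artificial root\n    g = nx.DiGraph()\n    n = weights.shape[0]\n    for head in range(n):\n        for dep in range(1, n):  # no edges into node 0, so the root must head the arborescence\n            if head != dep:\n                g.add_edge(head, dep, weight=float(weights[head][dep]))\n    tree = nx.maximum_spanning_arborescence(g)  # Chu-Liu/Edmonds\n    return {dep: head for head, dep in tree.edges()}  # dependent -> head map",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency projection",

"sec_num": "2.3"

},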
|
{ |
|
"text": "We use source treebanks from the Universal Dependencies (UD) project, version 1.2 (Nivre et al., 2015 ). 2 They are harmonized in terms of POS tag inventory (17 tags) and dependency annotation scheme. In our experiments, we use the canonical data splits, and disregard lemmas, morphological features and alternative POS from all treebanks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "(Nivre et al., 2015", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and test sets", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Out of the 33 languages currently in UD1.2, we drop languages for which the treebank does not distribute word forms (Japanese), and languages for which we have no parallel unlabeled data (Latin, Ancient Greek, Old Church Slavonic, Irish, Gothic). Languages with more than 60k tokens (in the training data) are considered source languages, the remaining 6 smaller treebanks (Estonian, Greek, Hungarian, Latin, Romanian, Tamil) are strictly considered targets. This results in 22 treebanks for training source taggers and parsers. We use two additional test sets: Quechua and Serbian. The first one does not entirely adhere to UD, but we provide a POS tagset mapping and a few modifications and include it as a test language to deepen the robustness assessment for our approach across language families. The Serbian test set fully conforms to UD, as a fork of the closely related Croatian UD dataset. 3 This results in a total of 30 target languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 899, |
|
"end": 900, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and test sets", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use two sources of massively parallel text. The first is the Edinburgh Bible Corpus (EBC) collected by Christodouloupoulos and Steedman (2014) containing 100 languages. EBC has either 30k or 10k sentences for each language, depending on whether they are made up of full Bibles or just translations of the New Testament, respectively. We also crawled and scraped the Watchtower Online Library website to collect what we will refer to as the Watchtower Corpus (WTC). 4 The data is from 2002-2016 and the final corpus contains 135 languages with sentences in the range of 26k-145k. While some EBC Bibles are written in dated language, we do not make any modifications to the corpus if the language is also present in WTC. However, as Basque is not represented in WTC, we replace the Basque Bible from 1571 with a contemporary version from 2004, to enable the use of Basque in the parsing experiments. 5 EBC and WTC both consist of religious texts, but they are very different in terms of style and content. If we examine Table 1 that shows the most frequent words per corpus, we observe that the English Bible-the King James Version from 1611contains many Old English verb forms (\"hath\", \"giveth\"). In contrast, the English Watchtower is written in contemporary English, both in terms of verb inflection (\"does\", \"says\") and vocabulary (\"today\", \"human\"). WTC also deals with contemporary topics such as blood \"transfusion\" (36 mentions) and \"computer\" (42 mentions).", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 145, |
|
"text": "Christodouloupoulos and Steedman (2014)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 469, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 901, |
|
"end": 902, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-parallel corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The other languages also show differences in terms of language modernity and dialectal difference between EBC and WTC. While each Bible translation has its individual history, Watchtower transla-EBC: hath, saith, hast, spake, yea, cometh, iniquity, wilt, smote, shew, begat, doth, lo, hearken, thence, verily, neighbour, goeth, shewed, giveth, smite, didst, wherewith, knoweth, night WTC: bible, does, however, says, today, during, show, human, later, important, really, humans, meetings, personal, states, future, fact, relationship, result, attention, someone, century, attitude, article, different tions are commissioned by the same publisher, following established editorial criteria. Thus, we not only expect Watchtower to yield projected treebanks that are closer to contemporary language, but also more reliable alignments. We expect these properties to make WTC a more suitable parallel corpus for our experiments and for bootstrapping treebanks for new languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-parallel corpora", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Segmentation For the multi-parallel corpora, we apply naive sentence splitting using full-stops, question marks and exclamation points of the alphabets from our corpora. We have collected these trigger symbols from the corpora, provided that they appeared as individual tokens at the ends of lines, and belonged to the \"Punctuation, Other\" Unicode category. After sentence splitting, we use naive whitespace tokenization. 6 We also remove short-vowel diacritics from all corpora written in Arabic script.", |
|
"cite_spans": [ |
|
{ |
|
"start": 422, |
|
"end": 423, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.3" |
|
}, |
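{

"text": "A rough sketch of this naive preprocessing (the trigger set below is a stand-in for illustration; the actual symbols were harvested from the corpora as described):\n\n# Stand-in set of sentence-final trigger symbols; the real set is collected per corpus.\nTRIGGERS = \".!?\u06d4\u0964\u3002\"\n\ndef split_sentences(line):\n    # split after any token that ends in a trigger symbol\n    sentences, current = [], []\n    for token in line.split():\n        current.append(token)\n        if token[-1] in TRIGGERS:\n            sentences.append(\" \".join(current))\n            current = []\n    if current:\n        sentences.append(\" \".join(current))\n    return sentences\n\ndef tokenize(sentence):\n    # naive whitespace tokenization\n    return sentence.split()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing",

"sec_num": "3.3"

},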
|
{ |
|
"text": "We use the same sentence splitting and tokenization for EBC and WTC. This is done regardless of Bibles being distributed in a verse-per-line format, which means verses can be split in more than one sentence. The average sentence length across languages is 18.5 tokens in EBC and 16.7 in WTC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The UD treebank tokenization differs from the tokenization used for the multi-parallel corpora. The UD dependency annotation is based on syntactic words, and the tokenization guidelines recommend, for example, splitting clitics from verbs, and undoing contractions (Spanish \"del\" becomes \"de el\"). These tokens made up of several syntactic words are 6 https://github.com/bplank/ multilingualtokenizer called multiword tokens in the UD convention, and are included in the treebanks but are not integrated in the dependency trees, i.e., only their forming subtokens are assigned a syntactic head. 7 In order to harmonize the tokenization, we eliminate subtokens from the dependency trees, and incorporate the original multiword tokens-which are more likely to be naive raw tokens-in the trees instead. For each multiword token, we provide it with POS and dependency label from the highest subtoken, namely the subtoken that is closest to root. For example, in the case of a verb and its clitics, the chosen subtoken is the verb, and the multiword token is interpreted as a verb. If there are more candidates, we select one through POS ranking. 8", |
|
"cite_spans": [ |
|
{ |
|
"start": 595, |
|
"end": 596, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.3" |
|
}, |
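{

"text": "A simplified sketch of choosing the representative subtoken for a multiword token (illustrative only; the POS_RANK list is hypothetical, the paper's exact ranking is not reproduced here, and re-indexing of the surrounding heads is omitted):\n\ndef choose_representative(subtokens):\n    # subtokens: list of (index, pos, head) tuples for the subtokens of one multiword token\n    POS_RANK = [\"VERB\", \"AUX\", \"NOUN\", \"PROPN\", \"ADJ\", \"PRON\", \"ADV\", \"ADP\", \"DET\", \"PUNCT\"]  # hypothetical preference order\n    span = {idx for idx, _, _ in subtokens}\n    # prefer a subtoken whose head lies outside the multiword span, i.e., the one closest to the root\n    outside = [t for t in subtokens if t[2] not in span] or subtokens\n    return min(outside, key=lambda t: POS_RANK.index(t[1]) if t[1] in POS_RANK else len(POS_RANK))\n\nThe multiword token then takes over the POS tag, head, and dependency label of the returned subtoken.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing",

"sec_num": "3.3"

},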
|
{ |
|
"text": "Alignment We sentence-and word-align all language pairs in both our multi-parallel corpora. We use hunalign (Varga et al., 2005) to perform conservative sentence alignment. 9 The selected sentence pairs then enter word alignment. Here, we use two different aligners. The first one is IBM2 fastalign by Dyer et al. 2013, where we adopt the setup of Agi\u0107 et al. (2015) who observe a major advantage in using reverse-mode alignment for POS projection (4-5 accuracy points absolute). 10 In addition, we use the IBM1 aligner efmaral 11 b\u00ff Ostling (2015). The intuition behind using IBM1 is that IBM2 introduces a bias toward more closely related languages, and we confirm this intuition through our experiments. We modify both aligners so that they output the alignment probability for each aligned token pair.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 128, |
|
"text": "(Varga et al., 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 366, |
|
"text": "Agi\u0107 et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 482, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Tagging and parsing The source-sides of the two multi-parallel corpora, EBC and WTC, are POStagged by taggers trained on the respective source languages, using TnT (Brants, 2000) . We parse the corpora using TurboParser (Martins et al., 2013) . The parser is used in simple arc-factored mode with pruning. 12 We alter it to output per-sentence arc weight matrices. 13", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 178, |
|
"text": "(Brants, 2000)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 308, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 242, |
|
"text": "(Martins et al., 2013)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Outline For each sentence in a target language corpus, we retrieve the aligned sentences in the source corpora. Then, for each of these source-target sentence pairs, we project POS tags and dependency edge scores via word alignments, aggregating the contributions of individual sources. Once all contributions are collected, we perform a per-token majority vote on POS tags and DMST decoding on the summed edge scores. This results in a POS-tagged and dependency parsed target sentence ready to contribute in training a tagger and parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We remove target language sentences that contain word tokens without POS labels. This may happen due to unaligned sentences and words. We then proceed to train models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Each of the experiment steps involves a number of choices that we outline in this section. We also describe the baseline systems and upper bounds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Below, we present results with POS taggers based on annotation projection with both IBM1 and IBM2; cf. Table 3 . We train TnT with default settings on the projected annotations. Note that we use the resulting POS taggers in our dependency parsing experiments in order not to have our parsers assume the existence of POS-annotated corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 110, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "POS tagging", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For a more extensive assessment, we refer to the work by Agi\u0107 et al. (2015) who report baseline and upper bounds. In contrast to their work, we consider two different alignment models and use the UD POS tagset (17 tags), in contrast to the 12 tags of Petrov et al. (2012) . This makes our POS tagging problem slightly more challenging, but our parsing models potentially benefit from the extended tagset. 14", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 75, |
|
"text": "Agi\u0107 et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 271, |
|
"text": "Petrov et al. (2012)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS tagging", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use arc-factored Tur-boParser for all parsing models, applying the same setup as in preprocessing. There are three sets of models: our systems, baselines, and upper bounds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our systems are trained on the projected EBC and WTC texts, while the rest-except system: DCA-PROJ (see below)-are trained on the (delexicalized) source-language treebanks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To avoid a bias toward languages with big treebanks and to make our experiments tractable, we randomly subsample all training sets to a maximum of 20k sentences. In the multi-source systems, this means a uniform sample from all sources up to 20k sentences. This means our comparison is fair, and that our systems do not have the advantage of more training data over our baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our systems We report on four different crosslingual systems, alternating the use of word aligners (IBM1, IBM2) and the structures we project, as they can be either (i) arc-factored weight matrices from the parser (GRAPHS) or (ii) the single-best trees provided by the parser after decoding (TREES). See the if-clause in Algorithm 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We tune two parameters for these four systems using English as development set, confidence estimation and normalization, and we report the best setups only. For the IBM1-based systems, we use the word alignment probabilities in the arc projection, but we use unit votes in POS voting. The opposite yields the best IBM2 scores: binarizing the alignment scores in dependency projection, while weight-voting the POS tags. We also evaluated a number of different normalization techniques in projection, only to arrive at standardization and softmax as by far the best choices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Baselines and upper bounds We compare our systems to three competitive baselines, as well as three informed upper bounds or oracles. First, we list our baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "DCA-PROJ: This is the direct correspondence assumption (DCA)-based approach to projection, i.e., the de facto standard for projecting dependencies. First introduced by Hwa et al. (2005) , it was recently elucidated by Tiedemann (2014) , whose implementation we follow here. In contrast to our approach, DCA projects trees on a source-target sentence pair basis, relying on heuristics and spurious nodes or edges to maintain the tree structure. In the setup, we basically plug DCA into our projection-voting pipeline instead of our own method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 185, |
|
"text": "Hwa et al. (2005)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 234, |
|
"text": "Tiedemann (2014)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DELEX-MS: This is the multi-source direct delexicalized parser transfer baseline of McDonald et al. (2011). 15", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For this baseline, we parse a target sentence using multiple single-source delexicalized parsers. Then, we collect the output trees in a graph, unit-voting the individual edge weights, and finally using DMST to compute the best dependency tree (Sagae and Lavie, 2006) . Now, we explain the three upper bounds: DELEX-SB: This result is using the best singlesource delexicalized system for a given target language following McDonald et al. (2013) . We parse a target with multiple single-source delexicalized parsers, and select the best-performing one.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 267, |
|
"text": "(Sagae and Lavie, 2006)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 444, |
|
"text": "McDonald et al. (2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "REPARSE:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For this result we parse the targetlanguage EBC and WTC data, train parsers on the output predictions, and evaluate the resulting parsers on the evaluation data. Note this result is available only for the source languages. Also, note that while we refer to this as self-training, we do not concatenate the EBC/WTC training data with the source treebank data. This upper bound tells us something about the usefulness of the parallel corpus texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SELF-TRAIN:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "FULL: Direct in-language supervision, only available for the source languages. We train parsers on the source treebanks, and use them to parse the source test sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SELF-TRAIN:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Evaluation All our datasets-projected, training, and test sets-contain only the following CoNLL-X features: ID, FORM, CPOSTAG, and HEAD. 16 For simplicity, we do not predict dependency labels (DEPREL), and we only report unlabeled attachment scores (UAS). The POS taggers are evaluated for accuracy. We use our IBM1 taggers for all the baselines and upper bounds.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 139, |
|
"text": "16", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SELF-TRAIN:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our average results are presented in Figure 2 , including broken down by language family, the lan- guages for which we had training data (Sources) and those for which we only had test data (Targets). We see that our systems are substantially better than both multi-source delexicalized transfer, DCA, and reparsing based on delexicalized transfer models. Focusing on our system results, we see that projection with IBM1 leads to better models than projection with IBM2. We also note that our improvements are biggest with non-Indo-European languages. Our IBM1-based parsers top the ones using IBM2 alignment by 6 points UAS on Indo-European languages, while the difference amounts to almost 10 points UAS on non-Indo-European languages (cf. Table 2 ). This difference in scores exposes a systematic bias towards more closely related languages in work using even more advanced word alignment (Tiedemann and Agi\u0107, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 891, |
|
"end": 917, |
|
"text": "(Tiedemann and Agi\u0107, 2016)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 45, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 748, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The detailed results using the Watchtower Corpus are listed in Table 3 , where we also list the POS tagging accuracies. Note that these are not directly comparable to Agi\u0107 et al. (2015) , since they use a more coarse-grained tagset, and the results listed here are using WTC. We list the detailed results with the Bible Corpus online. 17 The tendencies are the same, but the results are slightly lower almost consistently across the board.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 185, |
|
"text": "Agi\u0107 et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 337, |
|
"text": "17", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Finally, we observe that our results are also better than those that can be obtained using a predictive model to select the best source language for delexi- calized transfer (Rosa and\u017dabokrtsk\u00fd, 2015) ; and better than what can be obtained using an oracle (DELEX-SB) to select the source language. Direct supervision (FULL) upper bound unsurprisingly records the highest scores in the experiment, as it uses biased in-language and in-domain training data. We also experiment with learning curves for direct supervision, with a goal of establishing the amount of manually annotated sentences needed to beat our cross-lingual systems. We find that for most languages this number falls within the range of 100-400 in-domain sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 200, |
|
"text": "(Rosa and\u017dabokrtsk\u00fd, 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Function words In UD, a subset of function words-tags: ADP, AUX, CONJ, SCONJ, DET, PUNCT-have to be leaves in the dependency trees, unless, e.g., they participate in multiword expressions. Our predictions show some violations of this constraint (less than 1% of all words with these POS), but this ratio is similar to the amount of vi-olations found in the test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
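{

"text": "The constraint check itself is simple; a sketch (with heads given as a 1-based list where 0 denotes the root):\n\nCLOSED = {\"ADP\", \"AUX\", \"CONJ\", \"SCONJ\", \"DET\", \"PUNCT\"}\n\ndef leaf_violations(pos_tags, heads):\n    # pos_tags[i] and heads[i] describe token i+1; a closed-class token violates the constraint if it heads another token\n    has_child = set(heads)\n    return [i + 1 for i, p in enumerate(pos_tags) if p in CLOSED and (i + 1) in has_child]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "5"

},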
|
{ |
|
"text": "Projectivity The UD treebanks are in general largely projective. Our UD test languages have an average of 89% fully projective sentences. However, with IBM1 for example, we only predict 55% of all sentences to be projective. Regardless of the differences in UAS, we observe a corpus effect in the difference of projectivity of the predictions between using EBC (65%) and WTC (55%). We attribute the higher level of projectivity of EBC-projected treebanks to Bible sentences being shorter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
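{

"text": "Projectivity of a predicted tree can be checked per sentence with a simple crossing-arcs test, for example (heads are 1-based, 0 is the artificial root at the left edge):\n\ndef is_projective(heads):\n    # heads[i] is the head of token i+1\n    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]\n    for i, (a1, b1) in enumerate(arcs):\n        for a2, b2 in arcs[i + 1:]:\n            # two arcs cross if exactly one endpoint of one arc lies strictly inside the other\n            if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:\n                return False\n    return True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "5"

},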
|
{ |
|
"text": "The least projective predictions are Farsi (17%) and Hindi (19%), for which we also obtain the lowest UASs. This may be a consequence of our naive tokenization, yielding unreliable alignments. However, projectivity correlates more with UAS (\u03c1 = 0.56) than with POS prediction accuracy (\u03c1 = 0.34).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We observe that the average edge length on IBM1 and WTC is of 2.95, while for EBC it is 2.67. The average gold edge length is 3.6-which is significantly higher at p < 0.05 (Student's t-test). However, the variance in gold edge length is about 1.2 times the deviation of predicted edge length. In other words, gold edges are often longer and more far-reaching. This difference indicates our predictions have worse recall for longer dependencies such as subordinate clauses, while being more accurate in local, phrasal contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency length", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "POS errors Unlike most previous work on crosslingual dependency parsing, and following the notable exception of McDonald et al. (2011), we rely on POS predictions from cross-lingual transfer models. One may hypothesize that there is a significant error propagation from erroneous POS projection. We observe, however, that about 40% of wrong POS predictions are nevertheless assigned the right syntactic head. We argue that the fairly uniform noise on the POS labels helps the parsers regularize over the POS-dependency relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency length", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We treat POS and syntactic dependencies as two separate annotation layers and project them independently in our approach. Moreover, we project edge scores for dependencies, in contrast to only the single-best source POS tags. Johannsen et al. (2016) introduce an approach to joint projection of POS and dependencies, showing that exploiting the interactions between the two layers yields even better cross-lingual parsers. Their approach also accounts for transferring tag distributions instead of single-best POS tags.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 249, |
|
"text": "Johannsen et al. (2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All the parsers in our experiments are restricted to 20k training sentences. EBC and WTC texts offer up to 120k training instances per language. We observe limited benefits of going beyond our training set cap, indicating a more elaborate instance selection-based approach would be more beneficial than just adding more training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our dependency graph projection, we normalize the weights per sentence. For future development, we note that corpus-level normalization might achieve the same balancing effect while still preserving possibly important language-specific signals regarding structural disambiguations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EBC and WTC constitute a (hopefully small) subset of the publicly available multilingual parallel corpora. The outdated EBC texts can be replaced by newer ones, and the EBC itself replaced or aug-mented by other online sources of Bible translations. Other sources include the UN Declaration of Human Rights, translated to 467 languages, 18 and repositories of movie subtitles, software localization files, and various other parallel resources, such as OPUS (Tiedemann, 2012) . 19 Our approach is languageindependent and would benefit from extension to datasets beyond EBC and WTC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 474, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 479, |
|
"text": "19", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "6 Related work POS tagging While projection annotation of POS labels goes back to Yarowsky's seminal work, Das and Petrov (2011) recently renewed interest in this problem. Das and Petrov (2011) go beyond our approach to POS annotation by combining annotation projection and unsupervised learning techniques, but they restrict themselves to Indo-European languages and a coarser tagset. Li et al. (2012) introduce an approach that leverages potentially noisy, but sizeable POS tag dictionaries in the form of Wiktionaries for 9 resource-rich languages. Garrette et al. (2013) also consider the problem of learning POS taggers for truly low-resource languages, but suggest crowdsourcing such POS tag dictionaries.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 128, |
|
"text": "Das and Petrov (2011)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 193, |
|
"text": "Das and Petrov (2011)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 402, |
|
"text": "Li et al. (2012)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 574, |
|
"text": "Garrette et al. (2013)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, Agi\u0107 et al. (2015) were the first to introduce the idea of learning models for more than a dozen truly low-resource languages in one go, and our contribution can be seen as a non-trivial extension of theirs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 27, |
|
"text": "Agi\u0107 et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Parsing With the exception of Zeman and Resnik (2008) , initial work on cross-lingual dependency parsing focused on annotation projection (Hwa et al., 2005; Spreyer et al., 2010) . McDonald et al. (2011) and S\u00f8gaard (2011) simultaneously took up the idea of delexicalized transfer after Zeman and Resnik (2008) , but more importantly, they also introduced the idea of multi-source cross-lingual transfer in the context of dependency parsing. McDonald et al. (2011) were the first to combine annotation projection and multi-source transfer, the approach taken in this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 53, |
|
"text": "Zeman and Resnik (2008)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 156, |
|
"text": "(Hwa et al., 2005;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 157, |
|
"end": 178, |
|
"text": "Spreyer et al., 2010)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 203, |
|
"text": "McDonald et al. (2011)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 222, |
|
"text": "S\u00f8gaard (2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 310, |
|
"text": "Zeman and Resnik (2008)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 464, |
|
"text": "McDonald et al. (2011)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Annotation projection has been explored in the context of cross-lingual dependency parsing since Hwa et al. (2005) . Notable approaches include the soft projection of reliable dependencies by Li et al. (2014) , and the work of Ma and Xia (2014) , who make use of the source-side distributions through a training objective function.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 114, |
|
"text": "Hwa et al. (2005)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 208, |
|
"text": "Li et al. (2014)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 244, |
|
"text": "Ma and Xia (2014)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tiedemann and Agi\u0107 (2016) provide a more detailed overview of model transfer and annotation projection, while introducing a competitive machine translation-based approach to synthesizing dependency treebanks. In their work, we note the IBM4 word alignments favor more closely related languages, and that building machine translation systems requires parallel data in quantities that far surpass EBC and WTC combined.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The best results reported to date were presented by Rasooli and Collins (2015) . They use the intersection of languages represented in the Google dependency treebanks project and the languages represented in the Europarl corpus. Consequently, their approach-similar to all the other approaches listed in this section-is potentially biased toward closely related Indo-European languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 78, |
|
"text": "Rasooli and Collins (2015)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Possible improvements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We introduced a novel, yet simple and heuristicsfree, method for inducing POS taggers and dependency parsers for truly low-resource languages. We only assume the availability of a translation of a set of documents that have been translated into many languages. The novelty of our dependency projection method consists in projecting edge scores rather than edges, and specifically in projecting these annotations from multiple sources rather than from only one source. While we built models for more than a hundred languages during our experiments, we evaluated our approach across 30 languages for which we had test data. The results show that our approach is superior to commonly used transfer methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
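To make the conclusion's description concrete, here is a minimal, hypothetical sketch (not the authors' released code) of multi-source projection of dependency edge scores. The data structures, function names, uniform source weighting, and the greedy head selection are all illustrative assumptions; the paper's own systems instead decode well-formed trees from the pooled scores (cf. the TREES rows in the results table).

```python
# Hypothetical sketch of multi-source edge-score projection; not the
# authors' implementation. Toy data structures and uniform source
# weighting are assumed, and greedy head selection stands in for the
# tree decoding (reparsing) step used in the paper.
from collections import defaultdict


def project_edge_scores(sources):
    """Pool dependency edge scores projected from several source sentences.

    Each element of `sources` is a pair (edge_scores, alignment):
    edge_scores[(h, d)] is the source parser's score for the arc h -> d
    (0 is the artificial root), and alignment maps 1-based source token
    indices to 1-based target token indices.
    """
    pooled = defaultdict(float)
    for edge_scores, alignment in sources:
        for (src_head, src_dep), score in edge_scores.items():
            tgt_head = 0 if src_head == 0 else alignment.get(src_head)
            tgt_dep = alignment.get(src_dep)
            if tgt_head is None or tgt_dep is None:
                continue  # unaligned tokens contribute nothing
            pooled[(tgt_head, tgt_dep)] += score
    return pooled


def greedy_heads(target_len, pooled):
    """Attach every target token to its highest-scoring head.

    A proper decoder (e.g. Chu-Liu/Edmonds) would guarantee a tree;
    greedy selection only illustrates how the pooled scores are consumed.
    """
    heads = {}
    for dep in range(1, target_len + 1):
        heads[dep] = max(
            (h for h in range(0, target_len + 1) if h != dep),
            key=lambda h: pooled.get((h, dep), 0.0),
        )
    return heads


if __name__ == "__main__":
    # Two toy source sentences aligned to a three-token target sentence.
    source_a = ({(0, 1): 2.0, (1, 2): 1.5, (1, 3): 0.5}, {1: 1, 2: 2, 3: 3})
    source_b = ({(0, 2): 0.7, (2, 1): 0.4, (2, 3): 1.2}, {1: 2, 2: 1, 3: 3})
    scores = project_edge_scores([source_a, source_b])
    print(greedy_heads(3, scores))  # {1: 0, 2: 1, 3: 1}
```

In the actual pipeline, the pooled edge scores for each target sentence would be handed to a tree decoder rather than a greedy attachment loop, so that every sentence in the projected treebank receives a valid dependency tree.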
|
{ |
|
"text": "https://bitbucket.org/lowlands/release", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://hdl.handle.net/11234/1-1548", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/ffnlp/sethr 4 http://wol.jw.org 5 http://www.biblija.net/biblija.cgi?l=eu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://universaldependencies.org/ format.html 8 https://github.com/coastalcph/ ud-conversion-tools.9 Parameters used: utf, bisent, cautious, realign.10 Parameters used:d, o, v, r.11 Also reverse mode, with default settings, see https:// github.com/robertostling/efmaral.12 Parameters used: basic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our fork of TurboParser is available from https:// github.com/andersjo/TurboParser.14 For example, the AUX vs. VERB distinction from UD POS does not exist the tagset ofPetrov et al. (2012), and neither does NOUN vs. PROPN (proper noun).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Referred to as multi-dir in the original paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://bitbucket.org/lowlands/release", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.ohchr.org/EN/UDHR/Pages/ SearchByLang.aspx 19 http://opus.lingfil.uu.se/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgements We thank the editors and the anonymous reviewers for their valuable comments. This research is funded by the ERC Starting Grant LOWLANDS (#313695).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "If All You Have is a Bit of the Bible: Learning POS Taggers for Truly Low-Resource Languages", |
|
"authors": [ |
|
{ |
|
"first": "Zeljko", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zeljko Agi\u0107, Dirk Hovy, and Anders S\u00f8gaard. 2015. If All You Have is a Bit of the Bible: Learning POS Tag- gers for Truly Low-Resource Languages. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "TnT: A Statistical Part-of-Speech Tagger", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "ANLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Brants. 2000. TnT: A Statistical Part-of- Speech Tagger. In ANLP.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A Massively Parallel Corpus: The Bible in 100 Languages", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodouloupoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Christodouloupoulos and Mark Steedman. 2014. A Massively Parallel Corpus: The Bible in 100 Languages. Language Resources and Evaluation, 49(2).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised Partof-Speech Tagging with Bilingual Graph-Based Projections", |
|
"authors": [ |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised Part- of-Speech Tagging with Bilingual Graph-Based Pro- jections. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A Simple, Fast, and Effective Reparameterization of IBM Model 2", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Chahuneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameteriza- tion of IBM Model 2. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Real-World Semi-Supervised Learning of POS-Taggers for Low-Resource Languages", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Mielens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Garrette, Jason Mielens, and Jason Baldridge. 2013. Real-World Semi-Supervised Learning of POS- Taggers for Low-Resource Languages. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Bootstrapping Parsers via Syntactic Projection Across Parallel Texts", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Hwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Weinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Cabezas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Okan", |
|
"middle": [], |
|
"last": "Kolak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Natural Language Engineering", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping Parsers via Syntactic Projection Across Parallel Texts. Natural Language Engineering, 11(3).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Joint Part-of-Speech and Dependency Projection from Multiple Sources", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Johannsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u017deljko", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders Johannsen,\u017deljko Agi\u0107, and Anders S\u00f8gaard. 2016. Joint Part-of-Speech and Dependency Projec- tion from Multiple Sources. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Wiki-ly Supervised Part-of-Speech Tagging", |
|
"authors": [ |
|
{ |
|
"first": "Shen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo\u00e3o", |
|
"middle": [], |
|
"last": "Gra\u00e7a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shen Li, Jo\u00e3o Gra\u00e7a, and Ben Taskar. 2012. Wiki-ly Supervised Part-of-Speech Tagging. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Soft Cross-lingual Syntax Projection for Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Zhenghua", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenliang", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Soft Cross-lingual Syntax Projection for Dependency Parsing. In COLING.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Unsupervised Dependency Parsing with Transferring Distribution via Parallel Guidance and Entropy Regularization", |
|
"authors": [ |
|
{ |
|
"first": "Xuezhe", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuezhe Ma and Fei Xia. 2014. Unsupervised Depen- dency Parsing with Transferring Distribution via Par- allel Guidance and Entropy Regularization. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Turning on the Turbo: Fast Third-Order Non-Projective Turbo Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [ |
|
"F", |
|
"T" |
|
], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Almeida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andr\u00e9 F. T. Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the Turbo: Fast Third-Order Non-Projective Turbo Parsers. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Online Large-Margin Training of Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Multi-Source Transfer of Delexicalized Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-Source Transfer of Delexicalized Dependency Parsers. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Universal Dependency Annotation for Multilingual Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Quirmbach-Brundage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oscar", |
|
"middle": [], |
|
"last": "T\u00e4ckstr\u00f6m", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Bedini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N\u00faria", |
|
"middle": [], |
|
"last": "Bertomeu Castell\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jungmee", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Universal Dependency An- notation for Multilingual Parsing. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Word Order Typology Through Multilingual Word Alignment", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Robert\u00f6stling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert\u00d6stling. 2015. Word Order Typology Through Multilingual Word Alignment. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A Universal Part-of-Speech Tagset", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A Universal Part-of-Speech Tagset. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Density-Driven Cross-Lingual Transfer of Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Sadegh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rasooli", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2015. Density-Driven Cross-Lingual Transfer of Depen- dency Parsers. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "KLcpos3: A Language Similarity Measure for Delexicalized Parser Transfer", |
|
"authors": [ |
|
{ |
|
"first": "Rudolf", |
|
"middle": [], |
|
"last": "Rosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zden\u011bk\u017eabokrtsk\u00fd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudolf Rosa and Zden\u011bk\u017dabokrtsk\u00fd. 2015. KLcpos3: A Language Similarity Measure for Delexicalized Parser Transfer. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Parser Combination by Reparsing", |
|
"authors": [ |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenji Sagae and Alon Lavie. 2006. Parser Combination by Reparsing. In NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Training Parsers on Partial Trees: A Cross-Language Comparison", |
|
"authors": [ |
|
{ |
|
"first": "Kathrin", |
|
"middle": [], |
|
"last": "Spreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lilja", |
|
"middle": [], |
|
"last": "\u00d8vrelid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kathrin Spreyer, Lilja \u00d8vrelid, and Jonas Kuhn. 2010. Training Parsers on Partial Trees: A Cross-Language Comparison. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Data Point Selection for Cross-Language Adaptation of Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard. 2011. Data Point Selection for Cross- Language Adaptation of Dependency Parsers. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Synthetic Treebanking for Cross-Lingual Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann and\u017deljko Agi\u0107. 2016. Synthetic Tree- banking for Cross-Lingual Dependency Parsing. Jour- nal of Artificial Intelligence Research, 55.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Treebank Translation for Cross-Lingual Parser Induction", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u017deljko", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann,\u017deljko Agi\u0107, and Joakim Nivre. 2014. Treebank Translation for Cross-Lingual Parser Induc- tion. In CoNLL.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Parallel Data, Tools and Interfaces in OPUS", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel Data, Tools and Inter- faces in OPUS. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Rediscovering Annotation Projection for Cross-Lingual Parser Induction", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "COL-ING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2014. Rediscovering Annotation Pro- jection for Cross-Lingual Parser Induction. In COL- ING.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Andr\u00e1s Kornai, Viktor Tr\u00f3n, and Viktor Nagy. 2005. Parallel Corpora for Medium Density Languages", |
|
"authors": [ |
|
{ |
|
"first": "D\u00e1niel", |
|
"middle": [], |
|
"last": "Varga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e1szl\u00f3", |
|
"middle": [], |
|
"last": "N\u00e9meth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P\u00e9ter", |
|
"middle": [], |
|
"last": "Hal\u00e1csy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "RANLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D\u00e1niel Varga, L\u00e1szl\u00f3 N\u00e9meth, P\u00e9ter Hal\u00e1csy, Andr\u00e1s Ko- rnai, Viktor Tr\u00f3n, and Viktor Nagy. 2005. Parallel Corpora for Medium Density Languages. In RANLP.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Inducing Multilingual Text Analysis Tools via Robust Projection Across Aligned Corpora", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grace", |
|
"middle": [], |
|
"last": "Ngai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Wicentowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing Multilingual Text Analysis Tools via Robust Projection Across Aligned Corpora. In NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Cross-Language Parser Adaptation Between Related Languages", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "IJCNLP Workshop on NLP for Less Privileged Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Zeman and Philip Resnik. 2008. Cross-Language Parser Adaptation Between Related Languages. In IJCNLP Workshop on NLP for Less Privileged Lan- guages.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "t are the result of word alignments. The alignment subgraph is the bipartite graph", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The 25 most frequent words exclusive to the English Bible or Watchtower.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td/><td/><td/><td>Languages</td><td/><td/></tr><tr><td>Baselines</td><td>All</td><td colspan=\"2\">Sources Targets</td><td>IE</td><td>Non-IE</td></tr><tr><td colspan=\"2\">DELEX-MS 45.43</td><td>45.64</td><td>44.59</td><td>49.53</td><td>34.88 \u2020</td></tr><tr><td colspan=\"3\">DCA-PROJ 47.87 \u2020 47.05</td><td>47.19</td><td colspan=\"2\">51.33 \u2020 40.66 \u2020</td></tr><tr><td colspan=\"2\">REPARSE 47.79</td><td>47.87</td><td>47.47</td><td>51.34</td><td>38.67</td></tr><tr><td>Our systems</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">IBM1 GRAPHS 52.82</td><td>53.01</td><td>52.07</td><td>55.44</td><td>46.08</td></tr><tr><td colspan=\"3\">TREES 53.47 IBM2 GRAPHS 46.44 \u2020 46.14 53.49</td><td>53.38 44.39</td><td colspan=\"2\">55.91 49.54 \u2020 38.47 \u2020 47.19</td></tr><tr><td colspan=\"3\">TREES 46.48 \u2020 46.67</td><td>45.54</td><td colspan=\"2\">49.58 \u2020 38.93</td></tr><tr><td>Upper bounds</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">DELEX-SB 48.52</td><td>48.64</td><td>48.02</td><td>50.91</td><td>42.35</td></tr><tr><td>SELF-TRAIN</td><td>-</td><td>58.38</td><td>-</td><td>-</td><td>-</td></tr><tr><td>FULL</td><td>-</td><td>72.55</td><td>-</td><td>-</td><td>-</td></tr></table>", |
|
"type_str": "table", |
|
"text": "16 http://ilk.uvt.nl/conll/#dataformat", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>: Overview of the parsing experiment results for the 25 languages in EBC \u2229 WTC. We report the best av-erage UAS score per system and language subset. IE: Indo-European languages, \u2020: EBC, : WTC.</td></tr></table>", |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "POS tagging accuracies and UAS parsing scores for the models built using WTC data. The results are split for source and target languages. All baselines and upper bounds use IBM1 POS taggers, while our MULTI-PROJ systems use their respective IBM1 or IBM2 taggers.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |