{
"paper_id": "K15-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:08.523809Z"
},
"title": "Cross-lingual Transfer for Unsupervised Dependency Parsing Without Parallel Data",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of New",
"location": {
"settlement": "Brunswick"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Cross-lingual transfer has been shown to produce good results for dependency parsing of resource-poor languages. Although this avoids the need for a target language treebank, most approaches have still used large parallel corpora. However, parallel data is scarce for low-resource languages, and we report a new method that does not need parallel data. Our method learns syntactic word embeddings that generalise over the syntactic contexts of a bilingual vocabulary, and incorporates these into a neural network parser. We show empirical improvements over a baseline delexicalised parser on both the CoNLL and Universal Dependency Treebank datasets. We analyse the importance of the source languages, and show that combining multiple source-languages leads to a substantial improvement.",
"pdf_parse": {
"paper_id": "K15-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "Cross-lingual transfer has been shown to produce good results for dependency parsing of resource-poor languages. Although this avoids the need for a target language treebank, most approaches have still used large parallel corpora. However, parallel data is scarce for low-resource languages, and we report a new method that does not need parallel data. Our method learns syntactic word embeddings that generalise over the syntactic contexts of a bilingual vocabulary, and incorporates these into a neural network parser. We show empirical improvements over a baseline delexicalised parser on both the CoNLL and Universal Dependency Treebank datasets. We analyse the importance of the source languages, and show that combining multiple source-languages leads to a substantial improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing is a crucial component of many natural language processing (NLP) systems for tasks such as relation extraction (Bunescu and Mooney, 2005) , statistical machine translation (Xu et al., 2009) , text classification (\u00d6zg\u00fcr and G\u00fcng\u00f6r, 2010) , and question answering (Cui et al., 2005) . Supervised approaches to dependency parsing have been very successful for many resource-rich languages, where relatively large treebanks are available (McDonald et al., 2005a) . However, for many languages, annotated treebanks are not available, and are very costly to create (B\u00f6hmov\u00e1 et al., 2001 ). This motivates the development of unsupervised approaches that can make use of unannotated, monolingual data. However, purely unsupervised approaches have relatively low accuracy (Klein and Manning, 2004; Gelling et al., 2012) .",
"cite_spans": [
{
"start": 130,
"end": 156,
"text": "(Bunescu and Mooney, 2005)",
"ref_id": "BIBREF5"
},
{
"start": 191,
"end": 208,
"text": "(Xu et al., 2009)",
"ref_id": "BIBREF35"
},
{
"start": 231,
"end": 255,
"text": "(\u00d6zg\u00fcr and G\u00fcng\u00f6r, 2010)",
"ref_id": null
},
{
"start": 281,
"end": 299,
"text": "(Cui et al., 2005)",
"ref_id": "BIBREF8"
},
{
"start": 453,
"end": 477,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF21"
},
{
"start": 578,
"end": 599,
"text": "(B\u00f6hmov\u00e1 et al., 2001",
"ref_id": "BIBREF2"
},
{
"start": 782,
"end": 807,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF18"
},
{
"start": 808,
"end": 829,
"text": "Gelling et al., 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most recent work on unsupervised dependency parsing for low-resource languages has used the idea of delexicalized parsing and cross-lingual transfer (Zeman et al., 2008; S\u00f8gaard, 2011; Mc-Donald et al., 2011; Ma and Xia, 2014) . In this setting, a delexicalized parser is trained on a resource-rich source language, and is then applied directly to a resource-poor target language. The only requirement here is that the source and target languages are POS tagged must use the same tagset. This assumption is pertinent for resourcepoor languages since it is relatively quick to manually POS tag the data. Moreover, there are many reports of high accuracy POS tagging for resourcepoor languages (Duong et al., 2014; Garrette et al., 2013; Duong et al., 2013b) . The cross-lingual delexicalized approach has been shown to significantly outperform unsupervised approaches (Mc-Donald et al., 2011; Ma and Xia, 2014) .",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "(Zeman et al., 2008;",
"ref_id": "BIBREF37"
},
{
"start": 170,
"end": 184,
"text": "S\u00f8gaard, 2011;",
"ref_id": "BIBREF30"
},
{
"start": 185,
"end": 208,
"text": "Mc-Donald et al., 2011;",
"ref_id": null
},
{
"start": 209,
"end": 226,
"text": "Ma and Xia, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 692,
"end": 712,
"text": "(Duong et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 713,
"end": 735,
"text": "Garrette et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 736,
"end": 756,
"text": "Duong et al., 2013b)",
"ref_id": "BIBREF11"
},
{
"start": 867,
"end": 891,
"text": "(Mc-Donald et al., 2011;",
"ref_id": null
},
{
"start": 892,
"end": 909,
"text": "Ma and Xia, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Parallel data can be used to boost the performance of a cross-lingual parser (McDonald et al., 2011; Ma and Xia, 2014) . However, parallel data may be hard to acquire for truly resource-poor languages. 1 Accordingly, we propose a method to improve the performance of a cross-lingual delexicalized parser using only monolingual data.",
"cite_spans": [
{
"start": 77,
"end": 100,
"text": "(McDonald et al., 2011;",
"ref_id": "BIBREF23"
},
{
"start": 101,
"end": 118,
"text": "Ma and Xia, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is based on augmenting the delexicalized parser using syntactic word embeddings. Words from both source and target language are mapped to a shared low-dimensional space based on their syntactic context, without recourse to parallel data. While prior work has struggled to efficiently incorporate word embedding information into the parsing model (Bansal et al., 2014; Andreas and Klein, 2014; , we present a method for doing so using a neural net-work parser. We train our parser using a two stage process: first learning cross-lingual syntactic word embeddings, then learning the other parameters of the parsing model using a source language treebank. When applied to the target language, we show consistent gains across all studied languages.",
"cite_spans": [
{
"start": 359,
"end": 380,
"text": "(Bansal et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 381,
"end": 405,
"text": "Andreas and Klein, 2014;",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is a stepping stone towards the more ambitious goal of a universal parser that can efficiently parse many languages with little modification. This aspiration is supported by the recent release of the Universal Dependency Treebank (Nivre et al., 2015) which has consensus dependency relation types and POS annotation for many languages.",
"cite_spans": [
{
"start": 240,
"end": 260,
"text": "(Nivre et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When multiple source languages are available, we can attempt to boost performance by choosing the best source language, or combining information from several source languages. To the best of our knowledge, no prior work has proposed a means for selecting the best source language given a target language. To address this, we introduce two metrics which outperform the baseline of always picking English as the source language. We also propose a method for combining all available source languages which leads to substantial improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: Section 2 reviews prior work on unsupervised cross-lingual dependency parsing. Section 3 presents the methods for improving the delexicalized parser using syntactic word embeddings. Section 4 describes experiments on the CoNLL dataset and Universal Dependency Treebank. Section 5 presents methods for selecting the best source language given a target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two main approaches for building dependency parsers for resource-poor languages without using target-language treebanks: delexicalized parsing and projection (Hwa et al., 2005; Ma and Xia, 2014; T\u00e4ckstr\u00f6m et al., 2013; Mc-Donald et al., 2011) . The delexicalized approach was proposed by Zeman et al. (2008) . They built a delexicalized parser from a treebank in a resource-rich source language. This parser can be trained using any standard supervised approach, but without including any lexical features, then applied directly to parse sentences from the resource-poor language. Delexicalized parsing relies on the fact that parts-of-speech are highly informative of dependency relations. For example, an English lexicalized discriminative arc-factored dependency parser achieved 84.1% accuracy, whereas a delexicalized version achieved 78. 9% (McDonald et al., 2005b; T\u00e4ckstr\u00f6m et al., 2013) . Zeman et al. (2008) build a parser for Swedish using Danish, two closely-related languages. S\u00f8gaard (2011) adapt this method for less similar languages by choosing sentences from the source language that are similar to the target language. T\u00e4ckstr\u00f6m et al. (2012) additionally use cross-lingual word clustering as a feature for their delexicalized parser. Also related is the work by Naseem et al. (2012) and T\u00e4ckstr\u00f6m et al. (2013) who incorporated linguistic features from the World Atlas of Language Structures (WALS; Dryer and Haspelmath (2013)) for joint modelling of multi-lingual syntax.",
"cite_spans": [
{
"start": 168,
"end": 186,
"text": "(Hwa et al., 2005;",
"ref_id": "BIBREF17"
},
{
"start": 187,
"end": 204,
"text": "Ma and Xia, 2014;",
"ref_id": "BIBREF20"
},
{
"start": 205,
"end": 228,
"text": "T\u00e4ckstr\u00f6m et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 229,
"end": 252,
"text": "Mc-Donald et al., 2011)",
"ref_id": null
},
{
"start": 298,
"end": 317,
"text": "Zeman et al. (2008)",
"ref_id": "BIBREF37"
},
{
"start": 853,
"end": 880,
"text": "9% (McDonald et al., 2005b;",
"ref_id": null
},
{
"start": 881,
"end": 904,
"text": "T\u00e4ckstr\u00f6m et al., 2013)",
"ref_id": "BIBREF32"
},
{
"start": 907,
"end": 926,
"text": "Zeman et al. (2008)",
"ref_id": "BIBREF37"
},
{
"start": 1147,
"end": 1170,
"text": "T\u00e4ckstr\u00f6m et al. (2012)",
"ref_id": "BIBREF31"
},
{
"start": 1291,
"end": 1311,
"text": "Naseem et al. (2012)",
"ref_id": "BIBREF25"
},
{
"start": 1316,
"end": 1339,
"text": "T\u00e4ckstr\u00f6m et al. (2013)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Cross-lingual Dependency Parsing",
"sec_num": "2"
},
{
"text": "In contrast, projection approaches use parallel data to project source language dependency relations to the target language (Hwa et al., 2005) . Given a source-language parse tree along with word alignments, they generate the targetlanguage parse tree by projection. However, their approach relies on many heuristics which would be difficult to adapt to other languages. McDonald et al. (2011) exploit both delexicalized parsing and parallel data, using an English delexicalized parser as the seed parser for the target languages, and updating it according to word alignments. The model encourages the target-language parse tree to look similar to the source-language parse tree with respect to the head-modifier relation. Ma and Xia (2014) use parallel data to transfer source language parser constraints to the target side via word alignments. For the null alignment, they used a delexicalized parser instead of the source language lexicalized parser.",
"cite_spans": [
{
"start": 124,
"end": 142,
"text": "(Hwa et al., 2005)",
"ref_id": "BIBREF17"
},
{
"start": 723,
"end": 740,
"text": "Ma and Xia (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Cross-lingual Dependency Parsing",
"sec_num": "2"
},
{
"text": "In summary, existing work generally starts with a delexicalized parser, and uses parallel data typological information to improve it. In contrast, we want to improve the delexicalized parser, but without using parallel data or any explicit linguistic resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Cross-lingual Dependency Parsing",
"sec_num": "2"
},
{
"text": "We propose a novel method to improve the performance of a delexicalized cross-lingual parser without recourse to parallel data. Our method uses no additional resources and is designed to com-plement other methods. The approach is based on syntactic word embeddings where a word is represented as a low-dimensional vector in syntactic space. The idea is simple: we want to relexicalize the delexicalized parser using word embeddings, where source and target language lexical items are represented in the same space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Delexicalized Parsing",
"sec_num": "3"
},
{
"text": "Word embeddings typically capture both syntactic and semantic information. However, we hypothesize (and later show empirically) that for dependency parsing, word embeddings need to better reflect syntax. In the next subsection, we review some cross-lingual word embedding methods and propose our syntactic word embeddings. Section 4 empirically compares these word embeddings when incorporated into a dependency parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Delexicalized Parsing",
"sec_num": "3"
},
{
"text": "We review methods that can represent words in both source and target languages in a lowdimensional space. There are many benefits of using a low-dimensional space. Instead of the traditional \"one-hot\" representation with the number of dimensions equal to vocabulary size, words are represented using much fewer dimensions. This confers the benefit of generalising over the vocabulary to alleviate issues of data sparsity, through learning representations encoding lexical relations such as synonymy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual word embeddings",
"sec_num": "3.1"
},
{
"text": "Several approaches have sought to learn crosslingual word embeddings from parallel data (Hermann and Blunsom, 2014a; Hermann and Blunsom, 2014b; Xiao and Guo, 2014; Zou et al., 2013; T\u00e4ckstr\u00f6m et al., 2012) . Hermann and Blunsom (2014a) induced a cross-lingual word representation based on the idea that representations for parallel sentences should be close together. They constructed a sentence level representation as a bag-of-words summing over word-level representations, and then optimized a hinge loss function to match a latent representation of both sides of a parallel sentence pair. While this might seem well suited to our needs as a word representation in cross-lingual parsing, it may lead to overly semantic embeddings, which are important for translation, but less useful for parsing. For example, \"economic\" and \"economical\" will have a similar representation despite having different syntactic features.",
"cite_spans": [
{
"start": 145,
"end": 164,
"text": "Xiao and Guo, 2014;",
"ref_id": "BIBREF34"
},
{
"start": 165,
"end": 182,
"text": "Zou et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 183,
"end": 206,
"text": "T\u00e4ckstr\u00f6m et al., 2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual word embeddings",
"sec_num": "3.1"
},
{
"text": "Also related is (T\u00e4ckstr\u00f6m et al., 2012) who Figure 1 : Examples of the syntactic word embeddings for Spanish and English. In each case, the highlighted tags are predicted by the highlighted word. The Spanish sentence means \"your pet looks lovely\".",
"cite_spans": [
{
"start": 16,
"end": 40,
"text": "(T\u00e4ckstr\u00f6m et al., 2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 45,
"end": 53,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-lingual word embeddings",
"sec_num": "3.1"
},
{
"text": "build cross-lingual word representations using a variant of the Brown clusterer (Brown et al., 1992) applied to parallel data. Bansal et al. (2014) and Turian et al. (2010) showed that for monolingual dependency parsing, the simple Brown clustering based algorithm outperformed many word embedding techniques. In this paper we compare our approach to forming cross-lingual word embeddings with those of both Hermann and Blunsom (2014a) and T\u00e4ckstr\u00f6m et al. (2012) .",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF3"
},
{
"start": 127,
"end": 147,
"text": "Bansal et al. (2014)",
"ref_id": "BIBREF1"
},
{
"start": 152,
"end": 172,
"text": "Turian et al. (2010)",
"ref_id": "BIBREF33"
},
{
"start": 440,
"end": 463,
"text": "T\u00e4ckstr\u00f6m et al. (2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual word embeddings",
"sec_num": "3.1"
},
{
"text": "We now propose a novel approach for learning cross-lingual word embeddings that is more heavily skewed towards syntax. Word embedding methods typically exploit word co-occurrences, building on traditional techniques for distributional similarity, e.g., the co-occurrences of words in a context window about a central word. Bansal et al. (2014) suggested that for dependency parsing, word embeddings be trained over dependency relations, instead of adjacent tokens, such that embeddings capture head and modifier relations. They showed that this strategy performed much better than surface embeddings for monolingual dependency parsing. However, their method is not applicable to our low resource setting, as it requires a parse tree for training. Instead we consider a simpler representation, namely part-ofspeech contexts. This requires only POS tagging, rather than full parsing, while providing syntactic information linking words to their POS context, which we expect to be informative for characterising dependency relations.",
"cite_spans": [
{
"start": 323,
"end": 343,
"text": "Bansal et al. (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 Syntactic word embedding 1: Match the source and target tagsets to the Universal Tagset. 2: Extract word n-gram sequences for both the source and target language. 3: For each n-gram, keep the middle word, and replace the other words by their POS. 4: Train a skip-gram word embedding model on the resulting list of word and POS sequences from both the source and target language",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
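{
"text": "As a concrete illustration of Algorithm 1, the following minimal Python sketch (our own; the toy data and the use of gensim's skip-gram implementation are assumptions, not the authors' code) extracts the word-plus-POS-context sequences of steps 2 and 3, then trains one skip-gram model over both languages as in step 4.\n\nfrom gensim.models import Word2Vec  # assumption: gensim provides the skip-gram model\n\n# toy POS-tagged sentences for the two languages, sharing the universal tagset\nsource_tagged = [[('your', 'PRON'), ('pet', 'NOUN'), ('looks', 'VERB'), ('lovely', 'ADJ')]]\ntarget_tagged = [[('tu', 'PRON'), ('mascota', 'NOUN'), ('luce', 'VERB'), ('encantadora', 'ADJ')]]\n\ndef context_sequences(tagged_sentences, n=3):\n    half = n // 2\n    for sent in tagged_sentences:\n        for i in range(half, len(sent) - half):\n            left = [tag for _, tag in sent[i - half:i]]\n            right = [tag for _, tag in sent[i + 1:i + 1 + half]]\n            yield left + [sent[i][0]] + right  # middle word kept, neighbours replaced by POS\n\n# step 4: a single skip-gram model over the mixed word/POS sequences of BOTH languages\ncorpus = list(context_sequences(source_tagged + target_tagged))\nmodel = Word2Vec(corpus, sg=1, window=1, vector_size=50, min_count=1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},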
{
"text": "We assume the same POS tagset is used for both the source and target language, 2 and learn word embeddings for each word type in both languages into the same syntactic space of nearby POS contexts. In particular, we develop a predictive model of the tags to the left and right of a word, as illustrated in Figure 1 and outlined in Algorithm 1. Figure 1 illustrates two training contexts extracted from our English source and Spanish target language, where the highlighted fragments reflect the tags being predicted around each focus word. Note that for this example, the POS contexts for the English and Spanish verbs are identical, and therefore the model would learn similar word embeddings for these terms, and bias the parser to generate similar dependency structures for both terms.",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 314,
"text": "Figure 1",
"ref_id": null
},
{
"start": 344,
"end": 352,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
{
"text": "There are several motivations for our approach: (1) POS tags are too coarse-grained for accurate parsing, but with access to local context they can be made more informative; (2) leaving out the middle tag avoids duplication because this is already known to the parser; (3) dependency edges are often local, as shown in Figure 1 , i.e., there are dependency relations between most words and their immediate neighbours. Consequently, training our embeddings to predict adjacent tags is likely to learn similar information to training over dependency edges. 3 Bansal et al. (2014) studied the effect of word embeddings on dependency parsing, and found that larger embedding windows captured more semantic information, while smaller windows better reflected syntax. Therefore we choose a small \u00b11 word window in our experiments. We also experimented with bigger win- 2014dows (\u00b12, \u00b13) but observed performance degradation in these cases, supporting the argument above.",
"cite_spans": [
{
"start": 555,
"end": 556,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 319,
"end": 327,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
{
"text": "Step 4 of Algorithm 1 finds the word embeddings as a side-effect of training a neural language model. We use the skip-gram model (Mikolov et al., 2013) , trained to predict context tags for each word. The model is formulated as a simple bilinear logistic classifier",
"cite_spans": [
{
"start": 129,
"end": 151,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (t c |w) = exp(u tc v w ) T z=1 exp(u z v w )",
"eq_num": "(1)"
}
],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
{
"text": "where t c is the context tag around the current word w, U \u2208 R T \u00d7D is the tag embedding matrix, V \u2208 R V \u00d7D is the word embedding matrix, with T the number of tags, V is the total number of word types over both languages and D the capacity of the embeddings. Given a training set of word and POS contexts,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
{
"text": "(t L i , w i , t R i ) N i=1 , 4 we maximize the log-likelihood N i=1 log P (t L i |w i ) + log P (t R i |w i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
{
"text": "with respect to U and V using stochastic gradient descent. The learned V matrix of word embeddings is later used in parser training (the source word embeddings) and inference (the target word embeddings).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},
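{
"text": "For concreteness, the following numpy sketch (our own illustration, not the authors' code; the sizes and learning rate are arbitrary) implements equation (1) and one stochastic gradient step on the log-likelihood above.\n\nimport numpy as np\n\nT, V_types, D = 12, 1000, 50            # tags, word types (both languages), dimensions\nU = 0.01 * np.random.randn(T, D)        # tag embedding matrix\nV = 0.01 * np.random.randn(V_types, D)  # word embedding matrix\n\ndef log_p(tag, word):\n    # equation (1): log P(t_c|w) = u_t . v_w - log sum_z exp(u_z . v_w)\n    scores = U @ V[word]\n    scores = scores - scores.max()      # numerical stability\n    return scores[tag] - np.log(np.exp(scores).sum())\n\ndef sgd_step(t_left, word, t_right, lr=0.1):\n    # one ascent step on log P(t_L|w) + log P(t_R|w) with respect to U and V\n    global U\n    for tag in (t_left, t_right):\n        scores = U @ V[word]\n        p = np.exp(scores - scores.max())\n        p = p / p.sum()                 # softmax over tags\n        grad_v = U[tag] - p @ U         # gradient of log P with respect to v_w\n        U = U - lr * np.outer(p, V[word])\n        U[tag] += lr * V[word]          # gradient w.r.t. u_z is (1[z=t] - p_z) v_w\n        V[word] += lr * grad_v\n\nsgd_step(t_left=3, word=42, t_right=7)\nprint(log_p(3, 42))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Word Embedding",
"sec_num": "3.2"
},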
{
"text": "In this Section, we show how to incorporate the syntactic word embeddings into a parsing model. Our parsing model is built based on the work of Chen and Manning (2014) . They built a transition-based dependency parser using a neuralnetwork. The neural network classifier will decide which transition is applied for each configuration.",
"cite_spans": [
{
"start": 144,
"end": 167,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "3.3"
},
{
"text": "The architecture of the parser is illustrated in Figure 2 , where each layer is fully connected to the layer above.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "3.3"
},
{
"text": "For each configuration, the selected list of words, POS tags and labels from the Stack, Queue and Arcs are extracted. Each word, POS or label is mapped to a low-dimension vector representation (embedding) through the Mapping Layer. This layer simply concatenates the embeddings which are then fed into a two-layer neural network classifier to predict the next parsing action. The set of parameters for the neural network classifier is E word , E pos , E labels for the mapping layer, W 1 for the hidden layer and W 2 for the soft-max output layer. We incorporate the syntactic word embeddings into the neural network model by setting E word to the syntactic word embeddings, which remain fixed during training so as to retain the cross-lingual mapping. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "3.3"
},
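{
"text": "A minimal numpy sketch of this architecture follows (our own simplification, not the authors' code: we fold words, POS tags and labels into a single lookup table, use illustrative dimensions, and replace the cube activation of Chen and Manning (2014) with a ReLU).\n\nimport numpy as np\n\nn_feat, D, H, A = 48, 50, 200, 81           # features per configuration, dims, actions\nE_word = np.random.randn(5000, D)           # set to the pre-trained syntactic embeddings,\n                                            # then held FIXED during parser training\nW1 = 0.01 * np.random.randn(H, n_feat * D)  # hidden layer\nW2 = 0.01 * np.random.randn(A, H)           # softmax output layer\n\ndef predict_action(feature_ids):\n    # mapping layer: look up and concatenate the embeddings of the selected\n    # items from the Stack, Queue and Arcs\n    x = E_word[feature_ids].reshape(-1)\n    h = np.maximum(0.0, W1 @ x)             # hidden layer\n    scores = W2 @ h                         # one score per parsing action\n    return int(scores.argmax())             # the next transition to apply\n\nprint(predict_action(np.arange(n_feat)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "3.3"
},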
{
"text": "To apply the parser to a resource-poor target language, we start by building syntactic word embeddings between source and target languages as shown in algorithm 1. Next we incorporate syntactic word embeddings using the algorithm proposed in Section 3.3. The third step is to substitute source-with target-language syntactic word embeddings. Finally, we parse the target language using this substituted model. In this way, the model will recognize lexical items for the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Summary",
"sec_num": "3.4"
},
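{
"text": "The four steps can be summarized in the toy sketch below (entirely our own illustration: the parser is reduced to a stub so that only the embedding substitution is concrete).\n\nclass DelexParser:\n    def __init__(self, E_word):\n        self.E_word = E_word               # mapping-layer word embeddings\n\n    def parse(self, sentence):\n        # stand-in for the trained transition-based parser of Section 3.3\n        return [(i, i - 1) for i in range(1, len(sentence))]\n\n# steps 1-2: build cross-lingual syntactic embeddings, train the parser on the source treebank\nsource_E = {'looks': [0.1, 0.2]}           # placeholder source-language embeddings\ntarget_E = {'luce': [0.1, 0.2]}            # same syntactic space, target vocabulary\nparser = DelexParser(E_word=source_E)\n# step 3: substitute source- with target-language syntactic word embeddings\nparser.E_word = target_E\n# step 4: parse the target language directly with the substituted model\nprint(parser.parse(['tu', 'mascota', 'luce', 'encantadora']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Summary",
"sec_num": "3.4"
},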
{
"text": "We test our method of incorporating syntactic word embeddings into a neural network parser, for both the existing CoNLL dataset (Buchholz and Marsi, 2006; Nivre et al., 2007) and the newlyreleased Universal Dependency Treebank (Nivre et al., 2015) . We employed the Unlabeled Attachment Score (UAS) without punctuation for comparison with prior work on the CoNLL dataset. Where possible we also report Labeled Attachment Score (LAS) without punctuation. We use English as the source language for this experiment.",
"cite_spans": [
{
"start": 128,
"end": 154,
"text": "(Buchholz and Marsi, 2006;",
"ref_id": "BIBREF4"
},
{
"start": 155,
"end": 174,
"text": "Nivre et al., 2007)",
"ref_id": "BIBREF26"
},
{
"start": 227,
"end": 247,
"text": "(Nivre et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In this section we report experiments involving the CoNLL-X and CoNLL-07 datasets. Running on this dataset makes our model comparable with prior work. For languages included in both datasets, we use the newer one only. Crucially, for the delexicalized parser we map language-specific tags to the universal tagset (Petrov et al., 2012) . The syntactic word embeddings are trained using POS information from the CoNLL data.",
"cite_spans": [
{
"start": 313,
"end": 334,
"text": "(Petrov et al., 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on CoNLL Data",
"sec_num": "4.1"
},
{
"text": "There are two baselines for our experiment. The first one is the unsupervised dependency parser of Klein and Manning (2004) , the second one is the delexicalized parser of T\u00e4ckstr\u00f6m et al. (2012) . We also compare our syntactic word embedding with the cross-lingual word embeddings of Hermann and Blunsom (2014a). These word embeddings are induced by running each language pair using Europarl (Koehn, 2005) . We incorporated Hermann and Blunsom (2014a)'s crosslingual word embeddings into the parsing model in the same way as for the syntactic word embeddings. Table 1 shows the UAS for 8 languages for several models. The first observation is that the direct transfer delexicalized parser outperformed the unsupervised approach. This is consistent with many prior studies. Our implementation of the direct transfer model performed on par with T\u00e4ckstr\u00f6m et al. (2012) on average. Table 1 also shows that using HB embeddings improve the performance over the Direct Transfer model. Our model using syntactic word embedding consistently out-performed the Direct Transfer model and HB embedding across all 8 languages. On average, it is 1.5% and 1.3% better. 6 The improvement varies across languages compared with HB embedding, and falls in the range of 0.3 to 2.6%. This confirms our initial hypothesis that we need word embeddings that capture syntactic instead of semantic information.",
"cite_spans": [
{
"start": 99,
"end": 123,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF18"
},
{
"start": 172,
"end": 195,
"text": "T\u00e4ckstr\u00f6m et al. (2012)",
"ref_id": "BIBREF31"
},
{
"start": 393,
"end": 406,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF19"
},
{
"start": 1155,
"end": 1156,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 561,
"end": 568,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 880,
"end": 887,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments on CoNLL Data",
"sec_num": "4.1"
},
{
"text": "It is not strictly fair to compare our method with prior approaches to unsupervised dependency parsing, since they have different resource requirement, i.e. parallel data or typological resources. Compared with the baseline of the direct transfer model, our approach delivered a 1.5% mean performance gain, whereas T\u00e4ckstr\u00f6m et al. (2012) and McDonald et al. (2011) report approximately 3% gain, Ma and Xia (2014) and Naseem et al. (2012) have stated above, our approach is complementary to the approaches used in these other systems. For example, we could incorporate the cross-lingual word clustering feature (T\u00e4ckstr\u00f6m et al., 2012) or WALS features (Naseem et al., 2012) into our model, or use our improved delexicalized parser as the reference model for Ma and Xia (2014), which we expect would lead to better results yet.",
"cite_spans": [
{
"start": 315,
"end": 338,
"text": "T\u00e4ckstr\u00f6m et al. (2012)",
"ref_id": "BIBREF31"
},
{
"start": 343,
"end": 365,
"text": "McDonald et al. (2011)",
"ref_id": "BIBREF23"
},
{
"start": 396,
"end": 413,
"text": "Ma and Xia (2014)",
"ref_id": "BIBREF20"
},
{
"start": 418,
"end": 438,
"text": "Naseem et al. (2012)",
"ref_id": "BIBREF25"
},
{
"start": 611,
"end": 635,
"text": "(T\u00e4ckstr\u00f6m et al., 2012)",
"ref_id": "BIBREF31"
},
{
"start": 653,
"end": 674,
"text": "(Naseem et al., 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on CoNLL Data",
"sec_num": "4.1"
},
{
"text": "We also experimented with the Universal Dependency Treebank V1.0, which has many desirable properties for our system, e.g. dependency types and coarse POS are the same across languages. This removes the need for mapping the source and target language tagsets to a common tagset, as was done for the CoNLL data. Secondly, instead of only reporting UAS we can report LAS, which is impossible on CoNLL dataset where the dependency edge labels differed among languages. Table 2 shows the size in thousands of tokens for each language in the treebank. The first thing to observe is that some languages have abundant amount of data such as Czech (cs), French (fr) and Spanish (es). However, there are languages with modest size i.e. Hungarian (hu) and Irish (ga).",
"cite_spans": [],
"ref_spans": [
{
"start": 466,
"end": 473,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments with Universal Dependency Treebank",
"sec_num": "4.2"
},
{
"text": "We ran our model with and without syntactic word embeddings for all languages with English as the source language. The results are shown in Table 3 . The first observation is that our model using syntactic word embeddings out-performed direct transfer for all the languages on both UAS and LAS. We observed an average improvement of 3.6% (UAS) and 3.1% (LAS). This consistent improvement shows the robustness of our method of incorporating syntactic word embedding to the model. The second observation is that the gap between UAS and LAS is as big as 13% on average for both models. This reflects the increase difficulty of labelling the edges, with unlabelled edge prediction involving only a 3-way classification 7 while labelled edge prediction involves an 81-way classification. 8 Narrowing the gap between UAS and LAS for resource-poor languages is an important research area for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments with Universal Dependency Treebank",
"sec_num": "4.2"
},
{
"text": "In the previous sections, we used English as the source language. However, English might not be the best choice. For the delexicalized parser, it is crucial that the source and target languages have similar syntactic structures. Therefore a different choice of source language might substantially change the performance, as observed in prior studies (T\u00e4ckstr\u00f6m et al., 2013; Duong et al., 2013a; McDonald et al., 2011) . cs de es fi fr ga hu it sv UAS LAS Direct Transfer 47.2 57.9 64.7 44.9 64.8 49.1 47.8 64.9 55.5 55.2 42.7 Our Model + Syntactic embedding 50.2 60.9 67.9 51.4 66.0 51.6 52.3 69.2 59.6 58.8 45.8 Table 3 : Results comparing a direct transfer parser and our model with syntactic word embeddings. Evaluating UAS over the Universal Dependency Treebank. (We observed a similar pattern for LAS.) The rightmost UAS and LAS columns shows the average scores for the respective metric across 9 languages. Table 4 : UAS for each language pair in the Universal Dependency Treebank using our best model. The UAS/LAS column show the average UAS/LAS for all target languages, excluding the source language. The best UAS for each target language is shown in bold.",
"cite_spans": [
{
"start": 350,
"end": 374,
"text": "(T\u00e4ckstr\u00f6m et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 375,
"end": 395,
"text": "Duong et al., 2013a;",
"ref_id": "BIBREF10"
},
{
"start": 396,
"end": 418,
"text": "McDonald et al., 2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 614,
"end": 621,
"text": "Table 3",
"ref_id": null
},
{
"start": 914,
"end": 921,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Different Source Languages",
"sec_num": "5"
},
{
"text": "In this section we assume that we have multiple source languages. To see how the performance changes when using a different source language, we run our best model (i.e., using syntactic embeddings) for each language pair in the Universal Dependency Treebank. Table 4 shows the UAS for each language pair, and the average across all target languages for each source language. We also considered LAS, but observed similar trends, and therefore only report the average LAS for each source language. Observe that English is rarely the best source language; Czech and French give a higher average UAS and LAS, respectively. Interestingly, while Czech gives high UAS on average, it performs relatively poorly in terms of LAS.",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Different Source Languages",
"sec_num": "5"
},
{
"text": "One might expect that the relative performance from using different source languages is affected by the source corpus size, which varies greatly. We tested this question by limiting the source corpora 66K sentences (and excluded the very small ga and hu datasets), which resulted in a slight reduction in scores but overall a near identical pattern of results to the use of the full sized source corpora reported in Table 4 . Only in one instance did the best source language change (for target fi with source de not cs), and the average rankings by UAS and LAS remained unchanged.",
"cite_spans": [],
"ref_spans": [
{
"start": 416,
"end": 423,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Different Source Languages",
"sec_num": "5"
},
{
"text": "The ten languages considered belong to five families: Romance (French, Spanish, Italian), Germanic (German, English, Swedish), Slavic (Czech), Uralic (Hungarian, Finnish), and Celtic (Irish). At first glance it seems that language pairs in the same family tend to perform well. For example, the best source language for both French and Italian is Spanish, while the best source language for Spanish is French. However, this doesn't hold true for many target languages. For example, the best source language for both Finnish and German is Czech. It appears that the best choice of an appropriate source language is not predictable from language family information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Source Languages",
"sec_num": "5"
},
{
"text": "We therefore propose two methods to predict the best source language for a given target language. In devising these methods we assume that for a given resource-poor target language we do not have access to any parsed data, as this is expensive to construct. The first method is based on the Jensen-Shannon divergence between the distributions of POS n-grams (1 < n < 6) in a pair of languages. The second method converts each language into a vector of binary features based on word-order information from WALS, the World .9 76.7 60.5 64.9 52.0 Table 5 : UAS for target languages where the source language is selected in different ways. English uses English as the source language. WALS and POS choose the best source language using the WALS or POS ngrams based methods, respectively. Oracle always uses the best source language. Combined is the model that combines information from all available sources language. The UAS/LAS columns show the UAS/LAS average performance across 9 languages (English is excluded). (Dryer and Haspelmath, 2013) . These features include the relative order of adjective and noun, etc, and we compute the cosine similarity between the vectors for a pair of languages.",
"cite_spans": [
{
"start": 1013,
"end": 1041,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 544,
"end": 551,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Different Source Languages",
"sec_num": "5"
},
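{
"text": "Both selection metrics are simple to compute. The sketch below (our own illustration, with toy inputs) scores a candidate source language against the target by summed Jensen-Shannon divergence over POS n-gram distributions, and compares WALS-style binary feature vectors by cosine similarity.\n\nimport math\nfrom collections import Counter\n\ndef pos_ngram_dist(tag_sequences, n):\n    counts = Counter(tuple(seq[i:i + n]) for seq in tag_sequences\n                     for i in range(len(seq) - n + 1))\n    total = sum(counts.values())\n    return {g: c / total for g, c in counts.items()}\n\ndef js_divergence(p, q):\n    # Jensen-Shannon divergence between two n-gram distributions\n    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}\n    kl = lambda a: sum(v * math.log(v / m[k]) for k, v in a.items() if v > 0)\n    return 0.5 * kl(p) + 0.5 * kl(q)\n\n# toy universal-POS sequences; a lower total divergence suggests a better source\nsrc_tags = [['PRON', 'NOUN', 'VERB', 'ADJ', 'PUNCT', 'X']]\ntgt_tags = [['PRON', 'NOUN', 'VERB', 'ADV', 'PUNCT', 'X']]\nscore = sum(js_divergence(pos_ngram_dist(src_tags, n), pos_ngram_dist(tgt_tags, n))\n            for n in range(2, 6))          # the paper uses 1 < n < 6\n\ndef cosine(u, v):\n    dot = sum(a * b for a, b in zip(u, v))\n    return dot / math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))\n\n# toy binary word-order features from WALS (e.g. adjective-noun order)\nprint(score, cosine([1, 0, 1, 1], [1, 1, 1, 0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Source Languages",
"sec_num": "5"
},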
{
"text": "As an alternative to selecting a single source language, we further propose a method to combine information from all available source languages to build a parser for a target language. To do so we first train the syntactic word embeddings on all the languages. After this step, lexical items from all source languages and the target language will be in the same space. We train our parser with syntactic word embeddings on the combined corpus of all source languages. This parser is then applied to the target language directly. The intuition here is that training on multiple source languages limits over-fitting to the source language, and learns the \"universal\" structure of languages. Table 5 shows the performance of each target language with the source language given by the model (in the case of models that select a single source language). Always choosing English as the source language performs worst. Using WALS features out-performs English on 7 out of 9 languages. Using POS ngrams out-performs the WALS feature model on average for both UAS and LAS, although the improvement is small. The combined model, which combines information from all available source languages, out-performs choosing a single source language. Moreover, this model performs even better than the oracle model, which always chooses the single best source language, especially for LAS. Compared with the baseline of always choosing English, our combined model gives an improvement about 6% for both UAS and LAS.",
"cite_spans": [],
"ref_spans": [
{
"start": 689,
"end": 696,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Atlas of Language Structures",
"sec_num": null
},
{
"text": "Most prior work on cross-lingual transfer dependency parsing has relied on large parallel corpora. However, parallel data is scarce for resource-poor languages. In the first part of this paper we investigated building a dependency parser for a resourcepoor language without parallel data. We improved the performance of a delexicalized parser using syntactic word embeddings using a neural network parser. We showed that syntactic word embeddings are better at capturing syntactic information, and particularly suitable for dependency parsing. In contrast to the state-of-the-art for unsupervised cross-lingual dependency parsing, our method does not rely on parallel data. Although the state-of-the-art achieves bigger gains over the baseline than our method, our approach could be more-widely applied to resource-poor languages because of its lower resource requirements. Moreover, we have described how our method could be used to complement previous approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The second part of this paper studied ways of improving performance when multiple source languages are available. We proposed two methods to select a single source language that both lead to improvements over always choosing English as the source language. We then showed that we can further improve performance by combining information from all the source languages. In summary, without any parallel data, we managed to improve the direct transfer delexicalized parser by about 10% for both UAS and LAS on average, for 9 languages in the Universal Dependency Treebank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In this paper we focused only on word embeddings, however, in future work we could also build the POS embeddings and the arc-label embeddings across languages. This could help our system to move more freely across languages, facilitating not only the development of NLP for resource-poor languages, but also cross-language comparisons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Note that most research in this area (as do we) evaluates on simulated low-resource languages, through selective use of data in high-resource languages. Consequently parallel data is plentiful, however this is often not the case in the real setting, e.g., for Tagalog, where only scant parallel data exists (e.g., dictionaries, Wikipedia and the Bible).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Later we consider multiple source languages, but for now assume a single source language.3 For the 16 languages in the CoNLL-X and CoNLL-07 datasets we observed that approx. 50% of dependency relations span a distance of one word and 20% span two words. Thus our POS context of a \u00b11 word window captures the majority of dependency relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that w here can be a word type in either the source or target language, such that both embeddings will be learned for all word types in both languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is a consequence of only training the parser on the source language. If we were to update embeddings during parser training this would mean they no longer align with the target language embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All performance comparisons in this paper are absolute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Since there are only 3 transitions: SHIFT, LEFT-ARC, RIGHT-ARC.8 Since the Universal Dependency Treebank has 40 universal relations, each relation is attached to LEFT-ARC or RIGHT-ARC. The number 81 comes from 1 (SHIFT) + 40 (LEFT-ARC) + 40 (RIGHT-ARC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the University of Melbourne and National ICT Australia (NICTA). Dr Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How much do word embeddings encode about syntax?",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "822--827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas and Dan Klein. 2014. How much do word embeddings encode about syntax? In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 822-827, Baltimore, Maryland.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "809--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 809- 815.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Prague Dependency Treebank: A Three-Level Annotation Scenario",
"authors": [
{
"first": "Alena",
"middle": [],
"last": "B\u00f6hmov\u00e1",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Haji\u010dov\u00e1",
"suffix": ""
},
{
"first": "Barbora",
"middle": [],
"last": "Hladk\u00e1",
"suffix": ""
}
],
"year": 2001,
"venue": "Treebanks: Building and Using Syntactically Annotated Corpora",
"volume": "",
"issue": "",
"pages": "103--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alena B\u00f6hmov\u00e1, Jan Haji\u010d, Eva Haji\u010dov\u00e1, and Barbora Hladk\u00e1. 2001. The Prague Dependency Tree- bank: A Three-Level Annotation Scenario. In Anne Abeill\u00e9, editor, Treebanks: Building and Using Syn- tactically Annotated Corpora, pages 103-127.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "deSouza",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mer- cer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural lan- guage. Computational Linguistics, 18:467-479.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "CoNLL-X shared task on multilingual dependency parsing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computa- tional Natural Language Learning.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A shortest path dependency kernel for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [
"C"
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "724--731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation ex- traction. In Proceedings of the Conference on Hu- man Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pages 724-731.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Feature embedding for dependency parsing",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "816--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenliang Chen, Yue Zhang, and Min Zhang. 2014. Feature embedding for dependency parsing. In Pro- ceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Techni- cal Papers, pages 816-826, Dublin, Ireland.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Question answering passage retrieval using dependency relations",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Renxu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Keya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '05",
"volume": "",
"issue": "",
"pages": "400--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. Question answering passage retrieval using dependency relations. In Proceed- ings of the 28th Annual International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '05, pages 400-407, New York, NY, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Increasing the quality and quantity of source language data for Unsupervised Cross-Lingual POS tagging",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1243--1249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Paul Cook, Steven Bird, and Pavel Pecina. 2013a. Increasing the quality and quan- tity of source language data for Unsupervised Cross- Lingual POS tagging. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1243-1249, Nagoya, Japan.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Simpler unsupervised POS tagging with bilingual projections",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "634--639",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Paul Cook, Steven Bird, and Pavel Pecina. 2013b. Simpler unsupervised POS tagging with bilingual projections. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 634-639, Sofia, Bulgaria.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "What can we get from 1000 tokens? a case study of multilingual pos tagging for resource-poor languages",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "886--897",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Trevor Cohn, Karin Verspoor, Steven Bird, and Paul Cook. 2014. What can we get from 1000 tokens? a case study of multilingual pos tag- ging for resource-poor languages. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 886-897, Doha, Qatar.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Real-world semi-supervised learning of postaggers for low-resource languages",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Mielens",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013)",
"volume": "",
"issue": "",
"pages": "583--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Garrette, Jason Mielens, and Jason Baldridge. 2013. Real-world semi-supervised learning of pos- taggers for low-resource languages. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013), pages 583- 592, Sofia, Bulgaria.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The pascal challenge on grammar induction",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Gelling",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Joo",
"middle": [],
"last": "Graa",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Gelling, Trevor Cohn, Phil Blunsom, and Joo Graa. 2012. The pascal challenge on grammar in- duction.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual Distributed Representations without Word Alignment",
"authors": [],
"year": 2014,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014a. Mul- tilingual Distributed Representations without Word Alignment. In Proceedings of ICLR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multilingual models for compositional distributed semantics",
"authors": [],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014b. Mul- tilingual models for compositional distributed se- mantics. CoRR, abs/1404.4641.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bootstrapping parsers via syntactic projection across parallel texts",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Weinberg",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Cabezas",
"suffix": ""
},
{
"first": "Okan",
"middle": [],
"last": "Kolak",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Engineering",
"volume": "11",
"issue": "",
"pages": "311--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11:311-325.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Corpusbased induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher Manning. 2004. Corpus- based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting on Association for Computa- tional Linguistics, ACL '04.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Europarl: A Parallel Corpus for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Tenth Machine Translation Summit (MT Summit X)",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of the Tenth Machine Translation Summit (MT Summit X), pages 79-86, Phuket, Thailand.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1337--1348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Fei Xia. 2014. Unsupervised depen- dency parsing with transferring distribution via par- allel guidance and entropy regularization. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1337-1348.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of de- pendency parsers. In Proceedings of the 43rd An- nual Meeting on Association for Computational Lin- guistics, ACL '05, pages 91-98.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005b. Non-projective dependency pars- ing using spanning tree algorithms. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Pro- cessing, HLT '05, pages 523-530.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Multi-source transfer of delexicalized dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, EMNLP '11, pages 62-72.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed repre- sentations of words and phrases and their composi- tionality. In C.j.c. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Selective sharing for multilingual dependency parsing",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "629--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers -Volume 1, ACL '12, pages 629- 637.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The CoNLL 2007 shared task on dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mc-Donald",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "915--932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, Sandra K\u00fcbler, Ryan Mc- Donald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on de- pendency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915-932, Prague, Czech Republic.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Text classification with the support of pruned dependency patterns",
"authors": [
{
"first": "Tunga",
"middle": [],
"last": "Levent\u00f6zg\u00fcr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "G\u00fcng\u00f6r",
"suffix": ""
}
],
"year": 2010,
"venue": "Pattern Recognition Letter",
"volume": "31",
"issue": "",
"pages": "1598--1607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levent\u00d6zg\u00fcr and Tunga G\u00fcng\u00f6r. 2010. Text classi- fication with the support of pruned dependency pat- terns. Pattern Recognition Letter, 31:1598-1607.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A universal part-of-speech tagset",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceed- ings of the Eight International Conference on Lan- guage Resources and Evaluation (LREC'12), Istan- bul, Turkey.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Data point selection for crosslanguage adaptation of dependency parsers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers",
"volume": "2",
"issue": "",
"pages": "682--686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross- language adaptation of dependency parsers. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies: Short Papers -Volume 2, HLT '11, pages 682-686.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cross-lingual word clusters for direct transfer of linguistic structure",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12",
"volume": "",
"issue": "",
"pages": "477--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Jakob Uszko- reit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12, pages 477-487.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Target language adaptation of discriminative transfer parsers",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1061--1071",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discrimina- tive transfer parsers. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1061-1071, Atlanta, Georgia.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Word representations: A simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev-Arie",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, Up- psala, Sweden.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning, chapter Distributed Word Representation Learning for Cross-Lingual Dependency Parsing",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yuhong",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "119--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Xiao and Yuhong Guo, 2014. Proceedings of the Eighteenth Conference on Computational Natural Language Learning, chapter Distributed Word Rep- resentation Learning for Cross-Lingual Dependency Parsing, pages 119-129.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Using a dependency parser to improve smt for subject-object-verb languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jaeho",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a dependency parser to improve smt for subject-object-verb languages. In Proceed- ings of Human Language Technologies: The 2009",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "245--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 245-253, Boulder, Colorado.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Cross-language parser adaptation between related languages",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "IJCNLP-08 Workshop on NLP for Less Privileged Languages",
"volume": "",
"issue": "",
"pages": "35--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Univerzita Karlova, and Philip Resnik. 2008. Cross-language parser adaptation between re- lated languages. In In IJCNLP-08 Workshop on NLP for Less Privileged Languages, pages 35-42.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "Will",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1393--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Y. Zou, Richard Socher, Daniel Cer, and Christo- pher D. Manning. 2013. Bilingual word embed- dings for phrase-based machine translation. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1393- 1398, Seattle, Washington, USA.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"content": "<table><tr><td>report an approximately 6% gain. As we</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Comparative results on the CoNLL corpora showing UAS for several parsers: unsupervised inductionKlein and Manning (2004), Direct Transfer (DT) delexicalized parser ofT\u00e4ckstr\u00f6m et al. (2012), our implementation of Direct Transfer and our neural network parsing model using cross-lingual embeddings Hermann and Blunsom (2014a) (HB) and our proposed syntactic embeddings."
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Number of tokens (\u00d7 1000) for each language in the Universal Dependency Treebank."
},
"TABREF5": {
"content": "<table><tr><td/><td>cs</td><td>de</td><td>en</td><td>es</td><td>fi</td><td>fr</td><td>ga</td><td>hu</td><td>it</td><td>sv</td><td>UAS LAS</td></tr><tr><td>English</td><td colspan=\"3\">50.2 60.9 -</td><td>67.9</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "51.4 66.0 51.6 52.3 69.2 59.6 58.8 45.8 WALS 50.2 59.2 44.5 72.1 51.4 73.3 53.8 44.6 77.4 59.6 60.2 47.1 POS 49.1 58.5 53.2 74.8 53.7 73.3 53.8 56.7 76.2 56.8 61.4 47.7 Oracle 60.5 65.9 63.2 74.8 53.7 73.3 59.0 56.7 77.4 59.6 64.5 50.8 Combined 61.1 67.5 64.4 75.1 54.2 72.8 58.7 57"
}
}
}
}