|
{ |
|
"paper_id": "D12-1001", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:23:12.508953Z" |
|
}, |
|
"title": "Syntactic Transfer Using a Bilingual Lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Pauls", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"settlement": "Berkeley" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We consider the problem of using a bilingual dictionary to transfer lexico-syntactic information from a resource-rich source language to a resource-poor target language. In contrast to past work that used bitexts to transfer analyses of specific sentences at the token level, we instead use features to transfer the behavior of words at a type level. In a discriminative dependency parsing framework, our approach produces gains across a range of target languages, using two different lowresource training methodologies (one weakly supervised and one indirectly supervised) and two different dictionary sources (one manually constructed and one automatically constructed).", |
|
"pdf_parse": { |
|
"paper_id": "D12-1001", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We consider the problem of using a bilingual dictionary to transfer lexico-syntactic information from a resource-rich source language to a resource-poor target language. In contrast to past work that used bitexts to transfer analyses of specific sentences at the token level, we instead use features to transfer the behavior of words at a type level. In a discriminative dependency parsing framework, our approach produces gains across a range of target languages, using two different lowresource training methodologies (one weakly supervised and one indirectly supervised) and two different dictionary sources (one manually constructed and one automatically constructed).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Building a high-performing parser for a language with no existing treebank is still an open problem. Methods that use no supervision at all (Klein and Manning, 2004) or small amounts of manual supervision (Haghighi and Klein, 2006; Cohen and Smith, 2009; Naseem et al., 2010; Berg-Kirkpatrick and Klein, 2010) have been extensively studied, but still do not perform well enough to be deployed in practice. Projection of dependency links across aligned bitexts (Hwa et al., 2005; Ganchev et al., 2009; Smith and Eisner, 2009) gives better performance, but crucially depends on the existence of large, in-domain bitexts. A more generally applicable class of methods exploits the notion of universal part of speech tags (Petrov et al., Figure 1 : Sentences in English and German both containing words that mean \"demand.\" The fact that the English demand takes nouns on its left and right indicates that the German verlangen should do the same, correctly suggesting attachments to Verzicht and Gewerkschaften. to train parsers that can run on any language with no adaptation or unsupervised adaptation (Cohen et al., 2011) . While these universal parsers currently constitute the highest-performing methods for languages without treebanks, they are inherently limited by operating at the coarse POS level, as lexical features are vital to supervised parsing models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 165, |
|
"text": "(Klein and Manning, 2004)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 231, |
|
"text": "(Haghighi and Klein, 2006;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 254, |
|
"text": "Cohen and Smith, 2009;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 275, |
|
"text": "Naseem et al., 2010;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 309, |
|
"text": "Berg-Kirkpatrick and Klein, 2010)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 478, |
|
"text": "(Hwa et al., 2005;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 500, |
|
"text": "Ganchev et al., 2009;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 524, |
|
"text": "Smith and Eisner, 2009)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 717, |
|
"end": 732, |
|
"text": "(Petrov et al.,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1098, |
|
"end": 1118, |
|
"text": "(Cohen et al., 2011)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 733, |
|
"end": 741, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we consider augmenting delexicalized parsers by transferring syntactic information through a bilingual lexicon at the word type level. These parsers are delexicalized in the sense that, although they receive target language words as input, their feature sets do not include indicators on those words. This setting is appropriate when there is too little target language data to learn lexical features directly. Our main approach is to add features which are lexical in the sense that they compute a function of specific target language words, but are still un-lexical in the sense that all lexical knowledge comes from the bilingual lexicon and training data in the source language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consider the example English and German sentences shown in Figure 1 , and suppose that we wish to parse the German side without access to a German treebank. A delexicalized parser operating at the part of speech level does not have sufficient information to make the correct decision about, for example, the choice of subcategorization frame for the verb verlangen. However, demand, a possible English translation of verlangen, takes a noun on its left and a noun on its right, an observation that in this case gives us the information we need. We can fire features in our German parser on the attachments of Gewerkschaften and Verzicht to verlangen indicating that similar-looking attachments are attested in English for an English translation of verlangen. This allows us to exploit fine-grained lexical cues to make German parsing decisions even when we have little or no supervised German data; moreover, this syntactic transfer is possible even in spite of the fact that demand and verlangen are not observed in parallel context.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 67, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Using type-level transfer through a dictionary in this way allows us to decouple the lexico-syntactic projection from the data conditions under which we are learning the parser. After computing feature values using source language resources and a bilingual lexicon, our model can be trained very simply using any appropriate training method for a supervised parser. Furthermore, because the transfer mechanism is just a set of features over word types, we are free to derive our bilingual lexicon either from bitext or from a manually-constructed dictionary, making our method strictly more general than those of Mc-Donald et al. (2011) or T\u00e4ckstr\u00f6m et al. (2012) , who rely centrally on bitext. This flexibility is potentially useful for resource-poor languages, where a humancurated bilingual lexicon may be broader in coverage or more robust to noise than a small, domainlimited bitext. Of course, it is an empirical question whether transferring type level information about word behavior is effective; we show that, indeed, this method compares favorably with other transfer mechanisms used in past work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 613, |
|
"end": 636, |
|
"text": "Mc-Donald et al. (2011)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 640, |
|
"end": 663, |
|
"text": "T\u00e4ckstr\u00f6m et al. (2012)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The actual syntactic information that we transfer consists of purely monolingual lexical attachment statistics computed on an annotated source language resource. 1 While the idea of using large-scale summary statistics as parser features has been considered previously (Koo et al., 2008; Bansal and Klein, 2011; Zhou et al., 2011) , doing so in a projection setting is novel and forces us to design features suitable for projection through a bilingual lexicon. Our features must also be flexible enough to provide benefit even in the presence of cross-lingual syntactic differences and noise introduced by the bilingual dictionary.", |
|
"cite_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 287, |
|
"text": "(Koo et al., 2008;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 311, |
|
"text": "Bansal and Klein, 2011;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 330, |
|
"text": "Zhou et al., 2011)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Under two different training conditions and with two different varieties of bilingual lexicons, we show that our method of lexico-syntactic projection does indeed improve the performance of parsers that would otherwise be agnostic to lexical information. In all settings, we see statistically significant gains for a range of languages, with our method providing up to 3% absolute improvement in unlabeled attachment score (UAS) and 11% relative error reduction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The projected lexical features that we propose in this work are based on lexicalized versions of features found in MSTParser (McDonald et al., 2005) , an edge-factored discriminative parser. We take MST-Parser to be our underlying parsing model and use it as a testbed on which to evaluate the effectiveness of our method for various data conditions. 2 By instantiating the basic MSTParser features over coarse parts of speech, we construct a state-of-the-art delexicalized parser in the style of , where feature weights can be directly transferred from a source language or languages to a desired target language. When we add projected lexical features on top of this baseline parser, we do so in a way that does not sacrifice this generality: while our new features take on values that are languagespecific, they interact with the model at a languageindependent level. We therefore have the best of 1 Throughout this work, we will use English as the source language, but it is possible to use any language for which the appropriate bilingual lexicons and treebanks exist. One might expect to find the best performance from using a source language closely related to the target. 2 We train MSTParser using the included implementation of MIRA (Crammer and Singer, 2001 ) and use projective decoding for all experiments described in this paper. We form a query from each stipulated set of characteristics, compute the values of these queries heuristically, and then fire a feature based on each query's signature. Signatures indicate which attachment properties were considered, which part of the query was lexicalized (shown by brackets here), and the POS of the query word. This procedure yields a small number of real-valued features that still capture rich lexico-syntactic information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 148, |
|
"text": "MSTParser (McDonald et al., 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1180, |
|
"end": 1181, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1243, |
|
"end": 1268, |
|
"text": "(Crammer and Singer, 2001", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "two worlds in that our features can be learned on any treebank or treebanks that are available to us, but still exploit highly specific lexical information to achieve performance gains over using coarse POS features alone.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DELEX", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our DELEX feature set consists of all of the unlexicalized features in MSTParser, only lightly modified to improve performance for our setting. McDonald et al. (2005) present three basic types of such features, ATTACH, INBETWEEN, and SURROUNDING, which we apply at the coarse POS level. The AT-TACH features for a given dependency link consist of indicators of the tags of the head and modifier, separately as well as together. The INBETWEEN and SURROUNDING features are indicators on the tags of the head and modifier in addition to each intervening tag in turn (INBETWEEN) or various combinations of tags adjacent to the head or modifier (SURROUNDING). 3 MSTParser by default also includes a copy of each of these indicator features conjoined with the direction and distance of the attachment it denotes. These extra features are important to getting good performance out of the baseline model. We slightly modify the conjunction scheme and expand it with additional backed-off conjunctions, since these changes lead to features that empirically transfer better than the MSTParser defaults. Specifically, we use conjunctions with attachment direction (left or right), coarsened distance, 4 and attachment direction and coarsened distance combined.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 166, |
|
"text": "McDonald et al. (2005)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 656, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DELEX Features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We emphasize again that these baseline features are entirely standard, and all the DELEX feature set does is recreate an MSTParser-based analogue of the direct transfer parser described by McDonald et al. (2011).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DELEX Features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We will now describe how to compute our projected lexical features, the PROJ feature set, which constitutes the main contribution of this work. Recall that we wish our method to be as general as possible and work under many different training conditions; in particular, we wish to be able to train our model on only existing treebanks in other languages when no target language trees are available (discussed in Section 3.3), or on only a very small target language treebank (Section 3.4). It would greatly increase the power of our model if we were able to include target-language-lexicalized versions of the ATTACH features, but these are not learnable without a large target language treebank. We instead must augment our baseline model with a relatively small number of features that are nonetheless rich enough to transfer the necessary lexical information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROJ Features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our overall approach is sketched in Figure 2 , where we show the features that fire on two proposed edges in a German dependency parse. Features on an edge in MSTParser incorporate a subset of observable properties about that edge's head, modifier, and context in the sentence. For sets of properties that do not include a lexical item, such as VERB\u2192NOUN, we fire an indicator feature from the DELEX feature set. For those that do include a lexical item, such as verlangen\u2192NOUN, we form a query, which resembles a lexicalized indicator feature. Rather than firing the query as an indicator feature directly, which would result in a model parameter for each target word, we fire a broad feature called an signature whose value reflects the specifics of the query (computation of these values is discussed in Section 2.2.2). For example, we abstract verlangen\u2192NOUN to [VERB]\u2192CHILD, with square brackets indicating the element that was lexicalized. Section 2.2.1 discusses this coarsening in more detail. The signatures are agnostic to individual words and even the language being parsed, so they can be learned on small amounts of data or data from other languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 44, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PROJ Features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our signatures allow us to instantiate features at different levels of granularity corresponding to the levels of granularity in the DELEX feature set. When a small amount of target language data is present, the variety of signatures available to us means that we can learn language-specific transfer characteristics: for example, nouns tend to follow prepositions in both French and English, but the ordering of adjectives with respect to nouns is different. We also have the capability to train on languages other than our target language, and while this is expected to be less effective, it can still teach us to exploit some syntactic properties, such as similar verb attachment configurations if we train on a group of SVO languages distinct from a target SVO language. Therefore, our feature set manages to provide the training procedure with choices about how much syntactic information to transfer at the same time as it prevents overfitting and provides language independence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PROJ Features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A query is a subset of the following pieces of information about an edge: parent word, parent POS, child word, child POS, attachment direction, and binned attachment distance. It must contain exactly one word. 5 We experimented with properties from INBETWEEN and SURROUNDING features as well, but found that these only helped under some circumstances and could lead to overfitting. 6 A signature contains the following three pieces of information:", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 211, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 383, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query and Signature Types", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "1. The non-empty subset of attachment properties included in the query 2. Whether we have lexicalized on the parent or child of the attachment, indicated by brackets", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query and Signature Types", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "Because either the parent or child POS is included in the signature, there are three meaningful properties to potentially condition on, of which we must select a nonempty subset. Some multiplication shows that we have 7 \u00d7 2 \u00d7 13 = 182 total PROJ features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The part of speech of the included word", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "As an example, the queries For each occurrence of a given source word, we tabulate the attachments it takes part in (parents and children) and record their properties. We then compute relative frequency counts for each possible query type to get source language scores, which will later be projected through the dictionary to obtain target language feature values. Only two query types are shown here, but values are computed for many others as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The part of speech of the included word", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "does not condition on the part of speech of the word in the signature. One can also imagine using more refined signatures, but we found that this led to overfitting in the small training scenarios under consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The part of speech of the included word", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Each query is given a value according to a generative heuristic that involves the source training data and the probabilistic bilingual lexicon. 7 For a particular signature, a query can be written as a tuple (x 1 , x 2 , . . . , w t ) where w t is the target language query word and the x i are the values of the included language-independent attachment properties. The value this feature takes is given by a simple generative model: we imagine generating the attachment properties x i given w t by first generating a source 7 Lexicons such as those produced by automatic aligners include probabilities natively, but obviously human-created lexicons do not. For these dictionaries, we simply assume that each word translates with uniform probability into each of its possible translations. Tweaking this method did not substantially change performance. word w s from w t based on the bilingual lexicon, then jointly generating the x i conditioned on w s . Treating the choice of source translation as a latent variable to be marginalized out, we have", |
|
"cite_spans": [ |
|
{ |
|
"start": 525, |
|
"end": 526, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Value Estimation", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "value = p(x 1 , x 2 , . . . |w t ) = ws p(w s |w t )p(x 1 , x 2 , . . . |w s )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Value Estimation", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "The first term of the sum comes directly from our probabilistic lexicon, and the second we can estimate using the maximum likelihood estimator over our source language training data:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Value Estimation", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(x 1 , x 2 , . . . |w s ) = c(x 1 , x 2 , . . . , w s ) c(w s )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Query Value Estimation", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "where c(\u2022) denotes the count of an event in the source language data. The final feature value is actually the logarithm of this computed value, with a small constant added before the logarithm is taken to avoid zeroes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Value Estimation", |
|
"sec_num": "2.2.2" |
|
}, |
|
{ |
|
"text": "Before we describe the details of our experiments, we sketch the data conditions under which we evaluate our method. As described in Section 1, there is a continuum of lightly supervised parsing methods from those that make no assumptions (beyond what is directly encoded in the model), to those that use a small set of syntactic universals, to those that use treebanks from resource-rich languages, and finally to those that use both existing treebanks and bitexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Conditions", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our focus is on parsing when one does not have access to a full-scale target language treebank, but one does have access to realistic auxiliary resources. The first variable we consider is whether we have access to a small number of target language trees or only pre-existing treebanks in a number of other languages; while not our actual target language, these other treebanks can still serve as a kind of proxy for learning which features generally transfer useful information . We notate these conditions with the following shorthand: BANKS: Large treebanks in other target languages SEED: Small treebank in the right target language Previous work on essentially unsupervised methods has investigated using a small number of target language trees (Smith and Eisner, 2009) , but the behavior of supervised models under these conditions has not been extensively studied. We will see in Section 3.4 that with only 100 labeled trees, even our baseline model can achieve performance equal to or better than that of the model of . A single linguist could plausibly annotate such a number of trees in a short amount of time for a language of interest, so we believe that this is an important setting in which to show improvement, even for a method primarily intended to augment unsupervised parsing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 750, |
|
"end": 774, |
|
"text": "(Smith and Eisner, 2009)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Conditions", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In addition, we consider two different sources for our bilingual lexicon: AUTOMATIC: Extracted from bitext MANUAL: Constructed from human annotations Both bitexts and human-curated bilingual dictionaries are more widely available than complete treebanks. Bitexts can provide rich information about lexical correspondences in terms of how words are used in practice, but for resource-poor languages, parallel text may only be available in small quantities, or be domain-limited. We show results of our method on bilingual dictionaries derived from both sources, in order to show that it is applicable under a variety of data conditions and can successfully take advantage of such resources as are available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Conditions", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We evaluate our method on a range of languages taken from the CoNLL shared tasks on multilingual dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007) . We make use of dependency treebanks for Danish, German, Greek, Spanish, Italian, Dutch, Portuguese, and Swedish, all from the 2006 shared task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 142, |
|
"text": "(Buchholz and Marsi, 2006;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 162, |
|
"text": "Nivre et al., 2007)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For our English resource, we use 500,000 English newswire sentences from English Gigaword version 3 (Graff et al., 2007) , parsed with the Berkeley Parser (Petrov et al., 2006 ) and converted to a dependency treebank using the head rules of Collins (1999) . 8 Our English test set (used in Section 3.4) consists of the first 300 sentences of section 23 of the Penn treebank (Marcus et al., 1993) , preprocessed in the same way. Our model does not use gold finegrained POS tags, but we do use coarse POS tags deterministically generated from the provided gold fine-grained tags in the style of Berg-Kirkpatrick and Klein (2010) using the mappings of Petrov et al. (2011). 9 Following McDonald et al. (2011), we strip punctuation from all treebanks for the results of Section 3.3. All results are given in terms of unlabeled attachment score (UAS), ignoring punctuation even when it is present.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 120, |
|
"text": "(Graff et al., 2007)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 175, |
|
"text": "(Petrov et al., 2006", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 255, |
|
"text": "Collins (1999)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 259, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 395, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We use the Europarl parallel corpus (Koehn, 2005) as the bitext from which to extract the AUTO-MATIC bilingual lexicons. For each target language, we produce one-to-one alignments on the Englishtarget bitext by running the Berkeley Aligner (Liang et al., 2006) T\u00e4ckstr\u00f6m et al. (2012) and improvements from their best methods of using bitext and lexical information. These results are not directly comparable to ours, as indicated by * and **. However, we still see that the performance of our type-level transfer method approaches that of bitext-based methods, which require complex bilingual training for each new language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 49, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 260, |
|
"text": "(Liang et al., 2006)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 284, |
|
"text": "T\u00e4ckstr\u00f6m et al. (2012)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "five iterations of the HMM aligner with agreement training. Our lexicon is then read off based on relative frequency counts of aligned instances of each word in the bitext. We also use our method on bilingual dictionaries constructed in a more conventional way. For this purpose, we scrape our MANUAL bilingual lexicons from English Wiktionary (Wikimedia Foundation, 2012). We mine entries for English words that explicitly have foreign translations listed as well as words in each target language that have English definitions. We discard all translation entries where the English side is longer than one word, except for constructions of the form \"to VERB\", where we manually remove the \"to\" and allow the word to be defined as the English infinitive. Finally, because our method requires a dictionary with probability weights, we assume that each target language word translates with uniform probability into any of the candidates that we scrape.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We first evaluate our model under the BANKS data condition. Following the procedure from , for each language, we train both our DELEX and DELEX+PROJ features on a concatenation of 2000 sentences from each other CoNLL training set, plus 2000 sentences from the Penn Treebank. Again, despite the values of our PROJ queries being sensitive to which language we are currently parsing, the signatures are language independent, so discriminative training still makes sense over such a combined treebank. Training our PROJ features on the non-English treebanks in this concatenation can be understood as trying to learn which lexico-syntactic properties transfer \"universally,\" or at least transfer broadly within the families of languages we are considering. Table 1 shows the performance of the DELEX feature set and the DELEX+PROJ feature set using both AUTOMATIC and MANUAL bilingual lexicons. Both methods provide positive gains across the board that are statistically significant in the vast majority of cases, though MANUAL is slightly less effective; we postpone until Section 4.1 the discussion of the shortcomings of the MANUAL lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 753, |
|
"end": 760, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BANKS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We include for reference the baseline results of and T\u00e4ckstr\u00f6m et al. (2012) (multi-direct transfer and no clusters) and the improvements from their best methods using lexical information (multi-projected transfer and crosslingual clusters). We emphasize that these results are not directly comparable to our own, as we have different training data (and even different training languages) and use a different underlying parsing model (MSTParser instead of a transition-based parser (Nivre, 2008) ). However, our baseline is competitive with theirs, 10 demonstrating that we have constructed a state-of-the-art delexicalized parser. Furthermore, our method appears to approach the performance of previous bitext-based methods, and because of its flexibility and the freedom from complex cross-lingual training for each new language, it can be applied in the MANUAL case as well, a capability which neither of the other methods has.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 76, |
|
"text": "T\u00e4ckstr\u00f6m et al. (2012)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 495, |
|
"text": "(Nivre, 2008)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BANKS", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We now turn our attention to the SEED scenario, where a small number of target language trees are available for each language we consider. While it is imaginable to continue to exploit the other treebanks in the presence of target language trees, we found that training our DELEX features on the seed treebank alone gave higher performance than any attempt to also use the concatenation of treebanks from the previous section. This is not too surprising because, with this number of sentences, there is already good monolingual coverage of coarse POS features, and attempting to train features on other languages can be expected to introduce noise into otherwise accurate monolingual feature weights.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SEED", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We train our DELEX+PROJ model with both AUTOMATIC and MANUAL lexicons on target language training sets of size 100, 200, and 400, and give results for each language in Table 2 . The performance of parsers trained on small numbers of trees can be highly variable, so we create multiple treebanks of each size by repeatedly sampling from each language's train treebank, and report averaged results. Furthermore, this evaluation is not on the standard CoNLL test sets, but is instead on those test sets with a few hundred unused training sentences added, the reason being that some of the CoNLL test sets are very small (fewer than 200 sentences) and appeared to give highly variable results. To compute statistical significance, we draw a large number of bootstrap samples for each training set used, then aggregate all of their sufficient statistics in order to compute the final p-value. We see that our DELEX+PROJ method gives statistically significant gains at the 95% level over DELEX for nearly all language and training set size pairs, giving on average a 9% relative error reduction in the 400-tree case.", |
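The two quantities invoked above — relative error reduction and the bootstrap p-value — can be made concrete with a short sketch. This is a hedged illustration, not the authors' evaluation code: the sentence-level `(correct, total)` representation of the sufficient statistics and the function names are assumptions, and the pooling across training sets described in the text is only noted in the docstring.

```python
import random

def relative_error_reduction(base_uas, new_uas):
    """Fraction of the baseline's attachment errors eliminated,
    e.g. moving UAS from 90.0 to 91.0 removes 10% of the errors."""
    return (new_uas - base_uas) / (100.0 - base_uas)

def paired_bootstrap_pvalue(base_stats, new_stats, n_samples=10000, seed=0):
    """Paired bootstrap over sentences: resample sentences with
    replacement, recompute both systems' UAS from per-sentence
    (correct, total) arc counts, and count how often the baseline
    does at least as well. Samples from each training set would be
    pooled before computing the final p-value, as in the text."""
    rng = random.Random(seed)
    n = len(base_stats)
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]
        b = sum(base_stats[i][0] for i in idx) / sum(base_stats[i][1] for i in idx)
        m = sum(new_stats[i][0] for i in idx) / sum(new_stats[i][1] for i in idx)
        if b >= m:
            wins += 1
    return wins / n_samples
```

A 9% relative error reduction in the 400-tree case thus means roughly one in eleven remaining baseline attachment errors was fixed.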
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 176, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "SEED", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Because our features are relatively few in number and capture heuristic information, one question we might ask is how well they can perform in a nonprojection context. In the last line of the table, we report gains that are achieved when PROJ features computed from parsed Gigaword are used directly on English, with no intermediate dictionary. These are not comparable to the other values in the table because we are using our projection strategy monolingually, which removes the barriers of imperfect lexical correspondence (from using the lexicon) and imperfect syntactic correspondence (from projecting). As one might expect, the gains on English are far higher than the gains on other languages. This indicates that performance is chiefly limited by the need to do cross-lingual feature adaptation, not inherently low feature capacity. We delay further discussion to Section 4.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SEED", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "One surprising thing to note is that the gains given by our PROJ features are in some cases larger here than in the BANKS setting. This result is slightly counterintuitive, as our baseline parsers are much better in this case and so we would expect diminished returns from our method. We conclude that accurately learning which signatures transfer between languages is important, and it is easier to learn good feature weights when some target language data is available. Further evidence supporting this hypothesis is the fact that the gains are larger and more significant on larger training set sizes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SEED", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Overall, we see that gains from using our MANUAL lexicons are slightly lower than those from our AUTOMATIC lexicons. One might expect higher performance because scraped bilingual lexicons are not prone to some of the same noise that exists in automatic aligners, but this is empirically not the case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AUTOMATIC versus MANUAL", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Rather, as we see in Table 3 , the low recall of our MANUAL lexicons on open-class words appears to be a possible culprit. The coverage gap between these and the AUTOMATIC lexicons is partially due to the inconsistent structure of Wiktionary: inflected German and Greek words often do not have their own pages, so we miss even common morphological variants of verb forms in those languages. The inflected forms that we do scrape are also mapped to the English base form rather than the corresponding inflected form in English, which introduces further noise. Coverage is substantially higher if we translate using stems only, but this did not empirically lead to performance improvements, possibly due to conflating different parts of speech with the same base form.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "AUTOMATIC versus MANUAL", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "One might hypothesize that our uniform weighting scheme in the MANUAL lexicon is another source of problems, and that bitext-derived weights are necessary to get high performance. This is not the case here. Truncating the AUTOMATIC dictionary to at most 20 translations per word and setting the weights uniformly causes a slight performance drop, but is still better than our MANUAL lexicon. This further demonstrates that these problems are more a limitation of our dictionary than our method. English Wiktionary is not designed to be a bilingual dictionary, and while it conveniently provided an easy way for us to produce lexicons for a wide array", |
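The truncation ablation mentioned above — capping the AUTOMATIC dictionary at 20 translations per word and flattening the weights to uniform — can be sketched in a few lines. This is a hypothetical reconstruction; the function name is an assumption, and `lexicon` maps each word to a `{translation: probability}` dict as in the earlier description.

```python
def truncate_uniform(lexicon, k=20):
    """Keep each word's k most probable translations and reweight
    them uniformly (k=20 in the ablation described in the text)."""
    out = {}
    for f, trans in lexicon.items():
        top = sorted(trans, key=trans.get, reverse=True)[:k]
        out[f] = {e: 1.0 / len(top) for e in top}
    return out
```

The point of the ablation is that even this weight-free variant of the AUTOMATIC lexicon outperforms the MANUAL one, isolating coverage rather than weighting as the MANUAL lexicon's weakness.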
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AUTOMATIC versus MANUAL", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "of languages, it is not the resource that one would choose if designing a parser for a specific target language. Bitext is not necessary for our approach to work, and results on the AUTOMATIC lexicon suggest that our type-level transfer method can in fact do much better given a higher quality resource.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AUTOMATIC versus MANUAL", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While our method does provide consistent gains across a range of languages, the injection of lexical information is clearly not sufficient to bridge the gap between unsupervised and supervised parsers. We argued in Section 3.4 that the cross-lingual transfer step of our method imposes a fundamental limitation on how useful any such approach can be, which we now investigate further. In particular, any syntactic divergence, especially inconsistent divergences like head switching, will limit the utility of transferred structure. Consider the German example in Figure 4 , with a parallel English sentence provided. The English tree suggests that want should attach to an infinitival to, which has no correlate in German. Even disregarding this, its grandchild is the verb continue, which is realized in the German sentence as the adverb weiter. While it is still broadly true that want and wollen both have verbal elements located to their right, it is less clear how to design features that can still take advantage of this while working around the differences we have described. Therefore, a gap between the performance of our features on English and the performance of our projected features, as is observed in Table 2 , is to be expected in the absence of a more complete model of syntactic divergence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 563, |
|
"end": 571, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1217, |
|
"end": 1224, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Limitations", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this work, we showed that lexical attachment preferences can be projected to a target language at the type level using only a bilingual lexicon, improving over a delexicalized baseline parser. This method is broadly applicable in the presence or absence of target language training trees and with bilingual lexicons derived from either manually-annotated resources or bitexts. The greatest improvements arise when the bilingual lexicon has high coverage and a number of target language trees are available in order to learn exactly what lexico-syntactic properties transfer from the source language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In addition, we showed that a well-tuned discriminative model with the correct features can achieve good performance even on very small training sets. While unsupervised and existing projection methods do feature great versatility and may yet produce state-of-the-art parsers on resource-poor languages, spending time constructing small supervised resources appears to be the fastest method to achieve high performance in these settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As in Koo et al. (2008), our feature set contains more backed-off versions of the SURROUNDING features than are described in McDonald et al. (2005).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our five distance buckets are {1, 2, 3\u22125, 6\u221210, 11+}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Bilexical features are possible in our framework, but we do not use them here, so for clarity we assume that each query has one associated word.6 One hypothesis is that features looking at the sentence context are more highly specialized to a given language, since they examine the parent, the child, and one or more other parts of speech or words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Results do not degrade much if one simply uses Sections 2-21 of the Penn Treebank instead. Coverage of rare words in the treebank is less important when a given word must also appear in the bilingual lexicon as the translation of an observed German word in order to be useful.9 Note that even in the absence of gold annotation, such tags could be produced from bitext using the method of or could be read off from a bilingual lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The baseline of T\u00e4ckstr\u00f6m et al. (2012) is lower because it is trained only on English rather than on many languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was partially supported by an NSF Graduate Research Fellowship to the first author, by a Google Fellowship to the second author, and by the NSF under grant 0643742. Thanks to the anonymous reviewers for their insightful comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Web-scale Features for Full-scale Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "693--702", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohit Bansal and Dan Klein. 2011. Web-scale Features for Full-scale Parsing. In Proceedings of ACL, pages 693-702, Portland, Oregon, USA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Phylogenetic Grammar Induction", |
|
"authors": [ |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1288--1297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taylor Berg-Kirkpatrick and Dan Klein. 2010. Phylo- genetic Grammar Induction. In Proceedings of ACL, pages 1288-1297, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "CoNLL-X Shared Task on Multilingual Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Buchholz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erwin", |
|
"middle": [], |
|
"last": "Marsi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of CoNLL, pages 149-164.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Shared Logistic Normal Distributions for Soft Parameter Tying in", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Shay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shay B. Cohen and Noah A. Smith. 2009. Shared Logis- tic Normal Distributions for Soft Parameter Tying in", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unsupervised Grammar Induction", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--82", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Unsupervised Grammar Induction. In Proceedings of NAACL, pages 74-82, Boulder, Colorado.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Unsupervised Structure Prediction with Non-Parallel Multilingual Guidance", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Shay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised Structure Prediction with Non-Parallel Multilingual Guidance. In Proceedings of EMNLP, pages 50-61, Edinburgh, UK.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Head-Driven Statistical Models for Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univer- sity of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Ultraconservative Online Algorithms for Multiclass Problems", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer and Yoram Singer. 2001. Ultraconserva- tive Online Algorithms for Multiclass Problems. Jour- nal of Machine Learning Research, 3:2003.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unsupervised Partof-Speech Tagging with Bilingual Graph-Based Projections", |
|
"authors": [ |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "600--609", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised Part- of-Speech Tagging with Bilingual Graph-Based Pro- jections. In Proceedings of ACL, pages 600-609, Port- land, Oregon, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Dependency Grammar Induction via Bitext Projection Constraints", |
|
"authors": [ |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Gillenwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "369--377", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency Grammar Induction via Bitext Pro- jection Constraints. In Proceedings of ACL, pages 369-377, Suntec, Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "English Gigaword Third Edition", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Graff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuaki", |
|
"middle": [], |
|
"last": "Maeda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Linguistic Data Consortium, Catalog Number LDC2007T07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2007. English Gigaword Third Edition. Linguistic Data Consortium, Catalog Number LDC2007T07.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Prototype-driven Grammar Induction", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of CoLING-ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "881--888", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven Grammar Induction. In Proceedings of CoLING-ACL, pages 881-888, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Bootstrapping Parsers via Syntactic Projection Across Parallel Texts", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Hwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Weinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Cabezas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Okan", |
|
"middle": [], |
|
"last": "Kolak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Natural Language Engineering", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "311--325", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping Parsers via Syntactic Projection Across Parallel Texts. Natural Language Engineering, 11:311-325, Septem- ber.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "479--486", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2004. Corpus- Based Induction of Syntactic Structure: Models of De- pendency and Constituency. In Proceedings of ACL, pages 479-486.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Europarl: A Parallel Corpus for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "MT Summit X", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In MT Summit X, pages 79-86, Phuket, Thailand. AAMT.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Simple Semi-Supervised Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Terry", |
|
"middle": [], |
|
"last": "Koo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-Supervised Dependency Parsing. In Pro- ceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Alignment by Agreement", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by Agreement. In Proceedings of NAACL, New York, New York.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Building a Large Annotated Corpus of English: the Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a Large Annotated Cor- pus of English: the Penn Treebank. Computational Linguistics, 19:313-330, June.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Online Large-Margin Training of Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. In Proceedings of ACL, pages 91-98, Ann Arbor, Michigan.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Multi-Source Transfer of Delexicalized Dependency Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "62--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-Source Transfer of Delexicalized Dependency Parsers. In Proceedings of EMNLP, pages 62-72, Ed- inburgh, Scotland, UK.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Using Universal Linguistic Knowledge to Guide Grammar Induction", |
|
"authors": [ |
|
{ |
|
"first": "Tahira", |
|
"middle": [], |
|
"last": "Naseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harr", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1234--1244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using Universal Linguistic Knowl- edge to Guide Grammar Induction. In Proceed- ings of EMNLP, pages 1234-1244, Cambridge, Mas- sachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The CoNLL 2007 Shared Task on Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Nilsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of EMNLP-CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "915--932", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Johan Hall, Sandra K\u00fcbler, Ryan Mcdon- ald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 Shared Task on Dependency Parsing. In Proceedings of EMNLP-CoNLL, pages 915-932, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Algorithms for Deterministic Incremental Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "513--553", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre. 2008. Algorithms for Deterministic Incre- mental Dependency Parsing. Computational Linguis- tics, 34:513-553, December.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning Accurate, Compact, and Interpretable Tree Annotation", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Romain", |
|
"middle": [], |
|
"last": "Thibaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "433--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and In- terpretable Tree Annotation. In Proceedings of ACL, pages 433-440, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A Universal Part-of-Speech Tagset", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2011. A Universal Part-of-Speech Tagset. In ArXiv, April.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Parser Adaptation and Projection with Quasi-Synchronous Grammar Features", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "822--831", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David A. Smith and Jason Eisner. 2009. Parser Adapta- tion and Projection with Quasi-Synchronous Grammar Features. In Proceedings of EMNLP, pages 822-831, Suntec, Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Cross-lingual Word Clusters for Direct Transfer of Linguistic Structure", |
|
"authors": [ |
|
{ |
|
"first": "Oscar", |
|
"middle": [], |
|
"last": "T\u00e4ckstr\u00f6m", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual Word Clusters for Direct Trans- fer of Linguistic Structure. In Proceedings of NAACL, Montreal, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Wikimedia Foundation", |
|
"authors": [], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wikimedia Foundation. 2012. Wiktionary. Online at http://www.wiktionary.org/.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Exploiting Web-Derived Selectional Preference to Improve Statistical Dependency Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Guangyou", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1556--1565", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guangyou Zhou, Jun Zhao, Kang Liu, and Li Cai. 2011. Exploiting Web-Derived Selectional Preference to Im- prove Statistical Dependency Parsing. In Proceedings of ACL, pages 1556-1565, Portland, Oregon, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Computation of query values.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "NN VMFIN ADV APPR ART NN VVINF Women want further for the quota fight Women want to continue to fight for the quota NNP VBP TO VB TO VB IN DT NN Example of a German tree and a parallel English sentence with high levels of syntactic divergence.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "2011; Das and ... the senators demand strict new ethics rules ...", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>DT NNS</td><td>VBP</td><td>JJ</td><td>JJ</td><td colspan=\"2\">NNS NNS</td></tr><tr><td colspan=\"2\">Gewerkschaften verlangen</td><td colspan=\"2\">Verzicht</td><td>auf</td><td>die Reform</td></tr><tr><td>NN</td><td>VVFIN</td><td/><td>NN</td><td colspan=\"2\">APPR ART</td><td>NN</td></tr><tr><td>Unions</td><td colspan=\"4\">demand abandonment on</td><td>the reform</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "\u2192CHILD,DIR, [ADP]\u2192CHILD, and PARENT\u2192[NOUN] as their signatures, respectively.The level of granularity for signatures is a parameter that simply must be engineered. We found some benefit in actually instantiating two signatures for every query, one as described above and one that He reports that the senators demand strict new ethics rules[...]", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">demand</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td>Word</td><td colspan=\"3\">POS Dir Dist</td></tr><tr><td>PRON VERB ADP DET</td><td>NOUN</td><td>VERB</td><td>ADJ ADJ</td><td>NOUN NOUN</td><td>Parents</td><td>that said <root></td><td>ADP VERB ROOT</td><td>R L L</td><td>3 7 6</td></tr><tr><td/><td/><td/><td/><td/><td/><td>senators</td><td>NOUN</td><td>L</td><td>1</td></tr><tr><td/><td/><td/><td/><td/><td/><td>rules</td><td>NOUN</td><td>R</td><td>4</td></tr><tr><td colspan=\"5\">\" We demand that these hostilities cease , PUNC PRON VERB ADP DET NOUN VERB PUNC PUNC VERB \" said [...]</td><td>Children</td><td>We that They</td><td>NOUN ADP NOUN</td><td>L R L</td><td>1 1 1</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"3\">concessions NOUN R</td><td>1</td></tr><tr><td/><td/><td/><td/><td/><td/><td>from</td><td>ADP</td><td>R</td><td>2</td></tr><tr><td colspan=\"10\">verlangen \u2192 NOUN verlangen \u2192 ADP sprechen \u2192 NOUN all share the signature [VERB]\u2192CHILD, but verlangen \u2192 NOUN,RIGHT Verzicht \u2192 ADP VERB \u2192 Verzicht PARENT\u2192demand DIR Value L 0.66 PARENT Value ADP 0.33 ROOT 0.33 They demand concessions from the Israeli authorities <root> PRON VERB NOUN ADP DET ADJ NOUN ROOT have [VERB]\u2192demand, DIR R 0.33 VERB 0.33 \u2022\u2022\u2022</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Evaluation of features derived from AUTOMATIC and MANUAL bilingual lexicons when trained on a concatenation of non-target-language treebanks (the BANKS setting). Values reported are UAS for sentences of all lengths in the standard CoNLL test sets, with punctuation removed from training and test sets. Daggers indicate statistical significance computed using bootstrap resampling; a single dagger indicates p < 0.1 and a double dagger indicates p < 0.05. We also include the baseline results of and", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>with five iterations of IBM Model 1 and</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Evaluation of features derived from AUTOMATIC and MANUAL bilingual lexicons when trained on various small numbers of target language trees (the SEED setting). Values reported are UAS for sentences of all lengths on our enlarged CoNLL test sets (see text); each value is based on 50 sampled training sets of the given size. Daggers indicate statistical significance as described in the text. Statistical significance is not reported for averages.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td>AUTOMATIC</td><td/><td/><td/><td/></tr><tr><td/><td/><td>100 train trees</td><td/><td/><td>200 train trees</td><td/><td/><td>400 train trees</td><td/></tr><tr><td/><td colspan=\"2\">DELEX DELEX+PROJ</td><td>\u2206</td><td colspan=\"2\">DELEX DELEX+PROJ</td><td>\u2206</td><td colspan=\"2\">DELEX DELEX+PROJ</td><td>\u2206</td></tr><tr><td>DA</td><td>67.2</td><td>69.5</td><td>2.32 \u2021</td><td>69.5</td><td>72.3</td><td>2.77 \u2021</td><td>71.4</td><td>74.6</td><td>3.16 \u2021</td></tr><tr><td>DE</td><td>72.9</td><td>73.9</td><td>0.97</td><td>75.4</td><td>76.5</td><td>1.09 \u2020</td><td>77.3</td><td>78.5</td><td>1.25 \u2021</td></tr><tr><td>EL</td><td>70.8</td><td>72.9</td><td>2.07 \u2021</td><td>72.6</td><td>74.9</td><td>2.30 \u2021</td><td>74.3</td><td>76.7</td><td>2.41 \u2021</td></tr><tr><td>ES</td><td>72.5</td><td>73.0</td><td>0.46</td><td>74.1</td><td>75.4</td><td>1.29 \u2021</td><td>75.3</td><td>77.2</td><td>1.81 \u2021</td></tr><tr><td>IT</td><td>73.3</td><td>75.4</td><td>2.13 \u2021</td><td>74.7</td><td>77.3</td><td>2.54 \u2021</td><td>76.0</td><td>78.7</td><td>2.74 \u2021</td></tr><tr><td>NL</td><td>63.0</td><td>65.8</td><td>2.82 \u2021</td><td>64.7</td><td>67.6</td><td>2.86 \u2021</td><td>66.1</td><td>69.2</td><td>3.06 \u2021</td></tr><tr><td>PT</td><td>78.1</td><td>79.5</td><td>1.45 \u2021</td><td>79.5</td><td>81.1</td><td>1.66 \u2021</td><td>80.7</td><td>82.4</td><td>1.63 \u2021</td></tr><tr><td>SV</td><td>76.4</td><td>78.1</td><td>1.69 \u2021</td><td>78.1</td><td>80.2</td><td>2.02 \u2021</td><td>79.6</td><td>81.7</td><td>2.07 \u2021</td></tr><tr><td>AVG</td><td>71.8</td><td>73.5</td><td>1.74</td><td>73.6</td><td>75.7</td><td>2.07</td><td>75.1</td><td>77.4</td><td>2.27</td></tr><tr><td>EN</td><td>74.4</td><td>81.5</td><td>7.06 \u2021</td><td>76.6</td><td>83.0</td><td>6.35 \u2021</td><td>78.3</td><td>84.1</td><td>5.80 
\u2021</td></tr><tr><td/><td/><td/><td/><td/><td>MANUAL</td><td/><td/><td/><td/></tr><tr><td>DA</td><td>67.2</td><td>68.1</td><td>0.88</td><td>69.5</td><td>70.9</td><td>1.44 \u2021</td><td>71.4</td><td>73.3</td><td>1.92 \u2021</td></tr><tr><td>DE</td><td>72.9</td><td>73.4</td><td>0.44</td><td>75.4</td><td>76.2</td><td>0.77</td><td>77.3</td><td>78.4</td><td>1.12 \u2021</td></tr><tr><td>EL</td><td>70.8</td><td>71.9</td><td>1.06 \u2020</td><td>72.6</td><td>74.1</td><td>1.48 \u2021</td><td>74.3</td><td>75.8</td><td>1.56 \u2021</td></tr><tr><td>ES</td><td>72.5</td><td>71.9</td><td>-0.64</td><td>74.1</td><td>74.3</td><td>0.23</td><td>75.3</td><td>76.4</td><td>1.04 \u2021</td></tr><tr><td>IT</td><td>73.3</td><td>74.3</td><td>1.01 \u2020</td><td>74.7</td><td>76.4</td><td>1.66 \u2021</td><td>76.0</td><td>78.0</td><td>2.01 \u2021</td></tr><tr><td>NL</td><td>63.0</td><td>65.4</td><td>2.43 \u2021</td><td>64.7</td><td>67.5</td><td>2.76 \u2021</td><td>66.1</td><td>69.0</td><td>2.91 \u2021</td></tr><tr><td>PT</td><td>78.1</td><td>78.2</td><td>0.13</td><td>79.5</td><td>80.1</td><td>0.62</td><td>80.7</td><td>81.5</td><td>0.82 \u2021</td></tr><tr><td>SV</td><td>76.4</td><td>76.6</td><td>0.25</td><td>78.1</td><td>79.1</td><td>1.01 \u2020</td><td>79.6</td><td>81.0</td><td>1.40 \u2021</td></tr><tr><td>AVG</td><td>71.8</td><td>72.5</td><td>0.70</td><td>73.6</td><td>74.8</td><td>1.25</td><td>75.1</td><td>76.7</td><td>1.60</td></tr><tr><td>EN</td><td>74.4</td><td>81.5</td><td>7.06 \u2021</td><td>76.6</td><td>83.0</td><td>6.35 \u2021</td><td>78.3</td><td>84.1</td><td>5.80 \u2021</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>: Lexicon statistics for all languages for both</td></tr><tr><td>sources of bilingual lexicons. \"Voc\" indicates vocabulary</td></tr><tr><td>size and \"OCC\" indicates open-class coverage, the frac-</td></tr><tr><td>tion of open-class tokens in the test treebanks with entries</td></tr><tr><td>in our bilingual lexicon.</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |