|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:39:27.415263Z" |
|
}, |
|
"title": "The Role of Linguistic Features in Domain Adaptation: TAG Parsing of Questions", |
|
"authors": [ |
|
{ |
|
"first": "Aarohi", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yale University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yale University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Widder", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yale University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chartash", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yale University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The analysis of sentences outside the domain of the training data poses a challenge for contemporary syntactic parsing. The Penn Treebank corpus, commonly used for training constituency parsers, systematically undersamples certain syntactic structures. We examine parsing performance in Tree Adjoining Grammar (TAG) on one such structure: questions. To avoid hand-annotating a new training set including out-of-domain sentences, an expensive process, an alternate method requiring considerably less annotation effort is explored. Our method is based on three key ideas: First, pursuing the intuition that \"supertagging is almost parsing\" (Bangalore and Joshi, 1999), the parsing process is decomposed into two distinct stages, supertagging and stapling. Second, following Rimell and Clark (2008), the supertagger is trained with an extended dataset including questions, and the resultant supertags are used with an unmodified parser. Third, to maximize improvements gained from additional training of the supertagger, the parser is provided with linguistically-significant features that reflect commonalities across supertags. This novel combination of ideas leads to an improvement in question parsing accuracy of 13% LAS. This points to the conclusion that adaptation of a parser to a new domain can be achieved with limited data through the careful integration of linguistic knowledge.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The analysis of sentences outside the domain of the training data poses a challenge for contemporary syntactic parsing. The Penn Treebank corpus, commonly used for training constituency parsers, systematically undersamples certain syntactic structures. We examine parsing performance in Tree Adjoining Grammar (TAG) on one such structure: questions. To avoid hand-annotating a new training set including out-of-domain sentences, an expensive process, an alternate method requiring considerably less annotation effort is explored. Our method is based on three key ideas: First, pursuing the intuition that \"supertagging is almost parsing\" (Bangalore and Joshi, 1999), the parsing process is decomposed into two distinct stages, supertagging and stapling. Second, following Rimell and Clark (2008), the supertagger is trained with an extended dataset including questions, and the resultant supertags are used with an unmodified parser. Third, to maximize improvements gained from additional training of the supertagger, the parser is provided with linguistically-significant features that reflect commonalities across supertags. This novel combination of ideas leads to an improvement in question parsing accuracy of 13% LAS. This points to the conclusion that adaptation of a parser to a new domain can be achieved with limited data through the careful integration of linguistic knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The performance of contemporary syntactic parsers for natural language depends crucially on the availability of training data that matches the sentences on which the parser will be tested. In the realm of constituency parsing, by far the most common corpus used for training is the Penn Treebank (PTB) (Marcus et al., 1993) , specifically the subset drawn from the Wall Street Journal (WSJ). It is a truism that the sentences in the WSJ are not an accurate representation of the entirety of English, and indeed the distribution of sentence types in the WSJ differs dramatically from language found in other domains. In particular, interrogative sentences (questions) are quite rare in the WSJ. It is unsurprising, then, that parsers trained on the PTB WSJ corpus perform poorly on questions, sometimes suffering reductions in accuracy of up to 20% (Petrov et al., 2010) . However, questions are common elsewhere and indeed are a highly relevant sentence type for a range of NLP applications, such as question answering.", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 323, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 869, |
|
"text": "(Petrov et al., 2010)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One way to resolve this difficulty involves the dedication of considerable resources to augmenting the training data set with additional handannotated parses of the questions. The work reported in this paper explores an alternative method that requires less annotation effort and makes use of three key ideas. First, we follow Bangalore and Joshi (1999) in decomposing the parsing process into two stages: supertagging, where lexicallyassociated pieces of structure are assigned to each word, and stapling, where these supertags are composed to form a parse tree. Second, we build on the work of Rimell and Clark (2008) , where improvements to a supertagger trained with an extended dataset that is less costly to produce lead to improvements in parsing performance using an unmodified parser. However, we find that the parsing benefit that results from improved supertagging can only be maximized when the parser is structured so as to be sensitive to linguistically relevant properties of the supertags. As a result, a necessary third key idea is to use a parser whose input is characterized in linguistic terms that crosscut the supertag set. This fosters the ability of the parser to generalize across linguistically related, but superficially distinct, sentence types. With the goal of increasing efficiency, following these ideas, a significant increase in parsing accuracy can be seen with a relatively small set of questions for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 353, |
|
"text": "Bangalore and Joshi (1999)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 619, |
|
"text": "Rimell and Clark (2008)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Because we are interested in extracting details of the sentence's interpretation, such as those conveyed through long-distance dependencies, we make use of the Tree Adjoining Grammar (TAG) formalism. TAG is a mildly context-sensitive lexicalized grammar formalism, where the units associated with each word, called elementary trees, are pieces of phrase structure that encode detailed information about the word's combinatory potential. Past work (Kasai et al., 2018) has shown that the rich structural representations underlying TAG parsing allow better recovery of longdistance dependencies than is possible with other approaches. Our domain adaptation depends on the rich structure of TAG elementary trees, as we use linguistically-defined features to encode commonalities across trees that the parser can exploit. 1 TAG elementary trees are composed using two operations, substitution and adjoining. The resulting derivations have a structure similar to those familiar from dependency parsing, and indeed computational methods from dependency parsing can be used to accomplish broad coverage TAG parsing . As a result, the proposal made in this paper should be more broadly applicable, outside the problem of TAG parsing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 467, |
|
"text": "(Kasai et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the first portion of this paper, we introduce the foundations of TAG and the shift-reduce TAG parser employed . We then present our methodology of improving the process of assigning elementary trees (supertags) to the words in a sentence to be parsed, and show how and under what conditions improved supertagging can yield substantial benefits for parsing accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Tree Adjoining Grammar (TAG) (Joshi et al., 1975) , is a lexicalized grammar formalism that generates hierarchical structure through a system of tree rewriting. In a TAG derivation, each word in a sentence is associated with an elementary tree, a piece of syntactic structure that encodes the structural constraints that the word imposes on the sentence in which it appears. A TAG elementary tree thereby encodes information about the dependencies headed by a word, as well as the structural positions of the word's dependents. For example, a transitive verb like read might be associated with the elementary tree t27 on the left of Figure 1 , while a name like Alice or a noun like book would be associated with the elementary tree t3. In these elementary trees, the nodes labeled with the diamond indicate the structural position of the head of the tree. For the verbally-headed tree, the NP nodes that appear along the tree's frontier are the positions for the verb's arguments, i.e., its syntactic dependents. The subscripts on these arguments encode their syntactic relations with the elementary tree's head (0 is subject, 1 is direct object, 2 is indirect object). 2 Elementary trees are combined using one of two derivational operations: substitution and adjoining. In substitution, an elementary tree rooted in some category C is inserted into a frontier node in another elementary tree that is also of category C and notated with a down arrow. Thus, to combine the subject NP with the verb in the sentence Alice read a book, the NP-rooted elementary tree t3 from Figure 1 , headed by Alice, is substituted into the NP 0 substitution node in the S-rooted tree t27, headed by read.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 49, |
|
"text": "(Joshi et al., 1975)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1171, |
|
"end": 1172, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 633, |
|
"end": 641, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1572, |
|
"end": 1580, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree Adjoining Grammar", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The second operation, adjoining, introduces recursive structure via a special kind of elementary tree, called an auxiliary tree. Auxiliary trees have a distinguished frontier node, the foot node, that is of the same category as the root of the tree. The third tree t1 in Figure 1 is an NP-recursive auxiliary tree that would be associated with the determiner the. The asterisk on the NP frontier node indicates that it is the tree's foot node. Adjoining works by targeting a node N of category C in some elementary tree using a C-recursive auxiliary tree 2 These numeric superscripts correspond to \"deep\" syntactic relations: the subject of a passivized transitive verb will be annotated 1, and operations like dative shift preserve syntactic relations. Though this does not uniquely identify thematic roles of arguments (e.g., unaccusative and unergative subjects are not distinguished), it does provide a richer encoding of predicate-argument dependencies than is provided by usual surface-oriented parses. Recent work has shown that the identity of supertags provides particularly useful information for the task of semantic role labeling (Kasai et al., 2019) . T. When adjoining applies, the node N is rewritten as the tree T, and N's children are attached (or lowered) as the children of the foot node of T. The determiner tree on the right of Figure 1 can thus adjoin to the NP root of the N-headed tree in the middle of the same figure. In this way, the grammar can generate a structure corresponding to the NP the book, which can then be substituted into the NP object substitution node (NP 1 ) in the transitive verb-headed tree (t27) to derive the entire sentence", |
|
"cite_spans": [ |
|
{ |
|
"start": 1142, |
|
"end": 1162, |
|
"text": "(Kasai et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 279, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1349, |
|
"end": 1357, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree Adjoining Grammar", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Alice read the book. Similarly, the rightmost elementary tree in the figure, t69, can be adjoined to the VP node in t27 to yield a structure involving adverbial modification. The resulting derived tree structure is given on the left of Figure 2 . This derived tree does not, however, represent the derivational steps that were involved in the creation of the structure, which are instead represented in a derivation tree. The nodes of the derivation tree correspond to elementary trees, and its edges (dependencies) correspond to substitution and adjoining operations that have applied, i.e., a daughter node is an elementary tree that has been substituted or adjoined into the parent node. Substitution is indicated by solid edges annotated with the index of the substitution site, while adjoining is indicated with dotted edges annotated with the locus of adjoining. The derivation tree for the simple sentence under consideration is given on the right in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 244, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 958, |
|
"end": 966, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree Adjoining Grammar", |
|
"sec_num": "2" |
|
}, |
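To make the two TAG composition operations concrete, here is a minimal, illustrative sketch (not code from the paper; the Node class and tree shapes are hypothetical simplifications) of substitution and adjoining as operations on plain tree data structures, roughly following the Alice read the book derivation described above.

```python
# Minimal sketch of TAG composition (illustrative only; not the authors' code).
# A node carries a category; substitution sites and foot nodes are flagged.

class Node:
    def __init__(self, cat, children=None, is_subst=False, is_foot=False):
        self.cat = cat
        self.children = children or []
        self.is_subst = is_subst
        self.is_foot = is_foot

def substitute(site, tree_root):
    """Insert a C-rooted elementary tree at a C-labeled substitution site."""
    assert site.is_subst and site.cat == tree_root.cat
    site.children = tree_root.children
    site.is_subst = False

def adjoin(node, aux_root, foot):
    """Rewrite node N as the auxiliary tree; N's children lower to the foot node."""
    assert node.cat == aux_root.cat == foot.cat and foot.is_foot
    foot.children = node.children
    foot.is_foot = False
    node.children = aux_root.children

# t27-like transitive tree for "read": S with NP0 and NP1 substitution sites
np0 = Node("NP", is_subst=True)
np1 = Node("NP", is_subst=True)
t27 = Node("S", [np0, Node("VP", [Node("V"), np1])])

# t3-like NP trees for "Alice" and "book", and a t1-like auxiliary tree for "the"
alice = Node("NP", [Node("N")])
book = Node("NP", [Node("N")])
foot = Node("NP", is_foot=True)
t1 = Node("NP", [Node("D"), foot])

substitute(np0, alice)      # Alice fills the subject slot (NP0)
adjoin(book, t1, foot)      # "the" adjoins at the NP root of "book"
substitute(np1, book)       # "the book" fills the object slot (NP1)
```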
|
{ |
|
"text": "TAG shares with the Combinatory Categorial Grammar (CCG) formalism the property of lexicalization: in both formalisms, words are associated with units of structure, elementary trees for TAG and lexical categories for CCG. The presence of rich structure associated with the lexical items is a source of information relevant for a variety of NLP tasks, including semantic analysis and translation, and the use of these formalisms have contributed to performance benefits (Cowan et al., 2006; Xu et al., 2017; Artzi et al., 2015; Nadejde et al., 2017) . TAG and CCG differ, however, in the kind of information that the lexical structures encode. In TAG, a verb's elementary tree encodes not only its selected arguments, but also the positions in which they are syntactically realized. Sentences involving long-distance dependencies, such as relative clauses or questions, will therefore involve distinct verbally-headed elementary trees from those used for simple declarative sentences, in which the wh-movement dependency is realized (Frank, 2004) . For example, in the question What did Alice read?, the displacement of the NP object to the front of the question and its original position filled with a trace node indicated by NONE, as in the Penn Treebank, is represented in the elementary tree t214 on the left in Figure 3 . Since the auxiliary verb did must appear directly after the fronted NP (NP 1 , or what, in this case), it adjoins to the S child of NP 1 , as shown in Figure 2 . In contrast, the verb read in the relative clause of the noun phrase the book that Alice read would head a different, but related elementary tree, shown on the right in Figure 3 , which also includes the fronting of the object, but is itself an auxiliary tree that can adjoin to the NP it modifies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 469, |
|
"end": 489, |
|
"text": "(Cowan et al., 2006;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 506, |
|
"text": "Xu et al., 2017;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 526, |
|
"text": "Artzi et al., 2015;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 548, |
|
"text": "Nadejde et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1032, |
|
"end": 1045, |
|
"text": "(Frank, 2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1315, |
|
"end": 1323, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1477, |
|
"end": 1485, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1657, |
|
"end": 1665, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree Adjoining Grammar", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In contrast, CCG lexical categories do not encode the different realizations of a verb's arguments found in declaratives, questions or relatives. In all such cases, a transitive verb would be assigned the lexical category (s\\np)/np. What differs are the categories assigned to the object (np in simple sentences, s/(s/np) for the question word, and (np\\np)/(s/np)) for the relative pronoun), as well as the way in which these elements combine with the verb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Adjoining Grammar", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This study uses the TAG supertagger and parser developed by . The supertaggerparser pipeline is shown in Figure 4 . Raw sentences and part of speech tags are given as input to the TAG supertagger, which outputs predicted supertags (i.e., elementary trees) for each word. These predicted elementary trees are given as input to the (unlexicalized) TAG parser, which outputs predicted parses with labeled dependencies among the elementary trees. We briefly review the architecture developed by . For more details, the reader should consult the original paper.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 113, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supertagging and Parsing", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As discussed above, a simple transitive verbal predicate such as read might have a different elementary tree depending on the context: t27 as the main predicate of a declarative sentence, or t214 in an interrogative sentence. The same word might have other elementary trees in other constructions, such as subject and object relatives, meaning that the determination of the correct tree requires sensitivity to information that is not local in the string . To address the need for long-distance dependency information, the supertagging model makes use of Long Short-Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997), a recurrent network architecture which is constructed to avoid the vanishing/exploding gradient problem. Specifically, the supertagger developed by employs a one-layer bidirectional LSTM network. This architecture processes the input sentence both from beginning to end and from end to beginning. The output of these LSTM units at each time step are concatenated, fed into an affine transformation, and then fed into a softmax unit, yielding a probability distribution over the 4,727 elementary trees that exist in the TAG-parsed corpus we employ, which was extracted from the PTB corpus (Chen et al., 2005) . Each word is given to the network as in Kasai et al. (2018) : the concatenation of a 100-dimensional GloVe embedding (Pennington et al., 2014), a 5-dimensional embedding of a predicted part of speech tag, and a 30-dimensional character-level representation of the word. The network is trained by optimizing the negative loglikelihood of the observed sequences of supertags.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1215, |
|
"end": 1234, |
|
"text": "(Chen et al., 2005)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1277, |
|
"end": 1296, |
|
"text": "Kasai et al. (2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supertagger Architecture", |
|
"sec_num": "3.1" |
|
}, |
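As a rough illustration of the supertagger just described (a sketch under the stated assumptions, not the authors' implementation), the PyTorch fragment below runs a one-layer bidirectional LSTM over concatenated word representations (100-dimensional word embedding, 5-dimensional POS embedding, 30-dimensional character-level vector) and maps each time step through an affine layer and a softmax loss over the 4,727 supertags; the hidden size, module name, and toy inputs are hypothetical.

```python
import torch
import torch.nn as nn

class BiLSTMSupertagger(nn.Module):
    """Sketch of a one-layer BiLSTM supertagger (illustrative, not the authors' code)."""
    def __init__(self, n_supertags=4727, word_dim=100, pos_dim=5, char_dim=30, hidden=512):
        super().__init__()
        in_dim = word_dim + pos_dim + char_dim        # concatenated word representation
        self.bilstm = nn.LSTM(in_dim, hidden, num_layers=1,
                              bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_supertags)  # affine layer over both directions

    def forward(self, word_reprs):
        # word_reprs: (batch, seq_len, word_dim + pos_dim + char_dim)
        states, _ = self.bilstm(word_reprs)
        return self.out(states)                        # per-word logits over supertags

# Training optimizes the negative log-likelihood of the gold supertag sequence:
model = BiLSTMSupertagger()
loss_fn = nn.CrossEntropyLoss()                        # log-softmax + NLL over 4,727 tags
x = torch.randn(2, 7, 135)                             # toy batch: 2 sentences of 7 words
gold = torch.randint(0, 4727, (2, 7))
loss = loss_fn(model(x).reshape(-1, 4727), gold.reshape(-1))
loss.backward()
```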
|
{ |
|
"text": "Parsing is done using the arc-eager system of shift reduce parsing introduced in the MALT parser (Nivre et al., 2006) . This system maintains a stack, buffer, and the set of dependency relations derived so far as the current state. These dependency relations consist of the substitutions and adjoinings that have already occurred between elementary trees. Initially, the buffer holds the sequence of tokens in the sentence, and the transitions terminate when the buffer is empty. At each state, the arc-eager system may choose one of four operations: LEFT-ARC, RIGHT-ARC, SHIFT, and REDUCE, defining ways in which the top elements of the stack and buffer may be manipulated. The TAG parser further divides LEFT-ARC and RIGHT-ARC into seven types according to the derivational operation involved, whether substitution or adjoining, and the location at which the operation takes place .", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 117, |
|
"text": "(Nivre et al., 2006)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-Reduce Parsing Algorithm", |
|
"sec_num": "3.2" |
|
}, |
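A minimal sketch of the arc-eager state and its four basic transitions (illustrative Python, not the parser's implementation; the TAG parser's subdivision of LEFT-ARC and RIGHT-ARC by operation type and site is omitted, and the label strings below are hypothetical):

```python
# Minimal arc-eager transition system over token indices (illustrative only).
# State: (stack, buffer, arcs); each arc is (head_index, dependent_index, label).

def shift(stack, buffer, arcs):
    stack.append(buffer.pop(0))

def left_arc(stack, buffer, arcs, label):
    # top of stack becomes a dependent of the front of the buffer, then is popped
    dep = stack.pop()
    arcs.add((buffer[0], dep, label))

def right_arc(stack, buffer, arcs, label):
    # front of buffer becomes a dependent of the top of the stack, then is pushed
    head = stack[-1]
    dep = buffer.pop(0)
    arcs.add((head, dep, label))
    stack.append(dep)

def reduce(stack, buffer, arcs):
    # pop the top of the stack once it has already received its head
    assert any(dep == stack[-1] for _, dep, _ in arcs)
    stack.pop()

# Parsing terminates when the buffer is empty.
stack, buffer, arcs = [], list(range(5)), set()        # token indices 0..4
shift(stack, buffer, arcs)
right_arc(stack, buffer, arcs, "substitution@NP0")     # hypothetical TAG-style label
```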
|
{ |
|
"text": "The parser implemented by Kasai et al. (2017) uses a two-level feed-forward network that is trained to predict the operation that should be taken, given the top five elements of the stack and buffer. A noteworthy aspect of the parser is that these data structures contain only supertag information, not the identities of the words in the sentence being parsed. Each supertag is given to the network as a one-hot vector, which is then embedded into a more compact representation, together with vectors that indicate any substitution operations that have already been performed on the supertag. These vector representations of the top elements of the stack and buffer are concatenated and fed to the network, which yields a probability distribution over the possible transition actions. The parser is decoded using a beam search. Friedman et al. (2017) explore the benefits of a different input representation for the same parser, involving feature-based embeddings of the elementary trees. These feature-based embeddings are vectors that encode linguistically-defined dimensions of information about the elementary trees specified by Chung et al. (2016) . These dimensions include structural properties of the elementary tree (category of the root and head and the category and direction of substitution nodes), subcategorization frame, and grammatical properties (passive, particle shift, wh-movement). The rationale for training a parser with feature embeddings is to allow the network to exploit relationships between trees, and to be able to generalize parsing actions across related contexts. This is particularly useful for cases like passivization and wh-movement, in which the argument structure of the root remains the same, but there are changes in syntax which are reflected in the elementary trees. Friedman et al. (2017) compare the parsing models using both one-hot and featural representations of supertags with respect to parsing performance on PTB sentences, but only saw a \"slight improvement\" (approximately 0.2% improvement in LAS). However, in the case of adapting to new domains, learning this kind of linguistic information may bridge the gap between the original data domain and the new domain, as it will allow sharing of information about parsing actions for related structures. We explore the importance of providing linguistically-rich feature embeddings to the parser to aid in improving parsing accuracy in the new domain of interrogatives despite never training the parser on sentences from the new domain, especially when limited data is used.", |
|
"cite_spans": [ |
|
{ |
|
"start": 828, |
|
"end": 850, |
|
"text": "Friedman et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1133, |
|
"end": 1152, |
|
"text": "Chung et al. (2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1810, |
|
"end": 1832, |
|
"text": "Friedman et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shift-Reduce Parsing Algorithm", |
|
"sec_num": "3.2" |
|
}, |
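To illustrate the contrast between one-hot (-F) and feature-based (+F) supertag representations, the sketch below uses a made-up, heavily simplified feature inventory (the real dimensions follow Chung et al. (2016)); the point is only that linguistically related trees such as t27 and t214 end up with nearly identical vectors, whereas one-hot encodings treat them as unrelated symbols.

```python
import numpy as np

# Hypothetical, simplified feature inventory; the real one follows Chung et al. (2016)
# (root/head categories, substitution slots and directions, subcategorization frame,
# passive, particle shift, wh-movement, ...).
ROOT_CATS = ["S", "NP", "VP"]
FLAGS = ["has_subject_slot", "has_object_slot", "passive", "wh_movement"]

def feature_embedding(root_cat, flags):
    """Encode a supertag as a root-category one-hot plus binary linguistic features."""
    vec = [1.0 if root_cat == c else 0.0 for c in ROOT_CATS]
    vec += [1.0 if f in flags else 0.0 for f in FLAGS]
    return np.array(vec)

# Declarative transitive t27 vs. its wh-question counterpart t214:
t27 = feature_embedding("S", {"has_subject_slot", "has_object_slot"})
t214 = feature_embedding("S", {"has_subject_slot", "has_object_slot", "wh_movement"})
print(int(np.sum(t27 != t214)))                  # -> 1: the trees differ in one coordinate

# A one-hot encoding over the 4,727 supertags shares no structure between them:
onehot_t27 = np.zeros(4727); onehot_t27[27] = 1.0      # indices are arbitrary here
onehot_t214 = np.zeros(4727); onehot_t214[214] = 1.0
print(int(np.sum(onehot_t27 != onehot_t214)))    # -> 2: completely disjoint vectors
```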
|
{ |
|
"text": "The most direct approach to adapting a parser for new domains would be to generate a new, hand-annotated dataset that included instances of the new sentence type, which could be used to train a supertagger and parser. Such a process would, however, involve a substantial annotation effort for each new domain. We instead build on the approach of domain adaptation taken by Rimell and Clark (2008) . The viability of Rimell and Clark's approach rests on the assumption that \"supertagging is almost parsing\" (Bangalore and Joshi, 1999) . If a parser is provided with a correct set of supertags, it should perform better even on sentence types outside the domain on which it was trained. We therefore focus on retraining the TAG supertagger with a hand-annotated set of questions to which TAG elementary trees have been assigned to each word, but for which parses have not been generated. This hand-annotation process is less expensive than the creation of full parses. As we shall see, this procedure results in improvements in both supertagging and parsing accuracy without ever training the parser on an augmented dataset of questions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 373, |
|
"end": 396, |
|
"text": "Rimell and Clark (2008)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 533, |
|
"text": "(Bangalore and Joshi, 1999)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The question set used in this study contains 350 of the questions used by Rimell and Clark (2008) . Their dataset was drawn from the training data provided for the TREC 9-12 Competitions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 97, |
|
"text": "Rimell and Clark (2008)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To train the TAG BiLSTM supertagger, gold standard part of speech (POS) and supertag sequences were first created for the 350 question set. POS tags were assigned to the 350 questions using the Stanford CoreNLP webbased POS tagging tool. These tags were then checked and corrected by hand to create gold standard POS tags.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supertagger Training and Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Next, elementary trees were assigned to the sentences by hand. To make sure these hand annotations were compatible with and followed the same conventions as the method of supertag assignment for the PTB data used to train the parser, the PTB annotation guidelines (Bies et al., 1995) and the gold standard supertag data (Chen et al., 2005) were frequently reviewed. Stanford Tregex (Levy and Andrew, 2006) was used to find relevant trees (e.g., declarative forms of the questions, relative clauses with a similar structure) in the WSJ corpus. Through these methods, ambiguities regarding assignment of elementary trees were resolved. Hand annotation was primarily done by one author, and another author verified or corrected the hand annotations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 283, |
|
"text": "(Bies et al., 1995)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 339, |
|
"text": "(Chen et al., 2005)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 405, |
|
"text": "(Levy and Andrew, 2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supertagger Training and Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In essence, the hand annotation process was conducted as follows. Given the question \"What did Alexander Graham Bell (AGB) invent?\" the supertag sequence for the corresponding declarative was first determined ( Figure 5 ). From this, the supertag sequence for the question would be cre-ated. The biggest change is that the tree for the predicate, invent, must reflect the wh-movement ( Figure 6 ). As can be seen, the interrogative elementary tree t214 can be derived from the declarative elementary tree t27. NP 1 has been fronted, and the added auxiliary did will adjoin directly after NP 0 at the second S node. Appendix A contains more information about the conventions that were followed in assigning supertags in several common types of questions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 219, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 394, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supertagger Training and Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The BiLSTM supertagger was trained with two regimens. In one, only the original PTB training set (WSJ sections 01-22) was provided. In the other, the supertag sequences associated with the hand-tagged questions were added to the PTB data. Rimell and Clark (2008) added ten copies of their 1,328 training questions, adding 13,280 questions to the 39,832 PTB training sentences. Due to the smaller number of hand-tagged questions used for training in this study, 35 exact copies of the training questions were added to the PTB training sentences. This yielded a total of 49,632 sentences in the training set. Through a developmental stage of training and testing, it was determined that 35 copies was optimal to have the highest possible accuracy of supertagging questions without overfitting or reducing accuracy of supertagging PTB sentences. Supertagger training and testing was done using five-fold cross-validation. For each of the five folds, a unique subset of 70 questions was saved for testing, and the remaining 280 questions were used for training. We report mean accuracy over these five folds.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 262, |
|
"text": "Rimell and Clark (2008)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supertagger Training and Evaluation", |
|
"sec_num": "4.3" |
|
}, |
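A small sketch of the training-set construction and cross-validation split described above (question identifiers and the loop body are hypothetical); it also checks the reported arithmetic: 39,832 PTB sentences plus 35 copies of each of the 280 training questions gives 49,632 sentences.

```python
import random

N_PTB = 39_832                                   # PTB WSJ training sentences (01-22)
questions = [f"q{i}" for i in range(350)]        # hypothetical IDs for the 350 questions
random.seed(0)
random.shuffle(questions)

# Five-fold cross-validation: 70 held-out questions per fold, 280 used for training.
folds = [questions[i * 70:(i + 1) * 70] for i in range(5)]
for k, test_qs in enumerate(folds):
    train_qs = [q for q in questions if q not in test_qs]      # 280 questions
    augmented_size = N_PTB + 35 * len(train_qs)                # 35 exact copies each
    assert len(train_qs) == 280 and augmented_size == 49_632
    # Train the supertagger on the PTB sentences plus 35 copies of train_qs,
    # evaluate supertag accuracy on test_qs, then average over the five folds.
```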
|
{ |
|
"text": "In order to analyze parsing performance of questions, gold parses were created for a small test set of 48 questions, each associated with a unique supertag sequence. These questions were not among those used for the training of the supertagger. As before, the assignment of gold parses was done through careful consultation of the PTB annotation guidelines (Bies et al., 1995) , as well as the existing TAG-parsed version of the PTB.", |
|
"cite_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 376, |
|
"text": "(Bies et al., 1995)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Evaluation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "For the TAG parser, creation of gold parses requires not only the gold supertag sequences, but also the dependency relations (for UAS and LAS) and the arc labels (for LAS). Two additional columns of information must be added when creating a gold parse as opposed to a gold supertag sequence for a sentence, as shown below. As a result, creating gold supertag sequences is less timeintensive than creating gold parses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Evaluation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Supertag Rel Arc Label 1 What t612 2 adjoin 2 continent t3 5 1 (object) 3 is t259 5 adjoin 4 India t3 5 0 (subject) 5 on t2911 0 root Two parsing models were explored, both trained only on the PTB TAG parses: (1) the parser model proposed by that was trained using one-hot vector embeddings of the elementary trees (henceforth -F), and (2) an identical parser trained with Friedman et al.'s elementary tree feature embeddings (henceforth +F). Decoding for both parsers was done using beam search with a beam size of 16. For each model, three different scenarios were tested, varying in the nature of the supertag input received for the questions to be parsed: (1) supertags given by the original PTB-trained BiLSTM supertagger model ) (henceforth PTB), (2) supertags given by a supertagger model trained with an augmented dataset of questions and PTB sentences (henceforth PTB+Q), and (3) hand-annotated gold supertags (henceforth Gold). The accuracy of parses in each of the six cases are reported in Section 5.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word", |
|
"sec_num": null |
|
}, |
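In the gold parse format shown above, each word carries a supertag, the index of the elementary tree it attaches to in the derivation (the Rel column, with 0 marking the root), and an arc label. A small illustrative sketch (a hypothetical helper, not the evaluation code used in the paper) of how UAS and LAS would be computed from such records:

```python
# Each row: (index, word, supertag, head_index, arc_label); head 0 marks the root.
gold = [(1, "What", "t612", 2, "adjoin"),
        (2, "continent", "t3", 5, "1 (object)"),
        (3, "is", "t259", 5, "adjoin"),
        (4, "India", "t3", 5, "0 (subject)"),
        (5, "on", "t2911", 0, "root")]

def uas_las(gold_rows, pred_rows):
    """Unlabeled/labeled attachment scores over aligned token rows (illustrative)."""
    uas_hits = sum(g[3] == p[3] for g, p in zip(gold_rows, pred_rows))
    las_hits = sum(g[3] == p[3] and g[4] == p[4] for g, p in zip(gold_rows, pred_rows))
    n = len(gold_rows)
    return uas_hits / n, las_hits / n

# A hypothetical predicted parse that misattaches "What":
pred = [(1, "What", "t612", 5, "adjoin")] + gold[1:]
print(uas_las(gold, pred))   # -> (0.8, 0.8)
```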
|
{ |
|
"text": "Supertagging results for the set of 350 questions and the PTB test set are reported separately in Table 1 . The PTB-trained supertagger gave an accuracy of 79.61% for the set of 350 questions (an average over the five folds of cross-validation, weighted by the number of words in each fold), and 91.50% for the PTB test set. This PTB-trained supertagger frequently made three types of errors when assigning elementary trees to questions:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 105, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supertagging Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "1. Incorrect wh-phrase construction: The correct elementary tree for the wh-determiner (e.g., what in what book) should contain a right NP* adjunction node to adjoin to the NP book (as in t1 assigned to the in the book, Figure 1 ). Instead, the elementary tree assigned to book by the PTB-trained supertagger would incorrectly contain a left NP* adjunction node to facilitate adjunction to the wh-phrase, or the verbal predicate's elementary tree would have two NP substitution nodes into which the wh-determiner and the noun could be inserted separately.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 228, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supertagging Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "2. Incorrect tree for auxiliary verb: Auxiliary verbs (e.g., did) were treated as in a declarative sentence, heading a VP-recursive auxiliary tree t23. Because the auxiliary should appear immediately following the fronted NP and before the subject, the adjunction of the verb should instead take place at S (cf. tree t214 in Figure 3 ), as in tree t259. question version (i.e., neither fronting nor the NP-NONE trace were expressed in the elementary tree). For a transitive sentence, this means t27 (Figure 1 ) was assigned to the verbal predicate rather than t214 (Figure 3 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 333, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 508, |
|
"text": "(Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 574, |
|
"text": "(Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supertagging Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For the PTB+Q trained supertagger, supertagging accuracy improved, particularly in regards to the three common errors outlined above. On average, supertagging accuracy increased substantially for the question test sets. At the same time, supertagging accuracy on the PTB test set was maintained, indicating that when augmentation is done appropriately, additional training on types of constructions rare in a corpus does not adversely affect supertagging performance on the original corpus. Table 2 reports parsing accuracy on the PTB test set for each of the six parser input conditions described in Section 4 (varying by supertag input and presence or absence of feature-embeddings). 3 We see that the addition of the question data to the supertag's training data (PTB+Q) has a minimal effect on parser performance on the PTB test sentences. Similarly, as found by Friedman et al. (2017) , the addition of feature embeddings results in a very small improvement in parsing accuracy, if at all.", |
|
"cite_spans": [ |
|
{ |
|
"start": 686, |
|
"end": 687, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 867, |
|
"end": 889, |
|
"text": "Friedman et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 491, |
|
"end": 498, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supertagging Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "More relevant for the current topic of discussion is the parsing performance of questions, which is reported in Table 3 for each of the six parser input conditions. We first note that while labeled parsing accuracy (LAS) for the -F parser improved from 79.79% to 85.67% when going from PTB to PTB+Q supertagger training, we see an even more dramatic increase when the feature-trained (+F) parser is used: in this case, parsing accuracy increases to 93.09%. As discussed in Section 3.3, the feature embeddings provide linguistic information over which the parser can generalize from one type of structure to another. Because of the rarity of questions in the PTB, many of the correct supertags used when hand-annotating the question set are also rarely present in the gold standard supertag data for the PTB WSJ corpus (Chen et al., 2005) . As a result, the TAG parser (trained only on the PTB WSJ corpus) was not equipped to properly handle these supertags. Thus, while the parsing accuracy increased when given PTB+Q-trained supertags, the improvement is not as large as it might be due to the parser repeatedly encountering uncommon supertags that it was unable to correctly staple together. When the +F parser was used, the parser had learned the knowledge required to better deal with these less common supertags, and parsing accuracy improved from 83.88% to 93.09%. It is notable that this improvement is super-additive: the improvement on LAS (13.3%) is greater than the sum of the individual improvements obtained by using the improved supertagger (PTB+Q) alone (5.88%) or using feature-embeddings (+F) in the parser (4.09%). Thus, we find that with our approach to domain adaptation, when coupled with representations that encode linguistic commonalities across different types of structures, accuracy can increase to a level comparable to the parsing accuracy of the original domain. It is also notable that, when training the supertagger, so few questions (350) are needed to see a significant increase in both supertagging and parsing accuracy (by 15% and 13%, respectively). Table 4 breaks errors in parsing questions into two categories. The error category of \"incorrect wh-phrase\" relates to parses of questions that failed to adjoin a wh-determiner to its corresponding noun phrase, or that incorrectly substituted a wh-phrase as an argument of the corresponding predicate. The \"missing root\" category relates to PTB PTB+Q Gold incorrect wh-phrase -F 19 9 7 +F 16 3 3 missing root -F 16 25 23 +F 1 0 0 parses that omit assigning any term in the sentence as the root of the dependency parse, most likely due to complexity or rareness of the correct root word's elementary tree. The number and types of parsing errors deriving from the presence of uncommon supertags in questions (e.g., a parse missing a root) persist in the -F parser. In contrast, these errors are minimal for the +F parser. Treatment of the wh-phrase construction was a specific focus of training the supertagger on questions, and while errors in this category decreased (cf . Table 4 ) for both parsers once the improved supertags were given, the feature-trained (+F) parser was better able to handle these constructions, and errors decreased much more.", |
|
"cite_spans": [ |
|
{ |
|
"start": 818, |
|
"end": 837, |
|
"text": "(Chen et al., 2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 119, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 2087, |
|
"end": 2094, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
|
{ |
|
"start": 3069, |
|
"end": 3079, |
|
"text": ". Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parsing Results", |
|
"sec_num": "5.2" |
|
}, |
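For concreteness, the super-additivity claim can be checked directly from the question-parsing LAS values reported above (a trivial arithmetic check; the variable names are ours):

```python
# LAS (%) on questions: -F/PTB baseline, -F/PTB+Q, +F/PTB, +F/PTB+Q (from Table 3).
base, plus_q, plus_f, plus_both = 79.79, 85.67, 83.88, 93.09

gain_q = plus_q - base          # improved supertagger alone:  5.88
gain_f = plus_f - base          # feature embeddings alone:    4.09
gain_both = plus_both - base    # both together:              13.30

print(round(gain_q, 2), round(gain_f, 2), round(gain_both, 2))
print(gain_both > gain_q + gain_f)   # super-additive -> True
```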
|
{ |
|
"text": "It is important to note that, although the number of sentences with a missing root increases from the PTB to PTB+Q trained supertagger, the reason for having a missing root changes. Given the correct (often rarer) supertag for the root in the PTB+Q case, the -F parser is now not equipped to properly combine other trees with it, so the root is skipped. This leads to higher numbers of missing root errors for both PTB and PTB+Q. However, such errors do not occur in the +F parser, as sensitivity to features allows the parser to be better equipped to compose even rare trees correctly. We find then that the statement \"supertagging is almost parsing\" (Bangalore and Joshi, 1999) is true only when the linguistic content of supertags is known to the parser. When the parser receives correct supertags (gold) and is equipped to handle them properly since it was trained with feature embeddings, it yields near-perfect parses (99.74%).", |
|
"cite_spans": [ |
|
{ |
|
"start": 652, |
|
"end": 679, |
|
"text": "(Bangalore and Joshi, 1999)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We anticipate that the approach of domain adaptation for supertagging and parsing explored here can be applied to other domains. For example, imperatives are another sentence type nearly absent from newspaper corpora, but which are nonetheless a crucial type of input to NLP systems such as In addition, because questions are not wellrepresented among the original PTB training corpus for the parser, questions on which the parser was tested sometimes involved novel supertags that were absent from the grammar extracted from the PTB. For example, copular sentences with NP predicates (like Mardi Gras is a festival) can front the predicate to form a question (as in What is Mardi Gras?). The appropriate elementary tree for such cases should be the one given in Figure 8 , with the clausal predicate what appearing in fronted position. However, no such elementary tree exists among those that were extracted from the PTB by Chen et al. (2005) . Consequently, in order to better parse all types of questions, and more generally sentences from other domains, it will be necessary to allow for the creation and feature decomposition of new elementary trees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 925, |
|
"end": 943, |
|
"text": "Chen et al. (2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 763, |
|
"end": 771, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this study, we explored an approach to domain adaptation for TAG parsing in the context of questions. We extended Rimell and Clark's approach for improving parsing by improving supertagging. We found first of all that improvements in TAG supertagging, despite the larger number of supertags involved as compared with CCG, are possible through a relatively limited hand-annotation effort. Supertagging accuracy of questions increased by 15%, without sacrificing supertagging accuracy on the original corpus data. Furthermore, while this approach is also successful in improving parsing performance, its effectiveness is maximized when the parser makes use of linguisticallyinformed representations of supertags. Strikingly, previous work (Friedman et al., 2017) found that the introduction of hand-coded linguistic features in the supertag representations given to the parser does not yield significant benefits in parsing performance. However, our current results suggest that the addition of linguistic features can constitute a crucial source of information when processing structures that are underrepresented in the training data. A parser trained with linguisticallydefined feature decompositions of the supertags can better handle those supertags that are uncommon in the data it was trained on. In such cases (e.g., questions), the parser is able to exploit abstract commonalities with related structures, such as relative clauses, that do occur frequently in the training data. Without such linguistically structured representations, considerably more effort would need to be expended to annotate parses in the new domain of questions. We see then that neural methods are not immune to the need for the careful incorporation of hand-coded linguistic features, particularly in addressing problems of domain adaptation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 740, |
|
"end": 763, |
|
"text": "(Friedman et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Pauli Xu, Below we briefly present our assumptions for each type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "How many/much ... ? An example of this type of question is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "(1) How many battles did she win?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "It is first useful to examine the declarative version closest to this sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "(2) She did win five battles.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The key difference between the interrogative version (Sentence 1) and the declarative version (Sentence 2) is the change in order, akin to that of wh-movement. Thus, the elementary tree for the verbal predicate win in this question must include the noun phrase trace, as in t214:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "S VP NP1# V win NP0# t27 (declarative) S S VP NP -NONE- V win NP0# NP1# t214 (interrogative)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "As can be seen, the interrogative elementary tree t214 can be derived from the declarative elementary tree t27. NP 1 corresponds to five battles. NP 1 4 Part of speech tags are taken from the PTB. in t27 has been replaced by the NP-NONE trace in t214, since it has moved to the beginning of the sentence (fronting). To show this, an additional S node has been added to the top of the tree. Another key difference adopted as a convention is the treatment of did. In the declarative sentence, did is assigned t23, a VP-recursive auxiliary tree. However, in the interrogative version, did is assigned t259, an S-recursive auxiliary tree. The difference is shown below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "VP VP * V} t23 S S * V} t259", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "This is because of the placement of the additional S node in t214. The auxiliary verb did must come between the object (NP 1 ) and subject (NP 0 ) of the question, as shown in t214.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "An example of this type of question is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) What is the capital of Kentucky?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "with the corresponding declarative sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(4) Frankfort is the capital of Kentucky.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The supertags assigned to Sentence 3 are shown in Figure 9 , and the supertag for the predicate is t668 in Figure 10 . There are two key concepts behind this type of question. First, as for the auxiliary verb did in the earlier question type, t23 becomes t259 in the context of questions due to the necessity of adjoining to the S node in a position above the subject. Second, we notice in a copular sentence there is no verb to head the elementary tree, i.e., to project the main S node that serves as the root of the derivation. Instead, the noun capital plays the role of predicate of the sentence, and is assigned an Srooted elementary tree, t668. Figure 10 illustrates the similarity of the two elementary trees assigned to the predicate nominal capital in declarative and interrogative forms, with the interrogative t668 encoding the NP-NONE trace. The sequence of elementary trees assigned to this sentence is shown in Figure 11 . Although there is no change in word order when converting from the interrogative to the declarative version of this sentence, the verbally-headed elementary tree follows the practice of placing a trace in subject position and displacing the subject to a higher position, as done in the PTB.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 58, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 116, |
|
"text": "Figure 10", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 652, |
|
"end": 661, |
|
"text": "Figure 10", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 926, |
|
"end": 935, |
|
"text": "Figure 11", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Earlier, t214 was used for the question version of the transitive verb win's elementary tree. The difference between t214 and t335 is whether it was the object or subject that was fronted to form the question. Distinct elementary trees are necessary for each possible position of extraction for a given pattern of transitivity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The final question type we consider here is as follows: 7What city is Logan Airport in?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP+IN ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unlike copular questions, in which a noun phrase is the main predicate, in Sentence 7 the main predicate is the preposition in. As a result, this preposition constitutes the head of the S-rooted elementary tree, as shown in Figure 12 , where what city substitutes into the NP 1 node (object), and Logan Airport substitutes into the NP 0 node (subject). ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 233, |
|
"text": "Figure 12", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "What (NP) is NP+IN ?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this respect, TAG is similar to Combinatory Categorial Grammar (CCG)(Steedman, 2000), though the lexical units of CCG carry somewhat less information about structural context, as we will discuss below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Following the standard in the TAG parsing literature, these values do not include accuracy for punctuation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors are grateful to Jungo Kasai for his crucial advice and technical support throughout this work. We would also like to thank the members of the CLAY lab at Yale, who provided valuable feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Broad-coverage CCG semantic parsing with AMR", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1699--1710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 1699-1710.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Supertagging: An approach to almost parsing", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Aravind", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Computational linguistics", |
|
"volume": "25", |
|
"issue": "2", |
|
"pages": "237--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivas Bangalore and Aravind K Joshi. 1999. Su- pertagging: An approach to almost parsing. Com- putational linguistics, 25(2):237-265.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Mary Ann Marcinkiewicz, and Britta Schasberger. 1995. Bracketing guidelines for treebank II style Penn Treebank project", |
|
"authors": [ |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Bies", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Ferguson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Mac-Intyre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victoria", |
|
"middle": [], |
|
"last": "Tredinnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grace", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Britta", |
|
"middle": [], |
|
"last": "Schasberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ann Bies, Mark Ferguson, Karen Katz, Robert Mac- Intyre, Victoria Tredinnick, Grace Kim, Mary Ann Marcinkiewicz, and Britta Schasberger. 1995. Bracketing guidelines for treebank II style Penn Treebank project. University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automated extraction of tree-adjoining grammars from treebanks", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Vijay-Shanker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Natural Language Engineering", |
|
"volume": "12", |
|
"issue": "3", |
|
"pages": "251--299", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Chen, Srinivas Bangalore, and K. Vijay-Shanker. 2005. Automated extraction of tree-adjoining gram- mars from treebanks. Natural Language Engineer- ing, 12(3):251-299.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Revisiting supertagging and parsing: How to use supertags in transition-based parsing", |
|
"authors": [ |
|
{ |
|
"first": "Wonchang", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Siddhesh Suhas Mhatre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Nasr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "12th International Workshop on Tree Adjoining Grammars and Related Formalisms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wonchang Chung, Siddhesh Suhas Mhatre, Alexis Nasr, Owen Rambow, and Srinivas Bangalore. 2016. Revisiting supertagging and parsing: How to use su- pertags in transition-based parsing. In 12th Interna- tional Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+ 12), pages 85-92.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A discriminative model for tree-to-tree translation", |
|
"authors": [ |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivona", |
|
"middle": [], |
|
"last": "Ku\u010derov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "232--241", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brooke Cowan, Ivona Ku\u010derov\u00e1, and Michael Collins. 2006. A discriminative model for tree-to-tree trans- lation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Process- ing, pages 232-241.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Phrase Structure Composition and Syntactic Dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Frank. 2004. Phrase Structure Composition and Syntactic Dependencies. MIT Press, Cam- bridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Linguistically rich vector representations of supertags for TAG parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jungo", |
|
"middle": [], |
|
"last": "Kasai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Forrest", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "122--131", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Friedman, Jungo Kasai, Thomas R. McCoy, Robert Frank, Forrest Davis, and Owen Rambow. 2017. Linguistically rich vector representations of supertags for TAG parsing. In Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms, pages 122-131. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/neco.1997.9.8.1735" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Tree adjunct grammars", |
|
"authors": [ |
|
{ |
|
"first": "Aravind", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masako", |
|
"middle": [], |
|
"last": "Takahashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Journal of computer and system sciences", |
|
"volume": "10", |
|
"issue": "1", |
|
"pages": "136--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aravind K Joshi, Leon S Levy, and Masako Takahashi. 1975. Tree adjunct grammars. Journal of computer and system sciences, 10(1):136-163.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "TAG parsing with neural networks and vector representations of supertags", |
|
"authors": [ |
|
{ |
|
"first": "Jungo", |
|
"middle": [], |
|
"last": "Kasai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"Thomas" |
|
], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Nasr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jungo Kasai, Robert Frank, R. Thomas McCoy, Owen Rambow, and Alexis Nasr. 2017. TAG parsing with neural networks and vector representations of su- pertags. In Proceedings of EMNLP. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "End-to-end graph-based tag parsing with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Jungo", |
|
"middle": [], |
|
"last": "Kasai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pauli", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Merrill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jungo Kasai, Robert Frank, Pauli Xu, William Merrill, and Owen Rambow. 2018. End-to-end graph-based tag parsing with neural networks. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies (NAACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Syntax-aware neural semantic role labeling with supertags", |
|
"authors": [ |
|
{ |
|
"first": "Jungo", |
|
"middle": [], |
|
"last": "Kasai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jungo Kasai, Dan Friedman, Robert Frank, Dragomir Radev, and Owen Rambow. 2019. Syntax-aware neural semantic role labeling with supertags. In Pro- ceedings of the Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL- HLT).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Tregex and Tsurgeon: Tools for querying and manipulating tree data structures", |
|
"authors": [ |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Galen", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2231--2234", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger Levy and Galen Andrew. 2006. Tregex and Tsurgeon: Tools for querying and manipulating tree data structures. In Proceedings of the Fifth Interna- tional Conference on Language Resources and Eval- uation (LREC), pages 2231-2234.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Building a large annotated corpus of English: The Penn treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn treebank. Computa- tional Linguistics, 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Predicting target language CCG supertags improves neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Nadejde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siva", |
|
"middle": [], |
|
"last": "Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Dwojak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "68--79", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-4707" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, and Alexandra Birch. 2017. Predicting target lan- guage CCG supertags improves neural machine translation. In Proceedings of the Second Confer- ence on Machine Translation, pages 68-79, Copen- hagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Maltparser: A data-driven parser-generator for dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Nilsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser-generator for de- pendency parsing. In Proceedings of the Fifth In- ternational Conference on Language Resources and Evaluation (LREC).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Uptraining for accurate deterministic question parsing", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pi-Chuan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Ringgaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiyan", |
|
"middle": [], |
|
"last": "Alshawi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "705--713", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for accurate de- terministic question parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 705-713.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Adapting a lexicalized-grammar parser to contrasting domains", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Rimell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "475--484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Rimell and Stephen Clark. 2008. Adapting a lexicalized-grammar parser to contrasting domains. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 475-484. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Porting a lexicalized-grammar parser to the biomedical domain", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Rimell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Biomedical Informatics", |
|
"volume": "42", |
|
"issue": "5", |
|
"pages": "852--865", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Rimell and Stephen Clark. 2009. Port- ing a lexicalized-grammar parser to the biomedi- cal domain. Journal of Biomedical Informatics, 42(5):852-865.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The Syntactic Process", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Elementary trees for Alice read the book quickly." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Derived and derivation trees for Alice read the book quickly." |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Elementary trees for object questions and object relative clauses." |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The TAG Parser pipeline." |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "What did AGB invent?" |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Proposed elementary tree for \"what\" predicate virtual assistants. Another domain to which this method might be applied involves biomedical and clinical text (cf.Rimell and Clark 2009), which pose a challenge for information retrieval systems due to the domain-specific vocabulary abbreviations and distinctive syntactic structures, such as null subjects, asyndetic coordination, and fragments.(a) abbreviations: 8 yo M no PMH presents with n/v/F and fever x4 days (b) null subjects: presents with shortness of breath (c) asyndetic coordination: VS notable for fever to 103F, tachycardia, tachypnea (d) fragments: non-toxic though appears ill" |
|
}, |
|
"FIGREF6": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Elementary trees for Sentence 3." |
|
}, |
|
"FIGREF7": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Elementary trees for \"capital\" in Sentences 4 and 3, respectively. What (NP) VP ? Sentence 5 gives an example of this type of question. (5) What car company invented the Edsel? (6) Ford invented the Edsel." |
|
}, |
|
"FIGREF8": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Elementary trees assigned to Sentence 5." |
|
}, |
|
"FIGREF9": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Elementary tree used in Sentence 7." |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Supertagging Accuracy. Rows indicate training set, whether augmented or not.", |
|
"num": null, |
|
"content": "<table><tr><td/><td>PTB</td><td colspan=\"2\">PTB+Q Gold</td></tr><tr><td>UAS</td><td colspan=\"2\">-F 90.80 90.53 +F 91.14 90.51</td><td>94.60 96.00</td></tr><tr><td>LAS</td><td colspan=\"2\">-F 89.63 89.39 +F 90.00 89.39</td><td>94.07 95.81</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "PTB Test Parsing Accuracy. Columns indicate training set for supertagger (or gold supertags) that provide input to the parser. \u00b1F indicates the presence or absence of feature-based supertag embeddings in the input to the parser.", |
|
"num": null, |
|
"content": "<table><tr><td/><td>PTB</td><td colspan=\"2\">PTB+Q Gold</td></tr><tr><td>UAS</td><td colspan=\"2\">-F 81.84 86.70 +F 86.18 93.86</td><td>91.04 99.74</td></tr><tr><td>LAS</td><td colspan=\"2\">-F 79.79 85.67 +F 83.88 93.09</td><td>90.53 99.74</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Question Parsing Accuracy. Columns indicate training set for supertagger (or gold supertags) that provide input to the parser. \u00b1F indicates the presence or absence of feature-based supertag embeddings in the input to the parser.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Summary of Parsing Evaluation for Questions. \u00b1F indicates the presence or absence of feature-based supertag embeddings in the input to the parser.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": ". TAG parsing evaluation using textual entailments. In Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms, pages 132-141. Association for Computational Linguistics.", |
|
"num": null, |
|
"content": "<table><tr><td>A Appendix: Assigning TAG Supertags</td></tr><tr><td>to Questions</td></tr><tr><td>This appendix lays out the linguistic assumptions</td></tr><tr><td>and analytic decisions that were made for question</td></tr><tr><td>supertagging and parsing. Within the 350 ques-</td></tr><tr><td>tions, four basic question types, expressed in a</td></tr><tr><td>generalized form below, were most common. 4</td></tr><tr><td>a. How many/much ... ?</td></tr><tr><td>b. What (NP) is NP ?</td></tr><tr><td>c. What (NP) VP ?</td></tr><tr><td>d. What (NP) is NP+IN ?</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |