|
{ |
|
"paper_id": "C16-1022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:03:53.045585Z" |
|
}, |
|
"title": "Syntactic realization with data-driven neural tree grammars", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Mcmahan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Rutgers University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Stone", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Rutgers University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A key component in surface realization in natural language generation is to choose concrete syntactic relationships to express a target meaning. We develop a new method for syntactic choice based on learning a stochastic tree grammar in a neural architecture. This framework can exploit state-of-the-art methods for modeling word sequences and generalizing across vocabulary. We also induce embeddings to generalize over elementary tree structures and exploit a tree recurrence over the input structure to model long-distance influences between NLG choices. We evaluate the models on the task of linearizing unannotated dependency trees, documenting the contribution of our modeling techniques to improvements in both accuracy and run time.", |
|
"pdf_parse": { |
|
"paper_id": "C16-1022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A key component in surface realization in natural language generation is to choose concrete syntactic relationships to express a target meaning. We develop a new method for syntactic choice based on learning a stochastic tree grammar in a neural architecture. This framework can exploit state-of-the-art methods for modeling word sequences and generalizing across vocabulary. We also induce embeddings to generalize over elementary tree structures and exploit a tree recurrence over the input structure to model long-distance influences between NLG choices. We evaluate the models on the task of linearizing unannotated dependency trees, documenting the contribution of our modeling techniques to improvements in both accuracy and run time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Where natural language understanding systems face problems of ambiguity, natural language generation (NLG) systems face problems of choice. A wide coverage NLG system must be able to formulate messages using specialized linguistic elements in the exceptional circumstances where they are appropriate; however, it can only achieve fluency by expressing frequent meanings in routine ways. Empirical methods have thus long been recognized as crucial to NLG; see e.g. Langkilde and Knight (1998) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 491, |
|
"text": "Langkilde and Knight (1998)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With traditional stochastic modeling techniques, NLG researchers have had to predict choices using factored models with handcrafted representations and strong independence assumptions, in order to avoid combinatorial explosions and address the sparsity of training data. By contrast, in this paper, we leverage recent advances in deep learning to develop new models for syntactic choice that free engineers from many of these decisions, but still generalize more effectively, match human choices more closely, and enable more efficient computations than traditional techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We adopt the characterization of syntactic choice from : the problem is to use a stochastic tree model and a language model to produce a linearized string from an unordered, unlabeled dependency graph. The first step to producing a linearized string is to assign each item an appropriate supertag-a fragment of a parse tree with a leaf left open for the lexical item. This process involves applying a learned model to make predictions for the syntax of each item and then searching over the predictions to find a consistent assignment for the entire sentence. The resulting assignments allow for many possible surface realization outputs because they can underdetermine the order and attachment of adjuncts. To finish the linearization, a language model is used to select the most likely surface form from among the alternatives. While improving the language model would improve the linearized string, we focus here on more accurately predicting the correct supertags from unlabeled dependency trees.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our work exploits deep learning to improve the model of supertag assignment in two ways. First, we analyze the use of embedding techniques to generalize across supertags. Neural networks offer a number of architectures that can cluster tree fragments during training; such models learn to treat related structures similarly, and we show that they improve supertag assignments. Second, we analyze the use of tree recurrences to track hierarchical relationships within the generation process. Such networks can track more of the generation context than a simple feed-forward model; as a side effect, they can simplify the problem of computing consistent supertag assignments for an entire sentence. We evaluate our contributions in two ways: first, by varying the technique used to embed supertags, and then by comparing a feed-forward model against our recurrent tree model. Our presentation begins in \u00a7 2 with an introduction to tree grammars and a deterministic methodology for inducing the elementary trees of the grammar. Next, \u00a7 3 presents the techniques we have developed to represent a tree grammar using a neural architecture. Then, in \u00a7 4, we describe the specific models we have implemented and the algorithms used to exploit the models in NLG. The experiments in \u00a7 5 demonstrate the improvement of the model over baseline results based on previous work on stochastic surface realization. We conclude with a brief discussion of the future potential for neural architectures to predict NLG choices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Broadly, tree grammars are a family of tree rewriting formalisms that produce strings as a side effect of composing primitive hierarchical structures. The basic syntactic units are called elementary trees; elementary trees combine using tree-rewrite rules to form derived phrase structure trees describing complex sentences. Inducing a tree grammar involves fixing a formal inventory of structures and operations for elementary trees and then inferring instances of those structures to match corpus data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Grammars", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The canonical tree grammar is perhaps lexicalized tree-adjoining grammar (LTAG) (Joshi and Schabes, 1991) . The elementary trees of LTAG consist of two disjoint sets with distinct operations: initial trees can perform substitution operations and auxiliary trees can perform adjunction operations. The substitution operation replaces a non-terminal leaf of a target tree with an identically-labeled root node of an initial tree. The adjunction operation modifies the internal structure of a target tree by expanding a node identically-labeled with the root and a distinguished foot note in the auxiliary tree. The lexicalization of the the grammar requires each elementary tree to have at least one lexical item as a leaf.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 105, |
|
"text": "Schabes, 1991)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammar Formalism", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "LTAG incurs computational costs because it is mildly context-sensitive in generative power. Several variants reduce the complexity of the formalism by limiting the range of adjunction operations. For example, the Tree Insertion Grammar allows for adjunction as long as it is either a left or right auxiliary tree (Schabes and Waters, 1995) . Tree Substitution Grammars, meanwhile, allow for no adjunction and only substitutions (Cohn et al., 2009) . We adopt one particular restriction on adjunction, called sisteradjunction or insertion, which allows trees to attach to an interior node and add itself as a first or last child (Chiang, 2000) . Chiang's sister-adjunction allows for the flat structures in the Penn Treebank while limiting the formalism to context-free power.", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 339, |
|
"text": "(Schabes and Waters, 1995)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 447, |
|
"text": "(Cohn et al., 2009)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 642, |
|
"text": "(Chiang, 2000)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammar Formalism", |
|
"sec_num": "2.1" |
|
}, |
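To make the tree operations above concrete, the following is a minimal sketch of elementary trees supporting substitution and sister-adjunction (insertion). The class and function names (Node, ElementaryTree, substitute, sister_adjoin) are illustrative placeholders, not the paper's implementation.

```python
# Illustrative sketch of elementary trees with substitution and
# sister-adjunction (insertion); names are hypothetical, not the paper's code.

class Node:
    def __init__(self, label, children=None, is_subst_site=False):
        self.label = label                  # syntactic category, e.g. "NP", "VP"
        self.children = children or []
        self.is_subst_site = is_subst_site  # open non-terminal leaf awaiting substitution

class ElementaryTree:
    def __init__(self, root, anchor, kind):
        self.root = root      # root Node of the fragment
        self.anchor = anchor  # lexical item anchoring the tree
        self.kind = kind      # "initial" (substitutes) or "insertion" (sister-adjoins)

def substitute(target_leaf, initial_tree):
    """Graft an identically labeled initial tree onto an open substitution site."""
    assert target_leaf.is_subst_site and target_leaf.label == initial_tree.root.label
    target_leaf.children = initial_tree.root.children
    target_leaf.is_subst_site = False

def sister_adjoin(target_node, insertion_tree, leftmost=True):
    """Attach an insertion tree as the first or last child of an interior node."""
    if leftmost:
        target_node.children.insert(0, insertion_tree.root)
    else:
        target_node.children.append(insertion_tree.root)

# usage: substitute an NP into an open NP slot, then insert an adverb under VP
vp = Node("VP", [Node("VBD", [Node("halted")])])
s = Node("S", [Node("NP", is_subst_site=True), vp])
substitute(s.children[0],
           ElementaryTree(Node("NP", [Node("NN", [Node("trading")])]), "trading", "initial"))
sister_adjoin(vp,
              ElementaryTree(Node("ADVP", [Node("RB", [Node("briefly")])]), "briefly", "insertion"))
```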
|
{ |
|
"text": "In lexicalized tree grammars, the lexicon and the grammatical rules are one and the same. The set of possible grammatical moves which can be made are simultaneously the set of possible words which can be used next. This means that inducing a tree grammar from a data set is a matter of inferring the set of constructions in the data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammar Induction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We follow previous work in using bracketed phrase structure corpora and deterministic rules to induce the grammar (Bangalore et al., 2001; Chiang, 2000) . Broadly, the methodology is to split the observed trees into the constituents which make it up, according to the grammar formalism. We use head rules (Chiang, 2000; Collins, 1997; Magerman, 1995) to associate internal nodes in a bracketed tree with the lexical item that owns it. We use additional rules to classify some children as complements, corresponding to substitution sites and root notes of complement trees; and other children as adjuncts, corresponding to insertion trees that combine with the parent node, either to the right or to the left of the head. This allows us to segment the tree into units of substitution and insertion. 1 Figure 1 : Embedding supertags using convolutional neural networks. In (A), a tree is encoded by its features and then embedded. In (B), convolutional layers are used to encode the supertag into a vector.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 138, |
|
"text": "(Bangalore et al., 2001;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 152, |
|
"text": "Chiang, 2000)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 319, |
|
"text": "(Chiang, 2000;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 334, |
|
"text": "Collins, 1997;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 350, |
|
"text": "Magerman, 1995)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 800, |
|
"end": 808, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Grammar Induction", |
|
"sec_num": "2.2" |
|
}, |
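As a rough illustration of this induction recipe, here is a toy sketch that walks a bracketed tree, picks a head child with a drastically simplified head-rule table, peels off complements as substitution sites and adjuncts as insertion trees, and keeps the head spine. HEAD_RULES, COMPLEMENT_LABELS, and split_tree are hypothetical stand-ins for the actual rule sets of Chiang (2000), Collins (1997), and Magerman (1995).

```python
# Hypothetical sketch of head-rule-driven grammar induction: walk a bracketed
# tree, pick a head child per (simplified) head rules, mark the rest as
# complements (substitution sites) or adjuncts (insertion trees), and split.

HEAD_RULES = {"VP": ["VB", "VBD", "VBZ", "VP"], "NP": ["NN", "NNS", "NP"], "S": ["VP"]}
COMPLEMENT_LABELS = {"NP", "SBAR"}   # toy complement/adjunct distinction

def head_child(label, children):
    for cand in HEAD_RULES.get(label, []):
        for child in children:
            if child[0] == cand:       # child = (label, subtrees)
                return child
    return children[-1]                # default: rightmost child

def split_tree(tree, elementary_trees):
    """Recursively peel complements and adjuncts off around the head spine."""
    label, children = tree
    if not children or isinstance(children[0], str):
        return tree                    # preterminal: keep the lexical anchor
    head = head_child(label, children)
    spine_children = []
    for child in children:
        if child is head:
            spine_children.append(split_tree(child, elementary_trees))
        elif child[0] in COMPLEMENT_LABELS:
            elementary_trees.append(("initial", split_tree(child, elementary_trees)))
            spine_children.append((child[0], []))   # leave a substitution site
        else:
            elementary_trees.append(("insertion", split_tree(child, elementary_trees)))
    return (label, spine_children)

# usage on a toy bracketed tree
trees = []
spine = split_tree(("S", [("NP", [("NN", ["trading"])]),
                          ("VP", [("VBD", ["halted"])])]), trees)
```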
|
{ |
|
"text": "The grammar induction of \u00a7 2 allows us to construct an inventory of supertags to match a corpus. For NLG, we also need to predict the most likely supertag for any lexical item given the generation context. We approach this problem using neural networks. In particular, this work makes two contributions to improve stochastic tree modeling with neural networks. First, we represent supertags as vectors through embedding techniques that enable to model to generalize over complex, but related structures. Second, we address the hierarchical dependence between choices using a recurrent tree network that can capture long-distance influences as well as local ones. We now describe these representations in more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Representations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Different supertags for the same word can encode differences in the item's own combinatorial syntax, differences in argument structure, and differences in word order. Accordingly, words have many related supertags, with substantial overlaps in structure, and, presumably, corresponding similarities in their patterns of occurrence. A traditional machine learning approach to supertag prediction would treat individual supertags as atoms for classification; generalizing across supertags would require linking model parameters to handcrafted features or back-off categories. By contrast, neural techniques work by embedding such tokens into a vector space. This process learns an abstract representation of tokens that clusters similar items together and makes further predictions as a function of those items' learned features. The resulting ability to generalize across sparse data seems to be one of the most important reasons for the success of deep learning in NLP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Supertags", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The simplest way to embed supertags is to treat each structure as a distinct token that indexes a corresponding learned vector. This places no constraints on the learned similarity function, but it also ignores the hierarchical structure of the elementary trees themselves. Previous work on deep learning with graph structures suggests convolutional neural networks can exploit similarities in structure (Kalchbrenner et al., 2014; Niepert et al., 2016) . Thus, we developed analogous techniques to encode supertags based on their underlying tree structure. In particular, to embed a supertag, we embed each node, group the resulting vectors to form a tensor, and then summarize the tensor into a single vector using a series of convolutional neural networks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 431, |
|
"text": "(Kalchbrenner et al., 2014;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 453, |
|
"text": "Niepert et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Supertags", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Note that each elementary tree is a complex structure with nodes labeled by category and assigned a role that enables further tree operations. The root node's role represents the overall action associated with that elementary tree-either substitution or insertion. The remaining nodes either have the substitution point role or the spine role-they are along the spine from root to the lexical attachment point, and thus provide targets for further insertion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding Supertags", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We first embed each node independently, then combined the vectors to form a tensor of embeddings. Specifically, symbols representing the syntactic category and node roles are treated as distinct vocabulary tokens, mapped to integers, and used to retrieve a vector representation that is learned during training. The vectors are grouped into a tensor by placing the root node into the first cell of the first row and leftaligning the descendants in the subsequent rows. The two tensors are concatenated along the embedding dimension. This embed-and-group method is shown in on the left in Figure 1 . Using a series of convolutional neural networks which learn their weights during training, the tensor of embeddings can be reduced to a single vector. To reduce the tensor to a vector, the convolutions are designed with increasingly larger filter sizes. Additionally, the dimensions are reduced alternatingly to also facilitate the capture of features. The entire process is summarized in Eq. 1 with \u039b representing the supertags, G representing embedding matrices, and C representing the convolutional neural network layers. Specifically, G s is the syntactic category embedding matrix and G r is the node role embedding matrix. Each convolutional layer C is shown with its corresponding height and width as C i,j . The encoding first constructs the tensor, T \u039b , through the embed-and-group method. Then, the embedding matrix G \u039b is summarized from T \u039b using the series of convolutional layers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 588, |
|
"end": 596, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding Supertags", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "T \u039b = [G s (\u039b syntactic category ); G r (\u039b role )] G \u039b = C 4,5 (C 3,1 (C 1,3 (C 2,1 (C 1,2 (T \u039b )))))", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Embedding Supertags", |
|
"sec_num": "3.1" |
|
}, |
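A small NumPy sketch of the embed-and-group step in Eq. (1) follows; the learned convolutional stack C_{4,5}(...C_{1,2}(T_Lambda)) is replaced by a mean-pool placeholder here, and all names (embed_and_group, summarize, the toy vocabularies) are illustrative, not the paper's code.

```python
import numpy as np

# Sketch of the embed-and-group step from Eq. (1): each node's category and
# role symbols are looked up in learned embedding matrices, grouped into a
# (rows x cols x dim) tensor with the root in the first cell and descendants
# left-aligned below it, and the two tensors concatenated along the embedding
# dimension.  The convolutional reduction is replaced here by a mean-pool
# placeholder; a real model would learn those filters.

def embed_and_group(supertag, G_cat, G_role, cat_vocab, role_vocab, max_w=5):
    rows = [[supertag["root"]]] + supertag["levels"]   # root row, then descendants
    dim = G_cat.shape[1] + G_role.shape[1]
    T = np.zeros((len(rows), max_w, dim))
    for i, row in enumerate(rows):
        for j, (cat, role) in enumerate(row):          # left-aligned placement
            T[i, j] = np.concatenate([G_cat[cat_vocab[cat]], G_role[role_vocab[role]]])
    return T

def summarize(T):
    return T.mean(axis=(0, 1))   # stand-in for the convolutional stack

# toy usage with hypothetical vocabularies and random embeddings
cat_vocab, role_vocab = {"S": 0, "NP": 1, "VP": 2}, {"subst": 0, "spine": 1, "root": 2}
G_cat, G_role = np.random.randn(3, 8), np.random.randn(3, 8)
tag = {"root": ("S", "root"), "levels": [[("NP", "subst"), ("VP", "spine")]]}
vec = summarize(embed_and_group(tag, G_cat, G_role, cat_vocab, role_vocab))
```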
|
{ |
|
"text": "The final product, a vector per supertag, is aggregated with the other vectors and turned into an embedding matrix. This is visualized in on the right in Figure 1 . During training and test time, supertags are simply input as indices and their feature representations retrieved as an embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 162, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding Supertags", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our models predict supertags as a function of the target word and its context. Neural networks make it possible to generalize over such contexts by learning to represent them with a hidden state vector that aggregates and clusters information from the relevant history. Our approach is to do this using a recurrent tree network. While recurrent neural networks normally use the previous hidden state in the sequential order of the inputs, recurrent tree networks use the hidden state from the parent. Utilizing the parent's hidden state rather than the sequentially previous hidden state, the recurrent connection can travel down the branches of a tree. An example of a recurrent tree network is shown in Figure 2 . In our recurrent tree network, child nodes gain access to a parent's hidden state through an internal tree state. During a tree recurrence, the nodes in the dependency graph are enumerated in a top-down traversal. At each step in the recurrence, the resulting recurrent state is stored in the tree state at the step index. Descendents access the recurrent state using a topological index that is passed in as data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 705, |
|
"end": 713, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recurrent Tree Networks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The formulation is summarized in Equation 2. The input to each time step in the current tree is the data, x t , and a topological index, p t . The recurrent tree uses p t to retrieve the parent's hidden state, s p , from the tree state, S tree , and applies the recurrence function, g(). The resulting recurrent state is the hidden state for child node, s c . The recurrent state s c is stored in the tree state, S tree , at index t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Tree Networks", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s c = RT N (x t , p t ) = g(x t , S tree [p t ]) = g(x t , s p ) S tree [t] = s c", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Recurrent Tree Networks", |
|
"sec_num": "3.2" |
|
}, |
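A minimal NumPy sketch of the recurrence in Eq. (2), assuming a plain tanh cell for g() and a zero vector as the root's "parent" state; rtn_forward and its arguments are hypothetical names.

```python
import numpy as np

# Sketch of the recurrent tree network in Eq. (2): nodes are visited in a
# top-down (topological) order, each node retrieves its parent's hidden state
# from the tree state S_tree by index, and g() here is a plain tanh recurrence
# standing in for whatever recurrence the full model uses.

def rtn_forward(X, parent_idx, Wx, Ws, b):
    """X: (num_nodes, in_dim) inputs in topological order;
    parent_idx[t] is the index of node t's parent (-1 for the root)."""
    hidden = Ws.shape[0]
    S_tree = np.zeros((len(X) + 1, hidden))           # last row = zero state for the root
    for t, (x_t, p_t) in enumerate(zip(X, parent_idx)):
        s_p = S_tree[p_t]                             # parent's hidden state (or zeros)
        S_tree[t] = np.tanh(x_t @ Wx + s_p @ Ws + b)  # s_c = g(x_t, s_p)
    return S_tree[:len(X)]

# toy usage: a 3-node tree (root 0 with children 1 and 2)
in_dim, hid = 4, 6
X = np.random.randn(3, in_dim)
states = rtn_forward(X, parent_idx=[-1, 0, 0],
                     Wx=np.random.randn(in_dim, hid),
                     Ws=np.random.randn(hid, hid), b=np.zeros(hid))
```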
|
{ |
|
"text": "The use of topological indices allows for many recurrent tree networks to be run in parallel on a GPU for efficiency. GPU implementations must be formulated homogeneously so that the same operations are applied across the entire data structure. Normally, tree operations involve conditional access to parent nodes, but using topological indices and a tree state accesses the parent in a homogeneous way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Tree Networks", |
|
"sec_num": "3.2" |
|
}, |
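Purely illustrative sketch of why topological indices make the recurrence GPU-friendly: fetching every node's parent state becomes the same gather for every tree in a batch. NumPy fancy indexing stands in for the equivalent batched GPU operation; the shapes and the toy topology are made up.

```python
import numpy as np

# With a per-step topological index, "fetch my parent's state" becomes the
# same gather operation for every node in every tree in a batch, so one
# vectorized step can advance many trees at once.

batch, max_nodes, hidden = 4, 7, 16
S_tree = np.zeros((batch, max_nodes + 1, hidden))    # +1 zero row per tree for the roots
parent_idx = np.full((batch, max_nodes), max_nodes)  # default: point at the zero row
parent_idx[:, 1:] = 0                                # toy topology: every node's parent is node 0

step = 3                                             # processing node 3 of every tree
parent_states = S_tree[np.arange(batch), parent_idx[:, step]]   # one homogeneous gather
assert parent_states.shape == (batch, hidden)
```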
|
{ |
|
"text": "To analyze the representations we describe in \u00a7 2 and \u00a7 3, we developed two alternative architectures for predicting supertags in context. The first is a feed-forward neural network designed to solve a closely analogous task to the supertagging step of 's original FERGUS model. We call it Fergus-N (for Neuralized). The second uses a recurrent tree network to model the generation context. Because it has this richer context representation, it takes advantage of a slightly different characterization of the supertag prediction problem to streamline the problem solving involved in using the model. We call this Fergus-R (for Recurrent). For both stochastic tree models, a recurrent neural network language model is used to complete the linearization task. The same language model is used to eliminate the confound of language model performance and measure performance differences in the stochastic tree modeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Fergus-N is a stochastic tree model which uses local parent-child information as inputs to a feed-forward network. Each parent-child pair is treated as independent of all others. The probability of the parent's supertag is predicted using an embedding of the pair's lexical material and an embedding of the child's supertag. (Our experiments compare the different embedding options surveyed in \u00a7 3.) Training maximizes the likelihood of the training data according to the model. Formally, our objective is to minimize the negative log probability of the observed parent supertags for each parent-child pair, as formally defined in Eq. 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: Fergus-N", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "min \u03b8 \u2212 [ p p\u2192c log[P \u03b8 (tag p |lex p , lex c , tag c )] + c log[P \u03b8 (tag c |lex p , lex c )]]", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Model 1: Fergus-N", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here tag p is the parent supertag, tag c is the child supertag, lex p is the parent's lexical material, and lex c is the child's lexical material. Note that the probability of supertags for the leaves of the tree are computed with respect to their parent's lexical material. The model is implemented as a feed-forward neural network. Equation 4 details the model formulation. The lexical material, lex p and lex c , are embedded using the word embedding matrix, G w , concatenated, and mapped to a new vector, \u03c9 lex , with a fully connected layer, F C 1 . The child supertag, tag c , is embedded using the target supertag embedding G s and concatenated with the lexical vector, \u03c9 lex , forming an intermediate vector representation of the node, \u03c9 node . The node vector is repeated for each of the parent's possible supertags, tagset p , and then concatenated with their embeddings to construct the set of treelet vectors, \u2126 treelet . The vector states for the leaf nodes are similarly constructed, but instead combine the lexical vector, \u03c9 lex with the embeddings of the child's possible supertags, tagset c . The final operation induces a probability distribution over the treelet and leaf vectors using a score computed by the vectorized function, \u03a8 predict , as the scalar in a softmax distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: Fergus-N", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c9 lex = F C 1 ([G w (lex p ); G w (lex c )])", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Model 1: Fergus-N", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03c9 node = concat([G s (tag c ); \u03c9 lex ]) \u2126 treelet = concat([repeat(\u03c9 node ), G s (tagset p )]) \u2126 leaf = concat([repeat(\u03c9 lex ), G s (tagset c )]) P \u03b8 (tag p,i |lex p , lex c , tag c ) = exp(\u03a8 predict (\u03c9 treelet i ))) j\u2208|tagsetp| exp(\u03a8 predict (\u03c9 treelet j ))) P \u03b8 (tag c,i |lex p , lex c ) = exp(\u03a8 predict (\u03c9 leaf i ))) j\u2208|tagsetc| exp(\u03a8 predict (\u03c9 leaf j )))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: Fergus-N", |
|
"sec_num": "4.1" |
|
}, |
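A compact sketch of the Fergus-N treelet-scoring path in Eq. (4), with a tanh fully connected layer, random placeholder weights, and a dot-product score standing in for the vectorized predictor; fergus_n_scores and its dimensions are illustrative only.

```python
import numpy as np

# Sketch of the Fergus-N scoring path in Eq. (4): embed the parent/child
# lexical items, embed the child supertag, tile the resulting node vector
# over the parent's candidate supertags, and softmax a per-candidate score.
# Weights are random here; in the model they are learned.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fergus_n_scores(lex_p, lex_c, tag_c, tagset_p, G_w, G_s, W_fc, w_predict):
    lex = np.tanh(np.concatenate([G_w[lex_p], G_w[lex_c]]) @ W_fc)   # FC_1
    node = np.concatenate([G_s[tag_c], lex])                         # omega_node
    treelets = np.stack([np.concatenate([node, G_s[t]]) for t in tagset_p])
    return softmax(treelets @ w_predict)                             # P(tag_p | ...)

# toy usage with hypothetical vocabulary sizes and dimensions
d_w, d_s = 8, 8
G_w, G_s = np.random.randn(50, d_w), np.random.randn(30, d_s)
W_fc = np.random.randn(2 * d_w, 8)
w_predict = np.random.randn(d_s + 8 + d_s)
probs = fergus_n_scores(lex_p=3, lex_c=7, tag_c=2, tagset_p=[0, 4, 9],
                        G_w=G_w, G_s=G_s, W_fc=W_fc, w_predict=w_predict)
```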
|
{ |
|
"text": "At generation time, we are given a full dependency tree. A decoding step is necessary to compute a high probability assignment for all supertags simultaneously. In this process, tags for children must be chosen consistently with one another, and the resulting probabilistic information must be propagated upward to rerank tags elsewhere in the tree. We solve this problem with an A* algorithm. At each step, the algorithm uses a priority queue to select subtrees based on their inside-outside scores. The inside score is computed as the sum of the log probabilities of the supertags in the subtree. The outside score is the sum of the best supertag for nodes outside the subtree, similar to Lewis and Steedman (2014) . Once selected, the subtree is attached to the possible supertags of its parent that are both locally consistent and consistent among its already attached children. These resulting subtrees are placed into the priority queue and the algorithm iterates to progress the search. The search succeeds when the first complete tree has been found. 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 691, |
|
"end": 716, |
|
"text": "Lewis and Steedman (2014)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: Fergus-N", |
|
"sec_num": "4.1" |
|
}, |
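A simplified sketch of the decoding idea follows: an A*-style search over partial supertag assignments whose priority is the inside score plus an admissible outside estimate (the best remaining log probability per unassigned node). The structural-consistency checks that the real decoder performs between parent and child supertags are omitted, and astar_assign is a hypothetical name.

```python
import heapq, math

# A*-style sketch: grow partial supertag assignments node by node, scoring
# each by its inside score (sum of chosen log-probabilities) plus an
# admissible outside estimate (best remaining log-probability per node).

def astar_assign(log_probs):
    """log_probs[i] is a dict {supertag: log_prob} for node i (in a fixed order)."""
    n = len(log_probs)
    best_rest = [max(p.values()) for p in log_probs]        # per-node outside estimate
    outside = [sum(best_rest[i:]) for i in range(n + 1)]    # suffix sums; outside[n] == 0
    frontier = [(-outside[0], 0, ())]                       # (-priority, next node, tags so far)
    while frontier:
        neg_score, i, tags = heapq.heappop(frontier)
        if i == n:
            return list(tags), -neg_score
        inside = -neg_score - outside[i]
        for tag, lp in log_probs[i].items():
            priority = inside + lp + outside[i + 1]
            heapq.heappush(frontier, (-priority, i + 1, tags + (tag,)))
    return None, float("-inf")

# toy usage with two nodes and two candidate supertags each
tags, score = astar_assign([{"a1": math.log(0.7), "a2": math.log(0.3)},
                            {"b1": math.log(0.6), "b2": math.log(0.4)}])
```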
|
{ |
|
"text": "Fergus-R is a stochastic tree model implemented in a top-down recurrent tree network and augmented with soft attention. For each node in the input dependency tree, soft attention-a method which learns a vectorized function to weight a group of vectors and sum into a single vector-is used to summarize its children. The soft attention vector and the node's embedded lexical material serve as the input to the recurrent tree. The output of the recurrent tree represents the vectorized state of each node and is combined with each node's possible supertags to form prediction states. Importantly, removing the conditional dependence on descendents' supertags results in the simplified objective function in Eq. 5 where lex C is the children's lexical information, lex p is the parent's lexical information, tag p is the supertag for the parent node, and RT N is the recurrent tree network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 2: Fergus-R", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "min \u03b8 \u2212 [ (p,C) P \u03b8 (tag p |RT N, lex p , lex C )]", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Model 2: Fergus-R", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The Fergus-R model uses only lexical information as input to calculate the probability distribution over each node's supertags. The specific formulation is detailed in Eq. 6. First, a parent node's children, lex C , are embedded using the word embedding matrix, G w , and then summarized with an attention function, \u03a8 attn , to form the child context vector, \u03c9 C . The child context is concatenated with the embedded lexical information of the parent node, lex p , and mapped to a new vector space with a fully connected layer, F C 1 , to form the lexical context vector, \u03c9 lex . The context vector and a topological vector for indexing the internal tree state (see \u00a7 3.2) are passed to the recurrent tree network, RT N , to compute the full state vector for the parent node, \u03c9 node . Similar to Fergus-N, the state vector is repeated and concatenated with the vectors of the parent node's possible supertags, tagset p , and mapped to a new vector space with a fully connected layer, F C 2 . A vector in this vector space is labeled \u03c9 elementary because the combination of supertag and lexical item constitutes an elementary tree. The last step is to compute the probability of each supertag using the vectorized function, \u03a8 predict .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 2: Fergus-R", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c9 C = \u03a8 attn (G w (lex C ))", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Model 2: Fergus-R", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u03c9 lex = F C 1 (concat(\u03c9 C , G w (lex p ))) \u03c9 node = RT N (\u03c9 lex , topology) \u2126 elementary = F C 2 (concat(repeat(\u03c9 node ), G s (tagset p ))) P \u03b8 (tag p,i | RT N, lex p , lex C ) = exp(\u03a8 predict (\u03c9 elementary i ))) j\u2208|\u2126| exp(\u03a8 predict (\u03c9 elementary j ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 2: Fergus-R", |
|
"sec_num": "4.2" |
|
}, |
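A sketch of the Fergus-R path in Eq. (6): soft attention over the children's word embeddings, combination with the parent's embedding, a single stand-in tanh step in place of the full recurrent tree network, and a softmax over the parent's candidate supertags. Weights are random placeholders and all names are illustrative.

```python
import numpy as np

# Sketch of the Fergus-R path in Eq. (6): soft attention summarizes the
# children's word embeddings into one context vector, which is combined with
# the parent's embedding, passed through a stand-in recurrence given the
# parent's tree state, and scored against the parent's candidate supertags.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def soft_attention(child_vecs, w_attn):
    weights = softmax(child_vecs @ w_attn)          # Psi_attn: learned scoring vector
    return weights @ child_vecs                     # weighted sum -> omega_C

def fergus_r_scores(lex_p, lex_C, s_parent, tagset_p, G_w, G_s, params):
    w_attn, W1, Wx, Ws, W2, w_pred = params
    omega_C = soft_attention(G_w[lex_C], w_attn)
    omega_lex = np.tanh(np.concatenate([omega_C, G_w[lex_p]]) @ W1)   # FC_1
    omega_node = np.tanh(omega_lex @ Wx + s_parent @ Ws)              # stand-in RTN step
    elem = np.tanh(np.stack([np.concatenate([omega_node, G_s[t]]) for t in tagset_p]) @ W2)
    return softmax(elem @ w_pred)                                     # P(tag_p | ...)

# toy usage with hypothetical dimensions
d = 8
G_w, G_s = np.random.randn(40, d), np.random.randn(25, d)
params = (np.random.randn(d), np.random.randn(2 * d, d), np.random.randn(d, d),
          np.random.randn(d, d), np.random.randn(2 * d, d), np.random.randn(d))
probs = fergus_r_scores(lex_p=1, lex_C=[2, 3, 5], s_parent=np.zeros(d),
                        tagset_p=[0, 7, 11], G_w=G_w, G_s=G_s, params=params)
```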
|
{ |
|
"text": "Although the same A* algorithm from Fergus-N is used, the decoding for Fergus-R is far simpler. As supertags are incrementally selected in the algorithm, the inside score of the subsequent subtree is computed. Where Fergus-N had to compute a incremental dynamic program to evaluate the inside score, Fergus-R decomposes into a sum of conditionally independent distributions. The resulting setup is a chart parsing problem where the inside score of combining two consistent (non-conflicting) edges is just the sum of their inside scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 2: Fergus-R", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The final step to linearizing the output of Fergus-N and Fergus-R-a dependency tree annotated with supertags and partial attachment information-is a search over possible orderings with a language model. There are many possibilities, primarily due to ambiguities in insertion order. Following , a language model is used to select between the alternate orderings. The language model used is a two-layer LSTM trained using the Keras library on the surface form of the Penn Treebank. The surface form was minimally cleaned 3 to simulate realistic scenarios.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linearization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The difficulty of selecting orderings with a language model is that the possible linearizations can grow exponentially. In particular, our implementations result in a large amount of insertion trees. 4 We approach this problem using a prefix tree which stores the possible linearizations as back-pointers to their last step and the word for the current step. The prefix tree is greedily searched with 32 beams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 201, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linearization", |
|
"sec_num": "4.3" |
|
}, |
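An illustrative sketch of the linearization search follows: candidate orderings stored in a prefix tree and explored with a fixed-width beam. The toy score_next function stands in for the two-layer LSTM language model, and the candidate orderings are passed in explicitly rather than produced from supertag assignments.

```python
import math

# Sketch: candidate orderings live in a prefix tree, and a beam search
# (width 32 in the paper, smaller here) keeps the highest-scoring prefixes
# under a language model.  score_next() is a placeholder for the LSTM LM.

def build_prefix_tree(orderings):
    root = {}
    for words in orderings:
        node = root
        for w in words:
            node = node.setdefault(w, {})
        node["<end>"] = {}
    return root

def score_next(prefix, word):
    # placeholder LM score; a real system would query the trained LSTM here
    return -0.1 * len(word) - 0.01 * len(prefix)

def beam_linearize(prefix_tree, beam_width=4):
    beams = [(0.0, (), prefix_tree)]                 # (log-score, prefix, trie node)
    finished = []
    while beams:
        candidates = []
        for score, prefix, node in beams:
            for word, child in node.items():
                if word == "<end>":
                    finished.append((score, prefix))
                else:
                    candidates.append((score + score_next(prefix, word), prefix + (word,), child))
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return max(finished)[1] if finished else ()

# toy usage: two competing adjunct orders
tree = build_prefix_tree([("trading", "was", "briefly", "halted"),
                          ("trading", "briefly", "was", "halted")])
best = beam_linearize(tree)
```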
|
{ |
|
"text": "Using the representations of \u00a7 3, the models of \u00a7 4 can be instantiated in six different ways. We can use a feed-forward Fergus-N architecture or a recurrent Fergus-R architecture. Each architecture can embed supertags minimally, by learning a scalar corresponding to each supertag; atomically, by learning an embedding vector corresponding to each supertag; or structurally, by using convolutional coding over each supertag's tree structure to form a vector. In each case, the vector (a size-one vector in the minimal condition) is concatenated as described in \u00a7 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We trained six such models using a common experimental platform. We started from the Wall Street Journal sections of the Penn Treebank, which have been previously used for evaluating statistical tree grammars (Chiang, 2000) . 5 Our data pipeline breaks each sentence in the treebank into component elementary trees and then represents the sentence in terms of a derivation tree, specifying the tree-rewriting operations required to construct the actual treebank surface tree from the basic supertags. Removing supertags from the derivation tree leads to the unlabeled dependency trees our models assume as input.", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 223, |
|
"text": "(Chiang, 2000)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 227, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "From this input, we extracted the atomic supertag prediction instances and trained a network defined by each of the architectures of \u00a7 4 and each of the supertag representations of \u00a7 3. As always, we used Sections 02-21 for training, Section 22 for development, and Section 23 for testing. A complete description of network organization and training parameters is given in the appendix. The code and complete experimental setup are publicly available. 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We evaluate the performance of the models in several ways. First, we look at the accuracy of the supertag predictions directly output by each model. Second, we look at the accuracy of the final supertags obtained by decoding the model predictions to the best-ranked consistent global assignment. These metrics directly assess the ability of the models to successfully learn the target distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Next, we evaluate the models on the full NLG task, including linearization. The linearization task allows more freedom in supertag classifications because supertags may differ in minor ways, such as the projections present along the spine, which will not affect generation output for a particular target input. The freedom means models may not be penalized based on decisions that don't matter-thus, at the same time, it also mutes the distinctions between classification decisions. We report a modified edit distance measure, Generation String Accuracy, following . Since linearization uses a beam search, we report statistics both for the top-ranked beam and for the empirically based beam among the candidates computed during search. The difference gives an indication of the effect of the language model in guiding the decisions that remain after supertagging. Table 1 : For each supertag and embedding pair, the mean accuracy of supertag classification directly output by the model and in the consistent global assignment output by A* decoding. Also shown is the median running time-which includes model computation and A* search. The structural embeddings are computed with convolutional coding, the atomic embeddings as rows in a matrix, and the minimal embeddings as scalars in a vector.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 865, |
|
"end": 872, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance Metrics", |
|
"sec_num": "5.2" |
|
}, |
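For concreteness, one generic way to turn a token-level edit distance into a string accuracy score is sketched below; this is an assumption-laden approximation, not necessarily the exact Generation String Accuracy formula of the cited evaluation metric.

```python
# Generic token-level edit distance turned into a string accuracy score;
# an approximation of the idea, not the exact published metric.

def edit_distance(ref, hyp):
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))  # substitution
    return d[-1][-1]

def string_accuracy(ref_tokens, hyp_tokens):
    return 1.0 - edit_distance(ref_tokens, hyp_tokens) / max(len(ref_tokens), 1)

acc = string_accuracy("trading was halted briefly".split(),
                      "trading was briefly halted".split())
```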
|
{ |
|
"text": "Finally, we report statistics about the run time of different generation steps. This allows us to assess the complexity of the different decoding steps involved in generation, to reveal any tradeoffs among the models between speed and accuracy. Table 1 shows the results of supertag prediction. All differences between model are significant using a Paired-Sample t-test (p < 10 \u22125 ) The structural and atomic embedding methods consistently perform better, suggesting that the clustering capabilities of neural methods is a crucial part of their effectiveness. For post-decoding performance, Fergus-N utilizes the structural embeddings more than the atomic embeddings. This merits further investigation: it might be because Fergus-N predicts one supertag as a function of another, and so the compositional relationships among the two trees are more importantor because Fergus-R's contextualized decisions depend on similarities among supertags (involving argument structure or information structure) that are difficult for the convolutional coding to represent or learn. Additionally, the minimal embeddings suggests that Fergus-N's architecture might provide enough structure to reduce the difficulty of a large number of cases.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 252, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accuracy", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The overall best results come from Fergus-R, suggesting that it is worthwhile to take additional context into account in this task. At the same time, the median time taken to classify and decode a sentence with Fergus-R is just one sixth that of Fergus-N. We suspect that there is a general lesson in this speedup: because neural models can be more flexible about the information they take into account in decisions, it's especially advantageous in designing neural architectures to break a problem down into decisions that can be combined easily.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Finally, decoding the network generally leads to lower accuracy. It seems that our models are not doing a good job of using the predictions they make to triangulate to accurate and consistent supertags. This suggests that the models could be improved by taking more or better information into account in decoding. This is more pronounced in the atomic embeddings than the structural embeddings, which suggests that the lack of structure in the vector representation allows for the model to learn clustering relationships that don't correlate with the structural requirements. Figure 2 shows the NLG evaluation results for the different models. All differences in model are significant using an Independent t-test (p < 10 \u22125 ). 7 For both models, the differences between structural embeddings (using convolutional coding) and atomic embeddings (using standard vector embedding techniques) were not significant, while the differences between the two embeddings and minimal embeddings were significant (p < 10 \u22125 ). The performance confirms our expectation that differences in supertag accuracy after decoding correlate with NLG accuracy overall, but that differences in NLG performance are attenuated. We note by comparison that Bangalore and Rambow report an accuracy of Table 2 : Shown above as accuracy is the percentage of tokens in the linearized strings that are in correct positions according to an edit distance measure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 576, |
|
"end": 584, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1270, |
|
"end": 1277, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "74.9% in their best evaluation of FERGUS-on a data set of just 100 sentences with an average length of 16.7. Our evaluation, on 2400 sentences with an average length of 22.1, is more strenuous.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "There are several lines of related work which explore stochastic tree models from the standpoint of parsing and understanding. While using the same methods, NLG has different goals and we think the perspective is instructive. Where parsing infers the most probable underlying structure, generation infers the most likely way of expressing a semantic structure. This divergence of goals leads to different concerns, alternatives, and emphasis. The works most similar to ours explicitly model tree structures, but focus on resolving the uncertainty involved with the latent structure of an observed sentence. For example, the top down tree structure of Zhang et al. (2016) expresses the generation of a dependency structure as the decisions of a set of long short-term memory networks. For each decision, the possible options are different tree structures which can produce the target linear form. In contrast, the generation problem is concerned with different linear forms that can result from the same tree structure. In more extensive tasks, the generation problem can include simulated interpretation to inform decisions; using the ease of structural inference from linear form quantifies the understandability of a sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 651, |
|
"end": 670, |
|
"text": "Zhang et al. (2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Although the methodology presented in this work is closely related to several recent neural networks models for long-distance relationships, it differs distinctly in its treatment of state and search. Specifically, forward-planning in a generation task produces a growing horizon of syntactic choices while shrinking the horizon of semantic goals. At each step, syntactic operations grow the number of available syntactic choices while limiting the number of semantic goals left to express. In contrast, parsing and understanding begin with the surface form and construct the organized semantic content, either for a downstream decision or just for the structure itself. The most notable works in this line of research are the recurrent neural network grammars (Dyer et al., 2016) , a shift-reduce parser and interpreter (Bowman et al., 2016) , and a dynamic network for composing other neural network modules (Andreas et al., 2016) . Interestingly, there is a common theme of using indexable and dynamic data structures in neural architecture to make long-distance decisions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 761, |
|
"end": 780, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 821, |
|
"end": 842, |
|
"text": "(Bowman et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 910, |
|
"end": 932, |
|
"text": "(Andreas et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This paper has explored issues in deep learning of probabilistic tree grammars from the standpoint of natural language generation. For NLG, we need models that predict high-probability structures to encode deep linguistic relationships-rather than to infer deep relationships from surface cues. This problem brings new challenges for learning, as it requires us to represent new kinds of linguistic elements and new kinds of structural context in order to capture the regularities involved. Despite these challenges, however, the problem continues to have the mix of data sparsity, rich primitives and combinatorial interactions that has made deep learning attractive for use in natural language parsing and understanding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Of the range of models we surveyed here, the best combines a top down tree recurrence to cluster contexts with appropriate embedding methods to cluster syntactic and lexical elements. Our evaluations suggest that the model is more accurate and faster than alternative techniques. However, it would still be good to analyze the performance of the model more deeply. Can we get better results in the key decoding step? How do human readers find the output of the system?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Looking forward, we see this research a step towards learned models that capture more of the NLG task. We plan to explore similar techniques in planning surface text from more properly semantic inputs or even from abstract communicative goals. Further, we plan to integrate learned methods with knowledge-based techniques to offer designers more control over system output in specific applications. Developing methods appropriate to such settings will require researchers to revisit the core problems of generalizing across linguistic structures and contexts-and, we hope, to build on and extend the provisional solutions we have explored here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One particular downside of deterministically constructing the grammar this way is that it can produce an excess of superfluous elementary trees. We minimize this by collapsing repeated projections in the treebank. Other work has provided Bayesian models for reducing grammar complexity by forcing it to follow Dirichlet or Pitman-Yor processes(Cohn et al., 2010)-an interesting direction for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although, the data has some noise so that sometimes there is no complete tree that can possibly be formed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "With respect to the surface form, the only cleaning operations were to merge proper noun phrases into single tokens. Punctuation and other common cleaning operations were not performed.4 Many of the validation examples had more than 2 40 possible linearizations. 5 A possible additional data source, the data from the 2011 Shared Task on Surface Realization, was not available. 6 https://github.com/braingineer/neural_tree_grammar", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An Indepedent t-test was used instead of a Paired-Sample t-test because of intermittent failures during linearization that resulted in slightly different numbers of observations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported in part by NSF IIS-1526723 and by a sabbatical leave from Rutgers to Stone.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All of our models were implemented in the Keras (Chollet, 2015) and Theano (Theano Development Team, 2016) libraries. The specific parameters that were used are shown in Table 3 . The parameters were selected by measured performance on the development portion of the data set. In the accompanying code repository, the full experiment parameters-including programmatic parameters controlling the experimental design-are specified in configuration files.In our experiments, the corpus was preprocessed using Stanford NLP tools (De Marneffe et al., 2006) to fix common issues and remove extraneous information. The resulting parse trees were then analyzed to mark the head words, the dependents, and the adjuncts. The marked-up trees were split at adjunction and substitution positions to form the grammar. Our models use an output distribution that's restricted to the set of supertags that have occurred with the lexical item, which requires indices to the supertag embedding matrix to be passed into the computation with the rest of the data. We implement the affinity matrix between the supertag embeddings and lexical state vectors, by concatenating the vectors, mapping them to a new space using a fully connected layer, and computing a score with a vectorized function. (The vectorized function operation is the same mechanism which calculates the probability distribution used in soft attention.) Table 3 : The parameters for the Fergus-R, Fergus-N, and language models. The exact specifications in configuration files can be found in the code repository that accompanies this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 63, |
|
"text": "(Chollet, 2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 68, |
|
"end": 106, |
|
"text": "Theano (Theano Development Team, 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 529, |
|
"end": 551, |
|
"text": "Marneffe et al., 2006)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 177, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1402, |
|
"end": 1409, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Learning to compose neural networks for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Andreas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1545--1554", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In Proceedings of the 2016 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, pages 1545-1554. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Exploiting a probabilistic hierarchical model for generation", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 18th conference on Computational linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "42--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivas Bangalore and Owen Rambow. 2000. Exploiting a probabilistic hierarchical model for generation. In Proceedings of the 18th conference on Computational linguistics-Volume 1, pages 42-48. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Evaluation metrics for generation", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Whittaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the first international conference on Natural language generation", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivas Bangalore, Owen Rambow, and Steve Whittaker. 2000. Evaluation metrics for generation. In Proceedings of the first international conference on Natural language generation-Volume 14, pages 1-8. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Impact of quality and quantity of corpora on stochastic generation", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Langauge Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivas Bangalore, John Chen, and Owen Rambow. 2001. Impact of quality and quantity of corpora on stochastic generation. In Proceedings of the 2001 Conference on Empirical Methods in Natural Langauge Processing, Pittsburgh, PA.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A fast unified model for parsing and sentence understanding", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Gauthier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhinav", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghav", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"Christopher" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1466--1477", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Samuel Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, D. Christopher Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1466-1477. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Statistical parsing with an automatically-extracted tree adjoining grammar", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "456--463", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2000. Statistical parsing with an automatically-extracted tree adjoining grammar. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 456-463. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Inducing compact but accurate tree-substitution grammars", |
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "548--556", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Cohn, Sharon Goldwater, and Phil Blunsom. 2009. Inducing compact but accurate tree-substitution gram- mars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 548-556. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Inducing tree-substitution grammars", |
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "3053--3096", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Cohn, Phil Blunsom, and Sharon Goldwater. 2010. Inducing tree-substitution grammars. The Journal of Machine Learning Research, 11:3053-3096.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Three generative, lexicalised models for statistical parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th An- nual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics, pages 16-23. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Generating typed dependency parses from phrase structure parses", |
|
"authors": [ |
|
{ |

"first": "Marie-Catherine", |

"middle": [], |

"last": "De Marneffe", |

"suffix": "" |

}, |

{ |

"first": "Bill", |

"middle": [], |

"last": "MacCartney", |

"suffix": "" |

}, |

{ |

"first": "Christopher", |

"middle": [ |

"D." |

], |

"last": "Manning", |

"suffix": "" |

} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of LREC", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "449--454", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine De Marneffe, Bill MacCartney, Christopher D Manning, et al. 2006. Generating typed depen- dency parses from phrase structure parses. In Proceedings of LREC, volume 6, pages 449-454.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Recurrent neural network grammars", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Noah", |

"middle": [ |

"A." |

], |

"last": "Smith", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "199--209", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and A. Noah Smith. 2016. Recurrent neural network gram- mars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 199-209. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Tree-adjoining grammars and lexicalized grammars", |
|
"authors": [ |
|
{ |

"first": "Aravind", |

"middle": [ |

"K." |

], |

"last": "Joshi", |

"suffix": "" |

}, |

{ |

"first": "Yves", |

"middle": [], |

"last": "Schabes", |

"suffix": "" |

} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aravind K Joshi and Yves Schabes. 1991. Tree-adjoining grammars and lexicalized grammars.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A convolutional neural network for modelling sentences", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "655--665", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 655-665. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Generation that exploits corpus-based statistical knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Langkilde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "704--710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Pro- ceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1, pages 704-710. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Improved CCG parsing with semi-supervised supertagging", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "327--338", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis and Mark Steedman. 2014. Improved CCG parsing with semi-supervised supertagging. Transactions of the Association for Computational Linguistics, 2:327-338.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Statistical decision-tree models for parsing", |
|
"authors": [ |
|
{ |

"first": "David", |

"middle": [ |

"M." |

], |

"last": "Magerman", |

"suffix": "" |

} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 276-283. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning convolutional neural networks for graphs", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Niepert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Ahmed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Kutzkov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1605.05273" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. 2016. Learning convolutional neural networks for graphs. arXiv preprint arXiv:1605.05273.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Empiricial Methods in Natural Language Processing", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word represen- tation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12:1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Tree Insertion Grammar : A Cubic-Time , Parsable Formalism that Lexicalizes Context-Free Grammar without Changing the Trees Produced", |
|
"authors": [ |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Waters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Computational Linguistics", |
|
"volume": "21", |
|
"issue": "4", |
|
"pages": "479--513", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yves Schabes and Richard C. Waters. 1995. Tree Insertion Grammar : A Cubic-Time , Parsable Formalism that Lexicalizes Context-Free Grammar without Changing the Trees Produced. Computational Linguistics, 21(4):479-513.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints", |
|
"authors": [], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expres- sions. arXiv e-prints, abs/1605.02688, May.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Top-down tree long short-term memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "310--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang, Liang Lu, and Mirella Lapata. 2016. Top-down tree long short-term memory networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 310-320, San Diego, California, June. Association for Compu- tational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"text": "A recurrent tree network. (A) The dependency structure as a tree. (B) The dependency structure as a sequence.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
} |
|
} |
|
} |
|
} |