|
{ |
|
"paper_id": "Q17-1012", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:12:36.291746Z" |
|
}, |
|
"title": "Head-Lexicalized Bidirectional Tree LSTMs", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyang", |
|
"middle": [], |
|
"last": "Teng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Singapore University of Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Singapore University of Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Sequential LSTMs have been extended to model tree structures, giving competitive results for a number of tasks. Existing methods model constituent trees by bottom-up combinations of constituent nodes, making direct use of input word information only for leaf nodes. This is different from sequential LSTMs, which contain references to input words for each node. In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node. In addition, enabled by head lexicalization, we build a tree LSTM in the top-down direction, which corresponds to bidirectional sequential LSTMs in structure. Experiments show that both extensions give better representations of tree structures. Our final model gives the best results on the Stanford Sentiment Treebank and highly competitive results on the TREC question type classification task.", |
|
"pdf_parse": { |
|
"paper_id": "Q17-1012", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Sequential LSTMs have been extended to model tree structures, giving competitive results for a number of tasks. Existing methods model constituent trees by bottom-up combinations of constituent nodes, making direct use of input word information only for leaf nodes. This is different from sequential LSTMs, which contain references to input words for each node. In this paper, we propose a method for automatic head-lexicalization for tree-structure LSTMs, propagating head words from leaf nodes to every constituent node. In addition, enabled by head lexicalization, we build a tree LSTM in the top-down direction, which corresponds to bidirectional sequential LSTMs in structure. Experiments show that both extensions give better representations of tree structures. Our final model gives the best results on the Stanford Sentiment Treebank and highly competitive results on the TREC question type classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Both sequence structured and tree structured neural models have been applied to NLP problems. Seminal work uses convolutional neural networks (Collobert and Weston, 2008) , recurrent neural networks (Elman, 1990; Mikolov et al., 2010) and recursive neural networks (Socher et al., 2011) for sequence and tree modeling. Long short-term memory (LSTM) networks have significantly improved accuracies in a variety of sequence tasks Bahdanau et al., 2015) compared to vanilla recurrent neural networks. Addressing diminishing gradients effectively, they have been extended to tree structures, achieving promising results for tasks such as syntactic language modeling , sentiment analysis (Li et al., 2015; Le and Zuidema, 2015; Tai et al., 2015; Teng et al., 2016) and relation extraction (Miwa and Bansal, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 170, |
|
"text": "(Collobert and Weston, 2008)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 212, |
|
"text": "(Elman, 1990;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 234, |
|
"text": "Mikolov et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 286, |
|
"text": "(Socher et al., 2011)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 450, |
|
"text": "Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 683, |
|
"end": 700, |
|
"text": "(Li et al., 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 701, |
|
"end": 722, |
|
"text": "Le and Zuidema, 2015;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 740, |
|
"text": "Tai et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 759, |
|
"text": "Teng et al., 2016)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 784, |
|
"end": 807, |
|
"text": "(Miwa and Bansal, 2016)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Depending on the node type, typical tree structures in NLP can be categorized to constituent trees and dependency trees. A salient difference between the two types of tree structures is in the node. While dependency tree nodes are input words themselves, constituent tree nodes represent syntactic constituents. Only leaf nodes in constituent trees correspond to words. Though LSTM structures have been developed for both types of trees above, we investigate constituent trees in this paper. There are three existing methods for constituent tree LSTMs Tai et al., 2015; Le and Zuidema, 2015) , which make essentially the same extension from sequence structured LSTMs. We take the method of as our baseline. Figure 1 shows the sequence structured LSTM of Hochreiter and Schmidhuber (1997) and the treestructured LSTM of , illustrating the input (x), cell (c) and hidden (h) nodes at a certain time step t. The most important difference between Figure 1(a) and Figure 1 (b) is the branching factor. While a cell in the sequence structure LSTM depends on the single previous hidden node, a cell in the tree-structured LSTM depends on a left hidden node and a right hidden node. Such tree-structured extensions of the sequence structured LSTM assume that the constituent tree is binarized, building hidden nodes from the input words in the bottom-up direction. The leaf node structure is shown in Figure 1 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 552, |
|
"end": 569, |
|
"text": "Tai et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 570, |
|
"end": 591, |
|
"text": "Le and Zuidema, 2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 754, |
|
"end": 787, |
|
"text": "Hochreiter and Schmidhuber (1997)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 707, |
|
"end": 715, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 959, |
|
"end": 967, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1393, |
|
"end": 1401, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A second salient difference between the two types of LSTMs is the modeling of input words. While each cell in the sequence structure LSTM directly depends on its corresponding input word ( Figure 1(a) ), only leaf cells in the tree structure LSTM directly depend on corresponding input words ( Figure 1(c) ). This corresponds well to the constituent tree structure, where there is no direct association between non-leaf constituent nodes and input words. However, it leaves the tree structure a degraded version of a perfect binary-branching variation of the sequence-structure LSTM, with one important source of information (i.e. words) missing in forming a cell (Figure 1(b) ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 201, |
|
"text": "Figure 1(a)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 307, |
|
"text": "Figure 1(c)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 678, |
|
"text": "(Figure 1(b)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(c).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We fill this gap by proposing an extension to the tree LSTM model, injecting lexical information into every node in the tree. Our method takes inspiration from work on head-lexicalization, which shows that each node in a constituent tree structure is governed by a head word. As shown in Figure 2 , the head word for the verb phrase \"visited Mary\" is \"visited\", and the head word of the adverb phrase \"this afternoon\" is \"afternoon\". Research has shown that head word information can significantly improve the performance of syntactic parsing (Collins, 2003; Clark and Curran, 2004) . Correspondingly, we use the head lexical information of each constituent word as the input node x for calculating the corresponding cell c in Figure 1 Traditional head-lexicalization relies on specific rules (Collins, 2003; Zhang and Clark, 2009) , typically extracting heads from constituent treebanks according to certain grammar formalisms. For better generalization, we use a neural attention mechanism to derive head lexical information automatically, rather than relying on linguistic head rules to find the head lexicon of each constituent, which is language-and formalism-dependent.", |
|
"cite_spans": [ |
|
{ |
|
"start": 543, |
|
"end": 558, |
|
"text": "(Collins, 2003;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 559, |
|
"end": 582, |
|
"text": "Clark and Curran, 2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 793, |
|
"end": 808, |
|
"text": "(Collins, 2003;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 809, |
|
"end": 831, |
|
"text": "Zhang and Clark, 2009)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 296, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 735, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(c).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Based on such head lexicalization, we further make a bidirectional extension of the tree structured LSTM, propagating information in the top-down direction as well as the bottom-up direction. This is analogous to the bidirectional extension of sequence structured LSTMs, which are commonly used for NLP tasks such as speech recognition (Graves et al., 2013) , sentiment analysis (Tai et al., 2015; Li et al., 2015) and machine translation Bahdanau et al., 2015) tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 357, |
|
"text": "(Graves et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 397, |
|
"text": "(Tai et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 414, |
|
"text": "Li et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 461, |
|
"text": "Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(c).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Results on a standard sentiment classification benchmark and a question type classification benchmark show that our tree LSTM structure gives significantly better accuracies compared with the method of . We achieve the best reported results for sentiment classification. Interestingly, the head lexical information that is learned automatically from the sentiment treebank consists of both syntactic head information and key sentiment word information. This shows the advantage of automatic head-finding as compared with rule-based head lexicalization. We make our code available under GPL at https://github.com/ zeeeyang/lexicalized_bitreelstm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(c).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LSTM Recurrent neural network (RNN) (Elman, 1990; Mikolov et al., 2010) achieves success on modeling linear structures due to its ability to preserve history over arbitrary length sequences. At each step, RNN decides its hidden state based on both the current input and the previous hidden state. In theory, it can carry over unbounded history. Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997 ) is a special type of RNN that leverages multiple gate vectors and a memory cell vector to solve the vanishing and exploding gradient problems of training RNNs. It has been successfully applied to parsing (Vinyals et al., 2015a) , sentiment classification (Tai et al., 2015; Li et al., 2015) , speech recognition (Graves et al., 2013) , machine translation Bahdanau et al., 2015) and image captioning (Vinyals et al., 2015b) . There are many variants of sequential LSTMs, such as simple Gated Recurrent Neural Networks (Cho et al., 2014) . Greff et al. (2017) compared various architectures of LSTM. In this paper, we take the standard LSTM with peephole connections (Gers and Schmidhuber, 2000) as a baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 49, |
|
"text": "(Elman, 1990;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 50, |
|
"end": 71, |
|
"text": "Mikolov et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 408, |
|
"text": "(Hochreiter and Schmidhuber, 1997", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 638, |
|
"text": "(Vinyals et al., 2015a)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 684, |
|
"text": "(Tai et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 685, |
|
"end": 701, |
|
"text": "Li et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 744, |
|
"text": "(Graves et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 767, |
|
"end": 789, |
|
"text": "Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 811, |
|
"end": 834, |
|
"text": "(Vinyals et al., 2015b)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 929, |
|
"end": 947, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 950, |
|
"end": 969, |
|
"text": "Greff et al. (2017)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Structured LSTM There has been a line of research that extends the standard sequential LSTM in order to model more complex structures. Kalchbrenner et al. (2016) proposed Grid LSTMs to process multi-dimensional data. Theis and Bethge (2015) proposed Spatial LSTMs to handle image data. Dyer et al. (2015) designed Stack LSTMs by adding a top pointer to sequential LSTMs to deal with push and pop sequences of a stack. Tai et al. (2015) , and Le and Zuidema (2015) (Socher et al., 2013b; Le and Zuidema, 2014) to support information flow over trees. In addition to Tai et al. (2015) , and Le and Zuidema (2015) , who explicitly named their models as Tree LSTMs, Cho et al. (2014) designed gated recurrent units over tree structures, and Chen et al. (2015) introduced gate mechanisms to recursive neural networks. These can also be regarded as variants of Tree LSTMs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 161, |
|
"text": "Kalchbrenner et al. (2016)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 240, |
|
"text": "Theis and Bethge (2015)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 304, |
|
"text": "Dyer et al. (2015)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 435, |
|
"text": "Tai et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 463, |
|
"text": "Le and Zuidema (2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 486, |
|
"text": "(Socher et al., 2013b;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 508, |
|
"text": "Le and Zuidema, 2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 581, |
|
"text": "Tai et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 609, |
|
"text": "Le and Zuidema (2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 661, |
|
"end": 678, |
|
"text": "Cho et al. (2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 720, |
|
"end": 754, |
|
"text": "structures, and Chen et al. (2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Both and Le and Zuidema (2015) proposed Binary Tree LSTM models, which can be applied to situations where there are exactly two children of each internal node in a tree. The difference between and Le and Zuidema (2015) is that besides using two forget gates, Le and Zuidema (2015) also make use of two input gates to let a node know its sibling. Tai et al. (2015) introduced Child-Sum Tree LSTM and Nary Tree LSTM. Child-Sum Tree LSTMs can support multiple children, while N-ary Tree LSTMs work for trees with a branching factor of at most N . In this perspective, Binary Tree LSTM is a special case of N-ary Tree LSTM with N = 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 30, |
|
"text": "Le and Zuidema (2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 218, |
|
"text": "Le and Zuidema (2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 259, |
|
"end": 280, |
|
"text": "Le and Zuidema (2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 363, |
|
"text": "Tai et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "When a Child-Sum Tree LSTM is applied to a dependency tree, it is referred to as a Dependency Tree LSTM. A Binary Tree LSTM is also referred to as a Constituent Tree LSTM. Based on Tai et al. (2015) , Miwa and Bansal (2016) introduced a Tree LSTM model that can handle different types of children. A dependency tree naturally contains lexical information at every node, while only leaf nodes contain lexical information in a constituent tree. None of these methods (Tai et al., 2015; Le and Zuidema, 2015) make direct use of lexical input for internal nodes when using constituent Tree LSTMs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 198, |
|
"text": "Tai et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 223, |
|
"text": "Miwa and Bansal (2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 483, |
|
"text": "(Tai et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 505, |
|
"text": "Le and Zuidema, 2015)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Bi-LSTM Another common extension to sequential LSTM is to include bidirectional information (Graves et al., 2013) , which can model history both left-to-right and right-to-left. The aforementioned Tree LSTM models (Tai et al., 2015; Le and Zuidema, 2015) propagate the history of children to their parent in the bottom-up direction only, while ignoring the top-down information flow from parents to children. proposed a top-down Tree LSTM to estimate the generation probability of a dependency tree. However, no corresponding bottom-up Tree LSTM is incorporated into their model. Paulus et al. (2014) leveraged bidirectional information over recursive binary trees by propagating global belief down from the tree root to leaf nodes. However, their model is based on recursive neural network rather than LSTM. Miwa and Bansal (2016) adopted a bidirectional Tree LSTM model to jointly extract named entities and relations under a dependency tree structure. For constituent tree structures, however, their model does not work due to lack of word inputs on non-leaf constituent nodes, and in particular the root node. Our head lexicalization allows us to investigate the top-down constituent Tree LSTM. To our knowledge, we are the first to report a bidirectional constituent Tree LSTM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 113, |
|
"text": "(Graves et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 232, |
|
"text": "(Tai et al., 2015;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 254, |
|
"text": "Le and Zuidema, 2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 600, |
|
"text": "Paulus et al. (2014)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 809, |
|
"end": 831, |
|
"text": "Miwa and Bansal (2016)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A sequence-structure LSTM estimates a sequence of hidden state vectors given a sequence of input vectors, through the calculation of a sequence of hidden cell vectors using a gate mechanism. For NLP, the input vectors are typically word embeddings (Mikolov et al., 2013) , but can also include partof-speech (POS) embeddings, character embeddings or other types of information. For notational convenience, we refer to the input vectors as lexical vectors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 270, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Formally, given an input vector sequence x 1 , x 2 , . . . , x n , each state vector h t is estimated from the Hadamard product of a cell vector c t and a corresponding output gate vector o t", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h t = o t \u2297 tanh(c t )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Here the cell vector depends on both the previous cell vector c t , and a combination of the previous state vector h t\u22121 ; the current input vector x t :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c t = f t \u2297 c t\u22121 + i t \u2297 g t g t = tanh(W xg x t + W hg h t\u22121 + b g )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The combination of c t\u22121 and g t is controlled by the Hadamard product between a forget gate vector f t and an input gate vector i t , respectively. The gates o t , f t and i t are defined as follows", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "i t = \u03c3(W xi x t + W hi h t\u22121 + W ci c t\u22121 + b i ) f t = \u03c3(W xf x t + W hf h t\u22121 + W cf c t\u22121 + b f ) o t = \u03c3(W xo x t + W ho h t\u22121 + W co c t + b o ), (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where \u03c3 is the sigmoid function. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "W xg , W hg , b g , W xi , W hi , W ci , b i , W xf , W hf , W cf , b f , W xo , W", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
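The following is a minimal NumPy sketch of the sequential LSTM step in Equations 1-3, for illustration only. The parameter names mirror the notation above, the dimensions (300-dimensional inputs, 150-dimensional states) are the sizes mentioned in Section 6, and the random initialization is an assumption; the authors' actual implementation uses the DyNet toolkit.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the sequential LSTM with peephole connections (Eqs. 1-3)."""
    i_t = sigmoid(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["W_ci"] @ c_prev + p["b_i"])
    f_t = sigmoid(p["W_xf"] @ x_t + p["W_hf"] @ h_prev + p["W_cf"] @ c_prev + p["b_f"])
    g_t = np.tanh(p["W_xg"] @ x_t + p["W_hg"] @ h_prev + p["b_g"])   # candidate update (Eq. 2)
    c_t = f_t * c_prev + i_t * g_t                                   # new cell (Eq. 2)
    o_t = sigmoid(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["W_co"] @ c_t + p["b_o"])
    h_t = o_t * np.tanh(c_t)                                         # hidden state (Eq. 1)
    return h_t, c_t

d_x, d_h = 300, 150    # assumed sizes: 300d word vectors, 150d hidden states
rng = np.random.default_rng(0)
p = {}
for name in ("W_xi", "W_xf", "W_xg", "W_xo"):
    p[name] = rng.normal(scale=0.1, size=(d_h, d_x))
for name in ("W_hi", "W_ci", "W_hf", "W_cf", "W_hg", "W_ho", "W_co"):
    p[name] = rng.normal(scale=0.1, size=(d_h, d_h))
for name in ("b_i", "b_f", "b_g", "b_o"):
    p[name] = np.zeros(d_h)

h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.normal(size=(5, d_x)):   # a toy 5-word input sequence
    h, c = lstm_step(x_t, h, c, p)
```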
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c R t\u22121 , calculating c t as c t = f L t \u2297 c L t\u22121 + f R t \u2297 c R t\u22121 + i t \u2297 g t ,", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "and the input/output gates i t /o t as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "i t = \u03c3 N \u2208{L,R} (W N hi h N t\u22121 + W N ci c N t\u22121 ) + b i o t = \u03c3 N \u2208{L,R} W N ho h N t\u22121 + W co c t + b o", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The forget gate f t is split into f L t and f R t for regulating c L t\u22121 and c R t\u22121 , respectively:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "f L t = \u03c3 N \u2208{L,R} (W N hf l h N t\u22121 + W N cf l c N t\u22121 ) + b f l f R t = \u03c3 N \u2208{L,R} (W N hfr h N t\u22121 + W N cfr c N t\u22121 ) + b fr (6) g t depends on both h L t\u22121 and h R t\u22121 , but as shown in Figure 1 (b), it does not depend on x t g t = tanh N \u2208{L,R} W N hg h N t\u22121 + b g (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Finally, the hidden state vector h t is calculated in the same way as in the sequential LSTM model shown in Equation 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "W L hi , W R hi , W L ci , W R ci , b i , W L ho , W R ho , W co , b o , W L hf l , W R hf l , W L cf l , W R cf l , b f l , W L hfr , W R hfr , W L cfr , W R cfr , b fr , W L hg , W R hg", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "and b g are model parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3" |
|
}, |
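Below is a hedged NumPy sketch of the baseline bottom-up binary Tree LSTM composition in Equations 4-7, where a parent's cell depends on the hidden and cell vectors of its two children but receives no word input. The helper names, dictionary-based parameters and initialization are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def tree_lstm_node(hL, cL, hR, cR, p):
    """Compose a parent (h_t, c_t) from its left/right children (Eqs. 4-7)."""
    def children(gate):
        # sum over N in {L, R} of W^N_h* h^N + W^N_c* c^N
        return (p[f"W_L_h{gate}"] @ hL + p[f"W_L_c{gate}"] @ cL
                + p[f"W_R_h{gate}"] @ hR + p[f"W_R_c{gate}"] @ cR)
    i_t  = sigmoid(children("i")  + p["b_i"])                               # Eq. 5
    f_Lt = sigmoid(children("fl") + p["b_fl"])                              # Eq. 6
    f_Rt = sigmoid(children("fr") + p["b_fr"])                              # Eq. 6
    g_t  = np.tanh(p["W_L_hg"] @ hL + p["W_R_hg"] @ hR + p["b_g"])          # Eq. 7, no x_t
    c_t  = f_Lt * cL + f_Rt * cR + i_t * g_t                                # Eq. 4
    o_t  = sigmoid(p["W_L_ho"] @ hL + p["W_R_ho"] @ hR + p["W_co"] @ c_t + p["b_o"])  # Eq. 5
    return o_t * np.tanh(c_t), c_t                                          # Eq. 1

d_h = 150
rng = np.random.default_rng(1)
names = [f"W_{N}_{kind}{g}" for N in "LR" for kind in "hc" for g in ("i", "fl", "fr")]
names += ["W_L_hg", "W_R_hg", "W_L_ho", "W_R_ho", "W_co"]
p = {n: rng.normal(scale=0.1, size=(d_h, d_h)) for n in names}
p.update({b: np.zeros(d_h) for b in ("b_i", "b_fl", "b_fr", "b_g", "b_o")})

# Compose a parent from two (here zero-initialized) child states.
h_parent, c_parent = tree_lstm_node(np.zeros(d_h), np.zeros(d_h), np.zeros(d_h), np.zeros(d_h), p)
```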
|
{ |
|
"text": "We introduce an input lexical vector x t to the calculation of each cell vector c t via a bottom-up head propagation mechanism. As shown in the shaded nodes in Figure 3 (b), the head propagation mechanism is parallel to the cell propagation mechanism. In contrast, the method of in Figure 3 (a) does not have the input vector x t for non-leaf constituents.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 168, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 291, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "There are multiple ways to choose a head lexicon for a given binary-branching constituent. One simple method is to choose the head lexicon of the left child as the head (left-headedness). Correspondingly, an alternative is to use the right child for head lexicon. There is less consistency in the governing head lexicons across variations of the same type of constituents with slightly different typologies. Hence, simple baselines can be less effective compared to linguistically motivated head findings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Rather than selecting head lexicons using manually-defined head-finding rules, which are language-and formalism-dependent (Collins, 2003) , we cast head finding as a part of the neural network model, learning the head lexicon of each constituent by a gated combination of the head lexicons of its two children 1 . Formally,", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 137, |
|
"text": "(Collins, 2003)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x t = z t \u2297 x L t\u22121 + (1 \u2212 z t ) \u2297 x R t\u22121 ,", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Head Lexicalization", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where x t represents the head lexicon vector of the current constituent, x L t\u22121 represents the head lexicon of its left child constituent, and x R t\u22121 represents the head lexicon of its right child constituent. The gate z t is calculated based on x L t\u22121 and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x R t\u22121 , z t = \u03c3(W L zx x L t\u22121 + W R zx x R t\u22121 + b z )", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Head Lexicalization", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here W L zx , W R zx and b z are model parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization", |
|
"sec_num": "4.1" |
|
}, |
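A small NumPy sketch of the gated head-lexicalization rule in Equations 8-9 follows. The embedding dimension and random weights are assumptions for illustration; at the leaves, the head lexicon vectors are simply the word embeddings.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def head_lexicon(x_left, x_right, W_L_zx, W_R_zx, b_z):
    """Gated mix of the children's head lexicon vectors (Eqs. 8-9)."""
    z_t = sigmoid(W_L_zx @ x_left + W_R_zx @ x_right + b_z)   # Eq. 9
    return z_t * x_left + (1.0 - z_t) * x_right               # Eq. 8

d_x = 300
rng = np.random.default_rng(2)
W_L_zx = rng.normal(scale=0.1, size=(d_x, d_x))
W_R_zx = rng.normal(scale=0.1, size=(d_x, d_x))
b_z = np.zeros(d_x)

# Toy leaf head lexicons (word embeddings), e.g. for "visited" and "Mary".
x_visited, x_mary = rng.normal(size=(2, d_x))
x_vp = head_lexicon(x_visited, x_mary, W_L_zx, W_R_zx, b_z)   # head lexicon of "visited Mary"
```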
|
{ |
|
"text": "Given head lexicon vectors for nodes, the Tree LSTM of can be extended by leveraging x t in calculating the corresponding c t . In particular, x t is used to estimate the input (i t ), output 1 In this paper, we work on binary trees only, which is a common form for CKY and shift-reduce parsing. Typical binarization methods, such as head binarization (Klein and Manning, 2003) , also rely on specific head-finding rules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 193, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 377, |
|
"text": "(Klein and Manning, 2003)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(o t ) and forget (f R t and f L t ) gates:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "i t = \u03c3 W xi x t + N \u2208{L,R} (W N hi h N t\u22121 + W N ci c N t\u22121 ) + b i f L t = \u03c3 W xf x t + N \u2208{L,R} (W N hf l h N t\u22121 + W N cf l c N t\u22121 ) + b f l f R t = \u03c3 W xf x t + N \u2208{L,R} (W N hfr h N t\u22121 + W N cfr c N t\u22121 ) + b fr o t = \u03c3 W xo x t + N \u2208{L,R} W N ho h N t\u22121 + W co c t + b o", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In addition, x t is also used in computing g t ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "g t = tanh W xg x t + N \u2208{L,R} W N hg h N t\u22121 + b g (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "With the new definition of i t , f R t , f L t and g t , the computing of c t remains the same as the baseline Tree LSTM model as shown in Equation 4. Similarly, h t remains the Hadamard product of c t and the new o t as shown in Equation 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this model,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "W xi , W xf , W", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "xg and W xo are newly-introduced model parameters. The use of x t in computing the gate and cell values are consistent with those in the baseline sequential LSTM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicalized Tree LSTM", |
|
"sec_num": "4.2" |
|
}, |
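The sketch below illustrates the head-lexicalized composition of Equations 10-11: it is the baseline Tree LSTM cell of the previous subsection with the propagated head vector x_t added to every gate and to g_t. As before, the naming and initialization are assumptions made for illustration only.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lex_tree_lstm_node(x_t, hL, cL, hR, cR, p):
    """Head-lexicalized Tree LSTM composition (Eqs. 10-11, then Eqs. 4 and 1)."""
    def children(gate):
        return (p[f"W_L_h{gate}"] @ hL + p[f"W_L_c{gate}"] @ cL
                + p[f"W_R_h{gate}"] @ hR + p[f"W_R_c{gate}"] @ cR)
    i_t  = sigmoid(p["W_xi"] @ x_t + children("i")  + p["b_i"])             # Eq. 10
    f_Lt = sigmoid(p["W_xf"] @ x_t + children("fl") + p["b_fl"])            # Eq. 10
    f_Rt = sigmoid(p["W_xf"] @ x_t + children("fr") + p["b_fr"])            # Eq. 10
    g_t  = np.tanh(p["W_xg"] @ x_t + p["W_L_hg"] @ hL + p["W_R_hg"] @ hR + p["b_g"])  # Eq. 11
    c_t  = f_Lt * cL + f_Rt * cR + i_t * g_t                                # Eq. 4 (unchanged)
    o_t  = sigmoid(p["W_xo"] @ x_t + p["W_L_ho"] @ hL + p["W_R_ho"] @ hR
                   + p["W_co"] @ c_t + p["b_o"])                            # Eq. 10
    return o_t * np.tanh(c_t), c_t                                          # Eq. 1

d_x, d_h = 300, 150
rng = np.random.default_rng(3)
p = {f"W_{N}_{k}": rng.normal(scale=0.1, size=(d_h, d_h))
     for N in "LR" for k in ("hi", "ci", "hfl", "cfl", "hfr", "cfr", "hg", "ho")}
p["W_co"] = rng.normal(scale=0.1, size=(d_h, d_h))
p.update({w: rng.normal(scale=0.1, size=(d_h, d_x)) for w in ("W_xi", "W_xf", "W_xg", "W_xo")})
p.update({b: np.zeros(d_h) for b in ("b_i", "b_fl", "b_fr", "b_g", "b_o")})

zeros = np.zeros(d_h)
h_parent, c_parent = lex_tree_lstm_node(rng.normal(size=d_x), zeros, zeros, zeros, zeros, p)
```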
|
{ |
|
"text": "Given a sequence of input vectors [x 1 , x 2 , . . . , x n ], a bidirectional sequential LSTM (Graves et al., 2013) computes two sets of hidden state vectors, [h 1 ,h 2 , . . . ,h n ] and [h n ,h n\u22121 , . . . ,h 1 ] in the left-to-right and the right-to-left directions, respectively. The final hidden state h i of the input x i is the concatenation of the corresponding state vectors in the two LSTMs,", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 115, |
|
"text": "(Graves et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h i =h i \u2295h n\u2212i+1", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The two LSTMs can share the same model parameters or use different parameters. We choose the latter in our baseline experiments. We make a bidirectional extension to the Lexicalized Tree LSTM in Section 4.2 by following the sequential LSTMs in Section 3, adding an additional Different from the bottom-up direction, each hidden state in the top-down LSTM has exactly one predecessor. In fact, the path from the root of a tree down to any node forms a sequential LSTM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Note, however, that two different sets of model parameters are used when the current node is the left and the right child of its predecessor. Denoting the two sets of parameters as U L and U R , respectively, the hidden state vector h 7 in Figure 4 is calculated from the hidden state vector h 1 using the parameter set sequence", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 248, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "[U L , U L , U R ]. Similarly, h 8 is calcu- lated from h 1 using [U L , U R , U L ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": ". At each step t, the computing of h t follows the sequential LSTM model:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h t = o t \u2297 tanh(c t\u22121 ) c t = f t \u2297 c t\u22121 + i t \u2297 g t g t = tanh(W N xg\u2193 x t\u22121 + W N hg\u2193 h t\u22121 + b N g\u2193 )", |
|
"eq_num": "(13)" |
|
} |
|
], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "With the gate values being defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "i t = \u03c3(W N xi\u2193 x t + W N hi\u2193 h t\u22121 + W N ci\u2193 c t\u22121 + b N i\u2193 ) f t = \u03c3(W N xf \u2193 x t + W N hf \u2193 h t\u22121 + W N cf \u2193 c t\u22121 + b N f \u2193 ) o t = \u03c3(W N xo\u2193 x t + W N ho\u2193 h t\u22121 + W N co\u2193 c t + b N o\u2193 )", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Here N \u2208 {L, R} and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "U N = {W N xg\u2193 , W N hg\u2193 , b N g\u2193 , W N xi\u2193 , W N hi\u2193 , W N ci\u2193 , b N i\u2193 , W N xf \u2193 , W N hf \u2193 , W N cf \u2193 , b N f \u2193 , W N xo\u2193 , W N ho\u2193 , W N co\u2193 , b N o\u2193 }. U L", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "and U R are model parameters in the top-down Tree LSTM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
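A hedged sketch of the top-down pass (Equations 13-14) follows. The nested-tuple tree encoding, the zero initialization of the root's incoming state, and the choice to apply U_L at the root are assumptions made purely for illustration; what the sketch shows is that each node has a single predecessor and that U_L or U_R is selected according to whether the node is a left or right child.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def make_params(d_x, d_h, rng):
    p = {k: rng.normal(scale=0.1, size=(d_h, d_x)) for k in ("W_xg", "W_xi", "W_xf", "W_xo")}
    p.update({k: rng.normal(scale=0.1, size=(d_h, d_h))
              for k in ("W_hg", "W_hi", "W_ci", "W_hf", "W_cf", "W_ho", "W_co")})
    p.update({b: np.zeros(d_h) for b in ("b_g", "b_i", "b_f", "b_o")})
    return p

def top_down_step(x_t, h_par, c_par, U):
    """One top-down step (Eqs. 13-14) from a node's parent state to the node itself."""
    i_t = sigmoid(U["W_xi"] @ x_t + U["W_hi"] @ h_par + U["W_ci"] @ c_par + U["b_i"])
    f_t = sigmoid(U["W_xf"] @ x_t + U["W_hf"] @ h_par + U["W_cf"] @ c_par + U["b_f"])
    g_t = np.tanh(U["W_xg"] @ x_t + U["W_hg"] @ h_par + U["b_g"])
    c_t = f_t * c_par + i_t * g_t
    o_t = sigmoid(U["W_xo"] @ x_t + U["W_ho"] @ h_par + U["W_co"] @ c_t + U["b_o"])
    return o_t * np.tanh(c_t), c_t

def top_down(node, h_par, c_par, U, U_L, U_R, out):
    """node = (head_vector, left_child, right_child) or (head_vector,) for a leaf."""
    h, c = top_down_step(node[0], h_par, c_par, U)   # this node's top-down state
    out.append(h)
    if len(node) == 3:
        top_down(node[1], h, c, U_L, U_L, U_R, out)  # left child uses U_L
        top_down(node[2], h, c, U_R, U_L, U_R, out)  # right child uses U_R

d_x, d_h = 300, 150
rng = np.random.default_rng(4)
U_L, U_R = make_params(d_x, d_h, rng), make_params(d_x, d_h, rng)

# Toy tree for a two-word span: the root's head vector would come from the
# bottom-up head propagation; leaf head vectors are the word embeddings.
leaf = lambda: (rng.normal(size=d_x),)
tree = (rng.normal(size=d_x), leaf(), leaf())

states = []
top_down(tree, np.zeros(d_h), np.zeros(d_h), U_L, U_L, U_R, states)
```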
|
{ |
|
"text": "One final note is that the top-down Tree LSTM is enabled by the head propagation mechanism, which allows a head lexicon node to be made available for the root constituent node. Without such information, it would be difficult to build top-down LSTM for constituent trees.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bidirectional Extensions", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We apply the bidirectional Tree LSTM to classification tasks, where the input is a sentence with its binarized constituent tree, and the output is a discrete label. We denote the bottom-up hidden state vector of the root ash ROOT \u2191 , the top-down hidden state vector of the root ash ROOT \u2193 and the top-down hidden state vectors of the input words x 1 , x 2 , . . . , x n ash 1 ,h 2 , . . . ,h n . We take the concatenation of h ROOT \u2191 ,h ROOT \u2193 and the average ofh 1 ,h 2 , . . . ,h n as the final representation h of the sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Usage for Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h =h ROOT \u2191 \u2295h ROOT \u2193 \u2295 1 n n i=1h i", |
|
"eq_num": "(15)" |
|
} |
|
], |
|
"section": "Usage for Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A softmax classifier is used to predict the probability p j of sentiment label j from h by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Usage for Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h l = ReLU(W hl h + b hl ) P = softmax(W lp h l + b lp ) p j = P [j],", |
|
"eq_num": "(16)" |
|
} |
|
], |
|
"section": "Usage for Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where W hl , b hl , W lp and b lp are model parameters, and ReLU is the rectifier function f (x) = max(0, x). During prediction, the largest probability component of P will be taken as the answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Usage for Classification", |
|
"sec_num": "5" |
|
}, |
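The following is a small NumPy sketch of the classifier in Equations 15-16: the bottom-up and top-down root states are concatenated with the averaged top-down word states, then passed through a ReLU layer and a softmax. The random stand-in vectors are assumptions; the 150- and 128-dimensional sizes follow Section 6.

```python
import numpy as np

def classify(h_root_up, h_root_down, word_states, W_hl, b_hl, W_lp, b_lp):
    h = np.concatenate([h_root_up, h_root_down, np.mean(word_states, axis=0)])  # Eq. 15
    h_l = np.maximum(0.0, W_hl @ h + b_hl)                                       # ReLU, Eq. 16
    logits = W_lp @ h_l + b_lp
    P = np.exp(logits - logits.max())
    P /= P.sum()                                                                 # softmax, Eq. 16
    return int(np.argmax(P)), P          # predicted label = largest-probability component

d_h, d_out, n_labels, n_words = 150, 128, 5, 7
rng = np.random.default_rng(5)
W_hl, b_hl = rng.normal(scale=0.1, size=(d_out, 3 * d_h)), np.zeros(d_out)
W_lp, b_lp = rng.normal(scale=0.1, size=(n_labels, d_out)), np.zeros(n_labels)
label, P = classify(rng.normal(size=d_h), rng.normal(size=d_h),
                    rng.normal(size=(n_words, d_h)), W_hl, b_hl, W_lp, b_lp)
```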
|
{ |
|
"text": "We train our classifier to maximize the conditional log-likelihood of gold labels of training samples. Formally, given a training set of size |D|, the training objective is defined by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(\u0398) = \u2212 |D| i=1 log p yi + \u03bb 2 ||\u0398|| 2 ,", |
|
"eq_num": "(17)" |
|
} |
|
], |
|
"section": "Training", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where \u0398 is the set of model parameters, \u03bb is a regularization parameter, y i is the gold label of the ith training sample and p y i is obtained according to Equation 16. For sequential LSTM models, we collect errors over each sequence. For Tree LSTMs, we sum up errors at every node. The model parameters are optimized using ADAM (Kingma and Ba, 2015) without gradient clipping, with the default hyper-parameters of the AdamTrainer in the Dynet toolkits. 2 We also use dropout (Srivastava et al., 2014) at lexical input embeddings with a fixed probability p drop to avoid overfitting. p drop is set to 0.5 for all tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 455, |
|
"end": 456, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 502, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "6" |
|
}, |
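A compact sketch of the training objective in Equation 17 follows, computed over toy predictions. It only illustrates the summed negative log-likelihood plus L2 penalty; the actual optimization (ADAM in DyNet, with dropout on the lexical inputs) is as described above.

```python
import numpy as np

def objective(predictions, gold_labels, weight_matrices, lam):
    """Eq. 17: negative log-likelihood of gold labels plus an L2 regularizer."""
    nll = -sum(np.log(P[y]) for P, y in zip(predictions, gold_labels))
    l2 = 0.5 * lam * sum(np.sum(W ** 2) for W in weight_matrices)
    return nll + l2

# Toy usage: two training samples with 5-class predicted distributions.
predictions = [np.array([0.2, 0.2, 0.2, 0.2, 0.2]), np.array([0.1, 0.6, 0.1, 0.1, 0.1])]
gold_labels = [2, 1]
weights = [np.random.default_rng(6).normal(size=(4, 4))]
print(objective(predictions, gold_labels, weights, lam=1e-4))
```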
|
{ |
|
"text": "Following Tai et al. (2015) , Li et al. (2015) , and Le and Zuidema (2015) , we use Glove-300d word embeddings 3 to train our model. The pretrained word embeddings are fine-tuned for all tasks. Unknown words are handled in two steps. First, if a word is not contained in the pretrained word embeddings, but its lowercased form exists in the embedding table, we use the lowercase as a replacement. Second, if both the original word and its lowercased form cannot be found, we treat the word as unk. The embedding vector of the UNK token is initialized as the average of all embedding vectors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 27, |
|
"text": "Tai et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 30, |
|
"end": 46, |
|
"text": "Li et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 53, |
|
"end": 74, |
|
"text": "Le and Zuidema (2015)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "6" |
|
}, |
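The two-step unknown-word handling described above can be sketched as follows. The tiny embedding table is a toy stand-in; the real model uses the pretrained GloVe-300d vectors, with the UNK vector set to the average of all embedding vectors.

```python
import numpy as np

# Toy pretrained table; real entries would be 300-dimensional GloVe vectors.
emb = {
    "titanic": np.array([0.10, 0.20]),
    "sink": np.array([0.30, -0.10]),
    "what": np.array([0.00, 0.50]),
}
emb["UNK"] = np.mean(list(emb.values()), axis=0)   # UNK = average of all embeddings

def lookup(word):
    if word in emb:
        return emb[word]              # found as-is
    if word.lower() in emb:
        return emb[word.lower()]      # step 1: fall back to the lowercased form
    return emb["UNK"]                 # step 2: otherwise treat the word as UNK

print(lookup("What"))   # hits via lowercasing
print(lookup("ZPar"))   # falls back to UNK
```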
|
{ |
|
"text": "We use one hidden layer and the same dimensionality settings for both sequential and Tree LSTMs. LSTM hidden states are of size 150. The output hidden size is 128 and 64 for the sentiment classification task and the question type classification task, respectively. Each model is trained for 30 iterations. The same training procedure repeats five times using different random seeds, with parameters being evaluated at the end of every iteration on the development set. The model that gives the best development result is used for final tests.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The effectiveness of our model is tested mainly on a sentiment classification task and a question type classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Sentiment Classification. For sentiment classification, we use the same data settings as . Specifically, we use the Stanford Sentiment Treebank (Socher et al., 2013b) . Each sentence is annotated with a constituent tree. Every internal node corresponds to a phrase. Each node is manually assigned an integer sentiment label from 0 to 4, that correspond to five sentiment classes: very negative, negative, neutral, positive and very positive, respectively. The root label represents the sentiment label of the whole sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 166, |
|
"text": "(Socher et al., 2013b)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "We perform both binary classification and finegrained classification. Following previous work, we use labels of all phrases for training. Gold-standard 3 http://nlp.stanford.edu/data/glove.840B.300d.zip tree structures are used for training and testing (Le and Zuidema, 2015; Li et al., 2015; Tai et al., 2015) . Accuracies are evaluated for both the sentence root labels and phrase labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 275, |
|
"text": "(Le and Zuidema, 2015;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 292, |
|
"text": "Li et al., 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 310, |
|
"text": "Tai et al., 2015)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Question Type Classification. For the question type classification task, we use the TREC data (Li and Roth, 2002) . Each training sample in this dataset contains a question sentence and its corresponding question type. We work on the sixway coarse classification task, where the six question types are ENTY, HUM, LOC, DESC, NUM and ABBR, corresponding to ENTITY, HUMAN, LOCA-TION, DESCRIPTION, NUMERIC VALUE and AB-BREVIATION, respectively. For example, the type for the sentence \"What year did the Titanic sink?\" is NUM. The training set consists of 5,452 examples and the test set contains 500 examples. Since there is no development set, we follow Zhou et al. 2015, randomly extracting 500 examples from the training set as a development set. Unlike the sentiment treebank, there is no annotated tree for each sentence. Instead, we obtain an automatically parsed tree for each sentence using ZPar 4 off-the-shelf (Zhang and Clark, 2011) . Another difference between the TREC data and the sentiment treebank is that there is only one label, at the root node, rather than a label for each phrase.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 113, |
|
"text": "(Li and Roth, 2002)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 916, |
|
"end": 939, |
|
"text": "(Zhang and Clark, 2011)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "We consider two models for our baselines. The first is bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997; Graves et al., 2013) . Our bidirectional constituency Tree LSTM (BiConTree) is compared against BiLSTM to investigate the effectiveness of tree structures. For the sentiment task, following Tai et al. (2015) and Li et al. (2015) , we convert the treebank into sequences to allow the bidirectional LSTM model to make use of every phrase span as a training example. The second baseline model is the bottom-up Tree LSTM model of . We compare this model with our lexicalized bidirectional models to show the effects of adding head lexicalization and top-down information flow. (Socher et al., 2013b) 45.7 80.7 85.4 87.6", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 117, |
|
"text": "(Hochreiter and Schmidhuber, 1997;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 138, |
|
"text": "Graves et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 325, |
|
"text": "Tai et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 346, |
|
"text": "Li et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 691, |
|
"end": 713, |
|
"text": "(Socher et al., 2013b)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "BiLSTM (Li et al., 2015) 49.8 83.3 86.7 -DepTree (Tai et al., 2015) 48.4 -85.7 -ConTree (Le and Zuidema, 2015) 49.9 -88.0 -ConTree 50.1 ---", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 24, |
|
"text": "(Li et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 67, |
|
"text": "(Tai et al., 2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 110, |
|
"text": "(Le and Zuidema, 2015)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "ConTree (Li et al., 2015) 50.4 83.4 86.7 -ConTree (Tai et al., 2015) 51. Table 1 shows the main results for the sentiment classification task, where RNTN is the recursive neural tensor model of Socher et al. (2013b) , Con-Tree and DepTree denote constituency Tree LSTMs and dependency Tree LSTMs, respectively. Our reimplementations of sequential bidirectional LSTM and constituent Tree LSTM give comparable results to the original implementations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 25, |
|
"text": "(Li et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 50, |
|
"end": 68, |
|
"text": "(Tai et al., 2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 215, |
|
"text": "Socher et al. (2013b)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 80, |
|
"text": "Table 1", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "After incorporating head lexicalization into our constituent Tree LSTM, the fine-grained sentiment classification accuracy increases from 51.2 to 52.8, and the binary sentiment classification accuracy increases from 88.5 to 89.2, which demonstrates the effectiveness of the head lexicalization mechanism. Table 1 also shows that a vanilla top-down Con-Tree LSTM by head-lexicalization (i.e. the topdown half of the final bidirectional model) alone obtains comparable accuracies to the bottom-up Con-Tree LSTM model. The BiConTree model can further improve the classification accuracies by 0.7 points (fine-grained) and 1.3 points (binary) compared to the unidirectional bottom-up lexicalized ConTree LSTM model, respectively. Table 1 includes 5 class accuracies for all nodes. There is no significant difference between different models, consistent with the observation of Li et al. (2015) . To our knowledge, these are the best reported results for this sentiment classification task. (Silva et al., 2011) 95.0 Bidirectional ConTree LSTM 94.8 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 873, |
|
"end": 889, |
|
"text": "Li et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 986, |
|
"end": 1006, |
|
"text": "(Silva et al., 2011)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 312, |
|
"text": "Table 1", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 733, |
|
"text": "Table 1", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "Introducing head lexicalization and bidirectional extension to the model increases the model complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Time and Model Size", |
|
"sec_num": "7.4" |
|
}, |
|
{ |
|
"text": "In this section, we analyze training time and model size with the fine-grained sentiment classification task. We run all the models using an i7-4790 3.60GHz CPU with a single thread. Table 3 shows the average running time for different models over 30 iterations. The baseline ConTree model takes about 1.3 hours to finish the training procedure. Con-Tree+Lex takes about 1.5 times longer than Con-Tree. BiConTree takes about 3.2 hours, which is about 2.5 times longer than that of ConTree. Table 4 compares the model sizes. We did not count the number of parameters in the lookup table since these parameters are the same for all models. Because the size of LSTM models mainly depends on the dimensionality of the state vector h, we change the size of h to study the effect of model size. When |h| = 150, the model size of the baseline model ConTree is the smallest, which consists of about 538K parameters. The model size of Con-Tree+Lex is about 1.4 times as large as that of the baseline model. The bidirectional model BiCon-Tree is the largest, about 1.7 times as large as that of the ConTree+Lex model. However, this parameter set is not very large compared to the modern memory capacity, even for a computer with 16GB RAM. In conclusion, in terms of both time, number of parameters and accuracy, head lexicalization method is Table 4 also helps to clarify whether the gain of the BiConTree model over the ConTree+Lex model is from the top-down information flow or more parameters. For the same model, increasing the model size can improve the performance to some extent. For example, doubling the size of |h| (75 \u2192 150) increases the performance from 51.5 to 52.8 for the ConTree+Lex model. Similarly, we boost the performance of the BiConTree model when doubling the size of |h| from 75 to 150. However, doubling the size of |h| from 150 to 300 empirically decreases the performance of the Con-Tree+Lex model. The size of the BiConTree model with |h| = 75 is much smaller than that of the Con-Tree+Lex model with |h| = 150. However the performance of these two models is quite close, which indicates that top-down information is useful even for a small model. A ConTree+Lex model with |h| = 215 and a BiConTree model with |h| = 150 are of similar size. The performance of the Con-Tree+Lex model is again worse than that of the Bi-ConTree model (52.5 v.s. 53.5), which shows the effectiveness of top-down information.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 190, |
|
"text": "Table 3", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 497, |
|
"text": "Table 4", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 1332, |
|
"end": 1339, |
|
"text": "Table 4", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training Time and Model Size", |
|
"sec_num": "7.4" |
|
}, |
|
{ |
|
"text": "In this experiment, we investigate the effect of our head lexicalization method over heuristic baselines. We consider three baseline methods, namely left branching (L), right branching (R) and averaging (A). For L, a parent node accepts lexical information of its left child while ignoring the right child. Correspondingly, for R, a parent node accepts lexical information of its right child while ignoring the left child. For A, a parent node takes the average of the lexical vectors of its children. Table 5 shows the accuracies on the test set, where G denotes our gated head lexicalization method de-Method L R A G Root Accuracy (%) 51.1 51.6 51.8 53.5 Table 5 : Test set accuracies of four head lexicalization methods on fine-grained classification. scribed in Section 4.1. R gives better results compared to L due to relatively more right-branching structures in this treebank. A simple average yields similar results compared with right branching. In contrast, G outperforms A method by considering the relative weights of each branch according to treelevel contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 502, |
|
"end": 509, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 657, |
|
"end": 664, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization Methods", |
|
"sec_num": "7.5" |
|
}, |
|
{ |
|
"text": "We then investigate what lexical heads can be learned by G. Interestingly, the lexical heads contain both syntactic and sentiment information. Some heads correspond well to syntactic rules (Collins, 2003) , others are driven by subjective words. Compared to Collins' rules, our method found 30.68% and 25.72% overlapping heads on the development and test sets, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 204, |
|
"text": "(Collins, 2003)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization Methods", |
|
"sec_num": "7.5" |
|
}, |
|
{ |
|
"text": "Based on the cosine similarity between the head lexical vector and its children, we visualize the head of a node by choosing the head of the child that gives the largest similarity value. Figure 5 shows some examples, where <> indicates head words, sentiment labels (e.g. 2, 3) are also included. In Figure 5a, \"Emerges\" is the syntactic head word of the whole phrase, which is consistent with Collins-style head finding. However, \"rare\" is the head word of the phrase \"something rare\", which is different from the syntactic head. Similar observations are found in Figure 5b , where \"good\" is the head word of the whole phrase, rather than the syntactic head \"place\". The sentiment label of \"good\" and the sentiment label of the whole phrase are both 3. Figure 5c shows more complex interactions between syntax and sentiment for deciding the head word. Table 6 shows some example sentences incorrectly predicted by the baseline bottom-up tree model, but correctly labeled by our final model. The head word of sentence #1 by our model is \"Gloriously\", which is consistent with the sentiment of the whole sentence. This shows how head lexicalization can affect sentiment classification results. Sentences #2 and #3 show the usefulness of top-down informa- tion for complex semantic structures, where compositionality has subtle effects. Our final model improves the results for the 'very negative' and 'very positive' classes by 10% and 11%, respectively. It also boosts the accuracies for sentences with negation (e.g. \"not\", \"no\", and \"none\") by 4.4%. Figure 6 shows the accuracy distribution accord- Figure 6 : Distribution of 5-class accuracies at the root level according to the sentence length.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 196, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 306, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 574, |
|
"text": "Figure 5b", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 754, |
|
"end": 763, |
|
"text": "Figure 5c", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 853, |
|
"end": 860, |
|
"text": "Table 6", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 1552, |
|
"end": 1560, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1601, |
|
"end": 1609, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Head Lexicalization Methods", |
|
"sec_num": "7.5" |
|
}, |
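The head-word visualization described above reduces to an argmax over cosine similarities. A small sketch of the selection rule is given below (NumPy; the vectors and words are toy placeholders).

```python
import numpy as np

def cosine(u, v, eps=1e-8):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def pick_head_child(parent_head_vec, child_lex_vecs, child_head_words):
    """Return the head word of the child whose lexical vector is most similar
    (by cosine similarity) to the parent's learned head lexical vector."""
    sims = [cosine(parent_head_vec, c) for c in child_lex_vecs]
    return child_head_words[int(np.argmax(sims))]

# Toy example with made-up vectors for the phrase "something rare".
rng = np.random.default_rng(1)
parent = rng.normal(size=8)
children = [rng.normal(size=8), rng.normal(size=8)]
print(pick_head_child(parent, children, ["something", "rare"]))
```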
|
{ |
|
"text": "ing to the sentence length. We find that our model can improve the classification accuracy for longer sentences (>30 words) by 3.5 absolute points compared to the baseline ConTree LSTM of , which demonstrates the strength of our model for handling long range information. By considering bidirectional information over tree structures, our model is aware of more contexts for making better predictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "7.6" |
|
}, |
|
{ |
|
"text": "Our main results are obtained on semanticdriven sentence classification tasks, where the automatically-learned head words contain mixed syntactic and semantic information. To further investigate the effectiveness of automatically learned head information on a pure syntactic task, we additionally conduct a simple parser reranking experiment. Further, we discuss findings in language modeling by Kuncoro et al. (2017) on the model of recurrent neural network grammars (Dyer et al., 2016) . Finally, we show potential future work leveraging our idea for more tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 396, |
|
"end": 417, |
|
"text": "Kuncoro et al. (2017)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 487, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Applications", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We use our tree LSTM models to rerank the 10 best outputs of the Charniak (2000) parser. Given a sentence x, suppose that Y (x) is a set of parse tree candidates generated by a baseline parser for x, the goal of a syntactic reranker is to choose the best parsing hypothesis\u0177 according to a score function f (x, y; \u0398). Formally,", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 80, |
|
"text": "Charniak (2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y = arg max y\u2208Y (x) {f (x, y; \u0398)}", |
|
"eq_num": "(18)" |
|
} |
|
], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "For each tree y of sentence x, we follow Socher et al. (2013a) and define the score f (x, y; \u0398) as the sum of scores of each constituent node,", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 62, |
|
"text": "Socher et al. (2013a)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f (x, y; \u0398) = r\u2208node(x,y) Score(r; \u0398)", |
|
"eq_num": "(19)" |
|
} |
|
], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "Without loss of generality, we take a binary node as an example. Given a node A, suppose that its two children are B and C. Let the learned composition state vectors of A, B and C by our proposed Tree-LSTM model be n A , n B and n C , respectively. The head word vector of node A is h A . Score(A; \u0398) is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "o BC A = ReLU (W L s n B + W R s n C + W H s h A + b s ) Score BC A = log(sof tmax(o BC A ))[A],", |
|
"eq_num": "(20)" |
|
} |
|
], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
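As a concrete reading of Equations 18-20, the sketch below (NumPy; parameter shapes and the label indexing are assumptions for illustration) scores one binary node with the ReLU and log-softmax form, sums node scores over a candidate tree, and selects the highest-scoring candidate.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def log_softmax(x):
    x = x - np.max(x)
    return x - np.log(np.sum(np.exp(x)))

def node_score(n_B, n_C, h_A, label_A, params):
    """Eq. 20: o is label-dimensional here, and the node score is the
    log-probability of the node's constituent label A under softmax(o)."""
    o = relu(params["W_Ls"] @ n_B + params["W_Rs"] @ n_C
             + params["W_Hs"] @ h_A + params["b_s"])
    return log_softmax(o)[label_A]

def tree_score(nodes, params):
    """Eq. 19: sum of node scores over all constituent nodes of a candidate."""
    return sum(node_score(*node, params) for node in nodes)

def rerank(candidates, params):
    """Eq. 18: index of the candidate tree with the highest score."""
    return int(np.argmax([tree_score(nodes, params) for nodes in candidates]))

# Toy usage: each candidate is a list of (n_B, n_C, h_A, label_index) tuples.
d, n_labels = 6, 5
rng = np.random.default_rng(2)
params = {"W_Ls": rng.normal(size=(n_labels, d)), "W_Rs": rng.normal(size=(n_labels, d)),
          "W_Hs": rng.normal(size=(n_labels, d)), "b_s": np.zeros(n_labels)}
cand = lambda k: [(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d),
                   int(rng.integers(n_labels))) for _ in range(k)]
print(rerank([cand(3), cand(4)], params))
```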
|
{ |
|
"text": "where W L s , W R s and b s are model parameters. Training. Given a training instance x i , Y (x i ) in the training set D, we use a max-margin loss function to train our reranking model. Suppose that the oracle parse tree in", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Y (x i ) is y i , the loss function L(\u0398) is L(\u0398) = 1 |D| |D| i=1 r i (\u0398) + \u03bb 2 ||\u0398|| 2", |
|
"eq_num": "(21)" |
|
} |
|
], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "Here \u03bb is a regularization parameter and r i (\u0398) is the margin loss between y i and the highest score tree\u0177 i predicted by the reranking model. r i (\u0398) is given by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r i (\u0398) = max yi\u2208Y (xi) (0, f (x i ,\u0177 i ; \u0398)+ \u2206(y i ,\u0177 i ) \u2212 f (x i , y i ; \u0398)),", |
|
"eq_num": "(22)" |
|
} |
|
], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "where \u2206(y i ,\u0177 i ) is the structure loss between y i and y i by counting the number of incorrect nodes in the oracle tree:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2206(y i ,\u0177 i ) = node\u2208\u0177i \u03ba1{node / \u2208 y i }.", |
|
"eq_num": "(23)" |
|
} |
|
], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "\u03ba is a scalar. With this loss function, we require the score of the oracle tree to be higher than the other candidates by a score margin. Intuitively, the score of the y i will increase and the score of\u0177 i will decrease during training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
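The training objective in Equations 21-23 can be written compactly as follows (a plain-Python sketch; the score function and the (label, start, end) node signatures are assumptions standing in for the reranker's f(x, y; Θ) and the constituent nodes).

```python
def structure_loss(gold_nodes, pred_nodes, kappa=0.1):
    """Eq. 23: kappa times the number of predicted nodes absent from the oracle
    tree; a node is identified by a hashable signature such as (label, start, end)."""
    gold = set(gold_nodes)
    return kappa * sum(1 for node in pred_nodes if node not in gold)

def margin_loss(score_fn, x, candidates, gold, kappa=0.1):
    """Eq. 22: hinge loss against the most violating candidate tree."""
    gold_score = score_fn(x, gold)
    violations = [score_fn(x, y) + structure_loss(gold, y, kappa) - gold_score
                  for y in candidates]
    return max(0.0, max(violations))

def training_loss(score_fn, dataset, lam, theta_sq_norm, kappa=0.1):
    """Eq. 21: averaged margin loss plus an L2 penalty on the parameters."""
    avg = sum(margin_loss(score_fn, x, cands, gold, kappa)
              for x, cands, gold in dataset) / len(dataset)
    return avg + 0.5 * lam * theta_sq_norm

# Toy usage: trees are sets of (label, start, end) signatures and the scorer
# is a placeholder for f(x, y; Theta).
toy_score = lambda x, y: float(len(y))
gold = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)}
cands = [gold, {("S", 0, 5), ("NP", 0, 3), ("VP", 3, 5)}]
print(margin_loss(toy_score, "sentence", cands, gold))  # margin violated by the second candidate
```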
|
{ |
|
"text": "Results. We experiment on the WSJ portion of the Penn Treebank, following the standard split (Collins, 2003) . Sections 2-21 are used for training, Section 24 and Section 23 are the development set and test set, respectively. The Charniak parser (Charniak, 2000; Charniak and Johnson, 2005 ) is adopted for our baseline by following the settings of Choe and Charniak (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 108, |
|
"text": "(Collins, 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 262, |
|
"text": "(Charniak, 2000;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 289, |
|
"text": "Charniak and Johnson, 2005", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 373, |
|
"text": "Charniak (2016)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "To obtain N-best lists on the development set and test set, we first train a baseline parser on the training set. To obtain N-best lists on the training data, we split the training data into 20 folds and trained 20 parsers. Each parser was trained on 19 folds data and used to produce the n-best list of the remaining fold. For the neural reranking model, we use the pretrained word vectors from Collobert et al. (2011) . The input dimension is 50. The dimension of state vectors in Tree-LSTM model is 60. These parameters are trained with ADAM (Kingma and Ba, 2015) with a batch size of 20. We set \u03ba = 0.1 for all experiments. For practical reasons, we use the Con-Tree+Lex model to learn the node representations and define Y (x i ) to be the 10-best parsing trees of x i . Table 7 shows the reranking results on WSJ test set. The baseline F1 score is 89.7. Our ConTree improves the baseline model to 90.6. Using Con-Tree+Lex model can further improve the performance (90.6 \u2192 90.9). This suggests that automatic heads can also be useful for a syntactic task. Among neural rerankers, our model outperforms Socher et al. (2013a) , but underperforms current state-of-theart models, including sequence-to-sequence based LSTM language models (Vinyals et al., 2015a; Choe and Charniak, 2016) and recurrent neural network grammars (Dyer et al., 2016) . This is likely due to our simple reranking configurations and settings 5 . Nevertheless, it serves our goal of contrasting the tree LSTM models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 396, |
|
"end": 419, |
|
"text": "Collobert et al. (2011)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1107, |
|
"end": 1128, |
|
"text": "Socher et al. (2013a)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1239, |
|
"end": 1262, |
|
"text": "(Vinyals et al., 2015a;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1263, |
|
"end": 1287, |
|
"text": "Choe and Charniak, 2016)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1326, |
|
"end": 1345, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 776, |
|
"end": 783, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "8.1" |
|
}, |
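The 20-fold jackknifing used to produce N-best lists for the training data follows the usual bookkeeping; a minimal sketch is given below, where train_parser and parse_nbest are hypothetical stand-ins for the external Charniak parser interface.

```python
def jackknife_nbest(sentences, n_folds=20, n_best=10, train_parser=None, parse_nbest=None):
    """Train one parser per fold on the remaining n_folds - 1 folds and use it
    to produce N-best parses for the held-out fold, so that no sentence is
    parsed by a model that saw it during training. `train_parser(sents)` and
    `parse_nbest(parser, sent, n_best)` are hypothetical helper functions."""
    folds = [sentences[i::n_folds] for i in range(n_folds)]
    nbest = {}
    for i, held_out in enumerate(folds):
        train_sents = [s for j, fold in enumerate(folds) if j != i for s in fold]
        parser = train_parser(train_sents)
        for sent in held_out:
            nbest[sent] = parse_nbest(parser, sent, n_best)
    return nbest
```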
|
{ |
|
"text": "Kuncoro et al. (2017) investigate composition functions in recurrent neural network grammars (RNNG) (Dyer et al., 2016) , finding that syntactic head information can be automatically learned. Their observa-Model F 1 Baseline (Charniak (2000) ) 89.7 ConTree 90.6 ConTree+Lex 90.9 Our 10-best Oracle 94.8 Table 7 : Reranking results on WSJ test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 119, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 241, |
|
"text": "(Charniak (2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 310, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language Modeling", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "tion is consistent with ours. Formally, an RNNG is a tuple N, \u03a3, R, S, \u0398 , where N is the set of nonterminals, \u03a3 is the set of terminals, R is a set of top-down transition-based rules, S is the start symbol and \u0398 is the set of model parameters. Given S, the derivation process resembles transition-based parsing, which is performed incrementally from left to right. Unlike surface language models, RNNGs model sentences with explicit grammar. Comparing naive sequence-to-sequence models of syntax (Vinyals et al., 2015a) , RNNGs have the advantage of explicitly modeling syntactic composition between constituents, by combining the vector representation of child constituents into a single vector representation of their parent using a neural network. Kuncoro et al. (2017) show that such compositions are the key to the success, and further investigate several alternatives neural network structures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 497, |
|
"end": 520, |
|
"text": "(Vinyals et al., 2015a)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 773, |
|
"text": "Kuncoro et al. (2017)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Modeling", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "In particular, they compare vanilla LSTMs to attention networks when composing child constituents. Interestingly, the attention values represent syntactic heads among the child constituents to some extent. In addition, the vector constituent representation implicitly reflects constituent types. Their finding is consistent with ours in that a neural network can learn pure syntactic head information from constituent vectors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Modeling", |
|
"sec_num": "8.2" |
|
}, |
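As a rough illustration of this kind of attention-based composition (a generic sketch, not the exact composition function of the RNNG models discussed here), the attention weights over child constituent vectors can be read off as soft head indicators.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_compose(child_vecs, query, W):
    """Compose child constituent vectors into a parent vector as an
    attention-weighted sum; the child receiving the largest weight can be
    interpreted as a soft syntactic head."""
    children = np.stack(child_vecs)      # (num_children, d)
    scores = children @ (W @ query)      # one attention score per child
    weights = softmax(scores)
    return weights @ children, weights

# Toy usage: `query` stands in for, e.g., a nonterminal embedding (assumed).
rng = np.random.default_rng(3)
d = 8
kids = [rng.normal(size=d) for _ in range(3)]
parent, w = attention_compose(kids, rng.normal(size=d), rng.normal(size=(d, d)))
print(int(np.argmax(w)), w)
```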
|
{ |
|
"text": "Our head-lexicalized tree model can be used for all tasks that require representation learning for sentences, given their constituent syntax. One example of future work is relation extraction. For example, given the sentence \"John is from Google Inc.\", a relation 'works in' can be extracted between 'John' and 'Google Inc.'. Miwa and Bansal (2016) solve this task by using the Child-Sum tree representation of Tai et al. (2015) to represent the input sentence, extracting features for the two entities according to their related nodes in the dependency tree, and then conducting rela-tion classification based on these features. Headlexicalization and top-down information can potentially be useful for improving relation extraction in the framework of Miwa and Bansal (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 348, |
|
"text": "Miwa and Bansal (2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 428, |
|
"text": "Tai et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 754, |
|
"end": 776, |
|
"text": "Miwa and Bansal (2016)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation Extraction", |
|
"sec_num": "8.3" |
|
}, |
|
{ |
|
"text": "We proposed lexicalized variants for constituent tree LSTMs. Learning the heads of constituents automatically using a neural model, our lexicalized tree LSTM is applicable to arbitrary binary branching trees in CFG, and is formalism-independent. In addition, lexical information on the root further allows a top-down extension to the model, resulting in a bidirectional constituent Tree LSTM. Experiments on two well-known datasets show that head-lexicalization improves the unidirectional Tree LSTM model. In addition, the bidirectional Tree LSTM gives superior labeling results compared to both unidirectional Tree LSTMs and bidirectional sequential LSTMs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
|
{ |
|
"text": "https://github.com/clab/dynet", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/SUTDNLP/ZPar, version 7.5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dyer et al. (2016) employs 2-layerd LSTMs with input and hidden dimensions of size 256 and 128. Choe and Charniak (2016) use 3-layered LSTMs with both the input and hidden dimensions of size 1500. In addition, we only use the tree LSTM for scoring candidate parses in order to isolate the effect of tree LSTMs. In contrast, the previous works use the complex feature combinations in order to achieve high accuracies, which is different from our goal.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the anonymous reviewers for their detailed and constructive comments. Yue Zhang is the corresponding author.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2016 International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2016 International Conference on Learning Represen- tations, San Diego, California, USA, May.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Coarse-tofine n-best parsing and MaxEnt discriminative reranking", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and MaxEnt discriminative rerank- ing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 173-180, Ann Arbor, Michigan, USA, June. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A maximum-entropy-inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American chapter of the Association for Computational Linguis- tics conference, pages 132-139, Seattle, Washington, April. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Sentence modeling with gated recursive neural network", |
|
"authors": [ |
|
{ |
|
"first": "Xinchi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenxi", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiyu", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "793--798", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Shiyu Wu, and Xuanjing Huang. 2015. Sentence modeling with gated recursive neural network. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 793-798, Lisbon, Portu- gal, September. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "On the properties of neural machine translation: Encoder-Decoder approaches", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the proper- ties of neural machine translation: Encoder-Decoder approaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation, pages 103-111, Doha, Qatar, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Parsing as language modeling", |
|
"authors": [ |
|
{
"first": "Do Kook",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2331--2336", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2331-2336, Austin, Texas, USA, November. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Parsing the WSJ using CCG and Log-Linear models", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--110", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark and James R. Curran. 2004. Pars- ing the WSJ using CCG and Log-Linear models. In Proceedings of the 42nd Meeting of the Associa- tion for Computational Linguistics, pages 103-110, Barcelona, Spain, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Head-driven statistical models for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "589--637", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguis- tics, 29(4):589-637.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing: Deep neu- ral networks with multitask learning. In Proceedings of the 25th international conference on Machine learn- ing, pages 160-167, New York, NY, USA, July. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Transitionbased dependency parsing with stack long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Austin", |
|
"middle": [], |
|
"last": "Matthews", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "334--343", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 334-343, Beijing, China, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Recurrent neural network grammars", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "199--209", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 199-209, San Diego, California, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Finding structure in time", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Cognitive science", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "179--211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cog- nitive science, 14(2):179-211.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Recurrent nets that time and count", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Felix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Gers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 2000 International Joint Conference on Neural Networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--194", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix A. Gers and J\u00fcrgen Schmidhuber. 2000. Recur- rent nets that time and count. In Proceedings of the 2000 International Joint Conference on Neural Net- works, pages 189-194, Como, Italy, July. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Hybrid speech recognition with deep bidirectional LSTM", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdel-Rahman", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "273--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves, Navdeep Jaitly, and Abdel-rahman Mo- hamed. 2013. Hybrid speech recognition with deep bidirectional LSTM. In 2013 IEEE Workshop on Au- tomatic Speech Recognition and Understanding, pages 273-278, Olomouc, Czech, December. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "LSTM: A search space odyssey", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Greff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rupesh", |
|
"middle": [ |
|
"Kumar" |
|
], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Koutn\u00edk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Steunebrink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE Transactions on Neural Networks and Learning Systems", |
|
"volume": "", |
|
"issue": "99", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn\u00edk, Bas R. Steunebrink, and J\u00fcrgen Schmidhuber. 2017. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, PP(99):1- 11.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivo", |
|
"middle": [], |
|
"last": "Danihelka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2016 International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2016. Grid long short-term memory. In Proceed- ings of the 2016 International Conference on Learning Representations, San Juan, Puerto Rico, May. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 2015 International Conference on Learning Rep- resentations, San Diego, USA, May.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Accurate unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate unlexicalized parsing. In Proceedings of the 41st", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 423-430, Sapporo, Japan, July. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "What do recurrent neural network grammars learn about syntax?", |
|
"authors": [ |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingpeng", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1249--1258", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics, pages 1249-1258, Valen- cia, Spain, April. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The insideoutside recursive neural network model for dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Phong", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "729--739", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phong Le and Willem Zuidema. 2014. The inside- outside recursive neural network model for depen- dency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing, pages 729-739, Doha, Qatar, October. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Compositional distributional semantics with long short term memory", |
|
"authors": [ |
|
{ |
|
"first": "Phong", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term memory. In Proceedings of the Fourth Joint Conference on Lex- ical and Computational Semantics, pages 10-19, Den- ver, Colorado, USA, June. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Learning question classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 19th International Conference on Computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xin Li and Dan Roth. 2002. Learning question classi- fiers. In Proceedings of the 19th International Confer- ence on Computational linguistics, pages 1-7, Taipei, Taiwan, August. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "When are tree structures necessary for deep learning of representations?", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2304--2314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2304-2314, Lisbon, Por- tugal, September. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Gated graph sequence neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yujia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Tarlow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Brockschmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zemel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neu- ral networks. In Proceedings of the 2016 International Conference on Learning Representations, San Juan, Puerto Rico, May.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Recurrent neural network based language model", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Karafi\u00e1t", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukas", |
|
"middle": [], |
|
"last": "Burget", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Cernock\u1ef3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 11th Annual Conference of the International Speech Communication Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1045--1048", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Pro- ceedings of the 11th Annual Conference of the Inter- national Speech Communication Association, pages 1045-1048, Chiba, Japan, September.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Workshop Proceedings of the 2013 International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In Workshop Proceedings of the 2013 International Conference on Learning Rep- resentations, Scottsdale, Arizona, USA, May.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "End-to-End relation extraction using LSTMs on sequences and tree structures", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1105--1116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-End re- lation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics, pages 1105-1116, Berlin, Germany, August. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Global belief recursive neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Romain", |
|
"middle": [], |
|
"last": "Paulus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2888--2896", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Romain Paulus, Richard Socher, and Christopher D. Manning. 2014. Global belief recursive neural net- works. In Advances in Neural Information Process- ing Systems, pages 2888-2896, Montreal, Quebec, Canada, December.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "From symbolic to sub-symbolic information in question classification", |
|
"authors": [ |
|
{ |
|
"first": "Joao", |
|
"middle": [], |
|
"last": "Silva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu\u00edsa", |
|
"middle": [], |
|
"last": "Coheur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Artificial Intelligence Review", |
|
"volume": "35", |
|
"issue": "2", |
|
"pages": "137--154", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joao Silva, Lu\u00edsa Coheur, Ana Cristina Mendes, and An- dreas Wichert. 2011. From symbolic to sub-symbolic information in question classification. Artificial Intel- ligence Review, 35(2):137-154.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Parsing natural scenes and natural language with recursive neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cliff", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 26th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "129--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christo- pher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 26th International Conference on Machine Learning, pages 129-136, Bellevue, Wash- ington, USA, June-July.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Parsing with compositional vector grammars", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing with composi- tional vector grammars. In Proceedings of the 51st", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Annual Meeting of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "455--465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 455-465, Sofia, Bulgaria, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christo- pher Potts. 2013b. Recursive deep models for seman- tic compositionality over a sentiment treebank. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1631- 1642, Seattle, Washington, USA, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Dropout: A simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Se- quence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, Montreal, Quebec, Canada, Decem- ber.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Improved semantic representations from tree-structured long short-term memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Kai Sheng", |
|
"middle": [], |
|
"last": "Tai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1556--1566", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Man- ning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Pro- cessing, pages 1556-1566, Beijing, China, July. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Context-sensitive lexicon features for neural sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyang", |
|
"middle": [], |
|
"last": "Teng", |
|
"suffix": "" |
|
}, |
|
{
"first": "Duy-Tin",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1629--1638", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyang Teng, Vo Duy-Tin, and Yue Zhang. 2016. Context-sensitive lexicon features for neural sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1629-1638, Austin, Texas, USA, November. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Generative image modeling using spatial LSTMs", |
|
"authors": [ |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Theis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Bethge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1918--1926", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucas Theis and Matthias Bethge. 2015. Generative im- age modeling using spatial LSTMs. In Advances in Neural Information Processing Systems, pages 1918- 1926, Montreal, Quebec, Canada, December.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Grammar as a foreign language", |
|
"authors": [ |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terry", |
|
"middle": [], |
|
"last": "Koo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2755--2763", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oriol Vinyals, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015a. Gram- mar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755-2763, Montreal, Quebec, Canada, December.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Show and tell: A neural image caption generator", |
|
"authors": [ |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Toshev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dumitru", |
|
"middle": [], |
|
"last": "Erhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3156--3164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Du- mitru Erhan. 2015b. Show and tell: A neural image caption generator. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, pages 3156-3164, Boston, MA, USA, June.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Transitionbased parsing of the chinese treebank using a global discriminative model", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 11th International Conference on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "162--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Zhang and Stephen Clark. 2009. Transition- based parsing of the chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies, pages 162-171, Paris, France, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Syntactic processing using the generalized perceptron and beam search", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Computational linguistics", |
|
"volume": "37", |
|
"issue": "1", |
|
"pages": "105--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Zhang and Stephen Clark. 2011. Syntactic process- ing using the generalized perceptron and beam search. Computational linguistics, 37(1):105-151.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Top-down tree long short-term memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "310--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang, Liang Lu, and Mirella Lapata. 2016. Top-down tree long short-term memory networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 310-320, San Diego, California, USA, June. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "A C-LSTM neural network for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Chunting", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chonglin", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [ |
|
"C M" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis C. M. Lau. 2015. A C-LSTM neural network for text classification. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Long short-term memory over recursive structures", |
|
"authors": [ |
|
{ |
|
"first": "Xiaodan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Parinaz", |
|
"middle": [], |
|
"last": "Sobhani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyu", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32nd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1604--1612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the 32nd International Conference on Machine Learning, pages 1604-1612, Lille, France, July.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Topology of sequential and tree LSTMs. (a) nodes in sequential LSTM; (b) non-leaf nodes in tree LSTM; (c) leaf nodes in tree LSTM. Shaded nodes represent lexical input vectors. White nodes represent hidden state vectors.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "(b).", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Head-Lexicalized Constituent Tree.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"text": "Contrast between Zhu et al. (2015) (a) and this paper (b). Shaded nodes represent lexical input vectors. White nodes represent hidden state vectors.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"text": "Top-down tree LSTM. set of hidden state vectors in the top-down direction.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"text": "the road to hell is paved with good intentions\".", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF6": { |
|
"text": "Visualizing head words found automatically by our model.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "sequence LSTM by splitting the previous state vector h t\u22121 into a left child state vector h L t\u22121 and a right child state vector h R t\u22121 , and the previous cell vector c t\u22121 into a left child cell vector c L t\u22121 and a right child cell vector", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Test set accuracies for sentiment classification tasks.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>Model</td><td>Accuracy</td></tr><tr><td>Baseline BiLSTM Baseline BottomUp ConTree LSTM SVM</td><td>93.8 93.4</td></tr></table>", |
|
"type_str": "table", |
|
"text": "shows the question type classification results. Our final model gives better results compared", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td>Model</td><td colspan=\"3\">ConTree ConTree+Lex BiConTree</td></tr><tr><td colspan=\"2\">Time (s) 4,664</td><td>7,157</td><td>11,434</td></tr></table>", |
|
"type_str": "table", |
|
"text": "TREC question type classification results.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table><tr><td>to the BiLSTM model and the bottom-up ConTree</td></tr><tr><td>model, achieving comparable results to the state-of-</td></tr><tr><td>the-art SVM classifier with carefully designed fea-</td></tr><tr><td>tures.</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Averaged training time over 30 iterations.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Effect of model size.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF11": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Example output sentences.", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |