{
"paper_id": "N16-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:36:20.811968Z"
},
"title": "LSTM CCG Parsing",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We demonstrate that a state-of-the-art parser can be built using only a lexical tagging model and a deterministic grammar, with no explicit model of bi-lexical dependencies. Instead, all dependencies are implicitly encoded in an LSTM supertagger that assigns CCG lexical categories. The parser significantly outperforms all previously published CCG results, supports efficient and optimal A * decoding, and benefits substantially from semisupervised tri-training. We give a detailed analysis, demonstrating that the parser can recover long-range dependencies with high accuracy and that the semi-supervised learning enables significant accuracy gains. By running the LSTM on a GPU, we are able to parse over 2600 sentences per second while improving state-of-the-art accuracy by 1.1 F1 in domain and up to 4.5 F1 out of domain.",
"pdf_parse": {
"paper_id": "N16-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We demonstrate that a state-of-the-art parser can be built using only a lexical tagging model and a deterministic grammar, with no explicit model of bi-lexical dependencies. Instead, all dependencies are implicitly encoded in an LSTM supertagger that assigns CCG lexical categories. The parser significantly outperforms all previously published CCG results, supports efficient and optimal A * decoding, and benefits substantially from semisupervised tri-training. We give a detailed analysis, demonstrating that the parser can recover long-range dependencies with high accuracy and that the semi-supervised learning enables significant accuracy gains. By running the LSTM on a GPU, we are able to parse over 2600 sentences per second while improving state-of-the-art accuracy by 1.1 F1 in domain and up to 4.5 F1 out of domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Combinatory Categorial Grammar (CCG) is a strongly lexicalized formalism-the vast majority of attachment decisions during parsing are specified by the selection of lexical entries for words (see Figure 1 for examples). State-of-the-art parsers typically include a supertagging model, to select possible lexical categories, and a bi-lexical dependency model, to resolve the remaining parse attachment ambiguities. In this paper, we introduce a long shortterm memory (LSTM) CCG parsing model that has no explicit model of bi-lexical dependencies, but instead relies on a bi-directional recurrent neural network (RNN) supertagger to capture all long distance dependencies. This approach has a number of advantages: it is conceptually simple, allows for the reuse of existing optimal and efficient parsing algorithms, benefits significantly from semi-supervised learning, and is highly accurate both in and out of domain. The parser is publicly released. 1 Neural networks have shown strong performance in a range of NLP tasks; however they can break the dynamic programs for structured prediction problems, such as parsing, when vector embeddings are recursively computed for subparts of the output. Existing neural net parsers either (1) use greedy inference techniques including shift-reduce parsing (Henderson et al., 2013; Chen and Manning, 2014; Weiss et al., 2015; , constituency parse re-ranking (Socher et al., 2013) , and stringto-string transduction (Vinyals et al., 2015) , or (2) avoid recursive computations entirely (Durrett and Klein, 2015) . Our approach gives a simple alternative: we only train a model for tagging decisions, where we can easily use recurrent architectures such as LSTMs (Hochreiter and Schmidhuber, 1997) , and rely on the highly lexicalized nature of the CCG grammar to allow this tagger to specify nearly every aspect of the complete parse.",
"cite_spans": [
{
"start": 951,
"end": 952,
"text": "1",
"ref_id": null
},
{
"start": 1299,
"end": 1323,
"text": "(Henderson et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 1324,
"end": 1347,
"text": "Chen and Manning, 2014;",
"ref_id": "BIBREF9"
},
{
"start": 1348,
"end": 1367,
"text": "Weiss et al., 2015;",
"ref_id": "BIBREF33"
},
{
"start": 1400,
"end": 1421,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 1457,
"end": 1479,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 1527,
"end": 1552,
"text": "(Durrett and Klein, 2015)",
"ref_id": "BIBREF13"
},
{
"start": 1703,
"end": 1737,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 195,
"end": 201,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our LSTM supertagger is bi-directional and includes a softmax potential over tags for each word in the sentence. During training, we jointly optimize all LSTM parameters, including the word embeddings, to maximize the conditional likelihood of supertag sequences. For inference, we use a recently introduced A* CCG parsing algorithm (Lewis and Steedman, 2014a) , which efficiently searches for the Figure 1 : Four examples of prepositional phrase attachment in CCG. In the upper two parses, the attachment decision is determined by the choice of supertags. In the lower parses, the attachment is ambiguous given the supertags. In such cases, our parser deterministically attaches low (i.e. preferring the lower-right parse).",
"cite_spans": [
{
"start": 333,
"end": 360,
"text": "(Lewis and Steedman, 2014a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "highest probability sequence of tags that combine to produce a complete parse tree. Whenever there is parsing ambiguity not specified by the supertags, the model attaches low (see Figure 1 ). This approach is not only conceptually simple but also highly effective, as we demonstrate with extensive experiments. Because the A* algorithm is extremely efficient and the LSTMs can be run in parallel on GPUs, the end-to-end parser can process over 2600 sentences per second. This is more than three times the speed of any publicly available parser for any formalism. Apart from Hall et al. (2014), we are not aware of efficient algorithms for running other state-of-art-parsers on GPUs. The LSTM parameters also benefit from semi-supervised training, which we demonstrate by employing a recently introduced tri-training scheme (Weiss et al., 2015) . Finally, the recurrent nature of the LSTM allows for effective modelling of long distance dependencies, as we show empirically. Our approach significantly advances the state-of-the-art on benchmark datasets-improving accuracy by 1.1 F1 in domain and up to 4.5 F1 out of domain.",
"cite_spans": [
{
"start": 823,
"end": 843,
"text": "(Weiss et al., 2015)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Combinatory Categorial Grammar (CCG) Compared to a phrase-structure grammar, CCG contains a much smaller set of binary rules (we use 11), but a much larger set of lexical tags (we use 425). The binary rules are conjectured to be language-universal, and most language-specific information is lexicalized (Steedman, 2000) . The large tag set means that most (but not all) attachment decisions are determined by tagging decisions. Figure 1 shows how a prepositional phrase attachment decision can be encoded in the choice of tags.",
"cite_spans": [
{
"start": 303,
"end": 319,
"text": "(Steedman, 2000)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 428,
"end": 434,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
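To make the lexicalization concrete, the following is a minimal Python sketch (our own illustration, not part of the paper's system) of CCG's two most basic binary rules, forward and backward application, operating on category strings. With a supertag like (S\NP)/NP for a transitive verb, the attachment structure follows directly from the chosen categories.

```python
def split_category(cat):
    """Split a CCG category at its outermost slash.
    Returns (result, slash, argument), or None for atomic categories."""
    depth = 0
    for i, ch in enumerate(cat):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch in "/\\" and depth == 0:
            return cat[:i], ch, cat[i + 1:]
    return None

def strip_parens(cat):
    # Drop one layer of redundant outer parentheses, e.g. "(S\NP)" -> "S\NP".
    if cat.startswith("(") and cat.endswith(")") and split_category(cat) is None:
        return cat[1:-1]
    return cat

def combine(left, right):
    """Try forward application (X/Y Y -> X), then backward application (Y X\\Y -> X)."""
    l, r = split_category(left), split_category(right)
    if l and l[1] == "/" and strip_parens(l[2]) == strip_parens(right):
        return strip_parens(l[0])          # forward application
    if r and r[1] == "\\" and strip_parens(r[2]) == strip_parens(left):
        return strip_parens(r[0])          # backward application
    return None

# "saw" tagged (S\NP)/NP must first take an NP to its right, then an NP to its left.
vp = combine(r"(S\NP)/NP", "NP")   # -> "S\NP"
s = combine("NP", vp)              # -> "S"
print(vp, "|", s)
```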
{
"text": "The process of assigning CCG categories to words is called supertagging. All supertaggers used in practice are probabilistic, providing a distribution over possible tags for each word. Parsing models either use these scores directly (Auli and Lopez, 2011b) , or as a form of beam search (Clark and Curran, 2007) , typically in conjunction with models of the dependencies or derivation.",
"cite_spans": [
{
"start": 233,
"end": 256,
"text": "(Auli and Lopez, 2011b)",
"ref_id": "BIBREF4"
},
{
"start": 287,
"end": 311,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Supertag-Factored A * CCG Parsing Lewis and Steedman (2014a) introduced supertag-factored CCG parsers, in which the score for a parse is simply the sum of the scores of its supertags. The parser takes in a distribution over supertags for each word, and outputs the highest scoring parse-subject to the hard constraint that the parse only uses standard CCG combinators (resolving any remaining ambiguity by attaching low). One advantage of the supertag-factored model is that it allows a simple A * parsing algorithm, which provably finds the highest scoring supertag sequence that can be combined to construct a complete parse.",
"cite_spans": [
{
"start": 34,
"end": 60,
"text": "Lewis and Steedman (2014a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In A * parsing, partial parses y i,j of span i . . . j are maintained in a sorted agenda and added to the chart in order of their cost, which is the sum of their Viterbi inside score g(y i,j ) and an upper bound on their Viterbi outside score h(y i,j ). When y i,j is added to the chart, the agenda is updated with any new partial parses that can be created by combining y i,j with existing chart items (Algorithm 1). If h is a monotonic upper bound on the outside score, the first chart entry for a span with a given category is guaranteed to be optimal-all other possible completions of the competing partial parses provably have lower scores, due to the outside score bounds. There is no guarantee this certificate of optimality is achieved efficiently for parses of the whole sentence, and in the worst case the algorithm could fill the entire parse chart. However, as we will see later, A* parsing is very efficient in practice for the models we present in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In the supertag-factored model, g and h are computed as follows, where g(y k ) is the score for word k having tag y k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "g(y i,j ) = j k=i g(y k ) (1) h(y i,j ) = i\u22121 k=1 max y k g(y k ) + N k=j+1 max y k g(y k ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "where Eq. 1 follows from the definition of the supertag factored model and Eq. 2 combines this definition with the fact that the max score over all supertags for a word is an upperbound on the score for the actual supertag used in the best parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
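A minimal sketch (our own illustration, using numpy log scores in a words-by-tags matrix with hypothetical values) of Eqs. 1 and 2: the inside score of a span sums its chosen tag scores, and the outside upper bound sums the best possible tag score for every word outside the span.

```python
import numpy as np

# scores[k, t]: log-probability of tag t for word k (hypothetical values).
scores = np.log(np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
]))
N = scores.shape[0]
best = scores.max(axis=1)          # max_{y_k} g(y_k) for each word k

def inside(tags, i, j):
    """Eq. 1: g(y_{i,j}) = sum of the chosen tag scores over words i..j (0-indexed, inclusive)."""
    return sum(scores[k, tags[k]] for k in range(i, j + 1))

def outside_upper_bound(i, j):
    """Eq. 2: h(y_{i,j}) = best achievable tag score for every word outside i..j."""
    return best[:i].sum() + best[j + 1:].sum()

tags = [0, 1, 0, 2]                # one supertag choice per word
print(inside(tags, 1, 2) + outside_upper_bound(1, 2))   # A* priority of the span 1..2
```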
{
"text": "Supertagging is almost parsing (Bangalore and Joshi, 1999)-consequently the task is very chal-Algorithm 1 Agenda-based parsing algorithm Definitions x 1...N is the input words, and y variables denote scored partial parses. TAG(x 1...N ) returns a set of scored pre-terminals for every word. ADD(C, y) adds partial parse y to chart C. RULES(C, y) returns the set of scored partial parses that can be created by combining y with existing entries in C. The agenda A is ordered as described in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
{
"text": "1: function PARSE(x_{1...N})\n2:   A \u2190 \u2205 (empty agenda)\n3:   for y \u2208 TAG(x_{1...N}) do\n4:     PUSH(A, y)\n5:   C \u2190 \u2205 (empty chart)\n6:   while C_{1,N} = \u2205 \u2227 A \u2260 \u2205 do\n7:     y \u2190 EXTRACT_MAX(A)\n8:     if y \u2209 C then\n9:       ADD(C, y)\n10:      for y' \u2208 RULES(C, y) do\n11:        INSERT(A, y')\n12:  return C_{1,N}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
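The following is a compact, self-contained Python sketch of Algorithm 1 for the supertag-factored model (our own toy illustration, not the released parser): agenda items are prioritized by their inside score plus the outside upper bound of Eq. 2, and a stub combine function implementing only application on flat toy category strings stands in for RULES.

```python
import heapq
import math
from collections import defaultdict

def a_star_parse(words, tag_scores, combine):
    """Agenda-based A* parsing for a supertag-factored model (Algorithm 1, toy sketch).
    tag_scores: {word_index: [(category, log_prob), ...]}; combine(l, r) -> category or None."""
    N = len(words)
    best = [max(p for _, p in tag_scores[k]) for k in range(N)]
    prefix = [0.0] * (N + 1)                          # prefix sums of per-word best tag scores
    for k in range(N):
        prefix[k + 1] = prefix[k] + best[k]

    def priority(i, j, inside):                       # inside score + outside upper bound (Eq. 2)
        return inside + prefix[i] + (prefix[N] - prefix[j + 1])

    agenda, chart = [], defaultdict(dict)             # chart[(i, j)][category] = inside score
    for k in range(N):                                # initial agenda: one item per supertag
        for cat, p in tag_scores[k]:
            heapq.heappush(agenda, (-priority(k, k, p), k, k, cat, p))

    while agenda and not chart[(0, N - 1)]:
        _, i, j, cat, inside = heapq.heappop(agenda)
        if cat in chart[(i, j)]:
            continue                                  # an optimal entry is already in the chart
        chart[(i, j)][cat] = inside
        for (a, b), cell in list(chart.items()):      # combine with adjacent chart items (RULES)
            if b == i - 1:                            # cell immediately to the left
                combos = [(a, j, lcat, cat, s) for lcat, s in cell.items()]
            elif a == j + 1:                          # cell immediately to the right
                combos = [(i, b, cat, rcat, s) for rcat, s in cell.items()]
            else:
                continue
            for lo, hi, lcat, rcat, other in combos:
                parent = combine(lcat, rcat)
                if parent is not None:
                    total = inside + other
                    heapq.heappush(agenda, (-priority(lo, hi, total), lo, hi, parent, total))
    return chart[(0, N - 1)]

def toy_combine(l, r):                                # stub RULES: application on flat toy categories
    if "/" in l and l.split("/", 1)[1] == r:
        return l.split("/", 1)[0]                     # forward application X/Y Y -> X
    if "\\" in r and r.split("\\", 1)[1] == l:
        return r.split("\\", 1)[0]                    # backward application Y X\Y -> X
    return None

tag_scores = {0: [("NP", math.log(0.9))],
              1: [("S\\NP/NP", math.log(0.6)), ("S\\NP", math.log(0.4))],  # flat stand-in for (S\NP)/NP
              2: [("NP", math.log(0.8))]}
print(a_star_parse(["John", "saw", "Mary"], tag_scores, toy_combine))
```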
{
"text": "lenging, with hundreds of tags, and the correct assignment often depending on long-range dependencies. For example, in The doctor sent for the patient arrived, the category for sent depends on the final word. Recent work has made dramatic progress, using feed-forward neural networks (Lewis and Steedman, 2014b) and RNNs (Xu et al., 2015) . We make several extensions to previous work on supertagging. Firstly, we use bi-directional models, to capture both previous and subsequent sentence context into supertagging decisions. Secondly, we use LSTMs, rather than RNNs. Many tagging decisions rely on long-range context, and RNNs typically struggle to account for sequences of longer than a few words (Hochreiter and Schmidhuber, 1997) . Finally, we use a deep architecture, to allow the modelling of complex interactions in the context.",
"cite_spans": [
{
"start": 321,
"end": 338,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF35"
},
{
"start": 700,
"end": 734,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
{
"text": "Our supertagging model is summarized in Figure 2 . Each word is mapped to an embedding vector. This vector is a concatenation of an embedding for the word (lower-cased), and embeddings for features of the word (we use 1 to 4 character prefixes and suffixes). The embedding vector is used as input to two stacked LSTMs (with depth 2), one processing the sentence left-to-right, and the other right-to-left.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 49,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
{
"text": "The outputs from the LSTMs are projected into a further hidden layer, a bias is added, and a RELU non-linearity is applied. This layer gives a contextdependent representation of the word that is fed into a softmax over supertags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
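As a rough architectural sketch (our own PyTorch approximation; the released system used TensorFlow, the paper stacks separate left-to-right and right-to-left LSTMs rather than a single bidirectional module, and the vocabulary sizes here are placeholders):

```python
import torch
import torch.nn as nn

class LSTMSupertagger(nn.Module):
    """Stacked bi-directional LSTM supertagger: word + affix embeddings,
    a 2-layer bi-LSTM, then a ReLU projection feeding a softmax over categories."""
    def __init__(self, n_words, n_affixes, n_tags=425,
                 word_dim=50, affix_dim=32, state=128, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.affix_emb = nn.Embedding(n_affixes, affix_dim)
        # 8 affix features per word: 1- to 4-character prefixes and suffixes.
        in_dim = word_dim + 8 * affix_dim
        self.lstm = nn.LSTM(in_dim, state, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * state, hidden)
        self.out = nn.Linear(hidden, n_tags)

    def forward(self, words, affixes):
        # words: (batch, seq); affixes: (batch, seq, 8)
        x = torch.cat([self.word_emb(words),
                       self.affix_emb(affixes).flatten(2)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(torch.relu(self.proj(h)))   # logits over supertags per word

tagger = LSTMSupertagger(n_words=50000, n_affixes=20000)
logits = tagger(torch.randint(0, 50000, (2, 7)), torch.randint(0, 20000, (2, 7, 8)))
print(logits.shape)   # torch.Size([2, 7, 425])
```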
{
"text": "We use a variant on the standard LSTM with coupled 'input' and 'forget' gates, and peephole connections. Each LSTM cell at position t takes three inputs: a cell state vector c t\u22121 and hidden state vector h t\u22121 from the cell at position t \u2212 1, and x t from the layer below. It outputs h t to the layer above, and c t and h t to the cell at t + 1. c t and h t are computed as follows, where \u03c3 is the component-wise logistic sigmoid, and \u2022 is the component-wise product:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i t =\u03c3(W i [c t\u22121 , h t\u22121 , x t ] + b i ) (3) c t = tanh(W c [h t\u22121 , x t ] + bc) (4) o t =\u03c3(W o [c t , h t\u22121 , x t ] + b o ) (5) c t =i t \u2022c t + (1 \u2212 i t )c t\u22121 (6) h t =o t \u2022 tanh(c t )",
"eq_num": "(7)"
}
],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
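A literal numpy transcription of Eqs. 3-7 (our own sketch; the weight shapes and toy sizes are illustrative), making the coupled input/forget gate and the peephole connections explicit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(c_prev, h_prev, x, W_i, b_i, W_c, b_c, W_o, b_o):
    """One step of the variant LSTM of Eqs. 3-7: the forget gate is 1 - i_t
    (coupled gates) and the gates peek at the cell state (peephole connections)."""
    i = sigmoid(W_i @ np.concatenate([c_prev, h_prev, x]) + b_i)     # Eq. 3
    c_tilde = np.tanh(W_c @ np.concatenate([h_prev, x]) + b_c)       # Eq. 4
    c = i * c_tilde + (1.0 - i) * c_prev                             # Eq. 6
    o = sigmoid(W_o @ np.concatenate([c, h_prev, x]) + b_o)          # Eq. 5 (peephole on c_t)
    h = o * np.tanh(c)                                               # Eq. 7
    return c, h

d, dx = 4, 3                                  # toy state and input sizes
rng = np.random.default_rng(0)
W_i, W_o = rng.normal(size=(d, 2 * d + dx)), rng.normal(size=(d, 2 * d + dx))
W_c = rng.normal(size=(d, d + dx))
b = np.zeros(d)
c, h = lstm_cell(np.zeros(d), np.zeros(d), rng.normal(size=dx), W_i, b, W_c, b, W_o, b)
print(c.shape, h.shape)
```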
{
"text": "We train the model using stochastic gradient descent, with a minibatch size of 1, a learning rate of 0.01, and using momentum with \u00b5 = 0.7. We then fine-tune models using a larger minibatch size of 32. Gradients whose L 2 norm exceeds 5 are clipped. Training was run for 30 epochs, shuffling the order of sentences after each epoch, and we used the model parameters with the highest development supertagging accuracy. The input layer uses dropout with a rate of 0.5. All trainable parameters have L 2 regularization of \u039b = 10 \u22126 . Word embedding are initialized using 50-dimensional pre-trained values from Turian et al. (2010) . For prefix and suffix embeddings, we use randomly initialized 32dimensional vectors-features occurring less than 3 times are replaced with an 'unknown' embedding. We add special start and end tokens to each sentence, with trainable parameters. The LSTM state size is 128 and the RELU layer has a size of 64.",
"cite_spans": [
{
"start": 607,
"end": 627,
"text": "Turian et al. (2010)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM CCG Supertagging Model",
"sec_num": "3"
},
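A hedged PyTorch sketch of the optimization recipe above (hyperparameters taken from the paragraph; the model, the data iterator, and the use of weight decay to stand in for L2 regularization are our own assumptions, and the input-layer dropout is assumed to live inside the model):

```python
import torch

def train(model, batches, epochs=30, lr=0.01, momentum=0.7, clip=5.0, l2=1e-6):
    """SGD with momentum, L2 regularization, and gradient-norm clipping.
    batches yields (words, affixes, gold_tags); minibatch size 1 (32 for fine-tuning)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum, weight_decay=l2)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for words, affixes, gold in batches:           # shuffle sentence order each epoch
            opt.zero_grad()
            logits = model(words, affixes)             # (batch, seq, n_tags)
            loss = loss_fn(logits.flatten(0, 1), gold.flatten())
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)   # clip L2 norm at 5
            opt.step()
```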
{
"text": "Our experiments focus on two parsing models: Supertag-Factored We use the supertagging model described in Section 3 to build a supertagfactored parser, closely following the approach described in Section 2. We also add a penalty of 0.1 (tuned on development data) for every time a unary rule is applied in a parse. The attach-low heuristic is implemented by adding a small penalty of \u2212 d at every binary rule instantiation, where d is the absolute distance between the heads of the left and right children, and is a small constant. We increase the penalty to 10 for clitics, to encourage these to attach locally. Because these penalties are \u2264 0, they do not affect the A* upper bound calculations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
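A small sketch of the non-positive rule penalties described above (our own reading of the paragraph: the dropped symbol is taken to be a small constant ε, its value is not given, and the clitic treatment is one possible interpretation of "increase the penalty to 10"). Because every penalty is at most 0, the A* outside bound of Eq. 2 remains admissible.

```python
UNARY_PENALTY = 0.1       # tuned on development data (Section 4)
EPSILON = 1e-5            # hypothetical value; the text only calls it "a small constant"
CLITIC_EPSILON = 10.0     # one reading of "increase the penalty to 10 for clitics"

def binary_rule_penalty(left_head, right_head, is_clitic=False):
    """Attach-low bias: a penalty of -eps * d, where d is the distance between
    the heads of the two children being combined (non-positive by construction)."""
    eps = CLITIC_EPSILON if is_clitic else EPSILON
    return -eps * abs(right_head - left_head)

def unary_rule_penalty():
    return -UNARY_PENALTY

print(binary_rule_penalty(3, 4), binary_rule_penalty(3, 9))  # nearer attachment penalized less
```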
{
"text": "Dependencies We also train a model with dependency features, to investigate how much they improve accuracy beyond the supertag-factored model. We adapt a joint CCG and SRL model (Lewis et al., 2015) to CCGbank parsing, by assigning every CCGbank dependency a role based on its argument number (i.e., the first argument of every category has role ARG0). A global log-linear model is trained to maximize the marginal likelihood of the gold dependencies. We use the same features and hyperparameters as Lewis et al. (2015) , except that we do not use the supertagger score feature (to separate the effect of the dependencies features from the supertagger). We choose this model because it has an A * parsing algorithm, meaning that we do not need to use aggressive beam search.",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "(Lewis et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 500,
"end": 519,
"text": "Lewis et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "A number of papers have shown that strong parsers can be improved by exploiting text without goldstandard annotations. Recent work suggests tritraining, in which the output of two parsers is intersected to create training data for a third parser, is highly effective (Weiss et al., 2015) .",
"cite_spans": [
{
"start": 267,
"end": 287,
"text": "(Weiss et al., 2015)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning",
"sec_num": "5"
},
{
"text": "We perform the first application of tri-training to a lexicalized formalism. Following Weiss et al., we parse the corpus of Chelba et al. (2013) with a shiftreduce parser and a chart-based model. We use the shift-reduce parser from Ambati et al. (2016) and our dependency model (without using a supertagger feature, to limit the correlation with our tagging model). On development sentences where the parsers produce the same supertags (40%), supertagging accuracy is 98.0%. This subset is considerably easier than general text-our CCGbank-trained supertagger is 97.4% accurate on this data-but tritraining still provides useful additional training data.",
"cite_spans": [
{
"start": 232,
"end": 252,
"text": "Ambati et al. (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning",
"sec_num": "5"
},
{
"text": "In total, we include 43 million words of text that the parsers annotate with the same supertags and 15 copies of the gold CCGbank training data. Our experiments show that tri-training improves both supertagging and parsing accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning",
"sec_num": "5"
},
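A minimal sketch of the tri-training selection step described above (our own illustration; parse_a and parse_b stand for the shift-reduce and chart-based parsers and are assumed to return one supertag sequence per sentence):

```python
def tri_training_data(sentences, parse_a, parse_b):
    """Keep a sentence as new training data only if both parsers
    assign it exactly the same supertag sequence."""
    selected = []
    for sent in sentences:
        tags_a, tags_b = parse_a(sent), parse_b(sent)
        if tags_a == tags_b:
            selected.append((sent, tags_a))
    return selected

# Hypothetical usage: the agreed-upon sentences (43M words in the paper) are combined
# with 15 copies of the gold CCGbank training data to train the LSTM supertagger.
# train_data = 15 * gold_ccgbank + tri_training_data(unlabeled, shift_reduce, chart_model)
```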
{
"text": "Our parser makes an unusual trade-off, by combining a complex tagging model with a deterministic parsing model. The A * parsing algorithm is extremely efficient, and the overall time required to process a sentence is dominated by the supertagger. GPUs can improve performance over CPUs by computing many vector operations in parallel. There are two major obstacles to using GPUs for parsing. First, most models use sparse rather than dense features, which are difficult to compute efficiently on GPUs. The most successful implementation we are aware of exploits the fact that the Berkeley parser is unlexicalized to run parsing operations in parallel (Hall et al., 2014) . Second, most neural models have features that depend on the current parse or stack state (e.g. Chen and Manning (2014) ). This makes it difficult to exploit the parallelism of GPUs, because these data structures are typically built incrementally on CPU. It may be possible to write GPU-specific code that maintains the entire parse state on GPU, but we are not aware of any such implementations.",
"cite_spans": [
{
"start": 651,
"end": 670,
"text": "(Hall et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 768,
"end": 791,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Parsing",
"sec_num": "6"
},
{
"text": "In contrast, our supertagger only uses matrix operations, and does not take any parse state as inputmeaning it is straightforward to run on a GPU. To exploit the parallelism of GPUs, we process thousands of sentences simultaneously-improving parsing efficiency by an order-of-magnitude over CPU. A major advantage of our model is that it allows all of the computationally intensive decisions to occur on GPUs. Unlike existing GPU parsers, the LSTM can be run with generic library code. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Parsing",
"sec_num": "6"
},
{
"text": "We trained our parser on Sections 02-21 of CCGbank (Hockenmaier and Steedman, 2007) , using Section 00 for development, and Section 23 for test. Our experiments use a supertagger beam of 10 \u22124which does not affect the final scores, but reduces overheads such as building the initial agenda. 2 We use TensorFlow (Abadi et al., 2015) .",
"cite_spans": [
{
"start": 51,
"end": 83,
"text": "(Hockenmaier and Steedman, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 291,
"end": 292,
"text": "2",
"ref_id": null
},
{
"start": 311,
"end": 331,
"text": "(Abadi et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
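A sketch of the supertagger beam (our own illustration; we assume the common convention of pruning categories whose probability falls below β times that of the word's best category, with β = 10^-4):

```python
def prune_supertags(tag_probs, beta=1e-4):
    """tag_probs: {category: probability} for one word.
    Keep categories within a factor beta of the best one (assumed convention)."""
    best = max(tag_probs.values())
    return {cat: p for cat, p in tag_probs.items() if p >= beta * best}

print(prune_supertags({"NP": 0.90, "N": 0.0999, "S/S": 5e-6}))
# -> {'NP': 0.9, 'N': 0.0999}; the negligible category is dropped before building the agenda
```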
{
"text": "Dev Where results are available, we compare our work with the following models: EASYCCG, which has the same parsing model as our parser, but uses a feed-forward neural-network supertagger (NN); the C&C parser (Clark and Curran, 2007) , and C&C+RNN (Xu et al., 2015) , which is the C&C parser with an RNN supertagger. All results are for 100% coverage of the test data.",
"cite_spans": [
{
"start": 209,
"end": 233,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 248,
"end": 265,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We refer to the models described in Section 4 as LSTM and DEPENDENCIES respectively. We also report the performance of LSTM+DEPENDENCIES, which combines the model scores (weighting the LSTM score by 1.8, tuned on development data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The most direct measure of the effectiveness of our LSTM and tri-training is on the supertagging task. Results are shown in Table 1 . The improvement of our deep LSTM over the RNN model is greater than the improvement of the RNN over C&C model. Further gains follow from tri-training, improving the state-of-the-art by 1.7%.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Supertagging Results",
"sec_num": "7.2"
},
{
"text": "Parsing results are shown in Figure 2 . Surprisingly, our CCGBank-trained LSTM outperforms any previous approach. 3 The ensemble of the LSTM ",
"cite_spans": [
{
"start": 114,
"end": 115,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "English Parsing Results",
"sec_num": "7.3"
},
{
"text": "We also evaluate on two out-of-domain datasets used by Rimell and Clark (2008) , but did no development on this data. In both cases, we use Rimell and Clark's scripts for converting CCG parses to the target dependency representations. The datasets are: QUESTIONS 500 questions from TREC (Rimell and Clark, 2008) . Questions frequently contain very long range dependencies, providing an interesting test of the LSTM supertagger's ability to capture unbounded dependencies. We follow Rimell and Clark by re-training the supertagger on the concatenation of the CCGbank training data and 10 copies of the QUESTIONS training data.",
"cite_spans": [
{
"start": 55,
"end": 78,
"text": "Rimell and Clark (2008)",
"ref_id": "BIBREF27"
},
{
"start": 287,
"end": 311,
"text": "(Rimell and Clark, 2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-domain Experiments",
"sec_num": "7.4"
},
{
"text": "BIOINFER 500 sentences from biomedical abstracts. This dataset tests the parser's robustness to a large amount of unseen vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-domain Experiments",
"sec_num": "7.4"
},
{
"text": "Results are shown in Table 3 . Our LSTM parser outperforms existing work on question parsing, showing that it can successfully model the longrange dependencies found in questions. Adding dependency features yields only a small improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Out-of-domain Experiments",
"sec_num": "7.4"
},
{
"text": "On the BIOINFER corpus, our tri-trained LSTM parser is 4.5 F1 better than the previous state-ofthe-art. Dependency features appear to be much (2011b)'s joint parsing and supertagging model, due to differences in the experimental setup. These models are 0.3 and 1.5 F1 more accurate than the C&C baseline respectively, which is well within the margin of improvement obtained by our model. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-domain Experiments",
"sec_num": "7.4"
},
{
"text": "In contrast to standard parsing algorithms, the efficiency of our model depends directly on the accuracy of the supertagger in guiding the search. We therefore measure the efficiency empirically. Results are shown in Table 4 . 5 Our parser runs more slowly than EASYCCG on CPU, due to the more complex tagging model (but is 4.8 F1 more accurate). Adding dependencies substantially reduces efficiency, due to calculating sparse features. Without dependencies, the run time is dominated by the LSTM supertagger. Running the supertagger on a GPU reduces parsing times dramaticallyoutperforming SpaCy, the fastest publicly available parser (Choi et al., 2015) . Roughly half the parsing time is spent on GPU supertagging, and half on CPU parsing. To better exploit batching in the GPU, our implementation dynamically buckets sentences by length (bins of width 10), and tags batches when the bucket size reaches 3072 (the number of threads on our GPU). We are not aware of any GPU implementations of shift-reduce parsers or lexicalized chart parsers, so it is unclear if most other state-ofthe-art parsers can be adapted to exploit GPUs. ",
"cite_spans": [
{
"start": 227,
"end": 228,
"text": "5",
"ref_id": null
},
{
"start": 636,
"end": 655,
"text": "(Choi et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Efficiency Experiments",
"sec_num": "7.5"
},
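A simplified sketch of the batching scheme described above (our own illustration): sentences are grouped into length buckets of width 10 and a bucket is dispatched to the GPU supertagger once it holds 3072 sentences, with any remainder flushed at the end.

```python
from collections import defaultdict

def batched_tagging(sentences, tag_batch, bucket_width=10, batch_size=3072):
    """tag_batch(list_of_sentences) runs the LSTM supertagger on one padded batch
    of similar-length sentences and returns their tag distributions (stubbed here)."""
    buckets, results = defaultdict(list), []
    for sent in sentences:
        bucket = len(sent) // bucket_width
        buckets[bucket].append(sent)
        if len(buckets[bucket]) == batch_size:        # full bucket: tag it on the GPU
            results.extend(tag_batch(buckets.pop(bucket)))
    for bucket in list(buckets):                      # flush partially filled buckets
        results.extend(tag_batch(buckets.pop(bucket)))
    return results

# Hypothetical usage with a stub tagger that just records the batch sizes it receives:
sizes = []
out = batched_tagging([["w"] * n for n in range(1, 50)],
                      lambda batch: sizes.append(len(batch)) or batch, batch_size=4)
print(sizes)
```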
{
"text": "We also measure performance while removing different aspects of the full parsing model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablations",
"sec_num": "8"
},
{
"text": "Numerous variations are possible on our supertagging architecture. Apart from tri-training, the major differences from the previous state-of-the-art (Xu et al., 2015) are that we use LSTMs rather than RNNs, and that we use bidirectional networks rather than only a forward-directional RNN. These modifications lead to a 1.3% improvement in accuracy. Table 5 shows performance while ablating these changes; they all contribute substantially to tagging accuracy. Table 6 shows several classes of words where the LSTM model outperforms the baseline neural network that uses only local context (NN). The performance increase on unseen words is likely due to the fact that the LSTM can model more context to determine the category for a word. Unsurprisingly, this leads to a large improvement in accuracy for words taking non-local arguments. Finally, we see a large improvement in prepositional phrase attachment. This improvement is likely to be due to the deep architecture, which can better take into account the interaction between the preposition, its argument Table 7 : Effect of simulating weaker grammars, by allowing the specified atomic categories to unify. * allows all atomic categories to unify, except conjunctions and punctuation. Results are on development sentences of length \u226440.",
"cite_spans": [
{
"start": 149,
"end": 166,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 350,
"end": 358,
"text": "Table 5",
"ref_id": null
},
{
"start": 462,
"end": 469,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 1063,
"end": 1070,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supertagger Model Architecture",
"sec_num": "8.1"
},
{
"text": "noun phrase, and its nominal or verbal attachment. Table 6 also shows cases where the semi-supervised models perform better. Accuracy improves on unseen words-showing that tri-training can be a more effective way of generalizing to unseen words than pre-trained word embeddings alone. We also see improvement in accuracy on wh-words, which we attribute to the training data containing more examples of rare categories used for wh-words in piedpiping and similar constructions. One case where performance remains weak for all models is on unseen usages-where words occur in the CCGbank training data, but not with the category required in the test data. The improvement from tri-training is limited, likely due to the weakness of the baseline parses, and new techniques will be required to correct such errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Supertagger Model Architecture",
"sec_num": "8.1"
},
{
"text": "A subtle but crucial point is that our method depends on the strictness of the CCGbank grammar to exclude ungrammatical derivations. Because there is no dependency model, we rely on the deterministic CCG grammar as a hard constraint. There is a trade-off between restrictive grammars which may be brittle on noisy text, and weaker grammars that may overgenerate ungrammatical sentences. We measure this trade-off by testing weaker grammars, which merge categories that are normally distinct. For example, if we merge PP and NP , then an S \\NP can take either a PP or NP argument. Table 7 shows that relaxing the grammar significantly hurts performance; the deterministic constraints are crucial to training a high quality LSTM CCG parser. With a very relaxed grammar in which all atoms can unify, dependencies features help compensate for the weakened grammar. Future work should explore further strengthening the grammar--e.g. marking plurality on NP s to enforce plural agreement, or using slash-modalities to prevent over-generation arising from composition (Baldridge and Kruijff, 2003) .",
"cite_spans": [
{
"start": 1061,
"end": 1090,
"text": "(Baldridge and Kruijff, 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 580,
"end": 587,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of Grammar",
"sec_num": "8.3"
},
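A tiny sketch of the relaxation experiment (our own illustration): argument checking treats the listed atomic categories as interchangeable, so, for example, an S\NP can also take a PP argument when NP and PP are merged.

```python
def atoms_match(a, b, merged=frozenset()):
    """Atomic categories match if they are equal, or if both are in the merged set
    (the '*' setting in Table 7 merges all atoms except conjunctions and punctuation)."""
    return a == b or (a in merged and b in merged)

# Strict grammar: an S\NP cannot take a PP argument.
print(atoms_match("NP", "PP"))                          # False
# Relaxed grammar: merging {NP, PP} over-generates and hurts accuracy (Table 7).
print(atoms_match("NP", "PP", merged={"NP", "PP"}))     # True
```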
{
"text": "Perhaps our most surprising result is that high accuracy can be achieved with a rule-based grammar and no dependency features. We performed several experiments to verify whether the model can capture long-range dependencies, and the extent to which dependency features are required to further improve parsing performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Dependency Features",
"sec_num": "8.4"
},
{
"text": "Supertagging accuracy is still the bottleneck A natural question is whether further improvements to our model will require a more powerful parsing model (such as adding dependency or derivation features), or if future work should focus on the supertagger. We found that on sentences where all the supertags are correct in the final parse (51%), the F1 is very high: 97.7. On parses containing supertag errors, the F1 drops to just 80.3. This result suggests that parsing accuracy can be significantly increased by improving the supertagger, and that very high performance could be attained only using a supertagging model. 'Attach low' heuristic is surprisingly effective Given a sequence of supertags, our grammar is still ambiguous. As explained in Section 2, we resolve these ambiguities by attaching low. To investigate the accuracy of this heuristic, we performed oracle decoding given the highest scoring supertagsand found that F1 improved by 1.3, showing that there are limits to what can be achieved with a rulebased grammar. In contrast, an 'attach high' heuristic scores 5.2 F1 less than attaching low, suggesting that these decisions are reasonably frequent, but that attaching low is much more common.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Dependency Features",
"sec_num": "8.4"
},
{
"text": "Would adding a dependency model help here? We consider several dependencies whose attachment is often ambiguous given the supertags. Results are shown in Table 8 . Any improvements from the dependency model are small-it is difficult to improve Table 8 : Per-relation accuracy for several dependencies whose attachments are often ambiguous given the supertags. Results are only on sentences where the parsers assign the correct supertags. Supertag-factored model is accurate on longrange dependencies One motivation for CCG parsing is to recover long-range dependencies. While we do not explicitly model these dependencies, they can still be extracted from the parse. Instead, we rely on the LSTM supertagger to implicitly model the dependencies-a task that becomes more challenging with longer dependencies. We investigate the accuracy of our parser for dependencies of different lengths. Figure 3 shows that adding dependencies features does not improve the recovery of long-range dependencies over the LSTM alone; the LSTM accurately models long-range dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 8",
"ref_id": null
},
{
"start": 244,
"end": 251,
"text": "Table 8",
"ref_id": null
},
{
"start": 889,
"end": 897,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effect of Dependency Features",
"sec_num": "8.4"
},
{
"text": "Recent work has applied neural networks to parsing, mostly using neural classifiers in shift-reduce parsers (Henderson et al., 2013; Chen and Manning, 2014; Weiss et al., 2015) . Unlike our approach, none of these report both state-ofthe-art speed and accuracy. Vinyals et al. (2015) in-stead propose embedding entire sentences in a vector space, and then generating parse trees as strings. Our model achieves state-of-the-art accuracy with a non-ensemble model trained on the standard training data, whereas their model requires ensembles or extra supervision to match the state of the art.",
"cite_spans": [
{
"start": 108,
"end": 132,
"text": "(Henderson et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 133,
"end": 156,
"text": "Chen and Manning, 2014;",
"ref_id": "BIBREF9"
},
{
"start": 157,
"end": 176,
"text": "Weiss et al., 2015)",
"ref_id": "BIBREF33"
},
{
"start": 262,
"end": 283,
"text": "Vinyals et al. (2015)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "9"
},
{
"text": "Most work on CCG parsing has either used CKY chart parsing (Hockenmaier, 2003; Clark and Curran, 2007; Fowler and Penn, 2010; Auli and Lopez, 2011a) or shift-reduce algorithms (Zhang and Clark, 2011; Xu et al., 2014; Ambati et al., 2015) . These methods rely on beam-search to cope with the huge space of possible CCG parses. Instead, we use Lewis and Steedman (2014a) 's A * algorithm. By using a semi-supervised LSTM supertagger, we improved over Lewis and Steedman's parser by 4.8 F1. CCG supertagging was first attempted with maximum-entropy Markov models (Clark, 2002) in practice, the combination of sparse features and a large tag set makes such models brittle. Lewis and Steedman (2014b) applied feed-forward neural networks to supertagging, motivated by using pretrained work embeddings to reduce sparsity. Xu et al. (2015) showed further improvements by using RNNs to condition on non-local context. Concurrently with this work, Xu et al. (2016) explored bidirectional RNN models, and Vaswani et al. (2016) use bidirectional LSTMs with a different training procedure.",
"cite_spans": [
{
"start": 59,
"end": 78,
"text": "(Hockenmaier, 2003;",
"ref_id": "BIBREF21"
},
{
"start": 79,
"end": 102,
"text": "Clark and Curran, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 103,
"end": 125,
"text": "Fowler and Penn, 2010;",
"ref_id": "BIBREF15"
},
{
"start": 126,
"end": 148,
"text": "Auli and Lopez, 2011a)",
"ref_id": "BIBREF3"
},
{
"start": 176,
"end": 199,
"text": "(Zhang and Clark, 2011;",
"ref_id": "BIBREF38"
},
{
"start": 200,
"end": 216,
"text": "Xu et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 217,
"end": 237,
"text": "Ambati et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 342,
"end": 368,
"text": "Lewis and Steedman (2014a)",
"ref_id": null
},
{
"start": 449,
"end": 487,
"text": "Lewis and Steedman's parser by 4.8 F1.",
"ref_id": null
},
{
"start": 560,
"end": 573,
"text": "(Clark, 2002)",
"ref_id": "BIBREF12"
},
{
"start": 669,
"end": 695,
"text": "Lewis and Steedman (2014b)",
"ref_id": "BIBREF23"
},
{
"start": 816,
"end": 832,
"text": "Xu et al. (2015)",
"ref_id": "BIBREF35"
},
{
"start": 939,
"end": 955,
"text": "Xu et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 995,
"end": 1016,
"text": "Vaswani et al. (2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "9"
},
{
"text": "Our tagging model is closely related to the bidirectional LSTM POS tagging model of . We see larger gains over the state-ofthe-art-likely because supertagging involves more long-range dependencies than POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "9"
},
{
"text": "Other work has successfully applied GPUs to parsing, but has required GPU-specific code and algorithms (Yi et al., 2011; Johnson, 2011; Canny et al., 2013; Hall et al., 2014) . GPUs have also been used for machine translation .",
"cite_spans": [
{
"start": 103,
"end": 120,
"text": "(Yi et al., 2011;",
"ref_id": "BIBREF37"
},
{
"start": 121,
"end": 135,
"text": "Johnson, 2011;",
"ref_id": "BIBREF22"
},
{
"start": 136,
"end": 155,
"text": "Canny et al., 2013;",
"ref_id": "BIBREF7"
},
{
"start": 156,
"end": 174,
"text": "Hall et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "9"
},
{
"text": "We have shown that a combination of deep learning, linguistics and classic AI search can be used to build a parser with both state-of-the-art speed and accuracy. Future work will explore using our parser to recover other representations from CCG, such as Universal Dependencies (McDonald et al., 2013) or semantic roles. The major obstacle is the mismatch between these representations and CCGbank-we will therefore investigate new techniques for obtaining other representations from CCG parses. We will also explore new A * parsing algorithms that explicitly model the global parse structure using neural networks, while maintaining optimality guarantees.",
"cite_spans": [
{
"start": 278,
"end": 301,
"text": "(McDonald et al., 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "10"
},
{
"text": "http://github.com/mikelewis0/EasySRL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We cannot compare directly with Fowler and Penn (2010)'s adaptation of the Berkeley parser to CCG, or Auli and Lopez",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "honnibal.github.io/spaCy 5 All timing experiments use a single 3.5GHz core and (where applicable) a single NVIDIA TITAN X GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Bharat Ram Ambati, Greg Coppola, Chlo\u00e9 Kiddon, Luheng He, Yannis Konstas and the anonymous reviewers for comments on an earlier version, and Mark Yatskar for helpful discussions.This research was supported in part by the NSF (IIS-1252835) , DARPA under the DEFT program through the AFRL (FA8750-13-2-0019), an Allen Distinguished Investigator Award, and a gift from Google.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 247,
"text": "(IIS-1252835)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "TensorFlow: Large-scale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org",
"authors": [
{
"first": "Mart\u0131n",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Citro",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u0131n Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2015. TensorFlow: Large-scale Machine Learning on Heterogeneous Systems. Software available from ten- sorflow.org.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Incremental Algorithm for Transition-based CCG Parsing",
"authors": [
{
"first": "Bharat",
"middle": [
"Ram"
],
"last": "Ambati",
"suffix": ""
},
{
"first": "Tejaswini",
"middle": [],
"last": "Deoskar",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharat Ram Ambati, Tejaswini Deoskar, Mark Johnson, and Mark Steedman. 2015. An Incremental Algo- rithm for Transition-based CCG Parsing. In Proceed- ings of the 2015 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Shift-Reduce CCG Parsing using Neural Network Models",
"authors": [],
"year": 2016,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharat Ram Ambati, Tejaswini Deoskar, and Mark Steed- man. 2016. Shift-Reduce CCG Parsing using Neural Network Models. In Proceedings of the Human Lan- guage Technology Conference of the NAACL, Com- panion Volume: Short Papers.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli and Adam Lopez. 2011a. A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies-Volume 1.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Training a loglinear parser with loss functions via softmax-margin",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli and Adam Lopez. 2011b. Training a log- linear parser with loss functions via softmax-margin. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multimodal Combinatory Categorial Grammar",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Geert-Jan M",
"middle": [],
"last": "Kruijff",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge and Geert-Jan M Kruijff. 2003. Multi- modal Combinatory Categorial Grammar. In Proceed- ings of the tenth conference on European chapter of the Association for Computational Linguistics-Volume 1.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Supertagging: An approach to almost parsing",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Aravind K. Joshi. 1999. Su- pertagging: An approach to almost parsing. Compu- tational Linguistics, 25(2).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A multiteraflop constituency parser using gpus",
"authors": [
{
"first": "John",
"middle": [],
"last": "Canny",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Architecture",
"volume": "3",
"issue": "",
"pages": "3--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Canny, David Hall, and Dan Klein. 2013. A multi- teraflop constituency parser using gpus. Architecture, 3:3-5.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Phillipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. Technical report, Google.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Fast and Accurate Dependency Parser using Neural Networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D Manning. 2014. A Fast and Accurate Dependency Parser using Neural Net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 1, pages 740-750.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "It Depends: Dependency Parser Comparison Using A Web-based Evaluation Tool",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jinho",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "387--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinho D. Choi, Joel Tetreault, and Amanda Stent. 2015. It Depends: Dependency Parser Comparison Using A Web-based Evaluation Tool. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), pages 387-396, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Widecoverage Efficient Statistical Parsing with CCG and Log-Linear Models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "James R Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R Curran. 2007. Wide- coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Supertagging for Combinatory Categorial Grammar",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 6th International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+ 6)",
"volume": "",
"issue": "",
"pages": "19--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark. 2002. Supertagging for Combinatory Categorial Grammar. In Proceedings of the 6th Inter- national Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+ 6), pages 19-24.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural CRF Parsing",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2015. Neural CRF Parsing. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Transitionbased Dependency Parsing with Stack Long Short-Term Memory",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based Dependency Parsing with Stack Long Short- Term Memory. In Proc. ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Accurate Context-free Parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "A",
"middle": [
"D"
],
"last": "Timothy",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Fowler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy AD Fowler and Gerald Penn. 2010. Accu- rate Context-free Parsing with Combinatory Catego- rial Grammar. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sparser, better, faster gpu parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "208--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hall, Taylor Berg-Kirkpatrick, and Dan Klein. 2014. Sparser, better, faster gpu parsing. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 208-217, Baltimore, Maryland, June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Gappy Pattern Matching on GPUs for On-Demand Extraction of Hierarchical Translation Grammars",
"authors": [
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2015,
"venue": "TACL",
"volume": "3",
"issue": "",
"pages": "87--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hua He, Jimmy Lin, and Adam Lopez. 2015. Gappy Pattern Matching on GPUs for On-Demand Extraction of Hierarchical Translation Grammars. TACL, 3:87- 100.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-lingual Joint Parsing of Syntactic and Semantic Dependencies with a Latent Variable Model",
"authors": [
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Musillo",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multi-lingual Joint Parsing of Syntac- tic and Semantic Dependencies with a Latent Variable Model. Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Long Short-term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-term Memory. Neural computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "CCGbank: a Corpus of CCG derivations and Dependency Structures Extracted from the Penn Treebank",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: a Corpus of CCG derivations and Dependency Structures Extracted from the Penn Treebank. Com- putational Linguistics, 33(3).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Data and models for statistical parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier. 2003. Data and models for statis- tical parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. College of Sci- ence and Engineering. School of Informatics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mike Lewis and Mark Steedman. 2014a. A* CCG Parsing with a Supertag-factored Model",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "29--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2011. Parsing in Parallel on Multi- ple Cores and GPUs. In Proceedings of the Aus- tralasian Language Technology Association Workshop 2011, pages 29-37, Canberra, Australia, December. Mike Lewis and Mark Steedman. 2014a. A* CCG Pars- ing with a Supertag-factored Model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improved CCG parsing with Semi-supervised Supertagging. Transactions of the",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2014b. Improved CCG parsing with Semi-supervised Supertagging. Transac- tions of the Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Joint A* CCG Parsing and Semantic Role Labelling",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2015,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Luheng He, and Luke Zettlemoyer. 2015. Joint A* CCG Parsing and Semantic Role Labelling. In Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Finding function in form: Compositional character models for open vocabulary word representation",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Fermandez",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Lu\u00eds",
"suffix": ""
}
],
"year": 2015,
"venue": "The Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1520--1530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Lu\u00eds Marujo, and Tiago Lu\u00eds. 2015. Finding function in form: Com- positional character models for open vocabulary word representation. In EMNLP, pages 1520-1530. The As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Universal Dependency Annotation for Multilingual Parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Ryan T Mcdonald",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Keith",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "92--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan T McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith B Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, et al. 2013. Universal Dependency An- notation for Multilingual Parsing. In ACL (2), pages 92-97.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Adapting a Lexicalized-grammar Parser to Contrasting Domains",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Rimell and Stephen Clark. 2008. Adapting a Lexicalized-grammar Parser to Contrasting Domains. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Parsing with Compositional Vector Grammars",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the ACL conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with Compositional Vector Grammars. In Proceedings of the ACL confer- ence.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Word representations: A Simple and General Method for Semi-supervised Learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A Simple and General Method for Semi-supervised Learning. In Proceedings of the 48th Annual Meeting of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Supertagging With LSTMs",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Musa",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging With LSTMs . In Pro- ceedings of the Human Language Technology Confer- ence of the NAACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Grammar as a Foreign Language",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Gram- mar as a Foreign Language. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Structured Training for Neural Network Transition-Based Parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL 2015",
"volume": "",
"issue": "",
"pages": "323--333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured Training for Neural Net- work Transition-Based Parsing. In Proceedings of ACL 2015, pages 323-333.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Shift-Reduce CCG Parsing with a Dependency Model",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenduan Xu, Stephen Clark, and Yue Zhang. 2014. Shift-Reduce CCG Parsing with a Dependency Model. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (ACL 2014).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "CCG Supertagging with a Recurrent Neural Network",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenduan Xu, Michael Auli, and Stephen Clark. 2015. CCG Supertagging with a Recurrent Neural Network. Volume 2: Short Papers, page 250.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Shift-Reduce CCG Parsing with Recurrent Neural Networks and Expected F-Measure Training",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenduan Xu, Michael Auli, and Stephen Clark. 2016. Shift-Reduce CCG Parsing with Recurrent Neural Networks and Expected F-Measure Training. In Pro- ceedings of the Human Language Technology Confer- ence of the NAACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Efficient parallel cky parsing on gpus",
"authors": [
{
"first": "Youngmin",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Chao-Yue",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Keutzer",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 12th International Conference on Parsing Technologies, IWPT '11",
"volume": "",
"issue": "",
"pages": "175--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Youngmin Yi, Chao-Yue Lai, Slav Petrov, and Kurt Keutzer. 2011. Efficient parallel cky parsing on gpus. In Proceedings of the 12th International Conference on Parsing Technologies, IWPT '11, pages 175-185, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Shift-reduce CCG Parsing",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2011. Shift-reduce CCG Parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies-Volume 1.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Visualization of our supertagging model, based on stacked bi-directional LSTMs. Each word is fed into stacked LSTMs reading the sentence in each direction, the outputs of the LSTMs are combined, and there is a final softmax over categories.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "F1 on dependencies of various lengths. on the 'attach low' heuristic with current models.",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"text": "Supertagging accuracy on CCGbank.",
"content": "<table><tr><td>tagger</td><td colspan=\"2\">91.5 92.0</td><td/></tr><tr><td>NN</td><td colspan=\"2\">91.3 91.6</td><td/></tr><tr><td>RNN</td><td colspan=\"2\">93.1 93.0</td><td/></tr><tr><td>LSTM</td><td colspan=\"2\">94.1 94.3</td><td/></tr><tr><td colspan=\"3\">LSTM + Tri-training 94.9 94.7</td><td/></tr><tr><td>Model</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>C&amp;C</td><td colspan=\"3\">86.2 84.2 85.2</td></tr><tr><td>C&amp;C + RNN</td><td colspan=\"3\">87.7 86.4 87.0</td></tr><tr><td>EASYCCG</td><td colspan=\"3\">83.7 83.0 83.3</td></tr><tr><td>Dependencies</td><td colspan=\"3\">86.5 85.8 86.1</td></tr><tr><td>LSTM</td><td colspan=\"3\">87.7 86.7 87.2</td></tr><tr><td>LSTM + Dependencies</td><td colspan=\"3\">88.2 87.3 87.8</td></tr><tr><td>LSTM + Tri-training</td><td colspan=\"3\">88.6 87.5 88.1</td></tr><tr><td colspan=\"4\">LSTM + Tri-training + Dependencies 88.2 87.3 87.8</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Labelled F1 for CCGbank dependencies on the CCGbank test set (Section 23).",
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"text": "Out-of-domain experiments.",
"content": "<table><tr><td>and the Dependency model outperforms the LSTM</td></tr><tr><td>alone, showing that dependency features are cap-</td></tr><tr><td>turing some generalizations that the LSTM does</td></tr><tr><td>not. However, semi-supervised learning substan-</td></tr><tr><td>tially improves the LSTM, matching the accuracy of</td></tr><tr><td>the ensemble-showing that the LSTM is expressive</td></tr><tr><td>enough to compensate given sufficient data.</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"text": "Sentences parsed per second on our hardware. Parsers marked * use non-CCG formalisms but are the fastest available CPU and GPU parsers.",
"content": "<table><tr><td>less robust to unseen words than the LSTM tagging</td></tr><tr><td>model, and are unhelpful. Because the parser was</td></tr><tr><td>not trained or developed on this domain, it is likely</td></tr><tr><td>to perform similarly well on other domains.</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"num": null,
"html": null,
"text": "Development supertagging accuracy on several classes of words. Long range refers to words taking an argument at least 5 words away.",
"content": "<table/>",
"type_str": "table"
}
}
}
}