{
"paper_id": "K16-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:12.582428Z"
},
"title": "Greedy, Joint Syntactic-Semantic Parsing with Stack LSTMs",
"authors": [
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"postCode": "98195",
"settlement": "Seattle",
"region": "WA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a transition-based parser that jointly produces syntactic and semantic dependencies. It learns a representation of the entire algorithm state, using stack long short-term memories. Our greedy inference algorithm has linear time, including feature extraction. On the CoNLL 2008-9 English shared tasks, we obtain the best published parsing performance among models that jointly learn syntax and semantics.",
"pdf_parse": {
"paper_id": "K16-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a transition-based parser that jointly produces syntactic and semantic dependencies. It learns a representation of the entire algorithm state, using stack long short-term memories. Our greedy inference algorithm has linear time, including feature extraction. On the CoNLL 2008-9 English shared tasks, we obtain the best published parsing performance among models that jointly learn syntax and semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We introduce a new joint syntactic and semantic dependency parser. Our parser draws from the algorithmic insights of the incremental structure building approach of Henderson et al. (2008) , with two key differences. First, it learns representations for the parser's entire algorithmic state, not just the top items on the stack or the most recent parser states; in fact, it uses no expert-crafted features at all. Second, it uses entirely greedy inference rather than beam search. We find that it outperforms all previous joint parsing models, including Henderson et al. (2008) and variants (Gesmundo et al., 2009; Henderson et al., 2013) on the CoNLL 2008 and 2009 (English) shared tasks. Our parser's multilingual results are comparable to the top systems at CoNLL 2009.",
"cite_spans": [
{
"start": 164,
"end": 187,
"text": "Henderson et al. (2008)",
"ref_id": "BIBREF18"
},
{
"start": 554,
"end": 577,
"text": "Henderson et al. (2008)",
"ref_id": "BIBREF18"
},
{
"start": 591,
"end": 614,
"text": "(Gesmundo et al., 2009;",
"ref_id": "BIBREF12"
},
{
"start": 615,
"end": 638,
"text": "Henderson et al., 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Joint models like ours have frequently been proposed as a way to avoid cascading errors in NLP pipelines; varying degrees of success have been attained for a range of joint syntactic-semantic analysis tasks (Sutton and McCallum, 2005; Henderson et al., 2008; Toutanova et al., 2008; Johansson, 2009; Llu\u00eds et al., 2013, inter alia) .",
"cite_spans": [
{
"start": 207,
"end": 234,
"text": "(Sutton and McCallum, 2005;",
"ref_id": "BIBREF40"
},
{
"start": 235,
"end": 258,
"text": "Henderson et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 259,
"end": 282,
"text": "Toutanova et al., 2008;",
"ref_id": "BIBREF43"
},
{
"start": 283,
"end": 299,
"text": "Johansson, 2009;",
"ref_id": "BIBREF23"
},
{
"start": 300,
"end": 331,
"text": "Llu\u00eds et al., 2013, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One reason pipelines often dominate is that they make available the complete syntactic parse tree, and arbitrarily-scoped syntactic features-such as the \"path\" between predicate and argument, proposed by Gildea and Jurafsky (2002) -for semantic analysis. Such features are a mainstay of highperformance semantic role labeling (SRL) systems (Roth and Woodsend, 2014; Lei et al., 2015; FitzGerald et al., 2015; Foland and Martin, 2015) , but they are expensive to extract (Johansson, 2009; He et al., 2013) .",
"cite_spans": [
{
"start": 204,
"end": 230,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF13"
},
{
"start": 340,
"end": 365,
"text": "(Roth and Woodsend, 2014;",
"ref_id": "BIBREF38"
},
{
"start": 366,
"end": 383,
"text": "Lei et al., 2015;",
"ref_id": "BIBREF25"
},
{
"start": 384,
"end": 408,
"text": "FitzGerald et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 409,
"end": 433,
"text": "Foland and Martin, 2015)",
"ref_id": "BIBREF11"
},
{
"start": 470,
"end": 487,
"text": "(Johansson, 2009;",
"ref_id": "BIBREF23"
},
{
"start": 488,
"end": 504,
"text": "He et al., 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This study shows how recent advances in representation learning can bypass those expensive features, discovering cheap alternatives available during a greedy parsing procedure. The specific advance we employ is the stack LSTM , a neural network that continuously summarizes the contents of the stack data structures in which a transition-based parser's state is conventionally encoded. Stack LSTMs were shown to obviate many features used in syntactic dependency parsing; here we find them to do the same for joint syntactic-semantic dependency parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We believe this is an especially important finding for greedy models that cast parsing as a sequence of decisions made based on algorithmic state, where linguistic theory and researcher intuitions offer less guidance in feature design.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system's performance does not match that of the top expert-crafted feature-based systems (Zhao et al., 2009; Bj\u00f6rkelund et al., 2010; Roth and Woodsend, 2014; Lei et al., 2015) , systems which perform optimal decoding , or of systems that exploit additional, differently-annotated datasets (FitzGerald et al., 2015) . Many advances in those systems are orthogonal to our model, and we expect future work to achieve further gains by integrating them.",
"cite_spans": [
{
"start": 93,
"end": 112,
"text": "(Zhao et al., 2009;",
"ref_id": "BIBREF47"
},
{
"start": 113,
"end": 137,
"text": "Bj\u00f6rkelund et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 138,
"end": 162,
"text": "Roth and Woodsend, 2014;",
"ref_id": "BIBREF38"
},
{
"start": 163,
"end": 180,
"text": "Lei et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 294,
"end": 319,
"text": "(FitzGerald et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because our system is very fast-with an end-to-end runtime of 177.6\u00b118 seconds to parse the CoNLL 2009 English test data on a single core-we believe it will be useful in practical set-all are expected to reopen soon expect.01 reopen.01 sbj root vc oprd im tmp A1 C-A1 AM-TMP",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 281,
"text": "expect.01 reopen.01 sbj root vc oprd im tmp A1 C-A1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Example of a joint parse. Syntactic dependencies are shown by arcs above the sentence and semantic dependencies below; predicates are marked in boldface. C-denotes continuation of argument A1. Correspondences between dependencies might be close (between expected and to) or not (between reopen and all).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "tings. Our open-source implementation has been released. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A1",
"sec_num": null
},
{
"text": "We largely follow the transition-based, synchronized algorithm of Henderson et al. (2013) to predict joint parse structures. The input to the algorithm is a sentence annotated with part-of-speech tags. The output consists of a labeled syntactic dependency tree and a directed SRL graph, in which a subset of words in the sentence are selected as predicates, disambiguated to a sense, and linked by labeled, directed edges to their semantic arguments. Figure 1 shows an example.",
"cite_spans": [
{
"start": 66,
"end": 89,
"text": "Henderson et al. (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 451,
"end": 459,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Syntactic and Semantic Dependency Parsing",
"sec_num": "2"
},
{
"text": "The two parses are constructed in a bottom-up fashion, incrementally processing words in the sentence from left to right. The state of the parsing algorithm at timestep t is represented by three stack data structures: a syntactic stack S t , a semantic stack M t -each containing partially built structures-and a buffer of input words B t . Our algorithm also places partial syntactic and semantic parse structures onto the front of the buffer, so it is also implemented as a stack. Each arc in the output corresponds to a transition (or \"action\") chosen based on the current state; every transition modifies the state by updating S t , M t , and B t to S t+1 , M t+1 , and B t+1 , respectively. While each state may license several valid actions, each action 1 https://github.com/clab/ joint-lstm-parser has a deterministic effect on the state of the algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-Based Procedure",
"sec_num": "2.1"
},
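{
"text": "As a concrete and purely illustrative sketch of this state representation, the Python fragment below models S, M, and B as plain lists; the class and helper names are ours, not those of the released implementation.\nfrom dataclasses import dataclass, field\nfrom typing import List\n\n@dataclass\nclass ParserState:\n    # syntactic stack S, semantic stack M, and buffer B; index 0 of B is its front\n    S: List[object] = field(default_factory=list)\n    M: List[object] = field(default_factory=list)\n    B: List[object] = field(default_factory=list)\n\ndef initial_state(words):\n    # B_0 holds the sentence with the first word at the front and root at the end\n    return ParserState(S=[], M=[], B=list(words) + ['root'])\n\ndef is_terminal(state):\n    # execution ends when B is empty and each stack holds a single root-headed item\n    return not state.B and len(state.S) == 1 and len(state.M) == 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-Based Procedure",
"sec_num": "2.1"
},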
{
"text": "Initially, S 0 and M 0 are empty, and B 0 contains the input sentence with the first word at the front of B and a special root symbol at the end. 2 Execution ends on iteration t such that B t is empty and S t and M t contain only a single structure headed by root.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-Based Procedure",
"sec_num": "2.1"
},
{
"text": "There are separate sets of syntactic and semantic transitions; the former manipulate S and B, the latter M and B. All are formally defined in Table 1. The syntactic transitions are from the \"arceager\" algorithm of Nivre (2008) . They include:",
"cite_spans": [
{
"start": 214,
"end": 226,
"text": "Nivre (2008)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "\u2022 S-SHIFT, which copies 3 an item from the front of B and pushes it on S. \u2022 S-REDUCE pops an item from S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "\u2022 S-RIGHT( ) creates a syntactic dependency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "Let u be the element at the top of S and v be the element at the front of B. The new dependency has u as head, v as dependent, and label . u is popped off S, and the resulting structure, rooted at u, is pushed on S. Finally, v is copied to the top of S. Because SRL graphs allow a node to be a semantic argument of two parents-like all in the example in Figure 1 -M-LEFT and M-RIGHT do not remove the dependent from the semantic stack and buffer respectively, unlike their syntactic equivalents, S-LEFT and S-RIGHT. We use two other semantic transitions from Henderson et al. (2013) which have no syntactic analogues:",
"cite_spans": [
{
"start": 559,
"end": 582,
"text": "Henderson et al. (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 354,
"end": 362,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "\u2022 M-SWAP swaps the top two items on M , to allow for crossing semantic arcs. \u2022 M-PRED(p) marks the item at the front of B as a semantic predicate with the sense p, and replaces it with the disambiguated predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "The CoNLL 2009 corpus introduces semantic self-dependencies where many nominal predicates (from NomBank) are marked as their own arguments; these account for 6.68% of all semantic arcs in the English corpus. An example involving an eventive noun is shown in Figure 2 . We introduce a new semantic transition, not in Henderson et al. (2013) , to handle such cases:",
"cite_spans": [
{
"start": 316,
"end": 339,
"text": "Henderson et al. (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "\u2022 M-SELF(r) adds a dependency, with label r between the item at the front of B and itself. The result replaces the item at the front of B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "Note that the syntactic and semantic transitions both operate on the same buffer, though they independently specify the syntax and semantics, respectively. In order to ensure that both syntactic and semantic parses are produced, the syntactic and semantic transitions are interleaved. Only syntactic transitions are considered until a transition is chosen that copies an item from the buffer front to the syntactic stack (either S-SHIFT or S-RIGHT). The algorithm then switches to semantic transitions until a buffer-modifying transition is taken (M-SHIFT). 4 At this point, the buffer is modi-fied and the algorithm returns to syntactic transitions. This implies that, for each word, its leftside syntactic dependencies are resolved before its left-side semantic dependencies. An example run of the algorithm is shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 822,
"end": 830,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},
{
"text": "To ensure that the parser never enters an invalid state, the sequence of transitions is constrained, following Henderson et al. (2013) . Actions that copy or move items from the buffer (S-SHIFT, S-RIGHT and M-SHIFT) are forbidden when the buffer is empty. Actions that pop from a stack (S-REDUCE and M-REDUCE) are forbidden when that stack is empty. We disallow actions corresponding to the same dependency, or the same predicate to be repeated in the sequence. Repetitive M-SWAP transitions are disallowed to avoid infinite swapping. Finally, as noted above, we restrict the parser to syntactic actions until it needs to shift an item from B to S, after which it can only execute semantic actions until it executes an M-SHIFT.",
"cite_spans": [
{
"start": 111,
"end": 134,
"text": "Henderson et al. (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints on Transitions",
"sec_num": "2.3"
},
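{
"text": "A simplified sketch of these constraints, folding in the syntactic/semantic interleaving of \u00a72.2; it omits the checks against repeating a dependency or predicate, and all names are ours.\ndef allowed(state, mode, history):\n    # mode is 'syn' until S-SHIFT or S-RIGHT copies the buffer front to S,\n    # then 'sem' until an M-SHIFT moves the buffer front (see Section 2.2)\n    acts = set()\n    if mode == 'syn':\n        if state.B:\n            acts.add('S-SHIFT')\n        if state.S:\n            acts.add('S-REDUCE')\n        if state.B and state.S:\n            acts.update({'S-RIGHT', 'S-LEFT'})\n    else:\n        if state.B:\n            acts.update({'M-SHIFT', 'M-PRED', 'M-SELF'})\n        if state.M:\n            acts.add('M-REDUCE')\n        if state.M and state.B:\n            acts.update({'M-RIGHT', 'M-LEFT'})\n        if len(state.M) > 1 and history[-1:] != ['M-SWAP']:\n            acts.add('M-SWAP')  # forbid immediate repetition to avoid infinite swapping\n    return acts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints on Transitions",
"sec_num": "2.3"
},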
{
"text": "Asymptotic runtime complexity of this greedy algorithm is linear in the length of the input, following the analysis by Nivre (2009) . 5",
"cite_spans": [
{
"start": 119,
"end": 131,
"text": "Nivre (2009)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints on Transitions",
"sec_num": "2.3"
},
{
"text": "The transitions in \u00a72 describe the execution paths our algorithm can take; like past work, we apply a statistical classifier to decide which transition to take at each timestep, given the current state. The novelty of our model is that it learns a finite-length vector representation of the entire joint parser's state (S, M , and B) in order to make this decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Model",
"sec_num": "3"
},
{
"text": "LSTMs are recurrent neural networks equipped with specialized memory components in addition to a hidden state (Hochreiter and Schmidhuber, 1997; Graves, 2013) to model sequences. Stack LSTMs are LSTMs that allow for stack operations: query, push, and pop. A \"stack pointer\" is maintained which determines which cell in the LSTM provides the memory and hidden units when computing the new memory cell contents. Query provides a summary of the stack in a single fixed-length vector. Push adds semantic transitions, hence we only copy it.",
"cite_spans": [
{
"start": 110,
"end": 144,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF20"
},
{
"start": 145,
"end": 158,
"text": "Graves, 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stack Long Short-Term Memory (LSTM)",
"sec_num": "3.1"
},
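{
"text": "A minimal PyTorch sketch of the push, pop, and query operations described above (our illustration, not the released code).\nimport torch\n\nclass StackLSTM:\n    # keeps the LSTM state after every push; the stack pointer is the end of the list\n    def __init__(self, input_size, hidden_size):\n        self.cell = torch.nn.LSTMCell(input_size, hidden_size)\n        empty = (torch.zeros(1, hidden_size), torch.zeros(1, hidden_size))\n        self.states = [empty]  # summary of the empty stack\n\n    def push(self, x):\n        # x has shape (1, input_size); the new summary reflects the pushed element\n        self.states.append(self.cell(x, self.states[-1]))\n\n    def pop(self):\n        # move the pointer back: the summary is as it was before the popped item\n        self.states.pop()\n\n    def summary(self):\n        # query: a single fixed-length vector summarizing the whole stack\n        return self.states[-1][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stack Long Short-Term Memory (LSTM)",
"sec_num": "3.1"
},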
{
"text": "Bt an element to the top of the stack, resulting in a new summary. Pop, which does not correspond to a conventional LSTM operation, moves the stack pointer to the preceding timestep, resulting in a stack summary as it was before the popped item was observed. Implementation details Goldberg, 2015) and code have been made publicly available. 6 Using stack LSTMs, we construct a representation of the algorithm state by decomposing it into smaller pieces that are combined by recursive function evaluations (similar to the way a list is built by a concatenate operation that operates on a list and an element). This enables information that would be distant from the \"top\" of the stack to be carried forward, potentially helping the learner.",
"cite_spans": [
{
"start": 282,
"end": 297,
"text": "Goldberg, 2015)",
"ref_id": "BIBREF14"
},
{
"start": 342,
"end": 343,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mt",
"sec_num": null
},
{
"text": "Action St+1 Mt+1 Bt+1 Dependency S M (v, v), B S-SHIFT (v, v), S M (v, v), B - (u, u), S M B S-REDUCE S M B - (u, u), S M (v, v), B S-RIGHT( ) (v, v), (gs(u, v, l), u), S M (v, v), B S \u222a u \u2192 v (u, u), S M (v, v), B S-LEFT( ) S M (gs(v, u, l), v), B S \u222a u \u2190 v S M (v, v), B M-SHIFT S (v, v), M B - S (u, u), M B M-REDUCE S M B - S (u, u), M (v, v), B M-RIGHT(r) S (gm(u, v, r), u), M (v, v), B M \u222a u r \u2192 v S (u, u), M (v, v), B M-LEFT(r) S (u, u), M (gm(v, u, r), v), B M \u222a u r \u2190 v S (u, u), (v, v), M B M-SWAP S (v, v), (u, u), M B - S M (v, v), B M-PRED(p) S M (g d (v, p), v), B - S M (v, v), B M-SELF(r) S M (gm(v, v, r), v), B M \u222a v r \u2194 v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mt",
"sec_num": null
},
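{
"text": "The fragment below sketches how a few of the transitions in Table 1 update list-based stacks; the stacks here hold single items rather than (representation, head-word) pairs, the front of B is B[0], and gs, gm, gd stand for the composition functions of \u00a73.3. All names are ours.\ndef apply(state, action, gs, gm, gd, label=None, sense=None):\n    S, M, B = state.S, state.M, state.B\n    if action == 'S-SHIFT':      # copy the buffer front onto the syntactic stack\n        S.append(B[0])\n    elif action == 'S-REDUCE':   # pop the syntactic stack\n        S.pop()\n    elif action == 'S-RIGHT':    # arc u -> v: compose, keep the result on S, copy v on top\n        u, v = S.pop(), B[0]\n        S.append(gs(u, v, label))\n        S.append(v)\n    elif action == 'M-SHIFT':    # move (not copy) the buffer front to the semantic stack\n        M.append(B.pop(0))\n    elif action == 'M-RIGHT':    # arc u -> v: stack-top predicate u, buffer-front argument v\n        M[-1] = gm(M[-1], B[0], label)   # v is not removed from the buffer\n    elif action == 'M-LEFT':     # arc u <- v: buffer-front predicate v, stack-top argument u\n        B[0] = gm(B[0], M[-1], label)    # u is not removed from the semantic stack\n    elif action == 'M-PRED':     # disambiguate the buffer front as a predicate with sense p\n        B[0] = gd(B[0], sense)\n    elif action == 'M-SWAP':     # swap the top two semantic stack items\n        M[-1], M[-2] = M[-2], M[-1]\n    return state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transitions for Joint Parsing",
"sec_num": "2.2"
},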
{
"text": "Our algorithm employs four stack LSTMs, one each for the S, M , and B data structures.Like Dyer et al. (2015), we use a fourth stack LSTM, A, for the history of actions-A is never popped from, only pushed to. Figure 4 illustrates the architecture. The algorithm's state at timestep t is encoded by the four vectors summarizing the four stack LSTMs, and this is the input to the classifier that chooses among the allowable transitions at that timestep.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Stack LSTMs for Joint Parsing",
"sec_num": "3.2"
},
{
"text": "Let s t , m t , b t , and a t denote the summaries of S t , M t , B t , and A t , respectively. Let A t = Allowed(S t , M t , B t , A t ) denote the allowed transitions given the current stacks and buffer. The parser state at time t is given by a rectified linear unit (Nair and Hinton, 2010) in vector y t : ... ",
"cite_spans": [
{
"start": 269,
"end": 292,
"text": "(Nair and Hinton, 2010)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stack LSTMs for Joint Parsing",
"sec_num": "3.2"
},
{
"text": "y t = elementwisemax {0, d + W[s t ; m t ; b t ; a t ]}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stack LSTMs for Joint Parsing",
"sec_num": "3.2"
},
{
"text": "q \u03c4 + \u03b8 \u03c4 \u2022 y t (1) \u2261 arg max \u03c4 \u2208At score(\u03c4 ; S t , M t , B t , A t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stack LSTMs for Joint Parsing",
"sec_num": "3.2"
},
{
"text": "where \u03b8 \u03c4 and q \u03c4 are parameters for each transition type \u03c4 . Note that only allowed transitions are considered in the decision rule (see \u00a72.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stack LSTMs for Joint Parsing",
"sec_num": "3.2"
},
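{
"text": "A small numpy sketch of this greedy decision rule (W, d, theta, and q correspond to the parameters above; everything else is our naming).\nimport numpy as np\n\ndef choose_transition(s_t, m_t, b_t, a_t, W, d, theta, q, allowed):\n    # parser state: rectified linear unit over the concatenated stack summaries\n    y_t = np.maximum(0.0, d + W.dot(np.concatenate([s_t, m_t, b_t, a_t])))\n    # Equation 1, restricted to the transitions allowed by the constraints of Section 2.3\n    scores = {tau: q[tau] + theta[tau].dot(y_t) for tau in allowed}\n    return max(scores, key=scores.get)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stack LSTMs for Joint Parsing",
"sec_num": "3.2"
},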
{
"text": "To use stack LSTMs, we require vector representations of the elements that are stored in the stacks. Specifically, we require vector representations of atoms (words, possibly with part-of-speech tags) and parse fragments. Word vectors can be pretrained or learned directly; we consider a concatenation of both in our experiments; part-of-speech Figure 3 : Joint parser transition sequence for the sentence in Figure 1 , \"all are expected to reopen soon.\" Syntactic labels are in lower-case and semantic role labels are capitalized. *** marks the operation predicted in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 353,
"text": "Figure 3",
"ref_id": null
},
{
"start": 409,
"end": 417,
"text": "Figure 1",
"ref_id": null
},
{
"start": 569,
"end": 577,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "vectors are learned and concatenated to the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "To obtain vector representations of parse fragments, we use neural networks which recursively compute representations of the complex structured output . The tree structures here are always ternary trees, with each internal node's three children including a head, a dependent, and a label. The vectors for leaves are word vectors and vectors corresponding to syntactic and semantic relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "The vector for an internal node is a squashed (tanh) affine transformation of its children's vectors. For syntactic and semantic attachments, re- spectively, the composition function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g s (v, u, l) = tanh(Z s [v; u; l] + e s )",
"eq_num": "(2)"
}
],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g m (v, u, r) = tanh(Z m [v; u; r] + e m )",
"eq_num": "(3)"
}
],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "where v and u are vectors corresponding to atomic words or composed parse fragments; l and r are learned vector representations for syntactic and semantic labels respectively. Syntactic and semantic parameters are separated (Z s , e s and Z m , e m , respectively). Finally, for predicates, we use another recursive function to compose the word representation, v with a learned representation for the dismabiguated sense of the predicate, p:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g d (v, p) = tanh(Z d [v; p] + e d )",
"eq_num": "(4)"
}
],
"section": "Composition Functions",
"sec_num": "3.3"
},
{
"text": "where Z d and e d are parameters of the model. Note that, because syntactic and semantic transitions are interleaved, the fragmented structures are a blend of syntactic and semantic compositions. Figure 5 shows an example.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Composition Functions",
"sec_num": "3.3"
},
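{
"text": "A numpy sketch of Equations 2-4; the parameter names follow the text, and the first two arguments are the vectors of the two fragments being combined.\nimport numpy as np\n\ndef g_s(v, u, l, Z_s, e_s):\n    # syntactic composition (Eq. 2): fragment vectors v, u and label embedding l\n    return np.tanh(Z_s.dot(np.concatenate([v, u, l])) + e_s)\n\ndef g_m(v, u, r, Z_m, e_m):\n    # semantic composition (Eq. 3): separate parameters, semantic role embedding r\n    return np.tanh(Z_m.dot(np.concatenate([v, u, r])) + e_m)\n\ndef g_d(v, p, Z_d, e_d):\n    # predicate composition (Eq. 4): word vector v and predicate sense embedding p\n    return np.tanh(Z_d.dot(np.concatenate([v, p])) + e_d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition Functions",
"sec_num": "3.3"
},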
{
"text": "Training the classifier requires transforming each training instance (a joint parse) into a transition sequence, a deterministic operation under our transition set. Given a collection of algorithm states at time t and correct classification decisions \u03c4 t , we minimize the sum of log-loss terms, given (for one timestep) by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 log exp(q \u03c4t + \u03b8 \u03c4t \u2022 y t ) \u03c4 \u2208At exp(q \u03c4 + \u03b8 \u03c4 \u2022 y t )",
"eq_num": "(5)"
}
],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "with respect to the classifier and LSTM parameters. Note that the loss is differentiable with respect to the parameters; gradients are calculated using backpropagation. We apply stochastic gradient descent with dropout for all neural network parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
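{
"text": "A numpy sketch of the per-timestep log-loss in Equation 5, with the softmax taken only over the transitions allowed at that timestep (names are ours).\nimport numpy as np\n\ndef step_loss(y_t, gold_tau, allowed, theta, q):\n    # negative log-probability of the gold transition under a softmax over allowed ones\n    logits = np.array([q[tau] + theta[tau].dot(y_t) for tau in allowed])\n    logits = logits - logits.max()          # subtract the max for numerical stability\n    log_Z = np.log(np.exp(logits).sum())\n    return -(logits[allowed.index(gold_tau)] - log_Z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},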
{
"text": "Following , \"structured skipgram\" embeddings were used, trained on the English (AFP section), German, Spanish and Chinese Gigaword corpora, with a window of size 5; training was stopped after 5 epochs. For out-of-vocabulary words, a randomly initialized vector of the same dimension was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Embeddings",
"sec_num": "3.5"
},
{
"text": "Predicate sense disambiguation is handled within the model (M-PRED transitions), but since senses are lexeme-specific, we need a way to handle unseen predicates at test time. When a predicate is encountered at test time that was not observed in training, our system constructs a predicate from the predicted lemma of the word at that position and defaults to the \"01\" sense, which is correct for 91.22% of predicates by type in the English CoNLL 2009 training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Sense Disambiguation",
"sec_num": "3.6"
},
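{
"text": "The backoff described above amounts to the following rule (a sketch; seen_predicate_lemmas is a hypothetical set collected from the training data).\ndef fallback_sense(lemma, seen_predicate_lemmas):\n    # lemmas observed in training are disambiguated by the M-PRED transition itself\n    if lemma in seen_predicate_lemmas:\n        return None  # defer to the M-PRED classifier\n    # unseen predicates default to the 01 sense, which is correct for 91.22% of\n    # predicate types in the English CoNLL 2009 training data\n    return lemma + '.01'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Sense Disambiguation",
"sec_num": "3.6"
},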
{
"text": "Our model is evaluated on the CoNLL shared tasks on joint syntactic and semantic dependency parsing in 2008 and 2009 (Haji\u010d et al., 2009) . The standard training, development and test splits of all datasets were used. Per the shared task guidelines, automatically predicted POS tags and lemmas provided in the datasets were used for all experiments. As a preprocessing step, pseudo-projectivization of the syntactic trees (Nivre et al., 2007) was used, which allowed an accurate conversion of even the non-projective syntactic trees into syntactic transitions. However, the oracle conversion of semantic parses into transitions is not perfect despite using the M-SWAP action, due to the presence of multiple crossing arcs. 7",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "(Haji\u010d et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 422,
"end": 442,
"text": "(Nivre et al., 2007)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "The standard evaluation metrics include the syntactic labeled attachment score (LAS), the semantic F 1 score on both in-domain (WSJ) and outof-domain (Brown corpus) data, and their macro average (Macro F 1 ) to score joint systems. Because the task was defined somewhat differently in each year, each dataset is considered in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
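{
"text": "For concreteness, the macro average used to score joint systems is simply the mean of the syntactic and semantic scores (a sketch).\ndef macro_f1(syntactic_las, semantic_f1):\n    # CoNLL joint-task macro score: average of syntactic LAS and semantic F1\n    return (syntactic_las + semantic_f1) / 2.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},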
{
"text": "The CoNLL 2008 dataset contains annotations from the Penn Treebank (Marcus et al., 1993) , PropBank (Palmer et al., 2005) and Nom-Bank (Meyers et al., 2004) . The shared task evaluated systems on predicate identification in addition to predicate sense disambiguation and SRL.",
"cite_spans": [
{
"start": 67,
"end": 88,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF31"
},
{
"start": 100,
"end": 121,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF37"
},
{
"start": 135,
"end": 156,
"text": "(Meyers et al., 2004)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2008",
"sec_num": "4.1"
},
{
"text": "To identify predicates, we trained a zero-Markov order bidirectional LSTM two-class classifier. As input to the classifier, we use learned representations of word lemmas and POS tags. This model achieves an F 1 score of 91.43% on marking words as predicates (or not).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2008",
"sec_num": "4.1"
},
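{
"text": "A PyTorch sketch of such a zero-Markov-order bidirectional LSTM classifier over lemma and POS-tag embeddings; the sizes and names here are ours, not the tuned values of the paper.\nimport torch\n\nclass PredicateIdentifier(torch.nn.Module):\n    def __init__(self, n_lemmas, n_tags, emb=64, hidden=100):\n        super().__init__()\n        self.lemma_emb = torch.nn.Embedding(n_lemmas, emb)\n        self.tag_emb = torch.nn.Embedding(n_tags, emb)\n        self.bilstm = torch.nn.LSTM(2 * emb, hidden, bidirectional=True, batch_first=True)\n        self.out = torch.nn.Linear(2 * hidden, 2)  # predicate vs. not, per token\n\n    def forward(self, lemmas, tags):\n        # zero-Markov order: each token is scored independently given the BiLSTM states\n        x = torch.cat([self.lemma_emb(lemmas), self.tag_emb(tags)], dim=-1)\n        h, _ = self.bilstm(x)\n        return self.out(h)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2008",
"sec_num": "4.1"
},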
{
"text": "Hyperparameters The input representation for a word consists of pretrained embeddings (size 100 for English, 80 for Chinese, 64 for German and Spanish), concatenated with additional learned word and POS tag embeddings (size 32 and 12, respectively). Learned embeddings for syntactic and semantic arc labels are of size 20 and predicates 100. Two-layer LSTMs with hidden state dimension 100 were used for each of the four stacks. The parser state y t and the composition function g are of dimension 100. A dropout rate of 0.2 (Zaremba et al., 2014) was used on all layers at training time, tuned on the development data from the set of values {0.1, 0.2, 0.3, 1.0}. The learned representations for actions are of size 100, similarly tuned from {10, 20, 30, 40, 100}. Other hyperparameters have been set intuitively; careful tuning is expected to yield improvements (Weiss et al., 2015 ).",
"cite_spans": [
{
"start": 525,
"end": 547,
"text": "(Zaremba et al., 2014)",
"ref_id": "BIBREF45"
},
{
"start": 863,
"end": 882,
"text": "(Weiss et al., 2015",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2008",
"sec_num": "4.1"
},
{
"text": "An initial learning rate of 0.1 for stochastic gradient descent was used and updated in every training epoch with a decay rate of 0.1 . Training is stopped when the development performance does not improve for approximately 6-7 hours of elapsed time. Experiments were run on a single thread on a CPU, with memory requirements of up to 512 MB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2008",
"sec_num": "4.1"
},
{
"text": "Relative to the CoNLL 2008 task (above), the main change in 2009 is that predicates are preidentified, and systems are only evaluated on predicate sense disambiguation (not identification). Hence, the bidirectional LSTM classifier is not used here. The preprocessing for projectivity, and the hyperparameter selection is the same as in \u00a74.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2009",
"sec_num": "4.2"
},
{
"text": "In addition to the joint approach described in the preceding sections, we experiment here with several variants: Semantics-only: the set of syntactic transitions S, the syntactic stack S, and the syntactic composition function g s are discarded. As a result, the set of constraints on transitions is a subset of the full set of constraints in \u00a72.3. Effectively, this model does not use any syntactic features, similar to Collobert et al. 2011and Zhou and Xu (2015) . It provides a controlled test of the benefit of explicit syntax in a semantic parser.",
"cite_spans": [
{
"start": 446,
"end": 464,
"text": "Zhou and Xu (2015)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2009",
"sec_num": "4.2"
},
{
"text": "Syntax-only: all semantic transitions in M, the semantic stack M , and the semantic composition function g m are discarded. S-SHIFT and S-RIGHT now move the item from the front of the buffer to the syntactic stack, instead of copying. The set of constraints on the transitions is again a subset of the full set of constraints. This model is an arceager variant of , and serves to check whether semantic parsing degrades syntactic performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2009",
"sec_num": "4.2"
},
{
"text": "Hybrid: the semantics parameters are trained using automatically predicted syntax from the syntax-only model. At test time, only semantic parses are predicted. This setup bears similarity to other approaches which pipeline syntax and semantics, extracting features from the syntactic parse to help SRL. However, unlike other approaches, this model does not offer the entire syntactic tree for feature extraction, since only the partial syntactic structures present on the syntactic stack (and potentially the buffer) are visible at a given timestep. This model helps show the effect of joint prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2009",
"sec_num": "4.2"
},
{
"text": "CoNLL 2008 (Table 2) Our joint model significantly outperforms the joint model of Henderson et al. (2008) , from which our set of tran-",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "Henderson et al. (2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 11,
"end": 20,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Model LAS Sem. Macro F 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "F 1 joint models: Llu\u00eds and M\u00e0rquez (2008) 85.8 70.3 78.1 Henderson et al. (2008) 87.6 73.1 80.5 Johansson (2009) 86.6 77.1 81.8 87.5 76.1 81.8 CoNLL 2008 best: #3: Zhao and Kit (2008) 87.7 76.7 82.2 #2: Che et al. (2008) 86.7 78.5 82.7 #2: Ciaramita et al. (2008) 87.4 78.0 82.7 #1: J&N (2008) 89.3 81.6 85.5 Joint (this work) 89.1 80.5 84.9 Johansson, 2009; for the same task; our joint model surpasses the performance of all these models. The best reported systems on the CoNLL 2008 task are due to Johansson and Nugues (2008) , Che et al. (2008) , Ciaramita et al. (2008) and Zhao and Kit (2008) , all of which pipeline syntax and semantics; our system's semantic and overall performance is comparable to these. We fall behind only Johansson and Nugues (2008) , whose success was attributed to carefully designed global SRL features integrated into a pipeline of classifiers, making them asymptotically slower.",
"cite_spans": [
{
"start": 18,
"end": 52,
"text": "Llu\u00eds and M\u00e0rquez (2008) 85.8 70.3",
"ref_id": null
},
{
"start": 58,
"end": 81,
"text": "Henderson et al. (2008)",
"ref_id": "BIBREF18"
},
{
"start": 97,
"end": 113,
"text": "Johansson (2009)",
"ref_id": "BIBREF23"
},
{
"start": 165,
"end": 184,
"text": "Zhao and Kit (2008)",
"ref_id": "BIBREF46"
},
{
"start": 204,
"end": 221,
"text": "Che et al. (2008)",
"ref_id": "BIBREF5"
},
{
"start": 241,
"end": 264,
"text": "Ciaramita et al. (2008)",
"ref_id": "BIBREF7"
},
{
"start": 284,
"end": 294,
"text": "J&N (2008)",
"ref_id": null
},
{
"start": 343,
"end": 359,
"text": "Johansson, 2009;",
"ref_id": "BIBREF23"
},
{
"start": 502,
"end": 529,
"text": "Johansson and Nugues (2008)",
"ref_id": null
},
{
"start": 532,
"end": 549,
"text": "Che et al. (2008)",
"ref_id": "BIBREF5"
},
{
"start": 552,
"end": 575,
"text": "Ciaramita et al. (2008)",
"ref_id": "BIBREF7"
},
{
"start": 580,
"end": 599,
"text": "Zhao and Kit (2008)",
"ref_id": "BIBREF46"
},
{
"start": 736,
"end": 763,
"text": "Johansson and Nugues (2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "CoNLL 2009 English (Table 3 ) All of our models (Syntax-only, Semantics-only, Hybrid and Joint) improve over Gesmundo et al. (2009) and Henderson et al. (2013) , demonstrating the benefit of our entire-parser-state representation learner compared to the more locally scoped model. Given that syntax has consistently proven useful in SRL, we expected our Semantics-only model to underperform Hybrid and Joint, and it did. In the training domain, syntax and semantics benefit each other (Joint outperforms Hybrid). Outof-domain (the Brown test set), the Hybrid pulls ahead, a sign that Joint overfits to WSJ. As a syntactic parser, our Syntax-only model performs slightly better than , who achieve 89.56 LAS on this task. Joint parsing is very slightly better still.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "Gesmundo et al. (2009)",
"ref_id": "BIBREF12"
},
{
"start": 136,
"end": 159,
"text": "Henderson et al. (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "(Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The overall performance of Joint is on par with the other winning participants at the CoNLL 2009 shared task (Zhao et al., 2009; Che et al., 2009; Gesmundo et al., 2009) , falling behind only Zhao et al. (2009) , who carefully designed languagespecific features and used a series of pipelines for the joint task, resulting in an accurate but computationally expensive system.",
"cite_spans": [
{
"start": 109,
"end": 128,
"text": "(Zhao et al., 2009;",
"ref_id": "BIBREF47"
},
{
"start": 129,
"end": 146,
"text": "Che et al., 2009;",
"ref_id": "BIBREF6"
},
{
"start": 147,
"end": 169,
"text": "Gesmundo et al., 2009)",
"ref_id": "BIBREF12"
},
{
"start": 192,
"end": 210,
"text": "Zhao et al. (2009)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "State-of-the-art SRL systems (shown in the last block of Table 3 ) which use advances orthogonal to the contributions in this paper, perform better than our models. Many of these systems use expert-crafted features derived from full syntactic parses in a pipeline of classifiers followed by a global reranker (Bj\u00f6rkelund et al., 2009; Bj\u00f6rkelund et al., 2010; Roth and Woodsend, 2014) ; we have not used these features or reranking. Lei et al. (2015) use syntactic parses to obtain interaction features between predicates and their arguments and then compress feature representations using a low-rank tensor. present an exact inference algorithm for SRL based on dynamic programming and their local and structured models make use of many syntactic features from a pipeline; our search procedure is greedy. Their algorithm is adopted by FitzGerald et al. (2015) for inference in a model that jointly learns representations from a combination of PropBank and FrameNet annotations; we have not experimented with extra annotations.",
"cite_spans": [
{
"start": 309,
"end": 334,
"text": "(Bj\u00f6rkelund et al., 2009;",
"ref_id": "BIBREF2"
},
{
"start": 335,
"end": 359,
"text": "Bj\u00f6rkelund et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 360,
"end": 384,
"text": "Roth and Woodsend, 2014)",
"ref_id": "BIBREF38"
},
{
"start": 433,
"end": 450,
"text": "Lei et al. (2015)",
"ref_id": "BIBREF25"
},
{
"start": 836,
"end": 860,
"text": "FitzGerald et al. (2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Our system achieves an end-to-end runtime of 177.6\u00b118 seconds to parse the CoNLL 2009 English test set on a single core. This is almost 2.5 times faster than the pipeline model of Lei et al. (2015) (439.9\u00b142 seconds) on the same machine. 8 (Table 4) We tested the joint model on the non-English CoNLL 2009 datasets, and the results demonstrate that it adapts easily-it is on par with the top three systems in most cases. We note that our Chinese parser relies on pretrained word embeddings for its superior performance; without them (not shown), it was on par with the others. Japanese is a small-data case (4,393 training examples), illustrating our model's dependence on reasonably large training datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 249,
"text": "(Table 4)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We have not extended our model to incorporate morphological features, which are used by the systems to which we compare. Future work might in- The first block presents results of other models evaluated for both syntax and semantics on the CoNLL 2009 task. The second block presents our models. The third block presents the best published models, each using its own syntactic preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2009 Multilingual",
"sec_num": null
},
{
"text": "corporate morphological features where available; this could potentially improve performance, especially in highly inflective languages like Czech. An alternative might be to infer word-internal representations using character-based word embeddings, which was found beneficial for syntactic parsing . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL 2009 Multilingual",
"sec_num": null
},
{
"text": "Other approaches to joint modeling, not considered in our experiments, are notable. Llu\u00eds et al. (2013) propose a graph-based joint model using dual decomposition for agreement between syntax and semantics, but do not achieve competitive performance on the CoNLL 2009 task. Lewis et al. (2015) proposed an efficient joint model for CCG syntax and SRL, which performs better than a pipelined model. However, their training necessitates CCG annotation, ours does not. Moreover, their evaluation metric rewards semantic dependencies regardless of where they attach within the argument span given by a PropBank constituent, making direct comparison to our evaluation infeasible. Krishnamurthy and Mitchell (2014) propose a joint CCG parsing and relation extraction model which improves over pipelines, but their task is different from ours. Li et al. (2010) also perform joint syntactic and semantic dependency parsing for Chinese, but do not report results on the CoNLL 2009 dataset. There has also been an increased interest in models which use neural networks for SRL. Collobert et al. (2011) proposed models which perform many NLP tasks without hand-crafted features. Though they did not achieve the best results on the constituent-based SRL task (Carreras and M\u00e0rquez, 2005) , their approach inspired Zhou and Xu (2015) , who achieved state-of-the-art results using deep bidirectional LSTMs. Our approach for dependency-based SRL is not directly comparable.",
"cite_spans": [
{
"start": 84,
"end": 103,
"text": "Llu\u00eds et al. (2013)",
"ref_id": "BIBREF30"
},
{
"start": 274,
"end": 293,
"text": "Lewis et al. (2015)",
"ref_id": "BIBREF26"
},
{
"start": 675,
"end": 708,
"text": "Krishnamurthy and Mitchell (2014)",
"ref_id": "BIBREF24"
},
{
"start": 837,
"end": 853,
"text": "Li et al. (2010)",
"ref_id": "BIBREF27"
},
{
"start": 1068,
"end": 1091,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF8"
},
{
"start": 1247,
"end": 1275,
"text": "(Carreras and M\u00e0rquez, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 1302,
"end": 1320,
"text": "Zhou and Xu (2015)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We presented an incremental, greedy parser for joint syntactic and semantic dependency parsing. Our model surpasses the performance of previous joint models on the CoNLL 2008 and 2009 English tasks, without using expert-crafted, expensive features of the full syntactic parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "This works better for the arc-eager algorithm (Ballesteros andNivre, 2013), in contrast toHenderson et al. (2013), who initialized with root at the buffer front.3 Note that in the original arc-eager algorithm(Nivre, 2008), SHIFT and RIGHT-ARC actions move the item on the buffer front to the stack, whereas we only copy it (to allow the semantic operations to have access to it).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Had we moved the item at the buffer front during the syntactic transitions, it would have been unavailable for the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The analysis in(Nivre, 2009) does not consider SWAP actions. However, since we constrain the number of such actions, the linear time complexity of the algorithm stays intact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For 1.5% of English sentences in the CoNLL 2009 English dataset, the transition sequence incorrectly encodes the gold-standard joint parse; details inHenderson et al. (2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See https://github.com/taolei87/ SRLParser; unlike other state-of-the-art systems, this one is publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Sam Thomson, Lingpeng Kong, Mark Yatskar, Eunsol Choi, George Mulcaire, and Luheng He, as well as the anonymous reviewers, for many useful comments. This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program and by the U.S. Army Research Office under grant number W911NF-10-1-0533. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the U.S. Army Research Office or the U.S. Government. Miguel Ballesteros was supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Going to the roots of dependency parsing",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "1",
"pages": "5--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros and Joakim Nivre. 2013. Going to the roots of dependency parsing. Computational Linguistics, 39(1):5-13.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improved transition-based parsing by modeling characters instead of words with LSTMs",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by mod- eling characters instead of words with LSTMs. In Proc. of EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multilingual semantic role labeling",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Love",
"middle": [],
"last": "Hafdell",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Proc. of CoNLL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A high-performance syntactic and semantic dependency parser",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Love",
"middle": [],
"last": "Hafdell",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of COL-ING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. 2010. A high-performance syntactic and semantic dependency parser. In Proc. of COL- ING.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Introduction to the CoNLL-2005 shared task: Semantic role labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduc- tion to the CoNLL-2005 shared task: Semantic role labeling. In Proc. of CoNLL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A cascaded syntactic and semantic dependency parsing system",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Zhenghua Li, Yuxuan Hu, Yongqiang Li, Bing Qin, Ting Liu, and Sheng Li. 2008. A cascaded syntactic and semantic dependency pars- ing system. In Proc. of CoNLL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multilingual dependency-based syntactic and semantic parsing",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuhang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Zhenghua Li, Yongqiang Li, Yuhang Guo, Bing Qin, and Ting Liu. 2009. Multilingual dependency-based syntactic and semantic parsing. In Proc. of CoNLL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "DeSRL: A linear-time semantic role labeling system",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Attardi",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita, Giuseppe Attardi, Felice Dell'Orletta, and Mihai Surdeanu. 2008. DeSRL: A linear-time semantic role labeling system. In Proc. of CoNLL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Transitionbased dependency parsing with stack long shortterm memory",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proc. of ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semantic role labelling with neural network factors",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas FitzGerald, Oscar T\u00e4ckstr\u00f6m, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labelling with neural network factors. In Proc. of EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dependencybased semantic role labeling using convolutional neural networks",
"authors": [
{
"first": "William",
"middle": [
"R"
],
"last": "Foland",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of *SEM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William R. Foland and James Martin. 2015. Depen- dencybased semantic role labeling using convolu- tional neural networks. In Proc. of *SEM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A latent variable model of synchronous syntactic-semantic parsing for multiple languages",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Gesmundo, James Henderson, Paola Merlo, and Ivan Titov. 2009. A latent variable model of synchronous syntactic-semantic parsing for multiple languages. In Proc. of CoNLL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguis- tics, 28(3):245-288.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A primer on neural network models for natural language processing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.00726"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2015. A primer on neural network models for natural language processing. arXiv:1510.00726.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.0850"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recur- rent neural networks. arXiv:1308.0850.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160t\u011bp\u00e1nek",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Stra\u0148\u00e1k",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL- 2009 shared task: Syntactic and semantic dependen- cies in multiple languages. In Proc. of CoNLL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dynamic feature selection for dependency parsing",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Hal Daum\u00e9 III, and Jason Eisner. 2013. Dy- namic feature selection for dependency parsing. In Proc. of EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A latent variable model of synchronous parsing for syntactic and semantic dependencies",
"authors": [
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Musillo",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Henderson, Paola Merlo, Gabriele Musillo, and Ivan Titov. 2008. A latent variable model of syn- chronous parsing for syntactic and semantic depen- dencies. In Proc. of CoNLL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multi-lingual joint parsing of syntactic and semantic dependencies with a latent variable model",
"authors": [
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Musillo",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "4",
"pages": "949--998",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multi-lingual joint pars- ing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics, 39(4):949-998.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dependency-based syntactic-semantic analysis with PropBank and NomBank",
"authors": [],
"year": null,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dependency-based syntactic-semantic analysis with PropBank and NomBank. In Proc. of CoNLL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Statistical bistratal dependency parsing",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson. 2009. Statistical bistratal depen- dency parsing. In Proc. of EMNLP.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Joint syntactic and semantic parsing with combinatory categorial grammar",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Tom M. Mitchell. 2014. Joint syntactic and semantic parsing with combina- tory categorial grammar. In Proc. of ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Llu\u00eds M\u00e0rquez i Villodre, Alessandro Moschitti, and Regina Barzilay",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Yuan Zhang, Llu\u00eds M\u00e0rquez i Villodre, Alessandro Moschitti, and Regina Barzilay. 2015. High-order low-rank tensors for semantic role label- ing. In Proc. of NAACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Joint A* CCG parsing and semantic role labelling",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Luheng He, and Luke Zettlemoyer. 2015. Joint A* CCG parsing and semantic role labelling. In Proc. of EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Joint syntactic and semantic parsing of Chinese",
"authors": [
{
"first": "Junhui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junhui Li, Guodong Zhou, and Hwee Tou Ng. 2010. Joint syntactic and semantic parsing of Chinese. In Proc. of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Two/too simple adaptations of word2vec for syntax problems",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proc. of NAACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A joint model for parsing syntactic and semantic dependencies",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Llu\u00eds",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Llu\u00eds and Llu\u00eds M\u00e0rquez. 2008. A joint model for parsing syntactic and semantic dependencies. In Proc. of CoNLL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Joint arc-factored parsing of syntactic and semantic dependencies",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Llu\u00eds",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the ACL",
"volume": "1",
"issue": "",
"pages": "219--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Llu\u00eds, Xavier Carreras, and Llu\u00eds M\u00e0rquez. 2013. Joint arc-factored parsing of syntactic and semantic dependencies. Transactions of the ACL, 1:219-230.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Building a large annotated corpus of English: The Penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large anno- tated corpus of English: The Penn treebank. Com- putational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The NomBank project: An interim report",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "Reeves",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Macleod",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Szekely",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Zielinska",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The NomBank project: An interim report. In Proc. of NAACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Rectified linear units improve restricted Boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proc. of ICML.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "MaltParser: A language-independent system for data-driven dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Atanas",
"middle": [],
"last": "Chanev",
"suffix": ""
},
{
"first": "G\u00fclsen",
"middle": [],
"last": "Eryigit",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Svetoslav",
"middle": [],
"last": "Marinov",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2007,
"venue": "Natural Language Engineering",
"volume": "13",
"issue": "",
"pages": "95--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G\u00fclsen Eryigit, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven de- pendency parsing. Natural Language Engineering, 13:95-135.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2008. Algorithms for deterministic in- cremental dependency parsing. Computational Lin- guistics, 34(4):513-553.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Non-projective dependency parsing in expected linear time",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2009. Non-projective dependency pars- ing in expected linear time. In Proc. of ACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The Proposition Bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated cor- pus of semantic roles. Computational Linguistics, 31(1):71-106.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Composition of word representations improves semantic role labelling",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Kristian",
"middle": [],
"last": "Woodsend",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Kristian Woodsend. 2014. Com- position of word representations improves semantic role labelling. In Proc. of EMNLP.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntac- tic and semantic dependencies. In Proc. of CoNLL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Joint parsing and semantic role labeling",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton and Andrew McCallum. 2005. Joint parsing and semantic role labeling. In Proc. of CoNLL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Efficient inference and structured learning for semantic role labeling",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the ACL",
"volume": "3",
"issue": "",
"pages": "29--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Kuzman Ganchev, and Dipanjan Das. 2015. Efficient inference and structured learn- ing for semantic role labeling. Transactions of the ACL, 3:29-41.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Online graph planarisation for synchronous parsing of semantic and syntactic dependencies",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Musillo",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarisation for synchronous parsing of semantic and syntactic dependencies. In Proc. of IJCAI.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A global joint model for semantic role labeling",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "2",
"pages": "161--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2008. A global joint model for se- mantic role labeling. Computational Linguistics, 34(2):161-191.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Structured training for neural network transition-based parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proc. of ACL.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Recurrent neural network regularization",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.2329"
]
},
"num": null,
"urls": [],
"raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv:1409.2329.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Parsing syntactic and semantic dependencies with two single-stage maximum entropy models",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2008. Parsing syntactic and semantic dependencies with two single-stage maxi- mum entropy models. In Proc. of CoNLL.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Multilingual dependency learning: Exploiting rich features for tagging syntactic and semantic dependencies",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Wenliang Chen, Jun'ichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Multilin- gual dependency learning: Exploiting rich features for tagging syntactic and semantic dependencies. In Proc. of CoNLL.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "End-to-end learning of semantic role labeling using recurrent neural networks",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural net- works. In Proc. of ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "\u2022 S-LEFT( ) creates a syntactic dependency with label in the reverse direction as S-RIGHT. The top of S, u, is popped. The front of B, v, is replaced by the new structure, rooted at v. The semantic transitions are similar, operating on the semantic stack. \u2022 M-SHIFT removes an item from the front of B and pushes it on M . \u2022 M-REDUCE pops an item from M . \u2022 M-RIGHT(r) creates a semantic dependency. Let u be the element at the top of M and v, the front of B. The new dependency has u as head, v as dependent, and label r. u is popped off M , and the resulting structure, rooted at u, is pushed on M . \u2022 M-LEFT(r) creates a semantic dependency with label r in the reverse direction as M-RIGHT. The buffer front, v, is replaced by the new v-rooted structure. M remains unchanged. Example of an SRL graph with an arc from predicate problem.01 to itself, filling the A2 role. Our SELF(A2) transition allows recovering this semantic dependency.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "https://github.com/clab/lstm-parser root",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Stack LSTM for joint parsing. The state illustrated corresponds to the ***-marked row in the example transition sequence in Fig. 3. where W and d are the parameters of the classifier. The transition selected at timestep t is arg max \u03c4 \u2208At",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Example of a joint parse tree fragment with vector representations shown at each node. The vectors are obtained by recursive composition of representations of head, dependent, and label vectors. Syntactic dependencies and labels are in green, semantic in blue.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Parser transitions along with the modifications to the stacks and the buffer resulting from each. Syntactic transitions are shown above, semantic below. Italic symbols denote symbolic representations of words and relations, and bold symbols indicate (learned) embeddings ( \u00a73.5) of words and relations; each element in a stack or buffer includes both symbolic and vector representations, either atomic or recursive. S represents the set of syntactic transitions, and M the set of semantic transitions.",
"html": null
},
"TABREF2": {
"content": "<table><tr><td>: Joint parsers: comparison on the CoNLL</td></tr><tr><td>2008 test (WSJ+Brown) set.</td></tr><tr><td>sitions is derived, showing the benefit of learn-</td></tr><tr><td>ing a representation for the entire algorithmic</td></tr><tr><td>state. Several other joint learning models have</td></tr><tr><td>been proposed</td></tr></table>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Comparison on the CoNLL 2009 English test set.",
"html": null
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Comparison of macro F 1 scores on the multilingual CoNLL 2009 test set.",
"html": null
}
}
}
}