|
{ |
|
"paper_id": "Q18-1043", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:10:10.749996Z" |
|
}, |
|
"title": "Exploring Neural Methods for Parsing Discourse Representation Structures", |
|
"authors": [ |
|
{ |
|
"first": "Rik", |
|
"middle": [], |
|
"last": "Van Noord", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Groningen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lasha", |
|
"middle": [], |
|
"last": "Abzianidze", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Groningen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Toral", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Groningen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Groningen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Neural methods have had several recent successes in semantic parsing, though they have yet to face the challenge of producing meaning representations based on formal semantics. We present a sequenceto-sequence neural semantic parser that is able to produce Discourse Representation Structures (DRSs) for English sentences with high accuracy, outperforming traditional DRS parsers. To facilitate the learning of the output, we represent DRSs as a sequence of flat clauses and introduce a method to verify that produced DRSs are well-formed and interpretable. We compare models using characters and words as input and see (somewhat surprisingly) that the former performs better than the latter. We show that eliminating variable names from the output using De Bruijn indices increases parser performance. Adding silver training data boosts performance even further.", |
|
"pdf_parse": { |
|
"paper_id": "Q18-1043", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Neural methods have had several recent successes in semantic parsing, though they have yet to face the challenge of producing meaning representations based on formal semantics. We present a sequenceto-sequence neural semantic parser that is able to produce Discourse Representation Structures (DRSs) for English sentences with high accuracy, outperforming traditional DRS parsers. To facilitate the learning of the output, we represent DRSs as a sequence of flat clauses and introduce a method to verify that produced DRSs are well-formed and interpretable. We compare models using characters and words as input and see (somewhat surprisingly) that the former performs better than the latter. We show that eliminating variable names from the output using De Bruijn indices increases parser performance. Adding silver training data boosts performance even further.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Semantic parsing is the task of mapping a natural language expression to an interpretable meaning representation. Semantic parsing used to be the domain of symbolic and statistical approaches (Pereira and Shieber, 1987; Zelle and Mooney, 1996; Blackburn and Bos, 2005) . Recently, however, neural methods, and in particular sequenceto-sequence models, have been successfully applied to a wide range of semantic parsing tasks. These include code generation (Ling et al., 2016) , question answering (Dong and Lapata, 2016; He and Golub, 2016) and Abstract Meaning Representation parsing (Konstas et al., 2017) . Because these models have no intrinsic knowledge of the structure (tree, graph, set) they have to produce, recent work also focused on structured decoding methods, creating neural architectures that always output a graph or a tree (Alvarez-Melis and Jaakkola, 2017; Buys and Blunsom, 2017) . These methods often outperform the more general sequence-to-sequence models but are tailored to specific meaning representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 219, |
|
"text": "(Pereira and Shieber, 1987;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 243, |
|
"text": "Zelle and Mooney, 1996;", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 268, |
|
"text": "Blackburn and Bos, 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 475, |
|
"text": "(Ling et al., 2016)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 520, |
|
"text": "(Dong and Lapata, 2016;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 540, |
|
"text": "He and Golub, 2016)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 585, |
|
"end": 607, |
|
"text": "(Konstas et al., 2017)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 841, |
|
"end": 875, |
|
"text": "(Alvarez-Melis and Jaakkola, 2017;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 876, |
|
"end": 899, |
|
"text": "Buys and Blunsom, 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper will focus on parsing Discourse Representation Structures (DRSs) proposed in Discourse Representation Theory (DRT), a wellstudied formalism developed in formal semantics (Kamp, 1984; Van der Sandt, 1992; Asher, 1993; Kamp and Reyle, 1993; Muskens, 1996; Van Eijck and Kamp 1997; Kadmon, 2001; Asher and Las-carides, 2003) , dealing with many semantic phenomena: quantifiers, negation, scope ambiguities, pronouns, presuppositions, and discourse structure (see Figure 1 ). DRSs are recursive structures and thus form a challenge for sequence-tosequence models because they need to generate a well-formed structure and not something that looks like one but is not interpretable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 193, |
|
"text": "(Kamp, 1984;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 214, |
|
"text": "Van der Sandt, 1992;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 227, |
|
"text": "Asher, 1993;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 249, |
|
"text": "Kamp and Reyle, 1993;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 264, |
|
"text": "Muskens, 1996;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 289, |
|
"text": "Van Eijck and Kamp 1997;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 303, |
|
"text": "Kadmon, 2001;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 332, |
|
"text": "Asher and Las-carides, 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 471, |
|
"end": 479, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The problem that we try to tackle bears similarities to the recently introduced task of mapping sentences to an Abstract Meaning Representation (AMR; Banarescu et al. 2013) . But there are notable differences between DRS and AMR. Firstly, DRSs contain scope, which results in a more linguistically motivated treatment of modals, quantification, and negation. Secondly, DRSs contain a substantially higher number of variable bindings (reentrant nodes in AMR terminology), which are challenging for learning (Damonte et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 172, |
|
"text": "Banarescu et al. 2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 528, |
|
"text": "(Damonte et al., 2017)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "DRS parsing was attempted in the 1980s for small fragments of English (Johnson and Klein, 1986; Wada and Asher, 1986) . Transactions of the Association for Computational Linguistics, vol. 6, pp. 619-634, 2018. Action Editor: Asli Celikyilmaz.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 95, |
|
"text": "(Johnson and Klein, 1986;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 96, |
|
"end": 117, |
|
"text": "Wada and Asher, 1986)", |
|
"ref_id": "BIBREF55" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{

"text": "Raw input: Tom isn't afraid of anything.\nSystem output of a DRS in a clausal form:\nb1 REF x1\nb1 male \"n.02\" x1\nb1 Name x1 \"tom\"\nb2 REF t1\nb2 EQU t1 \"now\"\nb2 time \"n.08\" t1\nb0 NOT b3\nb3 REF s1\nb3 Time s1 t1\nb3 Experiencer s1 x1\nb3 afraid \"a.01\" s1\nb3 Stimulus s1 x2\nb3 REF x2\nb3 entity \"n.01\" x2\nThe same DRS in a box format (linearized): b1: [x1 | male.n.02(x1), Name(x1, tom)]; b2: [t1 | time.n.08(t1), t1 = now]; b0: [ | \u00ac b3]; b3: [s1 x2 | afraid.a.01(s1), Time(s1, t1), Experiencer(s1, x1), Stimulus(s1, x2), entity.n.01(x2)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Raw input:",

"sec_num": null

},

{

"text": "Figure 1: DRS parsing in a nutshell. Given a raw text, a system has to generate a DRS in the clause format, a flat version of the standard box notation. The semantic representation formats are made more readable by using various letters for variables: the letters x, e, s, and t are used for discourse referents denoting individuals, events, states, and time, respectively, and b is used for variables denoting DRS boxes.",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 8,

"text": "Figure 1",

"ref_id": null

}

],

"eq_spans": [],

"section": "Raw input:",

"sec_num": null

},
|
{ |
|
"text": "DRS parsers based on supervised machine learning emerged later (Bos, 2008b; Le and Zuidema, 2012; Bos, 2015; Liu et al., 2018) . The objectives of this paper are to apply neural methods to DRS parsing. In particular, we are interested in answers to the following research questions (RQs):", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 75, |
|
"text": "(Bos, 2008b;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 76, |
|
"end": 97, |
|
"text": "Le and Zuidema, 2012;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 98, |
|
"end": 108, |
|
"text": "Bos, 2015;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 126, |
|
"text": "Liu et al., 2018)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. Are sequence-to-sequence models able to produce formal meaning representations (DRSs)? 2. What is better for input: sequences of characters or sequences of words; does tokenization help; and what kind of casing is best used? 3. What is the best way of dealing with variables that occur in DRSs? 4. Does adding silver data increase the performance of the neural parser? 5. What parts of semantics are learned and what parts of semantics are still challenging?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We make the following contributions to semantic parsing: 1 (a) The output of our parser consists of interpretable scoped meaning representations, 1 The code is available here:", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 147, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/ RikVN/Neural_DRS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "guaranteed by a specially designed checking tool ( \u00a73). (b) We compare different methods of representing input and output in \u00a74. (c) We show in \u00a75 that using additional, non-gold standard data can improve performance. (d) We perform a thorough analysis of the produced output and compare our methods with symbolic/ statistical approaches ( \u00a76). Kamp and Reyle, 1993) . In general, a DRS can be seen as an ordered pair A, l : B , where A is a set of presuppositional DRSs, and B a DRS with a label l. The presuppositional DRSs A can be viewed as propositions that need to be anchored in the context in order to make the main DRS B true, where presuppositions comprise anaphoric phenomena, too (Van der Sandt, 1992; Geurts, 1999; Beaver, 2002) . DRSs are either elementary DRSs or segmented DRSs. An elementary DRS is an ordered pair of a set of discourse referents and a set of conditions. There are basic conditions and complex conditions. A basic condition is a predicate applied to constants or discourse referents, whereas a complex condition can introduce Boolean operators ranging over DRSs (negation, conditionals, disjunction) . Segmented DRSs capture discourse structure by connecting two units of discourse by a discourse relation (Asher and Lascarides, 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 366, |
|
"text": "Kamp and Reyle, 1993)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 692, |
|
"end": 713, |
|
"text": "(Van der Sandt, 1992;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 727, |
|
"text": "Geurts, 1999;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 741, |
|
"text": "Beaver, 2002)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1096, |
|
"end": 1133, |
|
"text": "(negation, conditionals, disjunction)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1240, |
|
"end": 1268, |
|
"text": "(Asher and Lascarides, 2003)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Despite a long tradition of formal interest in DRT, it is only recently that textual corpora annotated with DRSs have been made available. The Groningen Meaning Bank (GMB) is a large corpus with DRS annotation for mostly short English newspaper texts (Basile et al., 2012; . The DRSs in this corpus are produced by an existing semantic parser and then partially corrected. The DRSs in the GMB are therefore not gold standard.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 272, |
|
"text": "(Basile et al., 2012;", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Corpora", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A similar corpus is the Parallel Meaning Bank (PMB), which provides DRSs for English, German, Dutch, and Italian sentences based on a parallel corpus . The PMB, too, is constructed using an existing semantic parser, but a part of it is completely manually checked and corrected (i.e., gold standard). In contrast to the GMB, the PMB involves two major additions: (a) its semantics are refined by modeling tense and using semantic tagging (Bjerva et al., 2016; , and (b) the non-logical symbols of the DRSs corresponding to concepts and semantic roles are grounded in WordNet (Fellbaum, 1998) and VerbNet (Bonial et al., 2011) , respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 459, |
|
"text": "(Bjerva et al., 2016;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 591, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 625, |
|
"text": "(Bonial et al., 2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Corpora", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "These additions make the DRSs of the PMB more fine-grained meaning representations. For this reason we choose the PMB (over the GMB) as our corpus for evaluating our semantic parser. Even though the sentences in the current release of the PMB are relatively short, they contain many difficult semantic phenomena that a semantic parser has to deal with: pronoun resolution, quantifiers, scope of modals and negation, multiword expressions, word senses, semantic roles, presupposition, tense, and discourse relations. As far as we know, we are the first group to use the PMB corpus for semantic parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Corpora", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The usual way to represent DRSs is the wellknown box format. To facilitate reading a DRS with unresolved presuppositions, it can be depicted as a network of boxes, where a nonpresuppositional (i.e., main) DRS l : B is connected to the presuppositional DRSs A with arrows. Each box comes with a unique label and has two rows. In the case of elementary DRSs, these rows contain discourse referents in the top row and conditions in the bottom row ( Figure 1) . A segmented DRS has a row with labeled DRSs and a row with discourse relations (Figure 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 446, |
|
"end": 455, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 546, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Formatting DRSs with Boxes and Clauses", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The DRS in Figure 1 consists of a main box b0 and two presuppositional boxes, b1 and b2. Note that b0 has no discourse referents but introduces negation via a single condition \u00acb3 with a nested box b3. The conditions of b3 represent unary and binary relations over discourse referents that are introduced either by b3 or the presuppositional DRSs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 19, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Formatting DRSs with Boxes and Clauses", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A clausal form is another way of formatting DRSs. It represents a DRS as a set of clauses (see Figures 1 and 2 ). This format is better suited for machine learning than the box format, as it has a simple, flat structure and facilitates partial matching of DRSs, which is useful for evaluation (van Noord et al., 2018) . Conversion from the box notation to the clausal form and vice versa is transparent: Discourse referents, conditions, and dis- course relations in the clausal form are preceded by the label of the box in which they occur. Notice that the variable letters in the semantic representations are automatically set and they simply serve for readability purposes. Throughout the experiments described in this paper, we utilize clausal form DRSs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 317, |
|
"text": "(van Noord et al., 2018)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 110, |
|
"text": "Figures 1 and 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Formatting DRSs with Boxes and Clauses", |
|
"sec_num": "2.3" |
|
}, |
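
{

"text": "A minimal illustration of how such a clausal form can be read programmatically (our own Python sketch; the function and example are not taken from the paper's released code): each clause consists of a box label, an operator, and one to three arguments.\n\ndef parse_clause(line):\n    # A clause is whitespace-separated: box label, operator, then arguments.\n    parts = line.split()\n    return parts[0], parts[1], tuple(parts[2:])\n\nclauses = [parse_clause(c) for c in [\n    'b1 REF x1',\n    'b1 male \"n.02\" x1',\n    'b1 Name x1 \"tom\"',\n    'b0 NOT b3',\n]]\n# -> [('b1', 'REF', ('x1',)), ('b1', 'male', ('\"n.02\"', 'x1')), ...]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formatting DRSs with Boxes and Clauses",

"sec_num": "2.3"

},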
|
{ |
|
"text": "We use the English DRSs from release 2.1.0 of the PMB . 2 The release suggests using the parts 00, 10, 20, and 30 as the development set, resulting in 3,998 training and 557 development instances. Basic statistics are shown in Table 1 , and the number of occurrences of some of the semantic phenomena mentioned in \u00a72.2 are given in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 57, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 234, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 339, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotated Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Because this is a rather small training set, we tune our model using 10-fold cross-validation (CV) on the training set, rather than tuning on a separate development set. This means that we will use the suggested development set as a test set (and refer to it as such). When testing on this set, we train a model on all available training data. The utilized PMB release also comes with \"silver\" data-namely, 71,308 DRSs that are only partially manually corrected. In addition, we use the DRSs from the silver data but without the manual corrections, which makes them \"bronze\" DRSs (following PMB terminology). Our experiments will initially use only the gold standard data, after which we will use the silver or bronze data to further push the score of our best systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The clausal form of a DRS needs to satisfy a set of constraints in order to correspond to a semantically interpretable DRS, that is, translatable into a first-order logic formula without free occurrences of a variable (Kamp and Reyle, 1993) . For example, all discourse referents need to be explicitly introduced with a REF clause to avoid free occurrences of variables. We implemented a clausal form checker that validates the clausal form if and only if it represents a semantically interpretable DRS. Distinguishing box variables from entity variables is crucial for the validity checking, but automatically learned clausal forms are not expected to differentiate variable types. First, the checker separately 3 The phenomena are automatically counted based on clausal forms. The counting algorithm does not guarantee the exact number for certain phenomena, though it returned the exact counts of all the phenomena on the test data except the pronoun resolution (30).", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 240, |
|
"text": "(Kamp and Reyle, 1993)", |
|
"ref_id": "BIBREF34" |
|
}

],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clausal Form Checker", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "parses each clause in the form to induce variable types based on the fixed set of comparison and DRS operators. After typing all the variables, the checker verifies whether the clauses collectively correspond to a DRS with well-formed semantics. For each box variable in a discourse relation, existence of the corresponding box inside the same segmented DRS is checked. For each entity variable in a condition, an introduction of the binder (i.e., accessible) discourse variable is found. The goal of these two steps is to prevent free occurrences of variables in DRSs. While binding the entity variables, necessary accessibility relations between the boxes are induced. In the end, the checker verifies the transitive closure of the induced accessibility relation on loops and checks existence of a unique main box of the DRS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clausal Form Checker", |
|
"sec_num": "3.2" |
|
}, |
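
{

"text": "As a highly simplified sketch of two of these constraints (our own Python illustration, not the authors' checker, which additionally types variables and verifies accessibility and a unique main box): every non-quoted argument must be either a box label that actually labels some clause or a discourse variable introduced by a REF clause.\n\ndef simple_check(clauses):\n    # clauses: list of (box, operator, args) tuples as in Figure 1\n    boxes = {box for box, _, _ in clauses}\n    introduced = {args[0] for _, op, args in clauses if op == 'REF'}\n    for box, op, args in clauses:\n        if op == 'NOT' and args[0] not in boxes:\n            return False  # negation points to a box that labels nothing\n        if op not in ('NOT', 'REF'):\n            for arg in args:\n                if not arg.startswith('\"') and arg not in boxes and arg not in introduced:\n                    return False  # free occurrence of a variable\n    return True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clausal Form Checker",

"sec_num": "3.2"

},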
|
{ |
|
"text": "The checker is applied to every automatically obtained clausal form. If a clausal form fails the test, it is considered as ill-formed and will not have a single clause matched with the gold standard when calculating the F-score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clausal Form Checker", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A DRS parser is evaluated by comparing its output DRS to a gold standard DRS using the Counter tool (van Noord et al., 2018) . Counter calculates an F-score over matching clauses. Because variable names are meaningless, obtaining the matching clauses essentially is a search for the best variable mapping between two DRSs. Counter tries to find this mapping by performing a hill-climbing search with a predefined number of restarts to avoid getting stuck in a local optimum, which is similar to the evaluation system SMATCH for AMR parsing. 4 Counter generalizes over WordNet synsets (i.e., a system is not penalized for predicting a word sense that is in the same synset as the gold standard word sense).", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 124, |
|
"text": "(van Noord et al., 2018)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 542, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
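
{

"text": "The core idea of scoring one candidate variable mapping can be sketched as follows (our simplification in Python; Counter itself additionally searches over mappings with restarted hill-climbing and generalizes over synsets):\n\nfrom collections import Counter as Bag\n\ndef clause_f1(system, gold, mapping):\n    # system, gold: lists of clauses as tuples of tokens; mapping renames\n    # system variables to gold variables (non-variable tokens pass through).\n    def rename(clause):\n        return tuple(mapping.get(tok, tok) for tok in clause)\n    sys_bag, gold_bag = Bag(map(rename, system)), Bag(gold)\n    matched = sum((sys_bag & gold_bag).values())\n    precision = matched / max(len(system), 1)\n    recall = matched / max(len(gold), 1)\n    return 2 * precision * recall / (precision + recall) if matched else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "3.3"

},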
|
{ |
|
"text": "To calculate whether there is a significant difference between two systems, we perform approximate randomization (Noreen, 1989) with \u03b1 = 0.05, R = 1,000, and F (model 1 ) > F (model 2 ) as test statistics for each individual DRS pair.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 127, |
|
"text": "(Noreen, 1989)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
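
{

"text": "A sketch of one standard formulation of this test (our own, in Python): the two models' per-DRS F-scores are randomly swapped R times, and the p-value is the fraction of shuffles whose overall difference is at least the observed one.\n\nimport random\n\ndef approximate_randomization(scores_a, scores_b, R=1000):\n    observed = abs(sum(scores_a) - sum(scores_b))\n    hits = 0\n    for _ in range(R):\n        # Swap each aligned pair of per-DRS scores with probability 0.5.\n        swapped = [pair if random.random() < 0.5 else pair[::-1]\n                   for pair in zip(scores_a, scores_b)]\n        a, b = zip(*swapped)\n        if abs(sum(a) - sum(b)) >= observed:\n            hits += 1\n    return (hits + 1) / (R + 1)  # significant if below alpha = 0.05",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "3.3"

},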
|
{ |
|
"text": "We use a recurrent sequence-to-sequence neural network (henceforth seq2seq) with two bidirectional long short-term memory (LSTM) layers and 300 nodes, implemented in OpenNMT (Klein et al., 2017) . The network encodes a sequence representation of the natural language utterance, while the decoder produces the sequences of the meaning representation. We apply dropout (Srivastava et al., 2014) between both the recurrent encoding and decoding layers to prevent overfitting, and use general attention (Luong et al., 2015) to selectively give more weight to certain parts of the input sentence. An overview of the general framework of the seq2seq model is shown in Figure 3 . During decoding we perform beam search with length normalization, which in neural machine translation (NMT) is crucial to obtaining good results (Britz et al., 2017) . We experimented with a wide range of parameter settings, of which the final settings can be found in Table 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 194, |
|
"text": "(Klein et al., 2017)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 392, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 519, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 818, |
|
"end": 838, |
|
"text": "(Britz et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 662, |
|
"end": 670, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 942, |
|
"end": 949, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Neural Architecture", |
|
"sec_num": "3.4" |
|
}, |
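
{

"text": "To illustrate why length normalization matters during beam search (an illustrative sketch; the exact penalty is a configurable implementation detail and the alpha value below is not from the paper): without normalization, every extra token adds a negative log-probability, so shorter candidates win by default.\n\ndef normalized_score(token_log_probs, alpha=0.6):\n    # token_log_probs: log-probabilities of one finished hypothesis\n    length_penalty = ((5 + len(token_log_probs)) / 6) ** alpha  # GNMT-style\n    return sum(token_log_probs) / length_penalty\n\n# Hypotheses on the beam are then ranked by normalized_score rather than\n# by their raw cumulative log-probability.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural Architecture",

"sec_num": "3.4"

},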
|
{ |
|
"text": "We opted against trying to find the best parameter settings for each individual experiment (next to impossible in terms of computing time necessary, as a single 10-fold CV experiment takes 12 hours on GPU), but selected parameter settings that showed good performance for both the initial character and word-level representations (see \u00a74 for details). The parameter search was performed using 10-fold CV on the training set. Training is stopped when there is no more improvement in perplexity on the validation set, which in our case occurred after 13-15 epochs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Architecture", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "A powerful, well-known technique in the field of NMT is to use an ensemble of models during decoding Sennrich et al., 2016a ). The resulting model averages over the predictions of the individual models, which can balance out some of the errors. In our experiments, we apply this method when decoding on the test set, but not for our experiments of 10-fold CV (this would take too much computation time). ", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 123, |
|
"text": "Sennrich et al., 2016a", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Architecture", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "This section describes the experiments we conduct regarding the data representations of the input (English sentences) and output (a DRS) during training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments with Data Representations", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We first try two (default) representations: characterlevel and word-level. Most semantic parsers use word-level representations for the input, but as a result are often dependent on pre-trained word embeddings or anonymization of the input 5 to obtain good results. Character-level models avoid this issue but might be at a higher risk of producing ill-formed output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Between Characters and Words", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Character-based model In the character-level model, the input (an English sentence) is represented as a sequence of individual characters. The output (a DRS in clause format) is linearized, with special characters indicating spaces and clause separators. The semantic roles (e.g., Agent, Theme), DRS operators (e.g., REF,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Between Characters and Words", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "NOT, POS), and deictic constants (e.g., \"now\", \"speaker\", \"hearer Word-based model In the word-level model, the input is represented as a sequence of words, using spaces as a separator (i.e., the original words are kept). The output is the same as for the character-based model, except that the character sequences are represented as words. We use pre-trained GloVe embeddings (Pennington et al., 2014) 6 to initialize the encoder and decoder representations. In the DRS representation, there are semantic roles and DRS operators that might look like English words, but should not be interpreted as such (e.g. Agent, NOT). These entities are removed from the set of pre-trained embeddings, so that the model will learn them from scratch (starting from a random initialization).", |
|
"cite_spans": [ |
|
{ |
|
"start": 377, |
|
"end": 402, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Between Characters and Words", |
|
"sec_num": "4.1" |
|
}, |
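
{

"text": "The difference between the two input representations can be illustrated as follows (a sketch; the special symbol marking word boundaries is our own choice, not necessarily the one used in the experiments):\n\ndef to_char_input(sentence, space='|||'):\n    # Character-level: split words into characters, mark word boundaries.\n    tokens = []\n    for word in sentence.split():\n        tokens.extend(word)\n        tokens.append(space)\n    return tokens[:-1]\n\nto_char_input('Tom is afraid')\n# -> ['T', 'o', 'm', '|||', 'i', 's', '|||', 'a', 'f', 'r', 'a', 'i', 'd']\n# Word-level input is simply sentence.split().",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Between Characters and Words",

"sec_num": "4.1"

},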
|
{ |
|
"text": "Hybrid representations: BPE We do not necessarily have to restrict ourselves to using only characters or words as input representation. In NMT, byte-pair encoding (BPE; Sennrich et al. 2016b) is currently the de facto standard (Bojar et al., 2017) . This is a frequency-based method that automatically finds a representation that is between character-level and word-level. It starts out with the character-level format and then does a predefined number of merges of frequently co-occurring characters. Tuning this number of merges determines whether the resulting representation is closer to character-level or word-level. We explore a large range of merges (1k-100k), while applying a corresponding set of pre-trained BPE embeddings (Heinzerling and Strube, 2018) . However, none of the BPE experiments improved on the character-level or wordlevel score (F-scores between 57 and 68), only coming close when using a small number of merges (which is very close to character-level anyway). Therefore this technique was disregarded for further experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 247, |
|
"text": "(Bojar et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 764, |
|
"text": "(Heinzerling and Strube, 2018)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Between Characters and Words", |
|
"sec_num": "4.1" |
|
}, |
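
{

"text": "For illustration, a single BPE merge step can be implemented as follows (a toy sketch of the technique; the experiments above used the standard BPE implementation and pre-trained subword embeddings): the most frequent adjacent symbol pair is fused into one new symbol, and repeating this N times moves the representation from characters (N = 0) toward words.\n\nfrom collections import Counter\n\ndef merge_step(corpus):\n    # corpus: list of symbol sequences, e.g. [['l', 'o', 'w'], ['l', 'o', 'w', 'e', 'r']]\n    pairs = Counter(p for seq in corpus for p in zip(seq, seq[1:]))\n    if not pairs:\n        return corpus\n    (a, b), _ = pairs.most_common(1)[0]\n    merged = []\n    for seq in corpus:\n        out, i = [], 0\n        while i < len(seq):\n            if seq[i:i + 2] == [a, b]:\n                out.append(a + b)\n                i += 2\n            else:\n                out.append(seq[i])\n                i += 1\n        merged.append(out)\n    return merged",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Between Characters and Words",

"sec_num": "4.1"

},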
|
{ |
|
"text": "Combined char and word There is also a fourth possible representation of the input: concatenating the character-level and word-level representations. This is uncommon in NMT because of the large size of the embedding space (hence their preference for BPE), but possible here since the PMB data contain relatively short sentences. We simply add the word embedding vector after the sequence of character-embeddings for each word in the input and still initialize these embeddings using the pre-trained GloVe embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Between Characters and Words", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The results of the experiments (10-fold CV) for finding the best representation are shown in Table 4 . Character representations are clearly better than word representations, though the word-level representation produces fewer ill-formed DRSs. Both representations are maintained for our further experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 100, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Representation results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although the combination of characters and words did lead to a small increase in performance over characters only (Table 4) , this difference is not significant. Hence, this representation is discarded in further experiments described in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 123, |
|
"text": "(Table 4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Representation results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An interesting aspect of the PMB data is the way the input sentences are tokenized. In the data set, multiword expressions are tokenized as single words, for example, \"New York\" is tokenized to \"New\u223cYork.\" Unfortunately, most off-the-shelf tokenizers (e.g., the Moses tokenizer) are not equipped to deal with this. We experiment with using Elephant (Evang et al., 2013) , a tokenizer that can be (re-)trained on individual data sets, using the tokenized sentences of the published silver and gold PMB data set. 7 Simultaneously, we are interested in whether character-level models need tokenization at all, which would be a possible advantage of this type of representing the input text. Results of the experiment are shown in Table 5 . None of the two tokenization methods yielded a significant advantage for the character-level models, so they will not be used further. The word-level models, however, did benefit from tokenization, but Elephant did not give us an advantage over the Moses tokenizer. Therefore, for (a) Standard naming $1 REF @1 $1 male \"n.02\" @1 $1 Name @1 \"tom\" $2 REF @2 $2 EQU @2 \"now\" $2 time \"n.08\" @2 $0 NOT $3 $3 REF @3 $3 Time @3 @2 $3 Experiencer @3 @1 $3 afraid \"a.01\" @3 $3 Stimulus @3 @4 $3 REF @4 $3 entity \"n.01\" @4 Figure 1 . For (c), positive numbers refer to introductions that have yet to occur, and negative numbers refer to known introductions. A zero refers to the previous introduction for that variable type. word-level models, we use Moses in our subsequent experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 369, |
|
"text": "(Evang et al., 2013)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 727, |
|
"end": 734, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1250, |
|
"end": 1258, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tokenization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "So far we did not attempt to do anything special with the variables that occur in DRSs, as we simply tried to learn them as supplied in the PMB data set. Obviously, DRSs constitute a challenge for seq2seq models because of the high number of multiple occurrences of the same variables, in particular compared with AMR. AMR parsers do not deal well with this, because the reentrancy metric (Damonte et al., 2017) is among the lowest metrics for all AMR parsers that reported them or are publicly available (van Noord and Bos, 2017b). Moreover, for AMR, only 50% of the representations contain at least one reentrant node, and only 20% of the triples in AMR contain a reentrant node (van Noord and Bos, 2017a), but for DRSs these are both virtually 100%. Although seq2seq AMR parsers could get away with ignoring variables during training and reinstating them in a post-processing step, for DRSs this is unfeasible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 389, |
|
"end": 411, |
|
"text": "(Damonte et al., 2017)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Representing Variables", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "However, because variable names are chosen arbitrarily, they will be hard for a seq2seq model to learn. We will therefore experiment with two methods of rewriting the variables to a more general representation, distinguishing between box variables and discourse variables. Our first method (absolute) traverses down the list of clauses, rewriting each new variable to a unique representation, taking the order into account. The second 3.3 bs/mos + rel + feature 76.9 3.7 74.9 2.9 Table 5 : Results of the 10-fold CV experiments regarding tokenization, variable rewriting, and casing; bs/mos means that we use no tokenization for the character-level parser, while we use Moses for the word-level parser.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 480, |
|
"end": 487, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Representing Variables", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "method (relative) is more sophisticated; it rewrites variables based on when they were introduced, inspired by the De Bruijn index (de Bruijn, 1972) . We view box variables as introduced when they are first mentioned, and we take the REF clause of a discourse referent as their introduction. The two rewriting methods are illustrated in Figure 4 . The results are shown in Table 5 . For both characters and words, the relative rewriting method significantly outperforms the absolute method and the baseline, though the absolute method produces fewer ill-formed DRSs. Interestingly, the character-level model still obtains a higher F1score compared to the word-level model, even though it produces more ill-formed DRSs. Charlevel Wordlevel Figure 5 : Learning curve for different number of gold instances for both the character-level and word-level neural parsers (10-fold CV experiment for every 500 instances).", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 148, |
|
"text": "(de Bruijn, 1972)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 337, |
|
"end": 345, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 380, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 747, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Representing Variables", |
|
"sec_num": "4.3" |
|
}, |
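
{

"text": "The absolute method can be sketched as follows (our simplified Python version; the $ and @ symbols follow Figure 4, but the paper's exact numbering scheme may differ): every new box variable becomes $0, $1, ... and every new discourse variable becomes @0, @1, ... in order of first mention, so arbitrary names such as b3 or x1 disappear from the output vocabulary. The relative method instead encodes how far back (or ahead) a variable's introduction lies.\n\ndef rewrite_absolute(clauses):\n    boxes, refs = {}, {}\n    def box(v):\n        return boxes.setdefault(v, '$%d' % len(boxes))\n    def ref(v):\n        return refs.setdefault(v, '@%d' % len(refs))\n    rewritten = []\n    for b, op, args in clauses:\n        new_args = tuple(arg if arg.startswith('\"')\n                         else box(arg) if arg.startswith('b')\n                         else ref(arg) for arg in args)\n        rewritten.append((box(b), op, new_args))\n    return rewritten",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Representing Variables",

"sec_num": "4.3"

},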
|
{ |
|
"text": "Casing is a writing device mostly used for punctuation purposes. On the one hand, it increases the set of characters (hence adding more redundant variation to the input). On the other hand, case can be a useful feature to recognize proper names because names of individuals are semantically analysed as presuppositions. Explicitly encoding uppercase with a feature could therefore prevent us from including a named-entity recognizer, often used in other semantic parsers. Although we do not expect dealing with case to be a major challenge, we try out different techniques to find an optimal balance between abstracting over input characters and parsing performance. The results, in Table 5 , show that the feature works well for the character-level model, but for the wordlevel model, it does not outperform lowercasing. These settings are used in further experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 683, |
|
"end": 690, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Casing", |
|
"sec_num": "4.4" |
|
}, |
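
{

"text": "A sketch of the casing feature (illustrative; the encoding below is our own, while OpenNMT attaches such per-token features through its word-features mechanism): the input is lowercased and each character carries a binary flag recording whether it was originally uppercase, so the case signal survives without enlarging the character vocabulary.\n\ndef with_case_feature(text):\n    return [(ch.lower(), int(ch.isupper())) for ch in text]\n\nwith_case_feature('Tom')  # -> [('t', 1), ('o', 0), ('m', 0)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Casing",

"sec_num": "4.4"

},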
|
{ |
|
"text": "Because semantic annotation is a difficult and time-consuming task, gold standard data sets are usually relatively small. This means that semantic parsers (and data-hungry neural methods in particular) can often benefit from more training data. Some examples in semantic parsing are data recombination (Jia and Liang, 2016) , paraphrasing (Berant and Liang, 2014) , or exploiting machinegenerated output (Konstas et al., 2017) . However, before we do any experiments using extra training data, we want to be sure that we can still ben-", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 323, |
|
"text": "(Jia and Liang, 2016)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 363, |
|
"text": "(Berant and Liang, 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 426, |
|
"text": "(Konstas et al., 2017)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "Table 6: F1-score and percentage of ill-formed DRSs on the test set, for the experiments with the PMB-released silver data. The scores without using an ensemble are an average of five runs of the model.\nData | Char parser F1 | Char % ill | Word parser F1 | Word % ill\nBest gold-only | 75.9 | 2.9 | 72.8 | 2.0\n+ ensemble | 77.9 | 1.8 | 75.1 | 0.9\nGold + silver | 82.9 | 1.8 | 82.7 | 1.1\n+ ensemble | 83.6 | 1.3 | 83.1 | 0.7",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments with Additional Data",

"sec_num": "5"

},
|
{ |
|
"text": "1.8 75.1 0.9 Gold + silver 82.9 1.8 82.7 1.1 + ensemble 83.6 1.3 83.1 0.7 Table 6 : F1-score and percentage of ill-formed DRSs on the test set, for the experiments with the PMB-released silver data. The scores without using an ensemble are an average of five runs of the model. efit from more gold training data. For both the character level and word level we plot the learning curve, adding 500 training instances at a time, in Figure 5 . For both models the F-score clearly still improves when using more training instances, which shows that there is at least the potential for additional data to improve the score. For DRSs, the PMB-2.1.0 release already contains a large set of silver standard data (71,308 instances), containing DRSs that are only partially manually corrected. We then train a model on both the gold and silver standard data, making no distinction between them during training. After training we take the last model and restart the training on only the gold data, in a similar process as described in Konstas et al. (2017) and van Noord and Bos (2017b) . In general, restarting the training to fine-tune the weights of the model is a common technique in NMT (Denkowski and Neubig, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1023, |
|
"end": 1044, |
|
"text": "Konstas et al. (2017)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1063, |
|
"end": 1074, |
|
"text": "Bos (2017b)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 1180, |
|
"end": 1208, |
|
"text": "(Denkowski and Neubig, 2017)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
|
{ |
|
"start": 429, |
|
"end": 437, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
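
{

"text": "The training schedule with silver data can be summarized as follows (pseudocode; train and fine_tune are hypothetical stand-ins for OpenNMT training runs, not real API calls):\n\ndef train_with_silver(gold, silver, train, fine_tune):\n    # Phase 1: train on the concatenation, with no distinction between\n    # gold and silver instances.\n    model = train(gold + silver)\n    # Phase 2: restart training on gold only to fine-tune the weights.\n    return fine_tune(model, gold)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments with Additional Data",

"sec_num": "5"

},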
|
{ |
|
"text": "We are aware that there are many methods to obtain and utilize additional data. However, our main aim is not to find the optimal method for DRS parsing, but to demonstrate that using additional data is indeed beneficial for neural DRS parsing. Because we are not further fine-tuning our model, we will show results on the test set in this section. Table 6 shows the results of adding the silver data. This results in a large increase in performance, for both the character-and word-level models. We are still reliant on manually annotated data, however, because without the gold data (so training on only the silver data), we score even lower than our baseline model (68.4 and 68.1 for the char and word parser). Similarly, we are reliant on the fine-tuning procedure, as we also score below our baseline models without it (71.6 and 71.0 for the char and word parsers, respectively).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 355, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Char parser Word parser Data F1 % ill F1 % ill Silver (Boxer-generated) 83.6 1.3 83.1 0.7 Bronze (Boxer-generated) 83.8 1.1 82.4 0.9 Bronze (NN-generated) 77.9 2.7 74.5 2.2 without ill-formed DRSs 78.6 1.6 74.9 0.9 Table 7 : Test set results of the experiments that analyze the impact of the silver data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 222, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We believe there are two possible factors that could explain why the addition of silver data results in such a large improvement: (i) the fact that the data are silver instead of bronze or (ii) the fact that a different DRS parser (Boxer, see \u00a76) is used to create the silver data instead of our own parser. We conduct an experiment to identify the impact on performance of silver versus bronze and Boxer versus our parser. The results are shown in Table 7 . Note that these experiments are performed to analyze the impact of the silver data, not to further push the score, meaning Silver (Boxergenerated) is our final model that will be compared to other approaches in \u00a76.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 456, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For factor (i), we compare the performance of the model trained on silver and bronze versions of the exact same documents (so leaving out the manual corrections). Interestingly, we score slightly higher for the character-level model with bronze than with silver (though the difference is not statistically significant), meaning that the extra manual corrections are not beneficial (in their current format). This suggests that the silver data are closer to bronze than to the gold standard.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For factor (ii), we use our own best parser (without silver data) to parse the sentences in the PMB silver data release and use that as additional training data. 8 Because the silver data contain longer and more complicated sentences than the gold data, our best parser produces more ill-formed DRSs (13.7% for char and 15.6% for word). We can either discard those instances or still maintain them for the model to learn from. For Boxer this is not an issue since only 0.3% of DRSs produced were ill-formed. We observe that a full self-training pipeline results in lower performance compared with using Boxer-produced DRSs. In fact, this does not seem to be beneficial over only using the gold standard data. Most likely, Table 8 : Test set results of our best neural models compared to two baseline models and two parsers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 722, |
|
"end": 729, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "because Boxer combines symbolic and statistical methods, it learns very different things from our neural parsers, which in turn provides more valuable information to the model. A more detailed analysis on the difference in (semantic) output is performed in \u00a76.2 and 6.3. Removing ill-formed DRSs before training leads to higher F-scores for both the char and word parsers, as well as a lower number of ill-formed DRSs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments with Additional Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this section, we compare our best neural models (with and without silver data, see Table 6 ) with two baseline systems and with two DRS parsers: AMR2DRS and Boxer. AMR2DRS is a parser that obtains DRSs from AMRs by applying a set of rules (van Noord et al., 2018) , in our case using AMRs produced by the AMR parser of van Noord and Bos (2017b) . Boxer is an existing DRS parser using a statistical combinatory categorical grammar parser for syntactic analysis and a compositional semantics based on \u03bb-calculus, followed by pronoun and presupposition resolution (Curran et al., 2007; Bos, 2008b) . SPAR is a baseline parser that outputs the same (fixed) default DRS for each input sentence. We implemented a second baseline model, SIM-SPAR, which outputs, for each sentence in the test set, the DRS of the most similar sentence in the training set. This similarity is calculated by taking the cosine similarity of the average word embedding vector (with removed stopwords) based on the GloVe embeddings (Pennington et al., 2014) . Table 8 shows the result of the comparison. The neural models comfortably outperform the baselines. We see that both our neural models outperform Boxer by a large margin when using the Boxer-labeled silver data. However, even without this dependence, the neural models perform significantly better than Boxer. It is worth noting that the character-level model significantly outperforms the word-level model, even though it cannot benefit from pre-trained word embeddings and from a tokenizer. Concurrently with our work, a neural DRS parser has been developed by Liu et al. (2018) . They use a customized neural seq2seq model that produces the DRS in three stages. It first predicts the general (deep) structure of the DRSs, after which the conditions and referents are filled in. Unfortunately, they train and evaluate their parser on annotated data from the GMB rather than from the PMB (see \u00a72). This, combined with the fact that their work is contemporaneous to the current paper, make it difficult to compare the approaches. However, we see no apparent reason why their method should not work on the PMB data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 266, |
|
"text": "(van Noord et al., 2018)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 347, |
|
"text": "Bos (2017b)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 586, |
|
"text": "(Curran et al., 2007;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 598, |
|
"text": "Bos, 2008b)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1006, |
|
"end": 1031, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 1597, |
|
"end": 1614, |
|
"text": "Liu et al. (2018)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 93, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1034, |
|
"end": 1041, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison", |
|
"sec_num": "6.1" |
|
}, |
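
{

"text": "SIM-SPAR is simple enough to sketch in full (our own illustration; embeddings is assumed to map words to 300-dimensional GloVe vectors, and train_pairs to hold (sentence, DRS) tuples):\n\nimport numpy as np\n\ndef sim_spar(sentence, train_pairs, embeddings, stopwords):\n    def avg_vec(s):\n        vecs = [embeddings[w] for w in s.lower().split()\n                if w in embeddings and w not in stopwords]\n        return np.mean(vecs, axis=0) if vecs else np.zeros(300)\n    def cosine(u, v):\n        norm = np.linalg.norm(u) * np.linalg.norm(v)\n        return float(np.dot(u, v) / norm) if norm else 0.0\n    query = avg_vec(sentence)\n    best_sentence, best_drs = max(train_pairs,\n                                  key=lambda pair: cosine(query, avg_vec(pair[0])))\n    return best_drs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison",

"sec_num": "6.1"

},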
|
{ |
|
"text": "An intriguing question is what our models actually learn, and what parts of meaning are still challenging for neural methods. We do this in two ways, by performing an automatic analysis and by doing a manual inspection on a variety of semantic phenomena. Table 9 shows an overview of the different automatic evaluation metrics we implemented, with corresponding scores of the three models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 262, |
|
"text": "Table 9", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The character-and word-level systems perform comparably in all categories except for VerbNet roles, where the character-based parser shows a clear advantage (1.6 percentage point difference). The score for WordNet synsets is similar, but the word-level model has more difficulty predicting synsets that are introduced by verbs than for nouns. It is clear that the neural models outperform Boxer consistently on each of these metrics (partly because Boxer picks the first sense by default). What also stands out is the impact of the word senses: With a perfect word sensedisambiguation module (oracle senses), large improvements can be gained for all three parsers. It is interesting to look at what errors the model makes in terms of producing ill-formed output. For both the neural parsers, only about 2% of the ill-formed DRSs are ill-formed because of a syntactic error in an individual clause (e.g., b1 Agent x1, where a fourth argument is missing), whereas all the other errors are due to a violated semantic constraint (see \u00a73.2). In other words, the produced output is a syntactically well-formed DRS but is not interpretable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "To find out how sentence length affects performance, we plot in Figure 6 the mean F-score obtained by each parser on input sentences of different lengths, from 3 to 10 words. 9 We observe that all the parsers degrade with sentence length. To identify whether any of the parsers degrades significantly more than any other, we build a regression model in which we predict the Fscore using as predictors the parser (char, word, and Boxer), the sentence length, and the number of clauses produced. According to the regression model, (i) the performance of all three systems decreases with sentence length, thus corroborating the trends shown in Figure 6 and (ii) the interaction between parser and sentence length is not significant (i.e., none of the parsers decreases significantly more than any other with sentence length). The fact that the performance of the neural parsers degrades with sentence length is not surprising, because they are based on the seq2seq architecture, and models built on this architecture for other tasks, such as machine translation, have been shown to have the same issue (Toral and S\u00e1nchez-Cartagena, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 176, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1099, |
|
"end": 1134, |
|
"text": "(Toral and S\u00e1nchez-Cartagena, 2017)", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 72, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 649, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "6.2" |
|
}, |
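|
{ |
|
"text": "This regression analysis can be reproduced along the following lines. The sketch below uses the statsmodels formula API; the file name and column names are hypothetical (one row per parser-sentence pair, with per-sentence F-scores computed by Counter):\n\nimport pandas as pd\nimport statsmodels.formula.api as smf\n\n# columns: parser (char/word/boxer), fscore, length (in words), clauses (number produced)\ndf = pd.read_csv('per_sentence_scores.csv')\n\n# main effects of parser, length, and clauses, plus a parser-by-length interaction\nmodel = smf.ols('fscore ~ C(parser) * length + clauses', data=df).fit()\nprint(model.summary())  # non-significant C(parser):length terms: no parser degrades significantly faster", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "6.2" |
|
}, |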
|
{ |
|
"text": "The automatic evaluation metrics provide overall scores but do not capture how the models perform on certain semantic phenomena present in the DRSs. Therefore, we manually inspected the test set output of the three parsers for the semantic phenomena listed in Table 2 . We here describe each phenomenon and explain how the parser output is evaluated on them.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 267, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Inspection", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "The negation & modals phenomenon covers possibility (POS), necessity (NEC), and negation (NOT). The phenomenon is considered successfully captured if an automatically produced clausal form has the clause with the modal operator and the main concept is correctly put under the scope of the modal operator. For example, to capture the negation in Figure 1 , the presence of b0 NOT b3 and b3 afraid \"a.01\" s1 is sufficient. Scope ambiguity counts nested pairs of scopal operators such as possibility (POS), necessity (NEC), negation (NOT), and implication (IMP). Pronoun resolution checks if an anaphoric pronoun and its antecedent are represented by the same discourse referent. Discourse relation & implication involves determining a discourse relation or an implication with a main concept in each of their scopes (i.e., boxes). For instance, to get the discourse relation in Figure 2 correctly, a clausal form needs to include b0 CONTINUATION b1 b5, b1 play \"v.03\" e1, and b5 sing \"v.01\" e2. Finally, the embedded clauses phenomenon verifies whether the main verb concept of an embedded clause is placed inside the propositional box (PRP). This phenomenon also covers control verbs: It checks whether a controlled argument of a subordinate verb is correctly identified as an argument of a control verb. Table 10 : Manual evaluation of the output of the three semantic parsers on several semantic phenomena. Reported numbers are accuracies.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 353, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 876, |
|
"end": 884, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1304, |
|
"end": 1312, |
|
"text": "Table 10", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Inspection", |
|
"sec_num": "6.3" |
|
}, |
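|
{ |
|
"text": "Parts of these checks can in principle be automated. A minimal sketch for the negation check (our own simplification, assuming clauses are given as token tuples; the manual inspection additionally verifies that the correct concept sits in the correct scope):\n\ndef captures_negation(clauses, concept):\n    # boxes placed under a negation operator, e.g., ('b0', 'NOT', 'b3')\n    negated = {c[2] for c in clauses if c[1] == 'NOT'}\n    # the main concept must be asserted inside one of those boxes\n    return any(c[0] in negated and c[1] == concept for c in clauses)\n\nclauses = [('b0', 'NOT', 'b3'), ('b3', 'afraid', '\"a.01\"', 's1')]\nprint(captures_negation(clauses, 'afraid'))  # True: the negation of Figure 1 is captured", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Inspection", |
|
"sec_num": "6.3" |
|
}, |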
|
{ |
|
"text": "The results of the semantic evaluation of the parsers on the test set is given in Table 10 . The character-level parser performs better than the word-level parser on all the phenomena except one. Even though both our neural parsers clearly outperformed Boxer in terms of F-score, they perform worse than Boxer on the selected semantic phenomena. Although the differences are not big, Boxer obtained the highest score for four out of five phenomena. This suggests that just the F-score is perhaps not good enough as an evaluation metric, or that the final F-score should perhaps be weighted towards certain clauses. For example, it is arguably more important to capture a negation correctly than tense. Our current metric only gives a rough indication about the contents, but not about the inferential capabilities of the meaning representation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 90, |
|
"text": "Table 10", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Inspection", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "We implemented a general, end-to-end neural seq2seq model that is able to produce well-formed DRSs with high accuracy (RQ1). Character-level models can outperform word-level models, even though they are not dependent on tokenization and pre-trained word embeddings (RQ2). It is beneficial to rewrite DRS variables to a more general representation (RQ3). Obtaining and using additional data can benefit performance as well, though it might be better to use an external parser rather than doing a full self-training pipeline (RQ4). F-score is only a rough measure for semantic accuracy: Boxer still outperformed our best neural models on a subset of specific semantic phenomena (RQ5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We think there are many opportunities for future work. Because the sentences in the PMB data set are relatively short, it makes sense to investigate seq2seq models performing well for longer texts. There are a few promising directions here that could combat the degrading performance on longer sentences. First, the Transformer model (Vaswani et al., 2017) is an interesting candidate for exploration, a state-of-the-art neural model developed for MT that does not have worse performance for longer sentences. Second, a seq2seq model that is able to first predict the general structure of the DRS, after which it can fill in the details, similar to Liu et al. (2018) , is something that could be explored. A third possibility is a neural parser that tries to build the DRS incrementally, producing clauses for different parts of the sentence individually, and then combining them to a final DRS.", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 356, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 666, |
|
"text": "Liu et al. (2018)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Concerning the evaluation of DRS parsers, we feel there are a couple of issues that could be addressed in future work. One idea is to facilitate computing F-scores tailored to specific semantic phenomena that are dubbed important, so the evaluation we performed in this paper manually could be carried out automatically. Another idea is to evaluate the application of DRSs to improve performance on other linguistic or semantic tasks in which DRSs that capture the full semantics will, presumably, have an advantage. A combination of glass-box and black-box evaluation seems a promising direction here (Bos, 2008a; van Noord et al., 2018 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 602, |
|
"end": 614, |
|
"text": "(Bos, 2008a;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 637, |
|
"text": "van Noord et al., 2018", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://pmb.let.rug.nl/data.php.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Counter ignores REF clauses in the calculation of the F-score because they are usually redundant and therefore inflate the final score(van Noord et al., 2018).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is done to keep the vocabulary small. An example is to change all proper names to NAME in both the sentence and meaning representation during training. When producing output, the original names are restored by switching NAME with a proper name found in the input sentence(Konstas et al., 2017).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
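|
{ |
|
"text": "A minimal sketch of this anonymization step (the name heuristic is our own naive assumption; real pipelines align names between the sentence and the meaning representation):\n\ndef anonymize(sentence):\n    # naive assumption: capitalized non-initial tokens are proper names\n    tokens = sentence.split()\n    names = [t for i, t in enumerate(tokens) if i > 0 and t[:1].isupper()]\n    masked = ' '.join('NAME' if t in names else t for t in tokens)\n    return masked, names\n\ndef restore(output, names):\n    for name in names:  # put the stored names back, in order of occurrence\n        output = output.replace('NAME', name, 1)\n    return output\n\nmasked, names = anonymize('Yesterday Tom met Mary')\nprint(masked)  # Yesterday NAME met NAME\nprint(restore(masked, names))  # Yesterday Tom met Mary", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |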
|
{ |
|
"text": "The Common Crawl version trained on 840 billion tokens, vector size 300.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Gold tokenization is available in the data set, but using this would not reflect practical applications of DRS parsing, as we want raw text as input for a realistic setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that we cannot apply the manual corrections, so in PMB terminology, these data are bronze instead of silver.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Shorter and longer sentences are excluded as there are fewer than 10 input sentences for any such length-for example, there are only three sentences that have two words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was funded by the NWO-VICI grant \"Lost in Translation-Found in Meaning\" (288-89-003). The Tesla K40 GPU used in this work was kindly donated to us by the NVIDIA Corporation. We also want to thank the three anonymous reviewers for their comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The Parallel Meaning Bank: Towards a multilingual corpus of translations annotated with compositional meaning representations", |
|
"authors": [ |
|
{ |
|
"first": "Lasha", |
|
"middle": [], |
|
"last": "Abzianidze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Bjerva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Evang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hessel", |
|
"middle": [], |
|
"last": "Haagsma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rik", |
|
"middle": [], |
|
"last": "Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Ludmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duc-Duy", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "242--247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik van Noord, Pierre Ludmann, Duc-Duy Nguyen, and Johan Bos. 2017. The Parallel Meaning Bank: Towards a multilingual corpus of translations annotated with compositional meaning representations. In Proceedings of the 15th Conference of the Eu- ropean Chapter of the Association for Compu- tational Linguistics: Volume 2, Short Papers, pages 242-247, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Towards universal semantic tagging", |
|
"authors": [ |
|
{ |
|
"first": "Lasha", |
|
"middle": [], |
|
"last": "Abzianidze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 12th International Conference on Computational Semantics (IWCS 2017) -Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lasha Abzianidze and Johan Bos. 2017. Towards universal semantic tagging. In Proceedings of the 12th International Conference on Computa- tional Semantics (IWCS 2017) -Short Papers, Montpellier, France. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Tree-structured decoding with doublyrecurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Alvarez-Melis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Alvarez-Melis and Tommi S. Jaakkola. 2017. Tree-structured decoding with doubly- recurrent neural networks. In Proceedings of the International Conference on Learning Repre- sentations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reference to Abstract Objects in Discourse", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. Kluwer Academic Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Logics of Conversation. Studies in natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas. Asher and Alex. Lascarides. 2003. Log- ics of Conversation. Studies in natural language processing. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Abstract Meaning Representation for sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sem- banking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Developing a large semantically annotated corpus", |
|
"authors": [ |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Evang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noortje", |
|
"middle": [], |
|
"last": "Venhuizen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3196--3200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Developing a large semantically annotated corpus. In Proceedings of the Eighth International Conference on Lan- guage Resources and Evaluation (LREC 2012), pages 3196-3200, Istanbul, Turkey.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Presupposition projection in DRT: A critical assesment", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"I." |
|
], |
|
"last": "Beaver", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "The Construction of Meaning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "23--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David I. Beaver. 2002. Presupposition projection in DRT: A critical assesment. In The Con- struction of Meaning, pages 23-43. Stanford University.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Semantic parsing via paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1415--1425", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant and Percy Liang. 2014. Seman- tic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1415-1425.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Semantic tagging with deep residual networks", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Bjerva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3531--3541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Bjerva, Barbara Plank, and Johan Bos. 2016. Semantic tagging with deep resid- ual networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3531-3541, Osaka, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Findings of the 2017 conference on machine translation (WMT17)", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujian", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varvara", |
|
"middle": [], |
|
"last": "Logacheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Negri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "169--214", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Find- ings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 169-214, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A hierarchical unification of LIRICS and VerbNet semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Corvey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Volha", |
|
"middle": [], |
|
"last": "Petukhova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 5th IEEE International Conference on Semantic Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "483--489", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Bonial, William J. Corvey, Martha Palmer, Volha Petukhova, and Harry Bunt. 2011. A hi- erarchical unification of LIRICS and VerbNet semantic roles. In Proceedings of the 5th IEEE International Conference on Semantic Comput- ing (ICSC 2011), pages 483-489.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Let's not argue about semantics", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 6th Language Resources and Evaluation Conference (LREC 2008)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2835--2840", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos. 2008a. Let's not argue about seman- tics. In Proceedings of the 6th Language Resources and Evaluation Conference (LREC 2008), pages 2835-2840, Marrakech, Morocco.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Wide-coverage semantic analysis with boxer", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Semantics in Text Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "277--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos. 2008b. Wide-coverage semantic anal- ysis with boxer. In Semantics in Text Pro- cessing. STEP 2008 Conference Proceedings, volume 1 of Research in Computational Seman- tics, pages 277-286. College Publications.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Open-domain semantic parsing with Boxer", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "301--304", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos. 2015. Open-domain semantic pars- ing with Boxer. In Proceedings of the 20th Nordic Conference of Computational Linguis- tics (NODALIDA 2015), pages 301-304.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The Groningen Meaning Bank", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Evang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noortje", |
|
"middle": [], |
|
"last": "Venhuizen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Bjerva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Handbook of Linguistic Annotation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos, Valerio Basile, Kilian Evang, Noortje Venhuizen, and Johannes Bjerva. 2017. The Groningen Meaning Bank. In Nancy Ide and James Pustejovsky, editors, Handbook of Lin- guistic Annotation. Springer Netherlands.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Massive exploration of neural machine translation architectures", |
|
"authors": [ |
|
{ |
|
"first": "Denny", |
|
"middle": [], |
|
"last": "Britz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Goldie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1442--1451", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. In Proceedings of the 2017 Conference on Empir- ical Methods in Natural Language Processing, pages 1442-1451.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Lambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the church-rosser theorem", |
|
"authors": [ |
|
{ |
|
"first": "Nicolaas", |
|
"middle": [], |
|
"last": "Govert De Bruijn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "Indagationes Mathematicae (Proceedings)", |
|
"volume": "75", |
|
"issue": "", |
|
"pages": "381--392", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicolaas Govert de Bruijn. 1972. Lambda calcu- lus notation with nameless dummies, a tool for automatic formula manipulation, with applica- tion to the church-rosser theorem. In Indaga- tiones Mathematicae (Proceedings), volume 75, pages 381-392. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Robust incremental neural semantic graph parsing", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Buys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1215--1226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Buys and Phil Blunsom. 2017. Robust incremental neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1215-1226.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Smatch: An evaluation metric for semantic feature structures", |
|
"authors": [ |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "748--752", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: An evaluation metric for semantic feature struc- tures. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Linguistically motivated large-scale NLP with C&C and Boxer", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Curran, Stephen Clark, and Johan Bos. 2007. Linguistically motivated large-scale NLP with C&C and Boxer. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 33-36, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "An incremental parser for abstract meaning representation", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Damonte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shay", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giorgio", |
|
"middle": [], |
|
"last": "Satta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "536--546", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for ab- stract meaning representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguis- tics: Volume 1, Long Papers, pages 536-546, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Stronger baselines for trustable results in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Denkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Neural Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neu- ral machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 18-27, Vancouver. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Language to logical form with neural attention", |
|
"authors": [ |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "33--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceed- ings of the 54th Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 33-43, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Representing discourse in context", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Van Eijck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Kamp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Handbook of Logic and Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "179--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan van Eijck and Hans Kamp. 1997. Repre- senting discourse in context. In Johan van Benthem and Alice ter Meulen, editors, Hand- book of Logic and Language, pages 179-240. Elsevier, MIT.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Elephant: Sequence labeling for word and sentence segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Evang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valerio", |
|
"middle": [], |
|
"last": "Basile", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Chrupa\u0142a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1422--1426", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilian Evang, Valerio Basile, Grzegorz Chrupa\u0142a, and Johan Bos. 2013. Elephant: Sequence labeling for word and sentence segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1422-1426, Seattle, Washington, USA.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "WordNet. An Electronic Lexical Database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet. An Electronic Lexical Database. The MIT Press, Cambridge, Ma., USA.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "of Current Research in the Semantics/Pragmatics interface", |
|
"authors": [ |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Geurts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bart Geurts. 1999. Presuppositions and Pronouns, volume 3 of Current Research in the Semantic- s/Pragmatics interface. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Characterlevel question answering with attention", |
|
"authors": [ |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Golub", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1598--1607", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaodong He and David Golub. 2016. Character- level question answering with attention. In Proceedings of the 2016 Conference on Empir- ical Methods in Natural Language Processing, pages 1598-1607.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Heinzerling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Data recombination for neural semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "12--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robin Jia and Percy Liang. 2016. Data recombina- tion for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 12-22.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Discourse, anaphora and parsing", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ewan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "11th International Conference on Computational Linguistics. Proceedings of Coling '86", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "669--675", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Johnson and Ewan Klein. 1986. Discourse, anaphora and parsing. In 11th International Conference on Computational Linguistics. Pro- ceedings of Coling '86, pages 669-675, Univer- sity of Bonn.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A theory of truth and semantic representation", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Kamp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Truth, Interpretation and Information", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Kamp. 1984. A theory of truth and se- mantic representation. In Jeroen Groenendijk, Theo M.V. Janssen, and Martin Stokhof, editors, Truth, Interpretation and Information, pages 1-41. FORIS, Dordrecht -Holland/ Cinnaminson -U.S.A.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "From Discourse to Logic; An Introduction to Model theoretic Semantics of Natural Language, Formal Logic and DRT", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Kamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uwe", |
|
"middle": [], |
|
"last": "Reyle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Kamp and Uwe Reyle. 1993. From Dis- course to Logic; An Introduction to Model the- oretic Semantics of Natural Language, Formal Logic and DRT. Kluwer, Dordrecht.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Open-NMT: Open-source toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Senellart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL 2017, System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Open- NMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, Sys- tem Demonstrations, pages 67-72. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Neural AMR: Sequence-to-sequence models for parsing and generation", |
|
"authors": [ |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinivasan", |
|
"middle": [], |
|
"last": "Iyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "146--157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neu- ral AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-157, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Learning compositional semantics for open domain semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Phong", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of COLING 2012", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1535--1552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phong Le and Willem Zuidema. 2012. Learning compositional semantics for open domain se- mantic parsing. Proceedings of COLING 2012, pages 1535-1552.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Latent predictor networks for code generation", |
|
"authors": [ |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Moritz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Ko\u010disk\u1ef3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fumin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Senior", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "599--609", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 599-609.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Discourse representation structure parsing", |
|
"authors": [ |
|
{ |
|
"first": "Jiangming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shay", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "429--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2018. Discourse representation struc- ture parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 429-439.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Effective approaches to attention-based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1421", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Combining montague semantics and discourse representation", |
|
"authors": [ |
|
{ |
|
"first": "Reinhard", |
|
"middle": [], |
|
"last": "Muskens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Linguistics and Philosophy", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "143--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reinhard Muskens. 1996. Combining montague semantics and discourse representation. Lin- guistics and Philosophy, 19:143-186.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Evaluating scoped meaning representations", |
|
"authors": [ |
|
{ |
|
"first": "Rik", |
|
"middle": [], |
|
"last": "Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasha", |
|
"middle": [], |
|
"last": "Abzianidze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hessel", |
|
"middle": [], |
|
"last": "Haagsma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rik van Noord, Lasha Abzianidze, Hessel Haagsma, and Johan Bos. 2018. Evaluating scoped meaning representations. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Re- sources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Dealing with co-reference in neural semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Rik", |
|
"middle": [], |
|
"last": "Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2nd Workshop on Semantic Deep Learning (SemDeep-2)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--49", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rik van Noord and Johan Bos. 2017a. Dealing with co-reference in neural semantic parsing. In Proceedings of the 2nd Workshop on Semantic Deep Learning (SemDeep-2), pages 41-49.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Neural semantic parsing by character-based translation: Experiments with abstract meaning representations", |
|
"authors": [ |
|
{ |
|
"first": "Rik", |
|
"middle": [], |
|
"last": "Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computational Linguistics in the Netherlands Journal", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "93--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rik van Noord and Johan Bos. 2017b. Neural semantic parsing by character-based transla- tion: Experiments with abstract meaning rep- resentations. Computational Linguistics in the Netherlands Journal, 7:93-108.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Computer-intensive Methods for Testing Hypotheses", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Noreen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric W. Noreen. 1989. Computer-intensive Meth- ods for Testing Hypotheses. Wiley New York.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Prolog and Natural Language Analysis. CSLI Lecture Notes 10", |
|
"authors": [ |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fernando Pereira and Stuart Shieber. 1987. Prolog and Natural Language Analysis. CSLI Lecture Notes 10. Chicago University Press, Stanford.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Presupposition projection as anaphora resolution", |
|
"authors": [ |
|
{ |
|
"first": "Rob", |
|
"middle": [ |
|
"A." |
|
], |
|
"last": "van der Sandt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Journal of Semantics", |
|
"volume": "9", |
|
"issue": "4", |
|
"pages": "333--377", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/jos/9.4.333" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rob A. Van der Sandt. 1992. Presupposition projection as anaphora resolution. Journal of Semantics, 9(4):333-377.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Edinburgh neural machine translation systems for WMT 16", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "371--376", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine trans- lation systems for WMT 16. In Proceedings of the First Conference on Machine Transla- tion: Volume 2, Shared Task Papers, volume 2, pages 371-376.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Dropout: A simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neu- ral networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V." |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neu- ral networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Pro- cessing Systems 27, pages 3104-3112. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions", |
|
"authors": [ |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Toral", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00edctor", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "S\u00e1nchez-Cartagena", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1063--1073", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antonio Toral and V\u00edctor M. S\u00e1nchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1063-1073, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "BUILDRS: An implementation of DR theory and LFG", |
|
"authors": [ |
|
{ |
|
"first": "Hajime", |
|
"middle": [], |
|
"last": "Wada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "11th International Conference on Computational Linguistics. Proceedings of Coling '86", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "540--545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hajime Wada and Nicholas Asher. 1986. BUILDRS: An implementation of DR theory and LFG. In 11th International Conference on Computational Linguistics. Proceedings of Coling '86, pages 540-545, University of Bonn.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Learning to parse database queries using inductive logic programming", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Zelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the national conference on artificial intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1050--1055", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using induc- tive logic programming. In Proceedings of the national conference on artificial intelligence, pages 1050-1055.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "A segmented DRS. Discourse relations are formatted with uppercase characters.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "The sequence-to-sequence model with word-representation input. SEP is used as a special character to separate clauses in the output.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"text": "Different methods of variable naming exemplified on the clausal form of", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"num": null, |
|
"text": "Performance of each parser for sentences of different length.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"4\">: Number of documents, sentences, and to-</td></tr><tr><td colspan=\"4\">kens for the English part of PMB release 2.1.0.</td></tr><tr><td colspan=\"4\">Note that the number of tokens is based on the</td></tr><tr><td colspan=\"4\">PMB tokenization, treating multiword expressions</td></tr><tr><td>as a single token.</td><td/><td/><td/></tr><tr><td>Phenomenon</td><td colspan=\"2\">Train Test</td><td>Silver</td></tr><tr><td>Negation & modals</td><td>442</td><td>73</td><td>17,527</td></tr><tr><td>Scope ambiguity</td><td>\u224867</td><td>15</td><td>\u22483,108</td></tr><tr><td>Pronoun resolution</td><td>\u2248291</td><td>31</td><td>\u22483,893</td></tr><tr><td>Discourse rel. & imp.</td><td>254</td><td>33</td><td>16,654</td></tr><tr><td>Embedded clauses</td><td>\u2248160</td><td colspan=\"2\">30 \u224846,458</td></tr></table>", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Counts of relevant semantic phenomena for PMB release 2.1.0.", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Parameters explored during training and testing with their final values. All other parameters have default values.", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": ") are not represented as character sequences, but treated as compound characters, meaning that REF is not treated as a sequence of R, E and F, but directly as REF. All proper names, WordNet senses, time/date expressions, and numerals are represented as character sequences.", |
|
"html": null |
|
}, |
|
"TABREF11": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: F-scores of fine-grained evaluation on the</td></tr><tr><td>test set of the three semantic parsers.</td></tr></table>", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF13": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "). Patrick Blackburn and Johan Bos. 2005. Representation and Inference for Natural Language. A First Course in Computational Semantics. CSLI.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |