|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:34:36.391482Z" |
|
}, |
|
"title": "Learning compositional structures for semantic graph parsing", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Groschwitz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Meaghan", |
|
"middle": [], |
|
"last": "Fowlie", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Utrecht University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "AM dependency parsing is a method for neural semantic graph parsing that exploits the principle of compositionality. While AM dependency parsers have been shown to be fast and accurate across several graphbanks, they require explicit annotations of the compositional tree structures for training. In the past, these were obtained using complex graphbankspecific heuristics written by experts. Here we show how they can instead be trained directly on the graphs with a neural latent-variable model, drastically reducing the amount and complexity of manual heuristics. We demonstrate that our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training, greatly facilitating the use of AM dependency parsing for new sembanks.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "AM dependency parsing is a method for neural semantic graph parsing that exploits the principle of compositionality. While AM dependency parsers have been shown to be fast and accurate across several graphbanks, they require explicit annotations of the compositional tree structures for training. In the past, these were obtained using complex graphbankspecific heuristics written by experts. Here we show how they can instead be trained directly on the graphs with a neural latent-variable model, drastically reducing the amount and complexity of manual heuristics. We demonstrate that our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training, greatly facilitating the use of AM dependency parsing for new sembanks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "It is generally accepted in linguistic semantics that meaning is compositional, i.e. that the meaning representation for a sentence can be computed by evaluating a tree bottom-up. A compositional parsing model not only reflects this insight, but has practical advantages such as in compositional generalisation (e.g. Herzig and Berant 2020), i.e. systematically generalizing from limited data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, in developing a compositional semantic parser, one faces the task of figuring out what exactly the compositional structures -i.e. the trees that link the sentence and the meaning representation -should look like. This is challenging even for expert linguists; for instance, (Copestake et al., 2001) report that 90% of the development time of the English Resource Grammar (Copestake and Flickinger, 2000) went into the development of the syntax-semantics interface.", |
|
"cite_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 307, |
|
"text": "(Copestake et al., 2001)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 412, |
|
"text": "(Copestake and Flickinger, 2000)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Compositional semantic parsers which are learned from data face an analogous problem: to train a such a parser, the compositional structures must be made explicit. However, these structures are not annotated in most sembanks. For instance, the AM (Apply-Modify) dependency parser of Groschwitz et al. (2018) uses a neural model to predict AM dependency trees, compositional structures that evaluate to semantic graphs. Their parser achieves high accuracy and parsing speed (Lindemann et al., 2020) across a variety of English semantic graphbanks. To obtain an AM dependency tree for each graph in the corpus, they use hand-written graphbank-specific heuristics. These heuristics cost significant time and expert knowledge to create, limiting the ability of the AM parser to scale to new sembanks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 307, |
|
"text": "Groschwitz et al. (2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 497, |
|
"text": "(Lindemann et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we drastically reduce the need for hand-written heuristics for training the AM dependency parser. We first present a graphbankindependent method to compactly represent the relevant compositional structures of a graph in a tree automaton. We then train a neural AM dependency parser directly on these tree automata. Our code is available at github.com/coli-saar/am-parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate the consistency and usefulness of the learned compositional structures in two ways. We first evaluate the accuracy of the trained AM dependency parsers, across four graphbanks, and find that it is on par with an AM dependency parser that was trained on the hand-designed compositional structures of . We then analyze the compositional structures which our algorithm produced, and find that they are linguistically consistent and meaningful. We expect that our methods will facilitate the design of compositional models of semantics in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Compositional semantic graph parsers other than AM dependency parsers, like Artzi et al. (2015) , Figure 1 : AM dep-trees and graphs for the fairy that begins to glow. We usually write our example AM dep-trees without alignments as in (b). We include node names where helpful, as in (c), where e.g. b is labeled begin. Peng et al. (2015) and Chen et al. (2018) , use CCG and HRG based grammars to parse AMR and EDS (Flickinger et al., 2017) . They use a combination of heuristics, hand-annotated compositional structures and sampling to obtain training data for their parsers, in contrast to our joint neural technique. None of these approaches use slot names that carry meaning; to the best of our knowledge this work is the first to learn them from data. Fancellu et al. (2019) use DAG grammars for compositional parsing of Discourse Representation Structures (DRS). Their algorithm for extracting the compositional structure of a graph is deterministic and graphbank-independent, but comes at a cost: for example, rules for heads require different versions depending on how often the head is modified, reducing the reusability of the rule. Maillard et al. (2019) and Havrylov et al. (2019) learn compositional, continuous-space neural sentence encodings using latent tree structures. Their tasks are different: they learn to predict continousspace embeddings; we learn to predict symbolic compositional structures. Similar observations hold for self-attention (Vaswani et al., 2017; Kitaev and Klein, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 95, |
|
"text": "Artzi et al. (2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 337, |
|
"text": "Peng et al. (2015)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 360, |
|
"text": "Chen et al. (2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 440, |
|
"text": "(Flickinger et al., 2017)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 757, |
|
"end": 779, |
|
"text": "Fancellu et al. (2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1143, |
|
"end": 1165, |
|
"text": "Maillard et al. (2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1192, |
|
"text": "Havrylov et al. (2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1463, |
|
"end": 1485, |
|
"text": "(Vaswani et al., 2017;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1486, |
|
"end": 1509, |
|
"text": "Kitaev and Klein, 2018)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 106, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "begin S ARG0 O[S] ARG1 G-begin fairy G-fairy elf G-elf glow S ARG0 G-glow charm S ARG0 O ARG1 G-charm charm O ARG0 S ARG1 G-charmP", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Compositional semantic graph parsing methods do not predict a graph directly, but rather predict a compositional structure which in turn determines the graph. Groschwitz et al. (2018) represent the compositional structure of a graph with AM dependency trees (AM dep-trees for short) like the one in Fig. 1a . It describes the way the meanings of the words -the graph fragments in Fig. 2 -combine to form the semantic graph in Fig. 1c , here an AMR (Banarescu et al., 2013) . The AM dep-tree edges are labeled with graph-combining operations, taken from the Apply-Modify (AM) algebra (Groschwitz et al., 2017; Groschwitz, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 183, |
|
"text": "Groschwitz et al. (2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 472, |
|
"text": "(Banarescu et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 608, |
|
"text": "(Groschwitz et al., 2017;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 626, |
|
"text": "Groschwitz, 2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 306, |
|
"text": "Fig. 1a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 386, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 433, |
|
"text": "Fig. 1c", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "AM dependency parsing", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Graphs are built out of fragments called graph constants (Fig. 2 ). Each graph constant has a root, marked with a rectangular outline, and may have special node markers called sources (Courcelle and Engelfriet, 2012) , drawn in red, which mark the empty slots where other graphs will be inserted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 216, |
|
"text": "(Courcelle and Engelfriet, 2012)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 64, |
|
"text": "(Fig. 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "AM dependency parsing", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In Fig. 1a , the APP O operation plugs the root of G-glow into the O source of G-begin. Because G-begin and G-glow both have an S-source, APP O merges these nodes, creating a reentrancy, i.e. an undirected cycle, and yielding Fig. 1d , which is in turn attached at S to the root of G-fairy by MOD S . APP fills a source of a head with an argument while MOD uses a source of a modifier to connect it to a head; both operations keep the root of the head.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Fig. 1a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 233, |
|
"text": "Fig. 1d", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "AM dependency parsing", |
|
"sec_num": "3" |
|
}, |
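
{

"text": "To make APP and MOD concrete, here is a minimal Python sketch (our own toy encoding for illustration, not the am-parser implementation; all class and function names are invented). It mimics how APP O plugs the root of G-glow into the O source of G-begin, merging the shared S source into a reentrancy, and how MOD S then attaches the result to the root of G-fairy. Node ids double as labels in this toy example.\n\nclass Constant:\n    def __init__(self, root, edges, sources):\n        self.root = root              # root node id\n        self.edges = set(edges)       # (node, label, node) triples\n        self.sources = dict(sources)  # source name -> node id\n\ndef app(head, src, arg):\n    # APP_src: plug the root of arg into the src slot of head; source\n    # names shared by head and arg are merged, creating reentrancies.\n    rename = {arg.root: head.sources[src]}\n    for s, n in arg.sources.items():\n        if s in head.sources:\n            rename[n] = head.sources[s]\n    edges = {(rename.get(a, a), lab, rename.get(b, b)) for (a, lab, b) in arg.edges}\n    sources = {s: n for s, n in head.sources.items() if s != src}\n    return Constant(head.root, head.edges | edges, sources)\n\ndef mod(head, src, modifier):\n    # MOD_src: use the src slot of the modifier to attach it to the\n    # root of head, keeping the root and sources of the head.\n    rename = {modifier.sources[src]: head.root}\n    edges = {(rename.get(a, a), lab, rename.get(b, b)) for (a, lab, b) in modifier.edges}\n    return Constant(head.root, head.edges | edges, dict(head.sources))\n\nG_begin = Constant('b', {('b', 'ARG0', 's'), ('b', 'ARG1', 'o')}, {'S': 's', 'O': 'o'})\nG_glow = Constant('g', {('g', 'ARG0', 's')}, {'S': 's'})\nG_fairy = Constant('f', set(), {})\nresult = mod(G_fairy, 'S', app(G_begin, 'O', G_glow))\nprint(sorted(result.edges))\n# [('b', 'ARG0', 'f'), ('b', 'ARG1', 'o'), ('o', 'ARG0', 'f')]\n\nAfter APP O , begin and glow share a single S node (the reentrancy of Fig. 1d); MOD S then identifies that node with the root of G-fairy, as in Fig. 1c.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "AM dependency parsing",

"sec_num": "3"

},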
|
{ |
|
"text": "Types The [S] annotation at the O-source of G-begin in Fig. 2 is a request as to what the type of the O argument of G-begin should be. The type of a graph is the set of its sources with their request annotations, so the request [S] means that the source set of the argument must be {S}. Because this is true of G-glow, the AM dependency tree is well-typed; otherwise the tree could not be evaluated to a graph. Thus, the graph constants lexically specify the semantic valency of each word as well as reentrancies due to e.g. control.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 61, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "AM dependency parsing", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "If a graph has no sources, we say it has the empty type [ ]; if a source in a graph printed here has no annotation, it is assumed to have the empty request (i.e. its argument must have no sources).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AM dependency parsing", |
|
"sec_num": "3" |
|
}, |
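
{

"text": "The type discipline can be sketched in a few lines (again our own simplified encoding): a type is a map from source names to requests, a request is itself a type, and APP is well-typed iff the argument's type equals the request at the source being filled.\n\ndef well_typed_app(head_type, src, arg_type):\n    # APP_src is allowed iff src is open in the head and the argument\n    # has exactly the type requested at src\n    return src in head_type and arg_type == head_type[src]\n\nbegin_type = {'S': {}, 'O': {'S': {}}}  # [S, O[S]], as in G-begin\nglow_type = {'S': {}}                   # [S], as in G-glow\nassert well_typed_app(begin_type, 'O', glow_type)  # request [S] is met\nassert not well_typed_app(begin_type, 'O', {})     # [ ] would violate [S]\n\nThis mirrors the check that makes the AM dep-tree in Fig. 1a well-typed.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "AM dependency parsing",

"sec_num": "3"

},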
|
{ |
|
"text": "Parsing Groschwitz et al. (2018) use a neural supertagger and dependency parser to predict scores for graph constants and edges respectively. Computing the highest scoring well-typed AM dep-tree is NP-hard; we use their fixed-tree approximate decoder here.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 32, |
|
"text": "Groschwitz et al. (2018)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AM dependency parsing", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The central challenge of compositional methods lies in the fact that the compositional structures are not provided in the graphbanks. Existing AM parsers (Groschwitz et al., 2018; Lindemann et al., , 2020 use hand-built heuristics to extract AM dep-trees for supervised training from the graphs in the graphbank. These heuristics require extensive expert work, including graphbank-specific decisions for source allocations and graphbank-and phenomenon-specific patterns to extract type requests for reentrancies. In this section we present a simpler yet more complete method for obtaining the basic structure of an AM dep-tree for a given semantic graph G (for decomposing the graph), with much reduced reliance on heuristics. We will learn meaningful source names jointly with training the parser in \u00a75 and \u00a76.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 179, |
|
"text": "(Groschwitz et al., 2018;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 204, |
|
"text": "Lindemann et al., , 2020", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decomposition algorithm", |
|
"sec_num": "4" |
|
}, |
|
|
{ |
|
"text": "where the nodes N G are arbitrary objects (in the examples here we use lowercase letters), r G \u2208 N G is the root, E G \u2286 N G \u00d7N G is a set of directed edges, and L G is the labelling function for the nodes and edges. For example in Fig. 3a , the node g is labeled \"glow\". The node identities are not relevant for graph identity or evaluation measures, but allow us to refer to specific nodes during decomposition. We formalize AM dep-trees as similar quadruples. Note that our example graphs are all AMRs, but our algorithms apply unchanged to all graphbanks", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 238, |
|
"text": "Fig. 3a", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Decomposition algorithm", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Let us first consider the case where the semantic graph G has no reentrancies, like in Fig. 3a . The first step in obtaining the AM dep-tree for G is to obtain the basic shape of the constants. We let each graph constant contain exactly one labeled node. Each edge belongs to the constant of exactly one node. The edges in the constant of a node are called its blob (Groschwitz et al., 2017) ; the blobs partition the edge set of the graph. For example, the blobs of the AMR in Fig. 3a are g plus the 'ARG0' edge, t plus the 'mod' edge, and f . We normalise edges so that they point away from the node to whose blob they belong, like in Fig. 3b , where the 'mod' edge is reversed and grouped with the node t to match P-tiny in Fig. 4 . We add an -of suffix to the label of reversed edges. From here on, we assume all graph edges to be normalised this way.", |
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 391, |
|
"text": "(Groschwitz et al., 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 94, |
|
"text": "Fig. 3a", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 485, |
|
"text": "Fig. 3a", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 637, |
|
"end": 644, |
|
"text": "Fig. 3b", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 733, |
|
"text": "Fig. 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic transformation to AM dep-trees", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Heuristics for this partition of edges into blobs are simple yet effective. Thus, this is the only part of this method where we still rely on graphbankspecific heuristics. (We use the same blob heuristics as in our experiments).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic transformation to AM dep-trees", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Once the decision of which edge goes in which blob is made, we obtain canonical constants, which are single node constants using placeholder source names and the empty request at every source; see e.g. P-glow in Fig. 4 (P for 'placeholder'). Placeholder source names are graphspecific source names: for a given argument slot in a constant, let n be the node that eventually fills it in G; we write n for the placeholder source in that slot. For example in the AM dep-tree in Fig. 3c the source f in P-glow ( Fig. 4 ) gets filled by node f in the AMR in Fig. 3b . These placeholder sources are unique within the graph, allowing us to track source names through the AM dep-tree. When we restrict ourselves to the canonical constants, in a setting without reentrancies, the compositional structure is fully determined by the structure of the graph:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 218, |
|
"text": "Fig. 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 482, |
|
"text": "Fig. 3c", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 514, |
|
"text": "Fig. 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 553, |
|
"end": 560, |
|
"text": "Fig. 3b", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic transformation to AM dep-trees", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Lemma 4.1. For a graph G without reentrancies, given a partition of G into blobs, there is exactly one AM dep-tree C G with canonical constants that evaluates to G.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic transformation to AM dep-trees", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We call this AM dep-tree the canonical AM tree Fig. 3c shows the canonical AM tree for the graph in Fig. 3b , using the canonical constants in Fig. 4 . The canonical AM tree uses the same nodes and root as G, and essentially the same edges, but all edges point away from the root, forming a tree. Each node is labeled with its canonical constant. Each edge n \u2212 \u2192 m \u2208 E C is labeled APP m if the corresponding edge in the graph has the same direction, and is labeled MOD n if there is instead an edge m \u2212 \u2192 n in G.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 54, |
|
"text": "Fig. 3c", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 107, |
|
"text": "Fig. 3b", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 149, |
|
"text": "Fig. 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic transformation to AM dep-trees", |
|
"sec_num": "4.1" |
|
}, |
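
{

"text": "As a sketch of this construction (our own code, assuming a reentrancy-free graph whose edges have already been normalised to point away from their blobs), the canonical AM tree can be read off by a single traversal from the root:\n\nfrom collections import defaultdict\n\ndef canonical_am_tree(root, edges):\n    # edges: directed (n, m) pairs of the normalised graph\n    nbrs = defaultdict(list)\n    for n, m in edges:\n        nbrs[n].append((m, 'APP'))  # same direction as the graph edge\n        nbrs[m].append((n, 'MOD'))  # traversed against its direction\n    tree, seen, stack = [], {root}, [root]\n    while stack:\n        n = stack.pop()\n        for m, kind in nbrs[n]:\n            if m not in seen:\n                seen.add(m)\n                # APP_m if the graph edge agrees with the dependency\n                # edge n -> m, MOD_n if the graph edge is m -> n\n                label = 'APP_' + m if kind == 'APP' else 'MOD_' + n\n                tree.append((n, label, m))\n                stack.append(m)\n    return tree\n\n# The graph of Fig. 3b: g (glow) with an ARG0 edge to f (fairy), and\n# t (tiny) with a normalised mod-of edge to f; the root is g.\nprint(canonical_am_tree('g', {('g', 'f'), ('t', 'f')}))\n# [('g', 'APP_f', 'f'), ('f', 'MOD_f', 't')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Basic transformation to AM dep-trees",

"sec_num": "4.1"

},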
|
|
{ |
|
"text": "Finding AM dep-trees for graphs with reentrancies, like in Fig. 6a , is more challenging. To solve the problem in its generality, we first unroll the graph as in Fig. 6b We then obtain a canonical AM-tree C U for the unrolled graph U as in \u00a74.1 (see Fig. 6c ), but REF-n nodes fill n-sources; e.g. x has an incoming APP f edge here. C U evaluates to U , not to G; we obtain an AM dep-tree that evaluates to G through a process called resolving the reentrancies, which removes all REF-nodes and instead expresses the reentrancies with the AM type system. Fig. 6e shows the result T of applying this resolution process to C U in Fig. 6c . In T , the s and g sources of the graph P -and (see Fig. 5 ) each have a request [f] that signals that the f sources of P-sparkle and P-glow are still open when these graphs combine with P -and, yielding the partial Fig. 6d . Since identical sources merge in the AM algebra, Fig. 6d has a single f-source slot. Into this slot, P-fairy is inserted to yield the original graph G in Fig. 6a , and we have obtained the reentrancy without using a REF-node. f is now a child of a in T ; we call a the resolution target of f , RT (f ). In general the resolution target of a node n is the lowest common ancestor of n and all nodes labeled REF-n. Thus, to resolve the graph, we (a) add the necessary type requests to account for sources remaining open until they are merged at the resolution target and (b) make each node a dependent of its resolution target and remove all REF-nodes. Algorithm 1 describes this procedure. It uses the idea of an nresolution path, which is a path between a node n or a REF-n node and its resolution target. In Fig. 6c , there are two f -resolution paths: one in blue between f and its resolution target a, and one in green between the REF-f node x and its resolution target a. Further, \u03c4 (n) is the type of the graph constant in T for a node n and \u03b2(n) is the type of the result of evaluating the subtree below n in T .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 66, |
|
"text": "Fig. 6a", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 169, |
|
"text": "Fig. 6b", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 257, |
|
"text": "Fig. 6c", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 561, |
|
"text": "Fig. 6e", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 634, |
|
"text": "Fig. 6c", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 689, |
|
"end": 695, |
|
"text": "Fig. 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 853, |
|
"end": 860, |
|
"text": "Fig. 6d", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 919, |
|
"text": "Fig. 6d", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 1017, |
|
"end": 1024, |
|
"text": "Fig. 6a", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 1671, |
|
"end": 1678, |
|
"text": "Fig. 6c", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reentrancies and types", |
|
"sec_num": "4.2" |
|
}, |
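
{

"text": "The resolution target can be computed directly from its definition as a lowest common ancestor; a small sketch (our own helper, not the paper's code):\n\ndef resolution_target(parent, node, refs):\n    # parent: child -> parent map of the dep-tree; refs: the REF-node\n    # copies of node. RT(node) is the lowest common ancestor of node\n    # and all of its REF-nodes.\n    def ancestors(n):\n        path = [n]\n        while n in parent:\n            n = parent[n]\n            path.append(n)\n        return path\n    paths = [ancestors(r) for r in [node] + refs]\n    common = set(paths[0]).intersection(*[set(p) for p in paths[1:]])\n    return next(a for a in paths[0] if a in common)\n\n# The tree of Fig. 6c: a has children s and g, s has child f, and g\n# has the REF-f child x.\nparent = {'s': 'a', 'g': 'a', 'f': 's', 'x': 'g'}\nprint(resolution_target(parent, 'f', ['x']))  # 'a'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reentrancies and types",

"sec_num": "4.2"

},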
|
{ |
|
"text": "Algorithm 1: Reentrancy resolution 1 T \u2190 the canonical AM-tree C U of an unrolling U of G; 2 R \u2190 {n \u2208 N G | \u2203 REF-n node in U }; 3 while R = \u2205: 4 Pick a y \u2208 R s.t. there is no x \u2208 R, x = y, with y on an x-resolution path; 5 for p \u2208 y-resolution paths: 6 for n APP \u2212 \u2212 \u2192 m \u2208 p:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reentrancies and types", |
|
"sec_num": "4.2" |
|
}, |
|
|
{ |
|
"text": "applies. Since the subtree rooted at f evaluates to a constant with empty type, no actual changes are made here (\u03b2(y) can be non-trivial from resolution paths handled previously). For the two upper edges a APPs \u2212 \u2212 \u2192 s and a APPg \u2212 \u2212 \u2192 g, Line 10 applies, adding f to the requests at s and g in the constant at a. In Line 11, f gets moved up to become a child of its resolution target a and in Line 12 the REF-f node x gets removed, yielding T in Fig. 6e . Algorithm 1 is correct in the following precise sense: Theorem 1. Let G be a graph, let U be an unrolling of G, let C U be the canonical AM-tree of U , and let T be the result of applying Algorithm 1 to C U . Then T is a well-typed AM dep-tree that evaluates to G iff for all y \u2208 N G , for all y-resolution paths p in C,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 454, |
|
"text": "Fig. 6e", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reentrancies and types", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "1. the bottom-most edge n \u2212 \u2192 m of p (i.e. m is y or labeled REF-y) does not have a MOD label, and 2. for all y-resolution paths p in C, if n MOD \u2212 \u2212 \u2192 m \u2208 p, n, m = y, then there is a directed path in G from n to y.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reentrancies and types", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Condition (1) captures the fact that moving MOD edges in the graph changes the evaluation result (the modifier would attach at a different node) and Condition (2) the fact that modifiers are not allowed to add sources to the type of the head they modify.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reentrancies and types", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Algorithm 1 does not yield all possible AM deptrees; in Appendix B, we present an algorithm that yields all possible AM dep-trees (with placeholder sources) for a graph. However, we find in practice that Algorithm 1 almost always finds the best linguistic analysis; i.e. reasons to deviate from Algorithm 1 are rare (we estimate that this affects about 1% of nodes and edges in the AM dep-tree). We leave handling these rare cases to future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reentrancies and types", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To obtain an unrolled graph U , we use Algorithm 2. The idea is to simply expand G through breadth-first search, creating REF-nodes when we encounter a node a second time. We use separate queues F and B for forward and backward traversal of edges, allowing us to avoid traversing edges backwards wherever possible, since that would yield MOD edges in the canonical AM-tree C U , which can be problematic for the conditions of Theorem 1. And indeed, we can show that whenever there is an unrolled graph U satisfying the conditions of Theorem 1, Algorithm 2 returns one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unrolling the graph", |
|
"sec_num": "4.3" |
|
}, |
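
{

"text": "A simplified sketch of the unrolling idea behind Algorithm 2 (our own code; the real algorithm works on labeled edges, which we omit, and leaves the queueing order of each node's incident edges open). Edges are expanded breadth-first; the forward queue F is always served before the backward queue B, and a fresh REF-n copy is created whenever a node n is encountered a second time:\n\nfrom collections import deque\n\ndef unroll(root, edges):\n    edges = list(edges)\n    out, inc = {}, {}\n    for i, (n, m) in enumerate(edges):\n        out.setdefault(n, []).append(i)\n        inc.setdefault(m, []).append(i)\n    F, B = deque(), deque()\n    def enqueue(n):\n        F.extend((i, n) for i in out.get(n, []))  # traverse forwards\n        B.extend((i, n) for i in inc.get(n, []))  # traverse backwards\n    seen, used, U, k = {root}, set(), [], 0\n    enqueue(root)\n    while F or B:\n        i, src = (F or B).popleft()  # prefer forward traversal\n        if i in used:\n            continue\n        used.add(i)\n        n, m = edges[i]\n        other = m if src == n else n\n        if other in seen:            # second encounter: REF copy\n            k += 1\n            ref = 'REF-' + other + str(k)\n            U.append((src, ref) if src == n else (ref, src))\n        else:\n            seen.add(other)\n            enqueue(other)\n            U.append((n, m))\n    return U\n\n# The graph of Fig. 6a: and-node a with arguments s and g, which share\n# the argument f; the second visit to f becomes the REF-f node.\nprint(unroll('a', [('a', 's'), ('a', 'g'), ('s', 'f'), ('g', 'f')]))\n# [('a', 's'), ('a', 'g'), ('s', 'f'), ('g', 'REF-f1')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unrolling the graph",

"sec_num": "4.3"

},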
|
{ |
|
"text": "Algorithm 2 does not specify the order in which the incident edges of each node n are added to the add e to E U where e is just like e except with x in place of n 18 return U queues, leaving an element of choice. However, we find that nearly all of these choices are unified later in the resolution process; meaningful choices are rare. For example in Fig. 6b , f and x may be switched, but Algorithm 1 always yields the AM dep-tree in Fig. 6e . In practice, we execute Algorithm 2 with arbitrary queueing order, and follow it with Algorithm 1. The AM dep-tree we obtain is guaranteed to be a decomposition of the original graph whenever one exists:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 359, |
|
"text": "Fig. 6b", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 443, |
|
"text": "Fig. 6e", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Unrolling the graph", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Theorem 2. Let G be a graph partitioned into blobs. If there is a well-typed AM dep-tree T , using that blob partition, that evaluates to G, then Algorithm 2 (with any queueing order) and Algorithm 1 yield such a tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unrolling the graph", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We have now seen how, for any graph G, we obtain a unique AM dependency tree T . This tree represents the compositional structure of G, but it still contains placeholder source names. We will now show how to automatically choose source names. These names should be consistent across the trees for different sentences; this yields reusable graph constants, which capture linguistic generalizations and permit more accurate parsing. But the source names must also remain consistent within each tree to ensure that the tree still evaluates correctly to G; for instance, if we replace the placeholder source f in P-glow in Fig. 6e by O, but we replace f in P -and by S, then the AM dep-tree would not be well-typed because the request is not satisfied. We therefore proceed in two steps. In this section, we represent all internally consistent source assignments compactly with a tree automaton. In \u00a76, we then learn to select globally reusable source names jointly with training the neural parser.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 619, |
|
"end": 626, |
|
"text": "Fig. 6e", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Tree automata. A (bottom-up) tree automaton (Comon et al., 2007) is a device for compactly describing a language (set) of trees. It processes a tree bottom-up, starting at the leaves, and nondeterministically assigns states from a finite set to the nodes. A rule in a tree automaton has the general shape f (q 1 , . . . , q n ) \u2192 q. If the automaton can assign the states q 1 , . . . , q n to the children of a node \u03c0 with node label f , this rule allows it to assign the state q to \u03c0. The automaton accepts a tree if it can assign a final state to the root node. Tree automata can be seens as generalisation of parse charts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 64, |
|
"text": "(Comon et al., 2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
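
{

"text": "A minimal bottom-up tree automaton in code (our own illustrative encoding): a rule f(q 1 , ..., q n ) \u2192 q is keyed by a node label and the tuple of child states, and a tree is accepted if some final state is derivable at its root.\n\nfrom itertools import product\n\ndef accepts(rules, finals, tree):\n    # rules: {(label, (q1, ..., qn)): set of parent states}\n    # tree: nested (label, child, ...) tuples\n    def states(t):\n        label, kids = t[0], t[1:]\n        result = set()\n        # nondeterministically combine one state per child\n        for combo in product(*[states(k) for k in kids]):\n            result |= rules.get((label, combo), set())\n        return result\n    return bool(states(tree) & finals)\n\nrules = {('a', ()): {'qa'}, ('f', ('qa', 'qa')): {'qf'}}\nprint(accepts(rules, {'qf'}, ('f', ('a',), ('a',))))  # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tree automata for source names",

"sec_num": "5"

},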
|
{ |
|
"text": "General construction. Given an AM dependency tree T with placeholders, we construct a tree automaton that accepts all well-typed variants of T with consistent source assignments. More specifically, let S be a finite set of reusable source names; we will use S = {S, O, M} here, evoking subject, object, and modifier. The automaton will keep track of source name assignments, i.e. of partial functions \u03c6 from placeholder source names into S. Its rules will ensure that the functions \u03c6 assign source names consistently.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We start by binarizing T into a binary tree B, whose leaves are the graph constants in T and whose internal nodes correspond to the edges of T ; the binarized tree for the dependency tree in Fig. 7a is shown in Fig. 7b . We then construct a tree automaton A B that accepts binarized trees which are isomorphic to B, but whose node labels have been replaced by graph constants and operations with reusable source names. The states of A B are of the form \u03c0, \u03c6 , where \u03c6 is a source name assignment and \u03c0 is the address of a node in B. Node addresses \u03c0 \u2208 N * are defined recursively: the root has the empty address , and the i-th child of a node at address \u03c0 has address \u03c0i. The final states are all states with \u03c0 = , indicating that we have reached the root. Figure 7 : (a) AM dep-tree with placeholder sources for the graph in Fig. 1c, (b) its binarization B and (c) example automaton run (states in green).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 198, |
|
"text": "Fig. 7a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 218, |
|
"text": "Fig. 7b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 757, |
|
"end": 765, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 826, |
|
"end": 838, |
|
"text": "Fig. 1c, (b)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "P-fairy:[ ] P -begin:[f, g[f]] P-glow:[f] MODf APPg (a) MOD f P-fairy APP g P -begin P-glow (b) MOD S , {} G-fairy 0, {} APP O 1, g \u2192O f \u2192S G-begin 10, g \u2192O f \u2192S G-glow 11, {f \u2192S} (c)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Rules. The automaton A B has two kinds of rules. Leaf rules choose injective source name assignments for constants; there is one rule for every possible assignment at each constant. That is, for every graph constant H at an address \u03c0 in B, the automaton A B contains all rules of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "G \u2192 \u03c0, \u03c6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where \u03c6 is an injective map from the placeholder sources in H to S, and G is the graph constant identical to H except that each placeholder source s in H has been replaced by \u03c6(s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
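
{

"text": "For a constant with k placeholder sources, the leaf rules thus correspond exactly to the injective maps from its placeholders into S, which a sketch can enumerate with permutations (our own helper name):\n\nfrom itertools import permutations\n\ndef leaf_assignments(placeholders, S):\n    # one assignment phi per leaf rule G -> <pi, phi>\n    for image in permutations(S, len(placeholders)):\n        yield dict(zip(placeholders, image))\n\n# the constant for begin has placeholders f and g; with S = {S, O, M}\n# this yields 3 * 2 = 6 leaf rules\nprint(list(leaf_assignments(['f', 'g'], ['S', 'O', 'M'])))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tree automata for source names",

"sec_num": "5"

},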
|
{ |
|
"text": "For example, the automaton for Fig. 7b contains the following rule:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 38, |
|
"text": "Fig. 7b", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "G-begin \u2192 00, {g \u2192 O, f \u2192 S}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Note that this rule uses the node label G-begin with the reusable source names, not the graph constant P -begin in B with the placeholders.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In addition, operation rules percolate source assignments from children to parents. Let APP x for some placeholder source x be the operation at address \u03c0 in B. Then A B contains all rules of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "APP \u03c6 1 (x) ( \u03c00, \u03c6 1 , \u03c01, \u03c6 2 ) \u2192 \u03c0, \u03c6 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "as long as \u03c6 1 and \u03c6 2 are identical where their domains overlap, i.e. they assign consistent source names to the placeholders. The rule passes \u03c6 1 on to its parent. The assignments in \u03c6 2 are either redundant, because of overlap with \u03c6 1 , or they are no longer relevant because they were filled by operations further below in the tree. The MOD case works out similarly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the example, A B contains the rule", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "APP O ( 10, \u03c6 b , 11, \u03c6 g ) \u2192 1, \u03c6 b", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where \u03c6 b = {g \u2192 O, f \u2192 S} and \u03c6 g = {f \u2192 S}, because \u03c6 b and \u03c6 g agree on f. A complete accepting run of the automaton is shown in Fig. 7c .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 139, |
|
"text": "Fig. 7c", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
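
{

"text": "The agreement condition behind these rules is a simple check on the overlap of the two assignments' domains; a sketch (our own helper name), using the \u03c6 b and \u03c6 g of the example:\n\ndef consistent(phi1, phi2):\n    # identical wherever both assignments are defined\n    return all(phi2[s] == v for s, v in phi1.items() if s in phi2)\n\nphi_b = {'g': 'O', 'f': 'S'}  # state at the begin leaf\nphi_g = {'f': 'S'}            # state at the glow leaf\nassert consistent(phi_b, phi_g)           # agree on f: rule applies\nassert not consistent(phi_b, {'f': 'O'})  # conflicting name for f",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tree automata for source names",

"sec_num": "5"

},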
|
{ |
|
"text": "The automaton A B thus constructed accepts the binarizations of all well-typed AM dependency trees with sources in S that match T .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree automata for source names", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As a final step, we train the neural parser of Groschwitz et al. (2018) directly on the tree automata. For each position i in the sentence, the parser predicts a score c (G, i) for each graph constant G, and for each pair i, j of positions and operation , it predicts an edge score c i \u2212 \u2192 j .", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 71, |
|
"text": "Groschwitz et al. (2018)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint learning of compositional structure and parser", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The tree automata are factored the same way, in that they have one rule per graph constant and per dependency edge. As a result, we get a oneto-one correspondence between parser scores and automaton rules when aligning automata rules to words via the words' alignments to graph nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint learning of compositional structure and parser", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We thus take the neural parser scores as rule weights c (r) for rules r in the automaton. In a weighted tree automaton, the weight of a tree is defined as the product of the weights of all rules that built it. The inside score I of the tree automaton is the sum of the weights of all the trees it accepts. Computing this sum naively would be intractable, but the inside score can be computed efficiently with dynamic programming. Our training objective is to maximize the sum of the log inside scores of all automata in the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint learning of compositional structure and parser", |
|
"sec_num": "6" |
|
}, |
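
{

"text": "A sketch of the inside computation (our own encoding; the automata of \u00a75 are acyclic, so a memoised recursion suffices):\n\nfrom functools import lru_cache\nfrom math import prod\n\ndef inside(rules, finals):\n    # rules: list of (child_states, parent_state, weight) triples,\n    # where weight plays the role of the neural rule score c(r)\n    by_parent = {}\n    for kids, parent, w in rules:\n        by_parent.setdefault(parent, []).append((kids, w))\n    @lru_cache(maxsize=None)\n    def I(q):\n        # sum over all rules deriving q of c(r) times the child insides\n        return sum(w * prod(I(k) for k in kids)\n                   for kids, w in by_parent.get(q, []))\n    return sum(I(q) for q in finals)\n\n# two competing leaf rules at one address, one shared leaf, one root rule each\nrules = [((), 'q1a', 0.6), ((), 'q1b', 0.3), ((), 'q2', 0.5),\n         (('q1a', 'q2'), 'root', 1.0), (('q1b', 'q2'), 'root', 1.0)]\nprint(inside(rules, ['root']))  # 0.6*0.5 + 0.3*0.5 = 0.45",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Joint learning of compositional structure and parser",

"sec_num": "6"

},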
|
{ |
|
"text": "The arithmetic structure of computing the inside scores is complex and varies from automaton to automaton, which would make batching difficult. We solve this with the chain rule as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint learning of compositional structure and parser", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\u2207 \u03b8 log I = 1 I \u2207I = 1 I r\u2208A \u2202 \u2202c (r) I \u2207 \u03b8 c (r) = 1 I r\u2208A \u03b1 (r) \u2207 \u03b8 c (r) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint learning of compositional structure and parser", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where \u03b8 are the parameters of the neural parser, which determine c(r), and \u03b1 (r) is the outer weight of the rule r (Eisner, 2016) , i.e. the total weight of trees that use r divided by c(r). The outer weight can be effectively computed with the inside-outside algorithm (Baker, 1979) . This occurs outside of the gradient, so we do not need to backpropagate into it. Since the scores c (r) are direct outputs of the neural parser, their gradients can be batched straightforwardly. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 129, |
|
"text": "(Eisner, 2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 283, |
|
"text": "(Baker, 1979)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint learning of compositional structure and parser", |
|
"sec_num": "6" |
|
}, |
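
{

"text": "The identity \u2202 log I / \u2202c(r) = \u03b1(r)/I can be checked numerically on a toy automaton (a self-contained sketch; the two accepted trees are written out by hand):\n\nimport math\n\ndef I(c):\n    # inside score of an automaton accepting two trees:\n    # r3 over leaves (r1, r2), and r4 over leaves (r1p, r2)\n    return c['r3'] * c['r1'] * c['r2'] + c['r4'] * c['r1p'] * c['r2']\n\nc = {'r1': 0.6, 'r1p': 0.3, 'r2': 0.5, 'r3': 1.0, 'r4': 1.0}\n# outer weight of r2: total weight of trees using r2, divided by c(r2)\nalpha_r2 = c['r3'] * c['r1'] + c['r4'] * c['r1p']\neps = 1e-6\nnumeric = (math.log(I(dict(c, r2=c['r2'] + eps))) - math.log(I(c))) / eps\nprint(numeric, alpha_r2 / I(c))  # both approx 2.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Joint learning of compositional structure and parser",

"sec_num": "6"

},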
|
{ |
|
"text": "We evaluate parsing accuracy on the graphbanks DM, PAS, and PSD from the SemEval 2015 shared task on Semantic Dependency Parsing (SDP, Oepen et al. (2015) ) and on the AMRBank LDC2017T10 (Banarescu et al., 2013) . We follow in the choice of neural architecture, in particular using BERT (Devlin et al., 2019) embeddings, and in the choice of decoder, hyperparameters and pre-and postprocessing (we train the model of \u00a76 for 100 instead of 40 epochs, since it is slower to converge than supervised training). When a graph G is non-decomposable using our blob partition, i.e. if there is no well-typed AM dep-tree T that evaluates to G, and so the condition of Theorem 2 does not hold, then we remove that graph from the training set. (This does not affect coverage at evaluation time.) This occurs rarely, affecting e.g. about 1.6% of graphs in the PSD training set. Like , we use the heuristic AMR alignments of (Groschwitz et al., 2018) . These alignments can yield multi-node constants. In those cases, we first run the algorithm of Section 4 to obtain an AM tree with placeholder source names, and then consolidate those constants that are aligned to the same word into one constant, effectively collapsing segments of the AM tree into a single constant. We then construct the tree automata of Section 5 as normal.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 154, |
|
"text": "(SDP, Oepen et al. (2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 211, |
|
"text": "(Banarescu et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 308, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 937, |
|
"text": "(Groschwitz et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "We consider three baselines. Each of these chooses a single tree for each training instance from the tree automata and performs supervised training. The random trees baseline samples a tree for each sentence from its automaton, uniformly at random. In the random weights baseline, we fix a random weight for each graph constant and edge label, globally across the corpus, and select the highestscoring tree for each sentence. The EM weights baseline instead optimizes these global weights with the inside-outside algorithm. Table 1 compares the baselines and the joint neural method. Random trees perform worst -consistency across the corpus matters. The difference between random weights and EM is suprisingly small, despite the EM algorithm converging well. The joint neural learning outperforms the baselines on all graphbanks; we analyze this in \u00a7 8. We also experimented with different numbers of sources, finding 3 to work best for DM, PAS and AMR, and 4 for PSD (all results in Appendix C). Table 2 compares the accuracy of our joint model to and to the state of the art on the respective graphbanks. Our model is competitive with the state of the art on most graphbanks. In particular, our parsing accuracy is on par with Lindemann et al. (2019), who perform supervised training with hand-crafted heuristics. This indicates that our model learns appropriate source names.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 531, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 998, |
|
"end": 1005, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Grahbank-specific pre-and processing. The pre-and postprocessing steps of (Lindemann et al., 2019) we use still rely on two graphbank-specific heuristics, that directly relate to AM depenency trees: in PSD, it includes a simple but effective step to make coordination structures more compatible with the specific flavor of application and modification of AM dependency trees. In AMR it includes a step to remove some edges related to coreference (a non-compositional source of reentrancy).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "We include in brackets the results without those two preprocessing steps. The drop in performance for PSD indicates that while for the most part our method is graphbank-independent, not all shapes of graphs are equally suited for AM dependency-parsing and some preprocessing to bring the graph 'into shape' can still be important. For AMR, keeping the co-reference based edges leads to AM trees that resolve those reentrancies with the AM type system. That is, the algorithm 'invents' ad-hoc compositional explanations for a non-compositional phenomenon, yielding graph constants with type annotations that do not generalize well. The corresponding drop in performance indicates that extending AM dependency parsing to handle coreference will be an important future step when parsing AMR; some work in that direction has already been undertaken (Anikina et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 845, |
|
"end": 867, |
|
"text": "(Anikina et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "As AM parsing is inherently interpretable, we can explore linguistic properties of the learned graph constants and trees. We find that the neural method makes use of both syntax and semantics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We compute for each sentence in the training set the best tree from its tree automaton, according to the neural weights of the best performing epoch. We then sample trees from this set for handanalysis (see Appendix A), to examine whether the model learned consistent sources for subjects and objects. We find that while the EM method uses highly consistent graph constants and AM operations, the neural method, which has access to the strings, sacrifices some graph constant and operation consistency in favour of syntactic consistency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Syntactic Subjects and Objects. In the active sentence The fairy charms the elf, the phrase the fairy is the syntactic subject and the elf the syntactic object. In the passive The elf is charmed (by the fairy), the phrase the elf is now the syntactic subject, even though in both sentences, the fairy is the charmer and the elf the charmee. Similarly, the fairy is the syntactic subject in the intransitive sentence The fairy glows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Intra-Phenomenon Consistency. For both the EM and neural method, we found completely consistent source allocations for active transitive verbs in all four sembanks. These source allocations were also the overwhelming favourite graph constants for two-argument predicates (72-92%), and the most common sources used by Apply operations (94-98%). For example, in AMR, the graph constant template in Fig. 8a appears 26,653 times in the neural parser output. 74% of these used sources x = S 1 and y = S 2 (from S = {S 1 , S 2 , S 3 }). All active transitive sentences in our sample used this source allocation, so we call this the active graph constant (e.g. G-charm in Fig. 2 ) and refer to the sources S 1 and S 2 as S and O respectively, for subject and object. All four sembanks showed this kind of consistency; when we refer to S and O sources below, we mean whichever two sources displayed the same behaviour as S 1 and S 2 in AMR.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 396, |
|
"end": 403, |
|
"text": "Fig. 8a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 671, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "All four graphbanks are also highly consistent in their modifiers: classical modifiers such as adjectives are nearly universally adjoined with one consistent source -we refer to it as M -and MOD M is the overwhelming favourite (90-99%) for MOD operations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Cross-Phenomenon Consistency. We call a parser syntactically consistent if its syntactic subjects fill the S slot, regardless of their semantic role. A syntactically consistent parser would acquire the AMR in Fig. 8c from the active sentence by the analysis in Fig. 8b , and from the passive sentence by the analysis in Fig. 8d , with the passive constant G-charmP from Fig. 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 216, |
|
"text": "Fig. 8c", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 268, |
|
"text": "Fig. 8b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 327, |
|
"text": "Fig. 8d", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 376, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The neural parser is syntactically consistent: in all sembanks, it uses the same source S for syntactic subjects in passives as for actives. EM, conversely, prefers to use the same graph constants for active and passives, flipping the APP edges to produce syntactically inconsistent trees as in Fig. 8e . Single-argument predicates are also syntactically consistent in the neural model, using S for subjects and O for objects, while EM picks one source. The heuristics in have passive constants, but use them only when forced to, e.g. when coordinating active and passive.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 302, |
|
"text": "Fig. 8e", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Finally, we compute the entropy of the graph constants for the best trees of the training set as G f (g) ln f (G), where f (G) is the frequency of constant G in the trees.The entropies are between 2 and 3 nats, but are consistently lower for EM than the neural method, by 0.031 to 0.079 nats. Considering that the neural method achieves higher parsing accuracies, using the most common graph constants and edges possible evidently is not always optimal for performance. The syntactic regularities exploited by the neural method may contribute to its improved performance. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis", |
|
"sec_num": "8" |
|
}, |
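
{

"text": "A sketch of this entropy computation (our own helper; f(G) is the relative frequency of each constant in the best trees):\n\nfrom collections import Counter\nfrom math import log\n\ndef constant_entropy(constants):\n    counts = Counter(constants)\n    total = sum(counts.values())\n    # -sum over G of f(G) ln f(G), in nats\n    return -sum((n / total) * log(n / total) for n in counts.values())\n\nprint(constant_entropy(['G-charm', 'G-charm', 'G-glow', 'G-fairy']))\n# about 1.04 nats",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linguistic Analysis",

"sec_num": "8"

},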
|
{ |
|
"text": "In this work, we presented a method to obtain the compositional structures for AM dependency parsing that relies much less on graphbank-specific heuristics written by experts. Our neural model learns linguistically meaningful argument slot names, as shown by our manual evaluation; in this regard, our model learns to do the job of the linguist. High parsing performance across graphbanks shows that the learned compositional structures are also well-suited for practical applications, promising easier adaptation of AM dependency parsing to new graphbanks. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "To sample trees, we compute for each sentence in the training set the best tree from its tree automaton, according to the neural weights of the best performing epoch. This ensures the AM trees evaluate to the correct graph. We then sample trees from this set for hand-analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To get relevant sentences, we sampled 5-to-15word sentences with graph constants from the following six categories:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Transitive verbs: graph constants with a labeled root and two arguments with edges labelled as in As explained in the main text, we define the active constants as those with the most common source allocation, and the passive constants as those with the active source allocation flipped. We sampled both active and passive source allocations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Verbs with one argument: Graph constants just like the transitive ones but lacking one of the arguments. There are four of these, given both source allocations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Generally these graph constants are used for more than just verbs; for each of the six categories we sampled until we had ten relevant sentences. We visualised the AM trees and categorised the phenomena, for example active or passive verbs, nominalised verbs, imperatives, relative clauses, gerund modifiers, and so forth.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To answer the question of whether the parser used consistent constants for active and passive transitive sentences, we sampled until we had ten sentences with active or passive main verbs. For the single-argument verbs, we also looked at nominalised verbs, modifiers, and so forth. (Sampling and visualisation scripts will be available together with the rest of our code on GitHub.) B An algorithm to obtain all AM dep-trees for a graph", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let G be a graph partitioned into blobs. Let U G be the set of unrolled graphs for G that can be obtained by Algorithm 2 by varying the queue order. Let further M G be the set of results of Algorithm 3 below for every input AM dep-tree T = C U for U \u2208 U G and every choice of set M as specified in the algorithm. Algorithm 3 switches the order of two nodes m and k, making k the head of the subtree previously headed by m. This change of head is only possible when the incoming edge of m is labeled MOD (for APP, the change of head changes the evaluation result). It also requires a MOD edge between m and k; an APP edge with this type of swap would lead to a non-well-typed graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
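|
{ |
|
"text": "Structurally, the swap performed by Algorithm 3 can be sketched as follows; this minimal Python fragment only performs the tree surgery and omits the type and request bookkeeping of the actual algorithm:\n\nclass Node:\n    def __init__(self, name):\n        self.name = name\n        self.children = []  # list of (edge_label, Node) pairs\n\ndef swap_mod_edge(n, m, k):\n    # Assumes edges n -MOD-> m -MOD-> k; makes k the head of the\n    # subtree previously headed by m.\n    assert ('MOD', m) in n.children and ('MOD', k) in m.children\n    m.children.remove(('MOD', k))\n    n.children.remove(('MOD', m))\n    k.children.append(('MOD', m))  # m now hangs below k\n    n.children.append(('MOD', k))  # k takes m's former place under n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B An algorithm to obtain all AM dep-trees for a graph", |
|
"sec_num": null |
|
}, |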
|
{ |
|
"text": "Finally, let R G be the set of results of Algorithm 4 for every input AM dep-tree T \u2208 M G and any valid choice of R and RT (valid as described in the algorithm). Algorithm 4 is like Algorithm 1 for reentrancy resolution, but can have resolution targets RT (n) that are higher in the tree than the lowest common ancestor of n and the REF-n nodes. Further, Algorithm 4 uses the same methodology to also move nodes that do not need resolution to become descendents of a 'resolution target' higher in the tree (i.e. R here can now also contain nodes for which no REF node exists).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
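|
{ |
|
"text": "The tree surgery shared by the resolution algorithms, moving a subtree up to its resolution target and deleting the REF placeholder nodes, can be sketched like this (illustrative Python only; the request updates of Algorithm 4 are omitted):\n\nclass Node:\n    def __init__(self, name):\n        self.name = name\n        self.children = []  # list of (edge_label, Node) pairs\n\ndef delete_ref_nodes(node, y):\n    # Recursively drop REF-y placeholder leaves standing in for y.\n    node.children = [(l, c) for l, c in node.children\n                     if c.name != 'REF-' + y.name]\n    for _, c in node.children:\n        delete_ref_nodes(c, y)\n\ndef resolve(root, y, rt, parent):\n    # Move the subtree rooted at y up to be an APP_y daughter of rt\n    # (unless rt is y itself), then delete all REF-y nodes.\n    if rt is not y:\n        parent[y].children = [(l, c) for l, c in parent[y].children if c is not y]\n        rt.children.append(('APP_' + y.name, y))\n        parent[y] = rt\n    delete_ref_nodes(root, y)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B An algorithm to obtain all AM dep-trees for a graph", |
|
"sec_num": null |
|
}, |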
|
{ |
|
"text": "Then the following Theorem 1 holds:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Algorithm 3: Modify-edge swapping Theorem 1. Let G be a graph partitioned into blobs, and let T G be the set of all well-typed AM dep-trees with placeholder sources, using that blob partition, that evaluate to G. Then if T G = \u2205, all AM dep-trees in R G are either not well-typed or do not evaluate to G. If however T G = \u2205, then", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "R G = T G .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
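|
{ |
|
"text": "In display form, writing out the inequality in the second condition (without it the two cases would coincide), the theorem reads:\n\n\\newtheorem{theorem}{Theorem}\n\\begin{theorem}\nLet $G$ be a graph partitioned into blobs, and let $\\mathcal{T}_G$ be the set of all well-typed AM dep-trees with placeholder sources, using that blob partition, that evaluate to $G$. If $\\mathcal{T}_G = \\emptyset$, then every AM dep-tree in $\\mathcal{R}_G$ is either not well-typed or does not evaluate to $G$. If $\\mathcal{T}_G \\neq \\emptyset$, then $\\mathcal{R}_G = \\mathcal{T}_G$.\n\\end{theorem}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B An algorithm to obtain all AM dep-trees for a graph", |
|
"sec_num": null |
|
}, |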
|
{ |
|
"text": "# sources DM PAS PSD AMR 2 92.2 91.9 75.6 74.3 3 94.5 94.8 82.7 76.5 4 94.4 94.7 83.4 75.9 6 92.3 93.6 80.1 73.4 Table 2 : Development set accuracies of the neural method for different numbers of source names.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 120, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Sampling Method for hand analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 AMR F-scores are Smatch scores \u2022 DM, PAS and PSD: We compute labeled F-score with the evaluation toolkit that was developed for the SDP shared task:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/ semantic-dependency-parsing/toolkit", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We use the standard train/dev/test split for all corpora", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 AMR corpus available through https://amr. isi.edu/download.html (requires LDC license)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 SDP corpora available through https: //catalog.ldc.upenn.edu/LDC2016T10", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |
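|
{ |
|
"text": "The core of the labeled F-score is simple to state; the following minimal Python sketch mirrors the metric, while the official toolkit additionally handles details such as top nodes:\n\ndef labeled_f1(gold_edges, pred_edges):\n    # Each edge is a (head, dependent, label) triple.\n    gold, pred = set(gold_edges), set(pred_edges)\n    correct = len(gold & pred)\n    precision = correct / len(pred) if pred else 0.0\n    recall = correct / len(gold) if gold else 0.0\n    if precision + recall == 0.0:\n        return 0.0\n    return 2 * precision * recall / (precision + recall)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |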
|
{ |
|
"text": "Number of source names. We experimented with different numbers of source names in the joint neural method (Table 2) . Mostly, three source names were most effective, except for PSD, where four were most effective. Two source names are not enough to model many common phenomena (e.g. ditransitive verbs, coordination of verbs); graphs containing these phenomena cannot be decomposed with two sources and are removed from the training set, reducing parsing accuracy. The higher performance of PSD with four sources may stem from PSD using flat coordination structures which require more source names; although this is also true for AMR where four source names are not beneficial. The drop with six source names may come from the fact that the latent space grows rapidly with more sources, making it harder to learn consistent source assignments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 115, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |
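|
{ |
|
"text": "The filtering effect can be made explicit: a graph stays in the training set only if at least one of its decompositions fits into the source budget. A hypothetical Python sketch (the source_names attribute is an illustrative stand-in, not part of our codebase):\n\ndef usable_for_training(decompositions, num_sources):\n    # decompositions: candidate AM trees for one graph, each exposing\n    # the set of source names it uses.\n    return any(len(tree.source_names) <= num_sources\n               for tree in decompositions)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
}, |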
|
{ |
|
"text": "Hyperparameters. See Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "C Additional Details", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the anonymous reviewers as well as Lucia Donatelli, Pia Wei\u00dfenhorn and Matthias Lindemann for their thoughtful comments. This research was in part funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project KO 2916/2-2, and by the Dutch Research Council (NWO) as part of the project Learning Meaning from Structure (VI.Veni.194.057).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " Table 3 : Common hyperparameters used in all experiments (the random trees, random weights and EM weights baselines use 40 epochs since they converge faster). For a complete description of the neural architecture, see and its supplementary materials.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 8, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Predicting coreference in Abstract Meaning Representations", |
|
"authors": [ |
|
{ |
|
"first": "Tatiana", |
|
"middle": [], |
|
"last": "Anikina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tatiana Anikina, Alexander Koller, and Michael Roth. 2020. Predicting coreference in Abstract Mean- ing Representations. In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference, pages 33-38, Barcelona, Spain (online). Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Broad-coverage CCG Semantic Parsing with AMR", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG Semantic Parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Trainable grammars for speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Baker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "The Journal of the Acoustical Society of America", |
|
"volume": "65", |
|
"issue": "S1", |
|
"pages": "132--132", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1121/1.2017061" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. K. Baker. 1979. Trainable grammars for speech recognition. The Journal of the Acoustical Society of America, 65(S1):S132-S132.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Abstract Meaning Representation for Sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline", |
|
"authors": [ |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Bevilacqua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rexhina", |
|
"middle": [], |
|
"last": "Blloshmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In Proceedings of AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Accurate SHRG-based semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yufei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weiwei", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "408--418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yufei Chen, Weiwei Sun, and Xiaojun Wan. 2018. Ac- curate SHRG-based semantic parsing. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 408-418, Melbourne, Australia. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Tree Automata techniques and applications", |
|
"authors": [ |
|
{ |
|
"first": "Hubert", |
|
"middle": [], |
|
"last": "Comon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Dauchet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Gilleron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florent", |
|
"middle": [], |
|
"last": "Jacquemard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Lugiez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophie", |
|
"middle": [], |
|
"last": "Tison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Tommasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "L\u00f6ding", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hubert Comon, Max Dauchet, R\u00e9mi Gilleron, Flo- rent Jacquemard, Denis Lugiez, Sophie Tison, Marc Tommasi, and Christof L\u00f6ding. 2007. Tree Au- tomata techniques and applications. published on- line -http://tata.gforge.inria.fr/.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An opensource grammar development environment and broad-coverage english grammar using HPSG", |
|
"authors": [ |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Copestake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Second conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ann Copestake and Dan Flickinger. 2000. An open- source grammar development environment and broad-coverage english grammar using HPSG. In Proceedings of the Second conference on Language Resources and Evaluation (LREC).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "An algebra for semantic construction in constraint-based grammars", |
|
"authors": [ |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Copestake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of the 39th ACL.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Graph Structure and Monadic Second-Order Logic, a Language Theoretic Approach", |
|
"authors": [ |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Courcelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joost", |
|
"middle": [], |
|
"last": "Engelfriet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruno Courcelle and Joost Engelfriet. 2012. Graph Structure and Monadic Second-Order Logic, a Lan- guage Theoretic Approach. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Inside-outside and forwardbackward algorithms are just backprop (tutorial paper)", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Workshop on Structured Prediction for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Eisner. 2016. Inside-outside and forward- backward algorithms are just backprop (tutorial pa- per). In Proceedings of the Workshop on Structured Prediction for NLP, pages 1-17.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Semantic graph parsing with recurrent neural network DAG grammars", |
|
"authors": [ |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Fancellu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sorcha", |
|
"middle": [], |
|
"last": "Gilroy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2769--2778", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1278" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Federico Fancellu, Sorcha Gilroy, Adam Lopez, and Mirella Lapata. 2019. Semantic graph parsing with recurrent neural network DAG grammars. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2769- 2778, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Transition-based semantic dependency parsing with pointer networks", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Gonz\u00e1lez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "G\u00f3mez-Rodr\u00edguez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7035--7046", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.629" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Carlos G\u00f3mez- Rodr\u00edguez. 2020. Transition-based semantic dependency parsing with pointer networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7035-7046, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Open SDP 1.2. LIN", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angelina", |
|
"middle": [], |
|
"last": "Ivanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Oepen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Flickinger, Jan Haji\u010d, Angelina Ivanova, Marco Kuhlmann, Yusuke Miyao, Stephan Oepen, and Daniel Zeman. 2017. Open SDP 1.2. LIN-", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "DAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "DAT/CLARIN digital library at the Institute of For- mal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Methods for taking semantic graphs apart and putting them back together again", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Groschwitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Groschwitz. 2019. Methods for taking seman- tic graphs apart and putting them back together again. Ph.D. thesis, Macquarie University and Saar- land University.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A constrained graph algebra for semantic parsing with AMRs", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Groschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meaghan", |
|
"middle": [], |
|
"last": "Fowlie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IWCS 2017 -12th International Conference on Computational Semantics -Long papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Groschwitz, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2017. A constrained graph algebra for semantic parsing with AMRs. In IWCS 2017 -12th International Conference on Computa- tional Semantics -Long papers.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "AMR Dependency Parsing with a Typed Semantic Algebra", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Groschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Lindemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meaghan", |
|
"middle": [], |
|
"last": "Fowlie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2018. AMR Dependency Parsing with a Typed Semantic Algebra. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Cooperative learning of disjoint syntax and semantics", |
|
"authors": [ |
|
{ |
|
"first": "Serhii", |
|
"middle": [], |
|
"last": "Havrylov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Germ\u00e1n", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1118--1128", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1115" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Serhii Havrylov, Germ\u00e1n Kruszewski, and Armand Joulin. 2019. Cooperative learning of disjoint syn- tax and semantics. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1118-1128, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Establishing strong baselines for the new decade: Sequence tagging, syntactic and semantic parsing with BERT", |
|
"authors": [ |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinho", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "The Thirty-Third International Flairs Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Han He and Jinho Choi. 2020. Establishing strong baselines for the new decade: Sequence tagging, syntactic and semantic parsing with BERT. In The Thirty-Third International Flairs Conference.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Spanbased semantic parsing for compositional generalization", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Herzig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Herzig and Jonathan Berant. 2020. Span- based semantic parsing for compositional general- ization.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Constituency parsing with a self-attentive encoder", |
|
"authors": [ |
|
{ |
|
"first": "Nikita", |
|
"middle": [], |
|
"last": "Kitaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2676--2686", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1249" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Compositional semantic parsing across graphbanks", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Lindemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Groschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4576--4585", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1450" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Lindemann, Jonas Groschwitz, and Alexan- der Koller. 2019. Compositional semantic parsing across graphbanks. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4576-4585, Florence, Italy. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Fast semantic parsing with welltypedness guarantees", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Lindemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Groschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3929--3951", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.323" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Lindemann, Jonas Groschwitz, and Alexan- der Koller. 2020. Fast semantic parsing with well- typedness guarantees. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 3929-3951, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Jointly learning sentence embeddings and syntax with unsupervised tree-LSTMs", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Maillard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Natural Language Engineering", |
|
"volume": "", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Maillard, Stephen Clark, and Dani Yogatama. 2019. Jointly learning sentence embeddings and syntax with unsupervised tree-LSTMs. Natural Lan- guage Engineering, 25(4).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Semeval 2015 task 18: Broad-coverage semantic dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Oepen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov\u00e1, Dan Flickinger, Jan Haji\u010d, and Zde\u0148ka Ure\u0161ov\u00e1. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency pars- ing. In Proceedings of the 9th International Work- shop on Semantic Evaluation (SemEval 2015).", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A synchronous hyperedge replacement grammar based approach for AMR parsing", |
|
"authors": [ |
|
{ |
|
"first": "Xiaochang", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linfeng", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 19th Conference on Computational Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement gram- mar based approach for AMR parsing. In Proceed- ings of the 19th Conference on Computational Lan- guage Learning.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaiser", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6000--6010", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefine- dukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Interna- tional Conference on Neural Information Processing Systems, NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Smatch: an evaluation metric for semantic feature structures", |
|
"authors": [ |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "748--752", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
} |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Compositional semantic parsing across graphbanks", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Lindemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Groschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4576--4585", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1450" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Lindemann, Jonas Groschwitz, and Alexan- der Koller. 2019. Compositional semantic parsing across graphbanks. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4576-4585, Florence, Italy. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "AM dep-tree with word alignments. The dashed lines connect tokens to their graph constants, and arrows point from heads to arguments, labeled by the operation that puts the graphs together." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Graph constants" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "The tiny fairy glows." |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Figure 4: Canonical constants." |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Resolved AM dep-tree T for (a); changes with respect to (c) in purple Analysis for The fairy sparkles and glows." |
|
}, |
|
"FIGREF6": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "if m is y or labeled REF-y: 8 Add \u03b2(y) to the request at y in \u03c4 (n); \u03b2(y)] to the request at m in \u03c4 (n); 11 Move the subtree of T rooted at y up to be an APP y daughter of RT (y), unless RT (y) = y; 12 Delete all REF-y nodes from T ; 13 R \u2190 R \u2212 {y} 14 return T result in" |
|
}, |
|
"FIGREF7": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "EM analysis of passives uses APPO for syntactic subjectFigure 8: AMR examples of active and passive. See Fig. 2 for graph constants." |
|
}, |
|
"FIGREF8": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Input: an AM dep-tree T and a set M of pairs of consecutive edges in T of the form n Add \u03b2(m) (which always includes n) to the request at m in \u03c4 (k); 6 return T Algorithm 4: Extended reentrancy resolution 1 Input: an AM dep-tree T ; a set R \u2287 {n \u2208 N G | \u2203 REF-n node in T }; and a map RT that assigns to each node n \u2208 R a resolution target RT (n), that is at least as high as the lowest common ancestor of n and all REF-n nodes (if they exist), and that satisfies the conditions of Theorem 1. 2 while R = \u2205: 3 Pick a y \u2208 R s.t. there is no x \u2208 R, x = y, with y on an x-resolution path; 4 for p \u2208 y-resolution paths: y or labeled REF-y: 7 Add \u03b2(y) to the request at y in \u03c4 (n); Add y[\u03b2(y)] to the request at m in \u03c4 (n); 10 Move the subtree of T rooted at y up to be an APP y daughter of RT (y), unless RT (y) = y; 11 Delete all REF-y nodes from T ; 12 R \u2190 R \u2212 {y} 13 return T" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table><tr><td>5</td><td colspan=\"2\">if F = \u2205: // traverse forward</td></tr><tr><td>6</td><td colspan=\"2\">e \u2190 F .pop;</td></tr><tr><td>7</td><td colspan=\"2\">n \u2190 e.target;</td></tr><tr><td>8</td><td>else:</td><td>// traverse backward</td></tr><tr><td>9</td><td colspan=\"2\">e \u2190 B.pop;</td></tr><tr><td>10</td><td colspan=\"2\">n \u2190 e.origin;</td></tr><tr><td>11</td><td colspan=\"2\">Mark e as traversed;</td></tr><tr><td>12</td><td colspan=\"2\">if n \u2208 N U :</td></tr><tr><td>13</td><td colspan=\"2\">add n, e to U ;</td></tr><tr><td>15</td><td>else:</td><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Unrolling Input: Graph G 1 F, B \u2190 empty FIFO queues; 2 U \u2190 empty graph; 3 add r G to U , add outgoing edges of r G to F and incoming edges of r G to B; 4 while F \u222a B = \u2205:" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Baseline comparisons on the development sets (3 source names in all experiments)." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td>DM</td><td/><td>PAS</td><td/><td>PSD</td><td>AMR 17</td></tr><tr><td/><td>id F</td><td>ood F</td><td>id F</td><td>ood F</td><td>id F</td><td>ood F</td><td>Smatch F</td></tr><tr><td>He and Choi (2020)</td><td>94.6</td><td>90.8</td><td>96.1</td><td>94.4</td><td>86.8</td><td>79.5</td><td>-</td></tr><tr><td>FG'20</td><td>94.4</td><td>91.0</td><td>95.1</td><td>93.4</td><td>82.6</td><td>82.0</td><td>-</td></tr><tr><td colspan=\"2\">Bevilacqua et al. (2021) -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>84.5</td></tr><tr><td>L'19, w/o MTL</td><td colspan=\"5\">93.9\u00b10.1 90.3\u00b10.1 94.5\u00b10.1 92.5\u00b10.1 82.0\u00b10.1</td><td>81.5\u00b10.3</td><td>76.3\u00b10.2</td></tr><tr><td>This work</td><td>94.</td><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "2\u00b10.0 90.2\u00b10.1 94.6\u00b10.0 92.7\u00b10.1 81.4\u00b10.1 (75.8\u00b10.1) 80.7\u00b10.4 (74.1\u00b10.1) 75.1\u00b10.2 (74.2\u00b10.3)" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Semantic parsing accuracies (id = in domain test set; ood = out of domain test set). Results for our work are averages of three runs with standard deviations. L'19 are results of Lindemann et al. (2019) with fixed tree decoder (incl. post-processing bugfix for AMR as per Lindemann et al. (2020)). FG'20 is Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez (2020)." |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"content": "<table><tr><td>:</td><td/><td/></tr><tr><td>Sembank</td><td>subject</td><td>object</td></tr><tr><td>AMR</td><td>ARG0</td><td>ARG1</td></tr><tr><td>DM</td><td>ARG1</td><td>ARG2</td></tr><tr><td>PAS</td><td colspan=\"2\">verb ARG1 verb ARG2</td></tr><tr><td>PSD</td><td>ACT arg</td><td>PAT arg</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Transitive verbs" |
|
} |
|
} |
|
} |
|
} |