|
{ |
|
"paper_id": "S17-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:28:46.914943Z" |
|
}, |
|
"title": "Parsing Graphs with Regular Graph Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Sorcha", |
|
"middle": [], |
|
"last": "Gilroy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Edinburgh", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Edinburgh", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Maneth", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit\u00e4t Bremen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recently, several datasets have become available which represent natural language phenomena as graphs. Hyperedge Replacement Languages (HRL) have been the focus of much attention as a formalism to represent the graphs in these datasets. Chiang et al. (2013) prove that HRL graphs can be parsed in polynomial time with respect to the size of the input graph. We believe that HRL are more expressive than is necessary to represent semantic graphs and we propose the use of Regular Graph Languages (RGL; Courcelle 1991), which is a subfamily of HRL, as a possible alternative. We provide a topdown parsing algorithm for RGL that runs in time linear in the size of the input graph.", |
|
"pdf_parse": { |
|
"paper_id": "S17-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recently, several datasets have become available which represent natural language phenomena as graphs. Hyperedge Replacement Languages (HRL) have been the focus of much attention as a formalism to represent the graphs in these datasets. Chiang et al. (2013) prove that HRL graphs can be parsed in polynomial time with respect to the size of the input graph. We believe that HRL are more expressive than is necessary to represent semantic graphs and we propose the use of Regular Graph Languages (RGL; Courcelle 1991), which is a subfamily of HRL, as a possible alternative. We provide a topdown parsing algorithm for RGL that runs in time linear in the size of the input graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "NLP systems for machine translation, summarization, paraphrasing, and other tasks often fail to preserve the compositional semantics of sentences and documents because they model language as bags of words, or at best syntactic trees. To preserve semantics, they must model semantics. In pursuit of this goal, several datasets have been produced which pair natural language with compositional semantic representations in the form of directed acyclic graphs (DAGs), including the Abstract Meaning Representation Bank (AMR; Banarescu et al. 2013) , the Prague Czech-English Dependency Treebank (Haji\u010d et al., 2012) , Deepbank (Flickinger et al., 2012) , and the Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013) . To make use of this data, we require models of graphs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 521, |
|
"end": 543, |
|
"text": "Banarescu et al. 2013)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 611, |
|
"text": "(Haji\u010d et al., 2012)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 648, |
|
"text": "(Flickinger et al., 2012)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 701, |
|
"end": 728, |
|
"text": "(Abend and Rappoport, 2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consider how we might use compositional semantic representations in machine translation Figure 1 : Semantic machine translation using AMR (Jones et al., 2012) . The edge labels identify 'cat' as the object of the verb 'miss', 'Anna' as the subject of 'miss' and 'Anna' as the possessor of 'cat'. Edges whose head nodes are not attached to any other edge are interpreted as node labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 158, |
|
"text": "(Jones et al., 2012)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 96, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "( Figure 1 ), a two-step process in which semantic analysis is followed by generation. Jones et al. (2012) observe that this decomposition can be modeled with a pair of synchronous grammars, each defining a relation between strings and graphs. Necessarily, one projection of this synchronous grammar produces strings, while the other produces graphs, i.e., is a graph grammar. A consequence of this representation is that the complete translation process can be realized by parsing: to analyze a sentence, we parse the input string with the string-generating projection of the synchronous grammar, and read off the synchronous graph from the resulting parse. To generate a sentence, we parse the graph, and read off the synchronous string from the resulting parse. In this paper, we focus on the latter problem: using graph grammars to parse input graphs. We call this graph recognition to avoid confusion with other parsing problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 106, |
|
"text": "Jones et al. (2012)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2, |
|
"end": 10, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recent work in NLP has focused primarily on hyperedge replacement grammar (HRG; Drewes et al. 1997 ), a context-free graph grammar formalism that has been studied in an NLP context by several researchers (Chiang et al., 2013; Peng et al., 2015; Bauer and Rambow, 2016) . In particular, Chiang et al. (2013) propose that HRG could be used to represent semantic graphs, and precisely characterize the complexity of a CKY-style algorithm for graph recognition from Lautemann (1990) to be polynomial in the size of the input graph. HRGs are very expressive-they can generate graphs that simulate non-context-free string languages (Engelfriet and Heyker, 1991; Bauer and Rambow, 2016) . This means they are likely more expressive than we need to represent the linguistic phenomena that appear in existing semantic datasets. In this paper, we propose the use of Regular Graph Grammars (RGG; Courcelle 1991) a subfamily of HRG that, like its regular counterparts among string and tree languages, is less expressive than context-free grammars but may admit more practical algorithms. By analogy to Chiang's CKY-style algorithm for HRG. We develop an Earley-style recognition algorithm for RGLs that is linear in the size of the input graph.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 98, |
|
"text": "Drewes et al. 1997", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 225, |
|
"text": "(Chiang et al., 2013;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 244, |
|
"text": "Peng et al., 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 268, |
|
"text": "Bauer and Rambow, 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 306, |
|
"text": "Chiang et al. (2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 478, |
|
"text": "Lautemann (1990)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 655, |
|
"text": "(Engelfriet and Heyker, 1991;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 656, |
|
"end": 679, |
|
"text": "Bauer and Rambow, 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We use the following notation. If n is an integer, [n] denotes the set {1, . . . , n}. Let \u0393 be an alphabet, i.e., a finite set. Then s \u2208 \u0393 * denotes that s is a sequence of arbitrary length, each element of which is in \u0393. We denote by |s| the length of s. A ranked alphabet is an alphabet \u0393 paired with an arity mapping (i.e., a total function) rank: \u0393 \u2192 N. Definition 1. A hypergraph (or simply graph) over a ranked alphabet \u0393 is a tuple G = (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "V G , E G , att G , lab G , ext G ) where V G is a finite set of nodes; E G is a finite set of edges (distinct from V G ); att G : E G \u2192 V *", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "G maps each edge to a sequence of nodes; lab G : E G \u2192 \u0393 maps each edge to a label such that |att G (e)| = rank(lab G (e)); and ext G is an ordered subset of V G called the external nodes of G.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We assume that the elements of ext G are pairwise distinct, and the elements of att G (e) for each edge e are also pairwise distinct. An edge e is attached to its nodes by tentacles, each labeled by an integer indicating the node's position in att G (e) = (v 1 , . . . , v k ). The tentacle from e to v i will have label i, so the tentacle labels lie in the set [k] where k = rank(e). To express that a node v is attached to the ith tentacle of an edge e, we say vert(e, i) = v. Likewise, the nodes in ext G are labeled by their position in ext G . We refer to the ith external node of G by ext G (i) and in figures this will be labeled (i). The rank of an edge e is k if att(e) = (v 1 , . . . , v k ) (or equivalently, rank(lab(e)) = k). The rank of a hypergraph G, denoted by rank(G) is the size of ext G . Example 1. Hypergraph G in Figure 2 has four nodes (shown as black dots) and three hyperedges labeled a, b, and X (shown boxed). The bracketed numbers (1) and (2) denote its external nodes and the numbers between edges and the nodes are tentacle labels. Call the top node v 1 and, proceeding clockwise, call the other nodes v 2 , v 3 , and v 4 . Call its edges e 1 , e 2 and e 3 . Its definition would state att G (e 1 ) = (", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 365, |
|
"text": "[k]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 836, |
|
"end": 844, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "v 1 , v 2 ), att G (e 2 ) = (v 2 , v 3 ), att G (e 3 ) = (v 1 , v 4 , v 3 ), lab G (e 1 ) = a, lab G (e 2 ) = b, lab G (e 3 ) = X, and ext G = (v 4 , v 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
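
{

"text": "To make Definition 1 and Example 1 concrete, here is a minimal Python sketch of the hypergraph G of Example 1 (illustrative only: the dict-based encoding, the identifiers v1-v4 and e1-e3, and the helper functions rank and vert are ours, not the paper's).",

"code_sketch": [
"def rank(graph):",
"    # the rank of a hypergraph is the number of its external nodes",
"    return len(graph['ext'])",
"",
"def vert(graph, e, i):",
"    # the node attached to the i-th tentacle of edge e (1-indexed, as in the text)",
"    return graph['edges'][e][1][i - 1]",
"",
"# G from Example 1: nodes v1-v4, terminal edges a and b of rank 2, a nonterminal",
"# edge X of rank 3, and external nodes ext_G = (v4, v2)",
"G = {",
"    'nodes': {'v1', 'v2', 'v3', 'v4'},",
"    # each edge id maps to (lab(e), att(e)); tentacle i attaches to att(e)[i-1]",
"    'edges': {",
"        'e1': ('a', ('v1', 'v2')),",
"        'e2': ('b', ('v2', 'v3')),",
"        'e3': ('X', ('v1', 'v4', 'v3')),",
"    },",
"    'ext': ('v4', 'v2'),",
"}",
"",
"assert rank(G) == 2 and vert(G, 'e3', 2) == 'v4'"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Regular Graph Languages",

"sec_num": "2"

},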
|
{ |
|
"text": "Definition 2. Let G be a hypergraph containing an edge e with att G (e) = (v 1 , . . . , v k ) and let H be a hypergraph of rank k with node and edge sets disjoint from those of G. The replacement of e by H is the graph", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "G = G[e/H]. Its node set V G is V \u222a V H where V = V G \u2212 {v 1 , . . . , v k }. Its edge set is E G = (E G \u2212 {e}) \u222a E H .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We define att G = att \u222a att H where for every e \u2208 (E G \u2212 {e}), att(e) is obtained from att G (e ) by replacing v i by the ith external node of H. Let lab G = lab \u222a lab H where lab is the restriction of lab G to edges in E G \u2212 {e}. Finally, let ext G = ext G .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Example 2. A replacement is shown in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 45, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Regular Graph Languages", |
|
"sec_num": "2" |
|
}, |
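
{

"text": "The replacement operation of Definition 2 is mechanical to implement; the sketch below is ours, over the same illustrative dict encoding as the previous sketch. It also renames the external nodes of G that are fused with external nodes of H, which matches the result shown in Figure 2 .",

"code_sketch": [
"def replace(G, e, H):",
"    # return G[e/H]: replace edge e of G by the hypergraph H, where rank(H)",
"    # equals the rank of e and the node/edge names of G and H are disjoint",
"    attached = G['edges'][e][1]                # att_G(e) = (v1, ..., vk)",
"    assert len(attached) == len(H['ext'])",
"    fuse = dict(zip(attached, H['ext']))       # v_i is fused with ext_H(i)",
"    sub = lambda v: fuse.get(v, v)",
"    nodes = {sub(v) for v in G['nodes']} | set(H['nodes'])",
"    edges = {f: (lab, tuple(sub(v) for v in att))",
"             for f, (lab, att) in G['edges'].items() if f != e}",
"    edges.update(H['edges'])",
"    ext = tuple(sub(v) for v in G['ext'])      # external nodes carried over from G",
"    return {'nodes': nodes, 'edges': edges, 'ext': ext}"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Regular Graph Languages",

"sec_num": "2"

},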
|
{ |
|
"text": "Definition 3. A hyperedge replacement grammar G = (N G , T G , P G , S G ) consists of ranked (disjoint) alphabets N G and T G of nonterminal and terminal symbols, respectively, a finite set P G of productions, and a start symbol S G \u2208 N G . Every production in P G is of the form X \u2192 G where G is a hypergraph over N G \u222a T G and rank(G) = rank(X).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For each production p : X \u2192 G, we use L(p) to refer to X (the left-hand side of p) and R(p) to refer to G (the right-hand side of p). An edge is a terminal edge if its label is terminal and a nonterminal edge if its label is nonterminal. A graph is a terminal graph if all of its edges are terminal. The terminal subgraph of a graph is the subgraph consisting of all terminal edges and their incident nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Given a HRG G, we say that graph G immediately derives graph G , denoted G \u2192 G , iff there is an edge e \u2208 E G and a nonterminal X \u2208 N G such that lab G (e) = X and G = G[e/H], where X \u2192 H is in P G . We extend the idea of immediate derivation to its transitive closure G \u2192 * G , and say here that G derives G . For every X \u2208 N G we also use X to de- ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "X a b (1) (2) G (2) c (1) a (3) d H (1) c d a (2) b a G[e/H]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "p : X (1) 1 go 1 2 I arg0 Y Z s : (1) (2) 1 2 1 arg0 arg1 X q : W Y (2) (1) 1 2 1 1 2 arg1 arg0 W t : (1) 1 want Y r : Z X (2) (1) 1 2 1 1 2 arg1 arg0 Z u :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(1) 1 need Table 1 : Productions of a HRG. The labels p, q, r, s, t, and u label the productions so that we can refer to them in the text. Note that Y can rewrite in two ways, either via production r or s. note the graph consisting of a single edge e with lab(e) = X and nodes", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 18, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(v 1 , . . . , v rank(X) ) such that att G (e) = (v 1 , . . . , v rank(X) ), and we define the language L X (G) as {G | X \u2192 * G \u2227 G is terminal}. The language of G is L(G) = L S G (G).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We call the family of languages that can be produced by any HRG the hyperedge replacement languages (HRL).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We assume that terminal edges are always of rank 2, and depict them as directed edges where the direction is determined by the tentacle labels: the tentacle labeled 1 attaches to the source of the edge and the tentacle labeled 2 attaches to the target of the edge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Example 3. Table 1 shows a HRG deriving AMR graphs for sentences of the form 'I need to want to need to want to ... to want to go'. Figure 3 is a graph derived by the grammar. The grammar is somewhat unnatural, a point we will return to ( \u00a74).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 18, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 140, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We can use HRGs to generate chain graphs Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 48, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "S (1) S (2) (1) (2) 1 2 a 1 2 b a b", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Figure 4: A HRG producing the string language a n b n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(strings) by restricting the form of the productions in the grammars. Figure 4 shows a HRG that produces the context-free string language a n b n . HRGs can simulate the class of mildly context-sensitive languages that is characterized, e.g., by linear context-free rewriting systems (LCFRS; Vijay-Shanker et al. 1987) , where the fan-out of the LCFRS will influence the maximum rank of nonterminal required in the HRG, see (Engelfriet and Heyker, 1991) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 291, |
|
"text": "(LCFRS;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 318, |
|
"text": "Vijay-Shanker et al. 1987)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 453, |
|
"text": "(Engelfriet and Heyker, 1991)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 78, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hyperedge Replacement Grammars", |
|
"sec_num": "2.1" |
|
}, |
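
{

"text": "For concreteness, the usual encoding of a string as a chain graph can be written as follows (a sketch; the helper name and the convention of taking the first and last nodes as the two external nodes are ours, following standard practice rather than anything defined in this section).",

"code_sketch": [
"def string_to_chain_graph(s):",
"    # one node per position, one rank-2 terminal edge per character",
"    nodes = ['n%d' % i for i in range(len(s) + 1)]",
"    edges = {'e%d' % i: (c, (nodes[i], nodes[i + 1])) for i, c in enumerate(s)}",
"    return {'nodes': set(nodes), 'edges': edges, 'ext': (nodes[0], nodes[-1])}",
"",
"# 'aabb' is a chain graph in the language of the HRG of Figure 4 (n = 2)",
"g = string_to_chain_graph('aabb')",
"assert [g['edges']['e%d' % i][0] for i in range(4)] == ['a', 'a', 'b', 'b']"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hyperedge Replacement Grammars",

"sec_num": "2.1"

},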
|
{ |
|
"text": "A regular graph grammar (RGG; Courcelle 1991) is a restricted form of HRG. To explain the restrictions, we first require some definitions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Definition 4. Given a graph G, a path in G from a node v to a node v is a sequence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(v 0 , i 1 , e 1 , j 1 , v 1 )(v 1 , i 2 , e 2 , j 2 , v 2 ) . . . (v k\u22121 , i k , e k , j k , v k ) (1) such that v 0 = v, v k = v , and for each r \u2208 [k], vert(e r , i r ) = v r\u22121 and vert(e r , j r ) = v r .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The length of this path is k.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A path is terminal if every edge in the path has a terminal label. A path is internal if each v i is internal for 1 \u2264 i \u2264 k \u2212 1. Note that the endpoints v 0 and v k of an internal path can be external.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Definition 5. A HRG G is a Regular Graph Grammar (or simply RGG) if each nonterminal in N G has rank at least one and for each p \u2208 P G the following hold: (C1) R(p) has at least one edge. Either it is a single terminal edge, all nodes of which are external, or each of its edges has at least one internal node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(C2) Every pair of nodes in R(p) is connected by a terminal and internal path.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
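
{

"text": "Both conditions can be checked mechanically. The sketch below is ours (same illustrative dict encoding as before, where terminals is the set of terminal labels): it tests C1 directly, and tests C2 by a breadth-first search along terminal edges in which only internal nodes may be passed through.",

"code_sketch": [
"from collections import deque",
"",
"def satisfies_c1(rhs, terminals):",
"    edges = list(rhs['edges'].values())",
"    if not edges:                                   # C1: at least one edge",
"        return False",
"    internal = rhs['nodes'] - set(rhs['ext'])",
"    if len(edges) == 1:",
"        lab, att = edges[0]",
"        if lab in terminals and not set(att) & internal:",
"            return True                             # single terminal edge, all nodes external",
"    return all(set(att) & internal for _, att in edges)",
"",
"def satisfies_c2(rhs, terminals):",
"    internal = rhs['nodes'] - set(rhs['ext'])",
"    adj = {v: set() for v in rhs['nodes']}",
"    for lab, att in rhs['edges'].values():          # adjacency via terminal edges only",
"        if lab in terminals:",
"            for u in att:",
"                adj[u] |= set(att) - {u}",
"    for src in rhs['nodes']:",
"        seen, queue = {src}, deque([src])",
"        while queue:",
"            u = queue.popleft()",
"            if u != src and u not in internal:",
"                continue                            # external nodes may only end a path",
"            for w in adj[u] - seen:",
"                seen.add(w)",
"                queue.append(w)",
"        if seen != rhs['nodes']:                    # some pair of nodes is not connected",
"            return False",
"    return True"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Regular Graph Grammars",

"sec_num": "2.2"

},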
|
{ |
|
"text": "Example 4. The grammar in Table 1 is an RGG. Although HRGs can produce context-free languages (and beyond) as shown in Figure 4 , the only string languages RGGs can produce are the regular string languages. See Figure 5 for an example of a string generating RGG. Similarly, RGGs can produce regular tree languages, but not context-free tree languages. Figure 6 shows a tree generating RGG that generates binary trees the internal nodes of which are represented by a-labeled edges, and the leaves of which are represented by b-labeled edges. Note that these two results of regularity of the string-and tree-languages generated by RGG follow from the fact that graph languages produced by RGG are MSO-definable (Courcelle, 1991) , and the well-known facts that the regular string and graph languages are MSO-definable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 709, |
|
"end": 726, |
|
"text": "(Courcelle, 1991)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 33, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 127, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 219, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 360, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "X (1) a Y (1) b 1 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Figure 5: A RGG for a regular string language. Figure 6 : A RGG for a regular tree language.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 55, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "X (1) Y Z (1) 1 a 1 2 1 1 b", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We call the family of languages generated by RGGs the regular graph languages (RGLs).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regular Graph Grammars", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "To recognize RGG, we exploit the property that every nonterminal including the start symbol has rank at least one (Definition 5), and we assume that the corresponding external node is identified in the input graph. This mild assumption may be reasonable for applications like AMR parsing, where grammars could be designed so that the external node is always the unique root. Later we relax this assumption.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RGL Recognition", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The availability of an identifiable external node suggests a top-down algorithm, and we take in-spiration from a top-down recognition algorithm for the predictive top-down parsable grammars, another subclass of HRG (Drewes et al., 2015) . These grammars, the graph equivalent of LL(1) string grammars, are incomparable to RGG, but the algorithms are related in their use of top-down prediction and in that they both fix an order of the edges in the right-hand side of each production.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 236, |
|
"text": "(Drewes et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RGL Recognition", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Just as the algorithm of Chiang et al. (2013) generalizes CKY to HRG, our algorithm generalizes Earley's algorithm (Earley, 1970) . Both algorithms operate by recognizing incrementally larger subgraphs of the input graph, using a succinct representation for subgraphs that depends on an arbitrarily chosen marker node m of the input graph. Chiang et al. (2013) prove each subgraph has a unique boundary representation, and give algorithms that use only boundary representations to compute the union of two subgraphs, requiring time linear in the number of boundary nodes; and to check disjointness of subgraphs, requiring time linear in the number of boundary edges.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 45, |
|
"text": "Chiang et al. (2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 129, |
|
"text": "(Earley, 1970)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-Down Recognition for RGLs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For each production p of the grammar, we impose a fixed order on the edges of R(p), as in Drewes et al. (2015) . We discuss this order in detail in \u00a73.2. As in Earley's algorithm, we use dotted rules to represent partial recognition of pro-", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 110, |
|
"text": "Drewes et al. (2015)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-Down Recognition for RGLs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "ductions: X \u2192\u0113 1 . . .\u0113 i\u22121 \u2022\u0113 i . . .\u0113 n means that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-Down Recognition for RGLs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "we have identified the edges\u0113 1 to\u0113 i\u22121 and that we must next recognize edge\u0113 i . We write\u0113 and v for edges and nodes in productions and e and v for edges and nodes in a derived graph. When the identity of the sequence is immaterial we abbreviate it as \u03b1, for example writing X \u2192 \u2022 \u03b1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-Down Recognition for RGLs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We present our recognizer as a deductive proof system (Shieber et al., 1995) . The items of the Name", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 76, |
|
"text": "(Shieber et al., 1995)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-Down Recognition for RGLs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Conditions recognizer are of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "PREDICT [b(I), p : X \u2192\u01131 . . . \u2022\u0113 i . . .\u0113n, \u03c6p][q : Y \u2192 \u03b1] [\u03c6p(\u0113i), q : Y \u2192 \u2022 \u03b1, \u03c6 0 q [ext R(q) = \u03c6p(\u0113i)]] lab(\u0113i) = Y SCAN [b(I), X \u2192\u01131 . . . \u2022\u0113 i . . .\u0113n, \u03c6p][e = edg lab(\u0113 i ) (v1, . . . , vm)] [b(I \u222a {e}), X \u2192\u01131 . . . \u2022\u0113 i+1 . . .\u0113n, \u03c6p[att(\u0113i) = (v1, . . . , vm)]] \u03c6p(\u0113i)(j) \u2208 VG \u21d2 \u03c6p(\u0113i)(j) = vert(e, j) COMPLETE [b(I), p : X \u2192\u01131 . . . \u2022\u0113 i . . .\u0113n, \u03c6p][b(J), q : Y \u2192 \u03b1 \u2022 , \u03c6q] [b(I \u222a J), X \u2192\u01131 . . . \u2022\u0113 i+1 . . .\u0113n, \u03c6p[att(\u0113i) = \u03c6p(ext R(q) )]] \u03c6p(\u0113i)(j) \u2208 VG \u21d2 \u03c6p(\u0113i)(j) = \u03c6q(ext R(q) )(j), lab(\u0113i) = Y, EI \u2229 EJ = \u2205", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[b(I), p : X \u2192\u0113 1 . . . \u2022\u0113 i . . .\u0113 n , \u03c6 p ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where I is a subgraph that has been recognized as matching\u0113 1 , . . . ,\u0113 i\u22121 ; p : X \u2192\u0113 1 , . . . ,\u0113 n is a production in the grammar with the edges in order; and \u03c6 p :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "E R(p) \u2192 V * G maps the endpoints of edges in R(p) to nodes in G.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each production p, we number the nodes in some arbitrary but fixed order. Using this, we construct the function \u03c6 0 p :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "E R(p) \u2192 V * R(p) such that for\u0113 \u2208 E R(p) if att(\u0113) = (v 1 ,v 2 ) then \u03c6 0 p (\u0113) = (v 1 ,v 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As we match edges in the graph with edges in p, we assign the nodesv to nodes in the graph. For example, if we have an edge\u0113 in a production p such that att(\u0113) = (v 1 ,v 2 ) and we find an edge e which matches\u0113, then we update \u03c6 p to record this fact, written \u03c6 p [att(\u0113) = att(e)]. We also use \u03c6 p to record assignments of external nodes. If we assign the ith external node to v, we write \u03c6 p [ext p (i) = v]. We write \u03c6 0 p to represent a mapping with no grounded nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since our algorithm makes top-down predictions based on known external nodes, our boundary representation must cover the case where a subgraph is empty except for these nodes. If at some point we know that our subgraph has external nodes \u03c6(\u0113), then we use the shorthand \u03c6(\u0113) rather than the full boundary representation \u03c6(\u0113), \u2205, m \u2208 \u03c6(\u0113) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To keep notation uniform, we use dummy nonterminal S * \u2208 N G that derives S G via the production p 0 . For graph G, our system includes the axiom:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[ext G , p 0 : S * \u2192 \u2022 S G , \u03c6 0 p 0 [ext R(p 0 ) = ext G ]].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our goal is to prove:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[b(G), p S : S * \u2192 S G \u2022 , \u03c6 p S ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03c6 p S has a single edge\u0113 in its domain which has label S G in R(p S ) and \u03c6 p S (\u0113) = ext G .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As in Earley's algorithm, we have three inference rules: PREDICT, SCAN and COMPLETE (Table 2). PREDICT is applied when the edge after the dot is nonterminal, assigning any external nodes that have been identified. SCAN is applied when the edge after the dot is terminal. Using \u03c6 p , we may already know where some of the endpoints of the edge should be, so it requires the endpoints of the scanned edge to match. COMPLETE requires that each of the nodes of\u0113 i in R(p) have been identified, these nodes match up with the corresponding external nodes of the subgraph J, and that the subgraphs I and J are edge-disjoint.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
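
{

"text": "To make the interplay of the three rules concrete, here is a much-simplified recursive sketch (entirely ours): it keeps recognized subgraphs as explicit edge sets rather than the boundary representations of Chiang et al. (2013), it assumes that every production lists its right-hand-side edges in a normal order (Section 3.2), and it explores derivations depth-first instead of with an agenda, so it illustrates prediction, scanning and completion but not the complexity bound of Section 3.3.",

"code_sketch": [
"def recognize(graph, grammar, start):",
"    # grammar: nonterminal label -> list of productions; each production is",
"    # {'edges': [(label, rhs_nodes), ...] in normal order, 'ext': rhs_node_tuple}",
"    all_edges = frozenset(graph['edges'])",
"",
"    def derive(nonterminal, ext_nodes, banned):",
"        # yield every edge set (disjoint from banned) whose subgraph is",
"        # derivable from nonterminal with external nodes ext_nodes",
"        for prod in grammar[nonterminal]:",
"            if len(prod['ext']) == len(ext_nodes):",
"                phi = dict(zip(prod['ext'], ext_nodes))     # PREDICT: bind externals",
"                yield from match(prod, 0, phi, frozenset(), banned)",
"",
"    def match(prod, i, phi, used, banned):",
"        if i == len(prod['edges']):",
"            yield used                                      # production fully matched",
"            return",
"        label, att = prod['edges'][i]",
"        if label in grammar:                                # nonterminal edge",
"            endpoints = tuple(phi[a] for a in att)          # bound, thanks to normal order",
"            for sub in derive(label, endpoints, banned | used):",
"                yield from match(prod, i + 1, phi, used | sub, banned)   # COMPLETE",
"        else:                                               # terminal edge: SCAN",
"            for eid, (lab, nodes) in graph['edges'].items():",
"                if (lab == label and eid not in used and eid not in banned",
"                        and len(nodes) == len(att)",
"                        and all(phi.get(a, v) == v for a, v in zip(att, nodes))):",
"                    new_phi = {**phi, **dict(zip(att, nodes))}",
"                    if len(set(new_phi.values())) == len(new_phi):       # keep phi injective",
"                        yield from match(prod, i + 1, new_phi, used | {eid}, banned)",
"",
"    # goal: some derivation from the start symbol covers the whole input graph",
"    return any(used == all_edges for used in derive(start, tuple(graph['ext']), frozenset()))"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Rule",

"sec_num": null

},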
|
{ |
|
"text": "We provide a high-level proof that the recognizer is sound and complete. Proposition 1. Let G be a HRG and G a graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[b(G), p S : S * \u2192 S G \u2022 , \u03c6 p S ] can be proved from the axiom [ext G , p S : S * \u2192 \u2022 S G , \u03c6 p S [ext R(p S ) = ext G ]] if and only if G \u2208 L(G).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proof. We prove that for each", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "X \u2208 N G , [b(G), p X : X * \u2192 X \u2022 , \u03c6 p X ] can be proved from [ext G , p X : X * \u2192 \u2022 X, \u03c6 p X [ext R(p X ) = ext G ]] if", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "and only if G \u2208 L X (G) where the dummy nonterminal X * was added to the set of nonterminals and p X : X * \u2192 X was added to the set of productions. We prove this by induction on the number of edges in G.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We assume that each production in the grammar contains at least one terminal edge. If the HRG is not in this form, it can be converted into this form and in the case of RGGs they are already in this form by definition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Base Case: Let G consist of a single edge. If: Assume G \u2208 L X (G). Since G consists of one edge, there must be a production q : X \u2192 G. Apply PREDICT to the axiom and p X : X * \u2192 X to obtain the item [\u03c6 p X (X), q :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "X \u2192 \u2022 G, \u03c6 0 q [ext G = \u03c6 p X (X)]]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". Apply SCAN to the single terminal edge that makes up G to obtain [b(G), q : X \u2192 G \u2022 , \u03c6 q ] and finally apply COMPLETE to this and the axiom reach the goal", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[b(G), p X : X * \u2192 X, \u03c6 p X ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Only if: Assume the goal can be reached from the axiom and G = e. Then the item [b(e), q : X \u2192 e, \u03c6 q ] must have been reached at some point for some q \u2208 P G . Therefore q : X \u2192 e is a production and so e = G \u2208 L X (G).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Assumption: Assume that the proposition holds when G has fewer than k edges.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Inductive", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Step: Assume G has k edges. If: Assume G \u2208 L X (G), then there is a production q : X \u2192 H where H has nonterminals Y 1 , . . . , Y n and there are graphs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "H 1 , . . . , H n such that G = H[Y 1 /H 1 ] . . . [Y n /H n ]. Each graph H i for i \u2208 [n]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "has fewer than k edges and so we apply the inductive hypothesis to show that we can prove the items", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[b(H i ), r i : Y i \u2192 J i , \u03c6 r i ] for each i \u2208 [n]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". By applying COMPLETE to each such item and applying SCAN to each terminal edge of H we", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "reach the goal [b(G), p X : X * \u2192 X \u2022 , \u03c6 p X ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Only If: Assume the goal can be proved from the axiom. Then we must have at some point reached an item of the form [b(G), q : X \u2192 H, \u03c6 q ] and that H has nonterminals Y 1 , . . . , Y n . This means that there are graphs", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "H 1 , . . . , H n such that [b(H i ), p Y i : Y * i \u2192 Y i , \u03c6 p Y i ] for each i \u2208 [n] and G = H[Y 1 /H 1 ] . . . [Y n /H n ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since each H i has fewer than k edges, we apply the inductive hypothesis to get that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "H i \u2208 L Y i (G) for each i \u2208 [n]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "and therefore G \u2208 L X (G).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Example 5. Using the RGG in Table 1 , we show how to recognize the graph in Figure 7 , which can be derived by applying production s followed by production u, where the external nodes of Y are (v 3 , v 2 ). Assume the ordering of the edges in production s is arg1, arg0, Z; the top node isv 1 ; the bottom node isv 2 ; and the node on the right isv 3 ; and that the marker node is not in this subgraphwe elide reference to it for simplicity. Letv 4 be the top node of R(u) andv 5 be the bottom node of R(u). The external nodes of Y are determined top-down, so the recognize of this subgraph is triggered by this item:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 35, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 76, |
|
"end": 84, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[{v 3 , v 2 }, Y \u2192 \u2022 arg1 arg0 Z, \u03c6 0 s [ext R(s) = (v 3 , v 2 )]] (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03c6 s (arg1) = (v 1 , v 3 ), \u03c6 s (arg0) = (v 1 , v 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ", and \u03c6 s (Z) = (v 1 ). Table 3 shows how we can prove the item", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 31, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[ {v 3 , v 2 }, {e 3 , e 2 } , Y \u2192 arg1arg0Z \u2022 , \u03c6]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The boundary representation {v 3 , v 2 }, {e 3 , e 2 } in this item represents the whole subgraph shown in Figure 7 . Figure 3 . To refer to nodes and edges in the text, they are labeled v1, v2, v3, e1, e2, and e3.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 115, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 126, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "v 1 v 4 v 2 v 3 . . . . . . need (e1) arg0 (e2) arg1 (e3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Then the goal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our algorithm requires a fixed ordering of the edges in the right-hand sides of each production. We will constrain this ordering to exploit the structure of RGG productions, allowing us to bound recognition complexity. If s =\u0113 1 . . .\u0113 n is an order, define", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "s i:j =\u0113 i . . .\u0113 j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Definition 7. Let s =\u0113 1 , . . . ,\u0113 n be an edge order of a right-hand side of a production. Then s is normal if it has the following properties:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1.\u0113 1 is connected to an external node, 2. s 1:j is a connected graph for all j \u2208 [n] 3. if\u0113 i is nonterminal, each endpoint of\u0113 i must be incident with some terminal edge\u0113 j for which j < i.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Example 6. The ordering of the edges of production s in Example 5 is normal.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
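
{

"text": "The three properties are straightforward to verify for a given ordering. The checker below is ours (same illustrative dict encoding as in earlier sketches); the final assertion reconstructs R(s) from Example 5 and confirms the claim of Example 6 that the order arg1, arg0, Z is normal.",

"code_sketch": [
"def is_normal_order(order, rhs, terminals):",
"    edges = rhs['edges']",
"    if not set(edges[order[0]][1]) & set(rhs['ext']):",
"        return False                              # Property 1: e1 touches an external node",
"    covered, scanned = set(), set()               # nodes of e1..ej / of earlier terminal edges",
"    for j, eid in enumerate(order):",
"        lab, att = edges[eid]",
"        if j > 0 and not set(att) & covered:",
"            return False                          # Property 2: the prefix stays connected",
"        if lab not in terminals and not set(att) <= scanned:",
"            return False                          # Property 3: endpoints seen via terminals",
"        covered |= set(att)",
"        if lab in terminals:",
"            scanned |= set(att)",
"    return True",
"",
"# R(s), reconstructed from Example 5: ext = (v3, v2), terminal edges arg1 and",
"# arg0, and a rank-1 nonterminal edge labelled Z attached to v1",
"R_s = {'nodes': {'v1', 'v2', 'v3'},",
"       'edges': {'arg1': ('arg1', ('v1', 'v3')),",
"                 'arg0': ('arg0', ('v1', 'v2')),",
"                 'Z': ('Z', ('v1',))},",
"       'ext': ('v3', 'v2')}",
"assert is_normal_order(['arg1', 'arg0', 'Z'], R_s, {'arg1', 'arg0'})"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Normal Ordering",

"sec_num": "3.2"

},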
|
{ |
|
"text": "Arbitrary HRGs do not necessarily admit a normal ordering. For example, the graph in Figure 8 cannot satisfy Properties 2 and 3 simultaneously. However, RGGs do admit a normal ordering. v2, v1}, {e3} , Y \u2192 arg1 \u2022 arg0Z, \u03c6s[att(arg1) = (v1, v3)]] SCAN: 1. and e3 = edg arg1 (v1, v3) 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 199, |
|
"text": "v2, v1}, {e3}", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 281, |
|
"text": "(v1, v3)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 93, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "[{v3, v2}, Y \u2192 \u2022 arg1arg0Z, \u03c6 0 s [ext R(s) = (v3, v2)]] Equation 2 2. [ {v3,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "[ {v3, v2, v1}, {e3, e2} , Y \u2192 arg1arg0 \u2022 Z, \u03c6s[att(arg0) = (v1, v2)]] SCAN: 2. and e2 = edg arg0 (v1, v2)] 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "[(v1), Z \u2192 \u2022 need, \u03c6 0 u [ext R(u) = (v1)]]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "PREDICT: 3. and Z \u2192 need", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "5. [ {v1, v4}, {e1} , Z \u2192 need \u2022 , \u03c6u[att(need) = (v1, v4)]]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "SCAN: 4. and e1 = edg need (v1, v4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "6. [ {v3, v2}, {e3, e2} , Y \u2192 arg1arg0Z \u2022 , \u03c6s[att(Z) = (v1)]]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "COMPLETE: 3. and 5. Table 3 : The steps of recognizing that the subgraph shown in Figure 7 is derived from productions r2 and u in the grammar in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 27, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 90, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 153, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Proposition 2. If G is an RGG, for every p \u2208 P G , there is a normal ordering of the edges in R(p).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Proof. If R(p) contains a single node then it must be an external node and it must have a terminal edge attached to it since R(p) must contain at least one terminal edge. If R(p) contains multiple nodes then by C2 there must be terminal internal paths between all of them, so there must be a terminal edge attached to the external node, which we use to satisfy Property 1. To produce a normal ordering, we next select terminal edges once one of their endpoints is connected to an ordered edge, and nonterminal edges once all endpoints are connected to ordered edges, possible by C2. Therefore, Properties 2 and 3 are satisfied.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
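
{

"text": "The proof is constructive; a greedy implementation of it might look as follows (a sketch under the same illustrative encoding; the function name and the failure behaviour are ours, and for a given production it may return a different normal ordering than the one used in Example 5).",

"code_sketch": [
"def normal_order(rhs, terminals):",
"    # greedily order the edges of a production's right-hand side as in the",
"    # proof of Proposition 2; raises if no normal ordering exists",
"    edges, ext = rhs['edges'], set(rhs['ext'])",
"    ordered, covered, scanned = [], set(), set()",
"    remaining = set(edges)",
"    while remaining:",
"        for eid in sorted(remaining):",
"            lab, att = edges[eid]",
"            if not ordered:",
"                ok = lab in terminals and set(att) & ext        # Property 1",
"            elif lab in terminals:",
"                ok = bool(set(att) & covered)                   # Property 2",
"            else:",
"                ok = set(att) <= scanned                        # Properties 2 and 3",
"            if ok:",
"                ordered.append(eid)",
"                covered |= set(att)",
"                if lab in terminals:",
"                    scanned |= set(att)",
"                remaining.remove(eid)",
"                break",
"        else:",
"            raise ValueError('no normal ordering; is the production regular?')",
"    return ordered"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Normal Ordering",

"sec_num": "3.2"

},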
|
{ |
|
"text": "A normal ordering tightly constrains the recognition of edges. Property 3 ensures that when we apply PREDICT, the external nodes of the predicted edge are all bound to specific nodes in the graph. Properties 1 and 2 ensure that when we apply SCAN, at least one endpoint of the edge is bound (fixed).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normal Ordering", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Assume a normally-ordered RGG. Let the maximum number of edges in the right-hand side of any production be m; the maximum number of nodes in any right-hand side of a production k; the maximum degree of any node in the input graph d; and the number of nodes in the input graph n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition Complexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As previously mentioned, Drewes et al. (2015) also propose a HRG recognizer which can recognize a subclass of HRG (incomparable to RGG) called the predictive top-down parsable grammars. Their recognizer in this case runs in O(n 2 ) time. A well-known bottom-up recognizing algorithm for HRG was first proposed by Lautemann (1990) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 45, |
|
"text": "Drewes et al. (2015)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 329, |
|
"text": "Lautemann (1990)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition Complexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this paper, the recognizer is shown to be polynomial in the size of the input graph. Later, Chiang et al. (2013) formulate the same algorithm more precisely and show that the recognizing complexity is O((3 d \u00d7 n) k+1 ) where k in their case is the treewidth of the grammar. Remark 1. The maximum number of nodes in any right-hand side of a production (k) is also the maximum number of boundary nodes for any subgraph in the recognizer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 115, |
|
"text": "Chiang et al. (2013)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition Complexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "COMPLETE combines subgraphs I and J only when the entire subgraph derived from Y has been recognized. Boundary nodes of J are also boundary nodes of I because they are nodes in the terminal subgraph of R(p) where Y connects. The boundary nodes of I \u222a J are also bounded by k since form a subset of the boundary nodes of I. Remark 2. Given a boundary node, there are at most (d m ) k\u22121 ways of identifying the remaining boundary nodes of a subgraph that is isomorphic to the terminal subgraph of the right-hand side of a production.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition Complexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The terminal subgraph of each production is connected by C2, with a maximum path length of m. For each edge in the path, there are at most d subsequent edges. Hence for the k \u2212 1 remaining boundary nodes there are (d m ) k\u22121 ways of choosing them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition Complexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We count instantiations of COMPLETE for an upper bound on complexity (McAllester, 2002) , using similar logic to (Chiang et al., 2013) . The number of boundary nodes of I, J and I \u222a J is at most k. Therefore, if we choose an arbitrary node to be some boundary node of I \u222a J, there are at most (d m ) k\u22121 ways of choosing its remaining boundary nodes. For each of these nodes, there are at most (3 d ) k states of their attached boundary edges: in I, in J, or in neither. The total number of instantiations is O(n(d m ) k\u22121 (3 d ) k ), linear in the number of input nodes and exponential in the degree of the input graph. Note that in the case of the AMR dataset (Banarescu et al. 2013) , the maximum node degree is 17 and the average is 2.12.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 87, |
|
"text": "(McAllester, 2002)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 134, |
|
"text": "(Chiang et al., 2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 685, |
|
"text": "(Banarescu et al. 2013)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition Complexity", |
|
"sec_num": "3.3" |
|
}, |
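
{

"text": "As a quick numerical illustration of this bound, it can be evaluated directly; the helper and the toy values below are ours and are not measurements on AMR or any other dataset.",

"code_sketch": [
"def complete_instantiations_bound(n, d, m, k):",
"    # O(n * (d**m)**(k-1) * (3**d)**k): n choices of one boundary node,",
"    # (d**m)**(k-1) choices for the remaining boundary nodes, and (3**d)**k",
"    # states (in I, in J, or in neither) for their attached boundary edges",
"    return n * (d ** m) ** (k - 1) * (3 ** d) ** k",
"",
"# toy values: n = 1000 input nodes, degree d = 3, m = 3 edges and k = 3 nodes",
"# per right-hand side; the bound is linear in n but exponential in d",
"print(complete_instantiations_bound(1000, 3, 3, 3))   # 14348907000"
],

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recognition Complexity",

"sec_num": "3.3"

},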
|
{ |
|
"text": "We observe that RGGs could be relaxed to produce graphs with no external nodes by adding a dummy nonterminal S with rank 0 and a single production S \u2192 S. To adapt the recognition algorithm, we would first need to guess where the graph starts. This would add a factor of n to the complexity as the graph could start at any node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition Complexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We have presented RGG as a formalism that could be useful for semantic representations and we have provided a top-down recognition algorithm for them. The constraints of RGG enable more efficient recognition than general HRG, and this tradeoff is reasonable since HRG is very expressive-when generating strings, it can express non-context-free languages (Engelfriet and Heyker, 1991; Bauer and Rambow, 2016) , far more power than needed to express semantic graphs. On the other hand, RGG is so constrained that it may not be expressive enough: it would be more natural to derive the graph in Figure 4 from outermost to innermost predicate; but constraint C2 makes it difficult to express this, and the grammar in Table 1 A possible alternative would be to consider Restricted DAG Grammars (RDG; Bj\u00f6rklund et al. 2016) . Parsing for a fixed such grammar can be achieved in quadratic time with respect to the input graph. It is known that for a fixed HRG generating k-connected hypergraphs consisting of hyperedges of rank k only, parsing can be carried out in cubic time (k-HRG; (Drewes, 1993) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 383, |
|
"text": "(Engelfriet and Heyker, 1991;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 407, |
|
"text": "Bauer and Rambow, 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 795, |
|
"end": 817, |
|
"text": "Bj\u00f6rklund et al. 2016)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1078, |
|
"end": 1092, |
|
"text": "(Drewes, 1993)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 592, |
|
"end": 600, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 713, |
|
"end": 720, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "More general than RDGs a is the class of graph languages recognized by DAG automata (DA-GAL; Blum and Drewes 2016), for which the deterministic variant provides polynomial time parsing. Note that RGGs can generate graph languages of unbounded node degree. With respect to expressive power, RDGs and k-HRGs are incomparable to RGGs. Figure 9 shows the relationships between the context-free and regular languages for strings, trees and graphs. Monadic-second order logic (MSOL; Courcelle and Engelfriet 2011) is a form of logic which when restricted to strings gives us exactly the regular string languages and when restricted to trees gives us exactly the regular tree languages. RGLs lie in the intersection of HRG and MSOL on graphs but they do not make up this entire intersection. Courcelle (1991) defined (non-constructively) this intersection to be the strongly context-free languages (SCFL). We believe that there may be other formalisms that are subfamilies of SCFL which may be useful for semantic representations. All inclusions shown in Figure 9 are strict. For instance, RGL cannot produce \"star graphs\" (one node that has edges to n other nodes), while DAGAL and HRL can produce such graphs. It is well-known that HRL and MSOL are incomparable. There is a language in RGL that is not in DAGAL, for instance, \"ladders\" (two string graphs of n nodes each, with an edge between the ith node of each string).", |
|
"cite_spans": [ |
|
{ |
|
"start": 785, |
|
"end": 801, |
|
"text": "Courcelle (1991)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 340, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF8" |
|
}, |
|
{ |
|
"start": 1048, |
|
"end": 1056, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Another alternative formalism to RGG that is defined as a restriction of HRG are Tree-like Grammars (TLG; Matheja et al. 2015) . They define a subclass of SCFL, i.e., they are MSO definable. TLGs have been considered for program verification, where closure under intersection of the formalism is essential. Note that RGGs are also closed under intersection. While TLG and RDG are both incomparable to RGG, they share important characteristics, including the fact that the terminal subgraph of every production is connected. This means that our top-down recognition algorithm is applicable to both. In the future we would like to investigate larger, less restrictive (and more linguistically expressive) subfamilies of SCFL. We plan to implement and evaluate our algorithm experimentally.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 126, |
|
"text": "Matheja et al. 2015)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "4" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh; and in part by a Google faculty research award (to AL). We thank Clara Vania, Sameer Bansal, Ida Szubert, Federico Fancellu, Antonis Anastasopoulos, Marco Damonte, and the anonymous reviews for helpful discussion of this work and comments on previous drafts of the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Universal conceptual cognitive annotation (ucca)", |
|
"authors": [ |
|
{ |
|
"first": "Omri", |
|
"middle": [], |
|
"last": "Abend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL (1). The Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "228--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omri Abend and Ari Rappoport. 2013. Univer- sal conceptual cognitive annotation (ucca). In ACL (1). The Association for Computational Linguistics, pages 228-238. http://dblp.uni- trier.de/db/conf/acl/acl2013-1.html#AbendR13.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Abstract meaning representation for sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representa- tion for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoper- ability with Discourse. Association for Computa- tional Linguistics, Sofia, Bulgaria, pages 178-186. http://www.aclweb.org/anthology/W13-2322.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Hyperedge replacement and nonprojective dependency structures", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 12th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Bauer and Owen Rambow. 2016. Hy- peredge replacement and nonprojective de- pendency structures. In Proceedings of the 12th International Workshop on Tree Ad- joining Grammars and Related Formalisms (TAG+12), June 29 -July 1, 2016, Heinrich Heine University, D\u00fcsseldorf, Germany. pages 103- 111. http://aclweb.org/anthology/W/W16/W16- 3311.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Between a Rock and a Hard Place -Uniform Parsing for Hyperedge Replacement DAG Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Henrik", |
|
"middle": [], |
|
"last": "Bj\u00f6rklund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Drewes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petter", |
|
"middle": [], |
|
"last": "Ericson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "521--532", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-30000-940" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Henrik Bj\u00f6rklund, Frank Drewes, and Petter Eric- son. 2016. Between a Rock and a Hard Place -Uniform Parsing for Hyperedge Replacement DAG Grammars, Springer International Publishing, Cham, pages 521-532. https://doi.org/10.1007/978- 3-319-30000-9 40.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Properties of regular DAG languages", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Blum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Drewes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Language and Automata Theory and Applications -10th International Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "427--438", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-30000-933" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Blum and Frank Drewes. 2016. Properties of regular DAG languages. In Language and Au- tomata Theory and Applications -10th International Conference, LATA 2016, Prague, Czech Republic, March 14-18, 2016, Proceedings. pages 427-438. https://doi.org/10.1007/978-3-319-30000-9 33.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Parsing graphs with hyperedge replacement grammars", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Andreas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Moritz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bevan", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "924--932", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyper- edge replacement grammars. In Proceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 924-932. http://www.aclweb.org/anthology/P13-1091.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The monadic second-order logic of graphs V: on closing the gap between definability and recognizability", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bruno Courcelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Theor. Comput. Sci", |
|
"volume": "80", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/0304-3975" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruno Courcelle. 1991. The monadic second-order logic of graphs V: on closing the gap between definability and recognizability. Theor. Comput. Sci. 80(2):153-202. https://doi.org/10.1016/0304- 3975(91)90387-H.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Graph Structure and Monadic Second-Order Logic, a Language Theoretic Approach", |
|
"authors": [ |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Courcelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joost", |
|
"middle": [], |
|
"last": "Engelfriet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bruno Courcelle and Joost Engelfriet. 2011. Graph Structure and Monadic Second-Order Logic, a Lan- guage Theoretic Approach. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Np-completeness of kconnected hyperedge-replacement languages of order k", |
|
"authors": [ |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Drewes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Inf. Process. Lett", |
|
"volume": "45", |
|
"issue": "2", |
|
"pages": "89--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank Drewes. 1993. Np-completeness of k- connected hyperedge-replacement languages of order k. Inf. Process. Lett. 45(2):89-94.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Predictive Top-Down Parsing for Hyperedge Replacement Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Drewes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Berthold", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Minas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--34", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-21145-92" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank Drewes, Berthold Hoffmann, and Mark Mi- nas. 2015. Predictive Top-Down Parsing for Hyperedge Replacement Grammars, Springer International Publishing, Cham, pages 19-34. https://doi.org/10.1007/978-3-319-21145-9 2.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Handbook of Graph Grammars and Computing by Graph Transformation", |
|
"authors": [ |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Drewes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hans-J\u00f6rg", |
|
"middle": [], |
|
"last": "Kreowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Annegret", |
|
"middle": [], |
|
"last": "Habel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank Drewes, Hans-J\u00f6rg Kreowski, and Annegret Ha- bel. 1997. Hyperedge replacement graph grammars. In Grzegorz Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transforma- tion, World Scientific, pages 95-162.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An efficient context-free parsing algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Jay", |
|
"middle": [], |
|
"last": "Earley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "94--102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jay Earley. 1970. An efficient context-free parsing algorithm. ACM, New York, NY, USA, volume 13, pages 94-102.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The string generating power of context-free hypergraph grammars", |
|
"authors": [ |
|
{ |
|
"first": "Joost", |
|
"middle": [], |
|
"last": "Engelfriet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linda", |
|
"middle": [], |
|
"last": "Heyker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Journal of Computer and System Sciences", |
|
"volume": "43", |
|
"issue": "2", |
|
"pages": "328--360", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joost Engelfriet and Linda Heyker. 1991. The string generating power of context-free hypergraph gram- mars. Journal of Computer and System Sciences 43(2):328-360.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deepbank : a dynamically annotated treebank of the Wall Street Journal", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valia", |
|
"middle": [], |
|
"last": "Kordoni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories (TLT11)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank : a dynamically annotated treebank of the Wall Street Journal. In Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories (TLT11). Lisbon, pages 85-96. HU.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Announcing prague czech-english dependency treebank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jarmila", |
|
"middle": [], |
|
"last": "Panevov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Fu\u010d\u00edkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Mikulov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Pajas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Popelka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Semeck\u00fd", |
|
"suffix": "" |
|
}, |
|
{

"first": "Jana",

"middle": [],

"last": "\u0160indlerov\u00e1",

"suffix": ""

},

{

"first": "Jan",

"middle": [],

"last": "\u0160t\u011bp\u00e1nek",

"suffix": ""

},

{

"first": "Josef",

"middle": [],

"last": "Toman",

"suffix": ""

},

{

"first": "Zde\u0148ka",

"middle": [],

"last": "Ure\u0161ov\u00e1",

"suffix": ""

},

{

"first": "Zden\u011bk",

"middle": [],

"last": "\u017dabokrtsk\u00fd",

"suffix": ""

}
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12). European Language Resources Association (ELRA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d, Eva Haji\u010dov\u00e1, Jarmila Panevov, Petr Sgall, Ond\u0159ej Bojar, Silvie Cinkov\u00e1, Eva Fu\u010d\u00edkov\u00e1, Marie Mikulov\u00e1, Petr Pajas, Jan Popelka, Ji\u0159\u00ed Se- meck\u00fd, Jana\u0160indlerov\u00e1, Jan\u0160t\u011bp\u00e1nek, Josef Toman, Zde\u0148ka Ure\u0161ov\u00e1, and Zden\u011bk\u017dabokrtsk\u00fd. 2012. Announcing prague czech-english dependency tree- bank 2.0. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uur Doan, Bente Maegaard, Joseph Mariani, Asun- cion Moreno, Jan Odijk, and Stelios Piperidis, ed- itors, Proceedings of the Eight International Con- ference on Language Resources and Evaluation (LREC'12). European Language Resources Associ- ation (ELRA), Istanbul, Turkey.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Semanticsbased machine translation with hyperedge replacement grammars", |
|
"authors": [ |
|
{ |
|
"first": "Bevan", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Andreas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Mortiz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Mor- tiz Hermann, and Kevin Knight. 2012. Semantics- based machine translation with hyperedge replace- ment grammars. In Proceedings of COLING.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The complexity of graph languages generated by hyperedge replacement", |
|
"authors": [ |
|
{ |
|
"first": "Clemens", |
|
"middle": [], |
|
"last": "Lautemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clemens Lautemann. 1990. The complexity of graph languages generated by hyperedge re- placement.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Tree-Like Grammars and Separation Logic", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Matheja", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Jansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Noll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "90--108", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-26529-26" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christoph Matheja, Christina Jansen, and Thomas Noll. 2015. Tree-Like Grammars and Separa- tion Logic, Springer International Publishing, Cham, pages 90-108. https://doi.org/10.1007/978-3-319- 26529-2 6.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "On the complexity analysis of static analyses", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcallester", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "J. ACM", |
|
"volume": "49", |
|
"issue": "4", |
|
"pages": "512--537", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/581771.581774" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David McAllester. 2002. On the complexity anal- ysis of static analyses. J. ACM 49(4):512-537. https://doi.org/10.1145/581771.581774.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A synchronous hyperedge replacement grammar based approach for AMR parsing", |
|
"authors": [ |
|
{ |
|
"first": "Xiaochang", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linfeng", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 19th Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement gram- mar based approach for AMR parsing. In Pro- ceedings of the 19th Conference on Computa- tional Natural Language Learning, CoNLL 2015, Beijing, China, July 30-31, 2015. pages 32-41. http://aclweb.org/anthology/K/K15/K15-1004.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Principles and implementation of deductive parsing", |
|
"authors": [ |
|
{

"first": "Stuart",

"middle": [

"M"

],

"last": "Shieber",

"suffix": ""

},

{

"first": "Yves",

"middle": [],

"last": "Schabes",

"suffix": ""

},

{

"first": "Fernando",

"middle": [

"C N"

],

"last": "Pereira",

"suffix": ""

}
|
], |
|
"year": 1995, |
|
"venue": "Journal of Logic Programming", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "1--2", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming 24(1-2).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Characterizing structural descriptions produced by various grammatical formalisms", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Vijay-Shanker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Weir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Proceedings of the 25th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics", |
|
"volume": "87", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/981175.981190" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Vijay-Shanker, David J. Weir, and Aravind K. Joshi. 1987. Characterizing structural descrip- tions produced by various grammatical formalisms. In Proceedings of the 25th Annual Meeting on Association for Computational Linguistics. Asso- ciation for Computational Linguistics, Strouds- burg, PA, USA, ACL '87, pages 104-111. https://doi.org/10.3115/981175.981190.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "The replacement of the X-labeled edge e in G by the graph H." |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Graph derived by grammar in" |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Definition 6.(Chiang et al. 2013; Definition 6) Let I be a subgraph of a graph G. A boundary node of I is a node which is either an endpoint of an edge in G\\I or an external node of G. A boundary edge of I is an edge in I which has a boundary node as an endpoint. The boundary representation of I is the tuple b(I) = bn(I), be(I), m \u2208 I where 1. bn(I) is the set of boundary nodes of I 2. be(I) is the set of boundary edges of I 3. (m \u2208 I) is a flag indicating whether the marker node is in I." |
|
}, |
|
"FIGREF6": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Top left subgraph of" |
|
}, |
|
"FIGREF7": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "This graph cannot be normally ordered." |
|
}, |
|
"FIGREF8": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "A Hasse diagram of various string, tree and graph language families. An arrow from family A to family B indicates that family A is a subfamily of family B." |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "The inference rules for the top-down recognizer." |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>HRL</td><td>MSOL</td><td>Graphs</td></tr><tr><td/><td>RGL DAGAL</td><td/></tr><tr><td>CFTL</td><td>RTL</td><td>Trees</td></tr><tr><td>CFL *</td><td>RL</td><td>Strings</td></tr></table>", |
|
"text": "does not. Perhaps we need less expressivity than HRG but more than RGG." |
|
} |
|
} |
|
} |
|
} |