{
"paper_id": "N19-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:58:44.092369Z"
},
"title": "Implementation of a Chomsky-Sch\u00fctzenberger n-Best Parser for Weighted Multiple Context-Free Grammars",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Ruprecht",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technische Universit\u00e4t Dresden",
"location": {
"postCode": "01062",
"settlement": "Dresden",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Tobias",
"middle": [],
"last": "Denkinger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technische Universit\u00e4t Dresden",
"location": {
"postCode": "01062",
"settlement": "Dresden",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Constituent parsing has been studied extensively in recent decades. As an approach to constituent parsing, Chomsky-Sch\u00fctzenberger parsing has so far only been investigated theoretically. It uses the decomposition of a language into a regular language, a homomorphism, and a bracket language to divide the parsing problem into simpler subproblems. We provide the first implementation of Chomsky-Sch\u00fctzenberger parsing. It employs multiple context-free grammars and incorporates many refinements to achieve feasibility. We compare its performance to state-of-the-art grammar-based parsers.",
"pdf_parse": {
"paper_id": "N19-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "Constituent parsing has been studied extensively in recent decades. As an approach to constituent parsing, Chomsky-Sch\u00fctzenberger parsing has so far only been investigated theoretically. It uses the decomposition of a language into a regular language, a homomorphism, and a bracket language to divide the parsing problem into simpler subproblems. We provide the first implementation of Chomsky-Sch\u00fctzenberger parsing. It employs multiple context-free grammars and incorporates many refinements to achieve feasibility. We compare its performance to state-of-the-art grammar-based parsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The description of the syntax of natural languages (such as Danish, English, and German) with the help of formal grammars has been studied since Chomsky (1956) . With a formal grammar, computers can calculate a syntactic representation (called parse) of a sentence in a natural language. Of the grammar classes in the Chomsky hierarchy (Chomsky, 1959) , context-free grammars (short: CFGs) lack the expressive power necessary to model natural languages (Shieber, 1985) and parsing with context-sensitive grammars cannot be done efficiently (i.e. in polynomial time). This led to the introduction of a series of classes of mildly context-sensitive grammars (Joshi, 1985) that allow parsing in polynomial time but also capture an increasing amount of phenomena present in natural languages. Tree adjoining grammars (Joshi et al., 1975) , linear context-free string-rewriting systems (short: LCFRSs, Vijay-Shanker et al., 1987) , and multiple CFGs (short: MCFGs, Seki et al., 1991) are among those classes.",
"cite_spans": [
{
"start": 145,
"end": 159,
"text": "Chomsky (1956)",
"ref_id": "BIBREF4"
},
{
"start": 336,
"end": 351,
"text": "(Chomsky, 1959)",
"ref_id": "BIBREF5"
},
{
"start": 453,
"end": 468,
"text": "(Shieber, 1985)",
"ref_id": "BIBREF27"
},
{
"start": 656,
"end": 669,
"text": "(Joshi, 1985)",
"ref_id": "BIBREF17"
},
{
"start": 813,
"end": 833,
"text": "(Joshi et al., 1975)",
"ref_id": "BIBREF18"
},
{
"start": 881,
"end": 924,
"text": "(short: LCFRSs, Vijay-Shanker et al., 1987)",
"ref_id": null
},
{
"start": 960,
"end": 978,
"text": "Seki et al., 1991)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Chomsky-Sch\u00fctzenberger (short: CS) parsing was introduced by Hulden (2011) for CFGs and extended to MCFGs by Denkinger (2017) .",
"cite_spans": [
{
"start": 109,
"end": 125,
"text": "Denkinger (2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It uses a classical theorem by Chomsky and Sch\u00fctzenberger (1963, or the generalisation by Yoshinaka et al., 2010) , which states that the language L(G) of a CFG (or an MCFG) G can be represented by a regular language R, a homomorphism h, and a Dyck language (resp. multiple Dyck language) D such that L(G) = h(R \u2229 D). The elements of R \u2229 D correspond to parses in G. For a sentence w, a CS parser calculates the elements of h \u22121 (w)\u2229R\u2229D and transforms them into parses. CS parsing can be viewed as a coarse-to-fine mechanism where R corresponds to the coarse grammar and R \u2229 D to the fine grammar. The respective coarse-to-fine pipeline consists of (conceptually) simple operations such as h \u22121 or the intersection with R, which provides great flexibility. The flexibility is used to provide a fallback mechanism in case a finer stage of the pipeline rejects all proposals of a coarser stage. It also permits CS parsing in a broader setting than usual (for parsing) with minimal modification (see sec. 6).",
"cite_spans": [
{
"start": 31,
"end": 67,
"text": "Chomsky and Sch\u00fctzenberger (1963, or",
"ref_id": "BIBREF6"
},
{
"start": 90,
"end": 113,
"text": "Yoshinaka et al., 2010)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We suspected that the coarse-to-fine view on CS parsing leads to an efficient implementation. Since initial tests revealed that the original algorithm for MCFGs (Denkinger, 2017, alg. 3 , recalled in sec. 2) is not feasible in practice, we explore numerous optimisations (sec. 4), one of which is the use of a context-free approximation of the multiple Dyck language D. We introduce component-wise derivations (sec. 3) to relate this context-free approximation to D. Employing the optimisations, we provide the first implementation of a CS parser. In sec. 5, we compare our parser's performance to Grammatical Framework (Angelov and Ljungl\u00f6f, 2014) , rparse (Kallmeyer and Maier, 2013) , and disco-dop (van Cranenburgh et al., 2016) . We restrict our comparison to (discontinuous) grammar-based parsers (excluding e.g. transition systems, Maier, 2015, Coavoux and Crabb\u00e9, 2017) since the principle of CS parsing requires a grammar.",
"cite_spans": [
{
"start": 161,
"end": 185,
"text": "(Denkinger, 2017, alg. 3",
"ref_id": null
},
{
"start": 620,
"end": 648,
"text": "(Angelov and Ljungl\u00f6f, 2014)",
"ref_id": "BIBREF0"
},
{
"start": 658,
"end": 685,
"text": "(Kallmeyer and Maier, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 702,
"end": 732,
"text": "(van Cranenburgh et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 839,
"end": 863,
"text": "Maier, 2015, Coavoux and",
"ref_id": null
},
{
"start": 864,
"end": 877,
"text": "Crabb\u00e9, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The sets of non-negative integers and positive integers are denoted by N and N + , respectively. We abbreviate {1, . . . , n} by [n] for each n \u2208 N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Let A and B be sets. The powerset of A and the set of (finite) strings over A are denoted by P(A) and A*, respectively. The set of possibly infinite sequences of elements of A is denoted by A^\u03c9. A partition of A is a set P \u2286 P(A) whose elements (called cells) are non-empty, pairwise disjoint, and cover A (i.e. \u22c3_{p \u2208 P} p = A). For each a \u2208 A and each equivalence relation \u2248 on A, we denote the equivalence class of a w.r.t. \u2248 by [a]_\u2248. The set of functions from A to B is denoted by A \u2192 B. Note that (A \u2192 B) \u2286 P(A \u00d7 B). The composition of two binary relations R_1 and R_2 is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "R_2 \u2022 R_1 = {(a, c) | \u2203b: (a, b) \u2208 R_1, (b, c) \u2208 R_2}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
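The relation composition just defined can be sketched directly for finite relations represented as sets of pairs (a small illustration of ours, not part of the paper):

```python
# Compose two binary relations given as sets of pairs, following
# R2 . R1 = {(a, c) | there is a b with (a, b) in R1 and (b, c) in R2}.
def compose(r2, r1):
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}
```

For example, composing {(2, "x")} with {(1, 2)} relates 1 to "x" via the intermediate element 2.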
{
"text": "Finite state automata. We assume that the reader is familiar with finite state automata. For details, we refer to Hopcroft and Ullman (1979) .",
"cite_spans": [
{
"start": 114,
"end": 140,
"text": "Hopcroft and Ullman (1979)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "A finite state automaton (short: FSA) is a tuple A = (Q, \u2206, q_i, q_f, T) where Q and \u2206 are finite sets (states and terminals, respectively), q_i, q_f \u2208 Q (initial and final state, respectively), and T \u2286 Q \u00d7 \u2206* \u00d7 Q is finite (transitions). We call q the source and q' the target of a transition (q, u, q'). A run is a string \u03b8 of transitions such that the target of each transition is the source of the next transition in \u03b8. The language of A is denoted by L(A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
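The chaining condition on runs can be made concrete with a few lines of code; the tuple encoding of transitions below is ours, not the paper's:

```python
# A transition is a triple (q, u, q2); a run is a sequence of transitions
# in which each target equals the source of the next transition.
def is_run(theta):
    return all(theta[i][2] == theta[i + 1][0] for i in range(len(theta) - 1))

def is_accepting(theta, q_i, q_f):
    # A non-empty chaining run from the initial to the final state.
    return bool(theta) and is_run(theta) and theta[0][0] == q_i and theta[-1][2] == q_f
```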
{
"text": "Sorts. Sorts are a widespread concept in computer science: one can think of sorts as data types in a programming language. Let S be a set (of sorts). An S-sorted set is a tuple (\u2126, sort) where \u2126 is a set and sort: \u2126 \u2192 S. We abbreviate (\u2126, sort) by \u2126 and sort \u22121 (s) by \u2126 s for s \u2208 S. Now let \u2126 be an (S * \u00d7 S)-sorted set. The set of trees over \u2126 is the S-sorted set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "T_\u2126 where (T_\u2126)_s = {\u03c9(t_1, . . . , t_k) | s_1, . . . , s_k \u2208 S, \u03c9 \u2208 \u2126_{(s_1 \u2022\u2022\u2022 s_k, s)}, t_1 \u2208 (T_\u2126)_{s_1}, . . . , t_k \u2208 (T_\u2126)_{s_k}} for each s \u2208 S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Multiple context-free grammars. A rule of a context-free grammar has the ability to concatenate the strings generated by its right-hand side non-terminals. Multiple context-free grammars extend this ability to concatenating string-tuples. This is done with the help of composition functions. Let \u03a3 be a finite set. A composition function w.r.t. \u03a3 is a function c that takes tuples of strings over \u03a3 as arguments and returns a tuple of strings over \u03a3 (i.e. there are k \u2208 N and s 1 , . . . , s k , s \u2208 N + such that c:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "(\u03a3*)^{s_1} \u00d7 . . . \u00d7 (\u03a3*)^{s_k} \u2192 (\u03a3*)^s), and is defined by an equation c((x_1^1, . . . , x_1^{s_1}), . . . , (x_k^1, . . . , x_k^{s_k})) = (u_1, . . . , u_s) where u_1, . . . , u_s are strings of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "x_i^j's and symbols from \u03a3. We call c linear if each x_i^j occurs at most once in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "u_1 \u2022\u2022\u2022 u_s. We sometimes write [u_1, . . . , u_s] instead of c. Furthermore, setting sort(c) = (s_1 \u2022\u2022\u2022 s_k, s), the composition functions w.r.t. \u03a3 form a sorted set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The following example shows how linear composition functions are used in the rules of a multiple context-free grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Example 1. Consider G = (N, \u03a3, S, P ) where N = {S, A, B} and \u03a3 = {a, b, c, d} are finite sets (non-terminals and terminals, respectively), S \u2208 N (initial non-terminal) and P is a finite set (rules) that contains the following five objects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u03c1_1 = S \u2192 [x_1^1 x_2^1 x_1^2 x_2^2](A, B), \u03c1_2 = A \u2192 [a x_1^1, c x_1^2](A), \u03c1_3 = B \u2192 [b x_1^1, d x_1^2](B), \u03c1_4 = A \u2192 [\u03b5, \u03b5](), \u03c1_5 = B \u2192 [\u03b5, \u03b5]().",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "We call G a multiple context-free grammar. Consider the rule \u03c1_1. Similar to a rule of a context-free grammar, \u03c1_1 has one left-hand side non-terminal (S) and zero or more right-hand side non-terminals (A and B). A derivation of G can be built by combining rules in P to form a tree according to their left- and right-hand side non-terminals. If a derivation starts with the initial non-terminal (here S), then it is called complete. Hence, each complete derivation in G has the form d_{m,n} = \u03c1_1(\u03c1_2^m(\u03c1_4), \u03c1_3^n(\u03c1_5)) for some m, n \u2208 N. If we replace each rule in a derivation by its composition function, we obtain a term of composition functions which can be evaluated. We call the resulting value the yield of a derivation. A derivation d_{m,n} has yield yd(d_{m,n}) = a^m b^n c^m d^n. The set of yields of all complete derivations is the language of G:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "L(G) = {a^m b^n c^m d^n | m, n \u2208 N}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
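The yield computation of ex. 1 can be mimicked in a few lines; the helper names below are ours and the composition functions of the five rules are hard-coded (a sketch, not the paper's implementation):

```python
# Evaluate the composition functions of ex. 1 bottom-up for the complete
# derivation d_{m,n} = rho1(rho2^m(rho4), rho3^n(rho5)).
def yd_A(m):
    u, v = "", ""                 # rho4 = A -> [eps, eps]()
    for _ in range(m):            # rho2 = A -> [a x_1^1, c x_1^2](A)
        u, v = "a" + u, "c" + v
    return u, v

def yd_B(n):
    u, v = "", ""                 # rho5 = B -> [eps, eps]()
    for _ in range(n):            # rho3 = B -> [b x_1^1, d x_1^2](B)
        u, v = "b" + u, "d" + v
    return u, v

def yd(m, n):
    (a_s, c_s), (b_s, d_s) = yd_A(m), yd_B(n)
    return a_s + b_s + c_s + d_s  # rho1 concatenates the four components

# yd(m, n) produces a^m b^n c^m d^n, an element of L(G).
```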
{
"text": "The following definition formalises the notions of ex. 1 and introduces some additional concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 2 (Seki et al., 1991) . A multiple context-free grammar (short: MCFG) is a tuple G = (N, \u03a3, S, P ) where N is a finite N + -sorted set (non-terminals), \u03a3 is a finite set (terminals),",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "(Seki et al., 1991)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "S \u2208 N 1 (initial non-terminal), P is a finite (N * \u00d7 N )-sorted set of strings \u03c1 of the form A \u2192 c(B 1 , . . . , B k ) such that A, B 1 , . . . , B k \u2208 N , c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "is a linear composition function, and sort(c) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "(sort(B 1 ) \u2022 \u2022 \u2022 sort(B k ), sort(A)). The sort of \u03c1 is (B 1 \u2022 \u2022 \u2022 B k , A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The left-hand side (short: lhs) of \u03c1 is A. The fanout of \u03c1 is fanout(\u03c1) = sort(A) and the rank of \u03c1 is rank (\u03c1) = k. The elements of P are called rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The set of derivations (resp. complete derivations) of G is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "D G = T P (resp. D c G = (T P ) S ). Let w \u2208 \u03a3 * and d = \u03c1(d 1 , . . . , d k ) \u2208 D G with \u03c1 = A \u2192 c(B 1 , . . . , B k ). The yield of d is yd(d) = c(yd(d 1 ), . . . , yd(d k )). The set of derivations of w in G is D c G (w) = yd \u22121 (w)\u2229D c G . The language of A in G is L(G, A) = {yd(d) | d \u2208 (T P ) A }. The language of G is L(G) = L(G, S).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Any language generated by an MCFG is called multiple context-free (short: mcf).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "A context-free grammar (short: CFG) is an MCFG where each non-terminal has sort 1. Each rule of a CFG has the form A \u2192",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "[u_0 x_{i(1)}^1 u_1 \u2022\u2022\u2022 x_{i(n)}^1 u_n](B_1, . . . , B_k). We abbreviate this rule by A \u2192 u_0 B_{i(1)} u_1 \u2022\u2022\u2022 B_{i(n)} u_n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Weighted multiple context-free grammars. A weighted MCFG is obtained by assigning a weight to each rule of an (unweighted) MCFG. In this paper, the weights will be taken from a partially ordered commutative monoid with zero (short: POCMOZ). A POCMOZ is an algebra (M, , 1, 0, \u00a2) where \u00a2 is a partial order on M ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u2299 is associative, commutative, decreasing (i.e. m \u2299 m' \u2aaf m), and monotone (i.e. m_1 \u2aaf m_2 implies m_1 \u2299 m \u2aaf m_2 \u2299 m); 1 is neutral w.r.t. \u2299; and 0 is absorbing w.r.t. \u2299. We call M factorisable if for each m \u2208 M \\ {1}, there are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "m_1, m_2 \u2208 M \\ {1} with m = m_1 \u2299 m_2. The probability algebra Pr = ([0, 1], \u2022, 1, 0, \u2264) is a factorisable POCMOZ where r = \u221ar \u2022 \u221ar for each r \u2208 [0, 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Example 3 (continues ex. 1). Consider the tuple (G, \u00b5) where \u00b5: P \u2192 Pr is a function where \u00b5(\u03c1_1) = 1, \u00b5(\u03c1_2) = \u00b5(\u03c1_4) = 1/2, \u00b5(\u03c1_3) = 1/3, and \u00b5(\u03c1_5) = 2/3. We call (G, \u00b5) a weighted MCFG. The weight of a derivation d_{m,n} is obtained by multiplying the weights of all rule occurrences in it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "wt(d_{m,n}) = 1/2^{m+1} \u2022 2/3^{n+1}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Let d = \u03c1(d_1, . . . , d_k) \u2208 D_G. The weight of d is wt(d) = \u00b5(\u03c1) \u2299 wt(d_1) \u2299 \u2022\u2022\u2022 \u2299 wt(d_k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
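The recursive weight computation of ex. 3 can be sketched with derivations encoded as pairs of a rule name and a list of subderivations (our encoding, exact rational arithmetic for clarity):

```python
from fractions import Fraction

# Rule weights of ex. 3.
MU = {"rho1": Fraction(1), "rho2": Fraction(1, 2), "rho3": Fraction(1, 3),
      "rho4": Fraction(1, 2), "rho5": Fraction(2, 3)}

def wt(d):
    # d is a pair (rule name, list of subderivations); multiply the rule's
    # weight with the weights of all subderivations.
    rule, children = d
    w = MU[rule]
    for child in children:
        w *= wt(child)
    return w

def d_mn(m, n):
    # Build d_{m,n} = rho1(rho2^m(rho4), rho3^n(rho5)).
    a = ("rho4", [])
    for _ in range(m):
        a = ("rho2", [a])
    b = ("rho5", [])
    for _ in range(n):
        b = ("rho3", [b])
    return ("rho1", [a, b])
```

For instance, wt(d_mn(0, 0)) = 1 · 1/2 · 2/3 = 1/3, matching 1/2^{m+1} · 2/3^{n+1} for m = n = 0.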
{
"text": "For the rest of this paper, we fix a wMCFG (G, \u00b5) with underlying MCFG G = (N, \u03a3, S, P ) and weight assignment \u00b5: P \u2192 M .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Chomsky-Sch\u00fctzenberger theorem. In the Chomsky-Sch\u00fctzenberger theorem for CFGs (cf. sec. 1), D contains strings of brackets where each opening bracket is matched by the corresponding closing bracket. This property can be described with an equivalence relation. Let \u2206 be a set (of opening brackets) and \u2206\u0304 be the set (of closing brackets) that contains \u03b4\u0304 for each \u03b4 \u2208 \u2206. We define \u2261_\u2206 as the smallest equivalence relation where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "u \u03b4 \u03b4\u0304 v \u2261_\u2206 uv for each \u03b4 \u2208 \u2206 and u, v \u2208 (\u2206 \u222a \u2206\u0304)*. The Dyck language w.r.t. \u2206 is D_\u2206 = [\u03b5]_{\u2261_\u2206}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
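Membership in a Dyck language can be tested with the usual stack discipline; this small sketch (ours, not the paper's) implements the cancellation that the congruence above performs on innermost matched pairs:

```python
# Stack-based membership test for the Dyck language over a set of opening
# brackets with matching closing brackets: each cancellation of an innermost
# matched pair corresponds to one application of the congruence.
def is_dyck(word, closing_of):
    # word: iterable of bracket symbols; closing_of: opening -> closing.
    opening = set(closing_of)
    stack = []
    for sym in word:
        if sym in opening:
            stack.append(closing_of[sym])
        elif not stack or stack.pop() != sym:
            return False      # unmatched or wrongly nested closing bracket
    return not stack          # every opening bracket must be closed
```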
{
"text": "In the Chomsky-Sch\u00fctzenberger representation for MCFGs, the brackets fulfil three functions: (i) terminal brackets \u03c3 \u03c3 stand for a terminal symbol \u03c3, (ii) component brackets \u03c1 and \u03c1 denote beginning and end of substrings produced by the \u2113-th component of a rule \u03c1, and (iii) variable brackets j \u03c1,i and j \u03c1,i denote beginning and end of substrings produced by variable x j i in a rule \u03c1. As for CFGs, each opening bracket must be matched by the corresponding closing bracket. Furthermore, because applying a rule of an MCFG produces multiple strings simultaneously, we need to ensure that the brackets corresponding to the same application of a rule occur simultaneously. This is described with another equivalence relation. Let P be a partition of \u2206. Intuitively, each cell of P is a set of (opening) brackets that occur simultaneously. We define \u2261 P as the smallest equivalence relation on",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "P (\u2206 \u222a \u2206) * where for each {\u03b4 1 , . . . , \u03b4 s } \u2208 P with |{\u03b4 1 , . . . , \u03b4 s }| = s, u 0 , . . . , u s , v 1 , . . . , v s \u2208 D \u2206 , and L \u2286 (\u2206 \u222a \u2206) * : u 0 \u03b4 1 v 1 s \u03b4 1 u 1 \u2022 \u2022 \u2022 \u03b4 s v s s \u03b4 s u s \u222a L \u2261 P u 0 \u2022 \u2022 \u2022 u s , v 1 \u2022 \u2022 \u2022 v s \u222a L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The multiple Dyck language w.r.t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "P is mD_P = \u22c3 {L | L \u2208 [{\u03b5}]_{\u2261_P}}. Note that mD_P \u2286 D_\u2206.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Theorem 5 provides a representation of each mcf language by a multiple Dyck language (see above), a recognisable language (to ensure local consistency), and a homomorphism (to decode the bracket sequences into terminal strings). The corresponding construction is recalled in def. 6. Theorem 5 (cf. Yoshinaka et al., 2010, thm. 3) . For every mcf language L \u2286 \u03a3 * there are a homomorphism h:",
"cite_spans": [
{
"start": 298,
"end": 329,
"text": "Yoshinaka et al., 2010, thm. 3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "(\u2206 \u222a \u2206\u0304)* \u2192 \u03a3*, a regular language R \u2286 (\u2206 \u222a \u2206\u0304)*",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": ", and a multiple Dyck",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "language mD \u2286 (\u2206 \u222a \u2206\u0304)* such that L = h(R \u2229 mD).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 6 (Denkinger, 2017, def. 3.6, 4.9, 5.15) . The multiple Dyck language w.r.t. G is mD G = mD P G where P G is the smallest set that contains the cell \u03c3 for each \u03c3 \u2208 \u03a3 and the cells",
"cite_spans": [
{
"start": 13,
"end": 51,
"text": "(Denkinger, 2017, def. 3.6, 4.9, 5.15)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u03c1 | \u2208 [sort(A)] and j \u03c1,i | j \u2208 [sort(B i )] for each \u03c1 = A \u2192 c(B 1 , . . . , B k ) \u2208 P and i \u2208 [k]. Let \u2206 G = p\u2208P G p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "We denote the elements of \u2206\u0304_G by closing brackets, e.g. the closing counterpart of \u03c3 is \u03c3\u0304, and let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u2126_G = \u2206_G \u222a \u2206\u0304_G. The homomorphism w.r.t. G, denoted by hom_G,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "is the unique extension of h: \u2126_G \u2192 \u03a3 \u222a {\u03b5} to strings where h(\u03b4) = \u03c3 if \u03b4 is the opening terminal bracket for \u03c3, and h(\u03b4) = \u03b5 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
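The letter-wise homomorphism can be sketched directly; the tuple encoding of brackets below is ours, not the paper's:

```python
# In the spirit of hom_G: an opening terminal bracket carrying sigma maps to
# sigma, every other bracket maps to the empty string; the map extends
# letter-wise to bracket strings.
def hom(word):
    return "".join(sym[1] for sym in word if sym[0] == "open_term")
```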
{
"text": "The automaton w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "r.t. G, denoted by A G , is the FSA (Q, \u2126 G , S 1 , \u010e S 1 , T ) where Q = A , \u010e A | A \u2208 N, \u2208 [sort(A)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "and T is the smallest set such that for each rule \u03c1 \u2208 P of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "A \u2192 [u 1,0 y 1,1 u 1,1 \u2022 \u2022 \u2022 y 1,n 1 u 1,n 1 , . . . , u s,0 y s,1 u s,1 \u2022 \u2022 \u2022 y s,ns u s,ns ](B 1 , . . . , B k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "where the ys are elements of X and the us are elements of \u03a3 * , we have (abbreviating",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u03c3 1 \u03c3 2 \u2022 \u2022 \u2022 \u03c3 k \u03c3 k by \u03c3 1 \u2022 \u2022 \u2022 \u03c3 k ) the following tran- sitions in T : (i) A , \u03c1 u ,0 \u03c1 , s A \u2208 T for every \u2208 [s] with n = 0, (ii) A , \u03c1 u ,0 j \u03c1,i , B j i \u2208 T for every \u2208 [s] where n = 0 and y ,1 is of the form x j i , (iii) s B j i , j \u03c1,i u ,\u03ba j \u03c1,i , B j i \u2208 T for every \u2208 [s] and \u03ba \u2208 [n \u2212 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "where y ,\u03ba is of the form x j i and y ,\u03ba+1 is of the form x j i , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "(iv) s B j i , j \u03c1,i u ,n \u03c1 , s A \u2208 T for every \u2208 [s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "where n = 0 and y ,n is of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "x j i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "We abbreviate L(A G ) by R G .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Example 7 (continues ex. 1). The automaton w.r.t. G is shown in fig. 1 . An illustration of the application of \u2261 P G is given in the appendix (p. 11).",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 70,
"text": "fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The vanilla parser. The vanilla parser (i.e. alg. 3 from Denkinger, 2017) is shown in fig. 2 (top). Similar to the parser proposed by Hulden (2011), we divide it into three essential phases: (i) FSA constructions for the intersection of hom_G^{-1}(w) and R_G, (ii) an extraction of (in our case multiple) Dyck words from the intersection, and (iii) the conversion of words into derivations.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 92,
"text": "fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Figure 1: Automaton w.r.t. G, cf. ex. 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Formally, the vanilla parser is the function V :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u03a3* \u2192 (D_G^c)^\u03c9 defined as V = MAP(TODERIV) \u2022 FILTER(mD_G) \u2022 SORT(\u00b5') \u2022 (\u2229 R_G) \u2022 hom_G^{-1} where hom_G^{-1}(w) \u2229 R_G is represented by an FSA for each w \u2208 \u03a3* (phase (i)). \u00b5'(u)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "is the product of the weights of each occurrence of a bracket of the form \u03c1 or \u03c1 in u. These weights are fixed such that \u00b5 1 \u03c1 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u03c1 \u2022 \u2022 \u2022 \u03c1 \u03c1 ) = \u00b5(\u03c1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "for each \u03c1 \u2208 P with fanout . SORT(\u00b5 ) brings the elements of its argument, which is a subset of \u2126 G * , in some descending order w.r.t. \u00b5 and \u00a2, returning a (possibly infinite) sequence of elements of \u2126 G * , which we call candidates. Sequences are implemented as iterators. FILTER(mD G ) removes the candidates from its argument sequence that are not in mD G while preserving the order (cf. Denkinger, 2017, alg. 2). (Both steps, SORT(\u00b5 ) and FILTER(mD G ), are phase (ii).) TODERIV returns the derivation in G that corresponds to its argument (which is from the set R G \u2229 mD G ), cf. Denkinger (2017, function fromBrackets, p. 20). MAP(TODERIV) applies TODERIV to each candidate in its argument while preserving the order (phase (iii)). Denkinger (2017, thm. 5.22) showed that TAKE(n) \u2022 V solves the n-best parsing problem. 1 We omit the additional restrictions that he imposed on the given wMCFG because they are only necessary to show the termination of his algorithm. Figure 2 : Visualisation of the vanilla parser (top) and the parser with the optimisations from sec. 4 (bottom).",
"cite_spans": [
{
"start": 739,
"end": 766,
"text": "Denkinger (2017, thm. 5.22)",
"ref_id": null
},
{
"start": 826,
"end": 827,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 973,
"end": 981,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
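The shape of the vanilla parser as a composition of stages can be sketched with lazy iterators; the concrete stages here are stand-ins (in particular, this SORT materialises its finite input, whereas the paper's SORT must enumerate a possibly infinite candidate set in descending weight order):

```python
from itertools import islice

def vanilla(candidates, weight, in_mdyck, to_deriv):
    ranked = sorted(candidates, key=weight, reverse=True)   # SORT(mu')
    kept = (c for c in ranked if in_mdyck(c))               # FILTER(mD_G)
    return (to_deriv(c) for c in kept)                      # MAP(TODERIV)

def n_best(candidates, weight, in_mdyck, to_deriv, n):
    # TAKE(n) composed with the pipeline yields the n best analyses.
    return list(islice(vanilla(candidates, weight, in_mdyck, to_deriv), n))
```

Because the filtering and mapping stages are generators, candidates beyond the n requested ones are never filtered or converted.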
{
"text": "Figure 2 (schematic): vanilla pipeline hom_G^{-1}; \u2229 R_G; SORT(\u00b5'); FILTER(mD_G); TODERIV, and optimised pipeline hom_G^{-1}; \u2229 R_G; EXTRACTDYCK(G, \u00b5'); TOCOWDERIV; TOMCFGDERIV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "3 Component-Wise Derivations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "In sec. 4, we will outline modifications to the vanilla parser that make the extraction of the elements of mD G from hom \u22121 G (w) \u2229 R G efficient (items 2-4). To facilitate this, we first",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "decompose FILTER(mD G ) into FILTER(mD G ) \u2022 FILTER(D \u2206 G ), which is possible because D \u2206 G \u2287 mD G . Secondly, we implement FILTER(D \u2206 G ) \u2022 SORT(\u00b5 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "with a dynamic programming algorithm (cf. Hulden, 2011, alg. 1, similar to Bar-Hillel et al., 1961, sec. 8) . And lastly, we replace FILTER(mD G ) by steps that exploit the wellbracketing of the elements of D \u2206 G .",
"cite_spans": [
{
"start": 75,
"end": 107,
"text": "Bar-Hillel et al., 1961, sec. 8)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The elements of R G \u2229 D \u2206 G can be represented as trees over rules of G. 2 We label the edges of those trees to allow us to check if vertices that correspond to the same application of a rule of the MCFG G match. The resulting objects are called component-wise derivations. The set",
"cite_spans": [
{
"start": 73,
"end": 74,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "R G \u2229 D \u2206 G is characterised in terms a CFG G cf .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 8. Let \u03c1 \u2208 P be a rule of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "A \u2192 [u 1 , . . . , u s ](B 1 , . . . , B k ), \u2208 [s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": ", and u be of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "w 0 x j(1) i(1) w 1 \u2022 \u2022 \u2022 x j(n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "i(n) w n for some w 0 , . . . , w n \u2208 \u03a3 * . We define the rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u03c1 ( ) = A \u2192 \u03c1 w 0 v 1 w 1 \u2022 \u2022 \u2022 v n w n \u03c1 where each v \u03ba = j(\u03ba) \u03c1,i(\u03ba) B j(\u03ba) i(\u03ba) j(\u03ba) \u03c1,i(\u03ba) . The context- free CS approximation of G (short: CFA), de- noted by G cf , is the CFG (N cf , \u2126 G , S 1 , P cf ) where N cf = {A | A \u2208 N, \u2208 [sort(A)]} and P cf = {\u03c1 ( ) | \u03c1 \u2208 P, \u2208 [fanout(\u03c1)]}. Observation 9. D \u2206 G \u2229 R G = L(G cf ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
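Definition 8 turns the \u2113-th component of each MCFG rule into one context-free rule of the CS approximation G cf . A Python sketch of this per-component construction, under our own toy encoding (variables x^j_i as ("var", i, j) triples; the rule-indexed bracket symbols are omitted for brevity):

```python
# Sketch of def. 8: from an MCFG rule, derive one context-free rule per
# component. Encodings and names here are our own illustration, not the
# paper's implementation.
def cfa_rules(lhs, components, rhs):
    """Yield (nonterminal, body) pairs of the context-free CS approximation."""
    for l, comp in enumerate(components, start=1):   # one CF rule per component l
        body = []
        for sym in comp:
            if isinstance(sym, tuple) and sym[0] == "var":
                _, i, j = sym
                body.append(f"{rhs[i - 1]}_{j}")     # occurrence of B_i's j-th component
            else:
                body.append(sym)                     # terminal symbol, kept as-is
        yield f"{lhs}_{l}", body

# Rule S -> [x^1_1 x^1_2 x^2_1 x^2_2](A, B) from ex. 13: one component,
# referencing components 1 and 2 of both A (i = 1) and B (i = 2).
comps = [[("var", 1, 1), ("var", 2, 1), ("var", 1, 2), ("var", 2, 2)]]
rules = list(cfa_rules("S", comps, ["A", "B"]))
```

The resulting body `["A_1", "B_1", "A_2", "B_2"]` mirrors the rule \u03c1 (1) 1 of example 13, minus the bracket decoration.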
{
"text": "We introduce component-wise derivations to relate the derivations of G cf with those of G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 10. Let \u2208 N + and t be a tree whose vertices are labelled with elements of P and whose edges are labelled with elements of N + \u00d7 N + . The label at the root of t is denoted by root(t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The set of labels of the outgoing edges from the root of t is denoted by out(t). A (i, j)-subtree of t, is a sub-graph of t consisting of all the vertices (and their edges) reachable from some target vertex of the outgoing edge from the root that is labelled with (i, j). If there is a unique (i, j)subtree of t, then we denote it by sub (i,j) (t). Now let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "root(t) = A \u2192 [u 1 , . . . , u s ](B 1 , . . . , B k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "We call t an ( -)component-wise derivation, short: ( -)cow derivation, of G if the following four requirements are met: (i) out(t) contains exactly the pairs (i, j) such that x j i occurs in u , (ii) a unique (i, j)-subtree of t exists, (iii) root(sub (i,j) (t)) has lhs B i , and (iv) sub (i,j) (t) is a j-cow derivation for each (i, j) \u2208 out(t). We denote the set of cow derivations of G whose root's lhs is S by cowD c G . The set of -cow derivations whose root's label has lhs A is denoted by -cowD A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "G . An example of a cow derivation is shown in fig. 3a . The root is the top-most vertex.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "fig. 3a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Definition 11. Let \u03c1 = A \u2192 c(B 1 , . . . , B k ) \u2208 P , \u2208 [fanout(\u03c1)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": ", and the -th component of c be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "u 0 x j(1) i(1) u 1 \u2022 \u2022 \u2022 x j(n) i(n) u n with u 1 , . . . , u n \u2208 \u03a3 * . Furthermore, for each \u03ba \u2208 [n], let t \u03ba \u2208 j(\u03ba)-cowD B i(\u03ba) G . By \u03c1 (i(\u03ba), j(\u03ba))/t \u03ba | \u03ba \u2208 [n] , we denote the cow derivation t such that root(t) = \u03c1, out(t) = {(i(\u03ba), j(\u03ba)) | \u03ba \u2208 [n]}, and for each \u03ba \u2208 [n]: sub (i(\u03ba),j(\u03ba)) (t) = t \u03ba .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Lemma 12. There is a bijection toCowD between L(G cf ) and cowD c G . Proof sketch. We define the partial function toCowD from \u2126 G * to cow derivations of G as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "toCowD(u) = \u03c1 (i(\u03ba), j(\u03ba))/toCowD(v \u03ba ) | \u03ba \u2208 [n] if u is of the form \u03c1 u 0 j(1) \u03c1,i(1) v 1 j(1) \u03c1,i(1) u 1 . . . j(n) \u03c1,i(n) v n j(n) \u03c1,i(n) u n \u03c1 for some rule \u03c1 = A \u2192 c(B 1 , . . . , B k ) where the -th component of c is u 0 x j(1) i(1) u 1 \u2022 \u2022 \u2022 x j(n) i(n) u n with u 1 , . . . , u n \u2208 \u03a3 * ; otherwise, toCowD(u) is un- defined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The partial function toCowD is a bijection between L(G cf ) and cowD c G (proven in appendix A.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Example 13 (continues ex. 1). We construct",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "G cf = ({S 1 , A 1 , A 2 , B 1 , B 2 }, \u2126 G , S 1 , P cf )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "where P cf contains, among others, the following rules: Figure 3a shows the image of the word",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 65,
"text": "Figure 3a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "\u03c1 (1) 1 = S 1 \u2192 1 \u03c1 1 1 \u03c1 1 ,1 A 1 1 \u03c1 1 ,1 1 \u03c1 1 ,2 B 1 1 \u03c1 1 ,2 2 \u03c1 1 ,1 A 2 2 \u03c1 1 ,1 2 \u03c1 1 ,2 B 2 2 \u03c1 1 ,2 1 \u03c1 1 , \u03c1 (1) 3 = B 1 \u2192 1 \u03c1 3 b 1 \u03c1 3 ,1 B 1 1 \u03c1 3 ,1 1 \u03c1 3 , \u03c1 (1) 4 = A 1 \u2192 1 \u03c1 4 1 \u03c1 4 , \u03c1 (2) 4 = A 2 \u2192 2 \u03c1 4 2 \u03c1 4 , \u03c1 (1) 5 = B 1 \u2192 1 \u03c1 5 1 \u03c1 5 , . . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "1 \u03c1 1 1 \u03c1 1 ,1 1 \u03c1 4 1 \u03c1 4 1 \u03c1 1 ,1 1 \u03c1 1 ,2 1 \u03c1 3 b 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 1 \u03c1 3 1 \u03c1 1 ,2 2 \u03c1 1 ,1 2 \u03c1 4 2 \u03c1 4 2 \u03c1 1 ,1 2 \u03c1 1 ,2 2 \u03c1 5 2 \u03c1 5 2 \u03c1 1 ,2 1 \u03c1 1 in L(G cf ) under toCowD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "In the following, we define a property called consistency to discern those cow derivations that correspond to derivations of the MCFG G. Definition 14. Let s \u2208 N + and t 1 , . . . , t s be cow derivations of G. We call the set {t 1 , . . . , t s } consistent if there is a rule \u03c1 = A \u2192 c(B 1 , . . . , B k ) \u2208 P such that root(t 1 ) = . . . = root(t s ) = \u03c1, s = sort(A), and for each i \u2208 [k]: the set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "{sub (i,j) (t ) | \u2208 [s], j \u2208 [sort(B i )]: (i, j) \u2208 out(t )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "is consistent. If s = 1, then we also call t 1 consistent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The cow derivation shown in fig. 3a is not consistent. If we consider the set of nodes that is reachable from the root via edges labelled with a tuple whose first component is 2 (the right dotted box), then it is easy to see that the rules at these nodes are not equal. A consistent cow derivation is shown in the appendix ( fig. 6 ). Proposition 15. TODERIV \u2022 toCowD \u22121 is a bijection between the consistent cow derivations in cowD c G and D c G .",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "fig. 3a",
"ref_id": "FIGREF2"
},
{
"start": 325,
"end": 331,
"text": "fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
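Definition 14's consistency requirement can be checked recursively: all trees in a set must carry the same rule at the root, and the subtrees, grouped by the first component of the edge labels, must again form consistent sets. A Python sketch under our own tree encoding (pairs of a rule label and a child map; not the Rustomata data structures, and the lhs check of req. (iii) is omitted):

```python
# A cow derivation is modelled as (rule, children): 'rule' is an opaque rule
# label and 'children' maps edge labels (i, j) to subtrees.
def is_consistent(trees):
    """Check whether a set of cow derivations is consistent (cf. def. 14)."""
    if len({rule for rule, _ in trees}) != 1:   # all roots must carry the same rule
        return False
    groups = {}                                 # subtrees grouped by first component i
    for _, children in trees:
        for (i, j), sub in children.items():
            groups.setdefault(i, []).append(sub)
    return all(is_consistent(group) for group in groups.values())

# Toy trees in the spirit of fig. 3a: the two S-components disagree on the
# rule used for their B-subtrees, so the set is inconsistent.
a = ("A -> [eps, eps]()", {})
s = "S -> [x11 x12 x21 x22](A, B)"
bad = [(s, {(1, 1): a, (2, 1): ("B -> [b x11, d x21](B)", {(1, 1): a})}),
       (s, {(1, 2): a, (2, 2): ("B -> [eps, eps]()", {})})]
good = [(s, {(1, 1): a, (2, 1): a}), (s, {(1, 2): a, (2, 2): a})]
```

Each vertex is visited once, so the check is linear in the size of the cow derivations, matching the complexity claimed for the replacement of isMember.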
{
"text": "In this section, we describe several improvements to the vanilla parser (cf. end of sec. 2). Since the definitions of A G , hom G , and mD G do not depend on the word w, we may compute appropriate representations for these objects before the beginning of the parsing process, and store them persistently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimisations",
"sec_num": "4"
},
{
"text": "S \u2192 [x 1 1 x 1 2 x 2 1 x 2 2 ](A, B) A \u2192 [\u03b5, \u03b5]() (1, 1) A \u2192 [\u03b5, \u03b5]() (1, 2) B \u2192 [bx 1 1 , dx 2 1 ](B) B \u2192 [\u03b5, \u03b5]() (1, 1) (2, 1) B \u2192 [\u03b5, \u03b5]() (2, 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimisations",
"sec_num": "4"
},
{
"text": "(a) A cow derivation. The dotted boxes show clusters of nodes that are reachable from the root via edges labelled with matching first components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimisations",
"sec_num": "4"
},
{
"text": "S \u2192 [x 1 1 x 1 2 x 2 1 x 2 2 ](A, B) A \u2192 [\u03b5, \u03b5]() B \u2192 [bx 1 1 , \u03b5](B) B \u2192 [\u03b5, \u03b5]() (1, 1) (b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimisations",
"sec_num": "4"
},
{
"text": "Construction of new rules for each cluster in fig. 3a . If there were any unused nonterminals in these constructed rules, they are removed and the indices of variables changed accordingly. For each cluster, all reachable nodes are clustered via the first component of the labels as in fig. 3a . In the following, we briefly describe each improvement that we applied to the vanilla parser:",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "fig. 3a",
"ref_id": "FIGREF2"
},
{
"start": 285,
"end": 292,
"text": "fig. 3a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Optimisations",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S \u2192 [x 1 1 x 1 2 x 2 1 x 2 2 ](A, B) A \u2192 [\u03b5, \u03b5]() B \u2192 [bx 1 1 , \u03b5](B) B \u2192 [\u03b5, \u03b5]()",
"eq_num": "("
}
],
"section": "Optimisations",
"sec_num": "4"
},
{
"text": "1. Let us call a rule \u03c1 in G w-consistent if each string of terminals that occurs in (the composition function of) \u03c1 is a substring of w. A rule is called useful w.r.t. w if it occurs in some complete derivation of G in which each rule is w-consistent. In the construction of the FSA for R G \u2229 hom \u22121 G (w), we only calculate the transitions that relate to rules of G that are useful w.r.t. w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimisations",
"sec_num": "4"
},
{
"text": "G ) is decomposed into FILTER(mD G ) \u2022 FILTER(D \u2206 G )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The function FILTER(mD",
"sec_num": "2."
},
{
"text": "in preparation for the next two items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The function FILTER(mD",
"sec_num": "2."
},
{
"text": "\u2022 SORT(\u00b5 ) is implemented with the algorithm EXTRACTDYCK(G, \u00b5 ) that uses dynamic programming to extract Dyck words from the language of the given FSA more efficiently. For this, we extend alg. 1 by Hulden (2011) to use weights such that it returns the elements in descending order w.r.t. \u00b5 and \u00a2 (see appendix A.3, alg. 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FILTER(D \u2206 G )",
"sec_num": "3."
},
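FILTER(D \u2206 G ) keeps exactly the well-bracketed words. Setting aside the rule-indexed bracket alphabet of the paper, the underlying membership test is ordinary bracket matching, checkable in linear time with a stack; a Python sketch with a toy two-bracket alphabet of our own choosing:

```python
# Minimal linear-time well-bracketing check. 'pairs' maps each opening
# bracket to its closing partner; all other symbols are ignored terminals.
def is_well_bracketed(word, pairs):
    """Return True iff every closing bracket matches the nearest open one."""
    closing = {close: open_ for open_, close in pairs.items()}
    stack = []
    for sym in word:
        if sym in pairs:                 # opening bracket: remember it
            stack.append(sym)
        elif sym in closing:             # closing bracket: must match the top
            if not stack or stack.pop() != closing[sym]:
                return False
    return not stack                     # everything opened must be closed

pairs = {"(": ")", "[": "]"}
ok = is_well_bracketed(list("([b][d])"), pairs)    # brackets nest properly
bad = is_well_bracketed(list("([)]"), pairs)       # crossing brackets
```

The actual filter additionally requires that matching brackets carry the same rule and component indices, but the stack discipline is the same.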
{
"text": "In our implementation, we change this al-Algorithm 1 reads off cow derivations from words of the CFA of G. return t gorithm even further such that items are explored in a similar fashion as in the CKYalgorithm (Kasami, 1966; Younger, 1967; Cocke and Schwartz, 1970) .",
"cite_spans": [
{
"start": 210,
"end": 224,
"text": "(Kasami, 1966;",
"ref_id": "BIBREF20"
},
{
"start": 225,
"end": 239,
"text": "Younger, 1967;",
"ref_id": "BIBREF31"
},
{
"start": 240,
"end": 265,
"text": "Cocke and Schwartz, 1970)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FILTER(D \u2206 G )",
"sec_num": "3."
},
{
"text": "Input: v \u2208 L(G cf ) Output: toCowD(v) 1: function TOCOWDERIV(v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FILTER(D \u2206 G )",
"sec_num": "3."
},
{
"text": "4. For FILTER(mD G ), instead of isMember by Denkinger (2017, p. 28-30) , which runs in quadratic time, we use the composition of two algorithms that run in linear time:",
"cite_spans": [
{
"start": 45,
"end": 71,
"text": "Denkinger (2017, p. 28-30)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FILTER(D \u2206 G )",
"sec_num": "3."
},
{
"text": "\u2022 alg. 1, which reads a cow derivation off a given word in R G \u2229 D \u2206 G , and \u2022 an algorithm that checks a given cow derivation for consistency. (This is similar to alg. 2; but instead of derivations, we return Boolean values. The algorithm is given explicitly in sec. A.3.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FILTER(D \u2206 G )",
"sec_num": "3."
},
{
"text": "G and D c G (see prop. 15). Analogously to def. 14, the function TOMCFGDERIV' checks a set of cow derivations for equivalence of the root symbol and the function COLLECTCHILDREN groups the subtrees via the first component of the successor labels. It is easy to see that TOMCFGDERIV(t) is only defined if the cow derivation t is consistent (cf. item 4). Thus, we use TOMCFGDERIV in combination with TOCOWDERIV to replace MAP(TODERIV) \u2022 FILTER(mD G ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 computes the bijection between cowD c",
"sec_num": "5."
},
{
"text": "Algorithm 2 converts a consistent element of cowD c G into a complete derivation of G. Input: t \u2208 cowD c G . Output: TODERIV(toCowD \u22121 (t)) if t is consistent, undefined otherwise. 1: function TOMCFGDERIV(t) 2: return TOMCFGDERIV'({t}) 3: function TOMCFGDERIV'(T ) 4: if not \u2200 t, t' \u2208 T : root(t) = root(t') then 5: return undefined 6: (T 1 , . . . , T k ) \u2190 COLLECTCHILDREN(T ) 7: for i \u2208 [k] do 8: t i \u2190 TOMCFGDERIV'(T i ) 9: if t i = undefined then 10: return undefined 11: {\u03c3} \u2190 {root(t) | t \u2208 T } 12: return \u03c3(t 1 , . . . , t k ) 13: function COLLECTCHILDREN(T ) 14: {k} \u2190 {rank(root(t)) | t \u2208 T } 15: for i \u2208 [k] do 16: T i \u2190 {sub (i,j) (t) | t \u2208 T, (i, j) \u2208 out(t)} 17: return (T 1 , . . . , T k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 computes the bijection between cowD c",
"sec_num": "5."
},
{
"text": "The time complexity of alg. 2 is linear in the number of vertices of the given cow derivation. This number, in turn, is linear in the length of the processed candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2 computes the bijection between cowD c",
"sec_num": "5."
},
{
"text": "The parser obtained by applying items 1 to 5 to the vanilla parser is visualised in fig. 2 (bottom) . It is sound and complete. 3 The following two modifications (items 6 and 7) destroy both soundness and completeness. Item 6 allows only the best intermediate results to be processed further and limits the results to a subset of those of the vanilla parser. In item 7, we compensate this by an approximation we consider useful in practise.",
"cite_spans": [
{
"start": 128,
"end": 129,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 84,
"end": 99,
"text": "fig. 2 (bottom)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm 2 computes the bijection between cowD c",
"sec_num": "5."
},
{
"text": "6. EXTRACTDYCK is extended with an optional implementation of beam search by limiting the amount of items for certain groups of state spans to a specific number (beam width), cf. Collins (1999) . In our implementation, we chose these groups of state spans such that they correspond to equal states in the automaton for hom \u22121 G (w). Moreover, we introduce a variable that limits the number of candidates that are yielded by Algorithm 3 (candidate count). Both variables are the meta-parameters of our parser. 7. We introduce a fallback mechanism for the case that FILTER(mD G ) has input candidates but an empty output. Usually, in that case, we would suggest there is no derivation for w in G, yet for robustness, it is preferable to output some parse. Figure 3 illustrates a strategy to construct a complete derivation from any complete cow derivation with an example.",
"cite_spans": [
{
"start": 179,
"end": 193,
"text": "Collins (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 754,
"end": 762,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Algorithm 2 computes the bijection between cowD c",
"sec_num": "5."
},
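The beam restriction of item 6 can be sketched as follows: items are grouped (here by a stand-in automaton state) and only the beam_width best-weighted items per group survive. The item representation and grouping key below are our own simplification, not Rustomata's internals:

```python
from heapq import nlargest

# Keep at most 'beam_width' of the heaviest items in each group; everything
# else is pruned from further exploration.
def prune_beam(items, key, weight, beam_width):
    """Group items by 'key' and keep the 'beam_width' heaviest per group."""
    groups = {}
    for it in items:
        groups.setdefault(key(it), []).append(it)
    kept = []
    for group in groups.values():
        kept.extend(nlargest(beam_width, group, key=weight))
    return kept

# Toy usage: (state, weight) items, beam width 2 per state.
items = [("q0", 0.9), ("q0", 0.5), ("q0", 0.1), ("q1", 0.7)]
pruned = prune_beam(items, key=lambda x: x[0], weight=lambda x: x[1], beam_width=2)
```

As the text notes, pruning of this kind sacrifices completeness: a candidate dropped from a beam can never reappear, which is what the fallback mechanism of item 7 compensates for.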
{
"text": "We implemented the parser with the modifications sketched in sec. 4 for \u03b5-free and simple wMCFGs, 4 but no problems should arise generalising this implementation to arbitrary wMCFGs. The implementation is available as a part of Rustomata, 5 a framework for weighted automata with storage written in the programming language Rust. We used the NeGra corpus (German newspaper articles, 20,602 sentences, 355,096 tokens; Skut et al., 1998) to compare our parser to Grammatical Framework (Angelov and Ljungl\u00f6f, 2014) , rparse (Kallmeyer and Maier, 2013) , and discodop (van Cranenburgh et al., 2016) with respect to parse time and accuracy. 6 Our experiments were conducted on defoliated trees, i.e. we removed the leaves from each tree in the corpus. Parsing was performed on gold part-of-speech tags. We performed a variant of ten-fold cross validation (short: TFCV; cf. Mosteller and Tukey, 1968) , i.e. we split the corpus into ten consecutive parts; each part becomes the validation set in one iteration while the others serve as training set. We used the first iteration to select suitable values for our meta-parameters and the remaining nine for validation. In case of Rustomata, a binarised and markovized grammar was induced with discodop (head-outward binarisation, v = 1, h = 2, cf. Klein and Manning, 2003) in each iteration. For all other parsers, we induced a proba-4 A wMCFG G is called \u03b5-free and simple if each composition function that occurs in the rules of G is either of the form [u1, . . . , us] for some non-empty strings of variables u1, . . . , us, or of the form [t] for some terminal symbol t.",
"cite_spans": [
{
"start": 417,
"end": 435,
"text": "Skut et al., 1998)",
"ref_id": "BIBREF28"
},
{
"start": 483,
"end": 511,
"text": "(Angelov and Ljungl\u00f6f, 2014)",
"ref_id": "BIBREF0"
},
{
"start": 521,
"end": 548,
"text": "(Kallmeyer and Maier, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 564,
"end": 594,
"text": "(van Cranenburgh et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 868,
"end": 894,
"text": "Mosteller and Tukey, 1968)",
"ref_id": "BIBREF23"
},
{
"start": 1290,
"end": 1314,
"text": "Klein and Manning, 2003)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Conclusion",
"sec_num": "5"
},
{
"text": "5 available on https://github.com/tud-fop/ rustomata. We used commit 867a451 for evaluation. 6 The evaluation scripts are available on https:// github.com/truprecht/rustomata-eval. bilistic LCFRS with the respective default configurations (for details, cf. the evaluation scripts). After that, we ran our parser on each sentence of the validation set and recorded the parse time and the computed 1-best parse. The computed parses were evaluated against the gold parses of the validation set w.r.t. precision, recall, and f 1 -score (according to the labelled parseval measures, cf. Black et al., 1991; Collins, 1997 , we used the implementation by van Cranenburgh et al., 2016 . Previous experiments with an implementation of the vanilla parser already struggled with small subsets (we used grammars extracted from 250-1500 parse trees) of the NeGra corpus. Therefore, we omit evaluation of the vanilla parser.",
"cite_spans": [
{
"start": 93,
"end": 94,
"text": "6",
"ref_id": null
},
{
"start": 582,
"end": 601,
"text": "Black et al., 1991;",
"ref_id": "BIBREF3"
},
{
"start": 602,
"end": 615,
"text": "Collins, 1997",
"ref_id": "BIBREF9"
},
{
"start": 616,
"end": 676,
"text": ", we used the implementation by van Cranenburgh et al., 2016",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Conclusion",
"sec_num": "5"
},
{
"text": "Meta-parameters. A grid search for metaparameters was performed on sentences of up to 20 tokens (see the appendix, tab. 2, for a detailed listing). The results suggested to set the beam width to 200 and the candidate count to 10,000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Conclusion",
"sec_num": "5"
},
{
"text": "Comparison to other parsers. The experiments were performed on sentences with up to 30 tokens. We instructed rparse, Grammatical Framework (short: GF) and Rustomata (short: OP) to stop parsing each sentence after 30 seconds (timeout). Disco-dop did not permit passing a timeout. In the case of disco-dop's LCFRS parser (short: ddlcfrs), we limited the validation set to sentences of at most 20 tokens, since ddlcfrs frequently exceeded 30 seconds of parse time for longer sentences in preliminary tests. Disco-dop's coarseto-fine data-oriented parser (short: ddctf-dop) and disco-dop's coarse-to-fine LCFRS parser (short: ddctf-lcfrs) rarely exceeded 30 seconds of parse time in preliminary tests and we let them run on sentences of up to 30 tokens without the timeout. Figure 4a shows the parse times for each sentence length and parser. The parsers ddctf-dop, ddctf-lcfrs, GF, and OP perform similar for sentences of up to 20 tokens. The parse times of rparse and ddlcfrs grow rapidly after 10 and 16 tokens, respectively. Rparse even exceeds the timeout for more than half of the test sentences that are longer than 15 tokens. For sentences with up to 30 tokens, the parse times of ddctf-dop, ddctf-lcfrs and OP seem to remain almost constant. Table 1 shows the accuracy (i.e. precision, recall, and f 1 -score) and the coverage (i.e. the percentage of sentences that could be parsed) for each parser on the validation set. We report these scores to assert a correct implementation of our parser and to compare the different approximation strategies (and our fallback mechanism) implemented in the parsers. The low coverage of rparse stems from the frequent occurrences of timeouts. They also depress the recall for rparse. For sentences with at most 20 tokens, ddlcfrs, ddctf-lcfrs and OP perform very similar. These three parsers are outperformed by ddctf-dop in all aspects. For sentences of up to 30 tokens, the scores of all tested parsers drop similarly. 
However, ddctf-dop's scores drop the least amount.",
"cite_spans": [],
"ref_spans": [
{
"start": 770,
"end": 779,
"text": "Figure 4a",
"ref_id": "FIGREF0"
},
{
"start": 1247,
"end": 1254,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation and Conclusion",
"sec_num": "5"
},
{
"text": "We repeated a part of the experiments with the Lassy corpus (Lassy Small, various kinds of written Dutch, 65,200 sentences, 975,055 tokens; van Noord et al., 2013) . Since it is considerably larger than the NeGra corpus, we limited the experiments to one iteration of TFCV, and we only investigate OP, ddctf-lcfrs, and ddctf-dop. The results are shown in fig. 4b (parse time) and at the bottom of tab. 1 (accuracy). Figure 4b shows the difference of ddctf-lcfrs, ddctf-dop and OP in terms of parse times (which is not discernible in fig. 4a ). This plot shows that OP maintains very small parse times -even for large copora -compared to the state-of-the-art parser disco-dop.",
"cite_spans": [
{
"start": 99,
"end": 139,
"text": "Dutch, 65,200 sentences, 975,055 tokens;",
"ref_id": null
},
{
"start": 140,
"end": 163,
"text": "van Noord et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 355,
"end": 375,
"text": "fig. 4b (parse time)",
"ref_id": "FIGREF0"
},
{
"start": 416,
"end": 425,
"text": "Figure 4b",
"ref_id": "FIGREF0"
},
{
"start": 533,
"end": 540,
"text": "fig. 4a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation and Conclusion",
"sec_num": "5"
},
{
"text": "All in all, our parser performs comparable to state-of-the-art MCFG parsers (GF, rparse, ddlcfrs, ddctf-lcfrs) and, using the NeGra corpus, it shows excellent results in parse time and good results in accuracy. Moreover, our parser can deal with any \u03b5-free and simple MCFG provided by an external tool, making it more flexible than discodop and rparse. However, we are not able to compete with ddctf-dop in terms of accuracy, since discontinuous data-oriented parsing is a more accurate formalism (van Cranenburgh and Bod, 2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Conclusion",
"sec_num": "5"
},
{
"text": "We see potential to improve the fallback mechanism explained in sec. 4. For now, we only considered reporting the first cow derivation. By introducing some degree of consistency of cow derivations, we could select a cow derivation that is closer to a derivation of G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "6"
},
{
"text": "Since recognisable languages are closed under inverse homomorphisms, we can use any recognisable language as input for hom \u22121 G (cf. fig. 2 ) without changing the rest of the pipeline. This is useful when the input of the parsing task is ambiguous, as in lattice-based parsing (e.g. Goldberg and Tsarfaty, 2008) .",
"cite_spans": [
{
"start": 283,
"end": 311,
"text": "Goldberg and Tsarfaty, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 133,
"end": 139,
"text": "fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Future Work",
"sec_num": "6"
},
{
"text": "Moreover, since weighted recognisable languages are closed under inverse homomorphisms and scalar product, we can even use a weighted recognisable language as input for hom \u22121 G , as in the setting of Rastogi et al. (2016) .",
"cite_spans": [
{
"start": 201,
"end": 222,
"text": "Rastogi et al. (2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "6"
},
{
"text": "A.1 Additional Examples Example 1 (continuing from p. 2). Figure 5 shows graphical representations of a derivation and the corresponding term over composition functions.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "Example 7 (continuing from p. 4). The following calculation reduces a word of A G to \u03b5 using the equivalence relation \u2261 P G (abbreviated by \u2261) and thereby proves that it is an element of mD G . In each step, we point out, which cell of P G was/were used. Note that the set obtained after two applications of \u2261 has two elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "1 \u03c1 1 1 \u03c1 1 ,1 1 \u03c1 4 1 \u03c1 4 1 \u03c1 1 ,1 1 \u03c1 1 ,2 1 \u03c1 3 b 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 1 \u03c1 3 1 \u03c1 1 ,2 2 \u03c1 1 ,1 2 \u03c1 4 2 \u03c1 4 2 \u03c1 1 ,1 2 \u03c1 1 ,2 2 \u03c1 3 d 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 2 \u03c1 3 2 \u03c1 1 ,2 1 \u03c1 1 \u2261 1 \u03c1 1 ,1 1 \u03c1 4 1 \u03c1 4 1 \u03c1 1 ,1 1 \u03c1 1 ,2 1 \u03c1 3 b 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 1 \u03c1 3 1 \u03c1 1 ,2 2 \u03c1 1 ,1 2 \u03c1 4 2 \u03c1 4 2 \u03c1 1 ,1 2 \u03c1 1 ,2 2 \u03c1 3 d 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 2 \u03c1 3 2 \u03c1 1 ,2 (because 1 \u03c1 1 \u2208 PG) \u2261 1 \u03c1 4 1 \u03c1 4 2 \u03c1 4 2 \u03c1 4 , 1 \u03c1 1 ,2 1 \u03c1 3 b 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 1 \u03c1 3 1 \u03c1 1 ,2 2 \u03c1 1 ,2 2 \u03c1 3 d 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 2 \u03c1 3 2 \u03c1 1 ,2 (because 1 \u03c1 1 ,1 , 2 \u03c1 1 ,1 \u2208 PG) \u2261 \u03b5, 1 \u03c1 1 ,2 1 \u03c1 3 b 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 1 \u03c1 3 1 \u03c1 1 ,2 2 \u03c1 1 ,2 2 \u03c1 3 d 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 2 \u03c1 3 2 \u03c1 1 ,2 (because 1 \u03c1 4 , 2 \u03c1 4 \u2208 PG) \u2261 \u03b5, 1 \u03c1 3 b 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 1 \u03c1 3 2 \u03c1 3 d 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 2 \u03c1 3 (because 1 \u03c1 1 ,2 , 2 \u03c1 1 ,2 \u2208 PG) \u2261 \u03b5, b 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 d 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 (because 1 \u03c1 3 , 2 \u03c1 3 \u2208 PG) \u2261 \u03b5, 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 d 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 (because b \u2208 PG and b = b b ) \u2261 \u03b5, 1 \u03c1 3 ,1 1 \u03c1 5 1 \u03c1 5 1 \u03c1 3 ,1 2 \u03c1 3 ,1 2 \u03c1 5 2 \u03c1 5 2 \u03c1 3 ,1 (because d \u2208 PG and d = d d ) \u2261 \u03b5, 1 \u03c1 5 1 \u03c1 5 2 \u03c1 5 2 \u03c1 5 (because 1 
\u03c1 3 ,1 , 2 \u03c1 3 ,1 \u2208 PG) \u2261 {\u03b5} (because 1 \u03c1 5 , 2 \u03c1 5 \u2208 PG)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "A.2 Additional Proofs Lemma 12. There is a bijection toCowD between L(G cf ) and cowD c G .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "Proof. For each \u2208 N + , we define the partial function f from (\u2206 G \u222a \u011a \u2206 G ) * tocow derivations of G as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "f (u) = \u03c1 (i(\u03ba), j(\u03ba))/f j(\u03ba) (v \u03ba ) | \u03ba \u2208 [n] if u is of the \u03c1 1 \u03c1 2 . . . \u03c1 2 \u03c1 4 \u03c1 3 . . . \u03c1 3 \u03c1 5 m times n times [x 1 1 x 1 2 x 1 2 x 2 2 ] [ax 1 1 , cx 2 1 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "S \u2192 [x 1 1 x 1 2 x 2 1 x 2 2 ](A, B) A \u2192 [\u03b5, \u03b5]() (1, 1) A \u2192 [\u03b5, \u03b5]() (1, 2) B \u2192 [bx 1 1 , dx 2 1 ](B) B \u2192 [\u03b5, \u03b5]() (1, 1) B \u2192 [\u03b5, \u03b5]() (1, 2) (2, 1) B \u2192 [bx 1 1 , dx 2 1 ](B) B \u2192 [\u03b5, \u03b5]() (1, 1) B \u2192 [\u03b5, \u03b5]() (1, 2) (2, 2) Figure 6: A consistent cow derivation. form \u03c1 u 0 j(1) \u03c1,i(1) v 1 j(1) \u03c1,i(1) u 1 . . . j(n) \u03c1,i(n) v n j(n) \u03c1,i(n) u n \u03c1 for some rule \u03c1 = A \u2192 c(B 1 , . . . , B k ) where the -th component of c is u 0 x j(1) i(1) u 1 \u2022 \u2022 \u2022 x j(n) i(n) u n with u 1 , . . . , u n \u2208 \u03a3 * ; otherwise, f (u) is undefined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "Note that f 1 , f 2 , . . . are pairwise disjoint (in the set-theoretic sense).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "To prove that the function f is bijective, we show that it is injective and surjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "(Injectivity) We show, for each \u2208 N + , by induction on the structure of cow derivations that Let v, v \u2208 (\u2206 G \u222a \u011a \u2206 G ) * be in the domain of f and let t = f (v) = f (v ). Furthermore, let \u03c1 = A \u2192 c(B 1 , . . . , B k ) = root(t), t (i,j) = sub (i,j) (t) for each (i, j) \u2208 out(t), and let the -th component of c be of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "f (v) = f (v ) implies v = v for each v, v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "u 0 x j(1) i(1) u 1 \u2022 \u2022 \u2022 x j(n) i(n) u n with u 1 , . . . , u n \u2208 \u03a3 * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "By definition of f , we know that u is the string",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "\u03c1 u 0 j(1) \u03c1,i(1) v 1 j(1) \u03c1,i(1) u 1 . . . j(n) \u03c1,i(n) v n j(n) \u03c1,i(n) u n \u03c1 for some v 1 , . . . , v n \u2208 (\u2206 G \u222a \u011a \u2206 G ) * , u is the string \u03c1 u 0 j(1) \u03c1,i(1) v 1 j(1) \u03c1,i(1) u 1 . . . j(n) \u03c1,i(n) v n j(n) \u03c1,i(n) u n \u03c1 for some v 1 , . . . , v n \u2208 (\u2206 G \u222a \u011a \u2206 G ) * , and f j(\u03ba) (v \u03ba ) = f j(\u03ba) (v \u03ba ) = t (i(\u03ba),j(\u03ba)) for each \u03ba \u2208 [n]. By principle of induction, we get v \u03ba = v \u03ba for each \u03ba \u2208 [n]. Hence v = v .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "(Surjectivity) We show by induction on the structure of cow derivations that for each A \u2208 N , \u2208 sort(A), and t \u2208 -cowD A G , there is a string v \u2208 L(G cf , A ) such that f (v) = t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "Let A \u2208 N , \u2208 sort(A), and t \u2208 -cowD A G . Furthermore, let \u03c1 = A \u2192 c(B 1 , . . . , B k ) = root(t), t (i,j) = sub (i,j) (t) for each (i, j) \u2208 out(t), and let the -th component of c be of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "u 0 x j(1) i(1) u 1 \u2022 \u2022 \u2022 x j(n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "i(n) u n with u 1 , . . . , u n \u2208 \u03a3 * . By principle of induction, we know that there are v \u03ba \u2208 L(G cf , B j(\u03ba) i(\u03ba) ) with f j(\u03ba) (v \u03ba ) = t (i(\u03ba),j(\u03ba)) for each \u03ba \u2208 [n]. Now let v be the string",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "\u03c1 u 0 j(1) \u03c1,i(1) v 1 j(1) \u03c1,i(1) u 1 . . . j(n) \u03c1,i(n) v n j(n) \u03c1,i(n) u n \u03c1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "By definition of the rule \u03c1 ( ) (def. 8), we know that v \u2208 L(G cf , A ). By definition of f , we know that f (v) = t. Hence, f is surjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "It is easy to see that toCowD = \u2208N + f . If we restrict the domain of toCowD to L(G cf ), then (since each element of L(G cf ) starts with a bracket of the form 1 \u03c1 for some \u03c1 \u2208 P ) the resulting function is a subset of f 1 . Since f 1 is bijective, we know that toCowD is a bijection between L(G cf ) and cowD c G . A.4 Results of Grid Search for Meta-Parameters Table 2 shows the results of the grid search for the two introduced meta-parameters. For each combination of beam width and candidate count, we list the median and mean parse times (since medians hide outliers, those two may differ drastically) for all sentences of length 20 and f 1 -score over all test sentences. Moreover, we show the percentage of sentences (coverage) that we were able to parse with and without the fallback mechanism. The results for the combination of meta-parameters that was selected for later experiments (i.e. a beam width of 200 and a candidate count of 10 4 ) are written in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 371,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "Lemma 16. Let v \u2208 L(G cf ). Then v \u2208 mD G \u21d0\u21d2 toCowD(v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "In the following, we will gloss over the distinction between derivations and parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Those trees correspond to the derivations of the guiding grammar in the coarse-to-fine parsing approach ofBarth\u00e9lemy et al. (2001, sec. 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A parser is complete if it (eventually) computes all complete derivations of the given word in the given grammar. A parser is called sound if all computed parses are complete derivations of the given word in the given grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank our colleague Kilian Gebhardt as well as the anonymous reviewers for their insightful comments on drafts of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Proof. Let s \u2208 N + and let t 1 , . . . , t s be cow derivations in G. We abbreviate the following property by C(t 1 , . . . , t s ): (i) {t 1 , . . . , t s } is consistent, (ii) s = fanout(root(t 1 )), and (iii) t is an -cow derivation for each \u2208 [s] . Now, we show by structural induction that C(toCowD(. For each \u2208 [s], let v be obtained from the right-hand side of \u03c1 ( ) by replacing each non-terminal of the formi( ,n ) be the nonterminals on the rhs of \u03c1 ( ) , then t is the cow derivationNote that \u03c1 is the root symbol of each t 1 , . . . , t s and the set of indices(by defs. 6 and 8)Since the v j i s and \u03c1 were selected arbitrarily, we can obtain any element of L(G, A ) in that manner. In particular, for eachProposition 15. TODERIV \u2022 toCowD \u22121 is a bijection between the consistent cow derivations in cowD c G and D c G .Proof. By lems. 12 and 16, there is a bijection between the consistent cow derivations in (1 -cowD G ) S and R G \u2229 mD G , and there is a bijection between R G \u2229 mD G and D c G (Denkinger, 2017, cor. 3.9) .",
"cite_spans": [
{
"start": 247,
"end": 250,
"text": "[s]",
"ref_id": null
},
{
"start": 1008,
"end": 1035,
"text": "(Denkinger, 2017, cor. 3.9)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Algorithm 3 is a modification of the algorithm given by Hulden (2011, alg. 1) . The changes involve an introduction of weights in the algorithm; elements of A are drawn by maximum weight instead of being drawn randomly. In our implementation, we defined the weight of an item (p, v, q) as the weight \u00b5 (v) defined in def. 6.Algorithm 3 extracts Dyck words from an FSA.Input: a weight assignment wt :Since the function given in alg. 4 is very similar to def. 14, we omit further discussions. (T 1 , . . . , T k ) \u2190 COLLECTCHILDREN(T ) 5:",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 77,
"text": "(2011, alg. 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.3 Additional Algorithms",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fast statistical parsing with parallel multiple context-free grammars",
"authors": [
{
"first": "Krasimir",
"middle": [],
"last": "Angelov",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Ljungl\u00f6f",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "368--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krasimir Angelov and Peter Ljungl\u00f6f. 2014. Fast sta- tistical parsing with parallel multiple context-free grammars. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 368-376.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On formal properties of simple phrase structure grammars",
"authors": [
{
"first": "Yehoshua",
"middle": [],
"last": "Bar-Hillel",
"suffix": ""
},
{
"first": "Micha",
"middle": [
"Asher"
],
"last": "Perles",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Shamir",
"suffix": ""
}
],
"year": 1961,
"venue": "Zeitschrift f\u00fcr Phonetik, Sprachwissenschaft und Kommunikationsforschung",
"volume": "14",
"issue": "",
"pages": "143--172",
"other_ids": {
"DOI": [
"10.1524/stuf.1961.14.14.143"
]
},
"num": null,
"urls": [],
"raw_text": "Yehoshua Bar-Hillel, Micha Asher Perles, and Eli Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrift f\u00fcr Phonetik, Sprachwissenschaft und Kommunikationsforschung, 14:143-172.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Philippe Deschamp, and\u00c9ric Villemonte de la Clergerie",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Barth\u00e9lemy",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Deschamp",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Villemonte de la Clergerie",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Barth\u00e9lemy, Pierre Boullier, Philippe De- schamp, and\u00c9ric Villemonte de la Clergerie. 2001. Guided parsing of range concatenation languages. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Gdaniec",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "Tomek",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Workshop on Speech and Natural Language",
"volume": "",
"issue": "",
"pages": "306--311",
"other_ids": {
"DOI": [
"10.3115/112405.112467"
]
},
"num": null,
"urls": [],
"raw_text": "Ezra Black, Steven Abney, Dan Flickinger, Claudia Gdaniec, Ralph Grishman, Philip Harrison, Donald Hindle, Robert Ingria, Fred Jelinek, Judith Klavans, Mark Liberman, and Tomek Strzalkowski. 1991. A Procedure for Quantitatively Comparing the Syntac- tic Coverage of English Grammars. In Proceedings of the Workshop on Speech and Natural Language, pages 306-311. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Three models for the description of language",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1956,
"venue": "IEEE Transactions on Information Theory",
"volume": "2",
"issue": "3",
"pages": "113--124",
"other_ids": {
"DOI": [
"10.1109/tit.1956.1056813"
]
},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1956. Three models for the descrip- tion of language. IEEE Transactions on Information Theory, 2(3):113-124.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On certain formal properties of grammars",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1959,
"venue": "Information and control",
"volume": "2",
"issue": "2",
"pages": "137--167",
"other_ids": {
"DOI": [
"10.1016/S0019-9958(59)90362-6"
]
},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1959. On certain formal properties of grammars. Information and control, 2(2):137-167.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The algebraic theory of context-free languages. Computer Programming and Formal Systems",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
},
{
"first": "Marcel",
"middle": [
"Paul"
],
"last": "Sch\u00fctzenberger",
"suffix": ""
}
],
"year": 1963,
"venue": "Studies in Logic",
"volume": "",
"issue": "",
"pages": "118--161",
"other_ids": {
"DOI": [
"10.1016/S0049-237X(09)70104-1"
]
},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky and Marcel Paul Sch\u00fctzenberger. 1963. The algebraic theory of context-free lan- guages. Computer Programming and Formal Sys- tems, Studies in Logic, pages 118-161.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incremental discontinuous phrase structure parsing with the gap transition",
"authors": [
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Crabb\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1259--1270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximin Coavoux and Benoit Crabb\u00e9. 2017. Incre- mental discontinuous phrase structure parsing with the gap transition. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, pages 1259-1270. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Programming languages and their compilers: Preliminary notes. techreport, Courant Institute of Mathematical Sciences",
"authors": [
{
"first": "John",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Cocke and J. T. Schwartz. 1970. Programming languages and their compilers: Preliminary notes. techreport, Courant Institute of Mathematical Sci- ences, New York University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th annual meeting on Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/976909.979620"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th annual meeting on Association for Com- putational Linguistics and Eighth Conference of the European Chapter of the Association for Compu- tational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Head-driven statistical models for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discontinuous parsing with an efficient and accurate dop model",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 13th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas van Cranenburgh and Rens Bod. 2013. Dis- continuous parsing with an efficient and accurate dop model. In Proceedings of the 13th International Conference on Parsing Technologies.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Data-oriented parsing with discontinuous constituents and function tags",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Remko",
"middle": [],
"last": "Scha",
"suffix": ""
},
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Language Modelling",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.15398/jlm.v4i1.100"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas van Cranenburgh, Remko Scha, and Rens Bod. 2016. Data-oriented parsing with discontinu- ous constituents and function tags. Journal of Lan- guage Modelling, 4(1):57.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Chomsky-Sch\u00fctzenberger parsing for weighted multiple context-free languages",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Denkinger",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Language Modelling",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.15398/jlm.v5i1.159"
]
},
"num": null,
"urls": [],
"raw_text": "Tobias Denkinger. 2017. Chomsky-Sch\u00fctzenberger parsing for weighted multiple context-free lan- guages. Journal of Language Modelling, 5(1):3.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A single generative model for joint morphological segmentation and syntactic parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "371--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Reut Tsarfaty. 2008. A single generative model for joint morphological segmenta- tion and syntactic parsing. Proceedings of ACL-08: HLT, pages 371-379.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Introduction to Automata Theory, Languages and Computation",
"authors": [
{
"first": "John",
"middle": [
"Edward"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "David Ullman",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Edward Hopcroft and Jeffrey David Ullman. 1979. Introduction to Automata Theory, Languages and Computation, 1st edition.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Parsing CFGs and PCFGs with a Chomsky-Sch\u00fctzenberger representation",
"authors": [
{
"first": "",
"middle": [],
"last": "Mans Hulden",
"suffix": ""
}
],
"year": 2011,
"venue": "Human Language Technology. Challenges for Computer Science and Linguistics",
"volume": "6562",
"issue": "",
"pages": "151--160",
"other_ids": {
"DOI": [
"10.1007/978-3-642-20095-3_14"
]
},
"num": null,
"urls": [],
"raw_text": "Mans Hulden. 2011. Parsing CFGs and PCFGs with a Chomsky-Sch\u00fctzenberger representation. In Hu- man Language Technology. Challenges for Com- puter Science and Linguistics, volume 6562 of Lec- ture Notes in Computer Science, pages 151-160.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Tree adjoining grammars: How much context-sensitivity is needed for characterizing structural descriptions?, chapter 6",
"authors": [
{
"first": "Aravind",
"middle": [
"K."
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aravind K. Joshi. 1985. Tree adjoining grammars: How much context-sensitivity is needed for charac- terizing structural descriptions?, chapter 6. Camb- drige University Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Tree adjunct grammars",
"authors": [
{
"first": "Aravind Krishna",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Leon",
"middle": [
"S"
],
"last": "Levy",
"suffix": ""
},
{
"first": "Masako",
"middle": [],
"last": "Takahashi",
"suffix": ""
}
],
"year": 1975,
"venue": "Journal of Computer and System Sciences",
"volume": "10",
"issue": "1",
"pages": "136--163",
"other_ids": {
"DOI": [
"10.1016/S0022-0000(75)80019-5"
]
},
"num": null,
"urls": [],
"raw_text": "Aravind Krishna Joshi, Leon S. Levy, and Masako Takahashi. 1975. Tree adjunct grammars. Journal of Computer and System Sciences, 10(1):136-163.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Datadriven parsing using probabilistic linear contextfree rewriting systems",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Kallmeyer",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Maier",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "1",
"pages": "87--119",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00136"
]
},
"num": null,
"urls": [],
"raw_text": "Laura Kallmeyer and Wolfgang Maier. 2013. Data- driven parsing using probabilistic linear context- free rewriting systems. Computational Linguistics, 39(1):87-119.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An efficient recognition and syntax-analysis algorithm for context-free languages. techreport R-257",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Kasami. 1966. An efficient recognition and syntax-analysis algorithm for context-free lan- guages. techreport R-257, AFCRL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1075096.1075150"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Compu- tational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Discontinuous incremental shift-reduce parsing",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Maier",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1202--1212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Maier. 2015. Discontinuous incremental shift-reduce parsing. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 1202-1212, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Data analysis, including statistics",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Mosteller",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wilder Tukey",
"suffix": ""
}
],
"year": 1968,
"venue": "Handbook of Social Psychology",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick Mosteller and John Wilder Tukey. 1968. Data analysis, including statistics. In G. Lindzey and E. Aronson, editors, Handbook of Social Psy- chology, volume 2. Addison-Wesley.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Jelmer van der Linde, Ineke Schuurman, Erik Tjong Kim Sang, and Vincent Vandeghinste",
"authors": [
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
},
{
"first": "Gosse",
"middle": [],
"last": "Bouma",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Van Eynde",
"suffix": ""
},
{
"first": "Dani\u00ebl",
"middle": [],
"last": "de Kok",
"suffix": ""
},
{
"first": "Jelmer",
"middle": [],
"last": "van der Linde",
"suffix": ""
},
{
"first": "Ineke",
"middle": [],
"last": "Schuurman",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vandeghinste",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "147--164",
"other_ids": {
"DOI": [
"10.1007/978-3-642-30910-6_9"
]
},
"num": null,
"urls": [],
"raw_text": "Gertjan van Noord, Gosse Bouma, Frank Van Eynde, Dani\u00ebl de Kok, Jelmer van der Linde, Ineke Schuur- man, Erik Tjong Kim Sang, and Vincent Vandeghin- ste. 2013. Large Scale Syntactic Annotation of Writ- ten Dutch: Lassy, pages 147-164. Springer Berlin Heidelberg, Berlin, Heidelberg.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Weighting finite-state transductions with neural context",
"authors": [
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "623--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neu- ral context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633, San Diego, Califor- nia. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "On multiple contextfree grammars",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1991,
"venue": "Theoretical Computer Science",
"volume": "88",
"issue": "2",
"pages": "191--229",
"other_ids": {
"DOI": [
"10.1016/0304-3975(91)90374-B"
]
},
"num": null,
"urls": [],
"raw_text": "Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context- free grammars. Theoretical Computer Science, 88(2):191-229.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Evidence against the contextfreeness of natural language",
"authors": [
{
"first": "Stuart",
"middle": [
"M."
],
"last": "Shieber",
"suffix": ""
}
],
"year": 1985,
"venue": "Linguistics and Philosophy",
"volume": "8",
"issue": "3",
"pages": "333--343",
"other_ids": {
"DOI": [
"10.1007/bf00630917"
]
},
"num": null,
"urls": [],
"raw_text": "Stuart M. Shieber. 1985. Evidence against the context- freeness of natural language. Linguistics and Phi- losophy, 8(3):333-343.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A Linguistically Interpreted Corpus of German Newspaper Text",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Skut",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Krenn",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 10th European Summer School in Logic, Language and Information. Workshop on Recent Advances in Corpus Annotation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Skut, Thorsten Brants, Brigitte Krenn, and Hans Uszkoreit. 1998. A Linguistically Interpreted Corpus of German Newspaper Text. In Proceed- ings of the 10th European Summer School in Logic, Language and Information. Workshop on Recent Ad- vances in Corpus Annotation.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Characterizing structural descriptions produced by various grammatical formalisms",
"authors": [
{
"first": "Krishnamurti",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "David",
"middle": [
"Jeremy"
],
"last": "Weir",
"suffix": ""
},
{
"first": "Aravind Krishna",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of the 25th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {
"DOI": [
"10.3115/981175.981190"
]
},
"num": null,
"urls": [],
"raw_text": "Krishnamurti Vijay-Shanker, David Jeremy Weir, and Aravind Krishna Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of the 25th Annual Meeting on Association for Computational Linguistics, pages 104-111.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Chomsky-Sch\u00fctzenberger-type characterization of multiple context-free languages",
"authors": [
{
"first": "Ryo",
"middle": [],
"last": "Yoshinaka",
"suffix": ""
},
{
"first": "Yuichi",
"middle": [],
"last": "Kaji",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
}
],
"year": 2010,
"venue": "Language and Automata Theory and Applications",
"volume": "",
"issue": "",
"pages": "596--607",
"other_ids": {
"DOI": [
"10.1007/978-3-642-13089-2_50"
]
},
"num": null,
"urls": [],
"raw_text": "Ryo Yoshinaka, Yuichi Kaji, and Hiroyuki Seki. 2010. Chomsky-Sch\u00fctzenberger-type characterization of multiple context-free languages. In Adrian-Horia Dediu, Henning Fernau, and Carlos Mart\u00edn-Vide, editors, Language and Automata Theory and Applications, pages 596-607.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Recognition and parsing of context-free languages in time n^3",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Younger",
"suffix": ""
}
],
"year": 1967,
"venue": "Information and Control",
"volume": "10",
"issue": "2",
"pages": "189--208",
"other_ids": {
"DOI": [
"10.1016/s0019-9958(67)80007-x"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel H. Younger. 1967. Recognition and parsing of context-free languages in time n^3. Information and Control, 10(2):189-208.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "A weighted MCFG (short: wMCFG) is a tuple (G, \u00b5) where G = (N, \u03a3, S, P ) is an MCFG (underlying MCFG), \u00b5: P \u2192 M \\ {0} (weight assignment), and (M, , 1, 0, \u00a2) is a factorisable POCMOZ. (G, \u00b5) inherits all objects associated with G.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "c) Construction of new rules from the clusters in fig. 3b.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "A strategy to convert an inconsistent cow derivation into a complete derivation of an MCFG.",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "NeGra corpus, |w| \u2264 30 (for ddlcfrs: |w| \u2264 20) Lassy corpus, |w| \u2264 30",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "Median parse times",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "A derivation d m,n (top) together with the corresponding term over composition functions (bottom), cf. ex. 1.",
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"num": null,
"text": "in the domain of f (i.e. f(v) and f(v') are both defined).",
"type_str": "figure",
"uris": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Results of the grid search for meta-parameters.",
"html": null
}
}
}
}