{
"paper_id": "D08-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:31:05.764189Z"
},
"title": "Better Binarization for the CKY Parsing",
"authors": [
{
"first": "Xinying",
"middle": [],
"last": "Song",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Shilin",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a study of how grammar binarization empirically affects the efficiency of CKY parsing. We argue that binarizations affect parsing efficiency primarily by affecting the number of incomplete constituents generated, and that the effectiveness of a binarization also depends on the nature of the input. We propose a novel binarization method utilizing rich information learnt from a training corpus. Experimental results not only show that different binarizations have a great impact on parsing efficiency, but also confirm that our learnt binarization outperforms other existing methods. Furthermore, we show that it is feasible to combine existing parsing speed-up techniques with our binarization to achieve even better performance.",
"pdf_parse": {
"paper_id": "D08-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a study of how grammar binarization empirically affects the efficiency of CKY parsing. We argue that binarizations affect parsing efficiency primarily by affecting the number of incomplete constituents generated, and that the effectiveness of a binarization also depends on the nature of the input. We propose a novel binarization method utilizing rich information learnt from a training corpus. Experimental results not only show that different binarizations have a great impact on parsing efficiency, but also confirm that our learnt binarization outperforms other existing methods. Furthermore, we show that it is feasible to combine existing parsing speed-up techniques with our binarization to achieve even better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Binarization, which transforms an n-ary grammar into an equivalent binary grammar, is essential for achieving O(n^3) time complexity in context-free grammar parsing. O(n^3) tabular parsing algorithms, such as the CKY algorithm (Kasami, 1965; Younger, 1967), the GHR parser (Graham et al., 1980), the Earley algorithm (Earley, 1970) and the chart parsing algorithm (Kay, 1980; Klein and Manning, 2001) all convert their grammars into binary branching forms, either explicitly or implicitly (Charniak et al., 1998).",
"cite_spans": [
{
"start": 235,
"end": 249,
"text": "(Kasami, 1965;",
"ref_id": "BIBREF8"
},
{
"start": 250,
"end": 264,
"text": "Younger, 1967)",
"ref_id": "BIBREF19"
},
{
"start": 282,
"end": 303,
"text": "(Graham et al., 1980)",
"ref_id": "BIBREF6"
},
{
"start": 327,
"end": 341,
"text": "(Earley, 1970)",
"ref_id": "BIBREF4"
},
{
"start": 374,
"end": 385,
"text": "(Kay, 1980;",
"ref_id": "BIBREF9"
},
{
"start": 386,
"end": 409,
"text": "Klein and Manning, 2001",
"ref_id": "BIBREF10"
},
{
"start": 500,
"end": 523,
"text": "(Charniak et al., 1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In fact, the number of all possible binarizations of a production with n + 1 symbols on its right-hand side is known to be the nth Catalan number",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "C_n = \\frac{1}{n+1} \\binom{2n}{n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All binarizations lead to the same parsing accuracy, but possibly different parsing efficiency, i.e. parsing speed. We are interested in investigating whether and how binarizations affect the efficiency of CKY parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
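To make the Catalan-number count concrete, here is a minimal sketch (the function name and rule-length encoding are ours, not the paper's):

```python
from math import comb

def num_binarizations(rhs_len: int) -> int:
    """Number of distinct binarizations of a production with rhs_len
    symbols on its right-hand side: the (rhs_len - 1)-th Catalan
    number C_n = (1 / (n + 1)) * C(2n, n)."""
    n = rhs_len - 1  # a rule with n + 1 RHS symbols has C_n binarizations
    return comb(2 * n, n) // (n + 1)

# e.g. a 4-ary rule X -> A B C D (n = 3) has C_3 = 5 binarizations
```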
{
"text": "Do different binarizations lead to different parsing efficiency? Figure 1 gives an example to help answer this question. Figure 1(a) illustrates the correct parse of the phrase \"get the bag and go\". We assume that N P \u2192 N P CC N P is in the original grammar. The symbols enclosed in square brackets in the figure are intermediate symbols. If a left binarized grammar is used, see Figure 1(b), an extra constituent [N P CC] spanning \"the bag and\" will be produced, because the rule [N P CC] \u2192 N P CC is in the left binarized grammar and there is an N P over \"the bag\" and a CC over the right-adjacent \"and\". Producing this constituent is unnecessary, because it lacks an N P to the right to complete the production. However, if right binarization is used, as shown in Figure 1(c), such an unnecessary constituent can be avoided.",
"cite_spans": [
{
"start": 380,
"end": 391,
"text": "Figure 1(b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 121,
"end": 132,
"text": "Figure 1(a)",
"ref_id": "FIGREF0"
},
{
"start": 763,
"end": 774,
"text": "Figure 1(c)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One observation from this example is that different binarizations affect constituent generation, and thus parsing efficiency. Another observation is that for rules like X \u2192 Y CC Y , it is more suitable to binarize them in a right branching way. This reflects a linguistic property: for \"and\", the right neighbouring word usually indicates the correct parse. A good binarization should reflect such linguistic properties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we aim to study the effect of binarization on the efficiency of the CKY parsing. To our knowledge, this is the first work on this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose the problem of finding the optimal binarization in terms of parsing efficiency (Section 3). We argue that binarizations affect parsing efficiency primarily by affecting the number of incomplete constituents generated, and that the effectiveness of a binarization also depends on the nature of the input (Section 4). We therefore propose a novel binarization method utilizing rich information learnt from a training corpus (Section 5). Experimental results show that our binarization outperforms other existing methods (Section 7.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since binarization is usually a preprocessing step before parsing, we argue that better performance can be achieved by combining other parsing speed-up techniques with our binarization (Section 6). We conduct experiments to confirm this (Section 7.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we assume that the original grammar, perhaps after preprocessing, contains no \u03b5-productions or useless symbols. However, we allow the existence of unary productions, since we adopt an extended version of the CKY algorithm which can handle unary productions. Moreover, we do not distinguish nonterminals and terminals explicitly; we treat them all as symbols. What we focus on is the procedure of binarization. Definition 1. A binarization is a function \u03c0, mapping an n-ary grammar G to an equivalent binary grammar G'. We say that G' is a binarized grammar of G, denoted as \u03c0(G).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization",
"sec_num": "2"
},
{
"text": "Two grammars are equivalent if they define the same probability distribution over strings (Charniak et al., 1998) .",
"cite_spans": [
{
"start": 90,
"end": 113,
"text": "(Charniak et al., 1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization",
"sec_num": "2"
},
{
"text": "We use the most widely used left binarization (Aho and Ullman, 1972) to show the procedure of binarization, as illustrated in Table 1, where p and q are the probabilities of the productions. Left binarization always selects the leftmost pair of symbols and combines them to form an intermediate nonterminal. This procedure is repeated until all productions are binary.",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Aho and Ullman, 1972)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Binarization",
"sec_num": "2"
},
{
"text": "Original grammar: Y \u2192 A B C : p ; Z \u2192 A B D : q. Left binarized grammar: [A B] \u2192 A B : 1.0 ; Y \u2192 [A B] C : p ; Z \u2192 [A B] D : q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original grammar Left binarized grammar",
"sec_num": null
},
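The left binarization procedure of Table 1 can be sketched as follows (the (lhs, rhs, prob) triple encoding and the bracketed naming of intermediate symbols are our assumptions):

```python
def left_binarize(rules):
    """Left-binarize (lhs, rhs, prob) productions as in Table 1:
    repeatedly fuse the two leftmost RHS symbols into an intermediate
    symbol such as '[A B]', emitted once with probability 1.0."""
    out, seen = [], set()
    for lhs, rhs, p in rules:
        rhs = list(rhs)
        while len(rhs) > 2:
            a, b = rhs[0], rhs[1]
            # name the intermediate symbol by the flattened ngram it covers
            mid = "[" + a.strip("[]") + " " + b.strip("[]") + "]"
            if mid not in seen:  # shared intermediate rules are emitted once
                seen.add(mid)
                out.append((mid, (a, b), 1.0))
            rhs[:2] = [mid]
        out.append((lhs, tuple(rhs), p))
    return out
```

Running it on the two rules of Table 1 yields the shared intermediate rule [A B] → A B : 1.0 plus the two binary remainders.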
{
"text": "In this paper, we assume that all binarizations follow the fashion above, except that the choice of pair of symbols for combination can be arbitrary. Next we show three other known binarizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original grammar Left binarized grammar",
"sec_num": null
},
{
"text": "Right binarization is almost the same as left binarization, except that it always selects the rightmost pair, instead of the leftmost, to combine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original grammar Left binarized grammar",
"sec_num": null
},
{
"text": "Head binarization always binarizes from the head outward (Klein and Manning, 2003b) . Please refer to Charniak et al. (2006) for more details.",
"cite_spans": [
{
"start": 57,
"end": 83,
"text": "(Klein and Manning, 2003b)",
"ref_id": "BIBREF12"
},
{
"start": 102,
"end": 124,
"text": "Charniak et al. (2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Original grammar Left binarized grammar",
"sec_num": null
},
{
"text": "Compact binarization (Schmid, 2004) tries to minimize the size of the binarized grammar, leading to a compact grammar; we therefore call it compact binarization. It is done via a greedy approach: it always selects the pair that occurs most often on the right-hand sides of rules to combine.",
"cite_spans": [
{
"start": 21,
"end": 35,
"text": "(Schmid, 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Original grammar Left binarized grammar",
"sec_num": null
},
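A sketch of this greedy strategy in the spirit of Schmid (2004); the pair counting, intermediate-symbol naming, and tie-breaking details here are our assumptions:

```python
from collections import Counter

def compact_binarize(rules):
    """Greedy 'compact' binarization: each round fuses the adjacent
    RHS pair that occurs most often across all still-too-long rules
    (tie-breaking is arbitrary in this sketch)."""
    rules = [(lhs, list(rhs), p) for lhs, rhs, p in rules]
    emitted = []
    while any(len(rhs) > 2 for _, rhs, _ in rules):
        pairs = Counter()
        for _, rhs, _ in rules:
            if len(rhs) > 2:
                pairs.update(zip(rhs, rhs[1:]))
        (a, b), _ = pairs.most_common(1)[0]
        mid = "[%s %s]" % (a, b)
        emitted.append((mid, (a, b), 1.0))
        for _, rhs, _ in rules:  # replace the pair in long rules only
            i = 0
            while i + 1 < len(rhs):
                if len(rhs) > 2 and rhs[i] == a and rhs[i + 1] == b:
                    rhs[i:i + 2] = [mid]
                else:
                    i += 1
    return emitted + [(lhs, tuple(rhs), p) for lhs, rhs, p in rules]
```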
{
"text": "The optimal binarization should help CKY parsing to achieve its best efficiency. We formalize the idea as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The optimal binarization",
"sec_num": "3"
},
{
"text": "Definition 2. For a given n-ary grammar G and a test corpus C, the optimal binarization is \u03c0 * :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The optimal binarization",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\pi^* = \\arg\\min_{\\pi} T(\\pi(G), C)",
"eq_num": "(1)"
}
],
"section": "The optimal binarization",
"sec_num": "3"
},
{
"text": "where T (\u03c0(G), C) is the running time for CKY to parse corpus C, using the binarized grammar \u03c0(G).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The optimal binarization",
"sec_num": "3"
},
{
"text": "It is hard to find the optimal binarization directly from Definition 2. We next give an empirical analysis of the running time of the CKY algorithm and simplify the problem by introducing assumptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The optimal binarization",
"sec_num": "3"
},
{
"text": "It is known that the complexity of the CKY algorithm is O(n^3 L). The constant L depends on the binarized grammar in use; therefore binarization affects L. Our goal is to find a good binarization that makes parsing more efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of CKY parsing efficiency",
"sec_num": "3.1"
},
{
"text": "It is also known that the for-statement in Line 1 of the inner-most loop of CKY, as shown in Algorithm 1, can be implemented in several different ways, and the choice will affect the efficiency of CKY. We present here four possible methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of CKY parsing efficiency",
"sec_num": "3.1"
},
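The inner-most loop discussed above can be made concrete with a minimal CKY recognizer over a binarized grammar; the `unary`/`binary` dictionary layout is our assumption, not the paper's Algorithm 1:

```python
from collections import defaultdict

def cky_parse(words, unary, binary, start="S"):
    """Minimal CKY recognizer for a binarized grammar (a sketch).
    `unary` maps a terminal to its possible LHS symbols; `binary`
    maps an RHS pair (B, C) to LHS symbols.  chart[i, j] holds the
    symbols spanning words[i:j]."""
    n = len(words)
    chart = defaultdict(set)
    for i, w in enumerate(words):
        chart[i, i + 1] |= set(unary.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for b in chart[i, k]:
                    for c in chart[k, j]:      # the inner-most loop
                        chart[i, j] |= set(binary.get((b, c), ()))
    return start in chart[0, n]
```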
{
"text": "M1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of CKY parsing efficiency",
"sec_num": "3.1"
},
{
"text": "We have shown that both binarization and the for-statement implementation in the inner-most loop of CKY affect the parsing speed. As for the for-statement implementations, no previous study has addressed which one is superior, and the actual choice may affect our study on binarization. If M1 is used, since it enumerates all rules in the grammar, the optimal binarization will be the one with the minimal number of rules, i.e. the minimal binarized grammar size. However, M1 is usually not preferred in practice (Goodman, 1997). For the other methods, it is hard to tell theoretically which binarization is optimal. In this paper, for simplicity, we do not consider the effect of for-statement implementations on the optimal binarization.",
"cite_spans": [
{
"start": 502,
"end": 517,
"text": "(Goodman, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model assumption",
"sec_num": "3.2"
},
{
"text": "On the other hand, it is well known that reducing the number of constituents produced in parsing can greatly improve CKY parsing efficiency. That is how most thresholding systems (Goodman, 1997; Tsuruoka and Tsujii, 2004; Charniak et al., 2006) speed up CKY parsing. Apparently, the number of constituents produced in parsing is not affected by for-statement implementations.",
"cite_spans": [
{
"start": 179,
"end": 194,
"text": "(Goodman, 1997;",
"ref_id": "BIBREF5"
},
{
"start": 195,
"end": 221,
"text": "Tsuruoka and Tsujii, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 222,
"end": 244,
"text": "Charniak et al., 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model assumption",
"sec_num": "3.2"
},
{
"text": "Therefore we assume that the running time of CKY is primarily determined by the number of constituents generated in parsing. We simplify the optimal binarization to be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model assumption",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\pi^* \\approx \\arg\\min_{\\pi} E(\\pi(G), C)",
"eq_num": "(2)"
}
],
"section": "Model assumption",
"sec_num": "3.2"
},
{
"text": "where E(\u03c0(G), C) is the number of constituents generated when CKY parsing C with \u03c0(G).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model assumption",
"sec_num": "3.2"
},
{
"text": "We next discuss how binarizations affect the number of constituents generated in parsing, and present our algorithm for finding a good binarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model assumption",
"sec_num": "3.2"
},
{
"text": "Throughout this section and the next, we will use an example to help illustrate the idea. The grammar is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How binarizations affect constituents",
"sec_num": "4"
},
{
"text": "X \u2192 A B C D ; Y \u2192 A B C ; C \u2192 C D ; Z \u2192 A B C E ; W \u2192 F C D E. The input sentence is 0 A 1 B 2 C 3 D 4 E 5 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How binarizations affect constituents",
"sec_num": "4"
},
{
"text": "where the subscripts indicate the positions of spans. For example, [1, 3] stands for B C. The final parse is shown in Figure 2 . Symbols surrounded by dashed circles are fictitious; they do not actually exist in the parse.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "How binarizations affect constituents",
"sec_num": "4"
},
{
"text": "[Figure 2 (final parse): A:[0,1] B:[1,2] C:[2,3] D:[3,4] E:[4,5] C:[2,4] Y:[0,3] Y:[0,4] X:[0,4] Z:[0,5], with fictitious nodes F and W]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How binarizations affect constituents",
"sec_num": "4"
},
{
"text": "In the procedure of CKY parsing, there are two kinds of constituents generated: complete and incomplete. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complete and incomplete constituents",
"sec_num": "4.1"
},
{
"text": "Binarizations do not affect whether a CC will be produced: if a CC is in the parse, it will be produced whatever binarization we use; the difference lies merely in which intermediate ICs are used. Therefore, given a grammar and an input sentence, no matter what binarization is used, CKY parsing will generate the same set of CCs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on complete constituents",
"sec_num": "4.2"
},
{
"text": "For example in Figure 2 there is a CC X :",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Impact on complete constituents",
"sec_num": "4.2"
},
{
"text": "[0, 4], which is associated with rule X \u2192 A B C D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on complete constituents",
"sec_num": "4.2"
},
{
"text": "No matter what binarization we use, this CC will be recognized eventually. For example if using left binarization, we will get ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on complete constituents",
"sec_num": "4.2"
},
{
"text": "[A B]:[0, 2] and [A B C]:[0, 3] first, and then the CC X:[0, 4].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on complete constituents",
"sec_num": "4.2"
},
{
"text": "Binarizations do affect the generation of ICs, because they generate different intermediate symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on incomplete constituents",
"sec_num": "4.3"
},
{
"text": "We discuss the impact on two aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on incomplete constituents",
"sec_num": "4.3"
},
{
"text": "Shared IC. Some ICs can be used to generate multiple CCs in parsing. We call them shared. If a binarization can lead to more shared ICs, then overall there will be fewer ICs needed in parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on incomplete constituents",
"sec_num": "4.3"
},
{
"text": "For example, in Figure 2 , if we use left binarization, then [A B]:[0, 2] can be shared to generate both X :[0, 4] and Y :[0, 3], saving one IC overall. However, if right binarization is used, there are no common ICs to share in the generation of X :[0, 4] and Y :[0, 3], and one more IC is generated overall.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Impact on incomplete constituents",
"sec_num": "4.3"
},
{
"text": "Failed IC. For a CC that can be recognized eventually by applying an original rule of length k, whatever binarization we use, we have to generate the same number, k \u2212 2, of ICs before we can complete the CC. However, if the CC cannot be fully recognized but only partially recognized, then the number of ICs needed can be quite different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on incomplete constituents",
"sec_num": "4.3"
},
{
"text": "For example, in Figure 2 , the rule W \u2192 F C D E can be only partially recognized over [2, 5] , so it cannot generate the corresponding CC. Right binarization needs two ICs ([D E]: [3, 5] and [C D E]: [2, 5] ) to find that the CC cannot be recognized, while left binarization needs none.",
"cite_spans": [
{
"start": 86,
"end": 89,
"text": "[2,",
"ref_id": null
},
{
"start": 90,
"end": 92,
"text": "5]",
"ref_id": null
},
{
"start": 180,
"end": 183,
"text": "[3,",
"ref_id": null
},
{
"start": 184,
"end": 186,
"text": "5]",
"ref_id": null
},
{
"start": 200,
"end": 203,
"text": "[2,",
"ref_id": null
},
{
"start": 204,
"end": 206,
"text": "5]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Impact on incomplete constituents",
"sec_num": "4.3"
},
{
"text": "As mentioned earlier, ICs are auxiliary means to generate CCs. If an IC cannot help generate any CC, it is totally useless and even harmful. We call such an IC failed; otherwise it is successful. Therefore, if a binarization generates fewer failed ICs, parsing will be more efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on incomplete constituents",
"sec_num": "4.3"
},
{
"text": "Now we show that the impact of binarization also depends on the actual input. When the input changes, the impact may also change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization and the nature of the input",
"sec_num": "4.4"
},
{
"text": "For example, in the previous example about the rule W \u2192 F C D E in Figure 2 , we believe that left binarization is better, based on the observation that there are more snippets of [C D E] in the input which lack an F to the left. If there were more snippets of [F C D] in the input lacking an E to the right, then right binarization would be better.",
"cite_spans": [
{
"start": 179,
"end": 186,
"text": "[C D E]",
"ref_id": null
},
{
"start": 260,
"end": 267,
"text": "[F C D]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Binarization and the nature of the input",
"sec_num": "4.4"
},
{
"text": "The discussion above confirms this view: the effect of binarization depends on the nature of the input language, and a good binarization should reflect this nature. This accords with our intuition. So we use a training corpus to learn a good binarization, and we verify the effectiveness of the learnt binarization using a test corpus with the same nature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization and the nature of the input",
"sec_num": "4.4"
},
{
"text": "In summary, binarizations affect the efficiency of parsing primarily by affecting the number of ICs generated, where more shared ICs and fewer failed ICs lead to higher efficiency. Meanwhile, the effectiveness of a binarization also depends on the nature of the input language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization and the nature of the input",
"sec_num": "4.4"
},
{
"text": "Based on the analysis in the previous section, we employ a greedy approach to find a good binarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards a good binarization",
"sec_num": "5"
},
{
"text": "We use a training corpus to compute metrics for every possible intermediate symbol, and use this information to greedily select the best pair to combine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards a good binarization",
"sec_num": "5"
},
{
"text": "Given the original grammar G and a training corpus C, for every sentence in C we first obtain the final parse (like Figure 2) . For every possible intermediate symbol, i.e. every ngram of the original symbols, denoted by w, we compute the following two metrics:",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 126,
"text": "Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "5.1"
},
{
"text": "1. How many ICs labeled by w can be generated in the final parse, denoted by num(w) (number of related ICs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "5.1"
},
{
"text": "2. How many CCs can be generated via ICs labeled by w, denoted by ctr(w) (contribution of related ICs). The results for our running example are shown in Table 2 . We will discuss how to compute these two metrics in Section 5.2. The two metrics indicate the goodness of a possible intermediate symbol w: num(w) indicates how many ICs labeled by w are likely to be generated in parsing, while ctr(w) represents how much w can contribute to the generation of CCs. If ctr(w) is larger, the corresponding ICs are more likely to be shared; if ctr(w) is zero, those ICs surely fail. Therefore, the smaller num(w) is and the larger ctr(w) is, the better w would be.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "5.1"
},
{
"text": "[A B]: num 1, ctr 4; [A B C]: num 2, ctr 4; [A B C D]: num 1, ctr 1; [A B C E]: num 1, ctr 1; [B C]: num 2, ctr 4; [B C D]: num 1, ctr 1; [B C E]: num 1, ctr 1; [C D]: num 1, ctr 2; [C D E]: num 1, ctr 0; [C E]: num 1, ctr 1; [D E]: num 1, ctr 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "w num ctr w num ctr",
"sec_num": null
},
{
"text": "Combining num and ctr, we define a utility function for each ngram w in the original grammar:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "w num ctr w num ctr",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "utility(w) = f (num(w), ctr(w))",
"eq_num": "(3)"
}
],
"section": "w num ctr w num ctr",
"sec_num": null
},
{
"text": "where f is a ranking function, satisfying that f (x, y) is larger when x is smaller and y is larger. We will discuss more details about it in Section 5.3. Using utility as the ranking function, we sort all pairs of symbols and choose the best to combine. The formal algorithm is as follows: S1 For every symbol pair of v 1 , v 2 (where v 1 and v 2 can be original symbols or intermediate symbols generated in previous rounds), let w 1 and w 2 be the ngrams of original symbols represented by v 1 and v 2 , respectively. Let w = w 1 w 2 be the ngram represented by the symbol pair. Compute utility(w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "w num ctr w num ctr",
"sec_num": null
},
{
"text": "S2 Select the ngram w with the highest utility(w), let it be w * (in case of a tie, select the one with a smaller num). Let the corresponding symbol pair ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "w num ctr w num ctr",
"sec_num": null
},
{
"text": "be v * 1 , v * 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "w num ctr w num ctr",
"sec_num": null
},
{
"text": "v * \u2192 v * 1 v * 2 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "w num ctr w num ctr",
"sec_num": null
},
{
"text": "1.0. S5 Repeat S1 \u223c S4, until there are no rules with more than two symbols on the right hand side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "w num ctr w num ctr",
"sec_num": null
},
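A sketch of steps S1\u2013S5 above; the rule encoding is our assumption, the `utility` argument stands for Eq. (3), and the replace-everywhere behaviour attributed to S3 here is an assumption (the original wording of that step is not preserved in this parse):

```python
def learnt_binarize(rules, utility):
    """Greedy learnt binarization (steps S1-S5, a sketch).
    `utility` maps an ngram of *original* symbols (a tuple) to its
    Eq. (3) score; `base[v]` recovers the original ngram that a
    (possibly intermediate) symbol represents."""
    rules = [(lhs, list(rhs), p) for lhs, rhs, p in rules]
    base, emitted = {}, []
    while any(len(rhs) > 2 for _, rhs, _ in rules):
        # S1: score every adjacent pair by the ngram it would cover
        cands = {}
        for _, rhs, _ in rules:
            if len(rhs) > 2:
                for v1, v2 in zip(rhs, rhs[1:]):
                    w = base.get(v1, (v1,)) + base.get(v2, (v2,))
                    cands[(v1, v2)] = utility(w)
        # S2: pick the highest-utility pair
        v1, v2 = max(cands, key=cands.get)
        w = base.get(v1, (v1,)) + base.get(v2, (v2,))
        v = "[" + " ".join(w) + "]"
        base[v] = w
        # S4: add the intermediate rule with probability 1.0
        emitted.append((v, (v1, v2), 1.0))
        # S3 (assumed): replace the pair wherever it occurs in long rules
        for _, rhs, _ in rules:
            i = 0
            while i + 1 < len(rhs):
                if len(rhs) > 2 and rhs[i] == v1 and rhs[i + 1] == v2:
                    rhs[i:i + 2] = [v]
                else:
                    i += 1
    return emitted + [(lhs, tuple(rhs), p) for lhs, rhs, p in rules]
```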
{
"text": "In this section, we discuss how to compute num and ctr in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics computing",
"sec_num": "5.2"
},
{
"text": "Computing ctr is straightforward. First we obtain final parses, like that in Figure 2, for the training sentences. From a final parse, we traverse every parent node and enumerate every contiguous subsequence of its child nodes. For example in Figure 2, from the parent node of X:[0, 4], we can enumerate the following: [A B]:[0, 2], [A B C]:[0, 3], [A B C D]:[0, 4], [B C]:[1, 3], [B C D]:[1, 4], [C D]:[2, 4]. We add 1 to the ctr of each of these ngrams, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics computing",
"sec_num": "5.2"
},
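The ctr computation described above can be sketched as follows (the input encoding, a list of child-node label sequences taken from final parses, is our assumption):

```python
from collections import Counter

def count_ctr(child_sequences):
    """ctr(w): how many CCs the ICs labeled by ngram w can help
    generate.  For each parent's child sequence in a final parse,
    add 1 for every contiguous subsequence of length >= 2."""
    ctr = Counter()
    for children in child_sequences:
        n = len(children)
        for i in range(n):
            for j in range(i + 2, n + 1):  # subsequences of length >= 2
                ctr[tuple(children[i:j])] += 1
    return ctr
```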
{
"text": "To compute num, we resort to the same idea of dynamic programming as in CKY. We perform a normal left binarization except that we add all ngrams in the original grammar G as intermediate symbols into the binarized grammar G . For example, for the rule of S \u2192 A B C : p, the constructed grammar is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics computing",
"sec_num": "5.2"
},
{
"text": "[A B] \u2192 A B : 1.0 ; S \u2192 [A B] C : p ; [B C] \u2192 B C : 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics computing",
"sec_num": "5.2"
},
{
"text": "Using the constructed G', we perform normal CKY parsing on the training corpus and count how many constituents are produced for each ngram; the result is num. Suppose the length of a training sentence is n, the original grammar G has N symbols, and the maximum length of rules is k; then the complexity of this method can be written as O(N^k n^3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics computing",
"sec_num": "5.2"
},
{
"text": "We discuss the details of the ranking function f used to compute the utility of each ngram w. We come up with two forms for f : linear and log-linear",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking function",
"sec_num": "5.3"
},
{
"text": "1. linear: f (x, y) = \u2212\u03bb 1 x + \u03bb 2 y ; 2. log-linear: f (x, y) = \u2212\u03bb 1 log(x) + \u03bb 2 log(y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking function",
"sec_num": "5.3"
},
{
"text": "where \u03bb 1 and \u03bb 2 are non-negative weights subject to \u03bb 1 + \u03bb 2 = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking function",
"sec_num": "5.3"
},
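A sketch of the two ranking forms of Eq. (3); the default weights and the small constant guarding log(0) when ctr = 0 (as for [C D E] in Table 2) are our assumptions:

```python
from math import log

def utility(num, ctr, lam1=0.5, lam2=0.5, form="log-linear"):
    """Eq. (3) ranking function: larger when num is smaller and ctr
    is larger.  Weights are non-negative with lam1 + lam2 = 1; the
    1e-9 smoothing for ctr = 0 in the log-linear form is ours."""
    if form == "linear":
        return -lam1 * num + lam2 * ctr
    return -lam1 * log(num) + lam2 * log(ctr + 1e-9)
```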
{
"text": "We will use a development set to determine which form is better and to learn the best weight settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking function",
"sec_num": "5.3"
},
{
"text": "Binarization usually serves as a preprocessing step in parsing: grammars are binarized before they are fed into the parsing stage. There is much known work on speeding up CKY parsing, so we can expect that replacing the binarization step with a better one, while keeping the subsequent parsing unchanged, will make parsing more efficient. We conduct an experiment to confirm this idea in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination with other techniques",
"sec_num": "6"
},
{
"text": "We would like to make a few more remarks before we advance to the experiments. The first is about parsing accuracy when combining binarization with other parsing speed-up techniques. Binarization itself does not affect parsing accuracy. When combined with exact inference algorithms, like the iterative CKY (Tsuruoka and Tsujii, 2004), the accuracy will be the same. However, if combined with inexact pruning techniques like beam-pruning (Goodman, 1997) or coarse-to-fine parsing (Charniak et al., 2006), binarization may interact with those pruning methods in a complicated way and affect parsing accuracy. This is because different binarizations generate different sets of intermediate symbols: with the same complete constituents, one binarization might derive incomplete constituents that could be pruned while another might not, which would affect the accuracy. We do not address this interaction in this paper, but leave it to future work. In Section 7.3 we will use the iterative CKY for testing.",
"cite_spans": [
{
"start": 303,
"end": 330,
"text": "(Tsuruoka and Tsujii, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 441,
"end": 456,
"text": "(Goodman, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 483,
"end": 506,
"text": "(Charniak et al., 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combination with other techniques",
"sec_num": "6"
},
{
"text": "In addition, we believe there exist some speed-up techniques which are incompatible with our binarization. One such example may be the top-down left-corner filtering (Graham et al., 1980; Moore, 2000) , which seems to be only applicable to the process of left binarization. A detailed investigation on this problem will be left to the future work.",
"cite_spans": [
{
"start": 166,
"end": 187,
"text": "(Graham et al., 1980;",
"ref_id": "BIBREF6"
},
{
"start": 188,
"end": 200,
"text": "Moore, 2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combination with other techniques",
"sec_num": "6"
},
{
"text": "The last issue is how our binarization performs on a lexicalized parser, like Collins (1997) . Our intuition is that we cannot apply our binarization to Collins (1997) . The key fact in lexicalized parsers is that we cannot explicitly write down all rules and compute their probabilities precisely, due to the great number of rules and the severe data sparsity problem. Therefore in Collins (1997) grammar rules are already factorized into a set of probabilities. In order to capture the dependency relationship between lexcial heads Collins (1997) breaks down the rules from head outwards, which prevents us from factorizing them in other ways. Therefore our binarization cannot apply to the lexicalized parser. However, there are state-of-the-art unlexicalized parsers (Klein and Manning, 2003b; Petrov et al., 2006) , to which we believe our binarization can be applied.",
"cite_spans": [
{
"start": 78,
"end": 92,
"text": "Collins (1997)",
"ref_id": "BIBREF3"
},
{
"start": 153,
"end": 167,
"text": "Collins (1997)",
"ref_id": "BIBREF3"
},
{
"start": 383,
"end": 397,
"text": "Collins (1997)",
"ref_id": "BIBREF3"
},
{
"start": 534,
"end": 548,
"text": "Collins (1997)",
"ref_id": "BIBREF3"
},
{
"start": 771,
"end": 797,
"text": "(Klein and Manning, 2003b;",
"ref_id": "BIBREF12"
},
{
"start": 798,
"end": 818,
"text": "Petrov et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combination with other techniques",
"sec_num": "6"
},
{
"text": "We conducted two experiments on Penn Treebank II corpus (Marcus et al., 1994) . The first is to compare the effects of different binarizations on parsing and the second is to test the feasibility to combine our work with iterative CKY parsing (Tsuruoka and Tsujii, 2004) to achieve even better efficiency.",
"cite_spans": [
{
"start": 56,
"end": 77,
"text": "(Marcus et al., 1994)",
"ref_id": "BIBREF13"
},
{
"start": 243,
"end": 270,
"text": "(Tsuruoka and Tsujii, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "Following conventions, we learnt the grammar from Wall Street Journal (WSJ) section 2 to 21 and modified it by discarding all functional tags and empty nodes. The parser obtained this way is a pure unlexicalized context-free parser with the raw treebank grammar. Its accuracy turns out to be 72.46% in terms of F1 measure, quite the same as 72.62% as stated in Klein and Manning (2003b) . We adopt this parser in our experiment not only because of simplicity but also because we focus on parsing efficiency.",
"cite_spans": [
{
"start": 361,
"end": 386,
"text": "Klein and Manning (2003b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
{
"text": "For all sentences with no more than 40 words in section 22, we use the first 10% as the development set, and the last 90% as the test set. There are 158 and 1,420 sentences in development set and test set, respectively. We use the whole 2,416 sentences in section 23 as the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
{
"text": "We use the development set to determine the better form of the ranking function f as well as to tune its weights. Both metrics of num and ctr are normalized before use. Since there is only one free variable in \u03bb 1 and \u03bb 2 , we can just enumerate 0 \u2264 \u03bb 1 \u2264 1, and set \u03bb 2 = 1 \u2212 \u03bb 1 . The increasing step is firstly set to 0.05 for the approximate location of the optimal weight, then set to 0.001 to learn more precisely around the optimal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
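The two-stage tuning above (coarse step 0.05, fine step 0.001, with \u03bb 2 = 1 \u2212 \u03bb 1) can be sketched as a simple grid search. The `parse_cost` callback is a hypothetical stand-in for parsing the development set with a given \u03bb 1 and counting the constituents produced; the function name is ours.

```python
def tune_lambda1(parse_cost):
    """Two-stage grid search for lambda1 (lambda2 = 1 - lambda1):
    a coarse pass with step 0.05, then a fine pass with step 0.001
    around the coarse optimum.  parse_cost(lam1) is assumed to
    return the dev-set constituent count for that weight."""
    # Coarse pass: 0.00, 0.05, ..., 1.00.
    coarse = [i * 0.05 for i in range(21)]
    best = min(coarse, key=parse_cost)
    # Fine pass in a +/-0.05 window around the coarse optimum.
    lo, hi = max(0.0, best - 0.05), min(1.0, best + 0.05)
    n = int(round((hi - lo) / 0.001))
    fine = [lo + i * 0.001 for i in range(n + 1)]
    return min(fine, key=parse_cost)
```

On the paper's data this procedure selects \u03bb 1 = 0.014 for the linear form and \u03bb 1 = 0.691 for the log-linear form.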
{
"text": "We find that the optimal is 5,773,088 (constituents produced in parsing development set) with \u03bb 1 = 0.014 for linear form, while for log-linear form the optimal is 5,905,292 with \u03bb 1 = 0.691. Therefore we determine that the better form for the ranking function is linear with \u03bb 1 = 0.014 and \u03bb 2 = 0.986.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
{
"text": "The size of each binarized grammar used in the experiment is shown in Table 3 . \"Original\" refers to the raw treebank grammar. \"Ours\" refers to the learnt binarized grammar by our approach. For the rest please refer to Section 2. We also tested whether the size of the training set would have significant effect. We use the first 10%, 20%, \u2022 \u2022 \u2022 , up to 100% of section 23 as the training set, respectively, and parse the development set. We find that all sizes examined have a similar impact, since the numbers of constituents produced are all around 5,780,000. It means the training corpus does not have to be very large.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "7.1"
},
{
"text": "The entire experiments are conducted on a server with an Intel Xeon 2.33 GHz processor and 8 GB memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# of Symbols # of Rules",
"sec_num": null
},
{
"text": "In this part, we use CKY to parse the entire test set and evaluate the efficiency of different binarizations. The for-statement implementation of the inner most loop of CKY will affect the parsing time though it won't affect the number of constituents produced as discussed in Section 3.2. The best implementations may be different for different binarized grammars. We examine M1\u223cM4, testing their parsing time on the development set. Results show that for right binarization the best method is M3, while for the rest the best is M2. We use the best method for each binarized grammar when comparing the parsing time in Experiment 1. Table 4 reports the total number of constituents and total time required for parsing the entire test set. It shows that different binarizations have great impacts on the efficiency of CKY. With our binarization, the number of constituents produced is nearly 20% of that required by right binarization and nearly 25% of that by the widely-used left binarization. As for the parsing time, CKY with our binarization is about 2.5 times as fast as with right binarization and about 1.75 times as fast as with left binarization. This illustrates that our binarization can significantly improve the efficiency of the CKY parsing.",
"cite_spans": [],
"ref_spans": [
{
"start": 633,
"end": 640,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experiment 1: compare among binarizations",
"sec_num": "7.2"
},
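The M2 and M3 loop orders compared above can be sketched as follows. The rule-index dictionaries (a map from left child to `(parent, right-child)` pairs for M2, and symmetrically for M3) are our own hypothetical representation of the binary grammar; the paper does not prescribe a data structure.

```python
def fill_parent_span_m2(rules_by_left, left_span, right_span):
    """M2: for each Y completed in the left span, enumerate the rules
    X -> Y Z and check whether Z is in the right span.
    rules_by_left maps a left child Y to a list of (X, Z) pairs."""
    parent_span = set()
    for Y in left_span:
        for X, Z in rules_by_left.get(Y, ()):
            if Z in right_span:
                parent_span.add(X)
    return parent_span

def fill_parent_span_m3(rules_by_right, left_span, right_span):
    """M3: symmetric to M2, driven by the right child Z.
    rules_by_right maps a right child Z to a list of (X, Y) pairs."""
    parent_span = set()
    for Z in right_span:
        for X, Y in rules_by_right.get(Z, ()):
            if Y in left_span:
                parent_span.add(X)
    return parent_span
```

Both produce the same parent span; which is faster depends on whether cells tend to be smaller on the left or the right, which is why the best choice varies across binarizations.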
{
"text": "Constituents Figure 3 reports the detailed number of complete constituents, successful incomplete constituents and failed incomplete constituents produced in parsing. The result proves that our binarization can significantly reduce the number of failed incomplete constituents, by a factor of 10 in contrast with left binarization. Meanwhile, the number of successful in-complete constituents is also reduced by a factor of 2 compared to left binarization. Another interesting observation is that parsing with a smaller grammar does not always yield a higher efficiency. Our binarized grammar is more than twice the size of compact binarization, but ours is more efficient. It proves that parsing efficiency is related to both the size of grammar in use as well as the number of constituents produced.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Binarization",
"sec_num": null
},
{
"text": "In Section 1, we used an example of \"get the bag and go\" to illustrate that for rules like X \u2192 Y CC Y , right binarization is more suitable. We also investigated the corresponding linguistic nature that the word to the right of \"and\" is more likely to indicate the true relationship represented by \"and\". We argued that a better binarization can reflect such linguistic nature of the input language. To our surprise, our learnt binarization indeed captures this linguistic insight, by binarizing N P \u2192 N P CC N P from right to left.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization",
"sec_num": null
},
{
"text": "Finally, we would like to acknowledge the limitation of our assumption made in Section 3.2. Table 4 shows that the parsing time of CKY is not always monotonic increasing with the number of constituents produced. Head binarization produces fewer constituents than left binarization but consumes more parsing time.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Binarization",
"sec_num": null
},
{
"text": "In this part, we test the performance of combining our binarization with the iterative CKY (Tsuruoka and Tsujii, 2004 ) (henceforth T&T) algorithm.",
"cite_spans": [
{
"start": 91,
"end": 117,
"text": "(Tsuruoka and Tsujii, 2004",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: combine with iterative CKY",
"sec_num": "7.3"
},
{
"text": "Iterative CKY is a procedure of multiple passes of normal CKY: in each pass, it uses a threshold to prune bad constituents; if it cannot find a successful parse in one pass, it will relax the threshold and start another; this procedure is repeated until a successful parse is returned. T&T used left binarization. We re-implement their experiments and combine iterative CKY with our binarization. Note that iterative CKY is an exact inference algorithm that guarantees to return the optimal parse. As discussed in Section 6, the parsing accuracy is not changed in this experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: combine with iterative CKY",
"sec_num": "7.3"
},
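The outer loop of iterative CKY described above can be sketched as follows. Here `cky_with_beam` is a hypothetical callback standing in for one beam-pruned CKY pass that returns a parse or `None`; the pruning itself is only represented by its interface.

```python
def iterative_cky(sentence, cky_with_beam, init_threshold, step):
    """Multi-pass CKY: each pass prunes constituents using the current
    threshold; on failure the threshold is relaxed by `step` (e.g. 11 or
    17 log-probability units) and parsing restarts from scratch.
    cky_with_beam(sentence, threshold) must return a parse or None."""
    threshold = init_threshold
    while True:
        parse = cky_with_beam(sentence, threshold)
        if parse is not None:
            return parse
        threshold += step  # relax the beam and try again
```

Because the final successful pass uses a beam wide enough to admit the Viterbi parse, the overall procedure remains exact despite the pruning in early passes.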
{
"text": "T&T used a held-out set to learn the best step of threshold decrease. They reported that the best step was 11 (in log-probability). We found that the best step was indeed 11 for left binarization; for our binarizaiton, the best step was 17. T&T used M4 as the for-statement implementation of CKY. In this part, we follow the same method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: combine with iterative CKY",
"sec_num": "7.3"
},
{
"text": "The result is shown in Table 5 . We can see that iterative CKY can achieve better performance by using a better binarization. We also see that the reduction by binarization with pruning is less significant than without pruning. It seems that the pruning itself in iterative CKY can counteract the reduction effect of binarization to some extent. Still the best performance is archieved by combining iterative CKY with a better binarization.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 5",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Experiment 2: combine with iterative CKY",
"sec_num": "7.3"
},
{
"text": "Constituents Time (s) Tsuruoka and Tsujii (2004) Almost all work on parsing starts from a binarized grammar. Usually binarization plays a role of preprocessing. Left binarization is widely used (Aho and Ullman, 1972; Charniak et al., 1998; Tsuruoka and Tsujii, 2004) while right binarization is rarely used in the literature. Compact binarization was introduced in Schmid (2004) , based on the intuition that a more compact grammar will help acheive a highly efficient CKY parser, though from our experiment it is not always true.",
"cite_spans": [
{
"start": 194,
"end": 216,
"text": "(Aho and Ullman, 1972;",
"ref_id": "BIBREF0"
},
{
"start": 217,
"end": 239,
"text": "Charniak et al., 1998;",
"ref_id": "BIBREF1"
},
{
"start": 240,
"end": 266,
"text": "Tsuruoka and Tsujii, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 365,
"end": 378,
"text": "Schmid (2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CKY + Binarization",
"sec_num": null
},
{
"text": "We define the fashion of binarizations in Section 2, where we encode an intermediate symbol using the ngrams of original symbols (content) it derives. This encoding is known as the Inside-Trie (I-Trie) in Klein and Manning (2003a) , in which they also mentioned another encoding called Outside-Trie (O-Trie). O-Trie encodes an intermediate symbol using the its parent and the symbols surrounding it in the original rule (context). Klein and Manning (2003a) claimed that O-Trie is superior for calculating estimates for A* parsing. We plan to investigate binarization defined by O-Trie in the future.",
"cite_spans": [
{
"start": 205,
"end": 230,
"text": "Klein and Manning (2003a)",
"ref_id": "BIBREF11"
},
{
"start": 431,
"end": 456,
"text": "Klein and Manning (2003a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CKY + Binarization",
"sec_num": null
},
{
"text": "Both I-Trie and O-Trie are equivalent encodings, resulting in equivalent grammars, because they both encode using the complete content or context information of an intermediate symbol. If we use part of the information to encode, for example just parent in O-Trie case, the encoding will be non-equivalent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CKY + Binarization",
"sec_num": null
},
{
"text": "Proper non-equivalent encodings are used to generalize the grammar and prevent the binarized grammar becoming too specific (Charniak et al., 2006) . It is equipped with head binarization to help improve parsing accuracy, following the traditional linguistic insight that phrases are organized around the head (Collins, 1997; Klein and Manning, 2003b ). In contrast, we focus our attention on parsing efficiency not accuracy in this paper.",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "(Charniak et al., 2006)",
"ref_id": "BIBREF2"
},
{
"start": 309,
"end": 324,
"text": "(Collins, 1997;",
"ref_id": "BIBREF3"
},
{
"start": 325,
"end": 349,
"text": "Klein and Manning, 2003b",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CKY + Binarization",
"sec_num": null
},
{
"text": "Binarization also attracts attention in the syntaxbased models for machine translation, where translation can be modeled as a parsing problem and binarization is essential for efficient parsing (Zhang et al., 2006; Huang, 2007) . Wang et al. (2007) employs binarization to decompose syntax trees to acquire more re-usable translation rules in order to improve translation accuracy. Their binarization is restricted to be a mixture of left and right binarization. This constraint may decrease the power of binarization when applied to speeding up parsing in our problem.",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "(Zhang et al., 2006;",
"ref_id": "BIBREF20"
},
{
"start": 215,
"end": 227,
"text": "Huang, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 230,
"end": 248,
"text": "Wang et al. (2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CKY + Binarization",
"sec_num": null
},
{
"text": "We have studied the impact of grammar binarization on parsing efficiency and presented a novel binarization which utilizes rich information learnt from training corpus. Experiments not only showed that our learnt binarization outperforms other existing ones in terms of parsing efficiency, but also demon-strated the feasibility to combine our binarization with known parsing speed-up techniques to achieve even better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "9"
},
{
"text": "An advantage of our approach to finding a good binarization would be that the training corpus does not need to be parsed sentences. Only POS tagged sentences will suffice for training. This will save the effort to adapt the model to a new domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "9"
},
{
"text": "Our approach is based on the assumption that the efficiency of CKY parsing is primarily determined by the number of constituents produced. This is a fairly sound one, but not always true, as shown in Section 7.2. One future work will be relaxing the assumption and finding a better appraoch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "9"
},
{
"text": "Another future work will be to apply our work to chart parsing. It is known that binarization is also essential for an O(n 3 ) complexity of chart parsing, where dotted rules are used to binarize the grammar implicitly from left. As shown in Charniak et al. (1998) , we can binarize explicitly and use intermediate symbols to replace dotted rules in chart parsing. Therefore chart parsing can use multiple binarizations. We expect that a better binarization will also help improve the efficiency of chart parsing.",
"cite_spans": [
{
"start": 242,
"end": 264,
"text": "Charniak et al. (1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "9"
},
{
"text": "Note that we should skip Y (Z) if it never appears as the first (second) symbol on the right hand side of any rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More precisely, it is more than a parse tree for it contains all symbols recognized in parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For log-linear form, if num(w) = 0 (and consequently ctr(w) = 0), we set f (num(w), ctr(w)) = 0; if num(w) > 0 but ctr(w) = 0, we set f (num(w), ctr(w)) = \u2212\u221e.4 Since f is used for ranking, the magnitude is not important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviwers for their pertinent comments, Yoshimasa Tsuruoka for the detailed explanations on his referred paper, Yunbo Cao, Shujian Huang, Zhenxing Wang , John Blitzer and Liang Huang for their valuable suggestions in preparing the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The theory of parsing, translation, and compiling",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aho, A. V. and Ullman, J. D. (1972). The theory of parsing, translation, and compiling. Prentice- Hall, Inc., Upper Saddle River, NJ, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Edge-based best-first chart parsing",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Johnson",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Six Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, E., Goldwater, S., and Johnson, M. (1998). Edge-based best-first chart parsing. In Proceedings of the Six Workshop on Very Large Corpora, pages 127-133.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multilevel coarse-to-fine pcfg parsing",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Austerweil",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Haxton",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Shrivaths",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pozar",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2006,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, E., Johnson, M., Elsner, M., Austerweil, J., Ellis, D., Haxton, I., Hill, C., Shrivaths, R., Moore, J., Pozar, M., and Vu, T. (2006). Multi- level coarse-to-fine pcfg parsing. In HLT-NAACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, M. (1997). Three generative, lexicalised models for statistical parsing. In ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An efficient context-free parsing algorithm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Earley",
"suffix": ""
}
],
"year": 1970,
"venue": "Commun. ACM",
"volume": "13",
"issue": "2",
"pages": "94--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Earley, J. (1970). An efficient context-free parsing algorithm. Commun. ACM, 13(2):94-102.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Global thresholding and multiple-pass parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1997,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodman, J. (1997). Global thresholding and multiple-pass parsing. In EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An improved context-free recognizer",
"authors": [
{
"first": "S",
"middle": [
"L"
],
"last": "Graham",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Harrison",
"suffix": ""
},
{
"first": "W",
"middle": [
"L"
],
"last": "Ruzzo",
"suffix": ""
}
],
"year": 1980,
"venue": "ACM Trans. Program. Lang. Syst",
"volume": "2",
"issue": "3",
"pages": "415--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham, S. L., Harrison, M. A., and Ruzzo, W. L. (1980). An improved context-free recognizer. ACM Trans. Program. Lang. Syst., 2(3):415-462.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Binarization, synchronous binarization, and target-side binarization",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, L. (2007). Binarization, synchronous bina- rization, and target-side binarization. In Proceed- ings of SSST, NAACL-HLT 2007 / AMTA Work- shop on Syntax and Structure in Statistical Trans- lation, pages 33-40, Rochester, New York. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An efficient recognition and syntax analysis algorithm for context-free languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kasami, T. (1965). An efficient recognition and syntax analysis algorithm for context-free lan- guages. Technical Report AFCRL-65-758, Air Force Cambridge Research Laboratory, Bedford, Massachusetts.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Algorithm schemata and data structures in syntactic processing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kay, M. (1980). Algorithm schemata and data struc- tures in syntactic processing. Technical Report CSL80-12, Xerox PARC, Palo Alto, CA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Parsing and hypergraphs",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, D. and Manning, C. D. (2001). Parsing and hypergraphs. In IWPT.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A* parsing: Fast exact viterbi parse selection",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, D. and Manning, C. D. (2003a). A* parsing: Fast exact viterbi parse selection. In HLT-NAACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, D. and Manning, C. D. (2003b). Accurate unlexicalized parsing. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The penn treebank: Annotating predicate argument structure",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Macintyre",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Schasberger",
"suffix": ""
}
],
"year": 1994,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M. P., Kim, G., Marcinkiewicz, M. A., MacIntyre, R., Bies, A., Ferguson, M., Katz, K., and Schasberger, B. (1994). The penn treebank: Annotating predicate argument structure. In HLT- NAACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improved left-corner chart parsing for large context-free grammars",
"authors": [
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2000,
"venue": "IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, R. C. (2000). Improved left-corner chart parsing for large context-free grammars. In IWPT.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petrov, S., Barrett, L., Thibaux, R., and Klein, D. (2006). Learning accurate, compact, and inter- pretable tree annotation. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient parsing of highly ambiguous context-free grammars with bit vectors",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmid, H. (2004). Efficient parsing of highly am- biguous context-free grammars with bit vectors. In COLING.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Iterative cky parsing for probabilistic context-free grammars",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2004,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsuruoka, Y. and Tsujii, J. (2004). Iterative cky pars- ing for probabilistic context-free grammars. In IJCNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Binarizing syntax trees to improve syntax-based machine translation accuracy",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, W., Knight, K., and Marcu, D. (2007). Bina- rizing syntax trees to improve syntax-based ma- chine translation accuracy. In EMNLP-CoNLL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Recognition and parsing of context-free languages in time n 3",
"authors": [
{
"first": "D",
"middle": [
"H"
],
"last": "Younger",
"suffix": ""
}
],
"year": 1967,
"venue": "Information and Control",
"volume": "10",
"issue": "2",
"pages": "189--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Younger, D. H. (1967). Recognition and parsing of context-free languages in time n 3 . Information and Control, 10(2):189-208.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Synchronous binarization for machine translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2006,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, H., Huang, L., Gildea, D., and Knight, K. (2006). Synchronous binarization for machine translation. In HLT-NAACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Parsing with left and right binarization.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Parse of the sentence A B C D E",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Complete constituents (henceforth CCs) are those composed by the original grammar symbols and spans. For example in Figure 2, X :[0, 4], Y :[0, 3] and Y :[0, 4] are all CCs. Incomplete constituents (henceforth ICs) are those labeled by intermediate symbols. Figure 2 does not show them directly, but we can still read the possible ones. For example, if the binarized grammar in use contains an intermediate symbol [A B C], then there will be two related ICs [A B C]:[0, 3] and [A B C]:[0, 4] (the latter is due to C:[2, 4]) produced in parsing. ICs represent the intermediate steps to recognize and complete CCs.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "and finally X :[0, 4]; if using right binarization, we will get [C D]:[2, 4], [B C D]:[1, 4] and again X:[0, 4].",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "example in Figure 2, for a possible intermediate symbol [A B C], there are two related ICs ([A B C] : [0, 3] and [A B C] : [0, 4]) in the parse, so we have num([A B C]) = 2. Meanwhile, four CCs (Y :[0, 3], X :[0, 4], Y :[0, 4] and Z :[0, 5]) can be generated from the two related ICs. Therefore ctr([A B C]) = 4. We list the two metrics for every ngram in Figure 2 in",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "Comparison on various constituents",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"html": null,
"text": "Left binarization In the binarized grammar, symbols of form [A B] are new (also called intermediate) nonterminals.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"text": "Enumerate all rules X \u2192 Y Z, and check if Y is in left span and Z in right span.",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">M2 For each Y in left span, enumerate all rules X \u2192</td></tr><tr><td/><td>Y Z, and check if Z is in right span.</td></tr><tr><td colspan=\"2\">M3 For each Z in right span, enumerate all rules X \u2192</td></tr><tr><td/><td>Y Z, and check if Y is in left span.</td></tr><tr><td colspan=\"2\">M4 Enumerate each Y in left span and Z in right span 1 ,</td></tr><tr><td/><td>check if there are any rules X \u2192 Y Z.</td></tr><tr><td colspan=\"2\">Algorithm 1 The inner most loop of CKY</td></tr><tr><td colspan=\"2\">1: for X \u2192 Y Z, Y in left span and Z in right span</td></tr><tr><td>2:</td><td>Add X to parent span</td></tr></table>"
},
"TABREF3": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"text": "Grammar size of different binarizations",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF8": {
"html": null,
"text": "Performance on test set",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF11": {
"html": null,
"text": "Combining with iterative CKY parsing",
"num": null,
"type_str": "table",
"content": "<table><tr><td>8 Related work</td></tr></table>"
}
}
}
}