{
"paper_id": "D10-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:21.501024Z"
},
"title": "Soft Syntactic Constraints for Hierarchical Phrase-based Translation Using Latent Syntactic Distributions",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland College Park",
"location": {
"postCode": "20742",
"region": "MD"
}
},
"email": "[email protected]"
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a novel approach to enhance hierarchical phrase-based machine translation systems with linguistically motivated syntactic features. Rather than directly using treebank categories as in previous studies, we learn a set of linguistically-guided latent syntactic categories automatically from a source-side parsed, word-aligned parallel corpus, based on the hierarchical structure among phrase pairs as well as the syntactic structure of the source side. In our model, each X nonterminal in a SCFG rule is decorated with a real-valued feature vector computed based on its distribution of latent syntactic categories. These feature vectors are utilized at decoding time to measure the similarity between the syntactic analysis of the source side and the syntax of the SCFG rules that are applied to derive translations. Our approach maintains the advantages of hierarchical phrase-based translation systems while at the same time naturally incorporates soft syntactic constraints.",
"pdf_parse": {
"paper_id": "D10-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a novel approach to enhance hierarchical phrase-based machine translation systems with linguistically motivated syntactic features. Rather than directly using treebank categories as in previous studies, we learn a set of linguistically-guided latent syntactic categories automatically from a source-side parsed, word-aligned parallel corpus, based on the hierarchical structure among phrase pairs as well as the syntactic structure of the source side. In our model, each X nonterminal in a SCFG rule is decorated with a real-valued feature vector computed based on its distribution of latent syntactic categories. These feature vectors are utilized at decoding time to measure the similarity between the syntactic analysis of the source side and the syntax of the SCFG rules that are applied to derive translations. Our approach maintains the advantages of hierarchical phrase-based translation systems while at the same time naturally incorporates soft syntactic constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, syntax-based translation models (Chiang, 2007; Galley et al., 2004; Liu et al., 2006) have shown promising progress in improving translation quality, thanks to the incorporation of phrasal translation adopted from the widely used phrase-based models (Och and Ney, 2004) to handle local fluency and the engagement of synchronous context-free grammars (SCFG) to handle non-local phrase reordering. Approaches to syntaxbased translation models can be largely categorized into two classes based on their dependency on annotated corpus (Chiang, 2007) . Linguistically syntaxbased models (e.g., (Yamada and Knight, 2001; Galley et al., 2004; Liu et al., 2006) ) utilize structures defined over linguistic theory and annotations (e.g., Penn Treebank) and guide the derivation of SCFG rules with explicit parsing on at least one side of the parallel corpus. Formally syntax-based models (e.g., (Wu, 1997; Chiang, 2007) ) extract synchronous grammars from parallel corpora based on the hierarchical structure of natural language pairs without any explicit linguistic knowledge or annotations. In this work, we focus on the hierarchical phrase-based models of Chiang (2007) , which is formally syntax-based, and always refer the term SCFG, from now on, to the grammars of this model class.",
"cite_spans": [
{
"start": 49,
"end": 63,
"text": "(Chiang, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 64,
"end": 84,
"text": "Galley et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 85,
"end": 102,
"text": "Liu et al., 2006)",
"ref_id": "BIBREF12"
},
{
"start": 267,
"end": 286,
"text": "(Och and Ney, 2004)",
"ref_id": "BIBREF19"
},
{
"start": 548,
"end": 562,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF2"
},
{
"start": 606,
"end": 631,
"text": "(Yamada and Knight, 2001;",
"ref_id": "BIBREF27"
},
{
"start": 632,
"end": 652,
"text": "Galley et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 653,
"end": 670,
"text": "Liu et al., 2006)",
"ref_id": "BIBREF12"
},
{
"start": 903,
"end": 913,
"text": "(Wu, 1997;",
"ref_id": "BIBREF24"
},
{
"start": 914,
"end": 927,
"text": "Chiang, 2007)",
"ref_id": "BIBREF2"
},
{
"start": 1167,
"end": 1180,
"text": "Chiang (2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the one hand, hierarchical phrase-based models do not suffer from errors in syntactic constraints that are unavoidable in linguistically syntax-based models. Despite the complete lack of linguistic guidance, the performance of hierarchical phrasebased models is competitive when compared to linguistically syntax-based models. As shown in , hierarchical phrase-based models significantly outperform tree-to-string models (Liu et al., 2006; Huang et al., 2006) , even when attempts are made to alleviate parsing errors using either forest-based decoding or forest-based rule extraction .",
"cite_spans": [
{
"start": 424,
"end": 442,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF12"
},
{
"start": 443,
"end": 462,
"text": "Huang et al., 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, when properly used, syntactic constraints can provide invaluable benefits to improve translation quality. The tree-to-string models of can actually signif-icantly outperform hierarchical phrase-based models when using forest-based rule extraction together with forest-based decoding. Chiang (2010) also obtained significant improvement over his hierarchical baseline by using syntactic parse trees on both source and target sides to induce fuzzy (not exact) tree-to-tree rules and by also allowing syntactically mismatched substitutions.",
"cite_spans": [
{
"start": 303,
"end": 316,
"text": "Chiang (2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we augment rules in hierarchical phrase-based translation systems with novel syntactic features. Unlike previous studies (e.g., (Zollmann and Venugopal, 2006) ) that directly use explicit treebank categories such as NP, NP/PP (NP missing PP from the right) to annotate phrase pairs, we induce a set of latent categories to capture the syntactic dependencies inherent in the hierarchical structure of phrase pairs, and derive a real-valued feature vector for each X nonterminal of a SCFG rule based on the distribution of the latent categories. Moreover, we convert the equality test of two sequences of syntactic categories, which are either identical or different, into the computation of a similarity score between their corresponding feature vectors. In our model, two symbolically different sequences of syntactic categories could have a high similarity score in the feature vector representation if they are syntactically similar, and a low score otherwise. In decoding, these feature vectors are utilized to measure the similarity between the syntactic analysis of the source side and the syntax of the SCFG rules that are applied to derive translations. Our approach maintains the advantages of hierarchical phrase-based translation systems while at the same time naturally incorporates soft syntactic constraints. To the best of our knowledge, this is the first work that applies real-valued syntactic feature vectors to machine translation.",
"cite_spans": [
{
"start": 143,
"end": 173,
"text": "(Zollmann and Venugopal, 2006)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 briefly reviews hierarchical phrase-based translation models. Section 3 presents an overview of our approach, followed by Section 4 describing the hierarchical structure of aligned phrase pairs and Section 5 describing how to induce latent syntactic categories. Experimental results are reported in Section 6, followed by discussions in Section 7. Section 8 concludes this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An SCFG is a synchronous rewriting system generating source and target side string pairs simultaneously based on a context-free grammar. Each synchronous production (i.e., rule) rewrites a nonterminal into a pair of strings, \u03b3 and \u03b1, where \u03b3 (or \u03b1) contains terminal and nonterminal symbols from the source (or target) language and there is a one-toone correspondence between the nonterminal symbols on both sides. In particular, the hierarchical model (Chiang, 2007) studied in this paper explores hierarchical structures of natural language and utilize only a unified nonterminal symbol X in the grammar,",
"cite_spans": [
{
"start": 453,
"end": 467,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Phrase-Based Translation",
"sec_num": "2"
},
{
"text": "X \u2192 \u03b3, \u03b1, \u223c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Phrase-Based Translation",
"sec_num": "2"
},
{
"text": "where \u223c is the one-to-one correspondence between X's in \u03b3 and \u03b1, and it can be indicated by underscripted co-indexes. Two example English-to-Chinese translation rules are represented as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Phrase-Based Translation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 give the pen to me, \u94a2\u7b14 \u7ed9 \u6211 (1) X \u2192 give X 1 to me, X 1 \u7ed9 \u6211",
"eq_num": "(2)"
}
],
"section": "Hierarchical Phrase-Based Translation",
"sec_num": "2"
},
{
"text": "The SCFG rules of hierarchical phrase-based models are extracted automatically from corpora of word-aligned parallel sentence pairs (Brown et al., 1993; Och and Ney, 2000) . An aligned sentence pair is a tuple (E, F, A), where E = e 1 \u2022 \u2022 \u2022 e n can be interpreted as an English sentence of length n, F = f 1 \u2022 \u2022 \u2022 f m its translation of length m in a foreign language, and A a set of links between words of the two sentences. Figure 1 (a) shows an example of aligned English-to-Chinese sentence pair. Widely adopted in phrase-based models (Och and Ney, 2004) , a pair of consecutive sequences of words from E and F is a phrase pair if all words are aligned only within the sequences and not to any word outside. We call a sequence of words a phrase if it corresponds to either side of a phrase pair, and a non-phrase otherwise. Note that the boundary words of a phrase pair may not be aligned to any other word. We call the phrase pairs with all boundary words aligned tight phrase pairs (Zhang et al., 2008) . A tight phrase pair is the minimal phrase pair among all that share the same set of alignment links. The extraction of SCFG rules proceeds as follows. In the first step, all phrase pairs below a maximum length are extracted as phrasal rules. In the second step, abstract rules are extracted from tight phrase pairs that contain other tight phrase pairs by replacing the sub phrase pairs with co-indexed Xnonterminals. Chiang (2007) also introduced several requirements (e.g., there are at most two nonterminals at the right hand side of a rule) to safeguard the quality of the abstract rules as well as keeping decoding efficient. In our example above, rule (2) can be extracted from rule (1) with the following sub phrase pair:",
"cite_spans": [
{
"start": 132,
"end": 152,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF0"
},
{
"start": 153,
"end": 171,
"text": "Och and Ney, 2000)",
"ref_id": "BIBREF18"
},
{
"start": 539,
"end": 558,
"text": "(Och and Ney, 2004)",
"ref_id": "BIBREF19"
},
{
"start": 988,
"end": 1008,
"text": "(Zhang et al., 2008)",
"ref_id": "BIBREF28"
},
{
"start": 1429,
"end": 1442,
"text": "Chiang (2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 426,
"end": 438,
"text": "Figure 1 (a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Hierarchical Phrase-Based Translation",
"sec_num": "2"
},
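To make the extraction step above concrete, here is a minimal sketch (our own illustration, not the authors' implementation; the function name and data layout are hypothetical) that enumerates the tight phrase pairs of a word-aligned sentence pair:

```python
# links: set of (i, j) word-alignment links over source positions 0..n-1.
def tight_phrase_pairs(n, links):
    pairs = []
    for i1 in range(n):
        for i2 in range(i1, n):
            # Target positions reachable from the source span [i1, i2].
            js = [j for (i, j) in links if i1 <= i <= i2]
            if not js:
                continue
            j1, j2 = min(js), max(js)
            # Consistency: no target word inside [j1, j2] links outside [i1, i2].
            if any(j1 <= j <= j2 and not i1 <= i <= i2 for (i, j) in links):
                continue
            # Tightness: both source boundary words are themselves aligned
            # (the target boundaries j1 and j2 are aligned by construction).
            aligned = {i for (i, _) in links}
            if i1 in aligned and i2 in aligned:
                pairs.append(((i1, i2), (j1, j2)))
    return pairs
```

On the alignment of Figure 1 (a), this enumeration would recover pairs such as ⟨the pen, 钢笔⟩, the sub phrase pair used above to abstract rule (2) from rule (1).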
{
"text": "X \u2192 the pen, \u94a2\u7b14",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Phrase-Based Translation",
"sec_num": "2"
},
{
"text": "The use of a unified X nonterminal makes hierarchical phrase-based models flexible at capturing non-local reordering of phrases. However, such flexibility also comes at the cost that it is not able to differentiate between different syntactic usages of phrases. Suppose rule X \u2192 I am reading X 1 , \u2022 \u2022 \u2022 is extracted from a phrase pair with I am reading a book on the source side where X 1 is abstracted from the noun phrase pair . If this rule is used to translate I am reading the brochure of a book fair, it would be better to apply it over the entire string than over sub-strings such as I ... the brochure of. This is because the nonterminal X 1 in the rule was abstracted from a noun phrase on the source side of the training data and would thus be better (more informative) to be applied to phrases of the same type. Hierarchical phrase-based models are not able to distinguish syntactic differences like this. Zollmann and Venugopal (2006) attempted to address this problem by annotating phrase pairs with treebank categories based on automatic parse trees. They introduced an extended set of categories (e.g., NP+V for she went and DT\\NP for great wall, an noun phrase with a missing determiner on the left) to annotate phrase pairs that do not align with syntactic constituents. Their hard syntactic constraint requires that the nonterminals should match exactly to rewrite with a rule, which could rule out potentially correct derivations due to errors in the syntactic parses as well as to data sparsity. For example, NP cannot be instantiated with phrase pairs of type DT+NN, in spite of their syntactic similarity. Venugopal et al. (2009) addressed this problem by directly introducing soft syntactic preferences into SCFG rules using preference grammars, but they had to face the computational challenges of large preference vectors. Chiang (2010) also avoided hard constraints and took a soft alternative that directly models the cost of mismatched rule substitutions. This, however, would require a large number of parameters to be tuned on a generally small-sized heldout set, and it could thus suffer from over-tuning.",
"cite_spans": [
{
"start": 918,
"end": 947,
"text": "Zollmann and Venugopal (2006)",
"ref_id": "BIBREF30"
},
{
"start": 1629,
"end": 1652,
"text": "Venugopal et al. (2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Phrase-Based Translation",
"sec_num": "2"
},
{
"text": "In this work, we take a different approach to introduce linguistic syntax to hierarchical phrase-based translation systems and impose soft syntactic constraints between derivation rules and the syntactic parse of the sentence to be translated. For each phrase pair extracted from a sentence pair of a source-side parsed parallel corpus, we abstract its syntax by the sequence of highest root categories, which we call a tag sequence, that exactly 1 dominates the syntactic tree fragments of the source-side phrase. Figure 3 (b) shows the source-side parse tree of a sentence pair. The tag sequence for \"the pen\" is simply \"NP\" because it is a noun phrase, while phrase \"give the pen\" is dominated by a verb followed by a noun phrase, and thus its tag sequence is \"VBP NP\".",
"cite_spans": [],
"ref_spans": [
{
"start": 515,
"end": 523,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
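As an illustration of this tag-sequence abstraction, the following sketch (under an assumed tuple representation of parse trees; not code from the paper) greedily takes the highest constituents that exactly tile a source span:

```python
# A parse-tree node is (label, start, end, children), with end exclusive.
def tag_sequence(node, start, end):
    """Highest categories exactly tiling the source span [start, end)."""
    label, s, e, children = node
    if (s, e) == (start, end):
        return [label]                    # node exactly covers the span
    seq = []
    for child in children:
        _, cs, ce, _ = child
        if ce <= start or cs >= end:
            continue                      # child lies outside the span
        if start <= cs and ce <= end:
            seq.append(child[0])          # fully inside: highest category
        else:
            seq.extend(tag_sequence(child, max(cs, start), min(ce, end)))
    return seq

# With the tree of Figure 3 (b), the span "give the pen" (words 0-2)
# yields ["VBP", "NP"], matching the example in the text.
```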
{
"text": "Let T S = {ts 1 , \u2022 \u2022 \u2022 , ts m } be the set of all tag sequences extracted from a parallel corpus. The syntax of each X nonterminal 2 in a SCFG rule can be then Table 1 :",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "The distribution of tag sequences for X 1 in X \u2192 I am reading X 1 , \u2022 \u2022 \u2022 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "characterized by the distribution of tag sequences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "P X (T S) = (p X (ts 1 ), \u2022 \u2022 \u2022 , p X (ts m ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": ", based on the phrase pairs it is abstracted from. Table 1 shows an example distribution of tag sequences for X 1 in X \u2192 I am reading",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "X 1 , \u2022 \u2022 \u2022 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "Instead of directly using tag sequences, as we discussed their disadvantages above, we represent each of them by a real-valued feature vector. Suppose we have a collection of n latent syntactic cate-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "gories C = {c 1 , \u2022 \u2022 \u2022 , c n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "For each tag sequence ts, we compute its distribution of latent syntactic categories",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "P ts (C) = (p ts (c 1 ), \u2022 \u2022 \u2022 , p ts (c n ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": ". For example, P \"NP VP\" (C) = {0.5, 0.2, 0.3} means that the latent syntactic categories c 1 , c 2 , and c 3 are distributed as p(c 1 ) = 0.5, p(c 2 ) = 0.2, and p(c 3 ) = 0.3 for tag sequence \"NP VP\". We further convert the distribution to a normalized feature vector F (ts) to represent tag sequence ts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "F (ts) = (f 1 (ts), \u2022 \u2022 \u2022 , f n (ts)) = (p ts (c 1 ), \u2022 \u2022 \u2022 , p ts (c n )) (p ts (c 1 ), \u2022 \u2022 \u2022 , p ts (c n ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "The advantage of using real-valued feature vectors is that the degree of similarity between two tag sequences ts and ts in the space of the latent syntactic categories C can be simply computed as a dotproduct 3 of their feature vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "F (ts) \u2022 F (ts ) = 1\u2264i\u2264n f i (ts)f i (ts )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "which computes a syntactic similarity score in the range of 0 (totally syntactically different) to 1 (completely syntactically identical).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
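A minimal sketch of this construction (our own helper names), assuming the latent-category distribution P_ts(C) is already given; the normalization makes the dot product of two such vectors fall in [0, 1]:

```python
import math

def feature_vector(p_ts):
    """Normalize a latent-category distribution P_ts(C) into F(ts)."""
    norm = math.sqrt(sum(p * p for p in p_ts))
    return [p / norm for p in p_ts]

def similarity(f1, f2):
    """Dot product of two normalized feature vectors, in [0, 1]."""
    return sum(a * b for a, b in zip(f1, f2))

# Example from the text: P_"NP VP"(C) = (0.5, 0.2, 0.3).
f = feature_vector([0.5, 0.2, 0.3])
print(similarity(f, f))  # 1.0: identical tag sequences are maximally similar
```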
{
"text": "Similarly, we can represent the syntax of each X nonterminal in a rule with a feature vector F (X), computed as the sum of the feature vectors of tag sequences weighted by the distribution of tag sequences of the nonterminal X:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "F (X) = ts\u2208T S p X (ts) F (ts)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "Now we can impose soft syntactic constraints using these feature vectors when a SCFG rule is used to translate a parsed source sentence. Given that a X nonterminal in the rule is applied to a span with tag sequence 4 ts as determined by a syntactic parser, we can compute the following syntax similarity feature:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "SynSim(X, ts) = \u2212 log( F (ts) \u2022 F (X))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "Except that it is computed on the fly, this feature can be used in the same way as the regular features in hierarchical translation systems to determine the best translation, and its feature weight can be tuned in the same way together with the other features on a held-out data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
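Continuing the sketch above (hypothetical container types, not the authors' code): F(X) is the expectation of the tag-sequence vectors under p_X, and SynSim is the negative log dot product computed at decoding time:

```python
import math

def nonterminal_vector(p_x, feat):
    """p_x: dict tag sequence -> p_X(ts); feat: dict tag sequence -> F(ts)."""
    n = len(next(iter(feat.values())))
    vec = [0.0] * n
    for ts, p in p_x.items():
        for i, f in enumerate(feat[ts]):
            vec[i] += p * f              # F(X) = sum_ts p_X(ts) F(ts)
    return vec

def syn_sim(f_ts, f_x):
    """SynSim(X, ts) = -log(F(ts) . F(X)); assumes a nonzero dot product."""
    return -math.log(sum(a * b for a, b in zip(f_ts, f_x)))
```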
{
"text": "In our approach, the set of latent syntactic categories is automatically induced from a source-side parsed, word-aligned parallel corpus based on the hierarchical structure among phrase pairs along with the syntactic parse of the source side. In what follows, we will explain the two critical aspects of our approach, i.e., how to identify the hierarchical structures among all phrase pairs in a sentence pair, and how to induce the latent syntactic categories from the hierarchy to syntactically explain the phrase pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3"
},
{
"text": "The aforementioned abstract rule extraction algorithm of Chiang (2007) is based on the property that a tight phrase pair can contain other tight phrase pairs. Given two non-disjoint tight phrase pairs that share at least one common alignment link, there are only two relationships: either one completely includes another or they do not include one another but have a non-empty overlap, which we call a nontrivial overlap. In the second case, the intersection, differences, and union of the two phrase pairs are Figure 2 : A decomposition tree of tight phrase pairs with all tight phrase pairs listed on the right. As highlighted, the two non-maximal phrase pairs are generated by consecutive sibling nodes. also tight phrase pairs (see Figure 1 (b) for example), and the two phrase pairs, as well as their intersection and differences, are all sub phrase pairs of their union. Zhang et al. (2008) exploited this property to construct a hierarchical decomposition tree (Bui-Xuan et al., 2005) of phrase pairs from a sentence pair to extract all phrase pairs in linear time. In this paper, we focus on learning the syntactic dependencies along the hierarchy of phrase pairs. Our hierarchy construction follows Heber and Stoye (2001) .",
"cite_spans": [
{
"start": 57,
"end": 70,
"text": "Chiang (2007)",
"ref_id": "BIBREF2"
},
{
"start": 877,
"end": 896,
"text": "Zhang et al. (2008)",
"ref_id": "BIBREF28"
},
{
"start": 968,
"end": 991,
"text": "(Bui-Xuan et al., 2005)",
"ref_id": "BIBREF1"
},
{
"start": 1208,
"end": 1230,
"text": "Heber and Stoye (2001)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 511,
"end": 519,
"text": "Figure 2",
"ref_id": null
},
{
"start": 736,
"end": 748,
"text": "Figure 1 (b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Alignment-based Hierarchy",
"sec_num": "4"
},
{
"text": "Let P be the set of tight phrase pairs extracted from a sentence pair. We call a sequentially-ordered list 5 L = (p 1 , \u2022 \u2022 \u2022 , p k ) of unique phrase pairs p i \u2208 P a chain if every two successive phrase pairs in L have a non-trivial overlap. A chain is maximal if it can not be extended to its left or right with other phrase pairs. Note that any sub-sequence of phrase pairs in a chain generates a tight phrase pair. In particular, chain L generates a tight phrase pair \u03c4 (L) that corresponds exactly to the union of the alignment links in p \u2208 L. We call the phrase pairs generated by maximal chains maximal phrase pairs and call the other phrase pairs non-maximal. Nonmaximal phrase pairs always overlap non-trivially with some other phrase pairs while maximal phrase pairs do not, and it can be shown that any nonmaximal phrase pair can be generated by a sequence of maximal phrase pairs. Note that the largest tight phrase pair that includes all alignment links in A is also a maximal phrase pair. 5 The phrase pairs can be sequentially ordered first by the boundary positions of the source-side phrase and then by the boundary positions of the target-side phrase.",
"cite_spans": [
{
"start": 1003,
"end": 1004,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment-based Hierarchy",
"sec_num": "4"
},
{
"text": "give the pen to me . Lemma 1 Given two different maximal phrase pairs p 1 and p 2 , exactly one of the following alternatives is true: p 1 and p 2 are disjoint, p 1 is a sub phrase pair of p 2 , or p 2 is a sub phrase pair of p 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment-based Hierarchy",
"sec_num": "4"
},
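The case analysis behind Lemma 1 can be sketched by treating each tight phrase pair as the set of alignment links it covers (our own toy representation, not the authors' data structure):

```python
def relationship(links1, links2):
    """Classify two tight phrase pairs given as sets of alignment links."""
    if not links1 & links2:
        return "disjoint"
    if links1 <= links2:
        return "sub phrase pair of the second"
    if links2 <= links1:
        return "sub phrase pair of the first"
    # Lemma 1 says this branch is unreachable for two *maximal* phrase pairs.
    return "non-trivial overlap"
```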
{
"text": "A direct outcome of Lemma 1 is that there is an unique decomposition tree T = (N, E) covering all of the tight phrase pairs of a sentence pair, where N is the set of maximal phrase pairs and E is the set of edges that connect between pairs of maximal phrase pairs if one is a sub phrase pair of another. All of the tight phrase pairs of a sentence pair can be extracted directly from the nodes of the decomposition tree (these phrase pairs are maximal), or generated by sequences of consecutive sibling nodes 6 (these phrase pairs are non-maximal). Figure 2 shows the decomposition tree as well as all of the tight phrase pairs that can be extracted from the example sentence pair in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 557,
"text": "Figure 2",
"ref_id": null
},
{
"start": 684,
"end": 692,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Alignment-based Hierarchy",
"sec_num": "4"
},
{
"text": "We focus on the source side of the decomposition tree, and expand it to include all of the non-phrase single words within the scope of the decomposition tree as frontiers and attach each as a child of the lowest node that contains the word. We then abstract the trees nodes with two symbol, X for phrases, and B for non-phrases, and call the result the decomposition tree of the source side phrases. Figure 3 (a) depicts such tree for the English side of our example sentence pair. We further recursively binarize 7 the decomposition tree into a binarized decomposition forest such that all phrases are directly represented as nodes in the forest. Figure 3 (c) shows two of the many binarized decomposition trees in the forest.",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 412,
"text": "Figure 3 (a)",
"ref_id": "FIGREF2"
},
{
"start": 648,
"end": 660,
"text": "Figure 3 (c)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Alignment-based Hierarchy",
"sec_num": "4"
},
{
"text": "The binarized decomposition forest compactly encodes the hierarchical structure among phrases and non-phrases. However, the coarse abstraction of phrases with X and non-phrases with B provides little information on the constraints of the hierarchy. In order to bring in syntactic constraints, we annotate the nodes in the decomposition forest with syntactic observations based on the automatic syntactic parse tree of the source side. If a node aligns with a constituent in the parse tree, we add the syntactic category (e.g., NP) of the constituent as an emitted observation of the node, otherwise, it crosses constituent boundaries and we add a designated crossing category CR as its observation. We call the resulting forest a syntactic decomposition forest. Figure 3 (d) shows two syntactic decomposition trees of the forest based on the parse tree in Figure 3 (b) . We will next describe how to learn finer-grained X and B categories based on the hierarchical syntactic constraints.",
"cite_spans": [],
"ref_spans": [
{
"start": 762,
"end": 774,
"text": "Figure 3 (d)",
"ref_id": "FIGREF2"
},
{
"start": 856,
"end": 868,
"text": "Figure 3 (b)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Alignment-based Hierarchy",
"sec_num": "4"
},
{
"text": "If we designate a unique symbol S as the new root of the syntactic decomposition forests introduced in the previous section, it can be shown that these forests can be generated by a probabilistic contextfree grammar G = (V, \u03a3, S, R, \u03c6), where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "\u2022 V = {S, X, B} is the set of nonterminals,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "\u2022 \u03a3 is the set of terminals comprising treebank categories plus the CR tag (the crossing category),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "\u2022 S \u2208 V is the unique start symbol,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "\u2022 R is the union of the set of production rules each rewriting a nonterminal to a sequence of nonterminals and the set of emission rules each generating a terminal from a nonterminal,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "\u2022 and \u03c6 assigns a probability score to each rule r \u2208 R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "Such a grammar can be derived from the set of syntactic decomposition forests extracted from a source-side parsed parallel corpus, with rule probability scores estimated as the relative frequencies of the production and emission rules. The X and B nonterminals in the grammar are coarse representations of phrase and non-phrases and do not carry any syntactic information at all. In order to introduce syntax to these nonterminals, we incrementally split 8 them into a set of latent categories {X 1 , \u2022 \u2022 \u2022 , X n } for X and another set {B 1 , \u2022 \u2022 \u2022 , B n } for B, and then learn a set of rule probabilities 9 \u03c6 on the latent categories so that the likelihood of the training forests are maximized. The motivation is to let the latent categories learn different preferences of (emitted) syntactic categories as well as structural dependencies along the hierarchy so that they can carry syntactic information. We call them latent syntactic categories. The learned X i 's represent syntactically-induced finer-grained categories of phrases and are used as the set of latent syntactic categories C described in Section 3. In related research, Matsuzaki et al. (2005) and Petrov et al. (2006) introduced latent variables to learn finergrained distinctions of treebank categories for parsing, and used a similar approach to learn finer-grained part-of-speech tags for tagging. Our method is in spirit similar to these approaches.",
"cite_spans": [
{
"start": 1140,
"end": 1163,
"text": "Matsuzaki et al. (2005)",
"ref_id": "BIBREF15"
},
{
"start": 1168,
"end": 1188,
"text": "Petrov et al. (2006)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
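A toy sketch of one split step, in the spirit of the incremental splitting described in footnote 8 (the perturbation scheme and names are our assumptions; for brevity only left-hand sides are split here, whereas a full implementation also splits right-hand-side occurrences):

```python
import random

def split_nonterminal(rules, symbol):
    """rules: dict (lhs, rhs) -> prob. Returns rules over symbol_0 / symbol_1."""
    new_rules = {}
    for (lhs, rhs), p in rules.items():
        if lhs != symbol:
            new_rules[(lhs, rhs)] = p
            continue
        for i in (0, 1):
            eps = random.uniform(-0.01, 0.01)  # break symmetry so EM can diverge
            new_rules[(f"{symbol}_{i}", rhs)] = p * (1 + eps)
    return new_rules
```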
{
"text": "Optimization of grammar parameters to maximize the likelihood of training forests can be achieved by a variant of Expectation-Maximization (EM) algorithm. Recall that our decomposition forests are fully binarized (except the root). In the hypergraph representation (Huang and Chiang, 2005) , the hyperedges of our forests all have the same format 10 (V, W ), U , meaning that node U expands to nodes V and W with production rule U \u2192 V W . Given a forest F with root node R, we denote e(U ) the emitted syntactic category at node U and LR(U ) (or PL(W ), or PR(V )) 11 the set of node pairs (V, W ) (or (U, V ), or (U, W )) such that (V, W ), U is a hyperedge of the forest. Now consider node U , which is either S, X, or B, in the forest. Let U x be the latent syntactic category 12 of node U . We define I(U x ) the part of the forest (includes e(U ) but not U x ) inside U , and O(U x ) the other part of the forest (includes U x but not e(U )) outside U , as illustrated in Figure 3 (d) . The inside-outside probabilities are defined as:",
"cite_spans": [
{
"start": 265,
"end": 289,
"text": "(Huang and Chiang, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 977,
"end": 989,
"text": "Figure 3 (d)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "P IN (U x ) = P (I(U x )|U x ) P OUT (U x ) = P (O(U x )|S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "which can be computed recursively as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "P IN (U x ) = (V,W )\u2208LR(U ) y,z \u03c6(U x \u2192 e(U )) \u00d7\u03c6(U x \u2192 V y W z ) \u00d7P IN (V y )P IN (W z ) P OUT (U x ) = (V,W )\u2208PL(U) y,z \u03c6(V y \u2192 e(V )) \u00d7\u03c6(V y \u2192 W z U x ) \u00d7P OUT (V y )P IN (W z ) + (V,W )\u2208PR(U) y,z \u03c6(V y \u2192 e(V )) \u00d7\u03c6(V y \u2192 U x W z ) \u00d7P OUT (V y )P IN (W z )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
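A sketch of the inside pass implied by this recursion, over a hypothetical forest representation (node.label, node.tag standing in for e(U), and node.hyperedges for the child pairs in LR(U); not the authors' data format):

```python
def inside(node, phi_emit, phi_prod, latents, memo):
    """P_IN for every latent category of `node`, memoized over the forest."""
    if id(node) in memo:
        return memo[id(node)]
    scores = {}
    for x in latents[node.label]:
        emit = phi_emit[(x, node.tag)]       # phi(U_x -> e(U))
        if not node.hyperedges:              # leaf node: emission only
            scores[x] = emit
            continue
        total = 0.0
        for v, w in node.hyperedges:         # each hyperedge <(V, W), U>
            in_v = inside(v, phi_emit, phi_prod, latents, memo)
            in_w = inside(w, phi_emit, phi_prod, latents, memo)
            for y, pv in in_v.items():
                for z, pw in in_w.items():
                    total += emit * phi_prod[(x, y, z)] * pv * pw
        scores[x] = total
    memo[id(node)] = scores
    return scores
```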
{
"text": "In the E-step, the posterior probability of the occurrence of production rule 13 U x \u2192 V y W z is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "P (U x \u2192 V y W z |F ) = \u03c6(U x \u2192 e(U )) \u00d7\u03c6(U x \u2192 V y W z ) \u00d7P OUT (U x )P IN (V y )P IN (W w ) P IN (R)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "In the M-step, the expected counts of rule U x \u2192 V y W z for all latent categories V y and W z are accumulated together and then normalized to obtain an update of the probability estimation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "\u03c6(U x \u2192 V y W z ) = #(U x \u2192 V y W z ) (V ,W ) y,z #(U x \u2192 V y W z )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
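The M-step normalization can be sketched as follows (hypothetical count containers, our own names):

```python
from collections import defaultdict

def m_step(expected_counts):
    """expected_counts: dict (Ux, Vy, Wz) -> #(Ux -> Vy Wz) from the E-step."""
    totals = defaultdict(float)
    for (ux, _vy, _wz), c in expected_counts.items():
        totals[ux] += c
    # phi(Ux -> Vy Wz) = count / sum over all right-hand sides with the same Ux
    return {rule: c / totals[rule[0]] for rule, c in expected_counts.items()}
```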
{
"text": "Recall that each node U labeled as X in a forest is associated with a phrase whose syntax is abstracted by a tag sequence. Once a grammar is learned, for each such node with a corresponding tag sequence ts in forest F , we compute the posterior probability that the latent category of node U being X i as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "P (X i |ts) = P OUT (U i )P IN (U i ) P IN (R)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "This contributes P (X i |ts) evidence that tag sequence ts belongs to a X i category. When all of the evidences are computed and accumulated in #(X i , ts), they can then be normalized to obtain the probability that the latent category of ts is X i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "p ts (X i ) = #(X i , ts) i #(X i , ts)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
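A sketch of this normalization (hypothetical containers; `evidence` maps (i, ts) to the accumulated posterior mass #(X_i, ts)):

```python
from collections import defaultdict

def latent_distribution(evidence):
    """evidence: dict (i, ts) -> accumulated #(X_i, ts). Returns p_ts(X_i)."""
    totals = defaultdict(float)
    for (_i, ts), v in evidence.items():
        totals[ts] += v
    return {(i, ts): v / totals[ts] for (i, ts), v in evidence.items()}
```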
{
"text": "As described in Section 3, the distributions of latent categories are used to compute the syntactic feature vectors for the SCFG rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inducing Latent Syntactic Categories",
"sec_num": "5"
},
{
"text": "We conduct experiments on two tasks, English-to-German and English-to-Chinese, both aimed for speech-to-speech translation. The training data for the English-to-German task is a filtered subset of the Europarl corpus (Koehn, 2005) , containing \u223c300k parallel bitext with \u223c4.5M tokens on each side. The dev and test sets both contain 1k sentences with one reference for each. The training data for the Englishto-Chinese task is collected from transcription and human translation of conversations in travel domain. It consists of \u223c500k parallel bitext with \u223c3M tokens 14 on each side. Both dev and test sets contain \u223c1.3k sentences, each with two references. Both corpora are also preprocessed with punctuation removed and words down-cased to make them suitable for speech translation.",
"cite_spans": [
{
"start": 217,
"end": 230,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The baseline system is our implementation of the hierarchical phrase-based model of Chiang (2007) , and it includes basic features such as rule and lexicalized rule translation probabilities, language model scores, rule counts, etc. We use 4-gram language models in both tasks, and conduct minimumerror-rate training (Och, 2003) to optimize feature weights on the dev set. Our baseline hierarchical model has 8.3M and 9.7M rules for the English-to-German and English-to-Chinese tasks, respectively.",
"cite_spans": [
{
"start": 84,
"end": 97,
"text": "Chiang (2007)",
"ref_id": "BIBREF2"
},
{
"start": 317,
"end": 328,
"text": "(Och, 2003)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The English side of the parallel data is parsed by our implementation of the Berkeley parser trained on the combination of Broadcast News treebank from Ontonotes (Weischedel et al., 2008 ) and a speechified version of the WSJ treebank (Marcus et al., 1999) to achieve higher parsing accuracy (Huang et al., 2010) . Our approach introduces a new syntactic feature and its feature weight is tuned in the same way together with the features in the baseline model. In this study, we induce 16 latent categories for both X and B nonterminals.",
"cite_spans": [
{
"start": 162,
"end": 186,
"text": "(Weischedel et al., 2008",
"ref_id": "BIBREF23"
},
{
"start": 235,
"end": 256,
"text": "(Marcus et al., 1999)",
"ref_id": "BIBREF13"
},
{
"start": 292,
"end": 312,
"text": "(Huang et al., 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Our approach identifies \u223c180k unique tag sequences for the English side of phrase pairs in both tasks. As shown by the examples in Table 2 , the syntactic feature vector representation is able to identify similar and dissimilar tag sequences. For instance, it determines that the sequence of \"DT JJ NN\" is syntactically very similar to \"DT ADJP NN\" while very dissimilar to \"NN CD VP\". Notice that our latent categories are learned automatically to maximize the likelihood of the training forests extracted based on alignment and are not explicitly instructed to discriminate between syntactically different tag sequences. Our approach is not guaranteed to always assign similar feature vectors to syntactically similar tag sequences. However, as the experimental results show below, the latent categories are able to capture some similarities among tag sequences that are beneficial for translation. Table 3 and 4 report the experimental results on the English-to-German and English-to-Chinese tasks, respectively. The addition of the syntax feature achieves a statistically significant improvement (p \u2264 0.01) of 0.6 in BLEU on the test set of the English-to-German task. This improvement is substantial given that only one reference is used for each test sentence. On the English-to-Chinese task, the syntax feature achieves a smaller improvement of 0.41 BLEU on the test set. One potential explanation for the smaller improvement is that the sentences on the English-to-Chinese task are much shorter, with an average of only 6 words per sentence, compared to 15 words in the English-to-German task. The hypothesis space of translating a longer sentence is much larger than that of a shorter sentence. Therefore, there is more potential gain from using syntax features to rule out unlikely derivations of longer sentences, while phrasal rules might be adequate for shorter sentences, leaving less room for syntax to help as in the case of the English-to-Chinese task.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 901,
"end": 908,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The incorporation of the syntactic feature into the hierarchical phrase-based translation system also brings in additional memory load and computational cost. In the worst case, our approach requires storing one feature vector for each tag sequence and one feature vector for each nonterminal of a SCFG rule, with the latter taking the majority of the extra memory storage. We observed that about 90% of the X nonterminals in the rules only have one tag sequence, and thus the required memory space can be significantly reduced by only storing a pointer to the feature vector of the tag sequence for these nonterminals. Our approach also requires computing one dot-product of two feature vectors for each nonterminal when a SCFG rule is applied to a source span.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "Not so similar Very dissimilar This cost can be reduced, however, by caching the dot-products of the tag sequences that are frequently accessed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Very similar",
"sec_num": null
},
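The caching idea can be sketched with a standard-library memoizer (FEATURE is a hypothetical global table from tag sequence to feature vector, our own name):

```python
from functools import lru_cache

FEATURE = {}  # hypothetical table: tag sequence (string) -> feature vector

@lru_cache(maxsize=100_000)
def cached_similarity(ts1, ts2):
    """Memoized dot product; repeated (ts1, ts2) pairs hit the cache."""
    f1, f2 = FEATURE[ts1], FEATURE[ts2]
    return sum(a * b for a, b in zip(f1, f2))
```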
{
"text": "F (ts) \u2022 F (ts ) > 0.9 0.4 \u2264 F (ts) \u2022 F (ts ) \u2264 0.6 F (ts) \u2022 F (ts ) < 0.1 DT JJ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Very similar",
"sec_num": null
},
{
"text": "There are other successful investigations to impose soft syntactic constraints to hierarchical phrase-based models by either introducing syntaxbased rule features such as the prior derivation model of Zhou et al. (2008) or by imposing constraints on translation spans at decoding time, e.g., (Marton and Resnik, 2008; Xiong et al., 2009; Xiong et al., 2010) . These approaches are all orthogonal to ours and it is expected that they can be combined with our approach to achieve greater improvement.",
"cite_spans": [
{
"start": 201,
"end": 219,
"text": "Zhou et al. (2008)",
"ref_id": "BIBREF29"
},
{
"start": 292,
"end": 317,
"text": "(Marton and Resnik, 2008;",
"ref_id": "BIBREF14"
},
{
"start": 318,
"end": 337,
"text": "Xiong et al., 2009;",
"ref_id": "BIBREF25"
},
{
"start": 338,
"end": 357,
"text": "Xiong et al., 2010)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Very similar",
"sec_num": null
},
{
"text": "This work is an initial effort to investigate latent syntactic categories to enhance hierarchical phrasebased translation models, and there are many directions to continue this line of research. First, while the current approach imposes soft syntactic constraints between the parse structure of the source sentence and the SCFG rules used to derive the translation, the real-valued syntactic feature vectors can also be used to impose soft constraints between SCFG rules when rule rewrite occurs. In this case, target side parse trees could also be used alone or together with the source side parse trees to induce the latent syntactic categories. Second, instead of using single parse trees during both training and decoding, our approach is likely to benefit from exploring parse forests as in . Third, in addition to the treebank categories obtained by syntactic parsing, lexical cues directly available in sentence pairs could also to explored to guide the learning of latent categories. Last but not the least, it would be interesting to investigate discriminative training approaches to learn latent categories that directly optimize on translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Very similar",
"sec_num": null
},
{
"text": "We have presented a novel approach to enhance hierarchical phrase-based machine translation systems with real-valued linguistically motivated feature vectors. Our approach maintains the advantages of hierarchical phrase-based translation systems while at the same time naturally incorporates soft syntactic constraints. Experimental results showed that this approach improves the baseline hierarchical phrase-based translation models on both English-to-German and English-to-Chinese tasks. We will continue this line of research and exploit better ways to learn syntax and apply syntactic constraints to machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "In case of a non-tight phrase pair, we only abstract and compare the syntax of the largest tight part.2 There are three X nonterminals (one on the left and two on the right) for binary abstract rules, two for unary abstract rules, and one for phrasal rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other measures such as KL-divergence in the probability space are also feasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A normalized uniform feature vector is used for tag sequences (of parsed test sentences) that are not seen on the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Unaligned words may be added.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The intermediate binarization nodes are also labeled as either X or B based on whether they exactly cover a phrase or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We incrementally split each nonterminal to 2, 4, 8, and finally 16 categories, with each splitting followed by several EM iterations to tune model parameters. We consider 16 an appropriate number for latent categories, not too small to differentiate between different syntactic usages and not too large for the extra computational and storage costs.9 Each binary production rule is now associated with a 3dimensional matrix of probabilities, and each emission rule associated with a 1-dimensional array of probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The hyperedge corresponding to the root node has a different format because it is unary, but it can be handled similarly. When clear from context, we use the same variable to present both a node and its label.11 LR stands for the left and right children, PL for the parent and left children, and PR for the parent and right children.12 We never split the start symbol S, and denote S0 = S.13 The emission rules can be handled similarly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Chinese sentences are automatically segmented into words. However, BLEU scores are computed at character level for tuning and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was done when the first author was visiting IBM T. J. Watson Research Center as a research intern. We would like to thank Mary Harper for lots of insightful discussions and suggestions and the anonymous reviewers for the helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The mathematics of statistical machine translation: parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathemat- ics of statistical machine translation: parameter esti- mation. Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Revisiting T. Uno and M. Yagiura's algorithm",
"authors": [
{
"first": "Minh",
"middle": [],
"last": "Binh",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Bui-Xuan",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Habib",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paul",
"suffix": ""
}
],
"year": 2005,
"venue": "ISAAC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Binh Minh Bui-Xuan, Michel Habib, and Christophe Paul. 2005. Revisiting T. Uno and M. Yagiura's al- gorithm. In ISAAC.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning to translate with source and target syntax",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2010. Learning to translate with source and target syntax. In ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What's in a translation rule",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2004. What's in a translation rule. In HLT/NAACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Finding all common intervals of k permutations",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Heber",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Stoye",
"suffix": ""
}
],
"year": 2001,
"venue": "CPM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Heber and Jens Stoye. 2001. Finding all common intervals of k permutations. In CPM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Better k-best parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In International Workshop on Parsing Tech- nology.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Selftraining PCFG grammars with latent annotations across languages",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang and Mary Harper. 2009. Self- training PCFG grammars with latent annotations across languages. In EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A syntax-directed translator with extended domain of locality",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "CHSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. A syntax-directed translator with extended domain of locality. In CHSLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving a simple bigram hmm partof-speech tagger by latent annotation and self-training",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang, Vladimir Eidelman, and Mary Harper. 2009. Improving a simple bigram hmm part- of-speech tagger by latent annotation and self-training. In NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Self-training with products of latent variable",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2010,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent variable. In EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "MT Summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tree-tostring alignment template for statistical machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to- string alignment template for statistical machine trans- lation. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Treebank-3. Linguistic Data Consortium",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, Mary Ann Marcinkiewicz, and Ann Taylor, 1999. Treebank-3. Linguistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Soft syntactic constraints for hierarchical phrased-based translation",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Probabilistic CFG with latent annotations",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Forest-based translation rule extraction",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi and Liang Huang. 2008. Forest-based transla- tion rule extraction. In EMNLP.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Forestbased translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Liang Huang, and Qun Liu. 2008. Forest- based translation. In ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The align- ment template approach to statistical machine transla- tion. Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and inter- pretable tree annotation. In ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Preference grammars: softening syntactic constraints to improve statistical machine translation",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Venugopal, Andreas Zollmann, Noah A. Smith, and Stephan Vogel. 2009. Preference grammars: soft- ening syntactic constraints to improve statistical ma- chine translation. In NAACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "OntoNotes Release 2.0. Linguistic Data Consortium",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Greenberg",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Belvin",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Houston",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, and Ann Houston, 2008. OntoNotes Release 2.0. Lin- guistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A syntax-driven bracketing model for phrase-based translation",
"authors": [
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deyi Xiong, Min Zhang, Aiti Aw, and Haizhou Li. 2009. A syntax-driven bracketing model for phrase-based translation. In ACL-IJCNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning translation boundaries for phrase-based decoding",
"authors": [
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deyi Xiong, Min Zhang, and Haizhou Li. 2010. Learn- ing translation boundaries for phrase-based decoding. In NAACL-HLT.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Extracting synchronous grammar rules from word-level alignments in linear time",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2008,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang, Daniel Gildea, and David Chiang. 2008. Ex- tracting synchronous grammar rules from word-level alignments in linear time. In COLING.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Prior derivation models for formally syntax-based translation using linguistically syntactic parsing and tree kernels",
"authors": [
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2008,
"venue": "SSST",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bowen Zhou, Bing Xiang, Xiaodan Zhu, and Yuqing Gao. 2008. Prior derivation models for formally syntax-based translation using linguistically syntactic parsing and tree kernels. In SSST.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Syntax augmented machine translation via chart parsing",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In StatMT.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Figure 1(b) highlights the tight phrase pairs in the example sentence pair.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "An example of word-aligned sentence pair (a) with tight phrase pairs marked in a matrix representation (b).",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "(a) decomposition tree for the English side of the example sentence pair with all phrases underlined, (b) automatic parse tree of the English side, (c) two example binarized decomposition trees with syntactic emissions in depicted in (d), where the two dotted curves give an example I(\u2022) and O(\u2022) that separate the forest into two parts.",
"num": null,
"type_str": "figure"
},
"TABREF2": {
"type_str": "table",
"text": "BLEU scores of the English-to-German task (one reference).",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Baseline +Syntax \u2206</td></tr><tr><td>Dev</td><td>46.47</td><td>47.39</td><td>0.92</td></tr><tr><td>Test</td><td>45.45</td><td>45.86</td><td>0.41</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"text": "BLEU scores of the English-to-Chinese task (two references).",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"text": "Examples of similar and dissimilar tag sequences.",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}