{
"paper_id": "N15-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:35:07.657723Z"
},
"title": "Multi-Target Machine Translation with Multi-Synchronous Context-free Grammars",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"addrLine": "Ikoma-shi",
"postCode": "8916-5",
"settlement": "Takayama-cho, Nara",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Philip",
"middle": [],
"last": "Arthur",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"addrLine": "Ikoma-shi",
"postCode": "8916-5",
"settlement": "Takayama-cho, Nara",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"addrLine": "Ikoma-shi",
"postCode": "8916-5",
"settlement": "Takayama-cho, Nara",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a method for simultaneously translating from a single source language to multiple target languages T1, T2, etc. The motivation behind this method is that if we only have a weak language model for T1 and translations in T1 and T2 are associated, we can use the information from a strong language model over T2 to disambiguate the translations in T1, providing better translation results. As a specific framework to realize multi-target translation, we expand the formalism of synchronous context-free grammars to handle multiple targets, and describe methods for rule extraction, scoring, pruning, and search with these models. Experiments find that multi-target translation with a strong language model in a similar second target language can provide gains of up to 0.8-1.5 BLEU points. 1",
"pdf_parse": {
"paper_id": "N15-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a method for simultaneously translating from a single source language to multiple target languages T1, T2, etc. The motivation behind this method is that if we only have a weak language model for T1 and translations in T1 and T2 are associated, we can use the information from a strong language model over T2 to disambiguate the translations in T1, providing better translation results. As a specific framework to realize multi-target translation, we expand the formalism of synchronous context-free grammars to handle multiple targets, and describe methods for rule extraction, scoring, pruning, and search with these models. Experiments find that multi-target translation with a strong language model in a similar second target language can provide gains of up to 0.8-1.5 BLEU points. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In statistical machine translation (SMT), the great majority of work focuses on translation of a single language pair, from the source F to the target E. However, in many actual translation situations, identical documents are translated not from one language to another, but between a large number of different languages. Examples of this abound in commercial translation, and prominent open data sets used widely by the MT community include UN documents in 6 languages (Eisele and Chen, 2010) , European Parliament Proceedings in 21 languages 1 Code and data to replicate the experiments can be found at http://phontron.com/project/naacl2015 Target 2 Target 1 Source Figure 1 : An example of multi-target translation, where a second target language is used to assess the quality of the first target language. (Koehn, 2005) , and video subtitles on TED in as many as 50 languages (Cettolo et al., 2012) .",
"cite_spans": [
{
"start": 470,
"end": 493,
"text": "(Eisele and Chen, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 544,
"end": 545,
"text": "1",
"ref_id": null
},
{
"start": 810,
"end": 823,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 880,
"end": 902,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 668,
"end": 676,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, despite this abundance of multilingual data, there have been few attempts to take advantage of it. One exception is the multi-source SMT method of Och and Ney (2001) , which assumes a situation where we have multiple source sentences, and would like to combine the translations from these sentences to create a better, single target translation.",
"cite_spans": [
{
"start": 156,
"end": 174,
"text": "Och and Ney (2001)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a framework of multitarget SMT. In multi-target translation, we translate F to not a single target E, but to a set of sentences E = E 1 , E 2 , . . . , E |E| in multiple target languages (which we will abbreviate T1, T2, etc.). This, in a way, can be viewed as the automated version of the multi-lingual dissemination of content performed by human translators when creating data for the UN, EuroParl, or TED corpora mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "But what, one might ask, do we expect to gain by generating multiple target sentences at the same time? An illustrative example in Figure 1 shows three potential Chinese T1 translations for an Arabic input sentence. If an English speaker was asked to simply choose one of the Chinese translations, they likely could not decide which is correct. However, if they were additionally given English T2 translations corresponding to each of the Chinese translations, they could easily choose the third as the most natural, even without knowing a word of Chinese.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 139,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Translating this into MT terminology, this is equivalent to generating two corresponding target sentences E 1 and E 2 , and using the naturalness of E 2 to help decide which E 1 to generate. Language models (LMs) are the traditional tool for assessing the naturalness of sentences, and it is widely known that larger and stronger LMs greatly help translation (Brants et al., 2007) . It is easy to think of a situation where we can only create a weak LM for T1, but much more easily create a strong LM for T2. For example, T1 could be an under-resourced language, or a new entrant to the EU or UN.",
"cite_spans": [
{
"start": 359,
"end": 380,
"text": "(Brants et al., 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a concrete method to realize multi-target translation, we build upon Chiang (2007) 's framework of synchronous context free grammars (SCFGs), which we first overview in Section 2. 2 SCFGs are an extension of context-free grammars that define rules that synchronously generate source and target strings F and E. We expand this to a new formalism of multi-synchronous CFGs (MSCFGs, Section 3) that simultaneously generate not just two, but an arbitrary number of strings F, E 1 , E 2 , . . . , E N . We describe how to acquire these from data (Section 4), and how to perform search, including calculation of LM probabilities over multiple target language strings (Section 5).",
"cite_spans": [
{
"start": 72,
"end": 85,
"text": "Chiang (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate the effectiveness of multi-target translation in the context of having a strong T2 LM to help with T1 translation, we perform experiments on translation of United Nations documents (Section 6). These experiments, and our subsequent analysis, show that the framework of multi-target translation can, indeed, provide significant gains in accuracy (of up to 1.5 BLEU points), particularly when the two target languages in question are similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first briefly cover SCFGs, which are widely used in MT, most notably in the framework of hierarchi-2 One could also consider a multi-target formulation of phrase-based translation (Koehn et al., 2003) , but generating multiple targets while considering reordering in phrase-based search is not trivial. We leave this to future work.",
"cite_spans": [
{
"start": 183,
"end": 203,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "(a) SCFG Grammar",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "r 1 : X \u2192 <X 1 of the X 2 , X 1 des X 2 > r 2 : X \u2192 <activity, activit\u00e9s> r 3 : X \u2192 <chambers, chambres> Derivation <X 1 , X 1 > r 1 <X 2 of the X 3 , X 2 des X 3 > <activity of the X 3 , activit\u00e9s des X 3 > r 2 <activity of the chambers, activit\u00e9s des chambres> r 3 (b) MSCFG Grammar r 1 : X \u2192 <X 1 of the X 2 , X 1 des X 2 , X 2 \u7684 X 1 > r 2 : X \u2192 <activity, activit\u00e9s, \u6d3b\u52a8 > r 3 : X \u2192 <chambers, chambres, \u5206\u5ead > Derivation <X 1 , X 1 , X 1 > r 1 <X 2 of the X 3 , X 2 des X 3 , X 3 \u7684 X 2 > <activity of the X 3 , activit\u00e9s des X 3 , X 3 \u7684 \u6d3b\u52a8 > r 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "<activity of the chambers, activit\u00e9s des chambres, \u5206\u5ead \u7684 \u6d3b\u52a8 > r 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "Figure 2: Synchronous grammars and derivations using (a) standard SCFGs and (b) the proposed MSCFGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "cal phrase-based translation (Hiero; Chiang (2007) ). SCFGs are based on synchronous rules defined as tuples of X, \u03b3, and \u03b1",
"cite_spans": [
{
"start": 37,
"end": 50,
"text": "Chiang (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 \u03b3, \u03b1 ,",
"eq_num": "(1)"
}
],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "where X is the head of the rule, and \u03b3 and \u03b1 are strings of terminals and indexed non-terminals on the source and target side of the grammar. Each non-terminal on the right side is indexed, with nonterminals with identical indices corresponding to each-other. For example, a synchronous rule could take the form of 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 X 0 of the X 1 , X 0 des X 1 .",
"eq_num": "(2)"
}
],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "By simply generating from this grammar, it is possible to generate a string in two languages synchronously, as shown in Figure 2 (a). When we are already given a source side sentence and would like to using an SCFG to generate the translation, we find all rules that match the source side and perform search using the CKY+ algorithm (Chappelier et al., 1998) . When we would additionally like to consider an LM, as is standard in SMT, we perform a modified version of CKY+ that approximately explores the search space using a method such as cube pruning (Chiang, 2007) .",
"cite_spans": [
{
"start": 333,
"end": 358,
"text": "(Chappelier et al., 1998)",
"ref_id": "BIBREF3"
},
{
"start": 554,
"end": 568,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synchronous Context-Free Grammars",
"sec_num": "2"
},
{
"text": "In this section, we present the basic formalism that will drive our attempts at multi-target translation. Specifically, we propose a generalization of SCFGs, which we will call multi-synchronous context free grammars (MSCFGs). In an MSCFG, the elementary structures are rewrite rules containing not a source and target, but an arbitrary number M of strings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Synchronous CFGs",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 \u03b7 1 , ..., \u03b7 M ,",
"eq_num": "(3)"
}
],
"section": "Multi-Synchronous CFGs",
"sec_num": "3"
},
{
"text": "where X is the head of the rule and \u03b7 m is a string of terminal and non-terminal symbols. 4 In this paper, for notational convenience, we will use a specialized version of Equation 3 in which we define a single \u03b3 as the source side string, and \u03b1 1 , ...\u03b1 N as an arbitrary number N of target side strings:",
"cite_spans": [
{
"start": 90,
"end": 91,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Synchronous CFGs",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 \u03b3, \u03b1 1 , ..., \u03b1 N .",
"eq_num": "(4)"
}
],
"section": "Multi-Synchronous CFGs",
"sec_num": "3"
},
{
"text": "Therefore, at each derivation step, one non-terminal in \u03b3 is chosen and all the nonterminals with same indices in \u03b1 1 , ..., \u03b1 N will be rewritten using a single rule. Figure 2 (b) gives an example of generating sentences in three languages using MSCFGs. Translation can also be performed by using the CKY+ algorithm to parse the source side, and then generate targets in not one, but multiple languages. It can be noted that this formalism is a relatively simple expansion of standard SCFGs. However, the additional targets require non-trivial modifications to the standard training and search processes, which we discuss in the following sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 176,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Synchronous CFGs",
"sec_num": "3"
},
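As an illustrative sketch of the derivation process just described (not code from the paper; the string-based rule representation and function names are our own simplification), the following Python fragment applies MSCFG rules synchronously across the source and two target streams, reproducing the final steps of Figure 2(b):

```python
# Minimal sketch of a synchronous MSCFG derivation step (illustrative only).
# A rule rewrites one indexed non-terminal, e.g. "X2", in the source string and
# in every target string at the same time, as in Equation (4) / Figure 2(b).

def apply_rule(streams, index, rule):
    """Replace non-terminal X{index} in every stream with the rule's strings."""
    placeholder = "X%d" % index
    return [s.replace(placeholder, r, 1) for s, r in zip(streams, rule)]

# Rules r2 and r3 from Figure 2(b): (source, target 1 = French, target 2 = Chinese).
r2 = ("activity", "activités", "活动")
r3 = ("chambers", "chambres", "分庭")

# Start from the partially expanded tuple produced by applying r1 to <X_1, X_1, X_1>.
streams = ["X2 of the X3", "X2 des X3", "X3 的 X2"]
streams = apply_rule(streams, 2, r2)
streams = apply_rule(streams, 3, r3)
print(streams)  # ['activity of the chambers', 'activités des chambres', '分庭 的 活动']
```

Real decoders store rules with structured non-terminal indices rather than string substitution; the sketch only mirrors how the indexed non-terminals are rewritten in lockstep across all streams.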
{
"text": "This section describes how, given a set of parallel sentences in N languages, we can create translation models (TMs) using MSCFGs. 4 We will also make the restriction that indices are linear and non-deleting, indicating that each non-terminal index present in any of the strings will appear exactly once in all of the strings. Thus, MSCFGs can also be thought of as a subset of the \"generalized multi-text grammars\" of Melamed et al. (2004) .",
"cite_spans": [
{
"start": 131,
"end": 132,
"text": "4",
"ref_id": null
},
{
"start": 434,
"end": 440,
"text": "(2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Multi-Synchronous Grammars",
"sec_num": "4"
},
{
"text": "First, we briefly outline rule extraction for SCFGs in the standard two-language case, as proposed by Chiang (2007) . We first start by preparing two corpora in the source and target language, F and E, and obtaining word alignments for each sentence automatically, using a technique such as the IBM models implemented by GIZA++ (Och and Ney, 2003) .",
"cite_spans": [
{
"start": 102,
"end": 115,
"text": "Chiang (2007)",
"ref_id": "BIBREF4"
},
{
"start": 328,
"end": 347,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SCFG Rule Extraction",
"sec_num": "4.1"
},
{
"text": "We then extract initial phrases for each sentence. Given a source f J 1 , target e I 1 , and alignment A = { i 1 , i 1 , . . . , i |A| , i |A| } where i and i represent indices of aligned words in F and E respectively. First, based on this alignment, we extract all pairs of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SCFG Rule Extraction",
"sec_num": "4.1"
},
{
"text": "phrases BP = { f j 1 i 1 , e j 1 i 1 , . . . , f j |BP | i |BP | , e j |BP | i |BP | }, where f j 1 i 1 is a substring of f J 1 spanning from i 1 to j 1 , and e j 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SCFG Rule Extraction",
"sec_num": "4.1"
},
{
"text": "i 1 is analogous for the target side. The criterion for whether a phrase f j i , e j i can be extracted or not is whether there exists at least one alignment in A that falls within the bounds of both f j i and e j i , and no alignments that fall within the bounds of one, but not the other. It is also common to limit the maximum length of phrases to be less than a constant S (in our experiments, 10). The phrase-extract algorithm of Och (2002) can be used to extract phrases that meet these criteria.",
"cite_spans": [
{
"start": 435,
"end": 445,
"text": "Och (2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SCFG Rule Extraction",
"sec_num": "4.1"
},
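The consistency criterion above can be sketched in a few lines; this is a generic version of the standard phrase-extraction check, not the authors' implementation, and the toy alignment is invented for illustration:

```python
# Sketch of the phrase-pair consistency check used in phrase/rule extraction:
# a source span [i, j] and target span [i2, j2] may be extracted if at least one
# alignment link falls inside both spans and no link falls inside only one of them.

def consistent(alignment, i, j, i2, j2):
    """alignment: set of (src_pos, trg_pos) links; spans are inclusive."""
    has_inside = False
    for (s, t) in alignment:
        src_in = i <= s <= j
        trg_in = i2 <= t <= j2
        if src_in and trg_in:
            has_inside = True
        elif src_in != trg_in:      # link crosses the span boundary
            return False
    return has_inside

# Toy example: "activity of the chambers" / "activités des chambres"
links = {(0, 0), (2, 1), (3, 2)}
print(consistent(links, 0, 0, 0, 0))  # True: "activity" <-> "activités"
print(consistent(links, 0, 1, 0, 0))  # True: unaligned "of" may attach to the phrase
print(consistent(links, 2, 3, 1, 1))  # False: "chambers" is aligned outside the target span
```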
{
"text": "Next, to create synchronous grammar rules, we cycle through the phrases in BP , and extract all potential rules encompassed by this phrase. This is done by finding all sets of 0 or more non-overlapping sub-phrases of initial phrase f j i , e j i , and replacing them by non-terminals to form rules. In addition, it is common to limit the number of non-terminals to two and not allow consecutive non-terminals on the source side to ensure search efficiency, and limit the number of terminals to limit model size (in our experiments, we set this limit to five).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SCFG Rule Extraction",
"sec_num": "4.1"
},
{
"text": "In this section, we generalize the rule extraction process in the previous section to accommodate multiple targets. We do so by first independently creating alignments between the source corpus F, and each of N target corpora {E 1 , . . . , E N }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG Rule Extraction",
"sec_num": "4.2"
},
{
"text": "Given a particular sentence we now have source F , N target strings {E 1 , . . . , E N }, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG Rule Extraction",
"sec_num": "4.2"
},
{
"text": "N alignments {A 1 , . . . , A N }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG Rule Extraction",
"sec_num": "4.2"
},
{
"text": "We next in-dependently extract initial phrases for each of the N languages using the standard bilingual phrase-extract algorithm, yielding initial phrase sets {BP 1 , . . . , BP N }. Finally, we convert these bilingual sets of phrases into a single set of multilingual phrases. This can be done by noting that all source phrases f j i will be associated with a set of 0 or more phrases in each target language. We define the set of multilingual phrases associated with f j i as the cross product of these sets. In other words, if f j i is associated with 2 phrases in T1, and 3 phrases in T2, then there will be a total of 2 * 3 = 6 phrase triples extracted as associated with f j i . 5 Once we have extracted multilingual phrases, the remaining creation of rules is essentially the same as the bilingual case, with sub-phrases being turned into non-terminals for the source and all targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG Rule Extraction",
"sec_num": "4.2"
},
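A minimal sketch of the cross-product combination step, under the simplifying assumption that bilingual phrases are keyed only by their source span (data structures and names are ours, not the paper's):

```python
# Sketch: combine independently extracted bilingual phrase sets into phrase triples.
# Each bilingual set maps a source span (i, j) to the target phrases extracted for it;
# the multilingual phrases for (i, j) are the cross product of the per-language sets.
from itertools import product

def multilingual_phrases(bilingual_sets):
    """bilingual_sets: list of dicts {source_span: [target_phrase, ...]}, one per target."""
    shared_spans = set(bilingual_sets[0])
    for bp in bilingual_sets[1:]:
        shared_spans &= set(bp)
    return {span: list(product(*[bp[span] for bp in bilingual_sets]))
            for span in shared_spans}

bp_t1 = {(0, 3): ["activités des chambres", "les activités des chambres"]}   # 2 options
bp_t2 = {(0, 3): ["分庭 的 活动", "分庭 活动", "各 分庭 的 活动"]}                # 3 options
combos = multilingual_phrases([bp_t1, bp_t2])[(0, 3)]
print(len(combos))  # 2 * 3 = 6 target combinations, i.e. 6 phrase triples with the source
```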
{
"text": "After we have extracted rules, we assign them feature functions. In traditional SCFGs, given a source and target \u03b3 and \u03b1 1 , it is standard to calculate the log forward and backward translation probabilities P (\u03b3|\u03b1 1 ) and P (\u03b1 1 |\u03b3), log forward and backward lexical translation probabilities P lex (\u03b3|\u03b1 1 ) and P lex (\u03b1 1 |\u03b3), a word penalty counting the nonterminals in \u03b1 1 , and a constant phrase penalty of 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Scoring",
"sec_num": "4.3"
},
{
"text": "In our MSCFG formalism, we also add new features regarding the additional targets. Specifically in the case where we have one additional target \u03b1 2 , we add the log translation probabilities P (\u03b3|\u03b1 2 ) and P (\u03b1 2 |\u03b3), log lexical probabilities P lex (\u03b3|\u03b1 2 ) and P lex (\u03b1 2 |\u03b3), and word penalty for \u03b1 2 . In addition, we add log translation probabilities that consider both targets at the same time P (\u03b3|\u03b1 1 , \u03b1 2 ) and P (\u03b1 1 , \u03b1 2 |\u03b3). As a result, compared to the 6-feature set in standard SCFGs, an MSCFG rule with two targets will have 13 features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Scoring",
"sec_num": "4.3"
},
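As a rough sketch of how the added probabilities might be estimated by relative frequency from rule extraction counts (our own illustration; the paper's toolkit may compute or smooth these differently):

```python
# Sketch: relative-frequency estimates for the extra MSCFG translation probabilities.
# `counts` maps (gamma, alpha1, alpha2) rule triples to their extraction counts.
from collections import defaultdict
from math import log

counts = {("chambers", "chambres", "分庭"): 8,
          ("chambers", "chambres", "议庭"): 2,
          ("chambers", "salles",   "分庭"): 2}

def marginal(keep):
    """Sum counts over the rule fields not selected by `keep` (a tuple of field indices)."""
    m = defaultdict(float)
    for rule, c in counts.items():
        m[tuple(rule[i] for i in keep)] += c
    return m

c_gamma  = marginal((0,))       # counts of gamma
c_ga1    = marginal((0, 1))     # counts of (gamma, alpha1)
rule = ("chambers", "chambres", "分庭")
p_a1a2_given_g = counts[rule] / c_gamma[(rule[0],)]              # P(alpha1, alpha2 | gamma)
p_a1_given_g   = c_ga1[(rule[0], rule[1])] / c_gamma[(rule[0],)] # P(alpha1 | gamma)
print(log(p_a1a2_given_g), log(p_a1_given_g))  # log features, roughly -0.41 and -0.18 here
```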
{
"text": "In MT, it is standard practice to limit the number of rules used for any particular source \u03b3 to ensure realistic search times and memory usage. This limit is generally imposed by ordering rules by the phrase probability P (\u03b1 1 |\u03b3) and only using the top few (in our case, 10) for each source \u03b3. However, in the MSCFG case, this is not so simple. As the previous section mentioned, in the two-target MSCFG, we have a total of three probabilities conditioned on \u03b3: P (\u03b1 1 , \u03b1 2 |\u03b3), P (\u03b1 1 |\u03b3), P (\u03b1 2 |\u03b3). As our main motivation for multi-target translation is to use T2 to help translation of T1, we can assume that the final of these three probabilities, which only concerns T2, is of less use. Thus, we propose two ways for pruning the rule table based on the former two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Table Pruning",
"sec_num": "4.4"
},
{
"text": "The first method, which we will call T1+T2, is based on P (\u03b1 1 , \u03b1 2 |\u03b3). The use of this probability is straightforward, as it is possible to simply list the top rules based on this probability. However, this method also has a significant drawback. If we are mainly interested in accurate generation of the T1 sentence, there is a possibility that the addition of the T2 phrase \u03b1 2 will fragment the probabilities for \u03b1 1 . This is particularly true when the source and T1 are similar, while T2 is a very different language. For example, in the case of a source of English, T1 of French, and T2 of Chinese, translations of English to French will have much less variation than translastions of English to Chinese, due to less freedom of translation and higher alignment accuracy between English and French. In this situation, the pruned model will have a variety of translations in T2, but almost no variety in T1, which is not conducive to translating T1 accurately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Table Pruning",
"sec_num": "4.4"
},
{
"text": "As a potential solution to this problem, we also test a T1 method, which is designed to maintain variety of T1 translations for each rule. In order to do so, we first list the top \u03b1 1 candidates based only on the P (\u03b1 1 |\u03b3) probability. Each \u03b1 1 will be associated with one or more \u03b1 2 rule, and thus we choose the \u03b1 2 resulting in the highest joint probability of the two targets P (\u03b1 1 , \u03b1 2 |\u03b3) as the representative rule for \u03b1 1 . This pruning method has the potential advantage of increasing the variety in the T1 translations, but also has the potential disadvantage of artificially reducing genuine variety in T2. We examine which method is more effective in the experiments section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Table Pruning",
"sec_num": "4.4"
},
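The two pruning criteria can be sketched as follows, assuming each candidate for a source side gamma is stored as (alpha1, alpha2, joint probability); this is our own simplification, not the decoder's actual data structures:

```python
# Sketch of the two rule-table pruning criteria for one source side gamma.
# Each candidate is (alpha1, alpha2, p_joint) with p_joint = P(alpha1, alpha2 | gamma).
from collections import defaultdict

def prune_t1_t2(rules, limit=10):
    """T1+T2: keep the top rules by the joint probability P(alpha1, alpha2 | gamma)."""
    return sorted(rules, key=lambda r: -r[2])[:limit]

def prune_t1(rules, limit=10):
    """T1: rank alpha1 by P(alpha1 | gamma); keep its most probable alpha2 as representative."""
    p_a1 = defaultdict(float)
    best = {}
    for a1, a2, p in rules:
        p_a1[a1] += p                          # P(alpha1 | gamma) = sum over alpha2
        if a1 not in best or p > best[a1][2]:
            best[a1] = (a1, a2, p)
    top_a1 = sorted(p_a1, key=lambda a1: -p_a1[a1])[:limit]
    return [best[a1] for a1 in top_a1]

rules = [("activités", "活动", 0.30), ("activités", "活動", 0.25),
         ("les activités", "活动", 0.20), ("travaux", "工作", 0.25)]
print(prune_t1_t2(rules, 2))  # two entries, but both share the same T1 string (fragmentation)
print(prune_t1(rules, 2))     # two distinct T1 strings, each with its best T2 partner
```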
{
"text": "LMs computes the probability P (E) of observing a particular target sentence, and are a fundamental",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
{
"text": "<i, j> <i, j> \u2022 * \u25a0 <i, j> \u25c6 * \u25a0 <i, j> \u2022 * \u25b2 <i, j> \u2022 * \u25b2 (a) Single-target Beam (b) Joint <i, j> <i, j> \u2022 * \u25a0 <i, j> \u2022 * \u25a0 <i, j> \u25c6 * \u25a0 \u25b2 * \u2022 \u25b2 * \u2022 \u25c6 * \u25b2 \u25c6 * \u25b2 <i, j> \u25c6 * \u2022 <i, j> \u25c6 * \u2022 (c) Sequential <i, j> <i, j> \u2022 * \u25a0 <i, j> \u25c6 * \u25a0 <i, j> \u2022 * \u25b2 <i, j> \u2022 * \u25a0 \u25b2* \u2022 <i, j> \u2022 * \u25a0 \u25c6 * \u25b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
{
"text": "<i, j> \u25c6 * \u25a0 \u25c6 * \u25b2 <i, j> \u25c6 * \u25a0 \u25a0 * \u25b2 <i, j> \u2022 * \u25b2 \u25b2 * \u2022 <i, j> \u2022 * \u25b2 \u2665\ufe0e * \u25b2 Figure 3 : State splitting with (a) one LM, (b) two LMs with joint search, and (c) two LMs with sequential search, where T1 and T2 are the first (red) and second (blue) columns respectively. part of both standard SMT systems and the proposed method. Unlike the other features assigned to rules in Section 4.3, LM probabilities are non-local features, and cannot be decomposed over rules. In case of n-gram LMs, this probability is defined as:",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 85,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
{
"text": "P LM (E) = |E|+1 i=1 p(e i |e i\u2212n+1 , . . . , e i\u22122 , e i\u22121 ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
{
"text": "where the probabilities of the next word e i depend on the previous n \u2212 1 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
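Equation 5 is the usual n-gram factorization; a toy sketch with n = 2 and hypothetical, unsmoothed probabilities:

```python
# Toy sketch of Equation (5): an n-gram LM scores a sentence as the product of
# conditional probabilities of each word given the previous n-1 words (here n=2).
from math import log, exp

bigram_p = {("<s>", "activity"): 0.1, ("activity", "of"): 0.4,
            ("of", "the"): 0.5, ("the", "chambers"): 0.05, ("chambers", "</s>"): 0.3}

def lm_logprob(words, table, unk=1e-7):
    padded = ["<s>"] + words + ["</s>"]
    return sum(log(table.get((padded[i - 1], padded[i]), unk))
               for i in range(1, len(padded)))

print(exp(lm_logprob("activity of the chambers".split(), bigram_p)))  # P_LM(E) = 0.0003
```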
{
"text": "When not considering an LM, it is possible to efficiently find the best translation for an input sentence f J 1 using the CKY+ algorithm, which performs dynamic programming remembering the most probable translation rule for each state corresponding to source span f j i . When using an LM, it is further necessary split each state corresponding to f j i to distinguish between not only the span, but also the strings of n \u2212 1 boundary words on the left and right side of the translation hypothesis, as illustrated in Figure 3 (a) . As this expands the search space to an intractable size, this space is further reduced based on a limit on expanded edges (the pop limit), or total states per span (the stack limit), through a procedure such as cube pruning (Chiang, 2007) .",
"cite_spans": [
{
"start": 757,
"end": 771,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 517,
"end": 530,
"text": "Figure 3 (a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
{
"text": "In a multi-target translation situation with one LM for each target, managing the LM state becomes more involved, as we need to keep track of the n \u2212 1 boundary words for both targets. We propose two methods for handling this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
{
"text": "The first method, which we will dub the joint search method, is based on consecutively expanding the LM states of both T1 and T2. As shown in the illustration in Figure 3 (b) , this means that each post-split search state will be annotated with boundary words from both targets. This is a natural and simple expansion of the standard search algorithm, simply using a more complicated representation of the LM state. On the other hand, because the new state space is the cross-product of all sets of boundary words in the two languages, the search space becomes significantly larger, with the side-effect of reducing the diversity of T1 translations for the same beam size. For example, in the figure, it can be seen that despite the fact that 3 hypotheses have been expanded, we only have 2 unique T1 LM states.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 174,
"text": "Figure 3 (b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
{
"text": "Our second method, which we will dub the sequential search method, first expands the state space of T1, then later expands the search space of T2. This procedure can be found in Figure 3 (c) . It can be seen that by first expanding the T1 space we ensure diversity in the T1 search space, then additionally expand the states necessary for scoring with the T2 LM. On the other hand, if the T2 LM is important for creating high-quality translations, it is possible that the first pass of search will be less accurate and prune important hypotheses.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 190,
"text": "Figure 3 (c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Search with Multiple LMs",
"sec_num": "5"
},
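As a rough illustration of the difference between the two search strategies (our own simplification), the state signature used for hypothesis recombination under joint search includes boundary words of both targets, while the first pass of sequential search recombines on T1 boundary words alone:

```python
# Sketch: LM state signatures used for hypothesis recombination (n=3, so 2 boundary words).
def boundary(words, n=3):
    k = n - 1
    return (tuple(words[:k]), tuple(words[-k:]))

def joint_state(span, t1_words, t2_words):
    # Joint search: the state is the cross product of T1 and T2 boundary words,
    # so hypotheses recombine only if both targets share boundaries.
    return (span, boundary(t1_words), boundary(t2_words))

def sequential_state_pass1(span, t1_words):
    # Sequential search: the first pass recombines on T1 boundaries alone,
    # keeping T1 variety before T2 states are expanded in a second pass.
    return (span, boundary(t1_words))

h1 = ("activités des chambres".split(), "分庭 的 活动".split())
h2 = ("activités des chambres".split(), "各 分庭 活动".split())
span = (0, 4)
print(joint_state(span, *h1) == joint_state(span, *h2))   # False: kept as distinct states
print(sequential_state_pass1(span, h1[0]) ==
      sequential_state_pass1(span, h2[0]))                # True: recombined in pass 1
```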
{
"text": "We evaluate the proposed multi-target translation method through translation experiments on the Mul-tiUN corpus (Eisele and Chen, 2010) . We choose this corpus as it contains a large number of parallel documents in Arabic (ar), English (en), Spanish (es), French (fr), Russian (ru), and Chinese (zh), languages with varying degrees of similarity. We use English as our source sentence in all cases, as it is the most common actual source language for UN documents. To prepare the data, we first deduplicate the sentences in the corpus, then hold out 1,500 sentences each for tuning and test. In our basic training setup, we use 100k sentences for training both the TM and the T1 LM. This somewhat small number is to simulate a T1 language that has relatively few resources. For the T2 language, we assume we have a large language model trained on all of the UN data, amounting to 3.5M sentences total.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Eisele and Chen, 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "As a decoder, we use the Travatar (Neubig, 2013) toolkit, and implement all necessary extensions to the decoder and rule extraction code to allow for multiple targets. Unless otherwise specified, we use joint search with a pop limit of 2,000, and T1 rule pruning with a limit of 10 rules per source rule. BLEU is used for both tuning and evaluating all models. In particular, we tune and evaluate all models based on T1 BLEU, simulating a situation similar to that in the introduction, where we want to use a large LM in T2 to help translation in T1. In order to control for optimizer instability, we follow Clark et al. (2011) 's recommendation of performing tuning 3 times, and reporting the average of the runs along with statistical significance obtained by pairwise bootstrap resampling (Koehn, 2004) .",
"cite_spans": [
{
"start": 34,
"end": 48,
"text": "(Neubig, 2013)",
"ref_id": "BIBREF17"
},
{
"start": 608,
"end": 627,
"text": "Clark et al. (2011)",
"ref_id": "BIBREF5"
},
{
"start": 792,
"end": 805,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "In this section we first perform experiments to investigate the effectiveness of the overall framework of multi-target translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experimental Results",
"sec_num": "6.2"
},
{
"text": "We assess four models, starting with standard single-target SCFGs and moving gradually towards our full MSCFG model: SCFG: A standard SCFG grammar with only the source and T1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experimental Results",
"sec_num": "6.2"
},
{
"text": "SCFG+T2Al: SCFG constrained during rule extraction to only extract rules that also match the T2 alignments. This will help measure the effect, if any, of being limited by T2 alignments in rule extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Experimental Results",
"sec_num": "6.2"
},
{
"text": "The MSCFG, without using the T2 LM. Compared to SCFG+T2Al, this will examine the effect caused by adding T2 rules in scoring (Section 4.3) and pruning (Section 4.4) the rule table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG-T2LM:",
"sec_num": null
},
{
"text": "MSCFG: The full MSCFG model with the T2 LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG-T2LM:",
"sec_num": null
},
{
"text": "The result of experiments using all five languages as T1, and the remaining four languages as T2 for all of these methods is shown in First, looking at the overall results, we can see that MSCFGs with one of the choices of T2 tends to outperform SCFG for all instances of T1. In particular, the gain for the full MSCFG model is large for the cases where the two target languages are French and Spanish, with en-fr/es achieving a gain of 1.46 BLEU points, and en-es/fr achieving a gain of 0.76 BLEU points over the baseline SCFG. This is followed by Arabic, Russian and Chinese, which all saw small gains of less than 0.3 when using Spanish as T2, with no significant difference for Chinese. This result indicates that multi-target MT has the potential to provide gains in T1 accuracy, particularly in cases where the languages involved are similar to each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG-T2LM:",
"sec_num": null
},
{
"text": "It should be noted however, that positive results are sensitive to the languages chosen for T1 and T2, and in the cases involving Russian or Chinese, there is often even a drop in accuracy compared to the baseline SCFG. The reason for this can be seen by examining the results for SCFG+T2Al and MSCFG-T2LM. It can be seen that in the cases where there is an overall decrease in accuracy, this decrease can generally be attributed to a decrease when going from SCFG to SCFG+T2Al (indicating that rule extraction suffers from the additional constraints imposed by T2), or a decrease from SCFG+T2Al to MSCFG-LM2 (indicating that rule extraction suffers from fragmentation of the T1 translations by adding the T2 translation). On the other hand, we can see that in the majority of cases, going from MSCFG-LM2 to MSCFG results in at least a small gain in accuracy, indicating that a T2 LM is generally useful, after discounting any negative effects caused by a change in the rule table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MSCFG-T2LM:",
"sec_num": null
},
{
"text": "In Table 2 we show additional statistics illustrating the effect of adding a second language on the number of rules that can be extracted. From these results, we can see that all languages reduce the number of rules extracted, with the reduction being greater for languages with a larger difference from English and French, providing a convincing explanation for the drop in accuracy observed between these two settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "MSCFG-T2LM:",
"sec_num": null
},
{
"text": "The motivation for multi-target translation stated in the introduction was that information about T2 may give us hints about the appropriate translation in T1. It is also a reasonable assumption that the less information we have about T1, the more valuable the information about T2 may be. To test this hypothesis, we next show results of experiments in which we vary the size of the training data for the T1 LM in intervals from 0 to 3.5M sentences. For T2, we either use no LM (MSCFG-T2LM) or an LM trained on 3.5M sentences (MSCFG). The results for when French is used as T1 are shown in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 591,
"end": 599,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Effect of T1 Language Model Strength",
"sec_num": "6.3"
},
{
"text": "From these results, we can see that this hypothesis is correct. When no T1 LM is used, we generally see some gain in translation accuracy by introducing a strong T2 LM, with the exception of Chinese, which never provides a benefit. When using Spanish as T2, this benefit continues even with a relatively strong T1 LM, with the gap closing after we have 400k sentences of data. For Arabic and Russian, on the other hand, the gap closes rather quickly, with consistent gains only being found up to about 20-40k sentences. In general this indicates that the more informative the T2 LM is in general, the more T1 data will be required before the T2 LM is no longer able to provide additional gains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of T1 Language Model Strength",
"sec_num": "6.3"
},
{
"text": "Next we examine the effect of the rule pruning methods explained in Section 4.4. We set T1 to either French or Chinese, and use either the naive pruning criterion using T1+T2, or the criterion that picks the top translations in T1 along with their most probable T2 translation. Like previous experiments, we use the top 10 rules for any particular F . Results are shown in Table 3 . From these results we can see that in almost all cases, pruning using T1 achieves better results. This indicates the veracity of the observation in Section 4.4 that considering multiple T2 for a particular T1 causes fragmentation of TM probabilities, and that this has a significant effect on translation results. Interestingly, the one exception to this trend is T1 of French and T2 of Spanish, indicating that with sufficiently similar languages, the fragmentation due to the introduction of T2 translations may not be as much of a problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 373,
"end": 380,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effect of Rule Pruning",
"sec_num": "6.4"
},
{
"text": "It should be noted that in this section, we are using the joint search algorithm, and the interaction between search and pruning will be examined more completely in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Rule Pruning",
"sec_num": "6.4"
},
{
"text": "Next we examine the effect of the search algorithms suggested in Section 5. To do so, we perform experiments where we vary the search algorithm (joint or sequential), the TM pruning criterion (T1 or T1+T2), and the pop limit. For sequential search, we set the pop limit of T2 to be 10, as this value did not have a large effect on results. For reference, we also show results when using no T2 LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Search",
"sec_num": "6.5"
},
{
"text": "From the BLEU results shown in Figure 5 , we can see that the best search algorithm depends on the pruning criterion and language pair. 6 In general, when trimming using T1, we achieve better results using joint search, indicating that maintaining T1 variety in the TM is enough to maintain search accuracy. On the other hand, when using the T1+T2 pruned model when T2 is Chinese, sequential search is better. This shows that in cases where there are large amounts of ambiguity introduced by T2, sequential search effectively maintains necessary T1 variety before expanding the T2 search space. As there is no general conclusion, an interesting direction for future work is search algorithms that can combine the advantages of these two approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effect of Search",
"sec_num": "6.5"
},
{
"text": "While there is very little previous work on multitarget translation, there is one line of work by Gonz\u00e1lez and Casacuberta (2006) and P\u00e9rez et al. (2007) , which adapts a WFST-based model to output multiple targets. However, this purely monotonic method is unable to perform non-local reordering, and thus is not applicable most language pairs. It is also motivated by efficiency concerns, as opposed to this work's objective of learning from a T2 language.",
"cite_spans": [
{
"start": 111,
"end": 129,
"text": "Casacuberta (2006)",
"ref_id": "BIBREF8"
},
{
"start": 134,
"end": 153,
"text": "P\u00e9rez et al. (2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Factored machine translation (Koehn and Hoang, 2007) is also an example where an LM over a second stream of factors (for example POS tags, classes, or lemmas) has been shown to increase accuracy. These factors are limited, however, by the strong constraint of being associated with a single word and not allowing reordering, and thus are not applicable to our setting of using multiple languages.",
"cite_spans": [
{
"start": 29,
"end": 52,
"text": "(Koehn and Hoang, 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "There has also been work on using multiple languages to improve the quality of extracted translation lexicons or topic models (Mausam et al., 2009; Baldwin et al., 2010; Mimno et al., 2009) . These are not concerned with multi-target translation, but may provide us with useful hints about how to generate more effective multi-target translation models.",
"cite_spans": [
{
"start": 126,
"end": 147,
"text": "(Mausam et al., 2009;",
"ref_id": "BIBREF14"
},
{
"start": 148,
"end": 169,
"text": "Baldwin et al., 2010;",
"ref_id": "BIBREF0"
},
{
"start": 170,
"end": 189,
"text": "Mimno et al., 2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "In this paper, we have proposed a method for multitarget translation using a generalization of SCFGs, and proposed methods to learn and perform search over the models. In experiments, we found that these models are effective in the case when a strong LM exists in a second target that is highly related to the first target of interest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "As the overall framework of multi-target translation is broad-reaching, there are a still many challenges left for future work, a few of which we outline here. First, the current framework relies on data that is entirely parallel in all languages of interest. Can we relax this constraint and use comparable data, or apply MSCFGs to pivot translation? Second, we are currently performing alignment independently for each target. Can we improve results by considering all languages available (Lardilleux and Lepage, 2009) ? Finally, in this paper we considered the case where we are only interested in T1 accuracy, but optimizing translation accuracy for two or more targets, possibly through the use of multi-metric optimization techniques (Duh et al., 2012) is also an interesting future direction.",
"cite_spans": [
{
"start": 491,
"end": 520,
"text": "(Lardilleux and Lepage, 2009)",
"ref_id": "BIBREF13"
},
{
"start": 740,
"end": 758,
"text": "(Duh et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "It is possible to use symbols other than X (e.g.: N P , V P ) to restrict rule application to follow grammatical structure, but we focus on the case with a single non-terminal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Taking the cross-product here has the potential for combinatorial explosion as more languages are added, but in our current experiments with two target languages this did not cause significant problems, and we took no preventative measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Results for model score, a more direct measure of search errors, were largely similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Taro Watanabe and anonymous reviewers for helpful suggestions. This work was supported in part by JSPS KAKENHI Grant Number 25730136 and the Microsoft CORE project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "PanLex and LEXTRACT: Translating all words of all languages of the world",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Pool",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Colowick",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "37--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin, Jonathan Pool, and Susan Colowick. 2010. PanLex and LEXTRACT: Translating all words of all languages of the world. In Proc. COLING, pages 37-40.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Large language models in machine translation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Ashok",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Popat",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "858--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proc. EMNLP, pages 858- 867.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WIT3: web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. EAMT",
"volume": "",
"issue": "",
"pages": "261--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: web inventory of transcribed and trans- lated talks. In Proc. EAMT, pages 261-268.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A generalized CYK algorithm for parsing stochastic CFG",
"authors": [
{
"first": "Jean-C\u00e9dric",
"middle": [],
"last": "Chappelier",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Rajman",
"suffix": ""
}
],
"year": 1998,
"venue": "TAPD",
"volume": "",
"issue": "",
"pages": "133--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-C\u00e9dric Chappelier, Martin Rajman, et al. 1998. A generalized CYK algorithm for parsing stochastic CFG. In TAPD, pages 133-137.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability",
"authors": [
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "176--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer insta- bility. In Proc. ACL, pages 176-181.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning to translate with multiple objectives",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Xianchao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Duh, Katsuhito Sudoh, Xianchao Wu, Hajime Tsukada, and Masaaki Nagata. 2012. Learning to translate with multiple objectives. In Proc. ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "MultiUN: A multilingual corpus from United Nation documents",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Eisele",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Eisele and Yu Chen. 2010. MultiUN: A mul- tilingual corpus from United Nation documents. In Proc. LREC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-target machine translation using finite-state transducers",
"authors": [
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Teresa",
"middle": [],
"last": "Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of TC-Star Speech to Speech Translation Workshop",
"volume": "",
"issue": "",
"pages": "105--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Teresa Gonz\u00e1lez and Francisco Casacuberta. 2006. Multi-target machine translation using finite-state transducers. In Proc. of TC-Star Speech to Speech Translation Workshop, pages 105-110.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Factored translation models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Hieu Hoang. 2007. Factored transla- tion models. In Proc. EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Phillip",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. HLT",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillip Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. HLT, pages 48-54.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. MT Summit",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. MT Summit, pages 79-86.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Samplingbased multilingual alignment",
"authors": [
{
"first": "Adrien",
"middle": [],
"last": "Lardilleux",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Lepage",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. RANLP",
"volume": "",
"issue": "",
"pages": "214--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrien Lardilleux and Yves Lepage. 2009. Sampling- based multilingual alignment. In Proc. RANLP, pages 214-218.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Compiling a massive, multilingual dictionary via probabilistic inference",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Skinner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bilmes",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "262--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam, Stephen Soderland, Oren Etzioni, Daniel Weld, Michael Skinner, and Jeff Bilmes. 2009. Compiling a massive, multilingual dictionary via probabilistic in- ference. In Proc. ACL, pages 262-270.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generalized multitext grammars",
"authors": [
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Wellington",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "661--668",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed, Giorgio Satta, and Benjamin Welling- ton. 2004. Generalized multitext grammars. In Proc. ACL, pages 661-668.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Polylingual topic models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "880--889",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew McCallum. 2009. Polylingual topic models. In Proc. EMNLP, pages 880-889.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Travatar: A forest-to-string machine translation engine based on tree transducers",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ACL Demo Track",
"volume": "",
"issue": "",
"pages": "91--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig. 2013. Travatar: A forest-to-string ma- chine translation engine based on tree transducers. In Proc. ACL Demo Track, pages 91-96.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Statistical multi-source translation",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. MT Summit",
"volume": "",
"issue": "",
"pages": "253--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2001. Statistical multi-source translation. In Proc. MT Summit, pages 253-258.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Statistical machine translation: from single-word models to alignment templates",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2002. Statistical machine transla- tion: from single-word models to alignment templates. Ph.D. thesis, RWTH Aachen.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Speech-input multitarget machine translation",
"authors": [
{
"first": "Alicia",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
},
{
"first": "M",
"middle": [
"Teresa"
],
"last": "Gonz\u00e1lez",
"suffix": ""
},
{
"first": "M",
"middle": [
"In\u00e9s"
],
"last": "Torres",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. WMT",
"volume": "",
"issue": "",
"pages": "56--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alicia P\u00e9rez, M. Teresa Gonz\u00e1lez, M. In\u00e9s Torres, and Francisco Casacuberta. 2007. Speech-input multi- target machine translation. In Proc. WMT, pages 56- 63.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "BLEU scores for different T1 LM sizes without (-LM2) or with (+LM2) an LM for the second target.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "The impact of search on accuracy. Lines indicate a single LM (1 LM), two LMs with joint search (Joint) or two LMs with sequential search (Seq.) for various pop limits and pruning criteria.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "",
"content": "<table><tr><td>.</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "Differences in rule table sizes for a T1 of French.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"text": "BLEU scores by pruning criterion. Columns indicate T1 (fr or zh) and the pruning criterion (T1+T2 joint probability, or T1 probability plus max T2). Rows indicate T2.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}