{
"paper_id": "C10-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:58:49.013922Z"
},
"title": "Mixture Model-based Minimum Bayes Risk Decoding using Multiple Machine Translation Systems",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present Mixture Model-based Minimum Bayes Risk (MMMBR) decoding, an approach that makes use of multiple SMT systems to improve translation accuracy. Unlike existing MBR decoding methods defined on the basis of single SMT systems, an MMMBR decoder reranks translation outputs in the combined search space of multiple systems using the MBR decision rule and a mixture distribution of component SMT models for translation hypotheses. MMMBR decoding is a general method that is independent of specific SMT models and can be applied to various commonly used search spaces. Experimental results on the NIST Chinese-to-English MT evaluation tasks show that our approach brings significant improvements to single system-based MBR decoding and outperforms a stateof-the-art system combination method. 1",
"pdf_parse": {
"paper_id": "C10-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "We present Mixture Model-based Minimum Bayes Risk (MMMBR) decoding, an approach that makes use of multiple SMT systems to improve translation accuracy. Unlike existing MBR decoding methods defined on the basis of single SMT systems, an MMMBR decoder reranks translation outputs in the combined search space of multiple systems using the MBR decision rule and a mixture distribution of component SMT models for translation hypotheses. MMMBR decoding is a general method that is independent of specific SMT models and can be applied to various commonly used search spaces. Experimental results on the NIST Chinese-to-English MT evaluation tasks show that our approach brings significant improvements to single system-based MBR decoding and outperforms a stateof-the-art system combination method. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Minimum Bayes Risk (MBR) decoding is becoming more and more popular in recent Statistical Machine Translation (SMT) research. This approach requires a second-pass decoding procedure to re-rank translation hypotheses by risk scores computed based on model's distribution. Kumar and Byrne (2004) first introduced MBR decoding to SMT field and developed it on the N-best list translations. Their work has shown that MBR decoding performs better than Maximum a Posteriori (MAP) decoding for different evaluation criteria. After that, many dedi-cated efforts have been made to improve the performances of SMT systems by utilizing MBRinspired methods. Tromble et al. (2008) proposed a linear approximation to BLEU score (log-BLEU) as a new loss function in MBR decoding and extended it from N-best lists to lattices, and Kumar et al. (2009) presented more efficient algorithms for MBR decoding on both lattices and hypergraphs to alleviate the high computational cost problem in Tromble et al.'s work. DeNero et al. (2009) proposed a fast consensus decoding algorithm for MBR for both linear and non-linear similarity measures.",
"cite_spans": [
{
"start": 271,
"end": 293,
"text": "Kumar and Byrne (2004)",
"ref_id": "BIBREF7"
},
{
"start": 646,
"end": 667,
"text": "Tromble et al. (2008)",
"ref_id": "BIBREF17"
},
{
"start": 815,
"end": 834,
"text": "Kumar et al. (2009)",
"ref_id": "BIBREF8"
},
{
"start": 973,
"end": 1016,
"text": "Tromble et al.'s work. DeNero et al. (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All work mentioned above share a common setting: an MBR decoder is built based on one and only one MAP decoder. On the other hand, recent research has shown that substantial improvements can be achieved by utilizing consensus statistics over multiple SMT systems (Rosti et al., 2007; Li et al., 2009a; Li et al., 2009b; Liu et al., 2009) . It could be desirable to adapt MBR decoding to multiple SMT systems as well.",
"cite_spans": [
{
"start": 251,
"end": 283,
"text": "SMT systems (Rosti et al., 2007;",
"ref_id": null
},
{
"start": 284,
"end": 301,
"text": "Li et al., 2009a;",
"ref_id": "BIBREF9"
},
{
"start": 302,
"end": 319,
"text": "Li et al., 2009b;",
"ref_id": "BIBREF11"
},
{
"start": 320,
"end": 337,
"text": "Liu et al., 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present Mixture Modelbased Minimum Bayes Risk (MMMBR) decoding, an approach that makes use of multiple SMT systems to improve translation performance. In this work, we can take advantage of a larger search space for hypothesis selection, and employ an improved probability distribution over translation hypotheses based on mixture modeling, which linearly combines distributions of multiple component systems for Bayes risk computation. The key contribution of this paper is the usage of mixture modeling in MBR, which allows multiple SMT models to be involved in and makes the computation of n-gram consensus statistics to be more accurate. Evaluation results have shown that our approach not only brings significant improvements to single system-based MBR decoding but also outperforms a state-ofthe-art word-level system combination method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: In Section 2, we first review traditional MBR decoding method and summarize various search spaces that can be utilized by an MBR decoder. Then, we describe how a mixture model can be used to combine distributions of multiple SMT systems for Bayes risk computation. Lastly, we present detailed MMMBR decoding model on multiple systems and make comparison with single system-based MBR decoding methods. Section 3 describes how to optimize different types of parameters. Experimental results will be shown in Section 4. Section 5 discusses some related work and Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a source sentence , MBR decoding aims to find the translation with the least expected loss under a probability distribution. The objective of an MBR decoder can be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding",
"sec_num": "2.1"
},
{
"text": "( 1)where denotes a search space for hypothesis selection;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding",
"sec_num": "2.1"
},
{
"text": "denotes an evidence space for Bayes risk computation; denotes a function that measures the loss between and ; is the underlying distribution based on . Some of existing work on MBR decoding focused on exploring larger spaces for both and , e.g. from N-best lists to lattices or hypergraphs (Tromble et al., 2008; Kumar et al., 2009) . Various loss functions have also been investigated by using different evaluation criteria for similarity computation, e.g. Word Error Rate, Position-independent Word Error Rate, BLEU and log-BLEU (Kumar and Byrne, 2004; Tromble et al., 2008) . But less attention has been paid to distribution . Currently, many SMT systems based on different paradigms can yield similar performances but are good at modeling different inputs in the translation task (Koehn et al., 2004a; Och et al., 2004; Chiang, 2007; Mi et al., 2008; Huang, 2008) . We expect to integrate the advantages of different SMT models into MBR decoding for further improvements. In particular, we make in-depth investigation into MBR decoding concentrating on the translation distribution by leveraging a mixture model based on multiple SMT systems.",
"cite_spans": [
{
"start": 290,
"end": 312,
"text": "(Tromble et al., 2008;",
"ref_id": "BIBREF17"
},
{
"start": 313,
"end": 332,
"text": "Kumar et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 531,
"end": 554,
"text": "(Kumar and Byrne, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 555,
"end": 576,
"text": "Tromble et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 784,
"end": 805,
"text": "(Koehn et al., 2004a;",
"ref_id": "BIBREF5"
},
{
"start": 806,
"end": 823,
"text": "Och et al., 2004;",
"ref_id": "BIBREF15"
},
{
"start": 824,
"end": 837,
"text": "Chiang, 2007;",
"ref_id": "BIBREF0"
},
{
"start": 838,
"end": 854,
"text": "Mi et al., 2008;",
"ref_id": "BIBREF12"
},
{
"start": 855,
"end": 867,
"text": "Huang, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding",
"sec_num": "2.1"
},
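The MBR rule above is easiest to see in the N-best case. Below is a minimal Python sketch of Equation 1, assuming both the hypothesis space and the evidence space are the same N-best list and that a sentence-level gain (e.g. BLEU) is supplied by the caller; the names mbr_decode, nbest, posteriors and gain are illustrative and not from the paper.

```python
def mbr_decode(nbest, posteriors, gain):
    """Return the hypothesis with the highest expected gain (lowest Bayes risk).

    nbest      -- list of translation strings (hypothesis and evidence space)
    posteriors -- P(E|F) for each entry of nbest, summing to 1
    gain       -- gain(hyp, evidence) -> float, e.g. sentence-level BLEU
    """
    best_hyp, best_score = None, float("-inf")
    for hyp in nbest:
        # Expected gain of 'hyp' under the model distribution P(E|F).
        expected = sum(p * gain(hyp, ev) for ev, p in zip(nbest, posteriors))
        if expected > best_score:
            best_hyp, best_score = hyp, expected
    return best_hyp
```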
{
"text": "There are three major forms of search spaces that can be obtained from an MAP decoder as a byproduct, depending on the design of the decoder: N-best lists, lattices and hypergraphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Translation Search Spaces",
"sec_num": "2.2"
},
{
"text": "An N-best list contains the most probable translation hypotheses produced by a decoder. It only presents a very small portion of the entire search space of an SMT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Translation Search Spaces",
"sec_num": "2.2"
},
{
"text": "A hypergraph is a weighted acyclic graph which compactly encodes an exponential number of translation hypotheses. It allows us to represent both phrase-based and syntax-based systems in a unified framework. Formally, a hypergraph is a pair , where is a set of hypernodes and is a set of hyperedges. Each hypernode corresponds to translation hypotheses with identical decoding states, which usually include the span of the words being translated, the grammar symbol for that span and the left and right boundary words of hypotheses for computing language model (LM) scores. Each hyperedge corresponds to a translation rule and connects a head node and a set of tail nodes . The number of tail nodes is called the arity of the hyperedge and the arity of a hypergraph is the maximum arity of its hyperedges. If the arity of a hyperedge is zero, is then called a source node. Each hypergraph has a unique root node and each path in a hypergraph induces a translation hypothesis. A lattice (Ueffing et al., 2002) can be viewed as a special hypergraph, in which the maximum arity is one.",
"cite_spans": [
{
"start": 985,
"end": 1007,
"text": "(Ueffing et al., 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Translation Search Spaces",
"sec_num": "2.2"
},
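For concreteness, here is a small sketch of the hypergraph structure described above, with hypernodes, hyperedges, per-edge arity and the lattice special case; the field names are illustrative, not a reference implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hyperedge:
    head: int          # index of the head hypernode
    tails: List[int]   # indices of the tail hypernodes; len(tails) is the arity
    rule: str          # translation rule attached to this edge
    weight: float = 1.0

@dataclass
class Hypergraph:
    nodes: List[str] = field(default_factory=list)   # decoding states (span, symbol, boundary words)
    edges: List[Hyperedge] = field(default_factory=list)
    root: int = -1                                   # unique root hypernode

    def arity(self) -> int:
        # The arity of the hypergraph is the maximum arity of its hyperedges;
        # a lattice is the special case where this value is one.
        return max((len(e.tails) for e in self.edges), default=0)
```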
{
"text": "We first describe how to construct a general distribution for translation hypotheses over multiple SMT systems using mixture modeling for usage in MBR decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.3"
},
{
"text": "Mixture modeling is a technique that has been applied to many statistical tasks successfully. For the SMT task in particular, given SMT systems with their corresponding model distributions, a mixture model is defined as a probability distribution over the combined search space of all component systems and computed as a weighted sum of component model distributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.3"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.3"
},
{
"text": "In Equation 2,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.3"
},
{
"text": "are system weights which hold following constraints: and , is the th distribution estimated on the search space based on the log-linear formulation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.3"
},
{
"text": "where is the score function of the th system for translation , is a scaling factor that determines the flatness of the distribution sharp ( ) or smooth ( ). Due to the inherent differences in SMT models, translation hypotheses have different distributions in different systems. A mixture model can effectively combine multiple distributions with tunable system weights. The distribution of a single model used in traditional MBR can be seen as a special mixture model, where is one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.3"
},
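A sketch of Equations 2 and 3 in Python, assuming each component system exposes an unnormalized score for every hypothesis in a shared candidate list; normalization is done per system with its own scaling factor, and the system weights are assumed to sum to one. All names are illustrative.

```python
import math

def component_distribution(scores, gamma):
    # Log-linear distribution of one system: P_k(E|F) proportional to exp(gamma * score).
    # gamma is the scaling factor that makes the distribution sharper or flatter.
    exps = [math.exp(gamma * s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def mixture_distribution(system_scores, gammas, lambdas):
    # Weighted sum of the K component distributions over a shared hypothesis list.
    dists = [component_distribution(s, g) for s, g in zip(system_scores, gammas)]
    return [sum(lam * d[i] for lam, d in zip(lambdas, dists))
            for i in range(len(dists[0]))]
```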
{
"text": "Let denote machine translation systems, denotes the search space produced by system in MAP decoding procedure. An MMMBR decoder aims to seek a translation from the combined search space that maximizes the expected gain score based on a mixture model . We write the objective function of MMMBR decoding as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "( 3)For the gain function , we follow Tromble et al. (2008) to use log-BLEU, which is scored by the hypothesis length and a linear function of n-gram matches as:",
"cite_spans": [
{
"start": 38,
"end": 59,
"text": "Tromble et al. (2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "In this definition, is a reference translation, is the length of hypothesis , is an ngram presented in , is the number of times that occurs in , and is an indicator function which equals to 1 when occurs in and 0 otherwise. are model parameters, where is the maximum order of the n-grams involved. For the mixture model , we replace it by Equation 2 and rewrite the total gain score for hypothesis in Equation 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
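A sketch of the log-BLEU gain described above, written for tokenized strings; the theta values, the simple whitespace tokenization and the helper names are assumptions for illustration, not the paper's implementation.

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def log_bleu_gain(hyp, ref, thetas, max_order=4):
    # G(E, E') = theta_0 * |E'| + sum_n theta_n * sum_w count_w(E') * delta_w(E)
    hyp_toks, ref_toks = hyp.split(), ref.split()
    gain = thetas[0] * len(hyp_toks)
    for n in range(1, max_order + 1):
        ref_ngrams = set(ngrams(ref_toks, n))
        for w in ngrams(hyp_toks, n):
            if w in ref_ngrams:        # delta_w(E) = 1 iff w occurs in the reference
                gain += thetas[n]
    return gain
```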
{
"text": "(4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "In Equation 4, the total gain score on the combined search space can be further decomposed into each local search space with a specified distribution . This is a nice property and it allows us to compute the total gain score as a weighted sum of local gain scores on different search spaces. We expand the local gain score for computed on search space with using log-BLEU as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "We make two approximations for the situations when : the first is and the second is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "In fact, due to the differences in generative capabilities of SMT models, training data selection and various pruning techniques used, search spaces of different systems are always not identical in practice. For the convenience of formal analysis, we treat all as ideal distributions with assumptions that all systems work in similar settings, and translation candidates are shared by all systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "The method for computing n-gram posterior probability in Equation 5 depends on different types of search space : \uf0b7 When is an N-best list, it can be computed immediately by enumerating all translation candidates in the N-best list: \uf0b7 When is a hypergraph (or a lattice) that encodes exponential number of hypotheses, it is often impractical to compute this probability directly. In this paper, we use the algorithm presented in Kumar et al. (2009) which is described in Algorithm 1 2 : counts the edge with n-gram that has the highest edge posterior probability relative to predecessors in the entire graph , and is the edge posterior probability that can be efficiently computed with standard inside and outside probabilities and as:",
"cite_spans": [
{
"start": 428,
"end": 447,
"text": "Kumar et al. (2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "where is the weight of hyperedge in , is the normalization factor that equals to the inside probability of the root node in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "Algorithm 1: Compute n-gram posterior probabilities on hypergraph (Kumar et al., 2009) 1: sort hypernodes topologically 2:",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Kumar et al., 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "compute inside/outside probabilities and for each hypernode 3: compute edge posterior probability for each hyperedge 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "for each hyperedge do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "merge n-grams on and keep the highest probability when n-grams are duplicated 6: apply the rule of edge to n-grams on and propagate gram prefixes/suffixes to 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "for each n-gram introduced by do 8: if then 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "10: else 11: 12: end if 13: end for 14: end for 15: return n-gram posterior probability set 2 We omit the similar algorithm for lattices because of their homogenous structures comparing to hypergraphs as we discussed in Section 2.2.",
"cite_spans": [
{
"start": 92,
"end": 93,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
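The hypergraph case needs the inside/outside machinery of Algorithm 1, but the N-best case mentioned above is short enough to sketch directly: the posterior of an n-gram is the probability mass of the candidates that contain it. The names below are illustrative.

```python
from collections import defaultdict

def ngram_posteriors(nbest, posteriors, max_order=4):
    # p(w | H_k): total probability of the hypotheses in which n-gram w occurs.
    post = defaultdict(float)
    for hyp, p in zip(nbest, posteriors):
        toks = hyp.split()
        seen = set()
        for n in range(1, max_order + 1):
            for i in range(len(toks) - n + 1):
                seen.add(tuple(toks[i:i + n]))
        for w in seen:                 # each n-gram counted once per hypothesis
            post[w] += p
    return dict(post)
```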
{
"text": "Thus, the total gain score for hypothesis on can be further expanded as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "where is a mixture ngram posterior probability. The most important fact derived from Equation 6 is that, the mixture of different distributions can be simplified to the weighted sum of n-gram posterior probabilities on different search spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "We now derive the decision rule of MMMBR decoding based on Equation 6 below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "(7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "We also notice that MAP decoding and MBR decoding are two different ways of estimating the probability and each of them has advantages and disadvantages. It is desirable to interpolate them together when choosing the final translation outputs. So we include each system's MAP decoding cost as an additional feature further and modify Equation 7 to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "\u2032 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "where is the model cost assigned by the MAP decoder for hypothesis . Because the costs of MAP decoding on different SMT models are not directly comparable, we utilize the MERT algorithm to assign an appropriate weight for each component system. Compared to single system-based MBR decoding, which obeys the decision rule below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "MMMBR decoding has a similar objective function (Equation 8). The key difference is that, in MMMBR decoding, n-gram posterior probability is computed as based on an ensemble of search spaces; meanwhile, in single system-based MBR decoding, this quantity is computed locally on single search space . The procedure of MMMBR decoding on multiple SMT systems is described in Algorithm 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "Algorithm 2: MMMBR decoding on multiple SMT systems 1: for each component system do 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "run MAP decoding and generate the corresponding search space 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "compute the n-gram posterior probability set for based on Algorithm 1 4: end for 5 compute the mixture n-gram posterior probability for each : 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "for each unique n-gram appeared in do 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
{
"text": "for each search space do 8 9: end for 10: end for 11: for each hyperedge in do 12: assign to the edge for all contained in 13: end for 14: return the best path according to Equation 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture Model for SMT",
"sec_num": "2.4"
},
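A sketch of the core of Algorithm 2 for the simpler case where the combined search space is a merged N-best list rather than a set of hypergraphs: mix the per-system n-gram posteriors with the system weights, then score each candidate with the log-BLEU-style gain plus a weighted MAP model cost, as in Equation 8. It reuses the illustrative helpers sketched earlier; none of these names come from the paper.

```python
from collections import defaultdict

def mix_ngram_posteriors(per_system_posteriors, lambdas):
    # p(w) = sum_k lambda_k * p(w | H_k)
    mixed = defaultdict(float)
    for post_k, lam in zip(per_system_posteriors, lambdas):
        for w, p in post_k.items():
            mixed[w] += lam * p
    return dict(mixed)

def mmmbr_score(hyp, mixed_post, thetas, map_cost, map_weight, max_order=4):
    # Equation 8: n-gram expected-gain terms plus the weighted MAP decoding cost.
    toks = hyp.split()
    score = thetas[0] * len(toks)
    for n in range(1, max_order + 1):
        for i in range(len(toks) - n + 1):
            score += thetas[n] * mixed_post.get(tuple(toks[i:i + n]), 0.0)
    return score + map_weight * map_cost
```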
{
"text": "In Equation 8, there are two types of parameters: parameters introduced by the gain function and the model cost , and system weights introduced by the mixture model . Because Equation 8 is not a linear function when all parameters are taken into account, MERT algorithm (Och, 2003) cannot be directly applied to optimize them at the same time. Our solution is to employ a two-pass training strategy, in which we optimize parameters for MBR first and then system weights for the mixture model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Two-Pass Parameter Optimization",
"sec_num": "3"
},
{
"text": "The inputs of an MMMBR decoder can be a combination of translation search spaces with arbitrary structures. For the sake of a general and convenience solution for optimization, we utilize the simplest N-best lists with proper sizes as approximations to arbitrary search spaces to optimize MBR parameters using MERT in the first-pass training. System weights can be set empirically based on different performances, or equally without any bias. Note that although we tune MBR parameters on N-best lists, n-gram posterior probabilities used for Bayes risk computation could still be estimated on hypergraphs for non N-best-based search spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimization for MBR",
"sec_num": "3.1"
},
{
"text": "After MBR parameters optimized, we begin to tune system weights for the mixture model in the second-pass training. We rewrite Equation 8 as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimization for Mixture Model",
"sec_num": "3.2"
},
{
"text": "\u2032 For each , the aggregated score surrounded with braces can be seen as its feature value. Equation 9 now turns to be a linear function for all weights and can be optimized by the MERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimization for Mixture Model",
"sec_num": "3.2"
},
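The exact grouping inside Equation 9 is not recoverable from this parse, but the idea of the second pass can be sketched as follows: with the MBR parameters fixed, each component system contributes one aggregated score per candidate, and the objective becomes a linear function of the system weights, which standard MERT can optimize. The feature layout below is therefore only one plausible arrangement, with illustrative names.

```python
def system_features(hyp, per_system_posteriors, thetas, max_order=4):
    # One feature per component system: the n-gram gain of 'hyp' computed with
    # that system's posteriors alone.  Terms that do not depend on the system
    # weights (hypothesis length, MAP costs) can be carried as extra fixed features.
    toks = hyp.split()
    feats = []
    for post_k in per_system_posteriors:
        g = 0.0
        for n in range(1, max_order + 1):
            for i in range(len(toks) - n + 1):
                g += thetas[n] * post_k.get(tuple(toks[i:i + n]), 0.0)
        feats.append(g)
    return feats
```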
{
"text": "We conduct experiments on the NIST Chineseto-English machine translation tasks. We use the newswire portion of the NIST 2006 test set (MT06-nw) as the development set for parameter optimization, and report results on the NIST 2008 test set (MT08). Translation performances are measured in terms of case-insensitive BLEU scores. Statistical significance is computed using the bootstrap re-sampling method proposed by Koehn (2004b) . All bilingual corpora available for the NIST 2008 constrained track of Chinese-to-English machine translation task are used as training data, which contain 5.1M sentence pairs, 128M Chinese words and 147M English words after preprocessing. Word alignments are performed by GIZA++ with an intersect-diag-grow refinement.",
"cite_spans": [
{
"start": 416,
"end": 429,
"text": "Koehn (2004b)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Metric",
"sec_num": "4.1"
},
{
"text": "A 5-gram language model is trained on the English side of all bilingual data plus the Xinhua portion of LDC English Gigaword Version 3.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Metric",
"sec_num": "4.1"
},
{
"text": "We use two baseline systems. The first one (SYS1) is a hierarchical phrase-based system (Chiang, 2007) based on Synchronous Context Free Grammar (SCFG), and the second one (SYS2) is a phrasal system (Xiong et al., 2006) based on Bracketing Transduction Grammar (Wu, 1997) with a lexicalized reordering component based on maximum entropy model. Phrasal rules shared by both systems are extracted on all bilingual data, while hierarchical rules for SYS1 only are extracted on a selected data set, including LDC2003E07, LDC2003E14, LDC2005T06, LDC2005T10, LDC2005E83, LDC2006E26, LDC2006E34, LDC2006E85 and LDC2006E92, which contain about 498,000 sentence pairs. Translation hypergraphs are generated by each baseline system during the MAP decoding phase, and 1000-best lists used for MERT algorithm are extracted from hypergraphs by the k-best parsing algorithm (Huang and Chiang, 2005) . We tune scaling factor to optimize the performance of HyperGraph-based MBR decoding (HGMBR) on MT06-nw for each system (0.5 for SYS1 and 0.01 for SYS2).",
"cite_spans": [
{
"start": 88,
"end": 102,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF0"
},
{
"start": 199,
"end": 219,
"text": "(Xiong et al., 2006)",
"ref_id": "BIBREF20"
},
{
"start": 860,
"end": 884,
"text": "(Huang and Chiang, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4.2"
},
{
"text": "We first present the overall results of MMMBR decoding on two baseline systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MMMBR Results on Multiple Systems",
"sec_num": "4.3"
},
{
"text": "To compare with single system-based MBR methods, we re-implement N-best MBR, which performs MBR decoding on 1000-best lists with the fast consensus decoding algorithm (DeNero et al., 2009) , and HGMBR, which performs MBR decoding on a hypergraph (Kumar et al., 2009) . Both methods use log-BLEU as the loss function. We also compare our method with IHMM Word-Comb, a state-of-the-art word-level system combination approach based on incremental HMM alignment proposed by Li et al. (2009b) . We report results of MMMBR decoding on both N-best lists (N-best MMMBR) and hypergraphs (Hypergraph MMMBR) of two baseline systems. As MBR decoding can be used for any SMT system, we also evaluate MBR-IHMM Word-Comb, which uses N-best lists generated by HGMBR on each baseline systems.",
"cite_spans": [
{
"start": 167,
"end": 188,
"text": "(DeNero et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 246,
"end": 266,
"text": "(Kumar et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 470,
"end": 487,
"text": "Li et al. (2009b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MMMBR Results on Multiple Systems",
"sec_num": "4.3"
},
{
"text": "The default beam size is set to 50 for MAP decoding and hypergraph generation. The setting of N-best candidates used for (MBR-) IHMM Word-Comb is the same as the one used in Li et al. (2009b) . The maximum order of n-grams involved in MBR model is set to 4. Table 2 Table 2 . MMMBR decoding on multiple systems (*: significantly better than HGMBR with ; +: significantly better than IHMM Word-Comb with )",
"cite_spans": [
{
"start": 174,
"end": 191,
"text": "Li et al. (2009b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 2",
"ref_id": null
},
{
"start": 266,
"end": 273,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "MMMBR Results on Multiple Systems",
"sec_num": "4.3"
},
{
"text": "From Table 2 we can see that, compared to MAP decoding, N-best MBR and HGMBR only improve the performance in a relative small range (+0.1~+0.6 BLEU), while MMMBR decoding on multiple systems can yield significant improvements on both dev set (+0.9 BLEU on N-best MMMBR and +1.3 BLEU on Hypergraph MMMBR) and test set (+0.9 BLEU on Nbest MMMBR and +1.4 BLEU on Hypergraph MMMBR); compared to IHMM Word-Comb, N-best MMMBR can achieve comparable results on both dev and test sets, while Hypergraphs MMMBR can achieve even better results (+0.3 BLEU on dev and +0.6 BLEU on test); compared to MBR-IHMM Word-Comb, Hypergraph MMMBR can also obtain comparable results with tiny improvements (+0.1 BLEU on dev and +0.2 BLEU on test). However, MBR-IHMM Word-Comb has ability to generate new hypotheses, while Hypergraph MMMBR only chooses translations from original search spaces.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "MMMBR Results on Multiple Systems",
"sec_num": "4.3"
},
{
"text": "We next evaluate performances of MMMBR decoding on hypergraphs generated by different beam size settings, and compare them to (MBR-) IHMM Word-Comb with the same candidate size and HGMBR with the same beam size. We list the results of MAP decoding for comparison. The comparative results on MT08 are shown in Figure 1 , where X-axis is the size used for all methods each time, Y-axis is the BLEU score, MAP-and HGMBR-stand for MAP decoding and HGMBR decoding for the th system. Figure 1 we can see that, MMMBR decoding performs consistently better than both (MBR-) IHMM Word-Comb and HGMBR on all sizes. The gains achieved are around +0.5 BLEU compared to IHMM Word-Comb, +0.2 BLEU compared to MBR-IHMM Word-Comb, and +0.8 BLEU compared to HGMBR. Compared to MAP decoding, the best result (30.1) is obtained when the size is 100, and the largest improvement (+1.4 BLEU) is obtained when the size is 50. However, we did not observe significant improvement when the size is larger than 50.",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 317,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 478,
"end": 486,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "MMMBR Results on Multiple Systems",
"sec_num": "4.3"
},
{
"text": "We then setup an experiment to verify that the mixture model based on multiple distributions is more effective than any individual distributions for Bayes risk computation in MBR decoding. We use Mix-HGMBR to denote MBR decoding performed on single hypergraph of each system in the meantime using a mixture model upon distributions of two systems for Bayes risk computation. We compare it with HGMBR and Hypergraph MMMBR and list results in Table 3 Table 3 . Performance of MBR decoding on different settings of search spaces and distributions It can be seen that based on the same search space, the performance of Mix-HGMBR is significantly better than that of HGMBR (+0.3/+0.6 BLEU on dev/test). Yet the performance is still not as good as Hypergraph, which indicates the fact that the mixture model and the combination of search spaces are both helpful to MBR decoding, and the best choice is to use them together.",
"cite_spans": [],
"ref_spans": [
{
"start": 441,
"end": 448,
"text": "Table 3",
"ref_id": null
},
{
"start": 449,
"end": 456,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "MMMBR Results on Multiple Systems",
"sec_num": "4.3"
},
{
"text": "We also empirically investigate the impacts of different system weight settings upon the performances of Hypergraph MMMBR on dev set in Figure 2 , where X-axis is the weight for SYS1, Y-axis is the BLEU score. The weight for SYS2 equals to as only two systems involved. The best evaluation result on dev set is achieved when the weight pair is set to 0.7/0.3 for SYS1/SYS2, which is also very close to the one trained automatically by the training strategy presented in Section 3.2. Although this training strategy can be processed repeatedly, the performance is stable after the 1 st round finished. ",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "MMMBR Results on Multiple Systems",
"sec_num": "4.3"
},
{
"text": "Inspired by Macherey and Och (2007), we arrange a similar experiment to test MMMBR decoding for each baseline system on an ensemble of sub-systems built by the following two steps. Firstly, we iteratively apply the following procedure 3 times: at the th time, we randomly sample 80% sentence pairs from the total bilingual data to train a translation model and use it to build a new system based on the same decoder, which is denoted as sub-system-. Table 4 shows the evaluation results of all sub-systems on MT08, where MAP decoding (the former ones) and corresponding HGMBR (the latter ones) are grouped together by a slash. We set all beam sizes to 20 for a time-saving purpose. Table 4 . Performance of sub-systems Secondly, starting from each baseline system, we gradually add one more sub-system each time and perform Hypergraph MMMBR on hypergraphs generated by current involved systems. We can see from Table 5 that, compared to the results of MAP decoding, MMMBR decoding can achieve significant improvements when more than one sub-system are involved; however, compared to the results of HGMBR on baseline systems, there are few changes of performance when the number of sub-systems increases. One potential reason is that the translation hypotheses between multiple sub-systems under the same SMT model hold high degree of correlation, which is discussed in Macherey and Och (2007) .",
"cite_spans": [
{
"start": 1369,
"end": 1392,
"text": "Macherey and Och (2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 450,
"end": 457,
"text": "Table 4",
"ref_id": null
},
{
"start": 682,
"end": 689,
"text": "Table 4",
"ref_id": null
},
{
"start": 911,
"end": 918,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "MMMBR Results on Identical Systems with Different Translation Models",
"sec_num": "4.4"
},
{
"text": "We also evaluate MBR-IHMM Word-Comb on N-best lists generated by each baseline system with its corresponding three sub-systems. Evaluation results are shown in Table 6 , where Hypergraph MMMBR still outperforms MBR-IHMM Word-Comb on both baseline systems. ",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "MMMBR Results on Identical Systems with Different Translation Models",
"sec_num": "4.4"
},
{
"text": "Employing consensus between multiple systems to improve machine translation quality has made rapid progress in recent years. System combination methods based on confusion networks (Rosti et al., 2007; Li et al., 2009b) have shown state-of-the-art performances in MT benchmarks. Different from them, MMMBR decoding method does not generate new translations. It maintains the essential of MBR methods to seek translations from existing search spaces. Hypothesis selection method (Hildebrand and Vogel, 2008) resembles more our method in making use of n-gram statistics. Yet their work does not belong to the MBR framework and treats all systems equally. Li et al. (2009a) presents a codecoding method, in which n-gram agreement and disagreement statistics between translations of multiple decoders are employed to re-rank both full and partial hypotheses during decoding. Liu et al. (2009) proposes a joint-decoding method to combine multiple SMT models into one decoder and integrate translation hypergraphs generated by different models. Both of the last two methods work in a white-box way and need to implement a more complicated decoder to integrate multiple SMT models to work together; meanwhile our method can be conveniently used as a second-pass decoding procedure, without considering any system implementation details.",
"cite_spans": [
{
"start": 180,
"end": 200,
"text": "(Rosti et al., 2007;",
"ref_id": "BIBREF16"
},
{
"start": 201,
"end": 218,
"text": "Li et al., 2009b)",
"ref_id": "BIBREF11"
},
{
"start": 477,
"end": 505,
"text": "(Hildebrand and Vogel, 2008)",
"ref_id": "BIBREF2"
},
{
"start": 652,
"end": 669,
"text": "Li et al. (2009a)",
"ref_id": "BIBREF9"
},
{
"start": 870,
"end": 887,
"text": "Liu et al. (2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we have presented a novel MMMBR decoding approach that makes use of a mixture distribution of multiple SMT systems to improve translation accuracy. Compared to single system-based MBR decoding methods, our method can achieve significant improvements on both dev and test sets. What is more, MMMBR decoding approach also outperforms a state-of-the-art system combination method. We have empirically verified that the success of our method comes from both the mixture modeling of translation hypotheses and the combined search space for translation selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In the future, we will include more SMT systems with more complicated models into our MMMBR decoder and employ more general MERT algorithms on hypergraphs and lattices (Kumar et al., 2009) for parameter optimization.",
"cite_spans": [
{
"start": 168,
"end": 188,
"text": "(Kumar et al., 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "This work has been done while the author was visiting Microsoft Research Asia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hierarchical Phrase Based Translation",
"authors": [
{
"first": "Chiang",
"middle": [],
"last": "David",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang David. 2007. Hierarchical Phrase Based Translation. Computational Linguistics, 33(2): 201-228.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fast Consensus Decoding over Translation Forests",
"authors": [
{
"first": "Denero",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of 47 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "567--575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DeNero John, David Chiang, and Kevin Knight. 2009. Fast Consensus Decoding over Translation Forests. In Proc. of 47 th Meeting of the Associa- tion for Computational Linguistics, pages 567-575.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combination of Machine Translation Systems via Hypothesis Selection from Combined Nbest lists",
"authors": [
{
"first": "Almut",
"middle": [],
"last": "Hildebrand",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Silja",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "254--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hildebrand Almut Silja and Stephan Vogel. 2008. Combination of Machine Translation Systems via Hypothesis Selection from Combined N- best lists. In Proc. of the Association for Machine Translation in the Americas, pages 254-261.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Better k-best Parsing",
"authors": [
{
"first": "Huang",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of 7 th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "53--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang Liang and David Chiang. 2005. Better k-best Parsing. In Proc. of 7 th International Conference on Parsing Technologies, pages 53-64.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Forest Reranking: Discriminative Parsing with Non-Local Features",
"authors": [
{
"first": "Huang",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of 46 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "586--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang Liang. 2008. Forest Reranking: Discrimin- ative Parsing with Non-Local Features. In Proc. of 46 th Meeting of the Association for Com- putational Linguistics, pages 586-594.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Phrase-based Model for SMT",
"authors": [
{
"first": "Koehn",
"middle": [],
"last": "Philipp",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "1",
"pages": "114--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn Philipp. 2004a. Phrase-based Model for SMT. Computational Linguistics, 28(1): 114-133.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical Significance Tests for Machine Translation Evaluation",
"authors": [
{
"first": "Koehn",
"middle": [],
"last": "Philipp",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn Philipp. 2004b. Statistical Significance Tests for Machine Translation Evaluation. In Proc. of Empirical Methods on Natural Language Processing, pages 388-395.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimum Bayes-Risk Decoding for Statistical Machine Translation",
"authors": [
{
"first": "Kumar",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar Shankar and William Byrne. 2004. Minimum Bayes-Risk Decoding for Statistical Machine Translation. In Proc. of the North American Chapter of the Association for Computational Lin- guistics, pages 169-176.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient Minimum Error Rate Training and Minimum Bayes-Risk Decoding for Translation Hypergraphs and Lattices",
"authors": [
{
"first": "Kumar",
"middle": [],
"last": "Shankar",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of 47th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "163--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar Shankar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient Minimum Error Rate Training and Minimum Bayes-Risk De- coding for Translation Hypergraphs and Lat- tices. In Proc. of 47th Meeting of the Association for Computational Linguistics, pages 163-171.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Collaborative Decoding: Partial Hypothesis Re-Ranking Using Translation Consensus between Decoders",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chi-Ho",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of 47 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "585--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Mu, Nan Duan, Dongdong Zhang, Chi-Ho Li, and Ming Zhou. 2009a. Collaborative Decoding: Partial Hypothesis Re-Ranking Using Trans- lation Consensus between Decoders. In Proc. of 47 th Meeting of the Association for Computa- tional Linguistics, pages 585-592.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Joint Decoding with Multiple Translation Models",
"authors": [
{
"first": "Liu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of 47 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "576--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu Yang, Haitao Mi, Yang Feng, and Qun Liu. 2009. Joint Decoding with Multiple Translation Models. In Proc. of 47 th Meeting of the Associa- tion for Computational Linguistics, pages 576-584.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Incremental HMM Alignment for MT system Combination",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Chi-Ho",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yupeng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Xi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of 47 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "949--957",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Chi-Ho, Xiaodong He, Yupeng Liu, and Ning Xi. 2009b. Incremental HMM Alignment for MT system Combination. In Proc. of 47 th Meeting of the Association for Computational Linguistics, pages 949-957.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Forest-Based Translation",
"authors": [
{
"first": "Mi",
"middle": [],
"last": "Haitao",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of 46 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "192--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mi Haitao, Liang Huang, and Qun Liu. 2008. Forest- Based Translation. In Proc. of 46 th Meeting of the Association for Computational Linguistics, pages 192-199.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An Empirical Study on Computing Consensus Translations from multiple Machine Translation Systems",
"authors": [
{
"first": "Macherey",
"middle": [],
"last": "Wolfgang",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "986--995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Macherey Wolfgang and Franz Och. 2007. An Em- pirical Study on Computing Consensus Trans- lations from multiple Machine Translation Systems. In Proc. of Empirical Methods on Natu- ral Language Processing, pages 986-995.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Minimum Error Rate Training in Statistical Machine Translation",
"authors": [
{
"first": "Och",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of 41 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och Franz. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of 41 th Meeting of the Association for Computational Linguistics, pages 160-167.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Alignment template approach to Statistical Machine Translation",
"authors": [
{
"first": "Och",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och Franz and Hermann Ney. 2004. The Alignment template approach to Statistical Machine Translation. Computational Linguistics, 30(4): 417-449.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improved Word-Level System Combination for Machine Translation",
"authors": [
{
"first": "Rosti",
"middle": [],
"last": "Antti-Veikko",
"suffix": ""
},
{
"first": "Spyros",
"middle": [],
"last": "Matsoukas",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of 45 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "312--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosti Antti-Veikko, Spyros Matsoukas, and Richard Schwartz. 2007. Improved Word-Level System Combination for Machine Translation. In Proc. of 45 th Meeting of the Association for Computa- tional Linguistics, pages 312-319.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lattice Minimum Bayes-Risk Decoding for Statistical Machine Translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "620--629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble, Shankar Kumar, Franz Och, and Wolf- gang Macherey. 2008. Lattice Minimum Bayes- Risk Decoding for Statistical Machine Trans- lation. In Proc. of Empirical Methods on Natural Language Processing, pages 620-629.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generation of Word Graphs in Statistical Machine Translation",
"authors": [
{
"first": "Ueffing",
"middle": [],
"last": "Nicola",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "156--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ueffing Nicola, Franz Och, and Hermann Ney. 2002. Generation of Word Graphs in Statistical Ma- chine Translation. In Proc. of Empirical Me- thods on Natural Language Processing, pages 156-163.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora",
"authors": [
{
"first": "",
"middle": [],
"last": "Wu Dekai",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu Dekai. 1997. Stochastic Inversion Transduc- tion Grammars and Bilingual Parsing of Pa- rallel Corpora. Computational Linguistics, 23(3): 377-404.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Maximum Entropy based Phrase Reordering Model for Statistical Machine Translation",
"authors": [
{
"first": "Xiong",
"middle": [],
"last": "Deyi",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of 44 th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "521--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiong Deyi, Qun Liu, and Shouxun Lin. 2006. Max- imum Entropy based Phrase Reordering Model for Statistical Machine Translation. In Proc. of 44 th Meeting of the Association for Com- putational Linguistics, pages 521-528.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "MMMBR vs. (MBR-) IHMM Word-Comb and HGMBR with different sizes From"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Impacts of different system weights in the mixture model"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">gives data statistics.</td></tr><tr><td>Data Set</td><td>#Sentence</td><td>#Word</td></tr><tr><td>MT06-nw (dev)</td><td>616</td><td>17,316</td></tr><tr><td>MT08 (test)</td><td>1,357</td><td>31,600</td></tr><tr><td colspan=\"3\">Table 1. Statistics on dev and test data sets</td></tr></table>",
"html": null,
"text": "",
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">MT08</td></tr><tr><td/><td colspan=\"2\">SYS1 SYS2</td></tr><tr><td>MAP</td><td>28.4</td><td>27.6</td></tr><tr><td>HGMBR</td><td>29.0</td><td>27.8</td></tr><tr><td colspan=\"2\">Hypergraph MMMBR</td><td/></tr><tr><td>+ sub-system-1</td><td>29.1</td><td>27.9</td></tr><tr><td>+ sub-system-2</td><td>29.1</td><td>28.1</td></tr><tr><td>+ sub-system-3</td><td>29.3</td><td>28.3</td></tr><tr><td colspan=\"3\">Table 5. Performance of Hypergraph MMMBR</td></tr><tr><td>on multiple sub-systems</td><td/><td/></tr></table>",
"html": null,
"text": "shows the evaluation results.",
"num": null
}
}
}
}