{
"paper_id": "N13-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:40:20.987391Z"
},
"title": "Applying Pairwise Ranked Optimisation to Improve the Interpolation of Translation Models",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In Statistical Machine Translation we often have to combine different sources of parallel training data to build a good system. One way of doing this is to build separate translation models from each data set and linearly interpolate them, and to date the main method for optimising the interpolation weights is to minimise the model perplexity on a heldout set. In this work, rather than optimising for this indirect measure, we directly optimise for BLEU on the tuning set and show improvements in average performance over two data sets and 8 language pairs.",
"pdf_parse": {
"paper_id": "N13-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "In Statistical Machine Translation we often have to combine different sources of parallel training data to build a good system. One way of doing this is to build separate translation models from each data set and linearly interpolate them, and to date the main method for optimising the interpolation weights is to minimise the model perplexity on a heldout set. In this work, rather than optimising for this indirect measure, we directly optimise for BLEU on the tuning set and show improvements in average performance over two data sets and 8 language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Statistical Machine Translation (SMT) requires large quantities of parallel training data in order to produce high quality translation systems. This training data, however, is often scarce and must be drawn from whatever sources are available. If these data sources differ systematically from each other, and/or from the test data, then the problem of combining these disparate data sets to create the best possible translation system is known as domain adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One approach to domain adaptation is to build separate models for each training domain, then weight them to create a system tuned to the test domain. In SMT, a successful approach to building domain specific language models is to build one from each corpus, then linearly interpolate them, choosing weights that minimise the perplexity on a suitable heldout set of in-domain data. This method has been applied by many authors (e.g. ), and is implemented in popular language modelling tools like IRSTLM (Federico et al., 2008) and SRILM (Stolcke, 2002) .",
"cite_spans": [
{
"start": 502,
"end": 525,
"text": "(Federico et al., 2008)",
"ref_id": "BIBREF8"
},
{
"start": 536,
"end": 551,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
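{
"text": "As an informal illustration (this snippet is ours, not part of the original paper), the perplexity-minimisation recipe for two language models can be sketched in a few lines of Python, assuming we already have the per-token probabilities each model assigns to a heldout set; the function and variable names are hypothetical. \nimport numpy as np\nfrom scipy.optimize import minimize_scalar\n\ndef lm_interpolation_weight(p_in, p_out):\n    # p_in, p_out: per-token probabilities assigned by the in- and out-of-domain\n    # language models to the same heldout text. Minimising the perplexity of the\n    # mixture lam * p_in + (1 - lam) * p_out is the same as minimising its\n    # negative log-likelihood, so a bounded scalar optimiser is enough.\n    p_in, p_out = np.asarray(p_in), np.asarray(p_out)\n    def neg_log_likelihood(lam):\n        return -np.log(lam * p_in + (1.0 - lam) * p_out).sum()\n    result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1.0 - 1e-6), method='bounded')\n    return result.x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},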
{
"text": "Similar interpolation techniques have been developed for translation model interpolation (Foster et al., 2010; Sennrich, 2012) for phrase-based systems but have not been as widely adopted, perhaps because the efficacy of the methods is not as clearcut. In this previous work, the authors used standard phrase extraction heuristics to extract phrases from a heldout set of parallel sentences, then tuned the translation model (i.e. the phrase table) interpolation weights to minimise the perplexity of the interpolated model on this set of extracted phrases.",
"cite_spans": [
{
"start": 89,
"end": 110,
"text": "(Foster et al., 2010;",
"ref_id": "BIBREF10"
},
{
"start": 111,
"end": 126,
"text": "Sennrich, 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we try to improve on this perplexity optimisation of phrase table interpolation weights by addressing two of its shortcomings. The first problem is that the perplexity is not well defined because of the differing coverage of the phrase tables, and their partial coverage of the phrases extracted from the heldout set. Secondly, perplexity may not correlate with the performance of the final SMT system. So, instead of optimising the interpolation weights for the indirect goal of translation model perplexity, we optimise them directly for translation performance. We do this by incorporating these weights into SMT tuning using a modified version of Pairwise Ranked Optimisation (PRO) (Hopkins and May, 2011) .",
"cite_spans": [
{
"start": 701,
"end": 724,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In experiments on two different domain adaptation problems and 8 language pairs, we show that our method achieves comparable or improved performance, when compared to the perplexity minimisation method. This is an encouraging result as it shows that PRO can be adapted to optimise translation parameters other than those in the standard linear model. Table Interpolation Weights",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 370,
"text": "Table Interpolation",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the work of Foster and Kuhn (2007) , linear interpolation weights were derived from different measures of distance between the training corpora, but this was not found to be successful. Optimising the weights to minimise perplexity, as described in the introduction, was found by later authors to be more useful (Foster et al., 2010; Sennrich, 2012) , generally showing small improvements over the default approach of concatenating all training data. An alternative approach is to use log-linear interpolation, so that the interpolation weights can be easily optimised in tuning Bertoldi and Federico, 2009; Banerjee et al., 2011) . However, this effectively multiplies the probabilities across phrase tables, which does not seem appropriate, especially for phrases absent from 1 table.",
"cite_spans": [
{
"start": 15,
"end": 37,
"text": "Foster and Kuhn (2007)",
"ref_id": "BIBREF9"
},
{
"start": 315,
"end": 336,
"text": "(Foster et al., 2010;",
"ref_id": "BIBREF10"
},
{
"start": 337,
"end": 352,
"text": "Sennrich, 2012)",
"ref_id": "BIBREF19"
},
{
"start": 582,
"end": 610,
"text": "Bertoldi and Federico, 2009;",
"ref_id": "BIBREF1"
},
{
"start": 611,
"end": 633,
"text": "Banerjee et al., 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Approaches",
"sec_num": "2.1"
},
{
"text": "The standard SMT model scores translation hypotheses as a linear combination of features. The model score of a hypothesis e is then defined to be w \u2022 h(e, f, a) where w is a weight vector, and h(e, f, a) a vector of feature functions defined over source sentences (f ), hypotheses, and their alignments (a). The weights are normally optimised (tuned) to maximise BLEU on a heldout set (the tuning set).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning SMT Systems",
"sec_num": "2.2"
},
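{
"text": "For concreteness (an illustration of ours, not from the paper), the model score w \u2022 h(e, f, a) is simply a dot product between the weight vector and the feature vector of a hypothesis: \nimport numpy as np\n\ndef model_score(w, h):\n    # w: weight vector; h: feature vector h(e, f, a) for one hypothesis\n    return float(np.dot(w, h))\n\n# e.g. five phrase features plus a language model score and a word penalty\nw = np.array([0.3, 0.2, 0.1, 0.1, 0.05, 0.5, -0.1])\nh = np.array([-2.1, -3.4, -1.2, -2.8, 1.0, -45.7, 23.0])\nprint(model_score(w, h))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning SMT Systems",
"sec_num": "2.2"
},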
{
"text": "The most popular algorithm for this weight optimisation is the line-search based MERT (Och, 2003) , but recently other algorithms that support more features, such as PRO (Hopkins and May, 2011) or MIRA-based algorithms (Watanabe et al., 2007; Chiang et al., 2008; Cherry and Foster, 2012) , have been introduced. All these algorithms assume that the model score is a linear function of the parameters w. However since the phrase table probabilities enter the score function in log form, if these probabilities are a linear interpolation, then the model score is not a linear function of the interpolation weights. We will show that PRO can be used to simultaneously optimise such non-linear parameters.",
"cite_spans": [
{
"start": 86,
"end": 97,
"text": "(Och, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 170,
"end": 193,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF13"
},
{
"start": 219,
"end": 242,
"text": "(Watanabe et al., 2007;",
"ref_id": "BIBREF22"
},
{
"start": 243,
"end": 263,
"text": "Chiang et al., 2008;",
"ref_id": "BIBREF6"
},
{
"start": 264,
"end": 288,
"text": "Cherry and Foster, 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning SMT Systems",
"sec_num": "2.2"
},
{
"text": "PRO is a batch tuning algorithm in the sense that there is an outer loop which repeatedly decodes a small (1000-2000 sentence) tuning set and passes the n-best lists from this tuning set to the core algorithm (also known as the inner loop). The core algorithm samples pairs of hypotheses from the nbest lists (according to a specific procedure), and uses these samples to optimise the weight vector w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
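{
"text": "To make the sampling step concrete, here is a simplified sketch of ours (the parameter values are illustrative, not the exact settings of Hopkins and May): candidate pairs are drawn at random from a sentence's n-best list, pairs whose sentence-BLEU difference is too small are discarded, and only the pairs with the largest differences are kept as training examples. \nimport random\n\ndef sample_pairs(nbest, gain, n_candidates=5000, min_diff=0.05, keep=50):\n    # nbest: list of hypotheses for one sentence; gain: function returning the\n    # smoothed sentence BLEU of a hypothesis. Returns at most `keep` pairs.\n    candidates = []\n    for _ in range(n_candidates):\n        a, b = random.sample(nbest, 2)\n        diff = abs(gain(a) - gain(b))\n        if diff > min_diff:  # ignore pairs that are hard to rank reliably\n            candidates.append((diff, a, b))\n    candidates.sort(key=lambda t: t[0], reverse=True)\n    return [(a, b) for _, a, b in candidates[:keep]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},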
{
"text": "The core algorithm in PRO will now be explained in more detail. Suppose that the N sampled hypothesis pairs (x \u03b1 i , x \u03b2 i ) are indexed by i and have corresponding feature vectors pairs (h \u03b1 i , h \u03b2 i ). If the gain of a given hypothesis (we use smoothed sentence BLEU) is given by the function g(x), then we define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i by y i \u2261 sgn(g(x \u03b1 i ) \u2212 g(x \u03b2 i ))",
"eq_num": "(1)"
}
],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
{
"text": "For weights w, and hypothesis pair (x \u03b1 i , x \u03b2 i ), the (model) score difference \u2206s w i is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2206s w i \u2261 s w (x \u03b1 i ) \u2212 s w (x \u03b2 i ) \u2261 w \u2022 h \u03b1 i \u2212 h \u03b2 i",
"eq_num": "(2)"
}
],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
{
"text": "Then the core PRO algorithm updates the weight vector to w * by solving the following optimisation problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w * = arg max w N i=1 log (\u03c3 (y i \u2206s w i ))",
"eq_num": "(3)"
}
],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
{
"text": "where \u03c3(x) is the standard sigmoid function. The derivative of the function can be computed easily, and the optimisation problem can be solved with standard numerical optimisation algorithms such as L-BFGS (Byrd et al., 1995) . PRO is normally implemented by converting each sample to a training example for a 2 class maximum entropy classifier, with the feature values set to \u2206h i and the responses set to the y i , whereupon the log-likelihood is the objective given in Equation (3). As in maximum entropy modeling, it is usual to add a Gaussian prior to the objective (3) in PRO training.",
"cite_spans": [
{
"start": 206,
"end": 225,
"text": "(Byrd et al., 1995)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},
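{
"text": "Putting Equations (1)-(3) and the Gaussian prior together, the core PRO update can be written as a small numerical optimisation. The following sketch is our own (the array names are hypothetical) and uses the bound-constrained L-BFGS routine from scipy, in line with the reference cited above. \nimport numpy as np\nfrom scipy.optimize import fmin_l_bfgs_b\n\ndef pro_update(delta_h, y, sigma2=1.0):\n    # delta_h: (N, d) array of feature-vector differences h^alpha_i - h^beta_i\n    # y: (N,) array of +/-1 labels from Equation (1)\n    # Maximises sum_i log sigmoid(y_i * w . delta_h_i) - ||w||^2 / (2 * sigma2)\n    # by minimising its negation with L-BFGS.\n    def neg_objective(w):\n        z = y * delta_h.dot(w)\n        obj = np.sum(np.logaddexp(0.0, -z)) + w.dot(w) / (2.0 * sigma2)\n        sig = 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))\n        grad = -delta_h.T.dot(y * (1.0 - sig)) + w / sigma2\n        return obj, grad\n    w0 = np.zeros(delta_h.shape[1])\n    w_opt, _, _ = fmin_l_bfgs_b(neg_objective, w0)\n    return w_opt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Ranked Optimisation",
"sec_num": "2.3"
},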
{
"text": "We now show how to apply the PRO tuning algorithm of the previous subsection to simultaneously optimise the weights of the translation system, and the interpolation weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "In the standard phrase-based model, some of the features are derived from logs of phrase translation probabilities. If the phrase table is actually a linear interpolation of two (or more) phrase tables, then we can consider these features as also being functions of the interpolation weights. The interpolation weights then enter the score differences {\u2206s w i } via the phrase features, and we can jointly optimise the objective in Equation 3for translation model weights and interpolation weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "To make this more concrete, suppose that the feature vector consists of m phrase table features and n \u2212 m other features 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "h \u2261 (log(p 1 ), . . . , log(p m ), h m+1 , . . . h n ) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "where each p j is an interpolation of two probability distributions p j A and p j B . So,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "p j \u2261 \u03bb j p j A +(1\u2212\u03bb j )p j B with 0 \u2264 \u03bb j \u2264 1. Defining \u03bb \u2261 (\u03bb 1 . . . \u03bb m )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": ", the optimisation problem is then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "(w * , \u03bb * ) = arg max (w,\u03bb) N i=1 log \u03c3 y i \u2206s (w,\u03bb) i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "(5) where the sum is over the sampled hypothesis pairs and the \u2206 indicates the difference between the model scores of the two hypotheses in the pair, as before. The model score s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(w,\u03bb) i is given by m j=1 w j \u2022 log \u03bb j p j Ai + (1 \u2212 \u03bb j )p j Bi ) + n j=m+1 w j h j i",
"eq_num": "(6)"
}
],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
{
"text": "where w \u2261 (w i . . . w n ). A Gaussian regularisation term is added to the objective, as it was for PRO. By replacing the core algorithm of PRO with the optimisation above, the interpolation weights can be trained simultaneously with the other model weights. Actually, the above explanation contains a simplification, in that it shows the phrase features interpolated at sentence level. In reality the phrase features are interpolated at the phrase level, then combined to give the sentence level feature value. This makes the definition of the objective more complex than that shown above, but still optimisable using bounded L-BFGS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},
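{
"text": "As a rough illustration of the joint update (our own sketch of the sentence-level simplification in Equations (5) and (6), not the phrase-level implementation in the Moses contrib/promix directory; the array names are hypothetical), the translation model weights and the interpolation weights can be packed into one parameter vector and optimised with bounded L-BFGS, with each \u03bb_j constrained to lie in (0, 1). \nimport numpy as np\nfrom scipy.optimize import fmin_l_bfgs_b\n\ndef promix_update(pA_a, pB_a, other_a, pA_b, pB_b, other_b, y, sigma2=1.0):\n    # For each sampled pair i, the *_a / *_b arrays hold the two hypotheses' data:\n    # pA_*, pB_*: (N, m) phrase probabilities from tables A and B,\n    # other_*: (N, k) remaining feature values, y: (N,) +/-1 labels.\n    N, m = pA_a.shape\n    k = other_a.shape[1]\n\n    def scores(w, lam, pA, pB, other):\n        # interpolate each phrase feature, take logs, then combine linearly\n        return np.log(lam * pA + (1.0 - lam) * pB).dot(w[:m]) + other.dot(w[m:])\n\n    def neg_objective(theta):\n        w, lam = theta[:m + k], theta[m + k:]\n        delta = scores(w, lam, pA_a, pB_a, other_a) - scores(w, lam, pA_b, pB_b, other_b)\n        z = y * delta\n        return np.sum(np.logaddexp(0.0, -z)) + w.dot(w) / (2.0 * sigma2)\n\n    theta0 = np.concatenate([np.zeros(m + k), np.full(m, 0.5)])\n    bounds = [(None, None)] * (m + k) + [(1e-6, 1.0 - 1e-6)] * m\n    theta_opt, _, _ = fmin_l_bfgs_b(neg_objective, theta0, approx_grad=True, bounds=bounds)\n    return theta_opt[:m + k], theta_opt[m + k:]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extending PRO for Mixture Models",
"sec_num": "2.4"
},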
{
"text": "We ran experiments with data from the WMT shared tasks Callison-Burch et al., 2012) , as well as OpenSubtitles data 2 released by the OPUS project (Tiedemann, 2009) .",
"cite_spans": [
{
"start": 55,
"end": 83,
"text": "Callison-Burch et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 147,
"end": 164,
"text": "(Tiedemann, 2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Baselines",
"sec_num": "3.1"
},
{
"text": "The experiments targeted both the newscommentary (nc) and OpenSubtitles (st) domains, with nc-devtest2007 and nc-test2007 for tuning and testing in the nc domain, respectively, and corresponding 2000 sentence tuning and test sets selected from the st data. The newscommentary v7 corpus and a 200k sentence corpus selected from the remaining st data were used as in-domain training data for the respective domains, with europarl v7 (ep) used as out-of-domain training data in both cases. The language pairs we tested were the WMT language pairs for nc (English (en) to and from Spanish (es), German (de), French (fr) and Czech (cs)), with Dutch (nl) substituted for de in the st experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Baselines",
"sec_num": "3.1"
},
{
"text": "To build phrase-based translation systems, we used the standard Moses training pipeline, in particular employing the usual 5 phrase features -forward and backward phrase probabilities, forward and backward lexical scores and a phrase penalty. The 5-gram Kneser-Ney smoothed language models were trained by SRILM (Stolcke, 2002) , with KenLM (Heafield, 2011) used at runtime. The language model is always a linear interpolation of models estimated on the in-and outof-domain corpora, with weights tuned by SRILM's perplexity minimisation 3 . All experiments were run three times with BLEU scores averaged, as recommended by Clark et al. (2011) . Performance was evaluated using case-insensitive BLEU (Papineni et al., 2002) , as implemented in Moses.",
"cite_spans": [
{
"start": 312,
"end": 327,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF20"
},
{
"start": 341,
"end": 357,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 623,
"end": 642,
"text": "Clark et al. (2011)",
"ref_id": "BIBREF7"
},
{
"start": 699,
"end": 722,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Baselines",
"sec_num": "3.1"
},
{
"text": "The baseline systems were tuned using the Moses version of PRO, a reimplementation of the original algorithm using the sampling scheme recommended by Hopkins and May. We ran 15 iterations of PRO, choosing the weights that maximised BLEU on the tuning set. For the PRO training of the interpolated models, we used the same sampling scheme, with optimisation of the model weights and interpolation weights implemented in Python using scipy 4 . The implementation is available in Moses, in the contrib/promix directory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Baselines",
"sec_num": "3.1"
},
{
"text": "The phrase table interpolation and perplexitybased minimisation of interpolation weights used the code accompanying Sennrich (2012) , also available in Moses.",
"cite_spans": [
{
"start": 116,
"end": 131,
"text": "Sennrich (2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus and Baselines",
"sec_num": "3.1"
},
{
"text": "For each of the two test sets (nc and st), we compare four different translation systems (three baseline systems, and our new interpolation method): in Phrase and reordering tables were built from just the in-domain data. joint Phrase and reordering tables were built from the in-and out-of-domain data, concatenated. perp Separate phrase tables built on in-and out-ofdomain data, interpolated using perplexity minimisation. The reordering table is as for joint. pro-mix As perp, but interpolation weights optimised using our modified PRO algorithm. So the two interpolated models (perp and pro-mix) are the same as joint except that their 4 non-constant phrase features are interpolated across the two separate phrase tables. Note that the language models are the same across all four systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "The results of this comparison over the 8 language pairs are shown in Figure 1 , and summarised in Table 1, which shows the mean BLEU change relative to the in system. It can be seen that the pro-mix method presented here is out-performing the perplexity optimisation on the nc data set, and performing similarly on the st data set. joint perp pro-mix nc +0.18 +0.44 +0.91 st -0.04 +0.55 +0.48 Table 1 : Mean BLEU relative to in system for each data set. System names as in Figure 1 . 4 www.scipy.org",
"cite_spans": [
{
"start": 485,
"end": 486,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Figure 1",
"ref_id": null
},
{
"start": 394,
"end": 401,
"text": "Table 1",
"ref_id": null
},
{
"start": 474,
"end": 482,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "The results show that the pro-mix method is a viable way of tuning systems built with interpolated phrase tables, and performs better than the current perplexity minimisation method on one of two data sets used in experiments. On the other data set (st), the out-of-domain data makes much less difference to the system performance in general, most probably because the difference between the in and outof-domain data sets in much larger (Haddow and Koehn, 2012) . Whilst the differences between promix and perplexity minimisation are not large on the nc test set (about +0.5 BLEU) the results have been demonstrated to apply across many language pairs. The advantage of the pro-mix method over other approaches is that it directly optimises the measure that we are interested in, rather than optimising an intermediate measure and hoping that translation performance improves. In this work we optimise for BLEU, but the same method could easily be used to optimise for any sentence-level translation metric. Figure 1: Comparison of the performance (BLEU) on in-domain data, of our pro-mix interpolation weight tuning method with three baselines: in using just in-domain parallel training data training; joint also using europarl data; and perp using perplexity minimisation to interpolate in-domain and europarl data.",
"cite_spans": [
{
"start": 437,
"end": 461,
"text": "(Haddow and Koehn, 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "4"
},
{
"text": "Since the phrase penalty feature is a constant across phrase pairs it is not interpolated, and so is classed with the the \"other\" features. The lexical scores, although not actually probabilities, are interpolated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.opensubtitles.org 3 Our method could also be applied to language model interpolation but we chose to focus on phrase tables in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement 288769 (ACCEPT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Domain Adaptation in Statistical Machine Translation of User-Forum Data using Component Level Mixture Modelling",
"authors": [
{
"first": "Pratyush",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Sudip",
"middle": [
"K"
],
"last": "Naskar",
"suffix": ""
},
{
"first": "Johann",
"middle": [],
"last": "Roturier",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of MT Summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pratyush Banerjee, Sudip K. Naskar, Johann Roturier, Andy Way, and Josef van Genabith. 2011. Domain Adaptation in Statistical Machine Translation of User- Forum Data using Component Level Mixture Mod- elling. In Proceedings of MT Summit.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Domain Adaptation for Statistical Machine Translation from Monolingual Resources",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola Bertoldi and Marcello Federico. 2009. Domain Adaptation for Statistical Machine Translation from Monolingual Resources. In Proceedings of WMT.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A limited memory algorithm for bound constrained optimization",
"authors": [
{
"first": "R",
"middle": [
"H"
],
"last": "Byrd",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1995,
"venue": "SIAM Journal on Scientific and Statistical Computing",
"volume": "16",
"issue": "5",
"pages": "1190--1208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. H. Byrd, P. Lu, and J. Nocedal. 1995. A limited memory algorithm for bound constrained optimiza- tion. SIAM Journal on Scientific and Statistical Com- puting, 16(5):1190-1208.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "(meta-) evaluation of machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Cameron",
"middle": [],
"last": "Fordyce",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "136--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation of machine translation. In Proceedings of the Second Workshop on Statistical Machine Transla- tion, pages 136-158, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Findings of the 2012 Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "10--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 Workshop on Statistical Machine Translation. In Proceedings of the Seventh Work- shop on Statistical Machine Translation, pages 10- 51, Montr\u00e9al, Canada, June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Batch Tuning Strategies for Statistical Machine Translation",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and George Foster. 2012. Batch Tuning Strategies for Statistical Machine Translation. In Pro- ceedings of NAACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Online Large-Margin Training of Syntactic and Structural Translation Features",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online Large-Margin Training of Syntactic and Struc- tural Translation Features. In Proceedings of EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Clark, Chris Dyer, Alon Lavie, and Noah Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "IRSTLM: an Open Source Toolkit for Handling Large Scale Language Models",
"authors": [
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcello Federico, Nicola Bertoldi, and Mauro Cettolo. 2008. IRSTLM: an Open Source Toolkit for Handling Large Scale Language Models. In Proceedings of In- terspeech, Brisbane, Australie.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mixture-model adaptation for SMT",
"authors": [
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Foster and Roland Kuhn. 2007. Mixture-model adaptation for SMT. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 128-135, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation",
"authors": [
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "451--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Foster, Cyril Goutte, and Roland Kuhn. 2010. Discriminative Instance Weighting for Domain Adap- tation in Statistical Machine Translation. In Proceed- ings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 451-459, Cam- bridge, MA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Analysing the Effect of Out-of-Domain Data on SMT Systems",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "422--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barry Haddow and Philipp Koehn. 2012. Analysing the Effect of Out-of-Domain Data on SMT Systems. In Proceedings of the Seventh Workshop on Statisti- cal Machine Translation, pages 422-432, Montr\u00e9al, Canada, June. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "KenLM: Faster and Smaller Language Model Queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and Smaller Language Model Queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197, Edinburgh, Scotland, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1352--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as Ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352-1362, Edinburgh, Scotland, UK., July. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Experiments in Domain Adaptation for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "224--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Josh Schroeder. 2007. Experiments in Domain Adaptation for Statistical Machine Transla- tion. In Proceedings of the Second Workshop on Sta- tistical Machine Translation, pages 224-227, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL Demo Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Pro- ceedings of the ACL Demo Sessions, pages 177-180, Prague, Czech Republic, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Minimum Error Rate Training in Statistical Machine Translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a Method for Automatic Eval- uation of Machine Translation. In Proceedings of 40th",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylva- nia, USA, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Perplexity Minimization for Translation Model Domain Adaptation in Statistical Machine Translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2012. Perplexity Minimization for Trans- lation Model Domain Adaptation in Statistical Ma- chine Translation. In Proceedings of EACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SRILM -An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. Intl. Conf. on Spoken Language Processing",
"volume": "2",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -An Extensible Lan- guage Modeling Toolkit. In Proc. Intl. Conf. on Spo- ken Language Processing, vol. 2, pages 901-904.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "News from OPUS -A Collection of Multilingual Parallel Corpora with Tools and Interfaces",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2009,
"venue": "Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "237--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2009. News from OPUS -A Collection of Multilingual Parallel Corpora with Tools and Inter- faces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Lan- guage Processing (vol V), pages 237-248. John Ben- jamins, Amsterdam/Philadelphia.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Online Large-Margin Training for Statistical Machine Translation",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "764--773",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online Large-Margin Training for Sta- tistical Machine Translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Nat- ural Language Processing and Computational Natu- ral Language Learning (EMNLP-CoNLL), pages 764- 773, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {}
}
}