{
"paper_id": "N12-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:16.843967Z"
},
"title": "Optimized Online Rank Learning for Machine Translation",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0289",
"settlement": "Kyoto",
"country": "JAPAN"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an online learning algorithm for statistical machine translation (SMT) based on stochastic gradient descent (SGD). Under the online setting of rank learning, a corpus-wise loss has to be approximated by a batch local loss when optimizing for evaluation measures that cannot be linearly decomposed into a sentence-wise loss, such as BLEU. We propose a variant of SGD with a larger batch size in which the parameter update in each iteration is further optimized by a passive-aggressive algorithm. Learning is efficiently parallelized and line search is performed in each round when merging parameters across parallel jobs. Experiments on the NIST Chinese-to-English Open MT task indicate significantly better translation results.",
"pdf_parse": {
"paper_id": "N12-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an online learning algorithm for statistical machine translation (SMT) based on stochastic gradient descent (SGD). Under the online setting of rank learning, a corpus-wise loss has to be approximated by a batch local loss when optimizing for evaluation measures that cannot be linearly decomposed into a sentence-wise loss, such as BLEU. We propose a variant of SGD with a larger batch size in which the parameter update in each iteration is further optimized by a passive-aggressive algorithm. Learning is efficiently parallelized and line search is performed in each round when merging parameters across parallel jobs. Experiments on the NIST Chinese-to-English Open MT task indicate significantly better translation results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The advancement of statistical machine translation (SMT) relies on efficient tuning of several or many parameters in a model. One of the standards for such tuning is minimum error rate training (MERT) (Och, 2003) , which directly minimize the loss of translation evaluation measures, i.e. BLEU (Papineni et al., 2002) . MERT has been successfully used in practical applications, although, it is known to be unstable . To overcome this instability, it requires multiple runs from random starting points and directions (Moore and Quirk, 2008) , or a computationally expensive procedure by linear programming and combinatorial optimization (Galley and Quirk, 2011) .",
"cite_spans": [
{
"start": 201,
"end": 212,
"text": "(Och, 2003)",
"ref_id": "BIBREF27"
},
{
"start": 294,
"end": 317,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF28"
},
{
"start": 517,
"end": 540,
"text": "(Moore and Quirk, 2008)",
"ref_id": "BIBREF25"
},
{
"start": 637,
"end": 661,
"text": "(Galley and Quirk, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many alternative methods have been proposed based on the algorithms in machine learning, such as averaged perceptron (Liang et al., 2006) , maximum entropy (Och and Ney, 2002; Blunsom et al., 2008) , Margin Infused Relaxed Algorithm (MIRA) (Watanabe et al., 2007; Chiang et al., 2008b) , or pairwise rank optimization (PRO) (Hopkins and May, 2011) . They primarily differ in the mode of training; online or MERT-like batch, and in their objectives; max-margin (Taskar et al., 2004) , conditional loglikelihood (or softmax loss) (Berger et al., 1996) , risk (Smith and Eisner, 2006; Li and Eisner, 2009) , or ranking (Herbrich et al., 1999) .",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "(Liang et al., 2006)",
"ref_id": "BIBREF23"
},
{
"start": 156,
"end": 175,
"text": "(Och and Ney, 2002;",
"ref_id": "BIBREF26"
},
{
"start": 176,
"end": 197,
"text": "Blunsom et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 240,
"end": 263,
"text": "(Watanabe et al., 2007;",
"ref_id": "BIBREF36"
},
{
"start": 264,
"end": 285,
"text": "Chiang et al., 2008b)",
"ref_id": "BIBREF5"
},
{
"start": 324,
"end": 347,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF15"
},
{
"start": 460,
"end": 481,
"text": "(Taskar et al., 2004)",
"ref_id": "BIBREF32"
},
{
"start": 528,
"end": 549,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 557,
"end": 581,
"text": "(Smith and Eisner, 2006;",
"ref_id": "BIBREF31"
},
{
"start": 582,
"end": 602,
"text": "Li and Eisner, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 616,
"end": 639,
"text": "(Herbrich et al., 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present an online learning algorithm based on stochastic gradient descent (SGD) with a larger batch size (Shalev-Shwartz et al., 2007) . Like Hopkins and May (2011), we optimize ranking in nbest lists, but learn parameters in an online fashion. As proposed by Haddow et al. (2011) , BLEU is approximately computed in the local batch, since BLEU is not linearly decomposed into a sentencewise score (Chiang et al., 2008a) , and optimization for sentence-BLEU does not always achieve optimal parameters for corpus-BLEU. Setting the larger batch size implies the more accurate corpus-BLEU, but at the cost of slower convergence of SGD. Therefore, we propose an optimized update method inspired by the passive-aggressive algorithm (Crammer et al., 2006) , in which each parameter update is further rescaled considering the tradeoff between the amount of updates to the parameters and the ranking loss. Learning is efficiently parallelized by splitting training data among shards and by merging parameters in each round (McDonald et al., 2010) . Instead of simple averaging, we perform an additional line search step to find the optimal merging across parallel jobs.",
"cite_spans": [
{
"start": 108,
"end": 137,
"text": "(Shalev-Shwartz et al., 2007)",
"ref_id": "BIBREF30"
},
{
"start": 263,
"end": 283,
"text": "Haddow et al. (2011)",
"ref_id": "BIBREF13"
},
{
"start": 401,
"end": 423,
"text": "(Chiang et al., 2008a)",
"ref_id": "BIBREF4"
},
{
"start": 730,
"end": 752,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 1018,
"end": 1041,
"text": "(McDonald et al., 2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments were carried out on the NIST 2008 Chinese-to-English Open MT task. We found significant gains over traditional MERT and other tuning algorithms, such as MIRA and PRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "SMT can be formulated as a maximization problem of finding the most likely translation e given an input sentence f using a set of parameters \u03b8 (Brown et al., 1993) ",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00ea = arg max e p(e|f ; \u03b8).",
"eq_num": "(1)"
}
],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "Under this maximization setting, we assume that p(\u2022) is represented by a linear combination of feature functions h(f, e) which are scaled by a set of parameters w (Och and Ney, 2002) e = arg max e w \u22a4 h(f, e).",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
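To make Eq. 2 concrete, the sketch below (an editor's illustration, not from the paper) scores a small n-best list with a linear model and picks the argmax; the feature names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical feature vectors h(f, e) for three candidates of one source sentence:
# [log LM probability, log phrase probability, number of target words].
nbest_features = np.array([
    [-12.3, -4.1, 7.0],
    [-10.8, -5.6, 8.0],
    [-11.5, -3.9, 6.0],
])
w = np.array([1.0, 0.7, -0.2])        # current parameters

scores = nbest_features @ w           # w^T h(f, e) for every candidate
best = int(np.argmax(scores))         # arg max_e w^T h(f, e), Eq. 2
print(best, scores[best])
```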
{
"text": "Each element of h(\u2022) is a feature function which captures different aspects of translations, for instance, log of n-gram language model probability, the number of translated words or log of phrasal probability. In this paper, we concentrate on the problem of learning w, which is referred to as tuning. One of the standard methods for parameter tuning is minimum error rate training (Och, 2003) (MERT) which directly minimizes the task loss \u2113(\u2022), i.e. negative BLEU (Papineni et al., 2002) , given training data D = {(f 1 , e 1 ), ..., (f N , e N )}, sets of paired source sentence f i and its reference translations e \u00ee w = arg min",
"cite_spans": [
{
"start": 383,
"end": 394,
"text": "(Och, 2003)",
"ref_id": "BIBREF27"
},
{
"start": 466,
"end": 489,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "w \u2113( { arg max e w \u22a4 h(f i , e) } N i=1 , { e i } N i=1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "(3) The objective in Equation 3 is discontinuous and non-convex, and it requires decoding of all the training data given w. Therefore, MERT relies on a derivative-free unconstrained optimization method, such as Powell's method, which repeatedly chooses one direction to optimize using a line search procedure as in Algorithm 1. Expensive decoding is approximated by an n-best merging technique in which decoding is carried out in each epoch of iterations t and the maximization in Eq. 3 is approxi-Algorithm 1 MERT 1: Initialize w 1 2: for t = 1, ..., T do \u25b7 Or, until convergence 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "Generate n-bests using w t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "2"
},
{
"text": "Learn new w t+1 by Powell's method 5: end for 6: return w T +1 mated by search over the n-bests merged across iterations. The merged n-bests are also used in the line search procedure to efficiently draw the error surface for efficient computation of the outer minimization of Eq. 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4:",
"sec_num": null
},
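To illustrate the line search over merged n-bests, the following sketch computes, for a single sentence, the piecewise-constant 1-best as a function of the step size gamma along a search direction. It uses a simple pairwise-intersection scan rather than the more efficient upper-envelope construction used in practice, and is an editor's illustration rather than the authors' code.

```python
import numpy as np

def error_surface_segments(H, w, direction):
    """For one sentence, return (gamma_start, winner) pairs describing which hypothesis
    of the merged n-best list is 1-best as the step size gamma varies along `direction`.

    H: (n x d) feature matrix of the merged n-bests; score_i(gamma) = a_i + b_i * gamma.
    """
    a, b = H @ w, H @ direction
    # Candidate breakpoints: pairwise intersections of the score lines.
    gammas = [0.0]
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            if b[i] != b[j]:
                gammas.append(float((a[j] - a[i]) / (b[i] - b[j])))
    gammas = sorted(set(gammas))
    eps = 1e-9
    segments, prev = [], None
    for g in [gammas[0] - 1.0] + [x + eps for x in gammas]:
        winner = int(np.argmax(a + b * g))
        if winner != prev:               # the 1-best changes only at breakpoints
            segments.append((g, winner))
            prev = winner
    return segments
```

Accumulating such segments over all sentences is what lets MERT evaluate corpus-BLEU on every interval of gamma.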
{
"text": "Instead of the direct task loss minimization of Eq. 3, we would like to find w by solving the L 2regularized constrained minimization problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg min w \u03bb 2 \u2225w\u2225 2 2 + \u2113(w; D)",
"eq_num": "(4)"
}
],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "where \u03bb > 0 is a hyperparameter controlling the fitness to the data. The loss function \u2113(\u2022) we consider here is inspired by a pairwise ranking method (Hopkins and May, 2011) in which pairs of correct translation and incorrect translation are sampled from nbests and suffer a hinge loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 M (w; D) \u2211 (f,e)\u2208D \u2211 e * ,e \u2032 max { 0, 1 \u2212 w \u22a4 \u03a6(f, e * , e \u2032 ) }",
"eq_num": "(5)"
}
],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "where e \u2032 \u2208 NBEST(w; f ) \\ ORACLE(w; f, e) e * \u2208 ORACLE(w; f, e) \u03a6(f, e * , e \u2032 ) = h(f, e * ) \u2212 h(f, e \u2032 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "NBEST(\u2022) is the n-best translations of f generated with the parameter w, and ORACLE(\u2022) is a set of oracle translations chosen among NBEST(\u2022). Note that each e \u2032 (and e * ) implicitly represents a derivation consisting of a tuple (e \u2032 , \u03d5), where \u03d5 is a latent structure, i.e. phrases in a phrase-based SMT, but we omit \u03d5 for brevity. M (\u2022) is a normalization constant which is equal to the number of paired loss terms \u03a6(f, e * , e \u2032 ) in Equation 5. Since it is impossible to enumerate all possible translations, we follow the convention of approximating the domain of translation by n-bests. Unlike Hopkins and May (2011), we do not randomly sample from all the pairs in the n-best translations, but extract pairs by selecting one oracle translation and one other translation in the nbests other than those in ORACLE(\u2022). Oracle translations are selected by minimizing the task loss,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
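The pair extraction and hinge loss of Eq. 5 for a single sentence can be written down directly, as in the following sketch; the feature matrix and oracle indices are illustrative placeholders, not values from the paper.

```python
import numpy as np

def pairwise_hinge_loss(w, feats, oracle_idx):
    """Hinge loss of Eq. 5 for one sentence.

    feats: (n x d) matrix whose rows are h(f, e) for the n-best list.
    oracle_idx: indices of the oracle translations within the n-best list.
    Returns the summed loss and the difference vectors Phi of the violated pairs.
    """
    oracle_idx = set(oracle_idx)
    others = [j for j in range(len(feats)) if j not in oracle_idx]
    loss, violated = 0.0, []
    for i in oracle_idx:
        for j in others:
            phi = feats[i] - feats[j]     # Phi(f, e*, e') = h(f, e*) - h(f, e')
            margin = 1.0 - w @ phi
            if margin > 0.0:              # the condition of Eq. 9
                loss += margin
                violated.append(phi)
    return loss, violated

# Tiny usage example with made-up numbers.
feats = np.array([[1.0, 0.0], [0.2, 0.4], [0.1, 0.9]])
print(pairwise_hinge_loss(np.array([0.5, -0.3]), feats, oracle_idx=[0])[0])
```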
{
"text": "\u2113( { e \u2032 \u2208 NBEST(w; f i ) } N i=1 , { e i } N i=1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "i.e. negative BLEU, with respect to a set of reference translations e. In order to compute oracles with corpus-BLEU, we apply a greedy search strategy over n-bests (Venugopal, 2005) . Equation 5 can be easily interpreted as a constant loss \"1\" for choosing a wrong translation under current parameters w, which is in contrast with the direct task-loss used in max-margin approach to structured output learning (Taskar et al., 2004) .",
"cite_spans": [
{
"start": 164,
"end": 181,
"text": "(Venugopal, 2005)",
"ref_id": "BIBREF35"
},
{
"start": 410,
"end": 431,
"text": "(Taskar et al., 2004)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
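A minimal sketch of the greedy oracle selection under a batch-local corpus-BLEU: it accumulates BLEU sufficient statistics over the batch and, per sentence, picks the candidate that maximizes the combined corpus-BLEU. For brevity it assumes tokenized input, a single reference, a single greedy pass, and one oracle per sentence, whereas the paper uses multiple references and allows a set of oracles.

```python
import math
from collections import Counter

def bleu_stats(hyp, ref, max_n=4):
    """Sufficient statistics for corpus-BLEU: clipped n-gram matches, n-gram totals,
    hypothesis length and reference length (single reference for brevity)."""
    stats = []
    for n in range(1, max_n + 1):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        stats.append(sum(min(c, r[g]) for g, c in h.items()))   # clipped matches
        stats.append(max(len(hyp) - n + 1, 0))                  # total n-grams
    stats.extend([len(hyp), len(ref)])
    return stats

def bleu(stats, max_n=4):
    """Corpus-BLEU from summed statistics, with brevity penalty and mild smoothing."""
    logs = []
    for n in range(max_n):
        match, total = stats[2 * n], stats[2 * n + 1]
        logs.append(math.log((match + 1e-9) / (total + 1e-9)))
    hyp_len, ref_len = stats[-2], stats[-1]
    bp = min(0.0, 1.0 - ref_len / max(hyp_len, 1))
    return math.exp(bp + sum(logs) / max_n)

def greedy_oracles(nbests, refs):
    """Pick, per sentence, the candidate that maximizes the batch-local corpus-BLEU
    when its statistics are added to those accumulated so far (single greedy pass)."""
    total = [0] * 10
    oracles = []
    for cands, ref in zip(nbests, refs):
        best_i, best_b, best_s = 0, -1.0, None
        for i, hyp in enumerate(cands):
            s = bleu_stats(hyp, ref)
            b = bleu([a + c for a, c in zip(total, s)])
            if b > best_b:
                best_i, best_b, best_s = i, b, s
        total = [a + c for a, c in zip(total, best_s)]
        oracles.append(best_i)
    return oracles

# Tiny usage example with toy tokenized data.
print(greedy_oracles([[["the", "cat"], ["a", "cat", "sat"]]], [["the", "cat", "sat"]]))
```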
{
"text": "As an alternative, we would also consider a softmax loss (Collins and Koo, 2005) ",
"cite_spans": [
{
"start": 57,
"end": 80,
"text": "(Collins and Koo, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "represented by 1 N \u2211 (f,e)\u2208D \u2212 log Z O (w; f, e) Z N (w; f )",
"eq_num": "(6)"
}
],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "Z O (w; f, e) = \u2211 e * \u2208ORACLE(w;f,e) exp(w \u22a4 f (f, e * )) Z N (w; f ) = \u2211 e \u2032 \u2208NBEST(w;f ) exp(w \u22a4 f (f, e \u2032 )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
{
"text": "Equation 6 is a log-linear model used in common NLP tasks such as tagging, chunking and named entity recognition, but differ slightly in that multiple correct translations are discriminated from the others (Charniak and Johnson, 2005) .",
"cite_spans": [
{
"start": 206,
"end": 234,
"text": "(Charniak and Johnson, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Learning",
"sec_num": "3.1"
},
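The softmax loss of Eq. 6 for one sentence is the difference of two log-partition functions, which can be evaluated stably with log-sum-exp as in the sketch below; the feature matrix and oracle set are made-up stand-ins.

```python
import numpy as np

def logsumexp(x):
    # Numerically stable log(sum(exp(x))).
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def softmax_loss(w, feats, oracle_idx):
    """-log(Z_O / Z_N) for one sentence: the negative log-probability mass of the oracles."""
    scores = feats @ w                            # w^T h(f, e) for all n-best candidates
    log_zn = logsumexp(scores)                    # log Z_N over the whole n-best list
    log_zo = logsumexp(scores[list(oracle_idx)])  # log Z_O over the oracle subset
    return log_zn - log_zo

feats = np.array([[1.0, 0.2], [0.3, 1.1], [0.5, 0.5]])
print(softmax_loss(np.array([0.4, 0.6]), feats, oracle_idx=[0, 2]))
```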
{
"text": "Hopkins and May (2011) applied a MERT-like procedure in Alg. 1 in which Equation 4 was solved to obtain new parameters in each iteration. Here, we employ stochastic gradient descent (SGD) methods as presented in Algorithm 2 motivated by Pegasos (Shalev-Shwartz et al., 2007) . In each iteration, we randomly permute D and choose a set of batches",
"cite_spans": [
{
"start": 245,
"end": 274,
"text": "(Shalev-Shwartz et al., 2007)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "B t = {b t 1 , ..., b t K } with each b t j consisting of N/K training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "For each batch b in B t , we generate n-bests from the source sentences in b and compute oracle translations from the newly created n-bests",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "Algorithm 2 Stochastic Gradient Descent 1: k = 1, w 1 \u2190 0 2: for t = 1, ..., T do 3: Choose B t = {b t 1 , ..., b t K } from D 4: for b \u2208 B t do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "Compute n-bests and oracles of b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "Set learning rate \u03b7 k 7: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "w k+ 1 2 \u2190 w k \u2212 \u03b7 k \u2207(w k ; b) \u25b7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "w k+1 \u2190 min { 1, 1/ \u221a \u03bb \u2225w k+ 1 2 \u2225 2 } w k+ 1 2 9: k \u2190 k + 1 10:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "end for 11: end for 12: return w k (line 5) using a batch local corpus-BLEU (Haddow et al., 2011) . Then, we optimize an approximated objective function",
"cite_spans": [
{
"start": 76,
"end": 97,
"text": "(Haddow et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg min w \u03bb 2 \u2225w\u2225 2 2 + \u2113(w; b)",
"eq_num": "(7)"
}
],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "by replacing D with b in the objective of Eq. 4. The parameters w k are updated by the sub-gradient of Equation 7, \u2207(w k ; b), scaled by the learning rate \u03b7 k (line 7). We use an exponential decayed learning rate \u03b7 k = \u03b7 0 \u03b1 k/K , which converges very fast in practice (Tsuruoka et al., 2009) 1 . The sub-gradient of Eq.7 with the hinge loss of Eq. 5 is",
"cite_spans": [
{
"start": 269,
"end": 292,
"text": "(Tsuruoka et al., 2009)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bbw k \u2212 1 M (w k ; b) \u2211 (f,e)\u2208b \u2211 e * ,e \u2032 \u03a6(f, e * , e \u2032 )",
"eq_num": "(8)"
}
],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 \u2212 w \u22a4 k \u03a6(f, e * , e \u2032 ) > 0.",
"eq_num": "(9)"
}
],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "We found that the normalization term by M (\u2022) was very slow in convergence, thus, instead, we used M \u2032 (w; b), which was the number of paired loss terms satisfied the constraints in Equation 9. In the case of the softmax loss objective of Eq. 6, the subgradient is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bbw k \u2212 1 |b| \u2211 (f,e)\u2208b \u2202 \u2202w L(w; f, e) w=w k",
"eq_num": "(10)"
}
],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "where L(w; f, e) = log (Z O (w; f, e)/Z N (w; f )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "After the parameter update, w k+ 1 2 is projected within the L 2 -norm ball (Shalev-Shwartz et al., 2007) .",
"cite_spans": [
{
"start": 76,
"end": 105,
"text": "(Shalev-Shwartz et al., 2007)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
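Lines 6-8 of Algorithm 2 combine the decayed learning rate, the hinge sub-gradient of Eq. 8 (with the M' normalization described above), and the projection onto the L_2 ball of radius 1/sqrt(lambda). The following sketch puts one such update together under stated assumptions: the pair difference vectors Phi are assumed to be extracted beforehand (for example with a routine like the pairwise hinge sketch earlier), and the default eta_0, alpha and lambda follow the values reported in the paper.

```python
import numpy as np

def sgd_step(w, batch_phis, k, K, lam=1e-5, eta0=0.2, alpha=0.85):
    """One update of Algorithm 2 on a batch.

    batch_phis: list of difference vectors Phi(f, e*, e') collected from the batch.
    k: global update counter; K: number of batches per epoch.
    """
    eta = eta0 * alpha ** (k / K)                        # exponentially decayed learning rate
    active = [phi for phi in batch_phis if 1.0 - w @ phi > 0.0]
    grad = lam * w
    if active:                                           # normalize by M', the number of
        grad -= np.sum(active, axis=0) / len(active)     # violated pairs (Eq. 8, modified)
    w_half = w - eta * grad                              # gradient step (line 7)
    norm = np.linalg.norm(w_half)
    scale = min(1.0, (1.0 / np.sqrt(lam)) / max(norm, 1e-12))
    return scale * w_half                                # projection onto the L2 ball (line 8)
```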
{
"text": "Setting smaller batch size implies frequent updates to the parameters and a faster convergence. However, as briefly mentioned in Haddow et al. (2011) , setting batch size to a smaller value, such as |b| = 1, does not work well in practice, since BLEU is devised for a corpus based evaluation, not for an individual sentence-wise evaluation, and it is not linearly decomposed into a sentence-wise score (Chiang et al., 2008a) . Thus, the smaller batch size may also imply less accurate batch-local corpus-BLEU and incorrect oracle translation selections, which may lead to incorrect sub-gradient estimations or slower convergence. In the next section we propose an optimized parameter update which works well when setting a smaller batch size is impractical due to its task loss setting.",
"cite_spans": [
{
"start": 129,
"end": 149,
"text": "Haddow et al. (2011)",
"ref_id": "BIBREF13"
},
{
"start": 402,
"end": 424,
"text": "(Chiang et al., 2008a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Approximation",
"sec_num": "3.2"
},
{
"text": "In line 7 of Algorithm 2, parameters are updated by the sub-gradient of each training instance in a batch b. When the sub-gradient in Equation 8 is employed, the update procedure can be rearranged as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w k+ 1 2 \u2190 (1\u2212\u03bb\u03b7 k )w k + \u2211 (f,e)\u2208b,e * ,e \u2032 \u03b7 k M (w k ; b) \u03a6(f, e * , e \u2032 )",
"eq_num": "(11)"
}
],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "in which each individual loss term \u03a6(\u2022) is scaled uniformly by a constant \u03b7 k /M (\u2022).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "Instead of the uniform scaling, we propose to update the parameters in two steps: First, we suffer the sub-gradient from the L 2 regularization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "w k+ 1 4 \u2190 (1 \u2212 \u03bb\u03b7 k )w k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "Second, we solve the following problem ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "arg min w 1 2 \u2225w \u2212 w k+ 1 4 \u2225 2 2 + \u03b7 k \u2211 (f,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "w \u22a4 \u03a6(f, e * , e \u2032 ) \u2265 1 \u2212 \u03be f,e * ,e \u2032 \u03be f,e * ,e \u2032 \u2265 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "The problem is inspired by the passive-aggressive algorithm (Crammer et al., 2006) in which new parameters are derived through the tradeoff between the amount of updates to the parameters and the margin-based loss. Note that the objective in MIRA is represented by as our previous parameters and set \u03bb = 1/\u03b7 k , they are very similar. Unlike MIRA, the learning rate \u03b7 k is directly used as a tradeoff parameter which decays as training proceeds, and the subgradient of the global L 2 regularization term is also combined in the problem through w k+ 1",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "arg min w \u03bb 2 \u2225w \u2212 w k \u2225 2 2 + \u2211 (f,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimized Parameter Update",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{ 1 \u2212 w \u22a4 k+ 1 4 \u03a6(f, e * , e \u2032 ) }",
"eq_num": "(14)"
}
],
"section": "4",
"sec_num": null
},
{
"text": "subject to \u2211 (f,e)\u2208b,e * ,e \u2032 \u03c4 e * ,e \u2032 \u2264 \u03b7 k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "We used a dual coordinate descent algorithm 2 to efficiently solve the quadratic program (QP) in Equation 14, leading to an update",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "w k+ 1 2 \u2190 w k+ 1 4 + \u2211",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "(f,e)\u2208b,e * ,e \u2032 \u03c4 e * ,e \u2032 \u03a6(f, e * , e \u2032 ). 15When compared with Equation 11, the update procedure in Equation 15 rescales the contribution from each sub-gradient through the Lagrange multipliers \u03c4 e * ,e \u2032 . Note that if we set \u03c4 e * ,e \u2032 = \u03b7 k /M (\u2022), we satisfy the constraints in Eq. 14, and recover the update in Eq. 11. In the same manner as Eq. 12, we derive an optimized update procedure for the softmax loss, which replaces the update with Equation 10, by solving the following problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
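One way to read the optimized update of Eq. 15 is as a dual coordinate descent over the pairwise constraints: each multiplier tau gets a clipped passive-aggressive step bounded by eta_k, and, per footnote 2, the multipliers are renormalized afterwards so that their sum does not exceed eta_k. The sketch below follows that reading and is a simplification, not the authors' exact solver.

```python
import numpy as np

def optimized_update(w_quarter, phis, eta_k, n_iters=10):
    """Approximate solver for the QP behind Eq. 14, returning w_{k+1/2} as in Eq. 15.

    w_quarter: w_{k+1/4}, the parameters after the regularization step.
    phis: list of difference vectors Phi(f, e*, e') from the batch.
    """
    tau = np.zeros(len(phis))
    w = w_quarter.copy()
    for _ in range(n_iters):                               # dual coordinate descent sweeps
        for i, phi in enumerate(phis):
            margin = 1.0 - w @ phi                         # violation of constraint i
            step = margin / max(phi @ phi, 1e-12)          # unconstrained optimal move
            new_tau = min(max(tau[i] + step, 0.0), eta_k)  # box constraint 0 <= tau <= eta_k
            w += (new_tau - tau[i]) * phi                  # keep w = w_{k+1/4} + sum tau * Phi
            tau[i] = new_tau
    total = tau.sum()
    if total > eta_k:                                      # renormalize so sum(tau) <= eta_k
        tau *= eta_k / total
    return w_quarter + sum(t * phi for t, phi in zip(tau, phis))
```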
{
"text": "arg min w 1 2 \u2225w \u2212 w k+ 1 4 \u2225 2 2 + \u03b7 k \u2211 (f,e)\u2208b \u03be f (16) such that w \u22a4 \u03a8(w k ; f, e) \u2265 \u2212L(w k ; f, e) \u2212 \u03be f \u03be f \u2265 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "in which \u03a8(w \u2032 ; f, e) = \u2202 \u2202w L(w; f, e) w=w \u2032 . Equation 16 can be interpreted as a cutting-plane approximation for the objective of Eq. 7, in which the original objective of Eq. 7 with the softmax loss in Eq. 6 is approximated by |b| linear constraints derived from the sub-gradients at point w k (Teo et al., 2010) . Eq. 16 is efficiently solved by its Lagrange dual, leading to an update",
"cite_spans": [
{
"start": 299,
"end": 317,
"text": "(Teo et al., 2010)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w k+ 1 2 \u2190 w k+ 1 4 + \u2211 (f,e)\u2208b \u03c4 f \u03a8(w k ; f, e)",
"eq_num": "(17)"
}
],
"section": "4",
"sec_num": null
},
{
"text": "subject to \u2211 (f,e)\u2208b \u03c4 f \u2264 \u03b7 k . Similar to Eq. 15, the parameter update by \u03a8(\u2022) is rescaled by its Lagrange multipliers \u03c4 f in place of the uniform scale of 1/|b| in the sub-gradient of Eq. 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "For faster training, we employ an efficient parallel training strategy proposed by McDonald et al. (2010) . The training data D is split into S disjoint shards, {D 1 , ..., D S }. Each shard learns its own parameters in each single epoch t and performs parameter mixing by averaging parameters across shards.",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "McDonald et al. (2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Line Search for Parameter Mixing",
"sec_num": "4.2"
},
{
"text": "We propose an optimized parallel training in Algorithm 3 which performs better mixing with respect to the task loss, i.e. negative BLEU. In line 5, w t+ 1 2 is computed by averaging w t+1,s from all the shards after local training using their own data D s . Then, the new parameters w t+1 are obtained by linearly interpolating with the parameters from the previous epoch w t . The linear interpolation weight \u03c1 is efficiently computed by a line search procedure which directly minimizes the negative corpus-BLEU. The procedure is exactly the same as the line search strategy employed in MERT using w t as our starting point with the direction w t+ 1 2 \u2212 w t . The idea of using the line search procedure is to find the optimum parameters under corpus-BLEU without a Algorithm 3 Distributed training with line search 1: w 1 \u2190 0 2: for t = 1, ..., T do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Line Search for Parameter Mixing",
"sec_num": "4.2"
},
{
"text": "w t,s \u2190 w t \u25b7 Distribute parameters 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Line Search for Parameter Mixing",
"sec_num": "4.2"
},
{
"text": "Each shard learns w t+1,s using D s \u25b7 Line 3-10 in Alg. 2 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Line Search for Parameter Mixing",
"sec_num": "4.2"
},
{
"text": "w t+ 1 2 \u2190 1/S \u2211 s w t+1,s \u25b7 Mixing 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Line Search for Parameter Mixing",
"sec_num": "4.2"
},
{
"text": "w t+1 \u2190 (1 \u2212 \u03c1)w t + \u03c1w t+ 1 2 \u25b7 Line search 7: end for 8: return w T +1 batch-local approximation. Unlike MERT, however, we do not memorize nor merge all the n-bests generated across iterations, but keep only n-bests in each iteration for faster training and for memory saving. Thus, the optimum \u03c1 obtained by the line search may be suboptimal in terms of the training objective, but potentially better than averaging for minimizing the final task loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Line Search for Parameter Mixing",
"sec_num": "4.2"
},
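Lines 5-6 of Algorithm 3 can be sketched as follows: average the shard parameters, then choose the interpolation weight rho by re-ranking the n-bests kept from the current iteration and scoring them with a caller-supplied corpus_bleu function. The grid over rho is a simplification of the exact MERT-style line search the paper uses, and corpus_bleu is an assumed callback, not an API from the paper.

```python
import numpy as np

def mix_with_line_search(w_prev, shard_weights, nbest_feats, corpus_bleu,
                         grid=np.linspace(0.0, 1.0, 101)):
    """Parameter mixing of Algorithm 3 (lines 5-6).

    shard_weights: list of parameter vectors w_{t+1,s} learned on each shard.
    nbest_feats: list of (n_i x d) arrays, the n-bests kept from this iteration.
    corpus_bleu: callable mapping the selected hypothesis indices to corpus-BLEU;
                 assumed to be provided by the caller.
    """
    w_avg = np.mean(shard_weights, axis=0)                 # line 5: simple averaging
    best_rho, best_bleu = 1.0, -1.0
    for rho in grid:                                       # line 6: search the interpolation
        w = (1.0 - rho) * w_prev + rho * w_avg
        selected = [int(np.argmax(H @ w)) for H in nbest_feats]
        b = corpus_bleu(selected)
        if b > best_bleu:
            best_rho, best_bleu = rho, b
    return (1.0 - best_rho) * w_prev + best_rho * w_avg
```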
{
"text": "Experiments were carried out on the NIST 2008 Chinese-to-English Open MT task. The training data consists of nearly 5.6 million bilingual sentences and additional monolingual data, English Gigaword, for 5-gram language model estimation. MT02 and MT06 were used as our tuning and development testing, and MT08 as our final testing with all data consisting of four reference translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use an in-house developed hypergraph-based toolkit for training and decoding with synchronous-CFGs (SCFG) for hierarchical phrase-bassed SMT (Chiang, 2007) . The system employs 14 features, consisting of standard Hiero-style features (Chiang, 2007) , and a set of indicator features, such as the number of synchronous-rules in a derivation. Two 5-gram language models are also included, one from the English-side of bitexts and the other from English Gigaword, with features counting the number of out-of-vocabulary words in each model . For faster experiments, we precomputed translation forests inspired by Xiao et al. (2011) . Instead of generating forests from bitexts in each iteration, we construct and save translation forests by intersecting the source side of SCFG with input sentences and by keeping the target side of the inter-sected rules. n-bests are generated from the precomputed forests on the fly using the forest rescoring framework (Huang and Chiang, 2007) with additional non-local features, such as 5-gram language models.",
"cite_spans": [
{
"start": 144,
"end": 158,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 237,
"end": 251,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 612,
"end": 630,
"text": "Xiao et al. (2011)",
"ref_id": null
},
{
"start": 955,
"end": 979,
"text": "(Huang and Chiang, 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We compared four algorithms, MERT, PRO, MIRA and our proposed online settings, online rank optimization (ORO). Note that ORO without our optimization methods in Section 4 is essentially the same as Pegasos, but differs in that we employ the algorithm for ranking structured outputs with varied objectives, hinge loss or softmax loss 3 . MERT learns parameters from forests (Kumar et al., 2009) with 4 restarts and 8 random directions in each iteration. We experimented on a variant of PRO 4 , in which the objective in Eq. 4 with the hinge loss of Eq. 5 was solved in each iteration in line 4 of Alg. 1 using an off-the-shelf solver 5 . Our MIRA solves the problem in Equation 13 in line 7 of Alg. 2. For a systematic comparison, we used our exhaustive oracle translation selection method in Section 3 for PRO, MIRA and ORO. For each learning algorithm, we ran 30 iterations and generated duplicate removed 1,000-best translations in each iteration. The hyperparameter \u03bb for PRO and ORO was set to 10 \u22125 , selected from among {10 \u22123 , 10 \u22124 , 10 \u22125 }, and 10 2 for MIRA, chosen from {10, 10 2 , 10 3 } by preliminary testing on MT06. Both decoding and learning are parallelized and run on 8 cores. Each online learning took roughly 12 hours, and PRO took one day. It took roughly 3 days for MERT with 20 iterations. Translation results are measured by case sensitive BLEU. Table 1 presents our main results. Among the parameters from multiple iterations, we report the outputs that performed the best on MT06. With Moses (Koehn et al., 2007) , we achieved 30.36 and 23.64 BLEU for MT06 and MT08, respectively. We denote the \"O-\" prefix for the optimized parameter updates discussed in Section 4.1, and the \"-L\" suffix csie.ntu.edu.tw/\u02dccjlin/liblinear with the solver type of 3. Table 1 : Translation results by BLEU. Results without significant differences from the MERT baseline are marked \u2020. The numbers in boldface are significantly better than the MERT baseline (both measured by the bootstrap resampling (Koehn, 2004) with p > 0.05). for parameter mixing by line search as described in Section 4.2. The batch size was set to 16 for MIRA and ORO. In general, our PRO and MIRA settings achieved the results very comparable to MERT. The hinge-loss and softmax objective OROs were lower than those of the three baselines. The softmax objective with the optimized update (O-ORO-L softmax ) performed better than the non-optimized version, but it was still lower than our baselines. In the case of the hinge-loss objective with the optimized update (O-ORO-L hinge ), the gain in MT08 was significant, and achieved the best BLEU. Figure 1 presents the learning curves for three algorithms MIRA-L, ORO-L hinge and O-ORO-L hinge , in which the performance is measured by BLEU on the training data (MT02) and on the test data (MT08). MIRA-L quickly converges and is slightly unstable in the test set, while ORO-L hinge is very stable and slow to converge, but with low performance on the training and test data. The stable learning curve in ORO-L hinge is probably influenced by our learning rate parameter \u03b7 0 = 0.2, which will be investigated in future work. O-ORO-L hinge is less stable in several iterations, but steadily improves its BLEU. The behavior is justified by our optimized update procedure, in which the learning rate \u03b7 k is used as a tradeoff parameter. 
Thus, it tries a very aggressive update at the early stage of training, but eventually becomes conservative in updating parameters.",
"cite_spans": [
{
"start": 373,
"end": 393,
"text": "(Kumar et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 1521,
"end": 1541,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF19"
},
{
"start": 2009,
"end": 2022,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 1373,
"end": 1380,
"text": "Table 1",
"ref_id": null
},
{
"start": 1778,
"end": 1785,
"text": "Table 1",
"ref_id": null
},
{
"start": 2628,
"end": 2636,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Next, we compare the effect of line search for parameter mixing in Table 2 . Line search was very effective for MIRA and O-ORO hinge , but less effective for the others. Since the line search procedure directly minimizes a task loss, not objectives, this may hurt the performance for the softmax objective, where the margins between the correct and incorrect translations are softly penalized.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Finally, Table 3 shows the effect of batch size selected from {1, 4, 8, 16}. There seems to be no clear trends in MIRA, and we achieved BLEU score of 24.58 by setting the batch size to 8. Clearly, setting smaller batch size is better for ORO, but it is the reverse for the optimized variants of both the hinge and softmax objectives. Figure 2 compares ORO-L hinge and O-ORO-L hinge on MT02 with different batch size settings. ORO-L hinge converges faster when the batch size is smaller and fine tun- ing of the learning rate parameter will be required for a larger batch size. As discussed in Section 3, the smaller batch size means frequent updates to parameters and a faster convergence, but potentially leads to a poor performance since the corpus-BLEU is approximately computed in a local batch. Our optimized update algorithms address the problem by adjusting the tradeoff between the amount of update to parameters and the loss, and perform better for larger batch sizes with a more accurate corpus-BLEU.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 3",
"ref_id": null
},
{
"start": 334,
"end": 342,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Our work is largely inspired by pairwise rank optimization (Hopkins and May, 2011) , but runs in an online fashion similar to (Watanabe et al., 2007; Chiang et al., 2008b) . Major differences come from the corpus-BLEU computation used to select oracle translations. Instead of the sentence-BLEU used by Hopkins and May (2011) or the corpus-BLEU statistics accumulated from previous translations generated by different parameters (Watanabe et al., 2007; Chiang et al., 2008b) , we used a simple batch local corpus-BLEU (Haddow et al., 2011) in the same way as an online approximation to the objectives. An alternative is the use of a Taylor series approximation (Smith and Eisner, 2006; Rosti et al., 2011) , which was not investigated in this paper. Training is performed by SGD with a parameter projection method (Shalev-Shwartz et al., 2007 Table 3 : Translation results with varied batch size.",
"cite_spans": [
{
"start": 59,
"end": 82,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF15"
},
{
"start": 126,
"end": 149,
"text": "(Watanabe et al., 2007;",
"ref_id": "BIBREF36"
},
{
"start": 150,
"end": 171,
"text": "Chiang et al., 2008b)",
"ref_id": "BIBREF5"
},
{
"start": 303,
"end": 325,
"text": "Hopkins and May (2011)",
"ref_id": "BIBREF15"
},
{
"start": 429,
"end": 452,
"text": "(Watanabe et al., 2007;",
"ref_id": "BIBREF36"
},
{
"start": 453,
"end": 474,
"text": "Chiang et al., 2008b)",
"ref_id": "BIBREF5"
},
{
"start": 518,
"end": 539,
"text": "(Haddow et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 661,
"end": 685,
"text": "(Smith and Eisner, 2006;",
"ref_id": "BIBREF31"
},
{
"start": 686,
"end": 705,
"text": "Rosti et al., 2011)",
"ref_id": "BIBREF29"
},
{
"start": 814,
"end": 842,
"text": "(Shalev-Shwartz et al., 2007",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 843,
"end": 850,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "for more accurate corpus-BLEU is addressed by optimally scaling parameter updates in the spirit of a passive-aggressive algorithm (Crammer et al., 2006) . The derived algorithm is very similar to MIRA, but differs in that the learning rate is employed as a hyperparameter for controlling the fitness to training data which decays when training proceeds. The non-uniform sub-gradient based update is also employed in an exponentiated gradient (EG) algorithm (Kivinen and Warmuth, 1997; Kivinen and Warmuth, 2001) in which parameter updates are maximum-likely estimated using an exponentially combined sub-gradients. In contrast, our approach relies on an ultraconservative update which tradeoff between the amount of updates performed to the parameters and the progress made for the objectives by solving a QP subproblem. Unlike a complex parallelization by Chiang et al. (2008b) , in which support vectors are asynchronously exchanged among parallel jobs, training is efficiently and easily carried out by distributing training data among shards and by mixing parameters in each iteration (McDonald et al., 2010) . Rather than simple averaging, new parameters are derived by linearly interpolating with the previously mixed parameters, and its weight is determined by the line search algorithm employed in (Och, 2003) .",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 457,
"end": 484,
"text": "(Kivinen and Warmuth, 1997;",
"ref_id": "BIBREF18"
},
{
"start": 485,
"end": 511,
"text": "Kivinen and Warmuth, 2001)",
"ref_id": "BIBREF19"
},
{
"start": 857,
"end": 878,
"text": "Chiang et al. (2008b)",
"ref_id": "BIBREF5"
},
{
"start": 1089,
"end": 1112,
"text": "(McDonald et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 1306,
"end": 1317,
"text": "(Och, 2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We proposed a variant of an online learning algorithm inspired by a batch learning algorithm of (Hopkins and May, 2011) . Training is performed by SGD with a parameter projection (Shalev-Shwartz et al., 2007) using a larger batch size for a more accurate batch local corpus-BLEU estimation. Parameter updates in each iteration is further optimized using an idea from a passive-aggressive algorithm (Cram-mer et al., 2006) . Learning is efficiently parallelized (McDonald et al., 2010) and the locally learned parameters are mixed by an additional line search step. Experiments indicate that better performance was achieved by our optimized updates and by the more sophisticated parameter mixing.",
"cite_spans": [
{
"start": 96,
"end": 119,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF15"
},
{
"start": 179,
"end": 208,
"text": "(Shalev-Shwartz et al., 2007)",
"ref_id": "BIBREF30"
},
{
"start": 398,
"end": 421,
"text": "(Cram-mer et al., 2006)",
"ref_id": null
},
{
"start": 461,
"end": 484,
"text": "(McDonald et al., 2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In future work, we would like to investigate other objectives with a more direct task loss, such as maxmargin (Taskar et al., 2004) , risk (Smith and Eisner, 2006) or softmax-loss (Gimpel and Smith, 2010), and different regularizers, such as L 1 -norm for a sparse solution. Instead of n-best approximations, we may directly employ forests for a better conditional log-likelihood estimation (Li and Eisner, 2009) . We would also like to explore other mixing strategies for parallel training which can directly minimize the training objectives like those proposed for a cutting-plane algorithm (Franc and Sonnenburg, 2008) .",
"cite_spans": [
{
"start": 110,
"end": 131,
"text": "(Taskar et al., 2004)",
"ref_id": "BIBREF32"
},
{
"start": 139,
"end": 163,
"text": "(Smith and Eisner, 2006)",
"ref_id": "BIBREF31"
},
{
"start": 391,
"end": 412,
"text": "(Li and Eisner, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 593,
"end": 621,
"text": "(Franc and Sonnenburg, 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We set \u03b1 = 0.85 and \u03b70 = 0.2 which converged well in our preliminary experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Specifically, each parameter is bound constrained 0 \u2264 \u03c4 \u2264 \u03b7 k but is not summation constrained \u2211 \u03c4 \u2264 \u03b7 k . Thus, we renormalize \u03c4 after optimization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The other major difference is the use of a simpler learning rate, 1 \u03bbk , which was very slow in our preliminary studies.4 Hopkins and May (2011) minimized logistic loss sampled from the merged n-bests, and sentence-BLEU was used for determining ranks.5 We used liblinear(Fan et al., 2008) at http://www.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank anonymous reviewers and our colleagues for helpful comments and discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Lin- guistics, 22:39-71, March.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A discriminative latent variable model for statistical machine translation",
"authors": [
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "200--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statistical machine translation. In Proc. of ACL-08: HLT, pages 200-208, Columbus, Ohio, June.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The mathematics of statistical machine translation: parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estima- tion. Computational Linguistics, 19:263-311, June.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coarse-tofine n-best parsing and maxent discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL 2005",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and maxent discriminative rerank- ing. In Proc. of ACL 2005, pages 173-180, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Decomposability of translation metrics for improved evaluation and efficient algorithms",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Yee",
"middle": [],
"last": "Seng Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "610--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Steve DeNeefe, Yee Seng Chan, and Hwee Tou Ng. 2008a. Decomposability of transla- tion metrics for improved evaluation and efficient al- gorithms. In Proc. of EMNLP 2008, pages 610-619, Honolulu, Hawaii, October.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Online large-margin training of syntactic and structural translation features",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "224--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008b. Online large-margin training of syntactic and struc- tural translation features. In Proc. of EMNLP 2008, pages 224-233, Honolulu, Hawaii, October.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability",
"authors": [
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL 2011",
"volume": "",
"issue": "",
"pages": "176--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer insta- bility. In Proc. of ACL 2011, pages 176-181, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative reranking for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "",
"pages": "25--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Terry Koo. 2005. Discrimina- tive reranking for natural language parsing. Compu- tational Linguistics, 31:25-70, March.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Shai Shalev-Shwartz, and Yoram Singer",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. Journal of Machine Learning Research, 7:551-585, March.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The cmu-ark german-english translation system",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of SMT 2011",
"volume": "",
"issue": "",
"pages": "337--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Kevin Gimpel, Jonathan H. Clark, and Noah A. Smith. 2011. The cmu-ark german-english translation system. In Proc. of SMT 2011, pages 337- 343, Edinburgh, Scotland, July.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Optimized cutting plane algorithm for support vector machines",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ICML '08",
"volume": "9",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Ma- chine Learning Research, 9:1871-1874, June. Vojt\u011bch Franc and Soeren Sonnenburg. 2008. Optimized cutting plane algorithm for support vector machines. In Proc. of ICML '08, pages 320-327, Helsinki, Fin- land.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Softmaxmargin crfs: Training log-linear models with cost functions",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Edinburgh",
"suffix": ""
},
{
"first": "U",
"middle": [
"K"
],
"last": "Scotland",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "July",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Gimpel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of NAACL-HLT 2010",
"volume": "",
"issue": "",
"pages": "733--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley and Chris Quirk. 2011. Optimal search for minimum error rate training. In Proc. of EMNLP 2011, pages 38-49, Edinburgh, Scotland, UK., July. Kevin Gimpel and Noah A. Smith. 2010. Softmax- margin crfs: Training log-linear models with cost functions. In Proc. of NAACL-HLT 2010, pages 733- 736, Los Angeles, California, June.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Samplerank training for phrase-based machine translation",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of SMT 2011",
"volume": "",
"issue": "",
"pages": "261--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barry Haddow, Abhishek Arun, and Philipp Koehn. 2011. Samplerank training for phrase-based machine translation. In Proc. of SMT 2011, pages 261-271, Ed- inburgh, Scotland, July.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Support vector learning for ordinal regression",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Herbrich",
"suffix": ""
},
{
"first": "Thore",
"middle": [],
"last": "Graepel",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Obermayer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of International Conference on Artificial Neural Networks",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralf Herbrich, Thore Graepel, and Klaus Obermayer. 1999. Support vector learning for ordinal regression. In In Proc. of International Conference on Artificial Neural Networks, pages 97-102.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tuning as ranking",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP 2011",
"volume": "",
"issue": "",
"pages": "1352--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as rank- ing. In Proc. of EMNLP 2011, pages 1352-1362, Ed- inburgh, Scotland, UK., July.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A dual coordinate descent method for large-scale linear svm",
"authors": [
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "S",
"middle": [
"Sathiya"
],
"last": "Keerthi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sundararajan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ICML '08",
"volume": "",
"issue": "",
"pages": "408--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. 2008. A dual coordinate descent method for large-scale linear svm. In Proc. of ICML '08, pages 408-415, Helsinki, Finland.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Forest rescoring: Faster decoding with integrated language models",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL 2007",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2007. Forest rescor- ing: Faster decoding with integrated language models. In Proc. of ACL 2007, pages 144-151, Prague, Czech Republic, June.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exponentiated gradient versus gradient descent for linear predictors",
"authors": [
{
"first": "Jyrki",
"middle": [],
"last": "Kivinen",
"suffix": ""
},
{
"first": "Manfred",
"middle": [
"K"
],
"last": "Warmuth",
"suffix": ""
}
],
"year": 1997,
"venue": "Information and Computation",
"volume": "132",
"issue": "1",
"pages": "1--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jyrki Kivinen and Manfred K. Warmuth. 1997. Expo- nentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1- 63, January.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2001,
"venue": "Procc of ACL 2007",
"volume": "45",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Kivinen and M. K. Warmuth. 2001. Relative loss bounds for multidimensional regression problems. Machine Learning, 45(3):301-329, December. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Procc of ACL 2007, pages 177-180, Prague, Czech Republic, June.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP 2004, pages 388-395, Barcelona, Spain, July.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL-IJCNLP 2009",
"volume": "",
"issue": "",
"pages": "163--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate train- ing and minimum bayes-risk decoding for translation hypergraphs and lattices. In Proc. of ACL-IJCNLP 2009, pages 163-171, Suntec, Singapore, August.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "First-and second-order expectation semirings with applications to minimumrisk training on translation forests",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "40--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li and Jason Eisner. 2009. First-and second-order expectation semirings with applications to minimum- risk training on translation forests. In Proc. of EMNLP 2009, pages 40-51, Singapore, August.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An end-to-end discriminative approach to machine translation",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of COL-ING/ACL 2006",
"volume": "",
"issue": "",
"pages": "761--768",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proc. of COL- ING/ACL 2006, pages 761-768, Sydney, Australia, July.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed training strategies for the structured perceptron",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of NAACL-HLT 2010",
"volume": "",
"issue": "",
"pages": "456--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Keith Hall, and Gideon Mann. 2010. Distributed training strategies for the structured per- ceptron. In Proc. of NAACL-HLT 2010, pages 456- 464, Los Angeles, California, June.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Random restarts in minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "585--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Moore and Chris Quirk. 2008. Random restarts in minimum error rate training for statistical machine translation. In Proc. of COLING 2008, pages 585-592, Manchester, UK, August.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL 2002",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for statis- tical machine translation. In Proc. of ACL 2002, pages 295-302, Philadelphia, July.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL 2003",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL 2003, pages 160-167, Sapporo, Japan, July.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL 2002",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proc. of ACL 2002, pages 311-318, Philadelphia, Pennsylvania, USA, July.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Expected bleu training for graphs: Bbn system description for wmt11 system combination task",
"authors": [
{
"first": "Antti-Veikko",
"middle": [],
"last": "Rosti",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Spyros",
"middle": [],
"last": "Matsoukas",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of SMT 2011",
"volume": "",
"issue": "",
"pages": "159--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antti-Veikko Rosti, Bing Zhang, Spyros Matsoukas, and Richard Schwartz. 2011. Expected bleu training for graphs: Bbn system description for wmt11 system combination task. In Proc. of SMT 2011, pages 159- 165, Edinburgh, Scotland, July.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Pegasos: Primal estimated sub-gradient solver for svm",
"authors": [
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Srebro",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICML '07",
"volume": "",
"issue": "",
"pages": "807--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. 2007. Pegasos: Primal estimated sub-gradient solver for svm. In Proc. of ICML '07, pages 807-814, Cor- valis, Oregon.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Minimum risk annealing for training log-linear models",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of COLING/ACL 2006",
"volume": "",
"issue": "",
"pages": "787--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In Proc. of COLING/ACL 2006, pages 787-794, Sydney, Aus- tralia, July.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Max-margin parsing",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Dan Klein, Mike Collins, Daphne Koller, and Christopher Manning. 2004. Max-margin parsing. In Proc. of EMNLP 2004, pages 1-8, Barcelona, Spain, July.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bundle methods for regularized risk minimization",
"authors": [
{
"first": "Choon",
"middle": [],
"last": "Hui Teo",
"suffix": ""
},
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanthan",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "311--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choon Hui Teo, S.V.N. Vishwanthan, Alex J. Smola, and Quoc V. Le. 2010. Bundle methods for regularized risk minimization. Journal of Machine Learning Re- search, 11:311-365, March.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Tsujii",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL-IJCNLP 2009",
"volume": "",
"issue": "",
"pages": "477--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ana- niadou. 2009. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In Proc. of ACL-IJCNLP 2009, pages 477- 485, Suntec, Singapore, August.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Considerations in maximum mutual information and minimum classification error training for statistical machine translation",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of EAMT-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Venugopal. 2005. Considerations in maximum mutual information and minimum classification error training for statistical machine translation. In Proc. of EAMT-05, page 3031.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Fast generation of translation forest for largescale smt discriminative training",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "880--888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statis- tical machine translation. In Proc. of EMNLP-CoNLL 2007, pages 764-773, Prague, Czech Republic, June. Xinyan Xiao, Yang Liu, Qun Liu, and Shouxun Lin. 2011. Fast generation of translation forest for large- scale smt discriminative training. In Proc. of EMNLP 2011, pages 880-888, Edinburgh, Scotland, UK., July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Learning curves for three algorithms, MIRA-L, ORO-L hinge and O-ORO-L hinge .",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Learning curves on MT02 for ORO-L hinge and O-ORO-L hinge with different batch size.",
"uris": null
},
"TABREF0": {
"num": null,
"text": "Our proposed algorithm solve Eq. 12 or 16",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"text": "Parameter mixing by line search.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"text": "ORO-L hinge 31.44 \u2020 31.54 \u2020 31.35 \u2020 32.06 23.72 24.02 \u2020 24.28 \u2020 24.95",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"2\">MT06</td><td/><td/><td colspan=\"2\">MT08</td><td/></tr><tr><td>batch size</td><td>1</td><td>4</td><td>8</td><td>16</td><td>1</td><td>4</td><td>8</td><td>16</td></tr><tr><td>MIRA-L</td><td colspan=\"8\">31.28 \u2020 31.53 \u2020 31.63 \u2020 31.42 \u2020 23.46 23.97 \u2020 24.58 24.15 \u2020</td></tr><tr><td>ORO-L hinge</td><td colspan=\"2\">31.32 \u2020 30.69</td><td>29.61</td><td colspan=\"3\">29.76 23.63 23.12</td><td>22.07</td><td>21.96</td></tr><tr><td>O-ORO-L softmax</td><td colspan=\"6\">25.10 31.66 \u2020 31.31 \u2020 30.77 19.27 23.59</td><td>23.50</td><td>23.07</td></tr><tr><td colspan=\"7\">O-ORO-L softmax 31.15 \u2020 31.17 \u2020 30.90 31.16 \u2020 23.62 23.31</td><td>23.03</td><td>23.20</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>).</td></tr><tr><td/><td/><td/><td/><td colspan=\"5\">Slower training incurred by the larger batch size</td></tr></table>"
}
}
}
}