{
"paper_id": "D11-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:32:54.399046Z"
},
"title": "Optimal Search for Minimum Error Rate Training",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Minimum error rate training is a crucial component to many state-of-the-art NLP applications, such as machine translation and speech recognition. However, common evaluation functions such as BLEU or word error rate are generally highly non-convex and thus prone to search errors. In this paper, we present LP-MERT, an exact search algorithm for minimum error rate training that reaches the global optimum using a series of reductions to linear programming. Given a set of N-best lists produced from S input sentences, this algorithm finds a linear model that is globally optimal with respect to this set. We find that this algorithm is polynomial in N and in the size of the model, but exponential in S. We present extensions of this work that let us scale to reasonably large tuning sets (e.g., one thousand sentences), by either searching only promising regions of the parameter space, or by using a variant of LP-MERT that relies on a beam-search approximation. Experimental results show improvements over the standard Och algorithm.",
"pdf_parse": {
"paper_id": "D11-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "Minimum error rate training is a crucial component to many state-of-the-art NLP applications, such as machine translation and speech recognition. However, common evaluation functions such as BLEU or word error rate are generally highly non-convex and thus prone to search errors. In this paper, we present LP-MERT, an exact search algorithm for minimum error rate training that reaches the global optimum using a series of reductions to linear programming. Given a set of N-best lists produced from S input sentences, this algorithm finds a linear model that is globally optimal with respect to this set. We find that this algorithm is polynomial in N and in the size of the model, but exponential in S. We present extensions of this work that let us scale to reasonably large tuning sets (e.g., one thousand sentences), by either searching only promising regions of the parameter space, or by using a variant of LP-MERT that relies on a beam-search approximation. Experimental results show improvements over the standard Och algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Minimum error rate training (MERT)-also known as direct loss minimization in machine learning-is a crucial component in many complex natural language applications such as speech recognition (Chou et al., 1993; Stolcke et al., 1997; Juang et al., 1997) , statistical machine translation (Och, 2003; Smith and Eisner, 2006; Duh and Kirchhoff, 2008; Chiang et al., 2008) , dependency parsing (McDonald et al., 2005) , summarization (McDonald, 2006) , and phonetic alignment (McAllester et al., 2010) . MERT directly optimizes the evaluation metric under which systems are being evaluated, yielding superior performance (Och, 2003) when compared to a likelihood-based discriminative method (Och and Ney, 2002) . In complex text generation tasks like SMT, the ability to optimize BLEU (Papineni et al., 2001 ), TER (Snover et al., 2006) , and other evaluation metrics is critical, since these metrics measure qualities (such as fluency and adequacy) that often do not correlate well with task-agnostic loss functions such as log-loss.",
"cite_spans": [
{
"start": 190,
"end": 209,
"text": "(Chou et al., 1993;",
"ref_id": "BIBREF7"
},
{
"start": 210,
"end": 231,
"text": "Stolcke et al., 1997;",
"ref_id": "BIBREF34"
},
{
"start": 232,
"end": 251,
"text": "Juang et al., 1997)",
"ref_id": "BIBREF11"
},
{
"start": 286,
"end": 297,
"text": "(Och, 2003;",
"ref_id": "BIBREF27"
},
{
"start": 298,
"end": 321,
"text": "Smith and Eisner, 2006;",
"ref_id": "BIBREF32"
},
{
"start": 322,
"end": 346,
"text": "Duh and Kirchhoff, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 347,
"end": 367,
"text": "Chiang et al., 2008)",
"ref_id": "BIBREF5"
},
{
"start": 389,
"end": 412,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF22"
},
{
"start": 429,
"end": 445,
"text": "(McDonald, 2006)",
"ref_id": "BIBREF23"
},
{
"start": 471,
"end": 496,
"text": "(McAllester et al., 2010)",
"ref_id": "BIBREF21"
},
{
"start": 616,
"end": 627,
"text": "(Och, 2003)",
"ref_id": "BIBREF27"
},
{
"start": 686,
"end": 705,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF26"
},
{
"start": 780,
"end": 802,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF28"
},
{
"start": 810,
"end": 831,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While competitive in practice, MERT faces several challenges, the most significant of which is search. The unsmoothed error count is a highly non-convex objective function and therefore difficult to optimize directly; prior work offers no algorithm with a good approximation guarantee. While much of the earlier work in MERT (Chou et al., 1993; Juang et al., 1997) relies on standard convex optimization techniques applied to non-convex problems, the Och algorithm (Och, 2003) represents a significant advance for MERT since it applies a series of special line minimizations that happen to be exhaustive and efficient. Since this algorithm remains inexact in the multidimensional case, much of the recent work on MERT has focused on extending Och's algorithm to find better search directions and starting points (Cer et al., 2008; Moore and Quirk, 2008) , and on experimenting with other derivative-free methods such as the Nelder-Mead simplex algorithm (Nelder and Mead, 1965; Zens et al., 2007; Zhao and Chen, 2009) .",
"cite_spans": [
{
"start": 325,
"end": 344,
"text": "(Chou et al., 1993;",
"ref_id": "BIBREF7"
},
{
"start": 345,
"end": 364,
"text": "Juang et al., 1997)",
"ref_id": "BIBREF11"
},
{
"start": 465,
"end": 476,
"text": "(Och, 2003)",
"ref_id": "BIBREF27"
},
{
"start": 812,
"end": 830,
"text": "(Cer et al., 2008;",
"ref_id": "BIBREF3"
},
{
"start": 831,
"end": 853,
"text": "Moore and Quirk, 2008)",
"ref_id": "BIBREF24"
},
{
"start": 954,
"end": 977,
"text": "(Nelder and Mead, 1965;",
"ref_id": "BIBREF25"
},
{
"start": 978,
"end": 996,
"text": "Zens et al., 2007;",
"ref_id": "BIBREF37"
},
{
"start": 997,
"end": 1017,
"text": "Zhao and Chen, 2009)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present LP-MERT, an exact search algorithm for N -best optimization that exploits general assumptions commonly made with MERT, e.g., that the error metric is decomposable by sentence. 1 While there is no known optimal algo-rithm to optimize general non-convex functions, the unsmoothed error surface has a special property that enables exact search: the set of translations produced by an SMT system for a given input is finite, so the piecewise-constant error surface contains only a finite number of constant regions. As in Och (2003) , one could imagine exhaustively enumerating all constant regions and finally return the best scoring one-Och does this efficiently with each one-dimensional search-but the idea doesn't quite scale when searching all dimensions at once. Instead, LP-MERT exploits algorithmic devices such as lazy enumeration, divide-and-conquer, and linear programming to efficiently discard partial solutions that cannot be maximized by any linear model. Our experiments with thousands of searches show that LP-MERT is never worse than the Och algorithm, which provides strong evidence that our algorithm is indeed exact. In the appendix, we formally prove that this search algorithm is optimal. We show that this algorithm is polynomial in N and in the size of the model, but exponential in the number of tuning sentences. To handle reasonably large tuning sets, we present two modifications of LP-MERT that either search only promising regions of the parameter space, or that rely on a beam-search approximation. The latter modification copes with tuning sets of one thousand sentences or more, and outperforms the Och algorithm on a WMT 2010 evaluation task.",
"cite_spans": [
{
"start": 544,
"end": 554,
"text": "Och (2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper makes the following contributions. To our knowledge, it is the first known exact search algorithm for optimizing task loss on N -best lists in general dimensions. We also present an approximate version of LP-MERT that offers a natural means of trading speed for accuracy, as we are guaranteed to eventually find the global optimum as we gradually increase beam size. This trade-off may be beneficial in commercial settings and in large-scale evaluations like the NIST evaluation, i.e., when one has a stable system and is willing to let MERT run for days or weeks to get the best possible accuracy. We think this work would also be useful as we turn to more human involvement in training (Zaidan and Callison-Burch, 2009) , as MERT in this case is intrinsically slow.",
"cite_spans": [
{
"start": 699,
"end": 732,
"text": "(Zaidan and Callison-Burch, 2009)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let f S 1 = f 1 . . . f S denote the S input sentences of our tuning set. For each sentence f s , let C s = e s,1 . . . e s,N denote a set of N candidate translations. For simplicity and without loss of generality, we assume that N is constant for each index s. Each input and output sentence pair (f s , e s,n ) is weighted by a linear model that combines model parameters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
{
"text": "w = w 1 . . . w D \u2208 R D with D feature functions h 1 (f , e, \u223c) . . . h D (f , e, \u223c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
{
"text": ", where \u223c is the hidden state associated with the derivation from f to e, such as phrase segmentation and alignment. Furthermore, let h s,n \u2208 R D denote the feature vector representing the translation pair (f s , e s,n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
{
"text": "In MERT, the goal is to minimize an error count E(r, e) by scoring translation hypotheses against a set of reference translations r S 1 = r 1 . . . r S . Assuming as in Och (2003) that error count is additively decomposable by sentence-i.e., E(r S",
"cite_spans": [
{
"start": 169,
"end": 179,
"text": "Och (2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
{
"text": "1 , e S 1 ) = s E(r s , e s )-this results in the following optimization problem: 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
{
"text": "w = arg min w S s=1 E(r s ,\u00ea(f s ; w)) = arg min w S s=1 N n=1 E(r s , e s,n )\u03b4(e s,n ,\u00ea(f s ; w)) (1) where\u00ea (f s ; w) = arg max n\u2208{1...N } w h s,n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
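Eq. 1 is cheap to evaluate for a fixed weight vector, and that evaluation is the basic operation every search procedure below relies on. A minimal sketch in Python (not from the paper; the array layout and the toy data are illustrative assumptions):

```python
import numpy as np

def corpus_loss(w, feats, losses):
    """Evaluate the objective of Eq. 1 for a fixed weight vector w.

    feats[s]  : (N, D) array of feature vectors h_{s,1..N} for sentence s
    losses[s] : (N,)   array of task losses E(r_s, e_{s,n})
    For each sentence, pick the hypothesis with the highest model score
    w . h_{s,n} and accumulate its task loss.
    """
    total = 0.0
    for h_s, e_s in zip(feats, losses):
        n_best = int(np.argmax(h_s @ w))   # \hat{e}(f_s; w)
        total += float(e_s[n_best])
    return total

# toy usage: 2 sentences, 3 hypotheses each, 2 features
feats = [np.array([[1.0, 0.2], [0.5, 0.9], [0.1, 0.1]]),
         np.array([[0.3, 0.4], [0.8, 0.1], [0.2, 0.7]])]
losses = [np.array([0.4, 0.2, 0.9]), np.array([0.5, 0.3, 0.6])]
print(corpus_loss(np.array([1.0, 0.5]), feats, losses))
```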
{
"text": "The quality of this approximation is dependent on how accurately the N -best lists represent the search space of the system. Therefore, the hypothesis list is iteratively grown: decoding with an initial parameter vector seeds the N -best lists; next, parameter estimation and N -best list gathering alternate until the search space is deemed representative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
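A rough sketch of this outer loop, assuming two system-specific callables, decode and optimize; both are placeholders for components the paper does not prescribe:

```python
def mert_outer_loop(decode, optimize, w0, tuning_src, refs, max_iters=10):
    """Alternate decoding and parameter estimation until the merged N-best
    lists stop growing, i.e., the search space is deemed representative.

    decode(w, src)         -> iterable of (hypothesis, feature_tuple) pairs
    optimize(nbests, refs) -> new weight vector (e.g., 1D-MERT or LP-MERT)
    """
    w = w0
    nbests = [set() for _ in tuning_src]      # merged N-best list per sentence
    for _ in range(max_iters):
        grew = False
        for s, src in enumerate(tuning_src):
            for hyp, feat in decode(w, src):
                if (hyp, feat) not in nbests[s]:
                    nbests[s].add((hyp, feat))
                    grew = True
        if not grew:                          # no new hypotheses: stop
            break
        w = optimize(nbests, refs)            # re-estimate on the merged lists
    return w
```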
{
"text": "The crucial observation of Och (2003) is that the error count along any line is a piecewise constant function. Furthermore, this function for a single sentence may be computed efficiently by first finding the hypotheses that form the upper envelope of the model score function, then gathering the error count for each hypothesis along the range for which it is optimal. Error counts for the whole corpus are simply the sums of these piecewise constant functions, leading to an efficient algorithm for finding the global optimum of the error count along any single direction.",
"cite_spans": [
{
"start": 27,
"end": 37,
"text": "Och (2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
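A compact sketch of this one-dimensional search: along $w(\gamma) = w_0 + \gamma d$, every hypothesis score is a line in $\gamma$; the upper envelope of those lines is traced from left to right, and the per-sentence piecewise-constant errors are merged into corpus-level breakpoints. Variable names and tie-breaking details are illustrative, not taken from Och's implementation.

```python
import numpy as np

def envelope(a, b):
    """Upper envelope of the lines y_n(g) = a[n] + g * b[n].
    Returns [(gamma, n), ...]: hypothesis n has the highest score from gamma
    until the next breakpoint (the first gamma is -inf)."""
    order = sorted(range(len(a)), key=lambda n: (b[n], -a[n]))
    cur, gamma = order[0], -np.inf        # winner as gamma -> -inf
    segs = [(gamma, cur)]
    while True:
        best_g, best_n = np.inf, None
        for n in range(len(a)):
            if b[n] > b[cur]:             # only steeper lines can overtake cur
                g = (a[cur] - a[n]) / (b[n] - b[cur])
                if gamma <= g < best_g:
                    best_g, best_n = g, n
        if best_n is None:
            return segs
        gamma, cur = best_g, best_n
        segs.append((gamma, cur))

def line_search(w0, d, feats, losses):
    """Exact corpus-error minimization along w(g) = w0 + g * d."""
    events, err = [], 0.0                 # err = corpus error as g -> -inf
    for h_s, e_s in zip(feats, losses):
        a, b = h_s @ w0, h_s @ d
        segs = envelope(a, b)
        err += float(e_s[segs[0][1]])
        for (g, n), (_, p) in zip(segs[1:], segs[:-1]):
            events.append((float(g), float(e_s[n] - e_s[p])))
    events.sort()
    best_err = err
    best_g = events[0][0] - 1.0 if events else 0.0
    for i, (g, delta) in enumerate(events):
        err += delta
        if err < best_err:                # pick a point inside the new region
            nxt = events[i + 1][0] if i + 1 < len(events) else g + 2.0
            best_err, best_g = err, 0.5 * (g + nxt)
    return best_g, best_err

# toy check: two sentences, two features
feats = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[0.5, 0.2], [0.1, 0.9]])]
losses = [np.array([0.3, 0.6]), np.array([0.7, 0.1])]
print(line_search(np.array([0.1, 0.1]), np.array([1.0, -1.0]), feats, losses))
```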
{
"text": "Such a hill-climbing algorithm in a non-convex space has no optimality guarantee: without a perfect direction finder, even a globally-exact line search may never encounter the global optimum. Coordinate ascent is often effective, though conjugate direction set finding algorithms, such as Powell's method (Powell, 1964; Press et al., 2007) , or even random directions may produce better results (Cer et al., 2008) . Random restarts, based on either uniform sampling or a random walk (Moore and Quirk, 2008) , increase the likelihood of finding a good solution. Since random restarts and random walks lead to better solutions and faster convergence, we incorporate them into our baseline system, which we refer to as 1D-MERT.",
"cite_spans": [
{
"start": 305,
"end": 319,
"text": "(Powell, 1964;",
"ref_id": "BIBREF29"
},
{
"start": 320,
"end": 339,
"text": "Press et al., 2007)",
"ref_id": "BIBREF30"
},
{
"start": 395,
"end": 413,
"text": "(Cer et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 494,
"end": 506,
"text": "Quirk, 2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unidimensional MERT",
"sec_num": "2"
},
{
"text": "Finding the global optimum of Eq. 1 is a difficult task, so we proceed in steps and first analyze the case where the tuning set contains only one sentence. This gives insight on how to solve the general case. With only one sentence, one of the two summations in Eq. 1 vanishes and one can exhaustively enumerate the N translations e 1,n (or e n for short) to find the one that yields the minimal task loss. The only difficulty with S = 1 is to know for each translation e n whether its feature vector h 1,n (or h n for short) can be maximized using any linear model. As we can see in Fig. 1(a) , some hypotheses can be maximized (e.g., h 1 , h 2 , and h 4 ), while others (e.g., h 3 and h 5 ) cannot. In geometric terminology, the former points are commonly called extreme points, and the latter are interior points. 3 The problem of exactly optimizing a single N -best list is closely related to the convex hull problem in computational geometry, for which generic solvers such as the QuickHull algorithm exist (Eddy, 1977; Bykat, 1978; Barber et al., 1996) . A first approach would be to construct the convex hull conv(h 1 . . . h N ) of the N -best list, then identify the point on the hull with lowest loss (h 1 in Fig. 1 ) and finally compute an optimal weight vector using hull points that share common facets with the ",
"cite_spans": [
{
"start": 817,
"end": 818,
"text": "3",
"ref_id": null
},
{
"start": 1012,
"end": 1024,
"text": "(Eddy, 1977;",
"ref_id": "BIBREF9"
},
{
"start": 1025,
"end": 1037,
"text": "Bykat, 1978;",
"ref_id": "BIBREF2"
},
{
"start": 1038,
"end": 1058,
"text": "Barber et al., 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 584,
"end": 593,
"text": "Fig. 1(a)",
"ref_id": null
},
{
"start": 1219,
"end": 1225,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "Figure 1: N -best list (h 1 . . . h N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "with associated losses (here, TER scores) for a single input sentence, whose convex hull is displayed with dotted lines in (a). For effective visualization, our plots use only two features (D = 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "While we can find a weight vector that maximizes h 1 (e.g., the w in (b)), no linear model can possibly maximize any of the points strictly inside the convex hull.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "optimal feature vector (h 2 and h 4 ). Unfortunately, this doesn't quite scale even with a single N -best list, since the best known convex hull algorithm runs in O(N D/2 +1 ) time (Barber et al., 1996) . 4 Algorithms presented in this paper assume that D is unrestricted, therefore we cannot afford to build any convex hull explicitly. Thus, we turn to linear programming (LP), for which we know algorithms (Karmarkar, 1984) that are polynomial in the number of dimensions and linear in the number of points, i.e., O(N T ), where T = D 3.5 . To check if point h i is extreme, we really only need to know whether we can define a half-space containing all points h 1 . . . h N , with h i lying on the hyperplane delimiting that halfspace, as shown in Fig. 1 (b) for h 1 . Formally, a vertex h i is optimal with respect to arg max i {w h i } if and only if the following constraints hold: 5",
"cite_spans": [
{
"start": 181,
"end": 202,
"text": "(Barber et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 205,
"end": 206,
"text": "4",
"ref_id": null
},
{
"start": 408,
"end": 425,
"text": "(Karmarkar, 1984)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 750,
"end": 756,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "w h i = y (2) w h j \u2264 y, for each j = i (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "w is orthogonal to the hyperplane defining the halfspace, and the intercept y defines its position. The 4 A convex hull algorithm polynomial in D is very unlikely. Indeed, the expected number of facets of high-dimensional convex hulls grows dramatically, and-assuming a uniform distribution of points, D = 10, and a sufficiently large N -the expected number of facets is approximately 10 6 N (Buchta et al., 1985) . In the worst case, the maximum number of facets of a convex hull is O(N D/2 / D/2 !) (Klee, 1966).",
"cite_spans": [
{
"start": 392,
"end": 413,
"text": "(Buchta et al., 1985)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "5 A similar approach for checking whether a given point is extreme is presented in http://www.ifor.math.ethz. ch/\u02dcfukuda/polyfaq/node22.html, but our method generates slightly smaller LPs. above equations represent a linear program (LP), which can be turned into canonical form maximize c w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "subject to Aw \u2264 b by substituting y with w h i in Eq. 3, by defining A = {a n,d } 1\u2264n\u2264N ;1\u2264d\u2264D with a n,d = h j,d \u2212 h i,d (where h j,d is the d-th element of h j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": ", and by setting b = (0, . . . , 0) = 0. The vertex h i is extreme if and only if the LP solver finds a non-zero vector w satisfying the canonical system. To ensure that w is zero only when h i is interior, we set c = h i \u2212 h \u00b5 , where h \u00b5 is a point known to be inside the hull (e.g., the centroid of the N -best list). 6 In the remaining of this section, we use this LP formulation in function LINOPTIMIZER(h i ; h 1 . . . h N ), which returns the weight vector\u0175 maximizing h i , or which returns 0 if h i is interior to conv(h 1 . . . h N ). We also use conv(h i ; h 1 . . . h N ) to denote whether h i is extreme with respect to this hull.",
"cite_spans": [
{
"start": 321,
"end": 322,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
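A sketch of this extremity test with an off-the-shelf LP solver (scipy.optimize.linprog). The box bound $|w_d| \le 1$ is our own addition to keep the homogeneous LP bounded; since the constraints of Eqs. 2-3 are scale-invariant, it does not change which points admit a maximizing direction.

```python
import numpy as np
from scipy.optimize import linprog

def lin_optimizer(i, H, tol=1e-9):
    """Return a weight vector maximizing H[i] over the N-best list H, or the
    zero vector if H[i] is interior to conv(H) (a sketch of LINOPTIMIZER).

    We maximize c.w with c = h_i - centroid, subject to (h_j - h_i).w <= 0
    for all j (Eqs. 2-3 in canonical form), plus the box bound |w_d| <= 1.
    """
    H = np.asarray(H, dtype=float)
    h_i = H[i]
    c = h_i - H.mean(axis=0)              # centroid is known to be interior
    A_ub = np.delete(H, i, axis=0) - h_i  # rows h_j - h_i, j != i
    b_ub = np.zeros(A_ub.shape[0])
    # linprog minimizes, so negate c; its default bounds are [0, inf), so we
    # override them with the symmetric box.
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(-1.0, 1.0), method="highs")
    w = res.x if res.success else np.zeros(H.shape[1])
    return w if c @ w > tol else np.zeros(H.shape[1])

# toy example in D = 2: the last point lies inside the triangle of the others
H = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [0.7, 0.7]])
for i in range(len(H)):
    w = lin_optimizer(i, H)
    print(i, "extreme" if np.any(w) else "interior", w)
```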
{
"text": "Algorithm 1: LP-MERT (for S = 1). input : sent.-level feature vectors H = {h 1 . . . h N } input : sent.-level task losses E 1 . . . E N , where E n := E(r 1 , e 1,n ) output : optimal weight vector\u0175 1 begin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "sort N -best list by increasing losses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "2 (i 1 . . . i N ) \u2190 INDEXSORT(E 1 . . . E N ) 3 for n \u2190 1 to N do find\u0175 maximizing i n -th element: 4\u0175 \u2190 LINOPTIMIZER(h in ; H) 5 if\u0175 = 0 then 6 return\u0175 7 return 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
{
"text": "An exact search algorithm for optimizing a single N -best list is shown above. It lazily enumerates feature vectors in increasing order of task loss, keeping only the extreme ones. Such a vertex h j is known to be on the convex hull, and the returned vector\u0175 maximizes it. In Fig. 1 , it would first run LINOPTIMIZER on h 3 , discard it since it is interior, and finally accept the extreme point h 1 . Each execution of LINOPTI-MIZER requires O(N T ) time with the interior point Figure 2 : Running times to exactly optimize N -best lists with an increasing number of dimensions. To determine which feature vectors were on the hull, we use either linear programming (Karmarkar, 1984) or one of the most efficient convex hull computation tools (Barber et al., 1996) . method of (Karmarkar, 1984) , and since the main loop may run O(N ) times in the worst case, time complexity is O(N 2 T ). Finally, Fig. 2 empirically demonstrates the effectiveness of a linear programming approach, which in practice is seldom affected by D.",
"cite_spans": [
{
"start": 666,
"end": 683,
"text": "(Karmarkar, 1984)",
"ref_id": "BIBREF12"
},
{
"start": 743,
"end": 764,
"text": "(Barber et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 777,
"end": 794,
"text": "(Karmarkar, 1984)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 276,
"end": 282,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 480,
"end": 488,
"text": "Figure 2",
"ref_id": null
},
{
"start": 899,
"end": 905,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multidimensional MERT",
"sec_num": "3"
},
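With such a test in hand, Algorithm 1 reduces to a short loop. The sketch below assumes a lin_optimizer(i, H) helper with the behaviour of LINOPTIMIZER (for instance, the LP-based sketch above); it is passed in as an argument rather than fixed.

```python
import numpy as np

def lp_mert_single(H, E, lin_optimizer):
    """Algorithm 1 (S = 1): return a weight vector maximizing the lowest-loss
    hypothesis that is extreme, or the zero vector if none is.

    H : (N, D) array of feature vectors h_1 .. h_N
    E : (N,)   array of task losses E_n = E(r_1, e_{1,n})
    """
    H, E = np.asarray(H, dtype=float), np.asarray(E, dtype=float)
    for i in np.argsort(E):               # INDEXSORT: increasing task loss
        w = lin_optimizer(int(i), H)      # try to maximize the i-th hypothesis
        if np.any(w):                     # first extreme hypothesis wins
            return w
    return np.zeros(H.shape[1])
```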
{
"text": "We now extend LP-MERT to the general case, in which we are optimizing multiple sentences at once. This creates an intricate optimization problem, since the inner summations over n = 1 . . . N in Eq. 1 can't be optimized independently. For instance, the optimal weight vector for sentence s = 1 may be suboptimal with respect to sentence s = 2. So we need some means to determine whether a selection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact search: general case",
"sec_num": "3.1"
},
{
"text": "m = m(1) . . . m(S) \u2208 M = [1, N ] S of feature vectors h 1,m(1) . . . h S,m(S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact search: general case",
"sec_num": "3.1"
},
{
"text": "is extreme, that is, whether we can find a weight vector that maximizes each h s,m(s) . Here is a reformulation of Eq. 1 that makes this condition on extremity more explicit:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact search: general case",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m = arg min conv(h[m];H) m\u2208M S s=1 E(r s , e s,m(n) )",
"eq_num": "(4)"
}
],
"section": "Exact search: general case",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact search: general case",
"sec_num": "3.1"
},
{
"text": "h[m] = S s=1 h s,m(s) H = m \u2208M h[m ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact search: general case",
"sec_num": "3.1"
},
{
"text": "One na\u00efve approach to address this optimization problem is to enumerate all possible combinations among the S distinct N -best lists, determine for each combination m whether h[m] is extreme, and return the extreme combination with lowest total loss. It is evident that this approach is optimal (since it follows directly from Eq. 4), but it is prohibitively slow since it processes O(N S ) vertices to determine whether they are extreme, which thus requires O(N S T ) time per LP optimization and O(N 2S T ) time in total. We now present several improvements to make this approach more practical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact search: general case",
"sec_num": "3.1"
},
{
"text": "In the na\u00efve approach presented above, each LP computation to evaluate conv(h[m]; H) requires O(N S T ) time since H contains N S vertices, but we show here how to reduce it to O(N ST ) time. This improvement exploits the fact that we can eliminate the majority of the N S points of H, since only S(N \u2212 1) + 1 are really needed to determine whether h[m] is extreme. This is best illustrated using an example, as shown in Fig. 3 . Both h 1,1 and h 2,1 in (a) and (b) are extreme with respect to their own N -best list, and we ask whether we can find a weight vector that maximizes both h 1,1 and h 2,1 . The algorithmic trick is to geometrically translate one of the two N -best lists so that h 1,1 = h 2,1 , where h 2,1 is the translation of h 2,1 . Then we use linear programming with the new set of 2N \u2212 1 points, as shown in (c), to determine whether h 1,1 is on the hull, in which case the answer to the original question is yes. In the case of the combination of h 1,1 and h 2,2 , we see in (d) that the combined set of points prevents the maximization h 1,1 , since this point is clearly no longer on the hull. Hence, the combination (h 1,1 ,h 2,2 ) cannot be maximized using any linear model. This trick generalizes to S \u2265 2. In both (c) and (d), we used S(N \u2212 1) + 1 points instead of N S to determine whether a given point is extreme. We show in the appendix that this simplification does not sacrifice optimality.",
"cite_spans": [],
"ref_spans": [
{
"start": 421,
"end": 427,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sparse hypothesis combination",
"sec_num": "3.1.1"
},
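A sketch of the translation trick: each partial solution keeps a chosen point and its constraint set, and combining two of them only requires shifting each constraint set by the other side's chosen point. Names are illustrative; the duplicate occurrence of the combined point is harmless for the subsequent LP.

```python
import numpy as np

def combine(h1, H1, h2, H2):
    """Sparse hypothesis combination (Section 3.1.1, Fig. 3).

    (h1, H1) and (h2, H2) are a chosen point and its constraint points from
    two partial solutions. The combined point is h1 + h2, and every
    constraint point of one side is translated by the chosen point of the
    other side, giving roughly len(H1) + len(H2) points instead of the
    full cross-product len(H1) * len(H2).
    """
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    H1, H2 = np.asarray(H1, dtype=float), np.asarray(H2, dtype=float)
    combined = h1 + h2
    constraints = np.vstack([H1 + h2, H2 + h1])   # h1 + h2 appears twice
    return combined, constraints

# the pair (h1, h2) is jointly maximizable iff `combined` is extreme with
# respect to `constraints`, which can be checked with the LP sketch above.
```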
{
"text": "Now that we can determine whether a given combination is extreme, we must next enumerate candidate combinations to find the combination that has lowest task loss among all of those that are extreme. Since the number of feature vector combinations is O(N S ), exhaustive enumeration is not a reasonable",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "Figure 3: Given two N -best lists, (a) and (b), we use linear programming to determine which hypothesis combinations are extreme. For instance, the combination h 1,1 and h 2,1 is extreme (c), while h 1,1 and h 2,2 is not (d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "option. Instead, we use lazy enumeration to process combinations in increasing order of task loss, which ensures that the first extreme combination for s = 1 . . . S that we encounter is the optimal one. An S-ary lazy enumeration would not be particularly efficient, since the runtime is still O(N S ) in the worst case. LP-MERT instead uses divide-and-conquer and binary lazy enumeration, which enables us to discard early on combinations that are not extreme. For instance, if we find that (h 1,1 ,h 2,2 ) is interior for sentences s = 1, 2, the divide-and-conquer branch for s = 1 . . . 4 never actually receives this bad combination from its left child, thus avoiding the cost of enumerating combinations that are known to be interior, e.g., (h 1,1 ,h 2,2 , h 3,1 ,h 4,1 ) . The LP-MERT algorithm for the general case is shown as Algorithm 2. It basically only calls a recursive divide-and-conquer function (GETNEXTBEST) for sentence range 1 . . . S. The latter function uses binary lazy enumeration in a manner similar to (Huang and Chiang, 2005) , and relies on two global variables: I and L. The first of these, I, is used to memoize the results of calls to GETNEXTBEST; given a range of sentences and a rank n, it stores the nth best combination for that range of sentences. The global variable L stores hypotheses combination matrices, one matrix for each range of sentences (s, t) as shown in ",
"cite_spans": [
{
"start": 1027,
"end": 1051,
"text": "(Huang and Chiang, 2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 746,
"end": 776,
"text": "(h 1,1 ,h 2,2 , h 3,1 ,h 4,1 )",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "E = {E s,n } 1\u2264s\u2264S;1\u2264n\u2264N ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "where sent.-level costs E s,n := E(r s , e s,n ) output : optimal weight vector\u0175 and its loss L 1 begin sort N -best lists by increasing losses: Fig. 4 , to determine which combination to try next. The function EXPANDFRONTIER returns the indices of unvisited cells that are adjacent (right or down) to visited cells and that might correspond to the next best hypothesis. Once no more cells need to be added to the frontier, LP-MERT identifies the lowest loss combination on the frontier (BESTINFRONTIER), and uses LP to determine whether it is extreme. To do so, it first generates an LP using COMBINE, a function that implements the method described in Fig. 3 . If the LP offers no solution, this combination is ignored. LP-MERT iterates until it finds a cell entry whose combination is extreme. Regarding ranges of length one (s = t), lines 3-10 are similar to Algorithm 1 for S = 1, but with one difference: GETNEXTBEST may be called multiple times with the same argument s, since the first output of GETNEXTBEST might not be extreme when combined with other feature vectors. Lines 3-10 of GETNEXTBEST handle this case efficiently, since the algorithm resumes at the (n + 1)-th Function GetNextBest(H,E,s,t)",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 151,
"text": "Fig. 4",
"ref_id": "FIGREF2"
},
{
"start": 654,
"end": 660,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "2 for s \u2190 1 to S do 3 (i s,1 ..i s,N ) \u2190 INDEXSORT(E s,1 ..E s,N ) find best hypothesis combination for 1 . . . S: 4 (h * , H * , L) \u2190 GETNEXTBEST(H, E, 1, S) 5\u0175 \u2190 LINOPTIMIZER(h * ; H * ) 6 return (\u0175, L)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "input : sentence range (s, t) output : h * : current best extreme vertex output : H * : constraint vertices output :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "L: task loss of h * Losses of partial hypotheses: 1 L \u2190 L[s, t] 2 if s = t then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "n is the index where we left off last time:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "3 n \u2190 NBROWS(L) 4 H s \u2190 {h s,1 . . . h s,N } 5 repeat 6 n \u2190 n + 1 7\u0175 \u2190 LINOPTIMIZER(h s,in ; H s ) 8 L[n, 1] \u2190 E s,in 9 until\u0175 = 0 10 return (h s,in , H s , L[n, 1]) 11 else 12 u \u2190 (s + t)/2 , v \u2190 u + 1 13 repeat 14 while HASINCOMPLETEFRONTIER(L) do 15 (m, n) \u2190 EXPANDFRONTIER(L) 16 x \u2190 NBROWS(L) 17 y \u2190 NBCOLUMNS(L) 18 for m \u2190 x + 1 to m do 19 I[s, u, m ] \u2190 GETNEXTBEST(H, E, s, u) 20 for n \u2190 y + 1 to n do 21 I[v, t, n ] \u2190 GETNEXTBEST(H, E, v, t) 22 L[m, n] \u2190 LOSS(I[s, u, m])+LOSS(I[v, t, n]) 23 (m, n) \u2190 BESTINFRONTIER(L) 24 (h m , H m , L m ) \u2190 I[s, u, m] 25 (h n , H n , L n ) \u2190 I[v, t, n] 26 (h * , H * ) \u2190 COMBINE(h m , H m , h n , H n ) 27\u0175 \u2190 LINOPTIMIZER(h * ; H * ) 28 until\u0175 = 0 29 return (h * , H * , L[m, n])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
{
"text": "element of the N -best list (where n is the position where the previous execution left off). 7 We can see that a strength of this algorithm is that inconsistent combinations are deleted as soon as possible, which allows us to discard fruitless candidates en masse.",
"cite_spans": [
{
"start": 93,
"end": 94,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lazy enumeration, divide-and-conquer",
"sec_num": "3.1.2"
},
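The frontier bookkeeping can be illustrated, in simplified form, as lazy best-first enumeration over the grid of summed child losses, in the spirit of Huang and Chiang (2005). The sketch below assumes fully materialized, sorted child loss lists instead of the paper's lazily generated and memoized child streams, and it omits the LP check that decides whether a popped cell is actually kept.

```python
import heapq

def merge_two(left, right):
    """Lazily yield grid cells (m, n) over two ranked child streams in
    non-decreasing order of summed loss, expanding the frontier one
    right/down neighbour at a time.

    left, right : lists of losses, each sorted in increasing order
    (standing in for the ranked outputs of two recursive GETNEXTBEST calls).
    """
    heap = [(left[0] + right[0], 0, 0)]
    seen = {(0, 0)}
    while heap:
        loss, m, n = heapq.heappop(heap)
        yield loss, m, n                 # caller keeps this cell only if the
                                         # corresponding combination is extreme
        for mm, nn in ((m + 1, n), (m, n + 1)):
            if mm < len(left) and nn < len(right) and (mm, nn) not in seen:
                seen.add((mm, nn))
                heapq.heappush(heap, (left[mm] + right[nn], mm, nn))

# toy usage: best losses found so far for the two halves of a sentence range
for loss, m, n in merge_two([0.1, 0.4, 0.9], [0.2, 0.3]):
    print(loss, m, n)
```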
{
"text": "We will see in Section 5 that our exact algorithm is often too computationally expensive in practice to be used with either a large number of sentences or a large number of features. We now present two ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Search",
"sec_num": "3.2"
},
{
"text": "H i \u2190 H i + h 3 for i \u2190 1 to size(H ) do 4 H i \u2190 H i + h 5 return (h + h , H \u222a H )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Search",
"sec_num": "3.2"
},
{
"text": "approaches to make LP-MERT more scalable, with the downside that we may allow search errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Search",
"sec_num": "3.2"
},
{
"text": "In the first case, we make the assumption that we have an initial weight vector w 0 that is a reasonable approximation of\u0175, where w 0 may be obtained either by using a fast MERT algorithm like 1D-MERT, or by reusing the weight vector that is optimal with respect to the previous iteration of MERT. The idea then is to search only the set of weight vectors that satisfy cos(\u0175, w 0 ) \u2265 t, where t is a threshold on cosine similarity provided by the user. The larger the t, the faster the search, but at the expense of more search errors. This is implemented with two simple changes in our algorithm. First, LINOPTIMIZER sets the objective vector c = w 0 . Second, if the output w originally returned by LINOPTIMIZER does not satisfy cos(\u0175, w 0 ) \u2265 t, then it returns 0. While this modification of our algorithm may lead to search errors, it nevertheless provides some theoretical guarantee: our algorithm finds the global optimum if it lies within the region defined by cos(\u0175, w 0 ) \u2265 t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Search",
"sec_num": "3.2"
},
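A small sketch of the second change, i.e. rejecting LINOPTIMIZER's output when it leaves the cosine-similarity region around w_0; here w and w_0 are numpy vectors and t is the user-chosen threshold.

```python
import numpy as np

def restrict_to_cone(w, w0, t):
    """Return w if cos(w, w0) >= t, otherwise the zero vector (so that the
    candidate is treated exactly like an interior point)."""
    if not np.any(w):
        return np.zeros_like(w)
    cos = float(w @ w0) / (np.linalg.norm(w) * np.linalg.norm(w0))
    return w if cos >= t else np.zeros_like(w)
```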
{
"text": "The second method is a beam approximation of LP-MERT, which normally deals with linear programs that are increasingly large in the upper branches of GETNEXTBEST's recursive calls. The main idea is to prune the output of COMBINE (line 26) by model score with respect to w best , where w best is our current best model on the entire tuning set. Note that beam pruning can discard h * (the current best extreme vertex), in which case LINOPTIMIZER returns 0. w best is updated as follows: each time we produce a new non-zero\u0175, run w best \u2190\u0175 if\u0175 has a lower loss than w best on the entire tuning set. The idea of using a beam here is similar to using cosine similarity (since w best constrains the search towards a promising region), but beam pruning also helps reduce LP optimization time and thus enables us to explore a wider space. Since w best often improves during search, it is useful to run multiple iterations of LP-MERT until w best doesn't change. Two or three iterations suffice in our experience. In our experiments, we use a beam size of 1000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Search",
"sec_num": "3.2"
},
{
"text": "Our experiments in this paper focus on only the application of machine translation, though we believe that the current approach is agnostic to the particular system used to generate hypotheses. Both phrasebased systems (e.g., Koehn et al. (2007) ) and syntaxbased systems (e.g., Li et al. (2009) , Quirk et al. (2005) ) commonly use MERT to train free parameters. Our experiments use a syntax-directed translation approach (Quirk et al., 2005) : it first applies a dependency parser to the source language data at both training and test time. Multi-word translation mappings constrained to be connected subgraphs of the source tree are extracted from the training data; these provide most lexical translations. Partially lexicalized templates capturing reordering and function word insertion and deletion are also extracted. At runtime, these mappings and templates are used to construct transduction rules to convert the source tree into a target string. The best transduction is sought using approximate search techniques (Chiang, 2007) .",
"cite_spans": [
{
"start": 226,
"end": 245,
"text": "Koehn et al. (2007)",
"ref_id": "BIBREF14"
},
{
"start": 279,
"end": 295,
"text": "Li et al. (2009)",
"ref_id": null
},
{
"start": 298,
"end": 317,
"text": "Quirk et al. (2005)",
"ref_id": "BIBREF31"
},
{
"start": 423,
"end": 443,
"text": "(Quirk et al., 2005)",
"ref_id": "BIBREF31"
},
{
"start": 1024,
"end": 1038,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Each hypothesis is scored by a relatively standard set of features. The mappings contain five features: maximum-likelihood estimates of source given target and vice versa, lexical weighting estimates of source given target and vice versa, and a constant value that, when summed across a whole hypothesis, indicates the number of mappings used. For each template, we include a maximum-likelihood estimate of the target reordering given the source structure. The system may fall back to templates that mimic the source word order; the count of such templates is a feature. Likewise we include a feature to count the number of source words deleted by templates, and a feature to count the number of target words inserted by templates. The log probability of the target string according to a language models is also a feature; we add one such feature for each language model. We include the number of target words as features to balance hypothesis length. For the present system, we use the training data of WMT 2010 to construct and evaluate an English-to-44 German translation system. This consists of approximately 1.6 million parallel sentences, along with a much larger monolingual set of monolingual data. We train two language models, one on the target side of the training data (primarily parliamentary data), and the other on the provided monolingual data (primarily news). The 2009 test set is used as development data for MERT, and the 2010 one is used as test data. The resulting system has 13 distinct features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "The section evaluates both the exact and beam version of LP-MERT. Unless mentioned otherwise, the number of features is D = 13 and the N -best list size is 100. Translation performance is measured with a sentence-level version of BLEU-4 (Lin and Och, 2004) , using one reference translation. To enable legitimate comparisons, LP-MERT and 1D-MERT are evaluated on the same combined N -best lists, even though running multiple iterations of MERT with either LP-MERT or 1D-MERT would normally produce different combined N -best lists. We use WMT09 as tuning set, and WMT10 as test set. Before turning to large tuning sets, we first evaluate exact LP-MERT on data sizes that it can easily handle. Fig. 5 offers a comparison with 1D-MERT, for which we split the tuning set into 1,000 overlapping subsets for S = 2, 4, 8 on a combined N -best after five iterations of MERT with an average of 374 translation per sentence. The figure shows that LP-MERT never underperforms 1D-MERT in any of the 3,000 experiments, and this almost certainly confirms that Fig. 5 . LP-MERT with S = 8 checks only 600K full combinations on average, much less than the total number of combinations (which is more than 10 20 ). LP-MERT systematically finds the global optimum. In the case S = 1, Powell rarely makes search errors (about 15%), but the situation gets worse as S increases. For S = 4, it makes search errors in 90% of the cases, despite using 20 random starting points. Some combination statistics for S up to 8 are shown in Tab. 1. The table shows the speedup provided by LP-MERT is very substantial when compared to exhaustive enumeration. Note that this is using D = 13, and that pruning is much more effective with less features, a fact that is confirmed in Fig. 6 . D = 13 makes it hard to use a large tuning set, but the situation improves with D = 2 . . . 5. Fig. 7 displays execution times when LP-MERT constrains the output\u0175 to satisfy cos(w 0 ,\u0175) \u2265 t, where t is on the x-axis of the figure. The figure shows that we can scale to 1000 sentences when (exactly) searching within the region defined by cos(w 0 ,\u0175) \u2265 .84. All these running times would improve using parallel computing, since divide-andconquer algorithms are generally easy to parallelize.",
"cite_spans": [
{
"start": 237,
"end": 256,
"text": "(Lin and Och, 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 693,
"end": 699,
"text": "Fig. 5",
"ref_id": "FIGREF3"
},
{
"start": 1047,
"end": 1053,
"text": "Fig. 5",
"ref_id": "FIGREF3"
},
{
"start": 1747,
"end": 1753,
"text": "Fig. 6",
"ref_id": "FIGREF4"
},
{
"start": 1851,
"end": 1857,
"text": "Fig. 7",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We also evaluate the beam version of LP-MERT, which allows us to exploit tuning sets of reasonable ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "One-dimensional MERT has been very influential. It is now used in a broad range of systems, and has been improved in a number of ways. For instance, lattices or hypergraphs may be used in place of N -best lists to form a more comprehensive view of the search space with fewer decoding runs (Macherey et al., 2008; Kumar et al., 2009; Chatterjee and Cancedda, 2010) . This particular refinement is orthogonal to our approach, though. We expect to extend LP-MERT to hypergraphs in future work. Exact search may be challenging due to the computational complexity of the search space (Leusch et al., 2008) , but approximate search should be feasible. Other research has explored alternate methods of gradient-free optimization, such as the downhillsimplex algorithm (Nelder and Mead, 1965; Zens et al., 2007; Zhao and Chen, 2009) . Although the search space is different than that of Och's algorithm, it still relies on one-dimensional line searches to reflect, expand, or contract the simplex. Therefore, it suffers the same problems of one-dimensional MERT: feature sets with complex non-linear interactions are difficult to optimize. LP-MERT improves on these methods by searching over a larger subspace of parameter combinations, not just those on a single line.",
"cite_spans": [
{
"start": 290,
"end": 313,
"text": "(Macherey et al., 2008;",
"ref_id": "BIBREF20"
},
{
"start": 314,
"end": 333,
"text": "Kumar et al., 2009;",
"ref_id": "BIBREF15"
},
{
"start": 334,
"end": 364,
"text": "Chatterjee and Cancedda, 2010)",
"ref_id": "BIBREF4"
},
{
"start": 580,
"end": 601,
"text": "(Leusch et al., 2008)",
"ref_id": "BIBREF16"
},
{
"start": 762,
"end": 785,
"text": "(Nelder and Mead, 1965;",
"ref_id": "BIBREF25"
},
{
"start": 786,
"end": 804,
"text": "Zens et al., 2007;",
"ref_id": "BIBREF37"
},
{
"start": 805,
"end": 825,
"text": "Zhao and Chen, 2009)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We can also change the objective function in a number of ways to make it more amenable to optimization, leveraging knowledge from elsewhere in the machine learning community. Instance reweighting as in boosting may lead to better parameter inference (Duh and Kirchhoff, 2008) . Smoothing the objective function may allow differentiation and standard ML learning techniques (Och and Ney, 2002) . Smith and Eisner (2006) use a smoothed objective along with deterministic annealing in hopes of finding good directions and climbing past locally optimal points. Other papers use margin methods such as MIRA (Watanabe et al., 2007; Chiang et al., 2008) , updated somewhat to match the MT domain, to perform incremental training of potentially large numbers of features. However, in each of these cases the objective function used for training no longer matches the final evaluation metric.",
"cite_spans": [
{
"start": 250,
"end": 275,
"text": "(Duh and Kirchhoff, 2008)",
"ref_id": "BIBREF8"
},
{
"start": 373,
"end": 392,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF26"
},
{
"start": 395,
"end": 418,
"text": "Smith and Eisner (2006)",
"ref_id": "BIBREF32"
},
{
"start": 602,
"end": 625,
"text": "(Watanabe et al., 2007;",
"ref_id": "BIBREF35"
},
{
"start": 626,
"end": 646,
"text": "Chiang et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Our primary contribution is the first known exact search algorithm for direct loss minimization on Nbest lists in multiple dimensions. Additionally, we present approximations that consistently outperform standard one-dimensional MERT on a competitive machine translation system. While Och's method of MERT is generally quite successful, there are cases where it does quite poorly. A more global search such as LP-MERT lowers the expected risk of such poor solutions. This is especially important for current machine translation systems that rely heavily on MERT, but may also be valuable for other textual ap-plications. Recent speech recognition systems have also explored combinations of more acoustic and language models, with discriminative training of 5-10 features rather than one million (L\u00f6\u00f6f et al., 2010) ; LP-MERT could be valuable here as well.",
"cite_spans": [
{
"start": 795,
"end": 814,
"text": "(L\u00f6\u00f6f et al., 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The one-dimensional algorithm of Och (2003) has been subject to study and refinement for nearly a decade, while this is the first study of multidimensional approaches. We demonstrate the potential of multi-dimensional approaches, but we believe there is much room for improvement in both scalability and speed. Furthermore, a natural line of research would be to extend LP-MERT to compact representations of the search space, such as hypergraphs.",
"cite_spans": [
{
"start": 33,
"end": 43,
"text": "Och (2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "There are a number of broader implications from this research. For instance, LP-MERT can aid in the evaluation of research on MERT. This approach supplies a truly optimal vector as ground truth, albeit under limited conditions such as a constrained direction set, a reduced number of features, or a smaller set of sentences. Methods can be evaluated based on not only improvements over prior approaches, but also based on progress toward a global optimum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Divide-and-conquer. This does not sacrifice optimality, since if conv(h; H) is false, then conv(h; H \u222a G) is false for any set G. Proof: Assume conv(h; H) is false, so h is interior to H. By definition, any interior point h can be written as a linear combination of other points: h = i \u03bb i h i , with \u2200i(h i \u2208 H, h i = h, \u03bb i \u2265 0) and i \u03bb i = 1. This same combination of points also demonstrates that h is interior to H \u222a G, thus conv(h; H \u222a G) is false as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Sparse hypothesis combination. We show here that the simplification of linear programs in Section 3.1.1 from size O(N S ) to size O(N S) does not change the value of conv(h; H). More specifically, this means that linear optimization of the output of the COMBINE method at lines 26-27 of function GETNEXTBEST does not introduce any error. Let (g 1 . . . g U ) and (h 1 . . . h V ) be two N -best lists to be combined, then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "conv g u + h v ; U i=1 (g i + h v ) \u222a V j=1 (g u + h j ) = conv g u + h v ; U i=1 V j=1 (g i + h j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Proof: To prove this equality, it suffices to show that: (1) if g u +h v is interior wrt. the first conv binary predicate in the above equation, then it is interior wrt. the second conv, and (2) if g u +h v is interior wrt. the second conv, then it is interior wrt. the first conv. Claim (1) is evident, since the set of points in the first conv is a subset of the other set of points. Thus, we only need to prove (2). We first geometrically translate all points by \u2212g u \u2212h v . Since g u +h v is interior wrt. the second conv, we can write:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "0 = U i=1 V j=1 \u03bb i,j (g i + h j \u2212 g u \u2212 h v ) = U i=1 V j=1 \u03bb i,j (g i \u2212 g u ) + U i=1 V j=1 \u03bb i,j (h j \u2212 h v ) = U i=1 (g i \u2212 g u ) V j=1 \u03bb i,j + V j=1 (h j \u2212 h v ) U i=1 \u03bb i,j = U i=1 \u03bb i (g i \u2212 g u ) + V j=1 \u03bb U +j (h j \u2212 h v )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "where {\u03bb i } 1\u2264i\u2264U +V values are computed from {\u03bb i,j } 1\u2264i\u2264U,1\u2264j\u2264V as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "\u03bb i = j \u03bb i,j , i \u2208 [1, U ] and \u03bb U +j = i \u03bb i,j , j \u2208 [1, V ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Since the interior point is 0, \u03bb i values can be scaled so that they sum to 1 (necessary condition in the definition of interior points), which proves that the following predicate is false:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "conv 0; U i=1 (g i \u2212 g u ) \u222a V j=1 (h j \u2212 h v )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "which is equivalent to stating that the following is false:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "conv g u + h v ; U i=1 (g i + h v ) \u222a V j=1 (g u + h j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Note that MERT makes two types of approximations. First, the set of all possible outputs is represented only approximately, by N -best lists, lattices, or hypergraphs. Second, error functions on such representations are non-convex and previous work only offers approximate techniques to optimize them. Our work avoids the second approximation, while the first one is unavoidable when optimization and decoding occur in distinct steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A metric such as TER is decomposable by sentence. BLEU is not, but its sufficient statistics are, and the literature offers several sentence-level approximations of BLEU(Lin and Och, 2004;Liang et al., 2006).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Specifically, a point h is extreme with respect to a convex set C (e.g., the convex hull shown inFig. 1(a)) if it does not lie in an open line segment joining any two points of C. In a minor abuse of terminology, we sometimes simply state that a given point h is extreme when the nature of C is clear from context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We assume that h1 . . . hN are not degenerate, i.e., that they collectively span R D . Otherwise, all points are necessarily on the hull, yet some of them may not be uniquely maximized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each N -best list is augmented with a placeholder hypothesis with loss +\u221e. This ensures n never runs out of bounds at line 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One interesting observation is that the performance of 1D-MERT degrades as S grows from 2 to 8 (Fig. 5), which contrasts with the results shown in Tab. 2. This may have to do with the fact that N -best lists with S = 2 have much fewer local maxima than with S = 4, 8, in which case 20 restarts is generally enough.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Xiaodong He, Kristina Toutanova, and three anonymous reviewers for their valuable suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "In this appendix, we prove that LP-MERT (Algorithm 2) is exact. As noted before, the na\u00efve approach of solving Eq. 4 is to enumerate all O(N S ) hypotheses combinations in M, discard the ones that are not extreme, and return the best scoring one. LP-MERT relies on algorithmic improvements to speed up this approach, and we now show that none of them affect the optimality of the solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A: Proof of optimality",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The QuickHull algorithm for convex hulls",
"authors": [
{
"first": "C",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Bradford",
"middle": [],
"last": "Barber",
"suffix": ""
},
{
"first": "David",
"middle": [
"P"
],
"last": "Dobkin",
"suffix": ""
},
{
"first": "Hannu",
"middle": [],
"last": "Huhdanpaa",
"suffix": ""
}
],
"year": 1996,
"venue": "ACM Trans. Math. Softw",
"volume": "22",
"issue": "",
"pages": "469--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Bradford Barber, David P. Dobkin, and Hannu Huhdan- paa. 1996. The QuickHull algorithm for convex hulls. ACM Trans. Math. Softw., 22:469-483.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Stochastical approximation of convex bodies",
"authors": [
{
"first": "C",
"middle": [],
"last": "Buchta",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "R",
"middle": [
"F"
],
"last": "Tichy",
"suffix": ""
}
],
"year": 1985,
"venue": "Math. Ann",
"volume": "271",
"issue": "",
"pages": "225--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Buchta, J. Muller, and R. F. Tichy. 1985. Stochastical approximation of convex bodies. Math. Ann., 271:225- 235.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Convex hull of a finite set of points in two dimensions",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bykat",
"suffix": ""
}
],
"year": 1978,
"venue": "Inf. Process. Lett",
"volume": "7",
"issue": "6",
"pages": "296--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bykat. 1978. Convex hull of a finite set of points in two dimensions. Inf. Process. Lett., 7(6):296-298.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Regularization and search for minimum error rate training",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Dan Jurafsky, and Christopher D. Manning. 2008. Regularization and search for minimum error rate training. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 26-34.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Minimum error rate training by sampling the translation lattice",
"authors": [
{
"first": "Samidh",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Cancedda",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "606--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samidh Chatterjee and Nicola Cancedda. 2010. Min- imum error rate training by sampling the translation lattice. In Proceedings of the 2010 Conference on Em- pirical Methods in Natural Language Processing, pages 606-615. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Online large-margin training of syntactic and structural translation features",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimum error rate training based on N-best string models",
"authors": [
{
"first": "W",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
},
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. IEEE Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '93)",
"volume": "2",
"issue": "",
"pages": "652--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Chou, C. H. Lee, and B. H. Juang. 1993. Minimum error rate training based on N-best string models. In Proc. IEEE Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '93), pages 652-655, Vol. 2.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Beyond loglinear models: boosted minimum error rate training for programming N-best re-ranking",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers",
"volume": "",
"issue": "",
"pages": "37--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Duh and Katrin Kirchhoff. 2008. Beyond log- linear models: boosted minimum error rate training for programming N-best re-ranking. In Proceedings of the 46th Annual Meeting of the Association for Computa- tional Linguistics on Human Language Technologies: Short Papers, pages 37-40, Stroudsburg, PA, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A new convex hull algorithm for planar sets",
"authors": [
{
"first": "William",
"middle": [
"F"
],
"last": "Eddy",
"suffix": ""
}
],
"year": 1977,
"venue": "ACM Trans. Math. Softw",
"volume": "3",
"issue": "",
"pages": "398--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William F. Eddy. 1977. A new convex hull algorithm for planar sets. ACM Trans. Math. Softw., 3:398-403.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Better k-best parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "53--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2005. Better k-best pars- ing. In Proceedings of the Ninth International Work- shop on Parsing Technology, pages 53-64, Stroudsburg, PA, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Minimum classification error rate methods for speech recognition. Speech and Audio Processing",
"authors": [
{
"first": "Biing-Hwang",
"middle": [],
"last": "Juang",
"suffix": ""
},
{
"first": "Wu",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Chin-Hui",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on",
"volume": "5",
"issue": "3",
"pages": "257--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biing-Hwang Juang, Wu Hou, and Chin-Hui Lee. 1997. Minimum classification error rate methods for speech recognition. Speech and Audio Processing, IEEE Trans- actions on, 5(3):257-265.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A new polynomial-time algorithm for linear programming",
"authors": [
{
"first": "N",
"middle": [],
"last": "Karmarkar",
"suffix": ""
}
],
"year": 1984,
"venue": "Combinatorica",
"volume": "4",
"issue": "",
"pages": "373--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Karmarkar. 1984. A new polynomial-time algorithm for linear programming. Combinatorica, 4:373-395.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Convex polytopes and linear programming",
"authors": [
{
"first": "",
"middle": [],
"last": "Victor Klee",
"suffix": ""
}
],
"year": 1966,
"venue": "Proceedings of the IBM Scientific Computing Symposium on Combinatorial Problems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Klee. 1966. Convex polytopes and linear program- ming. In Proceedings of the IBM Scientific Computing Symposium on Combinatorial Problems.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [
"Birch"
],
"last": "Mayne",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch Mayne, Christopher Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL, Demonstration Session.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient minimum error rate training and minimum Bayes-risk decoding for translation hypergraphs and lattices",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "163--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate train- ing and minimum Bayes-risk decoding for translation hypergraphs and lattices. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, pages 163-171.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Complexity of finding the BLEU-optimal hypothesis in a confusion network",
"authors": [
{
"first": "Gregor",
"middle": [],
"last": "Leusch",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "839--847",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregor Leusch, Evgeny Matusov, and Hermann Ney. 2008. Complexity of finding the BLEU-optimal hy- pothesis in a confusion network. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 839-847, Stroudsburg, PA, USA. Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitke- vitch, Sanjeev Khudanpur, Lane Schwartz, Wren N. G. Thornton, Jonathan Weese, and Omar F. Zaidan. 2009. Joshua: an open source toolkit for parsing-based MT. In Proc. of WMT.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An end-to-end discriminative approach to machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2006,
"venue": "International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Liang, A. Bouchard-C\u00f4t\u00e9, D. Klein, and B. Taskar. 2006. An end-to-end discriminative approach to ma- chine translation. In International Conference on Com- putational Linguistics and Association for Computa- tional Linguistics (COLING/ACL).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "ORANGE: a method for evaluating automatic evaluation metrics for machine translation",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of the 20th international conference on Computational Linguistics, Stroudsburg, PA, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Discriminative adaptation for log-linear acoustic models",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "L\u00f6\u00f6f",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2010,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "1648--1651",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas L\u00f6\u00f6f, Ralf Schl\u00fcter, and Hermann Ney. 2010. Dis- criminative adaptation for log-linear acoustic models. In INTERSPEECH, pages 1648-1651.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Lattice-based minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "725--734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Macherey, Franz Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In Pro- ceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 725-734.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Direct loss minimization for structured prediction",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcallester",
"suffix": ""
},
{
"first": "Tamir",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems 23",
"volume": "",
"issue": "",
"pages": "1594--1602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McAllester, Tamir Hazan, and Joseph Keshet. 2010. Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems 23, pages 1594-1602.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 91-98.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Discriminative sentence compression with soft syntactic constraints",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "297--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald. 2006. Discriminative sentence compres- sion with soft syntactic constraints. In Proceedings of EACL, pages 297-304.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Random restarts in minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "585--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Moore and Chris Quirk. 2008. Random restarts in minimum error rate training for statistical machine translation. In Proceedings of the 22nd International Conference on Computational Linguistics -Volume 1, pages 585-592.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A simplex method for function minimization",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Nelder",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mead",
"suffix": ""
}
],
"year": 1965,
"venue": "Computer Journal",
"volume": "7",
"issue": "",
"pages": "308--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. A. Nelder and R. Mead. 1965. A simplex method for function minimization. Computer Journal, 7:308-313.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of the 40th Annual Meet- ing of the Association for Computational Linguistics, pages 295-302.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In Proc. of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. BLEU: a method for automatic evalu- ation of machine translation. In Proc. of ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "An efficient method for finding the minimum of a function of several variables without calculating derivatives",
"authors": [
{
"first": "M",
"middle": [
"J D"
],
"last": "Powell",
"suffix": ""
}
],
"year": 1964,
"venue": "Comput. J",
"volume": "7",
"issue": "2",
"pages": "155--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.J.D. Powell. 1964. An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput. J., 7(2):155-162.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Numerical Recipes: The Art of Scientific Computing",
"authors": [
{
"first": "William",
"middle": [
"H"
],
"last": "Press",
"suffix": ""
},
{
"first": "Saul",
"middle": [
"A"
],
"last": "Teukolsky",
"suffix": ""
},
{
"first": "William",
"middle": [
"T"
],
"last": "Vetterling",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"P"
],
"last": "Flannery",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William H. Press, Saul A. Teukolsky, William T. Vetter- ling, and Brian P. Flannery. 2007. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 3rd edition.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dependency treelet translation: syntactically informed phrasal SMT",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "271--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: syntactically informed phrasal SMT. In Proc. of ACL, pages 271-279.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Minimum risk annealing for training log-linear models",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions",
"volume": "",
"issue": "",
"pages": "787--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In Proceed- ings of the COLING/ACL on Main conference poster sessions, pages 787-794, Stroudsburg, PA, USA.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of AMTA",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc. of AMTA, pages 223-231.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Explicit word error minimization in N-best list rescoring",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Yochai",
"middle": [],
"last": "Knig",
"suffix": ""
},
{
"first": "Mitchel",
"middle": [],
"last": "Weintraub",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Eurospeech",
"volume": "",
"issue": "",
"pages": "163--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Yochai Knig, and Mitchel Weintraub. 1997. Explicit word error minimization in N-best list rescoring. In In Proc. Eurospeech, pages 163-166.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Online large-margin training for statistical machine translation",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statisti- cal machine translation. In EMNLP-CoNLL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Feasibility of human-in-the-loop minimum error rate training",
"authors": [
{
"first": "Omar",
"middle": [
"F"
],
"last": "Zaidan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "52--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar F. Zaidan and Chris Callison-Burch. 2009. Feasibil- ity of human-in-the-loop minimum error rate training. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1, pages 52-61.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A systematic comparison of training criteria for statistical machine translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Sasa",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "524--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Zens, Sasa Hasan, and Hermann Ney. 2007. A systematic comparison of training criteria for sta- tistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 524-532, Prague, Czech Republic.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A simplex Armijo downhill algorithm for optimizing statistical machine translation decoding parameters",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Shengyuan",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "21--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhao and Shengyuan Chen. 2009. A simplex Armijo downhill algorithm for optimizing statistical machine translation decoding parameters. In Proceedings of Human Language Technologies: The 2009 Annual Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics, Companion Vol- ume: Short Papers, pages 21-24.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"text": "LP-MERT minimizes loss (TER) on four sentences. O(N 4 ) translation combinations are possible, but the LP-MERT algorithm only tests two full combinations. Without divide-and-conquer-i.e., using 4-ary lazy enumeration-ten full combinations would have been checked unnecessarily.Algorithm 2: LP-MERT input : feature vectors H = {h s,n } 1\u2264s\u2264S;1\u2264n\u2264N input : task losses",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Line graph of sorted differences in BLEUn4r1[%] scores between LP-MERT and 1D-MERT on 1000 tuning sets of size S = 2, 4, 8. The highest differences for S = 2, 4, 8 are respectively 23.3, 19.7, 13.1.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Effect of the number of features (runtime on 1 CPU of a modern computer). Each curve represents a different number of tuning sentences.",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Effect of a constraint on w (runtime on 1 CPU",
"uris": null,
"num": null
},
"FIGREF6": {
"type_str": "figure",
"text": "Divide-and-conquer in Algorithm 2 discards any partial hypothesis combination h[m(j) . . . m(k)] if it is not extreme, even before considering any extension h[m(i) . . . m(j) . . . m(k) . . . m(l)].",
"uris": null,
"num": null
},
"TABREF0": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">{h 31 , h 41 }</td><td>{h 32 , h 41 }</td><td/><td/></tr><tr><td/><td colspan=\"2\">{h 11 , h 23 }</td><td colspan=\"2\">126.0</td><td>126.5</td><td/><td/></tr><tr><td/><td colspan=\"2\">{h 12 , h 21 }</td><td colspan=\"2\">126.1</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">h 21 h 22 h 23</td><td>h 24</td><td/><td>h 41 h 42</td><td>h 23</td><td>Combinations discarded:</td></tr><tr><td>h 11 h 12</td><td>69.1 69.2 69.3 69.4</td><td>69.2 70.0</td><td>69.9</td><td>h 31 h 32</td><td>56.8 57.1 57.3 57.6</td><td>57.9</td><td>{h 11 , h 21 , h 31 , h 41 } {h 12 , h 22 , h 31 , h 41 } {h 12 , h 12 , h 31 , h 42 }</td></tr><tr><td>h 13</td><td colspan=\"2\">L[1,2]</td><td/><td>h 33</td><td colspan=\"2\">L[3,4]</td><td>(and 7 others)</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "Combinations checked: {h 11 , h 23 , h 31 , h 41 } {h 12 , h 21 , h 31 , h 41 }"
},
"TABREF1": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Function Combine(h, H, h , H ) input : H, H : constraint vertices input : h, h : extreme vertices, wrt. H and H output : h * , H * : combination as in Sec. 3.1.1 1 for i \u2190 1 to size(H) do 2"
},
"TABREF4": {
"content": "<table><tr><td>: BLEUn4r1[%] scores for English-German on WMT09 for tuning sets ranging from 32 to 1024 sentences.</td></tr><tr><td>size. Results are displayed in Table 2. The gains</td></tr><tr><td>are fairly substantial, with gains of 0.5 BLEU point or more in all cases where S \u2264 512. 8 Finally, we perform an end-to-end MERT comparison, where</td></tr><tr><td>both our algorithm and 1D-MERT are iteratively used</td></tr><tr><td>to generate weights that in turn yield new N -best lists.</td></tr><tr><td>Tuning on 1024 sentences of WMT10, LP-MERT</td></tr><tr><td>converges after seven iterations, with a BLEU score</td></tr><tr><td>of 16.21%; 1D-MERT converges after nine iterations,</td></tr><tr><td>with a BLEU score of 15.97%. Test set performance</td></tr><tr><td>on the full WMT10 test set for LP-MERT and 1D-</td></tr><tr><td>MERT are respectively 17.08% and 16.91%.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
}
}
}
}