|
{ |
|
"paper_id": "D13-1022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:41:55.174289Z" |
|
}, |
|
"title": "Optimal Beam Search for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "MIT CSAIL", |
|
"location": { |
|
"postCode": "02139", |
|
"settlement": "Cambridge", |
|
"region": "MA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yin-Wen", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "MIT CSAIL", |
|
"location": { |
|
"postCode": "02139", |
|
"settlement": "Cambridge", |
|
"region": "MA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Columbia University", |
|
"location": { |
|
"postCode": "10027", |
|
"settlement": "New York", |
|
"region": "NY", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Beam search is a fast and empirically effective method for translation decoding, but it lacks formal guarantees about search error. We develop a new decoding algorithm that combines the speed of beam search with the optimal certificate property of Lagrangian relaxation, and apply it to phrase-and syntax-based translation decoding. The new method is efficient, utilizes standard MT algorithms, and returns an exact solution on the majority of translation examples in our test data. The algorithm is 3.5 times faster than an optimized incremental constraint-based decoder for phrase-based translation and 4 times faster for syntax-based translation.", |
|
"pdf_parse": { |
|
"paper_id": "D13-1022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Beam search is a fast and empirically effective method for translation decoding, but it lacks formal guarantees about search error. We develop a new decoding algorithm that combines the speed of beam search with the optimal certificate property of Lagrangian relaxation, and apply it to phrase-and syntax-based translation decoding. The new method is efficient, utilizes standard MT algorithms, and returns an exact solution on the majority of translation examples in our test data. The algorithm is 3.5 times faster than an optimized incremental constraint-based decoder for phrase-based translation and 4 times faster for syntax-based translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Beam search (Koehn et al., 2003) and cube pruning (Chiang, 2007) have become the de facto decoding algorithms for phrase-and syntax-based translation. The algorithms are central to large-scale machine translation systems due to their efficiency and tendency to produce high-quality translations (Koehn, 2004; Koehn et al., 2007; Dyer et al., 2010) . However despite practical effectiveness, neither algorithm provides any bound on possible decoding error.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 32, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 50, |
|
"end": 64, |
|
"text": "(Chiang, 2007)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 308, |
|
"text": "(Koehn, 2004;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 328, |
|
"text": "Koehn et al., 2007;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 347, |
|
"text": "Dyer et al., 2010)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work we present a variant of beam search decoding for phrase-and syntax-based translation. The motivation is to exploit the effectiveness and efficiency of beam search, but still maintain formal guarantees. The algorithm has the following benefits:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 In theory, it can provide a certificate of optimality; in practice, we show that it produces optimal hypotheses, with certificates of optimality, on the vast majority of examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 It utilizes well-studied algorithms and extends off-the-shelf beam search decoders.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Empirically it is very fast, results show that it is 3.5 times faster than an optimized incremental constraint-based solver.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While our focus is on fast decoding for machine translation, the algorithm we present can be applied to a variety of dynamic programming-based decoding problems. The method only relies on having a constrained beam search algorithm and a fast unconstrained search algorithm. Similar algorithms exist for many NLP tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We begin in Section 2 by describing constrained hypergraph search and showing how it generalizes translation decoding. Section 3 introduces a variant of beam search that is, in theory, able to produce a certificate of optimality. Section 4 shows how to improve the effectiveness of beam search by using weights derived from Lagrangian relaxation. Section 5 puts everything together to derive a fast beam search algorithm that is often optimal in practice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Experiments compare the new algorithm with several variants of beam search, cube pruning, A * search, and relaxation-based decoders on two translation tasks. The optimal beam search algorithm is able to find exact solutions with certificates of optimality on 99% of translation examples, significantly more than other baselines. Additionally the optimal beam search algorithm is much faster than other exact methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The focus of this work is decoding for statistical machine translation. Given a source sentence, the goal is to find the target sentence that maximizes a combination of translation model and language model scores. In order to analyze this decoding problem, we first abstract away from the specifics of translation into a general form, known as a hypergraph. In this section, we describe the hypergraph formalism and its relation to machine translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Throughout the paper, scalars and vectors are written in lowercase, matrices are written in uppercase, and sets are written in script-case, e.g. X . All vectors are assumed to be column vectors. The function \u03b4(j) yields an indicator vector with \u03b4(j) j = 1 and \u03b4(j) i = 0 for all i = j.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A directed hypergraph is a pair (V, E) where V = {1 . . . |V|} is a set of vertices, and E is a set of directed hyperedges. Each hyperedge e \u2208 E is a tuple", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "v 2 , . . . , v |v| , v 1 where v i \u2208 V for i \u2208 {1 . . . |v|}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The head of the hyperedge is h(e) = v 1 . The tail of the hyperedge is the ordered sequence t(e) = v 2 , . . . , v |v| . The size of the tail |t(e)| may vary across different hyperedges, but |t(e)| \u2265 1 for all edges and is bounded by a constant. A directed graph is a directed hypergraph with |t(e)| = 1 for all edges e \u2208 E.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Each vertex v \u2208 V is either a non-terminal or a terminal in the hypergraph. The set of non-terminals is N = {v \u2208 V : h(e) = v for some e \u2208 E}. Conversely, the set of terminals is defined as T = V \\N .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "All directed hypergraphs used in this work are acyclic: informally this implies that no hyperpath (as defined below) contains the same vertex more than once (see Martin et al. (1990) for a full definition). Acyclicity implies a partial topological ordering of the vertices. We also assume there is a distinguished root vertex 1 where for all e \u2208 E, 1 \u2208 t(e).", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 182, |
|
"text": "Martin et al. (1990)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Next we define a hyperpath as x \u2208 {0, 1} |E| where x(e) = 1 if hyperedge e is used in the hyperpath, Figure 1 : Dynamic programming algorithm for unconstrained hypergraph search. Note that this version only returns the highest score: max x\u2208X \u03b8 x + \u03c4 . The optimal hyperpath can be found by including back-pointers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 109, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "procedure BESTPATHSCORE(\u03b8, \u03c4 ) \u03c0[v] \u2190 0 for all v \u2208 T for e \u2208 E in topological order do v 2 , . . . , v |v| , v 1 \u2190 e s \u2190 \u03b8(e) + |v| i=2 \u03c0[v i ] if s > \u03c0[v 1 ] then \u03c0[v 1 ] \u2190 s return \u03c0[1] + \u03c4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "x(e) = 0 otherwise. The set of valid hyperpaths is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "X = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 x : e\u2208E:h(e)=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "x(e) = 1, e:h(e)=v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "x(e) = e:v\u2208t(e)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "x(e) \u2200 v \u2208 N \\ {1} \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The first problem we consider is unconstrained hypergraph search. Let \u03b8 \u2208 R |E| be the weight vector for the hypergraph and let \u03c4 \u2208 R be a weight offset. 1 The unconstrained search problem is to find This maximization can be computed for any weights and directed acyclic hypergraph in time O(|E|) using dynamic programming. Figure 1 shows this algorithm which is simply a version of the CKY algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 155, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 324, |
|
"end": 332, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
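
{

"text": "To make the recursion concrete, here is a minimal Python sketch of the BESTPATHSCORE dynamic program from Figure 1; the (head, tail, weight) edge encoding, the terminals argument, and the vertex-1-as-root convention are illustrative assumptions of ours, not details from the paper.\n\ndef best_path_score(edges, terminals, tau=0.0):\n    # edges: (head, tail, weight) triples in topological order\n    pi = {v: 0.0 for v in terminals}  # pi[v] = best inside score at v\n    for head, tail, weight in edges:\n        s = weight + sum(pi[v] for v in tail)\n        if s > pi.get(head, float(\"-inf\")):\n            pi[head] = s\n    return pi[1] + tau  # vertex 1 is the distinguished root\n\n# Tiny example: terminal {2}, edges 2 -> 3 -> 1 (a directed graph).\nedges = [(3, [2], 1.0), (1, [3], 2.0)]\nprint(best_path_score(edges, terminals={2}))  # prints 3.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hypergraphs and Search",

"sec_num": "2.2"

},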
|
{ |
|
"text": "Next consider a variant of this problem: constrained hypergraph search. Constraints will be necessary for both phrase-and syntax-based decoding. In phrase-based models, the constraints will ensure that each source word is translated exactly once. In syntax-based models, the constraints will be used to intersect a translation forest with a language model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In the constrained hypergraph problem, hyperpaths must fulfill additional linear hyperedge constraints. Define the set of constrained hyperpaths as X = {x \u2208 X : Ax = b} where we have a constraint matrix A \u2208 R |b|\u00d7|E| and vector b \u2208 R |b| encoding |b| constraints. The optimal constrained hyperpath is x * = arg max x\u2208X \u03b8 x + \u03c4 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Note that the constrained hypergraph search problem may be NP-Hard. Crucially this is true even when the corresponding unconstrained search problem is solvable in polynomial time. For instance, phrase-based decoding is known to be NP-Hard (Knight, 1999) , but we will see that it can be expressed as a polynomial-sized hypergraph with constraints.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 253, |
|
"text": "(Knight, 1999)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypergraphs and Search", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Consider translating a source sentence w 1 . . . w |w| to a target sentence in a language with vocabulary \u03a3. A simple phrase-based translation model consists of a tuple (P, \u03c9, \u03c3) with \u2022 P; a set of pairs (q, r) where q 1 . . . q |q| is a sequence of source-language words and r 1 . . . r |r| is a sequence of target-language words drawn from the target vocabulary \u03a3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 \u03c9 : R |P| ; parameters for the translation model mapping each pair in P to a real-valued score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 \u03c3 : R |\u03a3\u00d7\u03a3| ; parameters of the language model mapping a bigram of target-language words to a real-valued score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The translation decoding problem is to find the best derivation for a given source sentence. A derivation consists of a sequence of phrases p = p 1 . . . p n . Define a phrase as a tuple (q, r, j, k) consisting of a span in the source sentence q = w j . . . w k and a sequence of target words r 1 . . . r |r| , with (q, r) \u2208 P. We say the source words w j . . . w k are translated to r.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The score of a derivation, f (p), is the sum of the translation score of each phrase plus the language model score of the target sentence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f (p) = n i=1 \u03c9(q(p i ), r(p i )) + |u|+1 i=0 \u03c3(u i\u22121 , u i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where u is the sequence of words in \u03a3 formed by concatenating the phrases r(p 1 ) . . . r(p n ), with boundary cases u 0 = <s> and u |u|+1 = </s>.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
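
{

"text": "As an illustration of this scoring function, the following Python sketch computes f(p) for a list of (q, r, j, k) phrase tuples; representing \u03c9 and \u03c3 as dictionaries keyed by phrase pairs and bigrams is our assumption, not the paper's.\n\ndef derivation_score(phrases, omega, sigma):\n    # Concatenate the target phrases and add the boundary symbols.\n    u = [\"<s>\"] + [w for (q, r, j, k) in phrases for w in r] + [\"</s>\"]\n    # Translation model score of each phrase pair.\n    trans = sum(omega[(tuple(q), tuple(r))] for (q, r, j, k) in phrases)\n    # Bigram language model score over the full target sentence.\n    lm = sum(sigma[(u[i - 1], u[i])] for i in range(1, len(u)))\n    return trans + lm",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Example: Phrase-Based Machine Translation",

"sec_num": null

},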
|
{ |
|
"text": "Crucially for a derivation to be valid it must satisfy an additional condition: it must translate every source word exactly once. The decoding problem for phrase-based translation is to find the highestscoring derivation satisfying this property.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We can represent this decoding problem as a constrained hypergraph using the construction of Chang and Collins (2011) . The hypergraph weights encode the translation and language model scores, and its structure ensures that the count of source words translated is |w|, i.e. the length of the source sentence. Each vertex will remember the preceding target-language word and the count of source words translated so far.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 117, |
|
"text": "Chang and Collins (2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The hypergraph, which for this problem is also a directed graph, takes the following form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Vertices v \u2208 V are labeled (c, u) where c \u2208 {1 . . . |w|} is the count of source words translated and u \u2208 \u03a3 is the last target-language word produced by a partial hypothesis at this vertex. Additionally there is an initial terminal vertex labeled (0, <s>).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 There is a hyperedge e \u2208 E with head (c , u ) and tail (c, u) if there is a valid corresponding phrase (q, r, j, k) such that c = c + |q| and u = r |r| , i.e. c is the count of words translated and u is the last word of target phrase r. We call this phrase p(e).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The weight of this hyperedge, \u03b8(e), is the translation model score of the pair plus its language model score", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03b8(e) = \u03c9(q, r)+ \uf8eb \uf8ed |r| i=2 \u03c3(r i\u22121 , r i ) \uf8f6 \uf8f8 +\u03c3(u, r 1 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 To handle the end boundary, there are hyperedges with head 1 and tail (|w|, u) for all u \u2208 \u03a3. The weight of these edges is the cost of the stop bigram following u, i.e. \u03c3(u, </s>).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While any valid derivation corresponds to a hyperpath in this graph, a hyperpath may not correspond to a valid derivation. For instance, a hyperpath may translate some source words more than once or not at all. : Hypergraph for translating the sentence w = les 1 pauvres 2 sont 3 demunis 4 with set of pairs P = {(les, the), (pauvres, poor), (sont demunis, don't have any money)}. Hyperedges are color-coded by source words translated: orange for les 1 , green for pauvres 2 , and red for sont 3 demunis 4 . The dotted lines show an invalid hyperpath x that has signature Ax = 0, 0, 2, 2 = 1, 1, 1, 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We handle this problem by adding additional constraints. For all source words i \u2208 {1 . . . |w|}, define \u03c1 as the set of hyperedges that translate w i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03c1(i) = {e \u2208 E : j(p(e)) \u2264 i \u2264 k(p(e))}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Next define |w| constraints enforcing that each word in the source sentence is translated exactly once", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "e\u2208\u03c1(i) x(e) = 1 \u2200 i \u2208 {1 . . . |w|}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These linear constraints can be represented with a matrix A \u2208 {0, 1} |w|\u00d7|E| where the rows correspond to source indices and the columns correspond to edges. We call the product Ax the signature, where in this case (Ax) i is the number of times word i has been translated. The full set of constrained hyperpaths is X = {x \u2208 X : Ax = 1 }, and the best derivation under this phrase-based translation model has score max x\u2208X \u03b8 x + \u03c4 . Figure 2 .2 shows an example hypergraph with constraints for translating the sentence les pauvres sont demunis into English using a simple set of phrases. Even in this small example, many of the possible hyperpaths violate the constraints and correspond to invalid derivations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 440, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Example: Phrase-Based Machine Translation", |
|
"sec_num": null |
|
}, |
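
{

"text": "A small Python sketch of these constraints, assuming each hyperedge carries the source span (j, k) of its phrase p(e) and using a list-of-lists encoding of A; all names are illustrative.\n\ndef build_A(spans, n_words):\n    # spans[e] = (j, k), the source span translated by hyperedge e.\n    return [[1 if j <= i <= k else 0 for (j, k) in spans]\n            for i in range(1, n_words + 1)]\n\ndef signature(A, x):\n    # x is a 0/1 hyperedge indicator; (Ax)_i counts translations of word i.\n    return [sum(a * xe for a, xe in zip(row, x)) for row in A]\n\nA = build_A([(1, 1), (2, 2), (3, 4), (1, 2)], n_words=4)\nprint(signature(A, [1, 1, 1, 0]))  # [1, 1, 1, 1]: each word translated once",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Example: Phrase-Based Machine Translation",

"sec_num": null

},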
|
{ |
|
"text": "Syntax-based machine translation with a language model can also be expressed as a constrained hypergraph problem. For the sake of space, we omit the definition. See Rush and Collins (2011) for an indepth description of the constraint matrix used for syntax-based translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example: Syntax-Based Machine Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This section describes a variant of the beam search algorithm for finding the highest-scoring constrained hyperpath. The algorithm uses three main techniques: (1) dynamic programming with additional signature information to satisfy the constraints, (2) beam pruning where some, possibly optimal, hypotheses are discarded, and (3) branch-andbound-style application of upper and lower bounds to discard provably non-optimal hypotheses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Variant of Beam Search", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Any solution returned by the algorithm will be a valid constrained hyperpath and a member of X . Additionally the algorithm returns a certificate flag opt that, if true, indicates that no beam pruning was used, implying the solution returned is optimal. Generally it will be hard to produce a certificate even by reducing the amount of beam pruning; however in the next section we will introduce a method based on Lagrangian relaxation to tighten the upper bounds. These bounds will help eliminate most solutions before they trigger pruning. Figure 3 shows the complete beam search algorithm. At its core it is a dynamic programming algorithm filling in the chart \u03c0. The beam search chart indexes hypotheses by vertex v \u2208 V as well as a signature sig \u2208 R |b| where |b| is the number of constraints. A new hypothesis is constructed from each hyperedge and all possible signatures of tail nodes. We define the function SIGS to take the tail of an edge and re-turn the set of possible signature combinations", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 542, |
|
"end": 550, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Variant of Beam Search", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "SIGS(v 2 , . . . v |v| ) = |v| i=2 {sig : \u03c0[v i , sig] = \u2212\u221e}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where the product is the Cartesian product over sets. Line 8 loops over this entire set. 2 For hypothesis x, the algorithm ensures that its signature sig is equal to Ax. This property is updated on line 9.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The signature provides proof that a hypothesis is still valid. Let the function CHECK(sig) return true if the hypothesis can still fulfill the constraints. For example, in phrase-based decoding, we will define CHECK(sig) = (sig \u2264 1); this ensures that each word has been translated 0 or 1 times. This check is applied on line 11.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Unfortunately maintaining all signatures is inefficient. For example we will see that in phrasebased decoding the signature is a bit-string recording which source words have been translated; the number of possible bit-strings is exponential in the length of the sentence. The algorithm includes two methods for removing hypotheses, bounding and pruning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Bounding allows us to discard provably nonoptimal solutions. The algorithm takes as arguments a lower bound on the optimal score lb \u2264 \u03b8 x * + \u03c4 , and computes upper bounds on the outside score for all vertices v: ubs[v], i.e. an overestimate of the score for completing the hyperpath from v. If a hypothesis has score s, it can only be optimal if s + ubs[v] \u2265 lb. This bound check is performed on line 11.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Pruning removes weak partial solutions based on problem-specific checks. The algorithm invokes the black-box function, PRUNE, on line 13, passing it a pruning parameter \u03b2 and a vertex-signature pair. The parameter \u03b2 controls a threshold for pruning. For instance for phrase-based translation, it specifies a hard-limit on the number of hypotheses to retain. The function returns true if it prunes from the chart. Note that pruning may remove optimal hypotheses, so we set the certificate flag opt to false if the chart is modified.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1: procedure BEAMSEARCH(\u03b8, \u03c4, lb, \u03b2) 2: ubs \u2190 OUTSIDE(\u03b8, \u03c4 ) 3: opt \u2190 true 4: \u03c0[v, sig] \u2190 \u2212\u221e for all v \u2208 V, sig \u2208 R |b| 5: \u03c0[v, 0] \u2190 0 for all v \u2208 T 6: for e \u2208 E in topological order do 7: v 2 , . . . , v |v| , v 1 \u2190 e 8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "Output: lb resulting lower bound score opt certificate of optimality Figure 3 : A variant of the beam search algorithm. Uses dynamic programming to produce a lower bound on the optimal constrained solution and, possibly, a certificate of optimality. Function OUTSIDE computes upper bounds on outside scores. Function SIGS enumerates all possible tail signatures. Function CHECK identifies signatures that do not violate constraints. Bounds lb and ubs are used to remove provably non-optimal solutions. Function PRUNE, taking parameter \u03b2, returns true if it prunes hypotheses from \u03c0 that could be optimal.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 77, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
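
{

"text": "The following condensed Python sketch mirrors the Figure 3 recursion under simplifying assumptions of ours: edges arrive in topological order as (head, tail, weight, a_e) tuples with a_e = A\u03b4(e) stored as a tuple of counts, charts are plain dictionaries, and PRUNE is replaced by a hard cap of \u03b2 signatures per head vertex.\n\nfrom itertools import product\n\ndef beam_search(edges, terminals, b, tau, lb, ubs, check, beta):\n    opt, n = True, len(b)\n    pi = {v: {(0,) * n: 0.0} for v in terminals}  # pi[v][sig] = best score\n    for head, tail, weight, a_e in edges:\n        pi.setdefault(head, {})\n        # Enumerate all tail signature combinations (the SIGS product).\n        for sigs in product(*(list(pi.get(v, {})) for v in tail)):\n            sig = tuple(a_e[i] + sum(t[i] for t in sigs) for i in range(n))\n            s = weight + sum(pi[v][sg] for v, sg in zip(tail, sigs))\n            # Keep a hypothesis only if it improves the chart, remains\n            # feasible, and is not provably worse than the lower bound.\n            if (s > pi[head].get(sig, float(\"-inf\")) and check(sig)\n                    and s + ubs[head] >= lb):\n                pi[head][sig] = s\n                if len(pi[head]) > beta:  # simplified pruning step\n                    del pi[head][min(pi[head], key=pi[head].get)]\n                    opt = False  # any pruning voids the certificate\n    return pi.get(1, {}).get(tuple(b), float(\"-inf\")) + tau, opt",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm",

"sec_num": "3.1"

},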
|
{ |
|
"text": "This variant on beam search satisfies the following two properties (recall x * is the optimal constrained solution)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Property 3.1 (Primal Feasibility). The returned score lb lower bounds the optimal constrained score, that is lb \u2264 \u03b8 x * + \u03c4 . Property 3.2 (Dual Certificate). If beam search returns with opt = true, then the returned score is optimal, i.e. lb = \u03b8 x * + \u03c4 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "An immediate consequence of Property 3.1 is that the output of beam search, lb , can be used as the input lb for future runs of the algorithm. Furthermore,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "procedure PRUNE(\u03c0, v, sig, \u03b2) C \u2190 {(v , sig ) : ||sig || 1 = ||sig|| 1 , \u03c0[v , sig ] = \u2212\u221e} D \u2190 C \\ mBEST(\u03b2, C, \u03c0) \u03c0[v , sig ] \u2190 \u2212\u221e for all v , sig \u2208 D if D = \u2205 then return true else return false Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "Example: Phrase-based Beam Search. Recall that the constraints for phrase-based translation consist of a binary matrix A \u2208 {0, 1} |w|\u00d7|E| and vector b = 1. The value sig i is therefore the number of times source word i has been translated in the hypothesis. We define the predicate CHECK as CHECK(sig) = (sig \u2264 1) in order to remove hypotheses that already translate a source word more than once, and are therefore invalid. For this reason, phrase-based signatures are called bit-strings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A common beam pruning strategy is to group together items into a set C and retain a (possibly complete) subset. An example phrase-based beam pruner is given in Figure 4 . It groups together hypotheses based on ||sig i || 1 , i.e. the number of source words translated, and applies a hard pruning filter that retains only the \u03b2 highest-scoring items", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 168, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "3.1" |
|
}, |
|
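
{

"text": "A Python sketch of this pruner, assuming a flat chart dictionary keyed by (vertex, sig) tuples (a variant of the nested layout in the sketch above); names are illustrative.\n\ndef prune(pi, v, sig, beta):\n    # C: hypotheses with the same number of translated source words.\n    group = [key for key in pi if sum(key[1]) == sum(sig)]\n    group.sort(key=pi.get, reverse=True)  # mBEST: highest scores first\n    dropped = group[beta:]                # D = C \\ mBEST(beta, C, pi)\n    for key in dropped:\n        del pi[key]\n    return bool(dropped)                  # true iff the chart was modified",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm",

"sec_num": "3.1"

},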
|
{ |
|
"text": "Define the set O(v, x) to contain all outside edges of vertex v in hyperpath x (informally, hyperedges that do not have v as an ancestor). For all v \u2208 V, we set the upper bounds, ubs, to be the best unconstrained outside score", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Upper Bounds", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "ubs[v] = max x\u2208X :v\u2208x e\u2208O(v,x) \u03b8(e) + \u03c4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Upper Bounds", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This upper bound can be efficiently computed for all vertices using the standard outside dynamic programming algorithm. We will refer to this algorithm as OUTSIDE(\u03b8, \u03c4 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Upper Bounds", |
|
"sec_num": "3.2" |
|
}, |
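
{

"text": "A Python sketch of the outside recursion, assuming the inside chart pi from the earlier best_path_score sketch (extended to keep every vertex's inside score) and the same (head, tail, weight) edge encoding.\n\ndef outside(edges, pi, tau):\n    # ubs[v] = best score of completing a hyperpath from v, plus tau.\n    ubs = {v: float(\"-inf\") for v in pi}\n    ubs[1] = tau  # the root requires no completion\n    for head, tail, weight in reversed(edges):  # reverse topological order\n        inside = sum(pi[v] for v in tail)\n        for v in tail:\n            cand = ubs[head] + weight + inside - pi[v]\n            if cand > ubs[v]:\n                ubs[v] = cand\n    return ubs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Computing Upper Bounds",

"sec_num": "3.2"

},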
|
{ |
|
"text": "Unfortunately, as we will see, these upper bounds are often quite loose. The issue is that unconstrained outside paths are able to violate the constraints without being penalized, and therefore greatly overestimate the score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Upper Bounds", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Beam search produces a certificate only if beam pruning is never used. In the case of phrase-based translation, the certificate is dependent on all groups C having \u03b2 or less hypotheses. The only way to ensure this is to bound out enough hypotheses to avoid pruning. The effectiveness of the bounding inequality, s + ubs[v] < lb, in removing hypotheses is directly dependent on the tightness of the bounds. In this section we propose using Lagrangian relaxation to improve these bounds. We first give a brief overview of the method and then apply it to computing bounds. Our experiments show that this approach is very effective at finding certificates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding Tighter Bounds with Lagrangian Relaxation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In Lagrangian relaxation, instead of solving the constrained search problem, we relax the constraints and solve an unconstrained hypergraph problem with modified weights. Recall the constrained hypergraph problem: max", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "where \u03bb \u2208 R |b| is a vector of dual variables and define \u03b8 = \u03b8 \u2212 A \u03bb and \u03c4 = \u03c4 + \u03bb b. This maximization is over X , so for any value of \u03bb, L(\u03bb) can be calculated as BestPathScore(\u03b8 , \u03c4 ). Note that for all valid constrained hyperpaths x \u2208 X the term Ax\u2212b equals 0, which implies that these hyperpaths have the same score under the modified weights as under the original weights, \u03b8 x + \u03c4 = \u03b8 x+\u03c4 . This leads to the following two properties, \u03bb (k\u22121) ) if opt then return \u03bb (k) , ub, opt return \u03bb (K) , ub, opt Input: \u03b1 1 . . . \u03b1 K sequence of subgradient rates where x \u2208 X is the hyperpath computed within the max, Property 4.1 (Dual Feasibility). The value L(\u03bb) upper bounds the optimal solution, that is L(\u03bb) \u2265 \u03b8 x * + \u03c4 Property 4.1 states that L(\u03bb) always produces some upper bound; however, to help beam search, we want as tight a bound as possible: min \u03bb L(\u03bb).", |
|
"cite_spans": [ |
|
{ |
|
"start": 472, |
|
"end": 475, |
|
"text": "(k)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 498, |
|
"text": "(K)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 441, |
|
"end": 448, |
|
"text": "\u03bb (k\u22121)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "procedure LRROUND(\u03b1 k , \u03bb) x \u2190 arg max x\u2208X \u03b8 x + \u03c4 \u2212 \u03bb (Ax \u2212 b) \u03bb \u2190 \u03bb \u2212 \u03b1 k (Ax \u2212 b) opt \u2190 Ax = b ub \u2190 \u03b8 x + \u03c4 return \u03bb , ub, opt procedure LAGRANGIANRELAXATION(\u03b1) \u03bb (0) \u2190 0 for k in 1 . . . K do \u03bb (k) , ub, opt \u2190 LRROUND(\u03b1 k ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The Lagrangian relaxation algorithm, shown in Figure 5 , uses subgradient descent to find this minimum. The subgradient of L(\u03bb) is Ax \u2212 b where x is the argmax of the modified objective x = arg max x\u2208X \u03b8 x + \u03c4 . Subgradient descent iteratively solves unconstrained hypergraph search problems to compute these subgradients and updates \u03bb. See Rush and Collins (2012) for an extensive discussion of this style of optimization in natural language processing.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 54, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "4.1" |
|
}, |
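
{

"text": "A Python sketch of one subgradient round; argmax_path, which returns the best unconstrained hyperpath as a 0/1 vector under the supplied edge weights, is an assumed black box (it could be backed by the Figure 1 dynamic program).\n\ndef lr_round(alpha_k, lam, theta, tau, A, b, argmax_path):\n    m, n = len(b), len(theta)\n    # Penalized weights: theta' = theta - A^T lam, tau' = tau + lam^T b.\n    theta_p = [theta[e] - sum(A[i][e] * lam[i] for i in range(m))\n               for e in range(n)]\n    tau_p = tau + sum(l * bi for l, bi in zip(lam, b))\n    x = argmax_path(theta_p)  # unconstrained search under theta'\n    g = [sum(A[i][e] * x[e] for e in range(n)) - b[i] for i in range(m)]\n    lam_next = [l - alpha_k * gi for l, gi in zip(lam, g)]  # subgradient step\n    ub = sum(tp * xe for tp, xe in zip(theta_p, x)) + tau_p  # L(lam)\n    opt = all(gi == 0 for gi in g)  # Ax = b gives a certificate\n    return lam_next, ub, opt",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm",

"sec_num": "4.1"

},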
|
{ |
|
"text": "Example: Phrase-based Relaxation. For phrasebased translation, we expand out the Lagrangian to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "L(\u03bb) = max x\u2208X \u03b8 x + \u03c4 \u2212 \u03bb (Ax \u2212 b) = max x\u2208X e\u2208E \uf8eb \uf8ed \u03b8(e) \u2212 k(p(e)) i=j(p(e)) \u03bb i \uf8f6 \uf8f8 x(e) + \u03c4 + |s| i=1 \u03bb i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The weight of each edge \u03b8(e) is modified by the dual variables \u03bb i for each source word translated by the edge, i.e. if (q, r, j, k) = p(e), then the score is modified by k i=j \u03bb i . A solution under these weights may use source words multiple times or not at all. However if the solution uses each source word exactly once (Ax = 1), then we have a certificate and the solution is optimal.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For many problems, it may not be possible to satisfy Property 4.2 by running the subgradient algorithm alone. Yet even for these problems, applying subgradient descent will produce an improved estimate of the upper bound, min \u03bb L(\u03bb).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Utilizing Upper Bounds in Beam Search", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To utilize these improved bounds, we simply replace the weights in beam search and the outside algorithm with the modified weights from Lagrangian relaxation, \u03b8 and \u03c4 . Since the result of beam search must be a valid constrained hyperpath x \u2208 X , and for all x \u2208 X , \u03b8 x + \u03c4 = \u03b8 x + \u03c4 , this substitution does not alter the necessary properties of the algorithm; i.e. if the algorithm returns with opt equal to true, then the solution is optimal.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Utilizing Upper Bounds in Beam Search", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Additionally the computation of upper bounds now becomes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Utilizing Upper Bounds in Beam Search", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "ubs[v] = max x\u2208X :v\u2208x e\u2208O(v,x) \u03b8 (e) + \u03c4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Utilizing Upper Bounds in Beam Search", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "These outside paths may still violate constraints, but the modified weights now include penalty terms to discourage common violations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Utilizing Upper Bounds in Beam Search", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The optimality of the beam search algorithm is dependent on the tightness of the upper and lower bounds. We can produce better lower bounds by varying the pruning parameter \u03b2; we can produce better upper bounds by running Lagrangian relaxation. In this section we combine these two ideas and present a complete optimal beam search algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimal Beam Search", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our general strategy will be to use Lagrangian relaxation to compute modified weights and to use beam search over these modified weights to attempt to find an optimal solution. One simple method for doing this, shown at the top of Figure 6 , is to run in stages. The algorithm first runs Lagrangian relaxation to compute the best \u03bb vector. The algorithm then iteratively runs beam search using the parameter sequence \u03b2 k . These parameters allow the algorithm to loosen the amount of beam pruning. For example in phrase based pruning, we would raise the number of hypotheses stored per group until no beam pruning occurs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 239, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Optimal Beam Search", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A clear disadvantage of the staged approach is that it needs to wait until Lagrangian relaxation is completed before even running beam search. Often beam search will be able to quickly find an optimal solution even with good but non-optimal \u03bb. In other cases, beam search may still improve the lower bound lb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimal Beam Search", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "This motivates the alternating algorithm OPT-BEAM shown Figure 6 . In each round, the algorithm alternates between computing subgradients to tighten ubs and running beam search to maximize lb. In early rounds we set \u03b2 for aggressive beam pruning, and as the upper bounds get tighter, we loosen pruning to try to get a certificate. If at any point either a primal or dual certificate is found, the algorithm returns the optimal solution.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 64, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Optimal Beam Search", |
|
"sec_num": "5" |
|
}, |
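
{

"text": "A Python sketch of the alternating loop, reusing the illustrative lr_round above; run_beam stands in for beam search under the penalized weights implied by \u03bb and returns a (lower bound, certificate flag) pair.\n\ndef opt_beam(alphas, betas, theta, tau, A, b, argmax_path, run_beam):\n    lam = [0.0] * len(b)\n    lb = float(\"-inf\")\n    for alpha_k, beta_k in zip(alphas, betas):\n        lam, ub, opt = lr_round(alpha_k, lam, theta, tau, A, b, argmax_path)\n        if opt:\n            return ub  # dual certificate: ub equals the optimum\n        lb_k, opt = run_beam(lam, lb, beta_k)\n        lb = max(lb, lb_k)\n        if opt:\n            return lb  # beam search certificate: lb is optimal\n    return lb  # no certificate; best lower bound found",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Optimal Beam Search",

"sec_num": "5"

},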
|
{ |
|
"text": "Approximate methods based on beam search and cube-pruning have been widely studied for phrasebased (Koehn et al., 2003; Tillmann and Ney, 2003; Tillmann, 2006) and syntax-based translation models (Chiang, 2007; Huang and Chiang, 2007; Watanabe et al., 2006; Huang and Mi, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 119, |
|
"text": "(Koehn et al., 2003;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 143, |
|
"text": "Tillmann and Ney, 2003;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 159, |
|
"text": "Tillmann, 2006)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 210, |
|
"text": "(Chiang, 2007;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 234, |
|
"text": "Huang and Chiang, 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 257, |
|
"text": "Watanabe et al., 2006;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 277, |
|
"text": "Huang and Mi, 2010)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "There is a line of work proposing exact algorithms for machine translation decoding. Exact decoders are often slow in practice, but help quantify the errors made by other methods. Exact algorithms proposed for IBM model 4 include ILP (Germann et al., 2001) , cutting plane (Riedel and Clarke, 2009) , and multi-pass A* search (Och et al., 2001 ). Zaslavskiy et al. (2009) formulate phrase-based decoding as a traveling salesman problem (TSP) and use a TSP decoder. Exact decoding algorithms based on finite state transducers (FST) (Iglesias et al., 2009) have been studied on phrase-based models with limited reordering (Kumar and Byrne, 2005) . Exact decoding based on FST is also feasible for certain hierarchical grammars (de Gispert et al., 2010). Chang", |
|
"cite_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 256, |
|
"text": "(Germann et al., 2001)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 298, |
|
"text": "(Riedel and Clarke, 2009)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 343, |
|
"text": "(Och et al., 2001", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 371, |
|
"text": "Zaslavskiy et al. (2009)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 554, |
|
"text": "(Iglesias et al., 2009)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 643, |
|
"text": "(Kumar and Byrne, 2005)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "procedure OPTBEAMSTAGED(\u03b1, \u03b2) \u03bb, ub, opt \u2190LAGRANGIANRELAXATION(\u03b1) if opt then return ub \u03b8 \u2190 \u03b8 \u2212 A \u03bb \u03c4 \u2190 \u03c4 + \u03bb b lb (0) \u2190 \u2212\u221e for k in 1 . . . K do lb (k) , opt \u2190 BEAMSEARCH(\u03b8 , \u03c4 , lb (k\u22121) , \u03b2 k ) if opt then return lb (k) return max k\u2208{1...K} lb (k) procedure OPTBEAM(\u03b1, \u03b2) \u03bb (0) \u2190 0 lb (0) \u2190 \u2212\u221e for k in 1 . . . K do \u03bb (k) , ub (k) , opt \u2190 LRROUND(\u03b1 k , \u03bb (k\u22121) ) if opt then return ub (k) \u03b8 \u2190 \u03b8 \u2212 A \u03bb (k) \u03c4 \u2190 \u03c4 + \u03bb (k) b lb (k) , opt \u2190 BEAMSEARCH(\u03b8 , \u03c4 , lb (k\u22121) , \u03b2 k )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "if opt then return lb (k) return max k\u2208{1...K} lb (k) Input: \u03b1 1 . . . \u03b1 K sequence of subgradient rates \u03b2 1 . . . \u03b2 K sequence of pruning parameters Output: optimal constrained score or lower bound Figure 6 : Two versions of optimal beam search: staged and alternating. Staged runs Lagrangian relaxation to find the optimal \u03bb, uses \u03bb to compute upper bounds, and then repeatedly runs beam search with pruning sequence \u03b2 1 . . . \u03b2 k . Alternating switches between running a round of Lagrangian relaxation and a round of beam search with the updated \u03bb. If either produces a certificate it returns the result.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 25, |
|
"text": "(k)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 50, |
|
"end": 53, |
|
"text": "(k)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 207, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "and Collins (2011) and Rush and Collins (2011) develop Lagrangian relaxation-based approaches for exact machine translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Apart from translation decoding, this paper is closely related to work on column generation for NLP. and Belanger et al. (2012) relate column generation to beam search and produce exact solutions for parsing and tagging problems. The latter work also gives conditions for when beam search-style decoding is optimal.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 127, |
|
"text": "Belanger et al. (2012)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "To evaluate the effectiveness of optimal beam search for translation decoding, we implemented decoders for phrase-and syntax-based models. In this section we compare the speed and optimality of these decoders to several baseline methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For phrase-based translation we used a German-to-English data set taken from Europarl (Koehn, 2005) . We tested on 1,824 sentences of length at most 50 words. For experiments the phrase-based systems uses a trigram language model and includes standard distortion penalties. Additionally the unconstrained hypergraph includes further derivation information similar to the graph described in Chang and Collins (2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 99, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 414, |
|
"text": "Chang and Collins (2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup and Implementation", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "For syntax-based translation we used a Chineseto-English data set. The model and hypergraphs come from the work of Huang and Mi (2010) . We tested on 691 sentences from the newswire portion of the 2008 NIST MT evaluation test set. For experiments, the syntax-based model uses a trigram language model. The translation model is tree-tostring syntax-based model with a standard contextfree translation forest. The constraint matrix A is based on the constraints described by Rush and Collins (2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 134, |
|
"text": "Huang and Mi (2010)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 496, |
|
"text": "Rush and Collins (2011)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup and Implementation", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Our decoders use a two-pass architecture. The first pass sets up the hypergraph in memory, and the second pass runs search. When possible the baselines share optimized construction and search code.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup and Implementation", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "The performance of optimal beam search is dependent on the sequences \u03b1 and \u03b2. For the stepsize \u03b1 we used a variant of Polyak's rule (Polyak, 1987; Boyd and Mutapcic, 2007) , substituting the unknown optimal score for the last computed lower bound: \u03b1 k \u2190 ub (k) \u2212lb (k) ||Ax (k) \u2212b|| 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 146, |
|
"text": "(Polyak, 1987;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 171, |
|
"text": "Boyd and Mutapcic, 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 268, |
|
"text": "(k)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup and Implementation", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": ". We adjust the order of the pruning parameter \u03b2 based on a function \u00b5 of the current gap: \u03b2 k \u2190 10 \u00b5(ub (k) \u2212lb (k) ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
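
{

"text": "A Python sketch of these two schedules; the gap-to-exponent function mu is an assumption supplied by the caller (the paper does not specify \u00b5 here), and g is the current subgradient Ax^(k) \u2212 b, which is nonzero whenever the round is not already optimal.\n\ndef polyak_step(ub_k, lb_k, g):\n    # Polyak-style rate with lb substituted for the unknown optimum.\n    return (ub_k - lb_k) / sum(gi * gi for gi in g)\n\ndef beta_schedule(ub_k, lb_k, mu):\n    # Order-of-magnitude pruning parameter driven by the duality gap.\n    return 10 ** mu(ub_k - lb_k)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Setup and Implementation",

"sec_num": "7.1"

},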
|
{ |
|
"text": "Previous work on these data sets has shown that exact algorithms do not result in a significant increase in translation accuracy. We focus on the efficiency and model score of the algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The experiments compare optimal beam search (OPTBEAM) to several different decoding methods. For both systems we compare to: BEAM, the beam search decoder from Figure 3 using the original weights \u03b8 and \u03c4 , and \u03b2 \u2208 {100, 1000}; LR-TIGHT, Lagrangian relaxation followed by incre- Figure 7 : Two graphs from phrase-based decoding. Graph (a) shows the duality gap distribution for 1,824 sentences after 0, 5, and 10 rounds of LR. Graph (b) shows the % of certificates found for sentences with differing gap sizes and beam search parameters \u03b2. Duality gap is defined as, ub -(\u03b8 x * + \u03c4 ). mental tightening constraints, which is a reimplementation of Chang and Collins (2011) and Rush and Collins (2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 646, |
|
"end": 670, |
|
"text": "Chang and Collins (2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 675, |
|
"end": 698, |
|
"text": "Rush and Collins (2011)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 168, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 286, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline Methods", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "For phrase-based translation we compare with: MOSES-GC, the standard Moses beam search decoder with \u03b2 \u2208 {100, 1000} (Koehn et al., 2007) ; MOSES, a version of Moses without gap constraints more similar to BEAM (see Chang and Collins (2011) ); ASTAR, an implementation of A * search using original outside scores, i.e. OUTSIDE(\u03b8, \u03c4 ), and capped at 20,000,000 queue pops.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 136, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 239, |
|
"text": "Chang and Collins (2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Methods", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "For syntax-based translation we compare with: ILP, a general-purpose integer linear programming solver (Gurobi Optimization, 2013) and CUBEPRUNING, an approximate decoding method similar to beam search (Chiang, 2007) , tested with \u03b2 \u2208 {100, 1000}. Table 1 : Experimental results for translation experiments. Column time is the mean time per sentence in seconds, cert is the percentage of sentences solved with a certificate of optimality, exact is the percentage of sentences solved exactly, i.e. \u03b8 x + \u03c4 = \u03b8 x * + \u03c4 . Results are grouped by sentence length (group 1-10 is omitted for space).", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 216, |
|
"text": "(Chiang, 2007)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 255, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline Methods", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "is seven times faster than the decoder of Chang and Collins (2011) and 3.5 times faster then our reimplementation, LR-TIGHT. ASTAR performs poorly, taking lots of time on difficult sentences. BEAM runs quickly, but rarely finds an exact solution. MOSES without gap constraints is also fast, but less exact than OPTBEAM and unable to produce certificates. For syntax-based translation. OPTBEAM finds a certificate on 98.8% of solutions with an average time of 1.75 seconds per sentence, and is four times faster than LR-TIGHT. CUBE (100) is an order of magnitude faster, but is rarely exact on longer sentences. CUBE (1000) finds more exact solutions, but is comparable in speed to optimal beam search. BEAM performs better than in the phrasebased model, but is not much faster than OPTBEAM. Figure 7 .2 shows the relationship between beam search optimality and duality gap. Graph (a) shows how a handful of LR rounds can significantly tighten the upper bound score of many sentences. Graph (b) shows how beam search is more likely to find optimal solutions with tighter bounds. BEAM effectively uses 0 rounds of LR, which may explain why it finds so few optimal solutions compared to OPTBEAM. Table 2 breaks down the time spent in each part of the algorithm. For both methods, beam search has the most time variance and uses more time on longer sentences. For phrase-based sentences, Lagrangian relaxation is fast, and hypergraph construction dom- Table 2 : Distribution of time within optimal beam search, including: hypergraph construction, Lagrangian relaxation, and beam search. Mean is the percentage of total time. Median is the distribution over the median values for each row.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 66, |
|
"text": "Chang and Collins (2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 791, |
|
"end": 799, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1193, |
|
"end": 1200, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1448, |
|
"end": 1455, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "7.3" |
|
}, |
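{ |

"text": "The certificate logic connecting the two graphs can be stated compactly (a sketch under assumed variable names, not the paper's code): beam search supplies a feasible lower bound lb = \u03b8 x + \u03c4, Lagrangian relaxation supplies an upper bound ub = L(\u03bb), and\ndef has_certificate(lb, ub, tol=1e-6):\n    # a constrained solution whose score meets the dual bound is provably optimal\n    return ub - lb <= tol\nEach LR round that lowers ub in graph (a) therefore directly raises the chance that the beam solution's lb meets it in graph (b).", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "Experiments", |

"sec_num": "7.3" |

}, |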
|
{ |
|
"text": "inates. If not for this cost, OPTBEAM might be comparable in speed to MOSES (1000).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "In this work we develop an optimal variant of beam search and apply it to machine translation decoding. The algorithm uses beam search to produce constrained solutions and bounds from Lagrangian relaxation to eliminate non-optimal solutions. Results show that this method can efficiently find exact solutions for two important styles of machine translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The purpose of the offset will be clear in later sections. For this section, the value of \u03c4 can be taken as 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For simplicity we write this loop over the entire set. In practice it is important to use data structures to optimize lookup. SeeTillmann (2006) andHuang and Chiang (2005).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
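, |

{ |

"text": "As an illustration of the kind of data structure this footnote alludes to (a sketch under assumed names, not the optimizations of Tillmann (2006) or Huang and Chiang (2005); the attributes pair.start and pair.end are hypothetical), phrase pairs can be indexed by source span so each lookup avoids scanning the whole set:\nfrom collections import defaultdict\n\ndef build_phrase_index(pairs):\n    # map each source span (i, j) to the phrase pairs that cover it\n    index = defaultdict(list)\n    for pair in pairs:\n        index[(pair.start, pair.end)].append(pair)\n    return index\nA hypothesis extended at source position i then consults only index[(i, j)] for each feasible endpoint j, rather than looping over every pair.", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "", |

"sec_num": null |

} |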
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Map inference in chains using column generation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Belanger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1853--1861", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Belanger, Alexandre Passos, Sebastian Riedel, and Andrew McCallum. 2012. Map inference in chains using column generation. In NIPS, pages 1853-1861.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Exact decoding of phrase-based translation models through lagrangian relaxation", |
|
"authors": [ |
|
{ |
|
"first": "Yin-Wen", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yin-Wen Chang and Michael Collins. 2011. Exact de- coding of phrase-based translation models through la- grangian relaxation. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, pages 26-37. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Hierarchical phrase-based translation. computational linguistics", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "201--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. computational linguistics, 33(2):201-228.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Hierarchical Phrase-Based Translation with Weighted Finite-State Transducers and Shallow-n Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Adria", |
|
"middle": [], |
|
"last": "De Gispert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gonzalo", |
|
"middle": [], |
|
"last": "Iglesias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Blackwood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Banga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computational linguistics", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "505--533", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adria de Gispert, Gonzalo Iglesias, Graeme Blackwood, Eduardo R. Banga, and William Byrne. 2010. Hierar- chical Phrase-Based Translation with Weighted Finite- State Transducers and Shallow-n Grammars. In Com- putational linguistics, volume 36, pages 505-533.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "cdec: A decoder, alignment, and learning framework for finitestate and context-free translation models", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathen", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ferhan", |
|
"middle": [], |
|
"last": "Ture", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendra", |
|
"middle": [], |
|
"last": "Setiawan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vlad", |
|
"middle": [], |
|
"last": "Eidelman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathen Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vlad Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite- state and context-free translation models.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Fast decoding and optimal decoding for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ulrich", |
|
"middle": [], |
|
"last": "Germann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Jahr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "228--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proceed- ings of the 39th Annual Meeting on Association for Computational Linguistics, ACL '01, pages 228-235.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Gurobi optimizer reference manual", |
|
"authors": [ |
|
{ |

"first": "", |

"middle": [], |

"last": "Gurobi Optimization, Inc.", |

"suffix": "" |

} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Inc. Gurobi Optimization. 2013. Gurobi optimizer refer- ence manual.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Better k-best parsing", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "53--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 53-64. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Forest rescoring: Faster decoding with integrated language models", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "144--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Asso- ciation of Computational Linguistics, pages 144-151, Prague, Czech Republic, June. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Efficient incremental decoding for tree-to-string translation", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haitao", |
|
"middle": [], |
|
"last": "Mi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "273--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Huang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proceedings of the 2010 Conference on Empirical Methods in Natu- ral Language Processing, pages 273-283, Cambridge, MA, October. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Rule filtering by pattern for efficient hierarchical translation", |
|
"authors": [ |
|
{ |
|
"first": "Gonzalo", |
|
"middle": [], |
|
"last": "Iglesias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adri\u00e0", |
|
"middle": [], |
|
"last": "De Gispert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Banga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "380--388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gonzalo Iglesias, Adri\u00e0 de Gispert, Eduardo R. Banga, and William Byrne. 2009. Rule filtering by pattern for efficient hierarchical translation. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 380-388, Athens, Greece, March. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Decoding complexity in wordreplacement translation models", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Computational Linguistics", |
|
"volume": "25", |
|
"issue": "4", |
|
"pages": "607--615", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight. 1999. Decoding complexity in word- replacement translation models. Computational Lin- guistics, 25(4):607-615.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, NAACL '03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of the 2003 Conference of the North American Chapter of the Association for Computational Linguis- tics on Human Language Technology, NAACL '03, pages 48-54.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceed- ings of the 45th Annual Meeting of the ACL on Inter- active Poster and Demonstration Sessions, ACL '07, pages 177-180.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models. Machine translation: From real users to research", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation mod- els. Machine translation: From real users to research, pages 115-124.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Local phrase reordering models for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Shankar", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "161--168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shankar Kumar and William Byrne. 2005. Local phrase reordering models for statistical machine translation. In Proceedings of Human Language Technology Con- ference and Conference on Empirical Methods in Nat- ural Language Processing, pages 161-168, Vancou- ver, British Columbia, Canada, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Polyhedral characterization of discrete dynamic programming", |
|
"authors": [ |
|
{ |

"first": "R", |

"middle": [ |

"Kipp" |

], |

"last": "Martin", |

"suffix": "" |

}, |

{ |

"first": "Ronald", |

"middle": [ |

"L" |

], |

"last": "Rardin", |

"suffix": "" |

}, |

{ |

"first": "Brian", |

"middle": [ |

"A" |

], |

"last": "Campbell", |

"suffix": "" |

} |
|
], |
|
"year": 1990, |
|
"venue": "Operations research", |
|
"volume": "38", |
|
"issue": "1", |
|
"pages": "127--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Kipp Martin, Rardin L. Rardin, and Brian A. Camp- bell. 1990. Polyhedral characterization of dis- crete dynamic programming. Operations research, 38(1):127-138.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "An efficient A* search algorithm for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Ueffing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the workshop on Data-driven methods in machine translation", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och, Nicola Ueffing, and Hermann Ney. 2001. An efficient A* search algorithm for statisti- cal machine translation. In Proceedings of the work- shop on Data-driven methods in machine translation - Volume 14, DMMT '01, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Introduction to Optimization", |
|
"authors": [ |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Polyak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boris Polyak. 1987. Introduction to Optimization. Opti- mization Software, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Revisiting optimal decoding for machine translation IBM model 4", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Clarke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Riedel and James Clarke. 2009. Revisiting optimal decoding for machine translation IBM model 4. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguis- tics, Companion Volume: Short Papers, pages 5-8. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Parse, price and cut: delayed column and row generation for graph based parsers", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "732--743", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Riedel, David Smith, and Andrew McCallum. 2012. Parse, price and cut: delayed column and row generation for graph based parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 732-743. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Exact decoding of syntactic translation models through lagrangian relaxation", |
|
"authors": [ |
|
{ |

"first": "Alexander", |

"middle": [ |

"M" |

], |

"last": "Rush", |

"suffix": "" |

}, |

{ |

"first": "Michael", |

"middle": [], |

"last": "Collins", |

"suffix": "" |

} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "72--82", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander M Rush and Michael Collins. 2011. Exact decoding of syntactic translation models through la- grangian relaxation. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, vol- ume 1, pages 72-82.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A tutorial on dual decomposition and lagrangian relaxation for inference in natural language processing", |
|
"authors": [ |
|
{ |

"first": "Alexander", |

"middle": [ |

"M" |

], |

"last": "Rush", |

"suffix": "" |

}, |

{ |

"first": "Michael", |

"middle": [], |

"last": "Collins", |

"suffix": "" |

} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "305--362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander M Rush and Michael Collins. 2012. A tutorial on dual decomposition and lagrangian relaxation for inference in natural language processing. Journal of Artificial Intelligence Research, 45:305-362.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Word reordering and a dynamic programming beam search algorithm for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "97--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christoph Tillmann and Hermann Ney. 2003. Word re- ordering and a dynamic programming beam search al- gorithm for statistical machine translation. Computa- tional Linguistics, 29(1):97-133.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Efficient dynamic programming search algorithms for phrase-based SMT", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, CHSLP '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christoph Tillmann. 2006. Efficient dynamic pro- gramming search algorithms for phrase-based SMT. In Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Lan- guage Processing, CHSLP '06, pages 9-16.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Left-to-right target generation for hierarchical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Taro", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hajime", |
|
"middle": [], |
|
"last": "Tsukada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideki", |
|
"middle": [], |
|
"last": "Isozaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "777--784", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taro Watanabe, Hajime Tsukada, and Hideki Isozaki. 2006. Left-to-right target generation for hierarchical phrase-based translation. In Proceedings of the 21st International Conference on Computational Linguis- tics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 777-784, Morristown, NJ, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Phrase-based statistical machine translation as a traveling salesman problem", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Zaslavskiy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Dymetman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Cancedda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "333--341", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Zaslavskiy, Marc Dymetman, and Nicola Can- cedda. 2009. Phrase-based statistical machine transla- tion as a traveling salesman problem. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 -Volume 1, ACL '09, pages 333-341, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Figure 2: Hypergraph for translating the sentence w = les 1 pauvres 2 sont 3 demunis 4 with set of pairs P = {(les, the), (pauvres, poor), (sont demunis, don't have any money)}. Hyperedges are color-coded by source words translated: orange for les 1 , green for pauvres 2 , and red for sont 3 demunis 4 . The dotted lines show an invalid hyperpath x that has signature Ax = 0, 0, 2, 2 = 1, 1, 1, 1 .", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Pruning function for phrase-based translation. Set C contains all hypotheses with ||sig|| 1 source words translated. The function prunes all but the top-\u03b2 scoring hypotheses in this set.", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "vector ub upper bound on optimal constrained solution opt certificate of optimality", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"text": "Lagrangian relaxation algorithm. The algorithm repeatedly calls LRROUND to compute the subgradient, update the dual vector, and check for a certificate.", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"text": "Property 4.2 (Primal Certificate). If the hyperpath x is a member of X , i.e. Ax = b, then L(\u03bb) = \u03b8 x * + \u03c4 .", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>11-20 (558)</td><td/><td/><td>21-30 (566)</td><td/><td/><td>31-40 (347)</td><td/><td/><td>41-50 (168)</td><td/><td/><td>all (1824)</td><td/></tr><tr><td>Phrase-Based</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td></tr><tr><td>BEAM (100)</td><td>2.33</td><td>19.5</td><td>38.0</td><td>8.37</td><td>1.6</td><td>7.2</td><td>24.12</td><td>0.3</td><td>1.4</td><td>71.35</td><td>0.0</td><td>0.0</td><td>14.50</td><td>15.3</td><td>23.2</td></tr><tr><td>BEAM (1000)</td><td>2.33</td><td>37.8</td><td>66.3</td><td>8.42</td><td>3.4</td><td>18.9</td><td>21.60</td><td>0.6</td><td>3.2</td><td>53.99</td><td>0.6</td><td>1.2</td><td>12.44</td><td>22.6</td><td>36.9</td></tr><tr><td>BEAM (100000)</td><td>3.34</td><td>83.9</td><td colspan=\"2\">96.2 18.53</td><td>22.4</td><td>60.4</td><td>46.65</td><td>2.0</td><td>18.1</td><td>83.53</td><td>1.2</td><td>6.5</td><td>23.39</td><td>43.2</td><td>62.4</td></tr><tr><td>MOSES (100)</td><td>0.18</td><td>0.0</td><td>81.0</td><td>0.36</td><td>0.0</td><td>45.6</td><td>0.53</td><td>0.0</td><td>14.1</td><td>0.74</td><td>0.0</td><td>6.0</td><td>0.34</td><td>0.0</td><td>52.3</td></tr><tr><td>MOSES (1000)</td><td>2.29</td><td>0.0</td><td>97.8</td><td>4.39</td><td>0.0</td><td>78.8</td><td>6.52</td><td>0.0</td><td>43.5</td><td>9.00</td><td>0.0</td><td>19.6</td><td>4.20</td><td>0.0</td><td>74.6</td></tr><tr><td>ASTAR (cap)</td><td>11.11</td><td>99.3</td><td colspan=\"2\">99.3 91.39</td><td>53.9</td><td>53.9</td><td>122.67</td><td>7.8</td><td>7.8</td><td>139.61</td><td>1.2</td><td>1.2</td><td>67.99</td><td>58.8</td><td>58.8</td></tr><tr><td>LR-TIGHT</td><td colspan=\"6\">4.20 100.0 100.0 23.25 100.0 100.0</td><td>88.16</td><td>99.7</td><td>99.7</td><td>377.9</td><td>97.0</td><td>97.0</td><td>60.11</td><td>99.7</td><td>99.7</td></tr><tr><td>OPTBEAM</td><td colspan=\"6\">2.85 100.0 100.0 10.33 100.0 100.0</td><td colspan=\"3\">28.29 100.0 100.0</td><td>84.34</td><td>97.0</td><td>97.0</td><td>17.27</td><td>99.7</td><td>99.7</td></tr><tr><td>ChangCollins</td><td colspan=\"6\">10.90 100.0 100.0 57.20 100.0 100.0</td><td>203.4</td><td>99.7</td><td>99.7</td><td>679.9</td><td>97.0</td><td>97.0</td><td>120.9</td><td>99.7</td><td>99.7</td></tr><tr><td>MOSES-GC (100)</td><td>0.14</td><td>0.0</td><td>89.4</td><td>0.27</td><td>0.0</td><td>84.1</td><td>0.41</td><td>0.0</td><td>75.8</td><td>0.58</td><td>0.0</td><td>78.6</td><td>0.26</td><td>0.0</td><td>84.9</td></tr><tr><td>MOSES-GC (1000)</td><td>1.33</td><td>0.0</td><td>89.4</td><td>2.62</td><td>0.0</td><td>84.3</td><td>4.15</td><td>0.0</td><td>75.8</td><td>6.19</td><td>0.0</td><td>79.2</td><td>2.61</td><td>0.0</td><td>85.0</td></tr><tr><td/><td/><td>11-20 (192)</td><td/><td/><td>21-30 (159)</td><td/><td/><td>31-40 (136)</td><td/><td colspan=\"2\">41-100 (123)</td><td/><td/><td>all (691)</td><td/></tr><tr><td>Syntax-Based</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td><td>time</td><td>cert</td><td>exact</td></tr><tr><td>BEAM (100)</td><td>0.40</td><td>4.7</td><td>75.9</td><td>0.40</td><td>0.0</td><td>66.0</td><td>0.75</td><td>0.0</td><td>43.4</td><td>1.66</td><td>0.0</td><td>25.8</td><td>0.68</td><td>5.72</td><td>58.7</td></tr><tr><td>BEAM 
(1000)</td><td>0.78</td><td>16.9</td><td>79.4</td><td>2.65</td><td>0.6</td><td>67.1</td><td>6.20</td><td>0.0</td><td>47.5</td><td>15.5</td><td>0.0</td><td>36.4</td><td>4.16</td><td>12.5</td><td>65.5</td></tr><tr><td>CUBE (100)</td><td>0.08</td><td>0.0</td><td>77.6</td><td>0.16</td><td>0.0</td><td>66.7</td><td>0.23</td><td>0.0</td><td>43.9</td><td>0.41</td><td>0.0</td><td>26.3</td><td>0.19</td><td>0.0</td><td>59.0</td></tr><tr><td>CUBE (1000)</td><td>1.76</td><td>0.0</td><td>91.7</td><td>4.06</td><td>0.0</td><td>95.0</td><td>5.71</td><td>0.0</td><td>82.9</td><td>10.69</td><td>0.0</td><td>60.9</td><td>4.66</td><td>0.0</td><td>85.0</td></tr><tr><td>LR-TIGHT</td><td colspan=\"3\">0.37 100.0 100.0</td><td colspan=\"3\">1.76 100.0 100.0</td><td colspan=\"3\">4.79 100.0 100.0</td><td>30.85</td><td>94.5</td><td>94.5</td><td>7.25</td><td>99.0</td><td>99.0</td></tr><tr><td>OPTBEAM</td><td colspan=\"3\">0.23 100.0 100.0</td><td colspan=\"3\">0.50 100.0 100.0</td><td colspan=\"3\">1.42 100.0 100.0</td><td>7.14</td><td>93.6</td><td>93.6</td><td>1.75</td><td>98.8</td><td>98.8</td></tr><tr><td>ILP</td><td colspan=\"6\">9.15 100.0 100.0 32.35 100.0 100.0</td><td colspan=\"3\">49.6 100.0 100.0</td><td colspan=\"3\">108.6 100.0 100.0</td><td>40.1</td><td>100.0</td><td>100.0</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "shows the main results. For phrase-based translation, OPTBEAM decodes the optimal translation with certificate in 99% of sentences with an average time of 17.27 seconds per sentence. This" |
|
} |
|
} |
|
} |
|
} |